CRACS - Indexed Articles in Journals
Browsing CRACS - Indexed Articles in Journals by Author "Álvaro Figueira"
-
Automated Assessment in Computer Science Education: A State-of-the-Art Review (2022)
Álvaro Figueira; José Paulo Leal; José Carlos Paiva
Practical programming competencies are critical to success in computer science (CS) education and to the go-to-market readiness of fresh graduates. Acquiring the required level of skill is a long journey of discovery, trial and error, and optimization across a broad range of programming activities that learners must perform themselves. It is not reasonable to expect teachers to evaluate every attempt that the average learner produces, multiplied by the number of students enrolled in a course, let alone in a timely, deep, and fair fashion. Unsurprisingly, exploring the formal structure of programs to automate the assessment of certain features has long been a hot topic among CS education practitioners. Assessing a program is considerably more complex than asserting its functional correctness, as the proliferation of tools and techniques in the literature over the past decades indicates. Program efficiency, behavior, and readability, among many other features, assessed either statically or dynamically, are now also relevant for automatic evaluation. The outcome of an evaluation has evolved from primordial Boolean values to information about errors and tips on how to advance, possibly taking similar solutions into account. This work surveys the state of the art in the automated assessment of CS assignments, focusing on the supported types of exercises, the security measures adopted, the testing techniques used, the type of feedback produced, and the information offered to teachers to understand and optimize learning. A new era of automated assessment, capitalizing on static analysis techniques and containerization, is identified. Furthermore, this review presents several other findings, discusses the current challenges of the field, and proposes some future research directions.
-
Bibliometric Analysis of Automated Assessment in Programming Education: A Deeper Insight into Feedback (2023)
Álvaro Figueira; José Paulo Leal
Learning to program requires diligent practice and creates room for discovery, trial and error, debugging, and concept mapping. Learners must walk this long road themselves, supported by appropriate and timely feedback. Providing such feedback in programming exercises is not a humanly feasible task. Therefore, the early and steadily growing interest of computer science educators in the automated assessment of programming exercises is not surprising. The automated assessment of programming assignments has been an active area of research for more than half a century, and interest in it continues to grow as it adapts to new developments in computer science and the resulting changes in educational requirements. It is therefore of paramount importance to understand the work that has been performed, who has performed it, its evolution over time, the relationships between publications, its hot topics, and its open problems, among other aspects. This paper presents a bibliometric study of the field, with a particular focus on automatic feedback generation, using literature data from the Web of Science Core Collection. It includes a descriptive analysis using various bibliometric measures and data visualizations on authors, affiliations, citations, and topics. In addition, we performed a complementary analysis focusing only on the subset of publications on the specific topic of automatic feedback generation. The results are highlighted and discussed.
-
The community structure of a multidimensional network of news clips (2013)
José Luís Devezas; Álvaro Figueira
We analysed the community structure of a network of news clips in which relationships were established by the co-reference of entities in pairs of clips. Community detection was applied to a unidimensional version of the news clips network, as well as to a multidimensional version where dimensions were defined based on three different classes of entities: places, people, and dates. The goal was to study the impact on the quality of the identified community structure when using multiple dimensions to model the network. We performed a two-fold evaluation, first based on the modularity metric and then based on human input regarding community semantics. We verified that the assessments of the evaluators differed from the results provided by the modularity metric, pointing towards the relevance of the utility and network integration phases in the identification of semantically cohesive groups of news clips.
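The modularity-based step described in this abstract can be illustrated with a minimal sketch. The example below is purely hypothetical, not the authors' pipeline or data: it builds a toy co-reference graph of news clips, runs greedy modularity-based community detection with networkx, and reports the modularity of the resulting partition. All clip names and edge weights are invented.

    # Minimal sketch: modularity-based community detection on a toy
    # co-reference network of news clips (illustrative only).
    import networkx as nx
    from networkx.algorithms import community

    # Hypothetical clips; an edge means two clips co-reference entities
    # (places, people, dates), weighted by the number of shared entities.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("clip1", "clip2", 3),  # one story, tightly co-referenced
        ("clip2", "clip3", 2),
        ("clip1", "clip3", 1),
        ("clip4", "clip5", 4),  # a second, loosely connected story
        ("clip5", "clip6", 2),
        ("clip3", "clip4", 1),  # weak bridge between the two stories
    ])

    # Greedy modularity maximization over the unidimensional network.
    communities = community.greedy_modularity_communities(G, weight="weight")
    for i, c in enumerate(communities):
        print(f"community {i}: {sorted(c)}")

    # Modularity of the detected partition (the first of the paper's two
    # evaluation criteria; the second, human judgment, is not automatable).
    q = community.modularity(G, communities, weight="weight")
    print(f"modularity: {q:.3f}")

A multidimensional variant would keep one such graph per entity class (places, people, dates) and compare the partitions obtained from each, which is the comparison the paper evaluates.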
-
The current state of fake news: challenges and opportunities (2017)
Álvaro Figueira; Luciana Gomes Oliveira
-
Improving the benchmarking of social media content strategies using clustering and KPI (2017)
Luciana Gomes Oliveira; Álvaro Figueira
-
Predicting the Relevance of Social Media Posts Based on Linguistic Features and Journalistic Criteria (2017)
A. Pinto; H. G. Oliveira; Álvaro Figueira; A. O. Alves
An overwhelming quantity of messages is posted in social networks every minute. To make the use of these platforms more productive, it is imperative to filter out information that is irrelevant to the general audience, such as private messages, personal opinions, or well-known facts. This work focuses on the automatic classification of public social text according to its potential relevance, from a journalistic point of view, hopefully improving the overall experience of using a social network. Our experiments were based on a set of posts annotated by human judges according to several criteria, including journalistic relevance. To predict the latter, we rely exclusively on linguistic features, extracted by Natural Language Processing tools, regardless of the author of the message and their profile information. In our first approach, different classifiers and feature engineering methods were used to predict relevance directly from the selected features. In a second approach, relevance was predicted indirectly, based on an ensemble of classifiers for other key criteria used when defining relevance (controversy, interestingness, meaningfulness, novelty, reliability, and scope), also available in the dataset. The first approach achieved an F1-score of 0.76 and an Area Under the ROC Curve (AUC) of 0.63. The best results were achieved by the second approach, with the best learned model reaching an F1-score of 0.84 and an AUC of 0.78. This confirmed that journalistic relevance can indeed be predicted from the combination of the selected criteria, and that linguistic features can be exploited to classify the latter.
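As a rough illustration of the first approach described in this abstract (direct prediction of relevance from text-derived features), the sketch below trains a simple TF-IDF plus logistic-regression classifier and reports F1 and AUC. It is a minimal, hypothetical example: the posts, labels, and feature set are invented stand-ins, not the paper's annotated corpus or its actual linguistic features.

    # Minimal sketch of the "direct" approach: predict journalistic
    # relevance from surface text features. Toy data only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    # Hypothetical posts labeled 1 (journalistically relevant) or 0 (not).
    posts = [
        "Parliament approves the new national budget after a long debate",
        "Earthquake of magnitude 5.8 hits the coastal region this morning",
        "City council announces closure of the main bridge for repairs",
        "Health ministry reports a sharp rise in flu cases this winter",
        "Just had the best coffee ever, totally recommend this place",
        "Happy birthday to my best friend, love you!!",
        "Can't believe it's Monday again, so tired",
        "My cat did the funniest thing today lol",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    X_train, X_test, y_train, y_test = train_test_split(
        posts, labels, test_size=0.25, stratify=labels, random_state=0
    )

    # TF-IDF stands in for the richer NLP-derived linguistic features
    # used in the paper.
    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print("F1 :", f1_score(y_test, pred))
    print("AUC:", roc_auc_score(y_test, proba))

The paper's second, better-performing approach would instead train one classifier per intermediate criterion (controversy, interestingness, and so on) and combine their outputs to predict relevance.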
-
Social Media Content Analysis in the Higher Education Sector: From Content to Strategy (2015)
L. Oliveira; Álvaro Figueira
Social media has become one of the most prolific fields for the interchange of multidisciplinary expertise. In this paper, computer science, communication, and management are brought together for the development of a sound strategic content analysis in the Higher Education Sector. The authors present a study comprising two stages: an analysis of social media content and the corresponding audience engagement according to a weighted scale, and a classification of content strategies, which builds on different noticeable articulations of editorial areas among organizations. Their approach is based on an automatic classification of content according to a predefined editorial model. The proposed methodology and research results offer academic and practical findings for organizations striving on social media.
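To make the first stage of such a study concrete, the sketch below computes a weighted engagement score per post and aggregates it by editorial category. It is purely illustrative and rests on assumptions: the weights, categories, and posts are hypothetical and do not reproduce the weighted scale or editorial model used in the paper.

    # Illustrative sketch: weighted audience-engagement scoring per post,
    # aggregated by editorial category. Weights and categories are
    # hypothetical assumptions, not the paper's actual model.
    from collections import defaultdict

    # Assumed weights: a share is taken to signal more engagement than a
    # comment, and a comment more than a like.
    WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

    posts = [
        {"category": "research",   "likes": 120, "comments": 14, "shares": 30},
        {"category": "admissions", "likes": 300, "comments": 45, "shares": 10},
        {"category": "events",     "likes": 80,  "comments": 5,  "shares": 12},
        {"category": "research",   "likes": 60,  "comments": 20, "shares": 25},
    ]

    def engagement(post):
        """Weighted engagement score for a single post."""
        return sum(WEIGHTS[metric] * post[metric] for metric in WEIGHTS)

    # Average engagement per editorial category, hinting at which content
    # areas an institution emphasises and which resonate with its audience.
    totals, counts = defaultdict(float), defaultdict(int)
    for post in posts:
        totals[post["category"]] += engagement(post)
        counts[post["category"]] += 1

    for category in totals:
        print(category, round(totals[category] / counts[category], 1))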