CRACS
This research centre develops its activity in the areas of programming languages, parallel and distributed computing, data mining, intelligent systems, and software architecture, with an emphasis on solving concrete problems in areas of multidisciplinary collaboration, such as Biology, Medicine, and Chemistry.
Browsing CRACS by Author "6251"
-
Item: Asura: A Game-Based Assessment Environment for Mooshak (Short Paper) (2018)
José Paulo Leal (5125); José Carlos Paiva (6251)
Learning to program is hard. Students need to remain motivated to keep practicing and to overcome their difficulties. Several approaches have been proposed to foster students' motivation. As most people enjoy playing games of some kind and play on a regular basis, the use of games is one of the most widespread approaches. However, taking full advantage of games to teach specific programming concepts requires much effort. This paper presents Asura, a game-based assessment environment built on top of Mooshak that challenges students to code Software Agents (SAs) to play a game, allowing them to test their SAs against those of other students and watch a movie of the test. Once the challenge development stage ends, teachers are able to organize game-like tournaments among SAs. One of the key features of Asura is that it reduces the effort of building game-based challenges to roughly that of creating traditional programming exercises. © José Carlos Paiva and José Paulo Leal.
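For illustration only, the sketch below shows the general shape such a Software Agent could take in TypeScript: read the game state each turn and answer with a move. The line-based JSON protocol and the field names are assumptions made for this example and do not reproduce the actual Asura framework API.

```typescript
// Hypothetical Software Agent (SA) skeleton for a turn-based game challenge.
// The line-based JSON protocol and the field names are illustrative only.
import * as readline from "readline";

const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line: string) => {
  // Each turn, the judge is assumed to send the current game state as one JSON line.
  const state = JSON.parse(line) as { myPosition: number; opponentPosition: number };

  // Trivial strategy: move toward the opponent.
  const move = state.myPosition < state.opponentPosition ? "RIGHT" : "LEFT";

  // The SA answers with its chosen action on standard output.
  console.log(move);
});
```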
-
Item: Authoring Game-Based Programming Challenges to Improve Students' Motivation (2020)
José Paulo Leal (5125); José Carlos Paiva (6251); Ricardo Queirós (5695)
One of the great challenges in programming education is to keep students motivated while working on their programming assignments. Of the techniques proposed in the literature to engage students, gamification is arguably the most widespread and effective. Nevertheless, gamification is not a panacea and can be harmful to students. Challenges comprising intrinsic motivators of games, such as graphical feedback and game-thinking, are more prone to have long-term positive effects on students, but they are typically complex to create or to adapt to slightly distinct contexts. This paper presents Asura, a game-based programming assessment environment that provides the means to minimize the hurdle of building game challenges. These challenges invite the student to code a Software Agent that solves a certain problem in a way that can defeat every opponent. Moreover, the experiment conducted to assess the difficulty of authoring Asura challenges is described. © 2020, Springer Nature Switzerland AG.
-
Item: Automated Assessment in Computer Science Education: A State-of-the-Art Review (2022)
Álvaro Figueira (5088); José Paulo Leal (5125); José Carlos Paiva (6251)
Practical programming competencies are critical to success in computer science (CS) education and to the go-to-market readiness of fresh graduates. Acquiring the required level of skill is a long journey of discovery, trial and error, and optimization, pursued through a broad range of programming activities that learners must perform themselves. It is not reasonable to expect teachers to evaluate every attempt the average learner should produce, multiplied by the number of students enrolled in a course, much less in a timely, deep, and fair fashion. Unsurprisingly, exploring the formal structure of programs to automate the assessment of certain features has long been a hot topic among CS education practitioners. Assessing a program is considerably more complex than asserting its functional correctness, as the proliferation of tools and techniques in the literature over the past decades indicates. Program efficiency, behavior, and readability, among many other features, assessed either statically or dynamically, are now also relevant for automatic evaluation. The outcome of an evaluation has evolved from primordial Boolean verdicts to information about errors and tips on how to advance, possibly taking into account similar solutions. This work surveys the state of the art in the automated assessment of CS assignments, focusing on the supported types of exercises, security measures adopted, testing techniques used, types of feedback produced, and the information offered to teachers to understand and optimize learning. A new era of automated assessment, capitalizing on static analysis techniques and containerization, is identified. Furthermore, this review presents several other findings, discusses the current challenges of the field, and proposes some future research directions.
-
Item: Defining Requirements for a Gamified Programming Exercises Format (2019)
José Carlos Paiva (6251); Ricardo Queirós (5695); José Paulo Leal (5125)
Computer programming is a complex domain, both to teach and to learn. This has incited endeavors to find methods that could mitigate at least some of the existing barriers. In recent years, automatic assessment has played an important role in reducing the burden of teachers in assessing students' attempts to solve programming exercises, and in fostering the autonomy of students by allowing them to practice anywhere, at any time, with timely feedback. An even more recent development is the use of gamification in computer programming education to raise the enjoyment and engagement of students. Despite its rising spread, there is still no programming exercise specification format addressing the needs of gamification, such as the definition of challenges, the underlying storyline, including links to other exercises, or the rewards for solving challenges in the form of points, badges, or virtual items. Such a data format would allow the exchange of ready-to-use programming exercises, along with the gamification-related data, among different educational institutions and courses, giving instructors the possibility to make use of gamification in their courses without having to invest their own time in defining gamification rules themselves. In this paper, we analyze a set of concepts related to programming gamification developed in our previous work to identify the requirements for the specification of a gamified exercise format. © 2019 The Authors. Published by Elsevier B.V.
-
Item: Enhancing Feedback to Students in Automated Diagram Assessment (2017)
José Paulo Leal (5125); Helder Pina Correia (6549); José Carlos Paiva (6251)
Automated assessment is an essential part of e-learning. Although comparatively easy for multiple choice questions (MCQs), automated assessment is more challenging when exercises involve the languages used in computer science. In this particular case, assessment is more than just grading and must include feedback that leads to the improvement of the students' performance. This paper presents ongoing work to develop Kora, an automated diagram assessment tool with enhanced feedback, targeted at the multiple diagrammatic languages used in computer science. Kora builds on the experience gained with previous research, namely: a diagram assessment tool to compute differences between graphs; an IDE-inspired web learning environment for computer science languages; and an extensible web diagram editor. Kora has several features to enhance feedback: it distinguishes syntactic and semantic errors, providing specialized feedback in each case; it provides progressive feedback disclosure, controlling the quality and quantity shown to each student after a submission; and, when possible, it integrates feedback within the diagram editor, showing actual nodes and edges on the editor itself. © Hélder Correia, José Paulo Leal, and José Carlos Paiva
-
Item: Enki: A Pedagogical Services Aggregator for Learning Programming Languages (2016)
José Paulo Leal (5125); Ricardo Queirós (5695); José Carlos Paiva (6251)
This paper presents Enki, a web-based IDE that integrates several pedagogical tools designed to engage students in learning programming languages. Enki achieves this goal (1) by sequencing educational resources, either expository or evaluative, (2) by using gamification services to entice students to solve activities, (3) by promoting social interaction, and (4) by helping students with activities, providing feedback on submitted solutions. The paper describes Enki, its concept and architecture, details its design and implementation, and also covers its validation.
-
Item: Eshu: An Extensible Web Editor for Diagrammatic Languages (2016)
José Paulo Leal (5125); Helder Pina Correia (6549); José Carlos Paiva (6251)
The cornerstone of a language development environment is an editor. For programming languages, several code editors are readily available to be integrated in web applications. However, only a few editors exist for diagrammatic languages. Eshu is an extensible diagram editor, embeddable in web applications that require diagram interaction, such as modeling tools or e-learning environments. Eshu is a JavaScript library with an API that supports its integration with other components, including importing and exporting diagrams in JSON. Eshu has already been integrated in a pedagogical environment with automated diagram assessment, configured for extended entity-relationship diagrams, which served as the basis for a usability evaluation. © José Paulo Leal, Helder Correia, and José Carlos Paiva; licensed under Creative Commons License CC-BY.
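As a minimal sketch of the kind of integration described here (an embeddable editor with a JSON import/export API), the TypeScript fragment below wires an editor instance to an assessment endpoint. Every name in it (DiagramEditor, importJSON, exportJSON, onChange, the /assess URL) is an assumption for this example and does not reproduce the actual Eshu API.

```typescript
// Hypothetical host-side wiring for an embeddable diagram editor with a
// JSON-based import/export API. All identifiers are illustrative only.

interface DiagramNode { id: string; type: string; label: string; x: number; y: number; }
interface DiagramEdge { id: string; type: string; source: string; target: string; }
interface Diagram { nodes: DiagramNode[]; edges: DiagramEdge[]; }

interface DiagramEditor {
  importJSON(diagram: Diagram): void;               // load a diagram into the canvas
  exportJSON(): Diagram;                            // serialize the current drawing
  onChange(handler: (diagram: Diagram) => void): void;
}

// Preload an empty diagram and submit the student's drawing whenever it changes.
function wireEditor(editor: DiagramEditor, assessUrl: string): void {
  editor.importJSON({ nodes: [], edges: [] });
  editor.onChange(async () => {
    await fetch(assessUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(editor.exportJSON()),
    });
  });
}
```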
-
Item: FGPE AuthorKit - A Tool for Authoring Gamified Programming Educational Content (2020)
José Paulo Leal (5125); José Carlos Paiva (6251); Ricardo Queirós (5695)
We present FGPE AuthorKit, a tool to author programming exercises featuring gamification elements that provide additional motivation for students to intensify their learning effort. The tool allows (1) the creation of exercises and their associated metadata, (2) the selection and parameterization of adequate gamification techniques for a specific exercise or a collection of exercises, (3) the design of the content structure and sequencing rules, and (4) the import and export of the content in the formats of choice. © 2020 ACM.
-
Item: FGPE Gamification Service: A GraphQL Service to Gamify Online Education (2021)
José Paulo Leal (5125); José Carlos Paiva (6251); Ricardo Queirós (5695)
Keeping students engaged while learning programming is becoming more and more imperative. Of the several proposed techniques, gamification is presumably the most widely studied and has already proven to be an effective means to engage students. However, there is a complete lack of public, customizable solutions for gamified programming education that can be reused with personalized rules and learning material. FGPE Gamification Service (FGPE GS) is an open-source GraphQL service that transforms a package containing the gamification layer, adhering to a dedicated open-source language (GEdIL), into a game. The game provides students with a gamified experience leveraging the automatically assessable activities referenced by the challenges. This paper presents FGPE GS, its architecture, data model, and validation. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
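For readers unfamiliar with GraphQL services of this kind, the TypeScript sketch below shows how a client might query one for a student's progress. The query fields (player, points, earnedBadges, unlockedChallenges) and the endpoint are assumptions for this example and do not reproduce the actual FGPE GS schema.

```typescript
// Hypothetical client query against a GraphQL gamification service.
// The schema fields shown here are illustrative only.
const query = `
  query PlayerProgress($gameId: ID!, $userId: ID!) {
    player(gameId: $gameId, userId: $userId) {
      points
      earnedBadges { name }
      unlockedChallenges { id name }
    }
  }`;

async function fetchProgress(endpoint: string, gameId: string, userId: string) {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { gameId, userId } }),
  });
  return (await response.json()).data;
}

// Example usage (endpoint and identifiers are placeholders):
fetchProgress("https://example.org/graphql", "game-1", "student-42")
  .then((data) => console.log(data));
```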
-
Item: Fostering Programming Practice through Games (2020)
José Paulo Leal (5125); Ricardo Queirós (5695); José Carlos Paiva (6251)
Loss of motivation is one of the most prominent concerns in programming education, as it negatively impacts the time dedicated to practice, which is crucial for novice programmers. Of the distinct techniques introduced in the literature to engage students, gamification is likely the most widely explored and fruitful. Game elements that intrinsically motivate students, such as graphical feedback and game-thinking, reveal more reliable long-term positive effects, but they involve significant development effort. This paper proposes a game-based assessment environment for programming challenges, built on top of a specialized framework, in which students develop a program to control the player, henceforth called a Software Agent (SA). During the coding phase, students can resort to graphical feedback demonstrating how the game unfolds to improve their programs and complete the proposed tasks. This environment also promotes competition through competitive evaluation and tournaments among SAs, optionally organized by the teacher at the end. Moreover, the validation of the effectiveness of Asura in increasing undergraduate students' motivation and, consequently, their programming practice is reported.
-
Item: Game-Based Coding Challenges to Foster Programming Practice (2020)
Ricardo Queirós (5695); José Carlos Paiva (6251); José Paulo Leal (5125)
Practice is the crux of learning to program. Automated assessment plays a key role in enabling timely feedback without access to teachers, but alone it is insufficient to engage students and maximize the outcome of their practice. Graphical feedback and game-thinking promote positive effects on students' motivation, as shown by some serious programming games, but those games are complex to create and adapt. This paper presents Asura, an environment for the assessment of game-based coding challenges, built on a specialized framework, in which students are invited to develop a software agent (SA) to play a game. During the coding phase, students can take advantage of the graphical feedback to complete the proposed task. Some challenges also encourage students to think of an SA that plays in a setting with interaction among SAs. In such cases, the environment supports the creation and visualization of tournaments among submitted agents. Furthermore, the validation of this environment from the learners' perspective is also described. 2012 ACM Subject Classification: Applied computing → Interactive learning environments; Applied computing → E-learning.
-
Item: Gamification of Learning Activities with the Odin service (2016)
José Paulo Leal (5125); Ricardo Queirós (5695); José Carlos Paiva (6251)
Existing gamification services have features that preclude their use by e-learning tools. Odin is a gamification service that mimics the API of state-of-the-art services without these limitations. This paper presents Odin as a gamification service for learning activities, describes its role in an e-learning system architecture requiring gamification, and details its implementation. The validation of Odin involved the creation of a small e-learning game, integrated in a Learning Management System (LMS) using the Learning Tools Interoperability (LTI) specification. Odin was also integrated in an e-learning tool that provides formative assessment in online and hybrid courses in an adaptive and engaging way.
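To make the role of such a service concrete, the sketch below shows how an e-learning tool might report a student action to a gamification service over HTTP. The endpoint path and payload fields are assumptions for this example and do not reproduce Odin's actual API.

```typescript
// Hypothetical call from an e-learning tool to a gamification service.
// Endpoint and payload fields are illustrative only.
async function reportAction(baseUrl: string, player: string, action: string) {
  const response = await fetch(`${baseUrl}/players/${player}/actions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action, timestamp: new Date().toISOString() }),
  });
  return response.json(); // e.g., updated points and any badges awarded
}

// Example: record that a student solved an exercise on the first attempt.
reportAction("https://example.org/gamification", "student-123", "solved-first-try")
  .then((result) => console.log(result));
```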
-
Item: GEdIL - Gamified Education Interoperability Language (2020)
José Paulo Leal (5125); José Carlos Paiva (6251); Ricardo Queirós (5695)
The paper introduces the Gamified Education Interoperability Language (GEdIL), designed as a means to represent the set of gamification concepts and rules applied to courses and exercises separately from their actual educational content. This way, GEdIL allows not only for an easy yet effective specification of gamification schemes for educational purposes, but also for sharing them among instructors and reusing them in various courses. GEdIL is published as an open format, independent from any commercial vendor, and supported by dedicated open-source software.
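The sketch below illustrates the general idea of a gamification scheme kept separate from the educational content it decorates: exercises are referenced by identifier only, while challenges, unlock rules, and rewards live in the scheme. The type and field names are assumptions for this example and do not reproduce the actual GEdIL schema.

```typescript
// Hypothetical gamification scheme decoupled from exercise content.
// Field names are illustrative only, not the GEdIL specification.
interface Reward { kind: "points" | "badge"; name: string; amount?: number; }
interface Challenge {
  id: string;
  name: string;
  exerciseRefs: string[];   // identifiers of exercises stored elsewhere
  unlockedBy?: string[];    // challenges that must be completed first
  rewards: Reward[];
}
interface GamificationScheme { id: string; name: string; challenges: Challenge[]; }

const scheme: GamificationScheme = {
  id: "intro-programming",
  name: "Introduction to Programming",
  challenges: [
    {
      id: "loops-1",
      name: "Loop the loop",
      exerciseRefs: ["ex-while-sum", "ex-for-table"],
      rewards: [
        { kind: "points", name: "XP", amount: 100 },
        { kind: "badge", name: "Loop Novice" },
      ],
    },
  ],
};
console.log(JSON.stringify(scheme, null, 2));
```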
-
Item: Improving Diagram Assessment in Mooshak (2017)
José Carlos Paiva (6251); José Paulo Leal (5125)
Mooshak is a web system that supports assessment in computer science. It was originally developed for programming contest management but evolved to be used also as a pedagogical tool, capitalizing on its programming assessment features. The current version of Mooshak supports other forms of assessment used in computer science, such as diagram assessment. This form of assessment is supported by a set of new features, including a diagram editor, a graph comparator, and an environment for the integration of pedagogical activities. The first attempt to integrate these features to support diagram assessment revealed a number of shortcomings, such as the lack of support for multiple diagrammatic languages, ineffective feedback, and usability issues. These shortcomings were addressed by creating a diagrammatic language definition language, introducing a new component for feedback summarization, and redesigning the diagram editor. This paper describes the design and implementation of these features, as well as their validation. © Springer Nature Switzerland AG 2018.
-
Item: Integrating Rich Learning Applications in LMS (2016)
José Paulo Leal (5125); Ricardo Queirós (5695); José Carlos Paiva (6251)
Currently, a learning management system (LMS) plays a central role in any e-learning environment. These environments include systems that handle the pedagogic aspects of the teaching-learning process (e.g., specialized tutors, simulation games) and the academic aspects (e.g., academic management systems). Thus, the potential for interoperability is an important, although often overlooked, aspect of an LMS. In this paper, we make a comparative study of the interoperability level of the most relevant LMSs. We start by defining an application model and a specification model. For the application model, we create a basic application that acts as a tool provider for LMS integration. The specification model acts as the API that the LMS should implement to communicate with the tool provider. Based on prior research, we select the Learning Tools Interoperability (LTI) specification from IMS. Finally, we compare the interoperability level of each LMS, defined as the effort made to integrate the application in the LMS under study.
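As a rough illustration of what an LTI-based integration exchanges, the sketch below builds the form parameters an LMS sends to a tool provider in an LTI 1.1 basic launch. The values are placeholders, and the required OAuth 1.0 signature of the request body is omitted here; in practice it would be added by an LTI library.

```typescript
// Sketch of an LTI 1.1 basic launch payload (values are placeholders; the
// mandatory OAuth 1.0 signing step is intentionally omitted).
const launchParams: Record<string, string> = {
  lti_message_type: "basic-lti-launch-request",
  lti_version: "LTI-1p0",
  resource_link_id: "exercise-42",   // which resource inside the course
  user_id: "student-123",
  roles: "Learner",
  context_id: "course-abc",          // the course the launch comes from
  oauth_consumer_key: "example-key", // shared key agreed with the tool provider
};

// The LMS would POST these as application/x-www-form-urlencoded to the tool URL.
const body = new URLSearchParams(launchParams).toString();
console.log(body);
```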
-
Item: Learning Computer Science Languages in Enki (2016)
Ricardo Queirós (5695); José Carlos Paiva (6251); José Paulo Leal (5125)
This paper presents an overview and the main features of Enki, a web-based learning environment for computer science languages. Enki was designed to be a sort of entry-level IDE, aggregating tools for navigating and viewing course materials, for solving exercises and receiving automated feedback, as well as for promoting the learning process. Enki uses services from several other systems, namely for content sequencing and recommendation, exercise assessment, and gamification.
-
Item: Managing Gamified Programming Courses with the FGPE Platform (2022)
José Paulo Leal (5125); José Carlos Paiva (6251); Ricardo Queirós (5695)
E-learning tools are gaining increasing relevance as facilitators in the task of learning how to program. This is mainly a result of the pandemic and the consequent lockdowns in several countries, which forced distance learning. Instant and relevant feedback to students, particularly if coupled with gamification, plays a pivotal role in this process and has already been demonstrated to be an effective solution. However, teachers still struggle with the lack of tools that can adequately support the creation and management of online gamified programming courses. Until now, no software platform has been simultaneously open-source and general-purpose (i.e., not tied to a specific course on a specific programming language) while featuring a meaningful selection of gamification components. Such a solution has been developed as part of the Framework for Gamified Programming Education (FGPE) project. In this paper, we present its two front-end components, FGPE AuthorKit and FGPE PLE, explain how they can be used by teachers to prepare and manage gamified programming courses, and report the results of a usability evaluation by teachers using the platform in their classes.
-
Item: Mooshak's Diet Update: Introducing YAPExIL Format to Mooshak (Short Paper) (2021)
José Paulo Leal (5125); Ricardo Queirós (5695); José Carlos Paiva (6251)
Practice is pivotal in learning programming. Like many other automated assessment tools for programming assignments, Mooshak has been adopted by numerous educational practitioners to support them in delivering timely and accurate feedback to students during exercise solving. These tools specialize in the delivery and assessment of blank-sheet coding questions. However, the different phases of a student's learning path may demand distinct types of exercises (e.g., bug fixing and block sorting) to foster new competencies, such as debugging programs and understanding unknown source code, or, otherwise, to break the routine and keep engagement. Recently, YAPExIL, a format for describing programming exercises that supports different types of activities, has been introduced. Unfortunately, no automated assessment tool yet supports this novel format. This paper describes a JavaScript library to transform YAPExIL packages into Mooshak problem packages (i.e., the MEF format), keeping support for all exercise types. Moreover, its integration in an exercise authoring tool is described.
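To give an idea of what such a format-to-format conversion involves, the sketch below maps a drastically simplified exercise description onto a drastically simplified problem description. Both interfaces and all field names are assumptions for this example; the real YAPExIL and MEF packages are richer, archive-based formats, and this does not reproduce the library described in the paper.

```typescript
// Hypothetical mapping step inside an exercise-format converter.
// The shapes below are simplifications for illustration only.

interface SimplifiedExercise {
  title: string;
  statement: string;                            // problem statement shown to students
  tests: { input: string; expected: string }[];
}

interface SimplifiedProblem {
  name: string;
  description: string;
  testCases: { in: string; out: string }[];
}

function convert(exercise: SimplifiedExercise): SimplifiedProblem {
  return {
    name: exercise.title,
    description: exercise.statement,
    testCases: exercise.tests.map((t) => ({ in: t.input, out: t.expected })),
  };
}

// Example usage with an inline exercise.
const converted = convert({
  title: "Sum of two integers",
  statement: "Read two integers and print their sum.",
  tests: [{ input: "1 2", expected: "3" }],
});
console.log(JSON.stringify(converted, null, 2));
```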
-
Item: Moozz: Assessment of Quizzes in Mooshak 2.0 (Short Paper) (2018)
José Carlos Paiva (6251); José Paulo Leal (5125)
Quizzes are a widely used form of assessment, supported in many e-learning systems. Mooshak is a web system that supports automated assessment in computer science. This paper presents Moozz, a quiz assessment environment for Mooshak 2.0, with its own XML definition for describing quizzes. This definition is used for interoperability with different e-learning systems, generating HTML-based forms, storing student answers, marking final submissions, and generating feedback. Furthermore, Moozz also includes an authoring tool for creating quizzes. The paper describes Moozz, its quiz definition language and architecture, and details its implementation. © Hélder Correia, José Paulo Leal and José Carlos Paiva.
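The sketch below illustrates, in a browser context, how an XML quiz description of this general kind could be turned into an HTML form. The element and attribute names in the sample XML are assumptions for this example and do not reproduce Moozz's actual quiz definition language.

```typescript
// Hypothetical XML quiz description rendered as an HTML form (browser assumed).
// Element and attribute names are illustrative only.
const quizXml = `
<quiz id="q1" title="Basics of recursion">
  <question id="1" type="multiple-choice">
    <stem>What is the base case of a recursive function?</stem>
    <option correct="true">The condition that stops the recursion</option>
    <option>The first recursive call</option>
  </question>
</quiz>`;

const doc = new DOMParser().parseFromString(quizXml, "application/xml");
const form = document.createElement("form");

doc.querySelectorAll("question").forEach((q) => {
  // Show the question stem, then one radio button per option.
  const stem = document.createElement("p");
  stem.textContent = q.querySelector("stem")?.textContent ?? "";
  form.appendChild(stem);

  q.querySelectorAll("option").forEach((opt, i) => {
    const label = document.createElement("label");
    const input = document.createElement("input");
    input.type = "radio";
    input.name = q.getAttribute("id") ?? "question";
    input.value = String(i);
    label.append(input, opt.textContent ?? "");
    form.appendChild(label);
  });
});

document.body.appendChild(form);
```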