CRACS
CRACS conducts research in the areas of programming languages, parallel and distributed computing, data mining, intelligent systems, and software architecture, with an emphasis on solving concrete problems in multidisciplinary areas such as Biology, Medicine, and Chemistry.
Browsing CRACS by Title
- 2nd Symposium on Languages, Applications and Technologies, SLATE 2013, June 20-21, 2013, Porto, Portugal (2013). José Paulo Leal; Ricardo Rocha; Simões, A.
- 3rd Symposium on Languages, Applications and Technologies, SLATE 2014, June 19-20, 2014, Bragança, Portugal (2014). Pereira, M.J.V.; José Paulo Leal; Simões, A.
- 5th Symposium on Languages, Applications and Technologies, SLATE 2016, June 20-21, 2016, Maribor, Slovenia (2016). Mernik, M.; José Paulo Leal; Oliveira, H.G.
- 6th Symposium on Languages, Applications and Technologies, SLATE 2017, June 26-27, 2017, Vila do Conde, Portugal (2017). Ricardo Queirós; Pinto, M.; Simões, A.; José Paulo Leal; Varanda Pereira, M.J.
- Accelerating Recommender Systems using GPUs (2015). André Valente Rodrigues; Alípio Jorge; Inês Dutra.
  We describe GPU implementations of the matrix factorization recommender algorithms CCD++ and ALS. We compare the processing time and predictive ability of the GPU implementations with existing multi-core versions of the same algorithms. The GPU results are better than those of the multi-core versions (maximum speedup of 14.8).
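The abstract does not show the algorithms themselves; as a rough single-threaded reference for one of the two methods mentioned, a minimal NumPy sketch of ALS (alternating least squares over the observed ratings only) might look like the following. All names and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def als(R, mask, k=2, reg=0.1, iters=20, seed=0):
    """Alternating Least Squares: factor R ~ P @ Q.T using observed entries only."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.normal(scale=0.1, size=(m, k))
    Q = rng.normal(scale=0.1, size=(n, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        # Fix Q, solve a small ridge-regression system per user row.
        for u in range(m):
            obs = mask[u]
            P[u] = np.linalg.solve(Q[obs].T @ Q[obs] + I, Q[obs].T @ R[u, obs])
        # Fix P, solve per item column.
        for i in range(n):
            obs = mask[:, i]
            Q[i] = np.linalg.solve(P[obs].T @ P[obs] + I, P[obs].T @ R[obs, i])
    return P, Q
```

Each inner solve is an independent k-by-k system, which is what makes the algorithm a natural fit for GPU parallelization.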
- Access Control and Obligations in the Category-Based Metamodel: A Rewrite-Based Semantics (2015). Sandra Alves; Degtyarev, A.; Fernandez, M.
  We define an extension of the category-based access control (CBAC) metamodel to accommodate a general notion of obligation. Since most well-known access control models are instances of the CBAC metamodel, we obtain a framework for studying the interaction between authorisation and obligation, in which properties proven of the metamodel apply to all of its instances. In particular, the extended CBAC metamodel allows security administrators to check whether a policy combining authorisations and obligations is consistent.
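The consistency check mentioned at the end can be illustrated with a deliberately tiny sketch (this is not the metamodel's rewrite-based semantics): a policy is treated as inconsistent when some obligation requires a (principal, action, resource) triple that the authorisation relation does not grant. The set-based representation is an assumption for illustration only.

```python
def consistent(authorised, obligations):
    """Toy consistency check: every obliged (principal, action, resource)
    triple must also be authorised; otherwise the policy conflicts."""
    violations = {ob for ob in obligations if ob not in authorised}
    return len(violations) == 0, violations
```

A real CBAC instance would derive the `authorised` relation from category assignments rather than enumerate it.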
- Active Manifold Learning with Twitter Big Data (2015). Silva, C.; Mário João Antunes; Costa, J.; Ribeiro, B.
  The data produced by Internet applications have increased substantially. Big data is a fast-growing field that deals with this deluge of data through storage techniques, dedicated infrastructures, and development frameworks that parallelize defined tasks and reduce their results. These solutions nevertheless fall short in online and highly data-demanding scenarios, since users expect swift feedback. Reduction techniques are used efficiently in online big data applications to improve classification problems. Reduction in big data usually follows one of two main methods: (i) reduce dimensionality by pruning or reformulating the feature set; (ii) reduce the sample size by choosing the most relevant examples. Both approaches have benefits, not only in the time needed to build a model, but often also in performance, usually by reducing overfitting and improving generalization. In this paper we investigate reduction techniques that tackle both the dimensionality and the size of big data. We propose a framework that combines a manifold learning approach to reduce dimensionality with an active learning SVM-based strategy to reduce the size of the labeled sample. Results on Twitter data show the potential of the proposed active manifold learning approach.
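The size-reduction half of the proposal, picking the most relevant examples to label, is commonly implemented as margin-based uncertainty sampling with an SVM; a minimal sketch under that assumption (the abstract does not state the paper's exact selection criterion):

```python
import numpy as np

def select_uncertain(scores, budget):
    """Margin-based active selection: choose the unlabeled examples whose
    SVM decision values are closest to the boundary (|f(x)| smallest)."""
    return np.argsort(np.abs(scores))[:budget]
```

The selected indices are then sent to an oracle (e.g., a human annotator) for labeling, and the model is retrained on the enlarged labeled sample.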
- Adaptive learning for dynamic environments: A comparative approach (2017). Costa, J.; Silva, C.; Mário João Antunes; Ribeiro, B.
  Nowadays most learning problems demand adaptive solutions. Current challenges include temporal data streams, drift, and non-stationary scenarios, often with text data, whether in social networks or in business systems. Various efforts have been pursued in machine learning to learn in such environments, especially because of their non-trivial nature, since the data distribution used to define the model changes relative to the current environment. In this work we present the Drift Adaptive Retain Knowledge (DARK) framework to tackle adaptive learning in dynamic environments based on recent and retained knowledge. DARK handles an ensemble of multiple Support Vector Machine (SVM) models that are dynamically weighted and have distinct training window sizes. A comparative study with benchmark solutions in the field, namely the Learn++.NSE algorithm, is also presented. Experimental results revealed that DARK outperforms Learn++.NSE with two different base classifiers, an SVM and a Classification and Regression Tree (CART).
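A dynamically weighted ensemble of the kind DARK maintains can be sketched as follows; the exponential-smoothing update and the +1/-1 label encoding are illustrative assumptions, not the framework's actual weighting rule.

```python
import numpy as np

def weighted_vote(preds, weights):
    """Combine binary predictions (+1/-1), one row per ensemble member,
    by weighted vote."""
    return np.sign(preds.T @ weights)

def update_weights(weights, preds, y, lr=0.5):
    """Reweight members by their accuracy on the newest labelled batch,
    smoothing against the previous weights so older knowledge is retained."""
    acc = (preds == y).mean(axis=1)          # per-model accuracy on the batch
    weights = (1 - lr) * weights + lr * acc  # exponential smoothing
    return weights / weights.sum()           # normalize to sum to one
```

Members trained on distinct window sizes would each contribute one row of `preds`; models that track the current concept gain weight as drift unfolds.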
- Adaptive model rules from data streams (2013). Ezilda Duarte Almeida; Carlos Ferreira; João Gama.
  Decision rules are one of the most expressive languages for machine learning. In this paper we present Adaptive Model Rules (AMRules), the first streaming rule learning algorithm for regression problems. In AMRules the antecedent of a rule is a conjunction of conditions on the attribute values, and the consequent is a linear combination of attribute values. Each rule uses a Page-Hinkley test to detect changes in the process generating data and reacts to changes by pruning the rule set. In the experimental section we report the results of AMRules on benchmark regression problems, and compare the performance of our system with other streaming regression algorithms. © 2013 Springer-Verlag.
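The Page-Hinkley test each rule uses is a standard change detector; a minimal sketch (the parameter names `delta` and `lambda_` follow common usage and are not necessarily the paper's settings):

```python
class PageHinkley:
    """Page-Hinkley change detector: signals a change when the cumulative
    deviation from the running mean rises more than lambda_ above its
    historical minimum; delta tolerates small fluctuations."""
    def __init__(self, delta=0.005, lambda_=50.0):
        self.delta, self.lambda_ = delta, lambda_
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0      # cumulative sum of (x_t - mean_t - delta)
        self.min_cum = 0.0  # minimum of cum seen so far

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lambda_  # True = change detected
```

In AMRules the monitored stream would be each rule's error signal, and a detection triggers pruning of the rule set.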
- An adjustable sensor platform using dual wavelength measurements for optical colorimetric sensitive films (2014). Carlos Manuel Machado; Gouveia, C.; João Ferreira; Kovacs, B.; Pedro Jorge; Luís Lopes.
  We present a new and versatile sensor platform to read out the response of sensitive colorimetric films. The platform is fully self-contained and based on a switched dual-wavelength scheme. After filtering and signal processing, the system is able to provide self-referenced measurements of color intensity changes in the film, while being immune to noise sources such as ambient light and fluctuations in the power source and in the optical path. By controlling the power and the switching frequency between the two wavelengths it is possible to fine tune the output gain as well as the operational range of the sensor for a particular application, thus improving the signal conditioning. The platform uses a micro-controller that complements the analog circuit used to acquire the signal. The latter pre-amplifies, filters and conditions the signal, leaving the micro-controller free to perform sensor linearization and unit conversion. By changing the sensitive film and the wavelength of the light source it is possible to use this platform for a wide range of sensing applications. © 2014 IEEE.
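The self-referencing idea — subtract a dark sample to remove ambient light, then take the ratio of the two wavelengths to cancel common-mode fluctuations — can be sketched in a few lines. This is a simplified numerical model of the analog chain described above, not the platform's firmware.

```python
def self_referenced_ratio(v_meas, v_ref, v_dark):
    """Switched two-wavelength readout: the dark sample (both LEDs off)
    removes the ambient-light offset, and the ratio cancels common-mode
    fluctuations in source power and optical path."""
    return (v_meas - v_dark) / (v_ref - v_dark)
```

Because any gain applied to the source or path multiplies both channels equally, the ratio depends only on the film's differential absorption at the two wavelengths.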
- Analysing Relevant Interactions by Bridging Facebook and Moodle (2016). Luciana Gomes Oliveira; Álvaro Figueira.
- Analyzing Social Media Discourse: An Approach Using Semi-supervised Learning (2016). Álvaro Figueira; Luciana Gomes Oliveira.
  The ability to handle large amounts of unstructured information, to optimize strategic business opportunities, and to identify fundamental lessons among competitors through benchmarking are essential skills in every business sector. Currently, there are dozens of social media analytics applications aiming to provide organizations with informed decision-making tools. However, these applications rely on providing quantitative information, rather than qualitative information that is relevant and intelligible for managers. To address these aspects, we propose a semi-supervised learning procedure that discovers and compiles information taken from online social media, organizing it in a scheme that can be strategically relevant. We illustrate our procedure with a case study in which we collected and analysed the social media discourse of 43 organizations operating in the Higher Public Polytechnic Education Sector. During the analysis we created an "editorial model" that characterizes the posts in the area. We describe in detail the training and execution of an ensemble of classifying algorithms. In this study we focus on the techniques used to increase the accuracy and stability of the classifiers.
- An Approach to Relevancy Detection: Contributions to the Automatic Detection of Relevance in Social Networks (2016). Álvaro Figueira; Miguel Oliveira Sandim; Paula Teixeira Fortuna.
  In this paper we analyze the information propagated through three social networks. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors. Here we focus on automatic methods for assessing the relevance of a given set of posts. We first retrieved posts related to trending topics from the social networks. Then we categorized them as news or as conversational messages, and assessed their credibility. From the insights gained, we used features to automatically assess whether a post is news or chat, and to rate its credibility. Based on these two experiments we built an automatic classifier. The results of evaluating our classifier, which categorizes posts as relevant or not, show high balanced accuracy, with the potential to be further enhanced.
- An architecture for seamless configuration, deployment, and management of wireless sensor-actuator networks (2014). Edgard Santos Neto; Mendes, R.; Luís Lopes.
  The goal of this work is to provide (non-specialist) users with the means to seamlessly set up and monitor a Wireless Sensor-Actuator Network (WSN) without writing any code or performing subtle hardware configurations. Towards this goal, we present an architecture that allows the seamless configuration, deployment, and management of applications over WSNs. We explore the fact that most deployments have a common modus operandi: (a) simple data readers running on the nodes periodically gather and send data to sinks, and (b) sinks process incoming data and, accordingly, issue actuation commands to the nodes. We argue that, given the knowledge of a platform's capabilities, its sensors and actuators, and their respective programming interfaces, it is possible to fully automate the process of configuring, building, and deploying an application over a WSN. Similarly, monitoring and managing the deployment can be vastly simplified by using a middleware that supports user-defined tasks that process data from the nodes, divide the WSN into regions defined by simple boolean predicates over data, and eventually issue actuation commands on regions.
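The region abstraction — a subset of nodes selected by a boolean predicate over their data, to which actuation commands are then issued — can be sketched as follows. The data layout and node identifiers are hypothetical, invented for illustration.

```python
def region(readings, predicate):
    """Select the node ids whose latest readings satisfy a boolean predicate."""
    return {node for node, data in readings.items() if predicate(data)}

# Hypothetical latest readings from three nodes.
readings = {
    "n1": {"temp": 31.0},
    "n2": {"temp": 18.5},
    "n3": {"temp": 29.2},
}
# The "hot" region: nodes above 25 C, e.g. the targets of a "fan on" command.
hot = region(readings, lambda d: d["temp"] > 25.0)
```

A middleware task would re-evaluate the predicate as new data arrives, so region membership tracks the environment over time.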
- Automated Diagnosis of Breast Cancer on Medical Images (2015). Velikova, M.; Inês Dutra; Burnside, E.S.
- Automatic Generation and Delivery of Multiple-Choice Math Quizzes (2013). Tomas, A.P.; José Paulo Leal.
  We present an application of constraint logic programming to create multiple-choice questions for math quizzes. Constraints are used for the configuration of the generator, giving the user some flexibility to customize the forms of the expressions arising in the exercises. Constraints are also used to control the application of buggy rules in the derivation of plausible wrong solutions to the quiz questions. We developed a prototype based on the core system of AGILMAT [18]. For delivering math quizzes to students, we used an automatic evaluation feature of Mooshak [8] that was improved to handle math expressions. The communication between the two systems, AgilmatQuiz and Mooshak, relies on a specially designed LaTeX-based quiz format. This tool is being used at our institution to create quizzes that support assessment in a PreCalculus course for first-year undergraduate students.
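The "buggy rules" idea — deriving plausible wrong answers by applying known student misconceptions to the correct derivation — can be illustrated for a single expression family. The three misconceptions encoded below are common textbook examples and are not taken from AGILMAT itself.

```python
def distractors(a, b):
    """Correct value of (a+b)**2 plus plausible wrong answers derived
    from illustrative 'buggy rules' (common student misconceptions)."""
    correct = (a + b) ** 2
    buggy = {
        a**2 + b**2,         # forgot the cross term: (a+b)^2 = a^2 + b^2
        a**2 + a*b + b**2,   # halved the cross term
        2 * (a + b),         # confused squaring with doubling
    }
    buggy.discard(correct)   # a buggy rule may accidentally give the right answer
    return correct, sorted(buggy)
```

Constraints in the real generator would decide which buggy rules may fire for a given expression so that distractors stay plausible.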
- Automatic network configuration in virtualized environment using GNS3 (2015). Emiliano, R.; Mário João Antunes.
  Computer networking is a central topic in the computer science curricula offered by higher education institutions. Network virtualization and simulation tools such as GNS3 allow students and practitioners to test real-world networking scenarios and to build complex topologies by configuring virtualized equipment, such as routers and switches, through each device's virtual console. Configuring advanced network topics in GNS3 requires students to perform basic and highly repetitive IP configuration tasks on every device. As the network topology grows, so does the amount of equipment to be configured, which may lead to logical configuration errors. In this paper we propose an extension to the GNS3 network virtualizer that automatically generates a valid configuration for all the equipment in a GNS3 scenario. Our implementation automatically produces an initial IP and routing configuration for all the Cisco virtual devices by using the GNS3 specification files. We tested this extension against a set of networked scenarios, which demonstrated the robustness of the approach and the speedup of the overall configuration tasks. In a learning environment, this feature may save time for networking practitioners, whether beginners or advanced, who aim to configure and test network topologies, since it automatically produces a valid and operational configuration for all the equipment designed in a GNS3 environment.
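The core of such a generator — allocating a point-to-point subnet per link and emitting per-interface configuration lines — can be sketched with the standard library. The /30 addressing plan and the Cisco-style syntax are illustrative assumptions, not the extension's actual output format.

```python
import ipaddress

def generate_configs(links, base="10.0.0.0/16"):
    """Assign a /30 point-to-point subnet per link and emit Cisco-style
    interface configuration lines for each router end of the link."""
    subnets = ipaddress.ip_network(base).subnets(new_prefix=30)
    configs = {}
    for (r1, if1), (r2, if2) in links:
        net = next(subnets)                 # fresh subnet for this link
        hosts = list(net.hosts())           # the two usable addresses in a /30
        for (router, iface), ip in zip(((r1, if1), (r2, if2)), hosts):
            configs.setdefault(router, []).extend([
                f"interface {iface}",
                f" ip address {ip} {net.netmask}",
                " no shutdown",
            ])
    return configs
```

Feeding the link list parsed from a GNS3 specification file into such a generator yields a consistent, collision-free initial addressing plan for every device.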
- Avoiding Anomalies in Data Stream Learning (2013). João Gama; Kosina, P.; Ezilda Duarte Almeida.
  The presence of anomalies in data compromises data quality and can reduce the effectiveness of learning algorithms. Standard data mining methodologies refer to data cleaning as pre-processing before the learning task. The problem of data cleaning is exacerbated when learning in the computational model of data streams. In this paper we present a streaming algorithm for learning classification rules that is able to detect contextual anomalies in the data. Contextual anomalies are surprising attribute values in the context defined by the conditional part of the rule. For each example we compute the degree of anomaliness based on the probability of the attribute values given the conditional part of the rule covering the example. Examples with a high degree of anomaliness are signaled to the user and not used to train the classifier. The experimental evaluation on real-world data sets shows the ability to discover anomalous examples in the data. The main advantage of the proposed method is its ability to inform the context and explain why the anomaly occurs.
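The anomaliness score — low probability of an example's attribute values given the rule that covers it — can be sketched with per-rule value counts. The Laplace smoothing and the averaging over attributes are illustrative choices, not necessarily the paper's exact formula.

```python
from collections import Counter, defaultdict
import math

class RuleStats:
    """Per-rule attribute-value statistics over the examples the rule covers."""
    def __init__(self):
        self.counts = defaultdict(Counter)
        self.n = 0

    def observe(self, example):
        """Record one covered example (a dict of attribute -> value)."""
        self.n += 1
        for attr, value in example.items():
            self.counts[attr][value] += 1

    def anomaliness(self, example, prior=1.0):
        """Mean -log P(value | rule) with Laplace smoothing;
        higher means more surprising in this rule's context."""
        score = 0.0
        for attr, value in example.items():
            p = (self.counts[attr][value] + prior) / (self.n + 2 * prior)
            score += -math.log(p)
        return score / len(example)
```

Examples whose score exceeds a threshold would be flagged to the user and withheld from training, as the abstract describes.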
- BabeLO - An Extensible Converter of Programming Exercises Formats (2013). Ricardo Queirós; José Paulo Leal.
  In the last two decades there has been a proliferation of programming exercise formats that hinders interoperability in automatic assessment. In the absence of a widely accepted standard, a pragmatic solution is to convert content among the existing formats. BabeLO is a programming exercise converter providing services to a network of heterogeneous e-learning systems such as contest management systems, programming exercise authoring tools, evaluation engines, and repositories of learning objects. Its main feature is the use of a pivotal format to achieve greater extensibility: supporting a new format requires only conversion to and from the pivotal format. This paper starts with an analysis of programming exercise formats representative of the existing diversity. This analysis sets the context for the proposed approach to exercise conversion and for the description of the pivotal data format. The abstract service definition is the basis for the design of BabeLO, its components, and its web service interface. The paper includes a report on the use of BabeLO in two concrete scenarios: to relocate exercises to a different repository, and to use an evaluation engine in a network of heterogeneous systems.
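The pivotal-format design can be sketched abstractly: each format contributes only a reader into and a writer out of the pivot, so n formats need 2n converters instead of n(n-1) pairwise ones. The toy formats below are invented for illustration and do not reflect BabeLO's actual pivotal schema.

```python
def make_converter(to_pivot, from_pivot):
    """Compose format-specific readers and writers through a single
    pivotal format: src -> pivot -> dst."""
    def convert(exercise, src, dst):
        return from_pivot[dst](to_pivot[src](exercise))
    return convert

# Two hypothetical exercise formats: "A" is a (title, statement) tuple,
# "B" is a dict with different field names. The pivot is a neutral dict.
to_pivot = {
    "A": lambda e: {"title": e[0], "statement": e[1]},
    "B": lambda e: {"title": e["name"], "statement": e["body"]},
}
from_pivot = {
    "A": lambda p: (p["title"], p["statement"]),
    "B": lambda p: {"name": p["title"], "body": p["statement"]},
}
convert = make_converter(to_pivot, from_pivot)
```

Adding a third format would mean registering one entry in each table, with no change to any existing format's code.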