CRACS - Indexed Articles in Conferences
Browsing CRACS - Indexed Articles in Conferences by Author "Alberto Martinez Angeles,CA"
Item: A Datalog Engine for GPUs (2014)
Authors: Alberto Martinez Angeles, CA; Inês Dutra; Vítor Santos Costa; Buenabad Chavez, J
Abstract: We present the design and evaluation of a Datalog engine for execution on Graphics Processing Units (GPUs). The engine evaluates recursive and non-recursive Datalog queries using a bottom-up approach based on typical relational operators. It includes a memory management scheme that automatically swaps data between memory on the host platform (a multicore) and memory on the GPU in order to reduce the number of memory transfers. To evaluate the performance of the engine, four Datalog queries were run on the engine and on a single CPU of the multicore host. One query runs up to 200 times faster on the GPU engine than on the CPU.
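The bottom-up evaluation strategy described in the abstract can be illustrated with a short sketch. The following is a minimal, illustrative Python version (not the authors' GPU code) of semi-naive bottom-up evaluation of a transitive-closure query, built from the same kinds of relational operators (join, projection, union with deduplication) that such an engine evaluates; the edge/path rules and all function names are assumptions made for the example.

# A minimal illustrative sketch (not the authors' GPU code) of bottom-up,
# semi-naive Datalog evaluation for a transitive-closure query, expressed with
# relational operators (join, projection, union/deduplication).
# Rules assumed for the example:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).

def join_project(delta, edge):
    # Join the newly derived path facts with edge on the shared variable Y,
    # then project the result onto (X, Z).
    by_y = {}
    for y, z in edge:
        by_y.setdefault(y, []).append(z)
    return {(x, z) for x, y in delta for z in by_y.get(y, [])}

def transitive_closure(edge):
    path = set(edge)      # non-recursive rule: copy edge into path
    delta = set(edge)     # facts derived in the previous iteration
    while delta:          # iterate until a fixpoint is reached
        new = join_project(delta, edge) - path
        path |= new       # union with deduplication
        delta = new
    return path

if __name__ == "__main__":
    edges = {(1, 2), (2, 3), (3, 4)}
    print(sorted(transitive_closure(edges)))
    # -> [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]

In the paper's setting, each iteration's join, projection, and deduplication are the data-parallel operations offloaded to the GPU, with the memory manager deciding which relations stay resident in GPU memory.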
Item: Processing Markov Logic Networks with GPUs: Accelerating Network Grounding (2016)
Authors: Alberto Martinez Angeles, CA; Inês Dutra; Vítor Santos Costa; Buenabad Chavez, J
Abstract: Markov Logic is an expressive and widely used knowledge representation formalism that combines logic and probabilities, providing a powerful framework for inference and learning tasks. Most Markov Logic implementations perform inference by transforming the logic representation into a set of weighted propositional formulae that encode a Markov network, the ground Markov network. Probabilistic inference is then performed over the grounded network. Constructing, simplifying, and evaluating the network are the main steps of the inference phase. As the size of a Markov network can grow rather quickly, Markov Logic Network (MLN) inference can become very expensive, motivating a rich vein of research on the optimization of MLN performance. We claim that parallelism can play a large role in this task. Namely, we demonstrate that widely available Graphics Processing Units (GPUs) can be used to improve the performance of a state-of-the-art MLN system, Tuffy, with minimal changes. Indeed, comparing the performance of our GPU-based system, TuGPU, to that of the Alchemy, Tuffy and RockIt systems on three widely used applications shows that TuGPU is up to 15 times faster than the other systems.
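The grounding step described in the abstract can be sketched briefly. The following is a minimal, illustrative Python sketch (not Tuffy or TuGPU code) of how a weighted first-order clause is instantiated over the domain constants to produce the weighted ground clauses that encode the ground Markov network; the Smokes/Cancer clause, its weight, and all helper names are assumptions made for the example.

# A minimal illustrative sketch (not Tuffy or TuGPU code) of MLN grounding:
# a weighted first-order clause is instantiated over the domain constants,
# yielding weighted ground clauses that encode the ground Markov network.

from itertools import product

def ground_clause(weight, literals, variables, constants):
    # literals: (predicate, argument tuple, sign) triples; arguments may be
    # variables (substituted during grounding) or constants (left as-is).
    ground = []
    for binding in product(constants, repeat=len(variables)):
        env = dict(zip(variables, binding))
        instance = tuple((pred, tuple(env.get(a, a) for a in args), positive)
                         for pred, args, positive in literals)
        ground.append((weight, instance))
    return ground

if __name__ == "__main__":
    # 1.5  !Smokes(x) v Cancer(x)   -- clausal form of Smokes(x) => Cancer(x)
    weight, literals, variables = 1.5, [("Smokes", ("x",), False),
                                        ("Cancer", ("x",), True)], ["x"]
    for w, clause in ground_clause(weight, literals, variables, ["Anna", "Bob"]):
        print(w, clause)

Because every clause is instantiated independently for each variable binding, this enumeration is naturally data-parallel, which is the property the paper exploits when moving grounding onto the GPU.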