CRACS - Indexed Articles in Conferences
Browsing CRACS - Indexed Articles in Conferences by Author "Aguiar, A."
Numerical limits for data gathering in wireless networks (2013)
Mohammad Nozari; Aguiar, A.
In our previous work, we proposed to use a vehicular network for data gathering, i.e. as an urban sensor. In this paper, we aim at understanding the theoretical limits of data gathering in a time-slotted wireless network in terms of maximum service rate per node and end-to-end packet delivery ratio. The capacity of wireless networks has been widely studied, and bounds on that capacity have been expressed in Bachmann-Landau notation [1]. However, these asymptotic limits do not reveal the concrete number of data packets that a wireless network can carry. In this paper, we calculate the maximum amount of data that each node can generate before saturating the network. The expected number of collisions and its effect on the packet delivery ratio (PDR) and service rate are investigated. The results quantify the trade-off between packet delivery ratio and service rate. Finally, we verify our analytical results by simulating the same scenario. © 2013 IEEE.
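The abstract does not state the concrete traffic or collision model, but the kind of numeric limit it discusses can be illustrated with a minimal sketch under a slotted-ALOHA-style assumption: each of N nodes transmits in a slot with probability p, a packet is delivered only if no other node transmits in the same slot, and the per-node service rate is the rate of successful transmissions. The values of N and p and the collision model below are illustrative assumptions, not the paper's.

```python
# Minimal numeric sketch of collisions vs. delivery in a time-slotted network.
# Assumption: slotted-ALOHA-style model, no retransmissions or multi-hop forwarding.
import numpy as np

N = 50                              # number of nodes (assumed)
p = np.linspace(0.001, 0.2, 200)    # per-slot transmission probability per node

# A tagged node's packet survives a slot only if none of the other N-1 nodes transmit.
pdr = (1.0 - p) ** (N - 1)          # packet delivery ratio
service_rate = p * pdr              # successful packets per node per slot

best = np.argmax(service_rate)
print(f"max per-node service rate {service_rate[best]:.4f} packets/slot "
      f"at p={p[best]:.3f}, with PDR {pdr[best]:.2f}")
```

Even this toy model exhibits the trade-off the abstract mentions: pushing the per-node offered load beyond the saturation point raises collisions and lowers both the PDR and the achieved service rate.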
A Parallel Computing Hybrid Approach for Feature Selection (2015)
Jorge Miguel Silva; Aguiar, A.; Fernando Silva
The ultimate goal of feature selection is to select, from an original set of features, the smallest subset that yields minimum generalization error. This effectively reduces the feature space, and thus the complexity of classifiers. Although several algorithms have been proposed, no single one outperforms all the others in all scenarios, and the problem remains an active research field. This paper proposes a new hybrid parallel approach to feature selection. The idea is to use a filter metric to reduce the feature space, and then use an innovative wrapper method to search extensively for the best solution. The proposed strategy is implemented in a shared-memory parallel environment to speed up the process. We evaluated its parallel performance using up to 32 cores, and our results show a 30-fold gain in speed. To test the performance of feature selection, we used five datasets from the well-known NIPS challenge and obtained an average score of 95.90% across all solutions.
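As a rough illustration of the filter-then-wrapper idea (not the paper's actual implementation), the sketch below uses scikit-learn's mutual_info_classif as the filter metric, a greedy forward wrapper around a k-NN classifier, and joblib to evaluate candidate features in parallel. The dataset, classifier, filter metric, and search strategy are all assumptions chosen for brevity.

```python
# Hybrid feature selection sketch: filter step to shrink the feature space,
# then a parallelized greedy forward wrapper search over the survivors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from joblib import Parallel, delayed

X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=0)

# Filter step: keep the top-k features by mutual information with the label.
k = 20
mi = mutual_info_classif(X, y, random_state=0)
candidates = [int(i) for i in np.argsort(mi)[::-1][:k]]

def score(feature_subset):
    """Cross-validated accuracy of a k-NN classifier on the given features."""
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, feature_subset], y, cv=3).mean()

# Wrapper step: greedy forward selection; the candidate evaluations in each
# round run in parallel, mirroring the shared-memory speedup idea.
selected, best = [], 0.0
while candidates:
    scores = Parallel(n_jobs=-1)(
        delayed(score)(selected + [f]) for f in candidates)
    idx = int(np.argmax(scores))
    if scores[idx] <= best:
        break
    best = scores[idx]
    selected.append(candidates.pop(idx))

print(f"selected features: {selected}, CV accuracy: {best:.3f}")
```

The wrapper evaluations dominate the cost, which is why distributing them across cores is where a shared-memory implementation can recover most of the reported speedup.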