A Parallel Computing Hybrid Approach for Feature Selection

dc.contributor.author Jorge Miguel Silva en
dc.contributor.author Aguiar, A. en
dc.contributor.author Fernando Silva en
dc.date.accessioned 2018-01-19T10:39:40Z
dc.date.available 2018-01-19T10:39:40Z
dc.date.issued 2015 en
dc.description.abstract The ultimate goal of feature selection is to select, from an original set of features, the smallest subset that yields minimum generalization error. This effectively reduces the feature space, and thus the complexity of classifiers. Although several algorithms have been proposed, no single one outperforms all the others in all scenarios, and the problem remains an active field of research. This paper proposes a new hybrid parallel approach to feature selection. The idea is to use a filter metric to reduce the feature space, and then use an innovative wrapper method to search extensively for the best solution. The proposed strategy is implemented in a shared-memory parallel environment to speed up the process. We evaluated its parallel performance using up to 32 cores, and our results show a 30-fold speedup. To test the feature selection performance we used five datasets from the well-known NIPS challenge and obtained an average score of 95.90% across all solutions. en
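The abstract's two-stage pipeline (a filter metric to prune the feature space, then a wrapper search over the survivors) can be sketched as follows. This is a minimal illustration, not the paper's method: the variance filter, the greedy forward wrapper, and the nearest-centroid evaluator are all illustrative assumptions.

```python
def variance_filter(X, k):
    """Filter step (assumed metric): keep the k highest-variance features."""
    n = len(X)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        scores.append(sum((v - mean) ** 2 for v in col) / n)
    ranked = sorted(range(len(scores)), key=lambda j: -scores[j])
    return sorted(ranked[:k])

def centroid_accuracy(X, y, subset):
    """Toy wrapper evaluator: nearest-centroid accuracy on the data."""
    if not subset:
        return 0.0
    classes = sorted(set(y))
    cents = {}
    for c in classes:
        rows = [[X[i][j] for j in subset] for i in range(len(X)) if y[i] == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    correct = 0
    for i, row in enumerate(X):
        p = [row[j] for j in subset]
        pred = min(classes,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))
        correct += pred == y[i]
    return correct / len(X)

def wrapper_search(X, y, feats, evaluate):
    """Wrapper step (assumed strategy): greedy forward selection over the
    filtered features, adding one feature at a time while the score improves.
    Each candidate evaluation is independent, which is what makes this stage
    amenable to the shared-memory parallelization the abstract describes."""
    selected, best = [], -1.0
    improved = True
    while improved:
        improved = False
        for f in feats:
            if f in selected:
                continue
            score = evaluate(X, y, selected + [f])
            if score > best:
                best, choice = score, f
                improved = True
        if improved:
            selected.append(choice)
    return selected, best
```

For example, on a toy dataset where only feature 0 carries class information, the filter prunes the space and the wrapper then settles on the single informative feature:

```python
X = [[0.0, 5, 1, 0], [0.1, 5, 1, 0], [1.0, 5, 1, 0], [1.1, 5, 1, 0]]
y = [0, 0, 1, 1]
feats = variance_filter(X, 2)
selected, score = wrapper_search(X, y, feats, centroid_accuracy)
```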
dc.identifier.uri http://repositorio.inesctec.pt/handle/123456789/7049
dc.identifier.uri http://dx.doi.org/10.1109/cse.2015.34 en
dc.language eng en
dc.relation 6650 en
dc.relation 5124 en
dc.rights info:eu-repo/semantics/openAccess en
dc.title A Parallel Computing Hybrid Approach for Feature Selection en
dc.type conferenceObject en
dc.type Publication en
Files
Original bundle
Name: P-00K-03X.pdf
Size: 467.83 KB
Format: Adobe Portable Document Format