Please use this identifier to cite or link to this item: http://repositorio.inesctec.pt/handle/123456789/5236
Full metadata record
DC Field                  Value                                                                 Language
dc.contributor.author     Costa, J                                                              en
dc.contributor.author     Silva, C                                                              en
dc.contributor.author     Mário João Antunes                                                    en
dc.contributor.author     Ribeiro, B                                                            en
dc.date.accessioned       2018-01-02T15:38:59Z                                                  -
dc.date.available         2018-01-02T15:38:59Z                                                  -
dc.date.issued            2016                                                                  en
dc.identifier.uri         http://repositorio.inesctec.pt/handle/123456789/5236                  -
dc.identifier.uri         http://dx.doi.org/10.1007/978-3-319-44188-7_3                         en
dc.description.abstract   Machine learning approaches often focus on optimizing the algorithm rather than ensuring that the source data is as rich as possible. However, when it is possible to enhance the input examples used to construct models, this should be considered thoroughly. In this work, we propose a technique to define the best set of training examples using dynamic ensembles in text classification scenarios. In dynamic environments, where new data is constantly appearing, old data is usually disregarded, but some of those disregarded examples may carry substantial information. We propose a method that determines the most relevant examples by analysing their behaviour when defining separating planes or thresholds between classes. Those examples, deemed better than others, are kept for a longer time window than the rest. Results on a Twitter scenario show that keeping those examples enhances the final classification performance.   en
dc.language               eng                                                                   en
dc.relation               5138                                                                  en
dc.rights                 info:eu-repo/semantics/openAccess                                     en
dc.title                  Choice of Best Samples for Building Ensembles in Dynamic Environments   en
dc.type                   conferenceObject                                                      en
dc.type                   Publication                                                           en
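
The abstract above sketches an algorithmic idea: in a stream of text data, the examples that matter most are those that help define the separating planes (or thresholds) between classes, and these are kept beyond the usual time window. The snippet below is a minimal Python sketch of that general idea only, not the authors' method from the paper: it assumes a binary task, dense (already vectorized) feature rows, a single linear SVM instead of a dynamic ensemble, and illustrative constants WINDOW, KEEP_WINDOW and MARGIN that do not come from the paper.

```python
# Minimal sketch of the general idea described in the abstract: examples
# close to the separating plane are treated as the most informative and
# are kept for a longer time window. NOT the authors' exact algorithm;
# the binary task, the linear SVM and all constants are assumptions.
from collections import deque

import numpy as np
from sklearn.svm import LinearSVC

WINDOW = 500          # assumed size of the regular sliding window
KEEP_WINDOW = 2000    # assumed longer window for the "best" examples
MARGIN = 1.0          # assumed distance-to-plane threshold

recent = deque(maxlen=WINDOW)         # most recent (features, label) pairs
retained = deque(maxlen=KEEP_WINDOW)  # informative examples kept longer


def update(batch_X, batch_y):
    """Retrain on the current window plus retained examples, then promote
    examples lying close to the decision boundary (binary case)."""
    recent.extend(zip(batch_X, batch_y))

    X = np.vstack([x for x, _ in recent] + [x for x, _ in retained])
    y = np.array([label for _, label in recent] +
                 [label for _, label in retained])
    clf = LinearSVC().fit(X, y)

    # Small |decision_function| means the example sits near the separating
    # plane and therefore carries more information about the boundary.
    scores = clf.decision_function(np.vstack([x for x, _ in recent]))
    for (x, label), score in zip(list(recent), scores):
        if abs(score) <= MARGIN:
            retained.append((x, label))
    return clf
```

In a streaming setting, update() would be called on each incoming mini-batch of vectorized tweets; the retained deque plays the role of the longer time window mentioned in the abstract.
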
Appears in Collections: CRACS - Articles in International Conferences

Files in This Item:
File             Description    Size         Format
P-00K-V77.pdf                   483.25 kB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.