Learning from evolving video streams in a multi-camera scenario

dc.contributor.author Samaneh Khoshrou en
dc.contributor.author Jaime Cardoso en
dc.contributor.author Luís Filipe Teixeira en
dc.date.accessioned 2018-01-12T16:12:22Z
dc.date.available 2018-01-12T16:12:22Z
dc.date.issued 2015 en
dc.description.abstract Nowadays, video surveillance systems are taking the first steps toward automation, in order to ease the burden on human operators as well as to avoid human error. Because the underlying data distribution and the number of concepts change over time, conventional learning algorithms fail to provide reliable solutions for this setting. In this paper, we formalize a learning concept suitable for multi-camera video surveillance and propose a learning methodology adapted to that new paradigm. The proposed framework resorts to the universal background model to robustly learn individual object models from small samples and to more effectively detect novel classes. The individual models are incrementally updated in an ensemble-based approach, with older models being progressively forgotten. The framework is designed to detect and label new concepts automatically. The system is also designed to exploit active learning strategies in order to interact wisely with the operator, requesting assistance with the observations that are most ambiguous to classify. Experimental results obtained on both real and synthetic data sets verify the usefulness of the proposed approach. en
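The sketch below illustrates, in Python, the kind of ensemble-based incremental updating with forgetting and uncertainty-driven operator querying described in the abstract. It is not the paper's exact algorithm: the single-Gaussian object models, the class names, the ensemble size, and the ambiguity margin are all illustrative assumptions (the paper itself adapts object models from a universal background model).

import numpy as np

class ObjectModel:
    """Illustrative single-Gaussian appearance model learned from a small batch."""
    def __init__(self, features):
        self.mean = features.mean(axis=0)
        self.var = features.var(axis=0) + 1e-6  # regularise small samples

    def log_likelihood(self, x):
        # Gaussian log-likelihood up to an additive constant.
        return -0.5 * np.sum((x - self.mean) ** 2 / self.var + np.log(self.var))

class ForgettingEnsemble:
    """Keeps at most `max_models` models per label; the oldest are progressively dropped."""
    def __init__(self, max_models=5):
        self.max_models = max_models
        self.models = {}  # label -> list of ObjectModel, oldest first

    def update(self, label, features):
        batch = self.models.setdefault(label, [])
        batch.append(ObjectModel(features))
        if len(batch) > self.max_models:
            batch.pop(0)  # forget the oldest model for this label

    def score(self, x):
        # Score each known label by its best ensemble member.
        return {lbl: max(m.log_likelihood(x) for m in ms)
                for lbl, ms in self.models.items()}

    def classify_or_query(self, x, margin=1.0):
        """Return (label, needs_operator): query when the top two scores are close."""
        scores = sorted(self.score(x).items(), key=lambda kv: -kv[1])
        if not scores:
            return None, True          # nothing learned yet: ask the operator
        if len(scores) == 1:
            return scores[0][0], False
        best, second = scores[0], scores[1]
        ambiguous = (best[1] - second[1]) < margin  # uncertainty-style query rule
        return best[0], ambiguous

A fuller implementation along the lines of the paper would replace the single-Gaussian models with mixture models adapted from a universal background model and add a likelihood-ratio test against that background model to detect novel classes.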
dc.identifier.uri http://repositorio.inesctec.pt/handle/123456789/5967
dc.identifier.uri http://dx.doi.org/10.1007/s10994-015-5515-y en
dc.language eng en
dc.relation 3889 en
dc.relation 4357 en
dc.relation 5457 en
dc.rights info:eu-repo/semantics/openAccess en
dc.title Learning from evolving video streams in a multi-camera scenario en
dc.type article en
dc.type Publication en
Files
Original bundle
Name: P-00G-FRW.pdf
Size: 2.22 MB
Format: Adobe Portable Document Format