Centres
Browsing Centres by Author "243"
BMOG: boosted Gaussian Mixture Model with controlled complexity for background subtraction (2018)
Alba Castro, J. L.; Pedro Miguel Carvalho; Martins, I.; Luís Corte Real

Boosting color similarity decisions using the CIEDE2000_PF Metric (2022)
Américo José Pereira; Pedro Miguel Carvalho; Luís Corte Real

Efficient CIEDE2000-based Color Similarity Decision for Computer Vision (2019)
Luís Corte Real; Américo José Pereira; Pedro Miguel Carvalho; Coelho, G.

Efficient CIEDE2000-Based Color Similarity Decision for Computer Vision (2020)
Américo José Pereira; Pedro Miguel Carvalho; Luís Corte Real

Color and color differences are critical aspects in many image processing and computer vision applications. A paradigmatic example is object segmentation, where color distances can greatly influence the performance of the algorithms. Metrics for color difference have been proposed in the literature, including standards such as CIEDE2000, which quantifies the perceptual difference between two given colors. This standard has been recommended for industrial computer vision applications, but the benefits of its adoption have been impaired by the complexity of the formula. This paper proposes a new strategy that improves the usability of the CIEDE2000 metric when a maximum acceptable distance can be imposed. We argue that, for applications where a maximum value can be established above which colors are considered different, it is possible to reduce the number of metric computations by preemptively analyzing the color features. This methodology retains the benefits of the metric while overcoming its computational limitations, thus broadening the range of applications of CIEDE2000 in computer vision, in terms of both algorithms and computational resource requirements.

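A minimal sketch of this kind of early-exit decision, assuming scikit-image's deltaE_ciede2000 for the full formula, a hypothetical threshold MAX_DIST, and default weighting factors (kL = kC = kH = 1). The pre-check exploits the fact that the lightness weighting S_L is bounded (below about 1.75 for L* in [0, 100]), so the lightness difference alone yields a cheap lower bound on the distance; this illustrates the idea, not the paper's exact decision rule:

```python
import numpy as np
from skimage.color import deltaE_ciede2000  # full CIEDE2000 formula

# Hypothetical application threshold: color pairs farther apart than
# this are simply "different"; the exact distance is then irrelevant.
MAX_DIST = 10.0

# For L* in [0, 100], the CIEDE2000 weighting S_L never exceeds ~1.75,
# so |dL*| / 1.75 is a cheap lower bound on the full distance.
SL_MAX = 1.75

def colors_differ(lab1, lab2, max_dist=MAX_DIST):
    """Early-exit similarity decision for two CIELAB colors (sketch)."""
    if abs(lab1[0] - lab2[0]) / SL_MAX > max_dist:
        return True  # lightness alone already exceeds the threshold
    # Only pay for the full formula when the pre-check is inconclusive.
    return deltaE_ciede2000(np.asarray(lab1, float),
                            np.asarray(lab2, float)) > max_dist
```

In a segmentation loop over millions of pixel pairs, the branch above lets many comparisons skip the trigonometry-heavy CIEDE2000 computation entirely.
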
Face Detection in Thermal Images with YOLOv3 (2019)
Silva, G.; Monteiro, R.; Ferreira, A.; Pedro Miguel Carvalho; Luís Corte Real

From a Visual Scene to a Virtual Representation: A Cross-Domain Review (2023)
Pedro Miguel Carvalho; Paula Viana; Nuno Alexandre Pereira; Américo José Pereira; Luís Corte Real

The widespread use of smartphones and other low-cost recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities have made visual data a must in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions able to analyze and semantically describe the information in a visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or on specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information passing between cross-cutting algorithms. This paper analyses the problem from an end-to-end perspective, i.e., from the visual scene analysis to the representation of the information in a virtual environment, including how the extracted data can be described and stored. A simple processing pipeline is introduced to structure the discussion of challenges and opportunities in the different steps of the entire process, allowing current gaps in the literature to be identified. The work reviews various technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, along with an extensive analysis of datasets for relevant tasks.

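A minimal sketch of the kind of end-to-end pipeline such a review discusses; the stage names (analysis, description/storage, synthesis) and the schema are hypothetical placeholders for the cross-cutting components the paper surveys, not its concrete proposal:

```python
from dataclasses import dataclass, field

@dataclass
class SceneDescription:
    """Semantic description extracted from a visual scene (hypothetical schema)."""
    objects: list = field(default_factory=list)    # detected entities
    relations: list = field(default_factory=list)  # spatial/semantic relations

def analyze_scene(frame) -> SceneDescription:
    """Stage 1: detection, segmentation, tracking (placeholder)."""
    return SceneDescription(objects=["person"])

def store(description: SceneDescription) -> SceneDescription:
    """Stage 2: serialize/store the description so it can be repurposed."""
    return description

def synthesize(description: SceneDescription) -> dict:
    """Stage 3: instantiate the description in a virtual environment."""
    return {"avatars": description.objects}

virtual_scene = synthesize(store(analyze_scene(frame=None)))
```
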
Stereo vision system for human motion analysis in a rehabilitation context (2019)
Matos, A. C.; Teresa Cristina Terroso; Luís Corte Real; Pedro Miguel Carvalho

Current demographic trends point to an increase in the aged population and in chronic diseases whose symptoms can be alleviated through rehabilitation. The applicability of passive 3D reconstruction to motion tracking in a rehabilitation context was explored using a stereo camera. The camera was used to acquire depth and color information, from which the 3D positions of predefined joints were recovered based on kinematic relationships, anthropometrically feasible lengths, and temporal consistency. Finally, a set of quantitative measures was extracted to evaluate the performed rehabilitation exercises. A validation study using data provided by a marker-based system as ground truth revealed that our proposal achieved errors within the range of state-of-the-art active markerless systems and of visual evaluations done by physical therapists. The obtained results are promising and demonstrate that the developed methodology allows the analysis of human motion for rehabilitation purposes. © 2018 Informa UK Limited, trading as Taylor & Francis Group.

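A minimal sketch of two of the constraints mentioned, applied to one recovered joint: clamping the bone length to an anthropometrically feasible range, then enforcing temporal consistency against the previous frame. The length limits and smoothing factor are hypothetical, not the paper's values:

```python
import numpy as np

def constrain_joint(parent, joint, prev_joint,
                    min_len=0.25, max_len=0.35, alpha=0.7):
    """Clamp a child joint to a feasible bone length from its parent,
    then smooth it temporally against the previous frame (sketch)."""
    bone = joint - parent
    length = np.linalg.norm(bone)
    if length > 0:
        # Anthropometric constraint: keep bone length in a feasible range.
        clamped = np.clip(length, min_len, max_len)
        joint = parent + bone * (clamped / length)
    # Temporal consistency: exponential smoothing with the previous estimate.
    return alpha * joint + (1 - alpha) * prev_joint

# Example: shoulder -> elbow, positions in meters
shoulder = np.array([0.0, 1.4, 2.0])
elbow_raw = np.array([0.1, 1.0, 2.0])     # noisy stereo estimate
elbow_prev = np.array([0.08, 1.12, 2.0])  # previous frame
elbow = constrain_joint(shoulder, elbow_raw, elbow_prev)
```
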
Synthesizing Human Activity for Data Generation (2023)
Américo José Pereira; Pedro Miguel Carvalho; Luís Corte Real

Gathering sufficiently representative data, such as data about human actions, shapes, and facial expressions, is costly and time-consuming, yet such data are required to train robust models. This has led to techniques such as transfer learning and data augmentation; however, these are often insufficient. To address this, we propose a semi-automated mechanism for generating and editing visual scenes in which synthetic humans perform various actions, with features such as background modification and manual adjustment of the 3D avatars that allow users to create data with greater variability. We also propose a two-fold methodology for evaluating the results obtained with our method: (i) running an action classifier on the output data produced by the mechanism and (ii) generating masks of the avatars and the actors and comparing them through segmentation. The avatars were robust to occlusion, and their actions were recognizable and faithful to their respective input actors. The results also showed that, even though the action classifier concentrates on the pose and movement of the synthetic humans, it depends strongly on contextual information to recognize actions precisely. Generating avatars for complex activities also proved problematic, both for action recognition and for the clean and precise formation of the masks.

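A minimal sketch of the mask-based half of such an evaluation: comparing the segmentation mask of a synthetic avatar against that of the original actor via intersection over union. The IoU metric is an assumption here; the paper's exact comparison protocol may differ:

```python
import numpy as np

def mask_iou(avatar_mask: np.ndarray, actor_mask: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    a, b = avatar_mask.astype(bool), actor_mask.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(a, b).sum() / union

# Example with toy 4x4 masks
avatar = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
actor  = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(f"IoU = {mask_iou(avatar, actor):.2f}")  # 0.75
```
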
Texture collinearity foreground segmentation for night videos (2020)
Martins, I.; Pedro Miguel Carvalho; Luís Corte Real; Alba Castro, J. L.

One of the most difficult scenarios for unsupervised segmentation of moving objects is found in nighttime videos, where the main challenges are poor illumination resulting in low visibility of objects, very strong lights, surface-reflected light, large variance in light intensity, sudden illumination changes, hard shadows, camouflaged objects, and noise. This paper proposes a novel method, coined COLBMOG (COLlinearity Boosted MOG), devised specifically for foreground segmentation in nighttime videos, that overcomes some of the limitations of state-of-the-art methods while still performing well in daytime scenarios. It is a texture-based classification method using local texture modeling, complemented by a color-based classification method. The local texture at a pixel neighborhood is modeled as an N-dimensional vector. For a given pixel, the classification is based on the collinearity between this feature in the input frame and in the reference background frame. For this purpose, a multimodal temporal model of the collinearity between texture vectors of background pixels is maintained. COLBMOG was objectively evaluated on the Night Videos category of the ChangeDetection.net (CDnet) 2014 benchmark, where it ranks first among all unsupervised methods. A detailed analysis of the results revealed the superior performance of the proposed method compared to the best-performing state-of-the-art methods in this category, particularly evident in the most complex situations, where all algorithms tend to fail. © 2020 Elsevier Inc.

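A minimal sketch of the collinearity test at the core of such an approach: the local texture around a pixel is flattened into an N-dimensional vector and compared with the corresponding background vector through the cosine of their angle. The neighborhood size and threshold are hypothetical, and the paper's multimodal temporal model is omitted:

```python
import numpy as np

def texture_vector(frame: np.ndarray, y: int, x: int, r: int = 1) -> np.ndarray:
    """Flatten the (2r+1)x(2r+1) neighborhood of (y, x) into an N-dim vector."""
    return frame[y - r:y + r + 1, x - r:x + r + 1].astype(float).ravel()

def is_foreground(frame, background, y, x, thresh=0.98, eps=1e-8):
    """Classify a pixel as foreground when its texture vector is no longer
    collinear with the background texture vector (cosine similarity test)."""
    v = texture_vector(frame, y, x)
    b = texture_vector(background, y, x)
    cos = np.dot(v, b) / (np.linalg.norm(v) * np.linalg.norm(b) + eps)
    return cos < thresh  # low collinearity -> likely a moving object
```

Collinearity, unlike a raw intensity difference, is insensitive to uniform scaling of the neighborhood, which is one reason texture direction is attractive under the strong illumination changes of night scenes.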