From a Visual Scene to a Virtual Representation: A Cross-Domain Review
Date
  2023
Authors
  Pedro Miguel Carvalho
  Paula Viana
  Nuno Alexandre Pereira
  Américo José Pereira
  Luís Corte Real
Abstract
    
    
        The widespread use of smartphones and other low-cost recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities have made visual data essential in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions capable of analyzing and semantically describing the information in a visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or on specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information to flow between cross-cutting algorithms. This paper analyses the problem from an end-to-end perspective, i.e., from visual scene analysis to the representation of the extracted information in a virtual environment, including how that information can be described and stored. A simple processing pipeline is introduced to provide a structure for discussing challenges and opportunities at each step of the process, making it possible to identify current gaps in the literature. The work reviews a range of technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, and provides an extensive analysis of datasets for the relevant tasks.
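
        The abstract describes the pipeline only at a conceptual level (scene analysis, semantic description and storage, virtual representation). The sketch below is a minimal, hypothetical Python illustration of such a three-stage structure; the SceneObject class and the analyze_scene, describe_scene, and synthesize_virtual_scene functions are assumptions introduced here for illustration only and do not reflect the paper's actual design.

        from dataclasses import dataclass, field, asdict
        from typing import Any, Dict, List
        import json

        @dataclass
        class SceneObject:
            """Semantic description of one element detected in the visual scene."""
            label: str                                   # e.g. "person", "ball"
            bbox: List[float]                            # [x, y, width, height] in image coordinates
            attributes: Dict[str, Any] = field(default_factory=dict)

        def analyze_scene(frame) -> List[SceneObject]:
            """Stage 1 (hypothetical): extract objects from a frame.
            A real system would plug in detection, tracking, or pose estimation here."""
            return [SceneObject(label="person", bbox=[0.1, 0.2, 0.3, 0.6])]

        def describe_scene(objects: List[SceneObject]) -> str:
            """Stage 2 (hypothetical): serialize the extracted data so it can be
            stored, queried, and later repurposed."""
            return json.dumps([asdict(o) for o in objects])

        def synthesize_virtual_scene(description: str) -> List[Dict[str, Any]]:
            """Stage 3 (hypothetical): rebuild a virtual representation from the
            stored description, e.g. by placing proxies in a 3D engine."""
            return json.loads(description)

        if __name__ == "__main__":
            frame = None  # placeholder for an actual image or video frame
            description = describe_scene(analyze_scene(frame))
            print(synthesize_virtual_scene(description))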