CTM
CTM operates in key areas of modern communications networks and services, in particular network architectures, telecommunications services, signal and image processing, microelectronics, digital TV and multimedia.
Browsing CTM by Author "400"
Content Adaptation Decision to Enhance the Access to Networked Multimedia Content (2006). Maria Teresa Andrade; Pedro Souto; Pedro Miguel Carvalho; Lucian Ciobanu.
Context-aware content adaptation: a systems approach (2006). Maria Teresa Andrade; Hélder Fernandes Castro; Pedro Miguel Carvalho; Pedro Souto; P. Bretillon; B. Feiten.
Improving Audiovisual Content Annotation Through a Semi-automated Process Based on Deep Learning (2018). Paula Viana; Maria Teresa Andrade; Pedro Miguel Carvalho; Vilaça, L.
Over the last years, Deep Learning has become one of the most popular research fields of Artificial Intelligence. Several approaches have been developed to address conventional challenges of AI. In computer vision, these methods provide the means to solve tasks like image classification, object identification and extraction of features. In this paper, some approaches to face detection and recognition are presented and analyzed in order to identify the one with the best performance. The main objective is to automate the annotation of a large dataset and to avoid the costly and time-consuming process of manual content annotation. The approach follows the concept of incremental learning, and an R-CNN model was implemented. Tests were conducted with the objective of detecting and recognizing one personality within image and video content. Results coming from this initial automatic process are then made available to an auxiliary tool that enables further validation of the annotations prior to uploading them to the archive. Tests show that, even with a small dataset, the results obtained are satisfactory. © 2020, Springer Nature Switzerland AG.
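The semi-automated annotation loop described in this abstract can be pictured with a short, hedged sketch: a pre-trained torchvision Faster R-CNN stands in for the face-recognition R-CNN the paper trains (which is not reproduced here), proposing person bounding boxes that are dumped as JSON candidates for the validation tool mentioned above. The model choice, score threshold and file name are assumptions for illustration only, and a recent torchvision (0.13 or later) is assumed for the weights argument.

```python
# Illustrative sketch only: a generic pre-trained detector proposes candidate
# boxes; in the paper an incrementally trained R-CNN recognises one personality.
import json
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

PERSON_CLASS_ID = 1  # index of "person" in the COCO label map used by torchvision


def propose_annotations(image_path: str, score_threshold: float = 0.8) -> list[dict]:
    """Return candidate bounding boxes for one image, to be validated by a human."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    candidates = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if label.item() == PERSON_CLASS_ID and score.item() >= score_threshold:
            candidates.append({"bbox": [round(v, 1) for v in box.tolist()],
                               "score": round(score.item(), 3)})
    return candidates


if __name__ == "__main__":
    # Hypothetical file name; the JSON would feed the annotation-validation tool.
    print(json.dumps(propose_annotations("frame_0001.jpg"), indent=2))
```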
Multimedia Terminal Architecture: An Inter-Operable Approach (2008). Beilu Shao; Marco Mattavelli; Maria Teresa Andrade; Samuel Keller; Pedro Miguel Carvalho.
A multimedia terminal for adaptation and end-to-end QoS control (2008). Beilu Shao; Marco Mattavelli; Daniele Renzi; Maria Teresa Andrade; Pedro Miguel Carvalho.
Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content (2022). Paula Viana; Maria Teresa Andrade; Pedro Miguel Carvalho; Luís Miguel Salgado; Inês Filipa Teixeira; Tiago André Costa; Jonker, P.
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities is yet to be explored by bringing the use of ML into the multimedia creative process, allowing the knowledge inferred by the former to automatically influence how new multimedia content is created. The work presented in this article provides contributions in three distinct ways towards this goal: firstly, it proposes a methodology to re-train popular neural network models to identify new thematic concepts in static visual content and attach meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects to a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and the generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow that offers the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches of creating random movements, by implementing an intelligent content- and context-aware video.
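The final step described in this abstract, turning an annotated still into a short clip, can be illustrated with a minimal pan-and-zoom sketch: the crop window is interpolated from the full photo towards one annotated region of interest and the frames are written out with OpenCV. This is not the paper's effect library or story-generation workflow; the region of interest, clip duration and file names are assumptions for illustration.

```python
# Illustrative sketch only: a simple zoom towards one region of interest,
# standing in for the semantically driven effects described in the paper.
import cv2


def photo_to_clip(image_path: str, roi: tuple[int, int, int, int],
                  out_path: str = "clip.mp4", seconds: float = 4.0, fps: int = 25) -> None:
    """Render a short clip that zooms from the full photo towards the ROI (x, y, w, h)."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    height, width = image.shape[:2]
    x, y, w, h = roi
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    total = int(seconds * fps)
    for i in range(total):
        t = i / max(total - 1, 1)  # 0 -> full frame, 1 -> region of interest
        # Linearly interpolate the crop window between the whole photo and the ROI.
        cx0, cy0 = int(t * x), int(t * y)
        cx1 = int((1 - t) * width + t * (x + w))
        cy1 = int((1 - t) * height + t * (y + h))
        crop = image[cy0:cy1, cx0:cx1]
        writer.write(cv2.resize(crop, (width, height)))
    writer.release()


if __name__ == "__main__":
    # Hypothetical inputs: in the paper the ROI would come from the ML annotation stage.
    photo_to_clip("photo.jpg", roi=(420, 180, 300, 300))
```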
Semantic Storytelling Automation: A Context-Aware and Metadata-Driven Approach (2020). Paula Viana; Pedro Miguel Carvalho; Maria Teresa Andrade; Jonker, P. P.; Papanikolaou, V.; Teixeira, I. N.; Vilaça, L.; Pinto, J. P.; Tiago André Costa.