CRIIS
This service develops advanced solutions in automation and industrial robotics, including manipulators and mobile robots, and promotes the integration of intelligent control systems and sensing.
Browse
Browsing CRIIS by Author "5152"
Item: Applying Software Static Analysis to ROS: The Case Study of the FASTEN European Project (2019)
Neto, T.; Germano Veiga; André Filipe Santos; Armando Sousa; Rafael Lírio Arrais; 5674; 5152; 6414; 6551
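This entry concerns applying off-the-shelf static analyzers to ROS code. As a hedged illustration only (the paper's actual toolchain is not detailed here), the sketch below runs cppcheck over a hypothetical ROS package source tree and counts the reported issues; the package path is an assumption.

```python
# Minimal sketch: run the cppcheck static analyzer over a ROS package's C++ sources.
# The package path is hypothetical; cppcheck must be installed and on the PATH.
import subprocess
import sys

def analyze_ros_package(src_dir: str) -> int:
    """Run cppcheck on src_dir and return the number of reported finding lines."""
    result = subprocess.run(
        ["cppcheck", "--enable=warning,style,performance", "--quiet", src_dir],
        capture_output=True,
        text=True,
    )
    # cppcheck writes its findings to stderr by default.
    findings = [line for line in result.stderr.splitlines() if line.strip()]
    for line in findings:
        print(line)
    return len(findings)

if __name__ == "__main__":
    src = sys.argv[1] if len(sys.argv) > 1 else "catkin_ws/src/my_ros_package"
    issues = analyze_ros_package(src)
    print(f"{issues} potential issue(s) reported by cppcheck")
```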
Item: Automatic generation of disassembly sequences and exploded views from SolidWorks symbolic geometric relationships (2018)
Carlos Miguel Costa; Thomas, U.; Cardoso, H. L.; Oliveira, E. C.; Luís Freitas Rocha; Armando Sousa; Germano Veiga; 6164; 5152; 5364; 5674
Item: Bringing Semantics to the Vineyard: An Approach on Deep Learning-Based Vine Trunk Detection (2021)
André Silva Aguiar; Monteiro, N. N.; Filipe Neves Santos; Eduardo Pires; Daniel Queirós Silva; Armando Sousa; José Boaventura; 5152; 5552; 5773; 5777; 7844; 8276

The development of robotic solutions in unstructured environments brings several challenges, mainly in developing safe and reliable navigation solutions. Agricultural environments are particularly unstructured and, therefore, challenging for the implementation of robotics. An example is mountain vineyards, built on steep-slope hills, which are characterized by satellite signal blockage, terrain irregularities, harsh ground inclinations, and other factors. All of these impose the implementation of precise and reliable navigation algorithms so that robots can operate safely. This work proposes the detection of semantic natural landmarks to be used in Simultaneous Localization and Mapping algorithms. Thus, Deep Learning models were trained and deployed to detect vine trunks. As significant contributions, we made available a novel vine trunk dataset, called VineSet, consisting of more than 9000 images and the respective annotations for each trunk. VineSet was used to train state-of-the-art Single Shot Multibox Detector models. Additionally, we deployed these models in an Edge-AI fashion and achieved high frame-rate execution. Finally, an assisted annotation tool was proposed to make the dataset-building process easier and to improve the models incrementally. The experiments show that our trained models can detect trunks with an Average Precision of up to 84.16%, and our assisted annotation tool facilitates the annotation process, even in other areas of agriculture, such as orchards and forests. Additional experiments evaluated the impact of the amount of training data and compared Transfer Learning with training from scratch. In these cases, some theoretical assumptions were verified.
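Deploying an SSD detector on an edge device typically means running a converted (e.g., TFLite) model on camera frames. The sketch below is a generic illustration under that assumption; the model filename, image path, and output-tensor ordering are hypothetical and are not artifacts from the paper.

```python
# Minimal sketch: run a TFLite SSD detector (e.g., a vine trunk model) on one image.
# "trunk_detector.tflite" and "vineyard.jpg" are placeholders; the output-tensor ordering
# assumed here (boxes at index 0, scores at index 2) is the usual post-processed SSD layout
# but can differ between exports. Input normalization for float models is omitted.
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="trunk_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the image to the model's expected input shape (NHWC).
image = cv2.imread("vineyard.jpg")
_, height, width, _ = input_details[0]["shape"]
resized = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), (width, height))
input_tensor = np.expand_dims(resized, axis=0).astype(input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], input_tensor)
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]["index"])[0]   # normalized [ymin, xmin, ymax, xmax]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

for box, score in zip(boxes, scores):
    if score > 0.5:
        print(f"trunk candidate, confidence {score:.2f}, box {box}")
```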
Item: Collaborative Welding System using BIM for Robotic Reprogramming and Spatial Augmented Reality (2019)
Carlos Miguel Costa; Luís Freitas Rocha; Malaca, P.; Pedro Gomes Costa; António Paulo Moreira; Tavares, P.; Armando Sousa; Germano Veiga; 6164; 5152; 5157; 5159; 5364; 5674

The optimization of the information flow from the initial design through the several production stages plays a critical role in ensuring product quality while also reducing manufacturing costs. As such, in this article we present a cooperative welding cell for structural steel fabrication that is capable of leveraging the Building Information Modeling (BIM) standards to automatically orchestrate the tasks to be allocated to a human operator and to a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment to help the operator tack weld the beam attachments that will later be seam welded by the industrial robot. In this way, we ensure maximum flexibility during the beam assembly stage while also improving overall productivity and product quality, since the operator no longer needs to rely on error-prone measurement procedures and receives tasks through an immersive interface, relieving them from the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell, because all the necessary information is extracted from the Industry Foundation Classes (IFC), namely the CAD models and welding sections, allowing our 3D beam perception systems to correct placement errors or beam bending. Coupled with our motion planning and welding pose optimization system, this ensures that the robot performs its tasks without collisions and as efficiently as possible while maximizing welding quality. © 2019 Elsevier B.V.
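The cell's task orchestration relies on information extracted from IFC files. As a rough, hedged illustration of that kind of extraction (not the authors' actual pipeline), the snippet below uses the open-source ifcopenshell library to list the beams described in a hypothetical IFC model.

```python
# Minimal sketch: list beam entities from an IFC (Industry Foundation Classes) file
# using the open-source ifcopenshell library. "structure.ifc" is a hypothetical file;
# the welding-specific attributes used in the paper are not reproduced here.
import ifcopenshell

model = ifcopenshell.open("structure.ifc")

beams = model.by_type("IfcBeam")
print(f"found {len(beams)} beam(s)")
for beam in beams:
    # GlobalId and Name are standard IFC attributes on IfcBeam instances.
    print(beam.GlobalId, beam.Name)
```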
Item: A Comparative Analysis for 2D Object Recognition: A Case Study with Tactode Puzzle-Like Tiles (2021)
Daniel Queirós Silva; Armando Sousa; Costa, V.; 5152; 8276

Object recognition represents the ability of a system to identify objects, humans or animals in images. Within this domain, this work presents a comparative analysis among different classification methods aiming at Tactode tile recognition. The covered methods include: (i) machine learning with HOG and SVM; (ii) deep learning with CNNs such as VGG16, VGG19, ResNet152, MobileNetV2, SSD and YOLOv4; (iii) matching of handcrafted features with SIFT, SURF, BRISK and ORB; and (iv) template matching. A dataset was created to train the learning-based methods (i and ii), while a template dataset was used for the other methods (iii and iv). To evaluate the performance of the recognition methods, two test datasets were built: tactode_small and tactode_big, which consisted of 288 and 12,000 images, holding 2784 and 96,000 regions of interest for classification, respectively. SSD and YOLOv4 were the worst methods for their domain, whereas ResNet152 and MobileNetV2 showed that they were strong recognition methods. SURF, ORB and BRISK demonstrated great recognition performance, while SIFT was the worst of this type of method. The methods based on template matching attained reasonable recognition results, falling behind most other methods. The top three methods of this study were: VGG16, with an accuracy of 99.96% and 99.95% for tactode_small and tactode_big, respectively; VGG19, with an accuracy of 99.96% and 99.68% for the same datasets; and HOG and SVM, which reached an accuracy of 99.93% for tactode_small and 99.86% for tactode_big, while presenting average execution times of 0.323 s and 0.232 s on the respective datasets, being the fastest method overall. This work demonstrated that VGG16 was the best choice for this case study, since it minimised the misclassifications for both test datasets.
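One of the compared approaches pairs HOG features with an SVM classifier. The sketch below shows that generic combination with scikit-image and scikit-learn on synthetic data; it is an assumption-laden illustration, not the paper's exact configuration (the tile images, HOG parameters, and kernel choice are placeholders).

```python
# Minimal sketch: HOG features + linear SVM for tile classification on synthetic data.
# Each toy "class" is a stripe pattern at a different orientation, so HOG can separate them;
# real Tactode tiles, HOG parameters and the kernel choice would differ.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, per_class, size = 4, 30, 64
ys, xs = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")

images, labels = [], []
for c in range(n_classes):
    angle = np.deg2rad(c * 180.0 / n_classes)
    pattern = np.sin((xs * np.cos(angle) + ys * np.sin(angle)) / 4.0)
    for _ in range(per_class):
        images.append(pattern + rng.normal(scale=0.2, size=(size, size)))
        labels.append(c)

# Extract a HOG descriptor for every image.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

X_train, X_test, y_train, y_test = train_test_split(
    features, np.array(labels), test_size=0.25, random_state=0
)
clf = SVC(kernel="linear").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```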
Item: Edge AI-Based Tree Trunk Detection for Forestry Monitoring Robotics (2022)
Daniel Queirós Silva; Filipe Neves Santos; Vitor Manuel Filipe; Armando Sousa; Paulo Moura Oliveira; 5152; 5761; 8276; 5843; 5552

Object identification, such as tree trunk detection, is fundamental for forest robotics. Intelligent vision systems are of paramount importance in order to improve robotic perception, thus enhancing the autonomy of forest robots. To that purpose, this paper presents three contributions: an open dataset of 5325 annotated forest images; an Edge AI tree trunk detection benchmark of 13 deep learning models evaluated on four edge devices (CPU, TPU, GPU and VPU); and a tree trunk mapping experiment using an OAK-D as the sensing device. The results showed that YOLOR was the most reliable trunk detector, achieving a maximum F1 score of around 90% while maintaining high scores for different confidence levels; in terms of inference time, YOLOv4 Tiny was the fastest model, attaining 1.93 ms on the GPU. YOLOv7 Tiny presented the best trade-off between detection accuracy and speed, with average inference times under 4 ms on the GPU across different input resolutions while achieving an F1 score similar to YOLOR's. This work will enable the development of advanced artificial vision systems for robotics in forestry monitoring operations.
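Detector quality in this benchmark is summarized with the F1 score at different confidence levels. The short helper below computes precision, recall and F1 from per-detection confidences and correctness flags; the threshold values and toy inputs are illustrative only.

```python
# Minimal sketch: F1 score of a detector as a function of the confidence threshold.
# Each detection is (confidence, is_true_positive); n_ground_truth is the number of annotated trunks.
from typing import List, Tuple

def f1_at_threshold(detections: List[Tuple[float, bool]], n_ground_truth: int, threshold: float) -> float:
    kept = [tp for conf, tp in detections if conf >= threshold]
    tp = sum(kept)
    fp = len(kept) - tp
    fn = n_ground_truth - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: five detections scored against four annotated trunks.
dets = [(0.95, True), (0.90, True), (0.70, False), (0.60, True), (0.30, False)]
for t in (0.25, 0.50, 0.75):
    print(f"confidence >= {t:.2f}: F1 = {f1_at_threshold(dets, n_ground_truth=4, threshold=t):.2f}")
```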
Item: Measuring Canopy Geometric Structure Using Optical Sensors Mounted on Terrestrial Vehicles: A Case Study in Vineyards (2021)
Daniel Queirós Silva; André Silva Aguiar; Filipe Neves Santos; Armando Sousa; Rabino, D.; Biddoccu, M.; Bagagiolo, G.; Delmastro, M.; 5552; 7844; 8276; 5152

Smart and precision agriculture concepts require that the farmer measures all relevant variables continuously and processes this information in order to build better prescription maps and to predict crop yield. These maps feed machinery with variable-rate technology to apply the correct amount of product at the right time and place, improving farm profitability. One of the most relevant variables for estimating farm yield is the Leaf Area Index. Traditionally, this index can be obtained from manual measurements or from aerial imagery: the former is time-consuming and the latter requires the use of drones or aerial services. This work presents an optical sensing-based hardware module that can be attached to existing autonomous or guided terrestrial vehicles. During normal operation, the module periodically collects geo-referenced monocular images and laser data. From these data, a suggested processing pipeline, based on open-source software and composed of Structure from Motion, Multi-View Stereo and point cloud registration stages, can extract the Leaf Area Index and other crop-related features. Additionally, a benchmark of software tools is presented. The hardware module and pipeline were validated with real data acquired in two vineyards, one in Portugal and one in Italy. A dataset with sensory data collected by the module was made publicly available. Results demonstrated that the system provides reliable and precise data on the surrounding environment and that the pipeline can compute volume and occupancy area from the acquired data.
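The suggested pipeline ends with point cloud registration of the reconstructions. As a hedged, generic illustration (not the authors' exact tooling or parameters), the snippet below aligns two synthetic point clouds with ICP using the open-source Open3D library.

```python
# Minimal sketch: rigid ICP registration of two point clouds with the open-source Open3D library.
# Synthetic data stands in for the Structure-from-Motion / Multi-View Stereo outputs;
# the 0.1 maximum correspondence distance is an illustrative parameter.
import numpy as np
import open3d as o3d

points = np.random.rand(2000, 3)
source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points + np.array([0.05, 0.02, 0.0])))

# Point-to-point ICP with the default identity initialization.
result = o3d.pipelines.registration.registration_icp(source, target, 0.1)
print("fitness:", result.fitness)
print("estimated transformation:\n", result.transformation)
```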
Item: Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations (2019)
Rui Pedro Rodrigues; Thomas, U.; Carlos Miguel Costa; Germano Veiga; Armando Sousa; Luís Freitas Rocha; Augusto Sousa; 5674; 5364; 5152; 5579; 197; 6164

Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback using a bare-hands natural-interaction interface projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading the traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches that were proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects, for achieving precise overlap between the 3D rendered content and the physical world. The conceptualization of the information flow and how it can be conveyed on demand was also of critical importance for ensuring a smooth and intuitive experience for the operator. © 2019 IEEE.
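Modeling a projector in OpenGL amounts to turning its pinhole intrinsics into a 4x4 projection matrix. The function below shows one common form of that mapping with NumPy; the sign convention for the principal-point terms depends on the chosen image-origin convention, and the numbers used are placeholders rather than calibration values from the paper.

```python
# Minimal sketch: build an OpenGL-style projection matrix from pinhole intrinsics.
# fx, fy, cx, cy are in pixels; width/height is the projector resolution; near/far are clip planes.
# One common convention (image origin at the top-left); signs may need flipping for other conventions.
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    return np.array([
        [2.0 * fx / width, 0.0, 1.0 - 2.0 * cx / width, 0.0],
        [0.0, 2.0 * fy / height, 2.0 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Placeholder intrinsics for a 1920x1080 projector.
P = projection_from_intrinsics(fx=2200.0, fy=2200.0, cx=960.0, cy=540.0,
                               width=1920, height=1080, near=0.1, far=10.0)
print(P)
```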
Item: Perception of Entangled Tubes for Automated Bin Picking (2019)
Carlos Miguel Costa; Germano Veiga; Armando Sousa; Leão, G.; 5674; 6164; 5152
Item: Unimodal and Multimodal Perception for Forest Management: Review and Dataset (2021)
Daniel Queirós Silva; Filipe Neves Santos; Armando Sousa; Vitor Manuel Filipe; José Boaventura; 5152; 5552; 5773; 8276; 5843

Robotic navigation and perception for forest management are challenging due to the many obstacles to detect and avoid and to sharp illumination changes. Advanced perception systems are needed because they can enable the development of robotic and machinery solutions for smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.
Item: Visible and Thermal Image-Based Trunk Detection with Deep Learning for Forestry Mobile Robotics (2021)
Daniel Queirós Silva; Filipe Neves Santos; Armando Sousa; Vitor Manuel Filipe; 5152; 5552; 8276; 5843

Mobile robotics in forests is currently a highly important topic due to the recurring occurrence of forest wildfires. Thus, on-site management of forest inventory and biomass is required. To tackle this issue, this work presents a study on ground-level detection of forest tree trunks in visible and thermal images using deep learning-based object detection methods. For this purpose, a forestry dataset composed of 2895 images was built and made publicly available. Using this dataset, five models were trained and benchmarked to detect the tree trunks. The selected models were SSD MobileNetV2, SSD Inception-v2, SSD ResNet50, SSDLite MobileDet and YOLOv4 Tiny. Promising results were obtained; for instance, YOLOv4 Tiny was the best model, achieving the highest AP (90%) and F1 score (89%). The inference time of these models was also evaluated on CPU and GPU. The results showed that YOLOv4 Tiny was the fastest detector when running on the GPU (8 ms). This work will enhance the development of vision perception systems for smarter forestry robots.
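A YOLOv4 Tiny model of this kind can be exercised with nothing more than OpenCV's DNN module once Darknet weights are available. The snippet below is a generic, hedged sketch; the cfg/weights/image filenames are placeholders, not artifacts released with the paper.

```python
# Minimal sketch: YOLOv4 Tiny inference with OpenCV's DNN module on a single image.
# The cfg/weights/image paths are hypothetical placeholders.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-tiny-trunks.cfg", "yolov4-tiny-trunks.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255.0, swapRB=True)

image = cv2.imread("forest_frame.jpg")
class_ids, scores, boxes = model.detect(image, confThreshold=0.25, nmsThreshold=0.45)

for class_id, score, box in zip(class_ids, scores, boxes):
    x, y, w, h = box
    print(f"trunk detection: confidence={float(score):.2f}, box=({x}, {y}, {w}, {h})")
```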