CRIIS
This service develops advanced solutions in automation and industrial robotics, including manipulators and mobile robots, and promotes the integration of intelligent control systems and sensing.
Browsing CRIIS by Author "5552"
-
Active Perception Fruit Harvesting Robots - A Systematic Review (2022)
Authors: Sandro Augusto Magalhães; António Paulo Moreira; Filipe Neves Santos; Dias, J. (author IDs: 5157, 7481, 5552)
-
Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato (2022)
Authors: Germano Filipe Moreira; Sandro Augusto Magalhães; Tatiana Martins Pinho; Filipe Neves Santos; Mário Cunha (author IDs: 5983, 7332, 7481, 8764, 5552)
The harvesting operation is a recurring task in the production of any crop, making it an excellent candidate for automation. In protected horticulture, one of the crops with high added value is the tomato. However, its robotic harvesting is still far from maturity. That said, the development of an accurate fruit detection system is a crucial step towards achieving fully automated robotic harvesting. Deep Learning (DL) and detection frameworks like the Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO) are more robust and accurate alternatives, with a better response to highly complex scenarios. DL can easily be applied to detect tomatoes, but when their classification is intended the task becomes harder, demanding a huge amount of data. Therefore, this paper proposes the use of DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect tomatoes and compares those systems with a proposed histogram-based HSV colour space model that classifies each tomato and determines its ripening stage, using two acquired image datasets. Regarding detection, both models obtained promising results, with the YOLOv4 model standing out with an F1-Score of 85.81%. For the classification task, YOLOv4 was again the best model, with a Macro F1-Score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to the YOLOv4 model, with a Balanced Accuracy of 68.10%.
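Since the histogram-based HSV model is only summarized in the abstract, the following minimal sketch illustrates the general idea; the hue bands and the decision rule are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of a histogram-based HSV ripeness classifier.
# The hue bands below are illustrative assumptions, not the paper's values.
import cv2
import numpy as np

def classify_ripeness(bgr_crop: np.ndarray) -> str:
    """Classify a tomato crop as 'ripe' or 'unripe' from its hue histogram."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]  # OpenCV hue is in [0, 179]
    hist, _ = np.histogram(hue, bins=180, range=(0, 180))
    # Red hues wrap around 0 on OpenCV's hue scale (~0-10 and ~170-179).
    red_mass = hist[:10].sum() + hist[170:].sum()
    green_mass = hist[35:85].sum()  # hypothetical green band
    return "ripe" if red_mass > green_mass else "unripe"
```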
-
Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models (2023)
Authors: Sandro Augusto Magalhães; Filipe Neves Santos; Machado, P.; António Paulo Moreira; Dias, J. (author IDs: 5157, 7481, 5552)
-
Bringing Semantics to the Vineyard: An Approach on Deep Learning-Based Vine Trunk Detection (2021)
Authors: André Silva Aguiar; Monteiro, N.N.; Filipe Neves Santos; Eduardo Pires; Daniel Queirós Silva; Armando Sousa; José Boaventura (author IDs: 5152, 5552, 5773, 5777, 7844, 8276)
The development of robotic solutions in unstructured environments brings several challenges, mainly in developing safe and reliable navigation solutions. Agricultural environments are particularly unstructured and, therefore, challenging for the implementation of robotics. An example of this is mountain vineyards, built on steep-slope hills, which are characterised by satellite signal blockage, terrain irregularities, harsh ground inclinations, and other factors. All of these impose the implementation of precise and reliable navigation algorithms so that robots can operate safely. This work proposes the detection of semantic natural landmarks to be used in Simultaneous Localization and Mapping algorithms. Thus, Deep Learning models were trained and deployed to detect vine trunks. As a significant contribution, we made available a novel vine trunk dataset, called VineSet, comprising more than 9000 images and the respective annotations for each trunk. VineSet was used to train state-of-the-art Single Shot Multibox Detector models. Additionally, we deployed these models in an Edge-AI fashion and achieved high frame rate execution. Finally, an assisted annotation tool was proposed to make the process of dataset building easier and to improve the models incrementally. The experiments show that our trained models can detect trunks with an Average Precision of up to 84.16% and that our assisted annotation tool facilitates the annotation process, even in other areas of agriculture, such as orchards and forests. Additional experiments evaluated the impact of the amount of training data and compared Transfer Learning with training from scratch; in these cases, some theoretical assumptions were verified.
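The assisted-annotation idea (a trained detector proposes boxes that a human then corrects) can be sketched as below; `run_detector` and the Pascal VOC output format are assumptions made for illustration, not details confirmed by the abstract.

```python
# Sketch of detector-assisted pre-annotation: model proposals are saved
# as annotation files for a human annotator to review and correct.
import xml.etree.ElementTree as ET

def boxes_to_voc_xml(filename: str, boxes: list[tuple[int, int, int, int]]) -> ET.ElementTree:
    """Wrap detector proposals as Pascal VOC annotations for later review."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    for xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = "trunk"
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ET.ElementTree(root)

# proposals = run_detector(image)  # hypothetical model call
# boxes_to_voc_xml("vine_001.png", proposals).write("vine_001.xml")
```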
-
Computer Vision and Deep Learning as Tools for Leveraging Dynamic Phenological Classification in Vegetable Crops (2023)
Authors: Leandro Almeida Rodrigues; Magalhães, S.A.; Daniel Queirós Silva; Filipe Neves Santos; Mário Cunha (author IDs: 7332, 8276, 8763, 5552)
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be error-prone, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) to serve as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark pairs each model individually with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-Score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approaching a real mixed cropping system; hence the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crops' phenological research, a pivotal step towards automating decision support systems for precision horticulture.
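For reference, the reported macro F1-Score and balanced accuracy can be computed from per-class detection counts, as in this minimal sketch (the per-class counts are assumed to come from the evaluation step):

```python
# Macro F1 averages per-class F1 scores; balanced accuracy averages
# per-class recall. Inputs are per-class true/false positive/negative counts.
def macro_f1(tp: list[int], fp: list[int], fn: list[int]) -> float:
    f1s = []
    for t, p, n in zip(tp, fp, fn):
        prec = t / (t + p) if t + p else 0.0
        rec = t / (t + n) if t + n else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def balanced_accuracy(tp: list[int], fn: list[int]) -> float:
    recalls = [t / (t + n) if t + n else 0.0 for t, n in zip(tp, fn)]
    return sum(recalls) / len(recalls)
```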
-
Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions (2023)
Authors: Pinheiro, I.; Moreira, G.; Daniel Queirós Silva; Magalhães, S.; Valente, A.; Paulo Moura Oliveira; Mário Cunha; Filipe Neves Santos (author IDs: 5761, 7332, 8276, 5552)
The world wine sector is a multi-billion dollar industry with a wide range of economic activities. It is therefore crucial to monitor the grapevine, because doing so allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (a preventive approach), since the leaves are the first to manifest biophysical lesions. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect grape bunches and classify them as healthy or damaged according to the number of berries with biophysical lesions. Two datasets were created and made publicly available, with original images and manual annotations, to identify the complexity between the detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes: the Grapevine Bunch Detection Dataset uses the Bunch class, and the Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. The three models trained for grape bunch detection obtained promising results, with YOLOv7 standing out with an mAP of 77% and an F1-score of 94%. For the task of detecting grape bunches and identifying their condition, the three models obtained similar results, with YOLOv5 achieving the best ones: an mAP of 72% and an F1-score of 92%.
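One plausible way the two datasets can share the same images is a simple class remap; the sketch below assumes YOLO-format label files and the class indices shown, neither of which is confirmed by the abstract.

```python
# Collapse the condition classes into the single Bunch class.
# Assumed YOLO label format: "class_idx x_c y_c w h" per line.
from pathlib import Path

# Assumed indices: 0 = OptimalBunch, 1 = DamagedBunch -> 0 = Bunch
REMAP = {"0": "0", "1": "0"}

def collapse_labels(src: Path, dst: Path) -> None:
    lines = []
    for line in src.read_text().splitlines():
        cls, *coords = line.split()
        lines.append(" ".join([REMAP.get(cls, cls), *coords]))
    dst.write_text("\n".join(lines) + "\n")
```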
-
Edge AI-Based Tree Trunk Detection for Forestry Monitoring Robotics (2022)
Authors: Daniel Queirós Silva; Filipe Neves Santos; Vitor Manuel Filipe; Armando Sousa; Paulo Moura Oliveira (author IDs: 5152, 5761, 8276, 5843, 5552)
Object identification, such as tree trunk detection, is fundamental for forest robotics. Intelligent vision systems are of paramount importance for improving robotic perception, thus enhancing the autonomy of forest robots. To that purpose, this paper presents three contributions: an open dataset of 5325 annotated forest images; an Edge AI tree trunk detection benchmark comparing 13 deep learning models evaluated on four edge devices (CPU, TPU, GPU and VPU); and a tree trunk mapping experiment using an OAK-D as the sensing device. The results showed that YOLOR was the most reliable trunk detector, achieving a maximum F1 score of around 90% while maintaining high scores at different confidence levels; in terms of inference time, YOLOv4 Tiny was the fastest model, attaining 1.93 ms on the GPU. YOLOv7 Tiny presented the best trade-off between detection accuracy and speed, with average inference times under 4 ms on the GPU across different input resolutions, while achieving an F1 score similar to YOLOR's. This work will enable the development of advanced artificial vision systems for robotics in forestry monitoring operations.
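Inference-time benchmarks of this kind typically time repeated forward passes after a warm-up phase; a minimal sketch, with `model` and `sample` as hypothetical placeholders:

```python
# Minimal latency benchmarking loop: warm-up runs first, then the
# median over timed runs, reported in milliseconds.
import statistics
import time

def benchmark_ms(model, sample, warmup: int = 10, runs: int = 100) -> float:
    for _ in range(warmup):          # let caches/JIT/accelerator settle
        model(sample)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model(sample)
        times.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(times)
```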
-
End-Effectors for Harvesting Manipulators - State Of The Art Review (2022)
Authors: Oliveira, F.; Vítor Tinoco; Sandro Augusto Magalhães; Filipe Neves Santos; Manuel Santos Silva (author IDs: 8387, 5552, 5655, 7481)
-
Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse (2021)
Authors: Sandro Augusto Magalhães; Castro, L.; Guilherme Moreira Aresta; Filipe Neves Santos; Mário Cunha; Dias, J.; António Paulo Moreira (author IDs: 5157, 5552, 7481, 7332, 6321)
The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably in any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato at any stage of its life cycle (from flower to ripe tomato). The state of the art for visual tomato detection focuses mainly on ripe tomatoes, which have a colour distinct from the background. This paper contributes an annotated visual dataset of green and reddish tomatoes. This kind of dataset is uncommon and was not previously available for research purposes. It will enable further developments in edge artificial intelligence for the in situ, real-time visual tomato detection required for the development of harvesting robots. Considering this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Given our robotic platform's specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15, an mAP of 51.46 and an inference time of 16.44 ms on the NVIDIA Turing architecture platform, an NVIDIA Tesla T4 with 12 GB. YOLOv4 Tiny also had impressive results, mainly concerning inference times of about 5 ms.
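For reference, the F1-score reported throughout these benchmarks combines precision and recall in the usual way:

```latex
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
```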
-
Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models (2021)
Authors: André Silva Aguiar; Sandro Augusto Magalhães; Filipe Neves Santos; Castro, L.; Tatiana Martins Pinho; Valente, J.; Rui Costa Martins; José Boaventura (author IDs: 5552, 5983, 5773, 6905, 7844, 7481)
The agricultural sector plays a fundamental role in our society, and it is increasingly important to automate its processes, which can have beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards at different growth stages: the early stage, just after bloom, and the medium stage, where the grape bunches present intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel, publicly available dataset proposed in this work. This dataset contains 1929 images and the respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean Average Precision of up to 66.96%. Given the low resources involved (a low-cost, low-power hardware device that requires simplified models with 8-bit quantization), the obtained performance was satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than bunches present in the vineyard just after bloom, since the latter class represents smaller grape bunches, with colour and texture more similar to the surrounding foliage, which complicates their detection.
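The intersection over union (IoU) threshold varied in this characterization determines when a detection counts as matching a ground-truth box; a minimal sketch of the standard computation:

```python
# IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax).
# A detection is a true positive when IoU >= the chosen threshold.
def iou(a: tuple[float, float, float, float],
        b: tuple[float, float, float, float]) -> float:
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```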
-
Measuring Canopy Geometric Structure Using Optical Sensors Mounted on Terrestrial Vehicles: A Case Study in Vineyards (2021)
Authors: Daniel Queirós Silva; André Silva Aguiar; Filipe Neves Santos; Armando Sousa; Rabino, D.; Biddoccu, M.; Bagagiolo, G.; Delmastro, M. (author IDs: 5552, 7844, 8276, 5152)
Smart and precision agriculture concepts require the farmer to measure all relevant variables continuously and process this information in order to build better prescription maps and predict crop yield. These maps feed machinery with variable rate technology, which applies the correct amount of products at the right time and place to improve farm profitability. One of the most relevant pieces of information for estimating farm yield is the Leaf Area Index. Traditionally, this index can be obtained from manual measurements or from aerial imagery: the former is time-consuming and the latter requires the use of drones or aerial services. This work presents an optical sensing-based hardware module that can be attached to existing autonomous or guided terrestrial vehicles. During normal operation, the module collects periodic geo-referenced monocular images and laser data. From these data, a suggested processing pipeline, based on open-source software and composed of Structure from Motion, Multi-View Stereo and point cloud registration stages, can extract the Leaf Area Index and other crop-related features. Additionally, this work includes a benchmark of software tools. The hardware module and pipeline were validated on real data acquired in two vineyards, in Portugal and Italy. A dataset with sensory data collected by the module was made publicly available. Results demonstrated that the system provides reliable and precise data on the surrounding environment and that the pipeline is capable of computing volume and occupancy area from the acquired data.
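One simple way to derive volume and occupancy area from a registered point cloud is a voxel grid; the sketch below is an illustrative approach under that assumption, not necessarily the paper's exact method.

```python
# Voxel-grid estimate of canopy volume and ground-occupancy area
# from a registered point cloud.
import numpy as np

def voxel_volume_and_area(points: np.ndarray, voxel: float = 0.05) -> tuple[float, float]:
    """points: (N, 3) array in metres; returns (volume m^3, ground area m^2)."""
    idx = np.floor(points / voxel).astype(np.int64)
    occupied = np.unique(idx, axis=0)               # occupied 3D voxels
    footprint = np.unique(occupied[:, :2], axis=0)  # projection onto ground plane
    return len(occupied) * voxel**3, len(footprint) * voxel**2
```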
-
An overview of pruning and harvesting manipulators (2022)
Authors: Vítor Tinoco; Manuel Santos Silva; Filipe Neves Santos; António Valente; Luís Freitas Rocha; Sandro Augusto Magalhães; Luís Carlos Santos (author IDs: 5364, 5655, 5762, 7150, 7481, 8387, 5552)
Purpose: The motivation for robotics research in the agricultural field has been sparked by the increasing world population and decreasing agricultural labour availability. This paper aims to analyse the state of the art of pruning and harvesting manipulators used in agriculture.
Design/methodology/approach: A search was performed for papers that corresponded to specific keywords. Ten papers were selected based on a set of attributes that made them adequate for review.
Findings: The pruning manipulators were used in two different scenarios: grapevines and apple trees. These manipulators showed that a light-controlled environment can reduce visual errors and that prismatic joints on the manipulator are advantageous for obtaining a higher reach. The harvesting manipulators were used for three types of fruit: strawberries, tomatoes and apples. These manipulators revealed that different kinematic configurations are required for different kinds of end-effectors, as some of these tools only require movement along the horizontal axis while others must reach the target with a broad range of orientations.
Originality/value: This work serves to reduce the gap in the literature regarding agricultural manipulators and will support the development of novel solutions related to agricultural robotic grasping and manipulation.
-
Path Planning Algorithms Benchmarking for Grapevines Pruning and Monitoring (2019)
Authors: Sandro Augusto Magalhães; Filipe Neves Santos; Rui Costa Martins; Luís Freitas Rocha; Brito, J. (author IDs: 5552, 5364, 7481, 6905)
-
A Review of Pruning and Harvesting Manipulators (2021)
Authors: Vítor Tinoco; Manuel Santos Silva; Filipe Neves Santos; Luís Freitas Rocha; Sandro Augusto Magalhães; Luís Carlos Santos (author IDs: 7481, 5552, 5364, 7150, 5655, 8387)
-
SCARA Self Posture Recognition Using a Monocular Camera (2022)
Authors: Vítor Tinoco; Manuel Santos Silva; Filipe Neves Santos; Morais, R.; Vitor Manuel Filipe (author IDs: 5655, 8387, 5843, 5552)
-
Smartphone Applications Targeting Precision Agriculture Practices—A Systematic Review (2020)
Authors: Mendes, J.; Morais, R.; Mário Cunha; José Boaventura; Emanuel Peres Correia; Sousa, J.J.; Filipe Neves Santos; Pinho, T.M. (author IDs: 7332, 5552, 5653, 5773)
Traditionally, farmers have used their perceptual systems to diagnose and monitor the health and needs of their crops. However, humans possess five basic perceptual systems whose accuracy varies from person to person, depending largely on stress, experience, health and age. To overcome this problem, in the last decade, with the emergence of smartphone technology, new agronomic applications were developed to provide better, cost-effective, more accurate and portable diagnosis systems. Conventional smartphones are equipped with several sensors that can support near real-time routine and advanced farming activities at very low cost. Therefore, the development of agricultural applications based on smartphone devices has increased exponentially in recent years. However, the great potential offered by smartphone applications is yet to be fully realized. Thus, this paper presents a literature review and an analysis of the characteristics of several mobile applications for smart/precision agriculture, either available on the market or developed at the research level. This will give farmers an overview of the types of applications that exist, the features they provide and a comparison between them. This paper is also an important resource to help researchers and application developers understand the limitations of existing tools and where new contributions can be made.
-
A survey on localization, mapping, and trajectory planning for quadruped robots in vineyards (2022)
Authors: Ferreira, J.; António Paulo Moreira; Manuel Santos Silva; Filipe Neves Santos (author IDs: 5655, 5157, 5552)
-
Unimodal and Multimodal Perception for Forest Management: Review and Dataset (2021)
Authors: Daniel Queirós Silva; Filipe Neves Santos; Armando Sousa; Vitor Manuel Filipe; José Boaventura (author IDs: 5152, 5552, 5773, 8276, 5843)
Robotic navigation and perception for forest management are challenging due to the many obstacles to detect and avoid and the sharp illumination changes. Advanced perception systems are needed because they enable the development of robotic and machinery solutions for smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.
-
Visible and Thermal Image-Based Trunk Detection with Deep Learning for Forestry Mobile Robotics (2021)
Authors: Daniel Queirós Silva; Filipe Neves Santos; Armando Sousa; Vitor Manuel Filipe (author IDs: 5152, 5552, 8276, 5843)
Mobile robotics in forests is currently a hugely important topic due to the recurring occurrence of forest wildfires. Thus, on-site management of forest inventory and biomass is required. To tackle this issue, this work presents a study on ground-level detection of forest tree trunks in visible and thermal images using deep learning-based object detection methods. For this purpose, a forestry dataset composed of 2895 images was built and made publicly available. Using this dataset, five models were trained and benchmarked to detect tree trunks: SSD MobileNetV2, SSD Inception-v2, SSD ResNet50, SSDLite MobileDet and YOLOv4 Tiny. Promising results were obtained; for instance, YOLOv4 Tiny was the best model, achieving the highest AP (90%) and F1 score (89%). The inference time of these models was also evaluated on CPU and GPU. The results showed that YOLOv4 Tiny was the fastest detector, running on the GPU in 8 ms. This work will enhance the development of vision perception systems for smarter forestry robots.