CRIIS - Indexed Articles in Journals
Browsing CRIIS - Indexed Articles in Journals by Author "5773"
-
Item: Bringing Semantics to the Vineyard: An Approach on Deep Learning-Based Vine Trunk Detection (2021)
Authors: André Silva Aguiar; Monteiro, NN; Filipe Neves Santos; Eduardo Pires; Daniel Queirós Silva; Armando Sousa; José Boaventura; 5152; 5552; 5773; 5777; 7844; 8276
Abstract: The development of robotic solutions in unstructured environments brings several challenges, mainly in developing safe and reliable navigation solutions. Agricultural environments are particularly unstructured and, therefore, challenging for the implementation of robotics. An example of this is mountain vineyards, built on steep hillsides, which are characterized by satellite signal blockage, terrain irregularities, harsh ground inclinations, and other factors. All of these factors demand precise and reliable navigation algorithms so that robots can operate safely. This work proposes the detection of semantic natural landmarks to be used in Simultaneous Localization and Mapping algorithms. To this end, Deep Learning models were trained and deployed to detect vine trunks. As significant contributions, we made available a novel vine trunk dataset, called VineSet, which comprises more than 9000 images and the respective annotations for each trunk. VineSet was used to train state-of-the-art Single Shot Multibox Detector models. Additionally, we deployed these models in an Edge-AI fashion and achieved high frame rate execution. Finally, an assisted annotation tool was proposed to make the process of dataset building easier and to improve the models incrementally. The experiments show that our trained models can detect trunks with an Average Precision of up to 84.16% and that our assisted annotation tool facilitates the annotation process, even in other areas of agriculture, such as orchards and forests. Additional experiments evaluated the impact of the amount of training data and compared Transfer Learning against training from scratch. In these cases, some theoretical assumptions were verified.
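As an illustration of the kind of detector described in this abstract, the minimal sketch below runs a Single Shot Multibox Detector over a vineyard image and keeps the confident trunk detections. It uses the torchvision SSD implementation as a stand-in; the checkpoint path, input image, and class count are hypothetical assumptions, not artifacts released with the paper.

# Minimal sketch of single-shot detector inference for vine trunk detection.
# The fine-tuned weights file and the input frame are hypothetical stand-ins.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two classes: background + trunk. Backbone weights are not downloaded here
# because a fine-tuned checkpoint is loaded right after.
model = torchvision.models.detection.ssd300_vgg16(
    weights=None, weights_backbone=None, num_classes=2
)
state = torch.load("vine_trunk_ssd.pth", map_location="cpu")  # hypothetical checkpoint
model.load_state_dict(state)
model.eval()

image = Image.open("vineyard_row.jpg").convert("RGB")  # hypothetical input frame
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]  # dict with boxes, labels, scores

# Keep only confident detections, e.g. to feed as landmarks to a SLAM pipeline.
keep = predictions["scores"] > 0.5
trunk_boxes = predictions["boxes"][keep]
print(f"{len(trunk_boxes)} trunk candidates detected")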
-
Item: Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models (2021)
Authors: André Silva Aguiar; Sandro Augusto Magalhães; Filipe Neves Santos; Castro, L; Tatiana Martins Pinho; Valente, J; Rui Costa Martins; José Boaventura; 5552; 5983; 5773; 6905; 7844; 7481
Abstract: The agricultural sector plays a fundamental role in our society, and it is increasingly important to automate its processes, which can have beneficial impacts on productivity and product quality. Perception and computer vision approaches can be fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards at two different growth stages: the early stage, just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at the two growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized by varying two parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision of up to 66.96%. Since this approach runs on a low-cost and low-power hardware device that requires simplified models with 8-bit quantization, the obtained performance was satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than those present in the vineyard just after the bloom, since the latter class represents smaller grape bunches, with a color and texture more similar to the surrounding foliage, which complicates their detection.
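To make the two benchmark parameters mentioned in this abstract concrete, the sketch below computes intersection over union between bounding boxes and counts the detections that pass both a confidence score and an IoU threshold. The box coordinates and scores are invented toy values, not results from the paper.

# Illustrative sketch of confidence-score filtering and IoU matching.
# Boxes are (x_min, y_min, x_max, y_max); all values below are made up.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_detections(detections, ground_truth, conf_thresh=0.5, iou_thresh=0.5):
    """Count true positives: confident detections that overlap an unmatched ground-truth box."""
    true_positives = 0
    unmatched = list(ground_truth)
    # Process the most confident detections first, as in a typical AP computation.
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if score < conf_thresh:
            continue
        best = max(unmatched, key=lambda gt: iou(box, gt), default=None)
        if best is not None and iou(box, best) >= iou_thresh:
            true_positives += 1
            unmatched.remove(best)
    return true_positives

detections = [((40, 60, 120, 150), 0.83), ((200, 80, 260, 160), 0.42)]  # (box, score) toy values
ground_truth = [(38, 58, 118, 148)]
print(match_detections(detections, ground_truth))  # -> 1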
-
Item: PID Posicast Control for Uncertain Oscillatory Systems: A Practical Experiment (2018)
Authors: Josenalde Barbosa Oliveira; Paulo Moura Oliveira; Tatiana Martins Pinho; José Boaventura; 6636; 5983; 5773; 5761
-
Item: Smartphone Applications Targeting Precision Agriculture Practices—A Systematic Review (2020)
Authors: Mendes, J; Morais, R; Mário Cunha; José Boaventura; Emanuel Peres Correia; Sousa, JJ; Filipe Neves Santos; Pinho, TM; 7332; 5552; 5653; 5773
Abstract: Traditionally, farmers have used their perceptual senses to diagnose and monitor the health and needs of their crops. However, humans possess five basic perceptual systems whose accuracy varies from person to person and depends largely on stress, experience, health, and age. To overcome this problem, in the last decade, with the emergence of smartphone technology, new agronomic applications were developed to provide better, cost-effective, more accurate, and portable diagnosis systems. Conventional smartphones are equipped with several sensors that can support routine and advanced near real-time farming activities at very low cost. Therefore, the development of agricultural applications based on smartphone devices has increased exponentially in recent years. However, the great potential offered by smartphone applications is yet to be fully realized. Thus, this paper presents a literature review and an analysis of the characteristics of several mobile applications for smart/precision agriculture available on the market or developed at the research level. This provides farmers with an overview of the types of applications that exist, the features they offer, and a comparison between them. The paper is also an important resource to help researchers and application developers understand the limitations of existing tools and where new contributions can be made.
-
Item: Unimodal and Multimodal Perception for Forest Management: Review and Dataset (2021)
Authors: Daniel Queirós Silva; Filipe Neves Santos; Armando Sousa; Vitor Manuel Filipe; José Boaventura; 5152; 5552; 5773; 8276; 5843
Abstract: Robotic navigation and perception for forest management are challenging due to the many obstacles that must be detected and avoided and the sharp illumination changes. Advanced perception systems are needed because they enable the development of robotic and machinery solutions for smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing the work developed to date on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.
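As a rough illustration of handling the two modalities described in this abstract (camera images and laser scanning data), the sketch below pairs one image with one point cloud as a single sample. The directory layout, file naming, and file formats are hypothetical assumptions and do not reflect the published dataset's actual structure.

# Minimal sketch of pairing an image with laser scanning data for one sample.
# File names and formats are hypothetical, chosen only for illustration.
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image

@dataclass
class ForestSample:
    image: np.ndarray  # H x W x 3 camera frame
    scan: np.ndarray   # N x 3 point cloud from the laser scanner

def load_sample(root: Path, stem: str) -> ForestSample:
    """Load one multimodal sample, assuming '<stem>.png' and '<stem>.npy' pairs."""
    image = np.asarray(Image.open(root / f"{stem}.png").convert("RGB"))
    scan = np.load(root / f"{stem}.npy")  # hypothetical pre-converted point cloud
    return ForestSample(image=image, scan=scan)

sample = load_sample(Path("forest_dataset"), "seq01_000042")
print(sample.image.shape, sample.scan.shape)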