CRIIS - Indexed Articles in Journals

Recent Submissions

Now showing 1 - 5 of 127
  • Item
    Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato
    (2022) Germano Filipe Moreira; Sandro Augusto Magalhães; Tatiana Martins Pinho; Filipe Neves Santos; Mário Cunha
    The harvesting operation is a recurring task in the production of any crop, making it an excellent candidate for automation. In protected horticulture, tomatoes are among the crops with the highest added value, yet their robotic harvesting is still far from maturity. The development of an accurate fruit detection system is therefore a crucial step towards fully automated robotic harvesting. Deep Learning (DL) detection frameworks such as the Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) are robust and accurate alternatives that respond better to highly complex scenarios. DL can readily be applied to detect tomatoes, but classifying them is a harder task that demands a large amount of data. This paper therefore proposes DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect tomatoes and compares them with a proposed histogram-based HSV colour space model that classifies each tomato and determines its ripening stage, using two acquired image datasets. Regarding detection, both models obtained promising results, with the YOLOv4 model standing out with an F1-score of 85.81%. For the classification task, YOLOv4 was again the best model, with a Macro F1-score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to those of the YOLOv4 model, with a Balanced Accuracy of 68.10%.
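The histogram-based HSV classification idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the hue bands and decision thresholds below are hypothetical, chosen only to show how hue-histogram mass can separate ripening stages.

```python
# Illustrative sketch (not the paper's code): classify a tomato's ripening
# stage by comparing hue-histogram mass in a "red" band vs a "green" band.
# Hue bins follow the OpenCV 0-179 scale; band limits and the 0.6 decision
# thresholds are hypothetical assumptions.

RED_BINS = list(range(0, 11)) + list(range(170, 180))   # red hue wraps around 0
GREEN_BINS = list(range(35, 86))

def classify_ripeness(hue_hist):
    """hue_hist: list of 180 pixel counts, one per OpenCV hue bin."""
    red = sum(hue_hist[b] for b in RED_BINS)
    green = sum(hue_hist[b] for b in GREEN_BINS)
    total = red + green
    if total == 0:
        return "unknown"
    if red / total > 0.6:
        return "ripe"
    if green / total > 0.6:
        return "unripe"
    return "turning"

# A mostly-green histogram is classified as unripe
hist = [0] * 180
for b in range(40, 60):
    hist[b] = 100
print(classify_ripeness(hist))  # unripe
```

In practice the paper benchmarks this kind of colour-space rule against the learned detectors; the sketch only shows why hue alone already carries strong ripeness signal.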
  • Item
    Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models
    (2021) André Silva Aguiar; Sandro Augusto Magalhães; Filipe Neves Santos; Castro, L.; Tatiana Martins Pinho; Valente, J.; Rui Costa Martins; José Boaventura
    The agricultural sector plays a fundamental role in our society, and it is increasingly important to automate its processes, which can have a beneficial impact on the productivity and quality of products. Perception and computer vision approaches are fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards at different growth stages: the early stage, just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized by varying two parameters: the confidence score and the intersection-over-union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision of up to 66.96%. Since this approach uses limited resources, namely a low-cost and low-power hardware device that requires simplified models with 8-bit quantization, the obtained performance was satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than those present in the vineyard just after the bloom, since the latter class represents smaller grape bunches, with a colour and texture more similar to the surrounding foliage, which complicates their detection.
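The intersection-over-union threshold used in the benchmark above determines when a predicted box counts as a correct detection. A minimal sketch of the standard IoU computation (not the paper's code; boxes are assumed to be `(x1, y1, x2, y2)` corner tuples):

```python
# Standard intersection-over-union between two axis-aligned boxes.
# A detection is typically counted as a true positive only when
# iou(prediction, ground_truth) exceeds a chosen threshold (e.g. 0.5).

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes do not intersect
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping by half: 50 / (100 + 100 - 50)
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333...
```

Sweeping this threshold (together with the confidence score) is what produces the precision/recall trade-off the benchmark characterizes.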
  • Item
    Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse
    (2021) Sandro Augusto Magalhães; Castro, L.; Guilherme Moreira Aresta; Filipe Neves Santos; Mário Cunha; Dias, J.; António Paulo Moreira
    The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably at any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato at any stage of its life cycle (from flower to ripe tomato). The state of the art in visual tomato detection focuses mainly on ripe tomatoes, which have a colour distinct from the background. This paper contributes an annotated visual dataset of green and reddish tomatoes. This kind of dataset is uncommon and was not previously available for research purposes. It will enable further developments in edge artificial intelligence for the in situ, real-time visual tomato detection required for harvesting robots. Using this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Given our robotic platform specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15, an mAP of 51.46 and an inference time of 16.44 ms on the NVIDIA Turing architecture platform, an NVIDIA Tesla T4 with 12 GB. YOLOv4 Tiny also had impressive results, mainly concerning inference times of about 5 ms.
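The F1-score reported above is the harmonic mean of precision and recall over the detection counts. As a small sketch (the counts below are made up, not the paper's data):

```python
# F1-score from raw detection counts: true positives (tp),
# false positives (fp) and false negatives (fn).

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: precision = recall = 2/3, so F1 = 66.67 (on a 0-100 scale)
print(round(f1_score(tp=80, fp=40, fn=40) * 100, 2))  # 66.67
```

The harmonic mean penalises imbalance, which is why F1 is preferred over plain accuracy when comparing detectors that trade precision against recall.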
  • Item
    On the development of a collaborative robotic system for industrial coating cells
    (2021) Rafael Lírio Arrais; Carlos Miguel Costa; Ribeiro, P.; Luís Freitas Rocha; Manuel Santos Silva; Germano Veiga
  • Item
    A Versatile, Low-Power and Low-Cost IoT Device for Field Data Gathering in Precision Agriculture Practices
    (2021) Morais, R.; Emanuel Peres Correia; Sousa, J. J.; Silva, N.; Silva, R.; Mendes, J.
    Spatial and temporal variability characterization in Precision Agriculture (PA) practices is often accomplished by proximity data-gathering devices, which acquire data from a wide variety of sensors installed within the vicinity of crops. Proximity data acquisition usually depends on a hardware solution to which sensors can be coupled, managed by software that may (or may not) store, process and send the acquired data to a back-end using some communication protocol. The sheer number of both proprietary and open hardware solutions, together with the diversity and characteristics of available sensors, is enough to deem the task of designing a data acquisition device complex. Factoring in the harsh operational context, the multiple DIY solutions presented by an active online community, the available in-field power approaches and the different communication protocols, each proximity monitoring solution can be regarded as singular. Data acquisition devices should be increasingly flexible, not only by supporting a large number of heterogeneous sensors, but also by being able to resort to different communication protocols, depending on both the operational and functional contexts in which they are deployed. Furthermore, these small and unattended devices need to be sufficiently robust and cost-effective to allow greater in-field measurement granularity 365 days/year. This paper presents a low-cost, flexible and robust data acquisition device that can be deployed in different operational contexts, as it supports three different communication technologies: IEEE 802.15.4/ZigBee, LoRa/LoRaWAN and GPRS. Software and hardware features suitable for using heat-pulse methods to measure sap flow, leaf wetness sensors and others are embedded. Its power consumption is only 83 µA during sleep mode, and the cost of the basic unit was kept below the EUR 100 limit. In-field continuous evaluation over the past three years proves that the proposed solution, SPWAS'21, is not only reliable but also represents a robust and low-cost data acquisition device capable of gathering different parameters of interest in PA practices.
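The 83 µA sleep current quoted above dominates the energy budget of a mostly-sleeping device. A back-of-the-envelope battery-life estimate illustrates why (the battery capacity, active current and duty cycle below are hypothetical, not figures from the paper):

```python
# Rough battery-life estimate for a duty-cycled sensing node.
# Only the 83 uA sleep current comes from the abstract; the 2000 mAh cell,
# 20 mA active current and 0.1% duty cycle are hypothetical assumptions.

def battery_life_days(capacity_mah, sleep_ua, active_ma, duty):
    """duty: fraction of time spent awake (0..1)."""
    avg_ma = (sleep_ua / 1000.0) * (1 - duty) + active_ma * duty
    return capacity_mah / avg_ma / 24.0

# 2000 mAh cell, 20 mA when active, awake 0.1% of the time:
# roughly two years of operation on one charge
print(round(battery_life_days(2000, 83, 20, 0.001), 1))
```

Even with generous margins, sub-100 µA sleep consumption is what makes the year-round in-field deployment described above practical.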