Non INESC TEC publications - Indexed Articles in Journals
Browsing Non INESC TEC publications - Indexed Articles in Journals by Author "5655"
Item: Automatic Segmentation of Monofilament Testing Sites in Plantar Images for Diabetic Foot Management (2022)
Authors: Costa, T.; Coelho, L.; Manuel Santos Silva (author ID 5655)
Abstract: Diabetic peripheral neuropathy is a major complication of diabetes mellitus and the leading cause of foot ulceration and amputation. The Semmes–Weinstein monofilament examination (SWME) is a widely used, low-cost, evidence-based tool for predicting the prognosis of diabetic foot patients. The examination itself can be quick, but, given the high prevalence of the disease, many healthcare professionals may be assigned to this task for several days per month. In an ongoing project, our objective is to minimize human intervention in the SWME by using an automated testing system relying on computer vision. In this paper we present the project's first part: a system for automatically identifying the SWME testing sites from digital images. For this, we created a database of plantar images and developed a segmentation system based on image processing and deep learning, both of which are novelties. Of the 9 testing sites, the system was able to correctly identify 8 in more than 80% of the images, and 3 of the testing sites were correctly identified in more than 97.8% of the images.
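The abstract does not name the network architecture used, so the following is only a minimal sketch of how detected testing sites could be reduced to image coordinates, assuming a trained semantic-segmentation model exported as TorchScript; the model path, input size, and class layout are hypothetical.

```python
# Hypothetical sketch: locating SWME testing sites in a plantar image.
# Assumes a trained segmentation network (architecture unspecified in the
# abstract) saved as TorchScript at "swme_sites.pt"; names are illustrative.
import cv2
import numpy as np
import torch

NUM_SITES = 9  # the SWME protocol uses 9 plantar testing sites

def locate_testing_sites(image_path: str, model_path: str = "swme_sites.pt"):
    model = torch.jit.load(model_path).eval()

    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (512, 512))
    tensor = torch.from_numpy(resized).permute(2, 0, 1).float().unsqueeze(0) / 255.0

    with torch.no_grad():
        logits = model(tensor)              # (1, NUM_SITES + 1, H, W), class 0 = background
    labels = logits.argmax(dim=1)[0].numpy()

    # Reduce each predicted site mask to a single (x, y) centroid.
    centroids = {}
    for site_id in range(1, NUM_SITES + 1):
        mask = (labels == site_id).astype(np.uint8)
        if mask.sum() == 0:
            continue  # site not detected in this image
        m = cv2.moments(mask)
        centroids[site_id] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroids
```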
Item: Bin Picking for Ship-Building Logistics Using Perception and Grasping Systems (2023)
Authors: Cordeiro, A.; Souza, J. P.; Carlos Miguel Costa; Vitor Manuel Filipe; Luís Freitas Rocha; Manuel Santos Silva (author IDs 5364; 5655; 6164; 5843)
Abstract: Bin picking is a challenging task involving many research domains within the perception and grasping fields, for which no perfect and reliable solutions are available that apply to the wide range of unstructured and cluttered environments found in industrial factories and logistics centers. This paper contributes research on object segmentation in cluttered scenarios, independent of prior knowledge of object shape, for textured and textureless objects. In addition, it addresses the demand for extended datasets with realistic data in deep learning tasks. We propose a solution that uses a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract the segmented point cloud belonging to a single object in the bin. A reconfigurable pipeline is then employed for 6-DoF object pose estimation, followed by a grasp planner that selects a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. The neural network model was trained with both real and simulated data, improving the success rate over the previous classical segmentation approach and yielding an overall grasping success rate of 87.5%.
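As a rough illustration of the 2D-segmentation plus point-cloud extraction step described above, the sketch below runs a torchvision Mask R-CNN and back-projects the best-scoring instance mask into a 3D object point cloud; the checkpoint path, class count, camera intrinsics, and depth scale are assumptions, not values from the paper.

```python
# Hypothetical sketch of the Mask R-CNN + point-cloud extraction step.
# Assumes a Mask R-CNN fine-tuned on the bin-picking parts; checkpoint path,
# class count and camera intrinsics below are illustrative only.
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0   # example RGB-D intrinsics
DEPTH_SCALE = 1000.0                          # depth image in millimetres

def segment_object_cloud(rgb: np.ndarray, depth: np.ndarray, checkpoint: str):
    model = maskrcnn_resnet50_fpn(num_classes=2)        # background + part
    model.load_state_dict(torch.load(checkpoint, map_location="cpu"))
    model.eval()

    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]

    if len(pred["scores"]) == 0:
        return None                                      # nothing detected
    best = pred["scores"].argmax()
    mask = pred["masks"][best, 0].numpy() > 0.5          # binary instance mask

    # Back-project masked pixels with valid depth into a 3D point cloud.
    vs, us = np.nonzero(mask & (depth > 0))
    z = depth[vs, us] / DEPTH_SCALE
    x = (us - CX) * z / FX
    y = (vs - CY) * z / FY
    return np.stack([x, y, z], axis=1)                   # (N, 3) object points
```

The returned object point cloud would then feed the pose-estimation pipeline and grasp planner mentioned in the abstract.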
Item: Development of a Collaborative Robotic Platform for Autonomous Auscultation (2023)
Authors: Lopes, D.; Coelho, L.; Manuel Santos Silva (author ID 5655)
Abstract: Listening to internal body sounds, or auscultation, is one of the most widely used diagnostic techniques in medicine. In addition to being simple, non-invasive, and low-cost, the real-time information it offers is essential for clinical decision-making. This process, usually performed by a doctor in the presence of the patient, currently presents three challenges: procedure duration, participants' safety, and the patient's privacy. In this article we tackle these challenges by proposing a new autonomous robotic auscultation system. With the patient prepared for the examination, a 3D computer vision sub-system identifies the auscultation points and translates them into spatial coordinates. The robotic arm then brings the stethoscope surface into contact with the patient's skin at the various auscultation points. The proposed solution was evaluated in a simulated pulmonary auscultation with six patients (of distinct height, weight, and skin color). The results show that the vision subsystem correctly identified 100% of the auscultation points under uncontrolled lighting conditions, and that the positioning subsystem accurately placed the gripper at the corresponding positions on the human body. Patients reported no discomfort during auscultation with the described automated procedure.
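A minimal sketch of the coordinate step described above, i.e. converting a detected auscultation point (pixel plus depth) into a target in the robot's base frame; the camera intrinsics and hand-eye calibration matrix are placeholders, and the actual system's robot interface is not specified in the abstract.

```python
# Hypothetical sketch: detected auscultation point -> target in robot base frame.
# Intrinsics and hand-eye calibration below are placeholders, not article values.
import numpy as np

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0   # example camera intrinsics

# Example 4x4 hand-eye calibration: camera frame -> robot base frame.
T_BASE_CAMERA = np.array([
    [1.0, 0.0, 0.0, 0.30],
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, 0.50],
    [0.0, 0.0, 0.0, 1.00],
])

def auscultation_point_to_base(u: int, v: int, depth_m: float) -> np.ndarray:
    """Deproject a pixel to the camera frame, then transform to the base frame."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    p_camera = np.array([x, y, depth_m, 1.0])
    return (T_BASE_CAMERA @ p_camera)[:3]

# The target would then be sent to the arm's motion planner (e.g. via MoveIt),
# approaching along the local skin normal until stethoscope contact is reached.
target = auscultation_point_to_base(412, 238, 0.85)
```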