CRAS - Indexed Articles in Journals
Browsing CRAS - Indexed Articles in Journals by Author "Andry Maykol Pinto"
-
Item: Assessment of Robotic Picking Operations Using a 6-Axis Force/Torque Sensor (2016)
Moreira, E.; Luís Freitas Rocha; Andry Maykol Pinto; António Paulo Moreira; Germano Veiga
This letter presents a novel architecture for evaluating the success of picking operations executed by industrial robots. It is formed by a cascade of machine learning algorithms (kNN and SVM) and uses information obtained from a 6-axis force/torque sensor and, if available, from the built-in sensors of the robotic gripper. Beyond measuring the success or failure of the entire operation, this architecture makes it possible to detect in real time when an object is slipping during the picking. To this end, force and torque signatures are collected during the picking movement of the robot, which is decomposed into five stages that allow distinct levels of success to be characterized over time. Several trials were performed using an industrial robot with two different grippers for picking a long and flexible object. The experiments demonstrate the reliability of the proposed approach under different picking scenarios, since it achieved a testing accuracy of up to 99.5% in identifying the outcome of the picking operations, considering a universe of 400 attempts. © 2016 IEEE.
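As a rough illustration of the classifier cascade described above, the sketch below chains a kNN and an SVM over per-stage force/torque features using scikit-learn. The feature layout, the synthetic data, and the hand-off rule between the two classifiers are assumptions made for the example, not the authors' implementation.

```python
# Sketch of a two-stage kNN -> SVM cascade over force/torque features.
# Assumes each picking attempt is summarized by per-stage statistics of the
# six force/torque channels (here: mean and std over 5 stages = 60 features).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))          # 400 picking attempts (synthetic)
y = rng.integers(0, 2, size=400)        # 1 = successful pick, 0 = failed

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
svm = SVC(kernel="rbf", probability=True).fit(X, y)

def classify_pick(features):
    """Cascade rule (illustrative): trust kNN when its neighbourhood is
    unanimous, otherwise defer to the SVM decision."""
    features = features.reshape(1, -1)
    votes = knn.predict_proba(features)[0]
    if votes.max() >= 0.99:             # all neighbours agree
        return int(np.argmax(votes))
    return int(svm.predict(features)[0])

print(classify_pick(X[0]))
```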
-
Item: Enhancing dynamic videos for surveillance and robotic applications: The robust bilateral and temporal filter (2014)
Andry Maykol Pinto; Paulo José Costa; Miguel Velhote Correia; António Paulo Moreira
Over the last few decades, surveillance applications have been an extremely useful tool to prevent dangerous situations and to identify abnormal activities. However, the majority of surveillance videos are subject to different types of noise that corrupt structured patterns and fine edges. This makes image processing tasks such as object detection, motion segmentation, tracking, and the identification and recognition of humans even more difficult. This paper proposes a novel filtering technique named robust bilateral and temporal (RBLT), which resorts to the spatial and temporal evolution of sequences to conduct the filtering process while preserving relevant image information. A pixel value is estimated using a robust combination of the spatial characteristics of the pixel's neighborhood and its own temporal evolution. Thus, robust statistics concepts and the temporal correlation between consecutive images are incorporated together, which results in a reliable and configurable filter formulation that makes it possible to reconstruct highly dynamic and degraded image sequences. The filtering is evaluated using qualitative judgments and several assessment metrics, for different Gaussian and salt-and-pepper noise conditions. Extensive experiments considering videos obtained by stationary and non-stationary cameras prove that the proposed technique achieves a good perceptual quality when filtering sequences corrupted by a strong noise component.
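A minimal sketch of the general bilateral-plus-temporal idea follows, using OpenCV: each frame is smoothed spatially with a bilateral filter and then blended with the previous output, except where the frame changed strongly (likely true motion). The thresholding and blending rule are illustrative assumptions and do not reproduce the paper's robust formulation.

```python
# Sketch of a bilateral + temporal filter: spatially smooth each frame with a
# bilateral filter, then blend it with the previous output unless the pixel
# changed strongly (likely real motion rather than noise).
import cv2
import numpy as np

def bilateral_temporal(frames, d=5, sigma_color=50, sigma_space=50,
                       alpha=0.6, motion_thresh=25):
    prev = None
    out = []
    for frame in frames:
        spatial = cv2.bilateralFilter(frame, d, sigma_color, sigma_space)
        if prev is None:
            filtered = spatial
        else:
            diff = cv2.absdiff(spatial, prev)
            moving = diff > motion_thresh           # keep true motion sharp
            blended = cv2.addWeighted(spatial, alpha, prev, 1 - alpha, 0)
            filtered = np.where(moving, spatial, blended).astype(frame.dtype)
        prev = filtered
        out.append(filtered)
    return out

# Example with synthetic noisy grayscale frames.
noisy = [np.clip(np.full((120, 160), 128.0) +
                 np.random.normal(0, 20, (120, 160)), 0, 255).astype(np.uint8)
         for _ in range(10)]
clean = bilateral_temporal(noisy)
```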
-
Item: A Flow-based Motion Perception Technique for an Autonomous Robot System (2014)
Andry Maykol Pinto; António Paulo Moreira; Miguel Velhote Correia; Paulo José Costa
Visual motion perception from a moving observer is the most often encountered case in real-life situations. It is a complex and challenging problem, yet it can enable new applications. This article presents an innovative and autonomous robotic system designed for active surveillance, together with a dense optical flow technique. Several optical flow techniques have been proposed for motion perception; however, most of them are too computationally demanding for autonomous mobile systems. The proposed HybridTree method is able to identify the intrinsic nature of the motion by performing two consecutive operations: expectation and sensing. During the expectation phase, descriptive properties of the image are retrieved using a tree-based scheme. In the sensing operation, the properties of the image regions are used by a hybrid and hierarchical optical flow structure to estimate the flow field. The experiments prove that the proposed method extracts reliable visual motion information in a short period of time and is more suitable for applications that do not have specialized computing hardware. The HybridTree therefore differs from other techniques in that it introduces a new perspective on motion perception: high-level information about the image sequence is integrated into the estimation of the optical flow. In addition, it meets most robotic and surveillance demands, and the flow field is obtained with a lower computational cost than other state-of-the-art methods.
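The "expect then sense" pattern can be illustrated with a simple stand-in: a quadtree split on intensity variance plays the role of the tree-based expectation step, and dense Farneback flow (OpenCV) is computed only for sufficiently textured regions. This is only a sketch of the general pattern, not the HybridTree method itself.

```python
# Sketch of a two-step "expect then sense" flow pipeline: a quadtree split on
# intensity variance stands in for the tree-based expectation step, and dense
# Farneback flow is computed only where the region is textured enough.
import cv2
import numpy as np

def quadtree_regions(img, min_size=64, var_thresh=50.0):
    """Recursively split the image; return (x, y, w, h) of textured leaves."""
    regions = []
    def split(x, y, w, h):
        block = img[y:y + h, x:x + w]
        if block.var() < var_thresh:
            return                                  # flat region: skip it
        if w <= min_size or h <= min_size:
            regions.append((x, y, w, h))
            return
        hw, hh = w // 2, h // 2
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            split(x + dx, y + dy, hw, hh)
    split(0, 0, img.shape[1], img.shape[0])
    return regions

def region_guided_flow(prev, curr):
    flow = np.zeros((*prev.shape, 2), np.float32)
    for x, y, w, h in quadtree_regions(prev):
        flow[y:y + h, x:x + w] = cv2.calcOpticalFlowFarneback(
            prev[y:y + h, x:x + w], curr[y:y + h, x:x + w],
            None, 0.5, 2, 9, 3, 5, 1.1, 0)
    return flow

prev = (np.random.rand(240, 320) * 255).astype(np.uint8)
curr = np.roll(prev, 2, axis=1)                     # simulate horizontal motion
print(region_guided_flow(prev, curr).shape)
```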
-
Item: A Localization Method Based on Map-Matching and Particle Swarm Optimization (2015)
Andry Maykol Pinto; António Paulo Moreira; Paulo José Costa
This paper presents a novel localization method for small mobile robots. The proposed technique is especially designed for the Robot@Factory, a robotic competition that started in Lisbon in 2011. The real-time localization technique resorts to low-cost infrared sensors, a map-matching method and an Extended Kalman Filter (EKF) to create a pose tracking system that performs well. The sensor information is continuously updated in time and space according to the expected motion of the robot. Then, the information is incorporated into the map-matching optimization in order to increase the amount of sensor information that is available at each moment. In addition, Particle Swarm Optimization (PSO) relocates the robot when the map-matching error is high, meaning that the map-matching is unreliable and the robot is lost. The experiments presented in this paper prove the ability and accuracy of the presented technique to locate small mobile robots in this competition. Extensive results show that the proposed method provides a useful localization capability for robots equipped with a limited number of sensors, and even with less reliable sensors.
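The recovery logic (track the pose with a filter, re-locate with PSO when map-matching degrades) can be sketched as below. The map-matching error function is a placeholder and the PSO hyperparameters are generic defaults; none of this reproduces the paper's implementation.

```python
# Sketch of the recovery logic: track the pose with a filter, and when the
# map-matching error grows too large, relaunch a particle-swarm search over
# (x, y, theta) to re-locate the robot. The error function is a placeholder.
import numpy as np

def map_match_error(pose, scan, grid_map):
    """Placeholder: distance between predicted and observed range readings."""
    predicted = np.zeros_like(scan)        # would come from ray-casting the map
    return float(np.mean(np.abs(predicted - scan)))

def pso_relocate(scan, grid_map, bounds, n_particles=30, iters=50,
                 w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds[0]), np.array(bounds[1])
    pos = np.random.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([map_match_error(p, scan, grid_map) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, 3), np.random.rand(n_particles, 3)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([map_match_error(p, scan, grid_map) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

def update_pose(ekf_pose, scan, grid_map, error_thresh=0.5):
    """Keep the filtered pose unless map-matching says the robot is lost."""
    if map_match_error(ekf_pose, scan, grid_map) > error_thresh:
        return pso_relocate(scan, grid_map, ([0, 0, -np.pi], [2, 2, np.pi]))
    return ekf_pose

scan = np.random.rand(16)
print(update_pose(np.zeros(3), scan, grid_map=None, error_thresh=0.1))
```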
-
Item: MARESye: A hybrid imaging system for underwater robotic applications (2020)
Aníbal Matos; Andry Maykol Pinto
This article presents an innovative hybrid imaging system that provides dense and accurate 3D information from harsh underwater environments. The proposed system is called MARESye and captures the advantages of both active and passive imaging methods: multiple light stripe range (LSR) and a photometric stereo (PS) technique, respectively. This hybrid approach fuses information from these techniques through a data-driven formulation to extend the measurement range and to produce high-density 3D estimations in dynamic underwater environments. The system is driven by a gating timing approach to reduce the impact of several photometric issues related to underwater environments, such as diffuse reflection, water turbidity and non-uniform illumination. Moreover, MARESye synchronizes and matches the acquisition of images with sub-sea phenomena, which leads to clear pictures (with a high signal-to-noise ratio). Experiments conducted in realistic environments showed that MARESye is able to provide reliable, high-density and accurate 3D data. Moreover, the experiments demonstrated that the performance of MARESye is less affected by sub-sea conditions, since its SSIM index was 0.655 in high-turbidity waters whereas conventional imaging techniques obtained 0.328 in similar testing conditions. Therefore, the proposed system represents a valuable contribution for the inspection of maritime structures as well as for the navigation procedures of autonomous underwater vehicles during close-range operations.
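The complementary nature of the two modalities (sparse but accurate active measurements versus dense but noisier passive ones) can be illustrated with a toy fusion rule: rescale the dense map so it agrees with the sparse samples and keep the sparse measurements where they exist. This is an illustrative assumption only, not MARESye's data-driven formulation.

```python
# Sketch of fusing a sparse-but-accurate depth map (e.g. from light striping)
# with a dense-but-noisier one (e.g. from photometric stereo): trust the sparse
# measurement where it exists, and scale the dense map to agree with it.
import numpy as np

def fuse_depth(sparse_depth, dense_depth):
    """sparse_depth: NaN where no measurement; dense_depth: full coverage."""
    valid = ~np.isnan(sparse_depth)
    # Global scale correction of the dense map using overlapping pixels.
    scale = np.median(sparse_depth[valid] / dense_depth[valid])
    fused = dense_depth * scale
    fused[valid] = sparse_depth[valid]     # keep accurate samples verbatim
    return fused

# Synthetic example: a plane seen by both modalities.
dense = np.full((100, 100), 1.2)                       # biased dense estimate
sparse = np.full((100, 100), np.nan)
sparse[::10, :] = 1.0                                  # stripes every 10 rows
print(fuse_depth(sparse, dense).mean())
```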
-
Item: A mosaicking technique for object identification in underwater environments (2019)
Alexandra Nunes; Ana Gaspar; Andry Maykol Pinto; Aníbal Matos
Purpose: This paper aims to present a mosaicking method for underwater robotic applications, whose result can be provided to other perceptual systems for scene understanding, such as real-time object recognition. Design/methodology/approach: The method is called robust and large-scale mosaicking (ROLAMOS) and presents an efficient frame-to-frame motion estimation with outlier removal and consistency checking that maps large visual areas in high resolution. The visual mosaic of the sea floor is created on the fly by a robust registration procedure that composes monocular observations and manages the computational resources. Moreover, the registration process of ROLAMOS aligns each new observation to the existing mosaic. Findings: A comprehensive set of experiments compares the performance of ROLAMOS to other similar approaches, using both publicly available data sets and live data obtained by an ROV operating in real scenes. The results demonstrate that ROLAMOS is adequate for mapping sea-floor scenarios, as it provides accurate information from the seabed, which is of extreme importance for autonomous robots surveying the environment without relying on specialized computers. Originality/value: ROLAMOS is suitable for robotic applications that require an online, robust and effective technique to reconstruct the underwater environment from visual information alone. © 2018, Emerald Publishing Limited.
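The generic building blocks of such a frame-to-frame registration step (feature matching with a ratio test, followed by a RANSAC homography and a simple consistency check) are sketched below with OpenCV. The detector, thresholds and checks are assumptions chosen for the example, not the ROLAMOS implementation.

```python
# Sketch of frame-to-frame registration with outlier removal: ORB features,
# ratio-test matching and a RANSAC homography, i.e. the generic building blocks
# of a visual mosaicking pipeline.
import cv2
import numpy as np

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def frame_to_frame_homography(prev_gray, curr_gray, ratio=0.75):
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 8:
        return None                                   # consistency check
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None or mask.sum() / len(good) < 0.5:     # too many outliers
        return None
    return H

img = (np.random.rand(240, 320) * 255).astype(np.uint8)
print(frame_to_frame_homography(img, np.roll(img, 5, axis=1)))
```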
-
Item: A Safety Monitoring Model for a Faulty Mobile Robot (2018)
Andry Maykol Pinto; Leite, A.; Aníbal Matos
The continued development of mobile robots (MR) must be accompanied by an increase in robotic safety measures. Not only must MR be capable of detecting and diagnosing faults, they should also be capable of understanding when the dangers of a mission, to themselves and to the surrounding environment, warrant the abandonment of their endeavors. An analysis of fault detection and diagnosis techniques helps shed light on the challenges of the robotics field, while also revealing a lack of research on autonomous decision-making tools. This paper proposes a new skill-based architecture for mobile robots, together with a novel risk assessment and decision-making model, to overcome the difficulties currently felt in autonomous robot design.
-
Item: Unsupervised flow-based motion analysis for an autonomous moving system (2014)
Andry Maykol Pinto; Miguel Velhote Correia; António Paulo Moreira; Paulo José Costa
This article discusses motion analysis based on dense optical flow fields for a new generation of mobile robotic systems with real-time constraints. It focuses on a surveillance scenario where a specially designed autonomous mobile robot uses a monocular camera to perceive motion in the environment. Computational resources and processing time are two of the most critical aspects in robotics; therefore, two non-parametric techniques are proposed, namely the Hybrid Hierarchical Optical Flow Segmentation and the Hybrid Density-Based Optical Flow Segmentation. Both methods are able to extract the moving objects by performing two consecutive operations: refining and collecting. During the refining phase, the flow field is decomposed into a set of clusters based on descriptive motion properties. These properties are used in the collecting stage by a hierarchical or density-based scheme to merge the clusters that represent different motion models. In addition, a model selection method is introduced. This novel method analyzes the flow field and estimates the number of distinct moving objects using a Bayesian formulation. The research evaluates the performance achieved by the methods in a realistic surveillance situation. The experiments conducted proved that the proposed methods extract reliable motion information in real time and without using specialized computers. Moreover, the resulting segmentation is less computationally demanding than other recent methods and is therefore suitable for most robotic or surveillance applications.
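The refine-and-collect idea can be sketched with a generic density-based clustering of per-pixel (position, flow) features, with scikit-learn's DBSCAN standing in for the paper's density-based scheme; the feature weighting and parameters below are illustrative assumptions.

```python
# Sketch of the "refine then collect" idea: describe each pixel by its position
# and flow vector, then let a density-based clustering group pixels that share
# a motion model. DBSCAN stands in for the paper's density-based scheme.
import numpy as np
from sklearn.cluster import DBSCAN

def segment_flow(flow, spatial_weight=0.05, eps=0.8, min_samples=50):
    """flow: H x W x 2 array of (dx, dy) per pixel."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        xs.ravel() * spatial_weight,       # down-weighted image coordinates
        ys.ravel() * spatial_weight,
        flow[..., 0].ravel(),
        flow[..., 1].ravel(),
    ])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    return labels.reshape(h, w)            # -1 marks outlier pixels

# Synthetic example: a moving block over a static background.
flow = np.zeros((60, 80, 2), np.float32)
flow[20:40, 30:50] = (3.0, 0.0)
print(np.unique(segment_flow(flow)))
```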
-
Item: Urban@CRAS dataset: Benchmarking of visual odometry and SLAM techniques (2018)
Ana Gaspar; Aníbal Matos; Andry Maykol Pinto; Alexandra Nunes
-
Item: Visual motion perception for mobile robots through dense optical flow fields (2017)
Andry Maykol Pinto; Paulo José Costa; Miguel Velhote Correia; Aníbal Matos; António Paulo Moreira
Recent advances in visual motion detection and interpretation have enabled the rise of new robotic systems for autonomous and active surveillance. In this line of research, the current work discusses motion perception by proposing a novel technique that analyzes dense flow fields and distinguishes several regions with distinct motion models. The method is called Wise Optical Flow Clustering (WOFC) and extracts the moving objects by performing two consecutive operations: evaluating and resetting. Motion properties of the flow field are retrieved and described in the evaluation phase, which provides high-level information about the spatial segmentation of the flow field. During the resetting operation, these properties are combined and used to feed a guided segmentation approach. The WOFC requires information about the number of motion models and, therefore, this paper introduces a model selection method based on a Bayesian approach that balances the model's fitness and complexity. It combines the correlation of a histogram-based analysis with the decay ratio of the normalized entropy criterion. This approach interprets the flow field and gives an estimate of the number of moving objects. The experiments conducted in a realistic environment have proved that the WOFC presents several advantages that meet the requirements of common robotic and surveillance applications: it is computationally efficient and provides a pixel-wise segmentation, compared to other state-of-the-art methods.
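Selecting the number of motion models by trading fitness against complexity can be illustrated with a generic criterion: fit Gaussian mixtures of increasing order to the flow vectors and keep the order with the lowest BIC. This is a stand-in for the paper's histogram and normalized-entropy criterion, shown only to make the idea concrete.

```python
# Sketch of picking the number of motion models by balancing fit and
# complexity: score Gaussian mixtures of the flow vectors with BIC, as a
# generic stand-in for the paper's Bayesian criterion.
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_num_motions(flow, max_models=5):
    """flow: H x W x 2; returns the number of motion models with lowest BIC."""
    vectors = flow.reshape(-1, 2)
    sample = vectors[::10]                    # subsample for speed
    best_k, best_bic = 1, np.inf
    for k in range(1, max_models + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(sample)
        bic = gmm.bic(sample)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

rng = np.random.default_rng(1)
flow = rng.normal(0.0, 0.05, (60, 80, 2)).astype(np.float32)
flow[10:30, 10:30] += (2.0, 0.0)              # one moving object + background
print(estimate_num_motions(flow))             # typically 2
```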
-
Item: WirelessSyncroVision: Wireless synchronization for industrial stereoscopic systems (2016)
Andry Maykol Pinto; António Paulo Moreira; Paulo José Costa
This research proposes a novel technological solution for marker-based human motion capture called WirelessSyncroVision (WSV). The WSV is formed by two main modules: the visual node (WSV-V), which is based on a stereoscopic vision system, and the marker node (WSV-M), which is constituted by a 6-DOF active marker. The solution synchronizes the acquisition of images by remote multi-camera systems with the ON period of the active marker. This increases the robustness of the stereoscopic system to illumination changes, which is extremely relevant for programming industrial robotic arms using a human demonstrator, i.e., programming by demonstration (PbD). In addition, the research presents a robust method named Adaptive and Robust Synchronization (ARS) that is designed for the temporal alignment of remote devices over a wireless network. The algorithm models the phase difference as a function of time, measuring the parameters that must be known to predict the synchronization instant between the active marker and the remote cameras. Results demonstrate that the ARS balances real-time capability with the estimation performance of the phase difference. Therefore, this research proposes an elegant solution for synchronizing image acquisition systems in real time that is easy to implement and has low operational costs; the major advantage of the WSV, however, is its high level of flexibility, since it can be extended to other applications beyond PbD, for instance motion capture, motion analysis, and remote sensing systems.
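The phase-difference idea can be sketched as a linear clock model: fit an offset and a drift from timestamped phase observations, then predict the next instant at which the phase crosses a multiple of the marker period. The model and the numbers in the example are assumptions made for illustration; the ARS algorithm itself is not reproduced here.

```python
# Sketch of predicting a synchronization instant from a linear model of the
# phase difference between the camera trigger and the marker's ON period.
import numpy as np

def fit_phase_model(t_obs, phase_obs):
    """Least-squares fit of phase(t) = offset + drift * t."""
    drift, offset = np.polyfit(t_obs, phase_obs, 1)
    return offset, drift

def next_sync_instant(offset, drift, period, t_now):
    """Earliest t >= t_now at which the modelled phase difference hits the
    next multiple of the marker period (exposure aligned with the ON time)."""
    if abs(drift) < 1e-12:
        return None                        # no drift: alignment never changes
    current = offset + drift * t_now
    k = np.ceil(current / period) if drift > 0 else np.floor(current / period)
    return float((k * period - offset) / drift)

# Example: simulated observations with a 2 ms/s relative drift, 40 ms period.
t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
phase = 0.005 + 0.002 * t + rng.normal(0.0, 1e-4, t.size)
offset, drift = fit_phase_model(t, phase)
print(next_sync_instant(offset, drift, period=0.040, t_now=10.0))
```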