Title | Increasing underwater manipulation autonomy using segmentation and visual tracking |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Fornas, D, Sanz, PJ, Vincze, M |
Conference Name | OCEANS 2017 - Aberdeen |
Date Published | 10/2017 |
Publisher | IEEE |
Conference Location | Aberdeen (UK) |
ISBN Number | 978-1-5090-5278-3 |
Accession Number | 17290579 |
Keywords | archaeological scenarios, archaeology, autonomous grasping, autonomous underwater manipulation, Cameras, Computational modeling, cylinder shaped objects, grasping operations, image reconstruction, image segmentation, intuitive user interface, manipulation operations, manipulators, mobile robots, RANSAC segmentation algorithm, Remote Operated Vehicles, remotely operated vehicles, robot systems, robot vision, Robots, ROV, stereo image processing, Storage tanks, Three-dimensional displays, underwater interventions, underwater manipulation autonomy, underwater robotics, underwater scenarios, underwater vehicles, user interfaces, visual tracking, Visualization, visually guided manipulation, water conditions, water tank conditions |
Abstract | Current research in underwater robotics aims to increase the autonomy of manipulation operations in fields such as archaeology and biology, which cannot afford costly underwater interventions using traditional Remotely Operated Vehicles (ROVs). This paper describes work toward the long-term goal of autonomous underwater manipulation. Autonomous grasping, under limited sensing and water conditions that degrade the robot's systems, is an increasingly important capability in underwater scenarios. We present a framework that combines vision, segmentation, user interfaces and grasp planning to perform visually guided manipulation and improve the specification of grasping operations. With it, a user commands and supervises the robot to recover cylinder-shaped objects, a common and reasonable restriction in archaeological scenarios; the framework can, however, be extended to detect other kinds of objects. Information about the environment is gathered with stereo cameras and laser reconstruction methods to obtain a model of the object's graspable area. A RANSAC segmentation algorithm estimates the model parameters, and the best grasp is presented to the user in an intuitive user interface before being executed by the robot. This approach has been tested both in simulation and in water tank conditions.
URL | https://doi.org/10.1109/OCEANSE.2017.8084762 |
DOI | 10.1109/OCEANSE.2017.8084762 |
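The abstract mentions a RANSAC segmentation algorithm used to estimate the parameters of a cylinder model. The paper's own implementation is not reproduced here; as a minimal illustrative sketch of the RANSAC idea, the following pure-Python example fits a circle (the 2-D cross-section of a cylinder) to noisy points with outliers. All names (`ransac_circle`, `circle_from_3_points`), tolerances and the synthetic data are assumptions for illustration, not the authors' code.

```python
import math
import random

def circle_from_3_points(p1, p2, p3):
    # Solve for the unique circle through three 2-D points.
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # collinear points define no circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.hypot(ax - ux, ay - uy))

def ransac_circle(points, iterations=200, inlier_tol=0.02, seed=0):
    # RANSAC: fit a model to a random minimal sample (3 points for a circle),
    # count points within inlier_tol of the model, keep the best-supported model.
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iterations):
        model = circle_from_3_points(*rng.sample(points, 3))
        if model is None:
            continue
        mx, my, r = model
        inliers = sum(1 for (x, y) in points
                      if abs(math.hypot(x - mx, y - my) - r) <= inlier_tol)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Synthetic cross-section: noisy points on a circle of radius 0.5 centred
# at (1, 2), plus uniformly scattered outliers.
rng = random.Random(1)
pts = [(1 + 0.5 * math.cos(t) + rng.gauss(0, 0.005),
        2 + 0.5 * math.sin(t) + rng.gauss(0, 0.005))
       for t in (i * 2 * math.pi / 60 for i in range(60))]
pts += [(rng.uniform(0, 3), rng.uniform(0, 4)) for _ in range(20)]

(cx, cy, r), n = ransac_circle(pts)
print(f"centre=({cx:.2f}, {cy:.2f}) radius={r:.2f} inliers={n}")
```

In the paper the same principle is applied in 3-D (cylinder axis, radius and position from a stereo/laser point cloud), where the minimal sample and distance function change but the hypothesise-and-verify loop is identical.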