Increasing underwater manipulation autonomy using segmentation and visual tracking

Title: Increasing underwater manipulation autonomy using segmentation and visual tracking
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Fornas, D., Sanz, P.J., Vincze, M.
Conference Name: OCEANS 2017 - Aberdeen
Date Published: 10/2017
Conference Location: Aberdeen (UK)
ISBN Number: 978-1-5090-5278-3
Accession Number: 17290579
Keywords: archaeological scenarios, archaeology, autonomous grasping, autonomous underwater manipulation, Cameras, Computational modeling, cylinder shaped objects, grasping operations, image reconstruction, image segmentation, intuitive user interface, manipulation operations, manipulators, mobile robots, RANSAC segmentation algorithm, Remote Operated Vehicles, remotely operated vehicles, robot systems, robot vision, Robots, ROV, stereo image processing, Storage tanks, Three-dimensional displays, underwater interventions, underwater manipulation autonomy, underwater robotics, underwater scenarios, underwater vehicles, user interfaces, visual tracking, Visualization, visually guided manipulation, water conditions, water tank conditions

Current research in underwater robotics aims to increase the autonomy of manipulation operations in fields such as archaeology and biology, which cannot afford costly underwater interventions using traditional Remotely Operated Vehicles (ROVs). This paper describes work towards the long-term goal of autonomous underwater manipulation. Autonomous grasping is an increasingly important capability in underwater scenarios, where limited sensing and water conditions degrade robot performance. We present a framework that combines vision, segmentation, user interfaces and grasp planning to perform visually guided manipulation and to improve the specification of grasping operations. With it, a user commands and supervises the robot to recover cylinder-shaped objects, a very common shape in archaeological scenarios; the framework can, however, be extended to detect other kinds of objects. Information about the environment is gathered with stereo cameras and laser reconstruction methods to obtain a model of the object's graspable area. A RANSAC segmentation algorithm estimates the model parameters, and the best grasp is presented to the user in an intuitive interface before being executed by the robot. This approach has been tested both in simulation and in water tank conditions.
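To illustrate the RANSAC parameter-estimation step the abstract refers to, the following is a minimal sketch, not the paper's implementation: it fits a circle (the 2D cross-section of a cylinder) to noisy points with outliers, by repeatedly sampling minimal point sets, hypothesizing model parameters, and keeping the hypothesis with the most inliers. All thresholds, dimensions and point counts here are illustrative assumptions; the paper operates on 3D stereo/laser point clouds and a full cylinder model.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    # Circumcenter of three points via perpendicular-bisector equations:
    # 2(p_i - p1) . c = |p_i|^2 - |p1|^2, solved as a 2x2 linear system.
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    center = np.linalg.solve(A, b)
    return center, np.linalg.norm(p1 - center)

def ransac_circle(points, n_iters=200, thresh=0.01, seed=None):
    # Generic RANSAC loop: minimal sample -> model hypothesis -> inlier count.
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=3, replace=False)
        try:
            center, radius = circle_from_3pts(*points[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample, skip
        residuals = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (center, radius)
    return best_model, best_inliers

# Synthetic cross-section: a 5 cm radius ring centred at (0.3, 0.1) m with
# measurement noise, plus uniform outliers standing in for clutter.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 150)
ring = np.c_[0.3 + 0.05 * np.cos(angles), 0.1 + 0.05 * np.sin(angles)]
ring += rng.normal(0.0, 0.002, ring.shape)
outliers = rng.uniform(0.0, 0.5, (50, 2))
points = np.vstack([ring, outliers])

(center, radius), n_inliers = ransac_circle(points, seed=1)
print(center, radius, n_inliers)
```

The same sample-hypothesize-score structure carries over to the 3D cylinder case, where each hypothesis additionally fixes an axis direction and the residual is the point-to-surface distance.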