TWINBOT

TWIN roBOTs for cooperative underwater intervention missions

UdG-GIRONA1000

Since 1992, the Computer Vision and Robotics Research Group (VICOROB) of the University of Girona (UdG, subproject 2) has been devoted to research in computer vision, opto-acoustic image processing, intelligent control architectures and underwater robotics. The UdG team conducts research and development of autonomous underwater robots for field operations in which imagery (both optical and acoustic) plays an important role. Several prototypes have been developed: GARBI, URIS, ICTINEU, SPARUS, GIRONA 500 and SPARUS II, the latter two currently operative. A patent on a recently developed underwater laser scanner has been filed. Moreover, the laser scanner, as well as the AUVs GIRONA 500 and SPARUS II, have been licensed to the recently created UdG spin-off company Iqua Robotics SL for commercialization. Map-based navigation, advanced image processing techniques for 2D and 3D optical mapping of the seafloor, SLAM with underwater robots using acoustic and/or video images, and online motion planning are currently active research lines of UdG with promising results. The group has strong experience in intervention AUVs, GIRONA 500 being one of the few AUVs in the world able to integrate an underwater manipulator. UdG has also participated three times in the Student Autonomous Underwater Challenge - Europe (SAUC-E), winning two editions, and has participated in and won the last two editions of EURATHLON, winning the 2014 edition of the AUV competition and forming part of the winning team of the EURATHLON Grand Challenge.
VICOROB has participated in many research projects on underwater robotics at the national level (TAP92-0762, TAP-95-0426, MAR97-0925, MAR99-1062, DPI-2001-2311, CTM2004-04205, DPI2005-09001, CTM2007-64751, DPI2008-06548, DPI2011-27977, CTM2013-46718, DPI2011-27977-C03-02, DPI2015-73978-JIN) and at the European level (MOMARNET, FREESUBNET, SURF3DSLAM, TRIDENT, PANDORA, MORPH, EUROFLEETS2, ROBOCADEMY, EXCELLABUST, eUReady4OS, STRONGMAR and IAUVCONTROL).


Specific objectives

The specific objectives of the UdG subproject include the design and development of the new mechatronics (O1) needed to implement the twin I-AUV system. A new GIRONA AUV will be developed, upgraded to operate at depths of up to 1000 m (O1.1). The vehicle will be equipped with a new fish-eye camera (O1.4) that will be used to estimate the pose of the end effector of the leader I-AUV (O2) using AR markers (O2.1). The UdG team will contribute to the implementation of the Multimodal Communication System (O3) between the two I-AUVs, being responsible for the optical communications through VLC (O3.1) and collaborating with the UJI team on the acoustic communications (O3.2).

The team will also contribute to the Object Detection & Identification objective (O4) by developing a real-time method (O4.1) to identify and locate objects for which an a priori 3D model is available, using non-textured 3D point clouds (O7.4) gathered with a 3D laser scanner. This method will be integrated into a distributed semantic map, which will be updated using SLAM techniques (O6.1). The semantic map will be used to enable high-level cooperative manipulation strategies with free-floating I-AUVs (O5). Leader-follower strategies leading to cooperation with (O5.2) or without (O5.1) explicit communication will be explored. On the leader side, real-time motion planning strategies integrated with a low-level control system based on a task-priority framework will be used. On the follower side, methods to use force feedback (O7.1) and/or to control the end effector so that it keeps a relative pose with respect to the leader's end effector will be studied. Finally, the developed systems will be integrated and validated experimentally (O8).

Specific objectives O2, O4, O5 and O6 are under the responsibility of IP1 (Pere Ridao); specific objectives O1, O3, O7 and O8 will be led by IP2 (Marc Carreras).
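The task-priority framework mentioned above resolves several simultaneous control objectives (e.g. tracking the end-effector target while respecting vehicle constraints) by projecting each lower-priority task into the null space of the higher-priority ones. The following is a minimal kinematic sketch of one resolution step, not the project's actual control stack; the Jacobians, task errors and damping value are illustrative placeholders.

```python
import numpy as np

def task_priority_step(J1, err1, J2, err2, damping=1e-3):
    """One damped task-priority resolution step: the primary task
    (J1, err1) is satisfied first; the secondary task (J2, err2)
    is resolved only within the null space of the primary task."""
    def dls_pinv(J, lam):
        # damped least-squares pseudoinverse: J^T (J J^T + lam^2 I)^-1
        JJt = J @ J.T
        return J.T @ np.linalg.inv(JJt + lam**2 * np.eye(JJt.shape[0]))

    J1p = dls_pinv(J1, damping)
    dq1 = J1p @ err1                              # primary-task velocity
    N1 = np.eye(J1.shape[1]) - J1p @ J1           # null-space projector of task 1
    dq2 = N1 @ dls_pinv(J2 @ N1, damping) @ (err2 - J2 @ dq1)
    return dq1 + dq2                              # combined joint/vehicle velocity
```

With a toy 3-DOF system where the primary task constrains the first two coordinates and the secondary task the third, the step satisfies both tasks, since the secondary task happens to lie entirely in the primary task's null space.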


  • Design & Implement a twin GIRONA I-AUV

  • I-AUV motion planning and control

  • Semantic SLAM using model-based 3D object recognition in dense point clouds

  • View Planning

  • Visible Light and Acoustic communications
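The model-based object recognition objective (O4.1) rests on registering an a priori 3D model against laser-scanned point clouds. The core least-squares alignment step can be sketched as below; this is an illustrative Kabsch/Umeyama fit that assumes known point correspondences, whereas a real pipeline would iterate it with nearest-neighbour matching (ICP-style) on dense, non-textured scans.

```python
import numpy as np

def best_fit_transform(model, scene):
    """Least-squares rigid transform (R, t) mapping corresponding
    model points onto scene points, via SVD of the cross-covariance
    (Kabsch/Umeyama). Inputs are (N, 3) arrays of matched points."""
    mc, sc = model.mean(axis=0), scene.mean(axis=0)
    H = (model - mc).T @ (scene - sc)     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # correct an improper (reflected) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc                       # translation after rotation about centroid
    return R, t
```

Applying a known rotation and translation to a small model cloud and fitting it back recovers the transform, which is how the object's pose in the scan would be obtained once correspondences are established.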

Acknowledgements: