Let's do it: Pipe following


In this tutorial we will learn how to use UWSim and the benchmarking module, which allows us to evaluate the algorithms used. Some exercises are proposed in order to learn basic things that can be done with the simulator. The tutorial has been created for ROS Indigo, although it is also known to work in Jade.

Installation

It is possible to install the software locally on the machine or to directly use the virtual machine provided here, using uwsim as both user and password. The virtual machine requires a more powerful host to run at the same rate as a local installation, and it already has the installation, first execution and scene installation steps done. If you want to install it locally, just follow these steps:

  • STEP 1: Install ROS Indigo following the instructions on the ROS webpage.
  • STEP 2: Create a ROS workspace. The ROS workspace is a folder that contains all the source files of our project in a logical structure. To create it, execute:
mkdir -p ~/uwsim_ws/src 
cd ~/uwsim_ws/src 
catkin_init_workspace 
cd ~/uwsim_ws 
catkin_make install 
  • STEP 3: Add the workspace to .bashrc. Adding the workspace to .bashrc makes it available every time a console is opened, so UWSim can be launched directly. Otherwise a source will be required in each new console. To add it, execute:
echo "source ~/uwsim_ws/install/setup.bash" >> ~/.bashrc 
source ~/uwsim_ws/install/setup.bash 
  • STEP 4: Install uwsim libraries.
sudo apt-get install ros-indigo-uwsim-bullet ros-indigo-uwsim-osgbullet ros-indigo-uwsim-osgocean ros-indigo-uwsim-osgworks ros-indigo-visualization-osg 
  • STEP 5: State where UWSim and the benchmarking module will be installed:
cd ~/uwsim_ws/ 
gedit .rosinstall 

Fill the file with the following content:

- other: {local-name: /opt/ros/indigo/share/ros} 
- other: {local-name: /opt/ros/indigo/share} 
- other: {local-name: /opt/ros/indigo/stacks} 
- setup-file: {local-name: /opt/ros/indigo/setup.sh} 
- git: {local-name: src/underwater_simulation, uri: 'https://github.com/perezsolerj/underwater_simulation.git', version: indigo-devel} 
- git: {local-name: src/benchmarks, uri: 'https://github.com/perezsolerj/uwsimbenchmarks.git', version: indigo-devel} 
- git: {local-name: src/pipefollowing, uri: 'https://github.com/perezsolerj/pipefollowing.git', version: master} 
  • STEP 6: Download files.
rosws update 
  • STEP 7: Install dependencies and build:
rosdep install --from-paths src --ignore-src --rosdistro indigo -y 
catkin_make install 

First execution

Now the system should be completely installed and built. It is time to launch a predefined scene in UWSim and check that everything is working as it should.

  • STEP 1: Check that roscore is running. Roscore is a server that allows communication between ROS nodes, so it needs to be up and running at all times. If it is not, open a console and run "roscore".
  • STEP 2: Execute UWSim:
rosrun uwsim uwsim 

DefaultScene.png

If it is the first UWSim execution on the machine, UWSim will ask to download a data file. Press y to download it; this data file contains basic models of terrains, vehicles and objects, and it is installed in the ~/.uwsim folder. If the simulator produces a segmentation fault at start, or visual glitches appear in the main window, it usually means the graphics card drivers are missing, not correctly installed, or the graphics card is not supported. Any Nvidia or ATI card should be capable of running UWSim. UWSim can still run with the "--disableShaders" option, but correctly installing the drivers is advisable.

  • STEP 3: Check that UWSim is publishing information. UWSim uses ROS interfaces to publish sensors such as cameras, sonars, DVL... A list of the published topics can be seen by running "rostopic list" in a new console while UWSim is running. It is also possible to see the information inside each topic with "rostopic echo (TOPICNAME)".
  • STEP 4: Virtual cameras are one of the most important sensors available in UWSim. In order to see the camera output, press "c" in the main window; an OSG widget shows every simulated virtual camera. It is also possible to see it through the ROS network using:
rosrun image_view image_view image:=/uwsim/camera1

Pipe following scene

This tutorial is based on an underwater pipe following example. The goal is to develop a program able to follow an underwater pipe in order to detect any leak or problem. These scenes are not included in the default UWSim data package, so they have to be downloaded and added to the scene folders using the scene installer. In order to do so:

  • STEP 1: Go to UWSim folder:
cd ~/uwsim_ws/src/underwater_simulation/uwsim 
  • STEP 2: Execute the installation script for each scene:
./data/scenes/installScene -s pipeFollowing_basic.uws 
./data/scenes/installScene -s pipeFollowing_turns.uws 
./data/scenes/installScene -s pipeFollowing_heights.uws 
  • STEP 3: Build the workspace with the new scenes in order to make them available when UWSim runs:
cd ~/uwsim_ws/ 
catkin_make install 
  • STEP 4: Check that the new scenes are working:
rosrun uwsim uwsim --configfile pipeFollowing_basic.xml
rosrun uwsim uwsim --configfile pipeFollowing_turns.xml
rosrun uwsim uwsim --configfile pipeFollowing_heights.xml

PipesHeights.png

Robot teleoperation

Now the basic setup is installed and working. The final objective of this tutorial is to make the vehicle follow the green pipes. As can be seen, the scenes have different difficulties: basic is a straight line, heights requires changing the vehicle depth, and the pipes in turns have 90 degree angles. UWSim allows us to develop software incrementally so the learning curve is easier. For this reason the tutorial will start with basic goals and progress to a realistic final code that could be tested on a real robot.

The first step is to be able to send commands to the robot so it moves through the scene, teleoperating it to follow a pipe.

The provided setup is a ROS workspace that can be built with "catkin_make install" and executed with "rosrun uwsim uwsim --configfile SCENE". The code we will develop is placed in the pipefollowing package in "~/uwsim_ws/src/pipefollowing". As this is already a ROS package it will be built when "catkin_make install" is executed in the workspace; the code is inside the src folder. There you will find the barebones of each proposed exercise. The first of them is "PF_teleop.py". This code reads the keyboard input and sends commands to the vehicle. In order to run it:

  • STEP 1: Start UWSim with the selected scene:
rosrun uwsim uwsim --configfile pipeFollowing_basic.xml
  • STEP 2: In a new console, execute the teleoperation code:
rosrun pipefollowing PF_teleop.py
  • STEP 3: Keeping the focus on this console, press w/s and the vehicle will move up and down.

Unfortunately the available movements are not enough to follow pipes, so it is necessary to edit "PF_teleop.py" and change some code. But first let's see how the code sends commands to the vehicle:

twist_topic="/g500/velocityCommand" 
... 
pub = rospy.Publisher(twist_topic, TwistStamped,queue_size=1)

These lines decide the topic the commands are sent to, in this case "/g500/velocityCommand". Inside the loop:

msg = TwistStamped()

This line creates a TwistStamped message, the message type that allows sending velocities to the vehicle.

if c=='\n': 
  start() 
  print "Benchmarking Started!" 
elif c=='\x1b': ##This means we are pressing an arrow! 
  c2= sys.stdin.read(1) ##read and discard the '[' of the escape sequence 
  c2= sys.stdin.read(1) ##read the actual arrow code 
  if c2=='A': ##up arrow 
    msg.twist.linear.z=-baseVelocity 
  elif c2=='B': ##down arrow 
    msg.twist.linear.z=baseVelocity

These lines check the key pressed, already stored in the char variable "c". For instance, the enter key calls start() (we will see later what it means) and the up/down arrows fill the linear velocity z axis field of the message.

pub.publish(msg) 
rospy.sleep(0.1)

Finally, these lines publish the message just filled, which will make the vehicle move, and sleep for 0.1 seconds until the next iteration.

EXERCISE: Finish the code so the vehicle can move and follow the pipe in every scene. In order to do so just check the key pressed and fill the rest of the message fields:

Linear velocity: msg.twist.linear.x, msg.twist.linear.y, msg.twist.linear.z 
Angular velocity: msg.twist.angular.x, msg.twist.angular.y, msg.twist.angular.z
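
A minimal sketch of one possible extension of the existing if/elif key chain is shown below. The key choices (i/k/j/l/u/o) are an assumption, not the required mapping; "msg" and "baseVelocity" come from the barebone, and the signs may need flipping depending on the vehicle frame convention.

elif c=='i': ##forward (hypothetical key) 
  msg.twist.linear.x=baseVelocity 
elif c=='k': ##backward 
  msg.twist.linear.x=-baseVelocity 
elif c=='j': ##lateral left 
  msg.twist.linear.y=-baseVelocity 
elif c=='l': ##lateral right 
  msg.twist.linear.y=baseVelocity 
elif c=='u': ##yaw left 
  msg.twist.angular.z=-baseVelocity 
elif c=='o': ##yaw right 
  msg.twist.angular.z=baseVelocity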

Autoevaluation

Once the teleoperation code is ready it is possible to measure its performance; the benchmarking module is already configured for it. This module is able to evaluate different metrics, configured through an XML file, while running UWSim, using the simulation as ground truth. In this case it will measure the pipe following performance. To execute it, launch files are already created in the pipefollowing package. These launch files group ROS commands and allow executing more than one of them at the same time. However, it is necessary to communicate the start and stop of the pipe following execution.

This is the reason why we need ROS services. ROS services are similar to function calls between ROS nodes: they require an input and an output. In our case both input and output are empty. These services are already included in the teleop barebone:

##wait for benchmark init service 
#rospy.wait_for_service('/startBench') 
#start=rospy.ServiceProxy('/startBench', Empty) 
##wait for benchmark stop service 
#rospy.wait_for_service('/stopBench') 
#stop=rospy.ServiceProxy('/stopBench', Empty)

It is necessary to uncomment these lines to wait until the start and stop services are active. Furthermore, this code creates two service proxies, start and stop, that can be called later to initiate and stop the evaluation of the pipe following.
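
Once uncommented, a minimal sketch of how the proxies can be wired into the key handling is shown below. The enter branch already appears in the barebone; mapping the space key to stop() is an assumption consistent with the next paragraph.

if c=='\n': ##enter: start the evaluation 
  start() 
  print "Benchmarking Started!" 
elif c==' ': ##space: stop the evaluation (assumed mapping) 
  stop() 
  print "Benchmarking Stopped!"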

Once uncommented, the benchmark can be started by pressing "enter" and stopped with "space". So, in order to measure the performance, follow these steps:

  • STEP 1: Execute the launch file for the scene:
roslaunch pipefollowing basic.launch 
roslaunch pipefollowing turns.launch 
roslaunch pipefollowing heights.launch
  • STEP 2: Execute the teleop code:
rosrun pipefollowing PF_teleop.py
  • STEP 3: Follow the pipe:

Press "enter" to start measuring. If everything worked as it should, a green line showing the perfect trajectory will appear. Follow the pipe, taking into account that the green line is the perfect trajectory while the red one is the trajectory the vehicle is actually following. Press "space" when finished.

Once the benchmark ends, results can be consulted with the script "genResults.sh" inside the src folder of the pipefollowing package. This script generates plots with the error evolution over the experiment time. It uses gnuplot and epstopdf, so it is necessary to install both to make it work.

The script produces two plots. ResultsError.pdf shows the error at each measured step over time, while resultsISE.pdf shows the accumulation of this error. The ISE (Integrated Squared Error) is a quality measure for path following and, as a consequence, the function we want to minimize. Both time and precision matter for it.
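
As a reference, assuming e(t) denotes the instantaneous error plotted in ResultsError.pdf, the accumulated value plotted in resultsISE.pdf corresponds to:

\mathrm{ISE}(T) = \int_{0}^{T} e(t)^{2}\,dt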


Take into account that each time a new benchmark is executed the results will be overwritten. If you want to keep the graph results you will need to rename them. If you want to keep the raw data, it is stored in ".dat" and ".data" files inside "~/.uwsim".

Dynamic

Once the problem has been solved kinematically, assigning velocities to the vehicle, it is time to solve it dynamically, using forces. Until this moment, when a velocity command was sent to the vehicle it started moving at that velocity regardless of its previous velocity. Unfortunately this is not realistic.

EXERCISE: Solve the previous problem but using the dynamic model of the vehicle. In order to do so it is only necessary to change the type of message sent. We will send force requests to the vehicle thrusters.

The vehicle used in this tutorial, girona500, has 5 thrusters. The first two thrusters point forward and provide forward/backward movement, and yaw rotation when different signs are sent to each one. The next two thrusters are used to move up and down and for pitch rotation. Finally, the last thruster allows the vehicle to move laterally. The image shows the thruster positions on the girona500 vehicle; note that the 3D model of the last thruster is missing.

Girona500Thrusters.png

In order to change the message sent it is necessary to change the following lines:

from geometry_msgs.msg import TwistStamped

The message type is now an array, so it needs to be changed to:

from std_msgs.msg import Float64MultiArray

The message created and the topic name need to be changed accordingly:

twist_topic="/g500/thrusters_input"
...
msg = Float64MultiArray()

In this message the data field needs to be filled. This field is an array with the thruster inputs previously described. Thruster forces range from -1 to 1. In order to assign it, a Python list must be created with a force for each thruster and then assigned to the field. Take into account that exactly 5 forces must be sent to the thrusters, even if one or more are 0.
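
As a minimal sketch, assuming the thruster ordering just described (two forward, two vertical, one lateral) and that "pub" is the publisher re-created with the new topic and message type, a small forward command could be filled like this; the 0.3 value is an arbitrary example within the allowed range:

##send a small equal forward force on the two horizontal thrusters 
##exactly 5 values, one per thruster: [forward, forward, vertical, vertical, lateral] 
msg = Float64MultiArray() 
msg.data = [0.3, 0.3, 0.0, 0.0, 0.0] 
pub.publish(msg)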

In order to test this part, a different launch file is required to configure the dynamic environment:

roslaunch pipefollowing dyn_basic.launch 
roslaunch pipefollowing dyn_turns.launch 
roslaunch pipefollowing dyn_heights.launch

Autonomous navigation

EXERCISE: Now that we are able to send commands to the vehicle, the next step is to make the vehicle move autonomously to a predefined point. For now, the points will be the input of the problem.

The barebone of this problem is in PF_waypoints.py. As we did with the previous problem, it is proposed to solve it kinematically first.

There are many different ways to solve this, but the solution suggested works like this:

While not in the last waypoint:
  Read the vehicle position.
  Compute the distance to the target with respect to the vehicle.
  Check if the vehicle reached the target:
    If the vehicle reached the goal, go to the next waypoint.
    If not, decide the vehicle velocity.

In order to read the vehicle position the TF library can be used. TF is a library useful to compute positions and rotations of one point with respect to another one. UWSim is continuously publishing the position of the vehicle with respect to the world. The first step is to import the TF library:

import tf

Then create a transform listener to read positions from TF:

listener = tf.TransformListener()

Ask the listener for the desired transform, in this case the vehicle position "/girona500" with respect to the world "/world":

try: 
  (trans,rot) = listener.lookupTransform('/world', '/girona500', rospy.Time(0))
except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException): 
  continue

As can be seen, the listener returns the translation "trans" and orientation "rot". Remember that the predefined waypoints are described with respect to the world, but it is necessary to transform them to the vehicle perspective. In order to do so, homogeneous matrices are used. These matrices make it easy to change the reference frame of the points. So the first step is to represent the translation and orientation as homogeneous matrices using the TF library:

wRv=tf.transformations.quaternion_matrix(rot) 
wTv=tf.transformations.translation_matrix(trans)

Take into account the notation: the orientation (R) of the vehicle (v) with respect to the world (w) is the world to vehicle rotation, wRv. In the same way, the world to vehicle translation (T) is wTv. Joining these two transforms with a dot product results in the world (w) to vehicle (v) matrix (M), wMv.

wMv=np.dot(wTv,wRv)

It is also necessary to represent the world to point (p) matrix wMp. Take into account that the point has no orientation, as it is not specified how the vehicle must follow the pipe; for this reason the dot product is not necessary and the translation matrix alone can be used. The transform we are looking for is the vehicle to point matrix vMp, so the computation needed is:

vMp = vMw · wMp = inverse(wMv) · wMp

In the TF Python library, tf.transformations.inverse_matrix(MATRIX) computes the inverse matrix. Finally, only the translation of the point with respect to the vehicle is necessary; it can be obtained using:

vTp=tf.transformations.translation_from_matrix(vMp)
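
Putting the previous pieces together, a minimal sketch of the whole transform chain could look like this; the waypoint values are hypothetical and expressed in the world frame:

import numpy as np 
import tf 
 
waypoint = [5.0, 2.0, 3.0] ##hypothetical waypoint in the world frame 
 
wRv = tf.transformations.quaternion_matrix(rot) ##world to vehicle rotation 
wTv = tf.transformations.translation_matrix(trans) ##world to vehicle translation 
wMv = np.dot(wTv, wRv) ##world to vehicle matrix 
 
wMp = tf.transformations.translation_matrix(waypoint) ##world to point matrix (no orientation) 
vMp = np.dot(tf.transformations.inverse_matrix(wMv), wMp) ##vehicle to point matrix 
vTp = tf.transformations.translation_from_matrix(vMp) ##point position with respect to the vehicle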

Using this result it is easy to decide whether the vehicle has reached the waypoint or not. The last step is deciding the vehicle velocity. The most common solution is to apply a velocity proportional to the distance to the target (a P controller), with a maximum limit, as in the sketch below.
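
A minimal sketch of such a saturated P controller using the vTp just computed; the gain, maximum velocity and threshold are arbitrary example values, "pub" is the velocity publisher created as in the teleop code, and nextWaypoint() is a hypothetical helper that advances to the following point:

gain = 0.5 ##proportional gain (example value) 
maxVelocity = 0.4 ##velocity saturation (example value) 
threshold = 0.2 ##distance below which the waypoint is considered reached 
 
distance = np.linalg.norm(vTp) 
if distance < threshold: 
  nextWaypoint() ##hypothetical helper: switch to the next waypoint 
else: 
  msg = TwistStamped() 
  msg.twist.linear.x = max(-maxVelocity, min(maxVelocity, gain * vTp[0])) 
  msg.twist.linear.y = max(-maxVelocity, min(maxVelocity, gain * vTp[1])) 
  msg.twist.linear.z = max(-maxVelocity, min(maxVelocity, gain * vTp[2])) 
  pub.publish(msg)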

Autoevaluation and dynamics

Once again it is possible to measure the performance using the benchmarking module as explained in the previous problem.

PipeFollowing.png

EXERCISE: Now that the kinematic problem is solved, it is convenient to try to solve the dynamic problem. In order to do so, it is necessary to change the message type to send force commands instead of velocities. Remember to execute the launch file for dynamic simulation.

Vision guided navigation

EXERCISE: At this point the vehicle is capable of moving autonomously to a desired target. The next problem is being able to detect the pipe and decide where to move the vehicle using this information.

The barebone of this problem is available in PF_vision.py. In this case it is highly recommended to start with the kinematic case. There is a wide variety of possibilities to solve this exercise, but the proposed solution is the following:

Read the image from the virtual camera sensor.
Binarize the image.
Find lines in the image.
Decide vehicle velocity depending on the position and orientation of the lines.

The barebone already includes a class named imageGrabber that reads the image from UWSim and shows it in a window. It is suggested to separate perception (reading sensors) from action (moving the vehicle). So it is proposed to let the class read and process the image, and to call methods such as getSize() to get information from the last processed image when deciding actions. The class callback will process the image as soon as it is received, while the main loop controls the vehicle.

The first step is binarizing the image to segment the pipe pixels from the rest. This can be done in the same image callback where the image is read, using the following OpenCV function (OpenCV is a library for image processing):

cv2.inRange(image, lower_green, upper_green)

This function receives an image and two numpy arrays ("np.array([0,0,0])") with the lower and upper values used to binarize the image. In other words, pixels in the interval between both arrays will be in one class and the rest in the other. The result is another image that can be seen with imshow. It is advisable to check that the image is correctly binarized along the whole pipe, as some regions have shadows and the pipe colors change slightly. The previously developed pipe following code can be used to move the vehicle while checking the binarization.
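
A minimal sketch of the binarization inside the image callback, assuming "cv_image" is the image received in BGR; the green bounds are example values that will need tuning for the scene:

import cv2 
import numpy as np 
 
lower_green = np.array([0, 80, 0]) ##example lower bound (B, G, R), tune for the scene 
upper_green = np.array([80, 255, 80]) ##example upper bound (B, G, R), tune for the scene 
 
bin_image = cv2.inRange(cv_image, lower_green, upper_green) ##white where the pixel falls inside the interval 
cv2.imshow("Binarized", bin_image) 
cv2.waitKey(3)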

If it is not possible to segment the image correctly in RGB colorspace it may be convenient to change to HSV colorspace with:

hsv = cv2.cvtColor(cv_image, cv2.COLOR_BGR2HSV)

The next step is detecting edges in the binary image. If the previous step worked as it should, only two lines should be visible. The proposed function for this step uses the Canny algorithm:

edges = cv2.Canny(bin_image,50,150,apertureSize = 3)

Once the edges are available, the next step is finding lines in them. In order to do so, the Hough line detection algorithm may be used:

lines = cv2.HoughLinesP(edges,1,np.pi/180,50,60,20)

The optimal parameters for these two functions may change depending on the binarization step. The meaning of each parameter can be consulted on the OpenCV webpage. It is also recommended to draw the lines over the original image and show it in a window to check that everything is working as it should.

if lines is not None: 
  for x1,y1,x2,y2 in lines[0]: 
    cv2.line(cv_image,(x1,y1),(x2,y2),(0,255,0),2) ##draw each detected line in green 
cv2.imshow("Image window", cv_image) 
cv2.waitKey(3)

Spending some time tweaking parameters and checking that the line detection works along the whole pipe is worthwhile, as the vehicle movement depends on this detection. Once line detection is completed, the lines should be stored so they can be retrieved from a method when necessary. At this point the perception part is already implemented; the next step is to decide the vehicle movements using the result. Once again there are many possibilities to decide the best move depending on the detected lines; a simple solution is proposed.


The main idea is to focus on vertical lines and move forward while the detection finds them. If the Hough transform does not find any vertical lines, the vehicle is probably facing a 90 degree turn. To decide whether a line (x1,y1) (x2,y2) is vertical or not, arctan(dx/dy) can be used: if the result is 0 or close to it, the line is vertical. The lines should be grouped into vertical and non-vertical lines. The next step is to compute the centroid of each group of lines. When the vehicle is using vertical lines to move forward, the centroid can be used for two purposes:

  • Move laterally the vehicle to keep the pipe in the middle of the image.
  • Check if the pipe is ending when the centroid is in the lower part of the image.

The centroid of the non-vertical lines is useful when only non-vertical lines remain and the vehicle needs to turn: if the centroid is on the left side of the image, the vehicle will turn to face it. In addition to the centroids, it is also interesting to measure the thickness of the pipe in pixels, to control the depth of the vehicle. A sketch of the classification and centroid computation is shown below.
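
A minimal sketch of the classification and centroid computation, assuming "lines" comes from the HoughLinesP call above and using an arbitrary 0.3 radian threshold:

import math 
 
vertical = [] 
nonVertical = [] 
for x1, y1, x2, y2 in lines[0]: 
  ##arctan(dx/dy) close to 0 means the line is (almost) vertical in the image 
  angle = math.atan2(abs(x2 - x1), abs(y2 - y1)) 
  if angle < 0.3: 
    vertical.append((x1, y1, x2, y2)) 
  else: 
    nonVertical.append((x1, y1, x2, y2)) 
 
def centroid(lineGroup): 
  ##average of all the endpoints of the lines in the group (assumes the group is not empty) 
  xs = [x for (x1, y1, x2, y2) in lineGroup for x in (x1, x2)] 
  ys = [y for (x1, y1, x2, y2) in lineGroup for y in (y1, y2)] 
  return (sum(xs) / float(len(xs)), sum(ys) / float(len(ys)))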

To sum up, the vehicle movement can be decided as follows (a small sketch is given after the list):

  • Forward/backward: Vertical line centroid difference with respect to image center (top down).
  • Lateral movements: Vertical line centroid difference with respect to image center (left right).
  • Up/down: Difference in pixels with respect to the pipe optimal thickness.
  • Yaw turn: Non-vertical line centroid position with respect to the image center (left-right).
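
A minimal sketch of turning those four rules into a kinematic command; all the gains, the image size and optimalThickness are example values, and getVerticalCentroid(), getNonVerticalCentroid(), getThickness() and hasVerticalLines() are hypothetical methods added to the imageGrabber class (here instantiated as "grabber"):

msg = TwistStamped() 
if grabber.hasVerticalLines(): 
  cx, cy = grabber.getVerticalCentroid() 
  msg.twist.linear.x = 0.002 * (imageHeight / 2 - cy) ##forward/backward from top-down offset 
  msg.twist.linear.y = 0.002 * (cx - imageWidth / 2) ##lateral from left-right offset 
  msg.twist.linear.z = 0.01 * (grabber.getThickness() - optimalThickness) ##up/down from pipe thickness 
else: 
  nx, ny = grabber.getNonVerticalCentroid() 
  msg.twist.angular.z = 0.004 * (nx - imageWidth / 2) ##yaw towards the turn 
pub.publish(msg) 
##signs and gains are guesses and will likely need flipping or tuning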

EXERCISE: Finally, it is also possible to measure the results using the benchmarking module and to solve the dynamic problem. In this case solving the dynamic problem is more difficult because of the need to keep the pipe in the camera view, but it is possible by sending small forces to the thrusters.