Add navigation to a low-cost robot

By Danny Staple

Make a robot that sees with computer vision and take your first steps in OpenCV using a moving robot. Learn to add navigation to a low-cost Raspberry Pi-powered robot with The MagPi's step-by-step guide.

If you completed the steps in the last low-cost robot-building article, you'll have added a camera to your Raspberry Pi-powered lunchbox robot. This enabled your robot to take photos and provided a robot's-eye view of the world. Now you'll take this much further, using the camera to let the robot make decisions about the world.

In this tutorial we'll look at how to make an environment for testing computer vision, and how to use OpenCV to condition images, removing noise and simplifying them. You'll learn how to extract data from an image, check its content, and make the robot turn based on what it sees.

For instructions on how to build your lunchbox robot, see the earlier articles in this series.

Lunchbox robot in our colour-controlled test area. The robot's camera sees which colour wall is in front of it. The robot uses this information to choose which way to turn


1. A test course

For trying out behaviours, robot builders make test courses. The goal is to create an environment containing only the specific features the robot needs to detect. Find a floor area in a neutral colour, ideally somewhere white or grey without patterns or strong colours.

Make walls using flat colours such as red, blue, green and yellow. A toy box or coloured card work for this. Use white or neutral background walls. Cameras take better pictures with bright and consistent lighting. In good lighting, colours are clearer, making processing easier. Good options are daylight or bright white indoor lighting. Avoid tinted or patchy lighting.

Top tip: Lighting matters

Lighting should be neutral in colour, bright and diffused. Spotlights, low light and coloured lights cause problems with visual processing.

2. Installation

This step may take some time. Plug a mains-powered USB adapter into the robot’s Raspberry Pi before proceeding. Before installing the packages, make sure Raspbian is up to date with:

sudo apt update --allow-releaseinfo-change

There are some system packages needed for running the Python libraries:

sudo apt install libcairo-gobject2 libwebp6 libilmbase23 libgdk-pixbuf2.0-0 libjasper1 libpango-1.0-0 libavcodec58 libavutil56 libcairo2 libswscale5 libatk1.0-0 libgtk-3-0 libtiff5 libpangocairo-1.0-0 libavformat58 libopenexr23 libgfortran5 libatlas3-base

Finally, install the Python packages needed for OpenCV, NumPy, and picamera:

sudo pip3 install opencv-python-headless numpy imutils "picamera[array]"

3. Set up the camera

The function setup_camera in the file find_contours.py gets the camera ready. For quick processing time, and to simplify the image, line 11 sets a camera resolution of 128×128.

Our robot's camera is mounted upside down, so the rotation is set to 180 degrees; using the camera's own features saves processing on the Raspberry Pi. Line 14 creates capture_buffer, space to store image data from the camera. Lines 15 and 16 start the camera with two seconds of warm-up time.
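As a rough guide, a setup_camera function along these lines would do the job. This is a hedged sketch using the picamera library; the exact code in find_contours.py may differ:

import time
import numpy as np
from picamera import PiCamera

def setup_camera(resolution=128):
    # Prepare the Pi camera and a buffer to capture frames into
    camera = PiCamera()
    camera.resolution = (resolution, resolution)  # small frames keep processing quick
    camera.rotation = 180                         # our camera is mounted upside down
    # Space to store one frame of raw BGR image data from the camera
    capture_buffer = np.empty((resolution, resolution, 3), dtype=np.uint8)
    time.sleep(2)                                 # two seconds of warm-up time
    return camera, capture_buffer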

With the robot in front of a coloured wall, run the following commands:

export LD_PRELOAD=/usr/lib/arm-linux-gnueabihf/libatomic.so.1

python3 find_contours.py

This code saves the camera's captured image to the file original.png.

4. A little colour theory

Computers store colours as RGB or BGR values, for red, green, and blue components. In find_contours.py, on line 21, we convert the image from BGR to the HSV colour system, which is better suited to this kind of image processing.

Figure 1 shows how HSV works. Saturation measures how vivid or intense a colour is, from washed-out white or grey at low values to fully vivid at high values. Hue indicates the colour itself – red, orange, yellow, green, blue, and so on.

Transforming the image into HSV – Hue, Saturation, and Value – lets the robot pick out colour intensity (saturation) and then find its tint (hue), while mostly ignoring the colour brightness (value).
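In code, the conversion is a single OpenCV call. The snippet below is illustrative rather than the exact line from find_contours.py, and uses a tiny stand-in frame so it runs on its own:

import cv2
import numpy as np

# A stand-in frame; on the robot this comes from the camera buffer
frame = np.zeros((128, 128, 3), dtype=np.uint8)
frame[:] = (0, 0, 255)                        # pure red, in BGR order
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # convert the whole image to HSV
print(hsv[0, 0])                              # [hue, saturation, value] -> [0 255 255]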

Figure 1: The HSV colour space

5. Image processing pipelines

The code processes images from the camera through a series of transformations to find the colour of a wall. Each transform is a small step; for example, finding all the pixels that match certain criteria, or making an outline of an area. Later stages use the transformed output of earlier ones. The outputs are joined to other inputs, forming a pipeline.

The diagram in Figure 2 shows where data flows from one process to another, making it easier to understand what is going on. When drawing your own, use images from real outputs, boxes for stages, and lines to show the flow of data.

Figure 2: It takes a few steps for visual processing, with a number of transformations. A pipeline is a useful way to visualise this

6. Thresholding or masking

Thresholding tests whether each pixel's values fall within a range. Line 22 of find_contours.py uses cv2.inRange for this. It makes a new binary image, storing white (255) where a pixel's values fall between the lower and upper limits, and black (0) everywhere else.

The range in find_contours.py allows all hue values, filters for saturation values over 140 to keep only vivid colours, and restricts the value component to levels brighter than 30. The output file masked.png shows the result, with coloured walls in white (see Figure 3 for an example).

The S and V values of the lower bound on line 22 can be adjusted up if too much area is matching, or down if too little is.
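As a sketch, the thresholding step looks something like this; the bounds follow the values described above, and the tiny test image is only there so the example runs on its own:

import cv2
import numpy as np

# Stand-in HSV image with one vivid patch; on the robot this is the converted frame
hsv = np.zeros((128, 128, 3), dtype=np.uint8)
hsv[32:96, 32:96] = (30, 200, 200)

lower = (0, 140, 30)      # any hue, saturation over 140, value over 30
upper = (179, 255, 255)   # up to the maximum hue, saturation, and value
masked = cv2.inRange(hsv, lower, upper)   # 255 where a pixel is in range, 0 elsewhere
cv2.imwrite("masked.png", masked)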

Figure 3: Example of a masked or thresholded image. Pixels are only on (white) or off (black)

7. Finding contours

OpenCV can inspect a black and white image and find outlines for different areas. It calls these outlines contours. In find_contours.py, lines 28 and 29 obtain a list of contours. Each contour is a list of points describing the outline.

On line 30, the contours are sorted by area. By finding the first contour in this list (the biggest), the code has likely found the most significant coloured area.

On line 48, the contour is drawn out to a debug image with_contours.png. Run the code and download the image to see how the contours look (see Figure 4 for an example).
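The sketch below shows the general shape of these steps, again with stand-in images so it runs on its own; it assumes OpenCV 4, where findContours returns two values:

import cv2
import numpy as np

# Stand-in mask with one white square, and a blank 'original' frame to draw on
masked = np.zeros((128, 128), dtype=np.uint8)
masked[32:96, 32:96] = 255
original = np.zeros((128, 128, 3), dtype=np.uint8)

contours, _ = cv2.findContours(masked, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)  # biggest area first
biggest = contours[0]
cv2.drawContours(original, [biggest], -1, (0, 255, 0), 2)       # debug overlay
cv2.imwrite("with_contours.png", original)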

Figure 4: This is the original image, after a contour has been found from the threshold image and drawn back on it

8. Finding the colour

For this code to choose by colour, it needs the hue from the middle of the contour. It takes this colour from the HSV version of the original picture. The robot uses OpenCV moments to find the middle of a contour.

By dividing the sum of the X coordinates (m10) by the pixel count (m00), the code obtains the average X coordinate, the centre in X. Dividing m01 by m00 gives the average and centre of the Y coordinates in the same way. The middle of the contour comes from combining these.

The code on line 36 of find_contours.py extracts the colour from the hsv output at the middle of the contour.
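A minimal sketch of the moments calculation, using a made-up square contour in place of the one found by the pipeline:

import cv2
import numpy as np

# A fake square contour and blank HSV image stand in for the real pipeline outputs
biggest = np.array([[[32, 32]], [[96, 32]], [[96, 96]], [[32, 96]]], dtype=np.int32)
hsv = np.zeros((128, 128, 3), dtype=np.uint8)

moments = cv2.moments(biggest)
centre_x = int(moments["m10"] / moments["m00"])   # average X coordinate
centre_y = int(moments["m01"] / moments["m00"])   # average Y coordinate
hue, saturation, value = hsv[centre_y, centre_x]  # colour at the contour's middle
print(centre_x, centre_y)                         # 64 64 for this square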

9. Using the pipeline in a robot

The get_saturated_colours function is imported from find_contours.py, letting camera_nav.py reuse the pipeline from already tested code.

A continuous stream of images is needed to use the pipeline to drive the robot. Line 8 of camera_nav.py creates this stream and sets up the main loop as a for loop that runs forever, delivering a new image each time; line 9 extracts the image data.

The main loop puts the image through the pipeline and uses the output to determine if the robot turns right, left, or goes forward. The camera’s image rate sets the timing.

The colour returned by get_saturated_colours is in HSV.
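A rough outline of camera_nav.py's main loop, assuming the PiRGBArray helper from picamera (installed via picamera[array]); the real file's details may differ:

from picamera import PiCamera
from picamera.array import PiRGBArray
from find_contours import get_saturated_colours

camera = PiCamera()
camera.resolution = (128, 128)
camera.rotation = 180
capture_buffer = PiRGBArray(camera, size=(128, 128))

# capture_continuous yields a frame on every pass, so this for loop runs forever
for frame in camera.capture_continuous(capture_buffer, format="bgr", use_video_port=True):
    hue, saturation, value = get_saturated_colours(frame.array)  # run the pipeline
    # ...decide here whether to turn left, turn right, or drive forward...
    capture_buffer.truncate(0)  # reset the buffer ready for the next frame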

10. Matching the colour

The camera_nav.py code uses the hue component from get_saturated_colours. OpenCV stores a hue value as degrees divided by 2 to fit into 8 bits (up to 255). Figure 5 shows a colour wheel with hue values in degrees and OpenCV values.

Figure 5: A hue colour wheel is handy for looking up colours. The figures below the degrees show the OpenCV values

The code in camera_nav.py matches a yellow range on line 12, and a blue range on line 15, printing the matched colour and turning the robot. By setting up a series of walls of different colours, the robot can now navigate by wall colours. Expect to change these ranges for different test areas.
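Inside the main loop, the decision might look something like this sketch; the hue ranges are illustrative and will need tuning for your own walls and lighting:

def choose_turn(hue):
    # OpenCV stores hue as degrees / 2, so yellow (about 60 degrees) sits near 30
    # and blue (about 240 degrees) sits near 120
    if 25 <= hue <= 35:
        print("Matched yellow - turning left")
        return "left"
    if 110 <= hue <= 130:
        print("Matched blue - turning right")
        return "right"
    return "forward"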

Ensure the robot is on battery power and in the test course before running this.

Extending the pipeline leads to detecting edges and finding the angle of the horizon. This could be used to line a robot up with a wall

11. Improving robot vision

The find_contours.py code is a simple demonstration of computer vision, and it's easy to confuse. Averaging the colour across the area of the image under the contour, instead of sampling a single centre pixel, would make it more stable.

The code could be combined with distance sensors, so that only walls close enough are detected. Encoders or an inertial measurement unit (IMU) could be added to make turns more precise.

Advanced techniques such as Canny Edge Detection with HoughLines could pick out the horizon, determining the angle and distance, so the robot could line up with a wall. OpenCV is capable of face detection and even has machine learning and neural network modules.
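As a taste of that, a Canny-plus-HoughLinesP sketch on a grey-scale frame looks roughly like this; the thresholds are only starting points and the test image is a stand-in:

import cv2
import numpy as np

# Stand-in grey-scale frame; on the robot, convert the captured image with cv2.cvtColor
grey = np.zeros((128, 128), dtype=np.uint8)
grey[64:, :] = 200                      # a bright 'floor' below a dark 'wall'

edges = cv2.Canny(grey, 50, 150)        # find edges in the image
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]        # end points of one detected line
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    print("Line angle:", angle)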

Top tip: Reduce background clutter

A cluttered background causes the robot to detect random things. Neutral backgrounds without ‘noise’ make this easier to test.

12. Further reading

Robot vision is a significant area of study in robotics, and this article has barely scratched the surface. It’s one of the more rewarding and exciting spaces of robotics, worthy of further reading.

The PyImageSearch site is a superb resource to learn more about computer vision and dig further into detecting different attributes from an image.

Learn Robotics Programming, the book by this article's author, Danny Staple, has a section on computer vision, building face- and object-following behaviours, and casting the camera and pipeline stages to a mobile phone browser to view in real time.
