Thursday, August 7, 2014

An image-processing robot for RoboCup Junior

Helen: Today we’re delighted to have a guest post from 17-year-old student Arne Baeyens, aka Robotanicus, who has form in designing prize-winning robots. His latest, designed for the line-following challenge of a local competition, is rather impressive. Over to Arne…

Two months ago, on the 24th of May, I participated in the RoboCup Junior Flanders competition, in the category 'Advanced Rescue'. With a Raspberry Pi, of course – I used a Model B running Raspbian. Instead of using reflectance sensors to determine the position of the line, I used the Pi Camera to capture a video stream and applied computer vision algorithms to follow the line. My robot wasn't the fastest, but I took third place.

A short video of the robot in action:

In this category of the RCJ competition, the robot has to follow a black line and avoid obstacles. T-junctions are marked with green fields that indicate the shortest trajectory. The final goal is to push a can out of the green field.

[Three photos of the RPi line follower robot]

This is not my first robot for the RCJ competition. In 2013 I won the competition with a robot that used the Dwengo board as its control unit, together with reflectance and home-made colour sensors. The Dwengo board is built around the popular PIC18F4550 microcontroller and offers, among other things, a motor driver, a 2×16-character display and a big 40-pin extension connector. Like the RPi, the Dwengo board is designed for educational purposes, with projects in Argentina and India.

As the Dwengo board is a good companion for the Raspberry Pi, I decided to combine both boards in my new robot. While the Pi does high-level image processing, the microcontroller controls the robot.

The Raspberry Pi was programmed in C++ using the OpenCV libraries, the wiringPi library (from Gordon Henderson) and the RaspiCam OpenCV interface library (from Pierre Raufast, improved by Emil Valkov). I overclocked the Pi to 1 GHz to get a frame rate of 12 to 14 fps.
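For readers who want to try something similar, a minimal capture-and-timing sketch could look like the code below. It uses cv::VideoCapture as a stand-in for the RaspiCam OpenCV interface, and the resolution is just an example:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // cv::VideoCapture stands in here for the RaspiCam OpenCV interface used on the robot.
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;
        cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);    // small frames keep the frame rate up
        cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);

        cv::Mat frame;
        int frames = 0;
        int64 start = cv::getTickCount();
        while (frames < 100) {                    // grab 100 frames and report the average fps
            cap >> frame;
            if (frame.empty()) break;
            ++frames;
        }
        double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();
        std::cout << frames / seconds << " fps" << std::endl;
        return 0;
    }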

Using a camera has some big advantages: first of all, you don't have a bunch of sensors mounted close to the ground, where they interfere with obstacles and are disturbed by irregularities in the surface. The second benefit is that you can see what is in front of the robot without having to build a swinging sensor arm. So you have information not only about the actual position of the robot above the line, but also about the position of the line ahead, which allows the curvature of the line to be calculated. In short, following the line is much more controllable. And by using edge detection rather than greyscale thresholding, the program is virtually immune to shadows and grey zones in the image.

If the line had had fewer hairpin bends and I had had a bit more time, I would have implemented a speed-regulating algorithm based on the curvature of the line. This is surely something that would improve the performance of the robot.
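Purely as an illustration of what I have in mind—none of this ran on the robot, and the name and gain value are made up—such a regulator could be as simple as:

    #include <algorithm>
    #include <cmath>

    // Hypothetical speed scheduling from the curvature of the line ahead:
    // a straight line gives vMax, a tight hairpin something close to vMin.
    float speedFromCurvature(float curvature, float vMax, float vMin, float gain) {
        float v = vMax / (1.0f + gain * std::fabs(curvature));
        return std::max(v, vMin);
    }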

I also used the camera to detect and track the green direction fields at T-junctions, where the robot has to take the correct direction. I used a simple colour blob tracking algorithm for this.

A short video of what the robot thinks:

Please note that in reality the robot follows the line a little more slowly than it appears here.

Different steps of the image processing

Image acquired by the camera (with some lines and points already added):
Image acquired by the camera

The RPi converts the colour image to a greyscale image. Then the pixel values on a horizontal line in the image are extracted and put into an array. This array is visualized by putting the values in a graph (also with OpenCV):
Visualizing pixel values along a line

From the first array, a second is calculated by taking the difference between two successive values. In other words, we calculate the derivative:
Calculating the derivative
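In code, these two steps could look roughly like the sketch below; the row index, scan width and names are illustrative, not my original code:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Convert to greyscale, copy the pixel values of one image row into an array
    // and take the discrete derivative (difference of successive values).
    std::vector<int> scanLineDerivative(const cv::Mat &bgr, int row, int x0, int width) {
        cv::Mat grey;
        cv::cvtColor(bgr, grey, CV_BGR2GRAY);

        std::vector<int> values(width);
        for (int i = 0; i < width; ++i)
            values[i] = grey.at<uchar>(row, x0 + i);

        std::vector<int> derivative(width - 1);
        for (int i = 0; i + 1 < width; ++i)
            derivative[i] = values[i + 1] - values[i];   // edges show up as large positive or negative steps

        return derivative;
    }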

A loop then searches for the highest and lowest values in the array. To obtain the horizontal position of the line, the two positions of these extremes—on the horizontal x axis in the graphed image—are averaged. This position is kept in memory for the next horizontal scan on a new image, which means that the scan line does not have to span the whole image but only about a third of it: the scan line moves horizontally so that its centre stays roughly above the line.
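A possible sketch of that search (again with illustrative names): the strongest negative step is the left, light-to-dark edge of the line, the strongest positive step is the right edge, and their average gives the line centre.

    #include <vector>

    // Find the positions of the minimum and maximum in the derivative array
    // and average them to get the x position of the line centre.
    int findLineCentre(const std::vector<int> &derivative, int x0) {
        int minIdx = 0, maxIdx = 0;
        for (int i = 1; i < (int)derivative.size(); ++i) {
            if (derivative[i] < derivative[minIdx]) minIdx = i;
            if (derivative[i] > derivative[maxIdx]) maxIdx = i;
        }
        return x0 + (minIdx + maxIdx) / 2;   // absolute x coordinate in the image
    }

The returned centre then becomes the midpoint of the next, roughly one-third-of-the-image-wide scan segment.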

But this is not enough for accurate tracking. Starting from the calculated line position, circular scan lines that follow the line are constructed, each using the same method (but with considerably more trigonometry, as the scan lines are curved). For the second circle, not only the line position but also the line angle is used. Because this is all wrapped in functions, adding a circle is a matter of two short lines of code.
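As a rough sketch of what such a curved scan could look like (the centre, radius and angular span are placeholders here; in the real code they follow from the previously found line position and angle):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Sample grey values along a circle segment; the resulting array is then
    // treated exactly like a straight scan line (derivative, extrema, averaging).
    std::vector<int> scanAlongArc(const cv::Mat &grey, cv::Point2f centre, float radius,
                                  float startAngle, float endAngle, int samples) {
        std::vector<int> values;
        values.reserve(samples);
        for (int i = 0; i < samples; ++i) {
            float a = startAngle + (endAngle - startAngle) * i / (samples - 1);
            int x = cvRound(centre.x + radius * std::cos(a));
            int y = cvRound(centre.y + radius * std::sin(a));
            if (x >= 0 && y >= 0 && x < grey.cols && y < grey.rows)
                values.push_back(grey.at<uchar>(y, x));
        }
        return values;
    }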

The colour tracking is done by colour conversion to HSV, thresholding and then blob tracking, as explained in this excellent video. The colour tracking slows the line following down by a few fps, but this is acceptable.

[Images: HSV conversion and thresholded result]
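A minimal sketch of that chain, assuming the frame is already available as a BGR cv::Mat (the HSV bounds and pixel-count threshold are made up and would need tuning to the real mat):

    #include <opencv2/opencv.hpp>

    // Convert to HSV, threshold on a green range and take the centroid of the
    // resulting blob with image moments.
    bool findGreenField(const cv::Mat &bgr, cv::Point &centre) {
        cv::Mat hsv, mask;
        cv::cvtColor(bgr, hsv, CV_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(40, 80, 50), cv::Scalar(85, 255, 255), mask);

        cv::Moments m = cv::moments(mask, true);   // treat the mask as a binary image
        if (m.m00 < 200) return false;             // too few green pixels: no field in view
        centre = cv::Point((int)(m.m10 / m.m00), (int)(m.m01 / m.m00));
        return true;
    }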

Finally, as seen in the video, all the scan lines and some info points are plotted on the input image so we can see what the robot 'thinks'.

And then?

After the Raspberry Pi has found the line, it sends the position data and commands at 115.2 kbps over the hardware serial port to the Dwengo microcontroller board. The Dwengo board does some additional calculations, like taking the square root of the proportional error and squaring the 'integral error' (the curvature of the line). I also used a serial interrupt and made the serial communication as bug-free as possible by receiving each character separately, so the program does not wait for the next character while inside the serial interrupt.
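On the Pi side, the link can be handled with wiringPi's serial helpers. The sketch below is a simplification of the real protocol: the 'P' command byte is invented, and the actual code also uses the answer character described next to pace the stream.

    #include <wiringSerial.h>

    int main() {
        // Hardware UART of the Model B; 115200 baud matches the Dwengo side.
        int fd = serialOpen("/dev/ttyAMA0", 115200);
        if (fd < 0) return 1;

        int linePosition = 42;                    // example value from the vision code
        serialPutchar(fd, 'P');                   // hypothetical 'position follows' command
        serialPutchar(fd, (unsigned char)linePosition);

        while (serialDataAvail(fd) == 0)          // wait for the answer character
            ;
        int answer = serialGetchar(fd);
        (void)answer;

        serialClose(fd);
        return 0;
    }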

The Dwengo board sends back an answer character to control the data stream. The microcontroller also reads the analogue output of the Sharp IR long-range sensor to detect obstacles and scan for the container.

In short, the microcontroller controls the robot, and the Raspberry Pi does an excellent job running the CPU-intensive line-following program.

There's a post on the forum with a more detailed technical explanation – but you will find the most important steps below.

Electrical wiring
Both devices are interconnected by two small boards—one attaches to the RPi and the other to the Dwengo board—that are joined by a right-angle header connection. The first board converts the logic levels with some resistors (the Dwengo board runs on 5V); the latter board also has two DC jacks with diodes in parallel for power input to the RPi. To power the Pi, I used a Turnigy UBEC that delivers a stable 5.25V and feeds it into the Pi through the micro USB connector. This gives a bit more protection to the sensitive Pi. As the camera image was a bit distorted, I added a 470 µF capacitor to smooth things out. This helped. Even though the whole Pi got hot, the UBEC stayed cold. The power input was between 600 and 700mA at around 8.2 volts.

Grippers
Last year, I almost missed first place because the robot only just pushed the can out of the field. Not a second time! With this in mind, I constructed two 14cm-long arms that can be swung open by two 9g servos. With the two grippers opened, the robot spans almost 40 centimetres. Despite this, the robot managed—to everyone's annoyance—'to take its time before doing its job', as can be seen in the video.

Building the robot platform
To build the robot platform I followed the same approach as the year before (link, in Dutch). I made a design in SketchUp, converted it to a 2D vector drawing and finally laser-cut it at FabLab Leuven. However, the new robot platform is completely different in design. Last year, I made a 'riding box' by taking almost the maximum dimensions and mounting the electronics somewhere on or in it.

This time, I took a different approach. Instead of using an outer shell (like insects have), I made a design that supports and covers the parts only where necessary. The result of this is not only that the robot looks much better, but also that the different components are much easier to mount and that there is more space for extensions and extra sensors. The design files can be found here: Robot RoboCup Junior – FabLab Leuven.

3D renders in SketchUp:

[Two SketchUp renders of the robot platform]

On the day of the RCJ competition I had some bad luck, as there wasn't enough light in the competition room. The shutter time of the camera became much longer, and as a consequence the robot had much more difficulty following sharp bends in the line. However, this problem did not affect the final outcome of the competition.

Maybe I should have mounted some LEDs to illuminate the line…
