Follow Line (Racing Car)

Introduction.
As our first project, we will develop the software (in Python) that drives a racing car to complete a full lap around a circuit as quickly as possible.
The vehicle in the simulation is equipped with a front camera, which will be our only source of information about its environment. The track has a red line painted along its middle, which will guide our way around it (hence the name Follow Line).


Our script will be based on a reactive control approach: it will run the same function in a loop, 80 times per second. On each iteration, the function gets an image from the camera, processes it, and makes a decision depending on the situation. Doesn't seem such a complex task when you put it like this, now, does it?
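To make that concrete, here is a minimal sketch of such a reactive loop. The getImage(), sendV() and sendW() calls are the ones we will use throughout this post; car stands in for the object that owns them in the actual exercise, and filter_red_line() and decide() are hypothetical helpers for the processing and decision steps described in the next sections.

import time

LOOP_HZ = 80  # iterations per second, as described above

def run(car):
    period = 1.0 / LOOP_HZ
    while True:
        image = car.getImage()            # sense: grab the latest camera frame
        binary = filter_red_line(image)   # process: isolate the red line (hypothetical helper, see below)
        v, w = decide(binary)             # decide: forward and steering speeds (hypothetical helper)
        car.motors.sendV(v)               # act: forward speed
        car.motors.sendW(w)               # act: steering (angular) speed
        time.sleep(period)                # keep roughly 80 iterations per second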


Perception and filtering.
We have our car with its hardware and software, and we have a circuit. The camera attached to the front of our car will provide our Python script with the sensory information it requires to make decisions. However, cameras output overwhelming amounts of information, so we need to process it before making any decisions.
The cv2 module from the OpenCV Python library is well-suited for this purpose: it offers a variety of digital image processing methods that will be very helpful.

We get an image from the camera by calling self.getImage(). The image returned by this method is a 3-dimensional array with vertical, horizontal and RGB dimensions (height × width × 3 colour channels). It looks something like this:




Since our car is meant to follow that red line you can see in the picture (ignoring everything else), we need to filter our input image to isolate the line's red hue. Under different illumination, the line's intensity and saturation might change, so converting our RGB image to HSV (cv2.cvtColor()) before filtering will make our software more robust. Our filtered image will be a binary image, containing only those pixels that fall within the bounds of the box in HSV space that we define beforehand (cv2.inRange()). Here's an example:



Notice how everything but the red line was turned black? This binary image has just the information we need and nothing else. Now this is something we can accurately work with. 
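Sketched in code, the filtering step could look like this; the HSV bounds are placeholder values that need tuning for the simulator's lighting.

import cv2
import numpy as np

def filter_red_line(image):
    # Work in HSV so changes in brightness affect the filter less.
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    # Placeholder bounds for a strong red hue; adjust until only the line survives.
    lower = np.array([0, 100, 100])
    upper = np.array([10, 255, 255])
    # Pixels inside the HSV box become 255 (white), everything else 0 (black).
    return cv2.inRange(hsv, lower, upper)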


Information gleaning and moving.
Circuits have curves, and we don't want our car to collide with the walls (duh!). We must devise a way for our software to make the car turn either right or left upon reaching a curve. As a first approach, I thought of keeping track of the white pixels on the left half of the image and on the right half. If the car is on the left side of the line, there will be considerably more white pixels on the right half of the image, and vice versa. This way we can tell whether our car is centred or not, and to which side it has deviated.

With a couple of nested 'for' loops we can easily go through every single pixel of our binary image, counting the white ones. But we aren't interested in the whole white area: the few rows around the centre of the image are best at telling how far our car has deviated, so we will check only those rows.

The difference in white pixel counts will tell us which side we are on. The steering speed of our car should be proportional to how far it has deviated, so we will compute it from this difference in white pixels, multiplied by a scalar. Of course, our car must move forward as well, but we'll keep that forward speed constant for now.
We then make the calls for the methods to move our car: 
self.motors.sendW(steerSpeed) and self.motors.sendV(forwardSpeed).
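Putting this first approach together, a sketch might look like the following. The gain and the forward speed are made-up values, and the sign of the steering command depends on the simulator's convention.

KP = 0.005          # proportional gain (placeholder, tune it)
FORWARD_SPEED = 4.0 # constant forward speed for now

def decide(binary):
    height, width = binary.shape
    # Count white pixels on each half, but only on a handful of rows around the vertical centre.
    left = right = 0
    for y in range(height // 2 - 2, height // 2 + 3):
        for x in range(width):
            if binary[y, x] > 0:
                if x < width // 2:
                    left += 1
                else:
                    right += 1
    # Positive error: more line on the left, so steer towards the left (flip the sign if your convention differs).
    error = left - right
    steer_speed = KP * error
    return FORWARD_SPEED, steer_speed

The loop from the first sketch would then pass these two values on to sendV() and sendW().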
Here's a short video so you can see it yourself:




Locating the line.

Our car starts the race perfectly positioned at the starting line, facing in the right direction. But what if it, somehow, gets lost? There must be a mechanism that allows our car to locate the line so it can start following it again.

Losing the line means there are no white pixels in our view. Seems easy, right? Whenever the total count of white pixels is less than, let's say, 50, we make our car look for the line (there might always be some random noise pixels that survive our red filter, so you can't just assume there will be 0 white pixels in the image). Looking for the line translates to turning constantly and moving slightly backwards (just in case we've crashed into a wall) until we find the line again.
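A sketch of that recovery behaviour, with made-up threshold and speeds:

MIN_WHITE_PIXELS = 50  # below this we assume the line is lost
SEARCH_TURN = 1.0      # constant steering speed while searching
SEARCH_BACK = -0.3     # slight reverse, in case we are stuck against a wall

def search_if_lost(car, binary):
    white = int((binary > 0).sum())
    if white < MIN_WHITE_PIXELS:
        car.motors.sendV(SEARCH_BACK)   # back up a little
        car.motors.sendW(SEARCH_TURN)   # keep turning until the line shows up again
        return True                     # still searching
    return False                        # line visible again: resume normal control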


Improving performance: Derivative Control.
So far, so good. The car managed to complete a lap in under two minutes, which is a great starting point. Still, our work doesn't end here. We want our car to be the fastest ever, so we increase its speed noticeably. Boom! The car just crashed into a wall. Perhaps increasing its forward speed renders our steering speed helpless (spoiler: it does).

So, in order to increase its forward speed we must give it a greater steering speed first. Cool, let's just do it then (spoiler: it's not that simple). By carelessly increasing its steering speed we've only made things worse! The car winds from side to side, kind of like a meandering river. Let me just show you:




So we think for a while and come to a conclusion. We need our car to make smoother turns instead of the sharp, oscillating turns shown in the video. This is achieved with a common technique in robotics referred to as Derivative Control.

We have already implemented what is known as Proportional Control. Derivative Control goes one step further in correcting the error. How does it work? It compares our current error value with the one from the previous iteration of the loop. If the error has increased, our steering speed probably didn't suffice and we must increase it. If the error is decreasing, then we should steer more gently (we are aiming for that smooth turn). This can easily be included in our steering speed formula as a new term, multiplied by some constant we must tune.
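In code, the derivative term might be added like this (both gains are placeholders to be tuned):

KP = 0.005   # proportional gain
KD = 0.01    # derivative gain

previous_error = 0.0

def pd_steering(error):
    global previous_error
    derivative = error - previous_error   # how much the error changed since the last iteration
    previous_error = error
    # Growing error -> steer harder; shrinking error -> ease off for a smoother turn.
    return KP * error + KD * derivative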


A change in foundations.

Let's recap for a moment. Remember when we talked about counting white pixels and all that? Well, sadly it won't get us any further than this (and we want our car to be as fast as possible!).

You might wonder why I am saying this, since it did seem to work. The thing is, it wasn't very accurate at calculating the error, and thus isn't suitable for the fast car we're developing. So I devised a new way of getting the error value: instead of comparing the white surfaces on the left side and the right side, we just get the positions of the line's left and right borders and compare those. Just as we did for our previous approach, we'll be looking at only a few rows (let's say three). This method has proven much more effective for our purpose.
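One way to read this border-based error, sketched below: on a few rows, find the leftmost and rightmost white pixels, take their midpoint as the line's centre, and measure its offset from the image centre. The row numbers here are just an example.

import numpy as np

def line_error(binary, rows=(240, 245, 250)):
    width = binary.shape[1]
    centres = []
    for y in rows:
        xs = np.flatnonzero(binary[y])     # columns that are white on this row
        if xs.size:
            left_border, right_border = xs[0], xs[-1]
            centres.append((left_border + right_border) / 2.0)
    if not centres:
        return None                        # the line is not visible on these rows
    line_centre = sum(centres) / len(centres)
    # Positive error: the line is to the left of the image centre.
    return width / 2.0 - line_centre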


Further improving performance: Integral Control.
We use Derivative Control to make our car turn smoothly: if our error value is increasing, turn more, and if it's decreasing, turn less. But what if it remains constant? Our car could be taking curves with a constant error, driving just beside the red line, and this cannot be corrected with Derivative Control. We must implement Integral Control for our car.

How does it work? Basically, we store the last few (let's say 10) error values in a global list and check their cumulative sum on every iteration of the loop. In our ideal scenario, this cumulative error would be zero, so we add a new term to our steering speed formula which makes our car turn even more if there's a constant error that Derivative Control could not correct on its own.

As a bonus, I added an integral term to our forward speed formula, to make our car slow down whenever the integral error value gets too high, so it doesn't escalate to madness.
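A sketch of those two integral terms, keeping a small fixed-size history of errors (all gains and speeds are placeholder values):

from collections import deque

KI_STEER = 0.0005    # integral gain for the steering term
KI_SPEED = 0.01      # how strongly a large accumulated error slows the car down
BASE_SPEED = 6.0
MIN_SPEED = 1.0

error_history = deque(maxlen=10)   # the 10 most recent error values

def integral_terms(error):
    error_history.append(error)
    cumulative = sum(error_history)            # ideally close to zero
    steer_correction = KI_STEER * cumulative   # push against a persistent, constant error
    forward = BASE_SPEED - KI_SPEED * abs(cumulative)  # slow down when the accumulated error grows
    return steer_correction, max(forward, MIN_SPEED)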

It can also help us when we've lost the line. Until now, if the red line was not within our car's view, it would always turn in the same direction to look for it. This often led the car to rotate 180 degrees before finding the line, and then follow it backwards. We can use the information from the previous error values to tell our car to turn towards the side where the line was last seen. This way it is highly likely that our car will follow the line in the right direction after finding it.
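And a small helper for that smarter search, assuming the same sign convention and error history as in the sketch above:

def search_direction(error_history, search_turn=1.0):
    recent = sum(error_history) if error_history else 0.0
    # Positive recent error means the line was last seen to the left, so turn left; otherwise turn right.
    return search_turn if recent >= 0 else -search_turn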


Slowing down at curves.
Once again, even though our car completes the circuit in a very short time, we know we can do even better. Think about this: in real life, how do drivers reach their peak performance? They slow down at curves, but they speed up whenever they are driving in a straight line. Why not make our car do likewise? 

The only thing we really have to do is to be able to tell when we are approaching a curve and when we are driving on a straight. Let's have a look at what a curve looks like from our car's view:


Notice how there are some white pixels at the horizon that deviate to the right? Those pixels tell us how far away the curve is, or how sharp it is; either way, we ought to slow down in proportion to how many white pixels there are. So we count them with nested 'for' loops, as always. I chose rows 235 to 254 and columns 0 to 300 and 340 to 640. These numbers ended up working quite well, but you can try whatever rows and columns you like and see if they work out.

After counting the white pixels, we multiply the count by a constant and use this new term to reduce the forward speed, so that the more white pixels there are at the horizon, the slower the car goes.
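A sketch of that curve detector, using the rows and columns mentioned above (the gain is a placeholder to tune):

import numpy as np

K_CURVE = 0.01   # how much each white pixel near the horizon reduces the forward speed

def curve_penalty(binary):
    horizon = binary[235:255, :]                     # rows 235 to 254
    sides = np.concatenate((horizon[:, 0:300],       # left of the central band
                            horizon[:, 340:640]),    # right of the central band
                           axis=1)
    white = int((sides > 0).sum())
    return K_CURVE * white

# For example: forward_speed = BASE_SPEED - curve_penalty(binary)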


Final version and conclusion.
Finally, we are reaching the end. We've implemented many different mechanisms, which granted our car the ability to assess almost every possible situation within the given scene and react to it. All that's left to do is to tune the constants we defined in our code. After doing so, the car managed to complete a full lap in a little over 30 seconds. Impressive! Here's a video so you can see it for yourself:





That is the end of this project. I hope you found it helpful. If you happen to have any questions or suggestions, feel free to leave a comment in the comments section below. Thank you, and see you soon!

-Luis
