Posts

Vacuum Cleaner

Introduction. You have probably heard of Roomba and other house-cleaning robots. Well, for this exercise we are going to implement the algorithm of a vacuum-cleaning robot. The aim is to clean as much of the house's surface as possible, with no time limit. Our robot is equipped with 180 lasers arranged in a semicircle at its front, which let it sense when it is approaching a wall so it can avoid collisions.

Laser measurement processing. As we saw in the previous exercise, each laser outputs a single number representing the distance to the first obstacle in that direction. This information is key for sensing our surroundings. Nonetheless, we won't use this raw data as-is; there is some math to do first. First of all, we will translate these numbers into polar coordinates. We know the 180 lasers are spread across a 180-degree semicircle, so we know the angle at which each laser is pointing. With this information we can build a polar...
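To make that conversion concrete, here is a minimal Python sketch. It assumes the readings arrive as a plain list of 180 distances with one laser per degree, index 0 pointing to the robot's right and index 179 to its left; the function names and the one-degree spacing are illustrative assumptions, not the exercise's actual API:

```python
import math

def laser_to_polar(laser_data):
    """Convert 180 raw laser distances into (distance, angle) pairs.

    Assumes laser_data is a list of 180 readings spread across a
    180-degree semicircle, one laser per degree, with index 0
    pointing to the robot's right and index 179 to its left.
    """
    return [(dist, math.radians(i)) for i, dist in enumerate(laser_data)]

def min_frontal_distance(polar, cone_deg=30):
    """Smallest reading inside a cone centred straight ahead (90 degrees).

    Useful for deciding when the robot has got close enough to a wall
    that it should turn away.
    """
    centre = math.radians(90)
    half_cone = math.radians(cone_deg) / 2
    frontal = [d for d, a in polar if abs(a - centre) <= half_cone]
    return min(frontal) if frontal else float("inf")
```

With the readings in polar form, collision avoidance can be as simple as turning away whenever min_frontal_distance drops below a safety threshold.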

Obstacle Avoidance

Introduction. The aim of this project is for our car to complete a full lap around the circuit without colliding with any of the obstacles placed along it. We will be using the VFF (Virtual Force Field) method. Our only sensor will be an array of lasers located at the front of the car. Although our code will again be based on a reactive method, we will need a map to show us the location of the targets we are to reach, and our global localisation system will constantly provide us with the position of the robot. Despite how similar this exercise might seem to the Follow Line project, their implementations are completely different. For this exercise we are provided with an array of lasers instead of a camera, so there will be no filtering and no colour spaces. Each laser measures the distance in a specific direction and returns a single number.

VFF Explained. As I said, we will be using this method to compute both the forward speed and the turn speed of the robot. This method ca...
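As a taste of what that computation looks like, here is a minimal sketch of one VFF iteration. The inputs are hypothetical: 180 laser distances (one per degree, 0 = right, 179 = left) and the current target expressed in the robot's own frame; the gains and the exact force-to-speed mapping are illustrative choices, not the values from the exercise:

```python
import math

def vff_step(laser_data, target_rel, k_rep=0.5, k_fwd=2.0, k_turn=1.5):
    """One iteration of a Virtual Force Field controller (sketch).

    laser_data: 180 distances, one per degree (0 = right, 179 = left).
    target_rel: (x, y) of the current target in the robot frame,
                x pointing forward, y pointing left.
    Returns (forward_speed, turn_speed). All gains are illustrative.
    """
    # Attractive force: a unit vector pointing at the target.
    tx, ty = target_rel
    norm = math.hypot(tx, ty) or 1.0
    ax, ay = tx / norm, ty / norm

    # Repulsive force: each laser pushes the robot away from the
    # obstacle it sees, more strongly the closer the obstacle is.
    rx = ry = 0.0
    for i, dist in enumerate(laser_data):
        if dist <= 0:
            continue
        angle = math.radians(i) - math.pi / 2  # 0 rad = straight ahead
        weight = 1.0 / (dist * dist)           # closer -> stronger push
        rx -= weight * math.cos(angle)
        ry -= weight * math.sin(angle)

    # Resultant force = attraction + scaled repulsion.
    fx = ax + k_rep * rx
    fy = ay + k_rep * ry

    # Drive along the resultant: its forward component sets the speed,
    # its bearing sets the turn.
    forward_speed = max(0.0, k_fwd * fx)
    turn_speed = k_turn * math.atan2(fy, fx)
    return forward_speed, turn_speed
```

The 1/d² repulsive weight is one common choice; ignoring readings beyond some maximum range usually keeps distant walls from drowning out the attraction to the target.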

Cat & Mouse Drones

Introduction. For this project, we are to develop the code (Python) for a drone – the cat – to follow another drone – the mouse – which will be flying randomly around the scene. Our drone is equipped with two cameras – one front and one ventral – through which it will receive visual input from its surroundings. We don't have access to the Python script for the drone role-playing the mouse, and its movement is apparently random, so we must follow it based solely on our visual input. Just as we did for our previous project, our script will be reactive-control-method-based. This means that in every iteration of the loop our drone will get an image from a camera, process it, and make a decision based on it. Since many concepts were already covered in the previous entry of this blog, I'll try to focus on the new ones.

Perception and filtering. What does an image captured by our front camera look like, you may wonder? Well, nothing ...
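As a teaser of that perception step, here is a minimal OpenCV sketch that isolates the mouse by colour and returns its pixel centroid. The HSV bounds, and the assumption that the mouse shows up as a distinctively coloured blob, are illustrative; the real values would have to be tuned in the simulator:

```python
import cv2
import numpy as np

# Illustrative HSV range for the mouse drone's colour; the actual
# bounds depend on the simulator and would need tuning.
LOWER = np.array([0, 120, 120])
UPPER = np.array([10, 255, 255])

def locate_mouse(frame_bgr):
    """Return the (x, y) pixel centroid of the mouse drone, or None.

    Sketch of the perception step: threshold the image in HSV space,
    then take the centroid of the resulting binary mask.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # mouse not in sight; the cat should search
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```

The centroid's horizontal offset from the image centre can then drive the yaw, while the size of the blob gives a rough cue for distance.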

Follow Line (Racing Car)

Introduction. As our first project, we are to develop the software (Python) for a racing car to complete a full lap around a circuit as quickly as possible. The vehicle in the simulation is equipped with a front camera, which will be our only source of information about its environment. The track has a red line painted down the middle, which will guide our way throughout it (hence the name Follow Line). Our script will be reactive-control-method-based, so it will be a loop running the same function 80 times per second. In each iteration, the function will get an image from the camera, process it, and make a decision depending on the situation. Doesn't seem such a complex task when you put it like this, now, does it?

Perception and filtering. We have our car – with its hardware and software – and we have a circuit. The camera attached to the front of our car will provide our Python script with the sensory information it requires to make decisions. H...
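To give a flavour of one such iteration, here is a minimal sketch that thresholds the red line and derives a turn speed proportional to how far the line sits from the image centre. The HSV bounds and the gain are illustrative placeholders, not the tuned values from the project:

```python
import cv2
import numpy as np

def steer_from_image(frame_bgr, k_p=0.005):
    """One reactive iteration: find the red line, return a turn speed.

    Thresholds the frame for red in HSV space, locates the line's
    centroid, and applies a proportional correction on the horizontal
    error. Both the HSV bounds and the gain k_p are illustrative.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv,
                       np.array([0, 120, 120]),
                       np.array([10, 255, 255]))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return 0.0  # line lost; real code would start a search routine
    cx = m["m10"] / m["m00"]
    error = frame_bgr.shape[1] / 2 - cx  # + if the line is to the left
    return k_p * error                   # turn toward the line
```

Running this 80 times per second with a fixed forward speed is already enough for gentle curves; tighter corners call for lowering the speed as the error grows.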