
How we built a soccer robot
On November 25, 2012, Tallinn hosted Robotex, the largest robotics competition in the Baltics. We decided to build a robot for the professional football category. It was never going to be Cristiano Ronaldo, of course, but the challenge was interesting. Below I describe the details of building and programming the robot. His name is Palmer.
The football is played on a green field with 11 orange golf balls on it. There are goals, 15 cm high and roughly 37 cm wide: yellow on one side of the field and blue on the other. The robot has to find balls on the field, grab them, pick the right goal and score. Two robots are on the field at a time; whoever scores the most goals wins. The technical requirement for the robot: it must fit into a cylinder 35 cm high and 35 cm in diameter. Simple enough.
After discussing the details over a couple of crates of beer, we arrived at this configuration. On board is a small-form-factor motherboard with an Atom processor. Pros: no active cooling or separate power supply required, and it is compact. The downside is low performance. Video is captured by a Sony PlayStation 3 camera, ideal for the price and quality. The frame rate is critical, since the maximum feasible speed of the robot depends on it: at 30 frames per second and a robot speed of 3 m/s, the robot travels 10 cm between two consecutive frames, which is clearly not precise enough for capturing the ball; at 125 fps it is only 2.4 cm. The PS3 camera delivers up to 125 frames per second, and its driver fits nicely into the Linux kernel. The operating system is Ubuntu booted from USB. For image recognition we use our favorite OpenCV.
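Just to make that arithmetic explicit, here is a tiny sketch of the calculation (the speed and frame rates are the ones quoted above):

```cpp
// Distance the robot covers between two camera frames.
#include <cstdio>

int main() {
    const double speed = 3.0;                          // robot speed, m/s
    const double fpsValues[] = {30.0, 125.0};          // camera frame rates
    for (double fps : fpsValues)
        std::printf("%.0f fps -> %.1f cm per frame\n", fps, 100.0 * speed / fps);
    // Prints: 30 fps -> 10.0 cm per frame, 125 fps -> 2.4 cm per frame
    return 0;
}
```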
Besides the camera, IR sensors and beacons are used to sense the surroundings; the infrared sensors help avoid collisions with the other robot and the goals. Power comes from lithium batteries, stepped down from 14.8 V to 12 V for the motherboard. The motors have their own LiPo batteries (2x 4S 14.8 V 2200 mAh + 2x 2S 7.4 V ~2000 mAh). Separate motor power is used because when the motors start there is a large voltage drop in the circuit, which can reset the microcontroller; a separate supply also keeps the high-frequency noise from the motors, which likewise knocks out the microcontroller, away from the logic. There are two kinds of capacitors in the circuit: some smooth out voltage dips, others, faster ones, filter high-frequency noise. Broadband noise arises from the sparking of the motor brushes and, travelling along the wiring, crashes microcontrollers. To deal with this, an H-bridge with optocouplers is used. An H-bridge is a bridge circuit that lets you reverse the polarity of the current through the motors without risking a short circuit. The control signals going to the bridge pass through optocouplers, which break the electrical connection so noise from the motors cannot get into the controller circuit: each consists of an LED and a photodetector in one package, so the circuit is galvanically isolated but the signals still get through. Bridges like this can be used to drive just about anything. We make the bridge boards ourselves.
The ball is kicked by a solenoid, which has its own bank of capacitors connected in parallel to produce a large current pulse. A separate battery powers the solenoid.

The kit also includes electrolytic capacitors to stabilize the voltage, plus ceramic capacitors and ferrite rings to fight the ever-present noise, because EVERYTHING is noisy. And then there are the little things: the frame, wheels, motors, bolts and other hardware. Here is the beginning.

And a few days later.

The mechanics and electronics were assembled non-stop, sometimes working through the night. The code was written in parallel, in C++ with Qt as the development environment. A robot is a classic state machine with a set of states (ball found, moving, searching for the goal, or target acquired, missile locked), no matter whether it is a military robot, a toy or an industrial one. The trick is that there should be as few states as possible, but as many as necessary: if the algorithm gets too convoluted, at some point you will no longer understand why the robot reacts the way it does. We have 8 states: Start, Searching for the ball, Searching for the goal, Aiming, Avoiding obstacles, Charging the solenoid capacitors, Kick, Finish. The transitions between them form a graph, and that graph is the algorithm.
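To make the idea concrete, here is a minimal C++ sketch of such a state machine. The state names follow the list above, but the sensor fields and transition conditions are hypothetical; the real robot of course issues motor and solenoid commands in each state rather than just switching.

```cpp
// Hypothetical per-frame summary produced by the vision/IR code.
struct Sensors {
    bool ballVisible = false;
    bool goalVisible = false;
    bool aimedAtGoal = false;
    bool capacitorsCharged = false;
    bool obstacleAhead = false;
};

enum class State { Start, FindBall, FindGoal, Aim,
                   AvoidObstacle, ChargeKicker, Kick, Finish };

// One transition of the graph: given the current state and the latest
// sensor readings, pick the next state.
State step(State s, const Sensors& in) {
    if (in.obstacleAhead && s != State::Kick)
        return State::AvoidObstacle;
    switch (s) {
    case State::Start:         return State::FindBall;
    case State::FindBall:      return in.ballVisible ? State::FindGoal : State::FindBall;
    case State::FindGoal:      return in.goalVisible ? State::Aim : State::FindGoal;
    case State::Aim:           return in.aimedAtGoal ? State::ChargeKicker : State::Aim;
    case State::ChargeKicker:  return in.capacitorsCharged ? State::Kick : State::ChargeKicker;
    case State::Kick:          return State::FindBall;   // go look for the next ball
    case State::AvoidObstacle: return State::FindBall;
    default:                   return State::Finish;
    }
}

int main() {
    Sensors s;
    s.ballVisible = true;          // pretend the camera already sees a ball
    State st = step(State::Start, s);   // Start -> FindBall
    st = step(st, s);                    // FindBall -> FindGoal
    return 0;
}
```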
Most of the time goes into writing and debugging code, testing and calibration, which is actually the most important part.
The recognition code, split out into separate classes, works like this. Each frame goes through processing where the colors are first converted from RGB to HSL, which is closer to human perception and more convenient to work with. The conversion does not use the built-in OpenCV function, because converting every pixel is an expensive computation; instead, a pre-computed color lookup table is used: the RGB value gives an offset into the table, and the color class is read out by address. We win in speed and lose in memory, exactly as theory says. Then objects (balls, goals) are searched for one after another and recorded into a list. Objects are found with a common algorithm: every pixel is visited in a loop, and if it falls into the object's color range, a bit is set. The result is a bitmap with the candidate objects marked. That bitmap is fed to the OpenCV function that returns a list of contours. Next we compute the contour areas, and if a contour's area is more than 40-50% of the area of its bounding circle (rectangle), or close to the 4/π ratio, we consider the object recognized and add it to the list. Balls are only searched for within a certain size range (small ones are noise, huge ones do not exist, and there is no point looking for balls on the ceiling either). Since the camera delivers up to 125 frames per second, some of OpenCV's built-in functions are too slow to use, but the library is conveniently laid out and you can walk the whole matrix through pointers. It remains to check whether the ball has rolled out over the edge of the field (bounded by a black line), since the robot must not cross the black line. The check is quite simple: draw a line from the bottom center of the image to the ball and see whether it crosses a black area.
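As an illustration of the lookup-table trick, here is a rough OpenCV sketch. The hue threshold and the "class" codes are made up for illustration, not the values used on the robot (note that OpenCV's name for HSL is HLS).

```cpp
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <vector>

// 256*256*256 entries, one byte per pixel class (0 = nothing, 1 = ball, ...).
// Built once, offline; at runtime every pixel costs one table read instead
// of a color-space conversion.
std::vector<uint8_t> buildLut() {
    std::vector<uint8_t> lut(256 * 256 * 256, 0);
    for (int r = 0; r < 256; ++r)
        for (int g = 0; g < 256; ++g)
            for (int b = 0; b < 256; ++b) {
                cv::Mat3b px(1, 1, cv::Vec3b(b, g, r));
                cv::Mat3b hls;
                cv::cvtColor(px, hls, cv::COLOR_BGR2HLS);
                int h = hls(0, 0)[0];                   // hue in [0, 180)
                if (h >= 5 && h <= 20)                  // "orange-ish" -> ball (illustrative)
                    lut[(r << 16) | (g << 8) | b] = 1;
            }
    return lut;
}

// Classify each pixel of a BGR frame by a single table lookup, producing
// the bitmap that is later handed to the contour-finding function.
void classify(const cv::Mat3b& bgr, const std::vector<uint8_t>& lut, cv::Mat1b& mask) {
    mask.create(bgr.size());
    for (int y = 0; y < bgr.rows; ++y) {
        const cv::Vec3b* row = bgr.ptr<cv::Vec3b>(y);
        uint8_t* out = mask.ptr<uint8_t>(y);
        for (int x = 0; x < bgr.cols; ++x) {
            const cv::Vec3b& p = row[x];                // p = (B, G, R)
            out[x] = lut[(p[2] << 16) | (p[1] << 8) | p[0]];
        }
    }
}
```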
Next comes the most interesting part. The robot has a set of states it moves between depending on conditions, and that part is simple: the game algorithm is written precisely at this stage. The hardest part is calibration: how do you build a function giving speed and direction from the detected objects and their coordinates?
We run through the whole list and find the ball with the largest area; that is the closest one. Now, to get speed and direction from the object's coordinates, you have to sit down and do some math. The higher the ball's coordinates in the image and the smaller its area, the farther away it is; the more it is shifted along the X axis, combined with how far or near it is, the more acute (or, conversely, obtuse) the angle to it. To get the coefficients, the robot is placed at one end of the field in the center, and then the crawl with a tape measure begins: points are measured at a fixed step (33 cm) and the recognition system's readings (contour area) are recorded at each one. We plot the values and try to find a function that connects one set of values with the other. High accuracy is not needed, because the robot does not plan its path once and for all but corrects it 17-18 times per second, which is all our little system can manage. In other words, we need a function that converges to the right result under small changes of the arguments. The angle and direction are then fed to the wheels, where the speeds are computed from the simplest vector equations. The figure explains it.
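For illustration, here is a minimal sketch of turning the measured (contour area, distance) pairs into a distance estimate. The numbers in the table are invented; the robot's actual calibration data and fitted function are not given in the article.

```cpp
#include <vector>

struct CalibPoint { double area; double distance; };    // px^2 -> meters

// Table recorded every 33 cm, sorted by decreasing area
// (a closer ball gives a bigger contour).  Values are illustrative.
std::vector<CalibPoint> calib = {
    {5200, 0.33}, {1800, 0.66}, {950, 0.99}, {560, 1.32}, {380, 1.65},
};

// Piecewise-linear interpolation between the measured points; rough, but
// good enough since the estimate is refreshed 17-18 times per second anyway.
double distanceFromArea(double area) {
    if (area >= calib.front().area) return calib.front().distance;
    if (area <= calib.back().area)  return calib.back().distance;
    for (size_t i = 1; i < calib.size(); ++i) {
        if (area >= calib[i].area) {
            double t = (area - calib[i].area) /
                       (calib[i - 1].area - calib[i].area);
            return calib[i].distance +
                   t * (calib[i - 1].distance - calib[i].distance);
        }
    }
    return calib.back().distance;
}
```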

Once we know how to move toward objects, writing the actual soccer algorithm is not that hard. And now the competition is near. Mine is the one on the right.

Result in action:
On Saturday evening we moved into the sports hall where the competition was held. The night there was great: my kind of crowd, various clubs and laboratories from different countries. Lots of people, nobody sleeping, everyone talking to each other and curious about everything. I myself slept 2.5 hours; all the rest of the time went into tuning. As a first-time participant, I did not know about the subtleties and problems you run into.
The most important thing: you cannot calculate everything in advance, so the emphasis is on testing. Whoever tests and tunes longer comes out on top. For example, it is better not to turn with the whole robot; the robot should rotate around the ball, because when the robot turns while holding the ball, the robot's momentum is added to the ball. At close range you can still shoot and the error is not significant, but when shooting from long range the error is huge: the goal's linear speed relative to the robot is large, and the moment of firing has to lead the target even more. This is where the problems come out. The first is that you do not know what rotation speed the robot had beforehand. If the robot drives straight onto the ball at the lead angle, it will shoot immediately and miss, because it had no rotation. Or the opposite: it approaches the ball coming out of a turn in the other direction, starts to turn and shoots, and misses again, because due to inertia its turning speed has not yet settled. Too many cases to account for. Turning around the ball's axis adds no momentum to it, and you can shoot right away.
The second is a recognition problem. When the robot is spinning, distant balls and goals again have a large linear speed across the frame, and the image is blurred. The contour-finding function works on gradients, but a smeared goal has no clear gradients, so instead of a single goal contour you get dozens of contours at the output (which eats resources), or nothing at all (the ball smears into a faint orange fog). I had 17 frames per second coming in, so distant objects were unavailable to me while rotating. Plus processing delay: by the time the robot understands there was a goal in the distance, it is no longer there (though I turned this to my advantage when spinning away from my own goal, as the delay was just right for my robot to end up facing almost exactly the opponent's goal).
One of the solutions, used by our other team, was to crop the picture: once we see the ball, we start cutting away the excess, center on it, crop off the sides and throw them away, and the FPS jumps to 60. The same logic applies when recognizing the goal; in particular, while searching, the lower part of the frame is not needed, so we cut it off.
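A rough OpenCV sketch of the cropping idea, with an illustrative window size rather than the other team's actual parameters:

```cpp
#include <opencv2/opencv.hpp>

// Region of interest centered on the last known ball position.  Processing
// only this sub-image instead of the full frame is what lets the frame rate
// jump (17 -> 60 fps in the other team's case).
cv::Mat cropAroundBall(const cv::Mat& frame, cv::Point ball, int halfSize = 80) {
    cv::Rect roi(ball.x - halfSize, ball.y - halfSize, 2 * halfSize, 2 * halfSize);
    roi &= cv::Rect(0, 0, frame.cols, frame.rows);   // clip to the image bounds
    return frame(roi);                               // no copy, just a view
}

// When searching for the goal, the bottom of the frame is not needed.
cv::Mat cropForGoalSearch(const cv::Mat& frame) {
    return frame(cv::Rect(0, 0, frame.cols, frame.rows / 2));
}
```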
My camera was different, and cropping like that would have caused problems. A good solution is two cameras: one wide-angle that watches directions and gives an overview, and a second, the opposite, that gives distance and sees far.
To solve the turning problem, I used a stop: if after a couple of seconds of turning the robot had not found anything, it stopped. Stopping lets it look further, but only within the 60-degree field of view our camera provided. Then the search starts again.
My second omission was that I did not know the robots would start from the right corner. The robot has to remember on which side the opponent's goal is and which way it should turn, otherwise turning the wrong way wastes a lot of time. I determined which goal was which by color, but until a goal came into view, the direction-multiplier variable (plus or minus one, setting the direction of rotation) defaulted to -1, counterclockwise, when clockwise was needed. Because of this, in the match we lost, the first two goals came on counterattacks while we were losing time. The result was 6-4, and in the second game a 5-5 draw, counted as our loss. 5th place out of 18 teams. The high center of gravity also got in the way and kept us from driving at full speed: at speed the robot starts to dance. I nicknamed it "the printer".
In general, there are a lot of nuances. What I remember most is the interesting gathering, and the rhythm: the speed and the constant refinement of the machines after every match. Against the eventual champion we played 4-3! (defensive tactics: we blocked our goal and shot from afar). That was our first one-on-one match of the competition, hence the inability to test our behavior strategy beforehand. For a week we were haunted by breakdowns; on top of that, our camera axis got knocked out of alignment, and I tried to solve a mechanical problem in code, which you should never do, but there was no time, and I only sorted it all out before the last match. In total we played 4 matches.
The first we won against a cube-shaped robot.

In the second we beat the Skype team, which had a very interesting robot: insanely fast, small, with 8 cameras and an FPGA! They built something that, by all rights, should not work at all. It was knocked out quickly, but it was charming. There it is.

There were some very interesting solutions. The robot that took first place (three years of development) has a 360-degree view: the camera looks up at a cone-shaped mirror, producing an image in polar coordinates.

And on top of everything, there is a story in the Robotex lore that everyone keeps laughing about.
Some 5 years ago, two master's students came to the competition and announced that they had figured it all out and built the robot. They were met with an explosion of laughter. And, subsequently, defeat.
Thanks for your attention.