Raspberry Pi Robot Tank with OpenCV
At one time I was fond of building robots on Arduino and Raspberry Pi. I enjoyed playing with construction kits, but I wanted something more.
And one day, browsing AliExpress, I came across an aluminum tank chassis. Next to the plastic cars, this creation looked like a Ferrari next to a cart.
I made myself a New Year's gift: the tank arrived and was assembled, and then it had to be brought to life. I took the Raspberry Pi itself, the power converter, the motor controller and the battery from the old toy car, moved it all onto the tank, and it happily came to life.
Next, a simple REST API for driving was written in Python, along with an equally simple Android app that controlled the tank by pulling that API.
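The post does not include the server code, but a minimal sketch of such a driving API, written with Flask, might look like the one below. The pin numbers and the mapping of commands to motor-controller inputs are my assumptions, not the actual wiring.

# Minimal sketch of the driving API; assumes Flask and RPi.GPIO.
# Pin numbers below are hypothetical.
from flask import Flask
import RPi.GPIO as GPIO

app = Flask(__name__)

# Hypothetical GPIO pins wired to the motor controller inputs
MOTORS = {"fwd": 17, "back": 18, "left": 22, "right": 23}

GPIO.setmode(GPIO.BCM)
for pin in MOTORS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

@app.route("/<cmd>/<state>", methods=["POST"])
def drive(cmd, state):
    # e.g. POST /fwd/on sets the forward pin high, POST /fwd/off drops it
    if cmd not in MOTORS or state not in ("on", "off"):
        return "unknown command", 404
    GPIO.output(MOTORS[cmd], GPIO.HIGH if state == "on" else GPIO.LOW)
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)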
A tank has to shoot, so the next step was a camera. I had no luck with the camera enclosure: open, it did not hold the camera, and closed, it squeezed it so hard that the lens popped off. After some suffering, I simply taped the camera to the enclosure cover. Now the tank could not only drive around the room but also take pictures.

It is worth noting a serious advantage of tracks over wheels around the house: it makes no difference to them whether they are driving on hard floor or on carpet. Wheeled vehicles slip on a soft carpet, sometimes to the point of being unable to turn.
Next, I wanted to develop the tank toward autonomous navigation based on pictures from the camera. I had to plunge into the world of computer vision and discover OpenCV. It all started with color and contour recognition: I printed a red circle on paper, stuck it to the TV and made the robot spin until it found it.
The idea was to mark the prominent objects in the room (sofa, TV, table) with colored circles and teach the robot to navigate by color.
Using OpenCV, I searched for contours of the desired color (within an acceptable tolerance), then looked for a circle among those contours.
It seemed that the main problem would be a random circle of the right color turning up on some other object.
However, the main problem turned out to be that color varies a lot with lighting, so the range in which, say, red was recognized had to be stretched to shades only remotely resembling the original. The alternative was to sample the target color from the picture itself, but either way it was no longer red so much as a shade of brown.
Searching for a red circle:
import cv2
import numpy as np
import sys

def mask_color(img, c1, c2):
    # Smooth out noise, convert to HSV and keep only pixels in the [c1, c2] range
    img = cv2.medianBlur(img, 5)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, c1, c2)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
    return mask

def find_contours(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blurred, 30, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.bitwise_not(thresh)
    # OpenCV 3.x findContours returns (image, contours, hierarchy)
    im2, cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    cp_img = img.copy()
    cv2.drawContours(cp_img, cnts, -1, (0, 255, 0), 3)
    return cp_img

def find_circles(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 20,
                               param1=50, param2=30, minRadius=0, maxRadius=0)
    cimg = img
    if circles is not None:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            # Draw the circle outline and mark its center
            cv2.circle(img, (i[0], i[1]), i[2], (255, 0, 0), 2)
            cv2.circle(img, (i[0], i[1]), 2, (0, 0, 255), 3)
            print("C", i[0], i[1], i[2])
    return cimg

def find_circle(img, rgb):
    tolerance = 4
    # Take the hue of the reference color and build a narrow HSV range around it
    hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
    H = hsv[0][0][0]
    c1 = (H - tolerance, 100, 100)
    c2 = (H + tolerance, 255, 255)
    c_mask = mask_color(img, c1, c2)
    rgb = cv2.cvtColor(c_mask, cv2.COLOR_GRAY2RGB)
    cont_img = find_contours(rgb)
    circ_img = find_circles(cont_img)
    cv2.imshow("Image", circ_img)
    cv2.waitKey(0)

if __name__ == '__main__':
    img_name = sys.argv[1]
    img = cv2.imread(img_name)
    rgb = np.uint8([[[0, 0, 255]]])  # pure red in BGR
    find_circle(img, rgb)
Color recognition was reaching a dead end, so I got distracted by Haar cascades and used the tank for photo-hunting the cat. The cat camouflaged itself well, making the cascade misfire in about half the cases (in case you didn't know, OpenCV ships with a Haar cascade specially trained on cats: just take it and use it).
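As a sketch of that approach: the bundled cascade is haarcascade_frontalcatface.xml, and using it takes only a few lines (the path to the cascade file depends on how OpenCV was installed):

# Detect cat faces with the cat cascade that ships with OpenCV.
import cv2
import sys

# Path depends on the installation; the file lives in OpenCV's data directory
cascade = cv2.CascadeClassifier("haarcascade_frontalcatface.xml")

img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns a list of (x, y, w, h) bounding boxes
cats = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in cats:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("cats", img)
cv2.waitKey(0)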
Hunting the cat had useful consequences for the robot: since it was not always possible to catch a moving target with a fixed camera, I added a pan-tilt mount with two servos (and a PWM module to drive them from the Raspberry).
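The servo code is not shown in the post; for illustration, here is a rough sketch that drives a single servo with software PWM straight from RPi.GPIO (the pin number and angle mapping are assumptions; a dedicated PWM module, as used here, produces steadier pulses):

import RPi.GPIO as GPIO
import time

SERVO_PIN = 12  # hypothetical pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

# Hobby servos expect a 50 Hz signal; roughly 2.5-12.5% duty covers 0-180 degrees
pwm = GPIO.PWM(SERVO_PIN, 50)
pwm.start(7.5)  # center position

def set_angle(angle):
    pwm.ChangeDutyCycle(2.5 + angle / 18.0)
    time.sleep(0.3)  # give the servo time to reach the position

set_angle(45)    # tilt the camera down
set_angle(135)   # and back up
pwm.stop()
GPIO.cleanup()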
Continuing to explore what could be squeezed out of pictures of the room, I naturally arrived at neural networks. After working through the TensorFlow tutorial, I ran its object detector over pictures from the tank, and the results were promising: the TV, table, sofa, cat and refrigerator were recognized without errors.
These experiments were carried out on a desktop computer, and only a small matter remained: moving TensorFlow to the Raspberry Pi. Fortunately, there is a dedicated person on GitHub who had the patience to fight through installing all the dependencies and the many hours of compilation, and who shared a prebuilt TensorFlow for the Raspberry Pi.
However, further study revealed that OpenCV does not stand still: its contributors have released the DNN (Deep Neural Networks) module, which can load networks trained with TensorFlow. This is much more convenient to develop with, and TensorFlow itself is no longer needed on the robot. A little magic was required, since the latest MobileNet SSD model for TF was no longer picked up by the latest OpenCV, so I had to hunt down and test a version of MobileNet SSD that worked. On top of that, DNN works properly only under OpenCV 3.4, and I could not find that version prebuilt for the Raspberry, so I had to build it myself, which is still much easier than wrestling with TensorFlow. One more catch: OpenCV would not build under the latest version of Raspbian (Stretch), but on the previous generation (Jessie) everything worked as it should.
Below is sample code that uses DNN and does not use TensorFlow. The few files responsible for object names were pulled out of TF, and the dependency on TF itself was removed (all that remained was reading them from a file). The full source code is on GitHub.
import cv2 as cv
import tf_labels
import sys

DNN_PATH = "---path-to:ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb"
DNN_TXT_PATH = "--path-to:ssd_mobilenet_v1_coco.pbtxt"
LABELS_PATH = "--path-to:mscoco_label_map.pbtxt"

# Load the label map and the frozen TensorFlow graph into OpenCV's DNN module
tf_labels.initLabels(LABELS_PATH)
cvNet = cv.dnn.readNetFromTensorflow(DNN_PATH, DNN_TXT_PATH)

img = cv.imread(sys.argv[1])
rows = img.shape[0]
cols = img.shape[1]

# MobileNet SSD expects a 300x300 input scaled to [-1, 1]
cvNet.setInput(cv.dnn.blobFromImage(img, 1.0/127.5, (300, 300),
                                    (127.5, 127.5, 127.5), swapRB=True, crop=False))
cvOut = cvNet.forward()

for detection in cvOut[0, 0, :, :]:
    score = float(detection[2])
    if score > 0.25:
        # Box coordinates come back normalized to [0, 1]
        left = int(detection[3] * cols)
        top = int(detection[4] * rows)
        right = int(detection[5] * cols)
        bottom = int(detection[6] * rows)
        label = tf_labels.getLabel(int(detection[1]))
        print(label, score, left, top, right, bottom)
        text_color = (23, 230, 210)
        cv.rectangle(img, (left, top), (right, bottom), text_color, thickness=2)
        cv.putText(img, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 1, text_color, 2)

cv.imshow('img', img)
cv.waitKey()
So now the tank's photos can be recognized by a neural network, which is an important step toward navigating by landmarks. Still, pictures alone were not enough for full navigation; I also needed to measure distances to obstacles, so the robot got an ultrasonic rangefinder. Hooking it up to the Raspberry takes a little work: the sensor returns a 5V signal, while the Raspberry's inputs expect 3.3V. On a breadboard this is usually solved with a resistor divider, but I did not want that kind of kludge on the robot. In the end I found a level-shifter chip that does exactly what is needed and is the size of a fingernail.
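The post does not name the sensor model; assuming a common HC-SR04-style module (with its echo line routed through the level shifter), a distance measurement looks roughly like this:

import RPi.GPIO as GPIO
import time

TRIG = 23  # hypothetical pins; the echo line passes through the level shifter
ECHO = 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10-microsecond pulse on TRIG starts a measurement
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # ECHO stays high for the round-trip time of the ultrasonic ping
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    # Sound travels at ~34300 cm/s; halve the round trip to get the distance
    return (end - start) * 34300 / 2

print("%.1f cm" % distance_cm())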
I was also concerned with the robot's appearance: I really did not like that the circuit boards, the camera and the rangefinder were stuck onto pieces of cardboard. Technology these days lets you laser-cut plastic for a reasonable investment of time and money, so I found a workshop with a laser cutter, spent a little while studying the machine's manual, and, far from on the first attempt, cut out mounting panels for the boards, the camera and the rangefinder.

Everything was ready for autonomous navigation, but the task turned out to be not so simple, and on the first attempt I got somewhat stuck. I decided to take a break, think it through properly, and study existing solutions. Perhaps that navigation will become the topic of a separate article.
The REST interface the robot exposes as a base for future work (a short usage sketch follows the list):
GET /ping
GET /version
GET /name
GET /dist
POST /fwd/on
POST /fwd/off
POST /back/on
POST /back/off
POST /left/on
POST /left/off
POST /right/on
POST /right/off
POST /photo/make
GET /photo/:phid
GET /photo/list
POST /cam/up
POST /cam/down
POST /cam/right
POST /cam/left
POST /detect/haar/:phid
POST /detect/dnn/:phid
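For illustration, pulling this API from Python might look like the following; the robot's address and the response format of /photo/make are assumptions:

import requests
import time

TANK = "http://192.168.1.42:8080"  # hypothetical address of the robot

# Drive forward for one second, then stop
requests.post(TANK + "/fwd/on")
time.sleep(1)
requests.post(TANK + "/fwd/off")

# Read the distance to the nearest obstacle from the rangefinder
print(requests.get(TANK + "/dist").text)

# Take a photo and run the DNN detector on it
# (assumes /photo/make returns the new photo's id)
phid = requests.post(TANK + "/photo/make").text
requests.post(TANK + "/detect/dnn/" + phid)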