Time of flight
You know, sometimes the quirks of public opinion surprise me. Take, for example, 3D visualization technology. Recently there has been a huge outcry over virtual reality glasses: Oculus Rift, Google Glass. But there is nothing new here; the first virtual reality helmets appeared back in the late '90s. Yes, they were clunky and ahead of their time, but why didn't they cause such a WOW effect? Or 3D printers: articles about how cool they are and how quickly they will take over the world have appeared twice a week for the past three years. I don't argue: they are cool and they will take over the world. But the technology was created back in the '80s and has been progressing slowly ever since. 3D TV? The year 1915...
All of these technologies are fine and curious, but why all the hype over every sneeze?
What if I told you that over the last 10 years a 3D imaging technology has been invented, developed, and put into mass production that is very different from any other? Moreover, the technology is already in widespread use: debugged and available to ordinary people in stores. Have you heard of it? (Probably only specialists in robotics and related fields have already guessed that I'm talking about ToF cameras.)
What is a ToF camera? In the Russian Wikipedia you won't find even a brief mention of what it is (the English one has an article). "Time-of-flight camera" means just that: a camera that measures the time of flight. The camera determines range via the speed of light, measuring the time it takes a light signal emitted by the camera to reflect back at each point of the received image. Today's standard is a 320×240 pixel matrix (the next generation will be 640×480). The camera provides depth accuracy on the order of 1 centimeter. Yes, yes: a matrix of 76,800 sensors, each measuring time with an accuracy on the order of 10⁻¹⁰ seconds. On sale. For 150 bucks. You may even be using one already.
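A quick sanity check on that number: the light travels to the object and back, so a depth accuracy of Δd = 1 cm requires resolving round-trip times of

$$\Delta t = \frac{2\,\Delta d}{c} = \frac{2 \cdot 0.01\ \text{m}}{3 \cdot 10^{8}\ \text{m/s}} \approx 6.7 \cdot 10^{-11}\ \text{s},$$

which is exactly that order of 10⁻¹⁰ seconds, in each of the 76,800 pixels at once.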
And now a little more about the physics, the principle of operation, and where you may have already met this beauty.
There are three main types of ToF cameras, each using its own technique for measuring the range to a point. The simplest and most intuitive is pulsed modulation, also known as direct Time-of-Flight imaging. A pulse is emitted, and the exact time of its return is measured at each point of the matrix:
In essence, the matrix consists of triggers that fire on the wavefront. The same approach is used in optical flash synchronization, only here it is orders of magnitude more precise. That is also the main difficulty of the method: very accurate detection of the arrival time is required, which demands specific technical solutions (which I could not find described anywhere). NASA is now testing such sensors for the landing modules of its spacecraft.
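The per-pixel arithmetic after that is trivial; here is a minimal sketch (the array name and the test values are my own illustration, not any real sensor's API):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_pulse(t_return):
    """Convert per-pixel round-trip times (seconds) to depths (meters).

    The pulse travels to the object and back, hence the division by 2.
    """
    return C * np.asarray(t_return) / 2.0

# e.g. a 240x320 frame of times around 66.7 ns corresponds to roughly 10 m
times = np.full((240, 320), 66.7e-9)
print(depth_from_pulse(times).mean())  # ~10.0
```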
And here are the pictures it produces:
The illumination is powerful enough for the pixels to trigger on light reflected from about a kilometer away. The graph shows the number of triggered pixels in the matrix as a function of distance: 90% still fire at a range of 1 km.
The second method is continuous modulation of the signal. The emitter sends a modulated wave; the receiver finds the shift at which the correlation with that wave is maximal, which gives the time the signal spent travelling to the object and back.
Let the emitted signal be

$$s(t) = \cos(\omega t),$$

where ω is the angular modulation frequency. Then the received signal looks like

$$r(t) = b + a\,\cos(\omega t + \varphi),$$

where b is the constant offset, a is the amplitude, and φ is the phase shift accumulated in flight. The correlation of the incoming and outgoing signals is

$$c(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} r(t)\,s(t-\tau)\,dt = \frac{a}{2}\cos(\omega\tau + \varphi).$$
But computing the full correlation over all possible time shifts in real time, in every pixel, is quite difficult. So a clever trick is used: the signal is sampled in four neighboring pixels with 90° phase offsets between them, i.e. four correlation values are taken at shifts

$$A_k = c(\tau_k), \qquad \omega\tau_k = k\cdot\frac{\pi}{2}, \quad k = 0, 1, 2, 3.$$

Then the phase shift is recovered as

$$\varphi = \arctan\frac{A_3 - A_1}{A_0 - A_2}.$$

Knowing the phase shift and the speed of light, we get the distance to the object:

$$d = \frac{c\,\varphi}{2\omega} = \frac{c\,\varphi}{4\pi f}.$$
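Here is a minimal numerical sketch of this four-sample reconstruction, assuming we already have the four correlation samples per pixel and know the modulation frequency (the function name and the synthetic test values are mine):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_four_phases(A0, A1, A2, A3, f_mod):
    """Four correlation samples at 0/90/180/270 degree shifts -> depth.

    phi = atan2(A3 - A1, A0 - A2) recovers the phase shift; the round
    trip then gives d = c * phi / (4 * pi * f_mod).
    """
    phi = np.arctan2(A3 - A1, A0 - A2) % (2 * np.pi)  # wrap into [0, 2*pi)
    return C * phi / (4 * np.pi * f_mod)

# Synthetic check: a pixel at 2.5 m, 20 MHz modulation (unambiguous range 7.5 m)
f_mod, d_true = 20e6, 2.5
phi_true = 4 * np.pi * f_mod * d_true / C
a, b = 1.0, 0.5  # arbitrary amplitude and offset
A0, A1, A2, A3 = (b + a * np.cos(phi_true + k * np.pi / 2) for k in range(4))
print(depth_from_four_phases(A0, A1, A2, A3, f_mod))  # ~2.5
```

Note that the offset b and the amplitude a cancel out in the differences A3 − A1 and A0 − A2, which is exactly why four samples are taken instead of one.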
These cameras are somewhat simpler than ones built on the first technology, but still complicated and expensive. This company makes them, and they cost about 4 kilobucks. But they are cute and futuristic:
The third technology is range-gated imaging: essentially a camera with a shutter. The idea here is dead simple and requires neither high-precision receivers nor complex correlation. In front of the matrix there is a shutter. Suppose it is ideal and operates instantly. At time 0 the scene illumination turns on; at time t the shutter closes. Then objects located farther than c∙t/2, where c is the speed of light, will not be visible at all: the light simply does not have time to reach them and come back. A point right next to the camera is illuminated during the entire exposure t and registers brightness I. So every point in the exposure registers a brightness between 0 and I, and this brightness encodes the distance to the point: the brighter, the closer.
It remains to deal with just a couple of "little things": the actual shutter closing profile and the matrix's behavior during it, the non-ideal light source (for a point source, brightness does not fall off linearly with range), and the different reflectivity of materials. These are very large and complex problems, which the device authors have solved.
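For intuition, here is a toy sketch of the idealized principle only (instant shutter, uniform illumination, no noise), using a second, ungated exposure to cancel out per-pixel reflectivity; this is my illustration, not any real device's pipeline:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_gated(I_gated, I_full, t_gate):
    """Idealized range-gated depth estimate.

    I_gated: image with the shutter closing at t_gate after the light turns on
    I_full:  image exposed long enough to collect the whole reflection
    A pixel at depth d collects light only for (t_gate - 2*d/C), so the ratio
    I_gated / I_full falls linearly from 1 at the lens to 0 at d = C*t_gate/2,
    independently of the surface reflectivity.
    """
    ratio = np.clip(np.asarray(I_gated) / np.asarray(I_full), 0.0, 1.0)
    return (C * t_gate / 2.0) * (1.0 - ratio)

# Synthetic check: a surface at 3 m, 50% reflectivity, 40 ns gate (max range ~6 m)
t_gate, d, rho = 40e-9, 3.0, 0.5
I_full = rho * t_gate
I_gated = rho * (t_gate - 2 * d / C)
print(depth_from_gated(I_gated, I_full, t_gate))  # ~3.0
```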
Such cameras are the least accurate, but also the simplest and cheapest: all their complexity lives in the algorithms. Want an example of what such a camera looks like? Here it is:
Yes, yes: the second Kinect has exactly this kind of camera. Just do not confuse the second Kinect with the first (long ago there was a good and detailed article on Habr that nevertheless mixed them up). The first Kinect uses structured light, a much older, less reliable, and slower technology:
It uses an ordinary infrared camera that looks at a projected pattern; the pattern's distortions determine the range (a comparison of the methods can be found here).
But Kinect is far from the only player on the market. Intel, for example, sells a $150 camera that outputs a 3D depth map. It targets the near zone, but it comes with an SDK for analyzing gestures in the frame. Here is another option, from SoftKinetic (they also have an SDK, and they are somehow tied to Texas Instruments).
I myself, however, have still not gotten to work with any of these cameras, which is sad and annoying. But I think, and hope, that within five years they will come into common use and my turn will come too. As far as I know, they are already actively used for robot navigation and are being introduced into face recognition systems. The range of tasks and applications is very wide.