Cappasity 3D Scan - 3D scanning using Intel RealSense. Development experience



    In 2014, we decided to launch a software start-up where we could apply the five years of 3D-technology experience we had accumulated in game development. 3D reconstruction had interested us for a long time, and throughout 2013 we ran numerous experiments with other companies' solutions - so the desire to build our own grew steadily. As you have probably guessed, the dream came true; the chronicle of how it happened is under the cut, along with the story of our relationship with Intel RealSense.

    We had worked on stereo reconstruction before for our own needs, since we were looking for the best way to produce 3D models, and using 3D capture solutions seemed perfectly logical. 3D scanning with a handheld scanner looked like the most affordable option. In the ads everything seemed simple, but it turned out the process takes a lot of time: tracking is frequently lost and you have to start over from the very beginning. Most disappointing of all were the blurry textures. Content like that was simply unusable for games.

    We looked into photogrammetry and realized we did not have a spare $100,000. It also required special camera-rigging skills, followed by lengthy post-processing of high-poly data. So instead of saving anything, we would have made everything far more expensive, given our volumes of content production. That was the moment we started thinking about building something of our own.

    From the start, we drew up the requirements for a system that would be convenient for content production:

    • Stationary setup - calibrate once, then shoot.
    • Instant capture - one click and done.
    • Ability to attach DSLR cameras for photorealistic texture quality.
    • Independence from the sensor type - the technology does not stand still, and new sensors keep appearing.
    • A full client-server design, where each client is a computer with a sensor attached.
    • No limit on the number of devices - we may need to instantly capture objects of very different sizes, so any cap would cause problems.
    • Ability to export the model for 3D printing.
    • A browser plug-in for display and an SDK for integration into mobile applications.

    That was our dream system. We found no analogues, so we wrote everything ourselves. Almost a year passed, and in the fall of 2014 we finally had the chance to show potential investors what we had achieved. In early 2015, we demonstrated a system built on Intel RealSense sensors at CES 2015.



    But what happened before that?

    Writing our own network code with its own data transfer protocol paid off immediately. We started out with PrimeSense sensors, but with more than two sensors on one PC they behaved extremely unstably. At that moment we did not even suspect that Intel, Google, and other market leaders were already working on new sensors, yet out of habit we designed an extensible architecture. As a result, our system easily supported any of this hardware.
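    As an illustration of what such an extensible architecture can look like (a minimal sketch, not our actual code - the class and method names here are hypothetical), each sensor vendor gets a thin adapter behind one common interface:

        from abc import ABC, abstractmethod

        import numpy as np


        class DepthSensor(ABC):
            # Hypothetical vendor-independent interface: PrimeSense,
            # RealSense, Kinect 2, etc. each get an adapter, so the capture
            # server and network protocol never depend on a specific SDK.

            @abstractmethod
            def start(self) -> None:
                """Open the device and begin streaming."""

            @abstractmethod
            def grab_depth(self) -> np.ndarray:
                """Return one depth frame, HxW uint16, in millimetres."""

            @abstractmethod
            def grab_color(self) -> np.ndarray:
                """Return one color frame, HxWx3 uint8."""


        class SyntheticSensor(DepthSensor):
            # Stand-in implementation so the sketch runs without hardware.
            def start(self) -> None:
                pass

            def grab_depth(self) -> np.ndarray:
                return np.full((480, 640), 800, dtype=np.uint16)  # wall at 0.8 m

            def grab_color(self) -> np.ndarray:
                return np.zeros((480, 640, 3), dtype=np.uint8)


        # A real deployment would register RealSense, Kinect 2, etc. adapters.
        sensor: DepthSensor = SyntheticSensor()
        sensor.start()
        print(sensor.grab_depth().shape)  # (480, 640)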

    Most of the time went into writing the calibration. The well-known calibration toolkits were not ideal: nobody had a deep understanding of the internals of PrimeSense sensors, and far from all the parameters we needed could be calibrated. We abandoned PrimeSense's factory calibration and wrote our own, based on IR data. Along the way we did a great deal of research into how the sensors behave and into algorithms for building and texturing a mesh; much was rewritten and redone. In the end we forced every sensor in our system to shoot identically. Having done that, we immediately filed for a patent, and at the moment we have a US patent pending.
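    We are not publishing the calibration code itself, but the general idea of IR-based calibration can be sketched with a standard OpenCV checkerboard pipeline (illustrative only, under the assumption of a printed checkerboard visible in the IR stream - not necessarily what we shipped):

        import cv2
        import numpy as np

        PATTERN = (9, 6)      # inner checkerboard corners
        SQUARE_MM = 25.0      # size of one printed square

        # 3D coordinates of the board corners in the board's own plane (z = 0).
        board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

        def calibrate_from_ir(ir_frames):
            # The checkerboard is visible in the IR stream, so the usual
            # corner-detection pipeline applies; this recovers the depth
            # camera's intrinsics and distortion without trusting the
            # factory calibration.
            obj_pts, img_pts = [], []
            for ir in ir_frames:              # each frame: HxW uint8
                found, corners = cv2.findChessboardCorners(ir, PATTERN)
                if found:
                    obj_pts.append(board)
                    img_pts.append(corners)
            rms, K, dist, _, _ = cv2.calibrateCamera(
                obj_pts, img_pts, ir_frames[0].shape[::-1], None, None)
            return rms, K, dist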

    After Apple bought PrimeSense, it became clear we should turn our attention to other manufacturers. Thanks to the groundwork we already had, our software was running on Intel RealSense sensors just two weeks after Intel shipped them to us. Today we use Intel RealSense, plus Microsoft Kinect 2 for shooting at longer distances. We recently watched Intel's CES keynote on sensors for robots; perhaps they will replace Kinect 2 for us once they become available to developers.

    With the switch to RealSense, the calibration problem surfaced with new force. To color the points of a cloud correctly, or to texture the mesh built from the cloud, you need to know the positions of the RGB and depth cameras relative to each other. At first we planned to reuse on RealSense the same manual calibration scheme we had built for PrimeSense, but we ran into a sensor limitation: the data arrives already filtered, and working with RAW requires a different approach. Fortunately, the Intel RealSense SDK has functionality for converting coordinates from one space to another - from depth space to color space, from color to depth space, from camera to color, and so on. So, rather than spend time developing alternative calibration methods, we decided to use this functionality, which, to our pleasant surprise, works very well. If you compare colored point clouds from PrimeSense and from RealSense, the RealSense clouds are colored better.
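    Under the hood, such mapping functions implement standard pinhole-camera math. A hand-rolled sketch of what happens to a single depth pixel (our own illustration with hypothetical intrinsics and extrinsics, ignoring distortion - not the SDK's code):

        import numpy as np

        def depth_pixel_to_color_pixel(u, v, z_mm, K_depth, R, t, K_color):
            # 1. Deproject: depth pixel (u, v) plus its depth value becomes
            #    a 3D point in the depth camera's coordinate frame.
            fx, fy = K_depth[0, 0], K_depth[1, 1]
            cx, cy = K_depth[0, 2], K_depth[1, 2]
            p = np.array([(u - cx) * z_mm / fx, (v - cy) * z_mm / fy, z_mm])
            # 2. Transform into the color camera's frame via the
            #    depth-to-color extrinsics (R, t), with t in millimetres.
            q = R @ p + t
            # 3. Project onto the color image plane to find which RGB pixel
            #    colors this point.
            uc = K_color[0, 0] * q[0] / q[2] + K_color[0, 2]
            vc = K_color[1, 1] * q[1] / q[2] + K_color[1, 2]
            return uc, vc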



    What did we end up with? Sensors and cameras stand in a circle around the subject. We calibrate them, recovering the positions of the sensors and cameras and their angles relative to each other. Optionally, we use Canon Rebel cameras for texturing, as the best price/quality tradeoff. There are no restrictions here, though, since we calibrate all the optical parameters ourselves, including distortion.
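    By distortion we mean the usual radial/tangential lens model. A sketch of how calibrated coefficients would be applied with OpenCV before projecting textures (illustrative, assuming K and dist come from a calibration like the one above):

        import cv2

        def undistort_photo(img, K, dist):
            # K: 3x3 intrinsic matrix; dist: (k1, k2, p1, p2, k3) lens
            # coefficients recovered during calibration. alpha=1 keeps the
            # full field of view of the source photo.
            h, w = img.shape[:2]
            new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1)
            return cv2.undistort(img, K, dist, None, new_K), new_K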



    The textures are projected and blended directly onto the 3D model, which is why the result is very sharp.
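    One standard way to keep projected textures sharp where several photos overlap is to weight each camera by how frontally it sees the surface. A sketch of such a cosine-based scheme (a common approach, not necessarily our exact blend):

        import numpy as np

        def blend_weights(normal, point, cam_positions):
            # Cameras that look at the surface head-on dominate the blend;
            # grazing views, which smear texture detail, contribute little.
            n = normal / np.linalg.norm(normal)
            w = []
            for c in cam_positions:
                view = c - point
                view = view / np.linalg.norm(view)
                w.append(max(float(np.dot(view, n)), 0.0) ** 2)
            w = np.array(w)
            return w / w.sum() if w.sum() > 0 else w

        # One head-on camera and one grazing camera: the first dominates.
        print(blend_weights(np.array([0., 0., 1.]), np.array([0., 0., 0.]),
                            [np.array([0., 0., 2.]), np.array([2., 0., .2])]))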



    The 3D model is built from the point clouds we collect from N sensors. Data capture takes 5 to 10 seconds (depending on the type and number of sensors), and we get a complete model of the object!
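    Merging the N per-sensor clouds then reduces to applying each sensor's calibrated pose; a minimal sketch (poses assumed already recovered by the calibration step):

        import numpy as np

        def merge_clouds(clouds, poses):
            # clouds: list of (Ni, 3) point arrays in each sensor's frame.
            # poses:  list of (R, t) pairs mapping sensor frame -> common
            #         frame, found once during stationary calibration.
            return np.vstack([pts @ R.T + t for pts, (R, t) in zip(clouds, poses)])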




    Preview of the 3D model:

    This year we plan to release four products at once: Cappasity Human 3D Scan for scanning people, Cappasity Room 3D Scan for rooms, Cappasity Furniture 3D Scan for furniture, and Cappasity Portable 3D Scan for scanning with a single sensor.



    Cappasity Portable 3D Scan will be released in just a couple of months for laptops with Intel RealSense. We will present it at GDC 2015, held March 4th in San Francisco. It lets you create high-quality 3D models using a turntable or by rotating the object manually. And if you have a camera, it will also let you create high-resolution textures.



    Why did we choose Intel RealSense? We settled on Intel's technology for several reasons:
    • about 15 laptop models already support RealSense;
    • tablets with RealSense are planned;
    • it opens the way to B2C sales - a new monetization direction for our products;
    • there is a high-quality SDK;
    • the sensors are fast.

    No small role is also played by the good technical, marketing, and business support from Intel itself. We have worked closely together since 2007, and there has never been a situation where we could not get an answer to a question from our colleagues at Intel.

    If we consider RealSense technology from the standpoint of 3D scanning at distances up to one meter, we can safely call Intel the leader in this field.

    Undoubtedly, modern technology opens up great opportunities for working with the depth of the world around us!

    We regularly post new materials about our progress on our Facebook and Twitter pages.
