
Using the Intel RealSense Camera with TouchDesigner: Part 2

© Rolf Arnold
The Intel RealSense camera is a very useful tool for creating virtual reality and augmented reality projects. In this second part of the article, you will learn how to use the Intel RealSense camera nodes in TouchDesigner to set up real-time rendering or projection for multi-screen systems, single screens, 180-degree (full-dome) systems, and 360-degree virtual reality systems. In addition, data from the Intel RealSense camera can be sent to the Oculus Rift using the Oculus Rift TOP node in TouchDesigner.
This second part is devoted to the RealSense CHOP node in TouchDesigner.
Access to the most important tracking features of the RealSense F200 and R200 cameras, such as eye, finger, and face tracking, is provided by the RealSense CHOP node in TouchDesigner. These tracking functions are especially interesting for real-time animation, or for driving animation with body movements and human gestures. This seems to me most useful for performances by dancers or musicians, where a high level of interactivity between live video, animation, graphics, sound, and performance is required.
To get the TouchDesigner (.toe) files associated with this article, click here. A free copy of TouchDesigner is also available for non-commercial use. It is fully functional, except that the maximum resolution cannot exceed 1280 x 1280.
Once again, TouchDesigner is even more versatile and powerful with Intel RealSense camera support.
Note. Like the first part of this article, the second part is intended for users who are already familiar with TouchDesigner and its interface. If you have no experience with TouchDesigner and intend to work through this article step by step, I recommend that you first look at the documentation available here: Explore TouchDesigner.
Note. When using an Intel RealSense camera, keep its operating range in mind for optimal results. This Intel webpage lists the range of each camera model and gives recommendations on using the cameras.
Historical background
All the data provided by Intel RealSense cameras is very useful for creating virtual reality and augmented reality. Some attempts at what the Intel RealSense camera does today date back to the 1980s. Hand-tracking technology was developed in the 1980s in the form of a data glove, invented by Jaron Lanier and Thomas G. Zimmerman. In 1989, Nintendo launched the Power Glove, the first glove-based game controller, which connected by wire to a Nintendo game console.
The devices whose development led to the creation of Intel RealSense cameras were originally intended for animating performances: motion-capture technologies were used to transform a person's performance into mathematical data, that is, into digital information. Motion capture has been used since the 1970s in research projects at various universities, as well as by the military for training. One of the first animated films created using motion capture was Sexy Robot, made in 1985 by Robert Abel and his colleagues. In the Sexy Robot video, several technologies were used to obtain the information from which a digital model of the robot was created and animated. First, a physical robot model was built. It was measured from all sides, and the information describing it was converted into digital form: the RealSense camera achieves similar results when scanning objects. To calculate the movement, points were drawn on the actor, and their motion was mapped onto the motion of a digital skeleton: a vector animation was created that drove the digital model. The RealSense camera has an infrared camera and an infrared laser projector, which let it gather data for digital models and motion tracking. The tracking capabilities of the Intel RealSense camera are quite advanced.
Intel RealSense Cameras
There are currently two models of Intel RealSense camera. They perform similar functions but differ in several ways: the Intel RealSense F200, for which the exercises in this article are intended, and the Intel RealSense R200.
The Intel RealSense R200 offers important advantages due to its compact size. It is designed to be mounted on a tripod or on the back of a tablet. The camera lens is thus aimed not at the user but at the outside world, and thanks to its improved capture capabilities, the camera's field of view covers a wider area. This camera also has improved depth-measurement capabilities. It is very interesting to use for augmented reality projects because it supports scene perception, which allows you to add virtual objects to a captured scene of the real world. You can also overlay virtual information on the live image. Unlike the F200, the R200 does not support finger, hand, and face tracking.
Intel RealSense Cameras in TouchDesigner
TouchDesigner pairs ideally with the Intel RealSense camera: there is a direct connection between the user's facial expressions and hand movements and the software interface. TouchDesigner can directly use this tracking and position data, as well as the depth, color, and infrared data transmitted by the Intel RealSense camera. Intel RealSense cameras are very light and compact, especially the R200, which can easily be placed near performers without the audience noticing.
Adam Berg, a Leviathan researcher working on a project that uses the Intel RealSense camera with TouchDesigner to create interactive installations, says: "With its compact size and simple design, the camera is great for interactive solutions. The camera is unobtrusive, and infrastructure requirements are simplified, since the camera does not require an external power source. In addition, we liked the low latency of the depth image. TouchDesigner is an excellent platform to work in, from creating the initial prototype to developing the final version. It has built-in support for five cameras, high-performance multimedia playback, and convenient features for working with shaders. And, of course, the great support should be noted."
Using Intel RealSense Camera in TouchDesigner
In this second part, we look at the RealSense CHOP node in TouchDesigner.
RealSense CHOP Node
The RealSense CHOP node manages 3D tracking and position data. The CHOP node provides two types of information. (1) World-space position (measured in meters, with accuracy down to millimeters) is used for translations along the x, y, and z axes. Rotations around the x, y, and z axes are reported by the RealSense CHOP as Euler angles in degrees. (2) The RealSense CHOP node also takes pixels of the input image and converts them to normalized UV coordinates. This is useful for image tracking.
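As a quick illustration of the first kind of data, here is how such channels might be read from a script. This is a minimal sketch, assuming a RealSense CHOP named realsense1 with hand tracking enabled; the channel names follow the pattern used later in this article, but verify them in the CHOP viewer.

    # Read the world-space wrist position (in meters, relative to the
    # camera) from the RealSense CHOP. 'realsense1' is an assumed name.
    rs = op('realsense1')
    x = rs['hand_r/wrist:tx'].eval()
    y = rs['hand_r/wrist:ty'].eval()
    z = rs['hand_r/wrist:tz'].eval()
    print('right wrist (m):', x, y, z)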
The RealSense CHOP node has two configurable modes: finger/face tracking and marker tracking.
- In finger/face tracking, you select which items to track. You can limit the list of tracked items to a single aspect and then, by connecting a Select CHOP node to the RealSense CHOP node, narrow the list further so that only eyebrow or eye movements are tracked.
- Marker tracking lets you upload an image and then track that item wherever it appears.
Using the RealSense CHOP Node in TouchDesigner
Demo 1: Using Tracking
This simple first demo of the RealSense CHOP node shows how to connect it to other nodes and use it to track and create movement. Note again that a basic knowledge of TouchDesigner is sufficient for these demos. If you have no experience with TouchDesigner and intend to work through this article step by step, I recommend that you first look at the documentation available here: Explore TouchDesigner.
1. Create the nodes we need and arrange them in a horizontal line in the following order: Geo COMP node, RealSense CHOP node, Select CHOP node, Math CHOP node, Lag CHOP node, Out CHOP node and Trail CHOP node.
2. Connect the RealSense CHOP node to the Select CHOP node, the Select CHOP node to the Math CHOP node, the Math CHOP node to the Lag CHOP node, the Lag CHOP node to the Out CHOP node, and the Out CHOP node to the Trail CHOP node.
3. Open the Setup page of the RealSense CHOP node and make sure that the Hands World Position parameter is set to On. This outputs the spatial positions of the tracked hand joints, in meters relative to the camera.
4. On the Select parameters page of the Select CHOP node, set the Channel Names parameter to hand_r/wrist:tx, choosing it from the available values in the drop-down list to the right of the parameter.
5. In the Rename From parameter, type hand_r/wrist:tx, then type x in the Rename To parameter.

Figure 1. Channels from the RealSense CHOP node are selected in the Select CHOP node
6. In the To Range parameter on the Range page of the Math CHOP node, enter 0, 100. For a smaller range of motion, enter a number less than 100.
7. Select the Geometry COMP node and make sure its Xform parameters page is open. Click the + button in the lower-right corner of the Out CHOP node to enable viewer mode. Drag the x channel onto the Translate X parameter of the Geometry COMP node and select Export CHOP from the drop-down menu.

Figure 2. Here you add the animation obtained from the RealSense CHOP
To render the geometry, you need a Camera COMP node, a Material (MAT) node (I used a Wireframe MAT), a Light COMP node, and a Render TOP node. Add these nodes to render the project.
8. In the Camera COMP node on the Xform parameters page, set the Translate Z parameter to 10. This will allow you to better see the movement of the created geometry, since the camera moves backward along the Z axis.
9. Swipe your hand in front of the camera and watch the geometry move in the Render TOP node.

Figure 3. How the nodes are connected. Thanks to the Trail CHOP node at the end, you can see the animation as a graph

Figure 4. The x translate value of the Geometry COMP node was exported from the x channel of the Out CHOP node, which was passed down the chain from the Select CHOP node
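As an alternative to the drag-and-drop export in step 7, the same link can be typed as a parameter expression. This is a minimal sketch, assuming the Out CHOP has the default name out1 and the channel was renamed to x in step 5; enter it in expression mode on the Translate X parameter of the Geometry COMP.

    # Expression for Geometry COMP > Xform > Translate X, equivalent to
    # the CHOP export in step 7 ('out1' is the assumed Out CHOP name):
    op('out1')['x']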
Demo 2. RealSense CHOP Marker Tracking
In this demo, we will use the marker-tracking feature of the RealSense CHOP to show how an image can be used for tracking. You will need two copies of one image: a printed copy and a digital copy, and they must match exactly. You can start with a digital file and print it, or scan a printed image to obtain the digital version.
1. Add a RealSense CHOP node to the scene.
2. On the Setup parameters page of the RealSense CHOP node, set the Mode parameter to Marker Tracking.
3. Create a Movie File In TOP node.
4. On the Play parameters page of the Movie File In TOP node, in the File parameter, select and load a digital image that you also have as a printed copy.
5. Drag the Movie File In TOP onto the Setup page of the RealSense CHOP node, into the Marker Image TOP cell at the bottom of the page.
6. Create the nodes Geometry COMP, Camera COMP, Light COMP, and Render TOP.
7. As was done in step 7 of demo 1, export the tx channel from the RealSense CHOP and drag it onto the Translate X parameter of the Geometry COMP node.
8. Create a Reorder TOP node and connect the Render TOP node to it. On the Reorder parameters page, select One from the Output Alpha drop-down list.
9. Place the printed copy of the digital file in front of the Intel RealSense camera and move it around. The camera will track the movement and pass it to the Render TOP node; the numbers in the RealSense CHOP will change as well.

Figure 5. The complete network for the marker-tracking demo

Figure 6. On the Geo COMP node's parameters page, the tx channel from the RealSense CHOP node is dragged onto the Translate X parameter
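To confirm that the marker is really being tracked, you can watch the RealSense CHOP's channels from a CHOP Execute DAT. This is a minimal sketch: point the DAT's CHOP parameter at the RealSense CHOP and enable Value Change; the marker channel names themselves should be read from the CHOP viewer.

    # CHOP Execute DAT callback: fires whenever a watched channel changes.
    def onValueChange(channel, sampleIndex, val, prev):
        # Print the channel that moved, e.g. the marker position values,
        # to confirm the printed image is being detected and followed.
        print(channel.name, '=', val)
        return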
Eye tracking in TouchDesigner using the RealSense CHOP node
In the TouchDesigner palette, in the RealSense section, there is an eyeTracking template that can be used to track the user's eye movements. This template uses the RealSense CHOP node's finger/face tracking; the RealSense TOP node must be set to Color. In the template, green wireframe rectangles move in accordance with the person's eye movements and are superimposed on the color image of the person in the RealSense TOP. Instead of open green rectangles, you can use any other geometric shapes or particles. This is a very convenient template. Here is an image of the template.

Figure 7. Note that the eyes are tracked even through glasses.
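If you prefer to build a similar setup yourself rather than start from the template, the core idea is to narrow the RealSense CHOP's output down to the eye channels with a Select CHOP. This is a hedged sketch: the internal parameter name channames and the wildcard pattern are assumptions, and the real channel names should be read from the RealSense CHOP's viewer.

    # Restrict a Select CHOP (assumed name 'select1') to eye-related
    # channels coming from the RealSense CHOP's face tracking.
    op('select1').par.channames = '*eye*'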
Demo 3, Part 1. An easy way to set up full-dome rendering or virtual reality
In this demo, we take a file and show how to present it as a full-dome rendering and as 360-degree virtual reality. I have already prepared this file for download: chopRealSense_FullDome_VR_render.toe.
Brief description of the process of creating this file
In this file I wanted to place geometric shapes (a sphere, a torus, cylinders, and boxes) in the scene, so I created several SOP nodes for these shapes. Each SOP node was attached to a Transform SOP node to move (transform) the shapes to different places in the scene. All the SOP nodes are connected to one Merge SOP node, and the Merge SOP node is fed into the Geometry COMP node.

Figure 8. The first step: laying out the geometric shapes placed in the scene of the downloadable file
Then I created a Grid SOP node and a SOP to DAT node. The SOP to DAT node is used to instance the Geometry COMP so that more geometric shapes can be added to the scene. I also created a Constant MAT node, selected a green color, and turned on the Wire Frame parameter on the Common page.

Figure 9. The SOP to DAT node was created using the Grid SOP node
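The instancing setup can also be expressed in Python. This is a rough sketch under stated assumptions: the parameter names (instancing, instanceop, instancetx, ...) and the P(0)/P(1)/P(2) position columns produced by the SOP to DAT follow TouchDesigner's instancing conventions but should be checked in your build, and geo1 and sopto1 are assumed node names.

    # Enable instancing on the Geometry COMP, one instance per row of the
    # SOP to DAT (that is, one copy of the geometry per Grid SOP point).
    geo = op('geo1')
    geo.par.instancing = True
    geo.par.instanceop = 'sopto1'
    geo.par.instancetx = 'P(0)'   # x position column from the SOP to DAT
    geo.par.instancety = 'P(1)'   # y position column
    geo.par.instancetz = 'P(2)'   # z position column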
Then I created the RealSense CHOP node and connected it to a Select CHOP node, in which I selected the hand_r/wrist:tx channel for tracking and renamed it x. I connected the Select CHOP to a Math CHOP node so that I could change the range, and connected the Math CHOP to a Null CHOP node. It is recommended to always terminate a chain with a Null or Out node to make it easier to insert new filters into the chain later. Then I exported the x channel from the Null CHOP node to the Scale X parameter of the Geometry COMP node. This gives control over the movement of all the geometric shapes along the x-axis in the scene when I swipe my right hand in front of the Intel RealSense camera.

Figure 10. Tracking data from the RealSense CHOP node is used to create real-time animation and movement of geometric shapes along the x axis
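The Math CHOP's range change can also be done inline in the parameter expression using tdu.remap. This is a minimal sketch, assuming the Null CHOP is named null1 and that roughly -0.3 m to 0.3 m of hand travel should map to a scale of 0 to 2 (both ranges are assumptions to tune).

    # Expression for Geometry COMP > Xform > Scale X: remap the wrist's
    # world x position (meters) into a usable scale range.
    tdu.remap(op('null1')['x'].eval(), -0.3, 0.3, 0.0, 2.0)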
To create a full-dome 180-degree rendering from this file
1. Create the Render TOP, Camera COMP, and Light COMP nodes.
2. On the Render parameters page of the Render TOP node, select Cube Map from the Render Mode drop-down menu.
3. On the Common parameters page of the Render TOP node, set Resolution to a 1:1 aspect ratio, for example 4096 x 4096 for 4K resolution.
4. Create a Projection TOP node and connect the Render TOP node to it.
5. On the Projection parameters page of the Projection TOP node, select Fish-Eye from the Output drop-down menu.
6. (Optional; this gives the file a black background.) Create a Reorder TOP node and, on the Reorder parameters page, select One in the right Output Alpha drop-down menu.
7. Everything is now ready either to perform the animation directly or to export a movie file; see the first part of this article for instructions. You are creating a circular fisheye animation for dome projection: a circle within a square.
To use the alternative method, return to step 2 and, instead of selecting Cube Map from the Render Mode drop-down menu, select Fish-Eye (180). Then continue with step 3 and, if desired, step 6. The animation is now ready to run or export; the same settings can also be applied from a script, as sketched below.
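For reference, here is how the settings from steps 2-5 might look in a script. This is a hedged sketch: the internal parameter names and menu tokens (rendermode, cubemap, fisheye) are assumptions to verify in your build, and render1/projection1 are assumed default node names.

    # Configure the dome render chain from Python.
    render = op('render1')
    render.par.rendermode = 'cubemap'   # step 2: render to a cube map
    render.par.resolutionw = 4096       # step 3: 1:1 aspect ratio (4K)
    render.par.resolutionh = 4096

    proj = op('projection1')
    proj.par.output = 'fisheye'         # step 5: cube map to 180-degree fish-eye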
To create 360-degree virtual reality from this file
1. Create the Render TOP, Camera COMP, and Light COMP nodes.
2. On the Render parameters page of the Render TOP node, select Cube Map from the Render Mode drop-down menu.
3. On the Common parameters page of the Render TOP node, set Resolution to a 1:1 aspect ratio, for example 4096 x 4096 for 4K resolution.
4. Create a Projection TOP node and connect the Render TOP node to it.
5. On the Projection parameters page of the Projection TOP node, select Equirectangular from the Output drop-down menu. This automatically sets the aspect ratio to 2:1.
6. (Optional; this gives the file a black background.) Create a Reorder TOP node, then on the Reorder parameters page, select One in the right Output Alpha drop-down menu.
7. Everything is now ready either to perform the animation directly or to export a movie file; see the first part of this article for instructions. When exporting the movie, you create a rectangular animation with a 2:1 aspect ratio for viewing in virtual reality glasses.
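A script variant differs from the dome sketch above only in the Projection TOP output. Again a hedged sketch; equirect is an assumed menu token to verify in your build.

    # Produce the 2:1 equirectangular frame from step 5 instead of a
    # fish-eye: only the Projection TOP's Output parameter changes.
    op('projection1').par.output = 'equirect'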

Figure 11. Long orange tubes (Tube SOPs) added to the file. You can add your own geometry to this file.
Oculus Rift output from TouchDesigner when using the Intel RealSense camera
TouchDesigner provides several downloadable templates that show how to set up the Oculus Rift in TouchDesigner. One of these templates, OculusRiftSimple.toe, can be found in the archive. To view the result in an Oculus Rift, the computer must of course be connected to one. Without an Oculus Rift, you can still create the file, view the images in the LeftEye Render TOP and RightEye Render TOP nodes, and display them in the background of the scene. I added Oculus Rift support to the file used in demo 3, so the Intel RealSense camera animates the image that I see in the Oculus Rift.

Figure 12. The left-eye and right-eye images are displayed here in the background. Most of the animation in this scene is driven by tracking data from the Intel RealSense camera's CHOP node. The file used to create this image, chopRealSense_FullDome_VRRender_FinalArticle2_OculusRiftSetUp.toe, can be downloaded by clicking the button in the upper-right corner of this article.