Virtual Cinematography for VR Trailers

After creating mixed reality trailers for Fantastic Contraption and Job Simulator, I wanted to dive deeper into virtual cinematography by shooting an entire trailer inside VR rather than mixing live-action footage with virtual reality.
I had already tried the idea of shooting the game avatars instead of the players themselves against a green screen, on a smaller scale, in the Fantastic Contraption mixed reality trailer. I wanted to develop the idea and see whether we could record an entire trailer this way, using different focal lengths and camera moves just as in traditional filmmaking.
Mixed reality trailers look great, but producing high-quality mixed reality video takes an incredible amount of time and money, and the shoots are logistically complex from both a technical and a creative standpoint. Shooting an avatar from a third-person view, much like conventionally filming actors, has many advantages, and in many cases it can be a far better way to showcase a VR game or project.
Why first-person VR footage (usually) doesn't work
There are reasons why films are not shot in the first person. When we watch actors performing in front of us, our brains respond emotionally; we don't respond the same way when we see the world through someone else's eyes. When we can't see the actors' bodies and how they fit into the environment, a lot of nuance is lost. Virtual reality is no different, yet first-person footage is, unfortunately, the usual way VR games are demonstrated and advertised. For many technical and creative reasons, that footage ends up emotionally "flat".
First-person recording is the built-in default for the current generation of VR hardware and software, so many people use it simply because it's easy to produce. But most raw first-person VR footage is hard to watch. The player's brain and eyes naturally cancel out the hundreds of micro-movements of the head, so the view feels smooth from inside the headset. Look at YouTube, though, and most first-person VR recordings are a jittery mess. A person's head moves far more than they realize, so the 2D recording looks shaky and is hard to follow.
If you absolutely must use first-person footage, create a separate camera that takes the headset's tracking data and smooths it for comfortable viewing. We used such a camera for the Job Simulator trailer, and it worked out very well.
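To make this concrete, here is a minimal, engine-agnostic sketch in Python/numpy of what such a smoothing camera can look like. This is my own illustration under assumed names and parameters, not the actual code from these trailers: each frame, the recording camera eases toward the raw headset pose instead of copying it one-to-one, which filters out the head's micro-movements.

```python
# Sketch: a second in-game camera that chases the HMD pose instead of
# copying it. In practice this would live in your engine's per-frame
# update loop; all names here are illustrative.
import numpy as np

def slerp(q0, q1, t):
    """Spherical interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:              # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly identical: lerp and renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

class SmoothedCamera:
    """Trails the raw HMD pose, filtering out head micro-movements."""
    def __init__(self, pos_speed=4.0, rot_speed=3.0):
        self.pos_speed = pos_speed   # higher = tighter follow
        self.rot_speed = rot_speed
        self.pos = np.zeros(3)
        self.rot = np.array([1.0, 0.0, 0.0, 0.0])  # w, x, y, z

    def update(self, hmd_pos, hmd_rot, dt):
        """hmd_pos: np.array(3), hmd_rot: unit quaternion, dt: seconds."""
        # Frame-rate-independent exponential smoothing factors.
        k_pos = 1.0 - np.exp(-self.pos_speed * dt)
        k_rot = 1.0 - np.exp(-self.rot_speed * dt)
        self.pos += k_pos * (hmd_pos - self.pos)
        self.rot = slerp(self.rot, hmd_rot, k_rot)
        return self.pos, self.rot
```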
Ideally, you want the player in the virtual environment to make the strongest possible visual impact, so that the viewer becomes emotionally attached to them. That can be achieved with a mixed reality trailer or by filming the game avatar, depending on the project's needs and budget.
Recording VR from a third-person perspective lets you create dynamic, interesting shots that capture what the gameplay actually feels like. Space Pirate Trainer is a great example: how the game looks in first person is not how it feels to play. Check out the examples below. The third-person footage looks lively, cinematic, dynamic, and emotionally engaging. The first-person footage is cluttered, confusing, and visually uninteresting. Shooting the player from a third-person view, whether in mixed reality or by recording the game avatar, solves many of these communication problems.
Third-Person Shooting GIF

Third person camera
This footage is lively, cinematic, and dynamic. The camera watches the player's reaction to the flying droid, then anticipates the move toward the large droid, creating an interesting, convincing shot.
First Person Shooting GIF

First person camera
It's hard to tell what's going on in this video. The weapons and shield cover most of the frame. It's uncinematic, lifeless, and uninteresting to watch.
Here is an example from the Fantastic Contraption trailer for Oculus Touch. Which video looks more dynamic and convincing? Which best conveys what is happening in the game?
Third-Person Shooting GIF

Third person camera
This video feels dynamic and is fun to watch. The camera starts on a wide shot, then pushes in, timed exactly to the moment the player grabs a wheel from Neko, which shows that the cat serves as the game's toolbox. The camera then moves on and focuses on the player's contraption, following the movements of their hands.
First Person Shooting GIF

First person camera
This video looks flat and boring, and worst of all, the angle of the player's head is nauseating. Tilting your head inside the headset feels completely natural to the player, but it makes the resulting 2D footage uncomfortable. Neko gets lost in the green grass, and it's unclear where the player is taking the parts from.
Here is another example, from Space Pirate Trainer. The gameplay consists of moving around the play space, shooting, and dodging bullets that fly at the player from every angle. From a first-person view it is almost impossible to convey how the game is played, because there is no context for how the player moves through the space.
Third-Person Shooting GIF

Third person camera
This footage clearly conveys the in-game events: a swarm of droids fires at the player, who dodges the shots in slow motion.
First Person Shooting GIF

First person camera
It's very hard to tell what the player is physically doing under fire. They dodge the bullets by striding to the right across the play space, but visually that is far from obvious; the droids just seem to shoot off to the left and miss.
Why shoot a game avatar instead of shooting in mixed reality
Mixed reality trailers convey perfectly what a player feels in VR. But creating a professional-looking mixed reality trailer takes a lot of money, time, and resources. Owlchemy Labs' mixed reality tooling should reduce that complexity as it reaches more users, though.
Shooting a game avatar has many advantages:
It is more cost-effective: all you need is a third (wired or wireless) Vive controller, a gimbal (or even a cheap Steadicam like the one I used, see image), developer time to create a game avatar in the appropriate style, and some extra time to record the trailer in VR. You eliminate the cost of buying / renting / building / lighting a green-screen stage, along with the crew needed for a shoot and post-production of that scale.

For the Space Pirate Trainer trailer, all I had to do was ask someone to play the game while I filmed.
You have more time to experiment: when you are working with 5-10 people all waiting on your decisions, time is money. If you can shoot a trailer with a smaller team over more days, there is far more room for experimentation and exploration. Shooting all the footage needed for the Fantastic Contraption and Space Pirate Trainer trailers took us three days each.
Changing the script on the fly: when shooting in a studio, you have very little room to experiment; you are under the pressure of the clock and the shot list. We had no such stress, so we could take breaks, review the footage, and see what was working during the shoot and what needed a change of course.
It's amazing how something that seems right inside the headset during play can look awful when shot from a third-person perspective. Framing matters, and where the player stands in the world can completely change the look and feel of a shot. Even small details, such as the angle of the wrist, can significantly affect how natural the avatar looks. Some movements can break the inverse kinematics of the animation, so it is important to understand how and where to position the body. That is why having both the player and the camera operator keep an eye on the shot greatly improves the footage.
A fresh vs. tired actor: many people don't think about it, but playing VR is physically tiring, especially in games like Space Pirate Trainer. It is very hard for a player to keep performing at a high level while recording a game with so many random elements; they are exhausted after 30-45 minutes of gameplay. Without a tight shooting schedule, everyone can relax, take breaks, and approach the whole process far more calmly.
It is remarkable how strongly an avatar driven by only three tracked points (one on the head and two on the hands) can convey emotion. If a player is tired or not emotionally invested, it shows clearly in the recording. When filming a live actor on a stage for a mixed reality trailer, the fatigue factor multiplies, and it takes professional actors to sustain a high level of performance for hours at a stretch.
The character fits perfectly into the game world: building a physical costume to match the Space Pirate Trainer avatar, one that lets the viewer feel like part of the game world, would take an incredible amount of time and money. Had we shot our trailer in mixed reality, a player in ordinary jeans and a T-shirt would have looked out of place surrounded by all those futuristic robots. We would have had to dress them in a costume, and even then we could not come close to what is easily achieved in the game itself, because compositing a live actor into a game environment never looks as natural as an avatar that is part of the game world.
Variety is very important: watching a game from a single point of view (the player's) quickly gets tiresome. To make a trailer or video that is interesting to watch, you need several takes from different angles, to give the viewer an understanding of what they are looking at and how it fits into the game world. That is almost impossible if you are limited to first-person recordings from the player's eyes. For games with fast, dense gameplay, showing the action from different viewpoints is the only way to visualize it properly.
A sense of scale in Fantastic Contraption
One of the things we wanted to emphasize in the Oculus Touch version of Fantastic Contraption is the game's sense of scale. From a first-person perspective, it is quite easy to lose track of how big the in-game contraptions really are. Shooting the avatar in third person let us show the viewer how tall the player is in the context of the rest of the game world.
Third-Person Shooting GIF

This shot clearly shows the contraption's size in the context of the game; you can see how small it is next to the player. The camera draws the viewer's attention to the top of the frame, where the contraption is headed. The player's emotions when the contraption reaches the goal also read clearly.
First Person Shooting GIF

First person camera
Until the controller appears in frame at the end of the clip, it is almost impossible to judge the contraption's true size: it could be huge and far away from the player. The head tilting and moving also makes this footage hard to watch. And even though this is the same take, just seen from first person, it is impossible to read the player's emotions.
Shooting the PSVR and Oculus versions simultaneously
Another minor issue with Fantastic Contraption was that we wanted to use the trailer for both the upcoming PSVR release and Oculus. But how do you shoot two trailers at once when the controllers on those platforms look completely different? Lindsay Jorgensen came up with a terrific solution: split the screen into rectangles and render the first- and third-person views with the Oculus controllers in two of the boxes, and with the PSVR controllers in the other two.
Rendering GIF

This worked surprisingly well: 80% of the shots needed only a simple reframe to produce the PSVR version. We did have to remove every reference to room-scale movement from the PSVR version, so the only large piece that required a complete reshoot was the moment Pegasus builds the first contraption. Other shots needed minor changes too, but overall this solution suited us perfectly.
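Here is a small sketch of how that split can be laid out, under my own assumptions about the wiring; the quadrant assignment and helper names are hypothetical illustrations (in Unity terms, each of the four cameras would get one of these normalized viewport rects):

```python
# Sketch: render four camera variants into the quadrants of one output,
# so a single take yields both the Oculus and PSVR versions of a shot.

# (x, y, width, height) in normalized [0, 1] screen coordinates.
QUADRANTS = {
    ("oculus", "first_person"): (0.0, 0.5, 0.5, 0.5),  # top-left
    ("oculus", "third_person"): (0.5, 0.5, 0.5, 0.5),  # top-right
    ("psvr",   "first_person"): (0.0, 0.0, 0.5, 0.5),  # bottom-left
    ("psvr",   "third_person"): (0.5, 0.0, 0.5, 0.5),  # bottom-right
}

def crop_rect(key, out_w, out_h):
    """Pixel crop for one variant, for pulling the right box in editing."""
    x, y, w, h = QUADRANTS[key]
    return (int(x * out_w), int(y * out_h), int(w * out_w), int(h * out_h))

# Recording at 4K leaves each quadrant at a full 1080p after cropping:
print(crop_rect(("psvr", "third_person"), 3840, 2160))  # (1920, 0, 1920, 1080)
```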
Of course, we shot both trailers on the Vive; there was no other way to do it. We needed to track the positions of both the camera operator and the player within the room, and there is currently no way to do that with Oculus or PSVR.
The Space Pirate Trainer character rig
The team at I-Illusions created a game avatar and drove it with an inverse kinematics rig that poses the character (legs included) from the positions of the headset and controllers. It worked so well that while editing the footage, some passages looked exactly like motion capture.
Right now, the rig's biggest problem is that the legs do not always look natural. If someone releases a full-body tracking system with points for the legs, knees, wrists, and elbows (along with a Unity / Unreal plug-in to interpret the data), in-game recording will become much easier to implement, and the motion capture quality could rival ILM's iMocap system, where a single shot costs more than my house. I feel it is only a matter of time before such a system becomes the standard for animating VR avatars.
But until that becomes a reality, we worked around it by shooting the avatar mostly from the waist up, to avoid problems with how the model displays below the waist. We show the avatar in full only in wide shots.
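To illustrate the core of what such a rig does, here is a toy two-bone IK solver in Python. It is deliberately 2D and not the I-Illusions rig; real avatar rigs solve this in 3D and add heuristics for the elbow's swivel direction, but the law-of-cosines idea is the same: from just the hand (controller) position, recover plausible shoulder and elbow angles.

```python
# Toy analytic two-bone IK: place the tip of an upper-arm + forearm
# chain at a target, shoulder fixed at the origin. Names illustrative.
import math

def two_bone_ik(target_x, target_y, upper_len, lower_len):
    """Return (shoulder_angle, elbow_bend) in radians."""
    dist = math.hypot(target_x, target_y)
    # Clamp to the reachable range so the arm straightens instead of breaking.
    dist = min(max(dist, abs(upper_len - lower_len) + 1e-6),
               upper_len + lower_len - 1e-6)
    # Law of cosines gives the interior angles of the arm triangle.
    cos_e = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_e)))
    cos_s = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_s)))
    return shoulder, elbow

# Hand (controller) at (0.5, 0.3) m, with a 0.3 m upper arm and forearm:
print(two_bone_ik(0.5, 0.3, 0.3, 0.3))
```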
Images from the trailer, GIF

Camera control
Then we needed to add camera control. We used a third, wireless Vive controller as the in-game camera. I took that camera, mounted it on my cheap Steadicam, and we started shooting. The game has a slider that changes the field of view, so we could shoot with a wide-angle or telephoto lens, and the camera's position and rotation were smoothed. We could also hide the score and lives indicators, the play-area floor boundaries, and the in-game billboards, so there was nothing distracting on screen. The screen was split into two rectangles, one above the other, which also gave us a smoothed first-person view; in the end, though, we never used any of that first-person footage.
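The field-of-view slider maps naturally onto the standard pinhole-lens relationship. Here is a sketch of the conversion, with an assumed slider range and sensor size, turning a 35 mm-style focal length into the vertical FOV angle a game camera expects:

```python
# Sketch: map a "lens" slider to camera field of view via the pinhole
# relation fov = 2 * atan(sensor / (2 * focal_length)).
import math

SENSOR_HEIGHT_MM = 24.0  # full-frame 35 mm sensor height (assumed)

def focal_length_to_vfov(focal_mm):
    """Vertical FOV in degrees for a given focal length."""
    return math.degrees(2.0 * math.atan(SENSOR_HEIGHT_MM / (2.0 * focal_mm)))

def slider_to_fov(t, short_mm=18.0, long_mm=100.0):
    """Map a 0..1 in-game slider to FOV, interpolating in focal length
    so the zoom feels like racking a real lens."""
    focal = short_mm + t * (long_mm - short_mm)
    return focal_length_to_vfov(focal)

print(focal_length_to_vfov(18.0))   # ~67 degrees: wide-angle end
print(focal_length_to_vfov(100.0))  # ~14 degrees: telephoto end
```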
Droid camera
GIF

Dirk also added the ability to attach the camera to one of the game's droids. With enough smoothing applied, we could capture panning shots of the entire virtual environment "for free", with almost no interruption to the shoot.
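I don't know exactly how Dirk implemented it, but a droid-mounted camera of this kind usually amounts to a heavily damped follow position plus a look-at constraint. A rough sketch, with illustrative names:

```python
# Sketch: chase a point offset from the droid, heavily damped, while
# always aiming at a focus target, so the droid's darting motion reads
# as a smooth crane-like move. Positions are numpy arrays of length 3.
import numpy as np

class DroidCamera:
    def __init__(self, offset, follow_speed=1.5):
        self.offset = np.asarray(offset, dtype=float)  # rig position relative to droid
        self.follow_speed = follow_speed               # low = heavy damping
        self.pos = None

    def update(self, droid_pos, focus_pos, dt):
        goal = droid_pos + self.offset
        if self.pos is None:                          # snap on first frame
            self.pos = goal.copy()
        k = 1.0 - np.exp(-self.follow_speed * dt)    # frame-rate independent
        self.pos += k * (goal - self.pos)
        forward = focus_pos - self.pos               # aim at the focus target
        return self.pos, forward / np.linalg.norm(forward)
```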
Xbox controller camera
GIF

We could also drive the third-person camera with a regular Xbox controller. This let us place cameras where it was physically impossible to reach and capture cinematic wide shots of the player in the environment.
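A gamepad camera like this is essentially just stick input integrated into position and yaw each frame. A minimal sketch, again with names of my own:

```python
# Sketch: a gamepad fly-cam. Left stick strafes/advances, right stick
# yaws and changes height. Stick axes are in -1..1.
import math

class FlyCam:
    def __init__(self, move_speed=3.0, turn_speed=90.0):
        self.x = self.y = self.z = 0.0
        self.yaw = 0.0                   # degrees, 0 = facing +z
        self.move_speed = move_speed     # metres per second
        self.turn_speed = turn_speed     # degrees per second

    def update(self, lx, ly, rx, ry, dt):
        self.yaw += rx * self.turn_speed * dt
        s = math.sin(math.radians(self.yaw))
        c = math.cos(math.radians(self.yaw))
        # Move relative to where the camera is currently facing.
        self.x += (lx * c + ly * s) * self.move_speed * dt
        self.z += (-lx * s + ly * c) * self.move_speed * dt
        self.y += ry * self.move_speed * dt
        return (self.x, self.y, self.z), self.yaw
```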
Experimenting and getting good shots
This video shows how we shot "over the shoulder" and demonstrates how much freedom we had to experiment with different ideas. We wanted to get a first-person-shooter-style shot, for example during a weapon switch, but I had trouble filming my friend Vince: he was waving his arms around, and keeping the gun in frame at such a long focal length was almost impossible.
So Vince came up with the idea of taking the Steadicam in his free hand and pressing it against his body. With him holding the camera, I simply offset the virtual camera to the position and angle we needed, and we got a great shot, filmed "over the shoulder" of the player!
Comparison of mixed reality and game avatars
I am very inspired by the prospects of this kind of virtual cinematography in VR. The technique is not new: James Cameron used virtual cinematography extensively to shoot Avatar, and it was refined further on The Jungle Book, but the equipment used there costs hundreds of thousands of dollars and is much harder to use. Now we can do almost the same thing just by buying a Vive and an extra controller and spending a little developer time. It is unbelievable that we can now, in principle, shoot virtual productions in real time from a basement! The next step is to use several virtual cameras, so that different people can shoot at the same time.
Mixed reality is still an amazing technique and a working tool. But I believe every project should weigh all the options and choose what best suits the particular game. Take Rec Room: the whole game has a cartoonish style, so a live person inside that environment would look alien next to the game avatars. Google Earth, on the other hand, has no such stylization, so it makes sense to put a real person into the world (even if the mixed reality shots are simulated).
As with regular game trailers, every VR game calls for its own approach; there are no universal solutions. Think about what works best for your game or project and produces the most interesting, exciting, and convincing result.