Why 3D Headache / Part 5: Geometric distortion in stereo

    S3D: No pain IS gain




    This is the fifth article in the series, and today we will talk about geometric distortions. A very common situation: a person buys a cheap stereo rig, joyfully starts shooting, and discovers that he simply cannot shoot in a way that does not give the audience a headache. And when our novice operator digs deeper into the subject, it turns out that seemingly simple things cannot be done with cheap cameras. Why is that? Why do you need expensive cameras? Is there any way to do without them? Why do problematic (in terms of geometric distortion) scenes end up even in 3D films shot with expensive cameras? Which films have the most such scenes? How does the situation change over the years? How do low-budget and high-budget films compare? And finally, in which situations (in terms of geometry) can a problematic shot be fixed on your own hardware?


    A brief excursion into how stereo rigs work


    From a layman's point of view, shooting stereo is very simple. All you need to do is take two cameras, buy an adapter that lets you mount not one but two cameras on a tripod and ... voila, we can shoot stereo:



    Such adapters on the world-famous AliExpress cost from 100 USD, their names include the word Professional (the most important thing for a beginner), and there is already a TV at home that shows 3D. The main expense is a second camera of the same model. Happiness is closer than ever.

    However, as you can guess, harsh reality soon sets in.

    As you remember, I regularly cite as an example a stereo festival that gathers a wide variety of films, from very professional to amateur ones. As a result, after watching the competition program, almost 100% of the audience walks out with a headache, because ... However, first things first.

    So, geometric distortions ... Let us shoot with the cheapest stereo rig, where two cameras stand side by side - the so-called side-by-side stereo rig.

    In cinema, by the laws of the cinematographer's art, you need to regularly alternate close-ups (for example, the actor's face), medium shots and wide shots (the whole scene). When we look at a person's face from up close in real life, it is noticeably distorted. Suppose the story requires us to shoot a dialogue and show a close-up of the face of a secondary character as the main character sees it, with the whole gamut of emotions on it. And all of this needs to be shot in stereo. And then it turns out that we cannot place a stereo rig with the appropriate lenses, say, a meter away from the actress: our cameras are so large that they can only stand at a noticeable distance from each other. And at such a short shooting distance this immediately produces huge parallaxes (you can do the calculation yourself, the geometry is quite manageable; more on this in the fourth part of the series).
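    To get a feel for the scale of the problem, here is a back-of-the-envelope estimate of on-screen parallax for a side-by-side rig with parallel axes (all numbers below are my own illustrative assumptions, not measurements from any real shoot; the proper formulas were discussed in the fourth part):

```python
# Back-of-the-envelope parallax estimate for a side-by-side rig with
# parallel axes (illustrative numbers, not data from the article).

def screen_parallax_mm(baseline_mm, focal_mm, distance_mm,
                       sensor_width_mm, screen_width_mm):
    """On-screen parallax of an object at `distance_mm` relative to a
    far background, for parallel axes: sensor disparity d = f * b / Z
    (classic pinhole approximation), then rescaled to the screen."""
    disparity_on_sensor = focal_mm * baseline_mm / distance_mm
    return disparity_on_sensor * screen_width_mm / sensor_width_mm

# Two bulky cameras side by side: roughly 12 cm between optical axes.
# A close-up shot from 1 m with a 50 mm lens on a 36 mm-wide sensor,
# shown on an 8 m-wide cinema screen.
p = screen_parallax_mm(baseline_mm=120, focal_mm=50, distance_mm=1000,
                       sensor_width_mm=36, screen_width_mm=8000)
print(f"parallax on screen: {p:.0f} mm")   # ~1333 mm

# For comparison: the human interocular distance is about 65 mm, so more
# than a metre of parallax between the face and the background is far
# outside any comfort zone - the "huge" parallaxes mentioned above.
```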

    Back in 1954, when Alfred Hitchcock needed a close-up with a telephone for his rather successful stereo film "Dial M for Murder", a mockup was built specifically for the shoot:



    As journalists write, "To shoot the phone in close-up, large finger and phone mockups were built, since the camera could not focus on a regular phone" (the text is from Wikipedia, but journalists usually explain things at about the same level of understanding of the subject ;). The natural thing in those years would have been to put on the right lenses and simply shoot a close-up of the phone. But then it would have been impossible to avoid geometric distortions that are unacceptable and painfully perceived in stereo. As a result, to get high-quality shots it was easier to build large mockups than small cameras and lenses. And that is what was done. From the point of view of amateurs, such expenses are a pointless waste of time and effort; from the point of view of a professional, it is the most cost-effective way (for that time) to get comfortable stereo. Let's figure out why.

    But can't we just put on telephoto lenses and shoot everything from a greater distance? Then the angles and parallaxes will be within reasonable limits! Yes, we can, but at least four factors get in the way here. Firstly, when we look at a face from up close, it is noticeably distorted:


    The same girl's face shot with different lenses from different distances; it is clearly visible that in the close-up the face is distorted - note, among other things, how the ears look.
    Image source: http://lens-club.ru/public/files/users/image/portretc.jpg

    This facial distortion is perceived as perfectly normal: we are standing next to a person, his face is three-dimensional, and that is exactly what our brain expects. Our hapless operator who bought the cheapest rig has no option but to shoot the close-up from a distance much greater than in a real conversation. Because of this, firstly, we suddenly begin to see, for example, the ears quite differently from the way we see them in reality. In the example above, note that in the first photo we practically do not see the ears, in the second and third they are noticeably "pressed in", and only in the last do they noticeably "stand out", even though the angle of the shot is, of course, one and the same. As a result, such a close-up shot from a distance is perceived as strange, as something unnatural, though it is not immediately clear what exactly. Secondly, and this is also visible, the face becomes "flat". That is, yes, we see that the face is in the foreground and the background is somewhere farther away, but the difference in parallax between, say, the nose and the ears disappears. The person turns into "cardboard" (the so-called "cardboard effect") or, very close to it, the "coulisse effect" (flat planes at different depths, like stage scenery); those who wish can find more detailed articles on this topic on the Internet. This, too, is perceived as "strange stereo", somewhat uncomfortable, though again it is not clear why exactly.

    But that is not all. Thirdly, when we try to shoot a close-up from a great distance, the background also becomes noticeably distorted, because it is unnaturally magnified. The idea of this distortion is well demonstrated in this example:



    And again, this is not all. Fourthly, when shooting a close-up with a rig whose cameras sit noticeably farther apart than our eyes, novice operators are often forced to converge the optical axes of the cameras, that is, to shoot on converged ("toed-in") rather than parallel axes. This can be used, but you need to know how to do it correctly (which is nontrivial). A detailed discussion is beyond the scope of this article; we will only say that by shooting a close-up on converged axes we practically doom ourselves to vertical parallax and rather unpleasant distortions of the shape of objects in the image:


    When shooting on converged axes, we are doomed to get vertical parallax and distortions of the shape of objects in the image. A picture that is perceived perfectly normally in real life (since the brain can compensate for such distortions) will look uncomfortable in a movie theater. The effect can be seriously reduced, but for that you need to have a good grasp of the scene geometry and of the magnitude of the problems you will get after shooting. Material shot on parallel axes also needs to be converged at the post-production stage, but that is done easily, unlike footage shot on converged axes (examples will follow at the end of the article).
    Image source: http://really.ru/forum/26.html?p=26579
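    Where does the vertical parallax on converged axes come from? A toy pinhole-camera sketch shows it directly: toe the two cameras in, project a point from a corner of the scene, and its height in the two views no longer matches (all numbers are purely illustrative):

```python
import numpy as np

def project(point, cam_x, toe_in_deg, f=1.0):
    """Project a 3D point through a pinhole camera located at (cam_x, 0, 0)
    and rotated about the vertical axis by toe_in_deg."""
    a = np.deg2rad(toe_in_deg)
    # Rotation about the vertical (y) axis: camera yaw / toe-in.
    R = np.array([[ np.cos(a), 0.0, -np.sin(a)],
                  [ 0.0,       1.0,  0.0      ],
                  [ np.sin(a), 0.0,  np.cos(a)]])
    p = R @ (np.asarray(point, float) - np.array([cam_x, 0.0, 0.0]))
    return f * p[0] / p[2], f * p[1] / p[2]   # image-plane (x, y)

baseline = 0.12              # 12 cm between optical axes
toe_in   = 3.0               # degrees of convergence toward a nearby subject
corner   = [1.0, 0.8, 2.0]   # a point in the upper corner of the scene, 2 m away

xl, yl = project(corner, -baseline / 2, +toe_in)
xr, yr = project(corner, +baseline / 2, -toe_in)
xl0, yl0 = project(corner, -baseline / 2, 0.0)
xr0, yr0 = project(corner, +baseline / 2, 0.0)

print(f"vertical parallax, converged axes: {abs(yl - yr):.4f} (focal-length units)")
print(f"vertical parallax, parallel axes:  {abs(yl0 - yr0):.4f}")
# Nonzero in the first case: the same point sits at different heights in the
# two views - the keystone distortion that parallel axes do not produce.
```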

    Is it possible in principle to shoot with large cameras and lenses as if you were shooting from a short distance? Of course it is. The solution is the so-called beam splitter ("beam splitter stereo rig"), today perhaps the most popular type of rig on set. The idea is extremely simple: one camera is placed as usual, and the second one is placed above or below it, receiving light rays reflected from a semi-transparent mirror:


    Schematic of a beam-splitter rig.


    A simple beam-splitter stereo rig with Sony cameras; the second camera is at the bottom
    Source: http://www.urbanfox.tv/production/p17-3dMasters2010.htm

    The main advantage of rigs of this type is the ability to set any distance between the lens axes, including less than the diameter of the lenses themselves, down to zero:


    The photo clearly shows that the distance between the centers of the lenses is not only less than the thickness of the camera body, but even less than the lens diameter. At the same time, the optical axes of the cameras can be strictly parallel, which reduces distortion and the discomfort of the resulting stereo.
    Source: http://www.urbanfox.tv/production/p17-3dMasters2010.htm

    With such a rig we can shoot close-ups, all the way down to macro. Moreover, beam splitters turned out to be so convenient that today they are used to shoot not only close-ups, but also medium and wide shots. In fact, they have become the main workhorses of the modern film set:


    A fancy beam splitter, with the second camera on top
    Source: https://library.creativecow.net/articles/kaufman_debra/Flying-Monsters-3D/assets/DSC_0798.jpg

    Other types of beam splitters

    Shooting "Dawn of the Planet of the Apes" on a beam splitter; the second camera is on top, and the actors are wearing mocap suits (MoCap - motion capture)
    Source: http://www.3alitytechnica.com/


    Shooting a close-up for "The Legend of Hercules" on a beam splitter; the second camera is on top
    Source: http://www.3alitytechnica.com/


    Shooting a concert on a beam splitter; the second camera is at the bottom
    Source: http://www.3alitytechnica.com/


    Naturally, every medal has two sides, and in addition to the pluses, beam-splitter rigs also have disadvantages. The cons do not outweigh the main plus, but here they are, point by point:
    • Adjusting the pan and tilt of the cameras on beam splitters is more difficult than on side-by-side rigs.
    • Getting identical colors in the left and right views is a problem for stereo rigs in general: in practice lenses differ, sensors differ, seemingly identical settings differ, filters differ, and so on. Moreover, as a camera heats up, its colors change (such is reality). In this respect beam splitters are doomed by design to having their two cameras heat up differently.
    • Dust settles on the mirror (especially if explosions are being filmed), the mirror vibrates, and, finally, the mirror polarizes light. As a result, the light reaching one camera is unpolarized while the light reaching the other is polarized. This affects the colors of highlights, and more. There will be more on this in the next article of the series.

    And this is an incomplete list. For example, in some situations (shooting inside a car, shooting from an underwater housing) the sheer dimensions of beam splitters get badly in the way, and so on, and so on.


    Shooting "Transformers: Age of Extinction" on a beam splitter; the second camera is at the bottom, which, incidentally, reduces the amount of explosion dust settling on the mirror. The difficulty of such scenes is that vibration from the explosions makes the mirror tremble, seriously distorting the geometry of the scene. Moreover, with strong nearby explosions the sound wave bends the mirror itself, which leads to non-trivial geometric distortions of the scene.
    Source: http://www.3alitytechnica.com/

    One of the entries in the 2014 stereo festival competition program was a film about the BMPT "Terminator" (a tank support fighting vehicle). The author of the film said that shooting the firing in a stereo close-up was a big problem for them. They even had a beam splitter (still a rarity for our authors). But when the "Terminator" fired, the shock wave distorted the mirror so much that the reflected view became completely unusable; the sound wave of the shot was simply too strong for the elastic glass of the rig's mirror. Large studios use conversion in such cases: since the authors had stereo footage right up to the shot, a disparity map can be built from it and propagated to the following frames, with the caveat that this requires several technologies and can be done quickly only by trained people. For these reasons such things are available only to more or less large studios that have the relevant specialists and software.
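    For reference, the basic building block mentioned above, a disparity map computed from a stereo pair, can be sketched with off-the-shelf tools. This is only an illustration of the idea, not the studios' pipeline; the file names and parameters are assumptions:

```python
import cv2

# A minimal sketch of building a disparity map from a rectified stereo pair
# with OpenCV's semi-global block matching (illustrative file names and
# parameters; real pipelines, and the propagation of the map to later
# frames mentioned above, are far more involved).
left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be a multiple of 16
    blockSize=7,
    P1=8 * 7 * 7,              # smoothness penalties for small and large
    P2=32 * 7 * 7,             # disparity changes between neighbours
    uniquenessRatio=10,
)
# StereoSGBM returns disparities as fixed-point values scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```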

    Another interesting topic that should be mentioned at least briefly in connection with quality is broadcasting in stereo. As you have already understood, the cameras have become more complex and, accordingly, so has their setup. As a result, often only the basic parameters are monitored on set, and the main correction of problems happens in offline post-processing. Since the highest-quality correction algorithms are rather slow today, and it is in fact impossible to fix all the problems during shooting - some always remain - this approach really does lead to a better final result. But it has three serious drawbacks. Firstly, it cannot be used for live broadcasting: such rigs are simply unable to produce stereo of a quality that both avoids giving viewers a headache and can be broadcast live (which certainly affected the popularity of stereo channels, primarily because of the difficulties with live broadcasting of high-quality sports content). Secondly, the time between shooting and getting the final material increases noticeably. And thirdly, some (usually small) portion of the problems that occur during shooting turns out to be so serious that it cannot be properly fixed in post-production. In general, as you understand, it is one continuous compromise, and in the process you constantly have to choose the lesser of evils.

    An alternative approach that really works is to equip the rig with sets of servos capable of accurately changing camera position parameters and camera settings in real time and, accordingly, correcting the video. We will not advertise specific companies, but today there are solutions in which the operator is busy exclusively with shooting, while a dedicated person works next to the camera at a live quality-control console, managing the correction of distortions (via the servo mechanics of the rig) and able to warn the operator that some distortions have become too large, cannot be corrected, and the quality has dropped: for example, a strong flare is hitting one lens but not the other, or the mirror has become dusty after a nearby explosion, and so on.

    The attentive reader has already guessed that systems requiring a specially trained person per camera are very, very expensive, and as a rule such rigs are not even bought but rented (often complete with the trained people). In any case, they remain virtually inaccessible for amateur shooting. In the best case, the authors find relatively inexpensive monoblock stereo cameras, in which the distance between the lenses is small, and shoot part of the material on large rigs and part on such cameras.


    This increases the number of technical problems during shooting and post-processing, but allows, albeit by using more equipment, better close-ups and wide shots to be obtained. In the worst case, everything is shot on the "cheapest" $100 stereo rig discussed at the very beginning, and as a result you get stereo in which, for example, panoramas and wide shots are delightfully three-dimensional, while close-ups are very uncomfortable. Now, I hope, you better understand the reasons for uncomfortable "3D" in amateur and low-budget films. They simply tried to shoot everything on the equivalent of an iPhone.

    Conclusions:
    • One of the fundamental problems of shooting stereo is the need to shoot some scenes (especially close-ups) with the lenses close to each other, which makes simple and cheap stereo rigs unusable for such scenes.
    • The solution to the problem - a stereo rig with a semi-transparent mirror (beam splitter), with one camera mounted vertically and the other horizontally - turned out to be very convenient and, most importantly, universal. Today beam splitters are perhaps the main type of rig for shooting stereo.
    • Beam splitters have a number of inherent flaws: adjusting their geometry is more difficult, the mirror is not perfect, it collects dust and trembles, and there must be no polarized light on the set, either in the lighting or in reflections.
    • All these problems can also be solved successfully, but the cost of solving them is such that in the end the solutions are out of reach for amateur and low-budget films.



    Examples of real problems


    So, we have briefly tried to explain why everything is technically not so simple, which peculiarities of the cameras cause serious problems, and why you cannot get by with simple rigs, in which, it would seem, geometry problems are much easier to solve.

    Let me also recall the fundamental problem. In the first part of this series I quoted the opinion of a professional stereographer: "It is necessary to put pressure on distributors and theater owners to maintain high brightness, clean glasses, and regularly measure the light output from the screen. No matter how high quality we make S3D, this quality will be lost in a huge number of cinemas around the world." That is, there is no point in polishing quality while the equipment around the world is this bad. Also, in the second part of this series an attentive reader could see a comment, obviously from a practitioner: "if you yourself tried to shoot and edit stereo movies, or at least stereo pictures, you would easily notice that small flaws like a relative rotation of about 2-3 degrees, center offsets or blur in one of the images are easily compensated by our brain". That is: there is no point in correcting, the brain compensates. The brain, of course, does compensate, for these and other problems, but a certain percentage of people pays for it with a headache. Moreover, the third part explained how the brains of professionals adapt to poor stereo, which only aggravates the problem. It is clear that real professionals realize this and rely not on personal impressions but on closed statistics from cinema chains. But there are really a lot of those who believe that if he and his colleagues don't get a headache, nobody will.

    For example, here is the text of a job posting for a "Stereographer", that is, the person responsible for stereo quality, which at the time of writing is up on the site of one of the Russian studios. Quoting literally: "Required:
    • Basic knowledge of Autodesk Maya;
    • Basic knowledge of the principles of compositing and of any of the compositing packages: Fusion, Nuke,
      AfterEffects, Blender, Ramen, ...;
    • Knowledge of stereo technologies, terms and principles of creating stereo images;
    • The desire to develop and learn in stereo;
    • The desire to work in a team.

    Extremely welcome:
    • Knowledge of the basics of programming;
    • Basic knowledge of Python. ”


    Here they are - the harsh Russian stereographers. Experience with stereo, or any understanding of how binocular vision works, is not required; the main thing is to know the terminology and be able to program. ))) On top of this comes the idea that "the brain easily compensates for everything", based on personal experience and personal adaptation to one's own poor stereo (see the third part)... If you remember, the second part said that the best films manage to bring the percentage of viewers with headaches down to 2%. It is clear that when decisions are made on the basis of testing on even 20 people (the whole studio made to watch), the temptation is to say: well, only one person felt something was a bit off (and he is always unhappy!), so we accept it! But 1 out of 20 is already 5% of viewers with a headache. You may laugh, but real decisions are often made exactly this way. And it is useless to argue with these people: they are harsh practitioners and they know better what can be neglected. And that "something" is your headache, dear viewers. Which, in turn, produces a predictable reaction:



    A separate problem is that there are not many books on this topic, especially if a person cannot read English fluently. More precisely, there are magnificent books by specialists from the Soviet era, but they are largely oriented towards the pre-digital era. On Western sets the ironic meme "We'll fix it in post!" has taken firm root - ironic, because not everything can be fixed "in post". So the books available in Russian are magnificent, but on the whole they are written from the assumption that a great deal is impossible or very hard to fix (which was absolutely true for film stock, but much less so for digital). Again, no one has repealed the old wisdom "he who knows does not speak, he who speaks does not know". It somehow involuntarily comes to mind when you read what is written about stereo in the West. ) That is, a lot has been written, but often not by practicing professionals, who have no time to write, but by those who do have the time. And this is also a problem.

    The irony of fate is that today a fair share of the problems can be fixed relatively easily and effectively by offline algorithms, yet the makers of stereo-correction software complain about falling sales. Moreover, such software is now available on torrents complete with a "crack" - take it and use it "for free". But even for free this is not done. After all, it is work. And the brain, as we know, "easily compensates" for our "small flaws". At the output we get "headache included 3D" (loosely: "a 3D headache is included in the price", or "3D - a headache as a present") and a huge percentage of viewers who do not go to 3D theaters because they once walked out with a headache. On which we can only congratulate everyone involved.

    By the way, a colleague of mine, a researcher who wrote algorithms for measuring and correcting stereo quality, remarked while reading a draft of this article that it was somehow not nice to reproach studios for not even downloading cracked software from torrents. But we decided to leave it in, so that the current situation in the industry becomes clear, since it dramatically affects how fast such software develops. In Russia things are generally quite harsh. Those who followed the link to the interview quoted in the previous part have read that the guys in that studio watched the results of their 3D film production for the first time in anaglyph, with free glasses. They did not immediately spring even for a 3D monitor, which at the time cost 5-10 thousand (comparable to a cleaner's salary). What else is there to say... The situation is clearly better in the West, but stereo work there is massively outsourced to India and, partly, China. Quite a picture, all in all... However, back to the subject.

    Let us look at which geometric distortions actually end up in films.


    Rotation of one view relative to the other




    Rotation between the views is a very unpleasant problem, primarily because our eyes are "aligned" with each other very well from childhood, and the brain is simply not used to processing a mutual rotation of the two views.


    A 1.6° rotation in the film "Dark Country".


    A 1.1° rotation in the film "Shark Night". An interesting general observation: the visual quality of horror films is on average noticeably lower than that of, say, science-fiction films. This is due both to science-fiction films having larger budgets and to the fact that they are usually noticeably brighter than "horror movies".


    One more horror film for the pile - a rotation of about 1° in "Silent Hill 2". It is hard for us not to recommend going to horror films, but these examples of the strongest scene rotations, drawn from 105 films, are quite eloquent.


    And another example, a rotation of about 1° in the Hong Kong film "Sex and Zen".

    As you can see, a one-degree rotation is clearly visible in a full frame even at a noticeably reduced size. On a large screen such a rotation produces very noticeable vertical parallax, which "kills the stereo effect". If you remember, it was said above that "2-3 degrees" are "small flaws". Fortunately, we can practically guarantee that you will not see anything like that in a blockbuster.
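    To put numbers on this, here is a rough estimate (mine, not a measurement from these films) of the vertical parallax that a small rotation about the frame centre produces at the edge of a 1920-pixel-wide frame:

```python
import math

def edge_vertical_parallax_px(rotation_deg, frame_width_px=1920):
    """Approximate vertical offset (in pixels) that rotating one view about
    the frame centre produces at the left/right edge of the frame."""
    half_width = frame_width_px / 2
    return half_width * math.sin(math.radians(rotation_deg))

for angle in (0.6, 1.0, 1.6, 3.0):
    print(f"{angle:>3}°  ->  ~{edge_vertical_parallax_px(angle):.0f} px "
          f"of vertical parallax at the frame edge")
# 0.6° -> ~10 px, 1.0° -> ~17 px, 1.6° -> ~27 px, 3.0° -> ~50 px
```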

    And finally, from the interesting:


    A 0.6° rotation in the film "Bait". If you look at the hero's ear, the geometric distortions are clearly visible even in a small picture. What do you think caused them? The answer is below.

    We have collected tens of thousands of examples of geometric distortion from hundreds of films. In all these cases it was possible to significantly reduce the visual discomfort of the scene by rotating one of the views by the required angle. It is very likely that in all the films shown, control and correction of such problems at the post-production stage was either absent or done by eye.

    What is encouraging here is the trend in this parameter, which we traditionally measured on a hundred films - in fact, on ALL films released on Blu-ray 3D that had a budget listed on IMDB.com. That is almost all films except documentaries and very low-budget ones. On the "color" graphs below, the higher the point, the better the metric value. The colors are based on percentiles, in order to show trends better and to make comparison fair: it would be incorrect to compare what was considered good in the 90s with 2013, just as it is incorrect to compare low-budget films directly with expensive blockbusters. At the same time, the boundary of the yellow zone clearly shows trends that are genuinely encouraging:


    The average rotation between the views, from year to year. It is clearly seen that for conversion the situation is practically ideal, which is one reason it is so popular in blockbusters. The quality of native shooting is growing steadily, and what was average in 2010 looks below average by 2014.


    This graph shows the same metric values not by year but by budget, and it is clearly seen that conversion is used mostly at budgets from 500K USD per minute, while low quality in terms of rotation comes mainly from native shooting with budgets below 750K USD per minute. It is also clear that there is a noticeable number of low-budget films that, with a budget several times lower than that of blockbusters, surpass "Avatar" in this parameter.

    Conclusions:
    • Many horror films are terrible, including in terms of the technical quality of their stereo.
    • Using the right equipment and a well-organized post-production with competent people, it is possible today to shoot low-budget films with a better average rotation between the views than "Avatar".
    • As a result, the worst films of late 2013 - early 2014 have average rotation values between the views at the level of the average films of 2010 and the best films of the digital era.


    Unequal fields of view (scale mismatch)





    Filming equipment is not perfect. When the focal length is changed there is backlash, and it is difficult to make the change strictly simultaneous on both cameras. This, as well as errors in post-production, leads to problems like the following:


    A scale difference of 6.8% in the film "Space Station". Note that the frame is in fact flat.


    A scale difference of 6.2% in the film "Cats vs. Dogs 2". Curiously, this and the previous example are conversions. That is, the error occurred at the post-production stage, and there was no quality control of this parameter. And note that the frame is again flat.


    A scale difference of 4.6% in the film "Journey 2: The Mysterious Island". The frame is not flat, but almost flat.


    A scale difference of 4.2% in the film "Piranha". The frame is again almost flat; if you go to a horror movie, don't say you weren't warned.


    Interestingly, if someone thinks that such errors do not happen with computer graphics, here is an example of 3.6% from "Pacific Rim". True, unlike the previous examples, the scene itself is not flat, so there is not only pain here but also a 3D effect.
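    Before moving on to the statistics: the general idea of measuring a global scale/rotation mismatch between the two views can be sketched in a few lines. This is a rough illustration of the approach, not the metric actually used below; a real metric must, among other things, separate legitimate horizontal disparity from rig errors, which this sketch ignores:

```python
import cv2
import numpy as np

# Sketch of one possible scale/rotation estimate between the two views:
# match features and fit a global similarity transform, then read off its
# scale and angle (file names are illustrative).
left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kl, dl = orb.detectAndCompute(left,  None)
kr, dr = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)

src = np.float32([kl[m.queryIdx].pt for m in matches])
dst = np.float32([kr[m.trainIdx].pt for m in matches])

# Similarity (rotation + uniform scale + translation) fitted with RANSAC.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
scale = np.hypot(M[0, 0], M[1, 0])
angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"scale difference: {(scale - 1) * 100:.1f}%, rotation: {angle:.2f}°")
```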

    The examples above were found using a separately written metric, and, as usual, it is interesting to look at its results by year across the hundred films:



    The same films, but as a function of budget per minute of film:



    Conclusions:
    • The examples we found give the impression that by changing the scale the studio was trying to "mask" a flat scene. This obviously increased the percentage of people with a headache, but did not create a 3D effect.
    • At the end of 2009 "Avatar" was among the best films in terms of how accurately the scales of the two views matched, but many noticeably more accurate films have been shot since.
    • It is also clear that low-budget films that are accurate in terms of scale difference do exist.
    • On the last graph, two films demonstrate that, if one tries, even a relatively expensive conversion can be spoiled ).


    Shift and perspective distortions





    In addition to the relatively complex transformations listed above, there are also simple ones, for example a vertical shift:


    A vertical shift of 1.5% in the film "Cats vs. Dogs 2". Yes, this is a conversion. Yes, it is a good question how they managed that; most likely, someone's hand slipped on the mouse. But there was, of course, no instrumental quality control.

    Such errors arise precisely because they are very rare in conversion, which is why some films apparently had no vertical-shift control at all in the final post-production. As a result, "record-setting" scenes end up in converted films, with a vertical shift far greater than in films that were actually shot.


    A 1.4% shift in the film "Bait". The film is relatively low-budget. In addition, even at this reduced size, the difference in sharpness between the views is clearly visible on the hands and the jacket.


    And again "Piranha", a shift of 1.2%. Note that the optical axes of the cameras do not lie in the same plane at all. This is a horror movie; you have been warned repeatedly that it would be not only scary but also painful.


    "Step Up 3D", with a shift of 1%. Note the color difference of the highlights - greetings from polarized light and the beam splitter.

    Unfortunately, we have plenty of such examples.

    Shift is notable in that it is quite easy to correct, and the presence of such problems definitely indicates an outdated technical process in the production of the film. Fortunately, the examples above are relatively old: "Cats vs. Dogs 2" - 2010 (the peak of bad stereo), "Bait" - September 2012, "Piranha" - May 2012, "Step Up 3D" - 2010.

    Overall, the picture over the years is roughly the same: a gradual, fairly stable improvement, with the terrible old values gradually becoming a thing of the past, indecent even for low-budget films:





    Conclusions:
    • It is clearly seen that if at the end of 2010 "Avatar" was the best (!) film of its time in terms of vertical shift among the films that were actually shot, then by 2014 this value had become quite average. And that is encouraging.
    • By 2014 there were already 6 films better than "Avatar" on this parameter, two of them with a very small budget.


    Fixing the problems



    In the text above, the phrases "difficult to fix" and "fairly easy to fix" appear now and then. Let us look at what correction looks like in practice, and at what is fixed easily and what is not.

    Note that all the examples below were obtained automatically from live stereo with all its problems; the only things done additionally are that the edges were cropped in places and, so that the stereo effect is easier to see, the zero-parallax level was adjusted (the image was shifted towards the screen plane).
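    As a simple illustration of what such an automatic fix does, here is a sketch that undoes a measured rotation of one view about the frame centre (the angle and file name are placeholders; it assumes the angle has already been estimated, for example as in the metric sketch above):

```python
import cv2

# Rotate one view about the frame centre by the estimated angle, leaving
# horizontal disparity (the stereo effect itself) untouched.
right = cv2.imread("right.png")
h, w = right.shape[:2]

angle_deg = 0.6   # e.g. the rotation measured in the "Bait" example
M = cv2.getRotationMatrix2D((w / 2, h / 2), -angle_deg, 1.0)  # undo the rotation
fixed = cv2.warpAffine(right, M, (w, h), flags=cv2.INTER_LANCZOS4)

cv2.imwrite("right_fixed.png", fixed)
# In practice the borders exposed by the rotation are cropped in both views,
# and the zero-parallax plane is adjusted, as noted above.
```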


    A 0.6° rotation in the film "Bait": before.


    The same fragment: after. In terms of perceived depth the scene is still definitely uncomfortable, because it is "impossible". By the way, the film's genre is "horror, thriller, fantasy"...

    The terrible rotation has, of course, disappeared completely, but in the end it became clear that the head is almost flat and the background is suspiciously enlarged. Apparently, the scene was shot on a side-by-side rig from a long distance with converged axes. Now let us concentrate and look carefully at the head and the shoulder: the shoulder of the young man in the foreground recedes in the same direction as the background, i.e. the shoulder extended towards us is farther from us than the head (!). This is a conflict between our knowledge of the shape of the human body and the binocular perception of the scene (poor viewers!). At the same time the second person behind him, although clearly farther away (as the focus shows), is at the same perceived depth as the young man, and his shoulder is again turned inside out, i.e. farther from us, towards the background! This is also impossible, yet we see it. Here there is an additional conflict between the depth suggested by sharpness (and, most likely, by the motion of scene objects) and the binocularly perceived depth. Can you imagine how your brain feels while watching this? From the brain's point of view this is a doubly and triply "impossible", uncomfortable scene (a conflict of shape, defocus and motion in the frame with the visible depth). That is a lot, even with the "impossible" rotation removed. And there are, as you can guess, plenty of such "impossible" scenes in the film. We hope it is now completely clear why cameras, lenses and rigs with a small distance between the optical axes are needed, including the beam splitters discussed at the beginning of this part. It becomes clear why Hitchcock had a giant telephone built. And it is understandable what comes out when novice operators shoot stereo films without such cameras.

    And another example that illustrates well the problems of low-budget films when there are no proper cameras for close-ups and medium shots:


    The initial rotation of 1° in the film "Sex and Zen": before.


    The same fragment after correction. Non-linear distortions of the scene objects are clearly visible; they cannot be corrected by rotation and are in general difficult to correct. We can see that the people are standing upright, yet in terms of perceived depth their torsos are noticeably and "impossibly" tilted. This medium shot, too, was clearly filmed on a side-by-side rig with converged camera axes.

    If there are no cameras on the set suitable for properly shooting close-ups and medium shots, the film is very likely to contain scenes that are impossible in terms of perceived depth. If your head starts to hurt from such films, be aware that the authors brutally skimped on the right cameras, the ones that could shoot such scenes adequately. I repeat: even relatively inexpensive monoblock stereo cameras would have "saved" the situation as far as impossible scenes are concerned. But the authors preferred telephoto lenses and converged camera axes, with all the consequences.

    Correction of flat scenes with a scale mismatch is not shown: the algorithm simply makes the two views practically identical. Let us instead look at examples where there was actual stereo shooting:


    A scale difference of 4.6% in the film "Journey 2: The Mysterious Island": before.



    The same fragment: after. The 3D effect of the almost flat scene has appeared.


    A scale difference of 4.2% in the film "Piranha": before.


    The same fragment: after. A weak but real 3D effect has reappeared in the frame; previously it was practically invisible because of the scale difference.

    And an example of correction on computer graphics, or rather, on a conversion:


    A 3.6% scale difference in the film "Pacific Rim": before.


    The same fragment: after. It turns out quite decent stereo was hiding here! Note that even part of the spray is correctly placed in the foreground (which is not always the case with conversion).

    If desired, one could show hundreds and thousands of such examples (everything here is generated automatically or semi-automatically); all the examples for the 100 films are neatly stored in a cool, dark place on a disk array. Hundreds of examples are published in our reports, available to professionals for free by subscription.

    An attentive reader may notice that correction of shifts is not shown. Yes, we do not show it. It is not a sport: shifts are perfectly fixed in any editor, even the simplest. The fact that they end up in films is the result of haste or of a simple lack of instrumental quality control.
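    For completeness, fixing a pure vertical shift really does take only a few lines in any tool, for example (illustrative numbers and file names):

```python
import cv2

# Fix a vertical shift by cropping both views so that one of them is moved
# up by the measured offset (the direction and amount are illustrative).
left  = cv2.imread("left.png")
right = cv2.imread("right.png")
h, w = left.shape[:2]

shift_px = int(round(0.012 * h))      # e.g. a 1.2% vertical shift, as in "Piranha"
left_fixed  = left[shift_px:, :]      # drop `shift_px` rows from the top of one view
right_fixed = right[:h - shift_px, :] # and the same number from the bottom of the other

cv2.imwrite("left_fixed.png",  left_fixed)
cv2.imwrite("right_fixed.png", right_fixed)
```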

    Conclusions:
    • For geometric distortions, effective automatic or semi-automatic correction is possible and can drastically reduce the discomfort of a scene.
    • Footage shot on converged axes is often very difficult (read: very expensive) to fix. In the worst cases it is easier to convert the scene; the result will be better and less uncomfortable.


    Instead of a conclusion



    The reader may wonder why all of this is not simply corrected everywhere. Let me reveal a small professional secret. A fashionable and very well developed topic in post-production today is tracking (feature point tracking, for those who want to dig deeper). And studios have achieved great success in tracking. It is constantly needed for inserting special effects into live video with a moving camera and for removing unwanted objects from the frame. Many programs and plugins have been written for tracking. They even allow you not only to track within one view, but also to match points across a stereo pair (and that immediately becomes the weak point, especially if the views differ in sharpness).



    Moreover, on the footage the researchers use for debugging, everything works. And then the movies begin (in every sense). In movies the background is usually slightly blurred. Yes, in stereo it is better to keep it sharp - it is more comfortable - but there is also the 2D version, and it, too, must be smooth, silky and expressive in artistic terms, so a compromise is made and the background is somewhat blurred in both versions. And here the algorithms for matching feature points stop working reliably... But surely there are algorithms resistant to blur, especially since all we need to determine is a fairly simple global transformation of the frame? Of course there are, but there are reservations regarding "simple" and "global" (the camera axes in reality may not lie in the same plane, and this harsh reality does not allow working with the foreground alone). True, such approaches are used by individual studios in utter secrecy from the others, so as not to lose their competitive advantage.

    Since we are already revealing trade secrets... Those who have read this far deserve it. ) In the West there are very cheap loans and a great aftermarket for used film equipment. As a result, projectors and the like can be upgraded relatively cheaply and regularly. Also, it is a frequent situation that new glasses, still in cellophane, are simply handed out and do not have to be returned. As a result, there is not all the horror that immediately comes to mind when thinking of our cinema halls. An interesting consequence is that the picture on the screen is really sharper, and people get used to that quality (in many of our halls, which use cheap Chinese projectors that nobody bothered to focus properly, "spoiled" Americans simply could not sit through a film). Also, feedback there is an order of magnitude better developed than ours: let the sharpness of the picture drop even slightly and people notice it and immediately broadcast it to everyone through social networks, specialized sites and so on. As a result, producers there are very sensitive to even a minimal loss of detail. Scaling and rotating the picture obviously reduce sharpness. With competent tools and approaches the losses can be minimized, but... As a result, when the final version of a scene is discussed and the question "Correct or leave it?" is decided, the choice often falls on leaving the problem in. After all, a drop in sharpness is immediately visible on good equipment, whereas "the brain can easily compensate" for the geometry. "Avatar" is interesting in this respect: if you look at it under a microscope at the pixel level, then along sharp horizontal boundaries you can see...

    Another "big secret" of 3D is that fatigue from individual stereo artifacts accumulates. Fans of the "the brain compensates" approach like to cite studies that define the extreme limits of acceptability for individual artifacts. However, if you take the trouble to look at the sources, you will easily notice that although these studies were conducted on a large number of people, they usually lasted 5-15 minutes. And the fact that people could then stand up and walk out unassisted without complaining of a headache does not guarantee that a headache will not appear after an hour and a half of a film. Running experiments lasting many hours is extremely long and expensive, while 5-15 minutes is enough for a publication and for everyone to cite you. And if the thresholds in your study are loose, you will be cited often; if they are strict and low, perhaps only your descendants will remember you. And citations here and now are the main thing in modern science (much of the above is said with irony, but if you dig deeper, you do need to understand the rules that shape the system). In this respect, the rather uncompromising approach taken in "Avatar" certainly helped the audience to watch 2 hours and 42 minutes of the film safely and relatively painlessly.

    And now for the final conclusions. Some studios like to say: "Pay us like Cameron, and we will give you quality like Avatar's." The results of the objective measurements above on three characteristics show that to keep saying this, at least with regard to geometric distortions, means publicly signing off on one's complete unprofessionalism. If in 2009 "Avatar" really did stand out for the better in terms of quality, by 2014 its results had become quite average; that is, technologies and programs have appeared that make it possible to get better quality much cheaper and easier. Not everyone knows how to work this way yet, but the bar of acceptable quality has obviously risen considerably. One can safely predict that by 2017, when the release of "Avatar 2" is planned, James Cameron will again try to set a new quality level for the industry. As a result, those who can deliver at most the quality of the first "Avatar", and only at its price, will leave this market - or will learn to do it even better and cheaper. ) Accordingly, the studios already "have a headache" over how to achieve this, and the audience will have fewer headaches when watching! 3D quality will keep growing, and Carthage must be destroyed!

    Fewer headaches to all of us, in general and from 3D in particular! )

    Acknowledgments


    I would like to cordially thank:
    • my colleagues Alexey Shalpegin, Alexander Voronov and Alexander Bokov, as well as the other members of the video group, thanks to whom the algorithms described above were created,
    • John Karafin, Vice President of Technology and Senior Scientist at RealD, for believing in us and for his encouraging support,
    • Intel, Cisco, Verizon and YUVsoft for their serious support of the project and for caring about the quality of stereo films,
    • the Computer Graphics Laboratory of the VMK faculty of Lomonosov Moscow State University for computing power and more,
    • Alexei Shalpegin, Artem Kazakov, Stanislav Dolganov, Maxim Smirnov, Vitaly Lyudvichenko, Vladislav Tyulbashev, Alexei Fedorov, and especially Alexander Voronov for a large number of sensible comments and corrections,
    • and finally, all the organizers of the Moscow International Stereo Festival and personally Oleg Nikolaevich Raev, for what they do to raise the quality of stereo films in Russia.

