The History of the OpenGL vs. Direct3D Confrontation

Original author: Nicol Bolas (translation)
Before we begin, I'll say this: I know much more about OpenGL than about Direct3D. I have never written a single line of D3D code in my life, and I have written OpenGL tutorials. So what I'm about to tell you is not a matter of bias; it's simply a matter of history.


The Origin of the Conflict



One day, in the early '90s, Microsoft looked around. They saw the wonderful Super Nintendo and Sega Genesis with their many great games. And they saw DOS. Developers wrote for DOS the same way they did for consoles: straight to the hardware. But unlike consoles, where a developer knew exactly what hardware the user had, DOS developers had to target countless different hardware configurations. And that is much harder than it sounds.

Microsoft had an even bigger problem at the time: Windows. Windows wanted to own the hardware, unlike DOS, which let developers do whatever they pleased. Owning the hardware was necessary so that applications could cooperate with one another. But game developers hated that cooperation, because it ate up precious resources they could have spent on their wonderful games.

To attract game developers to Windows, Microsoft needed a single, low-level API that ran on Windows without slowing things down and, most importantly, abstracted the hardware away from the developer. A single API for graphics, sound, and user input.

And so DirectX was born.

3D accelerators arrived a few months later, and Microsoft ran into several problems at once. You see, DirectDraw, the graphics component of DirectX, only handled 2D graphics: allocating graphics memory and doing bit blits between the allocated memory regions.

So Microsoft bought a piece of middleware and turned it into Direct3D version 3. It was scolded universally, and not without cause; one look at the code was enough to make you recoil in horror.

Old John Carmack of id Software looked at that mess, said "to hell with this," and decided to write against a different API: OpenGL.

Another strand in this tangle of problems was that Microsoft was already busy working with SGI on an OpenGL implementation for Windows. The idea was simple: attract developers of typical GL workstation applications, like CAD and modeling, that sort of thing. Games were the last thing on Microsoft's mind at the time. It was all meant for Windows NT, but Microsoft decided to ship the implementation in Win95 as well.

To lure professional software developers to Windows, Microsoft tried to sweeten the deal with access to the new features of 3D graphics accelerators. Microsoft created the Installable Client Driver protocol: a graphics card maker could override the software OpenGL implementation with a hardware-accelerated one. Code would simply use the hardware implementation automatically whenever one was available.

The Rise of OpenGL



So the battle lines were drawn: Direct3D vs. OpenGL. It's a genuinely interesting story, considering just how awful D3D v3 was.

The OpenGL Architecture Review Board (ARB) was the organization responsible for maintaining the OpenGL standard. They released many extensions, maintained the extension registry, and created new versions of the API. The board included many of the big players in the graphics industry, as well as OS makers. Apple and Microsoft were, at various points, members too.

Then 3Dfx came along with the Voodoo2. It was the first hardware that could do multitexturing, something OpenGL couldn't express before. While 3Dfx couldn't have cared less about the OpenGL standard, NVIDIA, maker of the next multitexturing graphics chip (the TNT1), was paying close attention. So the ARB had to issue an extension: GL_ARB_multitexture, which provided access to multitexturing.
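To give a feel for what "access to multitexturing" meant in practice, here is a rough sketch of the kind of code the extension enabled. The entry points and enums come from GL_ARB_multitexture and are normally fetched through the extension mechanism (glext.h plus the platform's GetProcAddress); the texture ids and coordinates are purely illustrative.

```c
/* Sketch of GL_ARB_multitexture usage: bind a texture to each of two texture
 * units and supply per-unit texture coordinates for a quad (e.g. base map
 * plus lightmap in a single pass). Illustrative only; real code loads these
 * entry points through the extension mechanism. */
#include <GL/gl.h>
#include <GL/glext.h>

void draw_lightmapped_quad(GLuint base_tex, GLuint light_tex)
{
    glActiveTextureARB(GL_TEXTURE0_ARB);      /* unit 0: base texture */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, base_tex);

    glActiveTextureARB(GL_TEXTURE1_ARB);      /* unit 1: lightmap */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, light_tex);

    glBegin(GL_QUADS);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
    glVertex2f(-1.0f, -1.0f);

    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
    glVertex2f(1.0f, -1.0f);

    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
    glVertex2f(1.0f, 1.0f);

    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
    glVertex2f(-1.0f, 1.0f);
    glEnd();
}
```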

At the same time, Direct3D v5 came out. D3D had finally become a real API rather than something a cat coughed up. The problem? No multitexturing.

Oops

By and large, that didn't hurt as much as it should have, because people didn't use multitexturing much. At least not directly. Multitexturing hurt performance quite a bit, and in many cases it wasn't worth using compared to multi-pass rendering. And of course, game developers wanted their games to run on older hardware that had no multitexturing, so many games simply shipped without it.

D3D got away with it.

Time passed, and NVIDIA released the GeForce 256 (not the GeForce GT-250; the very first GeForce), pretty much ending the graphics accelerator arms race for the next two years. Its main selling point was hardware vertex transform and lighting (T&L). But there's more: NVIDIA loved OpenGL so much that their T&L engine essentially was OpenGL. Almost literally: as I understand it, some of its registers took OpenGL enumerators directly as values.

Then Direct3D v6 came out. Multitexturing at last, but... no hardware T&L. OpenGL had always had a T&L pipeline, even though before the 256 it was implemented in software. So it was fairly easy for NVIDIA to convert the software implementation into a hardware one. D3D wouldn't get hardware T&L until version 7.

The Dawn of Shaders, the Twilight of OpenGL



Then the GeForce 3 arrived, and a lot of things happened at once.

Microsoft decided that this time they were absolutely not going to be late to the party. Instead of watching what NVIDIA was doing and copying it after the fact, they went to NVIDIA and talked to them. Then they fell in love, and from that union a little game console was born.

An acrimonious divorce followed later. But that is another story entirely.

For the PC, this meant the GeForce 3 came out at the same time as D3D v8. And it is easy to see how much the GeForce 3 shaped the shaders in D3D 8. The pixel shaders of Shader Model 1.0 were extremely tightly bound to NVIDIA's hardware. There was no attempt to abstract NVIDIA's hardware at all; SM 1.0 was, in essence, whatever the GeForce 3 did.

When ATI jumped into the performance graphics race with the Radeon 8500, a problem emerged: the 8500's pixel pipeline was more powerful than NVIDIA's. So Microsoft released Shader Model 1.1, which was essentially "whatever the 8500 does."

That may sound like a failure on D3D's part. But failure and success are relative things. The real failure was happening over in the OpenGL camp.

NVIDIA loved OpenGL, so when the GeForce 3 launched, they released a pile of OpenGL extensions. Proprietary extensions, NVIDIA-only. Naturally, when the 8500 appeared, it couldn't use any of them.

See, in D3D 8 you could at least run your SM 1.0 shaders on ATI hardware. Sure, you had to write new shaders to exploit the 8500's extra capabilities, but at least your code worked.

To get shaders of any kind on the 8500 under OpenGL, ATI had to write its own set of OpenGL extensions. Proprietary extensions, ATI-only. So now you needed two code paths, one for NVIDIA and one for ATI, just to have shaders at all.

Now you might ask, "Where was the OpenGL ARB, whose whole job was keeping OpenGL current?" They were doing what most committees end up doing: being stupid.

I mentioned ARB_multitexture above because it figures into this story too. From an outside observer's point of view, the ARB seemed to be trying to avoid adding shaders at all. They figured that if you made the fixed-function pipeline configurable enough, it could match the capabilities of a shader pipeline.

So the ARB released extension after extension. Every extension with "texture_env" in its name was yet another attempt to patch that aging design. Check the registry: between the ARB and EXT variants there are already eight of them, and many were folded into core versions of OpenGL.

Microsoft was an ARB member at the time; they left around when D3D 9 shipped. So it is entirely possible, in principle, that they sabotaged OpenGL's development in some way. Personally, I doubt it, for two reasons. First, they would have needed to enlist other members, since each member gets only one vote. Second, and more importantly, the ARB didn't need Microsoft's help to fail this badly, as we'll see in a moment.

Eventually the ARB, most likely under pressure from ATI and NVIDIA (both very active members), woke up long enough to standardize assembly-style shaders.

Want to see more stupidity?

Hardware T&L. The thing OpenGL had first. Here's what's interesting: to squeeze maximum performance out of hardware T&L, you need to store your vertex data on the GPU, because that is where it gets used.

In D3D v7, Microsoft introduced the concept of vertex buffers: regions of GPU memory dedicated to storing vertex data.

Want to know when OpenGL got its equivalent? NVIDIA, being a fan of all things OpenGL (as long as they stayed proprietary NVIDIA extensions), had released a vertex array extension back when the GeForce 256 first shipped. But when did the ARB officially decide to provide this functionality?

Two years later. After they had approved vertex and fragment shaders (pixel shaders, in D3D parlance). That's how long it took the ARB to produce a cross-platform way to store data in GPU memory. Which is exactly what you need to get the most out of hardware T&L.
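If you've never seen it, here's roughly what that long-awaited functionality looks like: a minimal vertex buffer object (VBO) sketch using the OpenGL 1.5 names (the extension itself used glGenBuffersARB and friends). In real code these entry points are fetched through an extension loader; the triangle data is just illustrative.

```c
/* Minimal sketch of a vertex buffer object: allocating GPU-managed storage
 * and uploading vertex data into it (ARB_vertex_buffer_object, promoted to
 * core in OpenGL 1.5). Entry points normally come from an extension loader. */
#include <GL/gl.h>
#include <GL/glext.h>

static const GLfloat triangle[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

GLuint upload_triangle(void)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);                   /* create a buffer object name   */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);      /* make it the active buffer     */
    glBufferData(GL_ARRAY_BUFFER,            /* copy vertex data into         */
                 sizeof(triangle), triangle, /* driver/GPU-managed storage    */
                 GL_STATIC_DRAW);
    return vbo;
}
```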

One Language to Destroy Them All



So OpenGL development was fragmented. No unified shaders, no unified GPU storage, while D3D users were already enjoying both. Could it get any worse?

Well... you could say that. Meet 3D Labs.

Who are they, you ask? A defunct company that I consider the true killer of OpenGL. Sure, the ARB's general sluggishness left OpenGL vulnerable at a time when it should have been beating D3D on every front. But 3D Labs is arguably the single biggest reason for OpenGL's current market position. How could they possibly have managed that?

They designed the OpenGL Shading Language.

The thing is, 3D Labs was a dying company. Their expensive accelerators were being made irrelevant by NVIDIA's mounting pressure on the desktop market. And unlike NVIDIA, they had no presence in the mainstream market; if NVIDIA won, they would disappear.

Which is exactly what happened.

So, trying to stay afloat in a world that no longer wanted their products, 3D Labs showed up at the Game Developers Conference with a presentation of something they called "OpenGL 2.0." It was to be a complete, from-scratch rewrite of the OpenGL API. And that made sense; the OpenGL API had plenty of rough edges (note: it still does). Just look at how texture loading and binding work; it's practically black magic.

Part of their proposal was a shading language. Yes, that one. But unlike the cross-platform ARB extensions of the time, their shading language was "high-level" (C counts as high-level for a shading language. No, really).

Now, Microsoft was working on its own high-level shading language at the time. Which, true to form, they called the High-Level Shader Language (HLSL). But their approach to the language was fundamentally different.

The biggest problem with 3D Labs' shading language was that it was built in. HLSL, you see, was a language defined by Microsoft. They shipped a compiler for it that generated Shader Model 2.0 (and later) assembly, which you then fed into D3D. In the D3D v9 era, HLSL was never consumed by D3D directly. It was a nice abstraction, but an entirely optional one. A developer always had the option of taking the compiler's output and hand-tuning it for maximum performance.

The 3D Labs language had none of that. You handed the driver source code in a C-like language, and it gave you back a shader. End of story. Not an assembly shader, not something you could feed anywhere else: an actual OpenGL object representing a shader.

This meant that OpenGL users were left exposed to the mistakes of driver developers who were only just learning how to compile assembly-like languages. Compiler bugs in the new OpenGL Shading Language (GLSL) came in herds. Worse still, if you managed to get a shader to compile correctly across several platforms (no small feat in itself), you were still at the mercy of the optimizers of the day, which were not as optimal as they could have been.

That was GLSL's biggest problem, but it was far from the only one.

In D3D, as in the older OpenGL assembly languages, you could mix and match vertex and fragment (pixel) shaders. Any vertex shader could be used with any compatible fragment shader, as long as they communicated through the same interface. There were even degrees of incompatibility that could be tolerated: a vertex shader could write outputs that the fragment shader simply never read, and so on.

GLSL had nothing of the sort. Vertex and fragment shaders were fused into a single abstraction that 3D Labs called the "program object." So if you wanted to use sets of vertex and fragment shaders together in different combinations, you had to build multiple program objects. Which caused the second problem.

You see, 3D Labs thought they were being clever. They based GLSL's compilation model on C/C++. You take a .c or .cpp file and compile it into an object file. Then you take one or more object files and link them into a program. That's how GLSL compiles: you compile a shader (vertex or fragment) into a shader object, then attach those shader objects to a program object and link them together to produce the program.

While this enabled some potentially cool ideas, like shader "libraries" holding extra code that the main shaders could call, in practice it meant shaders were compiled twice: once at the compile stage and once at the link stage. No intermediate object code was produced; the shader was compiled, the result was thrown away, and the whole thing was compiled again at link time.
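To make that model concrete, here is a minimal sketch of the compile-and-link workflow using the OpenGL 2.0 entry points (the original ARB extensions used glCreateShaderObjectARB and similar names); error checking is omitted for brevity.

```c
/* Minimal sketch of the GLSL compile-and-link model (OpenGL 2.0 names).
 * Error checks (GL_COMPILE_STATUS / GL_LINK_STATUS) omitted for brevity. */
#include <GL/gl.h>
#include <GL/glext.h>

GLuint build_program(const char *vs_src, const char *fs_src)
{
    /* Each shader is compiled into its own shader object... */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    /* ...then both are attached to a single program object and linked.
     * The vertex/fragment pairing is baked into this one program; pairing
     * vs with a different fragment shader means building (and effectively
     * recompiling) a whole separate program object. */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;
}
```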

So if you wanted to link your vertex shader against two different fragment shaders, you ended up compiling considerably more code than in D3D. Especially since D3D's C-like compilation could happen offline, during development, rather than every time the program started.

GLSL had other problems too. It's probably not fair to pin all of them on 3D Labs, since the ARB eventually approved and adopted the language (though nothing else from their "OpenGL 2.0" proposal made it). But the idea was theirs.

And here's the genuinely sad part: 3D Labs were, by and large, right. GLSL is not a vector shading language the way HLSL always was. That was because 3D Labs' hardware was scalar hardware (much like modern NVIDIA hardware), and in the end they were right about the direction accelerators were heading.

They were also right about the compile-online model for a "higher-level" language. D3D eventually switched to that model too.

The problem was that 3D Labs were right at the wrong time. In trying to summon the future too early, in trying to anticipate it, they threw away the present. It sounds a bit like how OpenGL always had T&L capability, except that OpenGL's T&L pipeline was useful even before hardware T&L existed, while GLSL was just dead weight until the world was ready for it.

GLSL is a good language now. But for its time it was terrible, and OpenGL suffered for it.

The Approaching Apotheosis



Although I maintain that 3D Labs struck the fatal blow, it was the ARB itself that hammered the final nail into OpenGL's coffin.

You've probably heard this story. By the time of OpenGL 2.1, OpenGL had a problem. It was carrying a lot of legacy cruft. The API was hard to use. There were five ways to do any one thing, and nobody knew which was fastest. You could "learn" OpenGL from simple tutorials, but nothing told you which parts of the API would actually give you maximum performance.

So the ARB decided to attempt yet another reinvention of OpenGL. It was like 3D Labs' "OpenGL 2.0," but better, because the ARB was behind it. The attempt was called "Longs Peak."

What is so bad about trying to fix an aging API? The bad part was the timing: Microsoft had left itself vulnerable at exactly that moment. It was Vista launch time.

With Vista, Microsoft decided to make long-overdue changes to display drivers. Drivers now had to defer to the OS for graphics memory virtualization and many other things.

One can debate whether that was necessary, but the fact remains: Microsoft made D3D 10 Vista-only (and later OSes). Even if your hardware was capable of D3D 10's features, you couldn't run a D3D 10 application without also running Vista.

You probably also remember that Vista... well, let's just say it didn't go over well. So you had a sluggish OS, a new API that ran only on that OS, and a fresh generation of hardware that needed that API and OS to outshine the previous generation.

Developers could, however, access D3D 10-class features through OpenGL. Well, they could have, if the ARB hadn't been so busy working on Longs Peak.

All told, the ARB spent a year and a half to two years trying to make the API better. By the time OpenGL 3.0 came out, Vista's moment had passed, Windows 7 was on the horizon, and most developers had lost interest in D3D 10-class features. After all, the hardware D3D 10 targeted ran D3D 9 applications just fine. And with the rise of PC-to-console ports (or PC developers defecting to console development), D3D 10-class features went largely unwanted.

If developers had gotten access to those features earlier, via OpenGL on Windows XP machines, OpenGL development would have received a much-needed shot in the arm. But the ARB missed that opportunity. And do you know the worst part?

Despite spending two precious years trying to rebuild the API from scratch... they still failed and simply rolled back to the previous design.

So the ARB not only missed a golden opportunity, they didn't even finish the task for which that opportunity was sacrificed. A failure on basically every front.

And that is the story of the struggle between OpenGL and Direct3D. A story of missed opportunities, gross stupidity, willful blindness, and plain foolishness.
