Learn OpenGL. Lesson 4.11 - Anti-Aliasing

Original author: Joey de Vries

Anti-Aliasing


If you have spent any time rendering in 3D, you have probably noticed pixelated, jagged notches along the edges of your models. These jaggies inevitably appear because of the way the rasterizer, deep in the OpenGL pipeline, converts vertex data into screen fragments. Even on a shape as simple as a cube the artifacts are clearly visible:


At a cursory glance you may not notice anything, but look closer and the jagged notches on the cube's edges stand out. Let's zoom in:


No, this is no good. Is this really the image quality you want to ship in the release version of your application?


This visible pixel-by-pixel staircase pattern along the edges of objects is called aliasing. The computer graphics industry has accumulated a whole family of anti-aliasing techniques that combat this effect by producing smooth transitions at object boundaries.

One of the first was super sampling anti-aliasing (SSAA), performed in two passes: first the scene is rendered into an off-screen framebuffer at a resolution noticeably higher than the screen's; then the image is downsampled into the screen framebuffer. The extra information gained from the higher resolution reduces the aliasing effect, and the method worked beautifully, but there was one catch: performance. Rendering the scene at high resolution taxed the GPU considerably, so the heyday of the technique was short-lived.

From the ashes of the old technique, however, a new and more advanced one was born: multisample anti-aliasing (MSAA). It builds on the ideas of SSAA but implements them far more efficiently. In this lesson we will take a closer look at the MSAA approach, which is natively available in OpenGL.

Multisampling


To understand what multisampling is and how it works, we first have to dig into the guts of OpenGL and look at the work of its rasterizer.

The rasterizer is the collection of algorithms and procedures that sit between the final processed vertex data and the fragment shader. It receives all the vertices belonging to a primitive as input and converts them into a set of fragments. Vertex coordinates can, in theory, take any value, but fragment coordinates cannot: they are strictly limited by the resolution of your output device and the size of the window. And a primitive's vertex coordinates will almost never map one-to-one onto fragments, so the rasterizer must somehow decide which fragment, at which screen coordinate, each part of the primitive ends up in.


The image shows a grid representing the screen's pixels. At the center of each pixel is a sample point, which is used to determine whether the pixel is covered by the triangle. Sample points covered by the triangle are marked in red; for these the rasterizer generates a fragment. Although the triangle's edges overlap some pixels in places, they do not cover those pixels' sample points, so no fragment is created there and the fragment shader is not executed for those pixels.
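This coverage decision can be modeled with so-called edge functions: a sample point is covered when it lies on the interior side of all three edges of the triangle. The following is a simplified sketch of that test, not actual hardware behavior (real rasterizers use fixed-point arithmetic and fill rules to handle points exactly on shared edges):

```c
/* Edge function: positive when point (px,py) lies to the left of the
 * directed edge (ax,ay)->(bx,by). */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Returns 1 if the sample point (px,py) lies inside the counter-clockwise
 * triangle (x0,y0)-(x1,y1)-(x2,y2): for a CCW triangle the interior is to
 * the left of every directed edge. */
int sample_covered(float x0, float y0, float x1, float y1,
                   float x2, float y2, float px, float py)
{
    return edge(x0, y0, x1, y1, px, py) >= 0.0f
        && edge(x1, y1, x2, y2, px, py) >= 0.0f
        && edge(x2, y2, x0, y0, px, py) >= 0.0f;
}
```

Running this test once per pixel, at the pixel center, is precisely the single-sample strategy whose shortcomings we are about to see.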

You have probably already guessed the cause of aliasing. A render of this triangle would look like this on screen:


Because of the finite number of pixels on the screen, some pixels along the triangle's edges are filled in and some are not. The result is that primitives are not rendered with smooth edges at all, which shows up as those familiar notches.

With multisampling, not one point but several are used to determine whether a pixel is covered by the triangle (hence the name). Instead of a single sample point at the pixel's center, 4 subsample points arranged in some pattern are used to decide coverage. A consequence is that the size of the color buffer must also grow fourfold, in line with the number of sample points used.


The left side shows the standard approach to determining coverage. For the highlighted pixel no fragment shader will be executed, and it remains unpainted, because its sample point was not covered. The right side shows the multisampled case, where each pixel contains 4 subsample points. Here you can see that the triangle covers only 2 of the subsample points.
The number of subsample points can be varied within certain limits: more points means better anti-aliasing quality.
This is where things get interesting. Having determined that two of the pixel's subsample points are covered by the triangle, we need to derive a final color for the pixel. The first guess would be to run the fragment shader for each subsample point covered by the triangle and then average the colors of all the subsample points in the pixel. That would mean executing the fragment shader several times with vertex data interpolated to the coordinates of each covered subsample point (twice in this example) and storing the resulting colors at those points. Fortunately, this is not how multisampling actually works; otherwise we would have to make a considerable number of extra fragment shader invocations, which would hurt performance badly.

With MSAA, the fragment shader is executed exactly once per pixel, regardless of how many subsample points the primitive covers. It runs with vertex data interpolated to the pixel's center, and the resulting color is stored in each of the subsample points covered by the primitive. Once all subsample points of the framebuffer are filled with the colors of the primitives we have drawn, the colors are averaged per pixel down to a single value. In this example only two subsample points were covered and therefore filled with the triangle's color; the other two hold the transparent background color. Blending these subsample colors yields a light blue.
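The per-pixel averaging just described can be illustrated in a few lines. The helper below is a toy single-channel model, not OpenGL API: the covered subsamples hold the triangle's color (produced by the single fragment shader invocation), the rest hold the background, and the resolved pixel is their average:

```c
/* One pixel of an MSAA resolve: the fragment shader ran once and produced
 * triangle_color; that color was stored only in the covered subsamples,
 * while the remaining ones keep the background color. The resolved pixel
 * is the average of all four subsample colors (one channel shown). */
float resolve_pixel(float triangle_color, float background, int covered_samples)
{
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i)
        sum += (i < covered_samples) ? triangle_color : background;
    return sum / 4.0f;
}
```

With 2 of 4 subsamples covered by a white triangle on a black background, the pixel resolves to mid-gray, which is exactly the partial shade that softens the edge.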

As a result the framebuffer contains an image of the primitives with much smoother edges. Here is what subsample coverage looks like on our familiar triangle:


Each pixel contains four subsample points (pixels irrelevant to the example are left blank); subsample points covered by the triangle are blue, uncovered ones gray. For every pixel inside the triangle the fragment shader is invoked once, and its result is stored in all four subsamples. Along the edges not every subsample is covered, so the shader's result is stored in only some of them. Depending on how many subsample points the triangle covers, the final pixel color is determined from the triangle's color and the other colors stored at the subsample points.

Simply put, the more subsample points the triangle covers, the closer the pixel's color is to the triangle's color. If we now fill in the pixel colors just as we did for the triangle rendered without multisampling, we get the following picture:


As you can see, the fewer of a pixel's subsamples the triangle covers, the less its color matches the triangle's. The hard edges of the triangle are now surrounded by pixels of a slightly lighter shade, which produces a smoothing effect when viewed from a distance.
It is not only color values that the multisampling algorithm affects: the depth and stencil buffers also start using multiple subsamples per pixel. The vertex depth value is interpolated to each subsample point before the depth test is performed, and stencil values are stored per subsample point rather than per pixel. For us this also means that the memory occupied by these buffers grows in proportion to the number of subsamples used.

This covers the very basics of how multisampling works. The real internal logic of the rasterizer is more complicated than the overview given here, but for a general understanding of the principle and operation of multisampling, this is quite enough.

Multisampling in OpenGL


To use multisampling in OpenGL, we need a color buffer capable of storing more than one color value per pixel (since MSAA stores a color value at each subsample point). We therefore need a special type of buffer that can store a given number of subsamples: a multisample buffer.

Most window systems can provide us with a multisample buffer instead of the standard color buffer. GLFW has this functionality too; all that is required is to set a special hint signaling our desire for a buffer with N subsample points instead of the standard one:

glfwWindowHint(GLFW_SAMPLES, 4);

Now a call to glfwCreateWindow creates an output window with a color buffer storing four subsamples per screen coordinate. GLFW also automatically creates depth and stencil buffers with the same four subsample points per pixel. The size of each of these buffers grows fourfold.

With the multisample buffers created by GLFW, all that remains is to enable multisampling in OpenGL itself:

glEnable(GL_MULTISAMPLE);  

In most OpenGL drivers multisampling is enabled by default, making this call redundant, but explicitly enabling the features you rely on is good practice, and it also switches the mode on regardless of a particular implementation's defaults.

And that is it: having requested the multisample buffer and enabled the mode, our work is done, since everything else is handled by the OpenGL rasterizer without our involvement. If we now render the green cube from the beginning of the lesson, we can see that its edges are much smoother:


Indeed, the edges of this cube look much more appealing, and the same effect applies to any object in your scene.

The source code for the example is here.

Off-screen multisampling


Creating a basic framebuffer with MSAA enabled is simple, thanks to GLFW. If we want to create our own buffer, for off-screen rendering for example, we have to take the process into our own hands.

There are two main ways to create multisample buffers for attachment to a framebuffer, analogous to those covered in the corresponding lesson: texture attachments and renderbuffer attachments.

Multisampled texture attachment


To create a texture supporting multiple subsamples, the texture target GL_TEXTURE_2D_MULTISAMPLE and the function glTexImage2DMultisample are used instead of the usual glTexImage2D:

glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGB, width, height, GL_TRUE);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);  

The second argument sets the number of subsamples in the created texture. If the last argument is set to GL_TRUE, the texture will use identical subsample counts and positions for every texel.

To attach such a texture to a framebuffer object, the same glFramebufferTexture2D call is used, but this time with the texture type GL_TEXTURE_2D_MULTISAMPLE:

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, tex, 0);

As a result, the currently bound framebuffer has a texture-based color buffer with multisampling support.

Multisampled renderbuffer attachment


Creating a renderbuffer with multiple subsample points is no harder than creating such a texture. In fact it is even simpler: all we need to do is replace the glRenderbufferStorage call with glRenderbufferStorageMultisample when allocating storage for the currently bound renderbuffer object:

glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH24_STENCIL8, width, height); 

The only new thing here is the extra parameter after the renderbuffer target, which specifies the number of sample points. Here we have requested four.
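Putting the two attachment types together, a complete multisampled framebuffer might be assembled as follows. This is a sketch, not a definitive implementation: it assumes a current OpenGL context with loaded function pointers and that samples, width and height are defined elsewhere; error handling is elided:

```c
GLuint msFBO, msColorTex, msDepthRBO;

glGenFramebuffers(1, &msFBO);
glBindFramebuffer(GL_FRAMEBUFFER, msFBO);

// Multisampled texture as the color attachment
glGenTextures(1, &msColorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGB,
                        width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msColorTex, 0);

// Multisampled renderbuffer for combined depth and stencil
glGenRenderbuffers(1, &msDepthRBO);
glBindRenderbuffer(GL_RENDERBUFFER, msDepthRBO);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                 GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, msDepthRBO);

// All attachments must use the same sample count, or the FBO is incomplete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    { /* handle the error */ }
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

Note that the color texture and the depth-stencil renderbuffer share the same samples value; mismatched sample counts are a common cause of framebuffer incompleteness.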

Rendering to a multisampled framebuffer


Rendering to a multisampled framebuffer happens automatically, with no extra action required on our part. Every time we render into the bound framebuffer, the rasterizer performs the necessary operations itself, and we get color (as well as depth and stencil) buffers with multiple subsample points at the output. But because such a framebuffer is somewhat different from an ordinary one, its individual buffers cannot be used directly for operations such as texture sampling in a shader.

An image with multisampling support contains more information than an ordinary one, so it first has to be resolved, that is, converted to an ordinary single-sample image. This operation is normally performed with a call to glBlitFramebuffer, which copies a region of one framebuffer into another and resolves any multisampled buffers along the way.
glBlitFramebuffer transfers a source region, specified by four screen-space coordinates, into a destination region, likewise specified by four screen coordinates. Recall from the framebuffers lesson: if we bind a framebuffer object to the GL_FRAMEBUFFER target, it is implicitly bound both to the read target and to the draw target. To bind to these targets individually there are the dedicated identifiers GL_READ_FRAMEBUFFER and GL_DRAW_FRAMEBUFFER, respectively.

glBlitFramebuffer uses these binding points to determine which framebuffer is the source of the image and which is the destination. So we could simply blit the image from the multisampled framebuffer into the default one:

glBindFramebuffer(GL_READ_FRAMEBUFFER, multisampledFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);

Building and running the application would give us an image identical to the previous example without a custom framebuffer: an acid-green cube drawn with MSAA, as its still-smooth edges show:


Sample sources are here .

But what if we want to use the image from a multisampled framebuffer as input for post-processing? We cannot use the multisampled texture directly in a shader, but we can blit the image from the multisampled framebuffer into another framebuffer with ordinary, non-multisampled buffers, and then use that ordinary image as a resource for post-processing, getting all the benefits of MSAA and adding post-processing on top. Yes, this requires creating a separate framebuffer that serves purely as a helper for resolving MSAA textures into ordinary ones usable in a shader. In pseudocode the process looks like this:

unsigned int msFBO = CreateFBOWithMultiSampledAttachments();
// then create another FBO with an ordinary texture as its color attachment
...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, screenTexture, 0);
...
while(!glfwWindowShouldClose(window))
{
    ...
    glBindFramebuffer(GL_FRAMEBUFFER, msFBO);
    ClearFramebuffer();
    DrawScene();
    // resolve the multisampled buffer into the intermediate FBO
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, intermediateFBO);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
    // the scene image is now stored in an ordinary texture, used for post-processing
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    ClearFramebuffer();
    glBindTexture(GL_TEXTURE_2D, screenTexture);
    DrawPostProcessingQuad();
    ...
}

If we add this code to the post-processing examples from the framebuffers lesson, we can apply all of those effects to a scene image free of jagged edges. With the blur effect, for example, it looks something like this:


Since a standard texture with a single sample point is used for the post-processing, some processing techniques (edge detection, for example) can reintroduce noticeably sharp edges and jaggies into the scene. To work around this artifact you will either have to blur the result or implement your own anti-aliasing algorithm.
As you can see, combining MSAA with off-screen rendering requires attention to a few details, but all the extra effort is repaid by the much higher quality of the resulting image. Remember, though, that enabling multisampling can still noticeably affect performance, especially with a large number of subsample points.

Custom anti-aliasing


In fact, a multisampled texture can be passed directly to a shader, without first blitting it into an ordinary auxiliary texture. In this case GLSL provides access to the individual subsample points of the texture, which can be used to build your own anti-aliasing algorithms (as large graphics applications often do).

First, you need a special sampler of type sampler2DMS instead of the usual sampler2D:

uniform sampler2DMS screenTextureMS;

Then the following function retrieves the color value at a given subsample point:

vec4 colorSample = texelFetch(screenTextureMS, TexCoords, 3);  // read the 4th subsample

The extra argument here is the zero-based index of the subsample point being accessed. Note that, unlike texture, texelFetch takes integer texel coordinates rather than normalized ones, so TexCoords must be converted accordingly.
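As a minimal example of a hand-written resolve, a fragment shader can simply average all the subsamples itself, reproducing the standard blit-based resolve. This sketch assumes a 4-sample texture and normalized TexCoords arriving from the vertex shader:

```glsl
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;

uniform sampler2DMS screenTextureMS;

void main()
{
    // texelFetch on a sampler2DMS takes integer texel coordinates
    ivec2 coord = ivec2(TexCoords * vec2(textureSize(screenTextureMS)));
    vec4 color = vec4(0.0);
    for (int i = 0; i < 4; ++i)          // assumes a 4-sample texture
        color += texelFetch(screenTextureMS, coord, i);
    FragColor = color / 4.0;
}
```

A real custom algorithm would weight or filter the samples differently instead of taking a plain average; this shader is just the starting point.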

We will not go into the details of building custom anti-aliasing algorithms here; let this serve simply as a starting point for your own research on the topic.

