Learn OpenGL. Lesson 4.5 - Framebuffer

Original author: Joey de Vries

Framebuffer


At this point we have already used several types of screen buffers: a color buffer, which stores the color values of fragments; a depth buffer, which stores fragment depth information; and a stencil buffer, which lets us discard fragments based on a given condition. The combination of these buffers is called a framebuffer and is stored in a dedicated region of memory. OpenGL is flexible enough to let us create our own framebuffers, supplying our own color buffer and, optionally, depth and stencil buffers.


All the rendering operations we have performed so far were done on the buffers attached to the default framebuffer. The default framebuffer is created and configured when the application window is created (GLFW does the hard work for us). By creating our own framebuffer we gain an additional place to direct rendering to.

At first glance the usefulness of your own framebuffers may not be obvious, but rendering to an additional buffer allows you, at the very least, to create mirror effects or perform post-processing. First we will figure out how a framebuffer works, and then we will implement some interesting post-processing effects.

Framebuffer Creation


Like any other OpenGL object, a framebuffer object (FBO for short, from FrameBuffer Object) is created with the following call:

unsigned int fbo;
glGenFramebuffers(1, &fbo);

Here we see the approach already familiar from dozens of other OpenGL objects: create the framebuffer object, bind it as the currently active framebuffer, perform the required operations, and unbind it. Binding is done as follows:

glBindFramebuffer(GL_FRAMEBUFFER, fbo);  

After binding our framebuffer to the GL_FRAMEBUFFER target, all subsequent framebuffer read and write operations will use it. It is also possible to bind a framebuffer for reading only or for drawing only, by binding to the special targets GL_READ_FRAMEBUFFER or GL_DRAW_FRAMEBUFFER respectively. A buffer bound to GL_READ_FRAMEBUFFER is used as the source for all read operations such as glReadPixels, while the buffer bound to GL_DRAW_FRAMEBUFFER becomes the destination of rendering, buffer clears, and other write operations. Most of the time, however, you will not need these targets and can simply use GL_FRAMEBUFFER.
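As a small illustration (a sketch, not part of the original lesson), binding one framebuffer for reading and another for drawing could look like this, assuming fbo was created as above:

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); // source for read operations such as glReadPixels
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // destination for rendering and clears (default framebuffer)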

Unfortunately, we cannot use our framebuffer yet, because it is not complete. To be complete, a framebuffer must meet the following requirements:

  • At least one buffer (color, depth, or stencil) must be attached.
  • At least one color attachment must be present.
  • All attachments must themselves be complete (have memory reserved for them).
  • Each buffer must have the same number of samples.

Do not worry about what samples are for now - this will be discussed in a later lesson.
So, from the list of requirements it is clear that we need to create attachments of some kind and attach them to the framebuffer. Once all requirements are met, we can check the framebuffer's completeness status by calling glCheckFramebufferStatus with the GL_FRAMEBUFFER parameter. It checks the currently bound framebuffer for completeness and returns one of the values listed in the specification. If it returns GL_FRAMEBUFFER_COMPLETE, we are good to go:

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
  // all is well, time to dance a jig!
}

All subsequent rendering operations will render to the currently bound framebuffer. Since our framebuffer is not the default one, rendering to it has no effect whatsoever on what appears in your application's window. That is why rendering to your own framebuffer is called off-screen rendering. For render commands to affect the window output again, we need to bind the default framebuffer as the active one:

glBindFramebuffer(GL_FRAMEBUFFER, 0);   

Passing 0 as the framebuffer identifier binds the default framebuffer as active. When you are done with your framebuffer, do not forget to delete the object:

glDeleteFramebuffers(1, &fbo);  

Now let's step back to just before the completeness check: we need to create at least one attachment and attach it to our framebuffer. An attachment is a memory location that can act as a target buffer for the framebuffer; think of it as an image. When creating an attachment we have a choice: use textures or renderbuffer objects.

Texture attachments


After attaching a texture to the framebuffer, the results of all subsequent rendering commands are written to that texture as if it were a normal color, depth, or stencil buffer.

The advantage of using a texture attachment is that the results of rendering are stored in a texture format, making them easily accessible for processing in shaders.

Creating a texture for use in a framebuffer is roughly the same as creating a regular texture object:

unsigned int texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 600, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  

The main difference is that the texture dimensions are set equal to the screen size (although this is not required), and NULL is passed instead of a pointer to an array of texture data. Here we only allocate memory for the texture without filling it; it will be filled automatically once we render into this framebuffer. Also note the absence of wrapping mode and mipmapping settings, since in most cases they are not needed for off-screen buffers.

If you want to render the entire screen into a texture of a smaller or larger size, you must call glViewport again immediately before rendering, passing the dimensions of the texture. Otherwise either only a fragment of the screen image will end up in the texture, or the texture will only be partially filled with the screen image.
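A minimal sketch of this, assuming hypothetical texWidth/texHeight and scrWidth/scrHeight variables:

glViewport(0, 0, texWidth, texHeight);     // viewport matching the texture size
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... render the scene into the framebuffer texture ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, scrWidth, scrHeight);     // restore the window-sized viewport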

After creating the texture object, we need to attach it to the framebuffer:

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);  

The function takes the following parameters:

  • target - the framebuffer target we are attaching the texture to (draw, read, or both).
  • attachment - the type of attachment we are going to attach. In this case we attach a color attachment. Note the 0 at the end of the identifier: it implies that more than one color attachment can be connected to a buffer. This point is considered in more detail later.
  • textarget - the type of texture you are attaching.
  • texture - the texture object itself.
  • level - the mipmap level to use for output.

In addition to color attachments, we can also attach depth and stencil textures to the framebuffer object. To attach a depth texture we specify the attachment type GL_DEPTH_ATTACHMENT. Do not forget that the texture's format and internalformat parameters must then be set to GL_DEPTH_COMPONENT so it can store depth values in the appropriate format. To attach a stencil texture the attachment type is GL_STENCIL_ATTACHMENT, and the texture format settings are set to GL_STENCIL_INDEX.
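A minimal sketch of a depth-only attachment, reusing the texture creation pattern from above (the 800x600 dimensions are an assumption, as before):

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 800, 600, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, texture, 0);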

It is also possible to attach both the depth buffer and the stencil buffer as a single texture. In this configuration each 32-bit texture value consists of 24 bits of depth and 8 bits of stencil information. To attach depth and stencil as one texture we use the attachment type GL_DEPTH_STENCIL_ATTACHMENT and configure the texture format to store combined depth and stencil values. An example of attaching a combined depth and stencil buffer as a single texture is shown below:

glTexImage2D(
  GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 800, 600, 0, 
  GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL
);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, texture, 0); 

Renderbuffer Attachments


Chronologically, renderbuffers were added to the library as another type of framebuffer attachment later than textures, which in the old days were the only option for off-screen rendering. Like a texture, a renderbuffer object is an actual buffer in memory, i.e. an array of bytes, integers, pixels, or what have you.

However, a renderbuffer has an additional advantage: its data is stored in the library's native internal format, which makes it optimized specifically for off-screen rendering.

Renderbuffer objects store rendering data directly, without conversion to texture-specific data formats, which gives a noticeable speed advantage when writing to the buffer. The drawback is that, generally speaking, a renderbuffer is write-only. You can read from it only indirectly, via a call to glReadPixels, and even that returns pixel data from the currently bound framebuffer, not from the renderbuffer attachment itself.
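As a hedged illustration (not from the original lesson), reading back a block of pixels from the currently bound framebuffer could look like this; the 800x600 size is an assumption:

std::vector<unsigned char> pixels(800 * 600 * 3);
// reads from the framebuffer currently bound for reading (GL_READ_FRAMEBUFFER)
glReadPixels(0, 0, 800, 600, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());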

Since the data is stored in the library's internal format, renderbuffers are very fast when writing to them or when copying their data to other buffers. Buffer switching is also very fast with renderbuffer objects, so the glfwSwapBuffers function that we call at the end of each render iteration could also be implemented with renderbuffers: we write to one buffer and switch to the other once rendering is complete. In tasks like this the renderbuffer really shines.
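For completeness, here is a sketch of fast buffer-to-buffer copying (an assumption, not code from the lesson): glBlitFramebuffer copies a rectangle from the framebuffer bound for reading into the one bound for drawing; the 800x600 size is assumed:

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 800, 600, 0, 0, 800, 600, GL_COLOR_BUFFER_BIT, GL_NEAREST);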

Creating a renderbuffer object is quite similar to creating a framebuffer object:

unsigned int rbo;
glGenRenderbuffers(1, &rbo);

As expected, we need to bind the renderbuffer object so that subsequent renderbuffer operations affect it:

glBindRenderbuffer(GL_RENDERBUFFER, rbo);  

Since renderbuffer objects are generally write-only, they are often used to store depth and stencil data: most of the time we do not need to read specific values from these buffers, we only need them to do their job. More precisely, we need a depth and stencil buffer for the corresponding tests, but we never sample from them. When no sampling from a buffer is planned, a renderbuffer is an excellent choice, and the extra performance comes as a bonus.

Creating a depth and stencil renderbuffer:

glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 800, 600);

Creating a renderbuffer object is similar to creating a texture object. The difference is that a renderbuffer is intended purely as a storage target for an image, as opposed to a general-purpose data buffer like a texture. Here we choose GL_DEPTH24_STENCIL8 as the internal format, which gives 24 bits for depth and 8 bits for stencil.

Do not forget that the object must be attached to the framebuffer:

glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo);

Using a renderbuffer can bring some performance benefits to off-screen rendering, but it is important to know when to use a renderbuffer and when a texture. The general rule is: if you never plan to sample data from a buffer, use a renderbuffer for it; if you need to sample from it at least occasionally, such as reading a fragment's color or depth, use a texture attachment. Performance-wise the gains will not be huge anyway.

Rendering to a texture


So, armed with a general idea of how framebuffers work, let's put them to use. We will render the scene into a texture attached to our framebuffer and then draw a single full-screen quad using that texture. Yes, we will not see any difference: the result will look exactly the same as without a framebuffer. What is the point, then? Read the next section and find out.

First, create a frame buffer object and bind it right away:

unsigned int framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer); 

Next we create a texture object that we will attach to the framebuffer as a color attachment. Again, we set the texture dimensions equal to those of the application window and leave the data pointer empty:

// create the texture object
unsigned int texColorBuffer;
glGenTextures(1, &texColorBuffer);
glBindTexture(GL_TEXTURE_2D, texColorBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 600, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
// attach the texture to the currently bound framebuffer object
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texColorBuffer, 0); 

We also want to be able to do depth testing (and stencil testing, if you need it), so we must not forget to add a depth (and stencil) attachment to our framebuffer. Since we only plan to sample from the color buffer, we can use a renderbuffer as storage for the depth and stencil data.

Creating the renderbuffer object is straightforward. The only thing to remember is that we are creating a combined depth and stencil buffer, so we set the renderbuffer's internal format to GL_DEPTH24_STENCIL8. For our purposes, 24 bits of depth precision is enough.

unsigned int rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo); 
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 800, 600);  
glBindRenderbuffer(GL_RENDERBUFFER, 0);

Once we have allocated memory for the renderbuffer object, we can unbind it.
Then we attach the renderbuffer object to the framebuffer's combined depth and stencil attachment point:

glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo);

The final touch is to check the framebuffer for completeness and print a debug message if it is not complete:

if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
	std::cout << "ERROR::FRAMEBUFFER:: Framebuffer is not complete!" << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);  

Do not forget to unbind the framebuffer object at the end so that you do not accidentally render to the wrong target.

Now we have a framebuffer object fully prepared for rendering into, instead of the default framebuffer. All that remains is to bind our framebuffer; all subsequent render commands will then affect the currently bound framebuffer. All depth and stencil operations will likewise use the corresponding attachments of the currently bound framebuffer (provided, of course, you created them). If you forgot to add a depth buffer to the framebuffer, for example, depth testing will simply stop working, because there is no depth data in the framebuffer for it to use.

So, to render the scene to the texture we take the following steps:

1. Bind our framebuffer object as the active framebuffer and render the scene as usual.
2. Bind the default framebuffer.
3. Draw a full-screen quad textured with the color buffer of our framebuffer object.

We will draw the scene from the depth testing lesson, but this time using the already familiar container texture.

To draw the full-screen quad we will create a new set of trivial shaders. There are no fancy matrix transformations, since we will supply the vertex coordinates directly as normalized device coordinates (NDC). Recall that in this form they can be passed straight through as the output of the vertex shader:

#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoords;
out vec2 TexCoords;
void main()
{
    gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0); 
    TexCoords = aTexCoords;
}  

Nothing special, right? The fragment shader is even simpler, since all it does is sample from the texture:

#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D screenTexture;
void main()
{ 
    FragColor = texture(screenTexture, TexCoords);
}

Creating and configuring the VAO for the quad itself remains on your conscience; a minimal sketch of one possible setup is given below.
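This sketch is an assumption rather than code from the lesson; the attribute layout matches the shader above (location 0 for position, location 1 for texture coordinates):

float quadVertices[] = {
    // positions   // texCoords
    -1.0f,  1.0f,  0.0f, 1.0f,
    -1.0f, -1.0f,  0.0f, 0.0f,
     1.0f, -1.0f,  1.0f, 0.0f,

    -1.0f,  1.0f,  0.0f, 1.0f,
     1.0f, -1.0f,  1.0f, 0.0f,
     1.0f,  1.0f,  1.0f, 1.0f
};
unsigned int quadVAO, quadVBO;
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));

The render iteration ultimately has the following structure: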

// first pass
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // the stencil buffer is not used
glEnable(GL_DEPTH_TEST);
DrawScene();	
// second pass
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); 
glClear(GL_COLOR_BUFFER_BIT);
screenShader.use();  
glBindVertexArray(quadVAO);
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, texColorBuffer);
glDrawArrays(GL_TRIANGLES, 0, 6);  

A few notes. First, since the framebuffer object we created has its own set of buffers, each of them has to be cleared by setting the corresponding flags for glClear. Second, when drawing the quad we disable depth testing, because it is redundant for a simple pair of triangles; the test does have to be enabled when rendering the scene itself.

Phew! Quite a few steps in which it is easy to make a mistake. If your program does not display anything, try debugging where you can and re-read the relevant parts of this lesson. If everything worked, the output should look similar to this:


On the left, the result is identical to the image from the depth test lesson, but this time it is displayed on a full-screen quad. If you switch the render mode to wireframe (glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) to enter the mode, glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) to return to normal; translator's note), you can see that only a couple of triangles are drawn to the default framebuffer.
The source code for the example is here.

Well, what is the use of all this? Since we now have a texture with the contents of the finished frame, we can easily access the value of every pixel and implement all sorts of intricate effects in the fragment shader! Collectively this technique is called post-processing.

Post-processing


With a texture containing the image of the entire frame at our disposal, we can implement various effects through simple operations on the texture data. This section demonstrates some popular post-processing techniques and should give you ideas for creating your own effects with a little imagination.

Let's start with, perhaps, the simplest effect.

Color inversion


Since we have full access to the color data of the final frame, it is easy to obtain the opposite of the original color in the fragment shader: sample the color from the texture and subtract it from 1.0:

void main()
{
    FragColor = vec4(vec3(1.0 - texture(screenTexture, TexCoords)), 1.0);
}  

Color inversion, despite being a simple effect, can produce quite interesting results:


All the colors in the scene are inverted with just one line of shader code; not bad, right?

Grayscale conversion


Another interesting effect is removing all color information and converting the image to grayscale. The naive solution is obvious: sum the values of the color channels, average them, and replace the original values with that average:

void main()
{
    FragColor = texture(screenTexture, TexCoords);
    float average = (FragColor.r + FragColor.g + FragColor.b) / 3.0;
    FragColor = vec4(average, average, average, 1.0);
} 

The results of this approach are quite acceptable, but the human eye is more sensitive to the green part of the spectrum and less sensitive to blue. So a more physically correct grayscale conversion uses a weighted average of the individual channels:

void main()
{
    FragColor = texture(screenTexture, TexCoords);
    float average = 0.2126 * FragColor.r + 0.7152 * FragColor.g + 0.0722 * FragColor.b;
    FragColor = vec4(average, average, average, 1.0);
}   


At first glance the difference is not obvious, but in more saturated scenes the weighted grayscale conversion gives a better result.

Applying a convolution kernel


Another advantage of post-processing on a texture is that we can access any part of the texture: for example, take a small area around the current texture coordinate and sample values around the current texel. By combining these samples it is easy to create some interesting effects.

A convolution kernel (convolution matrix) is a small matrix-like array of values whose central element corresponds to the pixel currently being processed and whose surrounding elements correspond to the neighboring texels. During processing, the kernel values surrounding the center are multiplied by the samples of the neighboring texels, everything is then added together and written to the current (central) texel. Essentially, we add a small offset to the texture coordinates in every direction around the current texel and combine the results using the kernel values. Take, for example, the following convolution kernel:

$\begin{bmatrix} 2 & 2 & 2 \\ 2 & -15 & 2 \\ 2 & 2 & 2 \end{bmatrix}$


This kernel multiplies the values of the neighboring texels by 2 and the current texel by -15. In other words, it weights every neighboring value by the coefficient stored in the kernel and "balances" the operation by weighting the current texel with a large negative coefficient.
Most convolution matrices you will find online have coefficients that sum to 1. If they do not, the processed image becomes either brighter or darker than the original.

Convolution kernels are an incredibly useful tool for creating post-processing effects: they are quite simple to implement, easy to experiment with, and many ready-made examples are available online.

To support convolution kernels we have to modify the fragment shader slightly. We assume that only 3x3 kernels will be used (most well-known kernels have exactly this size):

const float offset = 1.0 / 300.0;  
void main()
{
    vec2 offsets[9] = vec2[](
        vec2(-offset,  offset), // top-left
        vec2( 0.0f,    offset), // top-center
        vec2( offset,  offset), // top-right
        vec2(-offset,  0.0f),   // center-left
        vec2( 0.0f,    0.0f),   // center-center
        vec2( offset,  0.0f),   // center-right
        vec2(-offset, -offset), // bottom-left
        vec2( 0.0f,   -offset), // bottom-center
        vec2( offset, -offset)  // bottom-right    
    );
    float kernel[9] = float[](
        -1, -1, -1,
        -1,  9, -1,
        -1, -1, -1
    );
    vec3 sampleTex[9];
    for(int i = 0; i < 9; i++)
    {
        sampleTex[i] = vec3(texture(screenTexture, TexCoords.st + offsets[i]));
    }
    vec3 col = vec3(0.0);
    for(int i = 0; i < 9; i++)
        col += sampleTex[i] * kernel[i];
    FragColor = vec4(col, 1.0);
}  

Here we first define an array of 9 vec2 values holding the texture coordinate offsets relative to the current texel. The offset size is set through a constant whose value you are free to choose yourself. Next we define the kernel, in this case one producing a sharpening effect. Then we fill the array of samples, adding each offset to the current texture coordinates. Finally, we sum all the samples multiplied by their corresponding weights.
The effect of using such a kernel looks like this:


This effect may come in handy in scenes where the player is on some kind of drug trip.

Blur


The kernel that implements a blur effect looks like this:

$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} / 16$


Since all the elements together add up to 16, simply returning the combined result would give an extremely bright image, so we divide each value by 16. The kernel definition:

float kernel[9] = float[](
    1.0 / 16, 2.0 / 16, 1.0 / 16,
    2.0 / 16, 4.0 / 16, 2.0 / 16,
    1.0 / 16, 2.0 / 16, 1.0 / 16  
);

Changing just the numbers in the kernel array completely transforms the picture:


The blur effect has great potential for application. For example, you could vary the amount of blur over time to simulate a character being drunk, or increase the blur in scenes where the hero forgot to put on his glasses. Blur also smooths color transitions, which will be useful in later lessons.
You can see that once the convolution code is in place, it is quick and easy to create new post-processing effects. To conclude, let's look at the last of the popular convolution effects.

Edge detection


Below is the kernel for edge detection:

$\begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix}$


It resembles the sharpening kernel, but this one highlights all the edges in the image while darkening the rest, which is very useful when the edges are all you care about in an image.
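In the shader from the previous section only the kernel array needs to change; a minimal sketch of the edge-detection kernel definition:

float kernel[9] = float[](
    1,  1, 1,
    1, -8, 1,
    1,  1, 1
);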


You probably won't be surprised that convolution kernels are also used in image editing programs and filters such as Adobe Photoshop. Real-time per-pixel image modification is entirely feasible thanks to the GPU's outstanding ability to process fragments in parallel, which is why graphics packages have increasingly relied on video cards for image processing in recent years.

PS From the comments to the original: an excellent interactive demonstration of various convolutions.
