Learn OpenGL. Lesson 5.8 - Bloom

Original author: Joey de Vries

Bloom


Because conventional monitors can display only a limited range of brightness, convincingly rendering bright light sources and brightly lit surfaces is inherently difficult. One of the most common ways to emphasize bright areas on a monitor is to add a glowing halo around bright objects, creating the impression that light spills beyond the light source. As a result, the observer gets the impression that such lit areas and light sources are indeed very bright.

This halo effect, where light appears to spill beyond the boundaries of the light source, is achieved with a post-processing technique called bloom. Applying the effect adds a characteristic glowing halo to all bright areas of the rendered scene, as can be seen in the example below:



Bloom gives the image a clearly visible cue about the significant brightness of the objects covered by the halo. When applied selectively and in moderation (something many games, alas, fail at), the effect can noticeably improve the visual expressiveness of the scene's lighting and add drama in certain situations.

This technique works hand in hand with HDR rendering almost by default. Apparently this is why many people mistakenly conflate the two terms to the point of complete interchangeability. The techniques are, however, entirely independent and serve different purposes. It is perfectly possible to implement bloom using the default framebuffer with 8-bit color depth, just as it is possible to use HDR rendering without bloom. HDR rendering simply allows the effect to be implemented more effectively (we will see this later).

To implement bloom, the lit scene is first rendered as usual; then the HDR color buffer and a color buffer containing only the bright areas of the scene are extracted. This extracted image of the bright areas is then blurred and composited on top of the original HDR image of the scene.

Let's walk through the process step by step. We render a scene containing four bright light sources, drawn as colored cubes, with brightness values ranging from 1.5 to 15.0. Rendering into an HDR color buffer gives the following result:


From this HDR color buffer we extract all fragments whose brightness exceeds a specified threshold, leaving an image that contains only the brightly lit areas:


Next, this image of the bright areas is blurred. The intensity of the effect is largely determined by the strength and radius of the blur filter used:


The resulting blurred image of the bright areas is what produces the final halo effect around bright objects. This texture is simply blended with the original HDR image of the scene. Since the blur enlarged the bright areas, the final result gives the visual impression of light spreading beyond the boundaries of the light sources:


As you can see, bloom is not the most sophisticated technique, yet achieving high visual quality and plausibility with it is not always easy. For the most part, the effect depends on the quality and type of blur filter applied: even small changes to the filter parameters can dramatically change the final look of the technique.

The steps above give us a step-by-step algorithm for the bloom post-processing effect. The image below summarizes the required actions:


First of all, we will need to extract the bright areas of the scene based on a predefined brightness threshold; a rough outline of the entire pipeline is sketched below, after which we will deal with each step in turn.
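
For orientation, here is a minimal per-frame outline of the whole technique in host code. It is only a sketch: shaderLighting, shaderBloomFinal, BlurBrightTexture, RenderScene and RenderQuad are hypothetical helper names, not functions from the listings below.

// 1. Render the scene into an HDR framebuffer with two color attachments
//    (full scene + bright areas, both filled in one pass via MRT)
glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLighting.use();
RenderScene();

// 2. Blur the bright-areas texture with a separable Gaussian filter,
//    ping-ponging between two helper framebuffers
BlurBrightTexture();

// 3. Draw a screen-filling quad that additively blends the blurred texture
//    with the HDR scene, then applies tone mapping and gamma correction
glBindFramebuffer(GL_FRAMEBUFFER, 0);
shaderBloomFinal.use();
RenderQuad();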

Extracting bright areas


So, we first need to obtain two images from our scene. A naive approach would be to render the scene twice; instead we use a more elegant trick called Multiple Render Targets (MRT): we declare more than one output in the fragment shader, which lets us extract both images in a single render pass! To specify which color buffer a shader output is written to, the layout specifier is used:

layout (location = 0) out vec4 FragColor;
layout (location = 1) out vec4 BrightColor;  

Of course, this only works if we actually have several buffers to write to. In other words, to use multiple fragment shader outputs, the framebuffer bound at that moment must have a sufficient number of attached color buffers. If we recall the framebuffers lesson, when attaching a texture as a color buffer we can specify the color attachment number. Until now we never needed an attachment other than GL_COLOR_ATTACHMENT0, but this time GL_COLOR_ATTACHMENT1 also comes in handy, since we need two render targets at once:

// set up a floating point framebuffer
unsigned int hdrFBO;
glGenFramebuffers(1, &hdrFBO);
glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
unsigned int colorBuffers[2];
glGenTextures(2, colorBuffers);
for (unsigned int i = 0; i < 2; i++)
{
    glBindTexture(GL_TEXTURE_2D, colorBuffers[i]);
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL
    );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // attach the texture to the framebuffer
    glFramebufferTexture2D(
        GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, colorBuffers[i], 0
    );
}  

We must also explicitly tell OpenGL that we are going to render to multiple color buffers, by calling glDrawBuffers. Otherwise OpenGL will write only to the first attachment, ignoring writes to the others. The function takes an array of attachment identifiers from the corresponding enum:

unsigned int attachments[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, attachments);  
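
Although the listing above omits it, it is good practice to verify framebuffer completeness before rendering into it. A minimal check might look like this (the logging style is just an illustration):

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cout << "Framebuffer not complete!" << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);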

With this framebuffer bound, any fragment shader that declares location specifiers on its outputs will write to the corresponding color buffers. And that is great news, because it means we avoid an extra render pass for extracting the bright parts of the scene: everything can be done at once in a single shader:

#version 330 core
layout (location = 0) out vec4 FragColor;
layout (location = 1) out vec4 BrightColor;
[...]
void main()
{            
    [...] // first do the usual lighting calculations
    FragColor = vec4(lighting, 1.0);
    // check whether the fragment's brightness exceeds the given threshold;
    // if it is brighter, write it to the separate buffer holding bright areas
    float brightness = dot(FragColor.rgb, vec3(0.2126, 0.7152, 0.0722));
    if(brightness > 1.0)
        BrightColor = vec4(FragColor.rgb, 1.0);
    else
        BrightColor = vec4(0.0, 0.0, 0.0, 1.0);
}

In this snippet the typical lighting calculation code is omitted; its result is written to the shader's first output, the FragColor variable. The resulting fragment color is then used to compute a brightness value via a weighted conversion to grayscale: by taking the dot product, we multiply the corresponding components of the two vectors and sum them into a single value. For example, pure red (1, 0, 0) yields a brightness of 0.2126, while white (1, 1, 1) yields exactly 1.0. If the fragment's brightness exceeds the given threshold, its color is written to the second shader output. The same shader is also executed for the cubes that stand in for the light sources.

Now that we understand the algorithm, we can see why the technique pairs so well with HDR rendering. Rendering in HDR allows color components to exceed the upper limit of 1.0, so the brightness threshold can be placed outside the standard [0.0, 1.0] range, giving fine control over which parts of the scene count as bright. Without HDR you have to settle for a threshold inside [0.0, 1.0], which is workable but leads to a much harsher brightness cutoff, often making the bloom look intrusive and garish (imagine a snowy field high in the mountains, where almost everything crosses the threshold).

After executing this shader, the two render targets contain the normal image of the scene as well as an image holding only its bright areas.
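
Concretely, producing both images is just the first step of the outline sketched earlier: a single draw pass into hdrFBO. A minimal sketch (shaderLighting and RenderScene are hypothetical names standing in for your lighting shader and scene drawing code):

// one scene render fills both color attachments thanks to MRT
glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLighting.use();
// ... set the view/projection matrices and light uniforms here ...
RenderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// colorBuffers[0] now holds the HDR scene, colorBuffers[1] only its bright areas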


The image of the bright areas now needs to be blurred. We could do this with a simple box filter, like the one used in the post-processing section of the framebuffers lesson, but a Gaussian blur gives much better results.

Gaussian blur


The post-processing lesson introduced blurring by simply averaging the colors of neighboring image fragments. That method is simple, but the resulting image could look more attractive. Gaussian blur is based on the bell-shaped distribution curve of the same name: the function takes high values near the center of the curve and falls off on both sides. Mathematically, a Gaussian curve can be expressed with various parameters, but its general shape remains the following:
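
For reference, since the original figure is not reproduced here, the one-dimensional form of that curve is the standard Gaussian function:

G(x) = (1 / sqrt(2 * pi * sigma^2)) * exp(-x^2 / (2 * sigma^2))

where sigma controls the width of the bell: the larger it is, the wider and flatter the curve.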


A blur with weights taken from the Gaussian curve looks much better than a box filter: since the curve has more of its area near the center, fragments close to the center of the filter kernel receive larger weights. Taking, for example, a 32x32 kernel, we use smaller and smaller weights the further a fragment is from the central one. It is this property that gives Gaussian blur its visually more pleasing result.

Implementing the filter requires a two-dimensional array of weights, which could be filled from the two-dimensional form of the Gaussian equation. However, we immediately run into a performance problem: even a relatively small 32x32 blur kernel requires 1024 texture samples for every fragment of the processed image!

Luckily for us, the Gaussian equation has a very convenient mathematical property: separability. It allows the single two-dimensional equation to be split into two one-dimensional equations describing the horizontal and vertical components. The blur can then be performed in two passes, first horizontally and then vertically, with the weight set corresponding to each direction. The resulting image is the same as with the two-dimensional algorithm, but it requires far less GPU horsepower: instead of 1024 texture samples we need only 32 + 32 = 64! This is the essence of two-pass Gaussian filtering.
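
As an aside, the five weights used in the shader below can be generated offline from the one-dimensional Gaussian function. Here is a minimal sketch; the sigma value is an assumption picked for illustration, not a constant taken from the tutorial:

#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const int   halfWidth = 5;     // center tap + 4 offsets, as in the shader
    const float sigma     = 1.75f; // illustrative width of the bell curve
    std::vector<float> weights(halfWidth);
    float sum = 0.0f;
    for (int i = 0; i < halfWidth; ++i)
    {
        weights[i] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        // the center tap is used once, every other tap twice (left and right)
        sum += (i == 0) ? weights[i] : 2.0f * weights[i];
    }
    for (int i = 0; i < halfWidth; ++i)
        std::printf("%f\n", weights[i] / sum); // normalized so all taps sum to 1
    return 0;
}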


For us, all this means one thing: a single image must be blurred twice, and this is where framebuffer objects come in handy again. We apply the so-called ping-pong technique: we take a pair of framebuffers and render the contents of one framebuffer's color buffer, with some processing applied, into the color buffer of the other; then source and destination are swapped and the process repeats a given number of times. In essence, the framebuffer currently being rendered to is simply switched on each iteration, and with it the texture being sampled from. This approach lets us blur the original image into the first framebuffer, then blur the first framebuffer's contents into the second, then blur the second back into the first, and so on.

Before moving on to the frame buffer configuration code, let's take a look at the Gaussian blur shader code:

#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D image;
uniform bool horizontal;
uniform float weight[5] = float[] (0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);
void main()
{             
    // size of a single texel
    vec2 tex_offset = 1.0 / textureSize(image, 0); 
    // contribution of the current fragment
    vec3 result = texture(image, TexCoords).rgb * weight[0]; 
    if(horizontal)
    {
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
        }
    }
    else
    {
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(0.0, tex_offset.y * i)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(0.0, tex_offset.y * i)).rgb * weight[i];
        }
    }
    FragColor = vec4(result, 1.0);
}

As you can see, we use a fairly small set of Gaussian weights, applied to samples taken horizontally or vertically around the current fragment. The code has two main branches, splitting the algorithm into a vertical and a horizontal pass based on the value of the horizontal uniform. The offset of each sample is set to the size of one texel, computed as the reciprocal of the texture size (a vec2 value returned by the textureSize() function).

We create two framebuffers, each with a single texture-based color buffer:

unsigned int pingpongFBO[2];
unsigned int pingpongBuffer[2];
glGenFramebuffers(2, pingpongFBO);
glGenTextures(2, pingpongBuffer);
for (unsigned int i = 0; i < 2; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[i]);
    glBindTexture(GL_TEXTURE_2D, pingpongBuffer[i]);
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL
    );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(
        GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, pingpongBuffer[i], 0
    );
}

After obtaining the HDR texture of the scene and extracting the texture of the bright areas, we fill the color buffer of one of the prepared framebuffers with the brightness texture and run the ping-pong process ten times (five times vertically, five horizontally):

bool horizontal = true, first_iteration = true;
int amount = 10;
shaderBlur.use();
for (unsigned int i = 0; i < amount; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[horizontal]); 
    shaderBlur.setInt("horizontal", horizontal);
    glBindTexture(
        GL_TEXTURE_2D, first_iteration ? colorBuffers[1] : pingpongBuffer[!horizontal]
    ); 
    RenderQuad();
    horizontal = !horizontal;
    if (first_iteration)
        first_iteration = false;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); 

On each iteration we bind one of the framebuffers depending on whether this iteration blurs horizontally or vertically, while the other framebuffer's color buffer serves as the input texture for the blur shader. On the very first iteration we must explicitly bind the texture containing the extracted bright areas (colorBuffers[1] in the code above), since otherwise both ping-pong framebuffers would remain empty. After ten passes, the original image has been blurred five complete times with the full Gaussian filter. This approach makes it easy to vary the degree of blur: the more ping-pong iterations, the stronger the blur.

In our case, the result of the blur looks like this:


To complete the effect, it remains only to combine the blurred image with the original HDR image of the scene.

Texture blending


With the HDR texture of the rendered scene and the blurred texture of the overexposed areas at hand, all that remains to achieve the famous bloom (or glow) effect is to combine the two images. The final fragment shader (quite similar to the one from the HDR lesson) does exactly that, additively blending the two textures:

#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D scene;
uniform sampler2D bloomBlur;
uniform float exposure;
void main()
{             
    const float gamma = 2.2;
    vec3 hdrColor = texture(scene, TexCoords).rgb;      
    vec3 bloomColor = texture(bloomBlur, TexCoords).rgb;
    hdrColor += bloomColor; // additive blending
    // tone mapping
    vec3 result = vec3(1.0) - exp(-hdrColor * exposure);
    // and don't forget gamma correction
    result = pow(result, vec3(1.0 / gamma));
    FragColor = vec4(result, 1.0);
}

Note one detail: the blending is performed before tone mapping is applied. This way the extra brightness contributed by the effect is correctly converted into the LDR (Low Dynamic Range) range, preserving the relative brightness distribution of the scene.
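
On the host side, the final pass is simply a screen-filling quad with both textures bound. A minimal sketch (shaderBloomFinal is a hypothetical name; after the blur loop above, the result lives in pingpongBuffer[!horizontal]):

// composite the scene and the blurred brightness texture to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderBloomFinal.use();
shaderBloomFinal.setInt("scene", 0);
shaderBloomFinal.setInt("bloomBlur", 1);
shaderBloomFinal.setFloat("exposure", 1.0f);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorBuffers[0]);             // HDR scene
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, pingpongBuffer[!horizontal]); // blurred bright areas
RenderQuad();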

The result of the processing: all bright areas now have a noticeable glow effect:


The cubes representing the light sources now look much brighter and convey the impression of being light sources far better. This scene is rather primitive, so the effect here is not particularly thrilling, but in complex scenes with well-designed lighting, a well-implemented bloom can be the decisive visual element that adds drama.

The source code of the example is available here.

Note that this lesson used a fairly simple filter with only five samples in each direction. Taking more samples over a larger radius, or running the filter for more iterations, can visually improve the effect. Also, the quality of the entire effect directly depends on the quality of the blur algorithm used, and improving the filter can improve the effect dramatically. For example, combining several filters with different kernel sizes or different Gaussian curves produces more impressive results. Below are additional resources from Kalogirou and Epic Games that deal with improving bloom quality by modifying the Gaussian blur.

Additional resources


