Dynamic Local Exposure
Hi, Habr! Here is my translation of the article "Dynamic Local Exposure" by John Chapman.
In this article I will present a couple of ideas about dynamic local exposure in HDR rendering. Bart Wronski already has a great article on this topic, and I highly recommend reading it right now if you have not done so already; the ideas here build largely on his. At the end I have included several other great links.
Low / High Dynamic Range
In the good old days (the 1990s), games were rendered directly in the displayed LDR (low dynamic range) format (gamma space, 8 bits). This was simple and cheap but significantly hampered the creation of a truly photorealistic picture.
Today, especially with the advent of PBR (physically based rendering), games are rendered with a huge dynamic range in linear space at higher precision. With this move toward photorealism comes a real problem: how do we display an HDR image in LDR?
Global Auto Exposure
The standard approach to automatic exposure control is to measure the average (or log-average) luminance of the scene, optionally with a weighting function that favors values close to the center of the image. This can be done very efficiently either with a parallel reduction or by repeatedly downsampling the luminance buffer through its mip chain. The latter approach has some advantages, which I will discuss in the next section.
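For illustration, the luminance buffer could be filled by a fragment shader along these lines; this is only a sketch, and txHdrColor, the Rec. 709 weights and the log encoding are my assumptions, not the author's code:

// Luminance pass sketch: write log-luminance so that averaging mip texels
// approximates the log-average used for auto exposure.
uniform sampler2D txHdrColor; // linear HDR scene color (assumed name)
in vec2 uv;
out float fLogLuminance;

void main()
{
    vec3 hdr = texture(txHdrColor, uv).rgb;
    float L = dot(hdr, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
    fLogLuminance = log(max(L, 1e-6)); // clamp to avoid log(0)
}

The mip chain is then built by repeated 2x2 downsampling of this buffer; the 1x1 mip ends up holding the average log-luminance of the whole scene.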
The average luminance is then converted to an exposure value, for example by taking the reciprocal of the maximum scene luminance that does not saturate the "sensor":
float Lavg = exp(textureLod(txLuminance, uv, 99.0).x);
float ev100 = log2(Lavg * 100.0 / 12.5);
ev100 -= uExposureCompensation; // optional manual bias
float exposure = 1.0 / (1.2 * exp2(ev100));
This formula comes from the ISO saturation-based speed standard; see (3) for a full explanation.
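For reference, the relationships behind the snippet above can be written out as follows, following the saturation-based derivation in (3); here S is the ISO speed, K the reflected-light meter calibration constant and q the lens/vignetting attenuation, with the usual constants S = 100, K = 12.5, q = 0.65:

EV_{100} = \log_2\!\left(\frac{L_{avg} \cdot S}{K}\right) = \log_2\!\left(\frac{L_{avg} \cdot 100}{12.5}\right)

L_{max} = \frac{78}{q \cdot S} \cdot 2^{EV_{100}} = 1.2 \cdot 2^{EV_{100}}, \qquad \text{exposure} = \frac{1}{L_{max}}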
Since the average luminance is potentially unstable under dynamic conditions, it is usually smoothed over time with an exponential feedback function (2):
Lavg = Lavg + (Lnew - Lavg) * (1.0 - exp(uDeltaTime * -uRate));
Translator's note
This function should be applied in the luminance-texture downsampling shader, and only when computing the last (1x1) mip level. This is discussed later in the article, but in my opinion it is easy to overlook.
Because of its global nature, this method can severely darken or blow out regions of the image whose luminance deviates from the average:
Although this matches the eye's ability to adapt to changes in light level, the overall effect is quite far from what we actually perceive in the real world.
Local Auto Exposure
If we generate the average luminance via downsampling, we also have access to the lower mip levels of the luminance buffer (4), which give us a local average luminance:
float Lavg = exp(textureLod(txLuminance, uv, uLuminanceLod).x);
Note that for this to work, the temporal feedback must be applied only at the last step (when writing the 1x1 mip level), otherwise artifacts will appear.
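As a sketch, the final downsampling step might look like this; txSrcMip2x2 and txPrevLuminance are my names, and reading last frame's 1x1 result implies ping-ponging two small targets — these are assumptions, not the article's code:

// Final (1x1) reduction sketch: average the 2x2 source mip, then blend toward
// last frame's value so adaptation is smooth. Earlier mip reductions just average.
uniform sampler2D txSrcMip2x2;     // the 2x2 mip being reduced (assumed binding)
uniform sampler2D txPrevLuminance; // previous frame's 1x1 result (assumed)
uniform float uDeltaTime;
uniform float uRate;
out float fLogLuminance;

void main()
{
    float Lnew = 0.25 * (texelFetch(txSrcMip2x2, ivec2(0, 0), 0).x +
                         texelFetch(txSrcMip2x2, ivec2(1, 0), 0).x +
                         texelFetch(txSrcMip2x2, ivec2(0, 1), 0).x +
                         texelFetch(txSrcMip2x2, ivec2(1, 1), 0).x);
    float Lavg = texelFetch(txPrevLuminance, ivec2(0, 0), 0).x;
    // exponential feedback, applied only at this final step
    fLogLuminance = Lavg + (Lnew - Lavg) * (1.0 - exp(uDeltaTime * -uRate));
}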
In theory this is a great idea: each region of the image can be well exposed while remaining in contrast with its neighbors. In practice, however, the result is pretty awful:
The most objectionable artifacts are blocky "halos" that appear in high-contrast areas:
However, they can be smoothed out either by pre-filtering the luminance buffer or simply by using bicubic sampling:
It still looks awful, but better.
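For the "bicubic sampling" option, a commonly used helper that builds a bicubic filter out of four bilinear taps can be adapted to sample a specific mip level. This is a general-purpose sketch, not code from the article:

// B-spline cubic weights for a 1D parameter v in [0,1]
vec4 cubicWeights(float v)
{
    vec4 n = vec4(1.0, 2.0, 3.0, 4.0) - v;
    vec4 s = n * n * n;
    float x = s.x;
    float y = s.y - 4.0 * s.x;
    float z = s.z - 4.0 * s.y + 6.0 * s.x;
    float w = 6.0 - x - y - z;
    return vec4(x, y, z, w) * (1.0 / 6.0);
}

// Bicubic sample of a given mip level using four bilinear fetches.
vec4 textureBicubicLod(sampler2D tex, vec2 uv, float lod)
{
    vec2 texSize = vec2(textureSize(tex, int(lod)));
    vec2 invTexSize = 1.0 / texSize;

    vec2 coord = uv * texSize - 0.5;
    vec2 fxy = fract(coord);
    coord -= fxy;

    vec4 xcubic = cubicWeights(fxy.x);
    vec4 ycubic = cubicWeights(fxy.y);

    vec4 c = coord.xxyy + vec2(-0.5, 1.5).xyxy;
    vec4 s = vec4(xcubic.xz + xcubic.yw, ycubic.xz + ycubic.yw);
    vec4 offset = (c + vec4(xcubic.yw, ycubic.yw) / s) * invTexSize.xxyy;

    vec4 s00 = textureLod(tex, offset.xz, lod);
    vec4 s10 = textureLod(tex, offset.yz, lod);
    vec4 s01 = textureLod(tex, offset.xw, lod);
    vec4 s11 = textureLod(tex, offset.yw, lod);

    float sx = s.x / (s.x + s.y);
    float sy = s.z / (s.z + s.w);

    return mix(mix(s11, s01, sx), mix(s10, s00, sx), sy);
}

// usage: float Llocal = exp(textureBicubicLod(txLuminance, uv, uLuminanceLod).x);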
Sampling a different luminance mip level changes the radius of the halos. This parameter is useful for controlling the overall "look" of the result, as well as for minimizing haloing, although at the expense of either an overall reduction in contrast (the result tends toward an edge filter) or a loss of locality in the exposure control:
Smoothing the halos is still not enough, though. The result is generally unnatural; it looks like an extreme "HDR photo" style, unlike what a person actually sees. However, by mixing the global and local values we can get the best of both worlds:
float Llocal = exp(textureLod(txLuminance, uv, uLuminanceLod).x);
float Lglobal = exp(textureLod(txLuminance, uv, 99.0).x);
float L = mix(Lglobal, Llocal, uLocalExposureRatio);
// .. use L to compute the final exposure scale as before
By varying the blend factor you can tune the local exposure to minimize artifacts while maximizing perceived realism:
Automatic mixing ratio
Adjusting the blend ratio manually is fine in situations where we have absolute control over the camera position, lighting, and so on. In many cases, however (for example, open-world games with a dynamic day/night cycle), this level of control is simply impossible. In that case it would be nice to generate the mixing ratio automatically.
In the image below we have a wide dynamic range: mostly medium-to-low luminance values and several areas of high intensity (the sky in the windows):
Without local exposure, the color of the sky is lost. In this case we would like a high mixing ratio:
Now consider the image below, which has a narrow dynamic range consisting mostly of high luminance values:
In this case, applying too much local exposure dims the "bright" areas excessively:
These observations suggest a simple way to mix the local and global values: the greater the difference between the average and maximum scene luminance, the higher the local exposure mixing ratio should be. The maximum scene luminance can be generated trivially during the luminance downsampling pass (a sketch of such a pass is shown after the next snippet), smoothed with the same temporal feedback as the average. We can therefore extend the previous code snippet as follows:
float Llocal = exp(textureLod(txLuminance, uv, uLuminanceLod).x);
float Lglobal = exp(textureLod(txLuminance, uv, 99.0).x); // average in x
float Lmax = exp(textureLod(txLuminance, uv, 99.0).y); // max in y
float Lratio = min(saturate(abs(Lmax - Lglobal) / Lmax), uLocalExposureMax);
float L = mix(Lglobal, Llocal, Lratio);
// .. use L to compute the final exposure scale as before
Note that we now have a new input, uLocalExposureMax, which controls the absolute maximum influence of the local exposure. In my experience, uLocalExposureMax < 0.3 gives good results.
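For reference (as mentioned above), a downsampling step that carries both values might look like the sketch below, with the average log-luminance in .x and the maximum in .y. The uniform names and pass setup are my assumptions, and the base level is assumed to write the per-pixel log-luminance into both channels:

// 2x2 reduction sketch tracking average (x) and maximum (y) log-luminance.
uniform sampler2D txSrcMip; // the mip level being reduced (assumed binding)
out vec2 fLuminance; // x = average log-luminance, y = max log-luminance

void main()
{
    ivec2 base = ivec2(gl_FragCoord.xy) * 2;
    vec2 t00 = texelFetch(txSrcMip, base + ivec2(0, 0), 0).xy;
    vec2 t10 = texelFetch(txSrcMip, base + ivec2(1, 0), 0).xy;
    vec2 t01 = texelFetch(txSrcMip, base + ivec2(0, 1), 0).xy;
    vec2 t11 = texelFetch(txSrcMip, base + ivec2(1, 1), 0).xy;
    float avg = 0.25 * (t00.x + t10.x + t01.x + t11.x);
    float mx  = max(max(t00.y, t10.y), max(t01.y, t11.y)); // log is monotonic, so max(log L) = log(max L)
    fLuminance = vec2(avg, mx);
    // at the last (1x1) step, both channels are additionally blended toward the
    // previous frame's values with the exponential feedback shown earlier
}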
Final code
float Llocal = exp(textureLod(txLuminance, uv, uLuminanceLod).x);
float Lglobal = exp(textureLod(txLuminance, uv, 99.0).x); // average in x
float Lmax = exp(textureLod(txLuminance, uv, 99.0).y); // max in y
float Lratio = min(saturate(abs(Lmax - Lglobal) / Lmax), uLocalExposureMax);
float L = mix(Lglobal, Llocal, Lratio);
float ev100 = log2(L * 100.0 / 12.5);
ev100 -= uExposureCompensation; // optional manual bias
float exposure = 1.0 / (1.2 * exp2(ev100));
vec3 result = hdrColor * exposure;
result += bloom;
//etc
outColor.rgb = result;
Conclusion
The approach outlined above places some restrictions on when the scene luminance can be measured. Usually the measurement is taken immediately after the lighting pass, so that particle effects, bloom, and so on do not affect the adaptation. When local luminance is used, however, it is important that the values actually being exposed are represented in the luminance map, which means the luminance measurement must happen immediately before the exposure is applied. If that is unacceptable, the solution is to generate the local luminance separately from the average and maximum values.
Although I think that using local and global scene luminance together is the "right" approach to producing a balanced, natural-looking image, the quality of the result is obviously subjective. Whether such a method suits a particular game depends entirely on the content and the desired visual style. I would be interested to hear other ideas on this.
Links
1. Localized Tonemapping (Bart Wronski)
2. Implementing a Physically Based Camera (Padraic Hennessy)
3. Moving Frostbite to PBR (Sébastien Lagarde et al.)
4. A Closer Look at Tonemapping (Matt Pettineo)
5. The Importance of Being Linear (Larry Gritz et al.)
6. Advanced Techniques and Optimization of HDR/VDR Color Pipelines (Timothy Lottes)
HDR images are taken from the sIBL Archive.