2D Directional Lighting and Shading


Good afternoon, Habr readers!
I would like to talk about one way of rendering lighting and shadows in 2D space that takes the scene geometry into account. I really like the lighting in Gish and Super Meat Boy: in Meat Boy it is only visible on dynamic levels with collapsing or moving platforms, while in Gish it is everywhere. The lighting in these games feels so warm and cozy to me that I naturally wanted to implement something similar myself. Here is what came of it.

The problem statement is clear:
  • there is some 2D world into which we need to add dynamic lighting and shadows; the world is not necessarily tile-based and may have arbitrary geometry;
  • the number of light sources should, in principle, be unlimited (bounded only by system performance);
  • many light sources overlapping at one point, or a single source with a large intensity coefficient, should not merely light the area to 100% but overexpose it;
  • everything, of course, must be computed in real time.



All of this required OpenGL, GLSL, framebuffers, and a bit of math. I limited myself to OpenGL 3.3 and GLSL 3.30, since the video card in one of my machines (a GeForce 310) is badly outdated by today's standards, and for 2D this is more than enough (earlier versions are off-putting because of the mismatched OpenGL and GLSL version numbering). The algorithm itself is not complicated and runs in three stages:
  1. Create a texture the size of the render area, clear it to black, and draw the illuminated regions into it (the so-called light map), accumulating a light factor at every point;
  2. Render the scene into a separate texture;
  3. In the main render context, draw a quad covering the whole screen and combine the two textures in the fragment shader. At this stage you can play around with the fragment shader, adding, for example, refraction effects for water or fire, lenses, color correction to taste, and other post-processing.


1. Lighting map


We will borrow one of the most common techniques, deferred shading, and port it to 2D (P.S. My mistake: not deferred shading but shadow mapping, thanks for the correction). The essence of the method is to render the scene with the camera moved to the position of the light source, producing a depth buffer. Then, with simple manipulations of the camera and light source matrices in the shader, you can determine shading per pixel by translating the pixel coordinates of the rendered scene into the texture coordinates of that buffer. In 3D a z-buffer is used; here I decided to build my own one-dimensional depth buffer, on the CPU.
I make no claims about the rationality or optimality of this approach: there are plenty of lighting algorithms, each with its own pros and cons. During brainstorming the method seemed perfectly viable, and I set about implementing it. I will note that only while writing this article did I discover the technique already existed... oh well, reinventing the wheel is what it is.

1.1 Z-buffer aka depth buffer


The purpose of a z-buffer is to store the distance from the camera to scene elements, which lets you discard pixels hidden behind closer objects. Whereas in a 3D scene the depth buffer is a plane, in our flat world it becomes a line, i.e. a one-dimensional array. Light sources are points emitting light from their center in all directions, so the buffer index and value correspond to the polar coordinates of the object nearest to the source. I chose the buffer size empirically and settled on 1024 (it depends on the window size, of course). The smaller the buffer, the more noticeable the mismatch between object boundaries and the lit area becomes, especially with small objects, and in places outright unacceptable artifacts can appear:


The buffer-filling algorithm:
  • fill the buffer with the light source's radius (the distance at which light intensity falls to zero);
  • for each object within the light source's radius, take the edges that face the light source front-on. If you instead take the back-facing edges, objects automatically become lit, but a problem arises with objects standing next to each other:

  • project the resulting list of edges, converting their Cartesian coordinates into polar coordinates relative to the light source. A point (x; y) maps to (φ; r):
    φ = arccos (xAxis • normalize (point))
    where:
    • is the dot product of two vectors;
    xAxis is the unit vector of the x axis (1; 0), since 0 degrees corresponds to the point directly to the right of the circle's center;
    point is the vector from the light source's center to a point on the edge (i.e. the edge point's coordinates in the light source's coordinate system);
    normalize is vector normalization;
    r = |point| is the distance to the point.
    (Note that arccos only covers [0; π], so for points below the x axis the angle is taken as 2π − φ.)
    We project the two end points of the edge plus intermediate ones; the number of points to recompute equals the number of buffer cells covered by the edge's projection.
    The buffer index corresponding to the angle φ:
    index = φ / (2 * π) * bufferSize
    This gives the two extreme buffer indices corresponding to the edge's end points. Each intermediate index is converted back to an angle:
    φ = index * 2 * π / bufferSize
    then we construct a segment from (0; 0) at that angle, with length equal to or greater than the light source's radius:
    v = vec2 (cos (φ), sin (φ)) * radius
    and find the intersection point of that segment with the edge, for example, like this:
    • write the two lines as A1 * x + B1 * y + C1 = 0 and A2 * x + B2 * y + C2 = 0;
    • solving this system by Cramer's rule gives the intersection point:
      x = (B1 * C2 − B2 * C1) / (A1 * B2 − A2 * B1)
      y = (A2 * C1 − A1 * C2) / (A1 * B2 − A2 * B1)
    • if the denominator is zero (in our case, if its absolute value is below an epsilon, since we are dealing with floats), there is no solution: the lines either coincide or are parallel;
    • finally, check that the resulting point lies within both segments.

    The last step is to convert all the obtained intermediate points to polar coordinates; if the distance to a point is less than the buffer value at the current index, write it into the buffer. The buffer is now ready for use, and that is essentially all the math there is.
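The steps above can be sketched in C++ roughly as follows. This is a minimal CPU sketch with hypothetical names (not the author's actual code); for brevity it ignores edges that wrap across the 0/2π boundary and omits the segment-bounds check from the last bullet:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

static const float TWO_PI = 6.28318530718f;

struct Vec2 { float x, y; };

// Angle of `p` around the light source, mapped to [0; 2*pi).
// arccos alone covers only [0; pi]; the sign of p.y picks the half-plane.
float polarAngle( const Vec2& p ) {
    float phi = std::acos( p.x / std::sqrt( p.x * p.x + p.y * p.y ) );
    return ( p.y < 0.0f ) ? TWO_PI - phi : phi;
}

// Intersection of two lines A*x + B*y + C = 0 by Cramer's rule.
// Returns false when the denominator is (almost) zero.
bool intersectLines( float a1, float b1, float c1,
                     float a2, float b2, float c2, Vec2& out ) {
    float det = a1 * b2 - a2 * b1;
    if ( std::fabs( det ) < 1e-6f ) return false;
    out.x = ( b1 * c2 - b2 * c1 ) / det;
    out.y = ( a2 * c1 - a1 * c2 ) / det;
    return true;
}

// Project edge (a, b), given in the light source's coordinate system,
// into `buffer` (prefilled with the light radius), keeping the nearest
// distance per angular cell. Assumes the edge does not wrap across angle 0.
void projectEdge( const Vec2& a, const Vec2& b, std::vector<float>& buffer ) {
    const std::size_t n = buffer.size();
    std::size_t ia = std::size_t( polarAngle( a ) / TWO_PI * float( n ) );
    std::size_t ib = std::size_t( polarAngle( b ) / TWO_PI * float( n ) );
    if ( ia > ib ) std::swap( ia, ib );
    // The edge as a line: ea*x + eb*y + ec = 0.
    float ea = b.y - a.y, eb = a.x - b.x, ec = -( ea * a.x + eb * a.y );
    for ( std::size_t i = ia; i <= ib; ++i ) {
        float phi = float( i ) * TWO_PI / float( n );
        // Ray from the light's center at angle phi: sin(phi)*x - cos(phi)*y = 0.
        Vec2 p;
        if ( !intersectLines( std::sin( phi ), -std::cos( phi ), 0.0f,
                              ea, eb, ec, p ) )
            continue;
        if ( p.x * std::cos( phi ) + p.y * std::sin( phi ) < 0.0f )
            continue; // intersection lies behind the light source
        float r = std::sqrt( p.x * p.x + p.y * p.y );
        if ( r < buffer[ i ] ) buffer[ i ] = r;
    }
}
```

The sign correction in polarAngle is exactly the arccos adjustment mentioned above; a single atan2 call would do the same job.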


1.2 Vertex mesh


Now, from the data in the depth buffer, we need to build a polygon mesh covering the entire area the light source illuminates. It is convenient to use the Triangle Fan primitive: each triangle is formed from the first point, the previous point, and the current one. Accordingly, the first point is the center of the light source, and the coordinates of the remaining points are:
  for( unsigned int index = 0; index < bufferSize; ++index ) {
    float alpha = float( index ) / float( bufferSize ) * Math::TWO_PI;
    float value = buffer[ index ];
    Vec2 point( Math::Cos( alpha ) * value, Math::Sin( alpha ) * value );
    Vec4 pointColor( color.R, color.G, color.B, ( 1.0f - value / range ) * color.A );
    ...
  }
  

and we close the chain by duplicating the point at index zero. All points share the same color except for the alpha (brightness) value: maximum brightness at the center, 0.0 at the light source's radius (range). The alpha value is also useful in the fragment shader as a measure of a point's distance from the center, so the linear falloff of illumination with distance can be replaced with something more interesting, up to and including textures.
At this stage you can also push the obtained points outward by some amount so that the surfaces the rays fall on are themselves lit, creating an appearance of volume.
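As a hypothetical illustration of swapping the falloff curve (helper names are mine, not from the article), the per-vertex alpha computed above could be produced by a pluggable function:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical falloff helpers (names are mine): both map the
// distance/range ratio to a vertex brightness in [1; 0].
float linearFalloff( float value, float range ) {
    float t = std::clamp( value / range, 0.0f, 1.0f );
    return 1.0f - t;                       // what the snippet above uses
}

// A steeper alternative: quadratic ease-out, brighter near the center.
float quadraticFalloff( float value, float range ) {
    float t = std::clamp( value / range, 0.0f, 1.0f );
    return ( 1.0f - t ) * ( 1.0f - t );
}
```

The same substitution can of course be done on the GPU side instead, using the interpolated alpha in the fragment shader.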

1.3 Framebuffer


A single texture attached to the framebuffer is enough. The GL_RGBA16F format lets us store values outside [0.0; 1.0] with half-precision floating-point accuracy.
A bit of pseudo code
    GLuint textureId;
    GLuint frameBufferObject;
    // the texture; width and height are the window dimensions
    glGenTextures( 1, &textureId );
    glBindTexture( GL_TEXTURE_2D, textureId );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glBindTexture( GL_TEXTURE_2D, 0 );
    // the framebuffer
    glGenFramebuffers( 1, &frameBufferObject );
    glBindFramebuffer( GL_FRAMEBUFFER, frameBufferObject );
    // attach the texture to the framebuffer
    glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0 );
    // and just in case something went wrong...
    if( glCheckFramebufferStatus( GL_FRAMEBUFFER ) != GL_FRAMEBUFFER_COMPLETE ) {
      ...
    }
    // restore the default render target
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    ...
    


We bind the buffer, set additive blending with glBlendFunc( GL_ONE, GL_ONE ), and "draw" the illuminated areas; the alpha channel thus accumulates the degree of illumination. You can also add global lighting by drawing a quad over the entire window.
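Why the float texture matters here can be shown with a tiny CPU model (my illustration, not the article's code) of what glBlendFunc( GL_ONE, GL_ONE ) does to the accumulated value:

```cpp
#include <algorithm>
#include <cassert>

// CPU model of additive blending, src + dst, as GL_ONE/GL_ONE performs it.
// A GL_RGBA16F attachment keeps the raw sum, so overexposure survives...
float blendAdditiveFloat( float dst, float src ) {
    return dst + src; // float render target: no clamping
}

// ...while a plain 8-bit GL_RGBA target clamps every write to [0; 1].
float blendAdditiveByte( float dst, float src ) {
    return std::min( dst + src, 1.0f );
}
```

Three overlapping lights with alpha 0.6 each accumulate to 1.8 in the RGBA16F light map, which is exactly the overexposure asked for in the requirements list; an 8-bit target would flatten them all to 1.0.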

1.4 Shaders


The vertex shader for rendering the light sources' rays is standard, taking the camera position into account; in the fragment shader we accumulate color weighted by brightness:
    layout(location = 0) out vec4 fragData;
    in vec4 vColor;
    ...
    void main() {
      fragData = vColor * vColor.a;
    }
  

In the end, we should get something like this:


2. Render the scene into texture


We need to render the scene into a separate texture: create another framebuffer, attach a regular GL_RGBA texture, and render as usual.
Suppose we have this scene from the well-known platformer:


3. Combining the lighting map with the scene


The fragment shader should look something like this:
    uniform sampler2D texture0;
    uniform sampler2D texture1;
    ...
    vec4 color0 = texture( texture0, texCoords ); //sample the rendered scene
    vec4 color1 = texture( texture1, texCoords ); //sample the lighting map
    fragData0 = color0 * color1;
  

Nothing could be simpler. Before the multiplication, a small constant can be added to the scene color color0, in case the game's setting is extremely dark and the rays of light still need to be visible:
fragData0 = ( color0 + vec4( 0.05, 0.05, 0.05, 0.0 ) ) * color1;

And then ...

If the character is not described by simple geometry, its shadow will be very, very wrong. Shadows are built from geometry, so a sprite character casts a shadow as if it were a square (hmm, I wonder what considerations made Meat Boy square?). So should sprite textures be drawn as square as possible, leaving as few transparent areas around the edges as possible? That is one option. Should the character's geometry be described in more detail, with smoothed corners? But surely not separate geometry for every frame of the animation. Suppose you have smoothed the corners and the character is now almost an ellipse; in a completely dark scene such a shadow is still jarring. After adding blur to the lighting map plus some global illumination, the picture becomes more acceptable:
    vec2 offset = oneByWindowCoeff.xy * 1.5; //blur strength
    fragData = (
      texture( texture1, texCoords )
      + texture( texture1, vec2( texCoords.x - offset.x, texCoords.y - offset.y ) )
      + texture( texture1, vec2( texCoords.x, texCoords.y - offset.y ) )
      + texture( texture1, vec2( texCoords.x + offset.x, texCoords.y - offset.y ) )
      + texture( texture1, vec2( texCoords.x - offset.x, texCoords.y ) )
      + texture( texture1, vec2( texCoords.x + offset.x, texCoords.y ) )
      + texture( texture1, vec2( texCoords.x - offset.x, texCoords.y + offset.y ) )
      + texture( texture1, vec2( texCoords.x, texCoords.y + offset.y ) )
      + texture( texture1, vec2( texCoords.x + offset.x, texCoords.y + offset.y ) )
      ) / 9.0;
  

where oneByWindowCoeff is the coefficient that converts pixel coordinates to texel coordinates.
In the absence of global illumination, it may be better to disable shadows for such "characters" altogether, or to make them glow (the ideal option, in my opinion), or to go to the trouble of describing the object's geometry for every animation.

I recorded a small demonstration of what came out of all these musings and refinements:


4. Optimization


As the saying goes, "first make it work, then make it fast." The initial code was sketched out quickly and roughly, so there was plenty of room for optimization. The first thing that came to mind was to get rid of the excessive number of polygons drawing the illuminated areas: if there are no obstacles within the light source's radius, there is no point in drawing 1000+ polygons, since we don't need such a perfect circle and the eye simply can't tell the difference (or maybe my monitor is just too dirty).
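One possible sketch of that reduction (hypothetical code, names are mine): keep a fan vertex only where the stored distance changes, plus every k-th cell on flat runs so long unobstructed arcs are still approximated by a few segments:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Indices of depth-buffer cells worth turning into fan vertices.
// Cells are kept where the stored distance changes noticeably; on flat
// runs only every `coarseStep`-th cell survives, so an unobstructed
// circle needs far fewer vertices than the buffer size.
std::vector<std::size_t> reduceFan( const std::vector<float>& buffer,
                                    std::size_t coarseStep,
                                    float epsilon ) {
    std::vector<std::size_t> kept;
    for ( std::size_t i = 0; i < buffer.size(); ++i ) {
        bool changed = ( i == 0 )
            || std::fabs( buffer[ i ] - buffer[ i - 1 ] ) > epsilon;
        if ( changed || i % coarseStep == 0 )
            kept.push_back( i );
    }
    return kept;
}
```

With a 1024-cell buffer and a step of 32, an obstacle-free light collapses from 1024 fan vertices to 32, while any cell where an object actually cuts into the light is preserved exactly.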
For example, for a depth buffer of size 1024, without optimization:

and with optimization:

For scenes with many static objects, you can cache the results of projecting objects into the buffer, which gives a good speedup, since the number of cosines, square roots, and other expensive math is reduced. Accordingly, for each buffer we keep a list of pointers to objects, check whether any parameters affecting their position or shape have changed, and then either copy the cache straight into the buffer or recompute the object from scratch.
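A minimal sketch of such a cache, assuming a per-object revision counter that is bumped whenever position or shape changes (all names are mine, and the expensive projection itself is stubbed out):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct StaticObject {
    std::uint64_t revision = 0;   // bumped on any move or reshape
};

// Cached projection of one object into one light's depth buffer.
struct CachedProjection {
    std::uint64_t revision = UINT64_MAX;  // "never cached" sentinel
    std::vector<float> cells;             // cached depths for a run of cells
    std::size_t firstIndex = 0;           // starting buffer index of the run
};

// Merge an object's projection into the depth buffer, redoing the
// expensive trig/intersection work only when the object has changed.
void applyProjection( const StaticObject& obj, CachedProjection& cache,
                      std::vector<float>& buffer, std::uint64_t& recomputes ) {
    if ( cache.revision != obj.revision ) {
        ++recomputes;  // here the real code would refill cache.cells
        cache.revision = obj.revision;
    }
    for ( std::size_t i = 0; i < cache.cells.size(); ++i ) {
        std::size_t idx = ( cache.firstIndex + i ) % buffer.size();
        if ( cache.cells[ i ] < buffer[ idx ] ) buffer[ idx ] = cache.cells[ i ];
    }
}
```

The cheap path is a plain min-merge of the cached run into the buffer, so a scene full of motionless walls pays for the trigonometry only once.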

5. Conclusion


This lighting technique does not claim to be optimal, fast, or accurate; the goal was the implementation itself. There are other techniques, such as building only the shadows (with lighting, as I understand it, layered on separately), soft-shadow variants with lots of calculations, or even one extremely entertaining approach I found only while writing this article (its logic is broadly similar to what I used).
In general, what was planned was realized: objects cast shadows, the necessary oppressive atmosphere was created, and in my opinion the picture became more pleasant.
