256 lines of bare C++: writing a ray tracer from scratch in a few hours

I am publishing the next chapter of my lecture course on computer graphics (here you can read the original in Russian, although the English version is newer). This time the topic is rendering scenes with ray tracing. As usual, I avoid third-party libraries, since this forces students to look under the hood.

There is already a sea of such projects on the Internet, but almost all of them present complete programs that are extremely difficult to understand. Take, for example, the famous renderer that fits on a business card: a very impressive result, but the code is very hard to follow. My goal is not to show off, but to explain in detail how to reproduce something similar. Moreover, it seems to me that this particular lecture is useful not so much as material on computer graphics, but as a programming manual: I will show, step by step, how to arrive at the final result starting from scratch, decomposing a complex task into elementary, solvable steps.

Warning: merely skimming my code, or reading this article with a cup of tea in hand, makes no sense. This article is designed for you to take up the keyboard and write your own engine. It will certainly be better than mine. Or, at the very least, switch the programming language!

So, today I will show how to draw such pictures:

Stage one: save the image to disk

I don't want to bother with window managers, mouse/keyboard handling, and the like. The result of our program will be a simple image saved to disk. So, the first thing we need is the ability to save an image to disk. Here is the code that does it; let me quote the essential part of the main file:

    const int width    = 1024;
    const int height   = 768;
    std::vector<Vec3f> framebuffer(width*height);
    for (size_t j = 0; j<height; j++) {
        for (size_t i = 0; i<width; i++) {
            framebuffer[i+j*width] = Vec3f(j/float(height), i/float(width), 0);
        }
    }

    std::ofstream ofs("./out.ppm", std::ios::binary); // save the framebuffer to file
    ofs << "P6\n" << width << " " << height << "\n255\n";
    for (size_t i = 0; i < height*width; ++i) {
        for (size_t j = 0; j<3; j++) {
            ofs << (char)(255 * std::max(0.f, std::min(1.f, framebuffer[i][j])));
        }
    }

In the main function only render() is called, nothing else. What is inside render()? First of all, I define the picture as a one-dimensional array called framebuffer, with elements of type Vec3f: simple three-dimensional vectors holding the (r, g, b) color of each pixel.

The vector class lives in the geometry.h file; I will not describe it here. First, everything in it is trivial: simple manipulation of two- and three-dimensional vectors (addition, subtraction, assignment, multiplication by a scalar, dot product). Second, gbg has already described it in detail as part of a course on computer graphics.

I save the picture in the ppm format; it is the simplest way to save images, although not always the most convenient for viewing afterwards. If you want to save in other formats, I recommend linking a third-party library such as stb. It is an excellent library: including the single header file stb_image_write.h in the project lets you save in png or jpg.

So, the goal of this stage is to make sure that we can a) create a picture in memory and write different color values into it, and b) save the result to disk so that it can be viewed in a third-party program. Here is the result:
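For reference, the saving snippet above can be assembled into a tiny self-contained sketch. This is my own rearrangement, not the article's repository code: I use a flat array of floats instead of Vec3f so that it compiles on its own, and the file name is an arbitrary choice.

```cpp
#include <algorithm>
#include <fstream>
#include <vector>

// Clamp a [0,1] float channel to a single output byte.
unsigned char to_byte(float c) {
    return static_cast<unsigned char>(255 * std::max(0.f, std::min(1.f, c)));
}

// Write an RGB framebuffer (3 floats per pixel, row-major) as a binary PPM (P6).
void write_ppm(const char *path, const std::vector<float> &rgb, int width, int height) {
    std::ofstream ofs(path, std::ios::binary);
    ofs << "P6\n" << width << " " << height << "\n255\n";
    for (float c : rgb) ofs << to_byte(c);
}
```

The clamping in to_byte is the part worth testing: out-of-range channel values must saturate rather than wrap around when cast to a byte.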

Stage two, the most difficult: direct ray tracing

This is the most important and difficult stage of the whole chain. I want to define one sphere in my code and draw it, without bothering with materials or lighting yet. This is how our result should look:

For convenience, my repository contains one commit per stage; GitHub makes it very convenient to view the changes. For example, here is what changed in the second commit compared to the first.

To begin with: what do we need to represent a sphere in the computer's memory? Four numbers are enough: a three-dimensional vector for the center of the sphere and a scalar for the radius:

struct Sphere {
    Vec3f center;
    float radius;

    Sphere(const Vec3f &c, const float &r) : center(c), radius(r) {}

    bool ray_intersect(const Vec3f &orig, const Vec3f &dir, float &t0) const {
        Vec3f L = center - orig;
        float tca = L*dir;
        float d2 = L*L - tca*tca;
        if (d2 > radius*radius) return false;
        float thc = sqrtf(radius*radius - d2);
        t0       = tca - thc;
        float t1 = tca + thc;
        if (t0 < 0) t0 = t1;
        if (t0 < 0) return false;
        return true;
    }
};

The only non-trivial thing in this code is the function that checks whether a given ray (originating at orig and going in the direction dir) intersects our sphere. A detailed description of the ray-sphere intersection algorithm can be found here; I highly recommend reading it and checking my code against it.
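Here is the same geometric test as a standalone function on bare coordinates, with a worked numeric example: a unit sphere centered at (0,0,-5), hit head-on by a ray from the origin, must give a hit distance of 4. The function name and scalar form are my own; the logic mirrors ray_intersect above.

```cpp
#include <cmath>

// Geometric ray-sphere test; writes the nearest positive hit distance to t0,
// or returns false if the ray misses the sphere entirely.
bool ray_sphere(float ox, float oy, float oz,   // ray origin
                float dx, float dy, float dz,   // ray direction (unit length)
                float cx, float cy, float cz,   // sphere center
                float r, float &t0) {
    float Lx = cx-ox, Ly = cy-oy, Lz = cz-oz;    // vector origin -> center
    float tca = Lx*dx + Ly*dy + Lz*dz;           // projection of L onto the ray
    float d2  = Lx*Lx + Ly*Ly + Lz*Lz - tca*tca; // squared distance center -> ray
    if (d2 > r*r) return false;                  // ray passes too far away
    float thc = sqrtf(r*r - d2);                 // half-chord length
    t0 = tca - thc;
    float t1 = tca + thc;
    if (t0 < 0) t0 = t1;                         // origin is inside the sphere
    return t0 >= 0;
}
```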

How does ray tracing work? Very simply. At the first stage we just filled the picture with a gradient:

for (size_t j = 0; j<height; j++) {
    for (size_t i = 0; i<width; i++) {
        framebuffer[i+j*width] = Vec3f(j/float(height), i/float(width), 0);
    }
}

Now, for each pixel, we will form a ray starting at the origin and passing through the pixel, and check whether this ray intersects our sphere.

If there is no intersection with the sphere, we paint the pixel with color1, otherwise with color2:

Vec3f cast_ray(const Vec3f &orig, const Vec3f &dir, const Sphere &sphere) {
    float sphere_dist = std::numeric_limits<float>::max();
    if (!sphere.ray_intersect(orig, dir, sphere_dist)) {
        return Vec3f(0.2, 0.7, 0.8); // background color
    }
    return Vec3f(0.4, 0.4, 0.3);
}

void render(const Sphere &sphere) {
    for (size_t j = 0; j<height; j++) {
        for (size_t i = 0; i<width; i++) {
            float x =  (2*(i + 0.5)/(float)width  - 1)*tan(fov/2.)*width/(float)height;
            float y = -(2*(j + 0.5)/(float)height - 1)*tan(fov/2.);
            Vec3f dir = Vec3f(x, y, -1).normalize();
            framebuffer[i+j*width] = cast_ray(Vec3f(0,0,0), dir, sphere);
        }
    }
}

At this point I recommend taking a pencil and checking all the calculations on paper, both the ray-sphere intersection and the way the picture is swept by rays. Just in case, our camera is defined by the following things:

  • picture width
  • picture height
  • viewing angle, fov
  • camera location, Vec3f (0,0,0)
  • gaze direction, along the z axis, in the direction of minus infinity
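The formulas for x and y in render() can be sanity-checked in isolation. A sketch (the helper name is mine): a generated direction must have unit length, must point along negative z, and a pixel mirrored about the vertical screen axis must get a mirrored x component.

```cpp
#include <cmath>

// Map pixel (i, j) on a width x height screen with vertical field of view
// `fov` (radians) to a camera-space ray direction; the camera sits at the
// origin looking along -z. The returned components form a unit vector.
void pixel_to_dir(int i, int j, int width, int height, float fov,
                  float &dx, float &dy, float &dz) {
    float x =  (2*(i + 0.5f)/(float)width  - 1)*tanf(fov/2.f)*width/(float)height;
    float y = -(2*(j + 0.5f)/(float)height - 1)*tanf(fov/2.f);
    float z = -1;
    float len = sqrtf(x*x + y*y + z*z);
    dx = x/len; dy = y/len; dz = z/len;
}
```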

Stage three: add more spheres

The hardest part is over; from here on the path is clear. If we can draw one sphere, then adding a few more is obviously no trouble at all. Look at the changes in the code; here is the result:

Stage Four: Lighting

Our picture is fine in every respect, except that it has no lighting. The entire rest of the article is about exactly that. Let's add a few point light sources:

struct Light {
    Light(const Vec3f &p, const float &i) : position(p), intensity(i) {}
    Vec3f position;
    float intensity;
};

Simulating real lighting is a very, very difficult task, so, like everyone else, we will fool the eye by drawing completely non-physical but reasonably believable results. First observation: why is it cold in winter and hot in summer? Because the heating of the earth's surface depends on the angle of the sunlight. The higher the sun above the horizon, the brighter the surface is illuminated; the lower it is, the dimmer. And after the sun sets below the horizon, the photons do not reach us at all.

Back to our spheres: our ray, emitted from the camera (no relation to photons, note!), has intersected a sphere. How do we tell how well the intersection point is lit? We can simply look at the angle between the normal vector at this point and the vector describing the direction to the light: the smaller the angle, the better the surface is lit. Even more conveniently, we can just take the dot product of the normal vector and the light vector. Recall that the dot product of two vectors a and b equals the product of the norms of the vectors times the cosine of the angle between them: a·b = |a| |b| cos(alpha(a, b)). If we take vectors of unit length, the plain dot product gives us exactly the intensity of illumination of the surface.
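In code, this whole paragraph boils down to one clamped dot product. A tiny sketch (the function name is mine) with a worked check: a light shining at 60 degrees to the normal illuminates the surface at exactly half strength (cos 60° = 0.5), and a light below the horizon contributes nothing.

```cpp
#include <algorithm>
#include <cmath>

// Lambertian diffuse term: brightness contribution of one light, given the
// unit surface normal n and the unit direction l from the point to the light.
float lambert(float nx, float ny, float nz,
              float lx, float ly, float lz) {
    // Dot product of unit vectors = cosine of the angle; clamp negatives,
    // since a light behind the surface cannot illuminate it.
    return std::max(0.f, nx*lx + ny*ly + nz*lz);
}
```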

Thus, in the cast_ray function, instead of a constant color we will return a color that takes the light sources into account:

Vec3f cast_ray(const Vec3f &orig, const Vec3f &dir, const Sphere &sphere) {
    // [...] point, N and material come from the intersection test, as before
    float diffuse_light_intensity = 0;
    for (size_t i=0; i<lights.size(); i++) {
        Vec3f light_dir = (lights[i].position - point).normalize();
        diffuse_light_intensity += lights[i].intensity * std::max(0.f, light_dir*N);
    }
    return material.diffuse_color * diffuse_light_intensity;
}

See the changes here, and here is the resulting image:

Stage Five: shiny surfaces

The trick with the dot product between the normal vector and the light vector approximates the illumination of matte surfaces well; in the literature it is called diffuse illumination. What if we want smooth, shiny surfaces? I want to get this picture:

See how few changes were needed. In short: the highlight on a glossy surface is brighter the smaller the angle between the direction of gaze and the direction of the reflected light. And angles, of course, we will again measure via the dot product, exactly as before.

This juggling of matte and glossy surface lighting is known as the Phong reflection model. The wiki has a fairly detailed description of this lighting model; it reads well when compared side by side with my code. Here is the key picture for understanding it:
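The only new primitive the glossy term needs is the mirror reflection of a vector about the normal, the textbook formula R = I - 2(I·N)N. A scalar sketch (my own helper) with an exact check: the direction (1,-1,0) bouncing off a floor with normal (0,1,0) must come out as (1,1,0).

```cpp
// Reflect incident direction I about unit normal N: R = I - 2(I.N)N.
void reflect(float ix, float iy, float iz,
             float nx, float ny, float nz,
             float &rx, float &ry, float &rz) {
    float d = ix*nx + iy*ny + iz*nz; // I.N, the signed length of I along N
    rx = ix - 2*d*nx;
    ry = iy - 2*d*ny;
    rz = iz - 2*d*nz;
}
```

The specular highlight is then a power of the (clamped) dot product between this reflected light direction and the gaze direction; the exponent controls how tight the highlight is.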

Stage Six: Shadows

And why do we have light but no shadows? That won't do! I want this picture:

Only six lines of code are needed to achieve this: when shading each point, we simply check whether the segment from the point to the light source intersects any objects of our scene, and if it does, we skip the current light source. There is only one small subtlety: I shift the point slightly along the normal:

Vec3f shadow_orig = light_dir*N < 0 ? point - N*1e-3 : point + N*1e-3;

Why? Simply because our point lies on the surface of the object, and (numerical errors aside) any ray cast from this point would intersect that very object.
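The one-liner above can be pulled out into a helper to make the intent explicit, a sketch with names of my choosing: nudge the shadow-ray origin off the surface, on whichever side of it the ray leaves from, so the ray cannot re-hit its own surface due to rounding.

```cpp
// Nudge a surface point off the surface along the normal to avoid
// self-intersection of the shadow ray caused by floating-point rounding.
// light_dot_n is dot(light_dir, N) at the point: its sign tells which side
// of the surface the shadow ray departs from.
void shadow_origin(float px, float py, float pz,
                   float nx, float ny, float nz,
                   float light_dot_n,
                   float &ox, float &oy, float &oz) {
    float s = light_dot_n < 0 ? -1e-3f : 1e-3f;
    ox = px + s*nx; oy = py + s*ny; oz = pz + s*nz;
}
```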

Stage Seven: Reflections

This is unbelievable, but to add reflections to our scene, we only need to add three lines of code:

    Vec3f reflect_dir = reflect(dir, N).normalize();
    Vec3f reflect_orig = reflect_dir*N < 0 ? point - N*1e-3 : point + N*1e-3; // offset the original point to avoid occlusion by the object itself
    Vec3f reflect_color = cast_ray(reflect_orig, reflect_dir, spheres, lights, depth + 1);

Convince yourself: when a ray hits an object, we simply compute the reflected ray (the reflect() function from the specular-highlight computation came in handy!) and recursively call cast_ray in the direction of the reflected ray. Be sure to play with the recursion depth; I set it to four. Start from zero: what changes in the picture? Here is my result with working reflections and a depth of four:

Stage Eight: Refraction

Having learned to compute reflections, refraction is computed in exactly the same way. One function calculates the direction of the refracted ray (according to Snell's law), plus three lines of code in our recursive cast_ray function. Here is the result, in which the nearest ball became "glass": it refracts and reflects a little:
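For reference, a Snell's-law refraction function might look like the sketch below. This is my scalar rendition, not the repository code verbatim: it assumes air (index 1) outside the object, flips the normal when the ray exits the material, and signals total internal reflection by returning false; how you handle that case in cast_ray is up to you.

```cpp
#include <algorithm>
#include <cmath>

// Refract unit incident direction I through a surface with unit normal N and
// refractive index `ior`, per Snell's law. Returns false on total internal
// reflection, leaving the output untouched.
bool refract(float ix, float iy, float iz,
             float nx, float ny, float nz,
             float ior, float &tx, float &ty, float &tz) {
    float cosi = -std::max(-1.f, std::min(1.f, ix*nx + iy*ny + iz*nz));
    float etai = 1, etat = ior;
    if (cosi < 0) {                 // the ray is leaving the object:
        cosi = -cosi;               // flip the cosine, the indices, and
        std::swap(etai, etat);      // the normal
        nx = -nx; ny = -ny; nz = -nz;
    }
    float eta = etai/etat;
    float k = 1 - eta*eta*(1 - cosi*cosi);
    if (k < 0) return false;        // total internal reflection
    float s = eta*cosi - sqrtf(k);
    tx = ix*eta + nx*s; ty = iy*eta + ny*s; tz = iz*eta + nz*s;
    return true;
}
```

A quick sanity check: a ray at normal incidence must pass straight through regardless of the index, and a sufficiently grazing ray trying to leave glass (index 1.5) must be totally internally reflected.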

Stage nine: add more objects

Why have we limited ourselves to spheres all this time? Up to this point we have rendered only spheres because they are one of the simplest non-trivial mathematical objects. Let's add a piece of a plane; the classic of the genre is a checkerboard. For this we need about a dozen lines in the function that computes the intersection of a ray with the scene.
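Those dozen lines might look like the sketch below: intersect the ray with a horizontal plane, clip to a rectangle, and pick the cell color by checkerboard parity. The plane height y = -4, the rectangle bounds, and the cell size are my illustrative choices.

```cpp
#include <cmath>

// Intersect a ray with the horizontal plane y = -4, clipped to a rectangle;
// on a hit, report the distance t and the checkerboard parity of the cell.
bool checkerboard_hit(float ox, float oy, float oz,   // ray origin
                      float dx, float dy, float dz,   // ray direction
                      float &t, bool &white_cell) {
    if (fabsf(dy) < 1e-3f) return false;  // ray (almost) parallel to the plane
    t = -(oy + 4)/dy;                     // solve oy + t*dy = -4
    float px = ox + t*dx, pz = oz + t*dz;
    if (t <= 0 || fabsf(px) > 10 || pz > -10 || pz < -30) return false;
    // Flip the color every two units in x and z; the +1000 offset keeps the
    // integer truncation consistent across zero.
    white_cell = ((int)(0.5f*px + 1000) + (int)(0.5f*pz)) & 1;
    return true;
}
```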

Well, here's the result:

As promised, exactly 256 lines of code; count them yourself!

Stage ten: homework

We have come quite a long way: we learned to add objects to the scene and to compute fairly complex lighting. Let me leave two tasks as homework. Absolutely all the preparatory work has already been done in the homework_assignment branch. Each task requires at most ten lines of code.

Task one: Environment map

At the moment, if a ray does not hit the scene, we simply assign it a constant color. But why constant? Let's take a spherical panorama photo (the file envmap.jpg) and use it as the background! To make life easier, I linked our project with the stb library for convenient work with jpegs. You should get a render like this:
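A hint for this task: a unit direction vector turns into texture coordinates on an equirectangular panorama via atan2 and acos. A sketch, with u and v in [0,1); the orientation conventions (which axis is "up", where longitude zero sits) are my assumption and may need flipping to match envmap.jpg.

```cpp
#include <cmath>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Map a unit direction (y up) to equirectangular texture coordinates in [0,1).
void dir_to_uv(float dx, float dy, float dz, float &u, float &v) {
    u = atan2f(dz, dx)/(2*(float)M_PI) + 0.5f; // longitude around the y axis
    v = acosf(dy)/(float)M_PI;                 // latitude down from the pole
}
```

Multiply u and v by the image width and height to get the pixel to sample when the ray misses every object.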

Task two: quack!

We can render both spheres and planes (see the checkerboard). So let's add rendering of triangulated models! I wrote code that loads a triangle mesh and added a ray-triangle intersection function to it. Adding a duckling to our scene should now be quite trivial!
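If you want to write the ray-triangle test yourself before peeking at my code, the classic Möller-Trumbore algorithm is a good fit. Here is a scalar sketch (I make no claim it matches my repository's function line for line); the worked check shoots a ray from the origin at a triangle in the z = -5 plane and expects a hit at distance 5.

```cpp
#include <cmath>

// Moller-Trumbore ray-triangle intersection: returns true and the hit
// distance t if the ray (orig, dir) crosses the triangle (v0, v1, v2).
bool ray_triangle(const float orig[3], const float dir[3],
                  const float v0[3], const float v1[3], const float v2[3],
                  float &t) {
    float e1[3], e2[3], p[3], tv[3], q[3];
    for (int k = 0; k < 3; k++) { e1[k] = v1[k]-v0[k]; e2[k] = v2[k]-v0[k]; }
    // p = dir x e2
    p[0] = dir[1]*e2[2] - dir[2]*e2[1];
    p[1] = dir[2]*e2[0] - dir[0]*e2[2];
    p[2] = dir[0]*e2[1] - dir[1]*e2[0];
    float det = e1[0]*p[0] + e1[1]*p[1] + e1[2]*p[2];
    if (fabsf(det) < 1e-8f) return false;  // ray parallel to the triangle
    float inv = 1.f/det;
    for (int k = 0; k < 3; k++) tv[k] = orig[k] - v0[k];
    float u = (tv[0]*p[0] + tv[1]*p[1] + tv[2]*p[2])*inv;
    if (u < 0 || u > 1) return false;      // outside the first barycentric edge
    // q = tv x e1
    q[0] = tv[1]*e1[2] - tv[2]*e1[1];
    q[1] = tv[2]*e1[0] - tv[0]*e1[2];
    q[2] = tv[0]*e1[1] - tv[1]*e1[0];
    float v = (dir[0]*q[0] + dir[1]*q[1] + dir[2]*q[2])*inv;
    if (v < 0 || u + v > 1) return false;  // outside the remaining edges
    t = (e2[0]*q[0] + e2[1]*q[1] + e2[2]*q[2])*inv;
    return t > 1e-8f;                      // hit must be in front of the origin
}
```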


My main goal is to show projects that are interesting (and easy!) to program, and I really hope I succeed. This matters to me, as I am convinced that a programmer must write a lot, and with taste. I don't know about you, but personally, accounting software and Minesweeper clones, despite quite comparable code complexity, do not appeal to me at all.

Two hundred and fifty-odd lines of ray tracing can really be written in a few hours. Five hundred lines of a software rasterizer can be mastered in a few days. Next time we will take apart raycasting, and along the way I will show the simplest games that my first-year students write as part of their C++ programming training. Stay tuned!
