Depth of field in computer graphics
Unlike the human eye, a computer renders the entire scene in focus. Both the camera and the eye have a limited depth of field, because the aperture of the pupil or lens has a finite diameter. To achieve greater photorealism, computer-generated images should therefore reproduce the depth-of-field effect. Controlling the depth of field also helps convey the author's artistic intent by highlighting the object that carries the meaning.
Until now, the task of displaying a realistic depth of field in computer graphics has not been completely solved. There are many solutions with pros and cons, applicable in different cases. We will consider the most popular at the moment.
In optics
Light refracted by a lens forms an image on a photosensitive element: film, a sensor, the retina. For enough light to enter the camera, the entrance pupil (the diameter of the light beam at the entrance to the optical system) must be sufficiently large. Rays from a single point in space converge at exactly one point behind the lens, but that point does not necessarily lie on the chosen picture plane (the sensor plane). Images therefore have a limited depth of field: an object appears blurrier the more its distance from the lens differs from the focus distance. As a result, a point at a given distance is seen as a blurred spot: the circle of confusion (CoC). The blur radius is calculated according to a definite law.
Determination of blur diameter (see Wikipedia for more details).
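For reference, a standard thin-lens form of this law (notation as in the Wikipedia article: A is the aperture diameter, f the focal length, S1 the distance at which the lens is focused, S2 the distance to the point) gives the diameter of the blur spot as

\[ c \;=\; A \, \frac{|S_2 - S_1|}{S_2} \cdot \frac{f}{S_1 - f}. \]

The spot grows with the aperture and with the distance of the point from the plane of focus, which is exactly the behavior the methods below try to reproduce.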
Renderers by default use the pinhole camera model, in which the entrance aperture tends to zero and therefore all objects are in focus. Simulating an aperture of finite size, and hence depth of field, requires additional effort.
A point in the scene is projected onto the picture plane as a scattering spot.
General review
Ways to implement depth of field can be divided into two large groups: object space methods and image space methods.

Object space methods work with a 3D representation of scene objects and are thus applied during rendering. Image space methods, also known as postprocess methods, operate on raster images obtained using the standard pinhole camera model (fully in focus). To achieve the effect of depth of field, these methods blur areas of the image, guided by a depth map. In general, object space methods are capable of producing a more physically accurate result and have fewer artifacts than image space methods, while image space methods are much faster.
Object space methods are based either on geometric optics or on wave optics. Most applications use geometric optics, which suffices for the vast majority of purposes. In defocused images, however, diffraction and interference can play an important role; taking them into account requires the laws of wave optics.
Image space methods can be divided into those applied to generated images and those applied in digital photography. Traditional post-processing techniques require a depth map, which stores the distance of each image point from the camera, but such a map is difficult to obtain for photographs. An interesting alternative is the light field technique, which allows out-of-focus objects to be blurred without a depth map. Its disadvantage is that it requires special equipment, but the resulting images have no restrictions on scene complexity.
Object space approaches
Distributed ray tracing
The method directly simulates geometric optics. Instead of tracing one ray per sample (the original paper says per pixel, but that seems inappropriate here: the number of rays computed depends on the AA settings and rarely equals one per pixel), which simulates a pinhole camera, several rays must be chosen to approximate the image formed by a camera with a finite aperture. The rays of each sample start from a single point on the picture plane but are directed at different parts of the lens; after refraction in the lens, each ray is traced into the scene.

This description shows that the image is formed according to the physical laws of optics (excluding wave optics). Images obtained this way are therefore quite realistic and are considered the "gold standard" against which post-processing methods can be checked. The drawback is obvious: for each sample, enough rays must be computed to obtain high-quality blur, so rendering time grows accordingly. A shallow depth of field can require hundreds or thousands of times longer rendering, and with too few extra rays the blurred areas become noisy.
The method is implemented in the mia_lens_bokeh shader:
// shader parameters
struct depth_of_field {
    miScalar  focus_plane_distance;
    miScalar  blur_radius;
    miInteger number_of_samples;
};

miBoolean depth_of_field(
    miColor *result, miState *state, struct depth_of_field *params)
{
    // fetch the parameters
    miScalar focus_plane_distance = *mi_eval_scalar(&params->focus_plane_distance);
    miScalar blur_radius = *mi_eval_scalar(&params->blur_radius);
    miUint number_of_samples = *mi_eval_integer(&params->number_of_samples);
    miVector camera_origin, camera_direction, origin, direction, focus_point;
    double samples[2], focus_plane_z;
    int sample_number = 0;
    miColor sum = {0,0,0,0}, single_trace;
    // switch to camera space
    miaux_to_camera_space(state, &camera_origin, &camera_direction);
    // find the intersection with the focus plane
    focus_plane_z = state->org.z - focus_plane_distance;
    miaux_z_plane_intersect(&focus_point, &camera_origin, &camera_direction, focus_plane_z);
    // take the requested number of samples
    while (mi_sample(samples, &sample_number, state, 2, &number_of_samples)) {
        // pick a point on the lens within the blur radius
        miaux_sample_point_within_radius(&origin, &camera_origin, samples[0], samples[1], blur_radius);
        mi_vector_sub(&direction, &focus_point, &origin);
        mi_vector_normalize(&direction);
        miaux_from_camera_space(state, &origin, &direction);
        mi_trace_eye(&single_trace, state, &origin, &direction);
        miaux_add_color(&sum, &single_trace);
    }
    // normalize the result
    miaux_divide_color(result, &sum, number_of_samples);
    return miTRUE;
}
The result of applying the shader (code and picture from the mental ray manual).
Realistic camera models
In the previous method, refraction in the lens was computed according to a single law. In reality this is not so: a photographic lens consists of groups of lens elements with different properties.
Lens groups in the lens (Pat Hanrahan picture).
The optical specifications published by lens manufacturers are implemented faithfully as a mathematical model. The model simulates the groups of lens elements and also models the exit pupil (within which the renderer emits rays for one sample). Rays entering the exit pupil are computed taking into account the optical properties of the lens groups they pass through.
The method physically correctly simulates both the depth of field and the distortions introduced by the lens.
Lenses with different focal lengths: as the focal length and lens model change, the perspective changes and distortions may appear, as in the top picture (Pat Hanrahan picture).
An example of a shader that implements a fisheye lens:
struct fisheye {
    miColor outside_color;
};

miBoolean fisheye(miColor *result, miState *state, struct fisheye *params)
{
    miVector camera_direction;
    miScalar center_x = state->camera->x_resolution / 2.0;
    miScalar center_y = state->camera->y_resolution / 2.0;
    miScalar radius = center_x < center_y ? center_x : center_y;
    miScalar distance_from_center =
        miaux_distance(center_x, center_y, state->raster_x, state->raster_y);
    if (distance_from_center < radius) {
        // bend the ray: the farther from the image center, the flatter it gets
        mi_vector_to_camera(state, &camera_direction, &state->dir);
        camera_direction.z *= miaux_fit(distance_from_center, 0, radius, 1, 0);
        mi_vector_normalize(&camera_direction);
        mi_vector_from_camera(state, &camera_direction, &camera_direction);
        return mi_trace_eye(result, state, &state->org, &camera_direction);
    } else {
        // outside the image circle: fill with a constant color
        *result = *mi_eval_color(&params->outside_color);
        return miTRUE;
    }
}
The result of applying the shader (code and picture from the mental ray manual).
Accumulation buffer
An accumulation buffer can also be used to achieve the depth-of-field effect. Several frames are rendered and then averaged to produce the desired image. The method is very similar to distributed ray tracing but faster, because the rendering is done in hardware. In distributed ray tracing, however, the number of samples can be controlled adaptively, so a picture of acceptable quality is obtained with fewer samples. The accumulation-buffer method is applicable only where the scene can be rendered in hardware.
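A minimal sketch of this approach using the legacy OpenGL accumulation buffer might look as follows. render_scene and look_at_from are assumed helper functions (the latter is taken to offset the eye inside the aperture while keeping the focal plane fixed), not part of any real API:

#include <GL/gl.h>
#include <stdlib.h>

#define N_VIEWS 16

extern void render_scene(void);               /* assumed: draws the scene */
extern void look_at_from(float ex, float ey); /* assumed: offsets the eye,
                                                 keeping the focal plane fixed */

void render_dof_frame(float aperture_radius)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < N_VIEWS; ++i) {
        /* jitter the eye within the lens aperture */
        float ex = aperture_radius * (2.0f * rand() / RAND_MAX - 1.0f);
        float ey = aperture_radius * (2.0f * rand() / RAND_MAX - 1.0f);
        look_at_from(ex, ey);               /* all views share one focus plane */
        render_scene();
        glAccum(GL_ACCUM, 1.0f / N_VIEWS);  /* add this frame, weighted */
    }
    glAccum(GL_RETURN, 1.0f);               /* write the average back */
}

Since the samples are fixed per frame rather than chosen adaptively, noise in strongly blurred areas turns into visible banding unless N_VIEWS is raised.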
Wave propagation simulation
All the methods discussed above use the laws of geometric optics, ignoring diffraction and interference. If the scene contains several point sources emitting light of a certain wavelength, the propagation of the light waves through space can be tracked. The picture plane is placed at some distance, and the value of each sample accounts for the contributions of all the waves emitted by the sources. The calculations can be performed in the frequency domain using the Fourier transform.
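For a scene of point sources with amplitudes A_s emitting monochromatic light of wavelength λ, the superposition at a point p of the picture plane can be written, in one common form, as

\[ U(p) \;=\; \sum_{s} \frac{A_s}{r_{sp}} \exp\!\left( i \, \frac{2\pi}{\lambda} \, r_{sp} \right), \qquad I(p) = |U(p)|^2, \]

where r_sp is the distance from source s to p; the observed intensity I is the squared magnitude of the summed complex amplitude, which is where interference enters.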
Scatter [Krivanek]
When rendering, the scene is represented not as a set of geometric primitives with textures but as a set of points. The points are scattered according to a certain law, most often Gaussian. For greater speed, the scattering uses a convolution with the point spread function (PSF); in the case of Gaussian blur, the PSF parameter is the standard deviation.

The resulting points are stored in a tree, and when a point is picked from a blurred area the search is performed within a certain radius. This allows fewer samples to be computed in the defocused areas of the image.
A rather strict limitation of the method is, naturally, the requirement that the scene be representable in this form.
Scatter image. In blurry areas, the sampling density is lower (picture by Jaroslav Krivanek).
Analytical visibility [Catmull]
Given a three-dimensional scene, it is possible to determine analytically which objects are out of focus. Fewer samples are taken for such objects, so they come out blurred. In contrast to distributed ray tracing, the method produces accurate images without noise.

Image-space approaches
An ideal post-processing method should have the following properties:
- Choosing a point spread function (PSF)
The type of blur depends on the PSF, which determines the scattering spot produced by a single point. Since this characteristic differs between optical systems, a good method should allow the PSF to be chosen (a small sketch of a selectable PSF follows this list).
Different PSFs produce different blur patterns.
- Pixel-based blur control
At each point in the image, the size and character of the scatter spot are different. Post-processing methods typically do not allow the character of the blur to vary with the position of the point, partly because they often rely on separable filters or the Fourier transform, which make such control hard to implement.
In the first image the PSF is the same everywhere; in the second it varies, which more accurately simulates the behavior of some lenses (the Helios-44, for example).
- Absence of intensity leakage artifacts
A blurred object in the background never bleeds across the boundary of an object in focus. Primitive linear filters may ignore this fact, and the resulting intensity leakage artifacts reduce the realism of the image.
In the image, the green figure is in focus, so the blur of the background object must not spread onto it.
- Absence of depth discontinuity artifacts
In reality, a blurred object in the foreground looks soft and has no visible hard outline. Filters often blur an object so that it has both blur and a sharp silhouette at once, which is wrong. This behavior comes from smoothing the color image while the depth map still changes stepwise at the object border (so pixels of mixed color appear on the object's edge and beyond it).
The result of applying different filters. Because the color image (beauty map) is smoothed while the depth map is not, such artifacts may occur.
- Correct simulation of partial occlusion
In reality, defocused foreground objects have smoothly blurred borders through which the objects behind them remain visible. The effect is called partial occlusion because the rear object is only partially blocked by the front one; those visible areas of the background object could not be seen through a pinhole camera. See the figure for a geometric explanation. Because post-processing methods work with images produced by a pinhole camera, simulating partial occlusion is difficult: the color of the invisible points must be extrapolated from the available data.
Partial occlusion of objects (picture Barsky).
- High performance
The performance of filters applied "directly" in image space (in the most straightforward implementation) drops as the blur radius grows; for large radii the process can take several minutes. Ideally the filter should run in real time, which is not always possible.
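As mentioned in the first item of the list, here is a minimal sketch of a selectable, per-pixel PSF; the enum, the radius convention, and the sigma choice are illustrative assumptions rather than anyone's published code:

#include <math.h>

typedef enum { PSF_GAUSSIAN, PSF_DISK } PsfType;

/* PSF weight at offset (dx, dy) from the pixel, for a spot of radius r;
   a gather filter would normalize by the sum of these weights */
static float psf_weight(PsfType type, float dx, float dy, float r)
{
    float d2 = dx * dx + dy * dy;
    switch (type) {
    case PSF_GAUSSIAN: {            /* soft bokeh: sigma tied to the CoC radius */
        float s2 = (r * r) / 4.0f;
        return expf(-d2 / (2.0f * s2));
    }
    case PSF_DISK:                  /* uniform circle: hard-edged bokeh */
        return d2 <= r * r ? 1.0f : 0.0f;
    }
    return 0.0f;
}

Because both the type and the radius r are plain per-call parameters, nothing prevents varying them per pixel, which is exactly what separable and FFT-based filters make difficult.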
Linear filtering [Potmesil and Chakravarty]
One of the first methods to obtain DoF in a post-processing phase. Depending on the depth of a point (taken from the depth map), the parameters of the blur function (PSF) change. The larger the radius of the PSF, the lower the filter performance. The filter can be expressed by the formula

\[ B(x, y) \;=\; \sum_{i,\,j} \mathrm{psf}_{i,j}(x, y)\; S(i, j), \]

where B is the blurred image, psf is the filter kernel, x and y are the coordinates in the output image, S is the original image, and i and j are the coordinates in the input image.
The PSF can, to some extent, account for optical effects such as diffraction and interference. Disadvantages of the method: intensity leakage and depth discontinuity artifacts.
Ray distribution buffer [Shinya]
The method takes the visibility of objects into account and thereby gets rid of intensity leakage. Instead of producing a blurred image directly, a buffer of the rays emanating from each point is created first. Such a buffer stores the possible coordinates at which light from the point can arrive, together with depth. After the ray distribution buffers of all points are computed, the average color value is calculated. The method handles object visibility quite correctly but requires more memory and computation than linear filtering. Note that the set of maps obtained by the RDB method is called a light field.
Layered DoF [Scofield]
The method is intended for a special arrangement of objects: they must be parallel to the picture plane. Objects are divided into layers, and the layers are blurred separately in the frequency domain (using the fast Fourier transform). The FFT allows PSFs of large radius to be used without affecting performance. The method has no intensity leakage and works very fast, but its scope is very limited.
Intersection and discretization [Barsky]
The restriction imposed by the previous method is very strict. Here the image is divided into layers, and the depth of each image sample is rounded to the depth of the nearest layer. The resulting image would have discretization artifacts in the form of stripes or hard boundaries along the lines where layers meet. This method solves the problem using the IDs of objects obtained by edge detection (or from an ObjectId map): if one object belongs to two layers, the layers are merged. Another problem of the method is partial occlusion; to blur objects in the background, approximation from the visible samples is used.
Black stripes are visible in the upper image: artifacts of applying layer-by-layer blur without ObjectId (Barsky picture).
Blur taking into account the features of the human eye (vision-realistic rendering) [Barsky]
The human eye is difficult to describe with an analytical model of several lens elements, as can be done for a camera lens. In this method, a device called a wavefront aberrometer measures a set of PSFs corresponding to a particular human eye. Layered blurring is then performed according to the resulting PSFs. The method makes it possible to obtain images as they are seen by people with vision diseases.
An image accounting for the eye of a person suffering from keratoconus (Barsky picture).
Importance ordering [Fearing]
The method works similarly to the antialiasing mechanism of renderers: first a low-resolution image is formed; then the samples around which the color change exceeds a threshold are processed in the next iteration, taking more samples of the original image for the final pixel, and so on. The method thus achieves better quality in less time.
Perceptual hybrid method [Mulder and van Liere]
Human perception of an image is such that details in the center are more important than details along the edges. The center of the image can therefore be blurred with a slower, more accurate method, while the periphery uses a fast blur approximation. For the fast blur a Gaussian pyramid is used, with the pyramid level selected according to pixel depth. The result has artifacts.
Repeated convolution [Rokita]
The method is intended for quick application in interactive settings. It works on hardware capable of efficiently performing convolution with a 3x3-pixel kernel. The convolution is applied several times, thereby achieving a larger blur. Performance drops as the blur radius increases, and there is a restriction on the PSF: it must be Gaussian.
GPU depth of field [Scheuermann and Tatarchuk]
Depth of field can also be computed on the GPU. One such method was proposed by Scheuermann and Tatarchuk. Given the pixel depth, the laws of optics determine the size of the scattering spot, and samples that form the resulting pixel color are taken within that spot. To save memory bandwidth, in areas of the image where the CoC radius is large, the samples are taken not from the input image but from a copy downscaled several times. To reduce intensity leakage artifacts, the depths of the samples are also taken into account. The method suffers from depth discontinuity artifacts.
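A rough CPU-style sketch of such a gather follows. All the helpers (sample_full, sample_low, sample_depth, coc_radius) and the tap offsets are illustrative assumptions, not the authors' code:

typedef struct { float r, g, b; } Color;

extern Color sample_full(float x, float y);  /* assumed: full-resolution color */
extern Color sample_low(float x, float y);   /* assumed: downscaled copy */
extern float sample_depth(float x, float y); /* assumed: depth map lookup */
extern float coc_radius(float depth);        /* CoC from depth, as in optics */

#define N_TAPS 8
static const float taps[N_TAPS][2] = {       /* Poisson-disk-like offsets */
    { 0.53f, 0.12f}, {-0.40f, 0.60f}, {-0.70f,-0.30f}, { 0.10f,-0.80f},
    { 0.90f,-0.20f}, {-0.10f, 0.95f}, {-0.95f, 0.15f}, { 0.35f,-0.35f} };

Color blur_pixel(float x, float y)
{
    float zc = sample_depth(x, y);
    float r  = coc_radius(zc);
    Color sum = {0.0f, 0.0f, 0.0f};
    float wsum = 0.0f;
    for (int i = 0; i < N_TAPS; ++i) {
        float sx = x + taps[i][0] * r, sy = y + taps[i][1] * r;
        float zs = sample_depth(sx, sy);
        /* large CoC: read the prefiltered, downscaled image instead */
        Color c = (r > 5.0f) ? sample_low(sx, sy) : sample_full(sx, sy);
        /* depth weight: a sharp foreground sample must not bleed onto us */
        float w = (zs >= zc) ? 1.0f : coc_radius(zs) / (r + 1e-6f);
        if (w > 1.0f) w = 1.0f;
        sum.r += w * c.r; sum.g += w * c.g; sum.b += w * c.b;
        wsum  += w;
    }
    if (wsum < 1e-6f) return sample_full(x, y);
    sum.r /= wsum; sum.g /= wsum; sum.b /= wsum;
    return sum;
}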
Summed-area table [Hensley]
As an alternative to sampling within the CoC, the averaged color of an image region can be found using a summed-area table (SAT). The computation is fast, does not slow down as the blur radius grows, and there is no need to generate a lower-resolution image. The method was originally intended for texture filtering but was later adapted for depth of field, including on the GPU. It exhibits almost all types of artifacts.
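A sketch of the table itself, for a grayscale image (layout and names here are illustrative): building is O(W*H) once, after which a box average of any size costs four lookups.

/* build: sat[y][x] = sum of src over the rectangle [0..x] x [0..y] */
void sat_build(const float *src, double *sat, int W, int H)
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            double up   = y ? sat[(y - 1) * W + x] : 0.0;
            double left = x ? sat[y * W + (x - 1)] : 0.0;
            double diag = (x && y) ? sat[(y - 1) * W + (x - 1)] : 0.0;
            sat[y * W + x] = src[y * W + x] + up + left - diag;
        }
}

/* mean over [x0..x1] x [y0..y1]: four reads regardless of the box size */
double sat_box_mean(const double *sat, int W, int x0, int y0, int x1, int y1)
{
    double a = (x0 && y0) ? sat[(y0 - 1) * W + (x0 - 1)] : 0.0;
    double b = y0 ? sat[(y0 - 1) * W + x1] : 0.0;
    double c = x0 ? sat[y1 * W + (x0 - 1)] : 0.0;
    double d = sat[y1 * W + x1];
    return (d - b - c + a) / ((double)(x1 - x0 + 1) * (y1 - y0 + 1));
}

For depth of field, the box side at each pixel would be the CoC diameter computed from the depth map; the rectangular query is also why the PSF is effectively restricted to a box shape.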
Pyramidal method [Kraus and Strengert]
The scene is divided into layers according to depth. Pixels close to a layer boundary are assigned not to the nearest layer alone but partially to several layers: this eliminates discretization artifacts at the layer boundaries. Then the pixel values missing from a layer (those covered by foreground objects) are extrapolated. After that, each layer is blurred with the pyramid method, using point weights to exclude artifacts, and the resulting layers are blended taking their transparency into account. The method is faster than layered methods using the FFT but imposes restrictions on the PSF used.
The image was blurred using the pyramid method (Magnus Strengert picture).
Separable blur [Zhou]
Just as in classical blur methods that ignore depth (box blur, Gaussian blur), separable PSFs can be used to compute depth of field. The image is blurred first horizontally, then vertically; as a result, performance depends not on the area of the PSF but only on its diameter. The method can be implemented on the GPU and applied in real time. The idea of separable filtering is illustrated in the figure.
With separable blurring, performance depends not on the PSF area but on its diameter.
It is worth noting that in another work Barsky emphasizes that proper depth-aware blurring cannot be separable: this method can produce artifacts in some cases.
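For illustration, a minimal depth-unaware sketch of the two-pass idea (fixed radius and sigma; per-pixel CoC handling, and with it the artifacts Barsky warns about, is deliberately left out):

#include <math.h>

/* blur img (W x H grayscale) in place; tmp is caller-provided scratch;
   radius must be <= 31 to fit the kernel buffer */
void blur_separable(float *img, float *tmp, int W, int H, int radius, float sigma)
{
    float k[64], ksum = 0.0f;
    for (int i = -radius; i <= radius; ++i)              /* 1D Gaussian kernel */
        ksum += k[i + radius] = expf(-(float)(i * i) / (2.0f * sigma * sigma));
    for (int i = 0; i <= 2 * radius; ++i) k[i] /= ksum;  /* normalize */

    for (int y = 0; y < H; ++y)                          /* horizontal pass */
        for (int x = 0; x < W; ++x) {
            float s = 0.0f;
            for (int i = -radius; i <= radius; ++i) {
                int xi = x + i; if (xi < 0) xi = 0; if (xi >= W) xi = W - 1;
                s += k[i + radius] * img[y * W + xi];
            }
            tmp[y * W + x] = s;
        }
    for (int y = 0; y < H; ++y)                          /* vertical pass */
        for (int x = 0; x < W; ++x) {
            float s = 0.0f;
            for (int i = -radius; i <= radius; ++i) {
                int yi = y + i; if (yi < 0) yi = 0; if (yi >= H) yi = H - 1;
                s += k[i + radius] * tmp[yi * W + x];
            }
            img[y * W + x] = s;
        }
}

The cost is O(diameter) per pixel instead of O(diameter squared), which is the whole attraction of separability.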
Simulated heat diffusion [Barsky, Kosloff, Bertalmio, Kass]
Heat diffusion is a physical process in which blurring can also be observed (although it is unrelated to optics): if temperature is unevenly distributed in a heat-conducting material, we observe it blurring out over time. The differential equations describing such blurring can be used to simulate depth of field. Even for fairly large blur radii, the method can run on the GPU in real time.
The position map, used in this method instead of a depth map, stores all three coordinates of a point rather than only its depth (Barsky picture).
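One common way to write the governing equation is the inhomogeneous heat equation (this notation is a general statement of the idea, not necessarily the paper's exact formulation):

\[ \frac{\partial I}{\partial t} \;=\; \nabla \cdot \big( g(x, y)\, \nabla I \big), \]

where I is the image intensity and the diffusivity g(x, y) is derived from the per-pixel CoC. Integrating to a fixed "time" blurs each region in proportion to its g, and setting g = 0 across an in-focus boundary stops intensity from leaking over it.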
Generalized and semantic depth of field
So far we have described methods that simulate depth of field as it occurs in nature. Blur, however, does not have to look the way we are used to seeing it. In computer graphics we are not limited to physically realizable lens models, so the blur region can be set arbitrarily: for example, several people can be singled out of a crowd. This can be implemented as a variation of the simulated heat diffusion method, using a blur map governed by rules other than physical ones.
Physically incorrect blur (picture by Kosloff and Barsky).
Light fields
Light fields were originally described as a way to represent views of a scene from different points, regardless of the scene's complexity. The standard encoding of light fields is the two-plane parametrization: two parallel planes are chosen, and each ray is described by a point on each of the two planes, yielding a four-dimensional data structure. This data can be manipulated, for example to change the focus plane or the depth of field.

One can say that in a camera the light field (between the lens plane and the sensor plane) is integrated naturally: we do not distinguish from which point of the lens a light ray arrived. With the data structure described above, however, the depth of field can be controlled interactively after the sensor readings are taken.
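In the two-plane parametrization L(u, v, s, t), synthetic refocusing can be written, following Ng's formulation (up to a normalization constant), as

\[ E_{\alpha}(s, t) \;\propto\; \iint L\!\left(u,\; v,\; u + \frac{s - u}{\alpha},\; v + \frac{t - v}{\alpha}\right) du\, dv, \]

where α is the ratio of the new focus distance to the original one. The integral over the lens plane (u, v) is exactly the "natural" integration a sensor performs, but carried out after capture for any chosen α.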
In addition, we can focus on different parts of the image using the fast Fourier transform in four-dimensional space.
In images generated on a computer, it is easy to obtain light field data by rendering the scene from different angles.
There are (physical) cameras capable of recording light fields. Microlenses placed in front of the sensor separate the light arriving from different directions; thus, unlike in a classical camera, the light is not summed at one point but distributed according to direction. From the sensor data, the object in focus and the amount of blur can then be chosen at the processing stage.
Light field: a small portion of the RAW image from the Lytro sensor. The microlenses located in front of the sensor are visible.
Dappled photography
The method described above requires many sensor pixels to encode a single output pixel and therefore has low resolution: such a camera yields only about 800 pixels along the larger side from an 11 MPix sensor. The problem could be solved with sensors of very high resolution, but that would make them more expensive and the data structure prohibitively large.

Dappled photography offers an alternative way to obtain a light field that bypasses the sensor-size limit. Instead of a large number of microlenses, a translucent mask is used that modulates the light according to a known law. Applying the inverse Fourier transform recovers the original light field.
Defocus magnification
It would be nice to be able to apply the depth-of-field effect to an ordinary photo without a light field (where, unlike in a render, no depth map is available). This method detects the blur present in the image and amplifies it: it assumes the photo already contains some blur, but not enough, for example because it was taken with a compact point-and-shoot camera whose small sensor cannot produce a large blur radius. The more blur is already present at a point, the more blur is added there.
Autofocus
When depth of field is used in virtual reality applications and video games, as well as in photography, autofocus is required: the task of determining the depth at which pixels are in focus. A region is selected in the center of the image, and samples from this region take part in determining the focus depth. Both the weighted average depth and the known importance of the depicted objects are taken into account (for example, the "gaze" can be focused on one of the characters, but not on a wooden crate or a wall); this is called the semantic weight of an object. The accommodation of the gaze must also be modeled (focus changes gradually over time), for which a low-pass filter can be used, for example.
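A toy sketch of this logic; depth_at and weight_of are assumed callbacks standing in for the depth map and the semantic weights:

extern float depth_at(int x, int y);   /* assumed: depth map lookup */
extern float weight_of(int x, int y);  /* assumed: semantic weight, e.g.
                                          characters > crates and walls */

/* weighted average depth over a central window of half-size `half` */
float focus_target(int cx, int cy, int half)
{
    float num = 0.0f, den = 0.0f;
    for (int y = cy - half; y <= cy + half; ++y)
        for (int x = cx - half; x <= cx + half; ++x) {
            float w = weight_of(x, y);
            num += w * depth_at(x, y);
            den += w;
        }
    return den > 0.0f ? num / den : 0.0f;
}

/* per-frame accommodation: a one-pole low-pass toward the target depth */
float focus_update(float focus, float target, float k /* 0..1 per frame */)
{
    return focus + k * (target - focus);
}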
Conclusion
We have examined most of the common methods used to achieve the depth-of-field effect in modern computer graphics. Some of them work with 3D objects; others are post-processing methods. We have also described the basic properties that a proper method must satisfy.

The problem of efficiently achieving photorealistic depth of field remains open. So does the problem of reconstructing a depth map from a photograph (determining the distance to an object).
References
Further reading
Other works by Brian A. Barsky
Accurate Depth of Field Simulation in Real Time (Tianshu Zhou, Jim X. Chen, Mark Pullen) — one of the methods for computing DoF
An Algorithm for Rendering Generalized Depth of Field Effects Based on Simulated Heat Diffusion (Todd Jerome Kosloff, Brian A. Barsky) — a full description of the heat diffusion method discussed above
Investigating Occlusion and Discretization Problems in Image Space Blurring Techniques (Brian A. Barsky, Michael J. Tobias, Daniel R. Horn, Derrick P. Chu) — a description of another method
Quasi-Convolution Pyramidal Blurring (Martin Kraus) — the pyramid method
Focus and Depth of Field (Frédo Durand, Bill Freeman) — a cheat sheet of concepts and definitions related to depth of field; many useful formulas in one place