# Imitating iridescence: a CD-ROM shader

Original author: Alan Zucconi
This tutorial is about iridescence. We will explore the very nature of light to understand and recreate the behavior of materials that produce colored reflections. The tutorial is aimed at Unity game developers, but the techniques described here can easily be ported to other environments, including Unreal and WebGL.

The tutorial will consist of the following parts:

• Part 1. The nature of light
• Part 2. Improving the rainbow - 1
• Part 3. Improving the rainbow - 2
• Part 4. Understanding the diffraction grating
• Part 5. Mathematics of the diffraction grating
• Part 6. Shader CD-ROM: diffraction grating - 1
• Part 7. Shader CD-ROM: diffraction grating - 2

#### Introduction

Iridescence is an optical phenomenon in which objects change color as the angle of illumination or the viewing angle changes. It is thanks to this effect that soap bubbles display such a wide palette of colors.

Iridescence also appears in pools of spilled gasoline, on the surface of a CD-ROM, and even on fresh meat. Many insects and animals use iridescence to produce colors without the pigments that would normally be required.

This is because iridescence arises from the interaction of light with microscopic structures present on the surfaces of all these objects. Both the tracks of a CD-ROM and the scales of an insect's exoskeleton (see the images below) have features on the same order of magnitude as the wavelengths of the light they interact with. In fact, iridescence was the first phenomenon to reveal the true wave nature of light. We cannot explain or reproduce iridescence without first understanding what light is, how it works, and how it is perceived by the human eye.

#### Nature of light

Like many subatomic particles, light exhibits the properties of both particles and waves, and it is usually modeled as one or the other. For most applications, light can be thought of as consisting of trillions of individual particles called photons. Most shaders, for example, treat photons as tiny billiard balls that bounce off objects at the same angle at which they hit them (see the diagram below).

But light can also be modeled as a wave. Physicists are familiar with this concept, but developers are not always aware of it. So let's spend some time understanding what it means for light to exist as a wave.

We all know ocean waves. Each point on the surface of the ocean has a height; the further it rises above the average level, the higher the wave. If you disturb the surface of the water, waves propagate across the ocean until their energy dissipates.

Light is also a wave, but instead of measuring a height on the surface of water, we measure the energy of the electromagnetic field at a given point. According to this model, light is a disturbance of the electromagnetic field propagating through space. We can picture a light bulb either as creating a wave or as emitting many photons into the surrounding space.

The amount of energy carried by a photon determines the color of the light. Low-energy photons are perceived as red; high-energy photons are perceived as violet. Waves have a property analogous to particle energy: the wavelength. Intuitively, it is the distance between two successive peaks of the wave.

Light always moves at the same speed (approximately 299,792,458 meters per second), which means electromagnetic waves all propagate at the same speed. Although their speed is constant, their wavelength can differ. High-energy photons have short wavelengths. It is the wavelength of light that ultimately determines its color.

As you can see in the diagram above, the human eye perceives photons with wavelengths in the range of roughly 400 to 700 nanometers. A nanometer is one billionth of a meter.

How small is a nanometer?
When trying to grasp the smallest scales at which Nature operates, it is hard to picture the sizes involved. The average person is about 1.6 meters tall. A human hair is approximately 50 micrometers (50 μm) thick. A micrometer is one millionth of a meter (1 μm = 0.000001 m = 10⁻⁶ m). A nanometer is one thousandth of a micrometer (1 nm = 0.000000001 m = 10⁻⁹ m). So the wavelength of visible light is roughly one hundredth of the thickness of a human hair.
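As a quick sanity check of the "one hundredth of a hair" claim, we can do the arithmetic in a few lines of Python (taking 550 nm, roughly the middle of the visible spectrum, as the wavelength):

```python
# Quick sanity check of the scales mentioned above.
hair_thickness_m = 50e-6  # ~50 micrometers
wavelength_m = 550e-9     # green light, roughly mid-spectrum

# How many wavelengths of visible light fit across one hair?
ratio = hair_thickness_m / wavelength_m
print(round(ratio))  # on the order of one hundred
```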

##### What's next?

After this brief introduction, the rest of the tutorial will focus on understanding iridescence and implementing it in Unity.

• Improving the rainbow. As stated above, different wavelengths of light are perceived by the human eye as different colors. In the next two parts, we will figure out how to map these wavelengths to RGB colors. This step is necessary to recreate iridescent reflections with a high degree of accuracy. In these parts, I will also introduce a new approach that is both physically accurate and computationally efficient.
• Diffraction grating. In parts 4 and 5 of this tutorial we will look at the diffraction grating. This is the technical name for one of the effects that cause materials to show iridescent reflections. Despite its "technicality", the derived equation governing this optical phenomenon will be very simple. If you are not interested in the mathematics of the diffraction grating, you can skip part 5.
• Shader CD-ROM. The core of this tutorial is the CD-ROM shader implementation. It uses the knowledge gathered in the previous parts to implement a diffraction grating in Unity. It is an extension of the Unity 5 Standard Surface Shader, which makes the effect both physically plausible and photorealistic. With a little effort, you can adapt it to other types of iridescent reflections based on a diffraction grating.

#### Summary

This part opened the iridescence tutorial. In the remainder of the series, we will explore ways of simulating and implementing iridescent reflections on various materials, from soap bubbles to CD-ROMs, and from spilled gasoline to insects.

## Part 2. Improving the rainbow - 1.

Our journey into the world of photorealism requires us to understand not only how light works, but also how we perceive colors. How many colors are there in the rainbow? Why is pink not one of them? These are just some of the questions we will cover in this part.

#### Introduction

In this part, we will get to know the most popular techniques used in computer graphics to recreate the colors of the rainbow. Although this may seem like a pointless exercise, it actually has a very practical application. Each color of the rainbow corresponds to a specific wavelength of light, and establishing this correspondence will allow us to simulate physically plausible reflections.

In the next part, "Improving the Rainbow - 2", I will introduce a new approach that is well optimized for shaders and at the same time produces the best results to date (see below).

A comparison of WebGL versions of all the techniques discussed in this tutorial can be found on Shadertoy.

#### Color perception

The retina is the part of the eye that detects light. It contains cone cells that send signals to the brain when they detect light of certain wavelengths. Light is a wave in the electromagnetic field, so cones work on the same principles that allow us to detect radio waves: in effect, cones are tiny antennas. If you have studied electronics, you may know that the length of an antenna is related to the wavelength it picks up. That is why the human eye has three different types of cones: short, medium, and long. Each type specializes in detecting a particular range of wavelengths.

The graph above shows how each type of cone responds to different wavelengths. When one of these cone types is activated, the brain interprets its signal as a color. Despite what is often said, the short, medium, and long cones do not correspond to specific colors. Rather, each type responds differently across a range of colors.

It would be wrong to assume that the short, medium, and long cones detect blue, green, and red. Nevertheless, many textbooks (and even shaders!) make this assumption to obtain a reasonably acceptable approximation of this rather complex phenomenon.

#### Spectral color

If we want to recreate the physical phenomena that make iridescence possible, then we need to rethink the way we store and process colors on a computer. When we create a light source in Unity (or any other game engine), we can set its color as a mixture of three main components: red, green and blue. Although a combination of red, green, and blue can indeed reproduce all visible colors, at the most fundamental level light works differently.

A light source can be modeled as a constant stream of photons. Photons carrying different amounts of energy are perceived by the eye as different colors. However, there is no such thing as a "white photon": it is the sum of many photons, each with a different wavelength, that makes light appear white.

To move on, we need to talk about the "building blocks" of light themselves. When we talk about "wavelengths", it is worth thinking about specific colors of the rainbow. In this part, we will show various approaches that implement this connection. As a result, we want a function that, for a given wavelength, returns the perceived color:

fixed3 spectralColor (float wavelength);

In the remainder of the post, we will express wavelengths in nanometers (billionths of a meter). The human eye can perceive light in the range from 400 nm to 700 nm. Wavelengths outside this range exist, but are not perceived as colors.

Why is there no optimal solution?
Earl F. Glynn answered this question best:
"There is no unique correspondence between wavelength and RGB values. Color is an amazing combination of physics and human perception."

The search for a definitively correct mapping between wavelengths and colors is inevitably doomed to fail. The nature of light is objective, but our perception of it is not. The cones that detect particular wavelengths of the visible spectrum vary significantly from person to person. Even if we assume that all cones work identically and consistently in all people, their distribution and number in the retina are largely random. No two retinas are alike, even within the same person.

Finally, color perception depends on how the brain perceives this input. Due to this, various optical illusions and neuroadaptations arise, which make the perception of colors a unique and truly personal experience.

#### Spectral map

The figure below shows how the human eye perceives waves ranging in length from 400 nanometers (blue) to 700 nanometers (red).

It is easy to see that the distribution of colors across the visible spectrum is very nonlinear. If we plot the R, G, and B components of the perceived color for each wavelength, we get something like this:

There is no simple function that fully describes this curve. The simplest and cheapest approach is to use this texture in the shader as a lookup table mapping wavelengths to colors.

The first step is to give the shader access to the new texture. We can do this by adding a texture property to the shader's Properties block.

// Properties
Properties
{
    ...
    _SpectralTex("Spectral Map (RGB)", 2D) = "white" {}
    ...
}
// Shader code
{
    ...
    CGPROGRAM
    ...
    sampler2D _SpectralTex;
    ...
    ENDCG
    ...
}

Our function spectral_tex simply converts wavelengths in the interval [400, 700] into UV coordinates in the interval [0, 1]:

fixed3 spectral_tex (float wavelength)
{
    // wavelength: [400, 700]
    // u:          [0,   1]
    fixed u = (wavelength - 400.0) / 300.0;
    return tex2D(_SpectralTex, fixed2(u, 0.5));
}

In our particular case, we do not need to explicitly clamp the wavelengths to the interval [400, 700]. If the spectral texture is imported with its wrap mode set to Clamp, all values outside this range will automatically sample the black edges of the texture.

Sampling textures in a loop
Below we will see that to reproduce iridescence effects we need to sample several colors from the rainbow. On some devices, the shader may not support texture sampling inside a loop. This is the main reason why using a texture may not be the best approach, especially on mobile platforms.

#### JET color scheme

Sampling a texture might seem like a good idea, but it can significantly slow down the shader. We will see how critical this becomes when we render iridescence on a CD-ROM, where each pixel requires several texture samples.

There are several functions that approximate the color distribution of the light spectrum. Probably one of the simplest is the JET color scheme. It is the default color scheme in MATLAB, and it was originally developed by the National Center for Supercomputing Applications for better visualization of astrophysical fluid-jet simulations.

The JET color scheme is a combination of three different curves: blue, green and red. This is clearly seen when splitting the color:

We can easily implement the JET color scheme by writing the equations of the line segments that make up the scheme above.

// MATLAB Jet color scheme
fixed3 spectral_jet(float w)
{
    // w: [400, 700]
    // x: [0,   1]
    fixed x = saturate((w - 400.0) / 300.0);
    fixed3 c;
    if (x < 0.25)
        c = fixed3(0.0, 4.0 * x, 1.0);
    else if (x < 0.5)
        c = fixed3(0.0, 1.0, 1.0 + 4.0 * (0.25 - x));
    else if (x < 0.75)
        c = fixed3(4.0 * (x - 0.5), 1.0, 0.0);
    else
        c = fixed3(1.0, 1.0 + 4.0 * (0.75 - x), 0.0);
    // Clamp the color components to [0, 1]
    return saturate(c);
}
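To check the scheme's behavior at a few wavelengths, here is a direct Python port of the function above (a sketch for verification only, not shader code):

```python
def saturate(v):
    """Clamp a value to [0, 1], like Cg's saturate()."""
    return min(max(v, 0.0), 1.0)

def spectral_jet(w):
    """Python port of the JET scheme; w is a wavelength in nanometers."""
    x = saturate((w - 400.0) / 300.0)
    if x < 0.25:
        c = (0.0, 4.0 * x, 1.0)
    elif x < 0.5:
        c = (0.0, 1.0, 1.0 + 4.0 * (0.25 - x))
    elif x < 0.75:
        c = (4.0 * (x - 0.5), 1.0, 0.0)
    else:
        c = (1.0, 1.0 + 4.0 * (0.75 - x), 0.0)
    return tuple(saturate(v) for v in c)

print(spectral_jet(400))  # (0.0, 0.0, 1.0) -> blue end
print(spectral_jet(550))  # (0.0, 1.0, 0.0) -> pure green at mid-spectrum
print(spectral_jet(700))  # (1.0, 0.0, 0.0) -> red end
```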

The R, G, and B values of the resulting color are clamped to the interval [0, 1] using the Cg function saturate. This is necessary to avoid color components greater than one when the camera uses HDR (High Dynamic Range Rendering).

It is worth noting that if you want to strictly adhere to the JET color scheme, then the values ​​outside the visible range will not be black.

#### Bruton color scheme

Another approach to converting wavelengths into visible colors is the scheme proposed by Dan Bruton in his article "Approximate RGB Values for Visible Wavelengths". As with the JET color scheme, Bruton starts from an approximated distribution of perceived colors.

However, his approach better approximates the activity of the long cones, which produces a stronger violet tint at the lower end of the visible spectrum:

This approach translates to the following code:

// Dan Bruton
fixed3 spectral_bruton (float w)
{
    fixed3 c;
    if (w >= 380 && w < 440)
        c = fixed3(-(w - 440.) / (440. - 380.), 0.0, 1.0);
    else if (w >= 440 && w < 490)
        c = fixed3(0.0, (w - 440.) / (490. - 440.), 1.0);
    else if (w >= 490 && w < 510)
        c = fixed3(0.0, 1.0, -(w - 510.) / (510. - 490.));
    else if (w >= 510 && w < 580)
        c = fixed3((w - 510.) / (580. - 510.), 1.0, 0.0);
    else if (w >= 580 && w < 645)
        c = fixed3(1.0, -(w - 645.) / (645. - 580.), 0.0);
    else if (w >= 645 && w <= 780)
        c = fixed3(1.0, 0.0, 0.0);
    else
        c = fixed3(0.0, 0.0, 0.0);
    return saturate(c);
}
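Again, a small Python port of the piecewise function above makes it easy to spot-check individual wavelengths; note that, unlike JET, this scheme really does return black outside [380, 780]:

```python
def saturate(v):
    """Clamp a value to [0, 1], like Cg's saturate()."""
    return min(max(v, 0.0), 1.0)

def spectral_bruton(w):
    """Python port of Dan Bruton's piecewise scheme; w in nanometers."""
    if 380 <= w < 440:
        c = (-(w - 440.0) / (440.0 - 380.0), 0.0, 1.0)
    elif 440 <= w < 490:
        c = (0.0, (w - 440.0) / (490.0 - 440.0), 1.0)
    elif 490 <= w < 510:
        c = (0.0, 1.0, -(w - 510.0) / (510.0 - 490.0))
    elif 510 <= w < 580:
        c = ((w - 510.0) / (580.0 - 510.0), 1.0, 0.0)
    elif 580 <= w < 645:
        c = (1.0, -(w - 645.0) / (645.0 - 580.0), 0.0)
    elif 645 <= w <= 780:
        c = (1.0, 0.0, 0.0)
    else:
        c = (0.0, 0.0, 0.0)
    return tuple(saturate(v) for v in c)

print(spectral_bruton(550))  # green shading toward yellow
print(spectral_bruton(300))  # outside the scheme's range -> black
```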

#### Bump color scheme

The JET and Bruton color schemes use discontinuous functions, so they produce fairly sharp color transitions. Moreover, they do not fade to black outside the visible range. In the book GPU Gems, this problem is solved by replacing the hard line segments of the previous color schemes with much smoother bumps. Each bump is a simple parabola, clamped to zero:

    y(x) = max(0, 1 − x²)

The author of the scheme, Randima Fernando, uses such parabolas for all three components, arranged as follows:

We can write the following code:

// GPU Gems
inline fixed3 bump3 (fixed3 x)
{
    float3 y = 1 - x * x;
    y = max(y, 0);
    return y;
}
fixed3 spectral_gems (float w)
{
    // w: [400, 700]
    // x: [0,   1]
    fixed x = saturate((w - 400.0) / 300.0);
    return bump3
    (
        fixed3
        (
            4 * (x - 0.75), // Red
            4 * (x - 0.5),  // Green
            4 * (x - 0.25)  // Blue
        )
    );
}
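A Python port of this scheme shows where the three bumps peak: each channel reaches 1 exactly where its parabola's argument is zero (625 nm for red, 550 nm for green, 475 nm for blue):

```python
def saturate(v):
    """Clamp a value to [0, 1], like Cg's saturate()."""
    return min(max(v, 0.0), 1.0)

def bump3(xs):
    """max(0, 1 - x^2) applied per component, like the Cg bump3 above."""
    return tuple(max(1.0 - x * x, 0.0) for x in xs)

def spectral_gems(w):
    """Python port of the GPU Gems scheme; w in nanometers."""
    x = saturate((w - 400.0) / 300.0)
    return bump3((4.0 * (x - 0.75),   # Red
                  4.0 * (x - 0.5),    # Green
                  4.0 * (x - 0.25)))  # Blue

print(spectral_gems(475))  # peak of the blue bump
print(spectral_gems(550))  # peak of the green bump
print(spectral_gems(625))  # peak of the red bump
```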

An additional advantage of this color scheme is that it uses no texture samples and no branching, which makes it one of the best options when you favor speed over quality. At the end of this tutorial, I will show a revised version of this color scheme that retains the speed while offering much better color fidelity.

#### Spektre color scheme

One of the most accurate color schemes was created by the Stack Overflow user Spektre. He explains his methodology in the post "RGB values of visible spectrum", where he sampled the red, green, and blue components from measured solar-spectrum data and then fitted each interval with simple functions. The result is shown in the following diagram:

Which gives us:

And here is the code:

// Spektre
fixed3 spectral_spektre (float l)
{
float r=0.0,g=0.0,b=0.0;
if ((l>=400.0)&&(l<410.0)) { float t=(l-400.0)/(410.0-400.0); r=    +(0.33*t)-(0.20*t*t); }
else if ((l>=410.0)&&(l<475.0)) { float t=(l-410.0)/(475.0-410.0); r=0.14         -(0.13*t*t); }
else if ((l>=545.0)&&(l<595.0)) { float t=(l-545.0)/(595.0-545.0); r=    +(1.98*t)-(     t*t); }
else if ((l>=595.0)&&(l<650.0)) { float t=(l-595.0)/(650.0-595.0); r=0.98+(0.06*t)-(0.40*t*t); }
else if ((l>=650.0)&&(l<700.0)) { float t=(l-650.0)/(700.0-650.0); r=0.65-(0.84*t)+(0.20*t*t); }
if ((l>=415.0)&&(l<475.0)) { float t=(l-415.0)/(475.0-415.0); g=             +(0.80*t*t); }
else if ((l>=475.0)&&(l<590.0)) { float t=(l-475.0)/(590.0-475.0); g=0.8 +(0.76*t)-(0.80*t*t); }
else if ((l>=585.0)&&(l<639.0)) { float t=(l-585.0)/(639.0-585.0); g=0.82-(0.80*t)           ; }
if ((l>=400.0)&&(l<475.0)) { float t=(l-400.0)/(475.0-400.0); b=    +(2.20*t)-(1.50*t*t); }
else if ((l>=475.0)&&(l<560.0)) { float t=(l-475.0)/(560.0-475.0); b=0.7 -(     t)+(0.30*t*t); }
return fixed3(r,g,b);
}

#### Conclusion

In this part, we looked at some of the most common techniques for generating rainbow-like patterns in a shader. In the next part, I will introduce you to a new approach to solving this problem.

(Comparison of gradients: JET, Bruton, GPU Gems, Spektre, Zucconi, Zucconi6, and the visible spectrum.)

## Part 3. Improving the rainbow - 2.

#### Introduction

In the previous part, we analyzed four different ways of converting the wavelengths of the visible range of the electromagnetic spectrum (400-700 nanometers) into their respective colors.

Three of these solutions (JET, Bruton, and Spektre) make heavy use of if statements. This is standard practice in C#, but branching is a poor approach in a shader. The only approach that avoids branching is the one described in the book GPU Gems; however, it does not provide an optimal approximation of the colors of the visible spectrum.

(Comparison of gradients: GPU Gems and the visible spectrum.)

In this part I will present an optimized version of the color scheme described in the book GPU Gems.

#### Bump Color Scheme

The original color scheme outlined in the GPU Gems book uses three parabolas (called bumps by the author) to recreate the distribution of the R, G, and B components of the rainbow colors.

Each bump is described by the following equation:

    y(x) = max(0, 1 − x²)

Each wavelength w in the range [400, 700] is first mapped to a normalized value x in the interval [0, 1]:

    x = (w − 400) / 300

The R, G and B components of the visible spectrum are then defined as:

    R = y(4 · (x − 0.75))
    G = y(4 · (x − 0.5))
    B = y(4 · (x − 0.25))

All numerical values were chosen by the author experimentally. However, you can see how poorly they match the true distribution of colors.

#### Quality optimization

My first solution used exactly the same equations as the GPU Gems color scheme, but I re-optimized all the numerical values so that the final range of colors matches the real colors of the visible spectrum as closely as possible.

The result boils down to the same three parabolic bumps, each with an optimized scale, center, and vertical offset per channel (the exact values appear in the code below).

And leads to a much more realistic result:

(Comparison of gradients: GPU Gems, Zucconi, and the visible spectrum.)

Like the original solution, the new approach does not include branching. Therefore, it is ideal for shaders. The code is as follows:

// Based on GPU Gems
// Optimized by Alan Zucconi
inline fixed3 bump3y (fixed3 x, fixed3 yoffset)
{
    float3 y = 1 - x * x;
    y = saturate(y - yoffset);
    return y;
}
fixed3 spectral_zucconi (float w)
{
    // w: [400, 700]
    // x: [0,   1]
    fixed x = saturate((w - 400.0) / 300.0);
    const float3 cs = float3(3.54541723, 2.86670055, 2.29421995);
    const float3 xs = float3(0.69548916, 0.49416934, 0.28269708);
    const float3 ys = float3(0.02320775, 0.15936245, 0.53520021);
    return bump3y(cs * (x - xs), ys);
}
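A Python port of spectral_zucconi lets us verify that the optimized constants behave sensibly, e.g. that 550 nm comes out green-dominated:

```python
def saturate(v):
    """Clamp a value to [0, 1], like Cg's saturate()."""
    return min(max(v, 0.0), 1.0)

def bump3y(xs, yoffsets):
    """saturate(1 - x^2 - yoffset) per component, like the Cg bump3y above."""
    return tuple(saturate(1.0 - x * x - y) for x, y in zip(xs, yoffsets))

def spectral_zucconi(w):
    """Python port of spectral_zucconi; w in nanometers."""
    x = saturate((w - 400.0) / 300.0)
    cs = (3.54541723, 2.86670055, 2.29421995)
    xs = (0.69548916, 0.49416934, 0.28269708)
    ys = (0.02320775, 0.15936245, 0.53520021)
    return bump3y(tuple(c * (x - x0) for c, x0 in zip(cs, xs)), ys)

r, g, b = spectral_zucconi(550)
print(round(r, 3), round(g, 3), round(b, 3))  # green-dominated mid-spectrum color
```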

To find an optimization algorithm, I used the Python scikit library .

Here are the parameters used to recreate my results:

• Algorithm: L-BFGS-B
• Loss: weighted MSE
• Fitting image: the linear visible spectrum
• Wavelength range: 400 nm to 700 nm
• Starting solution: the original GPU Gems coefficients
• Final solution: the cs, xs and ys values listed in the code above

#### Improving the rainbow

If we look more closely at the distribution of colors in the visible spectrum, we notice that three parabolas cannot faithfully follow the R, G, and B curves. Using six parabolas instead of three works noticeably better: by assigning two bumps to each primary component, we obtain a much more accurate approximation.

The difference is clearly visible in the violet and orange parts of the spectrum:

(Comparison of gradients: Zucconi, Zucconi6, and the visible spectrum.)

This is what the code looks like:

// Based on GPU Gems
// Optimized by Alan Zucconi
fixed3 spectral_zucconi6 (float w)
{
    // w: [400, 700]
    // x: [0,   1]
    fixed x = saturate((w - 400.0) / 300.0);
    const float3 c1 = float3(3.54585104, 2.93225262, 2.41593945);
    const float3 x1 = float3(0.69549072, 0.49228336, 0.27699880);
    const float3 y1 = float3(0.02312639, 0.15225084, 0.52607955);
    const float3 c2 = float3(3.90307140, 3.21182957, 3.96587128);
    const float3 x2 = float3(0.11748627, 0.86755042, 0.66077860);
    const float3 y2 = float3(0.84897130, 0.88445281, 0.73949448);
    return
        bump3y(c1 * (x - x1), y1) +
        bump3y(c2 * (x - x2), y2);
}

There is no doubt that spectral_zucconi6 provides the best approximation of colors without using branching. If speed matters more to you, you can use the simplified version of the algorithm, spectral_zucconi.

#### Summary

In this part, we looked at a new approach to generating rainbow-like patterns in shaders.

(Comparison of gradients: JET, Bruton, GPU Gems, Spektre, Zucconi, Zucconi6, and the visible spectrum.)

## Part 4. Understanding the diffraction grating

In the first part of the tutorial, we got acquainted with the dual nature of light, which exhibits the properties of both waves and particles. In this part, we will see why both of these aspects are necessary for iridescence to occur.

#### Reflections: Light and Mirrors

In the scientific literature, a light ray is often used to indicate the path photons travel through space as they interact with objects. In most shading models, light is treated as being made of uniform particles that behave like perfect billiard balls: when a ray of light hits a surface, it is reflected at an angle equal to the angle of incidence. Such surfaces behave like perfect mirrors, reflecting all incoming light.

Objects rendered with this technique look like mirrors. Moreover, if light arrives from direction L, the observer can see it only when looking from direction R. This type of reflection is called specular, meaning mirror-like.

In the real world, most objects reflect light in a different way, called diffuse reflection. When a ray of light hits a diffuse surface, it is scattered more or less uniformly in all directions, which gives objects a uniform, matte color.

In most modern engines (such as Unity and Unreal), these two behaviors are modeled with different sets of equations. In my earlier tutorial, Physically Based Rendering and Lighting Models, I explained the Lambertian and Blinn-Phong reflectance models used for diffuse and specular reflections, respectively. Although they look different, diffuse reflection can be explained in terms of specular reflection. No surface is perfectly flat. A rough surface can be modeled as being made of countless tiny mirrors, each of which is a perfect specular reflector. The presence of such microfacets scatters the incoming rays in all directions.

The varying orientation of these microfacets is often modeled in physically based shaders through properties such as Smoothness or Roughness. You can read more about this on the Unity documentation page explaining the Smoothness property of the engine's Standard shader.

In this section we said that diffuse reflection can be fully explained by considering specular reflection on a surface made of randomly oriented microfacets. However, this is not entirely true. If a surface exhibited only specular reflection, then polishing it perfectly would make it look black. White marble is a good counterexample: no amount of polishing can make it black. Even if we achieved a perfectly smooth surface, white marble would still show a white diffuse component.

Indeed, something else is responsible for this effect. The diffuse component of a surface also arises from a second source: refraction. Light can penetrate the surface of an object, bounce around inside it, and exit at a different angle (see the figure above). This means that some fraction of the incident light is re-emitted by the material's surface at arbitrary points and angles. This behavior is often called subsurface scattering, and simulating it is usually very expensive.

You can read more about these effects (and how to simulate them) in Marmoset's article Basic Theory of Physically Based Rendering.

#### Light as a wave

It is very convenient to model light rays as if they were made of particles. However, that alone will not let us recreate the behavior exhibited by many materials, including iridescence. Some phenomena can be fully understood only by accepting that, under certain conditions, light behaves like a wave.

Most shaders treat light as particles. A consequence of this enormous simplification is that light composes additively: if two rays reach the observer, their intensities simply add up, and the more rays a surface sends toward the viewer, the brighter it appears.

In the real world, this is not so. If two rays of light reach the observer, then the final color depends on how their waves interact with each other. The animation below shows how two simple sine waves can amplify or cancel each other depending on their phase .

Animation

When two waves are in phase, their peaks and troughs line up exactly, and the resulting wave is amplified. Otherwise, they can literally cancel each other out. This means that if two rays of light reach the observer in the right configuration, the observer receives no light at all.
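This in-phase/out-of-phase behavior is easy to demonstrate numerically. The sketch below superposes two unit-amplitude sine waves and measures the peak amplitude of the sum for two phase offsets:

```python
import math

def peak_amplitude(phase_offset, samples=1000):
    """Peak amplitude of the sum of two unit sine waves with a phase offset."""
    return max(
        abs(math.sin(t) + math.sin(t + phase_offset))
        for t in (2 * math.pi * i / samples for i in range(samples))
    )

print(peak_amplitude(0.0))      # in phase: the waves reinforce, peak amplitude 2
print(peak_amplitude(math.pi))  # half a wavelength out of phase: total cancellation
```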

Wave interaction may seem like an odd principle, but we all experience it in everyday life. Science popularizer Derek Muller explains it well in his video The Original Double Slit Experiment, where he demonstrates constructive and destructive interference with water waves.

But how does this relate to light and iridescence? Iridescence is caused by the interaction of light waves of different wavelengths. Some materials reflect photons in such a way that certain colors are reinforced while others are canceled. This interaction is what produces the rainbow patterns we observe.

#### Diffraction

In the first section of this part we studied one type of interaction between light and matter: reflection, which arises when light is modeled as a particle. If we treat light as a wave instead, a new set of behaviors emerges. One of them is called diffraction. When all the incident light reaches a surface at the same angle, it is called a plane wave; directional light sources in Unity, for example, produce plane waves. When a plane wave passes through a slit, it diffracts, as shown in the following animation:

Animation

If light passes through two separate slits, two new wavefronts are generated. And as we said above, these new light waves can interact with each other. The animation below shows how light behaves in the presence of two such slits: the wavefronts interact, both reinforcing and canceling each other.

Animation

Now we have all the background needed to discuss the causes of iridescence.

##### Diffraction grating

When a plane wave passes through a slit, or is reflected off a small surface irregularity, it diffracts, creating a new spherical wavefront. This means the light is scattered in all directions, much like in diffuse reflection. If the surface of a material is irregular, the resulting spherical waves are scattered randomly and no interference pattern emerges at the macroscopic level.

However, some materials have surface patterns that repeat at a scale comparable to the wavelength of the incident light. When this happens, the regularity of the pattern causes the diffracted wavefronts to interact in a repeated, non-random way. This interaction creates a repeating interference pattern that is visible at the macroscopic level.

A surface with such a repeating pattern is called a diffraction grating: some wavelengths are strongly reinforced while others are canceled. Since different wavelengths correspond to different colors, a diffraction grating makes the reflections of certain colors more pronounced.

This mechanism produces iridescence on surfaces with a repeating pattern. Such surfaces are common in nature: the exoskeletons of insects and the feathers of birds contain microscopic scales aligned in repeating patterns. The image below shows a magnified photograph of a peacock feather.

#### To summarize

In the next part of the tutorial, I will show how to model a specific type of iridescence mathematically. Once the equations are derived, they are easy to implement in a shader.

## Part 5. The mathematics of the diffraction grating.

#### Introduction

In the previous part, we explained why iridescence occurs in some materials. Now we have everything we need to start modeling this phenomenon mathematically. Let's begin with a material whose surface irregularities repeat at a known distance. To derive the equations, we denote the angle between the incident light rays and the surface normal as θ_L. Let's also imagine that the observer is positioned so that he receives the reflected rays at an angle θ_V. Each irregularity scatters light in all directions, so there will always be rays of light reaching the observer, regardless of θ_V.

Because the irregularities repeat at regular intervals of d nanometers, the scattering pattern itself repeats every d nanometers. This means that at least one ray of light reaches the observer from each slit.

#### Derivation of Equations

The two rays of light shown in the diagram above travel different distances before reaching the observer. Even if they start out in phase, they may no longer be in phase when they arrive. To understand how these two rays interact (reinforcing or canceling each other), we need to calculate how far out of phase they are when they reach the observer.

The two rays are exactly in phase until the first one hits the surface. The second ray travels an extra distance (highlighted in green) before it, too, reaches the surface. Using simple trigonometry, we can show that the length of the green segment equals d · sin θ_L.

Using a similar construction, we can calculate the extra distance that the first ray travels after the second one has hit the surface. In this case, it equals d · sin(θ_V).

These two segments are critical for determining whether the two rays are in phase when they reach the observer. Their difference measures the difference in the path lengths of the two rays. If it is equal to zero, we know for certain that the two rays are in phase, because they have essentially travelled the same distance.
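As a quick numeric check (my own sketch, not part of the original tutorial), the path difference between the two rays can be computed directly from the two green segments described above:

```python
import math

def path_difference(d, theta_L, theta_V):
    """Difference between the two extra distances travelled by the rays.

    d        -- distance between slits (nanometres)
    theta_L  -- angle of the incident light, measured from the normal (radians)
    theta_V  -- angle of the reflected ray towards the observer (radians)
    """
    return d * math.sin(theta_L) - d * math.sin(theta_V)

# When the two angles are equal (specular reflection), the two extra
# distances match exactly and the rays stay in phase.
delta = path_difference(1600, math.radians(30), math.radians(30))
print(delta)  # → 0.0
```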

However, this is not the only case in which two rays are in phase. If the difference in path lengths is an integer multiple of the wavelength λ, they will still be in phase. Mathematically, two rays are in phase if they satisfy the following condition:

d · (sin(θ_L) − sin(θ_V)) = n · λ

where n is any integer.

#### Visualization

Let's take a minute to understand the meaning of this equation. If light falls at an angle θ_L, what will an observer see when looking at the material at an angle θ_V? All wavelengths λ for which d · (sin(θ_L) − sin(θ_V)) is an integer multiple of λ will interfere constructively, and so will appear strongly in the final reflection. These are the colors the viewer will see.

This effect is visualized in the following diagram, taken from a very interesting discussion, "A complex approach: Iridescence in cycles":

The white ray follows the path that photons take under specular reflection. An observer looking at the material from different angles will see a circular rainbow pattern. Each color corresponds to its own wavelength λ, and its order is determined by the corresponding integer n. As you can see, the diffraction grating equation is satisfied even at negative values of n, because the quantity sin(θ_L) − sin(θ_V) may be negative. From a computational point of view, it makes sense to simplify the search space by limiting ourselves to positive values only. The new equation that we will use is:

d · |sin(θ_L) − sin(θ_V)| = n · λ

where n is a positive integer.

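To get a feel for what this equation predicts, here is a small numeric sketch (my own illustration, not part of the original tutorial) that lists the visible wavelengths satisfying d · |sin(θ_L) − sin(θ_V)| = n · λ for a sample pair of angles and a grating distance of 1600 nm:

```python
import math

def constructive_wavelengths(d, theta_L, theta_V, max_n=8):
    """Wavelengths (nm) that interfere constructively for a grating
    with spacing d (nm), light angle theta_L and view angle theta_V."""
    u = abs(math.sin(theta_L) - math.sin(theta_V))
    wavelengths = []
    for n in range(1, max_n + 1):
        w = u * d / n
        # Keep only wavelengths inside the visible spectrum (400-700 nm).
        if 400 <= w <= 700:
            wavelengths.append(round(w))
    return wavelengths

print(constructive_wavelengths(1600, math.radians(60), math.radians(10)))
# → [554]  (a single green wavelength survives for this pair of angles)
```

Changing either angle shifts which orders n land inside the visible range, which is exactly why the reflected color changes with the viewing direction.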
## Part 6. Shader CD-ROM: diffraction grating - 1

In this part, we will look at creating a shader that recreates the rainbow reflections visible on the surface of a CD-ROM or DVD.

#### Introduction

In the previous part, we derived the equations describing the nature of the iridescent reflections exhibited by some surfaces. Iridescence occurs in materials whose surface has a repeating pattern with a size comparable to the wavelength of the light it reflects.

The optical effect that we want to recreate ultimately depends on three factors: the angle between the light source and the surface normal (the light direction), the observer's viewing angle (the view direction), and the distance d between the repeating slits.

We want the shader to add iridescent reflections on top of the usual effects that a standard material produces. Therefore, we will extend the lighting function of the Standard Surface shader. If you are not familiar with this procedure, it is worth reading my tutorial Physically Based Rendering and Lighting Models.

#### Creating a Surface Shader

The first step is to create a new shader. Since we want to extend the capabilities of a shader that already supports physically based lighting, we will start from the Standard Surface Shader.

The CD-ROM shader needs one new property: the grating distance d used in the diffraction grating equation. Let's add it to the Properties block, which should now look like this:

Properties
{
    _Color ("Color", Color) = (1,1,1,1)
    _MainTex ("Albedo (RGB)", 2D) = "white" {}
    _Glossiness ("Smoothness", Range(0,1)) = 0.5
    _Metallic ("Metallic", Range(0,1)) = 0.0
    _Distance ("Grating distance", Range(0,10000)) = 1600 // nm
}

This will create a new slider in the Material Inspector. However, the _Distance property still needs to be associated with a variable in the CGPROGRAM section:

float _Distance;

Now we are ready to work.

#### Change lighting function

The first thing we need to do is replace the CD-ROM shader's lighting function with our own. We can do this by changing this #pragma directive:

#pragma surface surf Standard fullforwardshadows

to:

#pragma surface surf Diffraction fullforwardshadows

This will force Unity to delegate the lighting calculation to a function called LightingDiffraction. It is important to understand that we want to extend the functionality of this surface shader, not redefine it. Therefore, our new lighting function will begin by calling Unity's standard PBR lighting function:

#include "UnityPBSLighting.cginc"
inline fixed4 LightingDiffraction(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original PBR color
    fixed4 pbr = LightingStandard(s, viewDir, gi);
    // <the diffraction grating code will go here>
    return pbr;
}

As you can see from the code snippet above, the new LightingDiffraction function simply calls LightingStandard and returns its value. If we compile the shader now, we will not see any difference in how materials are rendered.

However, before moving on, we need to create an additional function to handle global illumination. Since we don't need to change this behavior, our new global illumination function will simply be a proxy for Unity's standard PBR one:

void LightingDiffraction_GI(SurfaceOutputStandard s, UnityGIInput data, inout UnityGI gi)
{
    LightingStandard_GI(s, data, gi);
}

It is also worth noting that since we call LightingStandard and LightingStandard_GI directly, we need to include UnityPBSLighting.cginc in our shader.

#### The implementation of the diffraction grating

This will be the foundation of our shader. We are finally ready to implement the diffraction grating equation derived in the previous part. There, we came to the conclusion that the observer sees an iridescent reflection that is the sum of all wavelengths λ satisfying the grating equation:

d · |sin(θ_L) − sin(θ_V)| = n · λ

where n is an integer greater than 0.

For each pixel, the values of θ_L (determined by the light direction), θ_V (determined by the view direction) and d (the distance between the slits) are known. The unknown variables are λ and n. The easiest approach is to loop through the values of n and see which wavelengths satisfy the grating equation.

Once we know which wavelengths contribute to the final iridescent reflection, we can calculate the colors corresponding to them and add them up. In the "Improving the Rainbow" parts, we examined several methods for converting wavelengths of the visible spectrum into colors. We will use spectral_zucconi6 from that tutorial, because it provides the best approximation at the lowest computational cost.

Let's look at the following possible implementation:

inline fixed4 LightingDiffraction(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original PBR color
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // Calculates the reflection color
    fixed3 color = 0;
    for (int n = 1; n <= 8; n++)
    {
        float wavelength = abs(sin_thetaL - sin_thetaV) * d / n;
        color += spectral_zucconi6(wavelength);
    }
    color = saturate(color);

    // Adds the reflection color to the material color
    pbr.rgb += color;
    return pbr;
}

In this code snippet we use values of n up to 8. Larger values can give slightly better results, but this is already enough to account for a significant part of the iridescent reflection.
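A quick numeric check (my own sketch, not from the original tutorial) shows why stopping at n = 8 is reasonable for the default grating distance of 1600 nm: since |sin(θ_L) − sin(θ_V)| never exceeds 2, the largest wavelength reachable at order n is 2 · d / n, and beyond n = 8 that bound drops below the 400 nm edge of the visible spectrum:

```python
d = 1600  # grating distance in nanometres
orders_visible = []
for n in range(1, 12):
    max_wavelength = 2 * d / n   # largest reachable wavelength at order n
    if max_wavelength >= 400:    # 400 nm: violet edge of the visible range
        orders_visible.append(n)
print(orders_visible)  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

For a different grating distance the useful number of orders changes accordingly, so the loop bound is a trade-off between accuracy and cost rather than a universal constant.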

The last thing left is to calculate sin_thetaL and sin_thetaV. To do this, we need to introduce another concept: the tangent vector. In the next part, we will learn how to calculate it.

## Part 7. Shader CD-ROM: diffraction grating - 2

#### Introduction

In the previous part of the tutorial, we created a first approximation of the iridescent reflections that appear on the surface of a CD-ROM. It is important to remember that this shader is not yet physically faithful: for a correct simulation of the reflection, we must account for the fact that all the tracks of the CD-ROM are arranged in circles. This will create a radial reflection.

#### Slot Orientation

The grating equation we derived has a serious limitation: it assumes that all the slits are oriented in the same direction. This is often true for insect exoskeletons, but the tracks on the surface of a CD-ROM are arranged in circles. If we implement the solution literally, we get a rather unconvincing reflection (the right side of the image).

To solve this problem, we need to take into account the local orientation of the slits on the CD-ROM. The normal vector will not help here, because all the slits share the same normal direction, perpendicular to the surface of the disc. The local orientation of a slit can instead be described by its tangent vector (left side of the image above).

In the diagram above, the normal direction N is shown in blue and the tangent direction T in red. The angles that the light source and the observer form with the normal N are called θ_L and θ_V. The analogous angles measured from the tangent T are θ'_L and θ'_V. As mentioned above, using θ_L and θ_V in the calculations gives a "flat" reflection, because all the slits share the same N. We need a way to use θ'_L and θ'_V instead, because they correctly capture the local slit directions.

So far, we know that:

cos(θ'_L) = L · T
cos(θ'_V) = V · T


Because N and T are perpendicular, the angles have the following property:

sin(θ_L) = cos(θ'_L)
sin(θ_V) = cos(θ'_V)


This is very convenient, because Cg provides a native implementation of the dot product. We only need to calculate dot(L, T) and dot(V, T).

Where did the cosine come from?
All the vectors considered here share a common property: their length is 1. For this reason, they are called unit vectors. The simplest operation applicable to unit vectors is the dot product (also called the scalar product).

Roughly speaking, the dot product of two unit vectors is a measure of how aligned they are. In fact, the dot product of two unit vectors equals the cosine of the angle between them. This is why cosines appear in the equations.
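This property is easy to verify numerically; here is a small sketch (my own illustration, not part of the shader code):

```python
import math

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

# Two unit vectors in the XY plane, 60 degrees apart.
a = (1.0, 0.0, 0.0)
angle = math.radians(60)
b = (math.cos(angle), math.sin(angle), 0.0)

# The dot product of two unit vectors is the cosine of the angle
# between them.
print(abs(dot(a, b) - math.cos(angle)) < 1e-9)  # → True
```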

#### Calculation of the tangent vector

To finish our shader, we need to calculate the tangent vector. Usually it is stored directly in the vertices of a mesh. However, given how simple the surface of a CD-ROM is, we can calculate it ourselves. Keep in mind that the approach shown in this tutorial is quite simple and only works if the CD-ROM mesh has a correct UV unwrap.

The diagram above shows how the tangent directions are calculated. It is assumed that the surface of the disc is UV-unwrapped as a quad with coordinates ranging from (0, 0) to (1, 1). Knowing this, we remap the coordinates of each point on the surface of the CD-ROM to the range from (-1, -1) to (+1, +1). With this convention, the new coordinate of a point also gives the outward direction from the center of the disc (the green arrow). We can rotate this direction by 90 degrees to find a vector that is tangent to the concentric tracks of the CD-ROM (shown in red).

These operations must be performed in the surf shader function, because UV coordinates are not available in the lighting function LightingDiffraction.

// IN.uv_MainTex: [ 0, +1]
// uv:            [-1, +1]
fixed2 uv = IN.uv_MainTex * 2 - 1;
fixed2 uv_orthogonal = normalize(uv);
fixed3 uv_tangent = fixed3(-uv_orthogonal.y, 0, uv_orthogonal.x);
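The same construction can be checked outside the shader. Here is a small Python sketch (my own illustration, mirroring the Cg code above, with y as the "up" axis as in Unity):

```python
import math

def cd_tangent(u, v):
    """Tangent to the concentric CD tracks at UV coordinate (u, v)."""
    # Remap UV from [0, 1] to [-1, +1]: the result points outward
    # from the disc center.
    x, y = u * 2 - 1, v * 2 - 1
    length = math.hypot(x, y)
    ox, oy = x / length, y / length
    # Rotate the outward direction by 90 degrees to follow the track.
    return (-oy, 0.0, ox)

# At a point on the +X edge of the disc, the track runs along +Z:
t = cd_tangent(1.0, 0.5)
print(t)  # → (-0.0, 0.0, 1.0)
```

By construction, the returned vector is always perpendicular to the radial (outward) direction, which is exactly the local slit orientation the grating equation needs.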

All that remains is to convert the calculated tangent from object space to world space. Because we pass the tangent with w = 0, only the rotation and scale of the object affect it, not its position.

worldTangent = normalize( mul(unity_ObjectToWorld, float4(uv_tangent, 0)) );
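Why w = 0 matters can be seen with a tiny matrix sketch (my own illustration, not Unity code): multiplying a 4-component vector whose w is 0 by an object-to-world matrix applies the rotation/scale part but drops the translation column.

```python
def mat_mul_vec4(m, v):
    """Multiply a 4x4 matrix (row-major, list of rows) by a 4-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# Object-to-world matrix: identity rotation, translation (5, 0, 0).
object_to_world = [
    [1.0, 0.0, 0.0, 5.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

tangent = (0.0, 0.0, 1.0, 0.0)   # w = 0: a direction, unaffected by translation
point   = (0.0, 0.0, 1.0, 1.0)   # w = 1: a position, shifted by translation

print(mat_mul_vec4(object_to_world, tangent))  # → (0.0, 0.0, 1.0, 0.0)
print(mat_mul_vec4(object_to_world, point))    # → (5.0, 0.0, 1.0, 1.0)
```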

How do we pass the tangent direction to the lighting function?
The iridescent reflection is computed in the lighting function LightingDiffraction. However, it needs the tangent vector worldTangent, which is calculated in the surface function surf. The signature of the lighting function cannot be changed, i.e. it cannot be made to receive more parameters than it already has.

If you are not familiar with shaders, here is a hint: there is a very simple way to pass additional parameters. You just add them as variables in the body of the shader. In our case, we can use a shared worldTangent variable that is initialized in surf and read in LightingDiffraction.

How to switch between coordinate spaces?
Coordinates are not absolute; they are always relative to some origin. Depending on the required operations, it can be more convenient to store vectors in one coordinate space or another.

In the context of shaders, you can change the coordinate space of a point simply by multiplying it by a special matrix. The matrix that converts coordinates expressed in object space into world space is called unity_ObjectToWorld. If you are using an older version of Unity, this constant is called _Object2World.

#### Putting it all together

Now we have everything we need to calculate the color of the iridescent reflection:

inline fixed4 LightingDiffraction(SurfaceOutputStandard s, fixed3 viewDir, UnityGI gi)
{
    // Original PBR color
    fixed4 pbr = LightingStandard(s, viewDir, gi);

    // --- Diffraction grating effect ---
    float3 L = gi.light.dir;
    float3 V = viewDir;
    float3 T = worldTangent;
    float d = _Distance;

    float cos_ThetaL = dot(L, T);
    float cos_ThetaV = dot(V, T);
    float u = abs(cos_ThetaL - cos_ThetaV);
    if (u == 0)
        return pbr;

    // Reflection color
    fixed3 color = 0;
    for (int n = 1; n <= 8; n++)
    {
        float wavelength = u * d / n;
        color += spectral_zucconi6(wavelength);
    }
    color = saturate(color);

    // Adds the reflection to the material color
    pbr.rgb += color;
    return pbr;
}

How does this relate to the rainbow?
The wavelength variable declared in the for loop contains the wavelengths of light that contribute to the iridescent reflection of the current pixel.

Each wavelength in the visible range (from 400 to 700 nanometers) is perceived by the human brain as a distinct color. In particular, the wavelengths of the visible range correspond to the colors of the rainbow.

In the "Improving the Rainbow" parts, we showed that any wavelength can be converted to the corresponding color. The function used for this mapping, spectral_zucconi6, is an optimized version of the solution presented in the GPU Gems tutorial on shaders.