Mtg 20/26: Thu-20-Mar-2025

Outline for Today

Reflection Models

Administration

Today

For Next Meeting

Wiki

Link to the UR Courses wiki page for this meeting

Media

Transcript

Audio Transcript

  • Hello.
  • So we're at meeting 20 today, of 26.
  • Any questions or concerns?
  • Okay.
  • ...in the base color parameter. Therefore, we perform linear interpolation between the constant F0 for dielectrics and the base color for metals, using the metallic parameter as weight: F0 = mix(F0, baseColor, metallic). For a metallic value of zero, we get the dielectric F0, and for a metallic value of one, the base color.
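A minimal GLSL sketch of that interpolation, assuming the common 4% dielectric reflectance (the value derived for glass later in the videos); the function name is illustrative:

```glsl
// Blend F0 (reflectance at normal incidence) between the dielectric value
// and the base color, weighted by the metallic parameter.
vec3 computeF0(vec3 baseColor, float metallic) {
    vec3 f0Dielectric = vec3(0.04); // ~4% reflectance, typical for dielectrics
    return mix(f0Dielectric, baseColor, metallic);
}
```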
  • Okay, so I want to talk about reflection models. We asked you to look at the first two sections of Chapter Nine. And so in pbrt, where are reflection computations performed?
  • So this is the answer that I was looking for from the book: the reflection coordinate system.
  • So if you find that you're not happy with the way things got marked by the system, let me know and I'll review them.
  • So why is cosine weighted hemisphere sampling useful in diffuse reflection computations?
  • Was it because they could be directly used? Was that the answer? I wasn't sure if that's the answer you're looking for or not. They could also, like, improve the error without additional computation.
  • Yeah, I was looking for something like: it reduces error without additional computation. So it reduces error without increased computational cost.
  • So you can see, there's an interactive... well, it's not interactive so much. There's one image with and without the cosine weighted sampling, and you can see that the one with looks better.
  • That's that weird gargoyle looking thing? Yes.
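As a sketch of why this comes at no extra cost: a standard mapping (Malley's method, shown here in GLSL; not code from pbrt) turns two uniform random numbers directly into a cosine-distributed direction in the local reflection coordinate system:

```glsl
// Map two uniform random numbers in [0,1) to a hemisphere direction whose
// probability density is proportional to cos(theta) (Malley's method).
vec3 cosineSampleHemisphere(vec2 u) {
    float r = sqrt(u.x);                  // radius on the unit disk
    float phi = 6.28318530718 * u.y;      // 2*pi * u.y
    float z = sqrt(max(0.0, 1.0 - u.x));  // cos(theta)
    return vec3(r * cos(phi), r * sin(phi), z);
}
```

Because the cos θ factor of the estimator is absorbed into the sample density, the variance drops without any additional work per sample.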
  • How would you describe a material with this property: if you choose a point on the surface and rotate the surface around its normal axis at that point, the distribution of light reflected at that point does not change?
  • Isotropic? Yes.
  • So what's the other way we could describe a surface that doesn't have that property?
  • Pitch black? No, well, the distribution would not change, but there's no lighting.
  • I don't know if I'm saying it right, but anisotropic?
  • Yeah.
  • So here is an example of an isotropic surface: something with matte paint, and maybe a black chalkboard. Uh, velvet, and the Moon's surface. Yeah.
  • Another one is cat's eyes.
  • So these were things that reflected, like, right back at the light or the camera. Was that the definition of retroreflective?
  • Yeah.
  • So would a mirror be an example, or is that too much scatter? It's not really directed or focused.
  • A mirror, just as it is, would be a different type of reflective surface. If it's a perfect mirror, then we've got the light going off at an angle equal to the angle of incidence.
  • Does that make sense?
  • Yeah, because in comparison, these ones will send the light right back to the source rather than rebounding. Is that correct?
  • Yeah, it's collecting the incoming rays and directing them back.
  • So we can have, like, reflective material or reflectors on a bike, or...
  • So I have a couple of videos today that may provide a bit of context for some of the other things we've been discussing. Part of them are implementation, which we won't look at; we'll just look at the theory stuff. And you can let me know if you think they're valuable.
  • So I was looking at the Blinn-Phong reflectance model, because the section on diffuse reflection talked about microfacet models. Anyway, that was a bit of my thinking, but let's get started on the videos. Okay.
  • Hello everybody. Welcome to Shaders Monthly. Today we talk about the Phong and Blinn-Phong shading models. Both models are reflection models. This means they describe mathematically how light is reflected at an opaque surface. Here you see the result of the shader implementation that we will have achieved at the end of this episode. The Phong model was published by Bui Tuong Phong in 1975 and later refined to the Blinn-Phong model by Jim Blinn two years later. Blinn-Phong shading used to be the standard in real-time graphics for quite a while. For example, it was the only model available in the OpenGL fixed-function pipeline before the introduction of programmable shaders in 2003. From today's point of view, these models are quite simple. Consequently, I will not only talk about the Phong and Blinn-Phong models in this video, but I will present these models in a general theoretical framework that can be easily extended to more complex reflection models. In particular, I will introduce the rendering equation and the bidirectional reflectance distribution function, BRDF, which are the fundamental tools in computer graphics to describe what happens when light is reflected at an opaque surface. If you are not interested in the theory, you can jump directly to the implementation at the end of the video. To perform shading,
  • we need to simulate how light is emitted from light sources, how
  • it interacts with 3d objects that are made of different
  • materials, and how it is finally observed by our eyes or the
  • sensor of a camera and creates an image of the world. So what
  • is light? Light is a quantized electromagnetic wave. It is quantized because it is made of particles called photons, which are packets of energy that travel as waves through space. And light travels extremely fast, at about 300 million meters per second. The wavelength of the visible range is approximately between 380 and 770 nanometers. Visible light that is monochromatic, which means that it only contains a single wavelength, corresponds to a spectral color: from violet, starting at 380 nanometers, over blue, cyan, green, yellow, orange, and ending at red at 770 nanometers. Visible light is
  • just one form of electromagnetic radiation. Gamma radiation has
  • the shortest wavelength and carries the highest energy per
  • photon. Gamma radiation is produced, for example, in a
  • nuclear explosion. Then we have X-rays, which can also pass through material; for example, we all know X-rays from medical applications. Then ultraviolet, infrared, and microwaves, which we all know well because they heat our food in the microwave oven; and finally, radio waves, with the longest wavelengths, which we use for radio and TV broadcast and wireless
  • communication. Light sources often emit a wide spectrum of different wavelengths. In particular, white light is a superposition of many wavelengths. Here you see the spectral power distribution of daylight. This figure shows the spectrum of the standardized CIE Illuminant D65 that can be used as a reference for the so-called white point, which we will see later. So daylight is composed of all these spectral colors. We can witness this in nature when we observe rainbows.
  • Rainbows are created when rain drops split the sunlight into
  • its individual spectral colors. What happens when light
  • interacts with objects? Let's assume light is emitted from a
  • light source in a homogeneous medium, such as air. Light is
  • traveling along straight lines through the world. Some light
  • rays may hit the eye directly. Others are reflected from an
  • object surface towards the eye. When a light ray hits the
  • surface, the photons are either reflected, transmitted or
  • absorbed. The energy of absorbed photons is converted into other
  • forms of energy; typically, this is heat. Depending on the material, photons with certain wavelengths are absorbed, which means that light changes its spectrum and consequently its perceived color. For example, this teapot appears red under white light because most photons that are not in the range of the red spectrum are absorbed. The process of reflection can repeat many times in a scene. For example, this ray hits the blue
  • cup and changes its color because of absorption. The
  • remaining light is reflected and hits the red teapot. The
  • spectrum is further reduced at this surface, and the remaining
  • part is reflected in the direction of the eye. If the
  • light rays reach the eye, receptors on the retina are
  • activated and an image is formed in the brain. There are two systems of light-sensory cells in humans. The first system consists of rods that react very sensitively to light, but do not
  • produce color vision. The second system consists of cones, which
  • are the color receptors. There are L, M and S cones, which
  • react to light at long, medium and short wavelengths. The
  • diagram shows the curves for normalized absorption of the
  • cones over the wavelengths. The center for blue-sensitive S cones is at 420 nanometers, for green-sensitive M cones at 534 nanometers, and for red-sensitive L cones at 564 nanometers. Because there are only three types of cones, the
  • human visual system cannot resolve the real spectral power
  • distribution of light. Instead, the true distribution is sampled
  • very sparsely by the response of the cones. Consequently, human observers can be tricked to some extent: different spectral power distributions can produce the same perceived color. In low-light situations, the cones are not sensitive enough, and only the rods are working. That is why we do not perceive color at night. As you have already seen in previous episodes of Shaders Monthly, we typically use the RGB color model as the output of
  • our fragment shaders. This means different colors are created by
  • the additive mixture of three primary colors, red, green and
  • blue. So for red, we have the weighting factors (1,0,0), which is the first dimension of this color space. For green, the weights are (0,1,0), which is the second dimension. And for blue, the weights are (0,0,1), which is the third dimension. The additive mixture of red and green gives (1,1,0), resulting in yellow. The additive mixture of red and blue gives (1,0,1), resulting in magenta. The additive mixture of green and blue gives (0,1,1), resulting in cyan. If all three weights are equal, this results in a shade of gray. When using arbitrary RGB weighting factors in the range from zero to one, many different colors can be created. The use of three primary colors is, to some extent, motivated by the three cone types in the human visual system. However, until now, these RGB weights have no connection to quantitative values in the physical world. To
  • establish this connection, we have to talk about the CIE RGB
  • color space. This color space was defined by the CIE in 1931
  • and is based on experiments with human participants. The participants were shown a target color, shown on the left here, and had the task to recreate the same color perception by tuning the additive mixture of three lights at 700 nanometers, 546.1 nanometers, and 435.8 nanometers. So the CIE experiments used real, physical red, green and blue monochromatic light sources of a defined wavelength. The question was if all possible colors could be reproduced by additive mixture of these three lights. The experiment showed: yes, three colors are sufficient, but some weights had to be negative.
  • So what does this mean? How are negative weights created? The
  • trick is simple. Light can be subtracted from the right side
  • by adding light to the left side. So if the participants use
  • additive light on the left side to match the two colors, this was counted as a negative weight. Here you see the resulting color matching functions for red, green and
  • blue. So, for example, if we want to create a monochromatic bluish cyan spectral color at 480 nanometers, we have to use RGB weights of minus 0.05 for red, 0.04 for green, and 0.15 for blue. To represent any spectral power distribution in the CIE RGB color space, we can multiply it with the color matching functions and integrate over the complete spectrum. This results in three scalar RGB values.
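Written out, with $P(\lambda)$ the spectral power distribution and $\bar r, \bar g, \bar b$ the color matching functions:

$$R = \int P(\lambda)\,\bar r(\lambda)\,d\lambda,\qquad G = \int P(\lambda)\,\bar g(\lambda)\,d\lambda,\qquad B = \int P(\lambda)\,\bar b(\lambda)\,d\lambda$$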
  • So the CIE RGB color space defines a three-dimensional space of all perceivable colors. This space
  • is visualized here in a special two-dimensional plot called the CIE chromaticity diagram. Here on the border, we see the
  • monochromatic spectral colors which enclose the space of all
  • possible colors. The colors that can be created with positive
  • weights are always in this triangle, spanned by the three
  • primary colors at 700 nanometers, 546.1
  • nanometers and 435.8 nanometers. For colors outside of this
  • triangle, negative weights must be used. Next, we introduce the
  • sRGB color space, which is different from CIE RGB. sRGB is the current standard for monitors, websites and images
  • without an explicit color profile. Therefore, if we write color values from our fragment shader into the framebuffer and display the result on a standard consumer monitor, the expected shader output is sRGB. The RGB values are in the range from zero to one. sRGB uses different primary colors. Here you see the
  • color gamut of sRGB. Consequently, the range of
  • displayable colors is smaller compared to CIE RGB. The standardized CIE Illuminant D65 defines the white point for sRGB. Fortunately, we do not lose the connection to physical quantities when using sRGB instead of CIE RGB: if gamma correction is performed beforehand, there is a linear transformation from sRGB to CIE RGB. What do I mean by gamma correction? Here you see the function to decode a color channel from sRGB to radiometrically linear sRGB values. In the figure, this function is plotted as a black curve. The colors that you see down here are sRGB values, so these color steps are linear in sRGB space. Interestingly, as you are probably watching this video on something that is close to sRGB, these values should appear approximately linear to you. However, if you measured the brightness with a radiometric light meter, it would not be linear. The reason is that our visual perception is not linear. The human visual system is better at distinguishing darker intensities than lighter ones, which is accounted for in the sRGB encoding. However, in rendering, it is important to work in a linear space, because otherwise we cannot simply add contributions from different light sources. Therefore, it is important that we apply gamma decoding and encoding in our shader. We could use the exact function that is denoted here, or we can approximate it. In the figure, the red curve is a simpler gamma decoding function which decodes the color value by raising it to the power of 2.2.
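A sketch of both decoding variants as GLSL helpers (the exact piecewise sRGB function and the power-2.2 approximation shown as the red curve):

```glsl
// Exact sRGB-to-linear decoding of one color channel (the black curve).
float srgbToLinear(float c) {
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

// Simpler approximation (the red curve): raise to the power of 2.2.
float srgbToLinearApprox(float c) {
    return pow(c, 2.2);
}
```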
  • Okay. This concludes what we need to know about representing colors in our shader. The main take-home message is that we do not need to simulate the light transport for every wavelength; computing the light transport for three primary colors is sufficient for most applications. Next we talk about
  • the rendering equation. The rendering equation was introduced in a SIGGRAPH paper by James Kajiya in 1986. This is more than 10 years after the Phong model was published. The rendering equation describes what happens when light is reflected at an opaque surface. Solving the rendering equation is very challenging, and many different approaches try to approximate the exact solution. As you see on this slide, there are several terms, such as solid angle, irradiance and radiance, that we need to introduce before coming back to the rendering equation. Let's start with the solid angle. In
  • school, we all learned about angles in a 2d plane. In 2d if
  • we want to know the angle that is covered by an object seen
  • from a certain point, we place a circle around this point and
  • project the 2d object onto the circle. Then the angle is
  • defined as the arc length s over the radius r of the circle.
  • Although the angle is a dimensionless quantity, it is specified in the unit radian. Obviously, for a unit circle with a radius of one, we can drop the division by the radius. For a half revolution in the unit circle, the arc length is pi, which is, in fact, the definition of pi. Consequently, for a full revolution in the unit circle, the angle is two pi. The
  • If we want to know the solid angle that is covered by an
  • object seen from a certain point, we place a sphere around
  • this point and project a 3d object onto the sphere, then the
  • solid angle is defined as the size of the projected area s
  • over the radius squared. The solid angle is a dimensionless quantity, but it is specified in the unit steradian. If we want to compute the solid angle, we can split the whole solid angle into infinitesimal solid angle pieces, denoted by dω here, and integrate over the whole domain. One way to solve this integral is to parameterize the solid angle in spherical coordinates. These coordinates are the polar angle theta and the azimuthal angle phi. dω is equal to an infinitesimal surface patch dS over the radius squared. We set the radius to one and compute the area of a surface patch dS by the multiplication of dθ and dφ. Now we see in the figure that the size of a dφ step depends on theta. When theta equals zero, the contribution is zero, and for theta equal to pi over two, the contribution is a full dφ step. We can take this into account and use a factor sin θ to scale the dφ step accordingly. Rearranging the factors gives the final result, dω = sin θ dθ dφ. Two quick examples: if we want to compute the solid angle of an entire sphere, the integration limits of phi go from zero to two pi, and of theta from zero to pi. We can solve this integral analytically, and the result is a solid angle of 4π steradians. If we do the same for a hemisphere, the result is 2π steradians.
  • rendering equation. First we have the radial flux, which is a
  • measure of power. It is equal to radian energy per time. The unit
  • is watt, which is joule per second. We can think of it this
  • way, every photon carries energy that is equal to Planck's
  • constant h times the speed of light, C, divided by the
  • wavelength lambda. So the radian flux is the sum of the photon
  • energies that are emitted per time in graphics. The radian
  • flux can be used as a typical parameter for a point light
  • source that emits light uniformly in all directions.
  • Another quantity that is often used in graphics is radian
  • intensity. It is equal to radian flux per solid angle, the unit
  • is one plus the radian in graphics, the radian intensity
  • is often used if a light source does not radiate equally in all
  • directions. For example, a spotlight, radian intensity is
  • the sum of the photon energies that are emitted per time in
  • solid angle, as I have tried to illustrate in this figure here,
  • let's consider an example that relates the two quantities,
  • gradient flux and radiant intensity that we have already
  • introduced. If we have a point light that emits light uniformly
  • in all directions with a certain flux, then the radiant intensity
  • can be computed by dividing the total flux by the total solid
  • angle. We know from the previous slide that the total solid angle
  • for a sphere around the point light is four pi irradiance. So
  • for a point light, we get radiant intensity is equal to
  • radian flux divided by four pi
  • Another important quantity is irradiance. It is equal to radiant flux per area. In other words, irradiance is the sum of the photon energies that are received by a surface per time and per area. The unit is watt per square meter. The received
  • flux can come from all directions within the hemisphere
  • above the surface. As an example, we compute the irradiance produced by a point light source for this small blue surface dA of infinitesimal size. Irradiance is flux per area. We know the total amount of flux that is emitted by the point light in all directions, but how much of this flux is received by the blue surface? Well, we have already computed the radiant intensity, which was flux per solid angle. We put this result into the equation for the irradiance. Now we need to compute the solid angle. The solid angle dω is equal to the projected area dS divided by the radius squared. For the radius, we choose the distance from the blue surface to the point light; then we don't have to project the surface onto the sphere, at least not if we assume that it is of infinitesimal size. But we have to consider that the surface area that is seen from the direction of the point light appears shorter. If theta is the angle between the surface normal and the incident light direction, then the required foreshortening factor is cosine theta. This means, if the incident light direction is perpendicular to the surface, theta is zero, cosine theta is one, and the visible area is largest. But if the light comes from the side, we have to consider the cosine theta foreshortening factor. If we put this result into the equation for the irradiance, we see that the factor dA cancels: the irradiance caused by a point light is the radiant intensity of the light times cosine theta divided by the squared distance between the surface and the light. In the case of a point light, we can further replace the radiant intensity with the radiant flux divided by 4π, as we have computed in the last example. To summarize this result: firstly, the irradiance caused by a point light decreases quadratically with distance. Secondly, the irradiance is largest for a light direction perpendicular to the surface, and decreases with the cosine of the angle of incidence.
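Compactly, with $I$ the radiant intensity, $\Phi$ the radiant flux, and $d$ the distance to the light:

$$E = \frac{I \cos\theta}{d^2} = \frac{\Phi \cos\theta}{4\pi d^2}$$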
  • The last quantity that we need for the rendering equation is radiance. If we observe a surface patch and want to measure its brightness, we are not interested in the total flux per area, but only in the fraction that is going in a certain direction. For example, we are only interested in the part that is going in the direction of our eye or camera. So we are interested in the flux per area per solid angle. However, when we say area here, we mean the area in flux direction. If a surface area is not oriented in the flux direction, its contribution, in terms of area, is smaller. We have to multiply the surface area with cosine theta to get the contributing area in the flux direction. Radiance is flux per solid angle per projected area. An important property of radiance is that it does not change with viewing distance, at least if we do not consider participating media such as fog or smoke. When we move a camera or our eye towards the surface, the solid angle per pixel gets larger, but on the other hand, the area that we observe per pixel gets smaller. These two effects cancel each other out, and the radiance does not change. This is something that we observe also in our everyday lives. Let's say we look at a wall and move towards it; then the brightness does not change. How can we compute the irradiance produced by incoming radiance? Irradiance is flux per area, so received flux can come from all directions within the hemisphere above the surface. Radiance is flux per solid angle per projected area. If we put the definition of irradiance into this equation, we get: radiance is equal to dE over (dω times cosine theta). Rearranging the equation gives: dE is equal to radiance times cosine theta times dω. dE is a contribution to the irradiance by the radiance received from a certain solid angle dω. To compute the complete irradiance E, we have to integrate over the hemisphere. The irradiance for the surface patch is given by the integral over the incoming radiance L times cosine theta, where theta is the angle of incidence. Let's remember this result and go back to the rendering equation.
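In symbols, the irradiance from incoming radiance and the rendering equation it leads to are:

$$E = \int_{\Omega} L_i(l)\,\cos\theta \, d\omega, \qquad L_o(x, v) = L_e(x, v) + \int_{\Omega} f_r(l, v)\, L_i(x, l)\, \cos\theta \, d\omega$$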
  • The rendering equation calculates the outgoing radiance Lo in the direction v for a surface patch at location x with normal n. The outgoing radiance is the sum of two terms. The first term is the emitted radiance Le in the direction v. This term is only larger than zero if the surface is a light source that produces radiant flux itself. The second term is the integral over the hemisphere. It is integrating over the contributions of all incoming radiances Li from the hemisphere above the surface patch. Each incoming radiance Li from direction l generates an irradiance contribution dE. This is the same relationship between radiance and the resulting irradiance contribution that we have discussed a moment ago on the previous slide. Again, we have the cosine theta factor, where theta is the angle between the surface normal and the direction of the incoming radiance. Consequently, radiance with a larger angle theta contributes less to the irradiance of the surface patch. Each infinitesimal irradiance contribution dE is now multiplied with the BRDF, which is short for bidirectional reflectance distribution function. This is a function that describes the material property of the surface patch. It defines how much radiance is emitted by the patch in direction v if it receives an irradiance contribution from direction l. Of course, the BRDF and the radiance values are also dependent on the wavelengths, so we have to evaluate the rendering equation for each one of our three RGB primary colors. The rendering equation describes the full light transport in a scene. This means that the outgoing radiance Lo of a surface patch contributes this radiance as incoming radiance Li at another surface patch. So all surface patches in the scene are connected via the radiance; they are contributing to each other. This is why the rendering equation is very hard to solve. In practice, the BRDF describes
  • the angle-dependent spectral reflection factor for a surface by the ratio of the outgoing radiance Lo and the incoming irradiance E. Both the incoming direction l and the outgoing direction v can be parameterized with spherical coordinates theta and phi. Consequently, the BRDF is a four-dimensional function, if you do not consider the dependency on the wavelength. By specifying this 4D function, the reflection properties of a surface are described precisely. The BRDF of a material could be measured and stored in a 4D table. However, this would require a significant amount of memory. Furthermore, we cannot edit materials easily with such a representation. Consequently, for most cases, parametric BRDF models are used. For example, the Phong model and the Blinn-Phong model, which we want to implement today in our shaders, are parametric BRDF models. Here you see different materials. The metal material on
  • the left is almost a perfect mirror. For such a material, the angle of incidence is equal to the angle of reflection. For the second metal sphere, the material is not perfectly smooth. For such a material, we assume that there is some variation in the orientation of the micro facets. In graphics, the term micro facets is used for tiny surface patches that are much smaller than a pixel. Each micro facet behaves like a perfect mirror. It is assumed that the orientations of the micro facets follow some distribution around the macroscopic surface normal. So, statistically, these small variations in orientation cause the reflected light to be distributed around the macroscopic reflection direction. If the roughness of the surface increases, the scattering of light around the reflection direction becomes larger. For metals, photons are either reflected or they enter
  • the material and are absorbed. This is dependent on the wavelength; therefore, the reflected light might be colored. Examples are copper or gold. The behavior that I have just described is true for metals. For dielectric materials, the reflection model is a bit different. As for metals, a certain fraction of photons are reflected at the surface, and we call this part the specular reflection. For dielectric materials, the specular reflection does not depend on the wavelengths, so it is not colored. Photons that are not reflected enter the material. Here, they are either absorbed or they are scattered in random directions below the surface. At some point, they might come out of the surface again and exit in random directions. We call this part the diffuse reflection. The diffuse reflection depends on the wavelength and consequently is colored for dielectrics. Okay, now we are ready for the
  • Phong BRDF. The Phong BRDF has two parts, one for diffuse and one for specular reflection. The diffuse part is a simple constant, denoted by kd. It does not depend on the light direction l or the view direction v. Note that the light direction l is defined to go towards the light, so the yellow arrow that goes from the light towards the surface is minus l. In this figure, the reflected direction r has the same angle to the surface normal n as the light direction. So if I change the light direction here, the reflection direction changes accordingly. The specular part is the dot product of the view direction v and the reflection direction r, raised to the power of ns and weighted with a constant ks. If I increase ks, here you see that the specular part appears as a lobe around the reflected direction. The dot product of the view direction v and the reflection direction r is equal to the cosine of the angle between these vectors. If the view direction matches the reflection direction, the contribution of the specular lobe is largest, and it reduces when the angle gets larger. The exponent ns controls the falloff of the specular lobe. For a higher exponent, the falloff is faster. As we have learned, a higher exponent corresponds to a smoother surface. The exponent ns is typically referred to as shininess.
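A minimal GLSL sketch of this BRDF as just described (function and parameter names are illustrative, not taken from the video's code):

```glsl
// Phong BRDF: constant diffuse part kd plus a specular lobe around the
// reflection direction, with shininess exponent ns and weight ks.
vec3 phongBRDF(vec3 lightDir, vec3 viewDir, vec3 normal,
               vec3 kd, vec3 ks, float ns) {
    vec3 r = reflect(-lightDir, normal);      // reflection of the light direction
    float rDotV = max(dot(r, viewDir), 0.0);  // cosine of the angle between r and v
    return kd + ks * pow(rDotV, ns);          // diffuse + specular lobe
}
```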
  • Okay, great, let's implement the Phong BRDF. As we have seen, the exact evaluation of the rendering equation is difficult and computationally demanding. The equation becomes much simpler if the light exchange between surfaces is neglected and only the so-called direct light is taken into account, which is emitted from a number of light sources. For each light source, we compute the irradiance and multiply it with the BRDF. Then we can sum up the individual light source contributions. Or, if the emitted light is zero and we have only one light, we simply have: outgoing radiance equals the BRDF multiplied with the irradiance. Let's put in the Phong BRDF and the irradiance of a point light; the cosine theta term can be replaced with the dot product of the light direction and the surface normal. Our implementation starts from the example from episode number three; the link is provided in the video description. I will use the GSN Composer at gsn-lib.org as a shader editor here, but you can use any other shader editor as well. Links to...
  • What'd you think of that?
  • I thought it was cool.
  • Yeah.
  • It's very thorough. I kind of want to go through the video again on my own time, just to sort of go over it a little bit slower. But, like, some of his things, like when he demonstrated how the Phong shaders work, like that last demonstration he did, I thought that was really interesting.
  • You didn't mention the garage here, though.
  • ...therefore, we perform linear interpolation between the constant F0 for dielectrics and the base color for metals.
  • [Music]
  • Hello, everybody. Welcome to Shaders Monthly. Today we talk about the Cook-Torrance microfacet BRDF. This is a reflection model that was introduced by Robert Cook and Kenneth Torrance in 1981. Micro facets are imaginary, tiny surface patches that are much smaller than an output pixel. This model allows describing the behavior of the macroscopic surface patch mathematically. The microfacet model approximates the real-world physical material properties more closely than the Phong or Blinn-Phong reflection model, which we have discussed in episode number four. Blinn-Phong shading used to be the standard in real-time rendering because it is fast to compute. Nowadays, the computing power has increased, and the computer graphics industry has almost completely moved to Physically Based Rendering, or PBR for short. Today, the microfacet model is used everywhere, in offline as well as real-time rendering. Therefore, it is important for us to know the underlying theory. In a tutorial presented at SIGGRAPH in 2012, Brent Burley, who works for Walt Disney Animation Studios, presented a user-friendly interface to the microfacet BRDF. In this interface, all parameters are in the interval from zero to one. This approach is a current de facto industry standard to represent a material, and is known as the metallic-roughness workflow. As the name suggests, the two most important parameters are the metallic and the roughness parameters. For metals such as gold, silver, copper and so on, the metallic parameter is one; for dielectric materials, which are also called non-metals, such as plastic, wood and rubber, the metallic parameter is zero. The roughness parameter controls the distribution of the orientations of the micro facets. The micro facets are not real geometry,
  • but are only evaluated statistically. In the microfacet model, each tiny micro facet surface behaves like a perfect mirror. For very smooth, polished surfaces, the roughness is zero: all micro facets point in the same direction, and the material behaves like a perfect mirror. If the roughness parameter is increased, the orientations of the micro facets are more random, and light is collected from a larger range of incoming light directions. In Disney's model, the metallic parameter could also be set to a fractional value, for example, 0.5. This would mean that the material behaves 50% like a metal and the other 50% like a dielectric material, which is physically not plausible. Therefore, we should not use values other than zero and one for the metallic parameter. If we stay within the physical world, it is much easier to produce a realistic material that works well in different lighting conditions. However, in textures or mip maps, the resolution is limited, and we might have a mix of materials within a single pixel. In these situations, a fractional value for the metallic parameter makes perfect sense. Another parameter is called base color. It is an RGB
  • vector. For dielectrics, it contains the RGB values for the albedo, which is the amount of diffuse reflection in the interval from zero to one. For metals, this parameter contains the RGB values for the Fresnel reflectance, which we have not talked about yet, but it will be introduced in a minute. Then there is an additional parameter called reflectance, which is used only for dielectrics. It also contains a Fresnel reflectance, but this time for dielectric materials. This parameter is called specular in Disney's original tutorial notes, but I like the name reflectance much more. Whatever it is called, the parameter controls the amount of dielectric specular reflection. In contrast to metals, for which the specular reflection at a perpendicular incident angle can reach up to 100%, for dielectrics it is much less: it ranges approximately from zero to 16%. A quadratic remapping function is used to map zero to 16% to a user-friendly interval from zero to one.
  • several other parameters for subsurface scattering, an
  • additional clear coat layer and other advanced features that we
  • are not discussing today. Now let's dive more deeply into the
  • theory. Our microface BRDF is used in combination with the
  • rendering equation. As a refresher, here is a slide from episode number four. The rendering equation calculates the outgoing radiance Lo in the direction v for a surface patch at location x with normal n. The outgoing radiance is the sum of two terms. The first term is the emitted radiance Le in the direction v. This term is only larger than zero if the surface is a light source that produces some radiant flux itself. The second term is the integral over the complete solid angle of the hemisphere above the surface. We integrate over the contributions from all incoming radiances Li from the hemisphere above the surface patch. Each incoming radiance Li from direction l generates an irradiance contribution dE. Each infinitesimal irradiance contribution dE is multiplied by the BRDF, which is short for bidirectional reflectance distribution function. The BRDF defines how much radiance is emitted by the surface patch in direction v if it receives an irradiance contribution from direction l. This way, the BRDF describes the material properties of the surface patch. This is where we need to insert the microfacet BRDF that we
  • BRDF. A quick introduction of the use notation before we talk
  • about it, V is the outgoing view direction. L is the light
  • direction it points from the surface location towards the
  • lot. Consequently, minus L is the incoming vector from the
  • light to the surface. N is the surface normal. H is the halfway
  • vector. It has a unit vector at the half angle between the view
  • and the light direction. The vrdf is a function of the view
  • and the light direction. It has a diffuse part and a specular
  • part. The constant diffuse part, rho, d, divided by pi, is known
  • from the normalized form vrdf that we have introduced in
  • episode number four. For the specular part, we use the cook
  • tolerance microfacet model, the cook Torres microface model has
  • three terms, Fresnel reflectance, the normal
  • distribution function and the geometry term. We will introduce
  • these terms one by one in the following slides. Let's start
  • with Pranav reflectance. Here you see the image of a swimming
  • pool with a very flat water surface. In the bottom part of
  • the image, we can see the ground of the pool, but here in the
  • upper part, we don't see the ground, but only the reflected
  • sky. Why do we observe this effect? The effect occurs
  • because the ratio of transmitted and reflected light is not
  • constant. It depends on the angle of incidence and the
  • refractive indices of the involved materials. If we look
  • into the water from above, only a small part of the light from the sky is reflected in our view direction at the water surface; most of the light is transmitted, and we see the ground of the pool. The more we look from the side and observe the surface at increasingly grazing angles, the larger becomes the reflected part: the more light is reflected, the less light is transmitted. The setup is illustrated in this figure. We have the interface between two materials with two different indices of refraction, eta one and eta two. In the swimming pool example, we observed the interface between water and air. Because water is an optically denser medium than air, its index of refraction is larger. The angle of incidence of the light is defined relative to the surface normal and is denoted by theta one here. For a perfectly flat surface, the law of reflection tells us that the reflected direction has the same angle as the incoming ray. Therefore, this angle here between the normal and the reflected direction is equal to theta one as well. Let's have a look at the angle of the
  • transmitted ray, theta two. During the transition of the light from an optically thinner to an optically denser medium, the light ray deviates towards the normal; consequently, theta two is smaller than theta one for such a situation. Mathematically, the relationship between the angle of incidence theta one and the angle of the transmitted ray theta two is given by Snell's law: eta one times sine of theta one equals eta two times sine of theta two. Given eta one, eta two and theta one, we can solve for theta two. With theta two, we can compute the fraction of the light that is reflected using the Fresnel equations shown here; the transmitted fraction is then given by one minus the reflected part.
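Snell's law and the transmitted fraction, written out:

$$\eta_1 \sin\theta_1 = \eta_2 \sin\theta_2, \qquad T = 1 - R$$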
  • Let's look at Fresnel reflection for dielectrics. Light photons are either reflected, transmitted or absorbed, with an angle-dependent relative frequency. The reflected part is scattered at the micro facets; macroscopically, this creates the specular part of the reflection. In opaque materials, the transmitted part is randomly deflected below the surface, partially absorbed, and emitted into random directions. Macroscopically, this creates the diffuse part of the reflection. The more light is reflected, the less light is transmitted, and the diffuse part becomes smaller. The likelihood of absorption inside the medium is wavelength dependent; therefore, the diffuse part is colored and the specular part is not. For metals, the complete transmitted fraction is absorbed. This means there is no diffuse reflection for metals. The reflected part depends on the wavelength, so the specular part is colored. On
  • this slide, we see the index of refraction for several dielectric materials. The table also contains the F0 values in percent. F0 is the reflectance for perpendicular incidence of light, when the incident angle theta one is equal to zero. If you insert theta one equals zero into the Fresnel equations, all the cosine terms evaluate to one, and the equations become much simpler, as you see here. If we then put in 1.0 for eta one, for vacuum or air, and set eta two to 1.5 for glass, we can compute a value of 4% for F0. This means that at the transition from air to glass at normal incidence, only 4% of the light is reflected and 96% is transmitted.
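At normal incidence the Fresnel equations collapse to:

$$F_0 = \left(\frac{\eta_1 - \eta_2}{\eta_1 + \eta_2}\right)^{\!2} = \left(\frac{1.0 - 1.5}{1.0 + 1.5}\right)^{\!2} = 0.04$$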
  • On this slide, we have listed the F0 values for several metals. As discussed, these are wavelength dependent, so we have three values, for the red, green and blue channels. In the first column, the reflectance values are given in linear space, which we can use directly in our shader code. The second column gives the corresponding sRGB values. As we have discussed in episode number four, it is important to perform gamma correction when we convert between these two representations. When we want to apply the Fresnel reflectance in
  • the context of the microfacet model, we need to be careful which angle we use for theta one. For a perfectly flat surface, theta one is the angle between the incident light direction and the normal of the surface, or, because of the law of reflection, theta one is also the angle between the view direction and the surface normal. However, in the micro facet model, we assume that the surface is built from tiny surface patches that are much smaller than an output pixel. Each micro facet behaves like a perfectly flat mirror surface. This means we only get a light contribution in the view direction if the normal of the micro facet is pointing exactly in the direction of the macroscopic halfway vector. Consequently, the Fresnel reflectance must be computed for those micro facets which have the macroscopic halfway vector as surface normal. Therefore, for the micro facet model, theta one is the angle between the incident light direction and the macroscopic halfway vector, or, because of the law of reflection, the same angle can be found between the view direction and the macroscopic halfway vector. In the paper by
  • Cook and Torrance from 1981, the Fresnel reflectance for eta one equals one is calculated by the shown equations. We see that the scalar product of the view direction and the macroscopic halfway vector is used. Though these equations do not use trigonometric functions like sine or cosine, they are nevertheless computationally expensive. Therefore, the faster Schlick approximation is used in many implementations. The figure shows a comparison between Schlick's approximation and the true equations for glass. For theta one equals zero, we get 4% reflectance; at a grazing angle of incidence of 90 degrees, the reflectance is 100%. We see that the approximation is not perfect for the in-between angles, but it follows the true graph reasonably well. This was the Fresnel reflectance term.
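A GLSL sketch of Schlick's approximation in its standard form (vDotH is the dot product of the view direction and the halfway vector):

```glsl
// Schlick's approximation of the Fresnel reflectance.
// f0 is the reflectance at normal incidence.
vec3 fresnelSchlick(vec3 f0, float vDotH) {
    return f0 + (vec3(1.0) - f0) * pow(1.0 - vDotH, 5.0);
}
```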
  • The next term is the normal distribution function. The distribution of the micro facet normals changes depending on the roughness of the surface. For a perfectly smooth, polished surface, all the micro facet normals point in the same direction as the macroscopic normal. For a rougher surface, the micro facet model assumes that the orientations of the micro facets follow some distribution around the macroscopic surface normal. So, statistically, these small variations in orientation cause the reflected light to be distributed around the macroscopic reflection direction. If the roughness of the surface increases, the scattering of light around the reflection direction becomes larger. There are several suggestions for the normal distribution function in the literature. All three options that are shown here are parameterized by alpha, where alpha is equal to Rp squared and Rp is our user-friendly perceived roughness value in the interval from zero to one. Furthermore, we see that all three options for the normal distribution function depend on the scalar product of the normal and the halfway vector, which we have also used for the specular lobe in the Blinn-Phong model in episode number four. In the figure on the right, we compare the three options for the same roughness value. The x axis represents the angle between the normal and the halfway vector. The yellow curve is the GGX distribution proposed by Walter and co-authors in 2007. Nowadays, it is the predominant model because of the softer tails that can also be observed in BRDFs captured from real-world materials.
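A GLSL sketch of the GGX normal distribution function in the form commonly given for it (nDotH is the dot product of the normal and the halfway vector):

```glsl
// GGX (Trowbridge-Reitz) normal distribution function.
// alpha is the perceived roughness squared.
float distributionGGX(float nDotH, float alpha) {
    float a2 = alpha * alpha;
    float d = nDotH * nDotH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265 * d * d);
}
```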
  • The last term of the microfacet model is the geometry term. Depending on the light's incident direction and the observer's viewing direction, shadowing and masking effects occur at the micro facets. Blinn, as well as Cook and Torrance, assume the micro facets are V-shaped and derive a geometry factor from this model. There can be either no interference, a masking effect for grazing view directions, or a shadowing effect for grazing light directions. Another well-known model for the geometry term was proposed by B. Smith in 1967. The Smith geometry term is created by multiplying two factors that are computed with the same function, G1; the G1 function is evaluated for the light direction and the view direction. Different G1 functions that are mentioned in the paper by Walter and co-authors are given here. The fast approximation by Schlick for the GGX variant is given by this equation. Here you see the graphs for the mentioned G1 functions (Beckmann, GGX and Schlick-GGX) for different roughness values. For a roughness value of zero, the geometry term is one, which means it has no effect. For larger roughness values, we get a stronger influence of this term, which is the expected result, because masking and shadowing become more likely for rougher surfaces.
  • Let's implement the Cook-Torrance microfacet BRDF. As we have seen, the Cook-Torrance microfacet BRDF that we use in our specular part is quite general; we can use different options for the three terms. Let's take the Schlick approximation for the Fresnel reflectance, the GGX normal distribution function, and the Smith geometry term with the Schlick-GGX G1 function. I use the GSN Composer at gsn-lib.org as a shader editor, but you can use any other shader editor as well.
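Putting the pieces together, a hedged GLSL sketch of the specular term with those choices, reusing the fresnelSchlick and distributionGGX helpers sketched above (the choice k = alpha/2 for the Schlick-GGX G1 is an assumption; implementations vary):

```glsl
// Schlick-GGX G1 factor of the Smith geometry term.
float g1SchlickGGX(float nDotX, float k) {
    return nDotX / (nDotX * (1.0 - k) + k);
}

// Cook-Torrance specular term: D * F * G / (4 (n.l)(n.v)).
vec3 cookTorranceSpecular(vec3 n, vec3 v, vec3 l, vec3 f0, float alpha) {
    vec3 h = normalize(v + l);              // halfway vector
    float nDotL = max(dot(n, l), 0.001);
    float nDotV = max(dot(n, v), 0.001);
    float D = distributionGGX(max(dot(n, h), 0.0), alpha);
    vec3  F = fresnelSchlick(f0, max(dot(v, h), 0.0));
    float k = 0.5 * alpha;                  // assumed Schlick-GGX parameter
    float G = g1SchlickGGX(nDotL, k) * g1SchlickGGX(nDotV, k);
    return D * F * G / (4.0 * nDotL * nDotV);
}
```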
  • Now... right on time.
  • Any questions or thoughts about those two videos?
  • Did he say, like, roughness between zero and one doesn't exist? Like at the start of that second video; that part didn't click for me.
  • He said it's either metallic or it's not. So we don't have objects that are half metallic and half dielectric.
  • I think the roughness can vary continuously.
  • Thanks for today.
