Outline for Today
Shapes (Chapter 6), Ray Tracing Videos
Administration
Today
- if you didn't submit your Assignment 1 last week, you have until 23:59 today to submit it (with a late penalty)
- exams will be graded before our next Tuesday meeting (watch for an email from gradescope.com) and will be discussed then
- there is an opportunity to give me two kinds of feedback: your formative evaluation of the class so far (under the Exams topic on UR Courses), which is anonymous, and your evaluation of the fairness of the midterm (under the Participation topic on UR Courses), which is not anonymous, since it counts towards participation
- Videos by Eric Haines:
For Next Meeting
Wiki
Link to the UR Courses wiki page for this meeting
Media
Transcript
Audio Transcript
Okay, I think that's working.

All right, so we're at meeting 15 of 26, so it's our first meeting in March. I haven't heard that expression for March, "in like a blank and out like a blank", for a while; they're different blanks. So if March starts out being mild, it's going to end up being not very nice, and if it's not very nice to begin with, it'll be nice by the end of the month. I don't know whether that's still accurate in this era of climate change, but it just came to mind; maybe there are other ways to update that old saying.

Anyway, I noticed there were some people who hadn't submitted Assignment 1. If you want to get a mark, with a late penalty, submit it by midnight tonight; after that, you won't be able to get a mark for it.

The exams will be graded before our next Tuesday meeting, so you'll get an email from gradescope.com. I'm scanning the exams, so I have the PDFs, and you'll get a copy of the scanned PDF with typed notes, so it may be a little more legible than my handwriting and perhaps a bit more organized. You'll get that email before next Tuesday, and if you have questions, you can use Gradescope to ask about the marks and so on. It has a lot of nice features, I think.
There's an opportunity to give me two kinds of feedback; I'll just point you to them down here. The formative feedback at midterm, which is feedback for me, is under Exams. I'm not sure when I set the cutoff time for this; let me check. Maybe I'll make it the 6th instead, so Thursday. And then the other one is feedback about the midterm exam; I'll make that one Friday. The formative feedback for me is anonymous, but the feedback about the midterm is not anonymous, because it's part of participation.
So when I was making up the exam questions last week, I came across some videos by Eric Haines, an engineer at NVIDIA whom I met many years ago at the SIGGRAPH conference; I associate him with the sphereflake. I see that he has created a set of videos about ray tracing that I thought might be interesting to look at. So if I turn the lights down, we can watch some videos, and I'll see if I've done this properly.
Hi, my name is Eric Haines, and I'm an engineer, and we're going to do a series of lectures about ray tracing. I like to open with a quote, and this is from David Kirk, who's an NVIDIA Fellow: there's an old joke that goes, "ray tracing is the technology of the future, and it always will be." Well, the future is now here: as of 2018, a single NVIDIA Turing card can do real-time ray tracing.

Let's start with the basics.
What's a ray? Well, a ray is defined by just two things: it has an origin, some point in space (x, y, z), and a direction. Ray casting is the idea of taking that ray, shooting it out in that direction, and finding what gets hit. This is not in itself a rendering algorithm; it's just a basic tool in the toolbox. You can use it however you want, for checking radiation or all kinds of other things; we use it for rendering. So ray casting is just shooting a ray out and seeing where it hits something. You can also use ray casting between two points: given a point A and a point B, you shoot a ray and see if anything is in between. This could be used, for example, to see if there's a shadow. If you already have the two points, a light and a surface, you can shoot that ray, and if anything gets in the way, you know that point B is in shadow.
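As a concrete aside for these notes (not code from the video): a ray really is just an origin plus a direction, and both uses described above, casting for the nearest hit and casting between two points for a shadow test, fit in a few lines. The names `Ray`, `hit_sphere`, and `in_shadow` are invented for this sketch, and the only shape supported is a sphere.

```python
import math
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple      # (x, y, z) starting point
    direction: tuple   # (x, y, z) unit vector

def hit_sphere(ray, center, radius):
    """Return the distance t along the ray to the nearest sphere hit, or None."""
    ox, oy, oz = ray.origin
    cx, cy, cz = center
    dx, dy, dz = ray.direction
    # Vector from sphere center to ray origin
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c           # direction is unit length, so a == 1
    if disc < 0:
        return None                  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None   # ignore hits behind the origin

def in_shadow(point_b, light_pos, blockers):
    """Shadow test: shoot a ray from B toward the light; any hit in between means shadow."""
    delta = [l - p for l, p in zip(light_pos, point_b)]
    dist = math.sqrt(sum(d * d for d in delta))
    direction = tuple(d / dist for d in delta)
    ray = Ray(point_b, direction)
    for center, radius in blockers:
        t = hit_sphere(ray, center, radius)
        if t is not None and t < dist:   # blocker sits between B and the light
            return True
    return False
```

Here a blocker is just a `(center, radius)` pair; a real ray caster would loop over whatever shapes the scene contains.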
Ray casting is also a way that you can actually make an image. Think of the screen like a screen door, and think of each little square on that screen door that you're looking through as a pixel. You want to know what's visible at that pixel, so the ray shoots through the pixel, goes out into the environment, and hits a bunch of things, and whatever is closest is what you're going to see through that pixel. Then you can shoot rays towards the light, for example, and see if you have hit anything in between; if you have, your point of intersection is in shadow, and otherwise it's lit. This is actually the first use of ray tracing in a computational form, by Arthur Appel back in 1968: he traced rays towards lights to get shadows, and his output device was a pen plotter, a pen that draws on a big sheet of paper.
Ray tracing really takes off in 1980 with the seminal paper by Turner Whitted. It covers a lot of interesting basics that we still do nowadays, like anti-aliasing and bounding volume hierarchies, which I'll talk about in a minute. But basically he has this idea: how can I get reflections and refractions and shadows, and how can I do it in a recursive way? That's the big breakthrough. So let's show how that works.
Here we're shooting a ray from the eye again, and it hits a piece of glass. The glass is nice and shiny, so it's reflective, and it's also refractive. We might first shoot a shadow ray towards the light; OK, good, that point is illuminated. But we also spawn off two more rays, one in the reflection direction going down below, and one in the refraction direction going through the glass. We can follow both of these rays; I'll ignore the reflection one going off the screen and follow the refraction one. Then we shoot another shadow ray and again see the effect of the light. And again we spawn more rays: there's an internal reflection ray going upwards (we'll ignore how it bounces further and shoots up more rays) and we'll just follow the refraction ray going off to the right. Going off to the right, it hits that box. Again we can shoot a shadow ray, and that one actually is blocked, so we know that the box is in shadow. With that, we now take all those contributions from all those intersection points, the two on the glass and the one on the box, and we add them all up, and we get a color at the eye, a color for the pixel.
So that's Whitted-style ray tracing. It's really good for things like sharp shadows and reflections and refractions. The advantage of this kind of rendering algorithm is that you can do it from the eye: you know that you're hitting a mirror surface, the ray goes to the light, and you need to cast only a very few rays, versus if you had shot all the rays from the light and had them bounce around, where almost all of those rays are never actually going to reach the eye.
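The recursive idea can be sketched as runnable code. This is a minimal illustration, not Whitted's implementation: spheres only, one point light, mirror reflection but no refraction, and the scene representation (dicts with `center`, `radius`, `color`, `reflect` keys) is invented for the example.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a unit-direction ray, or None."""
    L = sub(origin, center)
    b = 2.0 * dot(direction, L)
    c = dot(L, L) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, spheres, light, depth=0, max_depth=3):
    """Whitted-style recursion: shadow ray for direct light, then recurse
    in the mirror direction for reflective surfaces."""
    best = None
    for s in spheres:
        t = hit_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    if best is None:
        return (0.0, 0.0, 0.0)                 # missed everything: black
    t, s = best
    p = add(origin, scale(direction, t))       # intersection point
    n = norm(sub(p, s["center"]))              # surface normal
    to_light = norm(sub(light, p))
    light_dist = math.sqrt(dot(sub(light, p), sub(light, p)))
    # Shadow ray: lit only if no sphere sits between p and the light
    lit = all(
        not ((h := hit_sphere(p, to_light, o["center"], o["radius"])) is not None
             and h < light_dist)
        for o in spheres)
    diffuse = max(dot(n, to_light), 0.0) if lit else 0.0
    color = scale(s["color"], diffuse)
    if depth < max_depth and s["reflect"] > 0.0:
        # Spawn a reflection ray and add its scaled contribution (the recursion)
        r = sub(direction, scale(n, 2.0 * dot(direction, n)))
        color = add(color, scale(trace(p, r, spheres, light, depth + 1, max_depth),
                                 s["reflect"]))
    return color
```

The `max_depth` cutoff is an assumption added so the recursion terminates; real tracers bound it the same way.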
The next breakthrough as far as ray tracing goes is Cook's stochastic ray tracing, sometimes called distribution ray tracing, in 1984. The idea is that instead of shooting just a single reflection ray, for a glossy surface, something with kind of a sheen, you shoot a sort of burst of rays instead. You can also get cool effects like motion blur. The idea is just that instead of shooting one reflection ray you're shooting a bunch, or instead of shooting one shadow ray you're shooting a bunch, to try to get a soft shadow.
With stochastic ray tracing, you shoot a ray out; it hits the box, and then you shoot one ray at the area light. Our sun is now a little bit larger, to give it some actual area, just like the real sun, and we pick some arbitrary point on that sun. This ray made it all the way to the light. Here are two more rays: one hit, one missed. So now we know at this point that two thirds of our rays are hitting the area light, and we can say, OK, the shadow is somewhat soft, two-thirds illuminated. We can shoot more and more rays and get a better answer. So this is stochastic ray tracing, and the idea, like I say, is just shooting bursts of rays. It's more expensive: you have to shoot more rays, and the more rays you shoot, the better the answer you get. But it's often worth the cost.
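The soft-shadow estimate described here is easy to sketch: shoot n shadow rays at random points on the area light and count how many reach it. The disk-shaped light and the caller-supplied `occluder_test` function are simplifications made for this sketch, not details from the video.

```python
import random

def soft_shadow_fraction(point, light_center, light_radius, occluder_test, n_rays, seed=0):
    """Estimate the lit fraction of an area light by shooting shadow rays
    at random points on a disk-shaped light. occluder_test(point, sample)
    returns True when something blocks the segment between them."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # Pick a uniform random point on the disk (rejection sampling)
        while True:
            u, v = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if u * u + v * v <= 1:
                break
        sample = (light_center[0] + u * light_radius,
                  light_center[1] + v * light_radius,
                  light_center[2])
        if not occluder_test(point, sample):   # this shadow ray reached the light
            hits += 1
    return hits / n_rays   # e.g. 2/3 lit means a two-thirds-illuminated penumbra
```

More rays give a less noisy fraction, exactly the cost/quality trade-off described above.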
In 1986 came sort of the next theoretical leap, Kajiya-style diffuse interreflection. This is a classic paper, which we'll come back to in a later lecture, called "The Rendering Equation". Basically his idea is: what if we say the sky is the limit? We're just going to shoot rays out from the eye, and each ray hits something, and we don't necessarily know which way it's going to reflect. If it's a mirror, sure, we know: it reflects in the mirror direction. But say it's something like unglazed pottery or cement; then you don't really know which way the light is coming from. Well, the light is coming from all kinds of different directions. So you could shoot more rays in different directions, but with path tracing, you shoot just one ray in one direction and follow it along a path. Let's show you what that looks like.
Here's path tracing, where we shoot one ray through our pixel, at one particular location within the pixel. It hits this box, and we shoot a secondary ray in some direction, and it goes off to the sky. Say we shoot another ray, and that one happens to hit a light; that's actually going to be a fairly important contribution, a lot of direct illumination from the sun. Notice that we've also put the pixel sample in a slightly different location within the pixel. This gives you anti-aliasing kind of for free, because by moving the samples around within the pixel, you're sampling the whole pixel box instead of just the center of the pixel.
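The per-pixel loop just described, jitter the sample position inside the pixel and average the path contributions, can be sketched as follows. `radiance(x, y)` stands in for tracing one full path through the image-plane point (x, y), which is beyond this snippet.

```python
import random

def render_pixel(px, py, radiance, samples, seed=0):
    """Path-tracing outer loop for one pixel: jitter the sample position
    inside the unit pixel square and average the returned radiance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = px + rng.random()   # jittered position within the pixel box
        y = py + rng.random()
        total += radiance(x, y)
    return total / samples      # Monte Carlo average of all path contributions
```

Because the jittered positions cover the whole pixel square, edges that cross the pixel average out to intermediate values, which is the free anti-aliasing mentioned above.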
And so we continue. Here's a path where we hit the ground, then hit the cylinder, then shot back down to the ground somewhere else, and so on. You keep shooting these rays, and you get more and more paths. Once you have all the paths, "all" meaning you're tired of shooting rays, you basically add up all those contributions. You've figured out where the light is coming from along a bunch of different directions, a bunch of different paths; you add them up and get a color. In the film industry, for example, you'll often see scenes where they'll use 1000 or 3000 rays per pixel, so those take a little while to compute. The point is that by doing this, you will eventually get the right answer. You're sort of reversing the whole process of light percolating through the system, doing it just from the eye, and you will eventually get the right answer.
So that's path tracing. What makes ray tracing great is that it can be that simple: you're just shooting a ray and bouncing it around along different paths. And here's a ray tracer, in fact, that fits on the back of a business card. This is from Paul Heckbert's business card when he was at Pixar in the 80s, and it actually is a ray tracer: it will make a little ray-traced scene, I think shooting against a bunch of spheres. So it's a very compact, simple kind of algorithm. You have this simple tool, a ray, and by using rays in various clever ways you can get beautiful results like this, with soft shadows and lovely reflections. And that's it for this lesson. I wanted to point at some further resources: on the web page you'll see links to places where you can find books, and there's also a link for Ray Tracing Gems, which is a book that I helped co-edit. So thanks, and I'll catch you at the next lecture.
-
Hi, my name is Eric Haines at NVIDIA, and this lecture is rasterization versus ray tracing. To start with, I'd like to have a quote, and this one is "the brute-force approach is ridiculously expensive." This is from a very long paper, I think it's 50- or 60-odd pages, a seminal paper about hidden surface algorithms. The interesting thing about this quote is that it's from appendix B, where they talk about this algorithm and say, well, this algorithm is ridiculous: it's going to use up a quarter of a megabyte, and who has $60,000 for that much memory? And it turns out to be the one that GPUs now use all the time. It's called the Z-buffer. Sometimes it's just a matter of brute force winning out, and rasterization has been quite successful with this algorithm for decades now.
-
and ray tracing is pretty
simple. In rasterization, what
-
we're doing is we're taking a
grid and we're kind of throwing
-
objects at that grid of pixels,
and for each object, we
-
basically look at each pixel
that that object covers, and
-
says, Well, is the object closer
in this point or not? And if it
-
is closer, we save it. If not,
then we discard in ray tracing.
-
We flip that loop, we go for
each pixel, and we go for each
-
object, then, does the object
cover that pixel? So in other
-
words, at a pixel, we shoot a
ray, and we look at each object
-
and we find out which is the
closest one, if any. So let's
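The two nested loops, flipped, can be written out directly. In this toy sketch an "object" is just a dict listing the pixels it covers with a depth, standing in for real triangle coverage and ray intersection; the point is that both orderings produce the same image.

```python
def rasterize(objects, width, height):
    """For each object, for each pixel it covers: keep the closest (a Z-buffer)."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for obj in objects:
        for x, y, z in obj["covers"]:     # pixels the object covers, with depth z
            if z < depth[y][x]:           # closer than what's stored? keep it
                depth[y][x] = z
                image[y][x] = obj["color"]
    return image

def raytrace(objects, width, height):
    """Loops flipped: for each pixel, for each object, find the closest hit."""
    image = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            best_z, best_color = float("inf"), None
            for obj in objects:
                for ox, oy, z in obj["covers"]:
                    if (ox, oy) == (x, y) and z < best_z:
                        best_z, best_color = z, obj["color"]
            image[y][x] = best_color
    return image
```

The per-pixel depth array in `rasterize` is exactly the quarter-megabyte Z-buffer the quote above called ridiculously expensive.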
So let's just zip through rasterization, to make sure we're all on the same page. Here are three triangles; that's the ultimate image we want to form, but we're only using the centers of pixels, so we're not going to get anything that high-res. You start with an empty grid, you throw the red triangle against it, and you fill the pixels where the triangle covers the center of the pixel. Now take that triangle away and bring in the next one. Here's a green triangle, and again you fill the pixels it covers. The trick is that the green triangle is in front of the red triangle, so any pixels covered by both show green. Finally, we have this blue triangle. It's behind the green and the red triangles. So when we fill its pixels, we ask, is the pixel covered by this triangle? Yes. But we also store a depth for each pixel, and if the depth says "I already have something there, and it's closer," we discard. The blue is further away, so only the pixels not covered by red and green get covered by the blue.
With ray tracing, as I say, we reverse the process. We start shooting rays, and as we go, we hit various objects. At this point, we hit a red and a blue triangle, and the red triangle is closer, so that's the one that shades the pixel. We keep going through this process, hitting green triangles and red and so on, and we get the same answer, the same image as the rasterizer. The thing that makes ray tracing particularly useful as scenes get larger and more complex is something called the bounding volume hierarchy.
With a bounding volume hierarchy, you have a circle that encloses your whole scene, and within that there are other circles, within circles, within circles, on down the tree until you get to actual objects. This has a great advantage for ray tracing, because you shoot a ray against that structure, and it can be extremely efficient. For example, if the ray doesn't hit the outer circle at all, then we're done: we don't have to look at any of the circles within it, because we know we haven't hit the outer circle. But if we do hit that outer circle, we open it up and look at the two child circles, we shoot the ray against those, and so on down the chain, until either we hit something or we're clearly not going to hit anything because we're out of circles to shoot against. This process tends to be order log n, in computer science terms; in other words, it's faster than just throwing all the objects against the ray. So instead of shooting a ray against a million objects, you may shoot it against only a very few objects, because those are the only ones close enough to the ray that there's any chance it's going to hit.
Here's another view of that kind of algorithm, and it's more what we actually do in ray tracing, which is to use bounding boxes. Here we have a hierarchy of boxes, and as we go down the chain, we look in those boxes, and then there are sub-boxes and sub-sub-boxes. Eventually we find that one box has a bunch of triangles in it, and, oh look, the ray hit that one triangle. So instead of shooting the ray against 1000 triangles, we might shoot it against a handful of boxes and a few triangles. And that's it; it's much more efficient.
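A minimal sketch of that traversal, assuming axis-aligned bounding boxes and a node layout (`box`, `children`, `objects` keys) invented for the example: the slab test rejects whole subtrees the ray cannot enter, which is where the log-n behavior comes from.

```python
def ray_hits_box(ray_o, ray_d, box):
    """Slab test: does a ray (origin, direction) enter an axis-aligned box
    given as (min_corner, max_corner)?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(ray_o, ray_d, box[0], box[1]):
        if abs(d) < 1e-12:
            if o < lo or o > hi:     # parallel to this slab and outside it
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
            if tmin > tmax:          # entry after exit: the slabs don't overlap
                return False
    return True

def bvh_traverse(node, ray_o, ray_d, hits):
    """Walk the hierarchy, skipping whole subtrees whose box the ray misses,
    and collect the objects in every leaf the ray could reach."""
    if not ray_hits_box(ray_o, ray_d, node["box"]):
        return                        # prune: nothing inside can be hit
    if "objects" in node:             # leaf: test the actual objects next
        hits.extend(node["objects"])
    else:
        for child in node["children"]:
            bvh_traverse(child, ray_o, ray_d, hits)
```

A real tracer would then run exact ray-triangle tests only on the handful of objects collected, rather than on the whole scene.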
Interestingly enough, this used to be the reason people thought, back in the 80s, that ray tracing was going to totally beat rasterization. They thought: look, we have this order-log-n ray tracing, while for rasterization we have to throw every triangle at the screen, so that's order n, n being the number of objects in the scene. However, rasterization has a bunch of tricks of its own that allow you to get sort of order-log-n behavior, so that's not really a great argument.
And this slide compares rasterization and ray tracing. I'm not going to go through it, but you can pause and try to see what all the differences are. The real point here is that they're complementary: two different ways of looking at the same rendering problem. You want to make a picture; what's an efficient way to do it? With ray tracing you shoot rays; with rasterization you throw triangles at the screen. The key is that it really isn't rasterization versus ray tracing. Rasterization is really good in situations where there's a point, such as an eye point or a light, and you want to look out into the scene, and you're happy to get a nice regular grid of pixels, with samples at regular spacing. Ray tracing is much more general, in that you can go from any point to any point and see what's in between or what's along that ray. So they each have their strengths, and they can be used together. Doing so, you get wonderful images like this, where the eye rays are all rasterized (we're using rasterization to get our initial view of the world), and then we're using ray tracing to get all the lovely glossy effects and lighting effects. And that's the end of this lesson. I do want to point you to the resources on our website, which give links to all kinds of great stuff about ray tracing, including free books. One of those free books is Ray Tracing Gems, which is all about modern practice; it's downloadable for free as a PDF.
-
and I'm an engineer at Nvidia,
and we're going to do a series
-
of lectures about ray tracing.
The first one is the basics of
-
ray tracing. Let's get going the
thing I like to do it in it. I
-
knew that
-
was too good to be true. I
Hi, my name is Eric Haines with NVIDIA, and this talk is called ray tracing hardware. I'd like to open these talks with a little quote, and this one I love: "pretty soon, computers will be fast." That's kind of always the case, right? It's like, oh, with just a little more power, I could do this new, cool thing. But I think there's a constant, which is that Windows will always take five seconds to open a new window, no matter what.
Ray tracing is one of those processes that we call embarrassingly parallel. Since you're calculating a color at each different pixel, and each pixel is independent of the others, you could throw a processor at each one. Turner Whitted famously came up with this idea: what if we took a football field, filled it full of Cray computers, put light bulbs, red, green, and blue, on top of each one, and had each computer compute a single pixel? Then we fly a blimp over it, so we can see the image produced.
In 1987 came the first ray tracing machine that I ever saw: the AT&T Pixel Machine. It was kind of a surprise to me to hear about it when I arrived at SIGGRAPH: whoa, there's this machine doing ray tracing. And it was rendering scenes from a database I had just come up with for testing ray tracers, a procedural database, as it's called. So there's one image, the sphereflake, which they were using to test. It would take me hours to render one of these little scenes, and for them, it was taking 30 seconds; with tuning, a year later, it was taking them just 16 seconds. Now this machine admittedly cost hundreds of thousands of dollars and is now sort of a footnote in history; it was very expensive for the time. Nonetheless, it was the first place I saw real-life ray tracing, and better yet, real-time ray tracing. They had a tiny little postage stamp on the screen, a little 64 by 64 pixel kind of thing, maybe 32 by 32, and you could move the mouse and move a sphere on top of a plane, and it would reflect things in real time. Now we can run that on the crummiest cell phone, but at the time, it was just magic.
So, Moore's law is ending, sad to say. It's not just a few people saying this; it's one of those things people said for years, "oh, Moore's law is just about to end," but the sign that it's actually coming true is that we're hearing it from NVIDIA, we're hearing it from Google, and we're hearing it from other companies: there's just not as much of an oomph every year over the previous year. What that means is we have to rethink how we design hardware. Where we're going at NVIDIA is considering more special-purpose hardware.
If you think about transistors: there's basically a bunch of transistors on the chip, and that's your budget. You can use them for memory, you can use them for general CPU-style processors, or you can use them for something special-purpose. On traditional GPUs, we have special-purpose hardware for rasterization. With this new generation, Turing, we now have special-purpose hardware both for artificial intelligence, the deep learning operations, and for ray tracing. They're called RT cores, for ray tracing, and that's what we're going to talk about next. RT cores, or ray tracing cores, perform two basic functions: they accelerate ray-bounding-volume-hierarchy traversal and ray-triangle intersection. With traversal, they are basically shooting the ray against a bunch of boxes, and they return the boxes it hits; with ray-triangle, they're shooting against a bunch of triangles. And there are two levels, where you can have a mesh and have that mesh copied as many times as you want, because there's just a hierarchy on top of its triangles; you get instancing, so you could have lots of bunnies for very little extra cost. And we use streaming multiprocessors, which are shader cores, for other kinds of instancing, for custom intersections like ray-sphere or ray-subdivision-surface, and for traditional shading.
The other thing that's been interesting is to see how much memory has changed over the years. It used to be we'd have some small number of megabytes, maybe 100 megabytes if we were lucky, and it's gone up to the point where we now have machines with as much as 512 gigabytes. Film production uses scenes that are maybe 50 gigabytes or more for very complicated scenes, so we're within the realm of actually being able to hold entire scenes from a film inside GPU memory. This lets us do things like ray tracing, where we kind of have to have all the data, or at least most of it, around at the same time, and swap data in as needed. The idea is just that we can now store something like a whole ray tracing database and do something with it. Back in the old days, we could store not very much at all: a few textures, a few triangle meshes, maybe, if we were lucky.
lucky. So for example, to just
show the speed up that you can
-
get, this is from Metro Exodus
and just an analysis of one of
-
their frames. So in the Pascal
architecture the previous
-
generation, it took someone so
much time in Turing if you
-
didn't use ray tracing, the ray
tracing cores, but did use some
-
of the improved functionality in
Turing otherwise, like integer
-
math, you could trace rays
against boxes and so on in an
-
efficient manner, and it would
save you some time. Then
-
finally, though, if you turn on
these ray tracing cores, it
-
takes that center chunk and just
squishes it right down to a very
-
small footprint as far as how
much time it's actually taking.
-
And the other good news is that you can do a lot of tuning. It's one thing to make the hardware, but as we saw with the AT&T Pixel Machine, where they started at 30 seconds per frame and went down to 16 after a year of tuning, the same thing happens with games or any other application: as you learn the hardware and you learn what the problems are, you can get faster and faster. So over just a few months' time, we were able to get a considerable increase in speed for Battlefield V.
For example, NVIDIA developed a demo with ILM and Unreal Engine called Star Wars Reflections. In March 2018 we presented this along with Microsoft's announcement of DirectX for ray tracing; we used four water-cooled Volta GPUs to run it in real time. What was great was that just a few months later, when Turing came out, we could run the whole thing on just one GPU. Back in 1987 I was astounded to see the AT&T Pixel Machine doing my 512 by 512 image of 8000 spheres in just 30 seconds a frame. Now, on a Turing-class card, I can run the same demo with 5 million spheres and get 60 frames per second, no problem. To see the full Star Wars Reflections demo, or to find other resources about ray tracing, come to our website. You can also get the full Ray Tracing Gems book for free as a download.
-
Let me try this again.

Hi, my name is Eric Haines, and I'm an engineer at NVIDIA, and we're going to do a series of lectures about ray tracing. The first one is the basics of ray tracing. Let's get going. The thing I like to do in any of these lectures is give a little quote at the beginning. And this is David Kirk, who's a Fellow...
Hi. My name is Eric Haynes with
Nvidia, and this talk is about
-
the ray tracing pipeline. I'd
like to start with a quote, and
-
this one's from Matt far around
year, 2008 GPUs are the only
-
type of parallel processor that
has ever seen widespread
-
success, because developers
generally don't know they are
-
parallel. Rasterization and ray
Rasterization and ray tracing both make use of parallelism. Rasterization is straightforward: you send a triangle to the screen, you do vertex shading, you do pixel shading, and then whatever the result is, you do a raster operation to blend it into the screen. With ray tracing, we have a similar kind of flow: you start with a ray, you traverse the environment, and then you shade. However, at this point we actually have the ability to recurse, to go back to the beginning and shoot more rays, to spawn off more possibilities such as shadow rays or reflection rays. The bottom part in the green box is what we actually do with RTX acceleration: we can do that traversal and intersection piece very rapidly. In DirectX for ray tracing, and in Vulkan for ray tracing, there are five new kinds of shaders.
There's a ray generation shader, which is kind of the manager: it starts the ray going, keeps track of it, and gets its final result. There are intersection shaders: if you wanted to intersect a sphere, you'd have an intersection shader for that, or for a subdivision surface, or whatever you want; there's a different shader for each shape. Then there are three shaders which are sort of a group. There's a miss shader, which says: I shot a ray and it didn't hit anything; what do I get? There's a closest hit shader: I hit something; what shall I do with it? It's kind of a traditional shader, but you can also spawn off rays at that point, such as reflection or shadow rays. And there are any hit shaders, which I'll talk a little more about in a second. So, to sum up: we have the ray generation shader, which controls all the shaders; the intersection shader, which essentially defines the object shape; and the per-ray behavior shaders: miss, closest hit, and any hit.
how do these fit together? Well,
there's the complicated, many
-
boxes, kind of version here.
Here's the simple version of
-
that same flow chart. What we do
is we have a trace ray which is
-
called to generate the ray, and
then it goes into this
-
acceleration structure loop
where we walk through the
-
bounding volume hierarchy and
find out objects that could
-
potentially be hit by the ray.
The intersection shader is then
-
applied to that object, and if
we hit and it's the closest hit,
-
we keep track of that
information. We also then use
-
the any hit shader if available
for testing, if the object is
-
transparent and the ray should
actually just continue on. Once
-
we get through this traversal
loop, we eventually get to the
-
end where there's nothing else
in the acceleration structure to
-
hit. It's gone through the whole
bounding volume parking, and now
-
we take our closest hit and say,
Okay, what's that shader S, or
-
if we missed everything, then we
use our Miss shader and that's
-
what color we get back for the
pixel. The any hit shader I
-
wanted to talk about a little
bit more. It's an optional
-
shader. It's one that basically
is used for transparency. So
-
imagine you have this leaf on
the right, which you're doing a
-
tree of leaves, and so you're
not really caring about each
-
individual leaf. So you really
just take a rectangle and you
-
put a texture of a leaf on it.
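The check such a leaf texture requires, which the any hit shader performs, can be sketched in Python; the alpha array and nearest-neighbor lookup are illustrative assumptions, not the DXR API:

```python
def any_hit(texture_alpha, u, v):
    """Return True if the hit should count, False if the ray
    should continue on through a transparent texel.

    texture_alpha: 2D list of alpha values in [0, 1]
    u, v: texture coordinates in [0, 1] at the hit point
    """
    h = len(texture_alpha)
    w = len(texture_alpha[0])
    # Nearest-neighbor lookup of the alpha channel at (u, v).
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    # Alpha of zero means fully transparent: ignore this hit.
    return texture_alpha[y][x] > 0.0

# A 2x2 texture: left column opaque (the leaf), right column empty.
tex = [[1.0, 0.0],
       [1.0, 0.0]]
any_hit(tex, 0.1, 0.5)  # hits the leaf: the hit counts
any_hit(tex, 0.9, 0.5)  # transparent texel: keep tracing
```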
Now, much of that texture is
-
empty, it's blank, it's
transparent. So what the any hit
-
shader does is it goes and
checks the texture. So let's say
-
I hit with my ray in the upper
left hand corner of that
-
rectangle, the any hit shader
would say, Oh well, that's
-
transparent. So don't really
count this as a hit, and let's
-
just keep going. So real-time ray
tracing can clearly be used for
-
games. Here are three shipping
titles that are showing
-
reflections, global illumination
and shadows. What's also cool
-
about real time ray tracing, or
interactive ray tracing,
-
accelerated ray tracing, is that
you can use it for all kinds of
-
other things. For example, you
can do faster baking. Baking is
-
where you shoot lots and lots of
rays. It's an offline process
-
that takes a bit and you
basically bake the results into
-
a bunch of texture maps. I was
reading about how one studio
-
went from 14 minutes of bake
time to 16 seconds of bake time
-
once they switched over to GPU
ray tracing. Even if you don't
-
want to use ray tracing,
particularly in your title, you
-
can use it for a ground truth
test. Shaders that are part of
-
DirectX can be used on any
machine. You can basically put
-
your shaders into a ray tracer
and get the correct answer. So
-
if you're trying to do some
approximation, you can always
-
get the ground truth and know
what you're aiming for, and then
-
back off and try to figure out
what your faster method might
-
be. The other cool thing you can
do with Ray tracing hardware is
-
that you can abuse it. In other
words, we're now looking into
-
researching ideas of what else
can we do with this. Could we do
-
collision detection, or could we
do volume rendering, or could we
-
do other kinds of queries? And
work is just starting on this,
-
and I think it's a really
interesting open field where
-
there's all kinds of possible
ways we can use and abuse the
-
new hardware. And that's it for
this talk. For more information
-
about ray tracing and plenty of
other good free resources such
-
as free books. Go to this
website. You can also get a free
-
book that's very modern. What
-
do you think so far? I
-
really like the spheres. Yeah,
-
hi, my name is Eric Haines. I'm
with Nvidia, and this talk is
-
about ray tracing effects. It's
got all the eye candy you would
-
ever want. I like to start with
a quote, and I promise I won't
-
try to sing it. This is from
Queen and it's from Bohemian
-
Rhapsody. Is this the real life?
Is this just fantasy? And I like
-
to say that because which one is
real of these two images, one is
-
a photo and one is the
simulation. Turns out the one on
-
the one on the left is the photo
and the one on the right is the
-
simulation. This is the famous
Cornell Box. The Cornell Box was
-
actually made in 1984, I believe,
by people at the lab at
-
the time who, in fact, were not
part of that group. And it was
-
pretty funny because Don
Greenberg comes in and he goes,
-
Okay, guys, we're going to make
a box. And they're like, Yeah,
-
sure, I can do that with a
little computer graphic. He's
-
like, no, no, we're going to get
some plywood. And you're gonna
-
get some paint. And they were
all like, what are we doing? But
-
eventually, you know, they got
with the program, and the rest
-
is history. So with the Cornell
Box, what you can do is start
-
with a really simplified system
and look at the various effects.
-
So we're gonna start with just
hard shadows here. And so
-
instead of an area light source
overhead, we have just a point
-
light source, a single point
where the light's emitting. You
-
can see that what you get is
these very sharp shadows. And a
-
sharp shadow is just going from
some intersection point, some
-
point that we're looking at, to
the light. And some of those
-
will be blocked, and those are
in shadows, and some are
-
illuminated. If you want soft
shadows, you have to go with an
-
area light where you're sampling
on various points on the light.
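This sampling of points on the area light can be sketched as follows; `blocked` is a hypothetical stand-in for a real shadow-ray trace:

```python
def soft_shadow(point, light_samples, blocked):
    """Estimate visibility of an area light from `point`.

    light_samples: points sampled across the area light
    blocked(a, b): hypothetical stand-in for a shadow-ray trace;
                   True if the segment a->b is occluded
    Returns the fraction of the light that is visible:
    1.0 = fully lit, 0.0 = umbra, in between = penumbra.
    """
    visible = sum(1 for s in light_samples if not blocked(point, s))
    return visible / len(light_samples)

# Toy 2D example: an occluder blocks every sample with x < 0.
samples = [(-1.0, 5.0), (-0.5, 5.0), (0.5, 5.0), (1.0, 5.0)]
occluded = lambda p, s: s[0] < 0            # hypothetical occlusion test
soft_shadow((0.0, 0.0), samples, occluded)  # -> 0.5, i.e. penumbra
```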
-
So this is that more stochastic
ray tracing, kind of a feel,
-
where you're shooting a bunch of
rays, and you're figuring out
-
what percentage of the light you
see if you're fully in light,
-
great, if you're partially in
light, that's called the
-
penumbra, and that's where the
soft shadow is. Or if every ray
-
is blocked, then you're in the
umbra, the full shadow area. The next
-
step is to start bouncing that
light around, to let the light
-
percolate through the
environment. So instead of just
-
looking directly at the light,
we're going to have light that
-
bounces around. This is called
by a lot of different names.
-
There's interreflection,
because the light's reflecting
-
off of a bunch of surfaces, or
indirect lighting or color
-
bleeding, which is sort of
specifically where color comes
-
off of one wall and illuminates
something else. And they all
-
have this group term called
Global Illumination. So in this
-
case, we can see a ray is going
from the floor where there's a
-
green spot, and hits the wall
and it goes to the light. There
-
are, in fact, tons of paths.
This is one of many, many paths,
-
that could hit the wall and go
towards the light, and all of
-
them contribute a green color to
that floor area. In that last
-
scene, everything was diffuse,
matte. Didn't really have any
-
kind of reflection to it. You
can also go with glossy
-
reflections, where, again,
you're doing that stochastic
-
kind of process, where you're
shooting a ray in a burst, like
-
you're shooting a burst of rays
from your reflective surface,
-
and that gives you this fuzzy,
softer reflection in it. So
-
those are a bunch of different
kinds of effects. And here's
-
another example of glossy
reflection. It's very shiny on
-
the left and goes to more and
more rough as we go to the
-
right. So here's your quiz
question. This image has a
-
number of effects in it. Which
effects that we've talked about
-
do you see in this image? The
three that I see are there's
-
glossy reflections on the
ceiling and on the floor and on
-
the wall to the right, there's
interreflection throughout, and
-
there's soft shadows. And I
especially want to note the
-
interreflection throughout. The
point of this is that if you
-
were to just have this scene lit
by just the sun, you would only
-
have a few small portions of the
interior that would actually get
-
any light at all. The rest of
the interior would just be
-
entirely dark. So by having this
light bouncing around the
-
interior, we get a realistic
look to things. There are other
-
operations you can do with Ray
tracing where it's not
-
particularly physically based,
but it's physically plausible.
-
So this is called ambient
occlusion, where there's no
-
particular light source in this
scene. But what we're doing is
-
we're shooting out little bursts
of rays from every location in
-
the scene, and trying to see if
things are in crevices or could
-
be occluded by other objects. So
let's look at that scooter in
-
particular with ambient
occlusion. What you do is you
-
take a point and you shoot out a
burst of rays, and you shoot
-
them a certain distance. You may
shoot them, say three feet or
-
whatever scale makes sense for
your scene. If a bunch of rays
-
hit something like underneath
the scooter or the tire or
-
whatever, then we can say that,
well, this area is kind of dark
-
like odds are that if a light
were to be shown upon the
-
scooter, that area would be in
shadow. On the other hand, if
-
you're out in the open, you have
a burst of rays, and almost all
-
the rays get to that maximum
distance, like three feet, or
-
whatever it is, and maybe a rare
ray hits something. But overall,
-
you're kind of in the light, and
you can see it all, so that area
-
gets no shadow at all. So notice
that it's a term that's not
-
physical in that if you actually
shone a light underneath the
-
scooter, it would still look
dark using ambient occlusion,
-
but it's a really good
approximation of how crevices
-
darken up and so on. And so it's
commonly used in games, and it's
-
been used in rasterization for a
long time, but with Ray tracing,
-
you can get a better answer.
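A toy version of the ambient occlusion estimate just described, with `hit_distance` standing in for an actual ray trace:

```python
import math

def ambient_occlusion(hit_distance, n_rays, max_dist):
    """Toy ambient-occlusion estimate for one shading point.

    hit_distance(i): hypothetical stand-in for tracing the i-th
                     ray of the burst; returns the distance to the
                     first hit, or math.inf if nothing is hit.
    Rays that travel at least `max_dist` count as unoccluded.
    Returns 1.0 out in the open, near 0.0 deep in a crevice.
    """
    open_rays = sum(1 for i in range(n_rays)
                    if hit_distance(i) >= max_dist)
    return open_rays / n_rays

# In a crevice: 6 of the 8 rays hit nearby geometry, so it darkens.
crevice = lambda i: 0.2 if i < 6 else math.inf
ambient_occlusion(crevice, 8, 3.0)   # -> 0.25, mostly shadowed

# Out in the open: every ray reaches the maximum distance.
open_air = lambda i: math.inf
ambient_occlusion(open_air, 8, 3.0)  # -> 1.0, no darkening
```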
Basically. Another cool thing
-
you can do with Ray tracing is
depth of field, you can get a
-
background blur, as in this
shot, or you can get a
-
foreground blur, or you can get
Foreground and Background blurs.
-
For that matter, the idea is
that by using depth of field,
-
it's a very cinematic effect,
and you can lead the user's eye
-
to whatever point you want to
focus on, so the character on
-
the right in this case, the
other thing you can do is motion
-
blur, instead of varying where
the rays go, what you're doing
-
is varying where the model is in
time, and you just kind of add
-
up the rays at various points
during the frame, and you get
-
this blurry effect. And again,
this is a cinematic effect, and
-
it's actually quite important to
have in games or films, because
-
what you want to do is you want
to sort of not have this kind of
-
stroboscopic effect, where, if
you just had a single flat frame
-
with very sharp edges and so on,
it animates as if someone is
-
flashing a strobe light on the
scene. This can actually be used
-
in various films, like Gladiator,
for effect, but generally it's
-
not the effect you want. And
when the film is actually
-
running, you don't tend to see
this motion blur so much. It
-
just looks natural. You can also
do atmospheric effects. So if
-
you have, say, a beam of light,
you can do a thing called Ray
-
marching, where the ray hits the
beam and marches through it, and
-
it looks at light scattering in
and light scattering out, and so
-
on. And you just kind of walk
through that thing and sample it
-
as you go. And you can get these
nice beams of light, god-ray
-
kinds of effects. So I'm going
to just show this short clip
-
from Minecraft RTX, a demo we're
making in conjunction with
-
Microsoft of bringing ray
tracing to Minecraft. And I'll
-
just leave it as sort of a quiz
question for you as to figure
-
out which effects are happening
in this little demo you
-
one last effect I want to talk
about is caustics, which sounds
-
dangerous. It sounds like acid
or something, and they are
-
dangerous, and not because of
the octopus in the picture here,
-
what caustics are is the
reflection of light off the
-
surface of water, or refraction
of the light through water or
-
through glass or through other
transparent media. So here we
-
have light reflecting off the
water, and you can see it
-
underneath and above. In this
next picture, you can see beams
-
of light, and you can see the
caustics on the ground under
-
water. And this is a little clip
from the Justice demo, which
-
shows how these caustics are
underneath the bridge and how
-
they really sort of bring that
area to life, gives it a real
-
vibrancy. Now, as far as the
danger of caustics, it's a real
-
life danger. So this is a
picture of my office, and on my
-
window sill. Like a lot of
computer graphics, people have
-
little tchotchkes of different cool
materials and things I can stare
-
at. One of them was this little
crystal ball on a wooden
-
platform. And if you zoom in on
that crystal ball, you'll see,
-
oh, gee, there's there's funny
little marks on the wooden
-
platform. And I hadn't realized
this for quite a while. I was
-
once in one office, then I moved
to another office, and you can
-
see the effect. There are these
burn marks in two different
-
areas, and it just depended on
which way the ball was facing.
-
And luckily, I did not burn down
our office. That would not have
-
been so good. But anyway, this
is actually a real problem, like
-
people will, if you Google it,
find someone who has had their
-
house catch on fire due to
snow globes. So beware of
-
caustics. So I like to keep my
caustics virtual. Like to keep
-
them in the computer graphics
world. And so these are just two
-
shots from the physically based
renderer by Matt Pharr and
-
others. And it gives these
gorgeous effects of light
-
refracting through these various
glass and so on surfaces, and
-
that's it. For further
resources, see the link for all
-
kinds of free books and whatnot.
And one free book in particular,
-
I'd like to point out is Ray
Tracing Gems.
-
Hi. My name is Eric Haines with
Nvidia, and this talk is about
-
the rendering equation. I'd like
to start with a quote from Roman
-
Payne: The Muse is not an artistic
mystery, but a mathematical
-
equation. And this is really
good for the rendering equation,
-
the rendering equation, it's not
a rendering equation, it's the
-
rendering equation. It really
sums up how light gets to the
-
eye. And I can already see your
eyes drooping, in fact, because
-
you've just seen an equation put
on a slide. Oh my gosh. What are
-
we doing? This is, if you're
going to have one equation in
-
your life, make it this one. If
you're into computer graphics.
-
It has a few terms in it. It all
looks a bit like much, but we'll
-
break it down in this lesson,
and you'll find it's really
-
worthwhile. It gives you this
tool where you can think about
-
how light works, what effects
you want to do, what effects you
-
want to leave behind. So to
start with, there's a bunch of
-
inputs for the rendering
equation, which is there's a
-
point x, which is some point in
the scene, and that's the point.
-
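For reference, the equation being walked through here, with the same symbols (point x, outgoing direction omega hat sub o, incoming direction omega hat sub i, surface normal n hat), is usually written as:

```latex
L_o(x,\hat{\omega}_o) \;=\; L_e(x,\hat{\omega}_o)
  \;+\; \int_{S^2} f(x,\hat{\omega}_i,\hat{\omega}_o)\,
  L_i(x,\hat{\omega}_i)\,(\hat{\omega}_i\cdot\hat{n})\;d\hat{\omega}_i
```

That is, outgoing light equals emitted light plus, integrated over all incoming directions, the material term times the incoming light times the Lambert term.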
Let's say that's the point you're
looking at. And there's an outgoing
-
direction, which is this omega
hat, that W looking thing, omega
-
hat sub o, and that's an
outgoing direction. So it's
-
basically a direction, say,
towards the eye. There's also an
-
incoming direction, which if you
look at the far right of the
-
equation, you see an omega hat
I, and that's just some light is
-
coming in from some other
direction. There's this surface
-
normal, if you have a flat
surface, say the floor, the
-
normal is the direction pointing
straight up, for example. And
-
then finally, there's this S
squared term, which denotes the sphere of
-
all incoming directions. So
we're going to be evaluating
-
this piece on the right for all
incoming directions. Light is
-
coming from a bunch of different
directions. How do they affect
-
what we finally see from our
eye? The terms in this equation,
-
the first two, are really easy.
The outgoing light on the far
-
left, that's basically saying
given a point and giving an
-
outgoing direction. What light
do I see? In other words, I'm
-
looking in a direction at some
point. Well, what light is
-
coming from that point? To start
with, there's the emitted light,
-
which is a function that looks
just about the same. It
-
basically says, given a point
and given an outgoing direction,
-
what light is coming from that
point? So if you have a light
-
source, a lot of light is coming
out from there, and that just
-
shines right into your eye. And
if everything in the world was a
-
light source, then we'd be good.
We wouldn't have to think about
-
any of the equation to the right.
-
That's where the right side comes
in. It has an incoming light, a
material, and this Lambert
-
geometric term. So the incoming
light is basically saying, OK,
-
given a point and given some
direction, what do I see in that
-
direction? What light is coming
from that direction? The
-
material equation is just simply
a function that says, OK, given
-
an incoming and an outgoing
direction, what light goes in
-
the outgoing direction? So in
other words, like a mirror, for
-
example, will be very reflective
in one direction. So the
-
incoming and the outgoing
directions are closely related,
-
but for other surfaces, many
different incoming directions
-
will give a different term,
different amount of light
-
bouncing off the surface. And
then finally, there's this
-
Lambert term, which is this
incoming direction times the
-
normal and that's a geometric
term, and I'll show what that
-
means. This is the
Lambert cosine law. It's an old
-
idea, and it's just simply
something that's kind of
-
intuitively true, that if a
light's directly overhead, it's
-
going to have the most effect on
the surface. But as you tilt the
-
light, the effect of that light
is going to be less and less as
-
it gets to the horizon. So on
the far right of this you can
-
see how the light kind of
spreads out as we get to this
-
shallower angle. And that's all
that term is. And this is just
-
another visualization of that
term, where it's showing how it
-
decreases over angle. So what
pure path tracing does is it
-
basically says, all I'm going to
do is I'm going to sum up the
-
light in all directions, and
that's what's going to go
-
towards the eye. So in path
tracing, what we do is we kind
-
of shoot a ray and then we shoot
another ray off that surface in
-
some other direction. So with
path tracing, what we did is
-
shoot a ray from the eye and
hits this box, and then from
-
there, it scatters out rays in
various directions. So each path
-
is a different direction. One
may go up and hit the sky,
-
another one may go and hit the
ground, and that bounces
-
somewhere else, and so on and so
forth. And we add up all the
-
contributions of those paths to
get the color at the eye. The
-
trick with this is that we don't
really know in a given direction
-
what the light is. Sometimes we
do if we look directly at the
-
sun, well, we know what that is,
but often we'll hit something
-
else. And so the rendering
equation is actually a recursive
-
equation. It's one where it
says, well, what's the incoming
-
light direction and intensity?
Rather from a direction? Well,
-
we don't know. So what we have
to do is use the rendering
-
equation again on the place that
we just hit. So if I hit the box
-
and then I hit the cylinder,
we'd apply the rendering
-
equation at the cylinder and so
on and so forth until we finally
-
get an answer. The trick with
that is that if you did a pure
-
path trace, it's a very slow
process to converge if you don't
-
have a big light source, like,
so if you didn't have, for
-
example, the sky, so that you
know when a ray hits the sky,
-
you're done. You could be in
trouble, like if you just have a
-
small light source in the scene.
Pure path tracing says, Well,
-
I'm going to bounce my rays
around until I hit a light. And
-
that's a problem, because if the
light's small, that could be a
-
very, very long number of
bounces. There's things we
-
do about this. One is called
importance sampling, and it
-
basically says there's got to be
good directions for me to go
-
shoot my rays in. Let's see what
they are. One approach with
-
importance sampling is to just
look at the effect of the
-
material. So we take that
Lambert term, and we take the
-
material function, where it has
an incoming and an outgoing
-
direction, which, if you want to
impress people at cocktail
-
parties, is the bidirectional
scattering distribution
-
function, the BSDF. And all that
is, is a fancy way of saying,
-
well, when light comes in from
this direction, what effect does
-
it have on the material? For
example, if the material is
-
black, well, there's not going
to be any outgoing light, but in
-
general, there's some outgoing
term that will respond to
-
different directions. With a
mirror, for example, it's clear
-
that the outgoing direction is
going to basically be a
-
reflection direction from the
incoming direction, and that's
-
the only direction that really
matters to us. So if we're path
-
tracing and we hit a perfect
mirror, we can just always shoot
-
our ray out in that direction
and feel good about life,
-
because we know that light from
different directions, that it's
-
not going to really matter too
much for a glossy surface where
-
it's got like a sheen, then you
might shoot out a burst of rays.
-
So for each path, you might
choose a different Ray and
-
decide to go one way or the
other and so on. And then adding
-
up all those paths should give
you a pretty good result.
-
Finally, you have something like
a diffuse or a matte surface,
-
like unglazed pottery or cement
or things like that, where light
-
can be coming in from all
different directions and
-
contribute to the outgoing
direction. And there you're
-
doing more of like a Lambert's
law, kind of a distribution
-
going in all different
directions however you want.
-
Again, that can be expensive. So
there's yet another way that we
-
go to try to reduce the load.
It's called multiple importance
-
sampling. And here we say, well,
OK, we will vary by the
-
material, but we also want to
vary using the light direction.
-
So if we know that there's an
important light source in the
-
room, or if we know that
there's the sun out there, or
-
something like that, we'll also
add that in as an important
-
place to shoot rays, basically.
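The talk doesn't give the combination formula, but the standard way to weight a sample that both strategies (light sampling and BSDF sampling) could have produced is the balance heuristic from Veach and Guibas:

```python
def balance_heuristic(pdf_a, pdf_b):
    """Multiple importance sampling weight for a sample drawn
    from strategy A, when strategy B could also have produced it.

    The weights for the two strategies sum to one, so combining
    light sampling and BSDF sampling stays unbiased.
    """
    return pdf_a / (pdf_a + pdf_b)

# A direction the light-sampling strategy finds likely (pdf 0.9)
# but BSDF sampling finds unlikely (pdf 0.1): the light sample
# carries most of the weight, and the two weights sum to one.
w_light = balance_heuristic(0.9, 0.1)  # -> 0.9
w_bsdf = balance_heuristic(0.1, 0.9)   # -> 0.1
```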
So instead of just shooting them
-
kind of randomly, we'll say, Oh,
I also shoot some rays towards
-
that light, because I know that
light's going to be a really
-
important direct lighting
effect. But it's also worthwhile
-
to shoot these other rays
because we want to catch some of
-
the indirect lighting. So in a
mirror reflection or a glossy or
-
diffuse, it's all sort of the
same idea. We're going to try to
-
shoot a bunch at the light, and
we might weight them by how much
-
they'll matter, that the diffuse
one, it's going to matter
-
perhaps more. The reflection
one, perhaps less, because we
-
really mostly care about what's
in the reflection direction. So
-
I will not go through this
image, but it's really worth
-
looking at. This image of
multiple importance sampling.
-
It's from a classic 1995 paper,
and you can try to think this
-
one through. Consider this your
quiz: shininess is varying.
-
At the top, there are a bunch of
light sources, and the radius of
-
those light sources is
increasing in each one of the
-
three figures. And in one, we're
just sampling the light source,
-
and another, we're sampling this
BRDF, which is the
-
bidirectional reflectance
distribution function, and
-
that's like the scattering
function, but it's meant for
-
surfaces, and it's for things
that are opaque, not like glass.
-
And then you have this MIS,
multiple importance sampling,
-
where you're doing both, and you
can see that both is better,
-
basically. And this is a good
one to try to think through. Why
-
do some of these images look
fuzzy? Once you understand that,
-
then you've got a level of
enlightenment on the rendering
-
equation. So as an example of
path tracing, Nvidia picked up
-
some open source code from some
researchers who were looking at
-
Quake II, an old game, but they
decided, well, now that we have
-
ray tracing acceleration, let's
do a path trace version of it,
-
like really crank up the good
lighting and materials and so
-
on. And it makes a dramatic
difference. It's the same
-
assets, but just improving the
lighting alone can really make
-
the game shine, literally. And
now you've got the rendering
-
equation under your belt. It's
really, honestly a wonderful
-
tool for thinking about lighting
and how it percolates through a
-
scene and what you care about.
For further resources, see the
-
website, and I also want to
mention that there's this book
-
called ray tracing gems that's
downloadable for free. You
-
Hi, I'm Eric Haines from Nvidia,
and this talk is about de
-
noising for ray tracing. I have
a quote today from Daphne
-
Koller, a Stanford professor who
works on AI for biomedical
-
applications. She says, The
world is noisy and messy. You
-
need to deal with the noise and
uncertainty. This is
-
particularly good for ray
tracing, because ray tracing can
-
be extremely noisy. On the left,
you'll see an image that has
-
five samples per pixel using
path tracing, and you can see
-
it's quite noisy. There's a lot
of black pixels where light
-
just didn't get anywhere near
where we needed it to. After 50
-
samples, it's starting to look
better. 500 better yet, 5000 is
-
looking pretty darn good. And if
you looked at really closely,
-
you'd still see some noise in
there. And in fact, films
-
themselves have this trouble
where they'll shoot 3000 rays
-
per pixel, but they'll still see
some noise. So what they'll use,
-
and what we're going to use
today, is a process called de
-
noising to make those images
better. And you can see there's
-
a diminishing return here. It's
pretty much the error goes
-
down with the square root of the
number of samples. So if you go
-
from, say, four samples, that's
twice as good as just one
-
sample, nine samples, that's
three times as good as one
-
sample. But it's diminishing
returns. So we just can't afford
-
to shoot 5000 rays per pixel
right now, so we have to do
-
something else. And as they say,
That's de noising. Our reality
-
is that we start with a noisy
image if we're doing any kind of
-
more elaborate effects, and we
have to try to get to a nicer
-
image. We'd love to get this
kind of image, but we often will
-
have a crude, noisy image, and
then we have to reconstruct. So
-
reconstruction is called de
noising, and there are various
-
ways to do it. Here's another
example, where the left is noisy
-
and the right is de noised. This
de noising process can be
-
extremely fast. It's not quite
identical time, but it's almost
-
identical time. It's the blink
of an eye. And what de noising
-
does is basically look at that
area at the various surfaces,
-
and tries to use data, both in
color channel and any other kind
-
of information they might want
to use, like the normal or the
-
color of the texture that's
underlying the surface. And use
-
that to come up with some kind
of filtering process where it
-
tries to fill in, tries to infer
what's going on in the surface,
-
so you could denoise by effect,
for example. So in this image,
-
we have a nice, soft shadow
given by that plant onto a
-
floor. But it might be difficult
to actually de noise this if you
-
try to do it on the final image,
because the floor of rain would
-
possibly mess up your algorithm.
So what you could do is instead
-
de noise just the shadow image,
which would just be a bunch of
-
gray scale tones, and then you
would fold that in with the
-
textured surface and get a nice
final effect. The problem with
-
denoising by effect is that if
you did this for every single
-
pass, it would start to really
add up. The de noising cost
-
would become exorbitant. So what
we try to do is de noise on the
-
final image that we would have
just one denoising pass. So on
-
the left we have the original
image, and on the right we have
-
a human filtered image and a
neural network filtered image.
-
And the human one is basically
using sort of traditional
-
denoising techniques and then
tweaking and so on, and the
-
neural network is using an
entirely different process.
-
What's interesting here is that
I think the computers are
-
winning, because in the upper
right, you can kind of see that
-
back wall, there's a little
strip of green. It's kind of
-
blurred out a little bit too
much with the human version, but
-
the neural network has picked up
on that and kept the vertical
-
stripes there. Deep learning can
be used for image denoising. The
-
way this works is that we have a
bunch of rendered images, 20,000,
-
40,000, however many we can get
as training data, and then we
-
train our neural net using those
images to have the neural net
-
kind of know what the
environment is like. We can then
-
use that neural net to take a
noisy image and have it infer
-
what the real image should look
like. So we have some huge
-
training set, or some reasonable
training set. It just depends.
-
And from that, we then can
actually do a great job of de
-
noising images, surprisingly
good. So here's a noisy image,
-
one sample per pixel, and here's
our de noise image. So the
-
shadows look really nice, and
notice how soft they are. A
-
little bit of soft shadow. It's
a fairly nice final dimension.
-
You can compare that to the
ground truth. They're almost
-
identical. In comparison, the
traditional method in
-
rasterization is to use shadow
mapping, where you render
-
everything from the lights point
of view. Here you get somewhat
-
sharper kinds of shadows.
They're just not as beautiful,
-
let's face it, and it has other
kinds of problems, like that
-
one person is floating a little
bit. It's called Peter Panning,
-
and this can be avoided by using
newer techniques. Here's another
-
example of denoising, where we
have this shiny surface varying
-
in roughness and de noised. It
looks pretty good. And here's
-
the ground truth. Now, there's a
fair bit of difference between
-
the de noised and the ground
truth here, but it's enough to
-
be plausible. It's a reasonable
result, and it's one that is
-
going to basically be reasonable
to most people's visual systems.
-
They're not going to be
surprised or shocked by the
-
result. In comparison, here's
one that's pretty different,
-
actually. This is called
stochastic screen space
-
reflection, a method that uses
rasterization, and it's kind of
-
using information in the screen,
and it has problems. There are
-
ways that it works fairly well,
but there's other places where
-
it kind of falls apart. To show
you the comparison, again, we
-
have the ground truth and the
screen space reflection, and you
-
can see they're considerably
different. Here's another image.
-
Here we have just one sample per
pixel of Ray Traced global
-
illumination and de noising, we
get this really pretty fantastic
-
result. It just blows me away
that it can do this well. To
-
compare this to ground truth, in
the ground truth image, you'll
-
notice a little bit of darkening
around the fringes of things and
-
in the crevices and so on. But
for the most part, the images
-
are quite comparable. Last I'm
going to finish off with an
-
animation. So there's a movie
that was rendered called Zero
-
Day by an artist called
Beeple, and he kindly put his
-
entire database and animation
path and so on on the web for
-
people to reuse as they will. So
we use this at Nvidia to
-
experiment with different
denoising operations. This is a
-
pretty complicated scene.
There are actually more than
-
7000 individual triangles that
are moving around that are
-
lights. So those light sources
are all moving around. And
-
moving light sources can be
quite tricky to capture nicely.
-
And so in this video, what
you're seeing is, on the left,
-
you're seeing four samples per
pixel and about 16 raised shot
-
per pixel total. They're
bouncing around a bit, and the
-
de noised is on the right. Now,
this is not real time at this
-
point. It's about seven frames
per second for the calculation
-
going on here. But you can see
that this denoised result is quite
-
nice. Here's the final result
using denoising, and if you
-
want, you can compare it to the
original. There will be links on
-
the website. It looks quite
nice. You have to really kind of
-
freeze frame and do a side by
side to see where there are
-
slight differences between this
and the one where they traced
-
1000s of rays per pixel. De
noising, to me, is magic. To
-
summarize, it's just this cool
technique that can work
-
surprisingly well. It's cleaning
up a lot of problems and a lot
-
of undersampling, where we'd love
to have more samples, but we
-
can't. And I think, to me, it's
what really made ray tracing
-
jump ahead a little bit more
quickly than people expected. I
-
think we were all sort of thinking,
Well, Ray tracing eventually
-
there will be hardware, but de
noising really enabled a great
-
leap. You know, instead of
needing 1000s of samples or
-
hundreds of samples, or even 10s
of samples, we can get by with
-
just a few samples in many, many
situations. To conclude, I'd
-
like to have one more quote. So
we started this whole series
-
with: "There's an old joke that
goes, ray tracing is the
-
technology of the future and
always will be." Well, the future
-
is here. And I like this quote
from Steve Parker, which is, "Ray
-
tracing is simple enough to fit
on a business card, yet
-
complicated enough to consume an
entire career." For further
resources, see the website. Ray
-
Tracing Gems is a book I highly
recommend, given that I co-edited
-
it, and it's free for
-
download, and I hope you take
advantage of it, and thanks for
-
letting me have your time.
-
What did you think? Yes,
actually, it's related
-
to my thesis in the
-
course on deep learning. I
thought, I mean, Ambient Occlusion
-
was pretty cool. Anyway,
how can we do the
-
denoising thing in pbrt? Because
often we get out of gamut
-
pixels. Can we? Can we reduce
that using like denoising?
-
I'm not
-
sure. Sure, yeah,
-
I'm not sure if that needs a
GPU.
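The idea behind the denoising shown in the video can be illustrated with a deliberately tiny sketch: a 3x3 box filter that averages away per-pixel noise. This is purely illustrative Python, not pbrt's code; the learned denoisers discussed in the video are far more sophisticated and preserve edges by using auxiliary buffers such as normals and albedo.

```python
# Toy illustration of denoising: average each pixel with its 3x3
# neighborhood (clamped at the image edges). Real renderers use
# edge-aware, often learned, filters instead of a plain box filter.

def box_denoise(img):
    """Return a copy of a grayscale image (list of rows) with each
    pixel replaced by the mean of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

A filter this simple blurs edges along with the noise, which is exactly why production denoisers need extra information about the scene.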
-
I will investigate. I'm
-
thinking, oh, that would be like
a perfect, like, solution to our
-
out-of-gamut pixels. Is the
denoiser out
-
there? Yes. And did the crown
look familiar?
-
That was the test image that we
looked at earlier this
-
semester.
-
So I just want to touch on our
quiz for today. Thank
-
you. So what is a convenient
bounding volume for shapes? For
some reason that sentence
for some reason that sentence
sounds really weird to my ears.
-
What is a convenient bounding
like, sorry.
-
Okay, I'll accept that it's
maybe a bit of tension between
-
singular and plural.
-
So it was in the videos that it
-
was
-
like objects that emit
light? I was trying to
-
remember exactly what I read.
-
So it's not about emitting; it's
-
so we talked first about circles
and spheres bounding objects in
-
the scene. Then he said, but in
ray tracing, we use boxes, and
-
more particularly,
axis-aligned boxes. Oh,
-
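The axis-aligned bounding box idea mentioned here can be sketched in a few lines. This is illustrative Python, not pbrt's actual C++ `Bounds3` implementation: compute the smallest box enclosing some points, then test a ray against it with the standard "slab" method.

```python
# Minimal sketch of an axis-aligned bounding box (AABB) and the
# slab ray-box intersection test used throughout ray tracing.

def bounds_of(points):
    """Smallest AABB enclosing a list of (x, y, z) points."""
    lo = tuple(min(p[i] for p in points) for i in range(3))
    hi = tuple(max(p[i] for p in points) for i in range(3))
    return lo, hi

def ray_hits_box(origin, direction, lo, hi):
    """Slab test: intersect the ray with each pair of axis-aligned
    planes and keep the overlapping parameter interval."""
    t_min, t_max = 0.0, float("inf")
    for i in range(3):
        if direction[i] == 0.0:
            # Ray is parallel to this slab: hit only if inside it.
            if not (lo[i] <= origin[i] <= hi[i]):
                return False
            continue
        t0 = (lo[i] - origin[i]) / direction[i]
        t1 = (hi[i] - origin[i]) / direction[i]
        if t0 > t1:
            t0, t1 = t1, t0
        t_min, t_max = max(t_min, t0), min(t_max, t1)
        if t_min > t_max:
            return False  # the per-axis intervals no longer overlap
    return True
```

Because each axis is handled independently with two comparisons, axis-aligned boxes are much cheaper to test than spheres or arbitrary oriented boxes, which is why ray tracers prefer them.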
so how are area lights defined
in pbrt?
-
Does
-
it? Doesn't it? Like, use, like
the normals of like, the shape
-
to project lines. Is that right?
-
Oh, we want to attach an emitter.
-
We
-
attach an emission profile, and
-
so we're going to attach that
profile to a square,
-
or whatever we want to
-
have be the area light.
-
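The "attach an emission profile to a square" idea can be sketched very simply. This is a hedged illustration in Python, not pbrt's `DiffuseAreaLight`: the emitter is just a shape, and lighting code draws sample points on its surface instead of using a single point.

```python
# Sketch of a square (parallelogram) area light: a corner point plus
# two edge vectors define the shape, and we sample points uniformly
# on its surface. A constant radiance would be the emission profile.
import random

def sample_square_light(corner, edge_u, edge_v):
    """Return a uniformly random point on the light's surface."""
    u, v = random.random(), random.random()
    return tuple(corner[i] + u * edge_u[i] + v * edge_v[i]
                 for i in range(3))

# Example: a unit square light lying in the z = 2 plane.
p = sample_square_light((0.0, 0.0, 2.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Averaging shadow rays toward many such sample points is what produces the soft shadows a point light cannot give.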
and then, why is it useful to
have bounds for normals? Why are
-
normal bounds specifically
useful in rendering?
-
maybe I just misunderstood what
I meant by when a shape is
-
emissive. But it's useful for
that, and is that
-
when you're having like, a like,
an actual shape for your light
-
source, like, because normally,
lights come from a point, often,
-
is what we do. So if you have
like, let's say, a balloon that
-
was emitting light, that would
be an emissive balloon. Is that
-
correct? Like you're emitting
light from a shape, as opposed
-
to a point, and that's why it's
that's why it's useful the
-
normal bounds.
-
So we can think of like a cone
that contains the normals; it
-
bounds the light using its
shape and
-
So, a
similar thing is, if we have a
-
spotlight, then we're shooting
-
in the cone. And if we can bound
the normals in a similar way,
-
that helps us to determine
what's visible,
-
efficiently.
-
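The cone analogy above can be made concrete with a small conservative test. This is a hedged Python sketch of the idea behind pbrt's normal bounds, not its actual code: if every surface normal of a one-sided diffuse emitter lies within a cone, and even the closest normal in that cone is more than 90 degrees away from the direction toward a point, the light cannot illuminate that point at all.

```python
# Sketch of a normal-bounds culling test for an emissive shape.
# cone_axis / cone_spread bound all of the shape's surface normals;
# to_point is the direction from the shape toward the shading point.
import math

def cannot_illuminate(cone_axis, cone_spread, to_point):
    """Conservative test: True means the emitter definitely cannot
    light the point; False means it might."""
    dot = sum(a * b for a, b in zip(cone_axis, to_point))
    norm = math.sqrt(sum(a * a for a in cone_axis)) * \
           math.sqrt(sum(b * b for b in to_point))
    angle = math.acos(max(-1.0, min(1.0, dot / norm)))
    # Even the nearest bounded normal is beyond 90 degrees from the
    # direction to the point, so no emitted light can reach it.
    return angle - cone_spread > math.pi / 2

# A fixture whose normals all point roughly downward, (0, 0, -1),
# cannot light a point directly above it, in direction (0, 0, 1).
```

This is exactly the "light fixture points downward" situation raised below: the renderer can skip shadow rays to points the fixture provably cannot reach.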
I'll review the answers you
gave.
-
So it's basically for
illumination.
-
I'm late here.
-
From the text: normal
bounds are specifically useful in
-
lighting calculations: when a
shape is emissive, they
-
sometimes make it possible to
efficiently determine that the
-
shape does not illuminate a
particular point in the scene.
-
Yeah. So
-
So it's more of a bound. It
limits it. It's like saying a
-
light fixture points
the light downward, and then it
-
doesn't need to calculate to the
right and to the left of that
-
light. Is that sort of
-
apt? We'll talk about it next
day.
-
And also you
-
can think about
-
how to make that desktop
tetrahedron. We'll
-
talk about that as well. Okay,
-
the pyramid that one,
-
it's not a pyramid. Pyramid has
four sides. Oh,
-
three sides. Okay, yeah, I think
I did program it
-
already. Okay. Anyway, thank you
for today.
-
See you on Thursday.