Outline for Today
pbrt + Radiometry, Spectra, and Colour
Administration
Today
For Next Meeting
Wiki
Link to the UR Courses wiki page for this meeting
Media
Transcript
Audio Transcript
-
Okay. Ed Jerome is working today.
-
So, I owe you a choice activity about assignments; I'll do that today. And I thought, because in CS there's an option of dealing with the code, I would show you a video of Matt Pharr walking through the code. It's a little bit dated, because it was done in 2020, but I think it might be of value if you're thinking about doing something with code, to see how things are set up. And then I want to show you some examples with pbrt. I've been trying some examples with different settings, so we'll look at that as well, and then we'll get into the beginning of the chapter four stuff.
-
Now, pbrt version 4 and tev are installed in Classroom 135, so you're able to go in there when the room's not being used, if you want to use that software in case you don't have your computer with you.
-
Is there a code to get into that room?
-
Yes, and you have to swear that you're not going to share the code with anyone. I will post it in the announcements in UR Courses today.
-
Is it on all the machines in there, or just one?
-
That's what they told me. It's on the desktop; both programs are on all the desktops.
-
Okay, just so you're forewarned.
-
Okay, so we'll watch this video, with random YouTube ads sprinkled in; consider those your entertainment. Yeah.
-
Oh, that's it. Okay, sound... Oh, the sound settings are there.
-
[A YouTube ad plays.]
-
So this is running on my laptop. Alrighty.
-
Today we released an early-access version of the next version of pbrt, which will be described in the forthcoming fourth edition of the Physically Based Rendering book.
-
This is an early release. It has rough edges and it's not very documented; it will, in time, be documented through the book, and we're going to focus our documentation efforts there. However, we hope it'll be understandable if you know pbrt already. But I did want to make a short video just to walk through a couple of new things in how the system is structured. So this is more focused on the system structure, and in particular on how the GPU back end is implemented. I'm not going to talk about the new graphics algorithms; there are a whole bunch of them, and they're described in the README. For those, at this point, until the book's done, if you want to understand their implementation you'll need to refer to both the code and the corresponding paper on that topic. Okay, so first I just want to start briefly with one of the things I'm excited about: the system now supports displaying the image as it's being rendered, using Thomas Müller's tev image viewer. tev is on GitHub; it's a really nice image viewer, and now pbrt can talk to it over a socket and send images.
So just to show that quickly: you pass this --display-server option on the command line, with the IP address of the machine that tev is running on; tev is running on my laptop here. It takes a few seconds for pbrt to get started, but once rendering begins you can see the image as it's being rendered. It's fun, and it's useful: you can zoom in, pixel-peep, all that sort of thing. It's just kind of cool to see your images being rendered.
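As a rough sketch of that invocation (the host address here is illustrative; 14158 is tev's usual listening port):

    tev                                                  # on the laptop: start the viewer
    pbrt --display-server 192.168.0.10:14158 scene.pbrt  # on the render machine

pbrt then streams the in-progress image to tev over the socket as rendering proceeds.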
So if you check out pbrt, it's a pretty similar directory structure to before, with a couple of changes: there's src/pbrt and the external dependencies, like OpenEXR, and we'll just go into src/pbrt. The biggest change is that, whereas before we had separate source files for each and every shape and camera and filter and all that stuff, now we've collected those, so all the cameras are in cameras.h and cameras.cpp, and so forth. There are a couple of different subdirectories now, and those should be fairly clear. There's util for lower-level utility code: math, low-level matrices, data structures and containers, and stuff like this. There's cmd, which has, for each binary executable produced by the build, the file with the main function. And then there's base, cpu, and gpu.
-
So let's start with base. One of the biggest changes in terms of system organization is that we've moved away from virtual functions for dynamic dispatch. This was one of the enabling changes for the GPU port, and it has an effect if you're trying to add a new shape to pbrt. So I just want to briefly walk through a little bit of that.
-
[A YouTube ad plays.]
-
So what we have now is this base directory. For each of the main base classes in the system — shape, light, camera, material — all of those have a header file in base, and each header in base basically does two things. First, it defines these handle types. So here we have a ShapeHandle, and these handles are a replacement for a Shape pointer with virtual functions. The handles inherit from TaggedPointer, and TaggedPointer is some kind of C++ template insanity, but it gives us the engine to do the dynamic dispatch; you don't have to look at it too closely. The point is that you register each implementation of a particular type in the TaggedPointer template parameter list. So you can see here all the shapes that pbrt now supports, and by registering them in this way, (a) it's able to do dynamic dispatch, and (b) in the GPU path it will be able to do some fancy things, like iterating over all the types — say, for each type of sampler. Second, these headers in base define the interface that each implementation of the type has to fulfill. So every shape has to provide an implementation of these methods, and it's essentially the same set of methods that pbrt-v3 required from shapes, but there they were expressed through abstract base classes and virtual functions. Here you don't have the same level of compile-time enforcement of that stuff, but hopefully it's pretty straightforward.
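To make the dispatch pattern concrete, here is a minimal sketch of the idea: a hand-rolled, two-type tagged pointer. This is not pbrt's actual TaggedPointer template, just an illustration of tag-based dispatch without vtables.

    #include <cassert>
    #include <cstdint>

    class Sphere   { public: float Area() const { return 1.0f; } };
    class Triangle { public: float Area() const { return 0.5f; } };

    // The pointee's type index is packed into otherwise-unused high bits
    // of the pointer instead of relying on a vptr inside the object.
    class ShapeHandle {
      public:
        ShapeHandle(Sphere *s)   : bits(reinterpret_cast<uintptr_t>(s) | (1ull << 57)) {}
        ShapeHandle(Triangle *t) : bits(reinterpret_cast<uintptr_t>(t) | (2ull << 57)) {}

        // Dynamic dispatch is a switch on the tag rather than a virtual call.
        float Area() const {
            void *p = reinterpret_cast<void *>(bits & ptrMask);
            switch (bits >> 57) {
                case 1: return static_cast<Sphere *>(p)->Area();
                case 2: return static_cast<Triangle *>(p)->Area();
                default: assert(!"unknown tag"); return 0;
            }
        }

      private:
        static constexpr uintptr_t ptrMask = (1ull << 57) - 1;
        uintptr_t bits;
    };

    int main() {
        Sphere sphere;
        ShapeHandle h(&sphere);
        return h.Area() == 1.0f ? 0 : 1;  // dispatches to Sphere::Area
    }

The registration the video describes corresponds to listing each concrete type as a template parameter, so that a dispatch switch like this can be generated rather than written by hand.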
-
So that's the key change as far as the system structure of the CPU port of pbrt goes; that's basically the only change. Well, I mean, there's more, but that's the most important change, or the only one worth really going into here. Next I want to say a bunch about the GPU implementation and walk through some of that code, but let me motivate that first.
-
first. Okay, so I've logged into
another I'm logged into another
-
machine here. This one has 32
cores. It's one of the thread,
-
or photos with 32 cores and 60
more threads. So let's start out
-
by rendering that on that CPU
and see what that looks like. So
-
here I'm going to pipe the pipe
the image to the to the instance
-
of a tavrick in my laptop. Here,
just go throw that out. Start
-
rendering now, we'll wait just
for the same view of parts and
-
stuff. And it's much faster,
right, you know. And of course,
-
you know, it's clocking charge
on my laptop. It's got a lot
-
more cores, you know. And we see
this image, you know, chugging
-
along nicely. If you go back,
it's supposed to render 56
-
samples per pixel. I just killed
the render, but you know, we're
-
looking at about 565, seconds to
render it at 256, samples per
-
pixel. Okay, but let's, try
doing that on the GPU. So same
-
command line. If you build a GPU
support, you still have to do
-
dash dash GP on the command line
to enable it.
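So the invocation becomes something like this (scene name illustrative):

    pbrt --gpu --display-server 192.168.0.10:14158 scene.pbrt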
-
Now, this is actually looking slower than it is because of network buffering; that first set of samples was done much more quickly than it appeared. If we go back to the shell, it's actually done, and if we scroll back up, it's about 12 and a half seconds to render that at 256 samples per pixel on the GPU. To me, that's pretty exciting, because if we go back to the 165 seconds we were trending toward for the 32-core CPU version, we're more than 10x faster than a beefy modern CPU. So that's the motivation for the little bit of complexity we're going to go into now, for how the GPU back end works. I should add, as I explain all this stuff: if you're working with pbrt, or extending it at the level of graphics and graphics algorithms — if you're adding a new light source or something like that — you shouldn't really need to worry about what I'm about to show you about how the GPU stuff is wired together. It should just work: your new light or whatever will just work on the GPU path, in the same way that you didn't have to worry about the implementation of the bidirectional path tracer when you were adding a light to pbrt before. You just fulfill the interface, and it should be the same sort of thing for the GPU path.
-
But I think there's some
interesting stuff to see. So I'm
-
just going to quickly go through
some of that. So the big
-
challenge in the GPU version is
polymorphism. And pbrt, you
-
know, by design, is highly
extensible, and there are all
-
sorts of incidents of different
types of of the same thing. And
-
what this means, you know, and
on the CPU, it's fine, right?
-
You're, you're rendering a
single pixel sample in a thread,
-
you know, if your thread just
kind of does whatever it needs
-
to do for the type of light
attach, or the type of shape it
-
hit, or whatever, you know, by
the GPU, because you have this
-
kind of, you know, groups of
threads executing together,
-
stuff, polymorphism Can, can
have much bigger effect on
-
performance. So to get good
performance GPU, you have to
-
address this, and kind of that's
how the design of the back end,
-
the GPU back end, was informed.
The key ideas here are a couple of things. We have a sequence of kernels, each doing some chunk of work in the rendering computation, and these kernels are fed by queues: preceding kernels in the pipeline queue up work for subsequent kernels, and each kernel consumes the work from its queue and operates on it in parallel. It's pure data parallelism — elements are processed independently. This is effectively the architecture described in the "Megakernels Considered Harmful" paper by Laine and colleagues. It's a really great paper: if you'd like to understand why the GPU stuff in pbrt is structured the way it is, it beautifully lays out the trade-offs and issues. So definitely check that out.
-
kernels in the system, and we'll
walk through some of this code
-
shortly, but just to give a high
level view of how it all works,
-
you know, and it's the classic
structure of the path tracer,
-
just decomposed into kernels
with cues feeding them, we
-
generate a camera aid, you know,
for edit pixel, for example. In
-
this case, we have a separate
kernel to generate random
-
samples, sec, and then we find
the closest hit and then meet,
-
queue up and do additional
kernels based on what happened
-
at that hit. This version of
pipeline does not include some
-
sort of scattering, or
volumetric scattering, which
-
adds a number of additional
kernels that I've discovered.
-
Most of the work happens the
key, the key stage, the key to
-
the efficiency of it, is the
sorting stage that happens
-
during intersection. So when a
gray surface intersection is
-
found, the work for that is done
in a different cue, is put on
-
different cues based on the
specific material found. So
-
everything diffuse goes here,
and everything dielectric goes
-
here, inductors and so, you
know. So what that does is it
-
lines up kind of coherent work
to work on. So we can work on
-
all the few stuff all at once.
That's more coherent than
-
interleaving different material
types, you know. And then in
-
this kernel, which we'll look at
shortly, you know, basically all
-
the usual stuff happens. You
know, we evaluate the textures.
-
We get a PSDF, choose a light
source, choose point and light
-
source, send PSDF, view, a
shadow ray, impute, indirect
-
array, brush. So classic path
tracer. It's kind of the heart
-
of the heart of your regular
path tracer. So, yeah. Okay, so
-
let's, let's go look at the code
for this. Okay, so this kind of
-
material evaluation stuff I just
described is in our GPU service,
-
scattered at cpp. I should add
that, you know, all of this
-
stuff will probably shift around
in the coming months. So this
-
may be a little scale. Okay, so
this material in BSDF, somewhat
-
work happens in this evaluating
material, BSDF method of the
-
path integral. You'll notice
here that it's a templated
-
method. So it's templated on the
type of material, and then a
-
texture evaluator thing, which
I'm not going to go into today.
-
It's just a little wrapper on
texture evaluation, but
-
basically in other code, which
we won't look at now, what's
-
happening is that code it's
looping over all of the types of
-
material in dbrt via that tag
corner thing we saw before,
-
which allows us to kind of
iterate over the types and then
-
for each type, it's calling this
method with that particular
-
material type. So then we get
instantiated for diffuse
-
Now we have this ForAllQueued construct. What it is, is it wraps a parallel for loop over one of these queues feeding into a kernel. Each kernel has a queue going in, with its own little struct that describes what the work items in the queue are, and then we can do a parallel for over that. It takes a name, which we use for printing stats and stuff; the queue to pull work from — in this case the eval queue is a multi-queue that wraps a queue for each type of material, so we have to ask it for the queue for our specific material type — the maximum queue size; and then it takes a lambda. The key thing with the lambda is that its first argument is the work item. This is a structure that holds all the values for a particular work item on the queue; you're just given the struct — hey, here's your input — and then you start doing work. From here on out, it's pretty much the same as the path tracer on the CPU.
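As a shape for what that looks like, here is a minimal, CPU-only sketch of the pattern. The names (ForAllQueued, WorkQueue, EvalMaterialWorkItem) mirror the idea but are simplified stand-ins, not pbrt's actual declarations, and the serial loop stands in for a kernel launch.

    #include <cstddef>
    #include <string>
    #include <vector>

    // A work item carries everything the kernel body needs.
    struct EvalMaterialWorkItem {
        int pixelIndex;
        float shadingNormal[3];
        // ... texture coordinates, outgoing direction, throughput, etc.
    };

    template <typename WorkItem>
    class WorkQueue {
      public:
        void Push(const WorkItem &w) { items.push_back(w); }
        size_t Size() const { return items.size(); }
        const WorkItem &Get(size_t i) const { return items[i]; }
      private:
        std::vector<WorkItem> items;
    };

    // Runs func over every queued item; a real implementation would launch
    // a GPU kernel or a parallel for, and use desc for stats reporting.
    template <typename WorkItem, typename F>
    void ForAllQueued(const std::string &desc, const WorkQueue<WorkItem> &q,
                      size_t maxQueued, F func) {
        for (size_t i = 0; i < q.Size() && i < maxQueued; ++i)
            func(q.Get(i));
    }

    int main() {
        WorkQueue<EvalMaterialWorkItem> evalQueue;
        evalQueue.Push(EvalMaterialWorkItem{0, {0.0f, 0.0f, 1.0f}});
        ForAllQueued("eval diffuse material", evalQueue, size_t(1) << 20,
                     [](const EvalMaterialWorkItem &w) {
                         // evaluate textures, get the BSDF, sample lights,
                         // push shadow and indirect rays onto their queues...
                         (void)w;
                     });
    }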
-
The difference is just that when you want, say, the shading normal, it turns out that's one of the values stored in the work item you're given; you just pull it out of the work-item struct. So you're getting that out of the queue, and everything else follows similarly: we do the same bump-mapping calculation as before, the same bump-mapping function to compute the partial derivatives for the BSDF. There's a little bit of stuff related to the G-buffer — don't worry about that — but from there on out it continues in a very familiar way. We sample the BSDF for the indirect lighting: did we get a sample? If so, we have the throughput, some PDFs, a little more work; apply Russian roulette; and then we spawn the indirect ray. Whereas before we would just be doing a while loop in the integrator, where we go around again to trace the indirect ray, in this case we push the indirect ray onto the ray queue for the next bounce, and eventually the kernels will run to consume it. I won't walk through all the details for direct lighting, but it's the same idea; you can check out the code yourself. It's the same structure as the CPU version.
I'd like to talk briefly. Okay,
so this is kind of work items
-
that H and all the different
types of work item are declared
-
there. So like, you know, we
have a ray work item. So that's
-
the, you know, the state for,
you know, the ray, and a ray
-
cube to be traced the ray, and
you know which pixel it's
-
associated with, and, you know,
a little bit of information
-
about the previous intersection
to use for mis light source, you
-
know, and just kind of the state
of array. So these are things
-
that just be local variables,
you know, in the old path, you
-
know, we have a different table
type for escape grades. When a
-
ray doesn't intersect anything.
It gets put on a queue. So we
-
can deal with the fact that
there's, like an environment map
-
that has to be sampled. So work
items that age has all of these
-
declarations of all these work
items, or all the sorts of
-
things that are stored in the
queues. Now, a couple things
-
Now, a couple of things about that. One thing to note — and again, this is something you don't need to worry about if you're just writing rendering code, but if you're an expert GPU programmer you may be wondering about data layout — is that here I've just defined these structs, and in fact the kernels are just passed these structs by value. Now, the optimal layout for these things in memory is a structure-of-arrays layout, where for each member — an integer pixel index or whatever — instead of having it in regular struct layout, it's better if all the pixel indices for the work queue are contiguous in memory. That way, when a group of threads reads their pixel indices, that's a contiguous read, which performs well. So a structure-of-arrays layout is a much better way to do this.
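As a minimal illustration of the difference (simplified types, not pbrt's):

    // Array-of-structures (AoS): adjacent threads reading pixelIndex touch
    // memory locations strided by sizeof(RayWorkItem).
    struct RayWorkItem {
        float o[3], d[3];  // ray origin and direction
        int pixelIndex;
    };
    RayWorkItem aosQueue[1024];

    // Structure-of-arrays (SoA): thread i reads pixelIndex[i], so a group of
    // threads loads a contiguous run of memory that coalesces nicely.
    struct RayWorkItemSoA {
        float ox[1024], oy[1024], oz[1024];
        float dx[1024], dy[1024], dz[1024];
        int pixelIndex[1024];
    };
    RayWorkItemSoA soaQueue;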
-
Now, it can get pretty grungy to redeclare all of your structs and write the logic to re-swizzle them in memory, and I couldn't find an existing way of doing that that I was happy with. So we have this kind of hacked-up structure-of-arrays compiler called soac, which is one of the things that gets built. This is the workitems.soa file, and it's kind of C-like, but not C. Effectively, it's a redeclaration of the types with some additional information about laying these things out flat in memory. soac parses a very small subset of C, basically, to figure out these types, and then automatically generates code to read and write them in the structure-of-arrays layout. This is nice because (a) it's tedious to write that code by hand, and (b) doing so can be error-prone — but it also lets us add some syntactic sugar.
-
So we'll go back and look at camera-ray generation for a second. Camera rays are a little different in that they don't consume from a queue — this is where it all starts — and then they push rays into a queue. It starts out with a GPU parallel-for, and it does all of the usual things to generate a camera ray, eventually calling the camera's ray-generation method. But then, when it's writing out the state for a particular pixel, you can see the syntax is really clean. You might expect this to be written as pixelSampleState[index] if it were an array-of-structures layout; instead we have the indexing in a slightly funny place. From the user's perspective, this is all you need to do, but under the covers this is actually transformed into an SoA write — a write into an SoA array for the pixel radiance value.
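Here is a sketch of how that indexing trick can work; the types are hypothetical, but soac generates code in this spirit:

    struct Float3 { float x, y, z; };

    struct PixelSampleStateSoA {
        static constexpr int n = 1024;
        struct LArrays {
            float x[n], y[n], z[n];
            // Assigning through operator[] scatters the struct into the arrays.
            struct Ref {
                float *x, *y, *z;
                void operator=(const Float3 &v) { *x = v.x; *y = v.y; *z = v.z; }
            };
            Ref operator[](int i) { return Ref{x + i, y + i, z + i}; }
        };
        LArrays L;
    };

    int main() {
        PixelSampleStateSoA state;
        state.L[7] = Float3{0.1f, 0.5f, 0.9f};  // reads like AoS, stores SoA
    }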
-
The last thing I wanted to mention: we saw with materials that we iterate over all the material types and have a separate kernel for each one. Just to show one more example of that: samplers. In this case we don't have to iterate over all the samplers; instead, on the CPU side we ask, what type of sampler do we have? Oh, it's the Halton sampler. And then we dispatch the specialized kernel, based on the sampler type, to generate the random samples. We could do this on the CPU as well; it wouldn't have a lot of benefit there — it would save us the indirect dispatch — but on the GPU it's a big win, because we can have the kernel implemented knowing the concrete type of the sampler. There are multiple kernels, one per sampler type, and we call the one implemented for that concrete type. By doing so, the sampler can just be stack-allocated, which in turn allows it to live not in graphics memory but in registers on the GPU, which is a lot faster, and that works out much better.
-
there. I think that's all the
key stuff, hopefully enough to
-
get folks going and enjoy VRT
before. Please send thought
-
reports, suggestions, anything
would be fantastic. And I'll
-
wrap up by rendering the
landscape scene from the GPU. So
-
this is that scene from loud,
loud, work, and this is going to
-
keep Google Now, again, we have
this leg because it's coming in
-
over the network. But see it
renders pretty quickly. And so,
-
for examples, we just finished
in 1.3 seconds. So that's cool.
-
Okay, all right. Thank you. So
-
hi, I'm Daniel Wright, and today
I'm going to talk about using
-
radiance caching to solve real
time global illumination.
-
Surprised no ad, no ad. And it
seems like a relevant
-
suggestion,
-
just The next video, but It's
good one.
-
It's okay, so,
-
Any thoughts about that video? Anything you'd like to share?
-
I noticed he kept saying, "this is what's different" — what was different from the previous version.
-
Yeah. It helps if you're familiar with the previous version. It would have helped if you were more familiar with the previous version? Yes.
-
So I just want to show you where I got this from. Under Resources — I should put a link for that. I visited Benedikt Bitterli's site and chose an image there. I picked "The Grey & White Room," and then I downloaded the pbrt version 4 input file, or scene-specification file. I rendered this myself on my laptop, and this is 1024 samples per pixel.
-
So here are some previous versions. Here's me choosing the wrong parameters for the image: I had left the maximum depth for the integrator at its default instead of taking it from the input file. Let's look at the input file: it's a path integrator, with "integer maxdepth" 65. Instead of 65 I had used the default, and it seemed to generate more work, because the estimated time kept increasing.
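For reference, that setting lives in the rendering-options part of the scene file, before WorldBegin, in a line something like:

    Integrator "path" "integer maxdepth" [ 65 ]

If the parameter (or the whole Integrator line) is omitted, pbrt falls back to its built-in default.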
-
The transparency in that window makes it look like it's been burnt — like the printer or something.
-
Yeah. I'm not sure how... here, let me see if I can open another window. That looks a little better.
-
So what's the
keyword to separate the file
-
into two? I Yeah,
-
so there's no world end. I think
that's appropriate, because we
-
don't need to think about the
end of the world. There's enough
-
stuff going on to make us think
of keep that in in mind. So the
-
first part we describe the
rendering parameters, and then
-
we'll begin then we describe the
textures and the shapes and how
-
they're put together. You
-
So we have a variety of materials to describe the different elements of the scene: we have leaves for the plant, the plant pot, branches. Those are the materials; then we have the shapes, and then we position them, and at the end we have a light source.
-
Let me copy this and... Oh, so this is 128 samples per pixel.
-
So if you wanted to, you could go through the parameters in the file and change, say, the leaf texture to be gold again, for instance, like we did before. Yeah.
-
I could have done fewer samples. Still, this — about five minutes — is faster than some of the other ones: 200 seconds instead of 2000. Yeah, still a fraction of the time.
-
So I was gonna say, if you're
running files in CO 135 your job
-
ends when you log out, you can
set Up to run, have a coffee and
-
come back.
-
Snap. Make it easier. Well, I
have to get out. Okay, let me
-
start running this and then I'll
cut back after this class would
-
have been
-
heavy, yeah, it's a good
feature, though, for letting us
-
get access To the lab that was
-
the best pass so far.
-
It cleaned up a lot of pixels in the living room. Yeah.
-
And I noticed this interesting feature — I hadn't tried this before: you can play through the images.
-
Interesting. And one of these doesn't belong — very interesting, kind of like a jump scare.
-
So we can see that the floor is getting cleaned up here, in a different way. Yeah. Oh, okay, but the ceiling and the other details that don't get as much light are still quite noisy. Oh, it's done now — that's with 128 samples.
-
Yeah, and it also generated a segmentation fault at the end. So it finished the work, but there's a segmentation fault. If anyone's interested in finding out the cause of that...
-
I realized I needed to put two dashes in front of stats. Let's do 32 samples per pixel. And I did one test with the wavefront integrator — I talked about that a little bit, and in that paper the word "wavefront" was used. That's the GPU-style integrator: if we give the wavefront argument to pbrt, it will use the method from the GPU but run it on the CPU. And we can specify the samples per pixel on the command line instead of editing the file.
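Putting those options together, the command looks something like this (scene filename illustrative):

    pbrt --wavefront --stats --spp 32 scene.pbrt

Here --wavefront selects the GPU-style wavefront integrator running on the CPU, --stats prints statistics after rendering, and --spp overrides the sampler's samples-per-pixel count from the scene file.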
-
So where is it storing all these images? Because I notice they all have the same name, just with different numbering after it.
-
I'm not sure; I'm guessing they're cached, because the files with these earlier versions in them don't exist anymore.
-
So here's some interesting
stats. Yeah, so 3300 67,657, out
-
of gamma pixels clamped to 01,
so that's about 35% of the
-
pixels are not resolved. The
BVH, that's bounding volume
-
hierarchy. So we just give some
stats about
-
about these different things. So
geometry, buffer, cash hits. So
-
I
-
Is there a difference in performance if you were to use a different type of graphics card? If you did the same passes with the same settings, would it be the same output, just in a shorter time?
-
These are Monte Carlo methods, so details can differ, but we'll get to a point that we're happy with. If we can converge on something, the variability will be negligible.
-
Would it just be faster on average, or would it be roughly the same speed? Because I have two different graphics cards, and I'm wondering if they would perform in a similar manner — whether I could use either of them, despite the fact that they're different generations and such.
-
Why don't you test it and find out?
-
I'd have to plug in the other card.
-
So with the one that's plugged in, were you able to build the GPU version?
-
I haven't had a chance.
-
I have one that's like
a 4000 series, and I have one
-
that's a 16 series, and it's
just, I'm curious if I should
-
maybe just plug in the other
one, because I know that
-
sometimes if you overuse a
graphics card, who causes
-
issues, probably not doing these
renderings, though, as far as I
-
should keep my one safe, or if I
should just use it, I think
-
that's mostly when people are
doing like Bitcoin mining or
-
whatever, that they ruin their
graphics cards. I'm just a
-
little cautious. That's all i
-
Yes, I'd be
cautious about Bitcoin mining.
-
So anyway, these are somewhat self-explanatory: geometry intersections — ray-triangle intersection tests, 96 million. And there's one light, 20 materials, and 13 textures.
-
Anyway, I want to show you — so this talks about rendering the scenes from the pbrt-v4 scenes repository. I haven't tried this before.
-
Oh, that's more
reasonable pixel samples per
-
pixel time
-
at the front of the command,
-
yeah, That's that
gives the timing information for
-
The process. I
-
No, it's, it's still working. I
-
Yeah, it'd be nice to know that
It's still doing something
-
productive. I
-
Um — radiance. Irradiance sounds good. Plus intensity... intensity, and... right?
-
Anyone? Thrown off a bit by the multiple choice, where you choose all the... The other question: what's an adequate model?
-
Geometric optics.
-
What are some features of that? What does it allow us to assume?
-
It describes how light interacts with objects that are much larger than the light's wavelength.
-
Okay, and what assumptions does this permit?
-
We can linearly combine different contributions.
-
What are some other ones?
-
No polarization.
-
Energy conservation; no fluorescence or phosphorescence.
-
And steady state.
-
So what does steady state mean? The lights in this room are technically always flickering — we just can't see it — so it just assumes a constant light, right?
-
Yeah — or rather, it's about how the energy is distributed over time. Light very quickly reaches the steady state, almost instantaneously, so light has reached equilibrium and the radiance distribution is not changing over time.
-
So these aren't wild assumptions. They just help us deal with light in a more straightforward way.
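In symbols — a standard way to state two of these assumptions, not something written on the board — linearity says the radiance from multiple sources superposes, and steady state says the radiance distribution is time-invariant:

    L(p, \omega) = \sum_i L_i(p, \omega), \qquad \frac{\partial L}{\partial t} = 0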
-
It's 216
-
Thank you for today. Have a good weekend, and I will have things on UR Courses for you to look at.
-
Thanks again. Bye.
-
Thanks, you too. You too.