Raytracing Multiple Objects
CS 481 Lecture, Dr. Lawlor
There are two main approaches to render several raytraced objects:
- Pure Raytracing: Render
everything in a single pass, from the same proxy geometry. You
loop over objects in GLSL, per ray. Basically, you just look for
the object with the closest visible "t" value. There are some
really cool lighting tricks that are only possible with this method:
for example, you can do refraction and reflection quite easily (see
next lecture!).
- Hybrid Raytracing: Render
objects one at a time, with separate proxy geometry for each
object. You loop over objects in C++, and let them fight it out
in the framebuffer and depth buffer (see below). This hybrid
approach lets you mix and match raytraced and conventional polygon
geometry, but you do give up some of the cool features of a raytracer.
Pure Raytracing t Comparison
In a pure raytracer, if two objects are visible onscreen, you simply test every ray against both objects. You then keep:
- The closest object with a valid hit.
This sounds really easy, and if you do it right, it is easy.
Unfortunately, there are a bunch of ways for hits to be invalid
(behind your head, beyond your trim surface, through a pasted-in hole,
etc.), and you can quickly tie yourself in knots.
For example, if you keep a list of valid object hits, you just return
the closest list element. If the list is empty, you missed
everything. Sadly, OpenGL shading language doesn't yet support
lists.
Since you only care about the closest object, you can instead
incrementally overwrite a single stored closest hit, keeping only the
nearest value found so far. This works reasonably well in GLSL.
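For instance, here is a minimal sketch of that incremental overwrite, written inside the fragment shader's main(); the per-object "hit" functions (which return true on a valid hit and write the hit distance to an out parameter) are made-up names, not part of the code above:
/* Sketch: keep only the closest valid hit (the *_hit function names are hypothetical). */
float best_t = 0.0; /* distance to the closest hit found so far */
int best_obj = -1;  /* which object that hit belongs to; -1 means "missed everything" */
float t;
if (sphere_hit(C,D,t) && (best_obj < 0 || t < best_t)) { best_t = t; best_obj = 0; }
if (ground_hit(C,D,t) && (best_obj < 0 || t < best_t)) { best_t = t; best_obj = 1; }
if (best_obj < 0) discard; /* no valid hits: show whatever is behind the proxy */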
Another cool trick is to make each object a class with a virtual "hit"
method, which updates the intersection if it's the new closest
hit. Sadly, OpenGL shading language doesn't yet support virtual
methods.
You can reduce the number of comparisons by giving invalid hits a
huge t value (like 1.0e9), which makes them all far away (never
"closest"). But then you do have to convert every invalid hit to
this value, and checking for a total miss becomes a bit more complex.
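For example, a rough sketch of that "huge t" approach (with made-up per-object intersection functions that each return 1.0e9 on a miss) looks like:
/* Sketch: every miss returns a huge t, so the closest hit is just the minimum. */
float t_sphere = sphere_intersection(C,D); /* hypothetical; returns 1.0e9 on a miss */
float t_ground = ground_intersection(C,D);
float t = min(t_sphere, t_ground);
if (t >= 1.0e9) discard; /* total miss: every object returned the huge value */
vec3 P = C + t*D; /* world-coordinates hit point, used for shading */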
Sometimes I just end up making a big ball of boolean conditions. I don't like it, but it works... eventually!
Z-Buffer integration with gl_FragDepth
At the moment, our raytraced geometry writes the proxy geometry's depth
value into the Z buffer. This is incorrect, of course, because the real
geometry is somewhere inside the proxy, leading to depth errors where
ordinary geometry (like the ground plane) intersects with the proxy
geometry.
You can fix this by computing a new depth value in your pixel shader,
and writing this to "gl_FragDepth" (just like gl_FragColor, but it's a
float for the depth buffer). We can compute the Z buffer depth
value by running our 3D ray-object intersection point P through the
gl_ProjectionMatrix and doing the perspective divide, just like OpenGL
does to each vertex. This looks like:
vec3 P = ... world coordinates ray-object intersection point ...
vec4 projP = gl_ProjectionMatrix*vec4(P,1);
float depth = projP.z/projP.w; // perspective divide
Note that we usually don't want to also run P through the
gl_ModelViewMatrix, because we compute ray-object hit points in world
space already.
However, OpenGL also applies a "depth range", set with glDepthRange, to convert the projection matrix's -1 to +1 value into a 0 to 1 range, so we also need to scale our depth to get gl_FragDepth:
gl_FragDepth = 0.5+0.5 * depth;
Writing to gl_FragDepth this way works correctly, and allows our
raytraced geometry to coexist in the Z buffer and intersect other
raytraced geometry and ordinary polygon geometry without problems.
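Putting those pieces together, the tail end of a raytracing fragment shader might look roughly like this; the shade() function here is just a hypothetical stand-in for whatever lighting you compute at the hit point:
/* Sketch: write both color and depth for the ray-object hit point P. */
vec3 P = C + t*D; /* world-coordinates ray-object intersection point */
gl_FragColor = shade(P); /* hypothetical lighting function returning a vec4 */
vec4 projP = gl_ProjectionMatrix*vec4(P,1);
float depth = projP.z/projP.w; /* perspective divide */
gl_FragDepth = 0.5+0.5*depth; /* apply the default glDepthRange(0,1) scaling */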
Moving Raytraced Objects
If I've written the equation to raytrace a sphere, centered at the
origin in world coordinates, my sphere is going to be centered at the
origin regardless of what I do with the proxy geometry or
transformation matrices.
That's rather annoying. It'd be nice to be able to move raytraced objects around.
One way to move objects around is to change their equations so they're
centered at a new location. This turns out to be rather annoying
for some objects, introducing new complexity into the already-hairy
object equation that we'd prefer not to deal with.
An alternative to moving the object is to move all the rays that
intersect the object. This can often be done without too much
trouble, for example by shifting the ray origin at the start of the
ray-object intersection function:
float object_intersection(vec3 C,vec3 D) {
	C=C-obj_center; /* move the ray so the object is centered on the origin */
	/* ... do object intersection as before, now using the new ray start point "C" ... */
}
You can apply an arbitrary linear transformation to rays, including
full matrix transformations, although you've got to be a bit careful
applying projective transformations to the ray direction: as a
direction vector, its "w" component should be treated as zero.
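Here's a hedged sketch of that idea, with an assumed "obj_from_world" matrix uniform (not part of the code above) that moves the ray into the object's own coordinate system:
/* Sketch: intersect in the object's own coordinates (the matrix name is assumed). */
uniform mat4 obj_from_world; /* takes world coordinates to this object's coordinates */
float moved_object_intersection(vec3 C,vec3 D) {
	vec3 Cobj = vec3(obj_from_world*vec4(C,1.0)); /* ray start is a point: w=1 */
	vec3 Dobj = vec3(obj_from_world*vec4(D,0.0)); /* ray direction: w=0 */
	/* Don't normalize Dobj, or the returned t will no longer match the original ray C+t*D. */
	return object_intersection(Cobj,Dobj);
}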
Curiously, the t value returned from a shifted intersection calculation
still applies to the original ray, and can be used unmodified. But if you move the ray origin permanently, the object (or
its shadow, or its normal vector) is likely to show up in the wrong
place. One way of thinking about this is to explicitly list out
all our coordinate systems:
- OpenGL vertex coordinates are per-object coordinates used for
specifying proxy geometry. The modelview matrix takes them to
world coordinates.
- World
coordinates are where we do lighting calculations, and what I think of
as the "real" coordinate system everything else is referenced to.
- Intersection coordinates are where we do ray-object
intersections. They're typically referenced off world coordinates
for each object.
- Camera coordinates are used by OpenGL to get things onto the
screen. The projection matrix takes world coordinates to camera
coordinates.
Moving the ray is actually sometimes more efficient than moving the object.
GLSL "uniform" Variables: Smuggling Data into GLSL from C++
To do raytracing or specular lighting, we need to know where the camera is.
Only C++ knows this. Luckily, you can smuggle data from C++ into
GLSL using a GLSL "uniform" variable.
Inside your GLSL (vertex or fragment) shader, you declare a uniform variable just like a 'varying' variable:
uniform vec3 cam; /* camera location, world coordinates */
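For example, a raytracing fragment shader typically uses the camera location to build the per-pixel viewing ray, roughly like this; the "world_pos" varying, carrying the proxy geometry's world-space position from the vertex shader, is an assumption about your own shaders:
/* Sketch: build the per-pixel viewing ray from the camera location. */
uniform vec3 cam; /* camera location, world coordinates (set from C++ below) */
varying vec3 world_pos; /* assumed: proxy geometry's world-space position, from the vertex shader */
void main(void) {
	vec3 C = cam; /* ray origin */
	vec3 D = normalize(world_pos - cam); /* ray direction */
	/* ... hand C and D to your ray-object intersection code ... */
	gl_FragColor = vec4(0.5+0.5*D, 1.0); /* placeholder: visualize the ray direction */
}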
After compiling, from C++ you can set this uniform variable. You
do this by passing its string name to "glGetUniformLocationARB", which
returns an integer index (into a table of variables somewhere in the
guts of your graphics card). You can then set a "vec3" uniform's
value by passing the uniform's location to glUniform3fvARB (or set a
vec4 with glUniform4fvARB, etc.). Here I'm passing one (1)
float-pointer as the camera location, a C++ variable named
"camera" (from ogl/minicam.h):
glUseProgramObjectARB(prog);
glUniform3fvARB( /* set the GLSL uniform variable named "cam" */
	glGetUniformLocationARB(prog, "cam"),
	1, /* <- number of variables to set (just one vec3) */
	camera /* the C++ variable the new uniform value is read from */
);
Calling
"glGetUniformLocation" every frame is somewhat expensive, and rather
annoying. So I've got a wrapper macro (in
ogl/glsl.h) that caches the uniform's location in a static variable,
for faster execution and a simpler interface:
glFastUniform3fv(prog,"cam",1,camera);
Make sure your program is still in use before you try to set
uniform variables; the "glUseProgramObjectARB" call above is only needed
once, but it is needed!