Raytracing Multiple Objects
2010, Dr. Lawlor, CS 481/681, UAF
Z-Buffer integration with gl_FragDepth
At the moment, our raytraced geometry writes the proxy geometry's depth
value into the Z buffer. This is incorrect, of course, because the real
geometry is somewhere inside the proxy, leading to depth errors where
ordinary geometry (like the ground plane) intersects with the proxy
geometry.
You can fix this by computing a new depth value in your pixel shader
and writing it to "gl_FragDepth" (just like gl_FragColor, but it's a
float for the depth buffer). We can compute the Z buffer depth
value by running our 3D ray-object intersection point P through the
gl_ProjectionMatrix and doing the perspective divide, just as happens
to each vertex. This looks like:
vec3 P = ... world coordinates ray-object intersection point ...
vec4 projP = gl_ProjectionMatrix*vec4(P,1.0);
float depth = projP.z/projP.w; // perspective divide
Note that we usually don't want to also run the point through the
gl_ModelViewMatrix, because we compute ray-object hit points in world
space already.
However, OpenGL also applies a "depth range", set with glDepthRange, to convert the projection matrix's -1 to +1 value into a 0 to 1 range, so we also need to scale and shift our depth to get gl_FragDepth:
gl_FragDepth = 0.5+0.5*depth;
Writing to gl_FragDepth this way works correctly, and allows our
raytraced geometry to coexist in the Z buffer and intersect other
raytraced geometry and ordinary polygon geometry without problems.
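Putting the two steps together, the end of the pixel shader might look something like this sketch, where P is your world-coordinates hit point and "color" stands for whatever shading you've already computed (both are placeholder names here):
vec4 projP = gl_ProjectionMatrix*vec4(P,1.0); /* project the hit point like a vertex */
float depth = projP.z/projP.w; /* perspective divide: gives -1 to +1 */
gl_FragDepth = 0.5+0.5*depth; /* remap to the 0 to 1 depth range */
gl_FragColor = color;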
Moving Raytraced Objects
If I've written the equation to raytrace a sphere, centered at the
origin in world coordinates, my sphere is going to be centered at the
origin regardless of what I do with the proxy geometry or
transformation matrices.
That's rather annoying. It'd be nice to be able to move raytraced objects around.
One way to move objects around is to change their equations so they're
centered at a new location. This turns out to be rather annoying
for some objects, introducing new complexity into the already-hairy
object equation that we'd prefer not to deal with.
An alternative to moving the object is to move all the rays that
intersect the object. This can often be done without too much
trouble, for example by shifting the ray origin at the start of the
ray-object intersection function:
float hyp_intersection(vec3 C,vec3 D) {
    C=C-obj_center; /* move the ray so the object's centered on the origin */
    // Quadratic equation terms:
    ... as before, now using the new ray start point "C" ...
}
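For a concrete example of this pattern, here's a hypothetical sphere version; the names sphere_intersection, obj_center, and obj_radius are made up for this sketch, and are assumed to come in as uniforms:
float sphere_intersection(vec3 C,vec3 D) {
    C=C-obj_center; /* shift the ray so the sphere is centered on the origin */
    // Quadratic equation terms for dot(C+t*D,C+t*D)==obj_radius^2:
    float a = dot(D,D);
    float b = 2.0*dot(C,D);
    float c = dot(C,C)-obj_radius*obj_radius;
    float det = b*b-4.0*a*c;
    if (det<0.0) return -1.0; /* ray misses the sphere */
    return (-b-sqrt(det))/(2.0*a); /* closer of the two hit points */
}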
You can apply an arbitrary linear transformation to rays, including
matrix transformations, although you've got to be a bit careful
applying projective transformations to the ray direction--as a
direction vector, its "w" component should be treated as zero.
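A minimal sketch of that, assuming obj_inverse_matrix is a uniform mat4 you've set to the inverse of the object's (affine) world-coordinates transform--it's not something defined above:
vec3 objC = vec3(obj_inverse_matrix*vec4(C,1.0)); /* ray start is a point: w is 1 */
vec3 objD = vec3(obj_inverse_matrix*vec4(D,0.0)); /* ray direction: w is 0 */
/* ... intersect using objC and objD, exactly as if the object were untransformed ... */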
Curiously, the returned t value from a shifted intersection calculation
actually applies to the original ray, which should not be
modified (see the sketch after the list below). If you move the ray origin permanently, the object (or
its shadow, or its normal vector) is likely to show up in the wrong
place. One way of thinking about this is to explicitly list out all
our coordinate systems:
- OpenGL vertex coordinates are per-object coordinates used for
specifying proxy geometry. The modelview matrix takes them to
world coordinates.
- World coordinates are where we do lighting calculations, and what I think of as the "real" coordinate system everything else is referenced to.
- Intersection coordinates are where we do ray-object
intersections. They're typically referenced off world coordinates
for each object.
- Camera coordinates are used by OpenGL to get things onto the
screen. The projection matrix takes world coordinates to camera
coordinates.
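For example, using the hypothetical sphere_intersection sketched above, the caller stays in world coordinates the whole time, and uses the returned t with the original, unshifted ray:
float t = sphere_intersection(C,D); /* C,D are the original world-coordinates ray */
if (t>0.0) {
    vec3 P = C + t*D; /* hit point, in world coordinates */
    vec3 N = normalize(P - obj_center); /* sphere normal, also in world coordinates */
    /* ... light the hit point here, and compute gl_FragDepth from P as above ... */
}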
Moving the ray is actually sometimes more efficient than moving the
object: a single shift or matrix multiply per ray is often cheaper than
evaluating a more complicated transformed object equation.