Raytracing
CS 381 Lecture, Dr. Lawlor
The basic idea in all raytracing is to write down an
analytic expression for a point on a ray, which is just a half-line
starting at a point and heading in a fixed direction:
vec3 S=...; // starting point of ray--often the camera location
vec3 D=...; // direction ray is travelling--e.g., through a screen pixel
float t=...; // distance along ray to travel (typically unknown)
vec3 P=S+t*D; // a point on the ray
So given a ray (that is, its S and D vectors) and a t value, we can
find the corresponding point on the ray. Conversely, given a ray and some
equation the point on the ray has to satisfy, we can solve for the t value.
For example, say we want to intersect the ray with a plane.
Points P on a plane satisfy the equation dot(P,N)=d, where N is the
plane's normal and the constant d gives the plane's offset from the
origin along N. So the point where the ray intersects the plane
must have
dot(P,N)=d
dot(S+t*D,N)=d
dot(S,N)+t*dot(D,N)=d
t*dot(D,N)=d-dot(S,N)
t=(d-dot(S,N))/dot(D,N)
Notice that we can rearrange the ray-plane intersection equation to
solve for the t value along the ray that intersects the plane! In
other words, with just a little bit of arithmetic, we can *find* the
ray-plane intersection point!
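In code, that solve might look like the following minimal GLSL sketch; the function name rayPlaneT and the -1.0 "no hit" sentinel are just illustrative choices, not part of any standard API:
float rayPlaneT(vec3 S, vec3 D, vec3 N, float d) {
    float denom = dot(D, N); // zero if the ray runs parallel to the plane
    if (abs(denom) < 1.0e-6) return -1.0; // no (usable) intersection
    return (d - dot(S, N)) / denom; // the t value derived above
}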
This is raytracing:
1. Write an equation for the points along your ray. It's almost always P=S+t*D, where the float t is the only unknown.
2. Write an equation that points on your object must obey. This can be as complicated as you want!
3. Plug the ray-points equation (1) into the object equation (2), and solve for t, the position along the ray.
4. The actual intersection point is P=S+t*D.
5. Texture and shade using this intersection point.
For another example, here's how you raytrace a sphere (a code sketch follows the list):
1. For points on the ray, P=S+t*D.
2. For points on a sphere, distance(P,C)=r, where C is the center point of the sphere and r is the radius.
3. For points on the ray that hit the sphere, distance(S+t*D,C)=r. Solve for t like so:
   - distance(S+t*D,C)=r
   - length(S+t*D-C)=r
   - Set SC=S-C. Then:
   - length(SC+t*D)=r
   - dot(SC+t*D,SC+t*D)=r*r; // since dot(x,x) = length(x)*length(x)
   - dot(SC,SC) + t*2*dot(SC,D) + t*t*dot(D,D) = r*r; // distribute the dot product
   - 0=(dot(SC,SC)-r*r) + t*2*dot(SC,D) + t*t*dot(D,D); // a classic quadratic in t!
   - This has the form a*t*t + b*t + c = 0, with a=dot(D,D), b=2*dot(SC,D), and c=dot(SC,SC)-r*r, so the usual quadratic formula gives t=(-b - sqrt(b*b - 4*a*c))/(2*a);
   - Note that we want the *smallest* t (the near side of the sphere), so the "+sqrt" quadratic solution isn't needed.
4. Plug the t from step 3 into the ray equation, P=S+t*D, to get the intersection location.
5. Shade normally!
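Here's that solve as a minimal GLSL sketch; the function name raySphereT and the -1.0 "miss" sentinel are again just illustrative choices:
float raySphereT(vec3 S, vec3 D, vec3 C, float r) {
    vec3 SC = S - C;
    float a = dot(D, D); // equals 1.0 if D is normalized
    float b = 2.0*dot(SC, D);
    float c = dot(SC, SC) - r*r;
    float det = b*b - 4.0*a*c; // negative discriminant: the ray misses the sphere
    if (det < 0.0) return -1.0; // no intersection
    return (-b - sqrt(det)) / (2.0*a); // smaller root: the near side of the sphere
}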
All of this can be
done from inside a pixel shader, which has the nice effect of allowing
us to draw spheres using just one quad--the depth and color variation are all computed via raytracing, and are analytically accurate.
Try it out with my downloadable GLSL raytracing example program (Zip, Tar-gzip)!
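As a concrete sketch, a sphere-impostor fragment shader built around raySphereT might look something like this--the uniform and varying names here are illustrative assumptions, not necessarily those used in the example program:
uniform vec3 C; // sphere center, world coordinates
uniform float r; // sphere radius
uniform vec3 camera; // camera location, world coordinates
varying vec3 location; // this pixel's location on the proxy quad
void main(void) {
    vec3 S = camera; // ray starts at the eye...
    vec3 D = normalize(location - camera); // ...and heads through this pixel
    float t = raySphereT(S, D, C, r); // the solve sketched above
    if (t < 0.0) discard; // ray missed the sphere: this pixel shows nothing
    vec3 P = S + t*D; // analytic intersection point
    vec3 N = normalize(P - C); // sphere surface normal at P
    float diffuse = max(dot(N, normalize(vec3(0.5, 0.7, 0.5))), 0.0); // hardcoded light direction
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}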
In software, we normally find the color down one ray by hit-testing all
the scene geometry and shading based on the closest hit (the smallest
positive t); a sketch of this loop follows below. This makes it easy to
find the color arriving from any direction, which lets us compute true
reflections (just shoot another ray in the mirror-reflection direction)
and true shadows (if a ray makes it from your surface to the light source
without hitting anything, you're lit). It's not obvious how to implement
this "geometry search" raytracing on the graphics card, although people
have done so using one long texture that lists all (relevant) geometry.
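As a hedged sketch of that closest-hit search over a small, fixed list of spheres (the array names and NSPHERES count are assumptions for illustration), in GLSL it might look like:
const int NSPHERES = 8; // scene size, chosen arbitrarily here
uniform vec3 sphereCenter[NSPHERES];
uniform float sphereRadius[NSPHERES];
// Return the smallest positive hit distance along the ray,
// and report which sphere was hit via hitIndex (-1 if none).
float closestHit(vec3 S, vec3 D, out int hitIndex) {
    float bestT = 1.0e9; // effectively "infinity": no hit yet
    hitIndex = -1;
    for (int i = 0; i < NSPHERES; i++) {
        float t = raySphereT(S, D, sphereCenter[i], sphereRadius[i]);
        if (t > 0.0 && t < bestT) { bestT = t; hitIndex = i; }
    }
    return bestT;
}
A reflection or shadow ray just calls the same closestHit function with a different start point and direction.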
But because in general every pixel could hit every piece of
geometry, with classic raytracing we've got O(pg) work to do to render
p pixels and g pieces of geometry. OpenGL "feed-forward" or
"rasterization-style" rendering, where you first think about a piece of
geometry and then look at the pixels it covers, is normally O(p+g):
there's a certain amount of work to do per pixel, and a certain amount
per piece of geometry (vertex setup). For a scene with p=1
million pixels and g=100,000 polygons, the difference between O(pg) and
O(p+g) is substantial--with equal constants, feed-forward rendering is
nearly 100,000 times faster. Of course, smart raytracers optimize away
most hit tests using a spatial search structure (bounding volume trees,
grids, etc.), so in practice the ability to find light in any
direction might only slow down rendering by a factor of ten or so.