Making Points Look Good, and Shadows
CS 480 Lecture, Dr. Lawlor
First off, OpenGL draws GL_POINTs as ugly boxes by default. You can make your points at least look round with:
glEnable(GL_POINT_SMOOTH);
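Note that point smoothing only antialiases the edges if alpha blending is enabled; a minimal sketch of the standard setup:
/* blending lets smoothed point edges fade out instead of cutting off */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);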
Second, points are a fixed size, specified in pixels with:
glPointSize(pixelsAcross);
This is pretty silly, because in 3D the number of pixels covered by an
object should depend strongly on distance (far away: only a few pixels;
up close: many pixels).
Worse yet, you can't call glPointSize during a glBegin/glEnd geometry
batch. This means even if you did compute the right
distance-dependent point size, you couldn't tell OpenGL about it
without doing a zillion performance-sapping glBegin/glEnd batches.
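For example, the CPU-side workaround would have to look something like this sketch, where pointPixels and pts are hypothetical stand-ins for your own size function and point array:
for (int i=0;i<n;i++) { /* one batch per point: painfully slow */
	glPointSize(pointPixels(pts[i])); /* only legal outside glBegin/glEnd */
	glBegin(GL_POINTS);
	glVertex3fv(&pts[i].x);
	glEnd();
}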
Luckily, you can compute point sizes right on the graphics card pretty darn easily, like this:
/* Make gl_PointSize work from GLSL */
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
static GLhandleARB simplePoint=makeProgramObject(
"void main(void) { /* vertex shader: project vertices onscreen */\n"
" gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
" gl_PointSize = 3.0/gl_Position.w;\n"
"}\n"
,
"void main(void) { /* fragment shader: find pixel colors */\n"
" gl_FragColor = vec4(1,1,0,1); /* silly yellow */ \n"
"}\n"
);
glUseProgramObjectARB(simplePoint); /* points will now draw as above */
(This uses my ogl/glsl.h functions to compile the GLSL code using OpenGL calls.)
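Once the program object is bound, you draw points exactly as before, and each one picks up its size from the vertex shader; for example:
glBegin(GL_POINTS); /* sizes now computed per-vertex on the GPU */
glVertex3d(0.0,0.0,0.0);
glVertex3d(1.0,2.0,3.0);
glEnd();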
The vertex shader above scales points by a hardcoded constant, here 3.0,
divided by the "W" coordinate produced by the projection matrix.
Because we project 3D points onto the 2D screen by dividing by W, it
makes sense to divide the point's size by W as well. (For a standard
perspective matrix, W is just the point's depth in front of the camera,
so distant points shrink.)
Another more rigorous approach is to figure out the point's apparent
angular size in radians, then scale radians to pixels by multiplying by
the window height (in pixels) and dividing by the window field of view
(radians). A point of radius r has angular size asin(r/d) when
viewed from a distance d, so:
float d=length(camera-vec3(gl_Vertex)); /* camera: a vec3 uniform, world coordinates */
float angle=asin(r/d); /* r: the point's world-space radius */
gl_PointSize = angle*angle2pixels; /* angle2pixels: a float uniform, pixels per radian */
Of course, we need to pass in the camera location via a glUniform, and
similarly compute the conversion factor between radians and pixels in
C++ and pass that into GLSL.
/* set these while simplePoint is bound with glUseProgramObjectARB */
glUniform3fvARB(glGetUniformLocationARB(simplePoint,"camera"),1,camera);
glUniform1fARB(glGetUniformLocationARB(simplePoint,"angle2pixels"),
	glutGet(GLUT_WINDOW_HEIGHT)/fov_rad);
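Here fov_rad is the window's vertical field of view in radians; assuming a gluPerspective-style 60 degree field of view (an example value), that's just:
float fov_rad=60.0*M_PI/180.0; /* must match the fovy passed to gluPerspective */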
In practice, this GPU point scaling is extremely fast, and looks pretty nice!
Shadows
True shadows are really tricky to get right, but we can fake a "decal"
shadow pretty easily by just drawing a dark glob down on the ground
under the real object (at z=0).
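For example, if the object sits at (x,y,z), its decal shadow is just one more dark point dropped to the ground plane (a sketch; x, y, and z are the object's position):
glColor4f(0.0,0.0,0.0,0.5); /* dark, half-transparent shadow color */
glBegin(GL_POINTS);
glVertex3d(x,y,0.0); /* same x and y as the object, pinned to z=0 */
glEnd();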
But shadows look a lot nicer with fuzzy edges. We could fake
fuzzy shadow edges easily in a pixel shader, but we first need to
figure out where each pixel sits relative to the point center. On
a triangle or quad, you keep track of where you are using texture
coordinates, with a different coordinate on each vertex. But
GL_POINTs have just one vertex each, so the vertex shader's output
"varying" parameters... don't vary!
There's a handy little workaround for this problem called "point
sprites", where the hardware automatically generates texture
coordinates across your GL_POINT. You turn on point sprites with:
/* Smooth shadow edges using GL Point Sprites
(magically makes texcoords appear in fragment program)
*/
glEnable(GL_POINT_SPRITE_ARB);
glTexEnvi(GL_POINT_SPRITE_ARB,GL_COORD_REPLACE_ARB,GL_TRUE);
And then you can read "vec2(gl_TexCoord[0])" from your fragment
shader. These generated texture coordinates run between 0 and 1
across the area drawn for your GL_POINT. You can use these
texture coordinates to fade from transparent on the outside to opaque
black in the center with:
"void main(void) { /* fragment shader */\n"
" vec2 center=vec2(gl_TexCoord[0])-vec2(0.5);\n"
" float cendist=1.0-length(center)*2.0;\n"
" gl_FragColor = vec4(0,0,0,clamp(cendist,0.0,1.0));\n"
"}\n"
Of course, you can pick any colors you like for your shadows.
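As with smoothed points above, this alpha fade only shows up onscreen if blending is enabled (glEnable(GL_BLEND) with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) when the shadows are drawn.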