The Modern Approach to OpenGL
2010, Dr. Lawlor, CS 481/681, UAF
Graphics Processing Units (GPUs) have gained incredible superpowers
over the past ten years. No longer limited to plain
fixed-function OpenGL, you can now perform arbitrary computations at
each pixel (including loops, function calls, and arbitrary
table/texture lookups), and you can perform those computations
ridiculously fast.
This is starting to seriously affect how we do interactive
graphics. Back in the day, you were stuck with polygons and
lines, and mostly had to learn how to get OpenGL to draw them.
Today, drawing polygons is really just a convenient way to
specify which pixels are supposed to run your pixel shader
program. The actual object you're drawing may be constructed
entirely computationally inside the pixel shader, and have little or
nothing to do with the enclosing "proxy" geometry. For example,
instead of tessellating a sphere into a million tiny polygons, you can
draw one big quad and analytically determine which pixels actually
show the sphere.
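As a concrete example, here's a minimal sketch of such a sphere "impostor" as a GLSL fragment shader, stored the way C++ programs usually store shader source: as a string constant. The varying "pos" is an assumption here, taken to be written by a matching vertex shader as the quad's position running from -1 to +1 across the quad; the light direction and colors are made up too.

    /* Minimal sphere-impostor fragment shader: runs once per covered pixel. */
    const char *sphereFragmentShader =
      "varying vec2 pos; /* quad coordinate, -1 to +1 (assumed from vertex shader) */\n"
      "void main(void) {\n"
      "  float r2 = dot(pos, pos);\n"
      "  if (r2 > 1.0) discard;              /* off the sphere: draw nothing */\n"
      "  vec3 N = vec3(pos, sqrt(1.0 - r2)); /* unit-sphere surface normal */\n"
      "  float diff = max(dot(N, normalize(vec3(1.0))), 0.0); /* diffuse light */\n"
      "  gl_FragColor = vec4(diff * vec3(1.0, 0.8, 0.2), 1.0);\n"
      "}\n";

Pixels outside the unit circle are discarded, so the quad's corners never show; the surviving pixels get a true spherical normal straight from the sphere equation x^2 + y^2 + z^2 = 1. Writing gl_FragDepth as well would let the impostor depth-test correctly against ordinary geometry.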
This all-programmable approach lets you toss out huge chunks of OpenGL:
- OpenGL's increasingly ancient fixed-function lighting system
(glLight, glLightModel, glColorMaterial, etc.) can be replaced with
simpler, clearer per-pixel equations based on the surface normal.
This immediately gives you nice per-pixel Phong-type shading instead of
the lumpy per-vertex Gouraud shading from the fixed-function pipeline,
and more importantly lets you compute more complex lighting
models, soft shadows, and so on (see the shader sketch after this list).
- Texture coordinates can be generated almost trivially
in the vertex shader, or even computed dynamically per pixel (for
example, via parallax mapping).
- Generally, polygon tessellation isn't as important, because you can add arbitrary complexity on a per-pixel basis.
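Here's the per-pixel lighting sketch promised in the first item above, again as GLSL in a C++ string constant. The varying "normal" and the hardcoded light and view directions are assumptions; a real shader would pass those in as uniforms.

    /* Minimal per-pixel diffuse + specular lighting, replacing glLight & co. */
    const char *lightingFragmentShader =
      "varying vec3 normal; /* interpolated from the vertices (assumed) */\n"
      "void main(void) {\n"
      "  vec3 N = normalize(normal);  /* re-normalize after interpolation */\n"
      "  vec3 L = normalize(vec3(0.0, 1.0, 1.0));     /* light direction */\n"
      "  vec3 H = normalize(L + vec3(0.0, 0.0, 1.0)); /* Blinn halfway vector */\n"
      "  float diff = max(dot(N, L), 0.0);            /* Lambertian diffuse */\n"
      "  float spec = pow(max(dot(N, H), 0.0), 32.0); /* shiny highlight */\n"
      "  gl_FragColor = vec4(diff * vec3(0.8) + spec * vec3(1.0), 1.0);\n"
      "}\n";

Because N is re-normalized at every pixel, the specular highlight stays smooth even on coarse geometry, which is exactly where per-vertex Gouraud shading falls apart.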
However, there are parts of OpenGL that still perform a useful function:
- OpenGL's texture support, including texture formats, clamping,
and filtering, is still extremely useful. In addition to
classic texture-as-image rendering, programmable shaders use textures
the way C++ programs use vectors or arrays: to store arbitrary
application data (see the texture sketch after this list).
- OpenGL's basic rendering calls (glBegin, glVertex) are still
useful. If you have a lot of data to render, it's still valuable
to use a vertex buffer object (see the example code, and the VBO
sketch after this list).
- OpenGL's matrix manipulation functions (glPushMatrix, glRotatef)
are still handy, and if you eliminated them, you'd soon have to build a
replacement set of affine coordinate transform functions. Of
course, if you're doing nonlinear geometry deformation (and nobody's
going to stop you anymore!), then you would need a more complex
coordinate system.
- The depth buffer (GL_DEPTH_TEST) is still a handy way to combine unrelated geometry.
- Alpha blending (GL_BLEND) is still useful for antialiasing and
transparent objects. Both of these could be replaced
if programmable shaders eventually gain the ability to both read and
write the same texture.
- A user input and event handling library like GLUT is still needed (the skeleton after this list shows the basic setup).
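Here's the texture sketch mentioned above: a hedged C++ fragment that uploads an arbitrary table of floats for a shader to index via texture2D(). The table size is made up, and the float internal format GL_RGBA32F_ARB assumes the ARB_texture_float extension (plain GL_RGBA works everywhere, but rounds entries to 8 bits).

    /* Upload an application data table as a texture (sketch). */
    #define GL_GLEXT_PROTOTYPES 1 /* Linux; use GLEW or similar on Windows */
    #include <GL/gl.h>
    #include <GL/glext.h> /* for GL_RGBA32F_ARB */

    enum { TABLE_W = 256, TABLE_H = 256 };
    float table[TABLE_H][TABLE_W][4]; /* arbitrary per-texel application data */

    GLuint uploadTable(void) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* GL_NEAREST: fetch exact table entries, no filtering between them */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, TABLE_W, TABLE_H,
                     0, GL_RGBA, GL_FLOAT, table);
        return tex; /* bind this whenever the shader needs the table */
    }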
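And the VBO sketch from the rendering-calls item: copy vertex data into GPU memory once, then draw from it every frame without re-sending vertices. The triangle data here is made up; glGenBuffers and friends are core in OpenGL 1.5, so older setups need the ARB-suffixed versions or an extension loader like GLEW.

    /* Vertex buffer object setup and draw (sketch). */
    #define GL_GLEXT_PROTOTYPES 1 /* Linux; use GLEW or similar on Windows */
    #include <GL/gl.h>
    #include <GL/glext.h>

    static const float verts[] = { /* x,y,z for each of 3 vertices */
        -1.0f,-1.0f,0.0f,   1.0f,-1.0f,0.0f,   0.0f,1.0f,0.0f
    };
    static GLuint vbo = 0;

    void makeVBO(void) { /* call once, at startup */
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    }

    void drawVBO(void) { /* call every frame */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0); /* offset 0 into the bound VBO */
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
    }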
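Finally, the GLUT skeleton mentioned in the last item: one small runnable program tying together the depth buffer, alpha blending, the matrix stack, and GLUT's event callbacks. The window title and rotation angle are arbitrary.

    /* Minimal GLUT program: event loop, depth test, blending, matrix stack. */
    #include <GL/glut.h>
    #include <cstdlib> /* for exit() */

    void display(void) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST); /* combine unrelated geometry correctly */
        glEnable(GL_BLEND);      /* alpha blending for transparency */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glPushMatrix();
        glRotatef(30.0f, 0.0f, 1.0f, 0.0f); /* affine coordinate transform */
        /* ... draw geometry here, e.g. drawVBO() from the sketch above ... */
        glPopMatrix();
        glutSwapBuffers();
    }

    void keyboard(unsigned char key, int x, int y) {
        if (key == 27) exit(0); /* Esc quits */
    }

    int main(int argc, char *argv[]) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
        glutCreateWindow("modern OpenGL demo");
        glutDisplayFunc(display);   /* redraw events */
        glutKeyboardFunc(keyboard); /* user input events */
        glutMainLoop();
        return 0;
    }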
In this class we'll explore the modern rendering approach, and see
how it can dramatically improve rendering accuracy, speed, and
expressive power.