OpenGL Shader Language (GLSL)
CS 493
Lecture, Dr. Lawlor
GLSL is the standard language today for writing the code that
draws pixels to the screen on modern graphics cards.
Non-Programmable
Shaders Stink
Back in the day (2000 AD), graphics cards had finally
managed to compute all of OpenGL in hardware. They had
hardware projection matrices, hardware clipping, hardware
transform-and-lighting, hardware texturing, and so on. Folks
were thrilled, because glQuake looked amazing and ran great.
There's a problem with hardware, though. It's hard to
change.
And no two programmers ever want to do, say, bump mapping
exactly the same way. Some want shadows. Some want
bump-and-reflect. Some want bump-and-light. Some want
light-and-bump. nVidia and ATI were going crazy trying to
support every developer's crazy desires in hardware. For
example, my ATI card still supports these OpenGL
extensions, just for variations on bump/environment mapping:
GL_EXT_texture_env_add, GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat,
GL_ATI_envmap_bumpmap, GL_ATI_texture_env_combine3, GL_ATIX_texture_env_combine3,
GL_ATI_texture_mirror_once, GL_NV_texgen_reflection, GL_SGI_color_matrix, ...
This was no good. Programmers had good ideas they
couldn't get into hardware. Programmers were frustrated
trying to understand what the heck the hardware guys had
created. Hardware folks were tearing their hair out trying
to support "just one more feature" with limited hardware.
The solution to the "too many shading methods to support in
hardware" problem is to support every possible shading
method in hardware. The easy way to do that is to make the
shading hardware programmable.
So, they did.
Programmable
Shaders are Very Simple in Practice
The graphics hardware now lets you do anything you want to
incoming vertices and fragments. Your "vertex shader" code
literally gets control and figures out where an incoming glVertex
should be shown onscreen, then your "fragment shader" figures out
what color each pixel should be.
Here's what this looks like. The following is C++
code, relying on the "makeProgramObject" shader-handling function
listed below. The vertex and fragment shaders are the
strings in the middle. These are very simple shaders, but
they can get arbitrarily complicated.
void my_display(void) {
    glClearColor(0,0,0,0); /* erase screen to black */
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

    /* Set up programmable shaders */
    static GLhandleARB prog=makeProgramObject(
        "//GLSL Vertex shader\n"
        "void main(void) {\n"
        "  gl_Position=gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n"
        ,
        "//GLSL Fragment (pixel) shader\n"
        "void main(void) {\n"
        "  gl_FragColor=vec4(1,0,0,1); /* that is, all pixels are red. */\n"
        "}\n"
    );
    glUseProgramObjectARB(prog);

    ... glBegin, glVertex, etc. Ordinary drawing here runs with the above shaders! ...

    glutSwapBuffers(); /* as usual... */
}
A few meta-observations first:
- Even with programmable shaders, you've still clearly got
plenty of normal C++ OpenGL code.
- The GLSL programmable shader language is suspiciously similar
to C++, Java, C#, etc. This is by design!
- The programmable shader goes into OpenGL as a *runtime
string*. This means shaders get compiled for your graphics
hardware at runtime. This is good! It means the same
(C++) executable can run on ATI and nVidia cards (as well as
hypothetical future cards like the Spear(tm) Asparagon-9000).
Your program can supply the shader-strings by:
- Hardcoding the shaders into your program, like above.
- Reading the shaders from a file (I like "vertex.txt" and
"fragment.txt", when I don't hardcode.)
- Downloading shaders from the net.
- Creating new shaders on the fly (with just string
processing!)
The stuff in strings is all "OpenGL Shading Language"
(GLSL) code. Just think of GLSL as plain old
C++ with a nice set of 3D vector classes, and you're pretty darn
close (I often use osl/vec4.h to copy and paste code between GLSL
and C++!).
- gl_Position is the onscreen location of the vertex. This
is the one value the vertex shader is required to output.
gl_Position is a "vec4" stored in the usual OpenGL
coordinates: after the hardware divides by its "w" component,
visible geometry runs from -1 to +1 on all axes.
- gl_Vertex is the vertex's raw location, as passed in from C++
with a "glVertex3f(x,y,z);" call.
- gl_ModelViewProjectionMatrix is the whole OpenGL matrix stack,
including both the GL_PROJECTION and GL_MODELVIEW matrices.
- gl_FragColor is the onscreen color of the pixel. This is
the one value the fragment shader is required to output.
It's a "vec4", and I'm using the constructor-style syntax to
initialize it above.
Data types in GLSL work exactly like in
C/C++/Java/C#. There are some beautiful builtin datatypes,
though:
- float. Works exactly like C/C++/Java/C#.
- vec4. A class with four floats in it, which you can
think of as the XYZW components of a vector, or the RGBA
components of a color. vec4 supports + - * / exactly like
you'd expect. vec4 is the native datatype of the graphics
hardware, so all of these operations are
single-clock-cycle.
- You can get to the first component of a vec4 named "v" as
follows:
- "v.x", treating the vec4 as a spatial position or vector.
- "v.r", treating the vec4 as a color. This is the
same data, the same speed, the same everything as ".x"; it's
basically just a comment or a hint to the human reader that
you're dealing with a color.
- "v[0]", treating the vec4 as an array. Again, it's
the same underlying data.
- You can initialize a vec4 as follows:
- "vec4 v=vec4(0.0);" sets all four components to zero.
- "vec4 v=vec4(0.1,0.2,0.3,0.4);" sets all four components
independently.
- "vec3 d=vec3(0.1,0.2,0.3); vec4 v=vec4(d,0.4);"
makes a 3-vector into a 4-vector by just adding the
missing components.
- The "w" component is used for homogeneous
coordinates. It's 1.0 for ordinary position
vectors, and 0.0 for direction or offset vectors. You
care about this when you're deriving a new
projection matrix, but otherwise you usually ignore it.
- vec3. A class with three floats in it. Doesn't
have a ".w" or ".a" component. Useful for representing
directions (surface normals, light directions, etc.) when you
don't want the "w" component messing up your dot products.
- vec2. A class with just two floats. Missing ".z"
or ".b" and ".w" or ".a". Useful for representing 2D
texture coordinates, or complex numbers.
- mat4, mat3, mat2. Matrices that operate on vec4's,
vec3's, and vec2's. See my
caveats on
how to load up the matrix values (the constructor takes
column-major order), or just load them from C++ via a builtin
like gl_ModelViewMatrix.
- "int" is fairly rare for computation (the graphics hardware
usually doesn't have integer math!). Some drivers are very
picky about distinguishing between "2" the integer and "2.0" the
float.
- A variable declared as "varying" gets transmitted from the
vertex shader to the fragment shader. This is the only way to
communicate between your vertex and fragment shaders!
- A variable declared as "uniform" gets passed in from
outside. In THREE.js, you set "uniform float foo;"
using code like "myshader.uniforms.foo.value=3;".
Bottom line: programmable shaders really are pretty easy to
use.
Further Info
Try it! Here's my
simple PixAnvil fragment shader demo.
See also the GLSL cheat
sheet (especially for builtin variables).
The official GLSL Language Specification isn't too
bad--chapter 7 lists the builtin variables, chapter 8 the builtin
functions. OpenGL
ES / GL 3.0 is similar, but they deprecated a bunch of the
builtin variables from fixed-function GL.
If you're writing a C++ program, I built a GLSL-for-GPGPU
library.