MPIglut: Practical Use
CS 481/681 2007 Lecture, Dr. Lawlor
Here are the steps for setting up MPIglut on your own powerwall.
1. Set up DMX on Powerwall
MPIglut currently pulls the screen sizes and locations from "DMX", Distributed Multihead X, an open-source X server built on the x.org server code. So step 1 in using MPIglut is setting up DMX on your cluster.
DMX pulls the list of screens to use, and their locations, from a
configuration file. Here's the configuration file for the
powerwall (in /home/shared/dmx/):
virtual bigwall {
display powerwall0:0 1680x1050+0+0 @0x0;
display powerwall0:0 1680x1050+0+1050 @0x1200;
display powerwall1:0 1680x1050+0+0 @0x2400;
display powerwall1:0 1680x1050+0+1050 @0x3600;
display powerwall2:0 1680x1050+0+0 @1830x0;
display powerwall2:0 1680x1050+0+1050 @1830x1200;
display powerwall3:0 1680x1050+0+0 @1830x2400;
display powerwall3:0 1680x1050+0+1050 @1830x3600;
display powerwall4:0 1680x1050+0+0 @3660x0;
display powerwall4:0 1680x1050+0+1050 @3660x1200;
display powerwall5:0 1680x1050+0+0 @3660x2400;
display powerwall5:0 1680x1050+0+1050 @3660x3600;
display powerwall6:0 1680x1050+0+0 @5490x0;
display powerwall6:0 1680x1050+0+1050 @5490x1200;
display powerwall7:0 1680x1050+0+0 @5490x2400;
display powerwall7:0 1680x1050+0+1050 @5490x3600;
display powerwall8:0 1680x1050+0+0 @7320x0;
display powerwall8:0 1680x1050+0+1050 @7320x1200;
display powerwall9:0 1680x1050+0+0 @7320x2400;
display powerwall9:0 1680x1050+0+1050 @7320x3600;
}
This makes one big virtual server out of the 20 powerwall
screens. 1680x1050 is the resolution of each individual
screen. The "+X+Y" offset gives the corner of each DMX
sub-display on its local X screen; the "@XxY" coordinate gives
that sub-display's corner on the global overall screen. For
example, "display powerwall2:0 1680x1050+0+1050 @1830x1200;"
takes the 1680x1050 region at offset (0,1050) on powerwall2's
X screen (its lower monitor) and places it at pixel (1830,1200)
of the big virtual screen.
To fire up the DMX virtual server using your config file, run:
/usr/X11R6/bin/Xdmx :1 +xinerama -configfile ./mydmxconfigfile
The ":1" says to start the DMX server on DISPLAY=localhost:1.
"+xinerama" turns on an X extension that makes some applications a bit
smarter on the powerwall. "-configfile" loads your config file.
You don't have to run the Xdmx server as root.
Once this has started, you should be able to run:
export DISPLAY=localhost:1
xclock
and you should see a clock on the powerwall!

Once DMX is running, you can already use ordinary OpenGL applications
on the powerwall--DMX is smart enough to take one running program and
broadcast all its geometry data across the wall. But MPIglut is more
efficient, because it broadcasts user input events instead of geometry.
On the UAF powerwall, DMX should always be running--it's the big red-background screen.
2. Build MPIglut
You can either link with the system mpiglut, or build your own with:
cp -r ~olawlor/research/powerwall/mpiglut/build .
cd build
make all
This will create a "libmpiglut.a", which contains the MPIglut code.
It will also create a modified "libglut.a", which contains the few hooks
MPIglut needs inside glut itself. Currently this comes from freeglut
2.4.0.
3. Convert your code to use MPIglut
- In your code, replace #include <GL/glut.h> with #include <GL/mpiglut.h>, or else just rename mpiglut.h to glut.h.
- Make sure you call glLoad* (e.g., glLoadIdentity) on the
GL_PROJECTION matrix every time you start a frame or change the
viewport. MPIglut intercepts this call and inserts the
global-to-local display transformation matrix (see the sketch after
this list). You can also get and apply the subwindow matrix manually
with mpiglutGetSubwindowMatrix, which is often needed with
programmable shaders that don't use the built-in gl_ProjectionMatrix.
- If your code is
geometry-rate limited, or would run better if it knew the local
backend's display size, call
mpiglutGetRect(MPIGLUT_RECT_VIEWPORT,r) to get the pixel bounding
box you'll actually be rendering (also shown in the sketch after this
list). Alternatively, pull your clipping planes out of the
projection matrix--a glGet on GL_PROJECTION_MATRIX (e.g., glGetDoublev)
will return the MPIglut-modified projection matrix.
- Compile your code with "mpiCC" or "mpicc" (else you'll get an error message about "Can't find header 'mpi.h'").
- Link your code with the MPIglut library "-lmpiglut", or you'll get link errors about "mpiglut..." routines.
- Make sure the MPI machine file (/etc/mpich/machines.LINUX) and
DMX configuration file (/home/shared/dmx/dmx.conf) list the same
machines in the same order.
Otherwise you'll get OpenGL remote-rendering errors at startup and/or
really bad performance, because the backends aren't rendering
locally. (This has already been taken care of on the UAF
powerwall.)
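Here's a minimal sketch of a display callback that follows the
projection-matrix and viewport rules above. The mpiglutGetRect
argument type and rectangle convention are assumptions--check
mpiglut.h for the real prototype:

#include <GL/mpiglut.h> /* or <GL/glut.h> for a plain-glut build */

/* Hypothetical aspect ratio; real code would track this in its reshape callback. */
static double aspect=1680.0/1050.0;

void display(void) {
	int r[4];
	glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

	/* Reset the projection matrix every frame: MPIglut intercepts
	   this glLoadIdentity and slips in the global-to-local
	   subwindow transformation ahead of your gluPerspective. */
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	gluPerspective(60.0,aspect,0.1,100.0);

	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();

	/* Optional culling: ask which pixels this backend really covers
	   (assumed here to fill r as an int[4]). */
	mpiglutGetRect(MPIGLUT_RECT_VIEWPORT,r);

	/* ... draw your geometry here, skipping anything outside r ... */

	glutSwapBuffers();
}

To build it, use something along these lines (the -L path is wherever
your libmpiglut.a ended up; you may also need the modified -lglut and
the usual GL libraries):
mpicc myprog.c -o myprog -L./build -lmpiglut -lglut -lGLU -lGL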
It's almost always possible to write the code so it transparently
works with either plain glut or MPIglut. You can also use #ifdef
MPIGLUT_H in the source code to protect MPIglut-specific parts.
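For example, here's a sketch of one function that compiles both ways
(the mpiglutGetRect rectangle convention is assumed, as above):

#include <stdio.h>
#include <GL/mpiglut.h> /* a plain-glut build would include <GL/glut.h> instead */

void report_viewport(void) {
#ifdef MPIGLUT_H
	/* MPIglut build: report this backend's share of the wall. */
	int r[4];
	mpiglutGetRect(MPIGLUT_RECT_VIEWPORT,r);
	printf("backend rect: %d %d %d %d\n",r[0],r[1],r[2],r[3]);
#else
	/* Plain-glut build: the whole window is ours. */
	printf("window: %d x %d\n",
		glutGet(GLUT_WINDOW_WIDTH),glutGet(GLUT_WINDOW_HEIGHT));
#endif
}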
Be very careful when printing to the console from an MPIglut
program: every node in the cluster will execute the same print, so
output from all the backends gets interleaved. At the very least,
prefix each line with the node it's coming from.
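Since an MPIglut backend is an ordinary MPI process, one simple way
to do this is with the node's MPI rank (a sketch; it assumes MPI is
already initialized by the time you call it):

#include <stdio.h>
#include <stdarg.h>
#include <mpi.h>

/* printf that prefixes each line with this node's MPI rank,
   so interleaved cluster output stays readable. */
void node_printf(const char *fmt,...) {
	va_list args;
	int rank=0;
	MPI_Comm_rank(MPI_COMM_WORLD,&rank);
	printf("[node %d] ",rank);
	va_start(args,fmt);
	vprintf(fmt,args);
	va_end(args);
}

For example, node_printf("loaded %d triangles\n",nTri); prints
"[node 3] loaded ..." on backend 3.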
4. Run code on Powerwall
Take your MPIglut code and run it with the appropriate DISPLAY variable, like:
export DISPLAY=powerwall0:1
./myprog
The main program you run becomes the "frontend", which opens a big
empty window to the DMX screen. The frontend does nothing but
forward events--it doesn't run your code. Internally, at startup
the MPIglut frontend calls mpirun to spawn off a set of "backends", one
per DMX screen, which separately run your code.
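Schematically, the process layout looks like this (one backend per
DMX screen, per the description above):

./myprog (frontend: opens the big DMX window, forwards input events)
  └─ mpirun spawns the backends, one per DMX screen:
       backend 0 ... backend N-1 (each runs your GLUT code on its own screen)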