
As part of my coursework for this semester, my peers and I are developing either a 3D animation system based on OpenGL, or a raytracing application. It’s for this project that I took CSC418, so I’ve already gotten the ball rolling by re-organizing the starter code given for the assignment, and creating a more modular Makefile (but wait — don’t let me get started on Makefiles. Makefiles are incandescent light bulbs; they’re fossils. Make should be *extinct*).

One thing I want to do with this raytracer is make it easily modifiable. To this end I’m creating the concept of a **Feature**, which is a pluggable part of the raytracing system. Because this is C++, we’ll have several abstract base classes that a Feature can implement, depending on which part of the rendering pipeline should be changed. In this way, we can easily add or remove things like texture mapping (by providing different classes that implement exactly what happens when a ray hits a material) or photon mapping (as a pre-process to each scene update). The details still have to be worked out, but I hope this plan will give the marker an easy way to see how the different systems interact.
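To make the idea concrete, here’s a minimal sketch of what one such abstract base class might look like. All the names here (`ShadingFeature`, `FlatShading`, and the supporting `Colour`/`Ray`/`Intersection` types) are hypothetical stand-ins, not the project’s actual API:

```cpp
// Hypothetical supporting types; the real project would define these
// elsewhere with actual members (origin, direction, hit point, etc.).
struct Colour { double r, g, b; };
struct Ray {};
struct Intersection {};

// One possible abstract base class for a pluggable shading Feature:
// implementations decide exactly what happens when a ray hits a material.
class ShadingFeature {
public:
    virtual ~ShadingFeature() = default;
    virtual Colour shade(const Ray& ray, const Intersection& hit) const = 0;
};

// A trivial implementation: ignore the geometry and return a flat colour.
class FlatShading : public ShadingFeature {
public:
    explicit FlatShading(Colour c) : colour_(c) {}
    Colour shade(const Ray&, const Intersection&) const override {
        return colour_;
    }
private:
    Colour colour_;
};
```

Swapping texture mapping in or out would then just mean registering a different `ShadingFeature` with the renderer, and a similar base class could expose a pre-process hook for something like photon mapping.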

Additionally, the raytracer will allow different types of sample generation. When building a raytracer, one of the choices most influential to the resulting image is the method used to determine pixel colour. Aliasing (as seen in your favourite 3D computer games) occurs because the pixel is idealised as a point and not as a rectangle in image space. Because it is computationally infeasible to calculate the exact integral over this rectangle, and thus get the exact pixel colour, we sample points in the rectangle. Therefore, the quality of the final image depends largely on the choice of sampling method.
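The sampling idea above boils down to approximating an integral by an average. A rough sketch (with hypothetical `Sample` and `estimate_pixel` names, and `radiance` standing in for “trace a ray through this sub-pixel position”):

```cpp
#include <functional>
#include <vector>

// A sample position inside a pixel's rectangle, in [0,1) x [0,1).
struct Sample { double x, y; };

// Approximate the integral over the pixel rectangle by averaging the
// radiance at a set of sample points. More (and better-placed) samples
// give a better estimate of the true pixel colour.
double estimate_pixel(const std::vector<Sample>& samples,
                      const std::function<double(double, double)>& radiance) {
    double sum = 0.0;
    for (const Sample& s : samples) {
        sum += radiance(s.x, s.y);
    }
    return sum / static_cast<double>(samples.size());
}
```

Everything that follows is really just about how to choose the `samples` vector.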

There are three main varieties of sampling method: uniform sampling, random (Monte Carlo) sampling, and sub-random sampling. Sub-random sampling (called Quasi-Monte Carlo, or QMC) uses sequences that, while not pseudo-random, are deterministic (and often easy to compute) yet still have desirable statistical qualities. I’m getting ahead of myself, though; the reason uniform sampling does not work is that it leads to aliasing. Add more samples and you get into grid anti-aliasing methods that, while popular, still produce a computerized look. Random sampling trades aliasing for noise; however, because the pseudo-random numbers generated by a computer take pains to be independent, what you get are clumping effects: entire parts of the pixel rectangle are never sampled, simply due to “luck of the draw”.
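For comparison, here’s roughly what the first two varieties look like in code (the function names are mine, not the project’s):

```cpp
#include <random>
#include <vector>

struct Sample { double x, y; };

// Uniform (grid) sampling: n*n samples at regular positions inside the
// unit square. The regular spacing is exactly what causes aliasing.
std::vector<Sample> uniform_samples(int n) {
    std::vector<Sample> out;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            out.push_back({(i + 0.5) / n, (j + 0.5) / n});
    return out;
}

// Random (Monte Carlo) sampling: independent uniform draws. Because the
// draws are independent, nothing stops several samples from clumping in
// one corner while another region of the pixel goes unsampled.
std::vector<Sample> random_samples(int count, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::vector<Sample> out;
    for (int i = 0; i < count; ++i) {
        double x = dist(rng);
        double y = dist(rng);
        out.push_back({x, y});
    }
    return out;
}
```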

Enter QMC: because this method uses sequences of numbers that are not random, we can make sure they have optimal spread throughout the space we wish to sample. Not just any sequence will do, however; we pick sequences known to have low discrepancy, a mathematical notion I have no real understanding of yet (something I endeavour to fix). In any case, it’s enough for the implementor to know that these sequences have the effect of removing aliasing while preventing clumping of samples: the best of both worlds.
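One classic low-discrepancy sequence is the Halton sequence, which I might well end up using. The i-th value in base b comes from writing i in base b and mirroring its digits about the radix point (the “radical inverse”):

```cpp
// Halton sequence: a classic low-discrepancy (quasi-random) sequence.
// In base 2 the sequence runs 1/2, 1/4, 3/4, 1/8, 5/8, ... — each new
// value lands in the largest gap left by the previous ones, so samples
// spread evenly instead of clumping.
double halton(int index, int base) {
    double result = 0.0;
    double f = 1.0;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}
```

For 2D pixel samples, the usual trick is to pair `halton(i, 2)` with `halton(i, 3)` using coprime bases, so the two coordinates don’t correlate.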

There will be some hurdles. For example, a technique known as Photon Mapping can be used to provide indirect illumination from diffuse surfaces (for example, if you have a lit red wall and a piece of white paper held up to it, light bouncing off the wall will colour the paper slightly red), and to provide caustic effects (bright, focused patches of light due to an object refracting light like a lens). However, if we use random sampling for the time dimension and the scene is not static, we have to re-compute the photon map for *every temporal sample*. For example, with 4 random temporal samples per pixel, we would have to compute the radiosity of the scene 4 · w · h times, because the scene is sampled at a random time four times per pixel. This can be alleviated by switching to a QMC method for temporal sampling that remains constant for each pixel.
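A sketch of why that helps (function names are hypothetical): if the k-th temporal sample is the same deterministic QMC value for every pixel, a frame with s temporal samples only ever touches s distinct times, so we can build s photon maps up front and reuse them across all w · h pixels instead of rebuilding 4 · w · h times:

```cpp
#include <set>

// Radical inverse in base 2 — the simplest QMC sequence — used here to
// pick temporal sample times in [0, 1).
double radical_inverse_base2(unsigned index) {
    double result = 0.0;
    double f = 0.5;
    while (index > 0) {
        result += f * (index & 1u);
        index >>= 1;
        f *= 0.5;
    }
    return result;
}

// Because the k-th time is identical for every pixel, the set of distinct
// times has size s (not s * w * h); one photon map per entry suffices.
std::set<double> distinct_sample_times(int samples_per_pixel) {
    std::set<double> times;
    for (int k = 1; k <= samples_per_pixel; ++k)
        times.insert(radical_inverse_base2(k));
    return times;
}
```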

Maybe I’ll even be able to liven up this site a little bit, once the raytracer is producing some kind of output!