
SCOOP, or Simple Concurrent Object-Oriented Programming, is a model originally conceived for Eiffel to make writing parallel code easy. Eiffel has native support for rich types and preconditions, which has led to a design-by-contract paradigm in that language. The JSCOOP project that I am working on aims to bring the principles of design-by-contract and simple concurrency to Java.

The first thing we need to think about is the design-by-contract idea in Eiffel: how do we translate a model built on that paradigm (explicit pre- and post-conditions on any method) to Java? JSCOOP gets us halfway there by adding support for preconditions on any method. It looks like this:

// The call may not proceed until the precondition holds, i.e. until buffer is non-empty.
@await(pre="!buffer.isEmpty()")
public String consume(@separate Buffer buffer) {
    return buffer.take();
}

I’ll explain the @separate annotation later, but let’s take a look at @await first. Whenever this method is invoked, the system ensures that the buffer is not empty before it allows the call to go through. Note that this does not mean the method is invoked every time the buffer becomes non-empty; @await only delays a call that has already been made until its precondition holds. Note also that we haven’t gained any parallelism yet; we’ve just restricted ourselves.

Enter @separate. This simple annotation says: this is a reference to an object that runs on a different thread than the one the current code is running on (it is certainly more subtle than that, but this will do for now). When we call a method on a separate object, we are really asking that method to execute on the other thread, whenever it gets the chance to.
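To make this concrete, here is a sketch of the producer side, mirroring the consume example above. Only @await and @separate come from JSCOOP; the Buffer methods put() and isFull() are hypothetical, used purely for illustration:

// Sketch only: put() and isFull() are assumed Buffer methods.
@await(pre="!buffer.isFull()")
public void produce(@separate Buffer buffer, String item) {
    // The call is handed to the thread that handles buffer; we need no result,
    // so we are free to continue without waiting for it to run.
    buffer.put(item);
}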

The entity that executes these “remote” methods is called a Processor in JSCOOP. A Processor has its own thread of execution and, most importantly, executes requests in the order they are given. A Processor can handle one or more objects. This means we could have two separate Buffer objects both handled by the same Processor, in which case only one of them would be executing at any given time.

Stepping back, then, we must understand that the @separate annotation marks references that are potentially separate, rather than references that are definitely separate.

How does JSCOOP prevent concurrent access?

Consider a Producer-Consumer problem in which all Producers and Consumers share a reference to a common Buffer. Using the JSCOOP model, we store a @separate reference to the same Buffer in each of the Producer/Consumer instances. But surely when we run the Producers and Consumers on separate threads, we’ll modify the buffer concurrently, leading to the classic race conditions.

We now introduce the concept of locking a Processor. Whenever a @separate reference is declared as the parameter of a method (as in the consume example above), we lock the Processor that handles the object referenced by that parameter. Locking a Processor says, “only I can add remote calls to this Processor’s queue”. It also means that we’re not actually executing buffer.take() immediately; we’re putting it onto the run queue of buffer’s Processor (its handler). In this case we need the result of buffer.take(), so we must wait for the call to finish, but that isn’t always the case (a concept called wait-by-necessity). After adding all of the method’s calls to the queue, and waiting for whichever results we need, we unlock the Processor, allowing other methods that need it (read: that have a @separate parameter whose reference is handled by the same Processor) to have their requests go through.
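Here is a small sketch of wait-by-necessity in action. Again, only @await and @separate come from JSCOOP; the Logger type, its write() method, and drainInto() itself are hypothetical:

@await(pre="!buffer.isEmpty()")
public void drainInto(@separate Buffer buffer, @separate Logger log) {
    // Both Processors (buffer's and log's) are locked for the duration of this method.
    String item = buffer.take(); // we need the result, so we wait for this call to finish
    log.write(item);             // no result needed, so the call is only queued and we move on
    // On return, both Processors are unlocked and other callers' requests can go through.
}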

Thus, concurrent access is prevented by the Processor itself. Because it works through a run queue, it executes the next request only when locking and preconditions allow. This is guaranteed to be safe because each object has exactly one handling Processor.

That’s all for now

There’s a lot more to the system, but the above should give programmers familiar with concurrency issues a good first step toward understanding how to program with JSCOOP. In particular, there is a system-wide Scheduler that I haven’t mentioned; it checks preconditions, manages Processors, and ensures fairness.

Today, Greg challenged the entire consulting course to come up with a novel book idea that teaches computer scientists something relevant. What would I like to see? A book that shows how to marry imperative and “alternative” programming languages, so that we can, say, do our AI in Prolog while still writing our game engine in an imperative language. Alternatively, how does a developer go about accessing a library from a language for which no binding (API) is available? The question has been answered for most imperative languages through some kind of “our datatypes” => C datatypes conversion plus access to native libraries, but what about using Cairo under Haskell (lazy evaluation, remember)?

The basic question is this: how do we convince someone to move to a new language if nothing they already use works there? The question arises because the “best” way to do these things would not be Eiffel+OpenGL, but some sort of Eiffel graphics implementation, or possibly even leveraging the programming language itself to distribute work over all available processing units (standard 3D acceleration would come from the language knowing that the GPU is best at matrix operations, data-parallel work, and so on).

I guess the point is that we don’t live in a perfect world, so if we want to leverage any of the power of these alternatives, we’re going to have to interface with the more popular languages. That’s where this book would come in.

So I’m on the CS Games team now for the University of Toronto, and I’m creating the so-called “Flashout Video” for our team. The production will probably follow a mockumentary style (which got a bit of a show of support at the meeting today), but I’ve asked for more suggestions and comments so that we can focus on something the majority of my teammates are interested in.

Oh, and I’m also doing AI and Web programming for it. I’m anticipating an extremely fun time!

As part of my coursework for this semester, my peers and I are developing either a 3D animation system based on OpenGL or a raytracing application. It’s for this project that I took CSC418, so I’ve already gotten the ball rolling by re-organizing the starter code given for the assignment and creating a more modular Makefile (but don’t get me started on Makefiles: they are incandescent light bulbs; they’re fossils. Make should be extinct).

One thing I want to do with this raytracer is make it easily modifiable. To this end I’m creating the concept of a Feature, which is a pluggable part of the raytracing system. Because this is C++, we’ll have several abstract base classes that a Feature can implement, depending on which part of the rendering pipeline it changes. In this way, we can easily add or remove things like texture mapping (by providing different classes that define exactly what happens when a ray hits a material) or photon mapping (as a pre-process to each scene update). The details still have to be worked out, but I hope this plan will give the marker an easy way to see how the different systems interact.
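Just to illustrate the pluggable idea (the real interfaces are still undecided, the project itself is C++, and the names below are invented), a surface-shading Feature might look roughly like this, with a checkerboard texture as one drop-in implementation:

// Hypothetical sketch of a pluggable Feature for what happens when a ray hits a surface.
interface SurfaceFeature {
    // Given the hit point's texture coordinates in [0, 1), return an RGB colour.
    double[] colourAtHit(double u, double v);
}

class CheckerTexture implements SurfaceFeature {
    public double[] colourAtHit(double u, double v) {
        boolean white = ((int) (u * 8) + (int) (v * 8)) % 2 == 0;
        return white ? new double[] {1, 1, 1} : new double[] {0, 0, 0};
    }
}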

Additionally, the raytracer will allow different types of sample generation. When building a raytracer, one of the choices most influential on the resulting image is the method used to determine pixel colour. Aliasing (as seen in your favourite 3D computer games) occurs because the pixel is idealised as a point rather than a rectangle in image space. Because it is computationally infeasible to calculate the exact integral over this rectangle, and thus get the exact pixel colour, we sample points in the rectangle. Therefore, the quality of the final image depends largely on the choice of sampling method.
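To put that more precisely (the notation here is mine): if A is the rectangle a pixel covers in image space and L(x, y) is the colour the scene contributes at image point (x, y), then the ideal pixel colour and its sampled estimate are

C_{\text{pixel}} \;=\; \frac{1}{|A|} \int_A L(x, y)\, \mathrm{d}x\, \mathrm{d}y \;\approx\; \frac{1}{N} \sum_{i=1}^{N} L(x_i, y_i),

where the (x_i, y_i) are the N sample points chosen inside the rectangle; the rest of this post is about how to choose them.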

There are three main varieties of sampling method: uniform sampling, random (Monte Carlo) sampling, and sub-random sampling. Sub-random sampling (called Quasi-Monte Carlo, or QMC) uses sequences that, while not pseudo-random, have desirable qualities while still being deterministic (and often easy to compute). I’m getting ahead of myself, though; the reason uniform sampling does not work is that it leads to aliasing. Add more samples and you get into grid anti-aliasing methods that, while popular, still produce a computerized look. Random sampling trades aliasing for noise; however, because the pseudo-random numbers generated by a computer take pains to be independent, what you get are clumping effects: entire parts of the pixel rectangle that are never sampled, simply due to the luck of the draw.

Enter QMC: because this method uses sequences of numbers that are not random, we can make sure they have good spread throughout the space we wish to sample. However, not just any sequence will do; we pick sequences known to have low discrepancy, a mathematical idea that I have no real understanding of yet (something I endeavour to fix). In any case, it’s enough for the implementor to know that these sequences have the effect of removing aliasing while preventing clumping of samples: the best of both worlds.
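For concreteness, here is a minimal sketch of one common low-discrepancy construction, the radical-inverse (van der Corput/Halton) sequence. The helper names are mine, not the assignment’s, and the sketch is independent of the actual project code:

// Radical inverse of index i in the given base: mirror i's base-b digits about the point,
// so e.g. in base 2 the indices 1, 2, 3, 4 map to 0.5, 0.25, 0.75, 0.125.
static double radicalInverse(int i, int base) {
    double invBase = 1.0 / base;
    double factor = invBase;
    double result = 0.0;
    while (i > 0) {
        result += (i % base) * factor;
        i /= base;
        factor *= invBase;
    }
    return result;
}

// The i-th 2D Halton point: base 2 for x and base 3 for y gives well-spread,
// deterministic points in [0, 1)^2 that can be offset into a pixel's rectangle.
static double[] haltonSample(int i) {
    return new double[] { radicalInverse(i, 2), radicalInverse(i, 3) };
}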

There will be some hurdles. For example, a technique known as Photon Mapping can be used to provide indirect illumination off of diffuse surfaces (if you hold a piece of white paper up to a brightly lit red wall, light bouncing off the wall will colour the paper slightly red) and to provide caustic effects (bright, focused patches of light due to an object refracting light like a lens). However, if we use random sampling for the time dimension and the scene is not static, we have to re-compute the photon map for every temporal sample. For example, if we had 4 random temporal samples per pixel, we would have to re-compute the photon map 4 · w · h times, because the scene is sampled at a random time four times per pixel. This can be alleviated by switching to a QMC method for temporal sampling that uses the same set of times for every pixel.
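In other words, with s temporal samples per pixel and an image of w × h pixels:

photon-map rebuilds with independent random times ≈ s · w · h (the 4 · w · h above)
photon-map rebuilds with one shared set of s QMC times = s

so sharing the temporal samples across pixels removes the w · h factor entirely.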

Maybe I’ll even be able to liven up this site a little bit, once the raytracer is producing some kind of output!
