Eve GUI Mockup, 3rd October 2009

I have most certainly made some progress on Eve.

Over the past week I took another look at the Eve project, made some necessary refactorings, and started sketching out a new concept for the user interface. The resulting program isn’t much more than a mock-up, but it is something. Currently the Project resource browser is partially working, the scrubbing interface is partially complete (the fine-tuning interface at the bottom still needs implementation), and tagging functionality has yet to be added to the GUI (the backend is in place).

I was thinking of continuing on with the GUI and really polishing the interface, but I think I’ll save that for when some back-end compositing functionality is done. To that end, here’s the order in which the next month or so will go (I hope! gulp).

  1. Implement the Gstreamer FrameProducer. I’m proud to say that the Gstreamer Source is already complete! For now, the FrameProducer will–at a cost in performance–use an RGBDataSink and marshal that data into a BufferedImage.
  2. Create an implementation of the Compositor class using Java2D/BufferedImage (see the sketch after this list).
  3. Continue building the pieces that the Compositor depends upon until simple timelines render (one track, A then B then C then D).
  4. Modify the classes that do the compositing so that they marshal correctly to/from XML.
  5. Create an XMLCompositorRunner that will take input from an XML file and composite it to an output.
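To make step 2 a little more concrete, here’s a rough sketch of the shape I have in mind. All the names here are hypothetical, and the real interfaces may end up different:

import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

interface FrameProducer {
    // e.g. backed by Gstreamer, marshalling RGBDataSink data into a BufferedImage
    BufferedImage frameAt(long timeMillis);
}

class Java2DCompositor {
    // Paint each track's frame for this instant onto a single output image.
    BufferedImage composite(List<FrameProducer> tracks, long timeMillis,
                            int width, int height) {
        BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.setComposite(AlphaComposite.SrcOver);
        for (FrameProducer track : tracks) {
            g.drawImage(track.frameAt(timeMillis), 0, 0, null);
        }
        g.dispose();
        return out;
    }
}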

After this–yikes. Well, I’m not even going to expand on it, because obviously it’s a big if that these five items are going to get accomplished given that it’s midterm season already!


http://jscoop.googlecode.com/

SCOOP, or Simple Concurrent Object-Oriented Programming, is a model originally conceived for Eiffel to make parallel code easy to write. Eiffel has native support for rich types and preconditions, which underpins the design-by-contract paradigm in that programming language. The JSCOOP project that I am working on aims to bring the principles of design-by-contract and simple concurrency to Java.

The first thing we need to think about is the design-by-contract idea in Eiffel. How do we translate a model that builds on this paradigm (explicit preconditions and postconditions for any method) to Java? JSCOOP takes it halfway there by adding support for preconditions on any method. It looks like this:

@await(pre="!buffer.isEmpty()")
public String consume(@separate Buffer buffer) {
    return buffer.take();
}

I’ll explain the @separate annotation later, but let’s take a look at @await. Whenever this method is invoked, the system will enforce that the buffer is not empty before it allows the call to go through. Note that this does not mean the method is automatically called whenever the buffer is non-empty; @await only delays the calls we make. Note also that we haven’t gained any parallelism here yet; we’ve just restricted ourselves.

Enter @separate. This simple annotation says: this is a reference to an object that runs on a different thread than the current code (worth noting that it is certainly more complicated than that). When we execute a method on a separate object, we’re really telling that method to execute on the other thread, whenever it gets the chance to.

The entity that executes these “remote” methods is called a Processor in JSCOOP. The processor has its own thread of execution and–most importantly–executes requests in the order that they are given. A processor can handle one or more objects. This means that we could have two separate Buffer objects both running on the same Processor, and thus only one of them would be executing at any given time.
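To make that concrete, here’s a minimal sketch of the idea (hypothetical names, not the actual JSCOOP classes):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class Processor {
    private final BlockingQueue<Runnable> runQueue = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                runQueue.take().run(); // execute requests strictly in arrival order
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });

    Processor() {
        worker.setDaemon(true);
        worker.start();
    }

    void enqueue(Runnable remoteCall) {
        runQueue.add(remoteCall);
    }
}

Two Buffer objects handled by the same Processor would both have their calls funnelled through that single queue, which is exactly why only one of them executes at any given time.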

Stepping back to consider this, we must understand then that the @separate annotation marks references that are potentially separate, rather than references that are definitely separate.

How does JSCOOP prevent concurrent access?

Consider a Producer-Consumer problem in which all Producers and Consumers share a reference to a common Buffer. Using the JSCOOP model, we store a @separate reference to the same Buffer in each of the Producer/Consumer instances. But surely when we run the Producers/Consumers on separate threads, we’ll concurrently modify the buffer, leading to classic problems.

We now introduce the concept of locking a Processor. Whenever a @separate reference is declared as the parameter of a method (as in the consume example above), we lock the Processor that owns the object referenced by that parameter. Locking a processor says, “only I can add remote calls to this processor’s queue.” In practice this means we’re not actually executing buffer.take() immediately; we’re putting it onto the run queue of buffer’s Processor (its handler). In this case we require the result of buffer.take(), so we must wait for the call to finish, but that isn’t always the case (a concept called wait-by-necessity). After adding all of the method’s calls to the queue, and waiting for whichever ones we require, we unlock the processor, allowing other methods that need it (read: have a @separate parameter whose reference is handled by the same Processor) to have their requests go through.
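Wait-by-necessity falls out naturally if each queued call hands back a future. Again, a hypothetical sketch rather than the JSCOOP internals, using a single-threaded executor as a stand-in for buffer’s Processor:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class WaitByNecessityDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for buffer's Processor: one thread draining a FIFO queue.
        ExecutorService processor = Executors.newSingleThreadExecutor();

        // The remote call is queued, not executed here and now.
        Future<String> result = processor.submit(() -> "item");

        doOtherWork(); // we carry on; the call runs on the Processor's thread

        // Only now do we block, because we finally need the value.
        System.out.println(result.get());
        processor.shutdown();
    }

    static void doOtherWork() { /* unrelated work that doesn't need the result */ }
}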

Thus, concurrent access is prevented at the Processor: because it has a single run queue, it executes one request at a time, in order, subject to locking and preconditions. This is guaranteed to be safe because each object has exactly one handling Processor.

That’s all for now

There’s a lot more to the system, but the above serves as a good first step toward understanding how to program with JSCOOP, at least for programmers already familiar with concurrency issues. Specifically, there is a Scheduler for the entire system that I haven’t mentioned; it checks preconditions, manages Processors, and ensures fairness.

To trace a deadlock, I put my head down for 3 hours, talked to no-one, grunted a few times, and then let out a scream of success when I had built a drop-in replacement for the JSCOOP scheduler that was written before I started the project. Thanks, CSC301, for making me learn Swing (and sorry to everyone else who works in BA2270 for my unnecessary enthusiasm yesterday).

JSCOOP code and manual scheduler

I’m working at the University of Toronto this summer, on a research project called JSCOOP. We’re bringing the SCOOP concurrency model to Java, by way of an Eclipse plugin and Java 1.5 annotations. It’s really, really cool stuff that started at York University and moved down here. I’m working under Marsha Chechik with Faraz Torshizi.

I’m also continuing the Cowichan parallel survey with Andrew Borzenko, under Greg Wilson. Last term we successfully implemented the problem set using Boost::MPI and Threading Building Blocks (TBB). So that’s two parallel paradigms down, and many, many more to go. This is what we’re up to this term:

  1. Tightening up the serial implementation. There’s a lot to be said for having a strong base to start from, and having the fastest serial implementation we can produce will be important for performance comparison.
  2. Standardizing the header files. Since, for the moment, we’re only implementing the problem set in C++ with the libraries available to it, we’re defining a standard interface for the problems so that they can be called/chained in an abstract fashion. This also lets us put the performance evaluation code in one place.
  3. Uploading what we have so far. By the end of the term, we want our work to be available in a publicly viewable location, complete with repository. We have that right now, to some extent, but clean-up must occur before we can actually “publish”.
  4. Implementing the problem set with two more parallel systems. I’m using a product called LinuxTuples and Andrew’s using OpenMP.

I’m finding tuple spaces really interesting, and theoretically very pleasing. The performance seems decent too, and it seems like every problem fits into a nice box. More on that later.

Eve has, as one can probably tell from the sorry state of this page, come to a standstill. I’ll need to re-evaluate and make some less lofty (read: attainable) goals and an actual roadmap. When I get time to work on it, I’ll post more.

Today, Greg challenged the entire consulting course to come up with a novel book idea that relates to teaching computer scientists something relevant. What would I like to see? A book that shows how to marry imperative and “alternative” programming languages, so that we can, e.g., do our AI in Prolog while still being able to write our game engine in an imperative language. Alternatively, how does a developer go about accessing a library from a language for which no binding (API) is available? The question has been answered for most imperative languages through some kind of “our datatypes” => C datatypes conversion and access of native libraries, but what about using, say, Cairo under Haskell (lazy evaluation, remember)?
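For a mainstream imperative language the machinery is at least familiar. In Java, for instance, it’s JNI; a minimal sketch (with a made-up libhello native library) looks like this:

public class HelloNative {
    // Implemented in C, in a shared library named "hello"
    // (libhello.so / hello.dll) somewhere on java.library.path.
    public static native int add(int a, int b);

    static {
        System.loadLibrary("hello"); // bind to the native code at class-load time
    }

    public static void main(String[] args) {
        System.out.println(HelloNative.add(2, 3)); // crosses into C and back
    }
}

The hard part such a book would have to tackle is exactly what this sketch glosses over: the C side, and what happens when the host language’s evaluation model (lazy, logical, whatever) doesn’t line up with C’s.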

The basic question is this: how do we convince someone to move to a new language if nothing they already use works there? The question arises because the “best” way to do these things would not be Eiffel+OpenGL, but some sort of native Eiffel graphics implementation. Or possibly even leveraging the programming language itself to distribute over all available processing units (standard 3D acceleration would come from the language knowing the GPU is best at matrix operations, data-parallel work, and so on).

I guess the point is we don’t live in a perfect world, so if we want to leverage any of the power of these alternatives we’re going to have to interface with the more popular languages. That’s where this book comes in.

I used to love C++. Now I hate it. I’m giving unhelpful error messages, header files, and everything non-standard my notice. Just parse the header files and then try to link everything together, please. Please. PLEASE.

The following things are on my mind right now, and I really want to learn more about them/have the time to use them or participate in them/come up with brilliant things based on them.

  • OpenCL is a standard interface for programming GPUs or CPUs (think: Cell processor) in a unified way. They’ve solved the problem of making something generic that also has the ability to use every smidgeon of computing power that you throw at it. I can’t wait to use it to implement the main Eve compositor.
  • The CS Games 2009 Flashout video for the U of T team.
  • Raytracing has been on my mind for nearly a week now, ever since the final assignment specification for CSC418 (Computer Graphics) was posted. My implementation is already casting rays and intersecting with spheres correctly. Next up is a generic framework to allow for distribution of rays, so that I can flick the distributions on and off whenever I want (these are the Features I was talking about previously). With this generic framework, the renderer could model soft shadows, motion blur, chromatic dispersion, and depth-of-field in the same way (before optimization).
  • Should I name my raytracer? It seems premature and cocky at this point, especially if I’m just going to drop it after the assignment is complete. There’s enough interesting stuff happening in raytracing that will make me want to continue, however. The key word here is probably want.
  • I’m looking forward to going back home for the university’s Reading Week. That’s not always the case (and I’m usually sick during it), which is why it’s exciting that I am not and am (respectively) this year.
  • I’m eagerly awaiting the response to the NSERC application I’m involved in, especially because the project behind it is so interesting. I’ll elaborate if the response is positive!

So I’m on the CS Games team now for the University of Toronto, and I’m creating the so-called “Flashout Video” for our team. The production will probably follow a mockumentary style (an idea that got a bit of a show of support at the meeting today), but I’ve asked for more suggestions/comments so that we can focus on something that the majority of my teammates are interested in.

Oh, and I’m also doing AI and Web programming for it. I’m anticipating an extremely fun time!

As part of my coursework for this semester, my peers and I are developing either a 3D animation system based on OpenGL, or a raytracing application. It’s for this project that I took CSC418, so I’ve already gotten the ball rolling by re-organizing the starter code given for the assignment, and creating a more modular Makefile (but wait — don’t let me get started on Makefiles. Makefiles are incandescent light bulbs; they’re fossils. Make should be extinct).

One thing I want to do with this raytracer is make it easily modifiable. To this end I’m creating the concept of a Feature, which is a pluggable part of the raytracing system. Because this is C++, we’ll have several abstract base classes that a Feature can implement, depending on which part of the rendering pipeline should be changed. In this way, we can easily add/remove things like texture mapping (by providing different classes that implement exactly what happens when a ray hits a material) or photon mapping (as a pre-process to each scene update). The details still have to be worked out, but I hope this plan will give the marker an easy way to see how the different systems interact.
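Roughly, the plug-in shape I have in mind looks like this. (A hypothetical sketch; I’ve written it in Java for brevity here, but the real thing will be C++ abstract base classes.)

// One pluggable stage of the pipeline: what happens when a ray hits a material.
interface ShadingFeature {
    double[] shade(double[] ray, double[] hitPoint); // returns an RGB colour
}

class FlatRed implements ShadingFeature {
    public double[] shade(double[] ray, double[] hitPoint) {
        return new double[] {1.0, 0.0, 0.0}; // ignore the hit, always shade red
    }
}

class Renderer {
    private ShadingFeature shading = new FlatRed();

    // Swap a different material behaviour in or out without touching the core.
    void setShading(ShadingFeature f) {
        shading = f;
    }
}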

Additionally, the raytracer will allow different types of sample generation. When building a raytracer, one of the choices most influential to the resulting image is the method used to determine pixel colour. Aliasing (as seen in your favourite 3D computer games) occurs because the pixel is idealised as a point and not as a rectangle in image space. Because it is computationally infeasible to calculate the exact integral over this rectangle, and thus get the exact pixel colour, we sample points in the rectangle. Therefore, the quality of the final image depends largely on the choice of sampling method.

Of these methods, there are three main varieties: uniform sampling, random (Monte Carlo) sampling, and sub-random sampling. Sub-random sampling (called Quasi-Monte Carlo, or QMC) uses sequences that, while not pseudo-random, have desirable qualities while still being deterministic (and often easy to compute). I’m getting ahead of myself, though; the reason uniform sampling does not work is that it leads to aliasing. Add more samples and you get into grid anti-aliasing methods that, while popular, still produce a computerized look. Random sampling trades aliasing for noise; however, because the pseudo-random numbers generated by a computer take pains to be independent, what you get are clumping effects: entire parts of the pixel rectangle that are never sampled, simply due to “luck of the draw”.

Enter QMC: because this method uses sequences of numbers that are not random, we can make sure they have optimal spread throughout the space we wish to sample. However, not just any sequence will do; we pick sequences known to have low discrepancy, which is a mathematical idea that I have no real understanding of (and I endeavour to fix this). In any case, it’s enough for the implementor to know that these sequences have the effect of removing aliasing while preventing clumping of samples — the best of both worlds.
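I haven’t decided on a particular sequence yet; as an illustration, the Halton sequence (a common low-discrepancy choice) would place sub-pixel samples like this:

public class HaltonSampler {
    // Radical inverse: reflect the base-b digits of i about the radix point.
    static double radicalInverse(int i, int base) {
        double result = 0.0;
        double f = 1.0 / base;
        while (i > 0) {
            result += f * (i % base);
            i /= base;
            f /= base;
        }
        return result;
    }

    public static void main(String[] args) {
        int samplesPerPixel = 4;
        for (int i = 0; i < samplesPerPixel; i++) {
            // Bases 2 and 3 give well-spread 2D points in [0,1) x [0,1),
            // i.e. offsets inside the pixel rectangle.
            double sx = radicalInverse(i + 1, 2);
            double sy = radicalInverse(i + 1, 3);
            System.out.printf("sample %d: (%.4f, %.4f)%n", i, sx, sy);
        }
    }
}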

There will be some hurdles. For example, a technique known as Photon Mapping can be used to provide indirect illumination from diffuse surfaces (for example, if you have a lit red wall and a piece of white paper held up to it, light bouncing off the wall will colour the paper slightly red), and to provide caustic effects (bright, focused patches of light due to an object refracting light like a lens). However, if we use random sampling for the time dimension and the scene is not static, we have to re-compute the photon map for every temporal sample. For example, if we had 4 random temporal samples per pixel, we would have to compute the radiosity of the scene 4 · w · h times, because the scene is sampled at a random time four times per pixel. This can be alleviated by switching to a QMC method for temporal sampling that uses the same sample times for every pixel, so the photon map only needs to be computed once per temporal sample.

Maybe I’ll even be able to liven up this site a little bit, once the raytracer is producing some kind of output!

Who’s writing this?

My name is Cameron Gorrie. I'm an undergraduate student at the University of Toronto, with a strong interest in Artificial Intelligence and Computer Graphics. You can read more about me, or read my CV. If you have work or research opportunities in my interest areas, please do not hesitate to contact me.