
Eve GUI Mockup, 3rd October 2009

I have most certainly made some progress on Eve.

Over the past week I took another look at the Eve project, made some necessary refactorings, and started sketching out a new concept for the user interface. The resulting program isn’t much more than a mock-up, but it is something. Currently the project resource browser mostly works, the scrubbing interface is partially complete (the fine-tuning controls at the bottom still need to be implemented), and tagging functionality still has to be added to the GUI (the backend is in place).

I was thinking of continuing on with the GUI and really polishing the interface, but I think I’ll save that for when some back-end compositing functionality is done. To that end, here’s the order in which the next month or so will go (I hope! gulp).

  1. Implement the Gstreamer FrameProducer. I’m proud to say that the Gstreamer Source is already complete! For now, the FrameProducer will use RGBDataSink and marshal that data into a BufferedImage, at a cost in performance (a rough sketch of that conversion follows this list).
  2. Create an implementation of the Compositor class using Java2D/BufferedImage.
  3. Continue building the pieces that the Compositor depends upon until simple timelines render (one track, A then B then C then D).
  4. Modify the classes that do the compositing so that they marshal correctly to/from XML.
  5. Create an XMLCompositorRunner that will take input from an XML file and composite it to an output.
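To make step 1 a bit more concrete, here is a rough sketch of the conversion mentioned there. The class and method names are purely illustrative (RgbFrameMarshaller is not part of Eve), and since the exact listener signature of gstreamer-java’s RGBDataSink varies between versions, the sketch only shows the copy into a BufferedImage once a frame is available as packed RGB pixels:

```java
import java.awt.image.BufferedImage;

// Hypothetical helper for step 1: wrap a packed-RGB frame (as delivered by a
// sink such as RGBDataSink) in a BufferedImage for the Java2D compositor.
// The copy performed by setRGB is the performance cost mentioned above.
public final class RgbFrameMarshaller {

    public static BufferedImage toBufferedImage(int[] packedRgb, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        // setRGB copies the pixel data into the image's own raster.
        image.setRGB(0, 0, width, height, packedRgb, 0, width);
        return image;
    }
}
```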

After this? Yikes. Well, I’m not even going to expand on it, because it’s a big “if” that these five items are going to get accomplished, given that it’s already midterm season!

The rendering pipeline of Eve follows the producer/consumer design pattern. In order to create output, timelines must be able to create a producer to render content in a one-use, low-cost way. The render pipeline then acts as the consumer for this producer, feeding frames from the timeline into the output (whatever that might be — the output is abstracted from the rendering pipeline).
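As a sketch of what that split might look like in code (the names FrameProducer, Frame, and RenderTarget are mine for illustration, not necessarily Eve’s actual interfaces):

```java
// A minimal sketch of the producer/consumer split, with illustrative names.
interface Frame {
    long getTimestamp();
}

interface FrameProducer {
    /** Produce the next frame, or null once the timeline is exhausted. */
    Frame nextFrame();
}

interface RenderTarget {
    /** The output is abstracted away from the rendering pipeline. */
    void write(Frame frame);
}

final class RenderPipeline {
    /** The consumer side: pull frames from the timeline's producer and feed the output. */
    void run(FrameProducer producer, RenderTarget target) {
        for (Frame f = producer.nextFrame(); f != null; f = producer.nextFrame()) {
            target.write(f);
        }
    }
}
```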

The first drawback of the producer/consumer pattern involves passing around buffers, the designed abstraction of low-level media data for Eve’s purposes. Because we can implement the Buffer interface any way we choose, we can pass around buffers that do not need to be copied, making the cost of moving data through the pipeline negligible relative to the cost of the actual work being done. The input layer, rendering layer, and output layer no longer have to agree on Buffer implementations, only that their Buffers share the same media type. The problem then lies in how we pass data between the layers of the pipeline.
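To illustrate (again with made-up names), the Buffer contract might look like this; each layer is free to back it however it likes, and the only thing the layers must agree on is the media type:

```java
// Illustrative sketch only, not Eve's actual Buffer interface.
enum MediaType { AUDIO, VIDEO }

interface Buffer {
    MediaType getMediaType();   // the one property every layer must agree on
    long getTimestamp();        // position of this data in the timeline
}

// For example, the input layer could hand over a GStreamer-backed implementation
// while the renderer prefers a BufferedImage-backed one; both are just Buffers,
// and neither needs to be copied when passed along.
```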

The solution to this is twofold: first, introduce marshaling and unmarshaling methods for Buffers passing into and out of the rendering stage, respectively. This lets the rendering pipeline — the one with most of the heavy lifting to do — mandate which buffer format is best for its job. However, it would be a mistake (and go against the idea of loose coupling) to force the renderer to account for every type of buffer that the input layer could throw at it; indeed, years later, we want the exact same renderer to work with any new input layers that have been added on. So the second step is to create a common, “native” Buffer format to use when all else fails. This nets us the performance benefit when the renderer knows how to efficiently encapsulate its input, the ease of programming when the input layer actually uses this format, and the flexibility to use any input layer we want, since the renderer can request a conversion of the input buffer to the native format before encapsulation. Finally, this answers the unasked question: what kind of buffer should the output layer be passed?
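Continuing the illustrative Buffer sketch above, the two-step marshaling idea could look roughly like this (NativeBuffer, BufferConverter, and the method names are all assumptions, not Eve’s real API):

```java
// Sketch of the fast path / fallback path described above.
interface NativeBuffer extends Buffer {
    int[] getPixels();                     // assumed "native" video representation
}

interface BufferConverter {
    NativeBuffer toNative(Buffer input);   // last-resort conversion
}

abstract class Renderer {
    private final BufferConverter converter;

    protected Renderer(BufferConverter converter) {
        this.converter = converter;
    }

    /** Marshal an input buffer into whatever internal form the renderer prefers. */
    Object marshal(Buffer input) {
        if (canEncapsulate(input)) {
            return encapsulate(input);                   // fast path: no copy
        }
        return encapsulate(converter.toNative(input));   // fallback: convert to native first
    }

    abstract boolean canEncapsulate(Buffer input);
    abstract Object encapsulate(Buffer input);
}
```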

I’d like to close with a discussion on the hierarchical nature of Producers. Like the timelines they are meant to express, Producers too are used in a recursive fashion — each Clip creates a delegate Producer to pass into the render queue. Because these could be track or timeline clips, they could create their own hierarchy of Producers within. The one remaining problem with this hierarchical producer/consumer pattern is: how do we make it fast? I won’t lie: a plan is in the works, but it hasn’t been finalised. The hope is that the performance part of the code will be separated from the functionality part of the code, allowing Eve first to be correct, and then to be usable in real-time.
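Reusing the FrameProducer sketch from earlier, the recursive structure might look something like this; the names are still illustrative, and the Clip interface here only shows the producer-creation side:

```java
// Each clip knows how to create a one-use, low-cost producer for its content.
interface Clip {
    FrameProducer createProducer();
}

// A timeline's producer simply delegates to its clips' producers in order;
// because a clip may itself wrap another timeline, this nests recursively.
final class TimelineProducer implements FrameProducer {
    private final java.util.Iterator<Clip> clips;
    private FrameProducer current;

    TimelineProducer(java.util.List<Clip> clipsInOrder) {
        this.clips = clipsInOrder.iterator();
    }

    @Override
    public Frame nextFrame() {
        while (true) {
            if (current == null) {
                if (!clips.hasNext()) {
                    return null;                         // timeline exhausted
                }
                current = clips.next().createProducer();
            }
            Frame frame = current.nextFrame();
            if (frame != null) {
                return frame;
            }
            current = null;                              // move on to the next clip
        }
    }
}
```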

Here’s a quick run-down of the terminology that I’ll use in this discussion.

A clip is an object that represents a piece of media, such as “the first 23 seconds of video from vacation.avi” or “50,000 samples of an 8kHz sine wave”. A track is a container for clips of a certain media type (audio or video). A timeline, then, can be seen simply as a named collection of tracks. Finally, an effect is a (length-preserving) transformation placed on a clip, such as a film grain effect, a saturation adjustment, or a volume normalisation.
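A rough object model for these terms (the names and fields below are my own illustration, not Eve’s actual classes) might be:

```java
enum MediaType { AUDIO, VIDEO }

/** A length-preserving transformation placed on a clip, e.g. film grain. */
interface Effect { }

interface Clip {
    MediaType getMediaType();            // each clip has exactly one media type
    long getLength();                    // in frames or samples
    java.util.List<Effect> getEffects(); // effects applied to this clip
}

/** A container for clips of a single media type. */
final class Track {
    final MediaType mediaType;
    final java.util.List<Clip> clips = new java.util.ArrayList<>();

    Track(MediaType mediaType) {
        this.mediaType = mediaType;
    }
}

/** A named collection of tracks. */
final class Timeline {
    final String name;
    final java.util.List<Track> tracks = new java.util.ArrayList<>();

    Timeline(String name) {
        this.name = name;
    }
}
```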

In an attempt to make editing large projects easier, and to facilitate re-use of common timeline elements (bumpers, episode introductions, credit sequences, and so on), Eve takes a hierarchical approach to the timeline of a video project. Put simply, timelines can reference other timelines (and parts of other timelines). This means that you can insert the video and/or audio from one timeline into another, and have it act as a sort of black box. Timeline clips, as these are called, can reference a particular track of another timeline, a different track of the current timeline, or the fully composited output of a given media type from a different timeline. A good question at this point: where’s the use case?
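In terms of the object model sketched earlier, a timeline clip might look roughly like this (again, illustrative only; in this sketch a null source track stands for “use the composited output”):

```java
// A clip that treats another timeline (or one of its tracks) as a black box.
final class TimelineClip implements Clip {
    private final Timeline source;       // the referenced timeline
    private final Track sourceTrack;     // null means "use the composited output"
    private final MediaType mediaType;
    private final java.util.List<Effect> effects = new java.util.ArrayList<>();

    TimelineClip(Timeline source, Track sourceTrack, MediaType mediaType) {
        this.source = source;
        this.sourceTrack = sourceTrack;
        this.mediaType = mediaType;
    }

    @Override public MediaType getMediaType() { return mediaType; }

    @Override public long getLength() {
        return 0;   // stub: would report the length of the referenced content
    }

    @Override public java.util.List<Effect> getEffects() { return effects; }
}
```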

Say you want to add a colour cast to your entire production: simply create a new timeline, and add in the composited audio and video tracks from your main timeline. All that is left to do is to apply the colour cast effect to the correct clip, and you’re set. Notice that because timeline clips are a type of clip, and clips must choose one specific media type, there will be both a video and an audio clip in your new timeline.
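Written against the illustrative classes above, that workflow might look like the following; mainTimeline and colourCastEffect stand in for your existing production timeline and whatever colour cast effect you would apply:

```java
final class ColourCastExample {
    /** Wrap the main timeline's composited output in a new timeline and cast it. */
    static Timeline addColourCastPass(Timeline mainTimeline, Effect colourCastEffect) {
        Timeline pass = new Timeline("colour cast pass");
        Track video = new Track(MediaType.VIDEO);
        Track audio = new Track(MediaType.AUDIO);
        pass.tracks.add(video);
        pass.tracks.add(audio);

        // One timeline clip per media type, each wrapping the main timeline's
        // composited output (null source track in this sketch).
        Clip videoClip = new TimelineClip(mainTimeline, null, MediaType.VIDEO);
        Clip audioClip = new TimelineClip(mainTimeline, null, MediaType.AUDIO);
        videoClip.getEffects().add(colourCastEffect);   // the cast only touches video

        video.clips.add(videoClip);
        audio.clips.add(audioClip);
        return pass;
    }
}
```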

A more industry-specific example: you could split the workload of a long project into several different slices of time, and define each as a new timeline. After being worked on separately, the timelines can be combined very simply at the end — just add each timeline to the master timeline in series, add in any boilerplate introduction or credit sequence timelines, and you’re ready to export. All this, without having to copy around and assemble hundreds or thousands of clips. You get a cleaner workflow, the ability to add transitions for free and mix up the order after the fact, as well as the semantic plus of being able to name the timelines for easier navigation. Finally, all of these last-minute decisions can be reversed more easily, allowing for more creative play.

Hopefully I’ve made all of this clear. Any criticism?
