
The buffer, as mentioned in the previous design note, is Eve’s abstraction for media data, whether it is video, audio, or some other form of media that has not yet even been conceived. To recap, a buffer stores a format specification that details the kind of media type, audio channel count, byte width, video frame size, etc., along with the data itself. However, a buffer need not store the data directly, only a promise that the data still exists somewhere and is accessible by a knowledgeable object. This means the data might not exist locally inside the Buffer structure (e.g. it may be in GPU memory), it may simply be a pointer (e.g. in a reference-counting scheme), or it might even be implicit (e.g. the frames are generated upon request).
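The contract above can be sketched in Java. This is a minimal, hypothetical rendering of the idea, not Eve’s actual interface: the names `MediaFormat`, `Buffer`, `frameCount`, and `frame` are illustrative, and `SilenceBuffer` shows the "implicit data" case, where frames are generated on request and nothing is stored at all.

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for Eve's format descriptor.
interface MediaFormat {
    int frameSize();  // bytes per frame (e.g. channels * byte width for audio)
}

// Hypothetical sketch of the Buffer contract: an immutable format
// descriptor plus a promise that data can be produced on request.
interface Buffer {
    MediaFormat format();        // assigned once; never changes
    int frameCount();            // Buffers cannot be resized
    ByteBuffer frame(int index); // may fetch, dereference, or generate lazily
}

// One possible "implicit data" Buffer: each frame is generated on demand
// (here, a zeroed block, i.e. a silent audio frame), so no storage is held.
final class SilenceBuffer implements Buffer {
    private final MediaFormat fmt;
    private final int frames;

    SilenceBuffer(MediaFormat fmt, int frames) {
        this.fmt = fmt;
        this.frames = frames;
    }

    public MediaFormat format()  { return fmt; }
    public int frameCount()      { return frames; }
    public ByteBuffer frame(int index) {
        return ByteBuffer.allocate(fmt.frameSize()); // zero-filled on allocation
    }
}
```

Note that nothing in the interface says where the bytes live; only the implementation decides that, which is exactly what lets the same contract cover GPU memory, reference counting, and generated data.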

Buffers in Eve cannot be resized, but their data can be changed. Each buffer has exactly one assigned media format descriptor, and this cannot be changed. One buffer (or part of it) can be copied into another if and only if:

  1. the two Buffers’ media formats are as identical as possible, and one of:
    1. the two Buffers are of the exact same Java type
    2. there exists a correct marshaling method in the Compositor

The marshaling methods convert other Buffer types to the Compositor’s Buffer type, and vice versa. Because these conversions can saturate the bus with memory transfers, Eve’s design allows them to be implemented manually, so that a potential bottleneck can be hand-optimized. As for the term “as identical as possible”, this will be revisited in a later post regarding media formats.
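The copy-compatibility rule can be written down as a small predicate. This is a sketch with stand-in types, not Eve’s real signatures: `Compositor.canMarshal` and `canCopy` are assumed names for illustration.

```java
// Illustrative encoding of the copy rule: formats must match, and then
// either the concrete Java types are identical or the Compositor knows
// how to marshal between them. All names here are hypothetical.
final class CopyRule {
    interface Compositor {
        boolean canMarshal(Class<?> from, Class<?> to); // hand-optimizable path
    }

    static boolean canCopy(Object srcFormat, Class<?> srcType,
                           Object dstFormat, Class<?> dstType,
                           Compositor c) {
        if (!srcFormat.equals(dstFormat)) return false; // condition 1
        if (srcType == dstType) return true;            // condition 1.1
        return c.canMarshal(srcType, dstType);          // condition 1.2
    }
}
```

The point of structuring the check this way is that the cheap same-type case short-circuits before any marshaling machinery is consulted.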

The simplest way to represent a buffer of audio or video data is to allocate a contiguous block of system memory whose size equals the size per frame multiplied by the number of frames. Because this is such a common way of representing data, Eve implements it as NativeBuffer, a class implementing the Buffer interface. As the name implies, NativeBuffer uses Java’s java.nio package to provide near-native performance on all operations. Furthermore, if special hardware is not used for the input, output, or compositing stages of the media pipeline, Buffers will never have to be (un)marshaled, because reflection can tell us at any part of the process whether incoming Buffers are of the correct type.
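A contiguous-block buffer in this style might look like the following. This is a hypothetical sketch of a NativeBuffer-like class, not Eve’s actual implementation; the constructor arguments and method names are assumptions.

```java
import java.nio.ByteBuffer;

// Sketch of a contiguous native-memory buffer: frameCount frames of
// frameSize bytes each, in one block. Illustrative only.
final class NativeBuffer {
    private final int frameSize;   // bytes per frame, fixed by the media format
    private final int frameCount;  // buffers cannot be resized
    private final ByteBuffer data; // one contiguous block of native memory

    NativeBuffer(int frameSize, int frameCount) {
        this.frameSize = frameSize;
        this.frameCount = frameCount;
        // allocateDirect backs the buffer with native memory, which is
        // where java.nio's near-native performance comes from.
        this.data = ByteBuffer.allocateDirect(frameSize * frameCount);
    }

    int frameCount() { return frameCount; }

    // View one frame without copying, by slicing the contiguous block.
    ByteBuffer frame(int index) {
        if (index < 0 || index >= frameCount)
            throw new IndexOutOfBoundsException("frame " + index);
        ByteBuffer view = data.duplicate();
        view.position(index * frameSize).limit((index + 1) * frameSize);
        return view.slice();
    }
}
```

Returning a slice rather than a copy keeps per-frame access cheap: writers and readers operate on views into the same block, which is what makes the contiguous layout worthwhile in the first place.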

Who’s writing this?

My name is Cameron Gorrie. I'm an undergraduate student at the University of Toronto, with a strong interest in Artificial Intelligence and Computer Graphics. You can read more about me, or read my CV. If you have work or research opportunities in my interest areas, please do not hesitate to contact me.
April 2021