Learning to Use Both Eyes: Stereo Postproduction
Patrick Palmer Speaks about Stereo
Stereo wouldn’t have meant anything to me for the first 25 years of my life. Due to a physiological alignment issue, I had no depth perception; I had to rely on my left eye. As a child I was told that if I “worked at it really hard” I could train my eyes to work together, but I guess I either didn’t try hard enough, or it was just bad advice. In the end I had surgery, and for the first time I had the tools to integrate two images. I soon learned, like most of you, to see in stereo.
My personal experience was not unlike what we went through as a company. In 2003 we had a customer who needed stereoscopic playback for Starlight Express, a touring version of the Broadway musical. Stereo was a brand-new area for us, but we accepted the challenge. Our solution involved loading uncompressed, parallel streams in FrameCycler, which drove two digital projectors. The content had already been created, the two streams were the same length, and it worked beautifully.
The same year we got our first stereo postproduction customer. The technology worked here, too, but, looking back, I think we were more or less telling them to “work at it really hard” – the same advice I had been given as a child. Whenever the two images didn’t fit together, the customer had to move the data around until the left and right eye matched. It made for a very restrictive workflow.
Ultimately, we learned to do what I had learned to do: treat two images as one. Rather than creating two streams and using an intermediate system to put the left and right eye together, the software treats the pair as a single entity: as soon as you open one “eye,” the other opens automatically. Much easier; much more efficient. Rather than marketing specialized stereo applications, we made this mature version of our DualStream technology available as an option for all of our real-time applications.
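The “open one eye, the other opens automatically” behavior depends on being able to identify a file’s counterpart. As a rough illustration only – the `_left`/`_right` filename suffixes below are a hypothetical convention, not how DualStream actually pairs streams – the idea can be sketched like this:

```python
from pathlib import PurePath

def paired_eye(path: str) -> str:
    """Given one eye's file path, derive the other eye's path by
    swapping a hypothetical _left/_right suffix in the filename stem."""
    p = PurePath(path)
    stem = p.stem
    if "_left" in stem:
        other = stem.replace("_left", "_right")
    elif "_right" in stem:
        other = stem.replace("_right", "_left")
    else:
        raise ValueError("no eye suffix found in " + path)
    return str(p.with_name(other + p.suffix))

print(paired_eye("shot042_left.dpx"))   # shot042_right.dpx
```

Opening either file would then let the application locate and open its partner automatically, so the editor always works on the pair as one unit.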
Stereo is More Than One Plus One
Maybe it sounds obvious that to make stereo, you have to work in stereo, but this is still the exception to the rule. Even though the necessary hardware is becoming available, in particular with display technologies, there are still not enough software tools out there for working in stereo.
Consider sound: We don’t work on six separate files and then assemble them at the end to get our 5.1 surround sound; we work on them together, distributing voices, sounds, and musical instruments to get the effects we want. We treat it as a totality and then craft our soundtrack within that space. It’s the same with stereoscopic imagery: our tools should not load two separate image tracks which we are then forced to match; we need to work in stereoscopic space as we achieve the results we want.
Stereo is more than just one plus one. Not only do our tools need to treat stereo content as a single entity, we need to be able to watch it that way as we work on it. Some image issues – such as flares or camera misalignment – can’t be caught any other way.
Alignment – or binocular – adjustments are a case in point: they are critical to viewer comfort, helping avoid eye strain, headaches, and similar problems. In SpeedGrade we handle this with the same image adjustment functions that control pan & scan, using them to reposition and skew the image.
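To make the idea of repositioning one eye concrete, here is a minimal sketch – not SpeedGrade’s implementation – of correcting a vertical misalignment by shifting one eye’s image up or down, with the image represented as a simple nested list of pixel rows:

```python
def shift_vertical(image, offset):
    """Shift image rows by `offset` (positive = down), padding with
    black (0) rows, so one eye can be realigned to match the other.
    `image` is a list of rows (lists of pixel values)."""
    h = len(image)
    w = len(image[0])
    blank = [0] * w
    if offset >= 0:
        return [blank[:] for _ in range(offset)] + image[: h - offset]
    return image[-offset:] + [blank[:] for _ in range(-offset)]

# Suppose the right eye is one row too high: shift it down by one row.
right = [[1, 1], [2, 2], [3, 3]]
print(shift_vertical(right, 1))  # [[0, 0], [1, 1], [2, 2]]
```

Even small vertical offsets between the eyes are uncomfortable to fuse, which is why this kind of correction matters so much for viewer comfort.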
And there is no need to wait until post to do this. With the right tools and monitoring system, shots can be checked and fixed – or re-shot – during production.
And it’s not just about fixing problems: seeing the work in stereo informs all of our decisions from editing to grading and finishing. It’s about being able to make good art into great art.
Managing Double the Data
Ok. So far so good. But what are the technical obstacles to stereoscopic postproduction? If we think 4K is a lot to handle, imagine doubling that! Three factors are converging to solve this problem. The first is obvious: Moore’s Law. Hardware is just getting faster and cheaper by the year and that will continue.
Secondly, we see a big future for RAW workflows. It only makes sense. You get greater image data latitude at one-third the size of a standard RGB format; optimum picture quality in a smaller file. At this point, IRIDAS is the only vendor with the technology to offer real-time playback of all RAW file formats, but that will change in time as the benefits of RAW workflows become more apparent.
The third factor allowing us to manage the necessary data throughput is the use of the GPU: by offloading image processing to the graphics card we reduce the burden on the CPU, and we can save all our image adjustment instructions as metadata. Metadata is tiny: a SpeedGrade .Look file is typically under 20 KB, for example. This approach also means that all of these adjustments – grading, pan & scan, any film stock LUTs we might want to use – remain editable, since none of them is baked into the image. Not only that, but metadata allows us to share all this information with other applications. For example, CineForm recently announced an implementation of this with their RAW format which allows users to load – and exchange – SpeedGrade .Looks in editing applications like Adobe Premiere.
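The metadata idea can be sketched in a few lines. The field names below are purely illustrative – they are not the actual SpeedGrade .Look schema – but they show why a grading “recipe” stays tiny and editable: the source pixels are never modified, only the instructions are stored.

```python
import json

# Hypothetical grading recipe; field names are illustrative,
# not the real .Look file format.
look = {
    "gain": 1.2,
    "offset": -0.05,
    "pan": {"x": 12, "y": 0},  # pan & scan repositioning, in pixels
}

def apply_look(pixel, look):
    """Apply gain and offset to one luminance value, clamped to [0, 1].
    The source pixel is untouched, so the grade remains fully editable."""
    return max(0.0, min(1.0, pixel * look["gain"] + look["offset"]))

serialized = json.dumps(look)
print(len(serialized), "bytes")   # a few dozen bytes for the whole recipe
print(apply_look(0.5, look))      # roughly 0.55
```

Because the recipe is just structured data, it can be handed to any application that understands the schema – which is exactly what makes exchanging looks between tools practical.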
Telling Better Stories
But still, when all is said and done, is stereo worth the trouble? You can still tell great stories without it. Just like you can still shoot in black and white, or record in mono and produce something beautiful. But usually you don’t.
Stereo adds a whole new dimension, literally and figuratively, creating a new, more immersive entertainment experience. As with so many technologies when they are introduced, the first uses of stereo have been all about “Wow.” But, that quickly gets old. Treating stereo as a “cool effect” is understandable in the beginning, but using it that way risks taking the audience out of the story rather than keeping them in it.
At IRIDAS we have followed this trajectory ourselves with our customers. The first projects – shows, theme parks, a Star Trek attraction in Vegas, and even a science application for a Nobel Prize institute in Sweden – were all about creating an impressive effect. A lot of work went into creating a location-based experience that would be open to the public, possibly for several years. In those cases, stereo was the attraction.
The next step involved bringing a cool technology into theaters on big-budget features like U2 3D and Journey to the Center of the Earth – ambitious projects with complex production pipelines. These are “spectacles in the multiplex,” and – as with VFX-driven films – there will always be a place for spectacle.
But we think one of the most exciting steps is just starting to happen now: stereo simply being used to tell the story better. Of course we are still refining the tools, just as filmmakers are refining their techniques, but the technology is here, and you can do stereo on much less than a blockbuster budget. Paradise FX’s “Dark Country,” which is in production right now, is a perfect example of that.