
ASWF Deep Dive: OpenEXR Origin Story: Part 2

April 19, 2021

Part 2: The Evolution of OpenEXR

ILM continued to oversee OpenEXR for the next several years, with major contributions later coming from Weta Digital and DreamWorks. Florian received a Technical Achievement Award for the development of OpenEXR at the 2007 AMPAS Scientific and Technical Awards. OpenEXR became an industry standard, supported by many common digital content creation and rendering tools, enabling easier data interchange across studios and encouraging other studios to share their own internal projects as open source. Read part one of the OpenEXR origin story here.

Florian Kainz: For the next seven years or so after we released it, I was working on OpenEXR at least half of the time, with some periods where I was working on it full time. OpenEXR continued to grow. There are always things, no matter how careful you were when you designed it, that aren’t quite right, or new features that people want, and you have to think really carefully about how you’re going to add those without breaking compatibility and without cutting off avenues to implement even more features later. Sometimes outside people would contribute some amount of code that had to be integrated, but for quite a while, it was mostly that people would request features and we’d figure out how to add them.

Weta Digital was the first studio outside of ILM to contribute major pieces of code to OpenEXR, which ultimately led to the release of OpenEXR 2.0 in 2013. Peter Hillman (at Weta from 2007-present) joined the conversation to recall the early days of their involvement, along with ILM’s Cary Phillips (at ILM from 1994-present) who today chairs the ASWF Technical Steering Committee that oversees OpenEXR’s continued development.

Peter Hillman: ‘Avatar’ in 2009 was the first major stereo 3D movie that Weta Digital worked on. We were keen to find a file format that could store both the left and right eye images in the same file to simplify our pipeline and minimize confusion. We had already adopted the OpenEXR format for storing renders, and its support for multiple channels made it possible to store two images in different sets of channels. We were still storing live action footage using the DPX file format, since both digital cameras and film scanners output images in that format. For ‘Avatar,’ we switched to using OpenEXR for both CG renders and for the live action imagery so we could store the left and right image in the same file, and developed a standardized channel naming scheme to identify the left and right images and metadata within the file. This became multi-view – our first big contribution.
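For readers curious what that convention looks like in code, here is a minimal sketch (not code from the interview) of declaring a stereo multi-view header with the OpenEXR C++ API. It assumes the standard multi-view convention: the "multiView" string-vector attribute lists the views, channels of the first (default) view are unprefixed, and other views prefix their channel names with the view name. The image dimensions and channel choices are purely illustrative.

```cpp
// Minimal sketch, not production code: a stereo multi-view OpenEXR header.
// Assumes the standard multi-view convention (multiView attribute plus
// view-name channel prefixes); verify details against your OpenEXR version.
#include <ImfHeader.h>
#include <ImfChannelList.h>
#include <ImfStandardAttributes.h>    // addMultiView()
#include <ImfStringVectorAttribute.h> // StringVector

using namespace Imf;

int main()
{
    Header header(1920, 1080);

    // The view list; the first entry ("left") is the default view.
    StringVector views;
    views.push_back("left");
    views.push_back("right");
    addMultiView(header, views);

    // Channels of the default (left) view carry no prefix...
    header.channels().insert("R", Channel(HALF));
    header.channels().insert("G", Channel(HALF));
    header.channels().insert("B", Channel(HALF));

    // ...while the right view's channels are prefixed with the view name.
    header.channels().insert("right.R", Channel(HALF));
    header.channels().insert("right.G", Channel(HALF));
    header.channels().insert("right.B", Channel(HALF));

    // The header would then be passed to an OutputFile together with a
    // frame buffer holding both views' pixel data.
    return 0;
}
```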

Florian Kainz: Certainly Weta’s most significant contribution was what OpenEXR calls deep images. The idea with deep images is that a pixel doesn’t store just one value; it can hold multiple values, each associated with a “depth,” or distance from the camera. This is very useful for certain types of image processing operations, for example if you want to do physically accurate depth-of-field blur.
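As a concrete illustration of that idea, here is a minimal sketch of writing a tiny deep scanline file, where every pixel carries a variable-length list of samples with their own depths. It assumes the deep API that shipped with OpenEXR 2.0 and later; the file name, channel set, and sample values are made up, and error handling is omitted.

```cpp
// Minimal sketch of writing a deep scanline file: every pixel stores a
// variable-length list of samples, each with its own depth (Z) value.
// Assumes the OpenEXR 2.x+ deep API; data and file name are illustrative.
#include <ImfDeepScanLineOutputFile.h>
#include <ImfDeepFrameBuffer.h>
#include <ImfChannelList.h>
#include <ImfPartType.h>
#include <ImfHeader.h>
#include <vector>

using namespace Imf;

int main()
{
    const int width = 4, height = 1;  // a tiny one-scanline image

    Header header(width, height);
    header.channels().insert("Z", Channel(FLOAT));
    header.channels().insert("A", Channel(FLOAT));
    header.setType(DEEPSCANLINE);               // mark the part as deep
    header.compression() = ZIPS_COMPRESSION;    // deep data: NONE, RLE, or ZIPS

    // Per-pixel sample counts and per-pixel arrays of sample data.
    std::vector<unsigned int> sampleCount(width, 2);            // 2 samples per pixel
    std::vector<std::vector<float>> zData(width), aData(width);
    std::vector<float*> zPtr(width), aPtr(width);
    for (int x = 0; x < width; ++x)
    {
        zData[x] = {1.0f, 2.5f};   // depths of the two samples
        aData[x] = {0.5f, 1.0f};   // their alpha values
        zPtr[x] = zData[x].data();
        aPtr[x] = aData[x].data();
    }

    // The frame buffer holds a sample-count slice plus one DeepSlice per
    // channel; each DeepSlice points at an array of per-pixel sample pointers.
    DeepFrameBuffer fb;
    fb.insertSampleCountSlice(Slice(UINT, (char*) sampleCount.data(),
                                    sizeof(unsigned int),
                                    sizeof(unsigned int) * width));
    fb.insert("Z", DeepSlice(FLOAT, (char*) zPtr.data(),
                             sizeof(float*), sizeof(float*) * width,
                             sizeof(float)));
    fb.insert("A", DeepSlice(FLOAT, (char*) aPtr.data(),
                             sizeof(float*), sizeof(float*) * width,
                             sizeof(float)));

    DeepScanLineOutputFile file("deep_example.exr", header);
    file.setFrameBuffer(fb);
    file.writePixels(height);
    return 0;
}
```

Reading such a file works in reverse: the application queries the per-pixel sample counts first, allocates storage, then reads the samples; the per-sample depths are what make operations like the physically accurate depth-of-field blur Florian mentions possible.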

Peter Hillman: Following ‘Avatar,’ OpenEXR had been widely adopted across the industry for storing computer-generated images, and there was growing interest in an open format for computer graphics renders that also supported deep images. At that time, Weta developed a pipeline for rendering and compositing with deep images, which included a prototype file format for deep data based on OpenEXR. Following discussions with various software vendors and visual effects facilities, it was decided to integrate deep image support directly into the OpenEXR library. So I worked with ILM to adapt our prototype format to become part of OpenEXR.

Florian Kainz: We talked to Weta and they agreed to make this part of OpenEXR. Peter produced the initial code base and then he and I worked together on integrating this into the existing OpenEXR, and we also collaborated on some dark corners of the math for deep image compositing operations. Peter had it figured out for the most part, but there were a few cases where it still needed to be refined, so we worked on those and produced documentation.

Cary Phillips: The addition of deep image support provides a very convenient operation for blending images together and creating new images.
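To give a sense of the math involved (this is the standard front-to-back compositing that deep workflows build on, not a formula quoted from the interview): flattening a deep pixel whose samples are sorted by depth, with premultiplied colors C_i and alphas α_i, repeatedly applies the "over" operation:

```latex
C_{\mathrm{flat}} = \sum_{i} C_i \prod_{j<i} \left(1 - \alpha_j\right),
\qquad
\alpha_{\mathrm{flat}} = 1 - \prod_{i} \left(1 - \alpha_i\right)
```

The "dark corners" Florian mentions arise where this simple picture breaks down, for example when samples overlap in depth or represent volumes rather than hard surfaces.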

Rod Bogart: The result of all this was OpenEXR 2.0, which was released in 2013. It was a big enough update that we had a version number bump there.

Soon after OpenEXR 2.0 was released, DreamWorks also contributed a key update for lossy compression. Andrew Pearce (at DreamWorks from 2004-present) added his reflections on this development.

Andrew Pearce: Around 2008, DreamWorks was interested in moving to HDR images to improve flexibility in the lighting and color grading pipeline, but the lossless compression techniques available increased storage requirements approximately sixfold, which broke budgets. Karl Rasche undertook research to find a lossy compression technique that kept image quality high while delivering storage requirements close to 1:1 with non-HDR images. Quality was judged by our VFX and Lighting Supervisors, who compared the lossy and non-lossy images in color-accurate, high-quality review rooms. While the lossy compression is not suited to all workflows, it satisfies the requirements of enough scenarios to be useful outside of DreamWorks, which motivated us to contribute it to the standard in 2014. Since its introduction, it has been the default and predominant format for production at DreamWorks.

Florian Kainz: The compression methods we had originally developed for OpenEXR were lossless, meaning that saving an image in a file preserves its contents exactly. This is desirable for workflows where images are repeatedly loaded from a file, processed, and then saved in a new file. Lossless compression ensures that there is no generational loss, or gradual degradation as pixels are copied from one file to the next. However, noise (random pixel-to-pixel brightness variations caused by the physics of image capture) and other essentially random patterns, which are present in most images, limit the effectiveness of lossless compression, and file sizes can rarely be reduced by a factor of more than 2 or 3. Karl addressed this by developing lossy “DWA compression” for floating-point images, based on the Discrete Cosine Transform, the same technology that is at the heart of JPEG compression. This lossy compression reduces randomness in an image by discarding details that are too subtle to be perceived by the human eye, which can allow for file size reductions by a factor of 10 or more without visible image degradation. It was so successful in production at DreamWorks that Karl and Andrew pushed the company to ultimately contribute the code to the OpenEXR project to make lossy compression available to anyone.
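For a sense of how that shows up in practice, here is a minimal sketch (not DreamWorks' code) of selecting DWA compression when writing an RGBA file. The file name, pixel data, and compression level are illustrative, and the "dwaCompressionLevel" header attribute used to tune the quality/size trade-off is an assumption to verify against your OpenEXR version.

```cpp
// Minimal sketch: writing an RGBA image with DCT-based lossy DWA compression.
// The flat grey pixel data, file name, and compression level are illustrative;
// the "dwaCompressionLevel" attribute name is an assumption to verify.
#include <ImfRgbaFile.h>
#include <ImfHeader.h>
#include <ImfFloatAttribute.h>
#include <vector>

using namespace Imf;

int main()
{
    const int width = 640, height = 480;
    std::vector<Rgba> pixels(width * height, Rgba(0.18f, 0.18f, 0.18f, 1.0f));

    Header header(width, height);
    header.compression() = DWAA_COMPRESSION;  // lossy, DCT-based (DWAB also exists)
    header.insert("dwaCompressionLevel", FloatAttribute(45.0f));  // higher = smaller files

    RgbaOutputFile file("dwa_example.exr", header, WRITE_RGBA);
    file.setFrameBuffer(pixels.data(), 1, width);  // strides are in pixels
    file.writePixels(height);
    return 0;
}
```

Lossless options such as ZIP or PIZ remain available through the same header field, so a pipeline can keep lossless compression for intermediates that are rewritten many times and reserve DWA for imagery where small, visually lossless files matter most.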

-----

OpenEXR, a project at the ASWF, is an HDR image file format for high-quality image processing and storage, originally developed by Industrial Light & Magic (ILM) in 1999 to fill a demand for higher color fidelity in VFX. Released to the public in 2003, it is one of the earliest open source software projects specific to VFX and set the industry on a new course. We caught up with some of OpenEXR’s key contributors from throughout the years to compile an oral history of the development, evolution, and ongoing impact of this groundbreaking file format.

Special thanks to Florian Kainz, Rod Bogart, Cary Phillips, Peter Hillman, and Andrew Pearce for taking the time to share their stories with the ASWF. Read part one of the OpenEXR Origin Story here.