Tuesday, December 15, 2009

30 Megapixel CineForm

This post showcases one of the many applications for the CineForm codec. BrainSalt Media uses CineForm decoders to drive 16 seamlessly tiled projectors on a 250 square meter curved screen for an awesome virtual aquarium installation in China.

To top the aquarium, here is BrainSalt's 30-megapixel, 60-frames-per-second dome projection, also using CineForm (15 tiled 2K projectors). Here is a photo of the dome during setup:


Search for Macau City of Dreams on YouTube for snippets of these huge screens.

P.S. I have just heard that the dome presentation above won an award -- a Themed Entertainment Association Award for 2009.  Remember, that is 1.8 gigapixels per second running through a fully software/CPU-based CineForm decoder.

Friday, November 27, 2009

Displaying Metadata

Just before the Thanksgiving break, CineForm released new betas for the Neo and Prospect product lines (http://tr.im/FNSb). These have a new feature that we have been planning for ages (years, it seems): the ability for the decoder to render its own passive metadata. The decoder has been applying Active Metadata for many years, developing the image through color parameters and 3D LUTs for creative looks, yet the classic metadata has remained dormant within each compressed frame -- we left it up to vendors using the SDK to extract it as needed (and few do). Because metadata is so often lost or misplaced -- in many workflows you are lucky if you are left with just the timecode -- we long ago moved metadata out of side-car files and file wrappers (AVI/MOV/MXF) and placed it within the compressed sample itself. This enables the decoder to read its own metadata (not possible with 99% of video types); all that was missing was a font engine to render the results on the display. The decoder now has that font engine. Offline workflows typically have a range of burn-ins on top of the video image, returning to burn-in-free media for online/finishing. CineForm burn-ins are non-destructive, allowing the operator to enable the overlay display, choose which elements to display, and switch from offline to online with a single click. Any tool that uses the CineForm decoder will gain this feature.

The First Light control panel:


The placement and font controls are primitive today, but the engine already supports transparency, color, and outline stroke controls (vendors using the SDK can select these today). Sample images from the overlay engine tests:
http://twitpic.com/obs9l
http://twitpic.com/ob1d9
http://twitpic.com/oay77

For those who want to get started with metadata burn-ins, here are some simple steps:

1) Start the new First Light (within version 4.1.3 of Neo or Prospect.)
2) Import a CineForm clip that has the type of metadata you wish to display.
3) Select the Passive Metadata tab to reveal all the metadata types within the clip.
4) Select an item from the list.
5) Optionally, click the 9-way justification control to choose where you want to place the burn-in.
6) Click "Add / Remove" to apply the burn (and again to remove.)
7) Repeat steps 4-6 to add more metadata to the output.

8) The Overlay checkbox (near the histogram control) globally enables these burn-ins for all clips in the system.

If you want custom formatting for your metadata, we use C-language printf formatting. Instead of the raw recorded data "2009-11-26", enter "Date: %s" in the custom formatting for a display of "Date: 2009-11-26". You can also use this to add freeform burn-ins like "property of me" by selecting any metadata line and leaving out the "%s" (string) or "%d" (decimal) in the format. Font name and size are also active: setting Arial and size 70 will render the next added burn-in with those characteristics.
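For illustration, here is a minimal C sketch of that formatting rule (the function name is hypothetical; First Light does this internally):

    #include <stdio.h>

    /* Hypothetical sketch: how a printf-style format string turns a metadata
       value into burn-in text, as described above. */
    static void format_burnin(char *out, size_t outlen,
                              const char *user_format, const char *metadata_value)
    {
        /* "Date: %s" + "2009-11-26" -> "Date: 2009-11-26" */
        snprintf(out, outlen, user_format, metadata_value);
    }

    int main(void)
    {
        char text[256];

        format_burnin(text, sizeof(text), "Date: %s", "2009-11-26");
        printf("%s\n", text);                 /* Date: 2009-11-26 */

        /* A format with no %s ignores the metadata value entirely,
           producing a freeform burn-in such as "property of me". */
        format_burnin(text, sizeof(text), "property of me", "2009-11-26");
        printf("%s\n", text);                 /* property of me */
        return 0;
    }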

Users of Red and SI cameras will have lots of metadata to display; unfortunately there is not much for HDV/AVCHD users -- yet. The reason we implemented this feature is not just to display today's metadata, but to encourage more metadata to be stored at acquisition time (something our own tools have a reason to do more of), and to store changing metadata during capture -- good examples would be lens data and GPS/orientation coordinates, which are coming to more cameras. Even Red One metadata out of the SDK is only per clip, not per frame (we understand this will be addressed in a future R3D SDK release.) There is an opportunity for those doing live HDSDI/HDMI captures into CineForm to generate their own metadata streams (see how at techblog.cineform.com.)

We aren't stopping at displaying new metadata; next we will use this metadata to trigger external applications and tools to act in new and programmable ways -- think of third-party apps for your decoder. We want metadata to approach the power of the image data it is stored with.

Thursday, October 08, 2009

DPX-C: Compressed DPX as a New File Type

I am enlisting your help (testers needed) -- details at the end of this post *.

For many years now, fans of CineForm compression have been wanting a single-frame version in addition to streaming AVIs/MOVs, as this would be particularly handy for frame-based render farms. Yet there is a catch-22 for supporting any new image format: too many tools need to support the format before it is viable, preventing the new format's creation. This is very different from streaming formats like AVI and MOV, as most tools use a registered codec infrastructure, like those provided by QuickTime, DirectShow or VideoForWindows, enabling new compressors to be created without modifying the calling application -- we added CineForm compression to FCP without Apple needing to change FCP (even though we wish they would.) For still formats we can't just add our compression to TIFF or PNG and expect existing tools to support it. Unfortunately still image formats don't use a codec model like QuickTime's, slowing down the implementation of new image formats (consider how digital cameras still mostly use JPEG, even though there have been better formats for years.)

CineForm has an idea to help bridge some of that catch-22 and enable support for a CineForm compressed frame format more quickly -- the idea is compressed DPX; I'm calling them DPX-C. By using existing DPX structures, some level of backward compatibility can be obtained. Of course we are not expecting existing tools to magically open the compressed data of a DPX-C file, so we add a thumbnail image that visually reports that the image is compressed (a text overlay within the thumbnail image). The compressed data resides within the proprietary data fields of the DPX file. While the thumbnail greatly helps for image browsing, we still need a way to do full-resolution decoding. The first basic step is to provide free tools that convert DPX-C back to DPX, or DPX-C into a streaming format like AVI or MOV as needed. But we are also thinking about file system virtualization, so that a directory of DPX-C files is seen as standard DPX files that load within existing tools without tool modification (** more on this to follow.)
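To make the backward-compatibility idea concrete, here is a minimal C sketch -- emphatically not the DPX-C specification -- that only reads the two standard DPX header fields (magic and image-data offset) the way any existing reader would. For a DPX-C file, the raster at that offset would be the thumbnail, with the compressed frame tucked into user/vendor fields the reader ignores:

    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file.dpx\n", argv[0]); return 1; }

        FILE *fp = fopen(argv[1], "rb");
        if (!fp) { perror("fopen"); return 1; }

        uint8_t hdr[8];
        if (fread(hdr, 1, 8, fp) != 8) { fclose(fp); return 1; }
        fclose(fp);

        /* Standard DPX magic: "SDPX" (big-endian file) or "XPDS" (little-endian). */
        int big = hdr[0]=='S' && hdr[1]=='D' && hdr[2]=='P' && hdr[3]=='X';
        int lit = hdr[0]=='X' && hdr[1]=='P' && hdr[2]=='D' && hdr[3]=='S';
        if (!big && !lit) { fprintf(stderr, "not a DPX file\n"); return 1; }

        /* Bytes 4-7 hold the offset to the visible image data
           (which, in the DPX-C idea, would be the thumbnail). */
        uint32_t off = big
            ? ((uint32_t)hdr[4]<<24) | ((uint32_t)hdr[5]<<16) | ((uint32_t)hdr[6]<<8) | hdr[7]
            : ((uint32_t)hdr[7]<<24) | ((uint32_t)hdr[6]<<16) | ((uint32_t)hdr[5]<<8) | hdr[4];
        printf("%s-endian DPX, image data at offset %u\n", big ? "big" : "little", off);
        return 0;
    }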

CineForm file virtualization has been done before, such that 2K DPX files can be played in real time (24p) off a bandwidth-limited device like a thumb drive (this was done at a facility outside of CineForm -- there are some very smart users out there.) As CineForm compression (and decompression) is so fast, the increase in CPU load is often less than the percentage of bandwidth that uncompressed DPX consumes. E.g. DPX playback might take 70% of your RAID bandwidth, yet DPX-C would use 30% of a modern CPU for the same playback rate and only 10-15% of the RAID. This means DPX-C will produce greater throughput even as it moves some of the disk load to CPU load -- it is a good trade-off.
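As a back-of-the-envelope illustration of that trade-off (the RAID throughput is just an example figure; the frame sizes are the ~8MB uncompressed and ~715k compressed numbers quoted later in this post):

    #include <stdio.h>

    /* Rough throughput comparison for 1080p at 24 fps, using example numbers. */
    int main(void)
    {
        const double fps           = 24.0;
        const double dpx_frame_mb  = 8.0;     /* uncompressed 10-bit DPX frame */
        const double dpxc_frame_mb = 0.715;   /* CineForm 422 compressed frame */
        const double raid_mb_per_s = 300.0;   /* example RAID throughput       */

        printf("DPX   : %5.1f MB/s (%.0f%% of RAID)\n",
               dpx_frame_mb * fps,  100.0 * dpx_frame_mb  * fps / raid_mb_per_s);
        printf("DPX-C : %5.1f MB/s (%.0f%% of RAID)\n",
               dpxc_frame_mb * fps, 100.0 * dpxc_frame_mb * fps / raid_mb_per_s);
        return 0;
    }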

Encoding can be virtualized in the same way, enabling a render farm to export standard 10-bit RGB DPX files to a volume that automatically stores DPX-C files, greatly improving network performance at the same time. This DPX-to-DPX-C virtualization solves the problem of trying to render farm frames directly into a streaming media file -- frames don't arrive in order, and CineForm is a variable-length format, meaning some frames are bigger than others. DPX-to-MOV virtualization would require file padding or a constant bit-rate design, both undesirable -- not an issue with DPX-C files. While a wide range of virtualization configurations are possible, DPX to DPX-C is the simplest, particularly as file naming would be identical, allowing switching between virtualized and extracted DPX files as needed.

The ultimate goal is to virtualize DPX for those tools that need it, while encouraging native support in existing tools. DPX-C files will support any of the CineForm compression types: 422, 444, 4444 and RAW (anyone for a DPX-C CineForm RAW still camera?) The first tools to add DPX-C will be CineDDR and Drastic's DDR recorders and virtual decks. DPX-C was needed for better VTR emulation, allowing telecine sessions to operate with normal inserts without creating overlapping AVI/MOV files that may need to be re-ordered onto the virtual tape. With these products the DPX-C file sequences can be simply and losslessly parsed into MOV or AVI streams for archive or downstream processing.

* While all this sounds great, it hasn't completely happened yet. I need your help in determining the best way to extend DPX with the widest range of compatibility with your tools. This download: www.miscdata.com/DPX/dpxc.zip contains five DPX-C files with subtle variations on how the data is stored, like thumbnail first followed by the compressed data or vice versa, and how the extra data is flagged. The files, labeled A through E, are 1920x1080 frames compressed from 8MB to around 715k (422); some or all will load in your existing video tools -- I would like to know which work and which don't in your existing DPX applications. So far I have only tested PC-based tools; these work for all DPX-C files: AE CS4, Combustion 2008, Vegas 9 and XnView. C, D and E files work in AE CS3 and Photoshop. So something works in all tools tested so far. Please report back your own findings with these tools or others, and with different versions (note the behavior change between CS3 and CS4.)

** We are looking for a developer who has implemented virtual file systems based on FUSE or similar for all platforms (Windows, OS X and Linux -- we will need Win/OS X versions first.) We believe a FUSE/MacFUSE implementation would be straightforward; we just have too many projects internally at the moment, and will consider contracting this part out. Of course if we can't find the right person or team, links to good samples would be helpful, as we want this to happen as soon as we can.

Thanks.

---- Update March 15, 2010 ----

Several developers are now using this format, thanks to all the testing users performed for us.  We ended up using a variation on type E, as it was the most compatible and extra useful in image browsing systems because the thumbnail is stored before the compressed image data.  For a more complete technical specification of DPX-C, please visit the CineForm Techblog. Within v4.3 (or greater) of the Neo and Prospect PC tools, the DPX2CF and CF2DPX utilities now support the DPX-C format.




Tuesday, September 01, 2009

Canon EOS 7D


This will likely be the first Canon still camera I ever own. I've had plenty of Canon video cameras, but I've been a Pentax guy for years (my lens collection has kept me there); the video features of the Canon 7D will change that. While the 5D Mk-II produces some awesome-looking motion images, the 30.0fps has been a complete pain. We competed against several 5D movies in San Diego's last 48 Hour Film Project, and some of those teams didn't follow the rules and submitted 30.0 masters (we had to convert them all to 29.97 for presentation.) Of the 50 teams that submitted, everything was presented at 1080p23.976 except the 5D Mk-II films -- I'm sure none of those filmmakers wanted to shoot 30p. So the big fix for the 7D: the camera also supports 29.97 and 23.976 -- these frame rates make much more sense for the filmmaker.

Now the 7D is still heavily compressed I-frame H.264, 4:2:0 and 8-bit, so as with the 5D we will see a huge attach rate to our Neo product lines (Mac and PC.) The decode speed of H.264 is poor, so many convert to CineForm just for the improved editing speed.

Now for some speculation: we're hoping the Live View is now full 1080i/p over HDMI, and that the burn-ins can be turned off, as then you would be able to capture directly, bypassing the H.264 compression. Fingers crossed! If this came true, the 23.976p mode would likely go out as 60i with pulldown (for wider display compatibility); fortunately we can remove that on the fly. In addition to removing compression issues, the HDMI preview may be 4:2:2 rather than just 4:2:0, so it will be nicer for keying. It will still be 8-bit -- but for a $1700 camera, with tethered recording at much lighter compression (via BM Intensity or AJA Xena/IOexpress), you can't beat it.

We look forward to testing and optimizing our workflow for this new camera.

Tuesday, August 25, 2009

Theatrical Successes

I've been missing from much of my usual online forum and Twitter activities over the last 10 days; I've been flat out with the 48 Hour Film Project, my annual step into the creative side of the business. Lots of new things to report this year. The first non-tech item: CineForm staff members Jake, Tim, Craig and myself formed two teams rather than just one this year. While this saddened me at first, the results do not show anything negative from our divided efforts; both resulting films received more awards than the sum of our previous five-year history with this competition. My team drew the genre Detective/Cop and made "The Case of the Wild Hare" in a comic film noir style, winning an audience award and two jury prizes. Tim and Jake's team drew the genre Film de Femme, shot a thriller, and got a jury prize and runner-up for Best of San Diego (out of 47 submitted films.)

Check both films out on Vimeo in HD, or just watch embedded.

The Case of the Wild Hare from David Newman on Vimeo.



Touch from Jake Segraves on Vimeo.



Now for the technical: one film used an SI-2K (running beta 2.0 software -- nice), a Kenyon gyro stabilizer (interesting), and a range of 30+ year-old C-mount primes and zooms (classic); the other used an HV30 with a lens adapter and Canon still lenses. Can you tell which is which? Shooting technology is clearly only a small factor in making an enjoyable film, as both films won against very polished projects shot on everything from 5Ds to Red Ones. While the shooting technology could hardly be more different, the post workflows were very alike. Editing was done in Premiere Pro CS3 (Windows) on i7s or dual quad-core Core2 Intel systems (OSes ranging from XP64 to Win7.) The only reason CS4 wasn't used was HDSDI monitoring, which we haven't got working there yet. CS4 would have helped, as we always use the beta features of our own tools, and one of the new features only worked in CS4 (we didn't realize this until mid-post.) Both films were shot for 2.35:1 presentation, as 16:9 has become so TV-like these days. The new feature that neither team got to use was an addition to First Light enabling 2.35:1 crop and centering as Active Metadata -- it would have saved some time in post for positioning and rendering. First Light was used extensively on both projects; all grading was done as a real-time operation in First Light, particularly aided by the new auto-sync feature which keeps First Light connected to the NLE's timeline. No color correction was needed within Premiere itself.

More on First Light. One thing that helped us was a range of 3D LUTs (look files) that we have been preparing for some time. These LUTs are now available for download to use with Neo HD/4K and Prospect under Windows (Mac versions soon.) On such a compressed schedule you get very little time to work on your look; I think I put about 30-40 minutes into color correcting Wild Hare, about twice the time I had for color work on last year's project, but it is not much, so the prepared LUTs helped greatly. The Active Metadata LUT system works on the final output of the image, with all the linear-light processing for white balance, color matrixing (saturation), channel gains, lift, and gamma applied to the input of the LUT. This makes it pretty easy to mix and match a range of sources to produce one common look (stylized or not.) As I was working with two co-directors, one of whom had never worked in film before (only stage work), I prepared different look profiles as switchable color databases, so the entire timeline could have its look/style switched dynamically. This helped showcase possible finishing styles without impacting the editing session, which went into the 47th hour.

Here is a before-and-after example of First Light processing.

Source:




Final:



After our two teams had submitted, the remainder of the week was spent preparing the other 45 films for presentation in CineForm format out to the Christie 2K projector at the local UltraStar theater. As CineForm is a San Diego 48 Hour sponsor, we requested that 1080p CineForm AVI/MOV be the default HD submission format (we gave all teams a 30-day Neo HD license); fortunately more than half of the films were submitted this way, making our life easier. It was the remaining SD submissions, with their many pixel aspect ratios, letter-boxing, pillar-boxing, cropping, and pulldown variations, that were a time-consuming headache (up-res'ing them to look decent.)

For theatrical presentation we were prepared to use a Wafian F-1 as a playback server, which has worked flawlessly in previous years and is not rarely used for this purpose (it works really well -- F-1s were used to present HD at Comicon.) Unfortunately, an hour before the first theatrical presentation, the drive sled that goes into the Wafian crashed. This piece of bad luck put us in a panic, as it was going to take more than the remaining hour to prep a new drive with the 23 films screening that evening. Fortunately Tim had been experimenting with scheduled playback using CineForm AVIs, a Blackmagic card and the open source Media Player Classic, and his system had all the films on it. Basically, a cheap PC with a $300 HDSDI card hooked to a theater projector becomes an awesome 10-bit 1080p presentation system. This was not flawless; it stopped about 7 films into the second screening group (resulting in 45 seconds of black as Tim ran up into the projection booth to give MPC a kick.) While a Wafian would never have done that, the cheapness of this presentation solution made us pursue the same setup the next night and for the Best of San Diego screening. There is still work to do to tweak this solution, but something like this is needed, as it is crazy that San Diego is still the only city to project in HD for this international festival -- this being San Diego's third HD year.

Extending our experience from the local festival, we want every festival to stop presenting DVDs or Beta (still so common) when most sources are film- or HD-originated. The solution has got to be cheap and simple, to allow for last-minute playlist changes for pre-rolls, differing frame rates (24p, 30p and 60i), audio level adjustments, skipping bars and tone, etc. -- i.e., Blu-ray will not do. We had one beautifully mastered film this year that had the wrong black levels; we put it through First Light and fixed it in seconds without any rendering. Duane Trammel, the San Diego 48 Hour producer, used the playlist flexibility to inter-cut interviews with several of the filmmakers into the program -- it was a pretty cool touch. More of this style of flexibility is what is needed, and we are hoping to help.

Tuesday, July 28, 2009

Upcoming speaking engagements

This year will be the first time I make it to IBC, thanks to the persistence of Phil Streather in having me present the editing workflow section of his "3D at the Movies" super panel. The full panel will cover many elements such as pre-visualization, shooting and post, all presented in a large RealD-equipped movie theater. It will be a first for me to demonstrate 3D convergence manipulation live on a screen of that size (seating over 800 people, apparently.) So if you're in Amsterdam the morning of September 14th, it might be worth dropping by early to secure good seating (for the better 3D experience.)

Something closer to home: this weekend I will be presenting part of the editing panel, covering post for the 48 Hour Film Project. If you have read here at all, you know I'm a big fan of this competition. I will be at the SaVille Theater at San Diego City College from around 1pm through 5pm this Saturday. Primarily I will be showing local 48 Hour teams how to produce HD masters that are ready for projection. If you are putting a team in this year -- and many already are (almost at capacity for teams now) -- and you can't come to this session, please read up on the new HD submission guidelines, and practice before the competition weekend. The seminars on filmmaking for this competition (and others) start at 10am; you can see them all for only $15 -- more info here.

Thursday, July 09, 2009

Sponsoring 48 Hour Film Project for San Diego 2009

For the last 5 years CineForm has put a team in this touring filmmaking competition. In the last two we have sponsored the event by providing HD playback and HD up-res services for those submissions that need it (everything is converted to CineForm AVI/MOVs for playback.) San Diego's 48 Hour is still the only city that allows HD submissions for 1080p projection. This year we are expanding: we are helping with Filmmaker Seminars in the weeks before, and we are putting two teams into the competition -- this is not part of the sponsorship; we pay like everyone else, and we don't even use the company name for these teams, never have -- always just for fun.

Come and join the craziness 48hourfilm.com/sandiego/

Wednesday, May 06, 2009

No Problem With 3-D

The title of this post is a comeback to Daniel Engber's article in Slate, "The Problem With 3-D", with the sub-title "It hurts your eyes. Always has, always will." As CineForm is entering the 3D post-production world, I was curious whether his claims are valid. I'm personally someone who doesn't find modern 3D film difficult to watch, and I very much prefer to look at recent films like Coraline in 3D over their 2D presentation, but I spend much of my day staring at images for quality analysis, so I'm not the best test subject.

Engber states that the visual fatigue that "plague[s] flight simulators, head-mounted virtual-reality displays, and many other applications of 3-D technology" is directly connected to 3D movie eye strain, and that "no one yet knows exactly what causes this." Engber then proposes a reasonable-sounding theory that our eyes want to refocus on objects that are not really closer or further away than the physical screen plane, a likely cause of strain. This seems like a logical explanation for those experiencing eye fatigue, and he offers no other. The article then goes on to suggest, "if 3-D becomes as widespread," the possible blindness (well, "permanently cross-eyed") of your children -- wow! Now, I was initially going to accept the earlier claim of convergence without refocus being a potential cause of eye strain for some, but now that my kids' eyesight is involved I had to dig deeper.

I dug, and I now believe he is wrong, at least for most correctly presented theatrical presentations. I'm also proposing a theory without rigorous test data (just like Engber), but focusing on the optical characteristics of the human eye. I wondered about hyperfocal distance: the particular range from x feet to infinity over which everything appears in focus. While a typical lens has a single point of focus, there is a range in which focus is still considered sharp; whenever depth of field is discussed, that is the same range of acceptable focus. From Wikipedia: "hyperfocal distance is a distance beyond which all objects can be brought into an 'acceptable' focus." If the screen is beyond the hyperfocal distance of the human eye, all 3D images behind the screen plane still appear in focus, and a certain amount in front will still be in focus, with some simple rules. With all images in the theatrical 3D space appearing in focus, it doesn't matter if your eyes do change their focus range, so Engber's claim does not hold up, and your children are safe.

Basically, the problem described in the article only happens in close-screen viewing conditions or with the extreme "coming at ya!" 3D that has been losing favor as 3D projection becomes more common. In a typical movie theater the viewing distances are such that the eye can do its natural convergence and refocusing without losing focus on the presented 3D world.

Now, to calculate the acceptable distances for 3D, we need to calculate the human eye's hyperfocal distance. With some online research I was able to determine the eye is approximately a 22mm lens system (seems about right), with a maximum f-stop of 2.4 (a darkened theater would do the trick.) There is a great article on The Photographic Eye from which I gathered the numbers I used (they agreed with many sources.) Now we can plug these numbers into a lens calculator and get a number for 35mm cameras -- a 22.3-foot hyperfocal distance, with a focus range of 11.1 feet to infinity. So if eyes were 35mm cameras, as long as a 3D object remains more than 11 feet away from us we can comfortably and safely view it and everything behind it into the 3D world. But of course our eyes are not 35mm cameras and are more complex to model; the heart of all this is the Circle of Confusion (CoC -- the amount of allowable blur.) So instead of guessing the camera system that models the human eye, let's calculate the acceptable blur for a typical theater viewing environment.

For our theater model, we have a nice 40-foot horizontal movie screen at a viewing range of one and a half screen distances, i.e. 60 feet away, using a common 2K projector (99% of all digital projection is either 2K (2048) or 1920-line HD.) The amount of allowable blur is related to the pixel size; as we don't see a lot of chunky pixels, the resolution is high enough that it fuses into a continuous image for the audience. So let's estimate that a half-pixel blur is OK and is still perceived as sharp. For the approximately 2000 pixels across a 40' screen, 0.5 pixels is 0.5/2000*40 = 0.01 feet, or a blur of around 1/10th of an inch. The viewing angle for that blur at 60' works out to 0.01 degrees. As the Circle of Confusion is calculated at 25cm, the 0.01 degrees results in a CoC of 0.04mm. Now using that CoC in our lens calculator we get these results: when viewing the screen 60' away, all objects from 13.1' to infinity will appear in focus. If an object jumps 75% off the screen and is perceived as 15' away, and you focus on it at 15', both it and the screen plane are still in focus, so there is no source of eye strain. We now have the safe/enjoyable range in which to present a 3D image. You might be thinking the allowable blur of 0.5 pixels was overly generous, and it was, in 3D's favor. Wikipedia and other sites place the average acceptable CoC at 0.2mm, yet the numbers above are five times sharper than that (so there is plenty of headroom for the average viewer.)
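For those who want to check the arithmetic, here is a small C sketch using the textbook hyperfocal and near-limit formulas with the assumptions above (22mm eye at f/2.4, 0.04mm CoC, viewer focused on a screen 60 feet away); it lands within a fraction of a foot of the 13.1' figure:

    #include <stdio.h>

    /* Hyperfocal H = f*f/(N*c) + f; near limit = s*(H-f)/(H+s-2f); far limit
       is infinity whenever the focus distance s exceeds H. */
    int main(void)
    {
        const double f = 22.0;          /* focal length, mm            */
        const double N = 2.4;           /* f-stop                      */
        const double c = 0.04;          /* circle of confusion, mm     */
        const double s = 60.0 * 304.8;  /* focus distance: 60 ft in mm */

        double H    = f * f / (N * c) + f;
        double near = s * (H - f) / (H + s - 2.0 * f);

        printf("Hyperfocal distance: %.1f ft\n", H / 304.8);    /* ~16.6 ft */
        printf("Near focus limit   : %.1f ft\n", near / 304.8); /* ~13 ft   */
        return 0;
    }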

This potentially points to the home screening environment having issues, with the screen being so much closer. Yet using the same human-eye lens modelling, a 3D depth range can be created such that the eye can focus at will without causing blur issues at the screen plane or introducing eye strain. Plus, as the average home environment is not as dark as the theater experience, we can use a different lens f-stop in our calculations. If our eye is more typically at f/4 for home viewing (totally guessing -- would love help here), then for a screen at a 12' distance, 3D images can be placed from 6' (halfway out of the screen) to infinity (still using the same very sharp 0.04mm CoC.) So there is a reformatting required between theatrical and home release, but this was already an accepted factor to adjust for the smaller home screen.

This is not to say that there aren't other factors that contribute to eye strain in today's 3D technologies, such as imperfect filtering causing left/right cross-talk and poor-quality glasses introducing other optical distortions. Yet the biggest factor in eye strain is more likely the inexperience of 3D filmmaking, such that there are good and bad presentations, which have nothing to do with the technology. The filmmaking and the tech can only improve, and there is no inherent cause of eyestrain in today's 3D presentation technology.

-------

Here is a fun aside. For those wanting to experience the 2D and 3D versions of a film without going to see the film twice (or closing one eye for prolonged periods of time), create yourself a pair of 2D glasses. These will also come in handy if you happen to see one of those bad 3D movies. Get two pairs of standard RealD glasses, pop out the left-eye lens from one and the right-eye lens from the other, and swap them. With a pair of scissors to cut the lenses down a bit, you can restore the lenses to the frames in the opposite positions (be careful not to flip the front and back sides of the lens), so that you have one set of glasses that is left-only and another that is right-only. At any time during a 3D film, put on your 2D glasses to experience that retro 2D world.

Wednesday, April 29, 2009

NAB Coverage of CineForm Neo 3D

We are now all back and working very hard after our huge NAB. We picked up several (all?) of the major show awards, winning best of show from both Videography Magazine's Vidy award (recognizing outstanding achievement in the art and science of video technology) and TV Technology's 2009 STAR award (Superior Technology Award Recipient.)

In addition to the NAB Filmmaking Central video in the last blog post, we've had great written coverage from Adam Wilt's NAB wrap-up and a long two-part interview with Fresh DV covering 3D from production and post through distribution.

Part 1: 22 minutes -- 3D concepts and production issues



Part 2: 12 minutes -- 3D post

Tuesday, April 21, 2009

At NAB talking up First Light and NEO 3D

While the camera is in way too much of a close-up on my head (thanks to Dave Basulto of Filmmaking Central), you can see some of what CineForm is showing at NAB this year in this video interview.

Tuesday, March 31, 2009

An Early Glimpse at First Light

When CineForm developed the first RAW video compression (yes, ages before those other guys), we developed a related feature called Active Metadata. You see, the problem with RAW imaging is the more RAW it is, the more boring it looks; it is frustrating to constantly explain to your film's investors, "yes, it is supposed to look flat, with low contrast and green." Active Metadata came to the rescue, allowing the cinematographer to specify how the image should be developed upon decode, while preserving the internal flat, low-contrast image for the most flexibility in downstream finishing. Users of the SI-2K have been loving this feature for years, as the camera has a lite version of Iridas SpeedGrade OnSet built in for cool color development controls; but at less than 1% of the whole market, and with Active Metadata supporting RAW sources only, this feature wasn't getting the attention it deserved.

A while back we added Active Metadata support for 4:2:2 and 4:4:4 CineForm encodes, but still many CineForm users didn't take advantage, as the controls were limited within Prospect 4K -- they were always intended to be replaced by a standalone tool: First Light. First Light is only weeks away, arriving in time for NAB -- coincidence? The press release went out today so you will all visit us at NAB, but it only talks about the renderless color workflow, which only scratches the surface of what First Light will be doing in the future; some of its future abilities will even be shown at NAB.

First Light will be available to all version 4.x users of Prospect HD/4K and Neo HD/4K on Mac and PC platforms. Version 4.x will start shipping before NAB.

Saturday, February 28, 2009

New Canon 5D Mk-II support with Neo Scene 1.1.2

Another Canon 5D Mk-II post -- you would think I owned one; I don't. Canon, can you help me solve that? It is the camera's behavior that has caught my interest, plus the large video post market it is generating.

In the previous two posts I determined that the Canon 5D is using the full YUV range, all 0 to 255 values. This can cause headaches throughout post, as cameras typically use a luma range of 16 to 235, so it requires the operator to know there is image data in the blacks and clipped highlights, and an NLE that can handle that information to restore it (many can't.) However, using the full range offers the best tonal range for limited 8-bit compression, and we don't want to lose that by crushing the range into the regular 16-235 as the CoreAVC decoder does with a user option (see the resulting spiky histogram.)

Neo Scene has a 10-bit solution. Reducing the 8-bit data from 255 levels to only 220 levels (16-235) always loses something, so we bump the data to 10-bit first, a 0 to 1023 range, then reduce that data to the 10-bit YUV standard range of 64-940, which has the precision to maintain all the source levels without introducing image banding. It is this 10-bit range-corrected data that we compress to CineForm from the Canon 5D Mk-II. By normalizing the Canon 5D data, the full dynamic range is now preserved in all editing and viewing environments.
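Here is a minimal C sketch of that normalization (a simple rounded linear map is assumed here; it is not the actual Neo Scene code):

    #include <stdio.h>

    /* Promote full-range 8-bit luma (0-255) into the 10-bit studio range
       64-940. Because 0-255 has 256 levels and 64-940 has 877, every source
       level lands on a distinct output code -- no flat spots, no banding. */
    static int full8_to_studio10(int y8)
    {
        return 64 + (y8 * (940 - 64) + 127) / 255;   /* rounded linear map */
    }

    int main(void)
    {
        for (int y = 0; y <= 255; y += 51)
            printf("8-bit %3d -> 10-bit %4d\n", y, full8_to_studio10(y));
        return 0;
    }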

We will be adding this feature to all our products over the next few weeks.

Wednesday, January 21, 2009

Correction: Canon 5D is fine, tools are wrong.

In my last post I discovered that different decoders produce different results on Canon 5D footage, some more favorable than others, but I was wrong in my conclusion that something was being misinterpreted in the Canon 5D bit-stream. The Canon 5D video bit-stream is fine; in fact it is a little better than that, as it is making the best use of its heavily compressed 8-bit MOV (of course, a less compressed 10-bit would have been nice.)

I concluded this by directly examining the luma outputs of two popular AVC/H.264 decoders: MainConcept and CoreAVC (both tested by snooping the YUV output via DirectShow.) Users of CoreAVC were finding the output data easy to color correct, seeing the full dynamic range, whereas many MainConcept users (standard within Adobe and Sony NLEs) were much less fortunate. The two histograms below explain this issue.

First, CoreAVC. The luma data ranged between the broadcast standards of 16 to 235; not a single value existed outside this range (which is odd for most YUV sources.) This is why CoreAVC users are having no big issues with dynamic range, even when using 8-bit RGB decodes, which map the 16-235 luma to 0-255 in RGB -- they get everything. However, the spikes in the histogram are a tell-tale sign that something else is wrong with this picture.

Compare this to the MainConcept decoder's output (the same output as QT and other common AVC decoders): the completely smooth histogram over the full range from 0 to 255 shows that CoreAVC is in fact post-compressing the data into the broadcast 16-235 range, and that is not how the Canon 5D compressed the image. This range compression reduces the tonal range, and remapping 8-bit values into a smaller 8-bit range can introduce banding. Think of a sky gradient with values 10,11,12,13,14,15 being compressed to 10,11,12,(12),13,14 -- the flat spot can add to the visible contouring to which 8-bit signals are already prone.
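To see the flat-spot effect, here is a tiny C sketch assuming a plain rounded linear squeeze from 0-255 into 16-235 (the decoder's exact math may differ):

    #include <stdio.h>

    /* Squeezing 8-bit full range into 16-235 forces some neighbouring source
       codes onto the same output code -- the flat spots described above. */
    static int full_to_broadcast(int y)
    {
        return 16 + (y * 219 + 127) / 255;   /* 0-255 -> 16-235, rounded */
    }

    int main(void)
    {
        for (int y = 10; y <= 15; y++)
            printf("%d -> %d\n", y, full_to_broadcast(y));
        /* Output starts 10 -> 25, 11 -> 25: two source levels collapse into one. */
        return 0;
    }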

So if the default decoders are correct, but dynamic range is truncated in the NLE, what is the solution? Not using 8-bit RGB tools and codecs helps. The problem only arises when remapping the 8-bit YUV data into 8-bit limited computer-graphics RGB space (Studio RGB would be fine *.) If your application and intermediate codec can decode to YUV, you are in fine shape; the problem is that none of the common NLEs do this with native AVC/H.264 decoding. Intermediate codecs like ProRes and CineForm support full-range YUV (float) precision within FCP and Premiere Pro, so those are good routes to go. So you need to convert from the Canon MOV outside the NLE, using tools that do all processing in YUV (so the image remains unclipped.) Stu Maschwitz has posted that Apple Color will support the full range, and I can happily report that all CineForm products are handling this correctly (even for ProRes output if you are on a Mac with NEO HD/4K.) But even a ProRes file will clip if 8-bit RGB tools are used.

CineForm has a cool solution for this using NEO HD/4K for the Mac and Prospect HD/4K on the PC, even if you have to use 8-bit RGB clipping tools like Compressor, QT Player or VirtualDub, etc. The full-range 0-255 YUV is preserved in the CineForm MOV or AVI, but using Active Metadata you can choose to decode to the smaller range when needed. As this is all happening with high-precision math in the decoder, we can safely map the full-range YUV to full-range RGB without truncation or introducing banding. Normally we use Active Metadata for attaching a look to the decoded stream, but it can also be used to reformat the data to fit the needs of your tools.
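As a grayscale-only illustration of why the full-range path avoids clipping (this is a sketch, not the CineForm decoder's math):

    #include <stdio.h>

    /* Studio-range conversion assumes luma 16-235 and clips anything outside
       it; a full-range conversion keeps all 0-255 values intact. Chroma is
       ignored here for simplicity. */
    static int studio_to_rgb(int y)             /* assumes 16-235 luma */
    {
        int r = (int)((y - 16) * 255.0 / 219.0 + 0.5);
        return r < 0 ? 0 : r > 255 ? 255 : r;   /* shadows/highlights clip */
    }

    static int fullrange_to_rgb(int y)          /* assumes 0-255 luma */
    {
        return y;                               /* nothing to clip or rescale */
    }

    int main(void)
    {
        const int samples[] = { 5, 16, 128, 235, 250 };
        for (int i = 0; i < 5; i++)
            printf("Y=%3d  studio->%3d  full->%3d\n",
                   samples[i], studio_to_rgb(samples[i]), fullrange_to_rgb(samples[i]));
        return 0;
    }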

The first RGB parade is the untouched CineForm encode from the MainConcept AVC decoder output; it appears that the shadows and highlights have been crushed. While I could place a 32-bit Levels filter on the timeline to correct the CineForm-encoded file, correcting it within Active Metadata allows this clip to also be corrected in 8-bit tools.
The second RGB parade is corrected through Active Metadata alone. I simply bumped up the black level and reduced the gain inside the Active Metadata control panel (Mac or PC) to reveal all the missing data. Active Metadata uses the clip's GUID (Globally Unique IDentifier) to set new decoding rules for any application using that clip on the computer system. Expect more problems to be solved using Active Metadata with our next major software release. So much cool stuff coming -- we need more software engineers.

* - At the time of this post, Apple released a patch to QuickTime (7.6) that addresses some of this. Now under QT 7.6 the 5D YUV data is decoded using Studio RGB, which places luma 0 (not 16) at RGB 0,0,0, allowing RGB to have the same gamut as the YUV source. I imagine the 5D flags the data to do this and various AVC decoders are either ignoring or misusing the flag.

----

01/22/2009: It turns out the CoreAVC control has an auto mode that tries to guess the correct decode behavior (16-235 vs 0-255). Turning this off makes it behave just like all the others. So all these decoders work if you manage the outputs correctly.

Monday, January 19, 2009

Full dynamic range video from Canon 5D MkII.

I have been seeing many posts about how crushed the black levels and highlights are with the Canon 5D video mode. Stu Maschwitz has a great post on how to tweak the camera to improve the situation, but also points out that the default presentation in FCP is wrong too. It turns out that this presentation fault is in all popular editing packages on Mac and PC. They could possibly share the same AVC decoder, but I have a different theory below.

Using a 5D MOV video originally posted on DPReview.com, here you can see the clipping that occurs within tools like Vegas, Premiere Pro and FCP when directly opening the MOV files. This zoomed-in section shows how much of the tree detail is crushed into the blacks, compared with the same sequence converted to an AVI using HDLink with CineForm NEO HD.

After some image enhancement, the black truncation becomes more apparent. These images were manipulated with a 32-bit levels filter (so the filter itself is not truncating the data), so we can see how much data is being lost -- a failure to see "the trees from the forest" type of problem.

One common reason for this type of error is that the camera's H.264 I-frame compression uses 4:2:0 YUV, where display black is 16 and white is 235 out of a 0-255 luma range. If the NLE took this YUV data directly, everything would be fine, but instead it likely goes through a series of YUV-to-RGB transforms that truncate the data. Most NLEs extracting RGB from YUV sources map 16-235 YUV to 0-255 RGB, truncating data in the supers, yet it turns out the Canon 5D doesn't even use much of the super range (blacks between 0-15 and whites up to 255), as this non-truncated luma histogram shows. It seems that the 5D is limiting its output to broadcast standards (yet another thing the 5D does wrong for video heading for post.) Most video cameras do use the extra range; they assume the broadcast limiting will be handled in post (as it should be.) So the NLE's mapping of 16-235 to 0-255 RGB is actually not the issue; somehow the H.264/AVC decoder used is mapping something like a 30-220 range into the NLE space. I have not seen such a big error before, which makes me think there is an incorrect flag in the 5D bit-stream that is causing the use of the wrong output math.

Luckily, within CineForm NEO HD we use direct YUV-to-YUV conversion wherever possible; this eliminates many truncation errors and preserves highlights for multiple camera sources. In the current version of NEO/Prospect HD (v3.4) we don't ship an AVC decoder, so NEO HD searches for registered decoders already installed, resulting in some users reporting this error and others not. So the failure is AVC-decoder dependent, although this may not be a bug in the decoders, as they all handle regular AVCHD sources without issue (again leading me to believe there is something wrong/different in the 5D bit-stream.) The one AVC decoder that produces the full YUV range is CoreAVC. We have been recommending it to AVCHD users as it is both fast and inexpensive (only $15), but it seems it might have extra value for those trying to get the most out of their Canon 5Ds.

In our next major release of NEO HD (and up) we intend to directly support the Canon 5D; in the meantime, for those using CineForm products, I recommend you get CoreAVC and the latest version of NEO or Prospect, which now favors CoreAVC over any other AVC decoder it might find. This version is currently in public beta and will likely become official in a day or so.

------
1/21/09
Update 1: The new software is now released and available for upgrade or trial directly from cineform.com.

Update 2: I have discovered more on AVC decoder issues for the Canon 5D in the next post.

Tuesday, January 13, 2009

CineForm's Winning Streak

Not only did Slumdog Millionaire clean up at the Golden Globes this year -- Slumdog being a mostly CineForm RAW acquired feature shot with the Silicon Imaging SI-2K Mini -- one of CineForm's engineers just won an Audi RS4 (Audi Record the Rush Challenge) with a video he self-produced and posted in CineForm, also shot with an SI-2K and a Sony V1U.  Looking to be a good 2009 for those at CineForm.