Wednesday, December 24, 2008

Mastering 24p DVDs from HD using Premiere Pro.

24p DVD mastering.

First some basics: there is no true 24p mode on standard DVD, there is only 60i encoding (and 50i for PAL.)  So all those film sources are encoded to DVD by adding 3:2 pulldown (or 2:2 for PAL, with a 4% speed-up for 24p sources.)  How this pulldown is added impacts how well your DVD presents on today's common progressive displays.

Adding 3:2 pulldown for NTSC DVD creation has been tricky, and seemingly there are lots of dead ends (like using Encore to encode directly to 24p -- this should work, yet I have made several blurry DVDs trying to use it.)  This is one of the most common support questions.  McCarthyTech has a good blog post on using AE to manually insert pulldown before encoding as 60i, and this will work every time.  It was his post that prompted this one, as there is an even simpler way using Premiere CS3+ directly, and if you are careful it can be even better for final presentation.  McCarthyTech's technique can be improved if we trust the MPEG2 encoder to add the pulldown using repeat flags.  Fortunately this works correctly using the 23.976p encode mode within Premiere's Adobe Media Encoder (MPEG2-DVD preset); now we just have to watch out for other Premiere limitations.

The advantage of pulldown that uses MPEG2 repeat flags is quality: only the fields used to construct the 24p signal are compressed, and the repeat flags pad the data out to 60i. These flags also help progressive scan DVD players reconstruct the correct 24p signal more reliably. The manually created pulldown in AE works for most situations, as many DVD players can use the field pattern to guess the pulldown, but it is not always extracted correctly (seen as a weave pattern during motion.)  Of course there is no issue for non-progressive outputs, where the display is responsible for pulldown detection (if needed.)
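For anyone new to the cadence, here is a minimal sketch (my own Python illustration, not CineForm or encoder code) of the difference between hard pulldown, which writes out all ten fields, and soft pulldown, which stores only four coded frames plus repeat flags. The alternating flag pattern is a simplified assumption of the real MPEG2 repeat_first_field cadence:

```python
# Sketch: how four 23.976p frames become ten 59.94i fields (hard 2:3
# pulldown), versus storing each frame once and letting repeat flags
# pad the stream out to 60i (soft pulldown).

def hard_pulldown(frames):
    """Classic 2:3 pulldown: emit (frame, field) pairs, 10 fields per 4 frames."""
    cadence = [2, 3, 2, 3]  # fields emitted per source frame, repeating
    fields = []
    for i, frame in enumerate(frames):
        for f in range(cadence[i % 4]):
            # alternate top/bottom; a 3rd field repeats the first field's parity
            fields.append((frame, "top" if f % 2 == 0 else "bottom"))
    return fields

def soft_pulldown_flags(num_frames):
    """Soft pulldown: code each frame once, set repeat_first_field on every
    other frame so the player reconstructs the 2:3 cadence itself."""
    return [{"frame": i, "repeat_first_field": i % 2 == 1}
            for i in range(num_frames)]

fields = hard_pulldown(["A", "B", "C", "D"])
print(len(fields))              # 10 fields from 4 frames: 24p -> 60i
print(soft_pulldown_flags(4))   # only 4 coded frames; flags pad to 60i
```

Either path produces a valid 60i stream; the soft version simply never compresses the duplicated fields, which is where the quality advantage comes from.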

The way I produced several 24p DVDs in recent weeks is to export out of Premiere as a 1920×1080 24p (23.976) master CineForm AVI, then use VirtualDub to scale with a Lanczos 3 filter to 720×480 and export out to CineForm 444 SD.  Load the SD clip back into a Premiere SD 24p preset, interpret the footage back to 16×9, and export with Adobe Media Encoder to MPEG2-DVD 24p. Encore will take this file without further transcoding.  Now that seems an odd path, and it is -- plus the VirtualDub step is unnecessary -- but the export to a 1920x1080 master first is a very important step.

I first made the mistake of exporting my 1080p timeline directly to MPEG2-DVD, and it looked horrible.  I've made this mistake before, as you simply expect it to work (it never has, in Premiere up to the tested PPro v3.2), but here is why it doesn't always do what you want.  When you add any spatially distorting filter (motion, blurs and sharpens, etc.), you see the results previewed at 1080p; any scaling for your preview display is applied after the filter operations -- so you adjust your filters so they look correct on a 1080p source.   When you use the Adobe Media Encoder, the scaling is applied first, before any of your filters -- as a result your output doesn't look like what you previewed -- spatial filters are around 2.5 times stronger than you intended.   In one of my recent projects, I was using the additional resolution of the 1080p source to reframe for a nicer DVD output; see below how much the order of operations for scaling matters.

The image on the left is soft and badly aliased, and it looks far worse in motion.  Simply exporting the timeline to 1080p first, then using that new file to export to DVD, solves the problem without ever leaving Premiere.  The VirtualDub step in my above technique can simply be skipped, as loading the exported 1080p AVI into Premiere will use the CineForm importer's own Lanczos 3 scaler for exactly the same results, much faster and more convenient.
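The over-filtering is easy to show with a little arithmetic. This sketch (my own illustration, not Premiere's internals) treats a blur radius as fixed in pixels and compares its apparent strength when the filter runs before versus after the scale to 480 lines:

```python
# Sketch: why a spatial filter tuned while previewing at 1080p looks much
# stronger when the encoder scales to SD *before* running the filter.
# A blur radius is fixed in pixels, so its apparent strength depends on
# which raster it runs against.

def apparent_radius(filter_radius_px, filter_raster_h, output_h):
    """Blur radius as seen in the output raster, in output pixels."""
    return filter_radius_px * output_h / filter_raster_h

r = 5  # a 5-pixel blur, adjusted to taste while previewing at 1080p
previewed = apparent_radius(r, filter_raster_h=1080, output_h=480)  # filter at 1080, then scale
delivered = apparent_radius(r, filter_raster_h=480,  output_h=480)  # scale first, then filter

print(previewed)              # ~2.2 output pixels -- what you saw in preview
print(delivered)              # 5.0 output pixels -- what the encoder delivers
print(delivered / previewed)  # 2.25x stronger than intended
```

That 2.25x factor (1080 divided by 480) is the roughly 2.5x over-filtering described above; exporting the 1080p master first keeps the filter and the scale in the right order.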

Friday, December 12, 2008

What, no more Aspect HD?

After 5 years we are now retiring Aspect HD, and I say Yah!!!! David Taylor, CineForm's CEO, has posted as to why, but I like this change as we no longer need to remove key features from Prospect HD to make Aspect HD (that is how it is built.) This will streamline development and testing, plus slicing off all the really good stuff always bothered me. We will still have entry level products in the NEO line (more news to come,) but today's customers are increasingly purchasing their first CineForm product at the Prospect HD level (some going directly to P4K, as Red users really need a workflow.) Recently Aspect was falling into no man's land, too pricey for the hobbyist and too feature-limited for many professionals. The Prospect product line now starts at $749, lowering the entry point for professionals, and existing Aspect users can jump products for $199 (only around $50 more than a typical upgrade between AHD versions.) This upgrade will also include the upcoming CS4 version of PHD.

Aspect and Prospect: a Short History.

5+ years ago when Aspect HD was first released it retailed for $1199, and it was the only professional HDV product of its day for post. The only HDV camera at the time was the barely-HD single-chip 720p JVC HD10U, so Aspect HD took in only one camera format, 720p HDV, and converted it into an 8-bit AVI for much easier post within Premiere 6.5 (wow, we have come so far since then.) Aspect HD was upgraded to Premiere Pro 1.0, and we soon added 1080i support as the Sony FX1/Z1U was coming (we have a bread-boarded prototype of the Z1 in a box at the office -- somewhere.) Then HDV exploded, with many great cameras, and CineForm implemented the first HDV support for Adobe in Premiere 1.5.1; Aspect HD prices fell and the sales volumes grew. Around the same time a Prospect HD beta version was used in the post of Dust to Glory, the first film-sourced (and HDCAM, DV, etc.) theatrical release ever to have had a compressed online DI workflow. Support for more and more cameras was added: XDCAM-HD, P2, AVCHD, etc. While Aspect was growing beyond its HDV roots (bumping up against its 1440x1080 8-bit limits,) Prospect was turning into the Swiss Army knife for all input types, with the addition of DPX sequences, XDCAM-EX, Grass Valley Infinity, SI-2K, Red, Dalsa, and Phantom, all with HDSDI I/O. Over the years of Prospect, feature additions have included 10-bit, 12-bit, 4:4:4, alpha channels, RAW compression, Active Metadata and much more.  So today the Prospect line fits the role for today's indie professional: with so many input formats and many possible output types, it is nice to have an intermediate that handles your resolution and bit-depth needs in real-time, without breaking the budget, your disk system, or the PC you are trying to run on. Aspect HD was being left behind when compared to Prospect, so I hope many Aspect users will make the jump to Prospect, as it is a big step up.

Bits of Fun

Here are a couple of videos internally created by CineForm engineers (plus friends) just for fun. The first is this year's entry to the 48 Hour Film Project. We drew the tough genre of "Holiday Film", so it is now seasonally appropriate (we created this one back in August.)

Competition requirements: a "Holiday Film" to be written, shot, edited and delivered within 48 hours with these elements:
1) Character: Joe or Josie Beeble -- Construction Worker
2) Prop: Tweezers
3) Line of Dialog: Hey, have you heard the news?

The second is currently a competition finalist, an entry to help one of our engineers win an Audi S4. The entry is called Visceral Thirst, so please vote for Tim (he said I can have a drive if he wins.)

Both films were shot using an SI-2K Mini, although Tim used some Sony V1U footage (all the beautiful car shots in Visceral Thirst are SI-2K,) and both were finished in tight time-frames using Prospect 4K and Active Metadata.

Wednesday, November 19, 2008

Intel Core i7 and CineForm

Wow! That was my immediate reaction when I first ran a CineForm decoder performance test on the new Core i7 processor. I've had access to these new Intel processors for a while now and I knew they were fast; I just didn't know how fast. The system we were very honored to have early access to was an Intel Core i7-965 Extreme Edition system (Nehalem architecture) running a 3.2GHz quad core. When we first booted the system, we saw 8 CPUs within Task Manager, even though this is a quad core. These new chips have re-introduced the concept of Hyper-Threading; each core can be set up as 2 virtual cores -- this means we will likely see 16 virtual CPUs in upcoming dual-quad workstations. Nice! With so many virtual CPUs to run on, I knew we had to upgrade our decoder for better n-way threading (which the encoder already had.) This is the work I was most involved with over the last month; it resulted in a 50% boost in frame rate over our already fast decoder on Core 2 architecture dual-quads. Now it is time to test the Core i7-965.

In these tests I compared my beloved HP xw8600 workstation, a 3.16GHz 8-core with 4GB RAM running XP Pro, with a gaming-configured 4-core Core i7 desktop running Vista 64 with 3GB RAM. No operations used GPU assistance.

Running with only half the number of cores, this new processor nearly doubles the average performance of CineForm HD and 2K in 4:2:2, 4:4:4 and RAW formats, and even approaches real-time full resolution playback of 4K (a workstation-class Core i7 will be playing back 4K without issue from CineForm RAW or 4:4:4 encoded sources.) All this frame rate overhead greatly eases multiple stream processing and allows for huge efficiency increases in batch processing of mezzanine and image archives. It also allows for much more Active Metadata (AM) processing through CDL-style color databases, 3D LUT film looks and other yet-to-be-announced AM features.

Intel and/or HP, when can I get my hands on an i7 dual Xeon? Please.

Where the i7 didn't scale as well was with high quality 4K RAW demosaicing filters. Both 4K R3D decodes and the high quality debayer modes in CineForm RAW produce minimal speed-ups from the 8-core Core 2 to the 4-core i7 (still amazing considering the reduction in cores.) Looking at our own code, the demosaic has not used much of the SIMD (media) instruction set, nor is it particularly memory I/O limited; it is just lots of operations per pixel. It seems we do have room for more performance optimization in the demosaic.

All the transcodes were performed using the CineForm R2CF utility that comes with our NEO 4K and Prospect 4K products. R2CF has a very efficient implementation of the R3DSDK, allowing for close to 100% CPU utilization. I have included the R3D to CineForm transcode times, as REDCODE is known to be a particularly compute-heavy format. These times also include time for a CineForm encode, but this only affects the FPS numbers by around 10%, as our encoder is very fast (up to ten times faster than an R3D decode plus additional processing [adding curves and color space controls.]) I'm showing the combined numbers as that is the CineForm workflow for R3D: you do an R3D decode once to convert to CineForm, then work with (decoding multiple times) CineForm files for the extra speed and flexibility.
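To see why a fast encoder barely moves the combined number, note that per-frame times simply add. A quick sketch with a made-up decode rate (the 10 fps figure is hypothetical, chosen only to show the ratio):

```python
# Sketch: combined decode+encode throughput when the encoder is ~10x
# faster than the decode. Per-frame times add, so:
#   combined fps = 1 / (t_decode + t_encode)

def combined_fps(decode_fps, encode_fps):
    return 1.0 / (1.0 / decode_fps + 1.0 / encode_fps)

decode_only = 10.0             # hypothetical R3D decode rate, fps
encode = 10 * decode_only      # encoder roughly 10x faster than the decode
print(combined_fps(decode_only, encode))  # ~9.09 fps, about 9% below decode-only
```

So an encoder ten times faster than the decode costs roughly 10% of the measured FPS, which is why quoting the combined number is a fair picture of the real workflow.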

I expect another factor in the widening margins for CineForm decode performance on the Core i7 is that we avoid arithmetic coding, which is tricky and compute-intensive for CPUs (and nearly impossible for GPUs to do efficiently -- we are asked about GPU acceleration often.) The CineForm codec was always designed for speed on Intel processors, where faster memory and faster media instructions almost directly translate to improved frame rates, as we have a compute-lite entropy coding engine. While arithmetic coding would increase bit-efficiency maybe 5-10%, the performance gains of 4-6X by not using it made it an easy choice when we started this codec work 7 years ago (on 1.7GHz P4s using MMX -- the fastest machines we could get could only do NTSC/PAL SD in real-time.) Now someone needs to suggest a fun use for Express files running at 450 frames per second.

Tuesday, November 18, 2008

2K is still compelling

On my drive into the office I'm often listening to media-related podcasts like TWIM, Red Centre, and Filmspotting. On today's drive I was catching up on Filmspotting episode #235, which opens with a nice review of Slumdog Millionaire, the latest feature from Danny Boyle (Trainspotting, 28 Days Later, Sunshine, etc.) Now I knew this was a Silicon Imaging SI-2K project, shot as CineForm RAW, but other than that I've been too busy to learn anything more about it. Filmspotting is a film review show, not a techno-geek-out fest like Red Centre or elements of TWIM, so you never hear them talk about cameras; but after raves for the story, acting, and narrative structure, there were high praises for the cinematography, amazement that it was digitally acquired, and for the types of shots they were able to get in India -- commenting that filming in those streets means taking your life into your own hands, and they actually did. Clearly this points to the huge advantage of a 2K camera with a real 11 stops of dynamic range at the size of a deck of tarot cards. Yet this is "only" a super-16mm equivalent sensor, proving again that making a film has very little to do with megapixels and sensor size. Now I'll have to see the film, as it is getting a 92% freshness rating.

Thursday, November 13, 2008

My Take on Red's Announcements

Very cool. The higher the bit-rates and the higher the resolution goes, the greater the need for a high performance compressed digital intermediate for post, mezzanine and long term archive, i.e. CineForm. And for anyone wishing to produce their own compressed RAW video cameras, Red and SI are not going to be in that business alone; CineForm RAW is ready when you are.

Anyone notice how the whole Red "Brain" line-up looks like chunkier versions of Silicon Imaging's dockable SI-2K Mini? Red continues to confirm SI's vision. :)

Friday, November 07, 2008

Even More Decode Speed

For the last 2-3 weeks I found my time consumed with more decoder optimization. While the core of the CineForm codec has now been around for nearly 7 years, its enhancement has never stopped, whether we are adding new pixel formats, Active Metadata, improving quality or striving for more performance. Working on the codec core is more rewarding, as an engineering success is not dependent on the eccentricities of third-party applications like Premiere or FCP, which like to get in the way.

The decoding engine has been threaded for 8 cores for some time, but it was only efficiently using about 3-4 cores. This inefficiency was not an issue for real-time playback as the codec was already very fast, faster than necessary for real-time multi-stream playback (even on dual core systems.) Each decoder during a transition or layered effect would happily use much of the available CPU. Better codec threading was needed for a new market that CineForm is finding itself in: file-based film and television archives, and mezzanine storage for HD distribution. These markets have been limited by the real-time nature of tape formats like D5 and HDCAM SR. If you are going to switch to file-based storage, there is no point in limiting yourself to 1:1 real-time; you want faster than real-time for batch processing and file format conversions wherever possible. This is one reason CineForm is displacing JPEG2000 for archives: JPEG2000 is just too slow for batch processing in software (typically much slower than real-time 1:1, i.e. slower than tape.)

While the current public beta is more efficiently threaded for up to 8 cores, up to a 50% decoder speed-up for some sources, the in-house decoder (out soon) will support up to 32 cores, ready for those new Intel-powered workstations with 6 and 8 cores per physical part coming very shortly.

Some performance numbers from my stock HP xw8600 8-core 3GHz workstation:
444 1080p 12-bit per channel StEM footage -- 64fps.
444 1080p 12-bit per channel Stereo (3D) -- 43fps per eye (86fps total throughput.)
RAW 4K 12-bit per channel with demosaic (no GPU acceleration) -- 22fps.
RAW 4K 12-bit per channel decoding at 2K (no GPU acceleration) -- 59fps.

All testing used Build 186 of Prospect 4K beta.

Tuesday, October 21, 2008


With all these new high resolution video sources out there, it is great to know you can pan into a 2K, 3K or 4K frame and still deliver sharp results at your 720p/1080i output.  However, this oversampling advantage can be easily lost with incorrect use or setup within your NLE. We often see user setups that run the larger source media on a timeline that matches the target resolution, e.g. 4K source in a 1080i/p timeline.   This seems to be an obvious configuration, but I'm going to point out why this is the wrong way to set up your project, particularly if you want any oversampling advantage when re-cropping a larger frame.  Using Premiere Pro as an example, dragging 4K media into a 1080/2K timeline defaults to center-cropping the image. This is an annoying default, as you likely don't want any cropping for most clips, other than a few scenes that might need a little re-framing, so the 50% center crop default is a nuisance.  There are two solutions to address this center crop:
1) Go through all your clips on the timeline, adding a motion filter to re-frame back to the entire image (particularly tricky to do fast with mixed resolution sources.)
2) Use the neat feature, "scale to frame size", which can be set up as the default for these types of projects.

If you pick 2), as it is certainly the easiest, you have lost all your oversampling advantages.  Really!  This scale happens BEFORE any of your timeline-based filters see the data.  So a 4K image on a 1080 timeline becomes only a 1080 source when you zoom in; push in more than 20% and the image will get soft.  There is nothing wrong with 4K on a 1080p timeline in this mode, as long as you remember to turn that feature off when doing any re-cropping; it is just too easy to forget.

If you go with solution 1), you still have your oversampling benefits for re-framing.  But now you have to be careful, as the NLE is doing the resize, not the importer, so you have to make sure all your scaling filters are deeper than 8-bit, otherwise you will have lost precision.  Adding a motion filter would seem best, as the Premiere motion filter does support 32-bit processing -- within Prospect HD/4K you can turn on the feature that displays the depth of the filter stack (handy to confirm you are maintaining quality.)  This additional scaling step can make things slower during your edit.
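Here is a tiny illustration (mine, not the Premiere pipeline) of the precision argument: squeeze two nearby code values through an 8-bit step between filters and they collapse into one value, while a deeper float path keeps them distinct for later grading:

```python
# Sketch: why the scaling/motion filter stack should stay deeper than
# 8 bits. Quantizing to 8-bit between steps discards fine tonal detail
# that a 32-bit float path preserves.

def to_8bit(x):
    """Round-trip a normalized value through an 8-bit integer pipe."""
    return round(x * 255) / 255

a, b = 0.5010, 0.5020              # two nearby, distinct code values
print(to_8bit(a) == to_8bit(b))    # True: both snap to the same 8-bit code
print(a == b)                      # False: the float path kept them apart
```

Once two values have merged in an 8-bit stage, no later 32-bit filter can separate them again, which is exactly why the depth-of-stack display is worth checking.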

The better solution is to do your edit at your largest source resolution, i.e. editing 4K at 4K and 2K at 2K.  Set "scale to frame size" on so any lower resolution media (e.g. over-cranked sequences) is scaled to fit (no issues with the upscaling.) There are no limitations to doing this, particularly as Prospect 4K already dynamically decodes larger source data to half or quarter resolution as needed for editing speed.  There are many advantages, such as freeing the full range of output options for frame size and frame aspect to be rendered at maximum quality.  4K within a default scale-to-frame 1080 project will have slightly poorer 720p exports, as the scales would be 4K to 1080 and then to 720, rather than oversampling the 4K and scaling directly to 720.   I've received some feedback suggesting some feel that zooming into a 4K frame in a 4K timeline would result in softer images than the same source in a 2K timeline; they are missing the fact that oversampling happens upon export to whatever the target resolution may be, not how it previews at 100% pixel view on the timeline.  When pushing into a 4K frame within a 4K project, you can check your sharpness at 1080/2K by setting the program window view to 50%, or look at the 1080p feed over HDSDI; both will allow you to see if you are pushing in too far and preview the quality of the final output.
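The available push-in headroom is simple to compute: divide the source width by the delivery width. This sketch uses example numbers (a 4096-wide 4K source is an assumption; substitute your camera's actual frame width):

```python
# Sketch: how far you can push into an oversampled frame before the crop
# falls below the delivery resolution and the output goes soft.

def max_push_in(source_w, target_w):
    """Largest zoom factor that still leaves >= 1 source pixel per output pixel."""
    return source_w / target_w

print(max_push_in(4096, 1920))  # ~2.13x headroom for a 4K source at 1080p
print(max_push_in(4096, 1280))  # 3.2x headroom for the same source at 720p
print(max_push_in(1920, 1920))  # 1.0 -- "scale to frame size" leaves none
```

This is also the arithmetic behind the 720p note above: scaling 4K straight to 720 keeps all 3.2x of oversampling in one resampling step, instead of throwing most of it away at an intermediate 1080 stage.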

While instinct tells us for most projects a 1080p export will be plenty of resolution, that doesn't automatically extend to being the ideal timeline format, particularly for oversampled re-framing.

Sunday, October 12, 2008

Playing with encoding curves.

As a gift for my 40th birthday, my wife got me the awesome 2nd gen. iPod Touch. I've been playing with a Graphing Calculator app, experimenting with various video encoding curves.


Red - Linear
Blue - Gamma 2.2
Green - RedLog (Silicon Imaging Log-90 is very close)
Yellow - RedSpace

Todo -- Add PanaLog and Viper Filmstream
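These curves are easy to reproduce without a graphing app. The sketch below uses my own guesses at the formulas -- the "log-90" shape is a generic log(1+90x) normalization, not Silicon Imaging's or Red's published definition -- so treat it as illustrative only:

```python
# Sketch of the curve families being compared: linear, gamma 2.2, and a
# generic log-style curve. The log formula is an assumption, not a
# vendor specification.

import math

def linear(x):
    return x

def gamma_2_2(x):
    return x ** (1 / 2.2)

def log90(x, base=90.0):
    """Hypothetical 'Log-90' style curve: log(1 + base*x), normalized to [0,1]."""
    return math.log(1 + x * base) / math.log(1 + base)

for x in (0.0, 0.18, 1.0):   # black, mid-grey, white
    print(f"{x:.2f}  lin={linear(x):.3f}  g2.2={gamma_2_2(x):.3f}  log={log90(x):.3f}")
```

The interesting comparison is at mid-grey (0.18), where the gamma and log curves lift the value well above linear, which is exactly the shadow-detail allocation these encoding curves exist for.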

Friday, October 03, 2008

New betas for Prospect/NEO 4K

It is Friday and time for another beta release. Version 3.4.3 contains some cool upgrades and fixes, particularly for Premiere support, but there are base-level codec changes for NEO as well.

In Premiere we have renamed all the appropriate filters to have 32-bit in their names (Prospect HD/4K only.) It's handy to know, but it is also great for selecting filters: just put "32" in the search field and up pop all of the CineForm deep-pixel filters.

One of the long-missing 32-bit filters is a Levels plug-in -- finally! The Premiere levels control is only 8-bit, limiting its usefulness, and while the Premiere Fast Color Corrector includes 32-bit levels, it is pretty slow. This new filter is fast and simple to use and plays back in real-time.

As computers are getting faster, scrubbing will now remain in 32-bit mode unless you turn that feature off. Previously we switched between 32-bit and 8-bit to get a small improvement in scrubbing speed, but it made high dynamic range work more painful. I think you will prefer it this way.

YC scopes now work correctly. Previously they clipped at 100% even if the data did not clip. There was also a related bug in the last few builds that clipped 107 IRE from YUV sources; that is now fixed.

Several minor bug fixes.

Of course let us know if there are any introduced bugs, but if you are not in the middle of a big project, this is a beta worth trying out.

Download Links for v3.4.3: Prospect 4K and NEO 4K (only active while this beta version is the latest.)

PHD/NHD is on the build machine.

P.S. It is a weekend and I'm in LA for the RED Los Angeles User Group, where I will be presenting using this new version of Prospect 4K. Please use my twitter page to send me any reports. Follow me at

See me live at the RED LA User Group

I will be presenting the CineForm workflow for RED sources at Kappa Studios for the
RED Los Angeles User Group. Saturday October 4th, 9:00 - 12:00 noon. If you are interested in attending, please RSVP. CineForm is offering a door prize of one Prospect 4K or NEO 4K seat (winner's choice.)

Monday, August 18, 2008

My 48 Hour Weekend

The weekend just past featured the screening for the San Diego division of the 48 Hour Film Project, a film festival for extreme film-making (41 films shown all Saturday.) Like last year, CineForm sponsored the event by providing HD projection and up-conversion services for all the submitted films. 90% of the HD teams even submitted CineForm AVIs or MOVs, making our job so much easier -- a thank you goes to all those teams. Also huge thanks goes to Wafian for loaning us an HR-F1 as a mobile playback server. While this unit is normally for on-set or field recording, it has a really neat new feature (in beta) to simplify festival projection. I had wireless access to a play-list, so I could fully control the Wafian's playback while sitting in the auditorium with all the paying customers. I used my laptop to start and stop the projection, jump to particular films, seek to start-up frames while the presenters spoke, and select entire playback lists for automated projection. The same can be done using any WiFi hand-held like an iPhone. Have you ever wished for a remote control that worked in a full movie theater? That's the power I had, and it was great fun. Looking forward to next year's innovations.

Sunday, August 03, 2008

Updates and Random Stuff August '08

Things continue to happen, but clearly not much on this blog. We have several high-end projects and partnerships pending, but no news I can break yet. So often I post on technology issues, compression comparisons, pixel formats, etc., but now with the base technology so mature, it is the business relationships that are coming to the forefront (and I can't talk too much about those.)

Not that there haven't been large additions to the code base in the last two months, and there is plenty of next generation stuff in the works. The feature we have coined as Active Metadata has been expanded greatly, from a tool only SI-2K cameras could use to supporting any source that can be encoded to CineForm. It is so crazy cool, potentially turning large sections of the workflow on its head (once again;) now to get the market to really understand its power, and also to deal with the development feature creep this so easily generates. We need to create several how-to videos to help easily explain all the new stuff -- ironic that a video company hasn't produced any online videos in the past.

Well, we do produce videos: once per year we enter the San Diego division of the 48 Hour Film Project with a team of engineers and company friends. This is happening next weekend, so expect another post on the resulting short film in a few weeks. Like last year, CineForm with Wafian is doing the HD projection for the entire festival (currently at 41 teams and rising.) Teams can submit HD content using CineForm files, instead of the standard DVD or DV submissions of other cities. Last year a good third of the teams submitted HD shorts in the 48 hours (hoping for more this year.) So if you're in San Diego on August 16th, please come to the theatrical premiere.

This will be our fifth year for the 48 Hour, and it is interesting how rapidly the camera technology has developed through the equipment we used.

2004 - Shot with two JVC HD10Us at 720p30 (the first and only HDV camera.)
2005 - Shot with two Sony HVR-Z1Us at 1080F25 (CineFrame mode.)
2006 - Shot with an early prototype SI-2K (then SI-1920) and a Canon XL-H1 for sound and pickup shots.
2007 - Shot with two SI-2Ks, one with synced audio.
2008 - Planning on shooting a single SI-2K (learning our lesson that two is not always better than one.)

So we went from contrasty, barely-720p to wide latitude 2K and stayed there (single sensor, through three CCDs, and back to single sensor.) If you're thinking "why not shoot Red?" as the logical progression -- honestly, there is no place for 4K in this type of project (the development times would kill your post time), plus it is not just resolution that is defining the quality of the picture now (although the larger sensor size would be helpful.) Also it is so much easier to move a 1-2kg 16mm-style camera and lens through rapid and interesting setups. There are several new technologies being applied this year, but I'll leave that discussion for the postmortem and another blog entry.

Friday, June 13, 2008

Nike + CineForm + SI-2K = Way Cool!

I just discovered this Nike "Evolution" commercial was shot nearly entirely point-of-view using an SI-2K. The results are a lot of fun. Check it out: Nike Football.

Thursday, May 15, 2008

Slow at the full blogging, trying microblogging

I will be doing more updates here; I've just been too busy since NAB to create any well-thought-out content. In the meantime I'm trying out Twitter. Let me know if there is any particular content you want me to try and post there.

Saturday, April 19, 2008

Recorded at NAB

I didn't completely avoid being captured on tape or disk this NAB, but I tried. There were more bloggers snapping photos this year, and people stopping me in the aisles, as it seems that having a forum avatar has killed my anonymity. Also I think I was captured in 4K at the reduser workflow event; I don't think anyone needs to see me in that much detail. :) On the show floor David Taylor did most of our on-camera interviews, including a cool one showcasing the Mac workflow using active metadata (see the video link titled "NAB 2008 -- CineForm 2K and 4K workflow".) I did one short audio interview for Bob Diaz's podcast. No new information here, but for anyone wanting an answer to the question, "what is CineForm?", that five minutes effectively covers it.

Thursday, April 17, 2008

NAB -- The Results

I'm now back in Solana Beach, CineForm HQ, returning a day early from the show. The CineForm reception at NAB this year was simply awesome, with more and more high-end companies seeing the value of compression in high resolution workflows. While most of the staff are still in Vegas, I had to get back to working on our 3.3 release, adding great features suggested by customers and fixing the bugs we always find at show time. Trade shows always have the habit of showcasing bugs, so it is good to have engineers doing the demos with pen and paper, jotting down the list of things they will need to fix back at the office -- we never hold back from demoing work-in-progress features, but showing the cutting edge of our development has its risks. This year was no different, but fortunately most of the bugs showed themselves during Sunday setup and were fixed by Monday morning when the show opened. It turned out we had never tried playing 3K on a 4K timeline at full resolution before (in preview it worked fine on our dinky 24" displays, but at the show we had the wonderful 30" monitors Intel loaned us, which we don't have at the office, so we cranked up the resolution to make use of the display area.) At show start everything looked great and presented smoothly from HD to 4K. We were playing 125 frame-per-second over-cranked 4K Phantom 65 footage, thanks to the guys at Vision Research, AbleCine Tech, and the dual-quad HP xw8600 workstations that power our booth -- the most demanding footage I have worked with to date (clean, but amazingly detailed, and it played real-time even while using Iridas's 3D LUTs.) It is fun dealing with the extremes, like discussing the detail of over-cranked 4K footage with one customer, and the next minute discussing pulldown removal from Canon HV20s.

"All sources welcome" could have be a slogan at the CineForm booth, but fortunately I'm not in marketing (anyone seeing our website will know we don't have marketing department.) On the booth we where showing footage and editing Dalsa Origin, Grass Valley Infinity, Vision Research Phantom 65, Red One, Silicon Imaging SI-2K, and Sony EX1 (likely many more.) Grass Valley kindly loaned us an Infinity camera (pretty awesome BTW) to help us showcase the Infinity support we will be offering shortly, and someone loaned us the tripod to put it on (thank to that unknown company.)

In addition to the regular CineForm crew, I would like to thank those friends of the company that helped out at the booth: Isaac Anderson, Jim Hays, Mike McCarthy and Jason Salonen, providing a much-valued outside perspective for CineForm and our customers. Thank you to Steve Sherrick, who invited me to present at the Red User Workflow panel, and for the warm reception from the engaging audience. For the Reduser presentation, thanks to Terry Cullen of 1Beyond, who loaned CineForm a QuadCore laptop, an amazing machine that made the demo fly. And finally to all the companies that made up an "end-to-end digital workflow" booth: Wafian, Silicon Imaging, Iridas and Intel, each one helping with a sizable piece of the digital film-making puzzle.

Friday, April 04, 2008

NAB preparations.

Sorry there have been no blog updates; I have been working like crazy getting Prospect v3.3 ready for NAB, along with several Mac upgrades. There will be a lot of cool new features to show this year, which helps as CineForm has a bigger presence than ever before (in a booth 3 times bigger -- partnering again with Wafian and Silicon Imaging and adding Iridas to the mix.) Booth SL10609 (Lower South Hall.)

I will be at the show from Sunday through Wednesday; I have normally lost my voice by Thursday, so don't wait if you want to come by the booth.

Tuesday, March 04, 2008

Shot Down By My Better Half.

If you have ever read this blog, you might get a laugh at my wife's skillful skewering of the crazy language I have used over the years. Please read Riding the Wavelet over on her blog.

Sunday, March 02, 2008

Metadata Matters

In recent builds of the CineForm tools and codecs, we are focusing a lot more on the robustness and flexibility of metadata. The CineForm codec has been carrying a special class of metadata for the last 18 months; we use the term "active metadata" to describe how it differs from classic data like edge codes and copyright info. Active metadata is the mechanism that allows camera-control-unit functionality, such as white balance, curves, color matrix and 3D looks, to be stored non-destructively -- maintaining all your source image information, yet allowing playback to turn on and off, or alter, the active decoding/display of the metadata. I need to do a post on this as it is very cool (or please come and see us at HD Expo (March 6) or NAB (April 14-17).) We have had traditional passive metadata for the same period of time; we just hadn't started to emphasize it until now.

The CineForm structure for metadata is a little different from that of other streaming formats (AVI, MOV, MXF, etc.), as the metadata is embedded within each compressed video sample no matter the wrapper type. Streaming formats typically attach metadata to a channel within the file wrapper, parallel to the video stream, or as a separate XML file, which is fine if you can stick with one wrapper type (AVI, MOV, MXF) and keep your files together. However, re-wrapping the stream or breaking up streams can result in the loss of this metadata. As CineForm is wrapper agnostic, we needed to store it differently so that it can't be lost. We can still use the wrappers' standard metadata streams when needed; we are simply not depending on them.

As the metadata is in each compressed sample, we achieve greater flexibility because the video decoder is now aware of the metadata (not possible with parallel streams or sidecar files,) allowing applications that otherwise have little to no support for metadata to act on the data in new ways -- this is how our active metadata works. One example of this use: our latest DPX-to-CineForm tools completely preserve all the DPX metadata and reconstruct it when exporting back to DPX at a later date. We are working toward allowing passive metadata to be selected for optional burn-in display, in addition to all the search and retrieval applications of this data. Lots of cool stuff happening.
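The idea of carrying metadata inside each compressed sample, rather than in a wrapper channel, can be sketched as a simple length-prefixed key/value layout. This is a hypothetical layout for illustration only, not CineForm's actual bitstream format:

```python
import struct

def pack_metadata(items):
    """Serialize a dict of key -> bytes metadata as length-prefixed chunks
    that could travel appended to each compressed video sample
    (illustrative layout, not CineForm's real format)."""
    out = bytearray()
    for key, value in items.items():
        k = key.encode("utf-8")
        # 2-byte key length, 2-byte value length, then the payloads
        out += struct.pack("<HH", len(k), len(value)) + k + value
    return bytes(out)

def unpack_metadata(blob):
    """Inverse of pack_metadata: recover the key/value pairs from a sample."""
    items, pos = {}, 0
    while pos < len(blob):
        klen, vlen = struct.unpack_from("<HH", blob, pos)
        pos += 4
        key = blob[pos:pos + klen].decode("utf-8"); pos += klen
        items[key] = blob[pos:pos + vlen]; pos += vlen
    return items
```

Because the blob travels with the sample itself, re-wrapping between AVI, MOV and MXF cannot strip it the way a wrapper-level metadata channel can be stripped.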

For those developers programmatically implementing CineForm compression into their tools (Wafian, Silicon Imaging and Iridas are good examples), it is easy to directly set and control the adding of metadata; via our SDK you get frame accuracy for metadata insertion. Yet we have also added a mechanism to allow the non-programmer to add free-form metadata to any of their CineForm encodes. For Windows users, there are two registry keys, MetadataGlobal and MetadataLocal, that allow you to add your own metadata:

Here is an example of using MetadataGlobal:

----------- cut here --------------------
Windows Registry Editor Version 5.00

"Copyright Owner"="David Newman"
"Computer Used"="M90 Laptop"
"Random info"="put whatever you want in the file"
"Add Numbers"=dword:00000050
----------- cut here --------------------

Save this as MyGlobalData.reg. Of course you can change and add your own text and numerical data (hex format, so dword:00000050 = 80) after the HKEY line. Double click on the .reg file to install. Now every frame of every capture or render will have this data embedded. I've been using this in the office to know which PC any particular clip originated on.

The second registry entry, MetadataLocal, only stores changes made since the last frame's encode. This is useful for attaching slowly changing data, like GPS coordinates, or other data streams that change only occasionally. It also suits cases where you have a large amount of data you only want attached occasionally, rather than stored in every frame. I have yet to do much with this feature myself, but I'm sure it will have some powerful applications in the future.
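The difference between the two classes can be sketched as a delta filter: a "local" entry carries only the keys whose values changed since the previous frame. This is my own illustration of the idea, not CineForm's implementation:

```python
def delta_metadata(frames):
    """Given an iterable of per-frame metadata dicts, yield for each frame
    only the keys whose values changed since the previous frame -- the idea
    behind a delta ('local') metadata class."""
    prev = {}
    for meta in frames:
        # Keys that are new or whose values differ from the last frame
        yield {k: v for k, v in meta.items() if prev.get(k) != v}
        prev = dict(meta)
```

For a GPS stream that updates only occasionally, most frames then carry an empty delta, and a frame where the position changes carries just the new coordinate.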

Update: Use MetadataGlobal for changing data if you want that data to be present in every frame. Timecode is a changing field that is stored in every frame, and therefore should use the global class. Accurate frame-based changes require SDK-level access.

Of course there are many new features in the works that I can't yet disclose, but please comment with your own ideas for metadata features you'd like to see.

Friday, February 22, 2008

The dark side (but funny)

I don't normally post clippings, but this is so me. Trying to maintain an active forum presence has its dark side.

Source XKCD, found through BoingBoing.

Thursday, February 07, 2008

Is CineForm 444 always better than HDCAM-SR?

Sony's HDCAM SR and CineForm 444 are very different compression technologies: one is designed for existing tape-based, deck-control conform/master workflows, the other for online disk-based workflows. Below I discuss how their core differences impact image quality, and why it is not so simple to state that one is better than the other for all image sources.

About this time last year, I blogged on a test we performed comparing CineForm's then-new 444 mode to the respected de facto industry standard, HDCAM SR; the results of that test are posted on CineForm's quality analysis page (and one of the graphs is repeated below.) While we never expected to exceed SR quality, we deliberately selected test material that would not favor SR, i.e. very detailed source material like full-frame StEM footage (which is designed to trip up compression) plus live green-screen footage shot using a Thompson Viper. We informed Sony in advance that we were performing the test and that we were not intending to show up SR; rather we set out to prove that both SR and CineForm 444 are indistinguishable from the source, and this is still completely true. We also had the results independently verified.

Back then, it seems we had only slightly ruffled feathers at Sony; Sony Broadcast CTO Hugo Gaggioni tried to find me at NAB 2007 to work out what we did and whether the test procedure was valid. I tried to follow up with Hugo but we never overlapped at that trade show. Sony and CineForm have long been good partners and we want to keep it that way; to demonstrate our long-term relationship, we even have an extremely rare bread-boarded Sony Z1 camera somewhere around the office. In the months following NAB '07 there was no more discussion of the test -- until a few months ago. It seems Sony had independently performed their own CineForm vs SR tests and drawn the mixed conclusion of "no significant difference between CineForm and SR 4:4:4 SQ in terms of PSNR." Sony went on to say that their 880/HQ mode is better still; in our tests we found HQ to be better than the SQ mode, but of no significant difference when compared with CineForm's highest quality mode.

How could Sony and CineForm do valid tests, yet get different results and both be correct in their findings? The fact is, you could almost draw these conclusions without doing the actual tests, and here is why: any very light compression (say 4:1), whether DCT, Wavelet, long-GOP or I-frame, will yield similar image quality; the real differences happen at much higher compression ratios. As compression ratios increase, I-frame Wavelets (CineForm) show their advantages over I-frame DCT (SR, D5, DVCPRO), and as you compress more again, long-GOP codecs like MPEG2 and H.264 start to shine over I-frame solutions. So in these tests we really weren't testing Wavelet vs DCT, but rather constant quality (CineForm Wavelet) vs constant bit-rate (SR). Tape systems like SR are always constant bit-rate, modulating the compression quality so the data rate matches that of the tape system.

[Graph: quality (PSNR) over time, CineForm 444 vs HDCAM SR]

So when you see a graph like this showing quality over time, there is a reverse graph showing the CineForm data rate changing while SR's remains constant. In this test the average bit-rate of CineForm happened to be lower than SR's even though the quality was higher, so there was a real, measured advantage to CineForm; yet there are tests that can show the reverse. When the image is simpler to compress -- a lot of sky, or large regions with low noise or detail -- the data rate of a constant-quality codec like CineForm's will fall. In that case you may be comparing an 8:1 Wavelet to a 4:1 DCT, which would likely be in favor of the DCT. The quality of the Wavelet image has not changed, yet the quality of the constant bit-rate DCT has increased, making good use of the available bits.

A good recent example of this happened with a customer (who will remain nameless until we get an OK) who is scanning a classic 65mm color print for restoration, using CineForm as the archive format. They independently did CineForm vs SR testing, found that on some frames SR had better PSNR numbers than CineForm 444, and sent me the frames to investigate. The 65mm source produced a really clean 2K scan, and the images had shallow depth of field, so there was minimal in-focus foreground material to give the compressor a full workout. So while SR was using all of its estimated 440Mb/s, CineForm's data rate was running around 345Mb/s (Filmscan 2 444 keying mode.) For these frames SR yielded a PSNR of 57dB and CineForm measured 55dB -- both pretty awesome, by the way.
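For reference, PSNR is computed from the mean squared error between the source and the decoded frame; a minimal sketch (peak value 255 for 8-bit pixels, 1023 for 10-bit):

```python
import math

def psnr(reference, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    compressed/decoded one, given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, decoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

Numbers in the 55-58dB range, as in the frames above, correspond to a mean squared error well under one code value -- which is why both results count as pretty awesome.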

As CineForm is not limited by a tape transport mechanism, we decided to increase the quality of our Filmscan 2 mode in our next release (we have always called this mode "overkill" in-house.) The sample images from our unnamed source now produce 58dB PSNR using CineForm 444 compression, averaging 390Mb/s (still under SR's bit-rate.) While Sony can leapfrog our quality for these images with their new decks that support the 880Mb/s mode (we found it gives about a 3dB PSNR boost over the more common 440Mb/s mode), there are diminishing benefits for a disk-based workflow. Uncompressed DPX 1080p sequences run about 1520Mb/s, so 880Mb/s is less than 2:1 compression, and at that rate uncompressed DPX is likely a better disk-based workflow. CineForm targets 4:1 compression or more (usually averaging 6:1), where the visual quality is the same as uncompressed (even with extreme post manipulation) but the data rates are low enough to significantly reduce the cost of storage and the bandwidth required for capture and playback.
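The ~1520Mb/s figure checks out if you assume 10-bit RGB DPX packed as one 32-bit word per pixel at 24fps, with rates quoted in binary megabits -- both are my assumptions about the convention, which the figures happen to match:

```python
# Uncompressed 1080p DPX: 3x10-bit RGB packed into a 32-bit word per pixel.
bits_per_second = 1920 * 1080 * 32 * 24
mbps = bits_per_second / 2**20        # binary megabits per second

print(round(mbps))                    # -> 1519, i.e. the "about 1520Mb/s"
print(round(mbps / 880, 1))           # -> 1.7, SR's 880Mb/s mode is < 2:1
print(round(mbps / 390, 1))           # -> 3.9, CineForm at 390Mb/s is ~4:1
```

This is why 880Mb/s offers diminishing returns on disk: at under 2:1 you are close enough to uncompressed that plain DPX becomes the simpler choice.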

All of this is designed to continue to improve image quality in ways you can't even see. :)

Monday, February 04, 2008

Demosaicing (a.k.a De-bayering)

When shooting with a RAW single-sensor camera like the Arri D-20, Red One or SI-2K, the final output quality is impacted significantly by the demosaic algorithm -- possibly as much as by any compression applied. The demosaic algorithm is the process that converts the one value per pixel of a raw Bayer sensor into the more useful three color primaries per pixel, like RGB (see the Wikipedia article on demosaicing.) The weird thing about demosaicing is that there is no single correct way to do it; all the algorithms must interpolate the missing values, and one guess can be as good as another. Of course some look more natural than others, and that gets tricky (i.e. impossible) to prove mathematically -- there are some parallels to building a visually lossless compression. Originally CineForm didn't get heavily involved in choosing one demosaic algorithm, as there are now hundreds of different algorithms, all pretty much claiming some superiority over the rest. Instead we have offered a plug-in structure that allows third parties to add any demosaic filter they wish. Silicon Imaging did this, and we believe Weisscam looked into doing the same. But of course CineForm is judged only by the filters we offer as standard, as we saw when we demonstrated a standalone Red One file convertor (now on hold to meet agreement terms with Red.) Graeme Nattress of Red skillfully skewered, in a post, the faults of the default demosaic we were using in that beta product. The beauty of competitive development is that we were able to address the issues the next day. All this brought my attention back to offering a good range of demosaic filters for the user to select from, rather than each camera vendor offering their own.
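For a concrete sense of what a demosaic does, here is a minimal bilinear interpolation sketch assuming an RGGB Bayer layout -- the simplest of the hundreds of algorithms mentioned above, purely illustrative and not one of CineForm's filters:

```python
import numpy as np

def convolve2d(img, k):
    """3x3 same-size convolution with zero padding (avoids a SciPy dependency)."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def demosaic_bilinear(bayer):
    """Bilinear demosaic of an RGGB Bayer mosaic (H x W, one sample per pixel)
    to an H x W x 3 RGB image: each output channel is a normalized weighted
    average of the nearby sensor samples of that color."""
    h, w = bayer.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                         # red sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True  # green sites
    masks[1::2, 1::2, 2] = True                         # blue sites

    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rgb = np.empty((h, w, 3))
    for c in range(3):
        samples = np.where(masks[..., c], bayer, 0.0)
        # Dividing by the convolved mask normalizes for how many samples
        # of this color actually fall under the kernel at each position.
        rgb[..., c] = convolve2d(samples, kernel) / convolve2d(masks[..., c], kernel)
    return rgb
```

The fancier filters in the list below differ mainly in how they guess the missing values -- edge-directed interpolation, channel-correlation tricks, and so on -- which is exactly where the "no single correct way" problem lives.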

In the latest builds of all our 2K+ products there are now twice the number of demosaic filters offered, allowing you to tweak your image look as you need. See the subtle effects of each demosaic in close-up on this image:
Click each of the check boxes to see the image with the various demosaicing filters applied.

5x5 Enhanced
CF Advanced Smooth
CF Advanced Detail 1
CF Advanced Detail 2
CF Advanced Detail 3

When I find the time, I will shoot something that will really push the demosaic.

While I made these test images under Premiere Pro on Windows, we now have the same control of the demosaic filters on the Mac. For more information please see our new CineForm RAW on Mac tech note.

Wednesday, January 16, 2008

CineForm's Red One Footage Convertor

This post has been revised to reflect the current status.

If you haven't heard the news, the various online announcements and the CineForm page for this new R3D file conversion utility have the details, so I won't need to repeat myself here.

On the blog -- which really needs more posts -- I will focus on why we created this new tool.

As many of you are aware, CineForm has been using Wavelets to compress RAW Bayer images longer than anyone. We started RAW encoding over four years ago now, and have continued to refine the compression and post-production workflow through our integration within the Silicon Imaging SI-2K camera and with new partners like Iridas and Weisscam. Yet there are about to be more Red cameras than any other 4:4:4 HD+ cameras on the market, so there is a lot of interest at CineForm (and from many customers) in supporting our workflow with the Red camera. We waited for the RedCine release just like everyone else, hoping to find good support for CineForm's workflow. Fortunately RedCine was pretty cool, as it supported 16-bit-per-channel color through the QuickTime API (we had feared it might be only 8-bit.) The problem is that it didn't work reliably for many of our customers, likely because their PCs are too old and/or their graphics cards are not sufficient. But some issues were in RedCine itself, as we weren't the only compression type impacted. We do expect these glitches to be fixed at some point, but we couldn't wait.

In addition to our impatience, there was one feature missing from RedCine that we had requested nearly a year ago (we hoped to get it while their product was under development): a RAW mode out of RedCine; through the QuickTime API would have been fine. The RAW mode would decompress the camera's wavelet, but not do a demosaic to RGB, which is slow and triples the data-rate without adding anything to the image. Through our own tool we can do this direct conversion from camera wavelet RAW to CineForm RAW -- a significant saving in disk space and conversion time.
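The 3x figure follows directly from the sample counts: a Bayer mosaic carries one value per pixel, while demosaiced RGB carries three. A quick check (the frame dimensions here are hypothetical, just for illustration):

```python
# One sensor sample per pixel (Bayer RAW) vs three per pixel (demosaiced RGB)
width, height, bit_depth = 4096, 2304, 10   # illustrative 4K frame

raw_bits = width * height * 1 * bit_depth
rgb_bits = width * height * 3 * bit_depth

print(rgb_bits // raw_bits)   # -> 3, RGB triples the data before compression
```

Keeping the image in RAW until the last moment is what makes the direct wavelet-RAW-to-CineForm-RAW path such a saver of disk space and conversion time.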

We still hope that RedCine upgrades and/or Red's upcoming SDK will allow for this type of flexibility, but as a small company it is really hard to wait for the big guys, and Red is big compared to us.

Update: CineForm has agreed with Red to withhold distribution of the R3D2DPX utility we previously released until such time as Red makes its SDK available to others in the industry. Our understanding is that these "hooks" (as referred to by Red) should be available around the NAB 2008 timeframe. CineForm intends to offer compelling workflows for Red users, so all feedback is welcome.