Thursday, May 09, 2019

Know Your Fisheyes

When you say a GoPro is a fisheye lens, which one do you mean? A GoPro is less "fisheye" than you likely think -- its projection falls somewhere between stereographic and rectilinear (the non-fisheye, pinhole-style projection that people call undistorted.)

Wednesday, October 25, 2017

CineForm Goes Open Source

16 years is a long time for a piece of software to remain useful. The CineForm codec, initially developed in 2001, was designed to enable real-time consumer video editing. CineForm, Inc. was a team of engineers who knew image and video processing, but very little about codec design (which likely helped.) Back then, DV (remember mini-DV tape?) was popular but too slow on the consumer's Pentium III and Pentium 4 desktops for software-based video editing. We had worked out that Intel processors had plenty of speed for editing without dedicated hardware, but the cameras' compression, DV or MPEG based, was too difficult to decode. So in 2001, the CineForm codec became the first "visually lossless" intermediate codec (not a proxy), replacing the source with a new file that was much faster to decode. We didn't know this was a somewhat new idea. While there were other performance-optimized codecs, like HuffYUV, CineForm was the first to offer significant compression, balancing quality, speed and compression efficiency like no other before it. Since then, Avid DNxHD and Apple ProRes have followed similar strategies. As CineForm Inc (the start-up), the codec became a core differentiator, and open source was never felt to be a viable option (we were probably wrong.)

Consumers weren't the only ones with speed and compression issues. So in 2003, the company pivoted from the consumer market to the professional one when we created the CineForm HD codec. This version shares the same DNA as today's compression, with 10 and 12-bit support and the resolution increase needed for film and television production. Video producers aren't interested in codecs; workflow is what was sold. CineForm compression was bundled with products like Aspect HD, Prospect HD, NEO 4K and Neo 3D, selling a workflow that depended on a codec. We knew there was little value in compression modules alone. Typically codecs sold for only a few dollars a unit, so the workflow obscured the codec's value. While the decoder was free, the encoder was not sold separately. The little start-up was not brave enough to make the encoder free, let alone its source.

One reason for keeping it closed, you might be surprised to hear, is that the codec's core tech was very simple. The codec idea was sketched out on a single piece of paper and the performance was determined first by counting the number of Intel instructions needed using an Excel spreadsheet -- even that fit on a single page. The codec was primarily written and maintained by two engineers, Brian Schunck and myself, with some great early help from Yan Ye. The simplicity meant that the magic of the compression was best shrouded in secrecy.

In 2011, CineForm was acquired by GoPro, which used it to drive its HD and 3D editing utility, GoPro Studio, resulting in millions of consumer-created edits. Our initial play for real-time consumer video editing finally happened…10 years later. Now the CineForm codec could be released, and it was licensed to many, including Adobe, for free distribution within Creative Cloud. But it was still not open source.

The CineForm codec has shone most brightly when there is a market change: SD to HD, HD to 2K+ compressed RAW, 2K to 4K, 2D to 3D and HD to 4K. At these times of transition, CPU and GPU vendors didn't yet have hardware-optimized solutions -- try software HEVC decoding of 4K60 to see this issue. For the last few years, our computers and mobile devices have had hardware-accelerated video decoding for most types of media we are producing. Even GoPro switched from the CineForm-powered GoPro Studio to the hardware-accelerated Quik for Desktop. So for a couple of years, CineForm didn't get much attention. It was used internally for prototyping, but it received very few public updates.

In 2017, the production market changed again, with new opportunities for the CineForm codec to provide solutions… particularly for greater than 4K sources from premium 360° products, like GoPro's own Fusion and Omni cameras. The new Fusion Studio desktop software defaults to CineForm for all the greater than 4K exports. But unlike GoPro Studio, where CineForm was primarily an internal format, Fusion Studio is not an editor. The CineForm files are meant to be edited in external tools, like Adobe Premiere Pro, After Effects, DaVinci Resolve and HitFilm. While many video tools do have native CineForm support, not all do… and some can’t. This gives GoPro more compelling reasons to make CineForm open source to help support our high resolution cameras.

The CineForm SDK is now open source, dual licensed under Apache 2.0 or the MIT license (integrator's choice.) You are welcome to build new applications around the CineForm SDK, build it into existing applications, extend its abilities with new performance optimizations, and explore new image formats. I would love to see it go into many new places and new markets -- in particular, into popular open source players and transcoders. While the CineForm wavelet engine might be simple, 16 years of coding and optimization have complicated the source. So, to help upcoming compression-ists, the open source release includes a simplified wavelet compression modeling tool -- simplified CineForm -- for education purposes and for codec design and tuning.
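To get a feel for what such a modeling tool demonstrates, here is a toy one-level Haar split in Python -- not CineForm's actual filter (CineForm uses a longer wavelet, and these names are mine), just the low-pass/high-pass idea that wavelet codecs build on:

```python
# A toy one-level Haar wavelet split: averages carry the picture,
# differences carry the detail.
def haar_forward(samples):
    pairs = list(zip(samples[::2], samples[1::2]))
    lows  = [(a + b) / 2 for a, b in pairs]   # half-resolution picture
    highs = [(a - b) / 2 for a, b in pairs]   # detail, mostly near zero
    return lows, highs

def haar_inverse(lows, highs):
    out = []
    for l, h in zip(lows, highs):
        out.extend([l + h, l - h])            # perfect reconstruction
    return out
```

On natural images the high-pass terms cluster near zero, which is what makes them compress well after quantization and entropy coding.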

This was another great step for GoPro as it contributes more original projects to the open source community. Learn more about the CineForm SDK open source project on GitHub at

Wednesday, August 23, 2017

Eclipse results

After my last post, here is the follow-up with the results.

 Eclipse 2017 Edit from David Newman on Vimeo.

One thing I wish I had done was set up an additional camera for video with a locked exposure. The light levels changed fast when approaching totality, too fast for a long-interval time-lapse to deliver the real-time experience. Also, the auto exposure of the TLV and night-lapsing cameras reduced the drama of the lighting change, so a locked camera would solve both issues. I used four GoPro cameras for the above video; next time I will use five or more.

Friday, August 18, 2017

Shooting the Eclipse 2017 with a GoPro

I'm planning to find my way into the path of totality, and be there with a bunch of GoPro gear. I've been asked many times how to shoot this event with a GoPro, so here are my thoughts. Disclaimer: this is my completely unpracticed opinion on shooting a total solar eclipse with a GoPro.

Some basics. The whole transit from start to finish, depending somewhat on your location, is about 3 hours. That pushes the average GoPro a little beyond its battery life in time-lapse mode, and well beyond it in regular video mode. As three hours likely produces pretty boring video, time-lapse is the way to go. If you intend to time-lapse the entire transit, you can use any USB power brick to extend the GoPro's run time; I've done a week-long time-lapse via USB power. If you intend to shoot on battery power alone, on a full charge I typically get about 2 to 2.5 hours of time-lapse on a HERO5 Black with a 5-10 second interval. Plan to start your time-lapse about 1 hour before totality.

Timing. For a two-hour capture, my best estimate for the time-lapse interval is 5 seconds, which will yield a 48-second video when played at 30p. A shorter video to share would be better, yet if you are lucky enough to have two minutes of totality, this interval only gets you 24 frames (0.8s) of the totality. If you intend to work on the video with a speed ramp for the less exciting bits, then a 1 or 2 second interval might be better, but watch out for your battery life.
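The interval arithmetic above is easy to sanity-check; a quick sketch (30p playback assumed, per the text):

```python
# Sanity-checking the time-lapse interval math.
capture_s = 2 * 60 * 60            # two-hour capture window
interval_s = 5
frames = capture_s // interval_s   # 1440 frames captured
video_s = frames / 30              # 48.0 seconds of video at 30p

totality_frames = 120 // interval_s       # 24 frames for 2 minutes of totality
totality_video_s = totality_frames / 30   # 0.8 seconds on screen
print(frames, video_s, totality_frames, totality_video_s)
```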

Framing.  As you know, the GoPro lens is very wide, so forget about getting any close-up views -- well, not without mounting the GoPro against the eyepiece of a telescope, which I will be doing in one setup.

Practicing: A video frame extracted from a GoPro through the eyepiece of a sub-$200 4" telescope with a solar filter.
With a wide time-lapse, consider how the light will change across the landscape, so compose your framing to capture that.

Filters. A GoPro will not be shooting through those safety filters; those are for your eyes and telescopes etc. Using a solar filter on a GoPro will give you a very small orange dot that moves across the frame, if you are lucky -- don't do this. If you have ND filters, you can use them or not; modern GoPros are used to shooting images that also contain the sun, and the sun's image is too small on the sensor to do damage.

Exposure control. In most cases a GoPro is an auto-exposing camera. This is a good thing for those in the path of totality, as the camera will adjust for all lighting conditions, giving you a good video throughout. The downside for those not in the path of totality is that the auto-exposure will reduce the drama of the changing light level. On a HERO4 Silver and on HERO4/5 Black, you can lock the ISO to 100 and set a fixed shutter speed, but only in the video modes with Protune enabled, so you will be left to process a lot of video into a time-lapse in post. You will also need ND16 or ND32 filters to make locking the exposure work for a correctly exposed image at the beginning of your capture.

Time-lapse video vs time-lapse photo vs Night-Lapse photo. Time-lapse video (TLV) is the easiest by far, producing a small MP4 that is ready to share, as soon as the cell service recovers from the network load of millions of eclipse chasers filling small country towns. The downside of TLV is there are no Protune controls, it is all automatic. The other two time-lapse modes will produce JPGs (and GPRs if RAW is enabled) and you can have Protune level controls to set the look (GoPro vs Flat), white balance, ISO, sharpness etc.  If you are in the path of totality, choose Night-Lapse, it will still work during daylight, but will take much longer exposures as needed for the dark few minutes.

My Recommendations: For those willing to do color correction and post-assemble a time-lapse: Night-Lapse, Auto shutter, 4-second interval, Protune Flat, Native white balance (or 5500K for simpler color correction), ISO Min 100, ISO Max 800.  I will enable RAW.  This will produce 1800 images over 2 hours, one set of JPGs and one set of GPRs, using about 18 GB of storage.  If you want a fast, easy time-lapse, use Time-lapse video with a 5-second interval.
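A sketch of where those recommendation numbers come from (the per-pair file size is my inference from the 18 GB figure, not a measured value):

```python
# Night-Lapse at a 4-second interval over a 2-hour transit:
images = (2 * 60 * 60) // 4              # 1800 shots, each saved as JPG + GPR
gb_total = 18                            # storage estimate from the text
mb_per_pair = gb_total * 1000 / images   # roughly 10 MB per JPG+GPR pair
print(images, mb_per_pair)
```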

Wednesday, May 03, 2017

GPMF - GoPro's Metadata Format

Have you ever noticed how little metadata you get from a video file?  Take any JPEG or RAW photo from an iPhone, a GoPro or a DSLR, and you will find extensive metadata stored in Exif, a photographic metadata standard that stores hundreds of properties such as exposure, focal length and camera model etc.  For your average MP4, MOV or AVI, not so much, particularly without the various sidecar files which are so easily lost. Some of the earliest GoPro HD cameras didn't even say "GoPro" anywhere in their video bit-streams.  I feel a manufacturer ID is the most fundamental of metadata: answering, what made this file?  Even some of the professional cameras I worked with at CineForm weren't much better.  Metadata should also answer, how was this file made? In what environment? At what location? Was the camera moving?  Etc. So why is photo metadata ubiquitous, and video metadata spotty at best?  The lack of useful video metadata relates to standards, or the lack of them. We have no Exif equivalent for video files, particularly for consumer video within MP4, so GoPro created one.

For GoPro cameras we couldn't just place an Exif within an MP4, as the exposure changes per frame, so that would be a lot of Exif data, and Exif had no clear model for time-varying signals. The photo standard also assumes that metadata applies to a particular frame, which doesn't work so well for gyro and accelerometer data. GoPro needed to store both high-frequency sensor data and Exif-like frame-based data to describe the image exposure.

A multiple-year effort begins.

The software ecosystem would love JSON or some XML variant, but embedded camera developers will have none of that, complaining of excessive memory usage, system load, blah blah blah.  The camera developers would love nothing more than storing raw I²C packets (actually proposed once), very low overhead, but completely opaque to software developers.  There needed to be a compromise.

How I accidentally became a Firmware engineer.

To investigate a possible low-overhead metadata format, I started with the internal format used by CineForm's Active Metadata; it needed some work and extensions, but it was efficient, easy to read and write, and should appeal to both the embedded firmware and software camps.  So I wrote a sample application to demo the reading and writing of metadata stored in any camera-native format.  I thought it was only a model -- I figured if the format was accepted, the camera team would write something better. Firmware liked it, used it as is, then filed bug reports against it which I had to fix. That was about two years ago, and I've been part-timing as a firmware engineer ever since, focusing on metadata.

Fast-forward to today.

Here is a taste of GoPro metadata and telemetry:

Telemetry | Freq. (Hz) | Notes
3-axis accelerometer | ~200 | All HERO5 cameras
3-axis gyroscope | ~400 | All HERO5 cameras
GPS: latitude, longitude, altitude, 2D ground speed, and 3D speed | 18 | HERO5 Black with GPS enabled
UTC time and date from GPS | 1 | Within the GPS stream
GPS fix | 1 | Within the GPS stream: 0 - no lock, 2 or 3 - 2D or 3D lock
GPS precision (PDOP) | 1 | Within the GPS stream, under 300 is good
Image sensor gain | 24, 25 or 30 | HERO5 v2.0 or greater firmware
Exposure time | 24, 25 or 30 | HERO5 v2.0 or greater firmware

You can use some of this data today within GoPro's Quik Desktop tool, generating cool overlays, or do even more using the GoPro-acquired Dashware tools.   A random video I created 8 months ago using GPMF extracted within Dashware, with gauges rendered and composited in HitFilm:

Dashware+Telemetry from David Newman on Vimeo.

However, for this to be truly interoperable, its support needs to go beyond GoPro.  In a big step in that direction, GoPro has open-sourced the GPMF reader, as a way for third parties to collect, process and display video metadata from MP4s.
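For the curious, the on-disk layout is a simple key-length-value scheme. Here is a minimal reader sketch in Python, based on the published GPMF layout (4-byte FourCC key, 1-byte type, 1-byte sample size, 2-byte big-endian repeat count, payload padded to 32 bits; type 0 marks a nested container). The open-sourced gpmf-parser is the authoritative implementation; treat this as illustration only:

```python
import struct

# Walk GPMF key-length-value items into nested (FourCC, ...) tuples.
def parse_gpmf(buf, offset=0, end=None):
    if end is None:
        end = len(buf)
    items = []
    while offset + 8 <= end:
        fourcc = buf[offset:offset + 4].decode('ascii', errors='replace')
        type_c, size, repeat = struct.unpack('>cBH', buf[offset + 4:offset + 8])
        length = size * repeat
        if type_c == b'\x00':             # type 0 = nested stream (e.g. DEVC, STRM)
            items.append((fourcc, parse_gpmf(buf, offset + 8, offset + 8 + length)))
        else:                             # leaf: keep the type char and raw payload
            items.append((fourcc, type_c, buf[offset + 8:offset + 8 + length]))
        offset += 8 + ((length + 3) & ~3)  # advance past 32-bit padding
    return items
```

The type character ('L' for uint32, 'f' for float, and so on) tells a reader how to unpack each leaf payload.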

Developers, please check it out. There is extensive GPMF documentation and simple code samples.

Sunday, September 18, 2016

Protune 3.0

There is new firmware out for your HERO4 Black and Silver cameras, and it adds a key feature most often requested by professionals: exposure lock.  This will be my third blog entry in the series on what makes Protune the best shooting mode on your GoPro.  See the previous posts on the extended Protune 2.0 features for HERO3+ and the original Protune post for HERO2.
Protune continues its flexibility in providing the ultimate control over your GoPro camera, and with last month's firmware update, Protune expands that control beyond the HERO3+ Black's.

In addition to the classic Protune NATIVE white balance (for wide gamut) and FLAT color profile (log for greater dynamic range), the new shutter control allows you to switch from automatic (default) to a locked shutter-speed related to your selected frame rate.  For 24p capture, the new shutter options are Auto, 1/24, 1/48 and 1/96th of a second:

For a 60p mode the options are:  Auto, 1/60, 1/120 and 1/240.  So for all Protune video modes the shutter can be set for 360, 180 or 90 degrees of effective shutter angle.
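The shutter-angle relationship those options follow can be written as a one-liner (the function name is mine):

```python
# Effective shutter angle: 360 degrees means exposing for the full frame
# interval, so angle = 360 * frame_rate / shutter_denominator.
def shutter_angle(frame_rate, shutter_denominator):
    return 360.0 * frame_rate / shutter_denominator

# The Protune options listed above, for 24p and 60p:
for fps, denoms in ((24, (24, 48, 96)), (60, (60, 120, 240))):
    print(fps, [shutter_angle(fps, d) for d in denoms])   # [360.0, 180.0, 90.0]
```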

The ISO Mode control, when set to MAX, works identically to ISO Limit in the previous release: the ISO will automatically range from 100 to whatever you set in MAX.  I typically never want the sensor gain above 1600.

A good use of the shutter control feature is removing motion blur from low-light (indoor) slow-motion shots.  When shooting at 120fps or greater, the camera will naturally open to a 360-degree shutter if it isn't daylight.  This will make the slow motion very smooth, but too dreamy for some action sequences.  Using a 90-degree shutter will keep the motion blur to a minimum for these demanding shooting conditions.

Switching to LOCK will disable any auto-exposure behaviors in the camera. In most lighted situations, setting the camera to a fixed ISO 100 is best, offering the lowest sensor noise. But all is not perfect: as GoPro cameras are designed for durability with no moving parts, there is no iris (aperture) control.  Fixing the ISO to 100 and the shutter to something longer than is typical for action sports (normal is around 1/1000th of a second), there will often be too much light due to the wide-open f/2.8 lens. For daylight shooting you will need to add neutral density filters, and a lot.  Fortunately the ND32 needed is available for direct lens mounting from this Polar Pro filter set (unfortunately they don't sell the ND32 separately.)  There are other options to adapt the GoPro lens mount to traditional filters, like this one from Snake River Prototyping, great for those rare days when you need ND64. I've used them all with excellent results.
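To size the ND filter, compare the shutter speed the camera would meter automatically against the one you want to lock. A sketch with a hypothetical daylight meter reading of 1/1000 s at ISO 100:

```python
import math

# Hypothetical daylight meter reading: 1/1000 s. Locking the shutter at
# 1/48 (180 degrees at 24p) admits 1000/48, about 21x more light.
metered_denom, locked_denom = 1000, 48
stops = math.log2(metered_denom / locked_denom)   # ~4.4 stops over-exposed
# Each ND stop halves the light; round up to the next available filter.
nd_factor = 2 ** math.ceil(stops)                 # ND32
print(round(stops, 2), nd_factor)
```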

Manual exposure is here, stock up on your ND.

Tuesday, April 05, 2016

Lens curvature, and Embracing its Perspective-Distorting Goodness

When you think of GoPro video, you may often think of extreme sports, or maybe great life moments, but rarely narrative cinema. The super-wide "fish-eye" lens, which has everything in focus, goes against many learned instincts of the cinematic language of storytelling. In late 2009/early 2010, coinciding with the rise of GoPro and the HERO HD, the DSLR video revolution began. This enabled consumers, using full-frame sensors and wide-open lenses, to achieve shallow depth of field video imagery at an affordable price. While DSLRs were said to achieve a "cinema look", democratizing film and television production, the GoPro revolution was all over YouTube and Instagram. This gets flipped on its head when the filmmakers behind the viral YouTube® hit Bad Motherf***er (somewhat NSFW) landed a feature film deal and, three years later, got a wide theatrical release (April 8, 2016) for Hardcore Henry. Nearly 100% of the film uses GoPro footage, without resorting to a fish-eye "correction", to give a first-person experience with all the barrel-distortion goodness.
Viral video hit from Biting Elbows
STX Entertainment's first trailer for Hardcore Henry
What am I on about?  Barrel distortion, or fish-eye, with horizons and buildings appearing curved and all geometry in the image distorted -- these are unwanted artifacts, aren't they?  It's true that Steven Spielberg and Janusz Kaminski selected vintage lenses for Bridge of Spies, resulting in a more curved look than any other contemporary feature in 2015, but the bent tank barrels and non-geometric windows were employed for deliberate artistic reasons, not technical limitations.
Images from Bridge of Spies
There are good technical reasons to embrace the curve. And for those wishing to dismiss a POV feature film as only a live action first-person shooter, you need to revisit video games, as they do not use fish-eye projections. Video games are rectilinear ("standard" lens with straight lines everywhere), just like 99% of the lenses you might use to shoot a wedding or a feature film, and video games typically render a field of view (FOV) narrower than a typical GoPro. There is a lot more going on within the technical achievement that is Hardcore Henry.

Disclaimers and context... I did not always see barrel distortion as good.  I had no idea whether Hardcore (its original name) would be remotely viewable on the large screen when I met the filmmakers, led by writer-director Ilya Naishuller, even before the first draft of Henry was finished.  I was GoPro's technical contact for the film; it was my job to convince the team that the HERO3 Black (new at the time) was up to the task.  I was a little bit over my skis here, but maybe we all were (it was an exciting time.)

I was from the school that super-35 was good, but full-frame sensors were better, and filmmaking was a lot about revealing less through shallow depth of field and extreme letter-boxing. Yes, I'm an amateur filmmaking hack.  And the rectilinear look was part of this desired look for me. It was through my insistence that the Fish-eye Removal feature was added to GoPro Studio, after hearing from some professionals that the "GoPro look" doesn't always cut well with other cameras -- plus I personally wanted it badly, as did many customers based on analytics data.  The hack in me said rectilinear was the only correct/cinematic look.  In the same time frame, I was also taking up the quadcopter hobby, and rectilinear "corrected" GoPro footage looked so right to me.

Cine Gear Fly-over from David Newman on Vimeo
(HERO3+ shooting with a Medium FOV; this is not as wide as a GoPro can go.)

Yet something nagged at me: rectilinear was not perfect (whether from a physical lens or a post-correction), as the wider the source lens, the weirder it looked, even with gimbal-stabilized aerial footage.  In GoPro Studio we limited the correction to a centered 16x9 crop, as the distortions become undesirable when the full horizontal field of view is preserved. Why was rectilinear considered the ideal lens, when rectilinear itself can look extremely distorted? Examples of rectilinear distortion below.

HERO4 Photo, Stock lens, with about 120 degrees horizontal FOV

Rectilinear conversion, preserving horizontal FOV

I prefer the HERO4 look, as it is more representative of reality, rather than the elongated image of the rectilinear correction, however both are usable.  To push rectilinear beyond its limits I have simulated an even wider lens using a spherical 6-camera GoPro rig (stitched in GoPro's Kolor AVP.)

Simulated Equidistant "Fish-eye" with 150 degrees Horizontal FOV

Simulated Rectilinear Lens with 150 degrees Horizontal FOV
I'm standing on the left edge of both images; they are from the same frame in the spherical source, and I much prefer the way I look in the fish-eye image. A practical fish-eye or rectilinear lens would produce the same result. We can see that at some point a lens-curved image becomes the easier-to-view, more natural-looking image.   Determining that exact point becomes an interesting problem to solve.
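The idealized projection formulas make the comparison concrete: rectilinear maps a ray at angle θ from the optical axis to r = f·tan(θ), a classic equidistant fish-eye to r = f·θ, and a stereographic lens to r = 2f·tan(θ/2). A small sketch (normalized to f = 1) shows rectilinear blowing up toward the edges while the curved projections stay tame:

```python
import math

# Idealized radial projections: image-plane radius (f = 1) for a ray
# at angle theta from the optical axis.
def rectilinear(theta):   return math.tan(theta)          # pinhole-style
def equidistant(theta):   return theta                    # classic fish-eye
def stereographic(theta): return 2 * math.tan(theta / 2)  # GoPro-like curve

# Rectilinear stretches hard toward the frame edges; the others don't.
for deg in (20, 40, 60, 75):
    t = math.radians(deg)
    print(deg, round(rectilinear(t), 2), round(stereographic(t), 2),
          round(equidistant(t), 2))
```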

Showing the distortion in a few stills doesn't showcase how bad the problem can be in motion, as is clearly going to be the case in a POV action film like Hardcore Henry. 
Surrounded by bad guys in Hardcore Henry
This pan, showing that "you" are surrounded by bad guys, would be much more disorienting in rectilinear (at the same FOV.)  Familiar objects, like human faces, should not dramatically change size or shape as you pan. This rough render below, using spheres as an analog for human heads, demonstrates that the distortion at the edges is clearly apparent when switching from a HERO4 lens to a rectilinear lens.

Same pan speed, same HFOV, switching between different lens curves.
I've discussed this with many, and I've heard this a good number of times: "when I see the world, lines are straight." It's as if linearity is the only distortion that matters, and rectilinear lenses must be the correct simulation for our vision of the world. Yet our eyes aren't seeing the world as an image projected onto a flat screen.  A rectilinear lens is only a correct imaging model when the viewer FOV matches the shooting field of view, as many gamers are well aware. By viewing these images in a narrowly formatted blog post, you aren't seeing them anywhere near their intended viewing FOV.  This mismatch between viewer FOV and camera FOV is where the perceived image distortion comes from.

From "Game Graphics, Physics and AI"

Extra: fun reading on gamer's FOV theories

If you were to view the rectilinear image of the red car (from the examples above) from a distance half its displayed width, the distortion would go away (this might be hard to do if viewing on a cell phone or iPad; even on a large screen you may require reading glasses to pull focus that close.)  The same is true for a fish-eye lens, except using a curved screen with matching FOV.  We are used to rectilinear, but it is just as distorted when projected on a screen with the wrong viewer FOV.  When using a longer lens (telephoto) for display on wider-FOV screens, the distortion appears as shallower depth of field and a flattened image (much less so on a smartphone.) A wide lens (28mm or less on a DSLR) will enhance the depth when viewed on a TV, but it will look and behave like a standard lens on an IMAX screen. Many perceived lens attributes are due to the perspective distortion when the images are displayed with a different viewer FOV. When selecting a lens, we are often using perspective distortion, altering our images subtly to help tell our stories.
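The matching viewing distance follows from simple trigonometry: d = (width / 2) / tan(hfov / 2). A quick sketch (the function name is mine; for a 90° capture the matching distance works out to half the screen width):

```python
import math

# Distance at which the displayed image subtends the same angle it was
# shot with: d = (width / 2) / tan(hfov / 2).
def matching_distance(display_width, capture_hfov_deg):
    return (display_width / 2) / math.tan(math.radians(capture_hfov_deg) / 2)

print(round(matching_distance(1.0, 90), 3))   # half the screen width
print(round(matching_distance(1.0, 30), 3))   # nearly two screen-widths away
```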

So distortion is good, and is a key part of cinematography. We have been using it in all our photographic and film projects since its inception.  Super-wide style shooting can have a very large disparity between the captured FOV and presentation FOV (theater, TV, smartphone, etc.)  This type of shot benefits from a different type of distortion, a different lens curve, that helps map one FOV to a more screen compatible presentation format.  GoPro has been riding this wave since HERO HD. 

Distorting an image from a wide FOV to a narrower presentation has its advantages, in that motion becomes more stable; in particular, angular motion (camera shake and pans) is less pronounced with a small viewer FOV.  GoPro users have often commented how good the stabilizer is, when we don't have one -- the wide curved lens inherently appears more stable. For a wide rectilinear lens the benefits peak around 90-95°; beyond that, some barrel distortion is needed to extend the stabilization properties. The table below shows the size of edge movement for a horizontal pan at different capture fields of view. The HERO4 is inherently more stable from 60° and above, with rectilinear motion quality reversing at 100° and beyond. Below 60°, both lens types have similar motion characteristics.
Edge Motion as Lens HFOV increases, for a narrow viewer FOV

Given all the distortion difficulties of presenting super-wide images at standard viewing distances, it should be more obvious why a film like Hardcore Henry used GoPros. Put simply, the wider the fish-eye lens, the more world motion (particularly angular motion like pans) slows down. When shooting human POV with body-mounted cameras, cinematography rules for ideal panning speeds are difficult to control.  Wide fish-eye lenses are significantly more viewable, resulting in less motion sickness than many viewers experienced with similar shaky-cam sequences in films like The Blair Witch Project.  So with fish-eye, the wider the lens, the slower the camera motion; with rectilinear, the wider the lens, the faster the camera motion (at the edges, where we are most sensitive to motion.)
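The edge-motion argument can be sketched by differentiating the projection formulas: during a pan, an object at the frame edge moves on screen at a rate proportional to dr/dθ evaluated at half the capture HFOV. This uses idealized rectilinear and stereographic projections (my stand-in for the HERO4 curve, which the post says the HERO3/4 lenses approach):

```python
import math

# On-screen edge speed during a pan ~ dr/dtheta at half the capture HFOV.
def edge_speed_rectilinear(half_angle):    # d/dtheta tan(t) = sec^2(t)
    return 1 / math.cos(half_angle) ** 2

def edge_speed_stereographic(half_angle):  # d/dtheta 2*tan(t/2) = sec^2(t/2)
    return 1 / math.cos(half_angle / 2) ** 2

for hfov in (60, 100, 130):
    t = math.radians(hfov / 2)
    print(hfov, round(edge_speed_rectilinear(t), 2),
          round(edge_speed_stereographic(t), 2))
```

At 60° the two are close; by 100° the rectilinear edge moves roughly twice as fast, consistent with the crossover the table describes.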

The advantage of shooting action with a fish-eye lens is not obvious to many reviewers, though.
  • From "It’s not for everyone, and a lot of viewers might be made sick thanks to the jittery, fish eye lens through which everything is viewed."  
  • From  "For what it's worth, I'd say the filmmakers did a pretty damn good job at keeping a lot of it coherent, although I'm not crazy about the fish-eye view for an entire movie."
While it is a new look for a movie, fish-eye is a technical requirement for making this type of first-person POV possible without damping down the action.  I even experimented with removing the fish-eye from the Hardcore trailer (it was crudely done, so I will not share it here), and the result, just like the animated GIF above, was unviewable in rectilinear.

Side bar: So why 120°, and not 150° or 180°?  There is nothing magical about this number, but it does roughly match the human FOV for both eyes; while we experience near 180° when you consider the peripheral vision of the left and right eyes, the combined stereo image is significantly less.  In VR applications, many of the modern head-mounted displays range between 100-120° horizontally.  For a GoPro camera, 120° has more practical origins. Image sensors are rectangular, and we use a 4x3 sensor so that we can achieve a wide and tall FOV.  The good region (still sharp) of a fish-eye lens would be between 150 and 170° on the diagonal, so the horizontal FOV can only be between 120-136°.
Typical action camera lens and sensor setup.
To achieve significantly more than 130° horizontally would require stitching a multi-lens system, as used for 360/VR capture. So Hardcore Henry 2 (I have no idea, and I couldn't tell you if I did) could use VR rigs, but the simplicity of single-camera capture would likely win out.
Upcoming GoPro Omni
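The 120-136° figure falls out of the sensor geometry: with an equidistant-style fish-eye, the angle is proportional to distance from the image center, so the horizontal FOV is the diagonal FOV scaled by the 4:3 sensor's horizontal-to-diagonal ratio:

```python
# Equidistant-style fish-eye: angle proportional to distance from center,
# so horizontal FOV = diagonal FOV * (horizontal / diagonal) of the sensor.
# For a 4:3 sensor that ratio is 4 / sqrt(4^2 + 3^2) = 4 / 5.
def horizontal_fov(diagonal_fov_deg, aspect_w=4, aspect_h=3):
    diagonal = (aspect_w ** 2 + aspect_h ** 2) ** 0.5
    return diagonal_fov_deg * aspect_w / diagonal

print(horizontal_fov(150), horizontal_fov(170))   # 120.0 136.0
```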

Shooting POV? Shoot GoPro. That would seem the obvious message to come from a GoPro engineer, but that is not quite the message I'm going for.  Yes, there are clear advantages to a fish-eye style lens when shooting very wide images, particularly if there will be a lot of motion.  If you're planning content for YouTube, you already know the GoPro look is working great; leave it as is, with no "lens correction" required or desired.   But for a theatrical release, you could potentially optimize the curve in post to be slightly less for the theatrical version.

GoPro cameras themselves have had more than one look.
  • GoPro HERO HD was almost an ideal equidistant fish-eye, with a 170° diagonal in the 4x3 modes.
  • HERO2 pushed the fish-eye look further, slightly more curved in the center, with similar FOVs to HERO HD.
  • HERO3/3+/4 introduced a less curved lens, improving with each camera generation, closer to a stereographic projection.  Not striving for as much in the diagonal, with a little over 120° in the horizontal, it was designed to keep high-speed motion looking real.
The filmmakers once commented that their music video was hard to watch on the big screen, so they needed to develop new techniques to make Hardcore Henry possible.  However, there was also a subtle benefit in switching from HERO2 to HERO3: a lens curvature more suited for the large screen.

When determining the amount of lens distortion, screen size matters -- or more precisely, viewer FOV matters. If you are producing content for consumption on a smartphone, the perceived distortion is different than when targeting a theater projection.  Wide-FOV rectilinear looks even worse on a smartphone, as the distortion is more pronounced. Something new to think about: as the viewer FOV is increasingly varied, from IMAX to video played within a Facebook News Feed, we may need to optimize the lens curve for one, or produce multiple versions in post. Just as theatrical and TV releases have different color correction passes, POV sequences can be optimized for the average viewer FOV.

The average viewing FOVs for the large range of viewing environments:
  • Smartphone video portrait: 10-15°
  • Smartphone video landscape: 15-25°
  • HDTVs: 25-35°
  • Movie theaters: 30-60° (unless you sit in the front rows, this can be very high, don't sit in the front rows of Hardcore Henry, Sharlto Copley considers these rows "the splash zone".) 
  • IMAX average of 70° (from IMAX® 101 Theatre Geometry Extended)
Using the front wheel from the two car images above, I have re-projected both images to observe the distortion at different viewer FOVs.
Where you sit matters

For such a wide source, from smartphone to movie theater, the HERO4 lens holds up well, but as the viewer FOV increases (as in IMAX) you can see the benefits of reducing lens distortion.

Telling long form immersive stories from a human POV is hard. To approximate human vision we need a very wide lens, a VR headset, or to sit front row center of an IMAX theater.  However, VR and IMAX can come with some unwanted side effects: as the viewer FOV gets larger, we are more likely to experience motion, and potentially motion sickness. Large vistas or slow-moving scenes in either IMAX or VR, activating our peripheral vision, can be a fantastic experience, but the kinetic, whip-pan nature of a film like Hardcore Henry works because it doesn't fill our FOV. It works on the large screen from normal seating, middle rows and further back; it will work on your home TV and mobile viewing devices. It works because there is a lens distortion optimally remapping the wide, human-like perspective to more typical viewing environments. I saw the U.S. premiere of HH at the SXSW film festival in Austin; I was sitting towards the back with a FOV of around 30° and experienced only the slightest unwanted motion illusions during the first part of the movie, where the action was slower.  Once the action picked up, all motion issues were gone; the experience is one-of-a-kind and I did not want it to end.

April 14th '16  P.S.  I just re-watched the movie on a much larger screen, at a closer viewing distance, and I had no motion issues at all.  It seems first-person action is an experience the audience quickly adapts to. Of the 15 or so GoPro engineers who joined me for this viewing, several did experience what I felt in my first viewing, but only in the early parts of the movie.  One co-worker did have to step out of the theater, so it is not for everyone.  It was extra fun seeing the movie again; so many clues I missed the first time around.

Good further reading on lens curvatures: 
About the various projections of the photographic objective lenses by Michel Thoby
In Praise of the Fisheye (in video games), by Daniel Rutter
Perspective Projection: The Wrong Imaging Model, by Margaret M. Fleck
FOV and the "fisheye" effect - learn the difference, Randomeneh (on Reddit)