Wednesday, May 06, 2009

No Problem With 3-D

The title of this post is a response to Daniel Engber's article in Slate, "The Problem With 3-D," with the sub-title "It hurts your eyes. Always has, always will." As CineForm is entering the 3D post production world, I was curious whether his claims are valid. Personally, I don't find modern 3D film difficult to watch, and I very much prefer recent films like Coraline in 3D over their 2D presentations, but I spend much of my day staring at images for quality analysis, so I'm not the best test subject.

Engber states that the visual fatigue that "plague[s] flight simulators, head-mounted virtual-reality displays, and many other applications of 3-D technology" is directly connected to 3D movie eye strain, and that "no one yet knows exactly what causes this." Engber then goes on to propose a reasonable-sounding theory: our eyes want to refocus on objects that are not really any closer or further away than the physical screen plane, and this is a likely cause of strain. It seems like a logical explanation for those experiencing eye fatigue, and he offers no other. The article then goes on to suggest that "if 3-D becomes as widespread" your children could go blind (well, "permanently cross-eyed") -- Wow! I was initially willing to accept the earlier claim that convergence without refocus is a potential cause of eye strain for some, but now that my kids' eyesight is involved I had to dig deeper.

I dug, and I now believe he is wrong, at least for most correctly presented theatrical presentations. I'm also proposing a theory without rigorous test data (just like Engber), but one grounded in the optical characteristics of the human eye. I wondered whether hyperfocal distance, the range from some distance x out to infinity within which everything appears in focus, might be the missing piece. While a typical lens has a single point of focus, there is a range in which focus is still considered sharp; whenever depth of field is discussed, it is this same range of acceptable focus. From Wikipedia: "hyperfocal distance is a distance beyond which all objects can be brought into an 'acceptable' focus." If the screen is beyond the hyperfocal distance of the human eye, all 3D images behind the screen plane still appear in focus, and a certain amount in front will too, following some simple rules. With everything in the theatrical 3D space appearing in focus, it doesn't matter if your eyes do change focus, so Engber's claim does not hold up, and your children are safe.

Basically, the problem described in the article only happens under close screen viewing conditions or with extreme "coming at ya!" 3D, which has been losing favour as 3D projection becomes more common. In a typical movie theater the viewing distances are such that the eye can do its natural convergence and refocusing without losing focus on the presented 3D world.

Now to calculate the acceptable distances for 3D, we need to know the human eye's hyperfocal distance. With some online research I was able to determine that the eye is approximately a 22mm lens system (seems about right), with a maximum f-stop of 2.4 (a darkened theater would do the trick). There is a great article on The Photographic Eye from which I gathered the numbers I used (they agreed with many sources). Now we can plug these numbers into a lens calculator and get a result for 35mm cameras: a 22.3 foot hyperfocal distance, with a focus range of 11.1 feet to infinity. So if eyes were 35mm cameras, as long as the 3D object remains more than 11 feet away from us we can comfortably and safely view it and everything behind it in the 3D world. But of course our eyes are not 35mm cameras and are more complex to model; the heart of all this is the Circle of Confusion (CoC, the amount of allowable blur). So instead of guessing which camera system best models the human eye, let's calculate what counts as acceptable blur for the typical theater viewing environment.
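
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the standard thin-lens hyperfocal formula, assuming the 22mm / f2.4 eye model above and the conventional 0.03mm circle of confusion used for 35mm cameras. The eye is of course not a simple thin lens, and the exact output shifts a little with the CoC you pick, but it lands within a few tenths of a foot of the figures quoted.

    MM_PER_FOOT = 304.8

    def hyperfocal_mm(focal_mm, f_stop, coc_mm):
        # H = f^2 / (N * c) + f  -- standard thin-lens hyperfocal formula
        return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

    # Eye modelled as a 22mm lens at f/2.4, with a 35mm-camera CoC of 0.03mm.
    H = hyperfocal_mm(22.0, 2.4, 0.03)
    print(H / MM_PER_FOOT)      # ~22 ft hyperfocal distance
    print(H / 2 / MM_PER_FOOT)  # ~11 ft near limit when focused at infinity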

For our theater model, we have a nice 40 foot wide movie screen at a viewing range of one and a half screen widths, i.e. 60 feet away, using a common 2K projector (99% of all digital projection is either 2K (2048) or 1920-line HD). The amount of allowable blur is related to the pixel size: we don't see a lot of chunky pixels because the resolution is high enough that it fuses into a continuous image for the audience. So let's estimate that a half-pixel blur is OK and is still perceived as sharp. For the approximately 2000 pixels across the 40' screen, 0.5 pixels works out to 0.5/2000*40 = 0.01 feet, a blur of around a tenth of an inch. The viewing angle for that blur at 60' works out to about 0.01 degrees. Since the Circle of Confusion (CoC) is referenced at a 25cm viewing distance, 0.01 degrees corresponds to a CoC of about 0.04mm. Now, using that CoC in our lens calculator, we get these results: when viewing the screen from 60' away, all objects from 13.1' to infinity will appear in focus. If an object jumps 75% off the screen and is perceived as 15' away, and you focus on it at 15', both it and the screen plane remain in focus, so there is no source of eye strain. We now have the safe/enjoyable range in which to present a 3D image. You might be thinking the half-pixel allowance for blur was overly generous, and it was, in 3D's favor: Wikipedia and other sites place the average acceptable CoC at 0.2mm, yet the numbers above are five times sharper than that (so there is plenty of headroom for the average viewer).
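
If you want to reproduce the theater numbers yourself, here is a rough Python sketch of the same chain of reasoning: a half-pixel blur on a 40' screen viewed from 60', converted to a circle of confusion referenced at 25cm, then run back through the thin-lens depth-of-field formula. The function names are mine, and the result lands within about half a foot of the figures above; the small difference is just how aggressively the CoC gets rounded.

    import math

    MM_PER_FOOT = 304.8

    def hyperfocal_mm(focal_mm, f_stop, coc_mm):
        return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

    def near_limit_mm(focal_mm, f_stop, coc_mm, subject_mm):
        # Closest distance still acceptably sharp when focused at subject_mm.
        H = hyperfocal_mm(focal_mm, f_stop, coc_mm)
        return H * subject_mm / (H + subject_mm)

    # Theater model: ~2000 pixels across a 40' screen viewed from 60',
    # with half a pixel of blur allowed.
    blur_ft = 0.5 / 2000 * 40             # ~0.01 ft of blur on the screen
    angle_rad = math.atan(blur_ft / 60)   # ~0.01 degrees as seen by the viewer
    coc_mm = 250 * math.tan(angle_rad)    # CoC referenced at 25cm, ~0.04mm

    near_ft = near_limit_mm(22.0, 2.4, coc_mm, 60 * MM_PER_FOOT) / MM_PER_FOOT
    print(coc_mm, near_ft)  # ~0.04mm CoC, near limit ~12.6-13 ft, in line with the 13.1' above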

This potentially points to the home screening environment having issues, with the screen being so much closer. Yet using the same human eye lens modelling, a 3D depth range can be created such that the eye can focus at will without causing blur issues at the screen plane or introducing eye strain. Plus, as the average home environment is not as dark as the theater, we can use a different lens f-stop in our calculations. If our eye is more typically at f/4 for home viewing (totally guessing; I'd love help here), then for a screen at a 12' distance, 3D images can be placed from 6' (half out of the screen) to infinity (still using the same very sharp 0.04mm CoC). So there is a reformatting required between theatrical and home release, but that was already an accepted factor in adjusting for the smaller home screen.
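
Running the same formula with the home-viewing guesses above (an f/4 pupil, a 12' screen distance, the same 0.04mm CoC) is a quick way to see where the 6' figure comes from; treat it as a sketch under those assumptions rather than a measured result.

    MM_PER_FOOT = 304.8

    def near_limit_ft(focal_mm, f_stop, coc_mm, subject_ft):
        # Thin-lens depth of field: nearest distance still acceptably sharp.
        H = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
        s = subject_ft * MM_PER_FOOT
        return H * s / (H + s) / MM_PER_FOOT

    print(near_limit_ft(22.0, 4.0, 0.04, 12))  # ~5.5 ft, roughly the 6' (half the screen distance) quoted above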

This is not to say that there aren't other factors that contribute to eye strain in today's 3D technologies, such as imperfect filtering causing left/right cross-talk and poor quality glasses introducing other optical distortions. Yet the biggest factor in eye strain is more likely the inexperience of 3D film making: there are good and bad presentations, and that has nothing to do with the technology. The film making and the tech can only improve, and there is no inherent cause of eyestrain in today's 3D presentation technology.

-------

Here is a fun aside. For those wanting to experience the 2D and 3D versions of a film without going to see the film twice (or closing one eye for prolonged periods of time), make yourself a pair of 2D glasses. These will also come in handy if you happen to see one of those bad 3D movies. Get two pairs of standard RealD glasses, pop out the left-eye lens from one and the right-eye lens from the other, and swap them. With a pair of scissors to trim each lens down a bit, you can fit the lenses back into the frames in the opposite positions (be careful not to flip the front and back of the lens), so that you have one pair of glasses that is left-eye only and another that is right-eye only. At any time during a 3D film, put on your 2D glasses to experience that retro 2D world.

23 comments:

Jason Rodriguez said...

Great article, but quick question: are you saying that the reason for eye-strain is that we want to see something as "sharp" if it's within the hyperfocal distance of the eye, but that the real-world camera footage was shot with the object out-of-focus, or with too much blur, so now we're straining to make something in-focus that is out-of-focus in the "3D" world? In other words, the headaches and eye-strain of 3D are the same as symptoms experienced by people with bad eyeglass prescriptions who are constantly using their eye-muscles to try and make objects in-focus that are out-of-focus?

David said...

I wasn't going there yet. The issue of shallow depth of field in the camera vs. very wide for 3D projects is still up for discussion. I was only pointing out the holes in the Slate article, which missed some lens theory. Of course it is easiest to think of focusing on objects from 15' to infinity if they were shot in focus. If you direct the audience's attention appropriately, a shallow depth of field in camera will be OK. It is just harder to do, and will require more experience in 3D shooting.

Tim Savage said...

"Circle of Confusion" is what I was always familiar with, not "conclusion." Typo?

Another variable I was made aware of while working as assistant producer on an Oldsmobile 3-D movie in 1982 (ah yes... when the automakers had money for such nonsense) with the two original House of Wax lenses (NOT the Paris Hilton flick!) was the variable distance between the "eyes" -- that is, the 3-D effect could be made greater or lesser by changing the eye-width value. This film also had Lenny Lipton (lennylipton.com) on it, who had tables for setting these variables to the appropriate values for the scene's focal length, the primary subject's position in frame, and peripheral objects (which could enhance or detract from the 3-D, especially if one showed up in only one "eye" or lens). This was called "occlusion," and yes, it could cause headaches.

David said...

Fixed the CoC typo, thanks. I agree there are additional factors, far more than a single blog post can cover.

Jim Krisvoy said...

If 3D projects are filmed properly, with avoiding eyestrain as one priority, and the finished product is presented and viewed properly, there should be no eyestrain whatsoever. The companies involved in this right now are going out of their way to avoid any viewing fatigue, and Engber's article is just so much hot air.

Unknown said...

Does any of your analysis look at the way 3D is actually displayed in the movie theatre, on a 2K digital projector or on a 4K digital projector? The current technology for displaying 3D on a 2K digital projector uses something called "triple flash" to create the 3D image, but the 4K uses continuous 2K right-eye and left-eye images. Do you have any knowledge or opinion on the 2K "triple flash" technology causing increased "eye fatigue," due to that black space always flashing and your brain having to interpolate the image?

Anonymous said...

With a lack of any objective scientific data, the only real conclusion in all of this is the statement, "'no one yet knows exactly what causes this.'" David is assuming that the eye does try to focus on a blurred object that is out of focus, and then he assumes that it will not try to focus on it if it is within the depth of field limits. For all we know, the eye does not necessarily try to focus on objects that are blurred. If it did, then anyone who needs glasses would have constant headaches when they take them off. I am not stating this as proven fact, but I don't think it actually happens. On the other hand, how would we know whether the eye automatically tries to change focus when other visual information tells it that the object is not at the distance that stereo vision says it is? Depth perception is not tied to a single element of vision; a great amount of depth information can be and is gathered by a single eye. Perhaps the headaches are caused by conflicting information in this process when viewing a 3D moving image.

I would look to real scientific studies, not simple opinions.

Paul A.

Dave Gregory said...

Focus and convergence are two separate issues. The issue here is not so much that of being able to focus but rather what is called "viewer accommodation breakdown" -- the fact that your eyes are trying to converge on something apparently getting closer to you but have to remain focused on the screen plane. Most people can handle that, but constantly requiring the audience to do it for too many off-screen effects can lead to eyestrain. On the other hand, neither of your articles mentioned "divergence", which is when the common dual points of a BG object are greater than 2.5 inches apart on the screen's surface -- thus causing your eyes to diverge like Marty Feldman's should you try to fuse on the object. THAT is what really can cause eyestrain. Engber was right to make people aware of these issues, though his attitude was way too negative. Also, he sounds like the kind of fellow who will mistakenly sit near the screen, thinking it will enhance the experience. Not true. That just makes it harder to watch. Those of us working in 3D know to sit in the rear-center of the theater. As Jim Krisvoy pointed out, everyone is addressing these issues and taking great pains to minimize them. However, they can never be totally done away with because it's virtually impossible to satisfy every seat in the auditorium. Also, filmmakers have no control over the size of screen our films are projected upon, and an object at infinity may have an on-screen parallax greater than 2.5 inches on larger screens. Therefore, sorry Jim, there is always going to be eyestrain for some people, even in the most perfectly shot & projected 3D situation, and especially for those sitting closer than two screen heights away. So use this tip and always sit in the rear-center of the auditorium, especially in those bloody "IMAX 3D experiences", and you'll be headache free (unless the movie's 3D is just really bad to begin with).

Unknown said...

This is a great article and makes a good point about 3D. There are variations in available depth budget across display technologies that are important for production teams to understand. It very much depends on viewing distance (Helmholtz knew this back in 1867...).

You can find some of our scientific research and a free stereo camera calculator for one method of accounting for the variable display depth budget at www.binocularity.com

The use of camera depth of field in 3D movies is an issue - my experience is that when it sits behind the action it is OK; when it is in front it can be difficult.

David said...

David,

The black "flashes" between frames are not likely a source of eye strain, as that has been standard in film presentation for years. However, the small temporal difference between the eyes can cause 3D perception issues that a dual-projection system doesn't suffer (although I'm not sure I've ever had such issues.)

I was only addressing one incorrect claim in the Slate article, and I believe technology can overcome any remaining factors.

David said...

Re: "then he assumes that it will not try to focus on it if it is within the depth of field limits"

Paul, I'm saying the opposite: the human eye is welcome to focus wherever it would naturally. Under these optical conditions there is nothing different between a correct 3D theatrical presentation and the real world, as all the objects within the range will be perceived as sharp. Of course an object will only be sharp if it was filmed that way, but there is nothing in 3D projection that is inherently unable to satisfy normal human eye movement and focus.

David said...

Dave Gregory, my argument is that the eye does not have to remain focused on the screen plane, and I fully expect it wouldn't, yet this is not a problem as postulated by the Slate article. I'm well aware that divergence falls in the category of badly presented 3D, and yes, that would cause eye strain. 3D films do need to be reformatted for the screen size they are being presented on. Scaling down is normally okay and doesn't cause divergence issues, but scaling to a larger screen than the intended presentation is a big issue.

Dave Gregory said...

David,

Regardless of where the object is in the z-axis, its image is still at the screen-plane--so that is where the viewers' eyes will be focused (unless they're watching something other than the movie).

Now, where they are "converging" is a different matter entirely, as that is a separate function.

Since a 3D movie using many off-screen effects requires our eyes and brain to de-couple the way our eye muscles focus and converge (the way they normally work together), eye-strain can indeed result. And that WAS the point of the SLATE article and it is a very valid point.

Do you consider it practical for a film distributor to prepare several different D-Cinema versions of a 3D film for different screen sizes? I don't -- though Paramount & Sony claim to have done so for BEOWULF. But who of us has their budgets?

We just have to design a version for the biggest average screen we presume we'll be on.

-- Dave Gregory

John said...

Over the years I have seen 3D/VR come and go. Some interesting projects I have worked on:
SpaceSpuds, Amiga, 1988 (LCD shutter)
Flight simulator running on Fakespace Labs VR device (Leep Optics), Cyberthon 24hrs of VR, ~1989
VR Slingshot, Amiga, PC, 1994 (LCD shutter)
Virtual i-O i-Glasses (HMD with optics setting the image plane about 30" from eyes, head tracker).
H3D (we ported Quake to stereo 3D, wireless LCD shutter), 1998
Akumira (solved the color anaglyph problem, 2000)

Eyestrain
Primary cause: convergence/accommodation

To reduce: move accommodation plane farther from eyes, physically or with optics. A collimator can help with perceiving far away images (for flight simulation, etc.).

Other causes of eyestrain:
Vertical parallax: don't rotate the cameras; they should be kept parallel if possible.
Different brightness between eyes. Color differences aren't as bad. Consider that "full color" can be perceived with color anaglyph; the brain can also fuse color components between eyes. Brightness difference between eyes is harder to deal with.

Framing errors: objects clip the window edge when out of screen, or near the display plane.

Viewer tilts head: eyes/brain must work harder to fuse the image (shutter or filter glasses).

Flicker: primarily an issue with low-rate shutter glasses (< 100Hz).

Temporal errors: both eyes should see the same frame at the same time. If one frame changes before the other (per eye), not only can eyestrain occur, but also “VR sickness”. VR sickness can be summed up as a type of motion sickness where inputs to the brain are altered so that the body reacts in the same way as poisoning. This causes the viewer to feel nauseated / throw up (evolution: “expel poison”. Poison can cause “timing” errors between eyes and between both eyes and inner ear. Simulate it and the body reacts appropriately). HMD’s are especially challenging as the motion tracker must be very fast/noise-free to prevent VR sickness.

As one spends more time viewing stereoscopic 3D, they can experience less discomfort as they get more used to it (over a period of days and weeks; not in one sitting).

From commercial experience, the biggest problem beyond discomfort is that consumers don’t like having to put something on their face. Perhaps 3D won’t really take off until we have high-quality autostereoscopic displays (similar to a color hologram). The vertical barrier stuff works OK as long as the viewer sits in the sweet spot.

David said...

Dave,

Please reread: the focus of an object, on or off the screen plane, will be perceptibly the same due to the depth of field of the human eye at those viewing distances. Therefore focus off the screen plane is going to happen, and that's a good thing, not a source of eye strain. I know it doesn't seem obvious at first.

John, That is some good stuff.

John said...

Thanks David.

A few more causes of eyestrain, which also affect 3D perception:

Ghosting (a form of crosstalk): causes: shutter glasses don't get dark enough; polarized filters don't cut enough light (and/or the screen does not reflect polarized light very well, or the viewer tilts their head, which is an issue with linear polarizers but not with circular polarizers); anaglyph filters not matched very well to the colorspace of the presentation material (more challenging with CMYK and printed material than RGB displays). Ghosting can be minimized by limiting the maximum parallax.

Crosstalk, general case: any time left/right image material is viewable in the other eye. This is one of the biggest challenges for showing color anaglyph on broadcast television. When RGB material is converted to YUV and then compressed, some form of color correction is usually also applied. In the red/cyan case, this typically causes information from the red channel to be shown in the cyan (blue+green) channel, and vice versa. Crosstalk can vary from minor (causing eyestrain) to extreme, where there is no perceivable 3D effect (no stereoscopic eyestrain; the image appears 2D with a double image).

Changing the 3D depth manually: any time an object is viewed at a depth not consistent with the other normal depth cues (perspective, lighting, etc.), the eyes+brain have to work harder, causing fatigue and eyestrain. An interesting effect (which makes sense, but is amusing the first time viewed): if an image is moved into the screen (left image left, right image right), without changing its pixel size, it will be perceived to be larger. Likewise, if an image is moved out of the screen it will appear smaller. If an object moves in 3D space and does not follow the natural size change, the brain might perceive this as a disconnect and trigger VR sickness.

Jim Krisvoy said...

One of the areas that should be considered in all of this is the fit and design of the 3D glasses themselves.

If you have been to an Imax Digital 3D presentation, you may have noticed that the frames are rather thick, and because of this some vision is cut off around the septum area (the nose area). This gives the impression that the field of view is cut off. In that design it would be better to thin the frames, at least around the nose area, which would improve things dramatically.

In the case of Dolby 3D, when one tilts their head up or down there is some color degradation; however, cancellation between the left and right eyes is pretty much right on the button. At a screening a few months back of a certain classic 3D title, the light levels were too low, and this could be a problem for people who may need more light in order to see clearly.

With X-PAND, early prototypes were too heavy and uncomfortable, and when the battery died the glasses had to be thrown away (or possibly recycled for use at a later date). I have not experienced the latest, but understand that those problems have been eliminated.

So far, Real-D still appears to be the best solution for comfort and image cancellation and, from a projection standpoint, is very reliable.

As far as seeing those 3D images, one has to keep in mind that you are still looking at a flat screen surface. Whatever imaging is supposed to be deep or off-screen should, at least theoretically, have happened at the source, and I have never seen or known of anyone who had any focus problems with close off-screen images, unless those types of images were not shot properly in the first place. As I mentioned in a recent comment, the companies behind the technology, the cameramen, etc. are going out of their way to ensure, for the most part, comfortable and well shot stereoscopic images.

One can only hope that producers will keep this in mind on all future 3D projects, which will ultimately affect ongoing acceptance of 3D cinema.

As far as exhibition goes, it is virtually impossible for an exhibitor to manipulate image alignment, although there could be issues with screen illumination, particularly if 3D presentations in any given location are not monitored for quality by the distributors themselves.

Clinton Torres said...

I wanted to explore the idea that stereo ghosting changes the single-lens approximations you were making, David, and wrote: http://blog.clinttorres.com/index.php/2009/more-3d-optics/

Since then, the conversation here has given me a few leads on how to analyze such a thing. Very interesting stuff.

Unknown said...

As you know, convergence is one adjustable knob on the camera (whether real or virtual), and all the similar issues of focus apply to convergence (i.e. there is a circle of convergence within which our eye determines an object is converged, which leads to a similar depth of visually acceptable convergence, analogous to the hyperfocal distance; setting the focus to infinity is similar to setting the two lenses in parallel, etc.).

The trouble is when your eye tries to focus on an object outside the convergence range of the image (which may actually be in focus photographically). The cinematographer can intentionally screw it up by setting the convergence to something different than the focal plane. I would argue that if you increase the depth of field, you actually increase the convergence problems, as your eye more naturally wanders around the image and tries to focus (and converge) on things that are in focus in the image. By limiting the depth of field, you can better control the parts of the image that the eyeball tries to bring into focus, and therefore ensure that the eyeball is only focusing on items that are also in the plane of convergence.

Ghosts of the Abyss has some footage that actually screws it up (some of the footage from the submersible units where the calibration went slightly awry), and the vergence differs from the main interesting thing in the frame, making you feel weird.

I'm not sure I agree with your assumption that if it's in focus on the screen, it will not be a source of focal strain -- I believe that the focus and convergence of our eyes are naturally linked, so as we try to converge on objects behind the screen, we also try to focus on them. Crossing your eyes, which is essentially forcing your eyes to decouple focus from convergence, produces noticeable strain. Plus it makes you look funny.

David said...

"I'm not sure I agree with your assumption that if it's in focus on the screen, it will not be a source of focal strain -- I believe that our the focus and convergence of our eyes are naturally linked, so as we try to converge on objects behind the screen, we also try to focus on them. "

That is what I'm saying, and what is wrong with the Slate article. As we converge (or diverge) to an object off the screen plane, our eyes do what they do naturally -- refocus. The point I was making is that there is a range of refocusing allowed off the screen plane, as it is within the hyperfocal distance of the human eye. So the Slate story is wrong to think eyestrain is due to the eye being forced to always focus on the screen plane independent of convergence. We do have to do that when free-viewing crossed stereo images at short range (glasses-free stereo images), but the same is not happening in the theater.

Carlo Macchiavello said...

Good 3D doesn't cause problems; bad 3D causes headaches and many other problems.

The real problem right now is that too many people do 3D without the knowledge to understand what they are doing.

3D is the closest way to seeing what we see in the real world, 24 hours a day...

Bad 3D is like a bad trip, and that causes problems.

Thad Beier said...

David,

Thank you for your calculations of the hyperfocal distance of the eye. I had reached identical conclusions independently -- that in the theater there should never be any accommodation problem for objects "behind the screen", because the screen is well beyond the hyperfocal distance of the eye, even with your pupils wide open in the dark.