Future Tense: 3D Or Not 3D
But…
I also believe that the current implementation of 3D technology, using polarized or active-shutter glasses, is only an intermediate stage.
The ideal 3D system—the one we really want—will be some form of wall-sized holographic projection that does not require any special glasses at all. And while that technology seemed impossible only a few short years ago, I think we’re on the threshold of a breakthrough. I’ll explain.
The history of photography, from the first pinhole cameras to the present, has been about focusing light onto a flat image-sensitive surface. Whether that surface was film or an electronic sensor, the key element was the lens and its ability to focus light onto a flat plane. All of photography is about the relationship of the lens to the recording surface. Current 3D imagery is still about the lens—two lenses. Stereoscopy is a carefully constructed illusion, produced by two images photographed about 2.25 inches apart. When it’s well done, the illusion is wonderful. But it’s still an illusion, and as some critics have pointed out, the illusion sometimes contradicts itself. The eyes focus on a flat plane that is often at a contradictory distance from the illusory depth of the stereoscopic image, a mismatch vision researchers call the vergence-accommodation conflict. (There are other issues as well. The lenses used, the depth of field, and the amount of separation also complicate the creation of the illusion.)
Holography, on the other hand, is a lensless process. It’s not about capturing an image on a flat plane; it’s about capturing a wave-form map of the light passing through a plane. With laser holography, you shoot a laser at a beam-splitter. Half the beam reflects off the object; the other half is the reference beam. When the two beams are recombined, you get an interference pattern. A laser hologram is the recording of that interference pattern onto a light-sensitive surface, a large piece of film. When light is aimed through the interference pattern, the original wave forms are reconstructed and the viewer perceives that as a true three-dimensional image.
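For the mathematically inclined, the standard textbook version of this is surprisingly compact. Call the object beam O and the reference beam R. The film records the intensity of their sum:

\[
I = |O + R|^2 = |O|^2 + |R|^2 + O R^{*} + O^{*} R
\]

Shine the reference beam back through the developed pattern and the transmitted light includes a term proportional to the original object wave:

\[
I \cdot R = \big(|O|^2 + |R|^2\big) R + O\,|R|^2 + O^{*} R^2
\]

Since |R|² is just a constant for a uniform reference beam, the middle term is the object’s wavefront itself, rebuilt out of thin air. That’s the whole trick.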
A hologram presents the same wave forms to the eyes as the original scene. It’s as if you’re looking through a window. You can move your head, tilt your head, stand on your head. It doesn’t matter. You’re not focusing on the static plane of the film or the screen. You’re focusing on what appears to be the actual object.
Unfortunately, laser holography does not lend itself easily to capturing color or moving images. It can be done. I’ve seen color holograms, and they’re so realistic that the image defies comparison with any other form of three-dimensional reproduction—with or without glasses. But projecting such a holographic image so that it can be viewed by an audience is another technological hurdle.
As I said above, I think we’re on the threshold of solving both parts of the problem—recording the image and projecting it. Up till now, the creation of a hologram required a carefully calibrated beam-splitter and laser. Not practical for a point-and-shoot situation at the park or a wedding or a vacation.
But the laser may no longer be necessary. We finally have the processing power to change the entire nature of photography. We can synthesize a holographic interference pattern from a digital model of an object. All we need is the digital model of the scene, real or constructed, and the computer can create a color hologram.
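To make that concrete, here is a toy sketch of the simplest computer-generated-hologram recipe, the point-source method: treat the digital model as a cloud of glowing points, sum each point’s spherical wavefront across the hologram plane, and interfere the total with a reference wave. All the names and numbers below are illustrative assumptions, not anyone’s actual pipeline; real CGH systems add color, occlusion handling, and ferocious optimization.

```python
import numpy as np

# Point-source computer-generated hologram (toy version).
# The scene is a handful of self-luminous 3D points; each contributes
# a spherical wave to every pixel of the hologram plane at z = 0.

WAVELENGTH = 633e-9               # red light, in meters
K = 2 * np.pi / WAVELENGTH        # wavenumber

def point_cloud_hologram(points, amplitudes, size=512, pitch=1e-6):
    """points: (N, 3) scene coordinates in meters, with z > 0;
    amplitudes: (N,) relative brightness of each point."""
    coords = (np.arange(size) - size / 2) * pitch
    u, v = np.meshgrid(coords, coords)            # hologram-plane grid
    field = np.zeros((size, size), dtype=complex)
    for (x, y, z), a in zip(points, amplitudes):
        r = np.sqrt((u - x) ** 2 + (v - y) ** 2 + z ** 2)
        field += (a / r) * np.exp(1j * K * r)     # spherical wavefront
    # Interfere with a unit-amplitude plane reference wave; the
    # recorded intensity IS the synthetic interference pattern.
    return np.abs(field + 1.0) ** 2

hologram = point_cloud_hologram(
    points=np.array([[0.0, 0.0, 0.05], [1e-4, 2e-5, 0.06]]),
    amplitudes=np.array([1.0, 0.8]))
```

Displayed on a fine-enough panel and lit with the matching reference beam, that pattern diffracts light back into the original points’ wavefronts, exactly as described above.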
How do we get that digital model in real time without the laser? Multiple simultaneous photographs. Let me speculate. Imagine a camera the size of a small saucer. The rear is an electronic viewfinder. The front is an array of lenses. Two might be sufficient, but I suspect a dozen, set in a circle, might be even more effective. Those lenses need only 2-megapixel sensors behind them. But for sharpness and detail, you could put a 20-megapixel sensor behind a central lens. The little sensors record placement information. The one big sensor records texture, color, and detail. When you take a picture, the camera records 44 megapixels of information: 20 megapixels of actual photograph and another 24 megapixels of difference information. A multi-core processor in the camera constructs a 3D model of the scene and maps the 20-megapixel data onto it. Data can be interpolated as needed to fill in things missed by the central sensor but picked up by the lesser-quality edge sensors. Perhaps the professional models would have three 20-megapixel sensors spaced equidistantly around the edge of the saucer to capture even more detail. Or some other arrangement.
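How do the little sensors turn into placement information? My guess, and it is only a guess, is ordinary multi-view triangulation: a feature matched between two of the edge lenses shifts by a disparity that encodes its depth. The classic relation, in a few lines of Python with hypothetical numbers:

```python
# Classic stereo triangulation: two cameras a baseline b apart, with
# focal length f (in pixels), see the same feature shifted by a
# disparity d (in pixels). Its depth is z = f * b / d.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1500 px focal length, lenses 5 cm apart,
# 25 px of disparity -> the feature sits 3 meters away.
print(depth_from_disparity(1500, 0.05, 25))   # 3.0
```

Repeat that across thousands of matched features and a dozen lens pairs, and you have the 3D scaffold that the big central sensor’s texture gets draped over.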
The point is that we don’t need a laser. If we have multiple views of the same scene, we have enough information for the software to construct a digital model and extrapolate an interference pattern for a color hologram viewable in white light. If we can record images at 30 frames per second, we’ll have holographic video. (BTW, all those multiple sensors can give you real-time high-dynamic-range video too, and probably a lot of other nifty exposure effects, like synthesizing an amazingly high ISO.) Existing 3D movies might also be translatable into a holographic format for viewing without glasses.
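As for that HDR aside: stagger the exposures of the redundant sensors and fuse the frames, trusting each sensor most where its pixels are well exposed. A toy illustration of the idea (mine, not any camera maker’s algorithm):

```python
import numpy as np

def fuse_exposures(frames):
    """frames: list of same-shape arrays, pixel values scaled to [0, 1],
    each shot at a different exposure. Returns a weighted fusion that
    favors mid-tone (well-exposed) pixels from each frame."""
    stack = np.stack(frames)
    weights = 1.0 - 2.0 * np.abs(stack - 0.5)   # 1 at mid-gray, 0 at clipping
    weights = np.clip(weights, 1e-6, None)      # avoid division by zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```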
Presenting the synthesized interference pattern on a video panel might require greater resolution than our video displays are currently capable of. 1920x1080 might be insufficient for the kind of clarity and crispness we’ve gotten used to. We might have to go up to 4K resolution. Or more. But screens of that resolution have already been demonstrated in the lab and could become consumer products in the foreseeable future.
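In fact, “or more” is an understatement, and basic diffraction says why. A panel with pixel pitch p can only bend light of wavelength λ through a maximum angle θ given roughly by

\[
\sin\theta \approx \frac{\lambda}{2p}
\]

For red light (λ ≈ 633 nm) and a usable ±30° viewing zone, that works out to a pitch of about 0.6 microns, roughly 800,000 pixels across a half-meter panel, a couple of hundred times denser than 4K in each direction. (Back-of-the-envelope numbers, obviously, but they show the scale of the challenge.)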
If and when we can put an interference pattern on a video panel, one that will allow white light shone through it to construct the digitally synthesized waveform, we will have true 3D television. If we can find a way to recreate that waveform on a theater-sized screen, we’ll have 3D movies that don’t require glasses.
There are still going to be technical issues. If the photographic window is the size of a saucer, then how do you expand that window to the size of a home theater, or for that matter an IMAX installation? Part of the image processing will have to address that issue. You don’t want Beverly Hills Chihuahua to turn into Bel Air St. Bernard.
But I think that problem should be solvable too. It’ll take some experimentation, some engineering, a lot of testing and tweaking, but if we can construct a digital model of the scene when we photograph it, we can also reconstruct that digital model for whatever size screen it’s going to be presented on, whether it’s your laptop, your home theater, or the local multiplex.
(The bandwidth problems could be horrendous, of course. It all depends on how much information actually needs to be transmitted and where the bulk of the processing is done—maybe most of it can be done in the receiver. I’m sure that Scotty will figure it out just before the last commercial.)
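Just for a sense of the floor here, and these are back-of-the-envelope numbers only: plain uncompressed 4K video already runs to

\[
3840 \times 2160 \times 24\ \text{bits} \times 30\ \text{fps} \approx 6\ \text{Gbit/s}
\]

and a full interference pattern at the pixel densities sketched above would multiply that by orders of magnitude. Either compression gets very clever, or, as I said, the receiver does the heavy lifting, synthesizing the pattern locally from a far more compact 3D model.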
Some of what I’ve described here has already been demonstrated in the lab. Adobe has shown some very sophisticated technology for mapping detailed textures onto 3D surfaces, and other companies have produced computer-generated interference patterns for credit cards and driver’s licenses. Those are the two front-end issues. The last piece of the puzzle is the presentation window.
If it all comes together, I suspect we could see the first experimental units before the decade is over. And if I’m right, then the current generation of 3D televisions will be obsolete right on schedule. Ten years.
What do you think?