EOSHD brings you an interview with the man behind the first broadcast-quality 3D DSLR rig and you won’t need 3D glasses to read it…
David Cole and his team at Beampath are currently building a 3D DSLR rig with dual hacked Panasonic GH1s to compete with the best of them. The rig is intended for professional use, for broadcast or for film, and it’s hoped that customised firmware can be created for it.
Here, David gives a great overview of how 3D camera technology works and what he is attempting to do with the GH1 3D rig currently under development.
EOSHD: How important is a clean codec on a 3D camera rig?
DC: This is just now being understood. Motion artifacts are a big problem, particularly with negative-parallax objects (those that protrude beyond the screen) moving quickly, parallel to the sensor. Motion coding imprecision, which is a function of low bitrate, makes this problem VERY BAD. Anyone who saw the FIFA 3D broadcast earlier this summer probably noticed that the depth effect disappeared for the ball when it was in fast play (e.g. kicked). This was partially a function of the heavy DCT-based compression used for cable and satellite.
Fast motion and quick pans, which have a lot of change per frame, tax the long-GOP coding structure of H.264 (for example) beyond the breaking point. This is annoying in 2D… but completely destructive to the 3D effect. Also, the macroblock errors that accompany low-bitrate DCT-based compression are different in each eye. This introduces disparity that can be anywhere from annoying to completely unwatchable.
One of the hopeful aspects of the GH13 is that it opens up many parameters of the compression to experimentation (e.g. GOP length). Once we’ve solved the hairy problem of sync, we can look at tuning the compression for a better stereo effect. At the least, the GH1 will be a great lab-rat for stereoscopic acquisition research.
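To make David’s point concrete, here is a minimal sketch of the kind of compression experiment he describes: re-encoding matched left-eye and right-eye clips with a short GOP and a generous bitrate, so that inter-frame coding errors have less room to diverge between eyes. It calls ffmpeg from Python; the filenames, GOP length and bitrate are illustrative assumptions, not Beampath’s actual settings.

```python
# Illustrative only: re-encode matched left/right eye clips with a short GOP
# and a high bitrate. Filenames and parameter values are assumptions.
import subprocess

GOP_LENGTH = 12   # shorter GOP = more I-frames, less long-GOP motion drift
BITRATE = "40M"   # generous bitrate to suppress macroblock artifacts

for eye in ("left", "right"):
    subprocess.run([
        "ffmpeg", "-i", f"{eye}_eye.mts",
        "-c:v", "libx264",
        "-g", str(GOP_LENGTH),   # GOP length
        "-bf", "0",              # no B-frames, simpler motion coding
        "-b:v", BITRATE,
        f"{eye}_eye_reencoded.mp4",
    ], check=True)
```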
EOSHD: Have you spoken to Vitaliy Kiselev (Tester13) yet about any required firmware features?
DC: We are lucky in that regard. Vitaliy has chimed in on the thread several times and has suggested a path to resolving the sync issue – which several of us are pursuing.
EOSHD: Please explain to the normal reader why a perfect sync is required for the rig.
DC: Simply – an object in motion must be recorded at the same time in both eye-views in order for the stereoscopic effect to work. Any offset in time, and the object will appear in different positions in the left-eye and right-eye views. This dissymmetry blows the effect in a variety of BAD ways. There is no universal rule for what is “good enough” sync, but if there IS one rule in stereo it’s: “good enough… isn’t.”
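To put a rough number on that: even half a frame of sync offset turns ordinary horizontal motion into false disparity. A back-of-envelope sketch, where the frame rate and object speed are illustrative assumptions:

```python
# Back-of-envelope: how much false disparity does a sync offset create?
# All numbers here are illustrative assumptions.
fps = 24.0
sync_offset_s = 0.5 / fps        # half a frame of misalignment (~20.8 ms)
object_speed_px_per_s = 960.0    # object crossing half a 1920px frame per second

false_disparity_px = object_speed_px_per_s * sync_offset_s
print(f"False disparity: {false_disparity_px:.1f} px")   # -> 20.0 px
```

Twenty pixels of spurious disparity is on the order of a whole scene’s parallax budget, which is David’s point exactly.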
EOSHD: Why the selection of the Olympus 9-18mm on the GH1 3D rig?
DC: Nothing magic here… We needed an 11mm (approx. 22mm expressed as a 35mm equivalent) to avoid vignetting through our mirrorbox. We’d prefer a prime, but… these are good, relatively inexpensive lenses. Also, we wanted to use a non-Panasonic Micro 4/3 lens. This is because it is speculated that the Panny lenses place a load on the camera CPU to apply image geometry correction. We want to give the CPU the maximum bandwidth to run the compression… so we avoided Panny lenses for testing.
EOSHD: Is DSLR CMOS sensor rolling shutter an issue when producing a 3D rig?
DC: Without a doubt. For one thing, it drives the need for perfect sync. If there is image skew induced by rolling shutter, you at least want it to be uniform in both eyes. Additionally, it’s important to mount the cameras on the mirrorbox so that both sensors are integrating in the same direction (bottom to top or vice versa).
I do think that rolling shutter and sync disparity often take the blame for problems that are actually induced by compression inadequacy in lower-cost cameras. I’ve seen many problems from cameras like the EX3 that remain even if you clone one eye-view and fake some parallax offset to give you pseudo-3D. In fact, this is a good sanity check for diagnosing problems in post. If the problem doesn’t go away when you are looking at the same video stream in your left and right eyes, it’s obviously not induced by dissymmetry between cameras.
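David’s clone-one-eye sanity check is easy to reproduce in a few lines. A minimal sketch, assuming decoded frames as numpy arrays (the frame size and shift amount are arbitrary):

```python
# Diagnostic pseudo-3D: duplicate one eye-view and fake a parallax offset.
# Any artifact that survives this test cannot come from camera dissymmetry.
import numpy as np

def pseudo_3d(left_frame: np.ndarray, shift_px: int = 10) -> np.ndarray:
    """Build a side-by-side pair from a single eye-view; the 'right' eye is
    the left frame shifted horizontally, so both eyes carry identical
    compression artifacts by construction."""
    fake_right = np.roll(left_frame, -shift_px, axis=1)  # shift along width
    return np.hstack([left_frame, fake_right])

# Usage with a dummy 1080p RGB frame (real footage would be decoded frames):
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
sbs = pseudo_3d(frame)
print(sbs.shape)  # (1080, 3840, 3)
```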
EOSHD: What function does the rig’s mirror box have?
DC: The mirrorbox takes what would be a compact and rugged rig and makes it bulky and delicate! But it’s a necessary evil to effectively position the camera lenses closer together than the camera bodies will allow. The distance between lenses, called the inter-axial (or often inter-ocular) distance, directly relates to the acceptable distance to the foreground object in the scene. The smaller the inter-axial, the closer the foreground object can be to the camera while still providing a comfortable viewing experience. For macro work, it’s common for the effective positions of the lenses to be almost overlapping.
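For a sense of scale, stereographers often quote a “1/30 rule” of thumb: keep the inter-axial to roughly 1/30 of the distance to the nearest object in frame. David doesn’t cite this rule himself, so treat the sketch below as generic guidance rather than Beampath’s design math.

```python
# Generic stereography rule of thumb (the "1/30 rule"), not Beampath's math:
# inter-axial distance should be about 1/30 of the nearest-object distance.
def min_foreground_distance_cm(interaxial_cm: float, ratio: float = 30.0) -> float:
    """Closest comfortable foreground distance for a given inter-axial."""
    return interaxial_cm * ratio

print(min_foreground_distance_cm(6.5))  # ~195 cm at roughly human eye spacing
print(min_foreground_distance_cm(2.0))  # ~60 cm with a mirrorbox's narrow inter-axial
```

This is why the mirrorbox earns its bulk: two camera bodies mounted side by side can’t get their lenses much closer than the body width apart, pushing the nearest comfortable subject metres away.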
EOSHD: I heard somewhere that Avatar was shot on small-chip 2/3″ modified RED cameras? Any idea why only small chips are used?
DC: Actually, I think the Pace-Cameron rigs used Sony F950s for Avatar. Maybe other cameras too – but not REDs. But to your point, there is a lot of discussion around sensor size for 3D and the related depth-of-field. Most stereographers do not often (or ever) use the narrow [shallow] depth-of-field that a large sensor (e.g. Canon 5D) affords. Most 3D is shot with a wide [deep] depth-of-field and the entire scene in focus. This makes sense because your brain struggles to place blurry objects in space (which sort of defeats the whole stereo thing).
It’s a matter of some dogma whether narrow [shallow] DOF has a place in the stereographer’s toolkit. Some say there are times when this effect can be used advantageously, and some say don’t do it. As for light gathering / exposure… bigger is obviously better for 3D filming (just as in 2D).
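The deep-DOF argument can be made quantitative with the standard hyperfocal-distance formula, H = f²/(N·c) + f: everything from about H/2 to infinity renders acceptably sharp. The focal lengths and circle-of-confusion values below are common textbook assumptions for each format (chosen for roughly matched framing), not figures from the interview.

```python
# Hyperfocal distance H = f^2 / (N * c) + f. Everything from ~H/2 to infinity
# is acceptably sharp. CoC values are textbook assumptions per format.
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Roughly matched framing at the same f-stop, different sensor sizes:
h_small = hyperfocal_mm(focal_mm=6.0, f_number=4.0, coc_mm=0.011)   # 2/3" chip
h_full = hyperfocal_mm(focal_mm=24.0, f_number=4.0, coc_mm=0.030)   # full frame

print(f'2/3" chip:  hyperfocal ~ {h_small / 1000:.1f} m')  # ~0.8 m
print(f"full frame: hyperfocal ~ {h_full / 1000:.1f} m")   # ~4.8 m
```

The small chip holds the whole scene in focus from under a metre out; the full-frame camera has to be stopped down hard to match.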
EOSHD: James Cameron believes 3D requires at least 48p, but Avatar was shot on 24p. Will you use the GH1’s 24p or 60p mode?
DC: Well – most of the distribution outlets for high-def stereo today are 24P (e.g. 3D Blu-ray). In fact, HDMI 1.4 based 3DTVs only accept 1080 24P… no 30P and certainly no 60P. Even internet-based distribution channels will have to go 24P because of the HDMI 1.4 limitation. So – we’ll ALL have to kowtow to 24P for most applications. That said, shooting fast action at 60P with the GH1, then retiming to get smooth slo-mo, opens up a world of possibilities for the shooter. We’ve used this for a test shoot at a free-running gym and it’s great.
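The 60P option David mentions is just a frame-rate conform; a quick sketch of the arithmetic:

```python
# Conforming 60P capture to a 24P timeline: every captured second plays back
# over 2.5 seconds, i.e. 2.5x slow motion with no interpolated frames.
capture_fps = 60.0
timeline_fps = 24.0

slowdown = capture_fps / timeline_fps
print(f"{slowdown:.1f}x slow motion")   # 2.5x
```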
EOSHD: What software in post is used to produce the finished effect?
DC: There are a lot of options here. As far as the immediate downstream workflow from the GH1 goes – we use NeoScene from CineForm to ingest the GH1 AVCHD… and we use the MPEG out of the cameras just as it is. Beyond that… you can go CS5 (Premiere, After Effects, etc.), Final Cut Pro with Neo3D or Tim Dashwood’s Stereo 3D Toolbox plug-in, Sony VEGAS – or you can go to an assortment of MUCH more expensive NLE and compositor options.
If you have Final Cut Pro, I’d look seriously at the Dashwood plug-in to get started. It’s low-cost and there are some excellent tutorials provided.
EOSHD: Will the footage be compatible with consumer 3D HD TVs?
DC: Absolutely. Footage from the GH1s (once sync is resolved) can go to 3D Blu-ray, or into a variety of formats that are HDMI 1.4 compatible. I’ve just finished reviewing some encoded test footage from the GH1 on a new Samsung 3D plasma… it rocks.
EOSHD: What do you make of Panasonic’s consumer 3D lens – can a lens and single imaging chip possibly cut it?
DC: The lens and the Panny consumer camcorder will be excellent for “user created”, non-pro content. The fundamental problem with these solutions for professional applications is that they anamorphically scale the left and right images onto one frame (halving the horizontal resolution). This creates a format known as side-by-side. One of the biggest problems with side-by-side is that the scaling is very destructive of the sharp edges needed to provide the brain with the “edge” information that it uses to define objects in space. Combine this with the artifacts that accompany relatively low bitrate DCT-based compression and you have a recipe for headache-vision.
This has been demonstrated on a wide scale with the recent cable and satellite broadcasts of the FIFA World Cup, NASCAR, and PGA golf. Those were side-by-side scaled, then transcoded to a fairly low bitrate for distribution. The effect, at times, made people rip the glasses off their faces.
Luckily, there are MUCH better 3D broadcast solutions on the way.
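For readers wondering what the side-by-side packing David criticises actually does to the picture, here is a minimal sketch with illustrative frame sizes. Real broadcast chains filter before scaling rather than simply dropping columns, but the halving of horizontal resolution per eye is the same either way.

```python
# Side-by-side frame-compatible packing: each eye is squeezed to half its
# horizontal resolution so both fit in a single frame. Sizes are illustrative.
import numpy as np

def squeeze_half(img: np.ndarray) -> np.ndarray:
    # Crude 2:1 horizontal decimation; however it's done, the sharp edges
    # the brain relies on for depth cues are degraded.
    return img[:, ::2]

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    return np.hstack([squeeze_half(left), squeeze_half(right)])

left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)
packed = pack_side_by_side(left, right)
print(packed.shape)  # (1080, 1920, 3): one full frame, each eye only 960 px wide
```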
EOSHD: Big thank you to David for the interview. EOSHD.com wishes the team at Beampath all the best in developing the GH1 3D rig for professional jobs.