Hey MartinL & fskognir & other holofriends
I want to weigh in here – I built a lightfield camera to shoot footage for a looking glass about two months ago. Naturally, I wanted to work in the most controlled, predictable environment possible while learning the basics of this new style of media, so I built an underwater camera and went to film giant manta rays.
Here’s the camera I built:
Definitely +1 to fskognir’s point about the flat array – this is how we move our cameras in the virtual environment, as well. You want to film the subject from a variety of perspectives, which, when you display those views in a looking glass, feels like the different perspectives that your left and right eyes have. Your eyes are (generally) arranged parallel to the looking glass display, and you want the cameras to also move parallel to the looking glass surface if you want the illusion of three-dimensionality. I make a point of that because you can also present other illusions in the looking glass that aren’t 3D, but are still wonderful. Your example of the camera arc, when viewed in the looking glass, would make the subject appear to rotate as you move left to right, like a lenticular postcard animation. This can be really neat – the trouble is that your left and right eyes would fight each other inside your brain. Your eyes are used to being arranged on the same plane, both looking in the same direction (because that’s how they’re laid out in your head). However, if you have an arced camera, when you look at those views in the looking glass, your left eye and right eye are seeing views that weren’t captured in a plane, but in an arc. This messes with your head a bit. It’s not bad, necessarily, but it feels unusual.
If you’re building cameras, one thing that I’ve found helpful is to think about the relationship between the camera geometry, how you set up your scene, and how it looks in the looking glass. Just like a regular camera, the main controls for a lightfield camera are a zoom and a focus knob. We just tweak those knobs after the shot, using software. The way I set a focus plane in my camera is to go through each camera in the array and pick out a point of interest that I can see from each camera, like the tip of my nose. I then crop out a region from each camera with that point at the center of the crop – this puts the tip of my nose at the focal plane of the looking glass display – right in the center of the block. The rest of my face then appears toward the back of the block. If I then move forward or backward, it looks like I’m moving forward and backward in the block. If I move too far forward or backward, my right eye might start to see left-eye views and vice versa, and the result is a kind of blurring, or doubling of views – similar to what happens in a real camera when the subject gets out of focus. Check out this video of me in the looking glass:
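If it helps to see that cropping idea as code, here’s a minimal sketch. It assumes you already have the pixel coordinates of your point of interest in every camera’s frame (from hand-tagging or a tracker), and the function and variable names are just placeholders, not part of any real tool:

```python
import numpy as np

def crop_to_focal_plane(views, poi, out_w, out_h):
    """views: list of HxWx3 arrays, one per camera, ordered left to right.
    poi:   list of (x, y) pixel coords of the point of interest in each view.
    Returns one crop per view with that point centered, which places it
    at the focal plane of the display."""
    crops = []
    for img, (px, py) in zip(views, poi):
        h, w = img.shape[:2]
        # Top-left corner of a crop centered on the point of interest,
        # clamped so the crop window stays inside the frame.
        x0 = int(np.clip(px - out_w / 2, 0, w - out_w))
        y0 = int(np.clip(py - out_h / 2, 0, h - out_h))
        crops.append(img[y0:y0 + out_h, x0:x0 + out_w])
    return crops
```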
When we want to display a set of lightfield images in a Looking Glass, we arrange them into what we call a quilt – this is just a single image with all of the different views tiled together. Here is a quilt of a manta ray, with the manta at the plane of focus:
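Building the quilt itself is just stacking those crops into one big grid image. Here’s a rough sketch; the 4x8 grid and the bottom-left-first ordering are assumptions on my part, so match whatever your display software actually expects:

```python
import numpy as np

def make_quilt(views, cols=8, rows=4):
    """views: list of rows*cols same-sized HxWx3 images.
    Returns a single (rows*H) x (cols*W) x 3 quilt image."""
    h, w = views[0].shape[:2]
    quilt = np.zeros((rows * h, cols * w, 3), dtype=views[0].dtype)
    for i, img in enumerate(views):
        r, c = divmod(i, cols)
        # Fill rows from the bottom of the quilt upward (a common
        # convention, but check what your viewer wants).
        y0 = (rows - 1 - r) * h
        quilt[y0:y0 + h, c * w:(c + 1) * w] = img
    return quilt
```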
You can see that the framing of the ray is the same in every shot, even though the perspective of each camera is different. You can also see, in the top right of the quilt, that my left-most camera didn’t have the full manta in frame. That’s fine – not perfect, but fine – and I just put in the part of the frame that I did have.
You can adjust the focal plane, just like you can adjust the focus ring on a real camera. What you can’t change in post is the perspective that the camera had when it captured the scene. This means that you can get all kinds of shots into a Looking Glass, but if you want it to look like the object is really there, like there is a real-world version of your subject in the Looking Glass, you have to film with the same perspective that the Looking Glass displays at.
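One way to picture the “focus knob in software” part: shifting every crop window horizontally, in proportion to each camera’s offset from the array center, moves the plane of zero disparity nearer or farther. A hypothetical sketch, reusing the crop_to_focal_plane function from above (shift_px is something you’d tune by eye or from calibration):

```python
def refocus(views, centers, shift_px, out_w, out_h):
    """views:    list of HxWx3 arrays, ordered left to right.
    centers:  list of (x, y) crop centers chosen for the original focal plane.
    shift_px: horizontal shift per camera step; its sign and size move the plane."""
    n = len(views)
    new_centers = []
    for i, (cx, cy) in enumerate(centers):
        # Cameras left and right of the array center get opposite shifts.
        offset = (i - (n - 1) / 2) * shift_px
        new_centers.append((cx + offset, cy))
    return crop_to_focal_plane(views, new_centers, out_w, out_h)
```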
When we think about how the Looking Glass displays information, we think about the “view cone” – the angle over which you can see the views that the Looking Glass sends out. We can tweak this in software – the hardware limit for this generation of Looking Glass is about 52 degrees, but we can adjust the shader to change the width of each view. I’m still working out the best way to describe this, but I like to say that when I place the subject at the distance where the camera array’s angular coverage matches the display’s view cone, the subject is at the camera’s sweet spot. For my camera, which is 8’ wide, the sweet spot for a 52 degree view cone is about 8.3’ back from the camera, and 11.1’ back for a 40 degree view cone.
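The sweet-spot distance falls out of basic trigonometry: the array width and the view cone form a triangle, so distance = (array width / 2) / tan(view cone / 2). A quick sanity check, which lands close to the figures above:

```python
import math

def sweet_spot_distance(array_width, view_cone_deg):
    """Distance at which the camera array's angular coverage matches
    the display's view cone (same units as array_width)."""
    return (array_width / 2) / math.tan(math.radians(view_cone_deg) / 2)

print(sweet_spot_distance(8, 52))  # ~8.2 ft for the 8' array at a 52 degree cone
print(sweet_spot_distance(8, 40))  # ~11.0 ft at a 40 degree cone
```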
This style of thinking feels pretty good – we’ve used the same rough formula for taking still images of people by moving one camera along a rail and taking 32 exposures. If my camera moves 3’ along the rail, I put the subject about 3’ back from the camera, which works out to a roughly 50 degree view cone. This does a pretty good job of capturing a close-to-real-world perspective of the subject. Here’s one of me:
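Run the same triangle backwards for the rail setup: the view cone covered by a given camera travel at a given subject distance is 2 · atan((travel / 2) / distance), so 3’ of travel with the subject 3’ back comes out to roughly 53 degrees:

```python
import math

def view_cone_from_travel(travel, distance):
    """View cone (degrees) covered by a camera moving `travel` along a rail,
    with the subject `distance` back from the rail."""
    return 2 * math.degrees(math.atan((travel / 2) / distance))

print(view_cone_from_travel(3, 3))  # ~53 degrees for 3' of travel at 3' back
```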
I’m laying out all the constraints (that I understand so far) of working with real-world lightfield images. They have lots of implications for lightfield camera design, which I’m not going to go into in this post because it will get really long and I want to go outside sometime today. Good rules of thumb: use a lens with a field of view wide enough that all your cameras can see a centered subject at the sweet spot. Shoot in high res and at a high framerate – I ran my video cameras at 1080p60. High res because you’re going to crop into each camera view to build the quilt, and you want to be able to crop in on your subject without ending up with a tiny number of pixels. High framerate because, if you’re like me and not using synced shutters across your cameras, you should try to have your cameras capture a moving scene as close to the same instant as possible. At 60fps, I could have up to 1/60 of a second of difference between when different cameras capture the scene. Fortunately, that’s really fast and hard to perceive. The higher your framerate, the more leeway you have to work with regular cameras.
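On the lens rule of thumb: the tightest constraint is the outermost camera, which sits half the array width off-axis but still has to keep the centered subject in frame at the sweet-spot distance. Assuming the cameras all point straight ahead (parallel, not toed in), a rough check could look like this; the subject half-width here is just an example number:

```python
import math

def min_horizontal_fov(array_width, subject_half_width, distance):
    """Minimum horizontal FOV (degrees) for an edge camera, pointed straight
    ahead, to keep a centered subject fully in frame at `distance`."""
    # Farthest horizontal extent the edge camera has to cover,
    # measured from its own optical axis.
    reach = array_width / 2 + subject_half_width
    return 2 * math.degrees(math.atan(reach / distance))

# e.g. an 8' array filming a ~2' wide subject at the 52-degree sweet spot:
print(min_horizontal_fov(8, 1, 8.2))  # ~63 degrees of horizontal FOV
```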
Good luck! Keep posting updates and questions! I’ll answer as best I can. If this didn’t come across, I’m really stoked about lightfield cameras, and am working hard, along with several other people at Looking Glass, to build tools that streamline the capture and processing of lightfield images and video. We’re getting a bunch of tools ready for release soon – let me know if you’re interested in testing out early versions.
Happy hacking,
Alex