Capturing video for the Looking Glass display


I’m one of the many backers of the Looking Glass Kickstarter campaign, and if all goes well I should be receiving my display sometime in December. Until then, I plan to design and build a device for capturing video with 45 individual cameras, very much inspired by another project done by fskognir.

However, I could really use some information on what the correct angle between each camera should be.

I based my current camera angle on a video that mentioned the total viewing angle of the Looking Glass is supposed to be 50 degrees, but I would really like to have this confirmed, or to get the correct numbers.

Thanks a lot in advance!

I am on vacation, so here’s my late reply…
My cameras are not set up in an arc.
In my opinion, a flat (linear) array is better.
So for me, the angle is not relevant.

What is more important is the distance between the cameras. To mimic human eyes, smaller cameras are better. The larger they are, the more the captured field will look as if a giant is peering at whatever is inside it, giving a somewhat toy-like feel. So I went with the smallest camera modules I could get my hands on; even so, together they are still big. On top of that, I am working on pan and tilt.

I should get back to my project shortly! I will post the progress soon. One of my setbacks was not having enough screws; they were odd sizes and were back-ordered, but I finally got 'em.

When cameras are placed on an arc, the target (the aim of the cameras) is fixed, so the foreground and background are more or less fixed as well. If the cameras are on a straight line, the target (aim, focus, etc.) can be adjusted. (The downside is that the image area may have to be cropped as needed or become partially blank. Vignetting will help hide that, though.)

If you are sure of the target, arc placement will work like natural human eyes. Still, the cameras need to be small for small objects.


My aim is to use my camera for filming objects up close, so I’m not too concerned about the fixed target distance. But I am considering a setup that would allow straightening the camera arc into a flat line; more experimentation is needed to determine whether such a construction is feasible.


In my first setup I used cameras that measure approximately 40×40 mm, and with 45 cameras in total that gives an arc length of around 1.8 meters - that’s a big setup!

With the Looking Glass display’s total viewing angle of 50 degrees, that gives a target distance of 2 meters to the objects we want to film. With the 3.6 mm lens and 160-degree viewing angle of the cameras I will be using, that distance is too large to get good close-up footage of the objects I want to film.

And since I cannot move the objects I want to film closer to the cameras, there are only two solutions to this problem.

  1. Digital zoom. I plan on using 1080p cameras, and since this is a much higher resolution than what is needed for each individual Looking Glass view, there should be plenty of pixels left to make digital zoom possible without degrading the image quality.

  2. Somehow move the cameras closer together and make the arc smaller. I have been experimenting with printing mirror mounts for the cameras (see attached images) that will allow me to build a setup of half the size, and therefore half the target distance.

But I will have to do some more experimentation to determine what the best setup will be for me.
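For reference, the numbers above follow from simple arc geometry: the radius of the arc (the target distance) is the arc length divided by the angle it spans, in radians. A minimal sketch, assuming the 40 mm module size and 50-degree view cone from this thread (the function name is mine):

```python
import math

def arc_target_distance(num_cameras, spacing_m, total_angle_deg):
    """Radius of the camera arc, i.e. the distance from the cameras
    to the shared target they all aim at."""
    arc_length = (num_cameras - 1) * spacing_m      # 44 gaps of 40 mm each
    return arc_length / math.radians(total_angle_deg)

full = arc_target_distance(45, 0.040, 50)   # ~2.0 m, as quoted above
half = arc_target_distance(45, 0.020, 50)   # ~1.0 m with half-size spacing
```

This is also why the mirror-mount idea works: shrinking the arc by a factor of two shrinks the target distance by the same factor.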

I should get back to my project shortly! I will post the progress soon.

That sounds great! I was afraid your project had been abandoned. I’m really looking forward to more updates!

Hey MartinL & fskognir & other holofriends

I want to weigh in here – I built a lightfield camera to shoot footage for a looking glass about two months ago. Naturally, I wanted to work in the most controlled predictable environment possible while learning about the basics of this new style of media, so I built an underwater camera and went to go film giant manta rays.

Here’s the camera I built:

Definitely +1 to fskognir’s point about the flat array – this is how we move our cameras in the virtual environment, as well. You want to film the subject from a variety of perspectives, which, when you display those views in a Looking Glass, feels like the different perspectives that your left and right eye have. Your eyes are (generally) arranged parallel to the Looking Glass display, and you want the cameras to also move parallel to the Looking Glass surface if you want the illusion of three-dimensionality.

I make a point of that because you can also present other illusions in the Looking Glass that aren’t 3D, but are still wonderful. Your example of the camera arc, when viewed in the Looking Glass, would make the subject appear to rotate as you move left to right, like a lenticular postcard animation. This can be really neat – the trouble is that your left and right eyes would fight each other inside your brain. Your eyes are used to being arranged on the same plane, both looking in the same direction (because that’s how they’re laid out in your head). However, if you have an arced camera, when you look at those views in the Looking Glass, your left eye and right eye are seeing views that weren’t captured in a plane, but in an arc. This messes with your head a bit. It’s not bad, necessarily, but it feels unusual.

If you’re building cameras, one thing that I’ve found helpful is to think about the relationship between the camera geometry, how you set up your scene, and how it looks in the Looking Glass. Just like a regular camera, the main controls for a lightfield camera are a zoom and a focus knob – we just tweak those knobs after the shot, using software.

The way I set a focus plane in my camera is to go through each camera in the array and pick out a point of interest that I can see from each camera, like the tip of my nose. I then crop out a region from each camera with that point at the center of the crop – this puts the tip of my nose at the focal plane of the Looking Glass display, right in the center of the block. The rest of my face would appear toward the back of the block. If I then move forward or backward, it looks like I’m moving forward and backward in the block. If I move too far forward or backward, my right eye might start to see left-eye views and vice versa, and the result is a kind of blurring, or doubling of views – similar to what happens in a real camera when the subject gets out of focus. Check out this video of me in the Looking Glass:
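The refocusing step described above (pick one scene point visible in every camera, then crop each view so that point sits at the center of the crop) can be sketched like this. It’s a simplified illustration, not Looking Glass’s actual tooling, and the border clamping is my own addition:

```python
import numpy as np

def crop_to_focal_point(frame, point_xy, crop_w, crop_h):
    """Cut a crop_w x crop_h window centred on the tracked point.
    Putting the same scene point at the centre of every view places
    it on the focal plane of the display; the rest of the scene then
    appears in front of or behind that plane."""
    h, w = frame.shape[:2]
    x0 = int(round(point_xy[0] - crop_w / 2))
    y0 = int(round(point_xy[1] - crop_h / 2))
    x0 = max(0, min(x0, w - crop_w))   # clamp at the border rather than
    y0 = max(0, min(y0, h - crop_h))   # read outside the captured frame
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# e.g. a 1080p frame where the point of interest was found at (1100, 500)
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
view = crop_to_focal_point(frame, (1100, 500), 640, 360)
```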

When we want to display a set of lightfield images in a Looking Glass, we arrange them into what we call a quilt – this is just a single image with all of the different views tiled together. Here is a quilt of a manta ray, with the manta at the plane of focus:

You can see that the framing of the ray is the same in every shot, even though the perspective of each camera is different. You can also see, in the top right of the quilt, that my left-most camera didn’t have the full manta in frame. That’s fine – not perfect, but fine, and I just put in the part of the frame that I did have.
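Assembling a quilt is straightforward tiling; here’s a minimal sketch for 45 views in a 9×5 grid, assuming view 0 goes at the bottom-left with rows stacked bottom to top (verify the ordering against whatever tool you use to display the quilt):

```python
import numpy as np

def make_quilt(views, cols, rows):
    """Tile equally sized views into one quilt image.
    Assumed order: view 0 at the bottom-left, each row filled
    left to right, rows stacked bottom to top."""
    h, w = views[0].shape[:2]
    quilt = np.zeros((rows * h, cols * w, 3), dtype=views[0].dtype)
    for i, v in enumerate(views):
        col = i % cols
        row = rows - 1 - i // cols       # bottom row holds the first views
        quilt[row * h:(row + 1) * h, col * w:(col + 1) * w] = v
    return quilt

# 45 dummy 64x36 views standing in for the cropped camera frames
views = [np.full((36, 64, 3), i, dtype=np.uint8) for i in range(45)]
quilt = make_quilt(views, cols=9, rows=5)    # one 576x180 image
```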

You can adjust the focal plane, just like you can adjust the focus ring on a real camera. What you can’t change in post is the perspective that the camera had when it captured the scene. This means that you can get all kinds of shots into a Looking Glass, but if you want it to look like the object is really there, like there is a real-world version of your subject in the Looking Glass, you have to film with the same perspective that the Looking Glass displays at.

When we think about how the Looking Glass displays information, we think about the “view cone” – this is the angle where you can see the information that the Looking Glass sends out. We can tweak this in software – the hardware limit for this generation of Looking Glass is about 52 degrees, but we can tweak the shader to adjust the width of each view. I’m still thinking about how to think about this, but I like to say that when I place the subject so the perspective of the camera is the same as the view cone of the display, I’m at the sweet spot of the camera. For my camera, which is 8’ wide, the sweet spot for a 52 degree view cone is 8.3’ back from the camera, and 11.1’ back for a 40 degree view cone.
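Those sweet-spot numbers come from matching the angle the camera rail subtends, seen from the subject, to the display’s view cone: distance = (rail width) / (2 · tan(cone / 2)). A quick check (the function name is mine):

```python
import math

def sweet_spot_distance(rail_width, view_cone_deg):
    """Distance at which the camera rail subtends the same angle as
    the display's view cone, so the captured perspective matches the
    displayed perspective. Units follow whatever rail_width uses."""
    return rail_width / (2 * math.tan(math.radians(view_cone_deg) / 2))

# Alex's 8-foot-wide camera:
d52 = sweet_spot_distance(8, 52)   # ~8.2 ft for a 52-degree cone
d40 = sweet_spot_distance(8, 40)   # ~11.0 ft for a 40-degree cone
```

These come out close to the 8.3' and 11.1' figures quoted above.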

This style of thinking feels pretty good – we’ve used the same rough formula for taking still images of people by moving one camera along a rail and taking 32 exposures. If my camera moves 3’ along the rail, I put the subjects about 3’ back from the camera, at a roughly 50 degree view cone. This does a pretty good job at capturing close-to-real-world perspective of the subject. Here’s one of me:

I’m laying out all the constraints (that I understand so far) about working with real-world lightfield images. They have lots of implications for lightfield camera design, which I’m not going to go into in this post because it would get really long and I want to go outside sometime today.

Good rules of thumb: use a lens with a FOV wide enough that all your cameras can see a centered subject at the sweet spot. Shoot in high res and high framerate – I ran my video cameras at 1080p60. High res because you’re going to crop in to each camera view to build the quilt, and you want to be able to crop into your subject and not end up with a tiny number of pixels. High framerate because, if you’re like me and not using synced shutters across your cameras, you should try to have your cameras capture a moving scene as close to the same instant as possible. With a 60fps shutter, I could have up to 1/60 of a second difference between when different cameras capture the scene. Fortunately, this is really fast and hard to perceive. The higher your framerate, the more leeway you have to work with regular cameras.

Good luck! Keep posting updates and questions! I’ll answer as best I can. If this didn’t come across, I’m really stoked about lightfield cameras, and am working hard, along with several other people within Looking Glass to build tools to streamline the capture and processing of lightfield images and video. We’re getting a bunch of tools ready for release soon – let me know if you’re interested in testing out early versions.

Happy hacking,


Thanks a lot for your comment Alex!

That’s a really awesome camera you have there! But isn’t it a real pain to keep all those batteries charged? :grinning:

I’ve been reading through your post a few times, and I’m not sure I got it all. But is it correct to say that you want to capture the footage from the same locations where you will stand in front of the Looking Glass when viewing it? So if you capture the footage in an arc configuration, you would have to move around the Looking Glass display in an arc similar to the one used when recording, or stand still and rotate the display, to get the best viewing experience.

If so, then the problem with an arc configuration is not so much that the cameras are all rotated to face the “sweet spot”, but rather that the cameras are not placed on the same straight line that the viewer usually moves along in front of the display when viewing the footage.

Is this correct?

Hey Martin,

I’m another Looking Glass employee working on lightfield capture experiments. Yep, your interpretation of Alex’s post was correct, as far as I can tell. (And yep, you should see the hydra of USB cables Alex has to carry around every time he wants to run his camera. I think a more streamlined system akin to yours is definitely in the works. Have you designed custom camera boards? What sort of device are you planning on using to interface with all those cameras simultaneously?)

The linear camera system was one of the first things we ironed out when designing the virtual camera arrangement in the HoloPlayer One SDK, which is the same one that we use in the Looking Glass.

One would definitely think that a curved camera system makes the most sense, considering the radial distribution of views emanating from a Looking Glass; however, we quickly discovered that, with virtual cameras in the SDK, this produces a sort of “toe-in” effect, where the image distorts as you move towards the edge views. People who check out the Looking Glass tend to move their heads from side to side, as opposed to rotating around it at a fixed radius and pointing their eyes directly into the center of the view cone. The virtual cameras in our SDK are arranged in a linear configuration, and we have had success following that rule of thumb with our real-world capture array.

Alex has worked primarily on video lightfield captures, and I’ve worked primarily on a single-camera slider assembly that takes multiple instantaneous photographs, so I do want to add one more point in favor of a linear rail: the ability to change the distance between views is pretty crucial. With a curved rail system designed to focus on a specific item, unless you have some method to change the curvature, you lose that flexibility.

Alex’s picture above describes a good rule of thumb to approximate a 40-degree view cone with a live capture setup, where the ‘sweet spot’ is (rail distance) / (2*tan((40 degrees)/2)) from the cameras. I applied this rule of thumb in my own lightfield capture experiments, but found that everything I was capturing turned out looking subjectively flat. I had to tweak the lightfield camera’s view separation basically as a matter of taste, depending on subject and location.

Also, at least subjectively, I’ve found 32-view camera captures to look superior to 45-view ones, so if you’re trying to budget for your project, definitely keep in mind that having 45 cameras is by no means a necessity. If it would simplify the decoding process, it might even be possible to knock it down to 24, but no guarantees.


Hi Evan,

My original plan for a curved design is very much out of the window now :grinning:

Have you designed custom camera boards?

It’s nothing that fancy at all. I’m on quite a tight budget, since this is just a spare-time project, so I’m very much inspired by fskognir’s idea to just use CCTV camera modules (at least that is what I think he is doing); they are cheap and very easy to interface. So I ordered a few different modules for testing, and so far it seems like I will be using a 1080p AHD module with a 3.6 mm lens. For recording the actual footage I plan on using a few 4K multichannel DVRs with a simple hack to start and stop recordings at the same time (some timing correction might be needed in post); exactly how many DVRs depends on how much resolution I will need. If I ever manage to get something working, I plan to release the complete parts list along with the 3D files needed to build one.

Also, at least subjectively, I’ve also actually found 32-view camera captures to look superior to 45-view ones

Interesting, but how does this work then? Are the missing views created by blending existing views in your software, or is something else going on?

Hey Martin,

The number of views that the Looking Glass displays can actually be set in software. If you’re developing a Unity app there are a few “recommended” view count presets; I believe they are 24 / 32 / 45 / 60. If you want to experiment further you can actually set the view count to whatever you want, but no guarantees that things continue to work as expected after that.
Even with live-rendered content, 32 views was what we were using internally until a few months ago - we just thought 45 was a slightly better sweet spot for most people. With fewer views, the effective resolution per view is higher, so that may be part of why, in my subjective opinion, the lightfield captures pop a bit more. I may try a 24-view lightfield next week, but I suspect that knocking the view count down that far will result in jarring transitions between the views. 32 or 45 will both look fine 99% of the time.


The camera is great for how fast it came together with off-the-shelf cameras, and how nice it is to take out into challenging environments, but man! It’s definitely a pain to run. It takes about eight hours of post-processing work to get 30 seconds of video: 4 hours to copy all the files off the SD cards (and charge everything in the process), 1 hour to sync video, ~2 hours to crunch lightfield frames, and an hour of fixing stuff that didn’t come out right. I’m really looking forward to building a more streamlined version of the camera that does lightfield processing in realtime rather than in post.

And yes, you’re reading my comment correctly. With an arc configuration, even standing in one place and viewing the image in the Looking Glass feels a little funny, because your eyes expect that they are on the same plane, but instead they’re seeing views from cameras that are arranged on an arc, rather than a plane. It’s not disastrous, but each eye sees a rotated (rather than perspective-skewed) version of the subject, and it doesn’t read as a 3D object as much as an animated lenticular.


I’m eager to try some of this myself; there’s just one step I’m missing - once you have created your “quilt” image, how do you load it onto the Looking Glass to display as an interlaced image?


Hey Eric,
Sorry – missed your message earlier. We have a command-line tool called Quiltr (public release coming soon, pm me if you have a Looking Glass now and want to experiment) that can display quilt images, image sequences and videos in a Looking Glass. I also wrote a basic graphical user interface program for doing the same thing, but more user-friendly. We’ll be refining and releasing that over the next month or two, but let me know if you’re interested in beta testing an early version.


Hi Alex,

I would love to experiment with the beta version. I have a few techniques for creating 3-D image sequences that I’m eager to test on the display.


Hi there,

would love to test an early version of the command line tool too if possible.


Hi Alex,

I am also interested in experimenting with the Quiltr tool on my Looking Glass. Looking forward to trying to create some holographic visuals with Blender.


Apologies to all for the silence, but here’s a small update on my progress.

Like many others I have now received my Looking Glass display, and boy is it cool! My only regret is now that I didn’t get the large one :smile:

I’m of course way behind my original schedule with the camera build, mostly due to me underestimating the complexity of all the parameters that go into this, real life getting in the way, and other terribly bad excuses as well. But I have decided that I would rather enjoy the journey and try to make it as good and enjoyable as possible rather than just finishing as fast as I can.

I have quite a few design decisions that I’m not entirely sure about, like the number and type of camera modules, among other parameters. I’m also still not entirely sure that the cheap analogue AHD CCTV camera module setup is the right way to go; perhaps a bunch of Raspberry Pis with Pi Cams would be better. The quality would surely be a lot higher, but so would the price.

So to try and take the guesswork out of the equation, I decided to build a programmable camera slider, just like the one Evan showed us in his video posted in one of the Kickstarter updates, so I can actually test the various parameters and camera modules.

My camera slider is almost done; I just need to do some simple Arduino programming, and then I can start to make my own lightfield photographs and hopefully use those to make some design decisions for the video setup.


Hi! I learned stereoscopic 3D from an industry expert, and I agree that a linear array is better than an arc. You do not want vertical disparity between views, which an arc will introduce. To get the same convergence as an arc, you offset the backplane instead. If you don’t have the ability to do that (as a view camera does), overcapture horizontally, and then crop and offset the images: your leftmost camera uses the right-hand side of the captured image (and crops the left), while your rightmost camera uses the left-hand side of the image (and crops the right). In-between cameras slide between these extremes. This places your object of interest at zero depth (or near it).
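The sliding crop described above can be sketched as follows; a hedged illustration where the function names and the 100-pixel maximum shift are mine, not from the post:

```python
import numpy as np

def convergence_offset_px(index, num_cameras, max_shift_px):
    """Horizontal crop offset for camera `index` on a linear rail.
    The leftmost camera gets a positive shift (it keeps the right-hand
    side of its frame), the rightmost a negative one, and cameras in
    between slide linearly between those extremes."""
    t = index / (num_cameras - 1)          # 0.0 at left end, 1.0 at right
    return round(max_shift_px * (1 - 2 * t))

def crop_with_offset(frame, out_w, offset_px):
    """Take an out_w-wide window from the frame centre, shifted
    horizontally by offset_px - offsetting the 'backplane' in post."""
    w = frame.shape[1]
    x0 = (w - out_w) // 2 + offset_px
    return frame[:, x0:x0 + out_w]

# e.g. 45 cameras at 1080p, cropping to a 1600-pixel-wide output
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
left  = crop_with_offset(frame, 1600, convergence_offset_px(0, 45, 100))
right = crop_with_offset(frame, 1600, convergence_offset_px(44, 45, 100))
```

The middle camera of the array ends up with zero shift, matching the "slide between these extremes" rule.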


Your picture brings back memories; that tool was from 2013.

I have to say both approaches are good; it depends on your constraints and cost.
We used all of them in one layout.
Making a comfortable and strong 3D effect is the final target.

For a real multi-camera system, I recommend the linear array. It is more stable.


I saw that the film is playing; it is not a single image. The Looking Glass can play video? That surprises me.