Another Looking Glass user, Dany Qiumseh, made a really nice DIY microscope for the Looking Glass that captures lightfield imagery by moving one of those cheap USB dissection microscopes in an arc over the subject and capturing a set of images from different positions.
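If you want to mimic that capture pattern, the geometry is just evenly spaced angles across the display's viewing cone, all aimed at the subject. Here's a rough Python sketch – the ~40° cone, 45 views, and 100 mm radius are placeholder numbers (not anything from Dany's rig), so match them to your setup:

```python
# Camera positions for an arc capture: N views spread evenly across a viewing
# cone, all aimed at the subject at the origin. The 40-degree cone, 45 views,
# and 100 mm radius are guesses -- match them to the display you're targeting.
import numpy as np

def arc_positions(num_views=45, cone_deg=40.0, radius_mm=100.0):
    angles = np.linspace(-cone_deg / 2, cone_deg / 2, num_views)
    rad = np.radians(angles)
    # camera sweeps in the horizontal plane, always looking back at the origin
    xyz = np.stack([radius_mm * np.sin(rad),
                    np.zeros_like(rad),
                    radius_mm * np.cos(rad)], axis=-1)
    return angles, xyz
```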
This technique makes really nice still images, but the trick for you is getting the image to refresh at video rates – doing that requires either multiple cameras or a very special camera capable of filming microscopic subjects.
My hunch is that the Lytro cameras won’t work for capturing microscopic subjects without modification, and I’m also skeptical of the stereo camera -> depth approach. If you have multiple 2D cameras you can calculate depth info, but getting a consistent depth scale from one frame to the next is challenging, and doing it in realtime is harder still. Almost all stereo -> depth conversion produces artifacts, and reflective elements like shiny solder will throw extra artifacts into the depth map.
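To make that concrete, this is roughly what the standard block-matching stereo -> depth step looks like with OpenCV – just a sketch, assuming you already have a calibrated, rectified pair (the filenames and calibration numbers are placeholders). The holes and speckle show up exactly where the matcher can't find correspondences, i.e. shiny and textureless patches:

```python
# Rough sketch of the standard stereo -> depth step with OpenCV block matching.
# Assumes a rectified left/right pair ("left.png"/"right.png" are placeholders)
# and a known focal length / baseline from calibration.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities and blockSize are tuning knobs, not magic values.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# depth = focal_length * baseline / disparity; shiny or textureless patches
# come back with disparity <= 0, which is exactly where the artifacts live.
focal_px, baseline_m = 800.0, 0.06   # hypothetical calibration numbers
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```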
I was just googling around for a microscopic time-of-flight sensor, which is probably your best bet – Texas Instruments put together a nice intro on their ToF sensors here. From a video point of view, they work like a Kinect or RealSense – they give you a depth map of the scene in front of the sensor. Unlike a RealSense or Kinect, they don’t use a structured light projector, and they can probably get much closer to the subject. The sensors work with a variety of lenses, and I would bet that somebody has put a microscope lens onto one by now (a quick google didn’t turn up anything, but maybe there’s a special name for this type of microscope that I don’t know).
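I haven't used the TI eval kits myself, but the data you get back from any of these sensors is basically the same shape as a RealSense depth stream. As a stand-in, here's a minimal Python sketch with pyrealsense2, just to show what the raw output looks like:

```python
# Minimal depth-stream grab with pyrealsense2, as a stand-in for whatever ToF
# sensor you end up with -- the output is just a 2D array of distances.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth_raw = np.asanyarray(depth_frame.get_data())   # uint16, sensor units
    depth_m = depth_raw * depth_scale                    # convert to meters
    print("distance at center pixel:", depth_m[240, 320], "m")
finally:
    pipeline.stop()
```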
I think that’s your easiest option – there’s probably an off-the-shelf sensor that does what you want, especially for large-ish subjects like a circuit board. You can use the camera feed to reconstruct a point cloud or mesh in a 3D environment like Unity or three.js and then get that 3D scene into a Looking Glass. Bada bing bada boom.
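The point cloud step is pretty mechanical once you have a depth map and the sensor's intrinsics – it's just the pinhole camera model run backwards. A quick numpy sketch (the intrinsics here are made-up numbers; use whatever your sensor reports):

```python
# Back-project a depth map into a point cloud with the pinhole camera model.
# fx, fy, cx, cy are placeholder intrinsics -- use the values your sensor reports.
import numpy as np

def depth_to_points(depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# points = depth_to_points(depth_m)  # feed this into Unity/three.js as a point cloud
```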
The other option, which is super DIY-heavy, is to build an array of many 2D cameras, align their outputs into a quilt image (the tiling itself is sketched a bit further down), and then pass that to the post-process shader to get an image into the Looking Glass. It’s not easy to get more than a few live video feeds into a single computer – you can only plug in so many USB cameras before you overwhelm the bandwidth of the USB bus. I designed a system to get around that by having a bunch of tiny computers, each of which drives a single camera, aligns its image, and sends just the cropped, aligned portion over the network to a central computer. This is quite a process, and the end result looks like this:
The images from this camera are really nice, though, and they don’t have any artifacts from reflectivity. Here’s a writeup I did of some lightfield filming experiments with this rig. I haven’t tried putting a macro lens on the cameras, although I know they do exist.
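If you do go the quilt route, the tiling itself is the easy part. Here's a rough numpy sketch of packing aligned views into a quilt – the 5x9 layout and view ordering (bottom-left first, left to right, bottom to top) are the common convention, but double-check what your display and shader expect:

```python
# Tile a list of aligned view images into a Looking Glass quilt.
# Convention here: view 0 goes in the bottom-left tile, filling left-to-right,
# bottom-to-top. 5 columns x 9 rows = 45 views is a common layout, but check
# what your display expects.
import numpy as np

def make_quilt(views, cols=5, rows=9):
    assert len(views) == cols * rows, "need one view per quilt tile"
    h, w, c = views[0].shape
    quilt = np.zeros((rows * h, cols * w, c), dtype=views[0].dtype)
    for i, view in enumerate(views):
        col = i % cols
        row = rows - 1 - (i // cols)          # bottom row first
        quilt[row * h:(row + 1) * h, col * w:(col + 1) * w] = view
    return quilt
```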
I don’t really recommend this for you – it’s a ton of work, and even though the end results are wonderful, you’ll spend a lot of your time making cameras rather than soldering. If you can find an off-the-shelf microscopic depth sensor, I think that’s the fastest train to solder town.
Let me know how it goes!