"But Charlie, don’t forget what happened to the man who suddenly got everything he always wanted… He lived happily ever after.” -Willy Wonka
Hi everyone!
tl;dr: 3D interaction directly with floating 3D content is actually possible without VR or AR headgear!!! GIF below! Want to buy one of these interactive lightfield dev kits NOW to experiment with? Email me at smf@lookingglassfactory.com!
What if you could reach into the 3D scenes of a Volume, directly? Directly touching and transforming 3D content without the need for head-mounted displays is a core tenet of the dream of the hologram to me. Now, hot damn, it’s happening! Check this out (no CGI or tricks here):
That’s a superstereoscopic aerial display (22 views packed tightly enough to be viewed in glorious glasses-free 3D across a 50-degree view cone), floating over the glass surface shown above, with the views being updated dynamically off of my laptop. My finger moving the X-wing is being tracked by an infrared curtain (or, in a version we just got working this week, tracked fully in three dimensions with an Intel RealSense SR300 camera).
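For a rough sense of how that finger-to-X-wing interaction can be wired up, here’s a minimal Unity-style sketch (not our actual SDK code, which isn’t posted yet): it assumes some tracking layer – the IR curtain or a RealSense wrapper – hands you a fingertip position in the depth camera’s coordinates each frame, and a calibrated transform maps that into the scene so the virtual model follows the real finger. All names and numbers here are illustrative.

```csharp
using UnityEngine;

// A minimal sketch of the direct-interaction idea, assuming some depth-camera
// tracker (e.g. a RealSense SR300 wrapper, not shown) hands us a fingertip
// position in the camera's coordinate frame each frame. A calibration transform
// maps that into the Unity scene so the virtual X-wing sits where the floating
// image appears in real space. All names here are illustrative, not the SDK's.
public class FingerDragger : MonoBehaviour
{
    public Transform target;          // e.g. the X-wing model
    public Transform depthCameraRig;  // calibrated pose of the depth camera in Unity space
    public float grabRadius = 0.03f;  // how close (in meters) the finger must be to grab

    // Called by the (hypothetical) tracking layer with the fingertip position
    // in the depth camera's local coordinates, in meters.
    public void OnFingertip(Vector3 fingertipInCameraSpace)
    {
        // Map from camera space into the shared, viewer-independent scene space.
        Vector3 fingertipWorld = depthCameraRig.TransformPoint(fingertipInCameraSpace);

        // Simple "touch to drag": if the finger is near the model, move the model with it.
        if (Vector3.Distance(fingertipWorld, target.position) < grabRadius)
        {
            target.position = fingertipWorld;
        }
    }
}
```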
The multiple views are made with a slanted lenticular on a 2048 x 1536 px screen (a brilliantly simple trick invented by a couple of researchers at Philips 21 years ago), with content generated in real time by a new Unity SDK that Kyle of Valley Racer fame forged into existence last month. Pulling the main image plane into free space with an optical circuit – a retroreflecting array plus a few reflective polarizer and quarter-wave plate films – unlocks interaction with and around the floating 3D content! And because the 3D scene is rendered in real time, direct interaction with it becomes possible.
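To make the multi-view rendering piece concrete, here’s a hedged sketch of the kind of thing a Unity SDK for this display has to do: render the scene 22 times across the ~50-degree view cone and pack the results into a single “quilt” texture, which a lenticular-aware interlacing shader (not shown, and specific to each unit’s panel/lenticular calibration) then maps onto the slanted lenticular. The simple converging camera rig, tile layout, and numbers below are assumptions, not our SDK’s actual implementation (which may well use sheared/off-axis projections).

```csharp
using UnityEngine;

// A minimal sketch (not the Looking Glass SDK) of multi-view "quilt" rendering:
// sweep one camera across the view cone, rendering each view into its own tile
// of a single RenderTexture. A lenticular-aware interlacing shader (not shown)
// would then map quilt pixels onto the slanted lenticular over the panel.
public class QuiltRenderer : MonoBehaviour
{
    public Camera viewCamera;            // camera used to render every view
    public Transform focalPoint;         // point in the scene the views converge on
    public int numViews = 22;            // number of views across the cone
    public float viewConeDegrees = 50f;  // total horizontal view cone
    public float orbitRadius = 0.5f;     // camera distance from the focal point (assumed)
    public int tileWidth = 512;          // quilt tile size (assumed)
    public int tileHeight = 256;

    public RenderTexture Quilt { get; private set; }
    int cols, rows;

    void Start()
    {
        // Lay the views out in a simple grid; the real quilt layout may differ.
        cols = Mathf.CeilToInt(Mathf.Sqrt(numViews));
        rows = Mathf.CeilToInt((float)numViews / cols);
        Quilt = new RenderTexture(cols * tileWidth, rows * tileHeight, 24);
        viewCamera.targetTexture = Quilt;
        viewCamera.enabled = false;      // rendered manually in LateUpdate
    }

    void LateUpdate()
    {
        for (int i = 0; i < numViews; i++)
        {
            // Spread the views evenly across the view cone.
            float t = numViews > 1 ? (float)i / (numViews - 1) : 0.5f;
            float angle = Mathf.Lerp(-0.5f * viewConeDegrees, 0.5f * viewConeDegrees, t);

            // Simple converging rig: orbit horizontally around the focal point
            // and look back at it.
            Quaternion yaw = Quaternion.AngleAxis(angle, Vector3.up);
            viewCamera.transform.position = focalPoint.position + yaw * (Vector3.back * orbitRadius);
            viewCamera.transform.LookAt(focalPoint.position);

            // Render this view into its tile of the quilt.
            int col = i % cols;
            int row = i / cols;
            viewCamera.rect = new Rect((float)col / cols, (float)row / rows, 1f / cols, 1f / rows);
            viewCamera.Render();
        }
    }
}
```

The quilt here is just the intermediate render target; a real implementation also needs the per-unit lenticular calibration (slant, pitch, center) baked into the interlacing step.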
This is going to sound super nerdy, but the thing I’m really stoked about is that this interface’s coordinate system is both directly interactive AND viewer-independent. If I touch the back thruster of the floating X-wing, for instance, someone sitting next to me sees my finger coincident with that thruster at the same coordinates in real meat-space, just like I do. We achieved that viewer-independent coordinate system in Volume by actually scattering points of light off of physical surfaces, which has the unfortunate byproduct of requiring those physical scattering surfaces (which tend to get in the way of meaty fingers). This system sidesteps that problem.
I’d actually read about advanced lightfield interfaces somewhat like this being shown around conferences and in university labs for a couple of decades, but they were always cloaked in mystery – super expensive to make, with little content and no content creation tools, with limited (usually no) interaction – and never available for sale as a kit or system.
Boo. Time to change that! If you’re interested in getting one of these first handmade interactive lightfield dev kits, write me at smf@lookingglassfactory.com. We’re going to be selling the first few of these prototypes at $500 (that’s basically our cost right now + a pizza), and that’ll include a built-in RealSense SR300 for full 3D interactive fun. And yes indeed, we’ll shortly post a GitHub link here to a beta version of the Unity SDK and RealSense SDK by Kyle and Dez that handle all of the image and interaction processing.
(For the deeply curious: here’s one of the calibration scenes that we’re using on these first prototype systems. Notice how you can glance around that test cylinder and occlude the small white cube.)
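If you want to mock up something similar while you wait for a kit, a test scene like that is just a couple of primitives at different depths. The sketch below is a rough stand-in for our actual calibration scene (which we haven’t published); it only exists to let you check parallax and occlusion by glancing across the view cone.

```csharp
using UnityEngine;

// A rough stand-in for the kind of calibration/test scene described above:
// a cylinder in front of a small white cube, at different depths, so you can
// check parallax and occlusion by moving your head across the view cone.
public class CalibrationTestScene : MonoBehaviour
{
    void Start()
    {
        // Foreground cylinder, roughly centered in the floating image volume.
        GameObject cylinder = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
        cylinder.transform.position = Vector3.zero;
        cylinder.transform.localScale = new Vector3(0.3f, 0.5f, 0.3f);

        // Small white cube placed behind it; it should disappear and reappear
        // behind the cylinder as you glance around the scene.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = new Vector3(0.15f, 0f, 0.4f);
        cube.transform.localScale = Vector3.one * 0.1f;
        cube.GetComponent<Renderer>().material.color = Color.white;
    }
}
```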
Full disclosure: while the core fundamentals of this system were pioneered decades ago and are out of patent, Looking Glass is patenting some improvements we’ve made over the past few months that improve contrast and resolution, add good 3D interaction, increase the number of views/viewing axes and widen the view cone, and ultimately shrink the entire system down. I’m hoping this allows this class of system to finally escape the confines of the lab and venture into the wider world. On a related note re: the SDKs we’re writing for this system, we’re debating whether it’s a good idea to open-source those tools and welcome debate on the subject here. If you have any questions about what we’re patenting and why (and whether that’s a good idea for the community or not), please email me or post your thoughts in the forum!