I’m convinced the realization of the dream of the hologram is imminent: the 3D interactive lightfield interface (aka HoloPlayer One) is what I wanted as a kid dreaming of holograms, and a group of a dozen 10-year-olds shoving each other out of the way to play around with holographic blobs yesterday backs me up. But how do the few folks who have worked with HoloPlayer One and built amazing apps for it share their groundbreaking work with the world, without everyone thinking the 2D video recording of their work is either a CGI fake-out or a simple 2D Pepper’s ghost? If a hologram tree falls in the forest…
To help solve this problem, we’re running a bunch of experiments in the Looking Glass lab to (attempt to) better communicate the in-person HoloPlayer One experience over 2D media. We don’t quite have it figured out yet, but here are a couple of approaches we’ve been experimenting with:
The moving camera
Either swiveling the HoloPlayer on a turntable or moving a camera on a track seems to give a decent sense of parallax between the floating 3D scenes and the device’s glass surface - so, presumably, folks who have never seen a system in person would see that the 3D scenes are floating above the glass, rather than on or behind it.
Video recording from within the HoloPlayer
The reverse of the over-the-shoulder shot: capturing the operator’s view, with the floating 3D scene in front of them, either by placing a camera in the cabinet of the HoloPlayer (which is what I did in the attempt below) or by installing a dedicated “selfie cam” in each system.
Both of the above approaches just use a regular camera with zero post-processing or compositing. In that sense, they’re pure, but maybe they veer too far from the real-life experience.
Other options we’re experimenting with include something @oliver brought up a couple of times: a dedicated multi-camera rig (6-12 cameras synced up) looking over the shoulder of the operator, which would give us post-process control over which moment of a video to freeze into a wiggle gif, aka “bullet time.” Or perhaps something @alex brought up yesterday: using the built-in RealSense camera to capture the hand positions and RGB video of the operator, then compositing in the exact 3D scene that person is interacting with, to give a truer-to-life representation of the interaction with the floating 3D scenes than may be possible with pure video alone.
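The compositing half of that last idea is, at its core, an alpha blend: render the 3D scene to RGBA frames and lay it over the operator video. Here’s a minimal sketch of that step, assuming the scene can be exported with an alpha channel; the function name and the toy stand-in frames are hypothetical, not our actual pipeline:

```python
import numpy as np

def composite_scene(rgb_frame, scene_rgba):
    """Alpha-composite a rendered RGBA scene frame over an
    operator-camera RGB frame of the same resolution."""
    alpha = scene_rgba[..., 3:4].astype(np.float32) / 255.0
    scene = scene_rgba[..., :3].astype(np.float32)
    base = rgb_frame.astype(np.float32)
    # standard "over" blend: scene where opaque, camera where transparent
    out = alpha * scene + (1.0 - alpha) * base
    return out.astype(np.uint8)

# toy frames standing in for a real capture
frame = np.full((4, 4, 3), 100, dtype=np.uint8)  # gray camera frame
scene = np.zeros((4, 4, 4), dtype=np.uint8)
scene[..., 1] = 255  # a green "holographic blob"
scene[..., 3] = 128  # half-transparent
out = composite_scene(frame, scene)
```

In practice the harder part is alignment: the rendered scene would need to be re-projected from the RealSense’s point of view so the blob lands where the operator’s hands actually are, which this sketch doesn’t attempt.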
Would love to hear from anyone who has seen videos of HoloPlayer One and then seen the system in person, plus any ideas on how we can narrow the gulf between the experience in real life and the one communicated via 2D video/gif. In the meantime, the search continues – more experiments will be posted in this thread!