We’ve been playing with AR holographic systems for a while, but most of our virtual content has only loosely fit the physical scene. A good example of this is my favorite mini-demo, fishtank godzilla:
Fishtank godzilla is pretty great, but it’s important to realize that he’s just running a predefined animation loop – he’s not “world-locked”, or in plain English, there’s no glue that holds the virtual and physical stuff together. Move the fishtank a bit, and suddenly godzilla is wandering through the glass and outside the tank, or his fire is landing behind the plant instead of right at its base. World-locking is a really tricky problem, and this is the first of a series of experiments in how to world-lock well.
For my first foray into world-locking, I wanted to do a pretty simple demo: I just wanted to change the color of a box. The model couldn’t be simpler, and it let me get straight to the crux of the issue: aligning a digital model of a box directly on top of a physical box.
I looked around for a suitably-sized box and came up with this one:
I popped it inside one of our little pepper systems, opened up Unity, created a 3D cube, and started stretching and adjusting it until it looked like a good overlay.
This is not a good overlay. The problem is that the virtual cameras looking at the virtual 3D scene have to have the same field of view and other optical characteristics as your physical eyes looking at the real-world box. One thing that makes this especially hard is that the little pepper I’m working with here generates its light field with a lenticular, meaning it can account for eye movement from left to right, but it has no way to show different images as I move up and down. That meant that, for this entire experiment, I had to pick a certain eye height and consistently hold my head there as I tried to adjust the virtual model to match the physical one.
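The core of that matching problem is just geometry: the virtual camera’s vertical field of view has to equal the angle the physical display subtends from your actual eye position. Here’s a minimal sketch of that relationship – the function name and the numbers are mine, not anything from the HoloPlay SDK:

```python
import math

def vertical_fov_deg(display_height_m, view_distance_m):
    """Vertical field of view (in degrees) that a display of the given
    height subtends when viewed from the given distance.  If the virtual
    camera uses a different FOV than this, the virtual box won't line up
    with the physical one."""
    half_angle = math.atan((display_height_m / 2) / view_distance_m)
    return math.degrees(2 * half_angle)

# Hypothetical numbers: a 15 cm tall display viewed from 50 cm away.
fov = vertical_fov_deg(0.15, 0.50)
```

Note how sensitive this is: lean a few centimeters closer and the required FOV changes, which is exactly why things that look aligned from one position fall apart as you move.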
There are a lot of knobs to adjust here, and a lot of things that look ok from one position but don’t hold up as you move around. Another thing I noticed as I was working on it is that overlaying virtual stuff onto a mostly white box is real tough: the little pepper is an additive display, so it can’t make anything darker, only brighter. If the physical object is already white, it’s real hard to see the difference between the physical white object and the virtual white object (or almost any color of virtual object). Some flat black spray paint was in order.
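You can see why the spray paint helps with a toy model of additive blending. This is just my illustration of the idea, not how the display actually computes anything:

```python
def additive_blend(physical_rgb, virtual_rgb):
    """What the eye roughly sees through an additive display: the display
    can only add light on top of the real scene, never subtract it, and
    the result saturates at full brightness."""
    return tuple(min(p + v, 1.0) for p, v in zip(physical_rgb, virtual_rgb))

white_box = (1.0, 1.0, 1.0)
# Any overlay on an already-white surface still reads as white:
assert additive_blend(white_box, (0.8, 0.2, 0.2)) == (1.0, 1.0, 1.0)

black_box = (0.0, 0.0, 0.0)
# On a flat black surface, the virtual color comes through unchanged:
assert additive_blend(black_box, (0.8, 0.2, 0.2)) == (0.8, 0.2, 0.2)
```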
Even so, I was struggling to get a persistent display. The best I got was something like this:
I asked Kyle, the author of the HoloPlay SDK, for some help, and he pointed out a hidden setting in the SDK called Vertical Angle. Vertical Angle applies a kind of parallelogram skew to the virtual scene that can account for a different eye height. In almost all of our software projects we never apply any skew to a scene, and it works out fine, because we’re not trying to directly match up with a physical object – if there’s no real object in the scene, or if the virtual and real stuff aren’t right on top of each other, it’s easy to overlook perspective mismatches.
batman, demonstrating what a vertical angle is
showing the vertical angle effect on the holoplay capture object
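My mental model of that parallelogram skew is a shear: each point’s vertical position shifts by an amount proportional to its depth, so content on the display’s focal plane stays put while everything in front of or behind it tilts to match a viewer whose eyes sit above or below the display’s centerline. A sketch of that idea (my own formulation, not the SDK’s actual implementation):

```python
import math

def apply_vertical_skew(point, vertical_angle_deg):
    """Shear a point's y coordinate in proportion to its depth z.
    Points on the z = 0 focal plane are unchanged; off-plane points
    shift vertically, producing the parallelogram skew that compensates
    for an eye height above or below the display's centerline."""
    x, y, z = point
    skew = math.tan(math.radians(vertical_angle_deg))
    return (x, y + skew * z, z)

# A point on the focal plane doesn't move...
assert apply_vertical_skew((0.0, 1.0, 0.0), 15.0) == (0.0, 1.0, 0.0)

# ...while a point behind the plane (z = 2) shifts upward.
sheared = apply_vertical_skew((0.0, 1.0, 2.0), 15.0)
```

With zero vertical angle this is the identity, which matches why all our other projects got away without it: nothing in those scenes had to line up with a real object.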
This setting was the key. It let us account for the (extremely common) scenario where the viewer’s head is not directly in line with the display, and once we have that, we can finally get decent world-locking. Here’s Kyle putting the finishing touches on the scene perspective:
It seems simple, right? This isn’t the fanciest demo in the world, but it lays the foundation for the more elaborate, world-locked projects that I’m working on next. Want to see more? Tune in next week!