Unity Camera technical deep dive request

The Getting Started walk-through is great for Day 1, but I’m starting to hit some walls as I move beyond simple tests and into adapting previous projects to work on the LKG. I’m writing to appeal to the folks at LGF to consider compiling some tips and insights that move beyond What and into How and, most importantly, Why.

For example, what’s the relationship between the Capture and Quilt components? The Quilt component suggests that there can be multiple Captures, but doesn’t explain what that could be useful for. Why are the clipping planes of HoloPlay Camera locked to 35.99479/49.74479 when the size of the Capture is 5? Obviously, it’s derived from an important function that is never explained. What other quilt details are likely to be useful, and why?

What’s the story on the HoloPlay UI Camera? It doesn’t seem to be explicitly necessary but some information on supporting (or disabling) the primary monitor when the LKG is engaged would be useful. It would be nice to know what the external resources being accessed are used for.

The main issue I keep encountering as I adapt existing code is a frequent pattern – with, for example, car camera rigs – where you have:

Car Camera Rig
-> Pivot
--> Main Camera

Where does the HoloPlay Capture / HoloPlay Camera hierarchy fit into that structure? Does the Capture need to be a top-level object in the scene hierarchy, or can it be safely put in the car rig hierarchy? Does the HoloPlay Camera have to be a direct child of HoloPlay Capture? Could I put the Capture components on the car rig (in this example) and augment the Main Camera with the HoloPlay Camera components?

Is naming or tagging or layering ever relevant?

You get the idea! Thanks again. I love this device. This will make us much more able to build the first wave of post-launch content. :slight_smile:


I love that you had these questions on Christmas!

I’ll take a crack at an answer – I didn’t work on the SDK software, so take these with a grain of salt, but the broad strokes should be accurate:

In a nutshell, the overall pipeline works like this:

HoloPlay Capture takes the multiview images of the 3D scene, and Quilt takes the images from HoloPlay Capture and makes them show up in the Looking Glass. Capture knows all about the 3D camera – how to move it around, how to frame the scene, and what post-process effects to apply. Quilt knows all about the Looking Glass – it reads the calibration information from the Looking Glass and actually draws the imagery into the Looking Glass.

When you first start a scene, Quilt loads the visual calibration settings from the Looking Glass over USB. These settings tell Quilt the Looking Glass’ resolution and how to arrange the camera views correctly behind the Looking Glass’ lenticular lens so they get sent out to the right spots.

Every frame, Quilt asks HoloPlay Capture to take a certain number of renders – that number is set up in Unity Editor under Quilt->Advanced->Tiling. The exact number is set by the Quality settings in the scene, but in a new Unity project it will default to 45. Quilt takes the renders from Capture and tiles them all together into a single texture – we call this texture a ‘quilt’, and it contains all the visual information that will be sent to the Looking Glass. Quilts are really handy, not just for real-time renders but also for recording and replaying multiview scenes, or showing scenes that are rendered with a non-realtime render engine*.
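That tiling step is language-agnostic, so here’s a minimal Python sketch of the idea – this is not the SDK’s code, and the 9x5 grid shape is my assumption (the SDK exposes the real grid under Quilt->Advanced->Tiling), but the per-view numbers match the ‘High’ defaults mentioned above:

```python
# Minimal sketch of quilt tiling (NOT the SDK's actual code).
# A quilt packs many camera views into one big texture on a grid.
# 'High' default: 45 views at 819x455 px per view; the 9x5 grid
# shape is an assumption for illustration.

COLS, ROWS = 9, 5
TILE_W, TILE_H = 819, 455

def tile_origin(view_index):
    """Pixel origin of one view inside the quilt texture.

    Quilts are conventionally filled left-to-right, bottom-to-top,
    so view 0 sits at the bottom-left corner.
    """
    col = view_index % COLS
    row = view_index // COLS
    x = col * TILE_W
    y = row * TILE_H          # y measured from the bottom of the quilt
    return x, y

quilt_w, quilt_h = COLS * TILE_W, ROWS * TILE_H
print(quilt_w, quilt_h)       # 7371 2275 -- total quilt resolution
print(tile_origin(0))         # (0, 0) -- bottom-left view
print(tile_origin(44))        # (6552, 1820) -- top-right view
```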

Quilt then takes the quilt texture and applies a post-process shader to the texture. This shader takes in all the calibration information from Quilt and the quilt texture, and it garbles up all the views so they line up in the right spot under the lenticular lens. Finally, Quilt takes the output texture from the post-process shader and draws it into a fullscreen window in the Looking Glass display.
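For the curious, the ‘garbling’ is the standard lenticular view-mapping idea: because the lens is slanted, which view a given screen position samples depends on both its x and y. The sketch below is not the SDK’s shader – the PITCH/TILT/CENTER values are invented placeholders standing in for the real per-device calibration that Quilt reads over USB:

```python
from math import floor

# Rough sketch of lenticular view mapping (NOT the SDK's shader).
# PITCH/TILT/CENTER below are invented placeholders; the real values
# are per-device calibration read from the Looking Glass over USB.
PITCH  = 47.58   # lens repetitions across the screen (placeholder)
TILT   = -0.11   # lens slant relative to vertical (placeholder)
CENTER = 0.42    # horizontal offset of the lens pattern (placeholder)
NUM_VIEWS = 45

def view_for_pixel(u, v):
    """Which of the NUM_VIEWS views a screen position (u, v in 0..1) samples.

    Walking horizontally across the screen cycles the sampled view
    through 0..NUM_VIEWS-1; the TILT term makes it depend on v too.
    """
    phase = (u + v * TILT) * PITCH - CENTER
    frac = phase - floor(phase)          # fractional position under one lens
    return int(frac * NUM_VIEWS) % NUM_VIEWS
```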

Here are some handy things to change in Capture and Quilt. Everything can be set in the Unity editor or controlled by a script:


  • Capture box size – how much of the scene shows up in the Looking Glass. Think of this like a zoom knob

  • Clipping planes – adjust the near and far clipping planes. If you push them too far out, you start seeing double images in the Looking Glass. We find that double-image distracting, so we set the default plane positions just before the double-imaging starts to show up, but you can override the defaults and push the planes out further. Another way to counter the double-imaging, if you want to play with deeper scenes, is to use a depth of field pass on the camera post-process shader (see below)

  • Camera FOV – change the perspective of the camera. By default, it’s set to feel like a real-world perspective, meaning that the HoloPlay camera FOV matches your eye’s FOV on the Looking glass, giving virtual objects the perspective to feel like they’re on your desk.

  • Set camera to orthographic – occasionally, I want zero perspective in my Looking Glass scenes, so I check the ‘orthographic’ checkbox. This has the effect of rotating the HoloPlay camera around the scene, rather than sliding it along a straight line.

  • Add a post-processing stack to the camera to have more fine-tuned control over camera renders (Unity has a handy tutorial on this – just add the post-process stack to the child camera under HoloPlay Capture)
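On the locked clipping-plane values from the original question (35.99479/49.74479 at Capture size 5): the formula isn’t documented, but the numbers are consistent with planes placed a fixed multiple of the Capture size either side of the focal plane. A reverse-engineered sketch – the constants below are inferred from those two figures, not taken from the SDK source:

```python
from math import atan, degrees

# Reverse-engineering the locked clip planes from the question
# (size = 5 -> near = 35.99479, far = 49.74479). The two constants
# are inferred from those numbers, NOT read from the SDK source.
CLIP_FACTOR = 1.375        # planes sit size*1.375 either side of focus
DIST_FACTOR = 8.573958     # camera-to-focal-plane distance = size * 8.573958

def default_clip_planes(size):
    dist = size * DIST_FACTOR
    near = dist - size * CLIP_FACTOR
    far  = dist + size * CLIP_FACTOR
    return near, far

near, far = default_clip_planes(5)
print(round(near, 5), round(far, 5))     # 35.99479 49.74479

# DIST_FACTOR itself looks like 1 / tan(fov/2) for a fixed camera FOV:
fov = 2 * degrees(atan(1 / DIST_FACTOR))
print(round(fov, 1))                     # ~13.3 degrees
```

Because both planes scale with size, resizing the Capture keeps the same relative depth budget – which is why the fields read as ‘locked’ in the inspector.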


  • Quilt RT – this is the quilt render texture that actually holds all the views from Capture. Hitting ‘F10’ saves an image of the render texture to your project’s root folder.

  • Override Quilt – sometimes I want to play quilt images that I rendered ahead of time, from real-world cameras or from a non real-time render. You can put any quilt texture into ‘Override Quilt’ – as long as ‘Tiling’ matches the quilt texture’s tiling, it’ll show up in the Looking Glass.

  • Tiling – manually set how many views the system renders. Set a low view count for lightweight computers without discrete GPUs or with low-end GPUs (cough Mac cough). Tiling defaults to ‘High’ (45 views, 819x455 px/view), and we occasionally run at ‘Standard’ for scenes that push the GPU’s limits. ‘Custom’ lets you try whatever quilt arrangement you’d like. Higher view counts feel a little smoother and nicer, but you get diminishing returns beyond 45.
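Going the other way: for ‘Override Quilt’ to display correctly, the Tiling setting has to describe the quilt texture’s real grid, because it drives a lookup like this sketch (not SDK code; view 0 is assumed bottom-left, the usual quilt convention):

```python
def view_rect(view_index, cols, rows, quilt_w, quilt_h):
    """Pixel rectangle (x, y, w, h) of one view inside a quilt.

    This is the lookup that a mismatched Tiling setting breaks:
    wrong cols/rows slice the quilt at the wrong boundaries, so
    every view comes out scrambled.
    """
    tile_w = quilt_w // cols
    tile_h = quilt_h // rows
    col = view_index % cols
    row = view_index // cols
    return col * tile_w, row * tile_h, tile_w, tile_h

# 'High' preset from above: 45 views, assumed 9x5 grid
print(view_rect(0, 9, 5, 7371, 2275))    # (0, 0, 819, 455)
print(view_rect(22, 9, 5, 7371, 2275))   # (3276, 910, 819, 455)
```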

HoloPlay UI Camera is super handy if you want to do a dual-screen app, with 2D UI/text elements on your computer screen and 3D stuff in the Looking Glass. The Library and Model Viewer apps are examples of this.
The UI camera sets up a UI canvas and configures the scene so that a build opens two fullscreen windows – one on your main display and another on the Looking Glass. This is more in-depth and deserves its own tutorial – we’ll look into putting one together.

To answer your other questions –
HoloPlay Capture can go anywhere – doesn’t have to be at the top level. It’s actually nice to child it to, say, a camera in a first or third-person game and have it follow the game logic for moving the camera around. In your car camera example, child the whole thing to the Pivot and you’ll get what your main camera sees, but in the Looking Glass.

The camera under HoloPlay Capture does need to be a direct child of HoloPlay Capture – Capture will actually create that camera if it’s not there, and it looks for a direct child.

Naming, tagging and layering don’t matter to our SDK – use them as you would in any Unity scene.

I hope that helps – let me know if anything’s unclear, or if you have any more questions.

*Take a look at this Vimeo channel to see a bunch of quilt videos generated by people in the community (this channel is where the Vimeo app in the Library gets its videos)


Hey man… some people drink to forget. I prefer holograms. :santa:

This reply is certified: excellent++

Thank you. I’m excited to see the tutorial on HoloPlay UI and hope that it covers setting up two-screen apps where the primary display shows something completely different, or nothing at all.


This is a very helpful addition to the available tutorials. Looking forward to the UI Camera post.

Also, a post or tutorial on the buttons would be helpful - I am digging into the Controls Test C# script but a simple useful example project (like the scene switching in some of the library apps) would help us immensely.



I’m currently trying to have the UI display on the Looking Glass itself, but it’s always shown on the second screen. Do you have any suggestions for this?

