I love that you had these questions on Christmas!
I’ll take a crack at an answer – I didn’t work on the SDK software, so take these with a grain of salt, but the broad strokes should be accurate:
In a nutshell, the overall pipeline works like this:
HoloPlay Capture takes the multiview images of the 3D scene, and Quilt takes the images from HoloPlay Capture and makes them show up in the Looking Glass. Capture knows all about the 3D camera – how to move it around, set the scene geometry and post-process effects. Quilt knows all about the Looking Glass – it reads the calibration information from the Looking Glass and actually draws the imagery into the Looking Glass.
When you first start a scene, Quilt loads the visual calibration settings from the Looking Glass over USB. These settings tell Quilt the Looking Glass’ resolution and how to arrange the camera views correctly behind the Looking Glass’ lenticular lens so they get sent out to the right spots.
Every frame, Quilt asks HoloPlay Capture to take a certain number of renders – that number is set up in Unity Editor under Quilt->Advanced->Tiling. The exact number is set by the Quality settings in the scene, but in a new Unity project it will default to 45. Quilt takes the renders from Capture and tiles them all together into a single texture – we call this texture a ‘quilt’, and it contains all the visual information that will be sent to the Looking Glass. Quilts are really handy, not just for real-time renders but also for recording and replaying multiview scenes, or showing scenes that are rendered with a non-realtime render engine*.
Quilt then takes the quilt texture and applies a post-process shader to the texture. This shader takes in all the calibration information from Quilt and the quilt texture, and it garbles up all the views so they line up in the right spot under the lenticular lens. Finally, Quilt takes the output texture from the post-process shader and draws it into a fullscreen window in the Looking Glass display.
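The per-frame loop above can be sketched roughly like this. To be clear, this is not the actual SDK source – the class, field names, and tiling math are illustrative stand-ins I made up for the example – but the Unity calls themselves (`Camera.Render`, `Camera.rect`, `Graphics.Blit`) are real:

```csharp
using UnityEngine;

// Simplified sketch of the Capture/Quilt loop described above.
// NOT the real SDK code: 'quiltRT', 'lenticularMat', and the tile
// layout here are illustrative guesses.
public class QuiltSketch : MonoBehaviour
{
    public Camera viewCamera;          // the child camera under HoloPlay Capture
    public RenderTexture quiltRT;      // the big tiled 'quilt' texture
    public Material lenticularMat;     // post-process shader fed with calibration data
    public int tilesX = 5, tilesY = 9; // 45 views at the default 'High' tiling

    void LateUpdate()
    {
        int numViews = tilesX * tilesY;
        for (int i = 0; i < numViews; i++)
        {
            // 1. Move the camera to the i-th viewpoint (Capture's job).
            PositionCameraForView(i, numViews);

            // 2. Render that view into its tile of the quilt (Quilt's job).
            viewCamera.targetTexture = quiltRT;
            viewCamera.rect = new Rect(
                (float)(i % tilesX) / tilesX,
                (float)(i / tilesX) / tilesY,
                1f / tilesX, 1f / tilesY);
            viewCamera.Render();
        }
    }

    void PositionCameraForView(int i, int n)
    {
        // Slide the camera along a line (or swing it on an arc, in
        // orthographic mode). The real SDK also shears the projection
        // matrix so all views converge on the same focal plane; omitted here.
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // 3. The lenticular shader rearranges the quilt's views so they
        //    land in the right spots under the lens, then the result is
        //    drawn fullscreen on the Looking Glass.
        Graphics.Blit(quiltRT, dst, lenticularMat);
    }
}
```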
Here are some handy things to change in Capture and Quilt. Everything can be set in the Unity editor or controlled by a script:
Capture box size – how much of the scene shows up in the Looking Glass. Think of this like a zoom knob.
Clipping planes – adjust the near and far clipping planes. If you push them too far out, you start seeing double images in the Looking Glass. We find that double-imaging distracting, so we set the default clipping plane positions just before it starts to show up, but you can override the defaults and expand your clipping planes. Another way to counter the double-imaging, if you want to play with deeper scenes, is to use a depth of field pass on the camera post-process shader (see below)
Camera FOV – change the perspective of the camera. By default, it’s set to feel like a real-world perspective, meaning that the HoloPlay camera FOV matches your eye’s FOV on the Looking Glass, giving virtual objects the perspective to feel like they’re on your desk.
Set camera to orthographic – occasionally, I want zero perspective in my Looking Glass scenes, so I check the ‘orthographic’ checkbox. This has the effect of rotating the HoloPlay camera around the scene, rather than sliding it along a straight line.
Add a post-processing stack to the camera to have more fine-tuned control over camera renders (handy Unity tutorial here – just add the post-process stack to the child camera under HoloPlay Capture)
Quilt RT – this is the quilt render texture that actually holds all the views from Capture. Hitting ‘F10’ saves an image of the render texture to your project’s root folder.
Override Quilt – sometimes I want to play quilt images that I rendered ahead of time, from real-world cameras or from a non-real-time render. You can put any quilt texture into ‘Override Quilt’ – as long as ‘Tiling’ matches the quilt texture’s tiling, it’ll show up in the Looking Glass.
Tiling – manually set how many views the system renders. Set a low view count for lightweight computers without GPUs or with low-end GPUs (cough mac cough). Tiling defaults to ‘High’ (45 views, 819x455 px/view), and we occasionally run at ‘Standard’ for scenes that push the GPU’s limits. ‘Custom’ lets you try whatever quilt arrangement you’d like. A higher view count feels a little smoother and nicer, but you get diminishing returns beyond 45.
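Most of the settings above can also be driven from a script at runtime. A minimal sketch – note that the Capture component’s field names here (`size`, etc.) are guesses on my part, so check the SDK’s actual public fields; the pattern itself is just ordinary Unity scripting, and the child camera’s properties (`farClipPlane`, `orthographic`, `fieldOfView`) are standard Unity Camera fields:

```csharp
using UnityEngine;

// Tweaking Capture settings from a script – a sketch, not SDK source.
// Keep in mind Capture may re-apply its own values to the child camera
// every frame, so prefer the Capture component's fields where they exist.
public class HoloPlayTweaks : MonoBehaviour
{
    public Camera holoPlayChildCamera; // the camera under HoloPlay Capture

    void Start()
    {
        // Push the far clipping plane out, accepting some double-imaging:
        holoPlayChildCamera.farClipPlane *= 2f;

        // Flatten the perspective entirely (the 'orthographic' checkbox):
        holoPlayChildCamera.orthographic = true;

        // Narrow the FOV for a flatter, more telephoto look:
        holoPlayChildCamera.fieldOfView = 20f;
    }
}
```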
HoloPlay UI Camera is super handy if you want to do a dual-screen app, with 2D UI/text elements on your computer screen and 3D stuff in the Looking Glass. The Library and Model Viewer apps are examples of this.
The UI camera sets up a UI canvas and configures the scene so that a build opens two fullscreen windows – one on your main display and another in the Looking Glass. This is more in-depth and deserves its own tutorial – we’ll look into putting one together.
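Until that tutorial exists, the core of the dual-screen trick is standard Unity multi-display support, which the HoloPlay UI Camera wraps up for you. A bare-bones sketch using only stock Unity APIs:

```csharp
using UnityEngine;

// Activating a second display for a dual-screen build.
// This is plain Unity multi-display support, nothing SDK-specific;
// the HoloPlay UI Camera does the equivalent setup for you.
public class DualDisplay : MonoBehaviour
{
    void Start()
    {
        // Display.displays[0] is the main display and is always active.
        // Additional displays must be activated explicitly in a build.
        if (Display.displays.Length > 1)
            Display.displays[1].Activate(); // e.g. the Looking Glass
    }
}
```

A second Camera with its `targetDisplay` set to 1 then renders to that window, while your UI canvas camera keeps `targetDisplay` 0.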
To answer your other questions –
HoloPlay Capture can go anywhere – doesn’t have to be at the top level. It’s actually nice to child it to, say, a camera in a first or third-person game and have it follow the game logic for moving the camera around. In your car camera example, child the whole thing to the Pivot and you’ll get what your main camera sees, but in the Looking Glass.
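For the car camera case, the parenting can be done in the editor hierarchy or from a script – it’s plain Unity transform parenting, nothing SDK-specific (the field names below are just for the example):

```csharp
using UnityEngine;

// Attaching the HoloPlay Capture rig to an existing camera pivot at
// runtime, so the Looking Glass follows the game's camera logic.
public class FollowPivot : MonoBehaviour
{
    public Transform captureRoot; // whatever GameObject holds HoloPlay Capture
    public Transform pivot;       // e.g. the car camera's Pivot

    void Start()
    {
        // Parent Capture under the pivot and snap it to the pivot's pose,
        // so it sees what the main camera sees.
        captureRoot.SetParent(pivot, worldPositionStays: false);
        captureRoot.localPosition = Vector3.zero;
        captureRoot.localRotation = Quaternion.identity;
    }
}
```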
The camera under HoloPlay Capture does need to be a direct child of HoloPlay Capture – Capture will actually create that camera if it isn’t there, and it looks for a direct child.
Naming, tagging and layering don’t matter to our SDK – use them as you would in any Unity scene.
I hope that helps – let me know if anything’s unclear, or if you have any more questions.
*Take a look at this Vimeo channel to see a bunch of quilt videos generated by people in the community (this channel is where the Vimeo app in the Library gets its videos)