Essential Unity SDK Feature - Capture a static still or video for display, then trigger live capture


Hi, after much testing with massive/heavy scenes, I think it would be wise to have a capture/snapshot feature that can be set as a static broadcast, with real-time capture resumed/toggled by dynamic content, such as a player coming into proximity of the capture zone. Additionally, being able to capture and compile video into a "compressed" video quilt would benefit multiple scenarios, from a simple animated-but-static broadcast while the scene waits (with dynamic elements triggering live capture) to cinematic transitions.
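To make the idea concrete, here's a minimal sketch of how such a proximity toggle could look in a user project, assuming the SDK's live-capture component can simply be enabled/disabled. The `liveCapture` and `staticQuiltDisplay` fields are placeholders for illustration, not real SDK names:

```csharp
using UnityEngine;

// Sketch of the proposed proximity trigger: while the player is outside the
// capture zone, show a pre-rendered static quilt; when they enter, hide it
// and re-enable live capture. "liveCapture" stands in for whatever component
// drives real-time quilt rendering -- an assumption, not part of the SDK.
[RequireComponent(typeof(Collider))]
public class CaptureZoneTrigger : MonoBehaviour
{
    [Tooltip("Component that performs real-time multi-view capture")]
    public Behaviour liveCapture;          // assumed: the SDK's capture component

    [Tooltip("Object displaying the pre-rendered static quilt")]
    public GameObject staticQuiltDisplay;  // assumed: a quad/UI showing the snapshot

    void Start()
    {
        // Begin in the cheap "static broadcast" state.
        liveCapture.enabled = false;
        staticQuiltDisplay.SetActive(true);
    }

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        staticQuiltDisplay.SetActive(false);
        liveCapture.enabled = true;   // resume real-time capture
    }

    void OnTriggerExit(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        liveCapture.enabled = false;  // back to the static quilt
        staticQuiltDisplay.SetActive(true);
    }
}
```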

This would greatly free up game performance, especially with multiple displays in use.
There are already some 360° video capture tools, like Helios, that you could integrate with instead of writing everything from scratch. Maybe specifically numbered images from the Helios camera could be automatically assembled into a quilt? Just an idea.
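The "numbered images into a quilt" step is mostly just tiling. Below is a rough sketch, assuming the views arrive as sequentially numbered `Texture2D`s and a common Looking Glass layout of 5 columns x 9 rows (45 views); the bottom-left-first tile order is my assumption and should be checked against the actual quilt spec:

```csharp
using UnityEngine;

// Sketch: assemble sequentially numbered view textures (e.g. exported by a
// multi-view camera rig such as Helios) into a single quilt texture. A quilt
// is just a grid of views; tiles here are placed left-to-right starting from
// the bottom-left (assumed ordering -- verify against your target format).
public static class QuiltBuilder
{
    public static Texture2D Build(Texture2D[] views, int columns, int rows)
    {
        int tileW = views[0].width;
        int tileH = views[0].height;
        var quilt = new Texture2D(tileW * columns, tileH * rows);

        for (int i = 0; i < views.Length && i < columns * rows; i++)
        {
            int col = i % columns;
            int row = i / columns;   // row 0 = bottom row of the quilt
            quilt.SetPixels(col * tileW, row * tileH, tileW, tileH,
                            views[i].GetPixels());
        }
        quilt.Apply();
        return quilt;
    }
}
```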

Adding this still/video capture feature would increase usability for developers who might otherwise feel they aren't getting the performance they need, or who are put off by not being able to have cinematic transitions. If there is a way to do all this now, please let me know. Thank you.


We’re working on some optimizations that will help with rendering, but no matter how we approach it, massive/heavy scenes will never be a good fit for real-time rendering on the Looking Glass.

As for switching between static and dynamic rendering, that solution doesn’t sound general enough to warrant integrating into the main SDK, but I’d be happy to help you implement it in your own project. A good starting point might be checking out Vimeo’s recording SDK (they have quite a bit of experience with recording quilt videos from our SDK at this point). Check it out here:


@kyle Cool, thanks for the links! Is there much more overhead to displaying the quilt on the Looking Glass than to generating the quilt from multiple views? If not, then an option to disable the live feed from the game and just show a prerecorded video quilt should save a lot of performance when live capture isn’t needed. I believe the Looking Glass concept will evolve into mini theme-park jump scares, comedy, etc., and in those scenarios a live broadcast also wouldn’t be required; videos could simply be triggered. Even a feature to capture/record cinematic game transitions or promos would help create content. I’m checking out the Vimeo links.
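The "triggered video" scenario above can be sketched with Unity's built-in `VideoPlayer`. How the resulting `RenderTexture` then reaches the Looking Glass is SDK-specific and assumed here; you'd point whatever component displays a quilt at that texture:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: play a prerecorded quilt video into a RenderTexture when the player
// enters a zone, instead of rendering the scene live. The link from the
// RenderTexture to the Looking Glass output is an assumption left to the SDK.
[RequireComponent(typeof(Collider))]
public class TriggeredQuiltVideo : MonoBehaviour
{
    public VideoPlayer player;          // plays the prerecorded quilt video
    public RenderTexture quiltTarget;   // texture the display component reads

    void Start()
    {
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = quiltTarget;
        player.isLooping = false;
        player.playOnAwake = false;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player") && !player.isPlaying)
            player.Play();              // e.g. the jump-scare clip
    }
}
```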

Additionally, I hope you add Vulkan support for multi-GPU setups and multi-core CPUs/GPUs. If I used two or more RTX 2080 Tis, combining their memory and GPU power as one, without the bottlenecks of the old GTX-era SLI, that would easily give enough power to run the HP1 and VR together. Also, Unity’s Burst compiler (Unity 2018.3+ only) may help if you can add support for it; the compiled code apparently accesses data faster.
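For reference, "Burst support" on the user side mostly means moving heavy per-frame math into the C# Job System so Burst can compile it to optimized native code. A minimal sketch, assuming the Burst and Jobs packages are installed (the job body is a placeholder):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Sketch of a Burst-compiled job (Unity 2018.3+ with the Burst package):
// heavy per-view work goes in Execute(), which Burst compiles to native code.
[BurstCompile]
public struct ViewMathJob : IJob
{
    [ReadOnly] public NativeArray<float> input;
    public NativeArray<float> output;

    public void Execute()
    {
        for (int i = 0; i < input.Length; i++)
            output[i] = input[i] * 2f;  // placeholder for real per-view math
    }
}
// Scheduling: new ViewMathJob { input = a, output = b }.Schedule().Complete();
```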

Lastly, there is a GPU occlusion-culling system on the Asset Store that I’d like to see working with the display, but maybe you’d benefit from making your own (I’ve doubled frame rates using InstantOC, which isn’t GPU-powered). I’m talking to some other devs about how to get this occlusion system working for multiple cameras/players, so that it hides everything that neither camera can see, rather than relying only on clipping distance. For my Looking Glass project, people would need to see into the distance but still hide everything that isn’t in any frustum.
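The "hide everything outside every camera's frustum" part (frustum culling, as opposed to true occlusion culling) can be done with stock Unity calls. A naive sketch that scans every renderer each frame; a real system would add spatial partitioning:

```csharp
using UnityEngine;

// Sketch of multi-camera frustum culling: a renderer stays visible if ANY of
// the cameras can see its bounds, and is hidden when it is outside every
// camera's frustum -- independent of far-clip distance. Brute-force for
// illustration; use spatial partitioning for large scenes.
public class MultiCamFrustumCuller : MonoBehaviour
{
    public Camera[] cameras;        // e.g. the VR camera + the Looking Glass rig
    public Renderer[] renderers;    // objects to cull

    void LateUpdate()
    {
        foreach (var r in renderers)
        {
            bool visible = false;
            foreach (var cam in cameras)
            {
                var planes = GeometryUtility.CalculateFrustumPlanes(cam);
                if (GeometryUtility.TestPlanesAABB(planes, r.bounds))
                {
                    visible = true;
                    break;
                }
            }
            r.enabled = visible;
        }
    }
}
```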

What I really need help with right away is multi-display support: running VR natively but having the ability to switch on the Looking Glass. My tower is powerful enough to run both at the same time, but I don’t know what script I need to attach to the Looking Glass to force it active during runtime with its own native resolution, frame rate, window mode, etc.
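Unity's multi-display API covers the activation part: secondary displays start inactive and must be activated once at runtime (they can't be deactivated again until the app exits), and a camera is routed to one via `targetDisplay`. A minimal sketch; which camera/rig should drive the Looking Glass is up to your project:

```csharp
using UnityEngine;

// Sketch for activating a second display (e.g. the Looking Glass) at runtime.
// Display.displays[0] is the main display; secondary displays must be
// activated before use. The parameterless Activate() uses the display's
// native mode; the (width, height, refreshRate) overload is Windows-only.
public class ActivateLookingGlass : MonoBehaviour
{
    public Camera lookingGlassCamera;   // the camera/rig that should render to it
    public int displayIndex = 1;        // 0 = main display

    void Start()
    {
        if (Display.displays.Length > displayIndex)
        {
            Display.displays[displayIndex].Activate();
            lookingGlassCamera.targetDisplay = displayIndex;
        }
    }
}
```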