is it possible to use these two together? i’ve just done a quick conversion of a project over from the previous sdk, so far (mostly) so good. however the main feature we want to utilize (the simple dof) blurs out all the ui. haven’t done much digging yet, just wondering if anyone might have an idea off the top of their head. cheers!
This is actually a really tough problem, and it’s a good point that we might need some documentation on how to deal with it! I unfortunately don’t have an easy solution, but let me at least explain the problem.
DoF relies on the depth buffer to know how much to blur things. UI elements (and many other types of materials, especially transparent ones) don’t write to the depth buffer. Because of this, DoF reads the UI as sitting on the far clipping plane, which is of course inaccurate and leads to extensive blurring.
If your UI doesn’t need to be in world space but can instead exist in screen space, the conventional solution is to render your UI with a secondary camera that renders last. This camera doesn’t have to be a HoloPlay Capture object at all; it can just be a standard Unity camera. I should also note that this isn’t unique to our SDK - the issue comes up quite often in 2D rendering as well. Here’s a solution I found on the Unity forums that might work: https://forum.unity.com/threads/is-it-possible-to-exlude-menu-ui-from-in-depth-of-field-image-effect.383529/
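To make that concrete, here’s a minimal (untested) sketch of the overlay-camera setup in plain Unity. The layer name "UI" and the depth value are assumptions you’d adapt to your project:

```csharp
using UnityEngine;

// Attach to an empty GameObject. Creates a second camera that renders only
// the UI layer, after the main (scene) camera, clearing depth only so the
// UI is composited on top and is untouched by the DoF pass.
public class UIOverlayCamera : MonoBehaviour
{
    void Start()
    {
        Camera uiCam = gameObject.AddComponent<Camera>();
        uiCam.clearFlags = CameraClearFlags.Depth;    // keep the scene's color buffer
        uiCam.cullingMask = LayerMask.GetMask("UI");  // render only UI-layer objects
        uiCam.depth = 100f;                           // render after the main camera
        uiCam.orthographic = true;                    // screen-space style UI

        // Make sure the main camera does NOT also draw the UI layer,
        // otherwise the UI would still get blurred there.
        if (Camera.main != null)
            Camera.main.cullingMask &= ~LayerMask.GetMask("UI");
    }
}
```

Your Canvas would then be set to Screen Space - Camera with this overlay camera assigned as its Render Camera, and your UI objects placed on the "UI" layer.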
Hmm, I looked into this a bit more and it seems to only be an issue using world space canvas, on our SDK or in standard Unity.
The route to the solution, I think, would be to find a way to have the UI shader write to the depth buffer. This isn’t easy though, and may indeed be impossible: https://forum.unity.com/threads/can-worldspace-ui-be-made-to-write-to-the-depth-buffer.526969/
We have a prototype implementation of a custom UI that works in true 3D without the need of adding an additional HoloPlay Capture. I’m going to bump the priority on refining that with our SDK team!
@alxdncn appreciate the quick responses. it’s mostly an issue because doing screen or overlay canvases doesn’t gel well with content on this display -> most multi-cam approaches won’t work. previously i’d been testing multiple holoplay cameras in the scene and found performance to be too slow, though perhaps with the latest sdk that’s improved? will keep my eyes peeled for the custom ui work!
is there no simple way to do this in the updated sdk? previously we could just add captures to the quilt. it seems now the sdk is trying to be “more in line” with unity (personally, i preferred the previous split component model, but anyways), so i’m setting up my holoplay components as i would a traditional multi-cam setup. however, this obviously won’t work correctly with the clear flags set to depth on the ui camera.
one option that comes to mind is rendering the output of the main camera into a texture and using an override quilt on the ui camera, but this is much less straightforward than the previous workflow, and i’m not even entirely sure i’d get the correct result.
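in case it helps anyone sketching this out, the plain-unity half of that idea looks roughly like the following (untested). the holoplay override-quilt hookup isn’t shown - everything below is standard unity only, and the field names are just placeholders:

```csharp
using UnityEngine;

// rough sketch, attached to the ui camera: the main camera (with dof)
// renders into a texture, we blit that to the screen just before the ui
// camera draws, and the ui goes on top unblurred.
public class CompositeSceneUnderUI : MonoBehaviour
{
    public Camera mainCam;      // the scene camera running dof
    RenderTexture sceneRT;

    void Start()
    {
        sceneRT = new RenderTexture(Screen.width, Screen.height, 24);
        mainCam.targetTexture = sceneRT;

        Camera uiCam = GetComponent<Camera>();
        uiCam.clearFlags = CameraClearFlags.Nothing;  // don't wipe the blitted scene
        uiCam.depth = mainCam.depth + 1f;             // ui camera renders last
    }

    void OnPreRender()
    {
        // copy the blurred scene to the backbuffer; the ui camera then
        // draws its (unblurred) ui over it
        Graphics.Blit(sceneRT, (RenderTexture)null);
    }
}
```

no idea how this interacts with the quilt rendering on the sdk side though - that’s exactly the part i’d want confirmation on.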
I wasn’t aware that you could accomplish something like this on the old SDK, that’s an interesting technique. You’re right that we’ve tried to get more in line with Unity and also streamline the number of components users need to understand. You might be right that setting the camera to render to a quilt and then drawing the UI over top could work, but I’ll need @kyle to weigh in on this. Any thoughts Kyle?
Hi Timmeh, sorry for the inconvenience! The split component model definitely had its upsides, and I didn’t realize people were utilizing the ability to use multiple captures (in fact, this is the first I’m hearing about it, which is why I didn’t prioritize it in the update). I don’t see this feature as incompatible with the new version though, so I’ll add it to our roadmap for the 1.1.0 update.