Custom macOS Metal version using raytracing


#1

Hi everyone,

Some proof-of-concept code to answer the question: can I raytrace the Looking Glass per (sub)pixel rather than render 45 views? The answer is yes!

In addition, this code grabs the calibration info from the HID interface and detects the appropriate screen, so it may be of use to others.
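
For anyone curious about the HID part, the gist on macOS is just IOKit's HID manager: match the device, then read feature reports. A rough sketch of the plumbing only (the vendor/product IDs below are placeholders, and the assumption that the calibration JSON sits in feature report 0 in 64-byte chunks is exactly that, an assumption):

```cpp
// Rough sketch: find a HID device by vendor/product ID and read one feature report.
// Build with: clang++ read_calib.cpp -framework IOKit -framework CoreFoundation
#include <IOKit/hid/IOHIDLib.h>
#include <cstdint>
#include <cstdio>
#include <vector>

static CFDictionaryRef makeMatching(int32_t vid, int32_t pid) {
    CFMutableDictionaryRef d = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFNumberRef v = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &vid);
    CFNumberRef p = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pid);
    CFDictionarySetValue(d, CFSTR(kIOHIDVendorIDKey), v);
    CFDictionarySetValue(d, CFSTR(kIOHIDProductIDKey), p);
    CFRelease(v);
    CFRelease(p);
    return d;
}

int main() {
    IOHIDManagerRef mgr = IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);

    // PLACEHOLDER IDs: substitute the Looking Glass' actual vendor/product ID.
    CFDictionaryRef match = makeMatching(0x1234, 0x5678);
    IOHIDManagerSetDeviceMatching(mgr, match);
    CFRelease(match);
    if (IOHIDManagerOpen(mgr, kIOHIDOptionsTypeNone) != kIOReturnSuccess) return 1;

    CFSetRef devSet = IOHIDManagerCopyDevices(mgr);
    if (!devSet || CFSetGetCount(devSet) == 0) { std::printf("no device found\n"); return 1; }
    std::vector<const void*> devs(CFSetGetCount(devSet));
    CFSetGetValues(devSet, devs.data());
    IOHIDDeviceRef dev = (IOHIDDeviceRef)devs[0];   // take the first match

    IOHIDDeviceOpen(dev, kIOHIDOptionsTypeNone);

    // The calibration blob (JSON) appears to be chunked across feature reports;
    // report ID 0 and the 64-byte size here are assumptions, not a documented layout.
    std::vector<uint8_t> buf(64);
    CFIndex len = (CFIndex)buf.size();
    if (IOHIDDeviceGetReport(dev, kIOHIDReportTypeFeature, 0, buf.data(), &len)
            == kIOReturnSuccess) {
        std::printf("read %ld bytes of calibration data\n", (long)len);
    }

    CFRelease(devSet);
    CFRelease(mgr);
    return 0;
}
```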


#2

Nice! Too bad I don’t have a macOS system to try it…

I was toying with the same idea, but couldn’t figure out how to get rid of the discrete number of views, given that we basically don’t know much about the physical parameters of the display. So starting from the current formulas based on the 45 views seems to be the only way, for example by increasing the number of views to the number of horizontal subpixels.

So I’m wondering: are you still basing the rays to trace on the original 5x9 quilt layout? I.e., for each output pixel, compute the 3 subpixel positions in the quilt, map each of those to a view and a pixel within that view, and finally compute the ray for that subpixel and trace it through the scene?

Or are you using the mapping of LG display subpixel to physical view cone and computing a ray (in the opposite direction) based on that?
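
To be concrete, by the first option I mean something along these lines, just a sketch assuming the standard 45-view, 5x9 quilt (view 0 at the bottom-left, increasing row-major) and a per-subpixel view fraction coming from the lenticular calibration formula:

```cpp
#include <algorithm>
#include <cmath>

struct QuiltUV { float u, v; };

// Map a per-subpixel view fraction in [0,1) plus the on-screen uv to a uv
// inside a 45-view quilt laid out as 5 columns x 9 rows. Do this once per
// subpixel, then either fetch that quilt texel or, when ray tracing, trace
// the ray that would have produced it.
QuiltUV quiltLookup(float fraction, float screenU, float screenV)
{
    const int cols = 5, rows = 9, views = cols * rows;

    int view = std::min((int)std::floor(fraction * views), views - 1);
    float tileU = (view % cols + screenU) / cols;   // column of the selected view
    float tileV = (view / cols + screenV) / rows;   // row of the selected view
    return { tileU, tileV };
}
```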


#3

Every subpixel has a subtly different horizontal lightfield angle based on the alignment of the lenticular lens above it. That the Unity SDK chooses to split this into 45 discrete views is likely a compromise between visual quality and the cost of having to render too many views. The whole notion of a “quilt” is a software artifact to support the approach of rendering multiple views - which I certainly agree is more convenient in most cases.

In my example I simply calculate the exact horizontal angle for each subpixel and raytrace that - and it seems to work :wink:
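
Roughly, the per-subpixel calculation looks something like this. This is only a sketch using the HoloPlay-style calibration values (pitch, slope, center, viewCone, DPI); the exact signs and the subpixel offset differ between SDK and firmware versions, so don’t read it as the literal shader code:

```cpp
#include <cmath>

// Calibration values reported by the display (names follow the calibration JSON).
struct Calibration {
    float pitch, slope, center;   // lenticular pitch, slope, phase offset
    float viewCone;               // total horizontal view cone, in degrees
    float dpi, screenW, screenH;  // panel DPI and resolution in pixels
};

// View fraction in [0,1) for one subpixel: (px, py) is the pixel, s is the
// subpixel index (0 = R, 1 = G, 2 = B). Follows the commonly published
// HoloPlay-style lenticular formula; signs/offsets may differ on your unit.
float viewFraction(const Calibration& c, int px, int py, int s)
{
    float tilt    = c.screenH / (c.screenW * c.slope);
    float pitchUV = c.pitch * c.screenW / c.dpi * std::cos(std::atan(1.0f / c.slope));
    float subpUV  = 1.0f / (3.0f * c.screenW);         // one subpixel width in uv

    float u = (px + 0.5f) / c.screenW + s * subpUV;
    float v = (py + 0.5f) / c.screenH;

    float a = (u + v * tilt) * pitchUV - c.center;
    return a - std::floor(a);                            // wrap into [0,1)
}

// Turn the fraction into a horizontal ray angle by sweeping the calibrated
// view cone. A simple linear sweep; the true optics may be slightly non-linear.
float rayAngleRadians(const Calibration& c, float fraction)
{
    const float kPi = 3.14159265f;
    return (fraction - 0.5f) * c.viewCone * kPi / 180.0f;
}
```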


#4

Right, that I understand. But what I seem to be missing in all the code I’ve seen so far is any link to the material characteristics of the Looking Glass. I.e. at some point you need the IOR of the lenticular lens array to correctly model the outgoing ray direction of light from (e.g. the center of) a subpixel. But maybe I fail to grasp how the math can be substantially simplified due to the regular layout of the subpixels in relation to the lenticular lens array.
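
To be concrete about what I mean: the bending itself is just Snell’s law; the missing pieces are the lens surface geometry and the actual IOR. A generic refraction helper would look like the sketch below (the ~1.49 acrylic IOR mentioned in the comment is a guess, not a measured property of the display):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Snell's law refraction: incident direction i (unit), surface normal n (unit,
// pointing against i), eta = n1 / n2 (e.g. ~1.49 -> 1.0 when exiting acrylic,
// a guessed value). Returns false on total internal reflection.
bool refract(Vec3 i, Vec3 n, float eta, Vec3& out)
{
    float cosi = -dot(i, n);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return false;                      // total internal reflection
    out = add(scale(i, eta), scale(n, eta * cosi - std::sqrt(k)));
    return true;
}
```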


#5

This is amazing @baby-rabbit, can’t wait to try it out! We’ve had some internal discussions about raytracing techniques but haven’t yet found the time to get a test up and running, so it’s awesome to see someone else put it together. I agree that this shouldn’t be the default behavior of the SDK, but there are potentially very powerful use cases for this approach.

@kyle you might want to take a look at this!


#6

My standalone C++ program was also “viewless” in that it took the view fraction and directly calculated what was being seen. I could quantize the fraction to simulate a certain number of views, but above 45-50 I could not see much difference. Below 32 I could see “view stepping”.
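
That quantization is just snapping the continuous fraction to the center of one of N buckets, something like:

```cpp
#include <algorithm>
#include <cmath>

// Snap a continuous view fraction in [0,1) to the center of one of numViews
// buckets, simulating a render with only numViews discrete views.
float quantizeFraction(float fraction, int numViews)
{
    int view = std::min((int)std::floor(fraction * numViews), numViews - 1);
    return (view + 0.5f) / numViews;
}
```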

To your comment, paulmelis: I was just using the fraction to sweep a view angle; I did not reverse out the actual angles due to lenticular & acrylic IOR, etc. I think that would help most if there is actually a non-linearity between the fraction and the actual angle. I don’t know the properties of the lenticular, nor the spacing from there to the acrylic, so it would take some optical measurements to figure out. That’s beyond what I have time for right now.

Another area that I ponder is the best way to (for lack of a better term) anti-alias the display (reduce the artifacts). I noticed the Japanese Room picture looks better than most content, and I think it’s because it has some defocusing on the near (and maybe far) plane due to depth of field. In my experiments, the view-ray density is highest for things near the middle depth and tails off for near and far views, so having some blurring in the source images there is probably helping a lot to reduce the visibility of artifacts. So some DOF blurring in the quilt rendering (or in the ray tracing, in baby-rabbit’s case) could help.
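
In ray-tracing terms that DOF blur is essentially thin-lens sampling with the focal distance set to the display’s zero-parallax plane. A rough sketch only; aperture and focusDist here are tuning knobs, not values measured from the display:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Thin-lens depth of field: jitter the ray origin over a small aperture and
// re-aim it at the point focusDist along the original (unit) direction.
// Choosing focusDist so that point lies on the zero-parallax plane keeps
// mid-depth content sharp while near/far content blurs, masking view stepping.
void thinLensRay(Vec3& origin, Vec3& dir, float aperture, float focusDist,
                 std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    Vec3 focus = add(origin, scale(dir, focusDist));    // point kept in focus

    Vec3 jitter = { u(rng) * aperture, u(rng) * aperture, 0.0f };
    origin = add(origin, jitter);                        // sample the aperture

    Vec3 d = sub(focus, origin);
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    dir = scale(d, 1.0f / len);                          // renormalize
}
```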


#7

@Dithermaster amazing work!!

And yes, adding some DOF when things get too far from the focal plane is sort of our go-to method for masking those artifacts.