My standalone C++ program was also “viewless” in that it took the view fraction and directly calculated what was being seen. I could quantize the fraction to simulate a given number of discrete views; above roughly 45-50 views I could not see much difference, but below about 32 I could see “view stepping”.
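For concreteness, the quantization I mean is just snapping the continuous fraction to the center of one of N view bins before using it. A minimal sketch (the helper name and its exact form are mine, not the program's actual code):

```cpp
#include <cmath>

// Snap a continuous view fraction in [0, 1) to the center of one of
// numViews discrete view bins, simulating a fixed view count.
double quantizeViewFraction(double fraction, int numViews)
{
    int view = static_cast<int>(std::floor(fraction * numViews));
    if (view >= numViews) view = numViews - 1;  // guard against fraction == 1.0
    return (view + 0.5) / numViews;             // center of the chosen bin
}
```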
To your comment, paulmelis: I was just using the fraction to sweep a view angle; I did not reverse out the actual ray angles implied by the lenticular lens, the acrylic's IOR, etc. I think the biggest potential improvement there would come if the mapping between the fraction and the actual angle turns out to be non-linear. I don't know the properties of the lenticular, nor the spacing from there to the acrylic, so it would take some optical measurements to figure it out. Beyond what I have time for right now.
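To illustrate the kind of non-linearity I mean, here is a sketch comparing the plain linear sweep with one plausible correction: refraction exiting the acrylic via Snell's law. The view-cone width and the assumption that refraction dominates are guesses, not measurements; the real mapping would also depend on the lenticular pitch and the lens-to-acrylic spacing.

```cpp
#include <algorithm>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Linear sweep: map a view fraction in [0, 1] to an angle across an
// assumed view cone. Roughly what I was doing; viewConeDeg is a
// made-up parameter, not a measured value.
double fractionToAngleLinear(double fraction, double viewConeDeg = 40.0)
{
    return (fraction - 0.5) * viewConeDeg;  // degrees, 0 = head-on
}

// Hypothetical correction: refraction from acrylic (IOR ~1.49) into
// air bends rays outward non-linearly with angle.
double fractionToAngleRefracted(double fraction, double viewConeDeg = 40.0)
{
    const double iorAcrylic = 1.49;
    double insideRad = fractionToAngleLinear(fraction, viewConeDeg) * kPi / 180.0;
    double s = iorAcrylic * std::sin(insideRad);  // sin(theta_air) = n * sin(theta_inside)
    s = std::clamp(s, -1.0, 1.0);                 // clamp at grazing / total internal reflection
    return std::asin(s) * 180.0 / kPi;
}
```

With those made-up numbers, a fraction of 1.0 maps to 20 degrees linearly but about 30.6 degrees after refraction, so the two views would land in noticeably different places if the effect is real.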
Another area I ponder is the best way to (for lack of a better term) anti-alias the display, i.e. reduce the artifacts. I noticed the Japanese Room picture looks better than most content, and I think that's because it has some defocusing on the near (and maybe far) plane due to depth of field. In my experiments, the view-ray density is highest for geometry near the middle depth and tails off toward the near and far extremes, so having some blurring in the source images at those depths is probably doing a lot to reduce the visibility of artifacts. So some DOF blurring in the quilt rendering (or in the ray tracing, in the baby-rabbit case) could help; a sketch of the idea follows.
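One cheap way to fake that, assuming you have a depth value per pixel: a blur radius that grows with distance from the focal / zero-parallax plane, fed to any ordinary Gaussian blur (or used as a circle-of-confusion size in a proper DOF pass). All three knobs below are hypothetical tuning values, not anything measured from the display.

```cpp
#include <algorithm>
#include <cmath>

// Depth-dependent blur radius (in pixels) that grows away from the
// focal plane, where the view-ray density is highest. Hypothetical
// parameters: focalDepth is the assumed middle depth, blurPerUnit
// scales blur with depth offset, maxBlur caps the smearing.
float dofBlurRadius(float depth,
                    float focalDepth = 0.5f,
                    float blurPerUnit = 8.0f,
                    float maxBlur = 4.0f)
{
    float blur = std::fabs(depth - focalDepth) * blurPerUnit;
    return std::min(blur, maxBlur);
}
```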