Tiny updates. Today I verified the buttons work with lsinput and input-events, and I’ve learned how to make Blender produce multiview renders (preferably in a single EXR file). I also messed about with image-loading libraries, and I think my next step might involve making Blender show something interesting: it should be possible to autogenerate alternate view projections using offscreen buffers, and from there build a 3D viewport for the Looking Glass.
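For my own reference, the multiview render settings boil down to something like this sketch (Blender 2.7x API names; the extra view names and camera suffixes are placeholders I made up, not anything Blender mandates):

```python
import bpy

scene = bpy.context.scene

# Enable multiview rendering with arbitrarily many views,
# rather than the plain left/right stereo pair.
scene.render.use_multiview = True
scene.render.views_format = 'MULTIVIEW'

# Add some extra views; each view picks its camera by object-name
# suffix. Names and suffixes here are placeholders.
for i in range(5):
    view = scene.render.views.new(name="view%02d" % i)
    view.camera_suffix = "_%02d" % i

# Pack all views into one multilayer EXR file
# ('MULTIVIEW' should put every view in a single file).
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.views_format = 'MULTIVIEW'
```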
Simple recipe for a fullscreen viewport: Ctrl-Alt-W to open a new window, Alt-F10 to expand the viewport to fill it, and Alt-F11 to make the window fullscreen. I think I’ll identify the viewport by naming its screen LookingGlass.
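A script can then pick that viewport up later; a minimal sketch, assuming the screen really is named LookingGlass:

```python
import bpy

def find_lookingglass_view3d():
    """Return the (screen, area) pair for the 3D viewport on the
    screen named 'LookingGlass', or None if it isn't set up."""
    screen = bpy.data.screens.get("LookingGlass")
    if screen is None:
        return None
    for area in screen.areas:
        if area.type == 'VIEW_3D':
            return screen, area
    return None
```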
Early experiments with making a camera array didn’t yield a convenient method to name and place them all, though I was able to add automatic lens shift by driving it from each camera’s local X coordinate.
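For the record, the driver part looked roughly like this sketch (the camera count, spacing, and convergence distance are placeholder numbers, and the expression’s sign probably needs flipping depending on the rig):

```python
import bpy

def make_camera_array(count=8, spacing=0.1, convergence=5.0):
    """Create a row of cameras along X whose horizontal lens shift
    is driven by each camera's local X location."""
    for i in range(count):
        data = bpy.data.cameras.new("lgcam_%02d" % i)
        cam = bpy.data.objects.new("lgcam_%02d" % i, data)
        bpy.context.scene.objects.link(cam)  # 2.8+: bpy.context.collection.objects.link(cam)
        cam.location.x = (i - (count - 1) / 2.0) * spacing

        # Driver: shift_x = -x * lens / (convergence * sensor_width),
        # so every camera converges on a plane at the given distance.
        # Constants are baked into the expression; sign may need tuning.
        fcu = data.driver_add("shift_x")
        drv = fcu.driver
        drv.type = 'SCRIPTED'
        var = drv.variables.new()
        var.name = "x"
        var.type = 'TRANSFORMS'
        var.targets[0].id = cam
        var.targets[0].transform_type = 'LOC_X'
        var.targets[0].transform_space = 'LOCAL_SPACE'
        drv.expression = "-x * %.4f / (%.4f * %.4f)" % (
            data.lens, convergence, data.sensor_width)
```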
It’s possible the code at https://github.com/lonetech/LookingGlass might interest someone. It’s extremely slow, but at the moment it does demonstrate two distinct views. You’ll have to mess with it to transfer the JSON calibration data from one part to the other. It’s all still rough proof-of-concept hacks.
I got the gpu.offscreen sample overlay running, which does one offscreen view. Extending it to multiple views and covering the area should be reasonably easy. The tricky steps will be modifying the matrices for distinct views, making all the views available to a shader, and writing and connecting a shader to do the subpixel mapping (the proof of concept is ridiculously slow Python at the moment).
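The matrix step, at least, is small. A sketch of what I have in mind, assuming all views should converge on a plane at a fixed distance; view_count, view_cone, and focal are placeholder parameters, and the signs may need flipping:

```python
import math
from mathutils import Matrix

def per_view_matrices(base_view, base_proj, view_count=45,
                      view_cone=0.7, focal=5.0):
    """Yield (view, projection) matrix pairs for a horizontal sweep
    of viewpoints that all share one convergence plane."""
    for i in range(view_count):
        frac = i / float(max(view_count - 1, 1)) - 0.5  # -0.5 .. +0.5
        offset = focal * math.tan(frac * view_cone)     # eye offset along camera X

        # Slide the eye sideways: in camera space the world moves by
        # -offset. (Use @ instead of * on Blender 2.80+.)
        view = Matrix.Translation((-offset, 0.0, 0.0)) * base_view

        # Shear the frustum so the convergence plane stays fixed on
        # screen: shifting the near-plane window by -offset*near/focal
        # changes the (r+l)/(r-l) term, i.e. proj[0][2], by
        # -(offset/focal)*proj[0][0].
        proj = base_proj.copy()
        proj[0][2] -= offset * proj[0][0] / focal
        yield view, proj
```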
Another project would be to use mpv’s shader support to add a quilt remapping. I’ve fetched the 5x9 quilt videos from Vimeo for testing that.