Screenshotting the Open Window

Hello everyone!

I’m trying to better understand how the HoloPlay code takes images (hp_copyViewToQuilt) and turns them into a lightfield (hp_drawLightfield) since I’m still having issues with getting a pre-formed quilt to display. To this end, I’m using uniquely colored squares to see where Looking Glass will place the individual pixels.
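For context, my working assumption is that hp_copyViewToQuilt is just packing each rendered view into a tile of one big texture. This little sketch is how I picture that layout (the names and numbers are mine, not the actual API), so please correct me if I have it wrong:

```typescript
// My current mental model of the quilt: the individual views tiled into one
// big texture, ordered left-to-right, bottom-to-top. Names and numbers here
// are illustrative, not the actual HoloPlay API.
interface QuiltLayout {
  columns: number;    // tiles across, e.g. 5
  rows: number;       // tiles down, e.g. 9
  tileWidth: number;  // pixel width of one view
  tileHeight: number; // pixel height of one view
}

// Pixel offset of view i inside the quilt texture.
function tileOrigin(i: number, q: QuiltLayout): { x: number; y: number } {
  return {
    x: (i % q.columns) * q.tileWidth,
    y: Math.floor(i / q.columns) * q.tileHeight,
  };
}

// e.g. where view 7 of a 45-view, 5x9 quilt would land
console.log(tileOrigin(7, { columns: 5, rows: 9, tileWidth: 819, tileHeight: 455 }));
```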

However, I can’t seem to grab the resulting image of diagonal lines–whenever I use the Snipping Tool, the window vanishes! I can see there is some sort of slope/pitch idea happening, which I’ve confirmed by reading the .js code, but I’d like to be able to look at this more closely.
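From what I can tell in the .js code, the mapping looks roughly like the following (my own reconstruction, so I may well be misreading the calibration terms):

```typescript
// My reading of the lenticular mapping in the shader: pitch, tilt (the slope
// behind the diagonal lines) and center come from the device calibration.
// Variable names are mine; the real code differs.
function viewFraction(
  u: number, v: number,  // normalized screen coordinates, 0..1
  pitch: number,         // how many lenticules span the screen
  tilt: number,          // slope of the lenticules
  center: number,        // per-device phase offset
): number {
  // The v * tilt term is what turns solid-colored test views into diagonal stripes.
  const a = (u + v * tilt) * pitch - center;
  // Fractional part in 0..1 picks which view this point of the screen shows.
  return ((a % 1) + 1) % 1;
}

// With 45 views, the view shown at (u, v) would be roughly
// Math.floor(viewFraction(u, v, pitch, tilt, center) * 45).
```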

If anyone has any insight into my issues, please let me know! Any additional resources or tutorials for better understanding what’s happening code-wise would also be appreciated, since this is all still pretty new to me.

I just took screenshots using the PrtScn key (Windows) and Command+Shift+3 (macOS).

Things that can make the rendering not work: the Looking Glass display not being set to exactly its native resolution (e.g., 2560x1600 for the small one), and the driver not being able to get the calibration data over the USB connection.
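If it helps while debugging, a quick sanity check (assuming you’re on the WebGL/browser side) is to compare what the OS reports for the display against the panel’s native resolution:

```typescript
// Sanity check: the display should report exactly the panel's native
// resolution (2560x1600 for the small Looking Glass). If display scaling is
// on, window.screen reports scaled values, so multiply by devicePixelRatio.
const expected = { width: 2560, height: 1600 };
const actual = {
  width: window.screen.width * window.devicePixelRatio,
  height: window.screen.height * window.devicePixelRatio,
};
if (actual.width !== expected.width || actual.height !== expected.height) {
  console.warn(
    `Display reports ${actual.width}x${actual.height}, expected ` +
    `${expected.width}x${expected.height}; the lenticular mapping will be off.`
  );
}
```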

Also, on Windows, if you hold the Windows key and hit the “Prt Sc” key, it will save a screenshot to the Screenshots folder inside the user’s Pictures folder.

I’m pretty sure the way the lens splits up the light is at a subpixel level. So one pixel’s subpixels (the red portion, the green portion, and the blue portion) could be going in independent directions. So using colors in your experiments may or may not let you see what you are trying to see. (I could be wrong about this; it’s just my understanding from the interviews I’ve seen and the tech documents they have on here.)

I really would like to be able to pre-render the output video so that I can just play it back in a video player at fullscreen, so it wouldn’t eat up my CPU cycles. Since screenshotting is possible, this should be possible too.

Correct, rendering is done by subpixel.
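Concretely, each of the red, green, and blue subpixels gets its own small horizontal offset before the same pitch/tilt calculation, which is why each channel can come from a different view. A rough sketch (the exact constants come from calibration, and the names are mine):

```typescript
// Per-subpixel version of the lenticular mapping: R, G and B are offset by
// one subpixel each, so each channel can land on a different view.
// subp is the width of one subpixel in normalized screen coordinates.
function subpixelViewFractions(
  u: number, v: number,
  pitch: number, tilt: number, center: number,
  subp: number,  // roughly 1 / (3 * horizontal resolution)
): [number, number, number] {
  const frac = (x: number) => ((x % 1) + 1) % 1;
  const channel = (i: number) => frac((u + i * subp + v * tilt) * pitch - center);
  return [channel(0), channel(1), channel(2)]; // view fraction for R, G, B
}
```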

You could render to a file and play that back; nothing technical is stopping you, and another user here has done it. But you need to keep compression very light, because it will mess up the results. Compression algorithms are designed for natural-looking images, not sharp-edged lightfields. The artifacts compression causes (blurring, macroblocking, misalignment) are nearly invisible in ordinary video but will wreak havoc in lightfield images. Also, the output will only play on the specific device it was rendered for and no other, since each has unique calibration data.

I think the better solution is to use quilts and play back on a device that can handle a full-screen pixel shader. It seems like the Jetson Nano and Raspberry Pi 4 should do it. I have both, but no spare time to try them.
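The shader itself isn’t much more than the per-subpixel mapping above plus a lookup into the right tile of the quilt. A CPU-side sketch of that lookup (assuming the usual left-to-right, bottom-to-top tile order):

```typescript
// Map a view fraction plus the on-screen position to quilt texture coordinates.
// Assumes views are tiled left-to-right, bottom-to-top.
function quiltUV(
  viewFrac: number,              // 0..1 from the lenticular mapping
  u: number, v: number,          // on-screen UV, 0..1
  columns: number, rows: number, totalViews: number,
): { s: number; t: number } {
  const view = Math.min(Math.floor(viewFrac * totalViews), totalViews - 1);
  const col = view % columns;
  const row = Math.floor(view / columns);
  return {
    s: (col + u) / columns,      // horizontal position within the chosen tile
    t: (row + v) / rows,         // vertical position within the chosen tile
  };
}
```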

Basically what Dithermaster said!

I wasn’t able to crack how exactly the pixel shader did its job, so I opted for the pre-render route. I can successfully go from mesh to pre-rendered and compressed video with little difficulty, and the produced videos are playable on an Intel NUC with little quality loss. I imagine a more complex video would come out at much higher bit rates if I compressed it at the same level (which is what I’m running into in my trials with volumetric data).

If you want me to give a rundown on how I did it, I can, but the limitation of only working on the pre-designated Looking Glass is a pretty major one if you’re trying to make content for the community here. I imagine someone has made a video player that efficiently incorporates the shader, but if not, it could be an interesting project to explore!

Yeah, I hope to get a Pi 4 soon to experiment with as well. What do you think would be harder for Raspberry Pi 4 hardware to run: a losslessly compressed “prerendered” output file, or a maximum-resolution quilt, with the advantage of being able to use H.265 compression? I assume the WebGL implementation would be the main quilt player currently available that might work on the Pi, or at least the easiest to adapt.