Three.js vs. pure WebGL


I haven’t received my device yet, but I’m already imagining what I’ll do once I have it in my hands, and I’m actually a bit frustrated.

I think the fact that you provided a working example with Three.js is great for most people, but it’s not ideal for advanced programmers who are used to building their own framework from scratch.

Three.js is nice, but it has a lot of limitations concerning texture management (the size of the texture itself, the way Three.js updates the texture, the way Three.js manages the UV coordinates for each Texture object, etc.).

I saw in holoplay.js that it needs Three.js to work, and I would like to know which specific values contained in THREE.Texture you need in your driver (so that I can use my own texture, without Three.js).

If you are not comfortable with pure WebGL, I can adapt holoplay.js myself and give you the code (as I said, I haven’t received my device yet, so I’d need you to test it and adjust the properties you need in the texture object).

Thanks for reading

I suggest taking a look at the code in the Three.js demo via your browser’s Dev Tools. It’s a little weird, but not too hard to figure out how it works. The LkG-specific stuff is in holoplay.js. The part that puts the right pixels in the right place on the screen is the fragment shader at the very end of the file.
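To give a feel for what that fragment shader does, here is the per-subpixel view-selection math transposed into plain JavaScript. This is only a sketch: the calibration names (`pitch`, `tilt`, `center`, `subp`) and the exact formula are illustrative assumptions, not the actual holoplay.js code.

```javascript
// Sketch of lenticular view selection, as a fragment shader would do it
// per subpixel. Calibration field names and the formula are assumptions
// for illustration, not copied from holoplay.js.
function viewIndex(u, v, subpixel, cal) {
  // Each RGB subpixel sits a fraction of a pixel apart horizontally.
  var x = u + subpixel * cal.subp;
  // The lenses are tilted, so the vertical position shifts which lens
  // column a subpixel falls under.
  var a = (x + v * cal.tilt) * cal.pitch - cal.center;
  // Wrap into [0, 1): position under one lens.
  a = a - Math.floor(a);
  // Map that position to one of the N rendered views.
  return Math.floor(a * cal.numViews);
}

// Made-up calibration numbers, just to exercise the math:
var cal = { pitch: 47.5, tilt: -0.12, center: 0.4, subp: 1 / (2560 * 3), numViews: 45 };
console.log(viewIndex(0.5, 0.5, 0, cal)); // → 22
```

The real shader does this once per red, green, and blue subpixel, so neighboring subpixels on the panel can come from entirely different views.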

It’s also super easy to look at the data that holoplay.js gets from the “looking glass driver”. It’s just a small JSON object with a few calibration numbers in it. I don’t have it on hand (LkG not connected), but it’s easy to grab the data from there once you have the display connected.




Here’s minimal sample code to pull the lenticular calibration file into a browser, once the driver is installed:

<pre id="calibration"></pre>
<script type="text/javascript">
	var ws = new WebSocket('ws://localhost:11222/');
	ws.onmessage = function(event) {
		var calibration = JSON.parse(event.data);
		document.getElementById("calibration").textContent = JSON.stringify(calibration, null, ' ');
	};
</script>
Example here.

If you’re interested in hacking on the shader, holoplay.js is a great place to start, as well as in Lenticular.shader and Quilt.cs in the HoloPlay Unity SDK.

Thank you for your message

Tell me if I understand correctly:

  1. The shader is used to extract one view from the texture, apply a post-processing effect, and scale the output to the screen size

  2. The driver is only used to send calibration data for each view;
    for each frame, it’s called N times (where N = the number of views)
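If I understand Quilt.cs correctly, the N views could all live in a single tiled texture (a “quilt”), and the shader would pick the right tile per subpixel. Here’s how I picture that tile lookup, as a sketch: the 5×9 layout is my own guess for 45 views, not a fixed spec.

```javascript
// Sketch: mapping a local uv plus a view index into a "quilt" texture
// that holds all views as a grid of tiles. The 5x9 layout (45 views)
// is an assumption for illustration.
function quiltUV(u, v, view, cols, rows) {
  var col = view % cols;             // tile column for this view
  var row = Math.floor(view / cols); // tile row for this view
  return {
    u: (col + u) / cols,             // shrink uv into one tile...
    v: (row + v) / rows              // ...and offset it to the tile's cell
  };
}

// View 22 of a 5x9 quilt: 22 % 5 = 2, floor(22 / 5) = 4
var uv = quiltUV(0.5, 0.5, 22, 5, 9);
console.log(uv.u, uv.v); // → 0.5 0.5
```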

If the whole code that outputs something on the screen comes from Three.js, and the only data shared between the LG and Three.js is the calibration data, I really don’t understand how the 45 (fullscreen) views can exist at the same time.
Even if I don’t understand how it’s possible, is that how it works?

I assumed that the LG performs its own post-processing step before displaying anything on the screen. Is that not the case?

Thanks for your help!

@Nick : I’m not sure I understand your answer…

@esk : I should receive my LG next week. Once I get it, it should be easier to run some tests 🙂