Experimentation by hand while still waiting for more documentation/samples

#1

Ok,
this is how far I have got so far, still trying to make sense of all this and 'dissecting' HoloPlay.js, which btw uses a lib 'even weirder than the other'.

While doing all this I am away from my usual country, so I don't have the Looking Glass with me to test 'the final result', but I can still see some things.

So first of all if I understood correctly, you render the 32 images ( or sub-images as you call them ) in a 2048x2048 Render Target that gets “split” into 4 x 8 images.

So we start with :

	L_RenderResolution = 2048;
	TilesX = 4;
	TilesY = 8;
	//render texture dimensions
	L_RenderSizeX = L_RenderResolution / TilesX;
	L_RenderSizeY = L_RenderResolution / TilesY;

#ifdef SHOW_TEST

	L_RenderSizeX = wp.w / TilesX;
	L_RenderSizeY = wp.h / TilesY;

#endif

	the_rt  = g_Neon_Device->CreateRenderTarget_Neon(Texture_RGBA, L_RenderResolution, L_RenderResolution, NULL);

Furthermore you create a set of viewports to match those "little sub-images", something like this:

	// we create a set of cameras and viewports for it
	int i;
	float x, y;
	i = 0;
	for ( y = 0; y < TilesY; y++) {
		for ( x = 0; x < TilesX; x++) {
			//var subcamera = new THREE.PerspectiveCamera();
			Subcamera[i] = glm::perspectiveRH(50.0/180.0*M_PI, 1.0, 0.1, 2000.0);

			//subcamera.viewport = new THREE.Vector4(x * renderSizeX, y * renderSizeY, renderSizeX, renderSizeY);

			SubcamViewPort[i].x = x * L_RenderSizeX;
			SubcamViewPort[i].y = y * L_RenderSizeY;
			SubcamViewPort[i].w = L_RenderSizeX;
			SubcamViewPort[i].h = L_RenderSizeY;

		//	cameras.push(subcamera);
			i++;
		}
	}

Finally you create a 'quad' that you'll use for the 'final full-screen rendering' with the 'lenticular shader'; this is nothing special:

	g_Neon_Device->GetVertexBuffer(&Holo_Quad, 4, COLORVERTEX_BASIC);

	AddColorVertex(&Holo_Quad, -1, 1, 0, 0xffffffff, 0, 1, 0, 1, 0, 1);
	AddColorVertex(&Holo_Quad, 1, 1, 0, 0xffffffff, 1, 1, 1, 1, 1, 1);
	AddColorVertex(&Holo_Quad, -1, -1, 0, 0xffffffff, 0, 0, 0, 0, 0, 0);
	AddColorVertex(&Holo_Quad, 1, -1, 0, 0xffffffff, 1, 0, 1, 0, 1, 0);

The idea is that you render TilesX * TilesY images, creating this "quilt" by moving the camera and adjusting the projection matrix "in some way", and here we have 'some problems', because I tried to use the "pseudo code" you supplied in the CPP "documentation".

If I understand correctly you move the "LookAt" point by offsetting it in X, and you also do something to the Proj matrix to compensate for the 'skew'?

Anyway, FIRST problem: it's not clear what SORT of View/Proj matrices you use. I use this:

proj = glm::perspectiveRH(
		fov,
		(float)L_RenderSizeX / (float)L_RenderSizeY,
		//1.0f,
		0.1f, // z near
		20.0f); // z far

Fundamentally a standard right-handed perspective FOV matrix for 'Proj', and a standard:

	 M = glm::lookAtRH(
		 glm::vec3(eyeX, eyeY, eyeZ), // Camera is at (), in World Space
		 glm::vec3(centerX, centerY, centerZ), // and looks at the origin
		 glm::vec3(upX, upY, upZ)  // Head is up (set to 0,-1,0 to look upside-down)
	 );

Which in turn I put into this bit of code, which basically should be your "capture views":

if (nviews == 32)
{
	// ok we ARE in openGL !!

	// simple case, simple screen

#ifndef SHOW_TEST
g_Neon_Device->SetRenderTarget(the_rt);
#endif

	float fov = 0.244; // 14° in radians
	float viewCone = 0.698; // 40° in radians
	float cameraSize = 0.4; // for example
	float aspectRatio = 1.6; // derived from calibration's screenW / screenH

	Vector3 focalPosition;// = (0, 0, 0); // the center of the focal plane
	focalPosition.zero();

	float cameraDistance = -cameraSize / tan(fov / 2);
	Vector3 cameraPosition = focalPosition + Vector3(0, 0, -cameraDistance);

	proj = glm::perspectiveRH(
		fov,
		(float)L_RenderSizeX / (float)L_RenderSizeY,
		//1.0f,
		0.1f, // z near
		20.0f); // z far

	g_Neon_Device->camera(cameraPosition.x, cameraPosition.y, cameraPosition.z,
		0, 0, 0,
		0, 1, 0,
		&view, true);

	// now let's do what they do

	//PRINTF("View Index %d\n", view_index);

	// start at -viewCone * 0.5 and go up to viewCone * 0.5
	float offsetAngle = ((float)view_index / (float)(nviews - 1) - 0.5) * viewCone;

	// calculate the offset
	float offset = cameraDistance * tan(offsetAngle);

	// modify the view matrix (position)
	view[0][3] += offset;

	/*
	g_Neon_Device->camera(cameraPosition.x+offset, cameraPosition.y, cameraPosition.z,
		0, 0, 0,
		0, 1, 0,
		&view, true);
	*/

	// modify the projection matrix, relative to the camera size and aspect ratio
	//proj[0, 2] += offset / (cameraSize * aspectRatio);

	proj[0][2] += offset / (cameraSize * aspectRatio);

	g_Neon_Device->setProjection(&proj);
	g_Neon_Device->setCamera(&view);

	g_Neon_Device->SetViewport(SubcamViewPort[view_index]);

	if ( 0 == view_index ) g_Neon_Device->ClearScreen(0x00318eff); // RGBA

}

After all that, with view_index going from 0 to 31, a set of 32 images is drawn that looks "reasonably correct": each image fills a place in the quilt and is L_RenderSizeX * L_RenderSizeY in size, so 512 x 256.
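
For context, the loop that drives all this is roughly the following; it's only a sketch, and the Capture_View / DrawScene / DrawQuad names are placeholders for my own engine calls (the capture code above is what Capture_View would contain), not anything from the SDK:

	// sketch of the driving loop: render each view into its tile of the quilt,
	// then composite the quilt onto the real screen with the lenticular shader
	for (int view_index = 0; view_index < TilesX * TilesY; view_index++)
	{
		Capture_View(view_index, TilesX * TilesY);   // the code shown above (sets RT, viewport, matrices)
		DrawScene();                                 // renders the scene into SubcamViewPort[view_index]
	}

	g_Neon_Device->SetRenderTarget(NULL);            // back to the default framebuffer (however the engine does it)
	// bind the lenticular shader + the quilt texture of 'the_rt', then draw the full-screen quad
	DrawQuad(&Holo_Quad);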

At this point, "Problem number 2": I re-created the "lenticular" shader like this:

#version 330 core

// Input vertex data, different for all executions of this shader.

layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 tex0;

// Output data ; will be interpolated for each fragment.
out VertexData {
vec4 Color;
vec2 TexCoord0;
vec2 TexCoord1;
};

// Values that stay constant for the whole mesh.
uniform mat4 worldview;
uniform mat4 proj;

void main(){

// Output position of the vertex : MV * position
vec4 modelViewPosition = worldview *vec4(position,1);

// Output in clip space
gl_Position = proj * modelViewPosition;

// Color is just passed by
Color = color;

// UV of the vertex. Just pass through, that's what they call iUv or uv
TexCoord0 = tex0;
TexCoord1 = tex0;

}

And …

// Giles’ fragment shader for Looking Glass

#version 330 core

// Interpolated values from the vertex shaders
in VertexData {
vec4 Color;
vec2 TexCoord0;
vec2 TexCoord1;
} inData;

// Output data
out vec4 color;

// Values that stay constant for the whole mesh.
uniform sampler2D quiltTexture;

uniform float pitch;
uniform float tilt;
uniform float center;
uniform float invView;
uniform float flipX;
uniform float flipY;
uniform float subp;
uniform float tilesX;
uniform float tilesY;

// “varying vec2 iUv;”+

vec2 texArr(vec3 uvz) {

float z = floor(uvz.z * tilesX * tilesY);
float x = (mod(z, tilesX) + uvz.x) / tilesX;
float y = (floor(z / tilesX) + uvz.y) / tilesY;

return vec2(x, y);
}

float Remap(float value, float from1, float to1, float from2, float to2) {
return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}

void main() {
vec4 rgb[3];
//vec3 nuv = vec3(iUv.xy, 0.0);
vec3 nuv = vec3(inData.TexCoord0, 0.0);

        //Flip UVs if necessary
        //nuv.x = (1.0 - flipX) * nuv.x + flipX * (1.0 - nuv.x);
        //nuv.y = (1.0 - flipY) * nuv.y + flipY * (1.0 - nuv.y);

        for (int i = 0; i < 3; i++) {

           nuv.z = (inData.TexCoord0.x + float(i) * subp + inData.TexCoord0.y * tilt) * pitch - center;

		   //nuv.z = (inData.TexCoord0.x + float(i) * subp + inData.TexCoord0.y * tilt);
		   //nuv.z = (inData.TexCoord0.x + float(i)*0.0013+ inData.TexCoord0.y * 0.306);

           nuv.z = mod(nuv.z + ceil(abs(nuv.z)), 1.0);
           nuv.z = (1.0 - invView) * nuv.z + invView * (1.0 - nuv.z);

           rgb[i] = texture(quiltTexture, texArr(vec3(inData.TexCoord0.x, inData.TexCoord0.y, nuv.z))); // 'texture', not 'texture2D', in 330 core

		   //rgb[i] = texture2D(quiltTexture, nuv.xy);
        }

        //"gl_FragColor = vec4(rgb[0].r, rgb[1].g, rgb[2].b, 1);"+
		
		color = vec4(rgb[0].r, rgb[1].g, rgb[2].b, 1);

		//color = texture2D(quiltTexture,vec2(inData.TexCoord0.x,inData.TexCoord0.y) );
	}

The various commented-out bits were me testing things in pieces; also, that 'Remap' function, as you can see, seems to never be used.

Note that your shader seems to have 'a number of uniforms that are NEVER used', which I took out; then I re-created your setShaderValues() function in this way:

void LS_Display_Compositor::SetLookingGlassShaderValues(float dpi, float pitch, float slope, float screenH, float screenW, float center, int flipX, int flipY, int invView)
{
float screenInches = (float) g_Neon_Device->GetDisplayWidth() / dpi;

screenInches = 2560.0 / dpi; // hardcoded override while testing (2560 = LG display width in pixels)

float newPitch = pitch * screenInches;

//account for tilt in measuring pitch horizontally
newPitch *= cos(atan(1.0 / slope));
LookingGlassUniforms[LOOKING_PITCH] = newPitch;

//PRINTF("PITCH %f\n",)

float newTilt = (float) g_Neon_Device->GetDisplayWidth() / ((float) g_Neon_Device->GetDisplayHeight() * slope);

newTilt = 2560.0 / (1600.0) * slope; // hardcoded override while testing (2560x1600 = LG display size)

if (flipX == 1)
	newTilt *= -1;
LookingGlassUniforms[LOOKING_TILT] = newTilt;

//center
//I need the relationship between the amount of pixels I have moved over to the amount of lenticulars I have jumped
//ie how many pixels are there to a lenticular?
LookingGlassUniforms[LOOKING_CENTER] = center;

// should we invert ?
LookingGlassUniforms[LOOKING_INVVIEW] = invView;

//Should we flip it for peppers?
LookingGlassUniforms[LOOKING_FLIPX] = (float)flipX;
LookingGlassUniforms[LOOKING_FLIPY] = (float)flipY;

LookingGlassUniforms[LOOKING_SUBP] = 1.0 / (screenW * 3.0);

//tiles
LookingGlassUniforms[LOOKING_TILESX] = TilesX;
LookingGlassUniforms[LOOKING_TILESY] = TilesY;

g_Neon_Device->looking_glass_params = &LookingGlassUniforms[0];

}
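
For completeness, the actual upload of those values into the GL program is nothing special; roughly this is what the engine ends up doing (a sketch: 'lenticular_program' is just my handle for the program built from the shaders above, only the uniform names are the ones that matter):

	GLuint prog = lenticular_program; // my own program handle, shown here only for illustration
	glUseProgram(prog);
	glUniform1f(glGetUniformLocation(prog, "pitch"),   LookingGlassUniforms[LOOKING_PITCH]);
	glUniform1f(glGetUniformLocation(prog, "tilt"),    LookingGlassUniforms[LOOKING_TILT]);
	glUniform1f(glGetUniformLocation(prog, "center"),  LookingGlassUniforms[LOOKING_CENTER]);
	glUniform1f(glGetUniformLocation(prog, "invView"), LookingGlassUniforms[LOOKING_INVVIEW]);
	glUniform1f(glGetUniformLocation(prog, "flipX"),   LookingGlassUniforms[LOOKING_FLIPX]);
	glUniform1f(glGetUniformLocation(prog, "flipY"),   LookingGlassUniforms[LOOKING_FLIPY]);
	glUniform1f(glGetUniformLocation(prog, "subp"),    LookingGlassUniforms[LOOKING_SUBP]);
	glUniform1f(glGetUniformLocation(prog, "tilesX"),  LookingGlassUniforms[LOOKING_TILESX]);
	glUniform1f(glGetUniformLocation(prog, "tilesY"),  LookingGlassUniforms[LOOKING_TILESY]);
	glUniform1i(glGetUniformLocation(prog, "quiltTexture"), 0); // the quilt is bound on texture unit 0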

Now here's a PROBLEM: I could not really figure out, by looking at the .js code, what these variables refer to:

var newTilt = window.innerHeight / (window.innerWidth * slope);

Window "what"? The display itself, so 2560x1600? The RT where the quilt is, so 2048x2048? Something else?

If you render the quilt first into a 2048x2048 target that is "immutable", and the Looking Glass is 2560x1600, which is "immutable" as well, why do you need that window, and why do you even care if it gets resized?

Also, the vertex shader part calls for a 'proj' and a 'worldview'; fundamentally, if I understood correctly, you want to 'blast it over a full-screen quad', so I use an 'identity' matrix for both, i.e. a 1:1 pixel mapping. I am not sure what the point is of using that Three.OrthographicCamera (which, BTW, what matrix is that?).
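
Concretely, for that final pass I just do something like this before drawing Holo_Quad (a sketch; since the quad vertices are already in the -1..1 range, identity matrices leave them exactly covering the screen in clip space - whether that is equivalent to their OrthographicCamera I can't say):

	glm::mat4 identity(1.0f);  // identity for both matrices = 1:1 mapping of the -1..1 quad to the screen
	proj = identity;
	view = identity;
	g_Neon_Device->setProjection(&proj);
	g_Neon_Device->setCamera(&view);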

So all in all I get 'some images out' that I am still not 100% sure are correct, and ATM I cannot test with the device (because I don't have it here with me), but they look like they 'could be something' (I will see if I can upload them here later).

Ah yes, I call that function like this:

SetLookingGlassShaderValues(338.0, 49.825218200683597, 5.2160325050354, 1600.0, 2560.0, -0.23396748304367066, 0, 0, 0);

So if you could clarify a little, maybe I can get "somewhere". I still insist all this stuff needs MUCH better documentation.

Cheers.

#2

Here are two images. This is what I get with my version of 'capture images':

And this is what comes out after applying the shader, but it still does not look 'that good' to me; I think something is a bit amiss …

So you see, there's still something I don't quite get.

Cheers.

#3

Hi Giles, there are indeed a number of uniforms that are not used in the HoloPlay.js file. These are intended for a future release that will allow non-fullscreen content to render correctly. This is similar to what the window.innerHeight refers to: that is the number of pixels tall your rendered content will be. For fullscreen applications, this is just the height of the display!

I can't tell just from looking at the image whether it's working correctly or not; I think it's pretty near impossible to tell without plugging a Looking Glass in. Your quilt looks good, so it may be ready!

#4

I can tell you it's WRONG … I generated a 2560x1600 image from that and tried to put it on the LG: it does not look 3D.

And at this point I am really starting to go a bit nuts, because I cannot figure out what's wrong with what I am doing, given that I believe I have a quilt that "should be good enough", but the final shader seems "not to do what it should".

By using your three.js driver I managed to read the configuration string, so I have the params of my own LG, which are a bit different from the "default" ones, even if not by much.

But what I remember seeing is that the images that go to the LG look "a lot more fuzzy", like really "N images overlapped"; as you can see, the output of my shader (copied from the .js) does not look like that at all.

It's like "it's getting only a small piece of the quilt", not the full thing.

Another thing "you don't say" … how are the texture coords supposed to be laid out on the quilt? I assumed they are "standard (0,0) (0,1) (1,0) (1,1)" …

I think the quilt is not "bad"; it's the final output that is wrong.

I am going really quite nuts here; I still can't figure out "what is it that they do that I don't?".

Cheers.

#5

The 2560x1600 image needs to be rendered using precise calibration constants that are unique to your specific display. If those values are not correct, the final image will not be correct and won't look right at all on the Looking Glass. The full-screen quad and the shader render a 2560x1600 image using a full-screen pixel shader that calculates the view angle for each RGB sub-pixel and looks it up in the source quilt texture (2048x2048 in your case, but it can be something else). It's effectively converting a "device-independent multiview image" into a "device-specific bitmap" that will only look correct on the exact device it was calculated for. These things are done for you in the various SDK(s) so you can just produce image quilts.

#6

Again, "this is not new info"; I get that, and I am using the specific calibration info (the JSON string) taken from the device itself, using the appropriate driver and query, together with a re-made version of the setShaderValues() found in the holoplay.js script.

The shader appears to be used only once, in a single pass after the quilt is formed, and I believe I am passing the params correctly.

One thing the documentation is still unclear about is what type of Projection and Camera matrices they use; there can be a few different ones. I THINK they are using a PerspectiveFovRH matrix but I cannot be sure, because no docs say what is being used; likewise I suppose they use a LookAtRH camera matrix, while moving the center? The look-at point? Both? The position in the X direction alone?

If you look at the final output of my thing after applying the shader to the quilt I showed before, "it's just a few stripes"; that can't be right, it should be a sort of "all fuzzy" image.

At this point maybe I can dump my 2048x2048 quilt texture into a 'quilt viewer', and/or maybe try to get a quilt made by something else and see if I can make it work with my shader.
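
Dumping the quilt should be trivial; something like this is what I have in mind (a sketch, assuming the render target texture is a plain GL_RGBA8 texture and that I can get its GL id out of my engine; stb_image_write is used just for convenience):

	#include <vector>
	#define STB_IMAGE_WRITE_IMPLEMENTATION
	#include "stb_image_write.h"

	// read the 2048x2048 quilt back from the render target texture and save it as a PNG
	// so it can be loaded into a quilt viewer or compared against a known-good quilt
	void Dump_Quilt_To_PNG(GLuint quilt_texture_id, const char *path)
	{
		const int w = 2048, h = 2048;
		std::vector<unsigned char> pixels(w * h * 4);

		glBindTexture(GL_TEXTURE_2D, quilt_texture_id);
		glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

		// GL textures have their origin bottom-left, PNGs top-left, so flip on write
		stbi_flip_vertically_on_write(1);
		stbi_write_png(path, w, h, 4, pixels.data(), w * 4);
	}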

I don't know if I am generating a wrong quilt, or if I am still using the shader I ripped out of holoplay.js the wrong way, or if I have to find a way to run/debug holoplay.js somehow and dump everything I can from it, to see "what the heck of matrices are being used" / "where the difference is".

I am still waiting for a code sample/better documentation explaining how all this is done.

In all this stuff I have yet to see one "proper OpenGL code sample" generating the quilt/final output; the one using the DLL, for some reason, I can't make work - I will try again - and I can't see/understand any error code or anything, 'because there is none': it just works or it doesn't, and in my case it doesn't.

I have yet to see ONE document saying things like: "this is the projection matrix and view matrix you need to use", "this is what you need to offset" (besides, that [2,0] or [3,0] all depends on what matrices you use and whether they are stored row-major or column-major, so even put like that it doesn't say much), "this is the shader and these are the params".

It's only "3 pieces of info", yet if you don't know them, it appears you can't make it work properly.

Besides, I also tried to study the three.js lib documentation, and so far I found things like "yes, this creates a camera matrix" … but what camera matrix? There can be quite a few. Then I found out their "camera" class seems to combine projection + view + other stuff into "a camera", but I could not find docs saying precisely "what type of camera matrix they are using".

I have to say, "this is a documentation nightmare".

#7

I agree, your “after shader” does not look right. If you screenshot the images from the working examples you’ll see the diagonals tilt to the right, not left, and the places where views differ are “fuzzier” as you say.

#8

It seems I finally got it working; a matter of matrices, it appears.

It seems it works fine both with 45 images in a 4096 quilt and with 32 images in a 2048 one; here are 2 images that work, but probably not on your displays because of the calibration params.

First I "reconstructed" a sort of "quilt viewer" using my own software; once I got that working, I knew "I should have the correct params", and then basically I worked the rest out.

In the end it was "the type of view/proj matrix I was using was probably not the right one"; all the rest was mostly ok.
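
For anyone who hits the same wall: GLM stores matrices column-major and indexes them as m[column][row], so the view[0][3] / proj[0][2] tweaks in my earlier post were touching the wrong elements. What I believe the offsets should look like with GLM's conventions is roughly this (a sketch; treat the exact signs as something to verify on your own device):

	// GLM is column-major, m[column][row]: the X translation of a mat4 lives in [3][0],
	// and the off-axis "skew" term of a perspective projection lives in [2][0]
	float offsetAngle = ((float)view_index / (float)(nviews - 1) - 0.5f) * viewCone;
	float offset = cameraDistance * tan(offsetAngle);

	view[3][0] += offset;                               // slide the camera sideways along X
	proj[2][0] += offset / (cameraSize * aspectRatio);  // shear the frustum back towards the focal plane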

As soon as I'm back in the UK in a couple of days I should be able to see things with my own eyes; at the moment I am sending those pics to a friend of mine who is setting them as the "desktop background" to see them inside the LG.

#9

Congratulations! Those look like what is needed (and provided the calibration is correct, should work on your display).

#10

It works.

#11

Ta da! (although it’s unlikely to work on random Looking Glass displays since it was rendered using calibration constants of a specific display, which is why device-independent “quilts” are used for sharing multiview images and videos).

#12

I am going to do more in the next few days; I'm finally back in Wales with the LG on my desk. I'll try to add a bit of winsock code to read out the params and keep experimenting with things.

#13

Hi,
fun fact … it took me two hours … a lot of digging around … a network monitor … RFC 6455 … a few tears and approximately 200 lines of CPP code - which I still have to finish testing/checking properly - to do what a single line of JS does with that "websocket" lib. But now I have also learnt something about "WebSockets", which I had never used before because they're not my area of work.

Provided that driver is installed I can read the config params of any display.
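
For the curious, the core of those ~200 lines is just a plain winsock TCP connect plus the RFC 6455 upgrade handshake, then reading and un-masking the frames that carry the JSON. The handshake part boils down to something like this (a sketch; "localhost", the port and the "/driver" path are placeholders here, not necessarily what the driver actually listens on, and the Sec-WebSocket-Key should really be 16 fresh random bytes in base64):

	#include <winsock2.h>
	#include <ws2tcpip.h>
	#include <string.h>
	#include <stdio.h>
	#pragma comment(lib, "ws2_32.lib")

	// open a TCP socket to the driver and perform the RFC 6455 opening handshake;
	// frame parsing/unmasking (the rest of the ~200 lines) is not shown here
	SOCKET Open_Driver_Socket(const char *host, const char *port)
	{
		WSADATA wsa;
		WSAStartup(MAKEWORD(2, 2), &wsa);

		addrinfo hints = {}, *res = NULL;
		hints.ai_family = AF_INET;
		hints.ai_socktype = SOCK_STREAM;
		getaddrinfo(host, port, &hints, &res);

		SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
		connect(s, res->ai_addr, (int)res->ai_addrlen);
		freeaddrinfo(res);

		const char *handshake =
			"GET /driver HTTP/1.1\r\n"
			"Host: localhost\r\n"
			"Upgrade: websocket\r\n"
			"Connection: Upgrade\r\n"
			"Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
			"Sec-WebSocket-Version: 13\r\n\r\n";
		send(s, handshake, (int)strlen(handshake), 0);

		// expect "HTTP/1.1 101 Switching Protocols"; after that, WebSocket frames follow
		char reply[1024] = {};
		recv(s, reply, sizeof(reply) - 1, 0);
		printf("%s\n", reply);
		return s;
	}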

This LG display is turning out to be quite an experience :smiley: