Pixel Layout

#1

Is there any documentation about how the 2560x1600 pixels are laid out in the device? I’m just wondering how I would go about arranging my own full screen of pixels to show properly in the volumetric display. I was looking at the shader in the ThreeJS library, which I suppose I could deduce it from, but I’m wondering if there is a more generalized primary source.

Thanks,
Tony

#2

Hi Tony! We’re actually going to be releasing an OpenGL pipeline with documentation next week as a closed beta. Would you be interested in receiving that? Shoot me an email at alex.duncan@lookingglassfactory.com if that’s the case!

#3

I figured out a few things about the optics, and wrote a stand-alone (non-SDK) program which displayed a 3D image, which I posted about here: Vimeo capture and testing before submitting?

There are also some user (@lonetech) reverse-engineered ideas and shader source here: https://github.com/lonetech/LookingGlass

Using a lenticular lens, the RGB subpixels across approximately 7 pixels create a handful of views, and each row is shifted slightly to use different subpixel colors and also increase the number of views. It’s super clever, and the acrylic block then widens the field of view. Beyond clever! Each display is slightly different, and the factory calibration data is stored in EEPROM and can be read via USB commands.
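To give a feel for the structure of that mapping, here’s a minimal sketch (not the factory shader; the pitch, tilt, center and view-count values below are made up, the real ones come from the per-device calibration):

```python
import math

# Made-up constants for illustration only; the real values are per-device
# and come from the calibration data stored in EEPROM.
PITCH = 49.8      # lenticules across the panel (placeholder)
TILT = -0.11      # per-row slant of the lenticular (placeholder)
CENTER = 0.0      # phase offset of the lenticular (placeholder)
NUM_VIEWS = 45

def view_for_subpixel(x, y, channel, width=2560, height=1600):
    """Which view a given subpixel shows (channel 0/1/2 = R/G/B).

    The idea: the subpixel's horizontal position, plus a small per-row
    shift from the slanted lenticular, selects a view.
    """
    u = (x + channel / 3.0) / width   # each color channel is a third of a pixel step
    v = y / height
    phase = (u + v * TILT) * PITCH + CENTER
    return int(math.floor((phase % 1.0) * NUM_VIEWS))
```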

You’ll be best off using a blessed SDK like the one @alxdncn is talking about.

#4

The info and scripts from @dithermaster and @lonetech are a great way to explore the way the LG works. I wanted to get even more basic by using a quilt image that shows the view number (low view number = view from the left, high view number = view from the right).

Btw, the script that generated the image can be found here: gen_numbers_quilt.py. It has some parameters for other quilt layouts (e.g. 4x8, lower resolution, etc.).
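For reference, this is the tile ordering I’m assuming in the quilt (view 0 in the bottom-left tile, increasing left-to-right and then bottom-to-top; just a sketch, so adjust it if your layout differs):

```python
def tile_origin(view, cols=5, rows=9, tile_w=819, tile_h=455):
    """Top-left pixel of a view's tile in the quilt image.

    Assumes view 0 sits in the bottom-left tile and views increase
    left-to-right, then bottom-to-top. Flip the row term if your quilt
    is laid out top-to-bottom instead.
    """
    col = view % cols
    row = view // cols
    return col * tile_w, (rows - 1 - row) * tile_h

# The last view of a 45-view 5x9 quilt ends up in the top-right tile:
print(tile_origin(44))   # -> (3276, 0)
```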

I could not find a way to add my own lightfield pic to the Library app, so I turned the image into a one-frame mp4 movie and used the mpv shader by @lonetech to play it on the LG (small model). I assume that reproduces the “official” method for displaying content. Oh, and trying to analyze the patterns really screws with your head :slight_smile:

I’m wondering about the “inversion” that seems to happen when the viewing distance becomes more than 55 cm (give or take). When viewing closer than that (with only one eye open and holding a static view position) the view numbers generally decrease from left-to-right over the display, which seems to make sense, as you’d expect to see views more from the left on the right side of your viewpoint. But when farther away, the view numbers increase from left-to-right. I guess it isn’t disturbing because your eyes in that situation still get the correct relative view positions (e.g. left eye: 21-26, right: 24-29, and therefore left-most, respectively right-most).

#5

> trying to analyze the patterns really screws with your head

Yes, I can agree with that! Also, it seems to amuse other people who are watching you do it.

When I self-calibrated mine (to find the constants for my pseudo-shader) I found something similar. Ultimately what I decided on was finding the calibration constants for a normal viewing position such that when you’re lined up with the left edge of the block you see the “center” view at the left, and when you’re lined up with the right edge you see the same center view at the right. As you move your one-eye-open head from one side to the other that view follows you. But if you get really close or really far away things seem to change, in ways I don’t understand and certainly can’t explain. I’m curious what the factory calibration procedure is, and whether it is done by human or machine.

#6

I’ve been experimenting with the different tools and scripts out there, and making some new scripts to simplify the math involved, with the goal of better understanding the mapping from device pixel to/from view “pixel”. One step is to get rid of the quilt concept and instead directly read the set of per-view images. I simplified the math by assuming a one-dimensional quilt, i.e. all tiles side-by-side. This already gives much cleaner formulas. From that, the view computed for each subpixel maps directly to a view image, so no quilt is needed anymore. I also added a script to directly generate a “native” image that can be displayed on a LG with a regular image viewer. This of course becomes a device-specific image which will not display correctly on other LGs, which inadvertently is a weird new form of DRM :slight_smile:
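To illustrate the idea (this is not frames2native.py itself, just a minimal, slow sketch with made-up mapping constants), assembling a native image straight from a list of view images looks roughly like this:

```python
import numpy as np
from PIL import Image

# Made-up mapping constants for illustration; the real values come from the
# device calibration JSON.
PITCH, TILT, CENTER, NUM_VIEWS = 49.8, -0.11, 0.0, 45
WIDTH, HEIGHT = 2560, 1600

def assemble_native(view_files):
    """Build a device-native image by sampling every subpixel directly from
    the view image its computed view index points at -- no quilt involved."""
    views = [np.asarray(Image.open(f).convert("RGB")) for f in view_files]
    vh, vw, _ = views[0].shape
    native = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            for c in range(3):                      # R, G, B subpixels
                u = (x + c / 3.0) / WIDTH
                v = y / HEIGHT
                phase = (u + v * TILT) * PITCH + CENTER
                view = int((phase % 1.0) * NUM_VIEWS)
                sx = min(int(u * vw), vw - 1)       # nearest-neighbor sample
                sy = min(int(v * vh), vh - 1)
                native[y, x, c] = views[view][sy, sx, c]
    return Image.fromarray(native)

# Usage (hypothetical filenames):
#   assemble_native([f"view_{i:02d}.png" for i in range(45)]).save("native.png")
```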

In case you’d like to experiment with this see the script frames2native.py in https://github.com/surfsara-visualization/looking-glass. You need the device calibration JSON file to run it. I didn’t manage to extract that using lonetech’s tool on Linux, but hacked the Unity SDK to save it from a Windows system.

The visual results are currently not exactly the same as showing a quilt of the same set of view images using e.g. lonetech’s mpv shader. There’s some slight offset in views, but I haven’t figured out why yet. I do like the simpler math in my script compared to what Lenticular.shader does (lonetech’s quiltshader of course also was a big simplification), even though they don’t do exactly the same thing. However, the resulting 3D effect is quite similar.

#7

Oh yeah, I’ve only tested with view images of 819x455 and 45 views (which is what a 4096x4096 quilt uses). So this stuff might fail for other dimensions and numbers of views.

I’ve noticed the pixel indexing in e.g. lonetech’s mpv shader sometimes goes out of bounds with respect to the view images. I.e. for a 5x9 quilt of 819x455-pixel tiles this gives a 4095x4095 quilt, while it is stored with 1 pixel more horizontally and vertically. Sometimes the shader (or actually the formulas I copied into my Python script) seems to use coordinates like x=4095.18, which only works due to the extra row/column of pixels. Not sure whether this is by design or merely a happy accident.
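A quick sanity check of the numbers involved:

```python
cols, rows = 5, 9
tile_w, tile_h = 819, 455
stored_w, stored_h = 4096, 4096             # size the quilt file actually has

print(cols * tile_w, rows * tile_h)         # 4095 4095: one pixel short each way
x = 4095.18                                 # the kind of coordinate the shader produces
print(int(x), int(x) < stored_w)            # 4095 True: only valid thanks to the extra column
```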

#8

Hi all,

Lonetech made a few wrong assumptions about how calibration values are interpreted; I’m going to submit a pull request that corrects them. I don’t really have capacity to support this project in any official role, but I don’t want everyone to start writing software based off code that almost works but not quite :slight_smile:

If you’re looking for a simple way to extract calibration data without worrying about Python HID bindings and you’ve installed the beta WebGL driver, you can do so by polling the local websocket host at port 11222, like so. There will be an official tool to do this shortly. If you’re looking to get the lonetech script working, I recommend pip install hidapi-cffi.
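Something along these lines should work (from memory, so double-check the exact handshake; this sketch assumes the daemon simply sends the calibration string once a client connects):

```python
# Requires: pip install websocket-client
from websocket import create_connection

# Assumption: the local driver serves the calibration over a websocket on
# port 11222 and sends it as a JSON string once you connect. If your version
# expects an explicit request first, adjust accordingly.
ws = create_connection("ws://localhost:11222/")
calibration = ws.recv()
ws.close()
print(calibration)
```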

Factory calibration is done by humans :slight_smile:

#9

Hi all, some cool experiments going on here! One of the reasons for the square arrangement, @paulmelis, is simply to optimize the pipeline. Having a square texture allows us to work in powers of two, which lets us squeeze more from the GPU. However, for the kind of prerendered content you’re talking about (these sets of images) your approach can work just fine! I’m a bit curious what you mean by “directly read the set of per-view images” but I’m glad it’s giving you good results!

As a side note, we have a Windows-only quilt viewing tool being released tomorrow (after a couple delays) as a closed beta. It’s much heavier than lonetech’s tool (which is amazing) but it has a bit more functionality as well. We’ll soon be releasing another, lighter-weight tool of our own for this purpose as well. If anyone would like to get a beta release of the quilt viewing tool, please sign up on our form here!

#10

Regarding the calibration data - Rustaceans might find https://crates.io/crates/pluton useful. I’d appreciate feedback on it - I only have one machine + Looking Glass to tinker with, and this is my first published Rust library, so there are likely issues to sort out.

#11

Yes, I understand that. I wasn’t questioning why the quilt exists or why the tiles are laid out in a way that makes the quilt square. I merely meant that for understanding the pixel mapping (or for producing a native image), producing a quilt image as an intermediate step isn’t necessary.

Maybe the above answer already clarified it, but I don’t put together a quilt image, I just let the tool read the individual images (each containing one view) directly.

#12

I just saw your pull request and the reply from lonetech on the subpixel values. Do you know of a reference (scientific paper or similar) that describes the math involved? I’m interested in the derivation of the formulas. Related to this: is the exact meaning of the calibration values described somewhere?

#13

Nice! I might be missing something, but I don’t see an actual application in the repo that uses the library? I’m not familiar with Rust, so I’m really in the dark on how to use your code. I’ve gotten as far as cloning the repo, calling cargo build and putting the example piece of code from the readme in a separate file t.rs, but I can’t get it to compile as it can’t find the library. What am I doing wrong? :slight_smile:

#14

Some early-morning polar vortex pre-coffee thoughts - you don’t want your per-view images too large, because they will be sub-sampled and cause aliasing. Also, too big is wasteful. Too small and they will be blurry. When I have some time I’m going to work out the optimal size based on fetched pixel counts. In terms of number of views, 45 to 50 seems to be a sweet spot. Above that it doesn’t look much different; below that you can see the view stepping. Lastly, about the calibration constants: I only peeked at the factory shader (in the SDK) after I wrote my own, but I used three constants: X multiplier, Y multiplier, and X offset. I used multiplies instead of divides in the inner loop for performance (multiplies are faster). I did not check if these are the same three parameters that drive the factory shader, but they were the ones that made sense to me. To do better you’d need second-order formulas (to account for bend introduced in the lenticular during manufacturing).
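For what it’s worth, the gist of my inner loop looks something like this (the constant values below are placeholders, not my actual calibration, and not the factory shader’s parameters):

```python
NUM_VIEWS = 45
x_mul, y_mul, x_off = 0.0193, -0.0042, 0.31   # placeholder values from self-calibration

def view_index(sub_x, y):
    """Pick a view for subpixel column sub_x (3 per pixel) and row y.
    Multiplies only in the hot path -- no divides."""
    phase = sub_x * x_mul + y * y_mul + x_off
    return int((phase % 1.0) * NUM_VIEWS)
```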

#15

Hi Paul, thanks for looking! It’s just a library, so it’s intended to be used as a building block for a bigger application. Running cargo test -- --nocapture in a clone of the repo should print some info about any connected Looking Glasses.

There’s a code snippet in the GitHub repo’s README.md; I’ll try to figure out how to get that onto crates.io.

#16

Ah, that works! Based on your code I finally managed to get a working Python version for extracting the calibration (the other scripts I found for this all seemed to have issues, maybe something to do with the HID library used and/or read timeouts).

#17

Great! HID, or at least “abuse of HID to do other things”, can be a pain…

#18

Regarding calibration loading:

Eventually we’ll work out the licensing to release the native calibration loader blob as a standalone tool, and dumping the calibration string is something you can do with the beta OpenGL shared library.

However, I think the easiest way to build non-SDK programs that load calibration data at this moment, is probably to piggyback off the holoplay.js calibration daemon. Opening any holoplay.js project should prompt the user to install the calibration daemon, and once it’s installed, you can grab the calibration string via websocket. Here’s a barebones example.

This doesn’t work on Linux at the moment, unfortunately.