Is it possible to propose a video instead of a demo for "the library"?

#1

Hello!

Yesterday, I tried to make a screen-capture video of the result given by the holographic shader.
It works very well, even at low quality.

You can read the topic here (the second half of the comments).

Most of the demos on “the library” are actually just movies; there is no interactivity at all, so in my opinion it makes sense to publish a pre-processed 2560x1600 video directly in the library. The file may be a bit larger, which is not dramatic (85 MB for 10 seconds in high quality), but the rendering process would be infinitely lighter: instead of computing 45 views and post-processing them in a shader, you just play a 2K video in fullscreen, and even a 5-year-old laptop can do that at 60 fps.

The only problem I see with videos is that you need to produce a video for each screen, but since there are only two sizes of LG, it’s not a big problem.

Could someone send me the calibration data that comes with a large LG? I’ll try to make a test video for it,
and I would like to know if it works.

If it works, would it be possible to publish it as a demo in the library?

#2

It’s not about large vs. small screen size; it is literally different calibration data for every single screen, regardless of size. That’s what @alxdncn was trying to tell you. It’s because of how the screen is built, and it’s why the quilt format makes more sense for delivery. So there is no way to post a pre-processed video and have it play on all screens. In other news, LG and users are working on a ~$100 fanless player that attaches to the back.

#3

Which parts of the calibration are specific to the screen?
In my TypeScript code, I only use the values “dpi”, “pitch” and “slope” from the calibration data; I compute everything else dynamically.
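
To illustrate what I mean by “dynamically”, here is a minimal sketch of the kind of derivation I use (written from memory of the HoloPlay-style formulas; the function and variable names are mine, so treat it as an approximation):

// Minimal sketch: deriving the shader constants from dpi, pitch and slope.
// Formulas follow the HoloPlay-style derivation, reproduced from memory.
interface Calibration { dpi: number; pitch: number; slope: number; }

function derive(c: Calibration, screenW = 2560, screenH = 1600) {
  const screenInches = screenW / c.dpi;
  // lenticular line density across the panel, corrected for the lens slope
  const pitch = c.pitch * screenInches * Math.cos(Math.atan(1.0 / c.slope));
  // vertical shift of the lenticular pattern per horizontal pixel
  const tilt = screenH / (screenW * c.slope);
  // width of one subpixel in normalized screen coordinates
  const subp = 1.0 / (3.0 * screenW);
  return { pitch, tilt, subp };
}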

EDIT: could some of you send me the calibration data of your screen? I will try to use it with mine and see. Sorry, but I need to see it for myself to really believe what you say (even if you built it; I’m like that, sorry).

EDIT2: and if it doesn’t work, I want to be able to try to figure out a solution, even if it’s impossible :smiley:

#4

I work for a company that makes an online video-editing application, with all media stored in the cloud.

We have hundreds of effects and transitions; some of them are in 3D, and I was considering adding an export option using the holographic shader… But from what you said, I can’t do it, because the final video output is processed by a remote server and there is no easy way to keep the calibration data of each user (it’s a very particular case in the context of our app).

Do you think there is no solution to this problem?

#5

I’d need to understand a bit better how the effects work, but most of them should be applicable to the quilt, no? Is it just the 4K resolution that’s an issue, or is it that you can’t apply the shader after the effects you’re adding? In any case, here is the calibration data for my display, in case it helps!

{
  "configVersion": "1.0",
  "serial": "00000",
  "pitch": { "value": 49.825218200683594 },
  "slope": { "value": 5.2160325050354 },
  "center": { "value": -0.23396748304367065 },
  "viewCone": { "value": 40 },
  "invView": { "value": 1 },
  "verticalAngle": { "value": 0 },
  "DPI": { "value": 338 },
  "screenW": { "value": 2560 },
  "screenH": { "value": 1600 },
  "flipImageX": { "value": 0 },
  "flipImageY": { "value": 0 },
  "flipSubp": { "value": 0 }
}

#6

Just so you know, as far as I can tell this is the same as the default calibration in the HoloPlay.js library.

#7

Also, final thought: using our quilt tools to play videos and show images in 3D is MUCH faster than the interactive apps, you’re right. We’re just behind on delivering stable versions of that software to our community, but we’re certainly actively working on it! In the meantime, we have a beta quilt viewer you can get by signing up for the closed beta here.

#8

Hello!
Thank you for your message :slight_smile:

It’s not a problem for us to create a quilt video, but it’s a bit annoying to need a specific video player.

When I say it’s annoying, I’m not speaking for myself; it’s fine for me to use one, but it means more work for our customer-service team, explaining to people on the phone “no, it won’t work with your video player, you need to use that specific video player”. Most people don’t read the explanations on our website, and that’s annoying :slight_smile:

Another thing: even if it’s not a big problem for us to create the quilt video, it will take much more space on our server and, as I said in another post, it has a bandwidth cost; it also takes much more time to process an 8192x8192 video with ffmpeg than a 2560x1600 video. Creating one is not a problem, but we create something like 3,000-4,000 movies every day (some of them very long, over 2 hours). If we need to multiply our work by 45 for the same price, it’s not a good idea to add that feature.

The quality is really better at 8192x8192, and even better at 16384x16384 (I’m using an old GTX 750 Ti at home; it’s slow, but the result is great!).

The goal of our tool is to deliver the best quality possible and to absorb every performance issue ourselves, in order to guarantee that the movie is seen in the best conditions, but it’s impossible to keep that promise if the rendering depends on the user’s hardware.

If we could pre-process the video, it would be better on every level (less space on the server; less time to create it, since our GPUs are very fast; the ability to use the maximum quality possible; usable in every player; easier to share…). I really think it’s a much more interesting solution than the quilt video.

“In any case, here is the calibration data for my display”

Thank you for sharing it !
It’s very close to the values I have, but different; and in another topic someone else shared their calibration data, I tried it and it obviously didn’t work…

Can I ask why every screen doesn’t have the same calibration data? It’s very unexpected.

#9

Hi Tom,

Each calibration is different because the lenticular layers are very delicate; if they were all the same, it wouldn’t be called “calibration” anymore.

If I’m understanding correctly, what will fulfill your need is to make quilt videos, not pre-rendered videos with the sub-pixels already scattered. Quilts look like this, and with the calibration data loaded on your Looking Glass and the right tools (the official Quilt Viewer, or this third-party web-based Quilt Viewer), you can play those videos with minimal resources on the Looking Glass.

We don’t have tools for making quilt videos yet, but it seems like you know ffmpeg, so you can render each frame out and use ffmpeg to stitch them together into a video.
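
Something like this should work (a rough sketch, assuming Node/TypeScript with ffmpeg on the PATH; the file names and quality settings are only illustrative):

// Rough sketch: stitch a numbered PNG sequence into an H.264 quilt video.
// "quilt_%04d.png" and the CRF value are illustrative, not a standard.
import { execFileSync } from "child_process";

execFileSync("ffmpeg", [
  "-framerate", "60",       // input frames per second
  "-i", "quilt_%04d.png",   // quilt_0001.png, quilt_0002.png, ...
  "-c:v", "libx264",        // H.264 encoder
  "-pix_fmt", "yuv420p",    // broad player compatibility
  "-crf", "18",             // visually near-lossless; raise for smaller files
  "quilt.mp4",
], { stdio: "inherit" });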

Hope this helps!

#10

For Our customers, I think the quilt texture is the only option, but for Your customers you should consider creating a pre-processed-video creator, because most of the time they will use it on their own LG, and they would be able to pre-process the video with 64 views at 2560x1600 without paying anything extra to use it at its best.

Good news: I will do it with my TypeScript version this week; it’s almost done already :slight_smile:
(it’s a personal project at this point)

EDIT: if it’s as easy as I think it is, I’ll do the quilt recorder at the same time.

#11

You should actually sell this service (pre-process the video in high quality on your server for those who are ready to pay for it, and create a video with their particular calibration, just for them).

That’s exactly what I wanted to do for my company, actually, but here you could pay the creators, encourage them to create more stuff, and encourage them to share the quilt version for wider distribution at lower quality; a win-win :slight_smile:

EDIT: I’m sure some people would pay for it, because I’m doing it just to be able to see what it looks like at the best quality at 60 fps (at 16384x16384 with 64 views, it runs at less than 1 fps on my computer; it’s hard to get an idea from that) :slight_smile:

#12

Hi Tom,

Hopefully I can provide some clarity on this process.

As you’ve noticed, the variations between calibration values are very slight. This is to account for micron-level alignment differences between the pixels on the LCD in the Looking Glass, and the lenticular overlay atop it. We do not have perfect control over the alignment in manufacturing, so we need to encode very precise values to allow the Looking Glass rendering engine to correct the output for an individual alignment.

However, even if the calibrations on every Looking Glass device were exactly the same, we would still not recommend distributing 2560x1600 video of your Holoplay.js scene, for a number of reasons:

  • When lossy compression algorithms are applied to an image with subpixel-level detail, a quality reduction that might not be visible in a normal video becomes very obvious. This means that recording a precalibrated video is a bit of a catch-22. Playing back a video recorded at a low bitrate will look substantially worse than a 2D video at the same resolution and quality settings. Recording a video at a lossless bitrate may look ok, but will generate an enormous file (as you discovered) and require substantial horsepower to play back.
  • “De-quilting” a quilt video is comparatively inexpensive relative to rendering the multiview scene in the first place. It is possible to play back a 4096x4096 quilt video on a Looking Glass driven by an iGPU, no problem.
  • Distributing a quilt video allows your content to play back on either a small or large Looking Glass, both of which have different aspect ratios.

We are all well aware that the tools for working with static quilt image and video files are underbuilt right now, and we’re working hard to deliver new software to facilitate that in the coming weeks and months. At the moment, the best tools to view and manipulate quilts are the StereoPhoto Maker quilt playback webapp, our in-house Quilt Viewer (closed beta), and kirurobo’s LookingGlassQuiltViewer.

Regarding the actual process of rendering a quilt video out from a three.js scene: you might have to ask @alxdncn, as I’m not sure we have a process for that at the moment.

#13

Thank you very much for your detailed answer!

#14

Hello,

I’m working on my TypeScript app using ffmpeg, and concerning the quilt video, I would like to know which format you are using in your quilt player.

4096x2048 H.264, or 8192x4096 H.265, or both?

#15

Hi,

The standards we usually roll with are 4096x4096 with a 5x9 view tiling (45 views, 819x455 per view), and 2048x2048 with a 4x8 view tiling (32 views, 512x256 per view). Both of these are just rules of thumb - most of the quilt players can be configured to accept a wide range of input formats - but both of these formats we’ve tested, and they produce a decent balance of view count vs. resolution.
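
If it helps, the arithmetic for locating each view inside a quilt is straightforward. A quick sketch, assuming the usual convention of views ordered left-to-right, bottom-to-top:

// Quick sketch: pixel rectangle of view i inside a quilt, assuming views
// are tiled left-to-right, bottom-to-top (the usual quilt convention).
function quiltViewRect(
  quiltW: number, quiltH: number,
  cols: number, rows: number,
  i: number,
): { x: number; y: number; w: number; h: number } {
  const w = Math.floor(quiltW / cols);
  const h = Math.floor(quiltH / rows);
  return {
    x: (i % cols) * w,
    y: Math.floor(i / cols) * h, // measured from the bottom of the quilt
    w,
    h,
  };
}

// 4096x4096 quilt, 5x9 tiling: view 0 is the 819x455 tile at (0, 0)
console.log(quiltViewRect(4096, 4096, 5, 9, 0));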

hope this helps!

#16

Thank you for your answer and your quick reply! :slight_smile:

Actually, these formats are not conventional for videos:

the maximum resolution of H.264 is 4096x2048, and the maximum resolution of H.265 is 8192x4096.

At first, I will only allow 2048x2048 in H.264, because my graphics card (GTX 750 Ti) refuses to decode H.265 video taller than 2048. I may add an H.265 option for bigger captures, but I’ll need feedback, because I can’t test it.

Applying the shader to a 4096 texture is not a problem, for sure, but did you try just playing a 4096x4096 video at 60 fps on a regular laptop? I doubt it will work, because H.265 requires 10x more resources than H.264 (that’s the official number from the H.265 documentation), and I actually can’t even open such a video on my desktop (i5 / GTX 750 Ti / 16 GB RAM / SSD).

I finished my pre-processed-video encoder, and in comparison, a 2560x1600 video needs far fewer resources just to play (I’m not speaking about the shader, just opening the video in a video player) than a 4096x4096 H.265 video.

Because it’s pre-processed, I can use a huge quilt texture of 16384x16384 with 64 views (2048x2048 per view). That way I get almost the best quality possible, with a more accurate final result (because of the number of views), while maintaining the best performance possible.

But, as you can imagine, the video file is huge! Around 1.4 GB per minute for good output quality.
EDIT: it also takes very long to compute: around 0.5 seconds per frame x 60 frames per second x 60 seconds = 1800 seconds, i.e. about 30 minutes of encoding per minute of video.

I will post my code on GitHub and on the LG forum once my quilt-video recorder and quilt-video reader work as expected. Probably next week, because I’m very busy…

#17

Hi Tom,

We try to use square textures for encoded video because GPUs tend to handle them better. We also typically use H.264.
I don’t know an enormous amount about video encoding standards - I typically just use ffmpeg to stitch together sets of 4096x4096 frames, and that seems to work with a 4096x4096 texture for us.

With lonetech’s mpv shader, I can play back a 4096x4096 quilt video on my MacBook Pro from 2015 (i5, integrated graphics, 8 GB of RAM) at full frame rate on a standard Looking Glass. Our in-house quilt players aren’t as well optimized, which is something we’re well aware of and working on. Your best bet may be kirurobo’s native quilt playback tool. It’s actually less computationally intensive to play back a 4096x4096 video that is passed through the postprocess shader and displayed at native 2560x1600 than it is to play back the video at full resolution in a window and scale it down.

I’m definitely curious about what your video looks like! And if you’re designing it for a specific purpose - playing it on a single Looking Glass - it seems like you’ve already figured out the cheapest way to play it back.

One thing I will suggest, though: a quilt texture size of 16384x16384 is total overkill for the current capabilities of the display. This isn’t exactly the right way to think about it, but in this situation your “pixels in / pixels out” ratio is only about 1.5% ((2560 * 1600) / (16384 * 16384)), compared to a 24% ratio for a 4096x quilt, or a 97% ratio for a 2048x quilt. With that extremely high input resolution, you’ve long since passed the point of diminishing returns from higher resolution textures and increased view counts, and are just wasting GPU power to render and postprocess it. Even the very highest resolution setting in our flagship rendering pipeline (Unity) maxes out at 1280x720 per view, and that’s already serious overkill.
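
To make those numbers concrete, here is a quick check of the ratios quoted above (a throwaway sketch, nothing more):

// Quick check of the "pixels in / pixels out" figures above,
// for a 2560x1600 Looking Glass panel.
const screenPixels = 2560 * 1600;
for (const q of [2048, 4096, 8192, 16384]) {
  const pct = (100 * screenPixels) / (q * q);
  console.log(`${q}x${q} quilt: ${pct.toFixed(2)}% of rendered pixels reach the screen`);
}
// prints roughly 97.66%, 24.41%, 6.10% and 1.53%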

I’d suggest that before you commit to building preprocessed quilt videos at that very high resolution, you take a shot at rendering at 4096/45 views, then postprocessing using your existing process - see if it’s possible to tell the difference; we’ve never dealt with a scenario where it is.

#18

Hi Evan,
Thank you again for your time !

I’m a total beginner with ffmpeg too; my work doesn’t focus on this part when I’m at the office, and this is actually my very first project with it. I’ve been deep in the ffmpeg / H.264 / H.265 documentation these days to understand how it works, but I’m still a beginner.

I don’t know how ffmpeg works with an image sequence, but I’m very surprised by your result. I will try to export a sequence of PNGs first and then stitch them together as you did. If that allows publishing bigger quilt videos without loss of performance, it’s definitely what I want to do!

Can I ask you for the ffmpeg command you’re using to convert the PNG sequences into a video? Then I would be able to do the exact same thing you did on my computer.

Sure, but for the moment my big videos are not even playable ^^
From my teammates’ point of view, H.265 is not made to be decoded on a computer, because it’s too heavy (it’s made for expensive smart TVs), but if your 4096x4096 videos run on a laptop, there must be a solution - and I’ll find it! :slight_smile:

Right now it’s just a basic demo, but I want to be able to create a kind of music video based on hundreds of thousands of particles with light effects. Actually, I just want to be able to create whatever I want without thinking about performance, and pre-processed videos are perfect for that!

Please excuse me, but I don’t understand what that means.
The parallax effect works very well with the 16384x16384 texture; the fact that it uses a lot of views makes the result smoother than a 4096x4096 texture split into 45 views.
Can you try to clarify a bit what you said?

EDIT: I just tried with a 2048x2048 quilt, and it acts as if the viewCone value were higher than with the 16384x16384 one. Is that what you had in mind when you told me about the “pixels in / pixels out” ratio?
If so, it’s not a big problem, because you just need to adjust the viewCone value, but maybe it’s more complex than I think.

Concerning the size of each view, I think you’re right, it’s probably overkill for my CPU, but if I split it into 10x10 views it’s great! :smiley:

The thing is, for the moment I don’t use a PNG sequence, so the encoding process reduces the quality of the input; and because lossless output generates ultra-huge files, I don’t do that either. So the final result is a mix of the ultra-high-quality quilt source and the high (but not ultra-high) quality of the video.

There is a perceptible difference in quality between a quilt of 8192x8192 with 45 views and one of 16384x16384 with 64 views, but in my opinion it’s more because of the number of views than the size of each view. This afternoon I made a test with 100 views, and I think it’s the best choice (but it takes very, very long to encode, maybe 4 seconds per frame - not frames per second).

EDIT: I made some tests with static quilts, using my ffmpeg process, with different quilt sizes from 2048 to 16384, because that’s easier to compare, and I confirm that what really matters is the number of views. Actually, the results are maybe even a bit better with views smaller than 1024x1024, because it generates a kind of soft “blur” around the elements. But I need to make more tests with textured objects in different scenarios.

EDIT2: Is there a connection between the viewCone value and the camera’s fieldOfView?

Thanks again for your advice and your time.

#19

I actually can’t even open such a video on my desktop (i5 / GTX 750 Ti / 16 GB RAM / SSD)

According to this, H.265 decoding requires a GTX 1000-series card (though the GTX 950 or 960 can apparently manage 4K x 2K). With a GTX 1000-series card or later, you can do 8K x 8K.

I recommend upgrading - the GTX 750 Ti is quite old, being introduced more than 5 years ago. That’s a long time, in GPU terms. You could get a refurb GTX 1050 for about $100, or a new GTX 1650 for $150.

BTW, that page also lists 4k x 4k support for H.264. So, it seems the format’s limit is a bit higher than 4k x 2k.

#20

a quilt texture size of 16384x16384 is total overkill for the current capabilities of the display.

Not if you want to move the camera around within it. …just sayin’.