Pre-processed Video Difficulties

Hi everyone,

I’ve nearly accomplished what I want to do with the Looking Glass as far as creating and playing electron tomography models as pre-processed videos goes. However, I’m running into some issues with the actual playback of the videos, since the files are massive. I’ll outline my general process here, and if anyone can provide insight, it’d be much appreciated!

Goal: Generate a model with a 60-second rotation animation at 30 FPS that can be played from an Intel NUC without significant graphics-card use and with as little compression as possible.


  1. Export a reconstructed contour of a dataset from TomViz as a .stl

  2. Load the .stl into Blender 2.8 to set up lighting, 45 cameras (via script), and whatever animation is desired. Render this via script.

  3. Take the rendered tile views and turn them into quilts through MATLAB (32-bit-depth PNGs).

  4. Use a Unity-generated app to turn the quilts into lenticularized images, capturing and transforming the output texture into a .png.

  5. Run the lenticularized images through MATLAB to generate an .avi.

  6. Finally, use VLC media player to play this .avi fullscreen on the Looking Glass display.

Ultimately, we end up with a high-quality model that looks great, but the .avi file is about 20 GB. It plays in short bursts and is somewhat jittery.

Some Questions
-Is it reasonable to even attempt to play a video of this magnitude, or would it be better to try to use the GPU attached to the NUC?
-If the video is reasonable, what video player would be capable of playing it?

I’ll be continuing my own trials with this, but if anyone has gone through similar issues and found a solution, please let me know!
I don’t know how to embed a video here, so here’s a link to a video of what it looks like right now


I think you will need to transcode to a more compressed format, like H.264 or VP9. Look into ffmpeg for this transcoding. Expect the transcoding to be very slow (at least a few minutes for your 1 minute video).
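As a concrete starting point, a minimal x264 transcode with ffmpeg might look like the following (the filenames are placeholders; tune -crf between roughly 18 and 28 to trade quality for file size):

```shell
# Re-encode a huge AVI to H.264 in an MP4 container.
# -crf controls quality (lower = better but larger); slower presets shrink the file further.
ffmpeg -i lenticular.avi -c:v libx264 -crf 20 -preset slow -pix_fmt yuv420p lenticular.mp4
```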

That said, since you are lenticularizing before encoding the video, it will have trouble compressing well (too much high-frequency data - this might also be why your AVI is large) and will probably result in very ugly playback. Instead, it would be best if you can encode the quilted video, and lenticularize on the fly during playback. The NUC’s GPU should have no problem doing that.

I very much agree with the suggestion to encode to tile/quilt format and lenticularize on the fly during playback. Compression and lenticularized images don’t mix well.

I also created a small tool to record pre-processed video.
I used the H.264 codec at a quality that is almost lossless (but not totally lossless, to reduce the file size) and with slow encoding, also to reduce the size.

The ffmpeg command looks like this:

-hwaccel cuvid -y -f rawvideo -r 60 -s 2560x1600 -pix_fmt rgb0 -i - -c:v h264_nvenc -pixel_format yuv420p -rc constqp -preset slow -profile:v baseline -crf 18
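Wrapped into a complete invocation (the raw-frame source and the output filename below are my placeholders), that would presumably be something like:

```shell
# Raw RGB frames piped in on stdin, hardware-encoded to H.264 with NVENC.
cat frames.raw | ffmpeg -hwaccel cuvid -y -f rawvideo -r 60 -s 2560x1600 \
  -pix_fmt rgb0 -i - -c:v h264_nvenc -pixel_format yuv420p -rc constqp \
  -preset slow -profile:v baseline -crf 18 output.mp4
```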

One minute requires around 1.4 GB; it’s still huge, but it’s better than 20 GB :slight_smile:

EDIT: Another solution that I haven’t tried yet could be to generate a sequence of PNGs instead of an mp4, then apply a PNG optimizer; some of them can reduce the size by 90% without loss of quality. Finally, encode an mp4 from the optimized PNG sequence.
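As a sketch, that pipeline could look like the following, assuming optipng as the lossless optimizer and a frame_0001.png-style naming scheme (both are my assumptions, not what the tool actually produces):

```shell
# 1) Losslessly recompress every frame (the pixels are unchanged, only the file shrinks).
for f in frames/*.png; do
  optipng -o5 "$f"
done
# 2) Encode the optimized sequence into an mp4.
ffmpeg -framerate 30 -i frames/frame_%04d.png -c:v libx264 -crf 18 out.mp4
```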

I have the best results with this one

It’s not ugly at all, and it’s guaranteed to run at 60 frames per second, even with an ultra-complex demo with 64 views or more.

It’s true that the final output quality is not as good as the source quality, but it’s also almost impossible to handle 64 views or more live at 60 fps (because that requires a huge quilt texture) if the demo is a bit complex, so neither of the two scenarios is perfect.

The number of views increases the quality of the user experience; the size of the view cone plays a role too.

I think that with a good configuration (number of views, texture size, encoding quality, view-cone size), using a pre-processed video is the best choice (unless you have a very expensive computer that can handle everything).

EDIT: if you want to try it yourself (and if you are using Windows), download and install the threejs driver

Then download this project

Once you’re ready, click on “holoplayToMp4.exe”; it will create an example video in the video folder using your LG calibration. If you want to customize the number of views or the size of the quilt, just set the values you want in the file src/index.html

If the bitrate is high enough, a compressed video will look fine. But I bet if you compressed the quilted version instead of the lenticularized one, you could get the file size much lower with the same high quality.

Of course, this requires you to be able to play back quilted videos. And I’m not sure there is an easy way to do that. (Clearly there are some applications in the Library that have done similar things - I bet they do it through Unity.)

FYI, optimizing PNGs affects only the PNG file size itself, and will have no effect on the size of an output video that’s created from those PNGs.

Actually, it shouldn’t be too difficult to make a video player with Holoplay.js and Three.js that converts a quilted video for display on the fly.

EDIT: I read further on your link and it sounds like you’ve already done all this.

“FYI, optimizing PNGs affects only the PNG file size itself, and will have no effect on the size of an output video that’s created from those PNGs.”

Of course it has an impact on the output video…
If you convert the sequence of PNGs into a video at lossless quality, it will just assemble the pictures to make a video.

If each picture has been optimized, your video will be as optimized as your pictures.

That’s not true.
The maximum size you can use for a quilt video is 8192x4096, because that’s the hard limit of the H.265 codec.
But even if you can create such a video, only 1% of the computers in the world could play it, because H.265 is 10x more complex to decode than H.264. My desktop computer is not even able to play a 4096x2048 H.265 video…

Why does the quilt size matter?
Because the number of views and the resolution of each view depend on the size of your quilt texture.

If you preprocess the video, you can use an 8192x8192 quilt with 64 views (1024x1024 each) without any problem, because the output video is only 2560x1600 and, since it uses H.264, it’s easy for any computer to decode.
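The arithmetic behind those numbers is straightforward; a quick sanity check:

```shell
# An 8192x8192 quilt holding an 8x8 grid of square views.
quilt=8192
grid=8
views=$((grid * grid))      # total views in the quilt
view_px=$((quilt / grid))   # edge length of each view
echo "${views} views at ${view_px}x${view_px} each"   # prints: 64 views at 1024x1024 each
```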

EDIT: the final rendering framerate has an impact on the quality of the user experience; the quality of the picture is not everything.

Ah, thanks so much everyone!

I looked into PNGGauntlet, but it was taking a dauntingly long time to start reading my files, so I cut the test short - I may revisit it. A couple of things I’ve learned so far in other testing today:

  1. Probably obvious, but storage speed matters for this. I tested this with the SSD on my laptop, a USB 3.0 drive, and the NUC’s HDD. The SSD performed best, but interestingly, the USB 3.0 drive seems to perform better than the NUC’s HDD, implying that a sufficiently large USB 3.0 drive could work even better than what comes with the NUC. What would be interesting to test is how SD cards perform, since that’s an option. I imagine looking into data transfer rates will reveal whether this makes sense or not.

  2. ffmpeg is a lifesaver for space while maintaining quality. I transformed my lenticular.avi test file (11.4 GB, 20 FPS, 1.9 Gbps) into a far more reasonable lenticular.mp4 (202 MB, 20 FPS, 0.034 Gbps). Here’s a link to both playing from the NUC’s HDD using VLC, where the clear winner is the .mp4 version.
     The command I used to transform it is:
     ffmpeg -i input.avi -c:v libx264 -crf 19 -preset veryslow -c:a aac -b:a 192k -ac 2 out.mp4
     A minor concern is that sometimes the .mp4 will undergo massive color shifts or image tearing for the first cycle, but it will adjust itself afterward. I’ll try to look into this more.

  3. I tried out MPC-HC, and it plays the massive videos back at a much better pace than VLC, most noticeably when running off my SSD, since the big videos struggled to play everywhere.

Anyways, thanks again for all the help, and I’m glad we’re getting some good discussion here! I’ll try passing my original 20 GB videos through testing today and see where it takes me. If I have time, I’ll try to write some quilt video -> Looking Glass code (though it seems there are already some successful tools being shared around!).

Small Side Note: Does anyone know any good sources/references on data storage? The ffmpeg stuff is kinda incredible to me and I’d like to learn more about it.

PNG is a lossless format, and optimizing PNGs does not change their content (unless you are doing lossy optimization, which is possible but uncommon). Any video encoder will decode the PNG to raw bitmap data before encoding it, so the result doesn’t change if the image data doesn’t change.
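A quick way to convince yourself of this (original.png and optimized.png are placeholder names) is to decode both files to raw pixels and compare them byte for byte:

```shell
# Decode each PNG to raw RGB and compare; a lossless optimizer changes
# the file bytes but not the decoded pixels an encoder would see.
ffmpeg -v error -i original.png  -f rawvideo -pix_fmt rgb24 original.raw
ffmpeg -v error -i optimized.png -f rawvideo -pix_fmt rgb24 optimized.raw
cmp original.raw optimized.raw && echo "identical pixel data"
```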

One other thing I’m confused about: why do you want the quilt to have more total pixels than the output display? I’m suggesting storing the quilt, but with approximately the same number of pixels as the display itself, not at a larger resolution that would have to be downscaled.

You didn’t look at the link I shared.

If you optimize a PNG, it’s still lossless, but the file size is much smaller.
I made a test with a PNG generated by my tool: the original size is almost 5 MB; after optimization it’s only 1.1 MB for the exact same image. That’s almost 80% less than the original size.

You’re right if I re-encode it, but what if I don’t encode it at all?
You can do that with ffmpeg using -c:v copy; it’s not an encoding, it’s an exact copy from the source, one frame after another, assembled into a video. The output file size is equal to the sum of the file sizes of all the images.

So, if you optimize your PNGs, you optimize your video, because it’s the exact same data. Ffmpeg just adds some “glue” between frames.
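Assuming a frame_0001.png-style naming scheme (my placeholder), such a stream-copied video could be built like this; a .mov container is one option that can store PNG frames verbatim:

```shell
# Mux the already-compressed PNG frames into a video without re-encoding:
# -c:v copy stores each PNG's bytes as-is, so optimized PNGs -> smaller video.
ffmpeg -framerate 30 -i frames/frame_%04d.png -c:v copy out.mov
```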

Because the screen shows one view at a time (that’s not totally true, but let’s keep it simple), and a single view in your quilt is much, much smaller than the size of your screen.

If you use a 2048x2048 quilt with 36 views, it means that the size of each view is 341x341.
That sounds like very, very bad quality compared to the resolution of the screen (2560x1600).

If you preprocess the video, you can use a huge quilt because performance is not a problem anymore.
With a bigger quilt you can have many more views (so a better-quality hologram) and better quality per view.

A quilt contains multiple views.
A single view is what you see on the screen.
If you want each view to have a size of 2560x1600, your quilt texture will be enormous, much bigger than 8192x8192…

Me too :slight_smile:
I made one test using this tool

PNGGauntlet should be better, but I got the same result as you, and after one minute to compress a single frame I gave up. TinyPNG did the job in 6-7 seconds for a single frame; for 3600 frames, it would be sooo long. Not sure I can be that patient; it’s easier to use bigger files and an SSD, as you suggest :slight_smile:

I did, and I know that PNGs can be significantly optimized. I just didn’t understand the next part:

You’re right about that. I had incorrectly assumed you wanted to re-encode. But if you can get an acceptable file size and disk bandwidth with a sequence of optimized PNGs, you will certainly get a great visual result (assuming your video player can play it back).

And, of course, if you’re doing this losslessly, then the lenticularization won’t hurt the compression quality. (Although it could change the compression ratio of your PNGs; I’m not sure.)

Hm, are you sure? I’m pretty certain that the resolution of the screen per view is approximately 2560x1600 divided by the maximum number of distinct views (I think it’s 45?) - not 2560x1600 per view.

This is exactly what I’m saying :slight_smile:
The default quilt for the LG is 5 x 7 views inside a 2048x2048 quilt, or 5 x 9 views inside a 4096x4096 quilt.

Then the correct size of each view is either 409x292 or 819x455.

In any case, it’s much smaller than the size of the display, and it’s all because of the size of the quilt.
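Those per-view sizes follow directly from dividing the quilt by the grid of views; a small helper makes the arithmetic explicit (integer division, since views land on pixel boundaries):

```shell
# Per-view resolution: quilt_width quilt_height view_columns view_rows
view_size() {
  echo "$(( $1 / $3 ))x$(( $2 / $4 ))"
}
view_size 2048 2048 5 7   # prints: 409x292
view_size 4096 4096 5 9   # prints: 819x455
```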


Actually, I’m considering creating a kind of PNG-sequence player.
I think it would probably be the lightest solution possible in terms of performance.

Ok, sounds like we agree on everything :slight_smile:
I was just confused by this:

since each view is physically much smaller than 2560x1600 on the real display, I don’t think you would want that.