Depth pass video sequence on Looking Glass

Hello, is it possible to play a sequence of frames as a video using Depth Media Player or some other player? I would like to render out my animation with a depth pass under it, just like the image format, and play it on the Looking Glass for a presentation.

Here is an example of 1 frame of the animation.

Not really, since no parallax view information is available. Instead of image plus depth, render out multiple views of just the image (depth not needed). The Looking Glass Unity and Unreal integrations do this for you. To do it yourself, set up a camera rig with 45 cameras in a linear array, with backplane shifts so they keep your content centered (if that is not possible, you could pan the cameras so they point at a central point, but then you’ve introduced unwanted vertical disparity). Tile these 45 images into a “quilt” and use the quilt player to view on the Looking Glass; a tiling sketch follows below.
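For reference, here is a minimal tiling sketch in Python with Pillow. The view filenames, the 5x9 layout, and the bottom-left-first view ordering are assumptions on my part, so check them against your quilt player:

from PIL import Image

# Tile 45 rendered views into a 5x9 quilt image.
COLS, ROWS = 5, 9  # 45 views total

views = [Image.open(f"view_{i:02d}.png") for i in range(COLS * ROWS)]
w, h = views[0].size
quilt = Image.new("RGB", (w * COLS, h * ROWS))

for i, view in enumerate(views):
    col = i % COLS
    row = ROWS - 1 - (i // COLS)  # view 0 goes in the bottom-left corner
    quilt.paste(view, (col * w, row * h))

quilt.save("quilt_qs5x9.png")  # layout hint in the filename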

Hi @Marcinstar,

If you have a sequence of frames but not a video, you can use tools like ffmpeg to generate a video and then play it in Depth Media Player.
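For example, a minimal sketch that drives ffmpeg from Python (assuming ffmpeg is on your PATH and the frames are named frame_0001.png, frame_0002.png, and so on; adjust the pattern and frame rate to your render):

import subprocess

# Stitch a numbered frame sequence into an MP4.
subprocess.run([
    "ffmpeg",
    "-framerate", "30",        # playback rate of the source frames
    "-i", "frame_%04d.png",    # printf-style pattern for the sequence
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",     # widest player compatibility
    "depth_video.mp4",
], check=True)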

@Dithermaster actually we do support depth videos in Depth Media Player!

@Dithermaster I know I can do that, but by using the color+depth method I can convert a lot of older videos to work with the Looking Glass, since z-depth is a common pass to render.

@kelly Thank you for the response. I will try the suggested program. I tried to make a simple MP4 from After Effects, but Depth Media Player was asking for metadata to play it.

What format would you suggest?

Here are some test files that work as a single image: https://youtu.be/hotBULuXY8Y

Thanks @kelly, I did not know that. TIL.

Hey @Marcinstar,

This feature was originally designed to play depth videos from Scatter’s Depthkit, so it could be a little more difficult if you made your own depth videos.

But here’s a hacky way. You can actually generate a metadata file yourself by creating a .txt file with the same filename as your video, in the same folder. Inside the text file, you need to have the following arguments in JSON format (there are other values too, but these are the absolute required ones; I will need to go back and check my code and update later):

{
    "farClip": 1.1364357471466064,
    "nearClip": 0.40272921323776245
}

If I’m remembering correctly, these numbers represent meters…? Please tweak the numbers and see what values work for you. Also, please remember that this feature wasn’t officially designed to be used this way, so it might not produce the best results. I will update if I find a better way.
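If you have many clips, a small Python sketch can write the sidecar for you (the video path here is hypothetical, and I am assuming the player wants the extension swapped to .txt; adjust if it expects something like video.mp4.txt instead):

import json
from pathlib import Path

video = Path("depth_video.mp4")  # hypothetical video path
metadata = {
    "farClip": 1.1364357471466064,
    "nearClip": 0.40272921323776245,
}

# Write the sidecar next to the video: same base name, .txt extension.
video.with_suffix(".txt").write_text(json.dumps(metadata, indent=4))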

Note: another thing I just realized is that the depth map might need to be hue-encoded instead of grayscale-encoded.

Thank you for your help @kelly. I followed your advice. I also downloaded a sample from the Depthkit video page to analyze the metadata.

Based on this I was able to determine how I could remap the black-and-white depth pass to a color pass. It looks like the color spectrum on this sample goes from yellow (closest) to violet (furthest), and I’m assuming it continues all the way around to red, etc. I tried to mimic that using a pre-rendered z-depth, and here are some examples.
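Here is roughly how I think the remap works, as a Python/OpenCV sketch. The yellow-to-violet hue endpoints and the white-equals-near assumption are my guesses from the sample, so flip or tweak them for your own renders:

import cv2
import numpy as np

# Remap a grayscale z-depth frame to a hue-encoded one.
gray = cv2.imread("zdepth_0001.png", cv2.IMREAD_GRAYSCALE)
near = gray.astype(np.float32) / 255.0  # 1.0 = nearest, assuming white = near

# OpenCV hue runs 0..180 (degrees / 2): yellow ~= 30, violet ~= 135.
hue = (30 + (1.0 - near) * (135 - 30)).astype(np.uint8)

hsv = np.dstack([
    hue,
    np.full_like(hue, 255),  # full saturation
    np.full_like(hue, 255),  # full value
])
cv2.imwrite("huedepth_0001.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))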

Here is a link to a video of how that works: https://youtu.be/7PVu9Xrya64 (sorry for the quality, as I did not have my main recording equipment at the time).

I also used this metadata, which I got from the Depthkit sample combined with your suggestions:

{
    "depthImageSize": {
        "x": 1440.0,
        "y": 1280.0
    },
    "farClip": 0.95555,
    "nearClip": 0.35555,
    "format": "perpixel",
    "textureHeight": 1440,
    "textureWidth": 1280
}

I was not able to find documentation for the metadata, so this is trial and error.

For the most part it is working OK, but there are large artifacts on the edges where the foreground plane meets the background.

The same image, when using the grayscale z-depth image, does not have any artifacts, but that path does not allow for animation. It looks like the grayscale version uses background pixels to fill in the space, while the color-coded version uses object pixels to fill it in. There is also some kind of choker that eats away at the edges of the character in the grayscale version; I am assuming that helps to fill in the edges.

I think there could be some color spill when converting the z-depth to color that might be causing this, but I am not sure.

Is it possible to know the difference between how the material is converted from the grayscale z-depth and from the color depth? It looks like the grayscale version is superior, as it builds the depth from 0 to 1 based on brightness.

I am trying to find a way to move forward, and it is very close. There are hours of pre-rendered material that could be converted to play on the Looking Glass if this works.

Hi Marcin - I responded to you via email and will continue the conversation there. For those who come across this thread later, we are indeed using a different algorithm to parse the photo and video depth - we’ll look into unifying them in the future if this approach presents issues for folks.