Rendering Images and Videos with Cinema4D

Hi there,
I work with Cinema4D and would love to render videos and images for the Looking Glass. Is there a precise description somewhere of how the cameras need to be positioned and how the image needs to be set up?

Creating a quilt would not be a big problem, but what would be the next step? How do I generate the final image for the Looking Glass?

Same issue here on Blender :slight_smile:

Hello Wes! We’re not working on a port for Cinema4D internally, but we are working on proper documentation for how to approach this problem on any platform. That documentation should be coming in early January; we’d be happy to send it to you then! It will allow you to export image sets that can be viewed with a couple of tools we’ll also be releasing in the near future: a video viewer and a photo viewer.

In addition to this, we’ll be releasing a closed beta of an OpenGL pipeline, along with documentation, that would allow you to create a live viewport if that’s of interest to you!

More information would help in preparing a plugin or a setup for C4D. I noticed that it is possible to take a screenshot from the Looking Glass and it will work directly as a hologram; the same goes for captured videos. The trick would be to render images and video directly in this final format (though I’m not sure whether there will be issues with compression artifacts).

I guess I’ll just wait until you are ready to share some more info :smiley:

Hi there, this is Chris. Is the documentation on how to render images/movies for the LKG already available? I got my device today and tested it. I would love to render animations and images right now; I am not looking to make interactive applications. I saw that there is an app connected to Vimeo, but the instructions tell me to use Unity, and that’s an extra step. I want to render from Cinema4D and just upload the video to the dedicated Vimeo channel. That would be awesome. I also don’t yet understand why a game engine would be required to produce the videos; I guess that requires more understanding of the output format on my part. Also, how is the progress on the image and video viewer?
I apologize in advance if the information is available somewhere and I did not see it. I am a bit hyper right now.
Thank you for any information.

Hi, producing rendered images in the correct quilt format is not really difficult if you’re familiar with setting up cameras in a 3D modeling/rendering package. The default high-quality quilt uses 45 views of the scene, where the camera moves along a horizontal line from a leftmost to a rightmost viewing position. See this thread, and specifically the image in this reply, for how the set of cameras looks in Maya. Note in that picture that although the camera sweeps over a range of different viewpoints, its viewing frustum is corrected at each viewpoint (by making it asymmetric) so that the frusta coincide at the view plane. Set the camera’s (horizontal) field of view to something like 15 degrees.
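
In case it’s useful, here’s a rough, renderer-agnostic sketch of that rig in Python. The 40-degree total sweep comes from a later post in this thread, and all names, default values, and the sign convention are just illustrative:

```python
import math

def quilt_camera_rig(views=45, fov_deg=15.0, distance=10.0, sweep_deg=40.0):
    """Yield (x_offset, lens_shift) per view, leftmost first.

    lens_shift is a fraction of the image width (Blender's shift_x
    convention); flip its sign if your package recenters the focal
    point the other way.
    """
    x_max = distance * math.tan(math.radians(sweep_deg) / 2.0)  # outermost offset
    half_fov = math.radians(fov_deg) / 2.0
    for i in range(views):
        x = -x_max + 2.0 * x_max * i / (views - 1)  # equidistant positions
        # Shift the frustum so all view planes coincide at the focal plane.
        yield x, -x / (2.0 * distance * math.tan(half_fov))
```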

You then need to combine the 45 view renderings into a single quilt image. I’m not sure there are any official tools for it yet, but the layout is pretty straightforward: leftmost view in the bottom-left, filling rows left to right all the way to the top-right for the rightmost view. See Combining 32 views for an example. For a roughly 4096x4096 quilt you’d use view images of 819x455 pixels each (4096/5 by 4096/9, rounded down).
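
If you don’t have a tool at hand, stitching the views together is only a few lines with Pillow. A minimal sketch, assuming the renders are named view_00.png (leftmost) through view_44.png:

```python
from PIL import Image

COLS, ROWS = 5, 9            # 45-view quilt layout
TILE_W, TILE_H = 819, 455    # per-view size for a ~4096x4096 quilt

quilt = Image.new("RGB", (COLS * TILE_W, ROWS * TILE_H))
for i in range(COLS * ROWS):
    tile = Image.open(f"view_{i:02d}.png").resize((TILE_W, TILE_H))
    col, row = i % COLS, i // COLS  # row 0 is the bottom row of the quilt
    # Pillow's y axis runs top-down, so flip the row for a bottom-left origin.
    quilt.paste(tile, (col * TILE_W, (ROWS - 1 - row) * TILE_H))
quilt.save("quilt.png")
```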

You might have to play around a bit with where to place your 3D content relative to the camera, model scaling, etc.

Preferably, keep the cameras parallel and shift the backplane (rather than rotating the cameras). This keeps vertical parallax at zero, which is better.

I’m not sure if you’re responding to something I wrote above, but that is what I meant, specifically the part about using cameras with an asymmetric frustum so they all use the same image plane, regardless of their position along the horizontal axis.

Glad to hear! I saw the word “angle” and assumed the worst (as in “the camera sweeps from a left-most to right-most viewing angle”).

Keep up the great work.

Good catch, I clarified my post :wink:

Hi Paul, I think I’ve got the idea and can start to experiment. Thank you for the info. If I manage to produce a quilt image/video directly from Cinema4D, how can I view it on the LKG? Do I have to upload it to Vimeo, or can I display it on the Looking Glass screen locally somehow?

Hi @morn1415, we have a tool for that! It’s called our Quilt Viewer, and soon we’ll be releasing a lightweight version of it as well. For now, you can sign up for our closed beta of the Quilt Viewer here!

Hi @alxdncn @paulmelis
I think I managed to create a Cinema4D scene with a one-step solution for rendering quilts.
Without the help of any plugins or scripts!
Tested with the Quilt Viewer; it works fine in principle.
It just raised more questions regarding the exact camera positions, dimensions, depth, etc.
It was basically a lot of trial and error.
@_wes, you are also using C4D? Have you made any attempts at this by now?

There are two ways to go about rendering for the Looking Glass, and you have to decide what’s best for your situation. The methods are: 1) the Lightfield Photo method and 2) the Quilt method.

The Lightfield Photo method is to generate 32 or 45 images from cameras that are horizontally arranged, equidistant, and all pointing in the same direction. The field of view must be wide enough that the subject is not cropped in any of the resulting images (particularly at the leftmost and rightmost positions), so the camera’s view of the scene is much wider than what will be shown in the Looking Glass. The Lightfield Photo viewer accepts the 32 or 45 separate images, crops them, and displays them on the Looking Glass. The viewer offers some flexibility by letting you adjust the center point of the images and the focal point depth. This method is the more flexible and forgiving of the two.

Further, this is how live-action lightfield photos are created: a real-world camera is moved along a track and snaps pictures at 32 or 45 positions while looking at a subject. Those images are fed into the Lightfield Photo viewer and displayed on the Looking Glass. The same approach can also be used in a rendering engine (Blender, Maya, C4D, etc.).

More info about Lightfield Photos here.
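
To make the “wide enough” requirement concrete, here’s a small planning sketch in Python; the rail width, distance, and subject size are made-up example values:

```python
import math

views = 45            # 32 or 45
rail_width = 6.0      # total horizontal camera travel (example value)
distance = 10.0       # distance from the camera rail to the subject
subject_radius = 2.0  # rough half-width of the subject

# Equidistant camera positions along the rail, leftmost first.
positions = [-rail_width / 2 + rail_width * i / (views - 1) for i in range(views)]

# The outermost camera must still see the far edge of the subject.
min_fov = 2 * math.degrees(math.atan((rail_width / 2 + subject_radius) / distance))
print(f"Use a horizontal field of view of at least {min_fov:.1f} degrees")
```

With these numbers the minimum comes out to roughly 53 degrees, i.e. much wider than the quilt method’s 13.5-degree cameras, which is exactly why the viewer crops afterwards.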

The Quilt method is to generate 32 or 45 images, horizontally arranged, with every camera pointing in the same direction, but (and this is key) with their view frustums shifted so that the focal point is centered in each camera¹. The cameras should be arranged such that the angle between the leftmost camera, the focal point, and the rightmost camera is 40 degrees. Try to be precise, because the Quilt Viewer cannot adjust the focal point after the fact. The Quilt Viewer expects all perspectives to be combined into a single image: rightmost view at the top-right, leftmost view at the bottom-left. In this way, generating a quilt is sort of an advanced mode, where you control all aspects of how images are shown on the display.

More info about Quilts here.
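
To make the geometry concrete: if the focal point sits, say, 10 units in front of the camera rail, the outermost cameras end up at roughly ±10·tan(20°) ≈ ±3.64 units, with the remaining cameras spaced evenly in between.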

Summary:

  • Lightfield Photo method
    • Easiest to get up and running
    • Can be used for live-action photography
    • 32 or 45 cameras, or a single camera moved through 32 or 45 positions
    • Focal length / field of view wide enough to see the whole subject from every position
    • Horizontally arranged and equidistant
    • Pointing in the same direction (forward)
    • The Lightfield Photo viewer accepts 32 or 45 separate images
    • The Lightfield Photo viewer crops the images and displays them
    • Center position and focal point may be adjusted after the fact
  • Quilt method
    • More effort to set up, but gives you more precise control
    • Typically used for rendering
    • You have to be in the closed beta (sign up here)
    • 32 or 45 cameras
    • 13.5 degrees FOV each
    • Horizontally arranged and equidistant
    • Pointing in the same direction (forward)
    • View frustum shifted such that the focal point is centered in every camera¹
    • The angle between the leftmost camera, the focal point, and the rightmost camera should be 40 degrees
    • The Quilt Viewer expects all views to be combined into a single image (hence, quilt)

¹ In Blender, if the focal point is at (0, 0) and the cameras are arranged such that +Y is forward, the horizontal shift value is (x/2)/(y*tan(fov/2)), where x, y are the camera position and fov is the camera’s field of view (13.5 degrees if you followed the prescription above). You can either keyframe the camera position and render 32 or 45 frames that give you the desired perspectives, or you can set up 32 or 45 scenes, each with a separate camera. If you’re rendering a still image, the former is fine, but if you’re doing animation the latter is necessary. More info about cameras here.
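
For reference, here’s a minimal (untested) sketch of the keyframed variant in Blender’s Python API, using the footnote’s formula verbatim; the distance and object names are just assumptions:

```python
import bpy
import math

VIEWS = 45                   # 32 or 45
FOV = math.radians(13.5)     # per-camera horizontal field of view
SPREAD = math.radians(40.0)  # leftmost camera -> focal point -> rightmost
CAM_Y = -10.0                # camera Y; focal point at the origin, +Y forward

scene = bpy.context.scene
cam_data = bpy.data.cameras.new("QuiltCam")
cam_data.angle = FOV
cam_obj = bpy.data.objects.new("QuiltCam", cam_data)
scene.collection.objects.link(cam_obj)
scene.camera = cam_obj
cam_obj.rotation_euler = (math.radians(90.0), 0.0, 0.0)  # look along +Y

x_max = abs(CAM_Y) * math.tan(SPREAD / 2.0)  # offset of the outermost views
for i in range(VIEWS):
    x = -x_max + 2.0 * x_max * i / (VIEWS - 1)  # equidistant, leftmost first
    cam_obj.location = (x, CAM_Y, 0.0)
    # The footnote's shift: recenters the focal point in this view.
    cam_data.shift_x = (x / 2.0) / (CAM_Y * math.tan(FOV / 2.0))
    cam_obj.keyframe_insert(data_path="location", frame=i + 1)
    cam_data.keyframe_insert(data_path="shift_x", frame=i + 1)

scene.frame_start, scene.frame_end = 1, VIEWS  # one view per frame
```

Rendering frames 1 through 45 then gives you the individual views to combine into a quilt.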

Thank you for the detailed description. There is some information there that will help me adjust my setup now: 13.5 degree FOV per camera and 40 degrees across all cameras. I missed that somehow. Thanks.

Thanks for the great description. If one wanted to create 4D video, then presumably the quilt approach is better? I’m just brainstorming.

For me it comes down to one thing: are you capturing live action or rendering? For live action, you obviously have to use a Lightfield Photo because you’re using a real, physical camera. For rendering, I prefer the Quilt method so you don’t waste time rendering pixels that will never be shown (because they are cropped out).

As an aside, I’ve never attempted to play a video made up of Lightfield Photos… is that even supported?

We don’t currently support videos of lightfield photo sets, but conceivably that would be possible. In fact, what the Lightfield Photo Viewer app is doing under the hood is creating a quilt as you move your focal point, view size, etc., so there really is an output quilt there. It would be incredibly tedious, though, to do this with several lightfield photo sets and then combine the resulting quilts into a video, so I would strongly recommend using quilts for all video content.