I know the Maya plug-in is in progress, but can anyone tell me the tech specs if I were to build a 32 camera rig in Maya to do some test still renders now? I.e. What lens, camera spacing, toed-in or parallel, etc. Just something basic. Thanks!
Hey there, I can very briefly sketch this out for you here, but we actually are working on more official documentation to answer these questions, so I’ll let you know when those are posted!
Basically, we use an FOV of about 14 degrees (total, not from center) for our virtual cameras. The view cone for the Looking Glass is typically about 40 degrees total, so when we do our renders, we start at -20 degrees from center and move to +20 degrees from center. We use a parallel method, not toe-in, so we shift the view and projection matrices of the camera as we move from view to view. Not sure what the lens options are in Maya, but I’d also aim for 16:10 output.
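If it helps, the angular sampling described above can be sketched in a few lines of Python. The 32-view count, 40-degree total cone, and even spacing come from this thread; the 10-unit focal distance and the tangent-based offset formula are my own assumptions for illustration, not official values:

```python
import math

def parallel_rig(num_views=32, cone_deg=40.0, focal_dist=10.0):
    """Camera offsets for a parallel (off-axis) rig.

    Views are spaced evenly across the total cone, from -cone/2 to
    +cone/2 degrees from center. Each camera is slid sideways (not
    rotated) so that its line of sight through the focal plane hits
    the target view angle.
    """
    views = []
    for i in range(num_views):
        # Evenly spaced angle across the cone, e.g. -20..+20 degrees.
        angle = -cone_deg / 2 + cone_deg * i / (num_views - 1)
        # Sideways offset producing that angle at the focal plane.
        x = focal_dist * math.tan(math.radians(angle))
        views.append((angle, x))
    return views

rig = parallel_rig()
```

The leftmost camera ends up at -20 degrees and the rightmost at +20, matching the render sweep described above; the actual parallel-projection shift is covered further down the thread.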
That said, if you run into any pitfalls, I’d strongly advise waiting for the Maya beta release. We ran into a few ourselves and we’ve done the work to get around them so you don’t have to. Tagging @dez here as it’s his project and he’s just finishing it up now, so he can help answer any further questions you might have!
Cool. That all sounds doable, but I can also easily wait for the official Maya plug-in. Besides, I also need the media player to feed the resulting quilts into.
I’m also looking to render some offline frames from other DCCs. Could a camera rig be exported from Maya via FBX?
I’ve hacked together an approximate camera setup from the Unity prefab, with a 13.5° FOV and 1.8 aspect ratio, in a 45-camera parallel array.
I’m wondering about the effects of the camera spacing relative to the HoloPlay capture volume. Is this a fixed number?
Did you answer this above? Perhaps I didn’t understand the response.
Also wondering about any progress on an offline media player.
There is no fixed number. It is recommended to find the focal length / camera distance / film back offset values that work best for what you are working on.
Each parameter has a different effect on what is perceived in the Looking Glass; the effects are quite hard to describe in words (you can see for yourself what they do by playing with the Unity SDK). In any case, they can be tweaked in real time in our Unity, Maya, and (I believe) Blender tools. The Maya tools are currently being tested within the team to iron out issues; I’m not sure when they’re planned for release, but it will be quite soon.
This was my Maya tool for a multi-camera rig back in 2013.
I built several image-sequence-based tools in Maya, Nuke, AE, and 3ds Max.
The worst part is the post-production workflow after the CG render.
I really hate waiting for renders and having to manage tons of images.
Yeah, it will be possible to manage some of this with the incoming Maya tools.
However, if you want to render using a render farm, it will still be necessary to manage the sequence of each camera view.
So aside from Maya being able to assemble quilts (already done; it’s currently being tested), I will be providing two additional solutions myself (maybe others on the team will add more):
- A tool called QUILTR will be able to assemble a quilt sequence out of images with a _v# and _f# in the names (view and frame respectively).
- I plan to provide a Nuke template that can assemble quilt sequences from raw image views of frames
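As a rough sketch of the first approach: collecting rendered views by the _v# and _f# tokens is just filename parsing before the tiling step. The naming pattern below follows the QUILTR description above, but everything else (function names, the exact regex) is my assumption, not QUILTR’s actual implementation:

```python
import re
from collections import defaultdict

# Matches names like "shot_v3_f120.png"; the _v#/_f# convention is from
# the QUILTR description, the surrounding pattern is an assumption.
PATTERN = re.compile(r"_v(\d+)_f(\d+)\.")

def group_views_by_frame(filenames):
    """Group rendered view images by frame number, so each frame's
    views (sorted by view index) can then be tiled into one quilt."""
    frames = defaultdict(list)
    for name in filenames:
        m = PATTERN.search(name)
        if not m:
            continue  # skip files that don't follow the convention
        view, frame = int(m.group(1)), int(m.group(2))
        frames[frame].append((view, name))
    # Sort each frame's views left-to-right by view index.
    return {f: [n for _, n in sorted(v)] for f, v in frames.items()}

groups = group_views_by_frame(
    ["r_v1_f1.png", "r_v0_f1.png", "r_v0_f2.png", "r_v1_f2.png"])
```

From there, each frame’s sorted view list would be pasted into a grid to form the quilt image for that frame.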
If it is helpful, here is what the cameras look like in Maya, currently:
NOTE: the film back on each sub-camera is offset. The cameras are parallel and do not arc.
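The film back offset in that note is simple similar-triangles geometry: slide a camera sideways by x while keeping it parallel, and the focal-plane center images off-axis by focal_length * x / focus_distance, so the film back is translated by that amount to re-center it. Here’s a minimal sketch with assumed lens values (not the plug-in’s actual code):

```python
def film_back_shift(cam_x, focal_length_mm=35.0, focus_dist=10.0,
                    h_aperture_mm=36.0):
    """Horizontal film-back translate that re-centers the focal plane
    for a camera slid sideways by cam_x scene units.

    Returns the shift both in mm and as a fraction of the horizontal
    aperture, since some DCCs take film translate in normalized units.
    """
    # Similar triangles: image displacement scales by focal/focus ratio,
    # and we shift the film back the opposite way to compensate.
    shift_mm = -focal_length_mm * cam_x / focus_dist
    return shift_mm, shift_mm / h_aperture_mm

# Camera slid 2 units right, 35mm lens, focus plane 10 units away.
shift = film_back_shift(2.0)  # -> (-7.0, about -0.194)
```

Combined with the parallel placement discussed earlier in the thread, this is what keeps every sub-camera framing the same focal plane without toeing in.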
Awesome!! I won’t have to prepare my own camera system.
Note: The picture shows one of our nodes in Nuke. It was complicated.
Are you going to use Multi-Stream?