Play of Future’s Past
18-Second Tech Demo for the Looking Glass System. Download (Windows): . Built with SDK ver. 0.50.
This is an ongoing 3D animated narrative for LKG (tentatively 2 minutes) that I am developing, highlighting themes of technology, generational divide, and impermanence. In this story, to be finished within the next month to month and a half, a boy encounters multiple versions of himself throughout time and witnesses the degradation or improvement interactive technology has worked on his future self, as seen from 1993. He will witness whether the future has evolved through entertainment or degraded through immersion, and the ways he can take it upon himself to improve the future from the past. Stylistically, the visuals will reference modern 3D game artwork, 1990s vintage polygonal artwork, and 2D pixelation.
Over the last three weeks, give or take hours spent developing other projects, I was able, with a single-artist pipeline, to conceptualize, sculpt, texture, animate, light, transfer, program, and capture an 18-second tech demo of what the final project may look like at its conclusion. The difficulty involved in creating a project in this realm is often underestimated, much more so with a single-artist pipeline. There are advantages and difficulties to the workflow. A somewhat standard but oddball tool pipeline was utilized: asset generation began in Maya/Mudbox and was settled in Unity, and the visuals were made and tooled solo. The audio was compiled from publicly available sound clips and the OST of The Dam Keeper for demonstration's sake (and it is a fine film). All visuals were created by myself.
Part of the challenge in creating the characters is essentially needing to give birth to three renditions of each one for the narrative: modern high-quality, 1990s polygonal, and a special pixelated 3D look that harkens back to the Atari days.
The project's inception came at a couple of key moments: 1.) during a team meeting discussing the birth of 'The Looking Glass' (LKG's newest system), there was an invitation to explore the possibility of 3D animated narrative on the machine; and 2.) on the plane ride to San Francisco for GDC, I put the initial sketches for the concept in a sketchbook while appropriating a set of assets from my library that addressed the desired aesthetic.
There is a lot of technical activity going on in the background of the work, even at this stage. Here are a few key principles I followed throughout:
A. The lights, and even the entire environment itself, need to change every single shot. For the lights, this is mandatory to achieve a cinematic feeling: every individual shot must be lit differently to maximize appeal, as if it were a filmic production. Funny enough, the staging style is that of a play, as the entirety is a single shot. A single lighting pattern 'could' be applied to the whole work, but it would be far less effective. Likewise, for obvious reasons of the hologram, the environment needed to be toggled per shot.
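The idea of one continuous take whose lighting and environment rigs swap at shot boundaries can be sketched in a few lines. This is an illustrative Python sketch, not the Unity API; the rig names and timings are made up:

```python
# Hypothetical per-shot rig table for a single continuous take:
# each entry is (start_time_in_seconds, rig_name). At any moment,
# exactly one lighting/environment rig is active.
SHOTS = [
    (0.0, "bedroom_warm"),
    (6.0, "corridor_cool"),
    (12.0, "arcade_neon"),
]

def active_rig(t):
    """Return the rig that should be enabled at time t of the take."""
    current = SHOTS[0][1]
    for start, rig in SHOTS:
        if t >= start:
            current = rig      # the latest shot boundary we have crossed
    return current

active_rig(7.5)   # "corridor_cool": the second shot's lighting is live
```

In an engine, a per-frame update would call something like `active_rig` and enable/disable the corresponding light and environment objects, which keeps a single camera running while each "shot" still gets its own look.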
B. Manipulation of animation curves. In Maya animation, there exists 'stepped' animation: curves that snap from one pose to the next without any interpolation. This is often used to block in the initial poses for character animation so that nothing is polished unnecessarily. Here, it was necessary for the movement of the camera. With the HoloPlay Capture, I could not simply toggle cameras on and off during runtime; I needed the same camera throughout the whole shot. Thus, stepped curves were required. In Unity, the naming convention has changed to 'constant curves'.
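The difference between stepped ('constant') and interpolated curves can be shown with a tiny sampler. This is an illustrative Python sketch of the concept, not the Maya or Unity curve API; the `sample` function and the keyframe data are hypothetical:

```python
from bisect import bisect_right

def sample(keys, t, mode="constant"):
    """Sample a keyframed value at time t.
    keys: sorted list of (time, value) keyframes."""
    times = [k[0] for k in keys]
    i = max(bisect_right(times, t) - 1, 0)
    if mode == "constant" or i == len(keys) - 1:
        return keys[i][1]                 # hold the last key: no interpolation
    (t0, v0), (t1, v1) = keys[i], keys[i + 1]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)   # linear blend between keys

# A camera 'cut' expressed as two keys on one camera's x-position:
cam_x = [(0.0, 0.0), (1.0, 5.0), (2.0, 5.0)]
sample(cam_x, 0.5, "constant")   # 0.0: the camera holds, then snaps at t=1.0
sample(cam_x, 0.5, "linear")     # 2.5: the camera visibly glides between keys
```

With constant keys, a single always-on camera can fake hard cuts: it teleports between setups at each key instead of sliding through the scene, which is exactly what keeping one camera alive for the HoloPlay Capture demands.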
C. For quality: create the 'performance' in full without considering the scenery or editing… and then curate that performance with the camera capture. There are even a few moments in the 18 seconds I wished could have made it to the final cut; however, the camera could not cut there, for reasons of content clarity and of one area's performance strength versus another's. Creating the entire animation first does not in fact take more time. It actually saves time, sparing you post-mortem readjustments later when there is no full performance to draw on.
As an added bonus, I needed to create a separate 'load-in' scene in order to align the video with the audio. If the 18 seconds loaded with the animated scene as the default, the first playthrough would be out of sync because of the way Unity loads assets.
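The load-in trick boils down to paying the asset-loading cost in a dummy first scene, and only then starting audio and animation on the same tick. A minimal Python sketch of that idea (the `Player` class and callback names are hypothetical, not Unity's scene API):

```python
import threading

class Player:
    """Sketch of a 'load-in' scene: finish all loading first, then start
    audio and animation together so the first playthrough is in sync."""
    def __init__(self):
        self.loaded = threading.Event()

    def load_in_scene(self, load_fn):
        load_fn()              # pay the asset-loading hitch here, off-camera
        self.loaded.set()

    def start_show(self, start_audio, start_anim):
        self.loaded.wait()     # never start the timed content mid-load
        start_audio()
        start_anim()

log = []
p = Player()
p.load_in_scene(lambda: log.append("load"))
p.start_show(lambda: log.append("audio"), lambda: log.append("anim"))
# log is now ["load", "audio", "anim"]: playback never begins before loading ends
```

The ordering guarantee is the whole point: because the animated scene only ever starts from an already-loaded state, its clock and the audio's clock begin together.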
As yet another bonus, Unity does not support importing facial (Blendshape) animations from Maya. Thus, the actual animation configuration of the facial expressions needs to happen within Unity itself.
And because I forgot: the same occlusions that have been appearing in AR projects recently (the black masking of other lit objects) also take effect here, in the doorway.
I wrote this internally to help shed light on the medium of the 'floating image/hologram' versus other media. This is what I came down to:
"How 3D animation works in The Looking Glass vs other media."
From a Visual (Audience) Standpoint:
-When you think about it, both a typical screen and TLKG have a 'canvas' to work with: one is a screen and one is a condensed cubic space. The screen works because people are, of course, conditioned to the screen. TLKG and 3D animated content can work insofar as we convince people of the 'normalcy' of our canvas. While it is the 'future', TLKG must conform to the present. 3D TV didn't work because it was not distant enough from 'the screen' (and it was expensive). 3D animation can challenge the 'screen' from a social standpoint if it proves to be an objectively stronger and more distinct way to view narrative without sacrificing traditional viewing values.
From a Technical (Creator) Standpoint:
-I've worked in several methods across 2D and 3D creation, and it's always funny to me how even 3D animated short films made for 2D media (the 'screen') involve a heavy component of 2D input and output. You have 3D characters and assets, but they are inevitably tools performing 'for' the 2D screen, however hi-def or lo-def that may be. With 3D animated short films for TLKG, the characters are not tools 'for' the 2D screen but immediate representatives of themselves. The process of curating the 3D capture has been repositioned or, in some cases, eliminated. For a general workflow comparison:
[3D animated short for screen]:
Asset Gen, Rig, Animate (Maya) --> Capture (Still Maya) --> FX Compositing-Editing (AE, Nuke)
[3D animated short for TLKG]:
Asset Gen, Rig, Animate (Maya) --> Capture (Unity) --> FX Compositing-Editing (Still Unity)
From this I can gather TWO important notes:
–This reinvents some of the manner in which a creator can visualize their 3D animated narratives. If they are averse to the process of 2D post-effects, they can remain situated in the realm of 3D creation while doing just that: one can stay in 3D without needing to veer back and forth between 3D and 2D pipelines. For smaller teams or single-artist pipelines (e.g. myself), this can prove valuable for team resources and time spent.
–The audience sees AS the creator does. Until now, 3D animators created stories for the 2D canvas; the work they do must pass through a channel that distorts a degree of the original vision and may change it. On a 3D canvas, the vision is not distorted by the screen. The vision appears just as it does in its native 3D format.