Design Excellence in Tron Legacy

I watched Tron Legacy this weekend.
Awesome movie, if only for the following reasons:

  • The music
  • The aesthetics
  • Jeff Bridges
  • That’s it

Despite not having the greatest storyline or script, the film has still had quite a profound effect on me thanks to its frankly mind-blowing visual identity.

As with most of the films I watch these days, I like to do a quick post-view scan of the web to consolidate my thinking around certain plot points and characters, or to brush up on production trivia.

This time, I hit up IMDB’s forums to read others’ views on Tron’s iconography, delved into some pretty weird fan pages, and researched the history of the crew – but in all of my post-view readings, I think I’ve found the major contributing factor towards why this film looks so damn good.

The reason, I believe, is Joshua T. Nimoy, a software artist who worked on the film's procedural artwork and user interfaces, which add a thick and gooey layer of believability both to Encom's software and to Tron's 3D environment.

He has this to say:

I made software art before there was Flash or Processing. Things have not grown easier or harder, they are simply different. I am not just a user of Adobe and 3D programs. I work in the source ideas from which those programs originate. If I need a new algorithm, I learn it from theories, ask one of my peers, hunt for reusable code, or invent my own way. My most contagious meme is BallDroppings. My most visible work is commercial. My artiest works have shown in serious galleries and museums.

So, here we have a guy who is just brilliant at design, working on some of the world’s coolest and most progressive brands, plus a shitload of other stuff, and who knows how to hack to achieve a great effect. Pretty much the perfect dude to lead the march at Digital Domain when they were asked to work on Tron Legacy.

Following clearance from Disney, Josh has published a fascinating piece on his site about his work on the film, which I’ve pulled some interesting thoughts from:

I spent a half year writing software art to generate special effects for Tron Legacy […] in addition to visual effects, I was asked to record myself using a unix terminal doing technologically feasible things. I took extra care in babysitting the elements through to final composite to ensure that the content would not be artistically altered beyond that feasibility.

I take representing digital culture in film very seriously in lieu of having grown up in a world of very badly researched user interface greeble. I cringed during the part in Hackers (1995) when a screen saver with extruded “equations” is used to signify that the hacker has reached some sort of neural flow or ambiguous destination. I cringed for Swordfish and Jurassic Park as well. I cheered when Trinity in The Matrix used nmap and ssh (and so did you). Then I cringed again when I saw that inevitably, Hollywood had decided that nmap was the thing to use for all its hacker scenes (see Bourne Ultimatum, Die Hard 4, Girl with Dragon Tattoo, The Listening, 13: Game of Death, Battle Royale, Broken Saints, and on and on).

I like this guy even more now – who hasn’t cringed at stuff like this?

In Tron, the hacker was not supposed to be snooping around on a network; he was supposed to kill a process. So we went with posix kill and also had him pipe ps into grep. I also ended up using emacs eshell to make the terminal more l33t. The team was delighted to see my emacs performance — splitting the editor into nested panes and running different modes. I was tickled that I got emacs into a block buster movie. I actually do use emacs irl, and although I do not subscribe to alt.religion.emacs, I think that’s all incredibly relevant to the world of Tron.

Now, I don't understand much of that last paragraph, but it's cool to consider that there are people out there applying proper nerdery to their work, detail that 99.9% of people would totally miss. It just makes things better, doesn't it?!
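Out of curiosity, the terminal moves Nimoy describes can be sketched in a few lines of shell. This is my own illustrative reconstruction, not the actual commands from the film; the background `sleep` here stands in for whatever process Flynn was meant to kill:

```shell
# Start a long-running process to play the victim
sleep 300 &
pid=$!

# Pipe ps into grep to locate it, as in the film.
# The [s] bracket trick stops grep from matching its own entry.
ps -ef | grep '[s]leep 300'

# POSIX kill sends SIGTERM by default, asking the process to exit
kill "$pid"
```

Nothing exotic, which is rather the point: the scene works because the commands are ones a real admin would actually type.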

Tron Legacy

Oh hell yeah. The trailer for the new Tron film is out and looking great:

The premise is that Kevin Flynn is still trapped in Tron 25 years after the events of the first film, and remains captive in a world that has grown more advanced and more dangerous. His tech-savvy hacker son has tracked him down at last and enters Tron to release him. Gladiatorial-style battles and cunning technological traps will stand in his way, blah blah blah.

Let's be straight here: the story is not what matters. It's all about how visually gorgeous this cyber-universe will look, how cool the high-concept designs will be, and how awesome it will be to see this in full 3D at the IMAX. Its success will inevitably give rise to toys and merchandise that will allow a new horde of young fans to feel part of the Tron universe. Let's also not forget the great opportunities in gaming, both console-based and MMORPG, and all that entails.

I’m most excited about the impact such a highly stylised film will have on other cultural forms, such as interior design and fashion, and predict that Tron’s bold iconography will pervade creative communities for some time after its release.

But I'm especially looking forward to this film because of the Daft Punk soundtrack. They've announced 24 new tracks, which I'm sure will be a perfect fit for the new Tron universe, with a world tour to follow. Roll on, 2010…

Sam Flynn (Garrett Hedlund), the tech-savvy 27-year-old son of Kevin Flynn (Jeff Bridges), looks into his father’s disappearance and finds himself pulled into the same world of fierce programs and gladiatorial games where his father has been living for 25 years. Along with Kevin’s loyal confidant (Olivia Wilde), father and son embark on a life-and-death journey across a visually-stunning cyber universe that has become far more advanced and exceedingly dangerous. [7]

Virtual Reality

AR is considered by some to be a logical progression of VR technologies (Liarokapis, 2006; Botella, 2005; Reitmayr & Schmalstieg, 2001): a more appropriate way of interacting with information in real time, made possible only by recent innovations. One could therefore argue that a full historical appraisal would cover VR's own history plus the last few years of AR development. Though this approach would certainly work for much of Wearable AR, which uses a similar device array, the same cannot be said for Mobile AR, which by its nature offers a set of properties from a wholly different paradigm: portability, connectivity and many years of mobile development independent of AR research together enhance Mobile AR's formal capabilities. Despite the obvious mass-market potential of this technology, most AR research continues to explore the Wearable AR paradigm. Where Mobile AR is cousin to VR, Wearable AR is sister. Most published works favour the Wearable AR approach, so if my assessment of Mobile AR is to be fair, I cannot ignore its grounding in VR research.

As mentioned above, VR is the realm at the far right of my Mixed Reality Scale. To explore a virtual reality, users must wear a head-mounted screen array that cloaks their vision with a wholly virtual world. These head-mounted displays (HMDs) serve to transpose the user into this virtual space whilst cutting them off from their physical environment:

A Virtual Reality HMD, two LCD screens occupy the wearer's field of vision

The HMD must be connected to a wearable computer, a Ghostbusters-style device attached to the wearer's back or waist that houses a CPU and graphics renderer. To interact with virtual objects, users must hold a joypad. Aside from being a lot to carry, this equipment restricts the senses and is often expensive:

A Wearable Computer array, this particular array uses a CPU, GPS, HMD, graphics renderer, and human-interface-device

It is useful at this point to reference some thinkers in VR research, with a view to better understanding the Virtual realm and its implications for Mobile AR's Mixed Reality approach. Writing on the different selves offered by various media, Lonsway (2002) states that:

“With the special case of the immersive VR experience, the user is (in actual fact) located in physical space within the apparatus of the technology. The computer-mediated environment suggests (in effect) a trans-location outside of this domain, but only through the construction of a subject centred on the self (I), controlling an abstract position in a graphic database of spatial coordinates. The individual, of which this newly positioned subject is but one component, is participant in a virtuality: a spatio-temporal moment of immersion, virtualised travel, physical fixity, and perhaps, depending on the technologies employed, electro-magnetic frequency exposure, lag-induced nausea, etc.”

Lonsway (2002: 65)

Whatever the technology's flaws, media representations of VR throughout the eighties and early nineties, such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and Johnny Mnemonic (Longo, 1995), generated plenty of audience interest and consequent industrial investment. VR hardware was produced in bulk for much of the early nineties, but it failed to become a mainstream technology, largely due to a lack of capital investment in VR content, itself a function of stagnant demand for expensive VR hardware (Mike Dicks of Bomb Productions: personal communication). The market for VR content collapsed, but the field remains an active contributor in certain key areas, with notable success as a commonplace training aid for military pilots (Baumann, date unknown) and as an academic tool for the study of player immersion and virtual identity (Lonsway, 2002).

Most AR development uses the same array of devices as VR: a wearable computer, an input device and an HMD. The HMD differs slightly in these cases: it is transparent and contains an internal half-silvered mirror, which combines images from an LCD display with the user's view of the world:

An AR HMD, this model has a half-mirrored screen at 45 degrees. Above are two LCDs that reflect into the wearer's eyes whilst they can see what lies in front of them


What Wearable AR looks like; notice the very bright figure ahead. If he were darker he would not be visible

There are still many limitations on the experience, however: first, the digital graphics must be very bright in order to stand out against natural light; second, the user must carry a cumbersome wearable computer array; third, this array is priced too high to reach mainstream use. Much of the hardware used in Wearable AR research is bought wholesale from liquidated VR companies (Dave Mee of Gameware: personal communication), a fact representative of the backward-looking thinking of much AR research.

In their work New Media and the Permanent Crisis of Aura Bolter et al. (2006) apply Benjamin’s work on the Aura to Mixed Reality technologies, and attempt to forge a link between VR and the Internet. This passage offers a perspective on the virtuality of the desktop computer and the World Wide Web:

“What we might call the paradigm of mixed reality is now competing successfully with what we might call ‘pure virtuality’ – the earlier paradigm that dominated interface design for decades.
In purely virtual applications, the computer defines the entire informational or perceptual environment for the user … The goal of VR is to immerse the user in a world of computer generated images and (often) computer-controlled sound. Although practical applications for VR are relatively limited, this technology still represents the next (and final?) logical step in the quest for pure virtuality. If VR were perfected and could replace the desktop GUI as the interface to an expanded World Wide Web, the result would be cyberspace.”

Bolter et al. (2006: 22)

This account offers a new platform for discussion, useful for analysing the Internet as a component of Mobile AR: the idea that the Internet could exploit the spatial capabilities of a virtual reality to enhance its message. Bolter posits that this could be the logical end of a supposed "quest for pure virtuality". I would argue that the reason VR did not succeed is the same reason there is no "quest" to join: VR technologies lack the real-world applicability easily found in reality-grounded media such as the Internet or the mobile telephone.