Yo Gabba Gabba

If you’ve not yet had the pleasure, I’d urge you to get to grips with Yo Gabba Gabba, a sensationally well-conceived kids’ show from the US that’s been running for about four years:

Yo Gabba Gabba is the creation of W!ldbrain Entertainment, who have to be one of the coolest multimedia companies I’ve encountered. Based in LA and NYC, these guys make TV shows, films, adverts and merchandise, and have partnerships in place with Nickelodeon and Disney.

They also own Kidrobot, which has a strong line in collectible vinyl toys, clothes and art. So it won’t come as a surprise that Yo Gabba Gabba is centred around five toys:

  • Muno (he’s tall and friendly)
  • Foofa (she’s pink and happy)
  • Brobee (the little green one)
  • Toodee (she likes to have fun)
  • Plex (a magic robot)

The characters are brought to life at the beginning of each show by their owner DJ Lance Rock:

As well as being an educational and visually stimulating show, I think it’s a great example of how content powers commerce, with W!ldbrain capitalising on the strength of their creation through “apparel, accessories, books, electronics, games, home décor and toys, available at retail through top licensees”, including their Kidrobot stores (clever, huh?), “Spin Master, Ltd., Simon & Schuster, Nickelodeon Home Entertainment and Paramount Home Entertainment, Nickelodeon/Sony BMG and others” (thanks Wikipedia).

This kind of integrated commercial thinking takes true advantage of today’s twisted media landscape, subverting pre-existing norms of content creation and ownership.

For example, a recurring musical segment in the show features The Aquabats, a superhero-themed rock band fronted by the show’s creator. The Aquabats have now been awarded their own W!ldbrain-produced TV show – so it’s evident these guys are all about through-the-line thinking.

What a modern company – W!ldbrain, I salute you.

Applying McLuhan

I begin with McLuhan, whose Laws of Media, or Tetrad, offers further insight into Mobile AR. It sustains and develops the arguments made in my assessment of the interlinking technologies that meet in Mobile AR, whilst also providing a basis from which to address some of his deeper thoughts.

The tetrad can be considered an observation lens to turn upon one’s subject technology. It assumes four processes take place during each iteration of a given medium. These processes are revealed as answers to the following questions, taken from Levinson (1999):

“What aspect of society or human life does it enhance or amplify? What aspect, in favour or high prominence before the arrival of the medium in question, does it eclipse or obsolesce? What does the medium retrieve or pull back into centre stage from the shadows of obsolescence? And what does the medium reverse or flip into when it has run its course or been developed to its fullest potential?”

(Digital McLuhan 1999: 189).

To ask each of these questions, it is useful to transfigure our concept of Mobile AR into a more workable and fluid term: the Magic Lens, a common expression in mixed reality research. Making this change allows the exploration of the more theoretical aspects of the technology free of its machinic nature, whilst integrating a necessary element of metaphor that will serve to illustrate my points.

To begin, what does the Magic Lens amplify? AR requires the recognition of a pre-programmed real-world image in order to augment the environment correctly; it is important to note that it is the user who locates this target. It could be said that the Magic Lens magnifies rather than amplifies an aspect of the user’s environment, because, as with other optical tools, the user must point the device towards a target and look through. The difference with the Magic Lens is that one aspect of its target, one potential meaning, is privileged over all others. An arbitrary black and white marker holds the potential to mean many things to many people, but viewed through an amplifying Magic Lens it means only what the program recognises and consequently superimposes.
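
The recognise-then-superimpose step can be sketched in miniature. This is purely my own illustration, not the workings of any real AR library: a black-and-white marker, reduced to a binary grid, is matched in any of its four rotations against a dictionary of known markers, and only the single piece of virtual content registered for it is returned.

```python
# Toy sketch of a Magic Lens privileging one meaning: a binary marker grid
# is matched against a dictionary of known markers (in any rotation) and
# the one registered virtual overlay is returned. All markers and content
# here are invented for illustration.

def rotations(grid):
    """Yield the four 90-degree rotations of a square binary grid."""
    g = [list(row) for row in grid]
    for _ in range(4):
        yield g
        g = [list(row) for row in zip(*g[::-1])]  # rotate clockwise

# Hypothetical marker dictionary: grid (as tuple of tuples) -> overlay.
MARKER_CONTENT = {
    ((1, 0), (0, 1)): "3D model: dancing robot",
    ((1, 1), (0, 1)): "video overlay: band performance",
}

def recognise(grid):
    """Return the overlay for a marker seen at any rotation, else None."""
    for rot in rotations(grid):
        key = tuple(tuple(row) for row in rot)
        if key in MARKER_CONTENT:
            return MARKER_CONTENT[key]
    return None  # an unrecognised marker means nothing to the Lens
```

A grid such as `[[0, 1], [1, 0]]` is matched after one rotation and yields its single registered overlay; any grid outside the dictionary yields nothing, which is precisely the privileging of one meaning over all others described above.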

This superimposition necessarily obscures what lies beneath. McLuhan might recognise this as an example of obsolescence. The Magic Lens privileges virtual over real imagery, and the act of augmentation leaves physical space somewhat redundant: augmenting one’s space makes it more virtual than real. The AR target undergoes amplification, becoming the necessary foundation of the augmented reality. What is obsolesced by the Magic Lens, then, is not the target which it obscures, but everything except the target.

I am reminded of McLuhan’s Extensions of Man (1962: 13), which offers the view that in extending ourselves through our tools, we auto-amputate the aspect we seek to extend. There is a striking parallel to be drawn with amplification and obsolescence, which becomes clear when we consider that in amplifying an aspect of physical reality through a tool, we are extending sight, sound and voice through the Magic Lens to communicate in wholly new ways using The Virtual as a conduit. This act obsolesces physical reality, the nullification effectively auto-amputating the user from their footing in The Real. So where have they ‘travelled’? The Magic Lens is a window into another reality, a mixed reality where real and virtual share space. In this age of Mixed Realities, the tetrad can reveal more than previously intended: new dimensions of human interaction.

The third question in the tetrad asks what the Magic Lens retrieves that was once lost. So much new ground is gained by this technology that it would be difficult to make a claim. However, I would not believe in Mobile AR’s prospects if I did not recognise the exhumed benefits it offers, as well as the novel ones. The Magic Lens retrieves the everyday tactility and physicality of information engagement, which was obsolesced by other screen media such as television, the desktop PC and the games console. The Magic Lens encourages users to interact in physicality, not virtuality. The act of actually walking somewhere to find something out, or of going to see someone in order to play with them, is retrieved.

Moreover, we retrieve the sense of control over our media input that was lost to these same technologies. Information is freed into the physical world, transfiguring its meaning and offering a greater degree of manipulative power. Mixed Reality can be seen only through the one-way glass of the Magic Lens; The Virtual cannot spill through unless we allow it to. We have seen how certain mainstream media can fold themselves wholly into reality and become an annoyance (think of Internet pop-ups and mobile ringtones); through the Magic Lens we retrieve the personal agency to navigate our own experience. I earlier noted that “the closer we can bring artefacts from The Virtual to The Real, the more applicable these can be in our everyday lives”; a position that resonates with my growing argument that engaging with digital information through the Magic Lens is an appropriate way to integrate, and indeed exploit, The Virtual as a platform for the provision of communication, leisure and information applications.

It is hard to predict what the Magic Lens might flip into, since at this point AR is a wave that has not yet crested. I might suggest that since the medium’s success is bound to its mobile device form, its trajectory is likely entwined with that of the mobile device itself. So the Magic Lens flips into whatever the mobile multimedia computer flips into. Another possibility is that the Magic Lens inspires such commercial success and industrial investment that a surge in demand for wearable computers shifts AR into a new form. In that case the user could not dip in and out of Mixed Reality as they saw fit; they would be immersed in it whenever they wore their visor. This has connotations all its own, but I will not expound my views here, given how much cultural change must first occur to effect such a drastic shift in consumer fashions and demands. A third way for the Magic Lens to ‘flip’ might be its wider application in other media. Developments in digital ink technologies, printable folding screens, ‘cloud’ computing, interactive projector displays, multi-input touch screen devices, automotive glassware and electronic product packaging could all take advantage of the AR treatment. We could end up living far more closely with The Virtual than previously possible.

In their work The Global Village, McLuhan and Powers (1989) state that:

“The tetrad performs the function of myth in that it compresses past, present, and future into one through the power of simultaneity. The tetrad illuminates the borderline between acoustic and visual space as an arena of the spiralling repetition and replay, both of input and feedback, interlace and interface in the area of imploded circle of rebirth and metamorphosis”

(The Global Village 1989: 9)

I would be interested to hear their view on the unique “simultaneity” offered by the Magic Lens, or indeed the “metamorphosis” it would inspire, but I would argue that, when applied from a Mixed Reality inter-media perspective, their outlook seems constrained by the stringent and self-involved rules of their own epistemology. Though he would have been loath to admit it, Baudrillard took McLuhan’s work as the basis of his own (Genosko, 1999; Kellner, date unknown) and made it relevant to the postmodern era. His work is cited by many academics seeking to forge a relationship to Virtual Reality in their research…

Mobile Telephone

The Internet and the mobile phone are two mighty forces that have bent contemporary culture and remade it in their form. They offer immediacy, connectivity and social interaction of a wholly different kind. These technologies have brought profound changes to the way academia considers technoscience and digital communication. Their relationship was of interest to academics in the early 1990s, who declared that their inevitable fusion would mark the beginning of the age of Ubiquitous Computing: “the shift away from computing which centered on desktop machines towards smaller multiple devices distributed throughout the space” (Weiser, 1991 in Manovich, 2006). In truth, it was the microprocessor and Moore’s Law (“the number of transistors that can be fit onto a square inch of silicon doubles every 12 months” (Stokes, 2003)) that led to many of the technologies that fall under this term: laptops, PDAs, digital cameras, flash memory sticks and MP3 players. Only recently have we seen mobile telephony take on the true properties of the Internet.
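
The compounding that Stokes describes is worth a back-of-envelope sketch. The figures here are purely illustrative (I use the Intel 4004’s 2,300 transistors as a convenient starting point), and the doubling period is the 12 months of the quotation above:

```python
# Back-of-envelope projection of exponential doubling a la Moore's Law.
# Starting figure (Intel 4004: 2,300 transistors) is for illustration only.

def transistors(initial, years, doubling_period=1.0):
    """Project a transistor count forward under exponential doubling."""
    return initial * 2 ** (years / doubling_period)

# Ten annual doublings turn thousands into millions:
print(round(transistors(2300, 10)))  # 2355200
```

Ten doublings is a factor of 1,024, which is why a decade of this regime is enough to carry a pocket device from calculator-grade silicon to the multimedia computers discussed below.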

The HARVEE project is partially backed by Nokia Corp., which recognises its potential as a Mobile 2.0 technology: user-generated content for mobile telephony that exploits web connectivity. Mobile 2.0 is an emerging field thematically aligned with the better-established Web 2.0. Nokia already refer to their higher-end devices as multimedia computers rather than as mobile phones. Their next generation of Smartphones will make heavy use of camera-handling systems, a move predicated on the importance of user-generated content as a means to promote social interaction. This strategic move is likely to realign Nokia Corp.’s position in the mobile telephony and entertainment markets.

Last year, more camera phones were sold than digital cameras (Future Image, 2006). Nokia have a 12-megapixel camera phone ready for release in 2009, packaged with a processing unit equal in power to a Sony PSP (Nokia Finland: non-public product specification document). MP3 and movie players are now standard on many handsets, with content stored on plug-in memory cards and viewed on increasingly high-resolution colour screens. There is a growing mobile gaming market, the fastest-growing sector of the games industry (Entertainment & Leisure Software Publishers Association (ELSPA) sales chart). The modern mobile phone receives its information over wide-band GPRS networks, allowing greater network coverage and faster data transfer. Phone calls remain the primary function, but users are exploiting the multimedia capabilities of their devices in ways not previously considered. It is these factors, technological, economic and infrastructural, that provide the perfect arena for Mobile AR’s entry into play.

Mobile Internet is the natural convergence of mobile telephony and the World Wide Web, and is already a common feature of new mobile devices. Mobile Internet, I would argue, is another path leading to Mobile AR, driven by mobile users demanding more from their handsets. Mobile 2.0 is the logical development of this technology, placing the power of location-based, user-generated content into a new real-world context. Google Maps Mobile is one such application; it uses network triangulation and Google’s own mapping technologies to offer information, directions, restaurant reviews or even satellite images of your current location, anywhere in the world. Mobile AR could achieve this same omniscience (omnipresence?) given the recent precedent of massively multi-user collaborative projects such as Wikipedia, Flickr and Google Maps itself. These are essentially commercially built infrastructures designed to be filled with everybody’s tags, comments and other content. Mobile AR could attract the same devotion if it offered such an infrastructure and real-world appeal.
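
What such an infrastructure amounts to can be sketched in a few lines. Everything here is invented for illustration (the coordinates, the comments, the function names), and a real service would use spatial indexing rather than a linear scan, but the principle is the same: user-generated tags pinned to latitude/longitude pairs, queried by proximity.

```python
# Toy sketch of location-based user-generated content: tags pinned to
# coordinates and retrieved by proximity. Data and names are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

tags = [  # (latitude, longitude, user comment)
    (51.5074, -0.1278, "great coffee round the corner"),
    (51.5007, -0.1246, "Big Ben photo spot"),
    (48.8566, 2.3522, "best crepes in Paris"),
]

def nearby(lat, lon, radius_km):
    """Return comments tagged within radius_km of the given position."""
    return [c for (tlat, tlon, c) in tags
            if haversine_km(lat, lon, tlat, tlon) <= radius_km]
```

A user standing in central London who calls `nearby` with a 2 km radius sees only the two London tags; the Paris tag stays out of view. The Magic Lens simply renders such a query onto the camera view instead of onto a map.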

There is a growing emphasis on Ubiquitous Computing devices in our time-precious world, signified by the increased sales of Smartphones and WiFi-enabled laptops. Perhaps not surprisingly, Mobile Internet use has increased as users’ devices become capable of greater connectivity. Indeed, the mobile connected device is becoming the ubiquitous medium of modernity, as yet more media converge in it. It is the mobile platform’s suitability to perform certain tasks that Mobile AR can take advantage of, locating itself in the niche currently occupied by Mobile Internet. Returning to my Mixed Reality Scale, Mobile AR serves the user better than Mobile Internet currently can: providing just enough reality to exploit virtuality, Mobile AR keeps the user necessarily grounded in their physical environment as they manipulate digital elements useful to their daily lives.

Introducing… The Pico-Projector

Problem:
Mobile multimedia capabilities are increasing in uptake and potential, but the small form factor we so desire in our handsets is beginning to inhibit a rich user experience.
The typical mobile screen size is 320×240 pixels.

Solution:
If your mobile has a pico-projector, it will be able to emit high-res imagery onto any suitable surface, up to 50″ in width.
This unlocks the full immersive power of your mobile web browser, 3D games engine, DivX movie player or video conferencing.

Market Readiness:
Pico-projectors are already on sale as stand-alone units, though they are yet to be implemented in mobiles, PMPs or laptops.
The first of these hardware mashups will be on sale in the East by the end of this year, but it will likely be another 18 months before they reach Western shores.

Potential:
Aside from the new opportunities for deeper engagement with content and software on the mobile platform, the largest socio-cultural change will occur once people begin to share their mobile experience.
Picture regular consumers using the real world as a medium for virtual interaction.
Location-aware video advertising anyone?