The Projected Instrument Augmentation system (PIANO) was developed by pianists Katja Rogers and Amrei Röhlig and their colleagues at the University of Ulm in Germany. Colourful blocks representing notes are projected onto a screen attached to an electric piano. As the blocks stream down the screen, each one meets the correct keyboard key at the exact moment it should be played.
Florian Schaub, who presented the system last month at the UbiComp conference in Zurich, Switzerland, said that users were impressed by how quickly they could play relatively well, which is hardly surprising given how easily we adapt to most screen interfaces these days.
But while there is real potential for PIANO as a self-guided teaching aid, in my view it’s the potential for a really tight feedback loop that makes this most interesting, and potentially more widely applicable.
When a piano teacher corrects a student’s mistake, they will perhaps specify one or two things that need improving. This approach, by contrast, would sense every incorrect note and could provide an immediate visual response, flashing red for instance, conditioning the student towards success more quickly.
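That feedback loop is easy to picture in code. The sketch below is purely illustrative, not the Ulm team’s implementation; it assumes keypresses arrive as MIDI note numbers and reduces the projection to one colour per keypress:

```python
# Hypothetical sketch of a per-note feedback loop (my illustration,
# not PIANO's actual code). Notes are MIDI numbers; middle C = 60.

EXPECTED = [60, 62, 64, 65]  # C4 D4 E4 F4: the notes the falling blocks show

def feedback(expected_note, played_note):
    """Colour the projection should flash for a single keypress."""
    return "green" if played_note == expected_note else "red"

def play_through(expected, played):
    """Score a whole pass: one colour per keypress, in order."""
    return [feedback(e, p) for e, p in zip(expected, played)]

print(play_through(EXPECTED, [60, 61, 64, 65]))
# a wrong second note flashes red immediately, not at the end of the lesson
```

The point of the sketch is the granularity: every keypress gets its own verdict the instant it happens, rather than a summary from the teacher afterwards.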
A team from the University of Tokyo has conceived of several new applications for lasers, some interesting to say the least, others potentially groundbreaking. These applications arise from their Smart Laser Scanner (markerless laser tracking) technology:
Essentially, it is a smart rangefinder scanner that, instead of continuously scanning over the full field of view, restricts its scanning area to a very narrow window precisely the size of the target. (from the Ishikawa Komura Laboratory)
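The narrow-window idea can be shown in a few lines. This is my own reconstruction under stated assumptions, not the laboratory’s algorithm: a one-dimensional “scene” stands in for the laser’s field of view, the brightest cell is the target, and the tracker scans only a small window around the target’s last known position:

```python
# Illustrative sketch of narrow-window tracking (my reconstruction,
# not the Smart Laser Scanner's actual algorithm).

def track(scene, last_pos, half_window=2):
    """Scan only a small window around the last known position,
    instead of the full field of view, and return the new position."""
    lo = max(0, last_pos - half_window)
    hi = min(len(scene), last_pos + half_window + 1)
    return max(range(lo, hi), key=lambda i: scene[i])

# The target drifts from cell 5 to cell 7 across three "frames".
frames = [
    [0, 0, 0, 0, 0, 9, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 9, 0, 0],
]
pos = 5
for frame in frames:
    pos = track(frame, pos)
print(pos)  # 7: the tracker followed the target using only narrow scans
```

Because each frame examines a handful of cells rather than the whole scene, the loop can run fast enough to keep the window locked onto a moving target, which is what removes the need for markers.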
What this means for us is that we could soon have a low-cost, low-apparatus way to interface with a wearable computer, with multitouch input and without the need for any markers.
The project website features videos for all of their experiments, including:
Simple 3D tracking
Multiple point tracking
I urge you to read more on the project website, but before you go, I’d like to feature one of the coolest applications I found for the Smart Laser Scanner. It’s called Sticky Light, and it’s an experiment in light interaction:
The question I want to ask is, wouldn’t this be the ultimate executive toy if productized in time for Christmas? I know I want one.
Augmented Reality (AR) is a field of computer research that deals with the combination of real-world and computer-generated data. AR is just one form of Mixed Reality (MR) technology, in which digital and real elements are mixed to create meaning. In essence, AR is any live image overlaid with information that augments the image’s meaning.
Digital graphics are commonly put to work in the entertainment industry, and ‘mixing realities’ is a common motif for many of today’s media forms. There are varying degrees to which The Real and The Virtual can be combined. This is illustrated in my Mixed Reality Scale:
This is a simplified version of Milgram and Kishino’s (1994) Virtuality Continuum; simplified, because their research is purely scientific, without an explicit interest in media theory or effects, and therefore not wholly applicable to my analysis. At the far left of my Mixed Reality Scale lies The Real, or physical, everyday experiential reality. For the longest time we lived solely in this realm. Then, technological innovation gave rise to the cinema, and then television. These media are located one step removed from The Real, a step closer to The Virtual, and can be considered a window on another world. This world is visually similar to our own, a fact exploited by its author to narrate believable, somewhat immersive stories. If willing, the viewer is somewhat ‘removed’ from their grounding here in physical reality, allowing them to participate in the construction of a sculpted, yet static existence. The viewer can only observe this contained reality, and cannot interact with it, a function of the viewing apparatus.
Later advancements in screen media technologies allowed graphical information to be superimposed over moving images. These were the beginnings of AR, in which most of what is seen is real, with some digital elements supplementing the image. Indeed, this simple form of AR is still in wide use today, notably where extra information is required to make sense of a subject. In certain televised sports, for example, a clock and a scoreboard overlay the live football match, providing additional information that is useful to the viewer. Television viewers are already accustomed to using information displayed in this way:
More recently, computing and graphical power gave designers the tools to build wholly virtual environments. The Virtual is a graphical representation of raw data, and the furthest removed from physical reality on my Mixed Reality Scale. Here lies the domain of Virtual Reality (VR), a technology that uses no real elements except for the user’s human senses. The user is immersed in a seemingly separate reality, where visual, acoustic and sometimes haptic feedback serve to transpose them into this artificial, yet highly immersive space. Notice the shift from viewer to user: this is a function of the interactivity offered by digital space. VR was the forerunner to current AR research, and remains an active realm of academic study.
Computer graphics also enhanced the possibilities offered by television and cinema, forging a new point on the Mixed Reality Scale. I refer to the Augmented Virtuality (AV) approach, which uses mainly digital graphics with some real elements superimposed. For example, a newsreader reporting from a virtual studio environment is one common application. I position AV one step closer towards The Virtual to reflect the ratio of real to virtual elements:
There is an expansive realm between AV and VR technologies: media which offer the user wholly virtual constructions that hold potential for immersion and interactivity. I refer to the media of video games and desktop computers. Here the user manipulates visually depicted information for a purpose. These media are diametrically opposed to their counterpart on my scale, the cinema and television, because this time they are windows into a virtual world, actively encouraging (rather than denying) user interactivity to perform their function. Though operating in virtuality, the user remains grounded in The Real due to apparatus constraints.
Now, further technological advancements allow the fusion of real and virtual elements in ways not previously possible. Having traversed our way from The Real to The Virtual, we have now begun to make our way back. We are making a return to Augmented Reality, taking with us the knowledge to manipulate wholly virtual 3D objects and the computing power to integrate digital information into live, real world imagery. AR is deservedly close to The Real on my scale, because it requires physicality to function. This exciting new medium has the potential to change the way we perceive our world, forging a closer integration between our two binary worlds. It is this potential as an exciting and entirely new medium that has driven me to carry out the following work.
To begin, I address the science behind AR and its current applications. Next, I exploit an industry connection to inform a discussion of AR’s development as an entertainment medium. Then, I construct a methodology for analysis from previous academic thought on emergent technologies, whilst addressing the problems of doing so. I use this methodology to locate AR in its wider technologic, academic, social and economic context. This discussion opens ground for a deeper analysis of AR’s potential socio-cultural impact, which makes use of theories of media and communication and spatial enquiry. I conclude with a final critique that holds implications for the further analysis of Mixed Reality technology.