Programmed To Love

Two robots, Vincent & Emily, are connected to each other as if deeply in love: at the heights of romance, every motion, utterance, or external influence is shared in an acutely empathic, highly attuned 'emotional' response.

The creation of German artists Nikolas Schmid-Pfähler and Carolin Liebl, the robots take in sound and motion data – from each other and from spectators – via sensors, which causes them to react – via gears and motors – with certain expressions. Shown in a gallery and open to the interaction of visitors, the project aims to explore the ideal of the human couple by distilling it into a more basic form. Simple lines represent bodies. Reacting to inputs replaces complicated decision-making.
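The artists haven't published their control code, but the sense-and-react loop described above can be sketched in a few lines. Everything here – the weightings, thresholds and gesture names – is invented for illustration:

```python
def react(sound_level, partner_motion):
    """Map sensed inputs (both 0.0-1.0) to a motor 'expression'.
    All weights and thresholds are hypothetical."""
    # Loud surroundings or a fast-moving partner provoke agitation;
    # quiet inputs produce a calm, mirroring sway.
    agitation = 0.7 * sound_level + 0.3 * abs(partner_motion)
    if agitation > 0.8:
        return {"motor_speed": 1.0, "gesture": "recoil"}
    elif agitation > 0.4:
        return {"motor_speed": agitation, "gesture": "mirror"}
    return {"motor_speed": 0.1, "gesture": "sway"}

# One tick of the feedback loop between the two robots: each robot's
# output becomes part of the other's input on the next tick.
vincent = react(sound_level=1.0, partner_motion=0.8)
emily = react(sound_level=0.3, partner_motion=vincent["motor_speed"])
```

The interesting behaviour – the 'dance' – emerges from running that loop continuously, so each robot is always reacting to the other's last reaction.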

Like in any relationship, miscommunication is a factor – so an intimate moment can lead to conflict, and eventual resolution. This gives a certain texture to their ‘dance of love’ that makes it hard not to anthropomorphise, or indeed relate to!

Take a look:

Via Co.Exist.

Screens

[box]This post originally appeared on the FTMF.info planning blog.[/box]

Within the pages of Watchmen, Adrian Veidt – the so-called "smartest man in the world", esteemed business leader and founding member of the Crimebusters – is shown at a wall of televisions, each tuned to a different channel. He uses this clatter of imagery, sound and motion to make sense of the current geopolitical and social climate, and to act upon it:

Watchmen 10 - 08

Reads a bit like Social Media Monitoring, doesn’t it? But Adrian Veidt, AKA Ozymandias, was multi-screening before it was even a thing. Nowadays, we do it by default, up to 60% of the time, and in the age of 4.6 connected devices per household it just comes naturally.

Multi-screening can be simultaneous (the same journey spread across devices at the same time, as in the above case), sequential (the same journey continued from one device to another), or separate (different journeys across devices simultaneously) – but it's an emergent behaviour that needs much further inquiry. There are few real thought leaders, except for SecondSync perhaps, or Microsoft, who so succinctly define the terms I've used here.

One other thought leader is Kevin Kelly, co-founder of Wired, whose view is that as screens proliferate further into each aspect of our lives, their role becomes not just to display but also to help filter information – we’re literally ‘screening out’ the stuff we don’t want to see.

Watch his talk on ‘screening’ and five other ‘Verbs for the New Web’ below – it’s great:

And finally, screens can also be mirrors, lenses or even windows. Clever, aren’t they?!

Matthias Müller’s Particle Art

There’s this guy called Matthias Müller, and he makes beautiful abstractions out of virtual dust on his supercomputer. He’s some kind of motion-art superhero, probably sent to us from the exploding Planet 3DS Max by his scientist parents.

In this post I’ve picked out a few examples of his work, because as well as being simply gorgeous viewing material, they’re great examples of what’s possible with a few gigs of RAM, a graphics card and some imagination.
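At its core, every one of these pieces rests on the same idea: a huge cloud of particles, each with a position and velocity, nudged by forces every frame. Tools like 3ds Max scale this to millions of particles on the GPU; the toy sketch below (my own minimal version, not Müller's code) shows the basic loop:

```python
def step(particles, dt=0.1, gravity=-9.8):
    """Advance every particle one time step (simple Euler integration)."""
    for p in particles:
        p["vy"] += gravity * dt   # force changes velocity...
        p["x"] += p["vx"] * dt    # ...and velocity changes position
        p["y"] += p["vy"] * dt
    return particles

# A burst of particles fired upward and outward from the origin:
burst = [{"x": 0.0, "y": 0.0, "vx": (i - 5) * 0.5, "vy": 10.0}
         for i in range(11)]

# Run the simulation for two seconds of virtual time:
for _ in range(20):
    step(burst)
```

Swap gravity for turbulence, vortex or attraction forces, multiply the particle count by a few million, and render each point with glow and motion blur – that's broadly the recipe behind the clips below.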

Probably my favourite due to its relative simplicity, this tech demo plays with texture in surprising ways:

This next one is so epic! Like an underwater fireworks show of electric choreographed jellyfish, or something…

Watch as millions of particles merge and blend with infinite complexity in this piece of seemingly generative fluidity:

This final clip is almost a love story. Watch as two swirling masses collide, explode and dance in time with the music:

An undoubtedly talented guy, Matthias has done commercial work for Honda and Vodafone (as featured last year). His YouTube channel is certainly worth a look, as are his lovely image renders on CGPortfolio.

I can barely get the most out of MSPaint, however…

Cinemetrics: Interactive Movie Infographics

Cinemetrics is the fascinating result of Frederic Brodbeck's bachelor graduation project at The Royal Academy of Art in the Netherlands.

cinemetrics quantum of solace

As a graphic designer, Brodbeck is drawn to particular style details, but as a generative coder he's interested in exploring the role for graphic design in analysing these same details.

He picked the medium of film as his 'data-set' and came up with something genuinely novel: rather than analysing the metadata around a film (e.g. from IMDB), he's using the movies themselves.

The project seeks to 'fingerprint' films (a bit like the recent moviebarcode site) and turn them into interactive models. The models can be manipulated to let users identify differences or trends in the graphics via a sexy-looking interface, all of which he's now open-sourced on GitHub.

Here’s a demo:

Brodbeck defines the project as "an experiment to find out if the data that is inherent in the movie can be used to make something visible that otherwise would remain unnoticed." It's a really interesting area for academic inquiry, one for which he set out the following goals:

Measuring and visualizing movie data to reveal the characteristics of movies and to create some sort of unique “fingerprint” for them.

Extracting and analyzing information – such as the editing structure, use of colors, speech or motion – and transforming it into graphic representations, so that movies can be seen as a whole and easily be interpreted or compared.

Working experimentally and presenting the work both in print and digital media.

A side effect is that the system he's built is great at comparing films: spotting differences between originals and remakes, within similar genres, across a string of sequels, or among similar filmmaking styles and certain directors.
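The color-fingerprint idea is surprisingly simple at heart: reduce each slice of the film to its average color, producing a barcode-like summary. Brodbeck's real implementation (on GitHub) decodes actual video frames; in this stand-in sketch the "frames" are plain RGB tuples:

```python
def fingerprint(frames, slices=10):
    """Average the frame colors within each of `slices` equal chunks,
    yielding one color bar per chunk."""
    chunk = max(1, len(frames) // slices)
    bars = []
    for i in range(0, len(frames), chunk):
        group = frames[i:i + chunk]
        # Mean of each RGB channel across the chunk's frames.
        bars.append(tuple(
            sum(frame[c] for frame in group) // len(group)
            for c in range(3)
        ))
    return bars

# A fake film: 100 frames fading from deep blue toward bright orange.
film = [(int(2.55 * t), int(1.2 * t), 255 - int(2.55 * t))
        for t in range(100)]
bars = fingerprint(film)
```

Laid side by side, those bars are essentially a moviebarcode; add cut detection and motion analysis per chunk and you start to approach the richer fingerprints Cinemetrics produces.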

What’s great is the system is actually useful. It’s an infographic engine for film-buffs, and we know how popular those are, don’t we?!

Frederic, I look forward to the sequel!