Steven Nicholson - 11 April 2017
Graham himself seems like a fairly uncomplicated guy. Actually, Patricia Piccinini intended him to be something of a rural Victorian ‘everyman’ - hence his relaxed pose with his arm across the bench seat, and the faint suggestion of a singlet tan and sun spots if you look closely. The brief from Clemenger BBDO Melbourne and the TAC to Patricia was to create something provocative that generated discussion, but with regional drivers in mind.
Of course, as with many things that look effortless, the project to bring him to life was anything but uncomplicated. The crew at Clemenger and the TAC client should get some kind of award for having the courage to base a campaign around commissioning a piece of modern art.
Actually, if they haven’t already, I’m sure they will.
Graham is a very unusual looking fellow, and anyone could tell he would prove a compelling, even provocative piece of art. Graham looks the way he does because he is designed to survive a car crash.
Current human physiology, of course, was set well before the advent of motor vehicles.
And that’s the underpinning concept of the campaign. When you look at Graham, you start thinking about his bizarre differences and you’re forced into thinking about your own frailty by comparison.
That alone might be enough to create the behaviour change the TAC were looking for, but Clemenger CDs Stephen de Wolf and Evan Roberts had grander plans. For them, Graham was always envisaged as something that went deeper than the front page of Reddit: a teaching tool that made you think about your mortality as a human on the roads in a more profound way.
So Clemenger/TAC approached Patricia to give birth to Graham, and AIRBAG to autopsy him.
Sonia Von Bibra (Executive Producer at Clemenger BBDO Melbourne) engaged us to manage two parts to the Graham story - the design of Graham’s internal organs, and how we go about communicating those organs to our audience.
THE DESIGN (INTERNAL ORGANS)
As a confessed Piccinini fan, I was very excited about trying to find ways to translate her fantastical ‘externals’ into engaging ‘internals’ that feel connected to the physiology we see. It would prove an interesting challenge.
Patricia Piccinini's most iconic work incorporates very lifelike sculpture and a lot of exposed skin
I should mention at this point that Patricia’s studio was working flat out for the scheduled launch date, and we knew that Graham in his final assembled form wouldn’t be complete until a few weeks before launch. And even then, for those few weeks, he was booked for his photo and video shoots. So we had to work without access to him. I also knew that my access to Ms Piccinini herself would be limited.
I started with the extensive interview material the crew from Clemenger had shot - not just with Patricia - but also the Trauma Surgeon (Dr. Christian Kenfield) and the Road Safety Engineer (Dr. David Logan) and based my initial thoughts around that, and some conceptual renders of Graham.
There was already a wealth of scientific evidence, data and process - we needed to extrapolate from that, through the lens of Patricia’s creative approach, to find the evolutionary path and design for these organs.
I’ve actually had reason to study quite a bit of gross anatomy in my career. As an animation director, I’ve found an understanding of human biomechanics is critical to creating believable movement in an animated character. As a technologist, my work on the “MIFF: Emotional Trailers” and “La Trobe: Thoughtography” projects had involved expanding that knowledge, and it helped somewhat - but I definitely had to bone up (puns are allowed now that I’m a dad).
From the get-go I knew we could quite easily get it all horribly wrong - not just in terms of what was believable anatomy, but in terms of tone.
Ms Piccinini’s work is so often characterised by almost too much pale pink skin. It’s iconic to her - but once we peel that skin back... what are we left with? From my perspective, I really wanted it to feel authentic - like a medical text that fell through a portal from a world where people could survive car accidents. But it also needed to not gross anyone out, nor seem too dry and dispassionate.
I also knew that we would have to shoot for something that looked ‘photo real’. That’s a Piccinini trait too. It would have been easier to go for a stylised approach - illustration, or perhaps X-ray - but it would have felt out of place.
We’re talking about entirely new organs, found in an autopsy and photographed.
We mentioned in our initial chat with Clemenger that it was a little bit like in ‘House MD’ where they’re trying to diagnose a patient and the camera zooms down inside the patient’s body. But that filmic representation of the inside of the body is always very saturated and contrasty - and to be honest, kind of dramatic - scary, disgusting, moist. As a visual style it would work on its own - but didn’t seem to quite line up with Piccinini’s fleshy low contrast, almost warm and comforting look.
There’s a wonderful TV series that used to run on SBS (“Anatomy for Beginners”) where a German pathologist in a racy fedora performs autopsies on ‘plastinated’ corpses. The show itself is very interesting, but I mention it because before he autopsied these corpses he had this process where he pumped chemicals into the corpse - effectively turning all the soft tissue into a kind of plastic. It still looked real, but took on this slightly low contrast waxy look. The moisture is effectively removed from the ‘meat’ (so to speak) in that process, without the tissue ever looking desiccated.
Everyone seemed on board with that approach - so long time collaborator Xavier Irvine and I began working up some sketches based on what we had seen of Graham thus far. From a philosophical point of view - we felt it important to see Graham as an otherwise normal human - but who has evolved to survive car accidents. It was important to distill it in that way so that the message was cleanly conveyed without distracting exotic ‘fantasy’ biology. Anything that helped you survive a car accident we changed, anything that didn’t we tried to leave as it was.
The work of Gunther von Hagens was an important influence on the tone of the work. I strongly encourage you to watch his TV work. We also drew inspiration from 18th and 19th century preserved dissection specimens.
It was gratifying to find that, for the most part, Patricia loved our work - it was very much in line with what she’d been thinking, and thoroughly marinaded in the scientific data Dr Kenfield and Dr Logan had provided. She even seemed thrilled with most of the new things that Xavier and I brought to the discussion - the brain suspended by tendons, for example. In response to Dr Kenfield’s data about traumatic brain injury - and the conclusion that most damage occurs when the brain collides with the interior of the skull - she’d proposed a ‘hammock like swing’. And that would have made sense evolutionarily speaking. We already have the dura mater, a membrane that envelops and protects the brain. It’s not hard to imagine that evolving into a structure that would allow the brain to sway gently.
But I felt the hammock was overly complicated to visually design and communicate, and didn’t seem to solve the mechanical problem so emphatically. So I suggested the tendons. My inspiration was drawn from the ‘shock mounts’ we use on shotgun mics as filmmakers, and the so-called ‘heart strings’ (chordae tendineae). Visually it worked great, and communicates in one image the idea of a brain protected by being allowed to move. Everyone seemed happy with that concept and it stayed.
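To get a feel for why a tether (or a hammock) helps, here’s a back-of-the-envelope simulation - all numbers are invented for illustration, not drawn from Dr Kenfield’s data - comparing a brain rigidly stopped against the skull wall with one suspended on an elastic, damped tether:

```python
# Illustrative physics sketch only - every figure below is an assumption.
m = 1.4      # brain mass, kg
k = 5000.0   # tether stiffness, N/m (assumed)
c = 50.0     # tether damping, N*s/m (assumed)
v0 = 10.0    # head velocity at impact, m/s (~36 km/h)

# Rigid case: brain crosses ~5 mm of fluid and stops hard against bone.
stop_dist = 0.005
rigid_accel = v0**2 / (2 * stop_dist)    # work-energy theorem: ~10,000 m/s^2

# Tethered case: semi-implicit Euler integration of m*x'' = -k*x - c*x',
# starting the instant the skull stops while the brain is still moving.
x, v, dt, peak = 0.0, v0, 1e-4, 0.0
for _ in range(5000):                    # 0.5 s of simulated time
    a = (-k * x - c * v) / m
    v += a * dt
    x += v * dt
    peak = max(peak, abs(a))
```

Even with these rough figures, spreading the deceleration over tens of milliseconds instead of a fraction of one drops the peak load by an order of magnitude - the same principle as the mic shock mount.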
Of course not everything we offered up was gold. Perhaps ironically, the only thing Xavier and I really departed from her thinking on - and weren’t able to convince everyone of - was with respect to Graham’s ‘airbags’.
Or ‘pus nipples’ as we called them in our office.
Decades of crash data, including Dr Logan’s, have shown that airbags are an extremely effective means of preventing injury. They work by forcing gas back out of the bag as the occupant collides with it, causing the occupant to decelerate more gradually. Graham’s airbags work the same way.
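As a quick worked example of that principle (numbers invented for illustration, not TAC crash data):

```python
# An airbag doesn't reduce the energy to be absorbed; it extends the
# distance (and time) over which the occupant decelerates, which is what
# lowers the peak force. All figures here are illustrative assumptions.
mass = 75.0   # occupant mass, kg (assumed)
v = 15.0      # impact speed, m/s (~54 km/h)

def avg_decel_force(stop_distance):
    """Work-energy theorem: F * d = 1/2 * m * v^2."""
    return 0.5 * mass * v**2 / stop_distance

hard_stop = avg_decel_force(0.02)  # ~2 cm of chest compression against a wheel
airbag = avg_decel_force(0.30)     # ~30 cm of bag compression

# Fifteen times the stopping distance means one fifteenth the average force.
```

This is also why the visible volume of Graham’s ‘airbags’ matters so much: the deceleration distance the system can offer is bounded by how much it can compress.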
But my feeling was that what was visible on top of the rib cage didn’t have sufficient volume to be effective as ‘airbags’. Of course I couldn't measure their volume, but to the naked eye the amount of air (or fluid) expelled from those organs didn’t seem like it could be enough to have much effect on Graham’s deceleration. Especially given Graham’s apparent mass.
Graham has an extremely barrel-like chest - so I proposed that those external nipples were actually just the visible component of a much larger system that existed between his ribs and his nipples. With the two connected by ducts that passed between the ribs. Of course I thought I was being extremely clever, but the idea didn’t prove popular and didn’t make the cut.
Crafting these illustrations was one of the most interesting and thoughtful projects I’ve ever embarked upon, but it wasn’t nearly as stressful as pulling the app together.
THE TECH (AUGMENTED REALITY)
Stephen and Evan had always wanted to present this ‘interior’ world in some kind of Augmented Reality way, but had thus far been told it was impossible. Indeed, whilst I felt there had to be some way to do it, I wasn’t entirely sure what that was.
There’s a thing we like to insist on at AIRBAG, and in this project it served us quite well. We call it the ‘Scoping Phase’, but you could call it research if you like. Basically we take a week or two, and work out how we might do a job before we commit to it. It proves very valuable knowing how something might be done before everyone is too invested in a project that may not even work. For a client, you could even call it cheap insurance.
The most critical component of any vision-based AR solution is what’s called the localisation system. Before the app can ‘augment’ reality, it needs to properly understand it. That’s what localisation is about. It’s the app looking through the camera, and understanding what it’s looking at, and where it is in relation to it.
Finally, we knew that the activation would have to work in numerous different rooms and lighting conditions. Graham was touring his message of vulnerability throughout regional Victoria. And the end users were anything from schoolchildren to the elderly, and everyone in between. This thing had to work, work well, and be as simple and effortless as possible.
All of this is antagonistic to computer vision, which - despite coming along in leaps and bounds in recent years - is still a fairly nascent field. So we explored a number of technological options through a series of experiments, all without access to the real Graham.
Our experimental setup revolved around a mannequin painted with extra details to mirror the expected design of Graham. Our “Faux” Graham is just a cheap dime store mannequin, but we grew to love him anyway, and he lives in our office to this day.
Then we tried a bunch of different possible solutions. Our first go-to is a system we use fairly often, based on known ‘fiducial’ markers to initiate tracking. The system starts by looking for these markers, and once it sees one, uses what it knows about that marker to work out where it (the app) is in relation to it. From there it starts looking at all the other features in the room and can continue to track where it is - for a time. It’s often called ‘SLAM’ - Simultaneous Localisation And Mapping.
But of course we knew that we couldn’t attach those markers to Graham, so they’d need to be off him somewhere. And the flaw there is that unless the camera is looking at one of those fiducial markers *most* of the time then tracking would fail and the AR would stop working.
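As a toy illustration of what that first step involves (the intrinsics and marker size below are made up, and real toolkits like OpenCV’s ArUco module do this far more robustly), the camera’s pose can be recovered from the four corners of a known planar marker via a homography:

```python
import numpy as np

# Assumed pinhole camera intrinsics (focal length 800 px, 640x480 image).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# A 10 cm square marker lying in its own z=0 plane, corners in metres.
marker = np.array([[-0.05, -0.05], [0.05, -0.05],
                   [0.05, 0.05], [-0.05, 0.05]])

def project(R, t, pts):
    """Project planar marker points (z=0) into the image with pose (R, t)."""
    pts3d = np.column_stack([pts, np.zeros(len(pts))])
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:]

def homography(src, dst):
    """Direct Linear Transform: fit H mapping src -> dst (up to scale)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def pose_from_marker(img_pts):
    """Decompose K^-1 H into two rotation columns and a translation."""
    M = np.linalg.inv(K) @ homography(marker, img_pts)
    M /= np.linalg.norm(M[:, 0])     # fix the arbitrary homography scale
    if M[2, 2] < 0:                  # marker must sit in front of the camera
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    return np.column_stack([r1, r2, np.cross(r1, r2)]), t
```

The weakness described above falls straight out of this: with the marker’s corners out of frame there is nothing to fit the homography to, and the pose must limp along on feature tracking alone.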
The second idea was more ambitious from a computer vision point of view. Instead of attempting to recognize a number of predefined ‘patches’, we trained the computer with machine learning to understand the ‘sculpture’ as a whole and to recognize it from any angle.
We were able to get a working demo up on a very powerful desktop computer, but even with a lot of optimizing it seemed unlikely to ever really be functional on a handheld device with current gen hardware and the time we had available to us.
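A crude sketch of that second idea (the descriptors below are random stand-ins; the real system learned its features from imagery of the sculpture): store one descriptor per training viewpoint, then match each live frame to its nearest neighbour.

```python
import numpy as np

# Hypothetical 'gallery' of viewpoint descriptors - one unit vector per
# degree of yaw around the sculpture. Real systems would use learned
# image features; random vectors just demonstrate the lookup structure.
rng = np.random.default_rng(0)
n_views, dim = 360, 128
gallery = rng.normal(size=(n_views, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def localise(frame_desc):
    """Return the best-matching training viewpoint (degrees of yaw)."""
    frame_desc = frame_desc / np.linalg.norm(frame_desc)
    return int(np.argmax(gallery @ frame_desc))  # cosine similarity

# A frame seen from ~210 degrees: a noisy copy of view 210's descriptor.
query = gallery[210] + 0.1 * rng.normal(size=dim)
```

Even this toy version hints at the cost problem: every frame is compared against every stored viewpoint, and the real matching (plus pose refinement) was far heavier than a handheld GPU of the day could sustain.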
So our third idea was to actually move the localisation problem *off* the handheld device and onto a separate server, and have that localisation data streamed back to the handheld units via wifi. It’s an approach we use in VR, and basically consists of putting tracking markers on the tracked device, and putting a static camera (or cameras) somewhere it can see the device. Usually the cameras operate in infrared. It’s basically how the Oculus Rift works if you’ve ever used one.
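Sketched very roughly (the packet layout and names here are assumptions for illustration, not the protocol we used), the outside-in idea is just a server computing each device’s pose and streaming it over the network:

```python
import socket
import struct

# Assumed wire format: position (x, y, z) plus an orientation quaternion,
# packed as seven little-endian 32-bit floats.
POSE_FMT = "<7f"

def pack_pose(x, y, z, qx, qy, qz, qw):
    return struct.pack(POSE_FMT, x, y, z, qx, qy, qz, qw)

def unpack_pose(data):
    return struct.unpack(POSE_FMT, data)

# Loopback demo: the tracking 'server' sends one pose, the 'handheld' receives.
handheld = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
handheld.bind(("127.0.0.1", 0))          # let the OS pick a free port
handheld.settimeout(2.0)
addr = handheld.getsockname()

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.sendto(pack_pose(0.1, 1.5, 2.0, 0.0, 0.0, 0.0, 1.0), addr)

pose = unpack_pose(handheld.recv(1024))
```

UDP suits this kind of stream because a stale pose is worthless - better to drop a packet than to wait for a retransmit - but, as we found, the round trip still shows up as latency in the rendered overlay.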
It worked, of course, but it involved a costly rig of infrared cameras that would need to travel with Graham and be carefully calibrated regularly - seriously complicating Graham’s installation at each venue, and potentially compromising the aesthetics of the room. It also suffered from latency and interference issues. We needed something simpler and more reliable.
Ideally something entirely self contained in the handheld unit.
So at this point we were able to get our hands on a Google Tango Dev Kit. At the time, these devices were essentially only available to developers - and not, for that matter, in Australia. Technologically, it’s a very impressive device and software platform in one, purpose built for Augmented Reality. It not only has a very wide angle camera, it also has what’s called a structured light sensor (like the Xbox Kinect’s), and very high speed inertial measurement units.
It quickly became apparent that this tech was the way forward.
That said, the Tango isn’t without its issues. As I mentioned, the Dev Kits in question couldn’t be bought in Australia, so we had to import them from the United States. The value of the order being what it was, it triggered all sorts of customs and excise issues. The bureaucracy and attendant paperwork kept us in a state of frustration for weeks.
The hardware itself is also very definitely a Dev Kit - a device designed to get developers familiar with the software environment, rather than something that’s expected to be used by the general public. To wit: they’re a little rough around the edges.
For starters, the screen is 7 inches, significantly smaller than the 9.7-inch display we’re used to on a standard iPad. Secondly, our control over the Tango camera in terms of colour and exposure is very limited - the rendition of Graham’s pants, for example, is very unfaithful in terms of colour.
But the big thing was the power usage. The Tango devices suck a lot of power and run out of battery extremely fast, so we always knew they’d need to be permanently plugged in to function. What we didn’t know was that the devices used SO much power that, even plugged into USB power, they still ran flat over the course of several hours.
We actually had to dismantle the specially provided 12 volt charging docks and build them into custom designed 3D printed cases, so that we could supply enough power to keep the Tangos charged. It also afforded an opportunity to consider the aesthetics of the handheld devices - Graham’s overall presentation was carefully considered by Clemenger and Traffik to suit a high end art gallery. Fun fact: the process of designing and 3D printing these cases was so involved and time consuming that the per unit cost exceeded the cost of the Tangos themselves!
In the real world the Tangos actually perform very well, tracking Graham and his surrounds with a surprising degree of fidelity and stability.
From a user experience (UX) point of view, we had to work through a number of iterations. Ultimately I felt my first objective was to keep people near and engaged with Graham as long as possible. But tablet based AR can get exhausting for the user. Holding the tablet up and keeping it carefully pointed at an object can tire even a fit young person who is accustomed to the medium. And my objective was to cast the net widely and be as accessible as possible.
Our eventual approach uses the live AR view only for choosing what to explore, which allows you to relax into a more comfortable reading position once you’ve made your choice. As an approach, it seemed to strike a nice balance between a dynamic AR experience and an accessible method of content delivery.
It was extremely gratifying to see Graham and his Tango app, not just in use in the gallery by regular punters, but also folded into a special curriculum for school students.
Graham is a real swell guy, and I’m stoked to have met him. I consider it something of a career highlight to have worked on this project. Not just because I got to collaborate with Ms. Piccinini - an artist whose work I’ve long admired - but also with the sharp folk at Clemenger. A finer bunch of minds I haven’t encountered in a long time.
My favourite thing about this project may seem a little odd. Patricia’s team actually individually placed every single hair on Graham’s body. Apparently anything other than real hair doesn’t work very well, and they’re in constant need of ‘samples’. So Nick Venn (Producer - AIRBAG), Evan Roberts (Creative Director - Clemenger), Adrian Bosich (Managing Partner - AIRBAG) and I stood on a tarp as one of her assistants harvested our various body hair to incorporate into Graham.
Apparently my particular brand of wiry dirty blonde was perfect for his eyebrows.
A huge thanks to the Clemenger BBDO Melbourne team for bringing us in on this project. And the TAC in turn for greenlighting the most interesting campaign any of us have ever worked on.
From team AIRBAG we'd like to thank:
Creative Technologist: Steven Nicholson
Managing Partner: Adrian Bosich
Producer: Nick Venn
Illustrator: Xavier Irvine
3D Team: Patrick Gavin, Adam MacGowan, Dmitrij Leppee, Tim Murphy, Justin Imhoff
AR Team: Leigh Mannes, Rob Caparetto, Paul Stapelberg
R&D Support: Stephen Burns, Adrian Oostergetel