CR: What is your job title?
Chris: CG artist.
CR: What were your responsibilities on this commercial and how did you approach it?
Chris: We had to create an android that was as futuristic, humanoid and realistic as possible, and I headed up the team responsible for the 3D work on this. The first task for us was to get researching and assemble as much information and imagery as we could on androids. Once we’d gathered lots of stuff we were able to work with the designer, Chris Glass, on the look of the android and how we could approach modelling it and then animating it. So we did various tests in both modelling and rigging – which are the most important things in a piece of work like this.
CR: What’s rigging?
Chris: Rigging is what we create and use to make a model move: the controls, if you like. In this instance, the rigging needed to make the model move as humanly as possible for the ad to have maximum impact. We were looking at human muscle structures in anatomy books so that we could base the android’s structure and movement on how the human body actually works and moves. So yes, modelling is all about crafting appropriate geometry, while the rigging is set up to make that model move. On this spot there was lots of key modelling to be done on all the glass elements that were then rigged to move within the structure of the head. Our real job is to take the look decided upon by the creatives, research it and follow it through to the tiniest detail.
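In code terms, a rig is essentially a mapping from a few animator-friendly controls to the many pieces of geometry they drive. A toy 2D sketch of one such control – a jaw that opens about a pivot – might look like the following (the pivot and vertex values are made-up illustration numbers, not anything from The Mill’s actual pipeline):

```python
import math

def rotate_about(point, pivot, angle_deg):
    """Rotate a 2D point about a pivot - the core of a simple rotation control."""
    a = math.radians(angle_deg)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + x * math.cos(a) - y * math.sin(a),
            pivot[1] + x * math.sin(a) + y * math.cos(a))

def pose_jaw(vertices, pivot, open_deg):
    """A 'jaw' control: one animator parameter (open_deg) drives every jaw vertex."""
    return [rotate_about(v, pivot, -open_deg) for v in vertices]
```

The point of the abstraction is that the animator only ever touches `open_deg`; the rig propagates that single value to all the geometry underneath, which is what lets a handful of controls drive a model as complex as a glass-filled android head.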
CR: So how many people at The Mill worked on this and for how long?
Chris: Fourteen people in all – over a period of about three months.
CR: How does everyone’s responsibilities break down?
Chris: Basically, we split the component elements of the job down so everyone has a particular area to look after – rigging, animation, lighting, modelling…
CR: Did you create the android from scratch – or, in Gollum/Kong style was there a live action shoot with an actor playing the role of the android?
Chris: Yes, there was a shoot with an actor in Los Angeles in July.
CR: So how do you guys make sure that you get all the information and data from the shoot that you need?
Chris: Russell Tickner, who was the lead 3D supervisor on this job, went along to the shoot and so was able to take notes and basically cover all the bases in terms of making sure we had as much information and data as we could possibly have to make our life in post easier. He came back with what we call survey information which enables us to track all the filmed data on the offline edit. We were going to have to map our modelled android onto the footage of an actor – who was hooked up with various motion capture sensors during the shoot – so we had this vital data, without which our work would have been considerably more laborious.
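What the survey and tracking data ultimately recover is the camera itself – its position, orientation and lens – so that CG elements can be projected into the filmed plate in exactly the right place. A minimal pinhole-projection sketch, assuming a solved rotation `R`, camera position `t` and focal length (all hypothetical values, standing in for a real matchmove solve):

```python
import numpy as np

def project(point, R, t, focal):
    """Project a 3D world-space point through a tracked camera (pinhole model).

    R (3x3 rotation) and t (camera position) would come from the matchmove
    solve; focal is the focal length in pixel units. Returns 2D image coords.
    """
    p = R @ (np.asarray(point, dtype=float) - t)  # world -> camera space
    if p[2] <= 0:
        raise ValueError("point is behind the camera")
    return focal * p[0] / p[2], focal * p[1] / p[2]
```

With per-frame `R` and `t` from the tracking data, the same world-space android geometry lands on the correct pixels of every frame of the offline edit – which is why gathering that data on set saves so much labour in post.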
CR: What other vital data was collected at the shoot stage?
Chris: Firstly, we used laser scanning to capture the actor’s face, so we got the actual 3D geometry of his face from that.
Also, we knew the glassy effect we wanted to achieve, so on the shoot Russell took reference plates – shots of glass beakers, test tubes and other real chemistry lab stuff – all set up in the same spots that the actor was being filmed in. This gave us great reference material showing how light would play off glass in the various environments of the ad. That was key: glass is never actually of uniform thickness or completely smooth, so having this kind of reference means you can add the sort of detail that makes a CG object look more real. All the glass items were also brought back from the shoot so we had them here at The Mill to study further and use as modelling source material.
CR: So your android was composited into live action footage of the environments – how do you ensure it sits realistically in that?
Chris: Invaluably, we were able to light the android using information actually taken on the shoot. Russell took a full photo survey of High Dynamic Range (HDR) shots of the sets so that backgrounds could be re-created in CG. This meant that we could control the lighting of the model as it would have been lit had it actually been in that environment.
CR: And how does that work – is it new technology?
Chris: Imagine a camera mounted on a tripod in the shoot environment. The camera takes a 360-degree panoramic shot – that comes out looking a bit like a reflection in a metal ball. This is done over several passes at different exposures. These shots are all composited together to give us one HDR map, which is then used to create a 3D environment map that we can render lighting information from as we need it. HDR has been around for several years.
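The multi-exposure merge described here can be sketched in a few lines – a simplified radiance merge using a triangular weight so that well-exposed pixels dominate and clipped ones are discounted. This assumes linear images already normalised to [0, 1]; real HDR assembly (as in tools like OpenCV’s HDR module) also recovers the camera’s response curve, which is skipped here:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed LDR exposures into one HDR radiance map.

    images: list of float arrays in [0, 1], one per exposure (linear response
    assumed). Each pixel's radiance estimate is img / exposure_time, blended
    with a 'hat' weight that peaks at mid-grey and falls to zero at 0 and 1.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * (img / t)               # weighted radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Because a bright window is only unclipped in the short exposures and a dark corner only readable in the long ones, the weighted blend recovers the full dynamic range of the set – which is exactly what makes the resulting map usable for lighting a CG model.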
CR: There’s a shot in the ad where the camera pans round the android’s head as he’s speaking and we see various bits of its structure moving accordingly. How did you achieve that one?
Chris: It all comes down to the information that we’d collected and matching it all up and using the rigging for our model. We had the tracking data for the camera movement and also tracking data for any movement of the head within that shot. Then we use the rigging to move whatever we want within our model, such as the jaw. We were able to hand-animate the jaw to the rig model and to the shot – so as the jaw is moving, we rotoscope the animation we’ve made of the android talking – so that it matches. And we have a talking android. Of course we had to work very closely with the 2D team to ensure all the textures such as the skin and surfaces of our android looked the part too.