We Coded S4C’s Voice-Responsive Idents

The latest batch of live action idents for Welsh channel S4C by Proud Creative all feature elements which respond to the voice of the channel’s announcer – thanks to 12 months of research and development, not to mention the code-writing skills of directors Minivegas.

[Image: Lighting Store ident, after]

“By fusing traditional notions of national pride with contemporary motion design we have created a unique and arresting development to the award-winning first set of idents,” says Proud Creative’s Dan Witchell. “The voices of S4C literally breathe new life into traditional Welsh landscapes, shaped and moved by sound and language,” he continues. “Older generations see the familiar beauty of their Wales represented in a new way. Younger demographics see S4C representing a developing and creative nation in a way that they understand. With every voice-over the image will change in a unique way – every broadcast will be a one-off piece.”

[Video: /images/uploads/2007/12/proudcranesml.mov]

In CR’s January 2008 issue, we spoke to Luc Schurgers and Daniel Lewis at Minivegas about their work on the project. Here is the full, unedited interview:

CR: How did you guys end up working on this and were you able to modify previously written code for the S4C stuff?
Minivegas: Proud contacted Onedotzero about interesting directors to pitch on the S4C idents. Onedotzero were familiar with our work because of previous programming work we had done. The brief was picture-postcard scenes of Welsh life. We wanted to produce a set of idents that could bring those scenes to life via software.

Before joining Minivegas, Dan had written his own real-time compositing application. There was obviously a lot of material there that would be useful for the S4C job – video decoding, compositing, colour correction etc. Some of that knowledge became the base for the software. But apart from the open-source libraries we’ve used, the S4C application is written completely from scratch.

CR: So what exactly is the format for these idents?
Minivegas: Initially we discussed having infinite, continuously playing idents. But we were using real filmed footage, so we stuck to a 20 second format. It’s also not too much of a challenge for the announcer – they can just treat them like a normal 20 second ident and say as little or as much as they want. All the idents stand on their own without voice input. But speech breathes life – and a degree of strangeness – into the scenes.
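
To make the mechanism concrete, here is a minimal envelope-follower sketch in Python – our illustration rather than Minivegas’s actual code – showing how an announcer’s voice can be reduced to a single 0–1 value per frame for visual elements to follow. The constants are arbitrary assumptions.

```python
import numpy as np

class VoiceEnvelope:
    """Turn blocks of announcer audio into a smoothed 0..1 'energy' value."""
    def __init__(self, attack=0.3, release=0.05):
        self.level = 0.0
        self.attack = attack      # how quickly the level rises on speech
        self.release = release    # how quickly it falls back in silence

    def update(self, samples):
        """samples: one block of mono audio as floats in [-1, 1]."""
        rms = float(np.sqrt(np.mean(np.square(samples))))
        coeff = self.attack if rms > self.level else self.release
        self.level += (rms - self.level) * coeff
        return min(1.0, self.level * 4.0)   # rough normalisation for typical speech levels

# per-frame usage: feed whatever audio arrived since the last video frame
env = VoiceEnvelope()
block = np.random.uniform(-0.2, 0.2, 1764)   # stand-in for 1/25s of 44.1kHz audio
drive = env.update(block)                    # 0..1 value for a visual element to follow
```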

[Image: Lighthouse ident, before]
In Lighthouse, the electric wires in the scene warp and wave as the channel announcer speaks over the ident

[Image: Lighthouse ident, after]

CR: There was an ident package for BBC 4 created by Glassworks and Lambie-Nairn a few years ago in which the announcer’s voice generated animated graphics. How did you work out what you can and can’t do with this idea in live action? Were there any restrictions or technical limitations?
Minivegas: The guy who worked on the Glassworks project was Robin Carlisle. We actually worked with him to develop a realtime graphics synthesizer called rDog. After Robin left, Dan Lewis, who is a part of our collective, picked up some of the rDog work, adding features such as analysing video camera feeds. We’ve taken this on tour and done some fun club sets as well.

We’ve also built several live art installations from scratch, and have a few more in the pipeline. We’re into the concept of dynamically generated content, and also exploiting cheap consumer hardware. The S4C idents were a great opportunity to explore this concept while trying to deliver an otherwise traditional, photo-real set of idents.

At the start of the project we were very enthusiastic, but didn’t really have a clue how to actually execute the idents. We sat down with Proud and S4C and made a huge list of ideas – locations, objects and animals that could be voice-reactive in each ident. It was a fairly inclusive list, with entries such as “cow pooing with dung falling in time to voice”.

We shot the first three idents and went a little crazy. The first ident had six different elements, some reacting to the voice, some to the music, some CG, some using filmed elements. The music was generated randomly… it was pretty ambitious. But the voice-reactivity was somewhat drowned out. So for the final seven idents, we stripped things down to just one strong reactive visual element for each ident, and a selection of background action plates for variation and mood.

There were certainly technical limitations – the Ice Cream Van and Scissor Lifts involved cramming over 12GB of slow-motion footage into 2GB of RAM, then compositing 4 or 5 streams of video in real-time. We devised several cunning compression and playback schemes until we finally managed to crack it.
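
The interview doesn’t say which compression schemes they settled on. As a rough illustration of the general trick – keeping frames compressed in RAM and decoding only what the composite needs – here is a hedged Python sketch using Pillow; the format choice and class names are ours, not theirs.

```python
import io
from PIL import Image

class CompressedClip:
    """Hold a clip as compressed bytes in RAM, decoding single frames on demand.
    Purely illustrative; a real system would use a codec suited to film plates."""
    def __init__(self, frame_paths, quality=90):
        self.frames = []
        for path in frame_paths:
            buf = io.BytesIO()
            Image.open(path).convert("RGB").save(buf, format="JPEG", quality=quality)
            self.frames.append(buf.getvalue())              # store compressed bytes only

    def frame(self, index):
        return Image.open(io.BytesIO(self.frames[index]))   # decode one frame when needed

    def ram_estimate_mb(self):
        return sum(len(f) for f in self.frames) / (1024.0 * 1024.0)
```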

One problem when creating a real scene, versus something abstract, is that the viewer expects it to obey the laws of physics. If a forklift responds to a loud cough, a human animator can animate slightly ahead of the event, giving the forks the momentum to rise to the top at the exact time of the cough. With real-time animation, you don’t know about the cough until just after it’s happened. If the forks move to the top instantly, it looks wrong. You need to cheat and pretend you knew about it, rising as fast as you can. You’ll always be slightly behind, and it’s a real balancing act to get this to look plausible.
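
That “cheat and catch up” idea translates into something like the following rate-limited follower – a sketch under our own assumptions about speeds, not the actual ident code.

```python
class CatchUpFollower:
    """Chase a target value without exceeding a believable physical speed."""
    def __init__(self, max_rise=2.0, max_fall=1.0):   # units per second, assumed limits
        self.value = 0.0
        self.max_rise = max_rise
        self.max_fall = max_fall

    def update(self, target, dt):
        delta = target - self.value
        limit = (self.max_rise if delta > 0 else self.max_fall) * dt
        # move toward the target, but never faster than the physical limit allows
        self.value += max(-limit, min(limit, delta))
        return self.value

# 25 fps example: a sudden loud cough jumps the target, and the forks rise
# as fast as they plausibly can rather than teleporting to the top
follower = CatchUpFollower()
for frame in range(25):
    target = 1.0 if frame > 5 else 0.0          # stand-in for the voice envelope
    fork_height = follower.update(target, dt=1.0 / 25.0)
```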

[Image: Car Park ident, before]
In Car Park, the painted signage on the ground comes to life

[Image: Car Park ident, after]

CR: How long were you working on this project and what was involved?
Minivegas: The project has taken over a year to develop, from conception through R&D, shoot, post, coding and integration, with several trips back to the drawing board along the way.

We directed the live-action. The first three shoots were a case of “get as many cool-looking plates as possible”. Nobody really knew what to expect. By the time we’d got to the second set of shoots, we were more surgical. The most involved shoots were the lighting-based ones – the Lighting Store, the Welding Shop and the Wedding DJ. They involved turning hundreds of lights on and off, trying to keep the background conditions as fixed as possible. With the Welding Shop, we needed to get enough footage of each welder so that we could comfortably build a loop. These idents took most of a day to shoot. Some were a lot simpler.

Some idents involved placing 3D elements in the scene – the Lighthouse, the Floor-Polisher wires in the Museum and the Car Park. We took plenty of measurements for these, scrubbed out the original elements from the plates and then just worked at it until we got code which was fast enough to replace those elements in real-time and still look realistic.
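
As a very rough sketch of what “replace those elements in real-time” can look like in code – our illustration, with invented array shapes and a simple sine warp rather than whatever Minivegas actually did:

```python
import numpy as np

def warp_and_composite(plate, wires, voice_level, t):
    """plate: HxWx3 background; wires: HxWx4 pre-keyed element with alpha; floats in 0..1.
    Displaces the element vertically by a voice-driven travelling wave, then comps it back."""
    h, w = wires.shape[:2]
    offsets = (voice_level * 10.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, w) + t)).astype(int)
    warped = np.zeros_like(wires)
    for x in range(w):
        warped[:, x] = np.roll(wires[:, x], offsets[x], axis=0)   # shift each column up/down
    alpha = warped[..., 3:4]
    return plate * (1.0 - alpha) + warped[..., :3] * alpha        # simple "over" composite
```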

One thing we did wrong with the first idents was grade the shots first and then process the graded footage. Grading typically happens at the very end of the post-production pipeline, but our processing was an extra element that happened before it. So after the second shoot, we just grabbed all the data we could from the film scans. The grade is applied in the software, in real time. This let us do things like darken or lighten the sky in response to the voice. It also gave us the latitude to composite over 50 different lighting plates in one shot – they are loaded onto the computer as high-dynamic-range elements and modified in real-time.
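
A simplified sketch of a voice-driven grade on a linear-light (HDR) plate might look like this; the exposure range and display gamma are our assumptions, not the values used on air.

```python
import numpy as np

def voice_grade(hdr_plate, voice_level, base_stops=0.0, extra_stops=1.0, gamma=2.2):
    """hdr_plate: linear-light float image; voice_level: 0..1 from an envelope follower."""
    gain = 2.0 ** (base_stops + extra_stops * voice_level)   # exposure in stops -> linear gain
    graded = hdr_plate * gain                                # e.g. brighten the sky as the voice rises
    return np.clip(graded, 0.0, 1.0) ** (1.0 / gamma)        # crude transform down to display range
```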

CR: Can you tell us what writing code for a project like this entails?
Minivegas: Writing code is like writing text. You write it once, it’s messy. Then you re-write it again and again, perfect it, chuck stuff out etc. The source code repository says the first line of code was checked in on June 1. There are about 50,000 lines of code in the final product. But there were many unrecorded revisions and throwaway prototypes before that.

At the beginning, Dan took a lot of the code he’d written previously for his compositor, and wrapped it up so it could be used from a scripting language. We used Python to create prototypes for most of the idents using these wrapped modules. This allowed a very fast turnaround, with the ability to tweak things and try out new techniques easily. We did this as early on as possible, working with whatever we had available. So for Car Park, for instance, at the beginning we just took some photos of a car park. Then when the rushes came in we used them. Then when the film plates came in we used them. Finally, we dropped in the polished final elements. So the coding process started way up front and followed the production and post process through to the end.
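
That “work with whatever we had available” workflow might be structured roughly like the snippet below; the directory names and layout are purely our own illustration of the idea.

```python
import glob
import os

SOURCE_DIRS = ["plates_final", "rushes", "location_photos"]   # best material first

def best_available_frames(ident_name):
    """Return frame paths from the best source that exists so far for this ident."""
    for source in SOURCE_DIRS:
        frames = sorted(glob.glob(os.path.join(source, ident_name, "*.jpg")))
        if frames:
            return frames
    return []

# the same prototype script then runs unchanged on photos, rushes or final plates
frames = best_available_frames("carpark")
```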

A fair amount of time was spent writing tools – tools to process the scanned film files, tools to compress the elements. We even built a QuickTime compressor into the application so that we could quickly generate sample renders for client review.

As the prototypes were signed off, we re-coded them in C++, inserting them into the final, broadcast-spec application. This was painful, but needed to be done for real-time performance.

We don’t write in any “software” as such. Apart from Python, a C++ compiler, a text editor and a terminal, the only other components are a bunch of open-source libraries. We developed the software on Linux, but delivered it on Windows. It could run on a Mac fairly easily too.

CR: Tell us about the preparation of the idents – you’ve actually delivered hardware as well as software to the client, right?
Minivegas: We’ve delivered a turnkey system. Channels are comfortable dealing with digidecks and playout systems, so we designed the box to behave pretty much like a deck. It can communicate with their scheduling system, switch audio feeds from a number of sources, and it can output synchronized digital video and audio using broadcast standard signals.

This was actually quite a tough part of the job. You can imagine there’s been a large team of people focused on the creative aspects of the idents – planning, production, grading, compositing, writing the code for each ident and getting it signed off. But after all that, we still needed to deliver something that could sit in a broadcast environment, be stable and run without crashing, have a user interface that’s easy enough for the presentation team to use, and a technical interface that’s extensive enough for the engineers to use. Getting a piece of software to the point where you can deliver it and leave it somewhere without it blowing up takes a long time and a lot of effort. At least as much code was written to support the integration as the creative aspect of the project.

We’d really like to see an open platform for this kind of thing. Maybe a piece of software that handles all the integration aspects, the audio and video input/output etc. Then people could just drop in modules – a bunch of media and some code, so they could just concentrate on the creative aspect.

Credits:

Agency/production company: Proud Creative
Creative director: Dan Witchell
Director: Minivegas
D.O.P: Bob Pendar-Hughes
Agency producer: Roger Whittlesea
Client: S4C
S4C creative director: Dylan Griffith
Music: Freefarm for seven of the idents, Brains and Hunch for the other three.
