We coded S4C’s voice-activated idents

The latest batch of live-action idents for Welsh channel S4C by Proud Creative all feature elements which respond to the voice of the channel’s announcer, thanks to the code-writing skills of directors Minivegas. We spoke to Luc Schurgers and Daniel Lewis about the project.

CR: How did you guys end up working on this?

Minivegas: Proud contacted [production company] Onedotzero about interesting directors to pitch on the S4C idents. Onedotzero were familiar with us because of previous programming work we had done. The brief was ‘picture-postcard scenes of Welsh life’. We wanted to produce a set of idents that could bring those scenes to life via software.

Before joining Minivegas, Dan had written his own real-time compositing application. There was obviously a lot of material there that would be useful for the S4C job – video decoding, compositing, colour correction and so on. Some of that knowledge became the basis for the software but, apart from the open-source libraries we’ve used, the S4C application is written completely from scratch.

CR: So what exactly is the format for these idents?

M: Each of the ten idents is 20 seconds long and is made up of real, filmed footage. All the idents stand on their own without voice input, but we’ve created a package for the channel so that when an announcer speaks over the ident, introducing the next programme, elements within each ident respond in real-time.
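The “voice input” driving each ident boils down to a single loudness value per video frame. As a minimal sketch (our assumption of the general approach, not Minivegas’s actual code), here is an RMS level computed over the audio samples that fall within each frame:

```python
import math

def frame_levels(samples, sample_rate=48000, fps=25):
    """samples: floats in -1..1. Yields one RMS level per video frame.
    sample_rate and fps are illustrative defaults, not S4C's settings."""
    per_frame = sample_rate // fps  # audio samples per video frame
    for start in range(0, len(samples) - per_frame + 1, per_frame):
        chunk = samples[start:start + per_frame]
        yield math.sqrt(sum(s * s for s in chunk) / per_frame)
```

Each per-frame level can then drive whatever parameter the ident exposes – the brightness of a light, the height of a forklift, and so on.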

CR: Glassworks and Lambie-Nairn created a graphics package for BBC4 in which the announcer’s voice generated animated graphics. How did you work out what you can and can’t do with this idea in live action? Were there any restrictions or technical limitations?

M: That’s right. The developer on the Glassworks project was Robin Carlisle. He also worked with us to develop a real-time graphics synthesiser called rDog. After he left, Dan Lewis, part of our collective, picked up some of the rDog work, adding features such as analysing live video camera feeds. We’ve taken this on tour and played some fun club sets with it.

We’ve also built several live art installations from scratch, and have a few more in the pipeline. We’re into the concept of dynamically-generated content, and also exploiting cheap consumer hardware. The S4C idents were a great opportunity to explore this concept while trying to deliver otherwise traditional idents.

We sat down with Proud and S4C and made a huge list of ideas – locations, objects and animals that could be voice-reactive in each ident.

When we shot the first three idents we went a little crazy. The first one had six different elements – some reacting to the voice, some to the music, some CG, some using filmed elements – it was pretty ambitious.

But the voice-reactivity was somewhat drowned out because there was so much going on. So for the final seven, we stripped things down to just one strong reactive visual element for each ident, plus a selection of background action plates for variation and mood.

There were certainly technical limitations – two of the idents required us to cram over 12GB of slow-motion footage into 2GB of RAM, then composite four or five streams of video in real-time. We devised several cunning compression and playback schemes until we finally managed to crack it.
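The general shape of that trade-off – CPU for RAM – can be sketched as holding every frame compressed in memory and decoding only the frames the compositor needs right now. This is our illustration of the idea, not Minivegas’s actual scheme (they would have used a codec suited to video, not zlib):

```python
import zlib

class CompressedClip:
    """Hold a clip's frames compressed in RAM; decode on demand."""

    def __init__(self, frames):
        # frames: list of raw per-frame byte buffers.
        # Level 1 = fastest compression, favouring decode speed over ratio.
        self.frames = [zlib.compress(f, 1) for f in frames]

    def frame(self, i):
        # Decompress only the single frame being composited this instant.
        return zlib.decompress(self.frames[i])

# Dummy plates: four flat-colour frames at roughly PAL-HD plane size.
raw = [bytes([i]) * (1920 * 1080) for i in range(4)]
clip = CompressedClip(raw)
assert clip.frame(2) == raw[2]  # lossless round-trip
```

Real footage compresses far less kindly than flat dummy frames, which is presumably why it took “several cunning schemes” rather than one.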

One problem when creating a real scene, versus something abstract, is that the viewer expects it to obey the laws of physics. If a forklift responds to a loud cough, a human animator can animate slightly ahead of the event, giving the forks the momentum to rise to the top at the exact time of the cough. With real-time animation, you don’t know about the cough until just after it’s happened. If the forks move to the top instantly, it looks wrong. You need to cheat. You’ll always be slightly behind though, and it’s a real balancing act to get this to look plausible.
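One common way to make that cheat look plausible – and this is our hedged sketch of the technique, not the actual ident code – is a fast-attack, slow-release envelope follower. The element can only react after the cough starts, so the attack is made near-instant while the release decays smoothly, which reads as momentum rather than a hard snap:

```python
def make_envelope_follower(attack=0.6, release=0.05):
    """Return a per-frame smoother for a 0..1 voice level.
    attack/release are per-frame blend factors (hypothetical tunings)."""
    state = {"level": 0.0}

    def step(voice_level):
        # Rise quickly towards a louder input, fall back slowly.
        coeff = attack if voice_level > state["level"] else release
        state["level"] += coeff * (voice_level - state["level"])
        return state["level"]

    return step

follow = make_envelope_follower()
# A sudden loud cough (level 1.0) after silence: the output climbs
# over a few frames and then settles back gently, never snapping.
positions = [follow(v) for v in [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]]
```

The remaining latency never fully disappears – as the interview says, you are always slightly behind the sound – but smoothing hides the worst of it.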

CR: How long were you working on this project and what were your responsibilities?

M: The project has taken over a year to develop, from conception through R&D, shoot, post, coding and integration, with several trips back to the drawing board along the way.

We directed the live action. The first three shoots were a case of ‘get as many cool-looking plates as possible’. Nobody really knew what to expect. By the time we’d got to the second set of shoots, we were more surgical. The most involved shoots were the lighting-based ones – the Lighting Store, the Welding Shop and the Wedding DJ. They involved turning hundreds of lights on and off, trying to keep the background conditions as fixed as possible. With the Welding Shop, we needed to get enough footage of each welder so that we could comfortably build a loop. These idents took most of a day to shoot. Others were a lot simpler.

Some idents involved placing 3D elements in the scene – the Lighthouse, the Museum and the Car Park idents. We took plenty of measurements for these, scrubbed out the original elements from the plates and then just worked at it until we got code which was fast enough to replace those elements in real-time and still look realistic.

One thing we did wrong with the first idents was to grade the shots first and then process the graded footage. Grading typically happens at the very end of the post-production pipeline, but our processing was an extra element that happened before it. So after the second shoot, we just grabbed all the data we could from the film scans, and the grade is applied in the software, in real-time. This let us do things like darken or lighten the sky in response to the voice. It also gave us the latitude to composite over 50 different lighting plates in one shot.
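A real-time, voice-driven grade can be pictured as blending between two colour transforms per pixel, weighted by the current voice level. The names and gain values here are ours for illustration, not S4C’s actual grade:

```python
def apply_grade(pixel, voice_level, dark_gain=0.6, light_gain=1.3):
    """pixel: (r, g, b) floats in 0..1. voice_level: 0 (silence) to 1.
    Blends a darkening gain towards a lightening gain as the voice rises,
    clipping to legal range. Gains are hypothetical tuning values."""
    gain = dark_gain + (light_gain - dark_gain) * voice_level
    return tuple(min(1.0, c * gain) for c in pixel)
```

In practice a broadcast grade would use lift/gamma/gain or a 3D LUT per channel rather than a single scalar gain, but the principle – grade parameters as live inputs instead of baked-in footage – is the same.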

CR: Tell us about the preparation of the idents – you’ve delivered hardware as well as software to the client, right?

M: We’ve delivered a turnkey system. Channels are comfortable dealing with digidecks and playout systems, so we designed the box to behave pretty much like a deck.

It can communicate with their scheduling system, switch audio feeds from a number of sources, and it can output synchronised digital video and audio using broadcast standard signals.

This was actually quite a tough part of the job. There’s been a large team of people focused on the creative aspects of the idents, planning, production, grading, compositing, writing the code for each ident and getting it signed off. But after all that, we still needed to deliver something that could sit in a broadcast environment, be stable and run without crashing, have a user interface that’s easy enough for the presentation team to use, and a technical interface that’s extensive enough for the engineers to use. Getting a piece of software to that point where you can deliver it and leave it somewhere without blowing up takes a long time and a lot of effort. At least as much code was written to support the integration as the creative aspect of the project.
