The web is no longer just a place for HTML pages. Nowadays, thanks to advances in technology and broadband speeds, people are doing ever more adventurous things in web content production. With WebGL becoming ever more usable and impressive, we thought it'd be a good idea to take a step back and look at the history of computer generated imagery.
A Long Way in a Short Time
We've come a long way in the world of rendering since Pong. As computing power has increased, we've harnessed it to do ever more complex and helpful things. From Google Now recommending films you might want to see and restaurants based on places you've enjoyed before, to serving high definition video to every device you own, the world now relies on massively powerful computing more than ever.
Nowhere, however, is this more evident than in the film industry. It's hard to believe that anyone working on the eyes of Yul Brynner's Gunslinger in Westworld, or the computer animated wireframes shown in the Trench Run scene in the original Star Wars from 1977, could have imagined today's cutting edge animation. Yet, in the space of 41 years we've moved between the two extremes:
In this post, we're going to take a look at the history of CGI in the film industry, and then take a look at some of the techniques and technologies used.
In the Beginning, There Was Tron
The first time computer generated imagery saw serious use was Tron in 1982. The film, with its now iconic Light Cycle race, was the first true harbinger of things to come in the world of computer animation in film. That famous scene, however, very nearly never was. After creating the script and storyboards, animator Steven Lisberger and his business partner Donald Kushner took the concept story, along with examples of CG footage, to several film studios, none of whom were willing to finance the film. It was when they reached out to Disney, however, that the difficulties to come really came to light.
The studio had been in financial difficulties, and so had started commissioning work outside of its traditional fare in an effort to increase box office receipts. Kushner, however, noted that there had been friction between his people and Disney's animators. The visual compositing techniques were still unproven at the time, and Lisberger had never directed before. He recalled in 1982: "They saw us as the germ from outside. We tried to enlist several Disney animators but none came. Disney is a closed group."1
Thanks in no small part to Disney's production chief Tom Wilhite, however, the studio agreed to finance a test reel to demonstrate how the scenes involving the light disc fights might look. The studio gave them only the most basic of help, but with a tiny budget, costumes borrowed from "The Black Hole" (a 1979 sci-fi film directed by Gary Nelson), hockey outfits and the skills of local frisbee expert Sam Schatz, they filmed two minutes of footage. With the addition of a few CGI shots, the resulting test footage was good enough to convince Disney to sign off on the budget, giving Lisberger and Kushner the money they needed to produce the film.
There can be little doubt that the animators viewed these upstarts with suspicion and a certain amount of trepidation. They weren't the only ones, though. Surprisingly, the Motion Picture Academy refused to nominate Tron for a special effects award. Nowadays, almost every film which wins that award does so thanks, at least in part, to CGI. At the time, however, Lisberger noted: "The Academy thought we cheated by using computers".2
The '80s were really the proving ground for CGI. From Tron's rapidly created CGI scenes to the Stained Glass Knight from 1985's Young Sherlock Holmes, directors and animators were starting to get to grips with the idea of inserting digital characters and scenes into live action films. The technology really started advancing with the 1986 film "Flight of the Navigator".
In 1985, Bob Hoffman at Digital Effects had been developing software around the concept of reflection mapping. The basic principle was that a photo of a scene could be mapped onto a digitally created surface, making it seem as though the object in question was accurately reflecting the environment around it. The company's then-president, Jeff Kleiser, had the team create a short test for his brother, who was developing the film (again, a Disney production). The footage showed a simple ship flying toward the camera, turning, and flying away again, with the environment mapped to the mesh of the digital model.
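The core of the idea fits in a few lines: mirror the view direction about the surface normal, then use the resulting direction to look up a pixel in a photograph of the surroundings. The sketch below is purely illustrative (the function names and the equirectangular photo lookup are my assumptions, not Digital Effects' actual code):

```python
import math

def reflect(view, normal):
    # r = v - 2*(v . n)*n: mirror the view direction about the normal
    d = sum(v * n for v, n in zip(view, normal))
    return tuple(v - 2 * d * n for v, n in zip(view, normal))

def env_lookup(direction, width, height):
    # Map a 3D direction to (x, y) in an equirectangular photo of the
    # environment; the pixel found there becomes the reflected colour.
    x, y, z = direction
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return int(u * (width - 1)), int(v * (height - 1))

# A camera looking straight down (-y) at an upward-facing surface (+y)
# sees the part of the environment directly above the scene.
r = reflect((0, -1, 0), (0, 1, 0))
print(r)                          # (0, 1, 0): reflection points straight up
print(env_lookup(r, 360, 180))    # (179, 0): top row of the photo
```

Done per frame, per visible point on the mesh, this is what made the ship appear to mirror its surroundings.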
Digital Effects imploded before they could be hired to work on the film; however, Kleiser went to work for the Motion Picture Special Effects Division at Omnibus Computer Animation. He brought Hoffman with him, among others, and the team worked to create the ship in the film.
To create the ship itself, video of the scenes was digitised, then overlaid frame by frame onto the digitally created surface of the vessel. This was done using a VAX, an early computer and one of the first capable of processing a million instructions per second (one MIPS). It was when it came to rendering the vessel itself that things got really tricky. The company used a separate machine, a Foonly F1, to render the film.
The Foonly had only enough disk space to store one frame at a time, which meant that for 30 seconds of film, 720 images had to be computed one by one. With each frame taking 20 minutes to compute and render, that single 30-second sequence would take ten days to render, assuming all went well. And often, all didn't go well. During one render run, an electrician working on a power panel dropped a wrench, which immediately short-circuited two of the three-phase power busses together, killing the Foonly.3
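The arithmetic behind that ten-day figure is easy to verify, assuming film's standard 24 frames per second:

```python
FPS = 24                  # standard film frame rate
frames = 30 * FPS         # 30 seconds of footage
minutes = frames * 20     # 20 minutes of compute per frame

days = minutes / 60 / 24  # convert minutes to days
print(frames)             # 720
print(days)               # 10.0
```

And that's with zero allowance for crashes, re-renders, or dropped wrenches.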
Compounded by the fact that, even on a good day, the machine would crash numerous times, the idea of a ten-day turnaround was wildly optimistic. To ensure the film was completed on time, Hoffman took the code to the San Diego Supercomputer Center to use their machines, porting it to run on their Cray systems. Without that, the film would never have made its deadline. Film render times were already becoming substantial, the code complex, and the hardware requirements non-trivial.
Things only got more complex with the arrival of "Luxo Jr" in 1986, which lives on in part as the lamp sequence that precedes every Pixar film. Conceived by John Lasseter and Ed Catmull, it banished the ghost of Tron by being nominated for an Academy Award (the first CGI film to receive that recognition), and marked the first time shadows were rendered accurately, something distinctly missing from Navigator's effects.
However, as a result of the added complexity, and its advanced use of lighting, it took almost a year of computer time to render. That was a huge amount of time for the era, but would soon seem quaint by comparison to what was coming.
It was the nineties though that marked the start of CGI which successfully blended live action film and digital animation, where the latter was realistic enough to be believed. Thanks to massive increases in computing power, combined with vastly more sophisticated and nuanced algorithms, digitally created visuals were about to advance at an unbelievable rate.
It's the End of the World
Digital animation reached new heights in 1991 with the arrival of James Cameron's Terminator 2: Judgment Day (far from the last time he turns up for this reason). Often considered the greatest achievement in CGI since Tron, nine years earlier, Terminator 2 ushered in a new era of digital graphics. Unlike the films before it, the driving force behind its CGI was to create photorealistic effects: a believable character that interacted with actors and elements in the real world. The five minutes of screen time featuring the metallic T-1000 took a team of 35 people ten months to create. Industrial Light and Magic, who'd recently wrapped the Oscar-winning effects for The Abyss, handled everything from the initial concept storyboards to the modelling, animating, lighting, rendering and compositing.
The animation for the T-1000 included building a virtual model of Robert Patrick's face, using laser scanning and cameras to build a database of his features. His body, however, was wireframed and animated by hand. Alex Seiden, now Senior R&D Engineer at Tippett Studio and formerly Technical Director at Pixar, was at the time a visual effects supervisor at ILM, and led the development of the surface shading and lighting techniques used. Talking about his time working on the film, Seiden noted:
"For most of the show I had a jar of mercury sitting on my desk so I could look at it. I could see what it was like when we were trying to light a shot, to see how it moves, how light hits it in a certain way, how it reflects light. References are tremendously important in any kind of artwork."4
Those references, the patience, and the time were vital in pulling off the final effects. In total, those five minutes took 25 man-years to create.
The First Full CG Film
The next serious advance was 1995's Toy Story, which proved for the first time that a fully computer-animated film could be made, and made successfully. Considered the foundation from which almost every modern children's and animated film now borrows, it also marked Pixar's first real commercial success.
With a writing team featuring John Lasseter, Andrew Stanton, Joel Cohen, Alec Sokolow, and Joss Whedon, music by Randy Newman, and executive producers in the form of Steve Jobs and Edwin Catmull, the weight and experience behind it was vast. Created on a budget of just $30 million, it grossed over 12 times that at the box office, and is now considered one of the most culturally important films of all time.
The film itself took a team of 110 to create, including 27 animators, who produced a grand total of 114,240 animated frames. However, despite the increase in computing power over the Foonly F1 used for Flight of the Navigator, the much greater complexity of the scenes, combined with the fact that the whole image had to be rendered rather than just a partial piece, meant that render times averaged between 2 and 13 hours per frame, depending on the complexity of the shot. This was the case despite some judicious design choices to reduce the time involved. For example, rendering real-time hair movement was impossible, so all the characters have short hair, except for Andy's mother, whose hair is always tied back in a ponytail.5
Even with all the tricks used to cut the time down, when rendered on a suite of 107 SPARCstations and a SPARCserver, the whole film took 91 years of computing time to create. When it was re-rendered in 2009 for 3D, scenes rendered around 60 times faster, so a scene which had previously taken two hours took just two minutes. Slotted in for rendering between queues for Pixar's other films, the whole thing was re-rendered in days.
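Those numbers are roughly self-consistent. Taking seven hours as a plausible mid-range of the quoted 2 to 13 hours per frame (my assumption, not Pixar's accounting), the frame count lands very close to the quoted 91 machine-years:

```python
frames = 114_240              # total animated frames in the film
avg_hours_per_frame = 7       # assumed mid-range of the quoted 2-13 hours

machine_hours = frames * avg_hours_per_frame
machine_years = machine_hours / (24 * 365)
print(round(machine_years, 1))   # 91.3 machine-years

# Spread across the farm, wall-clock time becomes feasible. This is an
# idealised estimate assuming perfect machine utilisation:
machines = 107
wall_clock_days = machine_hours / machines / 24
print(round(wall_clock_days))    # 311 days
```

Which is why the farm existed at all: 91 years of compute on one machine, roughly a year of wall-clock time across a hundred.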
A Whole New World
If the previous decade had been about CG going mainstream, it was in the new millennium that we started to see the kinds of effects that we take for granted now, starting really with 2002's Lord of the Rings: The Two Towers.
Gollum was originally created by Weta in 1998, four years before the film would come out. As with Tron years before, the studio required proof that the team could produce what they'd dreamed up; in this case, New Line was the client. In the films, Andy Serkis worked both on set and through motion capture, providing the physical references used to bring the character to life. This meant he had to create the performance twice, allowing both the film and animation teams to match everything correctly. That was vastly outside the original contract, which was for voice work only; however, director Peter Jackson was so impressed with Serkis' performance that he hired him to do all the motion capture as well.
Despite the incredible talent on display, it was again the digital artistry that gave the character a believability of a kind that hadn't previously been seen. The original character modelled for the prior film, The Fellowship of the Ring, was completely different, which forced Weta to re-model the character to fit Serkis' physical characteristics. The resulting change meant re-doing two years' work in ten weeks.
Modelling Gollum himself presented a new challenge. With a body composed of 5,000 separate polygon faces, eyes made from spheres with faux caustics to create a believable sheen, and incredibly detailed texturing, the digital character was a quantum leap beyond the Stained Glass Knight, or anything else seen before.6 Because only the character itself needed rendering, with practical effects used where he moved through water, each shot required only around six hours from start to finish; the team could create a shot during the day, render it overnight, and view the results in the morning.
Gollum's creation also marked the first time subsurface scattering was used to produce realistic skin, a technique which simulates light penetrating below a surface's top layer and scattering within it before emerging. It's now commonly used in everything from films to cutting edge computer games for rendering skin and similar materials.
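Full subsurface scattering simulations are expensive, but the visual effect, light bleeding softly past the shadow terminator with a reddish tint on skin, can be approximated very cheaply. The "wrap lighting" trick below is a common real-time stand-in for the idea, not Weta's production technique:

```python
def skin_diffuse(n_dot_l, wrap=0.5, scatter_tint=(1.0, 0.3, 0.2)):
    """Wrap-lighting approximation of subsurface scattering.

    Plain Lambertian shading clamps max(n_dot_l, 0), cutting light off
    hard at the terminator. "Wrapping" lets light bleed past it, and the
    bleed is tinted red because skin scatters red light the furthest.
    """
    lambert = max(n_dot_l, 0.0)
    wrapped = max((n_dot_l + wrap) / (1.0 + wrap), 0.0)
    bleed = wrapped - lambert            # extra light in the soft region
    return tuple(lambert + bleed * t for t in scatter_tint)

# Right at the terminator (n_dot_l == 0), plain Lambert shading is black,
# but the wrapped version still shows warm scattered light.
print(skin_diffuse(0.0))
```

The `wrap` and `scatter_tint` values here are arbitrary illustrative choices; the point is only that the shadow edge softens and reddens, which is exactly what hard plastic-looking CG skin lacks.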
In the End, There Was Tron
As it turned out, the Disney animators who'd seen Tron in the '80s had been right to distrust the new technology. The arrival of Tron and the success of its blend of CGI and live action footage continued to influence film development for years to come. So successful was the technology, and so complete its takeover of traditional animation, that a mere 22 years later, in 2004, Disney announced the conversion of its animation studios to a completely digital offering.
What was arguably the most successful animation studio of all time, a studio Donald Kushner referred to as "the vanguard of traditional animation" threw in the towel.
3D Grows Up
No stranger to CGI, with films like Terminator 2, The Abyss, Titanic, Solaris and more pushing the boundaries at every turn, James Cameron's Avatar stands apart as marking the first time a complete digital world was mated with 3D with real success. The scale of creating the world of Pandora was truly awesome. Rendered on Weta's render farm of just over a thousand computers, with a total of 40,000 processors and 104 terabytes of memory, the machines crunched 8 gigabytes of data every second of every day for more than a month. The whole library of assets finally clocked in at a petabyte.7
By the time Avatar rolled around, the complexity of computing every scene had reached a level of detail that's difficult to comprehend. For example, Avatar marked the advent of spherical harmonics for lighting.8 Whilst conceptually similar to Fourier series, spherical harmonics describe functions over directions on a sphere, making them a natural fit for three dimensional lighting. Simply put, if you model an object lit by two light sources, one red and one blue, using ambient occlusion alone, you'll end up modelling the light as a general purple glow coming from everywhere. That means the shadows on both sides would look the same.
Spherical harmonics, on the other hand, include a directional component in the lighting. This has many advantages, for example giving much more realistic shadows, because the system "understands" that the shadow on one side is being cast by a red light onto a surface illuminated by a blue light, whilst the reverse is true on the other side.
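The red/blue example can be worked through with just the first two SH bands. This is a minimal, hand-rolled sketch using the standard real spherical harmonic basis constants and the usual band 0/1 irradiance weights; a production renderer like Weta's uses far more bands and machinery:

```python
import math

def sh_basis(d):
    # Real spherical harmonic basis, bands 0 and 1 (standard constants)
    x, y, z = d
    return (0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x)

# Irradiance convolution weights for bands 0 and 1
A = (math.pi, 2 * math.pi / 3, 2 * math.pi / 3, 2 * math.pi / 3)

def project_light(direction, colour):
    # Project a directional light into SH: one coefficient list per channel
    basis = sh_basis(direction)
    return [[c * b for b in basis] for c in colour]

def shade(normal, coeffs):
    # Evaluate irradiance at a surface normal, clamped at zero
    basis = sh_basis(normal)
    return tuple(max(0.0, sum(a * c * b for a, c, b in zip(A, chan, basis)))
                 for chan in coeffs)

# Red light from +x, blue light from -x, summed into one SH environment
red = project_light((1, 0, 0), (1, 0, 0))
blue = project_light((-1, 0, 0), (0, 0, 1))
coeffs = [[r + b for r, b in zip(rc, bc)] for rc, bc in zip(red, blue)]

print(shade((1, 0, 0), coeffs))   # roughly (0.75, 0.0, 0.0): red side is red
print(shade((-1, 0, 0), coeffs))  # roughly (0.0, 0.0, 0.75): blue side is blue
```

With band 0 alone, both sides would shade to the same purple; the band 1 coefficients are what carry the "which direction did this light come from" information.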
Modern filmmakers are pushing CG further than ever. With lighting and texturing approaching the point of being completely indistinguishable from real life, directors are able to push for shots that could only be imagined years ago.
Widely considered one of the most technically accomplished films of all time, Gravity required some seriously inventive film-making, and incredible VFX to bring the space-based world of the film to life. Rather than blathering on more, I'll let this short video from Prime Focus talk you through one single shot. The interesting stuff starts at one minute in.
So that gives you an idea of the complexity involved in modern filmmaking, for a single shot. Multiply that by the number of shots in a film, and you can understand why modern filmmaking costs have spiralled so high.
For Disney's Big Hero 6, the animation team was tasked with rendering San Francisco in a cartoon-esque way. To do it, they combined procedural generation with individually placed models to generate a city of 83,000 individual buildings, modelled on the real city. This included creating new software for generating lighting and reflections, which had to handle more than 200,000 individual light sources, as well as software for populating the city with people: 750,000 of them, all distinct, all moving around the city.9
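The basic mechanics of that approach, scattering varied buildings over a real street layout, can be sketched crudely: give each lot its own deterministic random stream, so the city is both varied and exactly reproducible on every render. Everything here (the grid layout, the height range, the styles) is invented for illustration and has nothing to do with Disney's actual tools:

```python
import random

def generate_city(blocks_x, blocks_y, seed=6):
    """Procedurally place one building per lot on a simple grid.

    Each lot seeds its own RNG from (seed, lot coordinates), so
    regenerating the city (or any single neighbourhood) always
    produces exactly the same buildings.
    """
    buildings = []
    for bx in range(blocks_x):
        for by in range(blocks_y):
            rng = random.Random(seed * 1_000_003 + bx * 1_000 + by)
            buildings.append({
                "lot": (bx, by),
                "height_m": rng.uniform(10, 120),  # varies per lot
                "style": rng.choice(["victorian", "deco", "modern"]),
            })
    return buildings

city = generate_city(10, 10)
print(len(city))                        # 100 buildings
# Deterministic: the same seed always rebuilds the identical city
assert generate_city(10, 10) == city
```

Scale the grid up, swap the grid for real map data, and hand-place the landmark models on top, and you have the shape of the technique, if none of its sophistication.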
To give you an idea as to some more modern work, have a look at this VFX showreel from Oscar-nominated Embassy VFX.
As directors and writers place ever more complex demands on animation teams, it's likely we're only going to see more and more detail and technical prowess in this already impressive area.
With thanks to the following:
- "When You Wish Upon a Tron", Newsweek, 1982
- "Tron 20th Anniversary", San Francisco Gate
- "The Foonly F1 Computer", Dave Seig
- "Visual Effects on Terminator 2 – Page 2", Animator Mag
- "The Physics of the Ponytail", ScienceBlogs
- "Of Gollum and Wargs and Goblins, Oh My!", Computer Graphics World
- "The Data-Crunching Powerhouse Behind 'Avatar'", Data Center Knowledge
- "The Science of Spherical Harmonics at Weta Digital", FX Guide
- "Disney's New Production Renderer 'Hyperion' – Yes, Disney!", FX Guide
If you've enjoyed this post, you might want to follow me on Twitter