Light Stage 3: Surrounding Actors with LEDs to Light them with Images of Virtual Sets

Paul Debevec, Andreas Wenger, Chris Tchou, Andrew Gardner, Jamie Waese, and Tim Hawkins

SIGGRAPH 2002 Conference Papers Talk
July 26, 2002

Updated 1/27/22

Our SIGGRAPH 2002 paper proposed surrounding actors with RGB LEDs driven by images of virtual sets to light actors appropriately for realistic composites, a key component of the virtual production LED stages now being built worldwide. Our goal was to apply image-based lighting techniques to real actors and sets, not just to computer-generated characters. Although Light Stage 3 was built with just 156 RGB LED lights, the paper explained that an ideal LED stage would place the LEDs so close together that "the light stage becomes an omnidirectional display device, presenting a panoramic image of the environment to the actor."

We found a mini-DV video of the talk presentation at SIGGRAPH 2002, and Uni High intern Dina Hashash has edited the video together below with the original PowerPoint slides. The audio could be better, so we've included a transcript of the talk, as well as the original SIGGRAPH 2002 technical paper and papers video.

To our knowledge, our system is the first LED stage to be used in motion pictures: with Sony Imageworks for Spider-Man 3, to record Spider-Man, Sandman, MJ, and Venom under various image-based lighting environments, and with Digital Domain and Lola Effects for The Social Network, to illuminate Armie Hammer with animated HDRI lighting playback so that he could play both Winklevoss twins through facial replacement, with American Cinematographer magazine crediting us with the pioneering work. In 2004, we designed and tested an LED panel virtual production stage for a Digital Domain test for Benjamin Button. Our group then collaborated with Framestore and Warner Brothers on the R&D tests which led to Gravity's "Light Box", which used LED panels to illuminate Sandra Bullock and George Clooney with animated image-based lighting environments to help them appear to be floating in space.

Hollywood took note of our work, but not everyone immediately saw the value of our system. The first SIGGRAPH talk question asked "Is this really necessary? ... people have faked lighting composites for years." Industrial Light and Magic's John Knoll attended the talk and was also skeptical, but has since become one of the technique's most notable proponents, building an enormous LED light stage for Rogue One: A Star Wars Story and reporting that "using that image-based lighting technique to light actors and sets was really successful". Today, image-based lighting from LED stages is used on scores of productions, most famously on the Disney+ streaming series The Mandalorian.

Light Stage 3 served as a springboard for our group's continuing research. Our SIGGRAPH 2002 paper noted the poor color rendition quality of the spectrally peaky RGB LED lights, which we addressed at EGWR 2003 by building LED lights with additional LED spectra and determining how to drive them to mimic the color rendition qualities of traditional light sources. We eventually built Light Stage X3, an entire LED stage with multispectral light sources, and at SIGGRAPH 2016 we showed a practical way to drive the lighting with data from HDRI maps and color charts to get near-perfect color rendition of natural lighting environments. In 2017, we built a studio-sized multispectral lighting reproduction system for SOOVII visual effects in Beijing, which has used the system to accurately light and composite actors for numerous films and television shows.

The structure of Light Stage 3 became the basis of our group's work in performance relighting, which allows the lighting on an actor's performance to be changed after it's been recorded. For SIGGRAPH 2005, we added 156 high-output white LED lights controlled by high-speed circuitry to allow an actor to be illuminated by thousands of different lighting conditions per second. Using our image-based relighting technique from SIGGRAPH 2000 and optical flow, we could synthesize any lighting on the actor in postproduction. We went on to add relighting techniques to our group's volumetric capture research in the Light Stage 6 Relighting Human Locomotion paper and Google's Light Stage X4 Relightables work.

Light Stage 3, with its 156 RGB LED light sources, is now in the collection of the Academy Museum of Motion Pictures.

More Resources: SIGGRAPH 2002 Papers Fast Forward presentation by Andrew Gardner. Our Group Photo just after shooting the SIGGRAPH 2002 examples.

SIGGRAPH 2002 Presentation Slides
PowerPoint .ppt, PowerPoint .pptx, Acrobat .pdf

SIGGRAPH 2002 Technical Paper
Also at the ACM Digital Library, with BibTeX


SIGGRAPH 2002 Papers Video

Talk Transcript

[Steve Marschner, Session Chair] The last paper is entitled A Lighting Reproduction Approach to Live-Action Compositing. The authors are Paul Debevec, Andreas Wenger, Chris Tchou, Andrew Gardner, Jamie Waese, and Tim Hawkins. They're all from the University of Southern California Institute for Creative Technologies. And Paul will be presenting the paper.

[Paul Debevec] Thank you Steve, and good morning! This talk is about trying to composite actors into backgrounds -- either virtual sets or a background of places that have been recorded photographically -- and trying to pay attention to getting the light that is on the actors to be the same light that you see on the background.

This is very much in the spirit of traditional live-action compositing, which is most often done with a blue screen. The actor is filmed in a studio, with studio lighting, in front of a blue screen. The background is recorded separately, and then, using a color difference process, we can get the blue screen to create a matte, which is basically a cutout image that lets you cut the actor's shape out of the background, composite the actor in, and end up with the image of the actor in front of the background. Now the thing that you don't necessarily get here is consistent lighting between the actor and the background, because the actor is illuminated with the light in the studio and the background is illuminated with the light of whatever you happen to have there.

So what we've tried to do is to address this problem and take a look at some work we've done with illuminating computer generated objects such as these,

with measurements of real world illumination, such as these light probe images of various lighting environments. And the kind of technique we generally use for this is to

texture map one of these high dynamic range omni-directional images onto a big sphere of illumination, placing computer generated objects in the center, and then tell your favorite global illumination algorithm to render the appearance of all this light impinging upon the computer generated objects and take a look at what it looks like.
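As a rough illustration of the image-based lighting setup just described, here is a minimal Python sketch that computes the diffuse shading a single Lambertian surface point would receive from a latitude-longitude HDR environment map. It handles only direct light (no occlusion or interreflection, which a full global illumination renderer would add), and the array layout and function names are illustrative assumptions:

    import numpy as np

    def latlong_directions(height, width):
        # Unit direction and solid angle for each pixel of a
        # latitude-longitude (equirectangular) environment map.
        theta = (np.arange(height) + 0.5) / height * np.pi        # polar angle
        phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi      # azimuth
        theta, phi = np.meshgrid(theta, phi, indexing="ij")
        dirs = np.stack([np.sin(theta) * np.cos(phi),
                         np.cos(theta),
                         np.sin(theta) * np.sin(phi)], axis=-1)
        # Solid angle subtended by each pixel: sin(theta) * dtheta * dphi.
        solid_angle = np.sin(theta) * (np.pi / height) * (2.0 * np.pi / width)
        return dirs, solid_angle

    def diffuse_irradiance(env_map, normal):
        # Irradiance at a Lambertian surface point with the given normal,
        # lit directly by the HDR environment map (H x W x 3 float array).
        dirs, omega = latlong_directions(*env_map.shape[:2])
        cos_term = np.clip(dirs @ np.asarray(normal, dtype=float), 0.0, None)
        weights = (cos_term * omega)[..., None]
        return (env_map * weights).sum(axis=(0, 1))   # RGB irradiance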

The kind of renders we tend to get for this look like this. So here are those computer generated objects and they're illuminated by those environments. And in fact at the same time we're actually looking at that sphere of illumination directly, right behind the objects, so you not only see the objects illuminated by those environments, but you see the background as well. And one of the nice things about this effect is that it really looks like those computer generated objects might plausibly even be there because the lighting on the objects is consistent with the lighting that is there in the environment. And so we thought, well, could there be a way to possibly make the same thing happen for real actors being composited into environments? And the thing that seemed to suggest itself was: is there some way to construct a sphere of incident illumination that would be for real, and place an actor into the middle of that?

And that's exactly what this project was all about.

The basic process is that we'll take an actual lighting environment or an environment where we know what the light is, use that to generate a background plate, also use that to generate the image of the actor illuminated by that environment, and then place the actor into the environment so that they look like they're actually illuminated by the environment that they're composited into.

Some related work is a very nice analysis of the blue screen matting process by Smith and Blinn, and a paper at this year's SIGGRAPH about applying that to unconstrained, real-time backgrounds. There's been a number of papers about recombining different basis lightings of an object, or even a human face, in order to show that human face in different lighting environments. And there's also been recent work on taking objects and very accurately simulating the refracted and reflected light through them, including some work that shows this in real time.

For our project, the first thing we needed to do is actually construct that sphere of illumination, and we found a very helpful website here called "Desert Domes" where you just assign how many tessellations of an icosahedron you would like, and the radius, and it tells you all the different lengths of the struts and the number of connectors you need. So we went to the website, plugged in the numbers that we wanted, and put some simple plans together for our local machine shop and got the parts together.

This is Andreas Wenger here putting together the beginnings of the light stage device. In addition, we needed to find some light sources -- there are actually 156 light sources in this light stage -- and we needed lights that could be driven to any color of red, green, and blue illumination. We were extremely fortunate in this case to find something that was useful for that.

They are these iColor MR lights from Color Kinetics. Each one of them has red, green, and blue LEDs which are bright enough these days to illuminate objects and then be recorded with video cameras. And it's got a very convenient USB interface with computers so we can drive quite a number of them in real time.

And then we put all these lights at the vertices of our stage -- which is actually 41 lights -- but we later expanded it to place an additional light at each of these edges in the stage as well, taking it up to a total of 156 lights.

The way that we drive the lights with sample lighting environments such as light probe images that we have here -- this is the Grace Cathedral light probe -- is that we simply take the image and resample it to essentially a 156-pixel version of it with the appropriate downsampling, and then we play that image back onto those light sources to illuminate the actor.
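A minimal sketch of that downsampling step, reusing the latlong_directions helper from the sketch above: each pixel of the HDR light probe is assigned to the nearest of the light directions, and the solid-angle-weighted average radiance over each cell becomes that light's RGB value. The led_directions array and the nearest-direction partition are assumptions for illustration; the exact resampling used in the paper may differ:

    import numpy as np

    def light_probe_to_led_colors(env_map, led_directions):
        # env_map:        H x W x 3 equirectangular HDR image (linear values)
        # led_directions: N x 3 unit vectors toward each light (N = 156 here)
        # Returns an N x 3 array of average radiance seen in each light's cell.
        dirs, omega = latlong_directions(*env_map.shape[:2])
        # Index of the nearest LED for every environment-map pixel.
        nearest = np.argmax(dirs.reshape(-1, 3) @ np.asarray(led_directions).T,
                            axis=1)
        flat_rgb = env_map.reshape(-1, 3)
        flat_omega = omega.reshape(-1)
        colors = np.zeros((len(led_directions), 3))
        for i in range(len(led_directions)):
            mask = nearest == i
            if mask.any():
                # Solid-angle-weighted average radiance over the cell.
                w = flat_omega[mask]
                colors[i] = (flat_rgb[mask] * w[:, None]).sum(axis=0) / w.sum()
        return colors

The 156 resulting RGB values would then be scaled into the stage's available brightness range and converted to 8-bit drive values for the fixtures (see the gamma calibration discussed later in the talk).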

These are, in fact, three of our lighting environments here and some images of actors standing within the light stage illuminated by those environments. You'll see these in motion in just a moment, but what I'd also like to say is that what we really need to make this complete is to be able to show the environments behind the actors as well, and that means we need to obtain a matte of them standing there in the stage.

We could try to go with the traditional process of putting a bluescreen behind the actor and use perhaps the color difference method in order to get the matte, but the drawback to that is we don't really want to have to cover up all those lights with a bluescreen, and another drawback is we don't want to have a lot of blue light back there that's going to be reflecting on the actor, showing through their hair and reflecting on the sides of their face, causing problems with blue spill. What we really want to do is get exactly the lighting that was in the environment without changing it around by placing big colored pieces of cardboard behind. So what we decided to do is to try to get this matte image here with a second camera using non-visible light, so that we could somehow hopefully get a field of infrared illumination behind them and photograph the actor's silhouette against that field of infrared light. The first thing we needed to do was to find some kind of material that would be black in the visible spectrum but also reflect infrared light, so we could light it up with our lighting to get this kind of effect.

We basically looked at a number of black cloths here -- there's some Duvateen, and I think there are about four T-shirts we picked up from various SIGGRAPH conferences -- and we found that, very fortunately, if you look at these things in the infrared, quite a number of commonly available cloths actually reflect infrared light.

Some do, and some don't. We then went to a local fabric store and found quite a nice supply of a very good quality fabric that reflects IR and is also quite dark in the visible spectrum.

So we then took this cloth, sewed it into the shape of the back of our light stage here, and cut holes in it for the lights to peek through, and then augmented the light stage with six of these banks of infrared LEDs to project infrared light onto the back of the light stage. We then built a camera rig that would simultaneously record the actor in the visible spectrum and in the infrared.

And that's what this thing is here. We've got our Sony color camera, it's an NTSC resolution 3-chip video camera. And over here is a monochrome camera which happens to have sensitivity that goes up to the infrared; we put a filter here to block all the visible light. And we put a beam splitter in between at a 45-degree angle so we simultaneously get the same image from both cameras essentially.

The image that we get of the actor in front of the infrared illumination actually looks like this. We weren't able to get a perfectly even field of infrared light behind the actor, but we were able to illuminate the background without spilling too much light onto the actor themselves. The way that we processed this to get the desired matte was, for any particular shot, to have the actor duck down out of the shot and then record a clean plate of what the background looks like. And if you have your camera radiometrically calibrated, you can simply take this image of the actor's silhouette against the infrared background and divide it by this clean plate, and it yields very directly this matte image. Wherever it doesn't change, you're dividing "x" by "x" and you get one, which is a good thing for the background here. And wherever the actor's blocking it you get a small number divided by a not-terribly-small number, and you get something close to zero. Now, these dark areas here where the holes are cut for lights -- very fortunately the front of those light sources actually reflected back enough of the infrared that we got a non-zero response for those, so we were able to have the lights in there. In a future version we'll probably put a little bit of a dark mesh in front of them so light can get through and they'll reflect even more infrared. You may see some artifacts in some of the mattes, a couple of twinkling pixels, because these things were getting close to the thresholds that we had. This is, by our standards, a reasonably good quality matte, and with just a little bit of garbage matting here we're able to clean it up just a little bit more

and that's the kind of thing that we use in our composites to take our actors who are illuminated by that light and then composite them into those lighting environments. And if we can go to the video, I'll show some stuff in motion!
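A rough sketch of the clean-plate division just described, assuming radiometrically linearized infrared frames; the soft-threshold values below are made-up illustrative numbers, not the ones used for the paper:

    import numpy as np

    def ratio_matte(ir_frame, clean_plate, eps=1e-4, lo=0.2, hi=0.8):
        # Both inputs are linear monochrome float arrays from the IR camera.
        # Where the frame matches the clean plate the ratio is ~1 (background);
        # where the actor blocks the IR backing the ratio drops toward 0.
        ratio = ir_frame / np.maximum(clean_plate, eps)
        # alpha = 1 for foreground (actor), 0 for background, soft in between.
        alpha = np.clip((hi - ratio) / (hi - lo), 0.0, 1.0)
        return alpha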

This is our first lighting environment spinning around here. It's a high dynamic range omnidirectional image and it's being converted in real time to those 156 light source directions. This is our friend Emily here in the light stage; we're actually rotating her around on a platform so that we can simulate the camera moving around her, and we're interacting and also moving the light around her and she moves around as well. This is what the infrared camera saw. We flipped it left-right so it's the same orientation as the color camera image. And here's the clean plate that we used to divide the two to create the matte that's cleaned up. We had her flip her hair around a little bit so we could test if the matte would work out reasonably well. And here she is, lit by the light of the Berkeley Eucalyptus Grove and simultaneously composited into it.

We have another example here. This is Grace Cathedral. And here's the lighting environment, which consists principally of this bright yellow altar on one side of the cathedral here, and there are also some somewhat bluish stained glass windows at the top. You can see the altar is being replicated here, and in the background, you can see a little bit of the stained glass windows reflecting in her hair and on her forehead. And when we twirl things around here we can see some rim lighting on her face there from the altar as it goes by and some reflections of the stained glass windows out there.

We have one more example here. This is the Uffizi Gallery, and we can cut pretty quickly to the final composite. Another friend of our laboratory, Elana, is in here. That stick is to synchronize the two cameras. And there's Elana in the Uffizi Gallery. [Applause] Thank you.

So in the paper we talked about some issues related to the fact that we have somewhat limited resolution of the incident lighting -- we only have 156 lights. I think for future versions it would be very helpful to have more of them. If you look particularly at Elana in the Uffizi Gallery as she rotates around, you can see a little bit of artifacting because of the multiple shadows you get from the different light sources, which would be solved if we had more light sources or larger area light sources. We were certainly happy with the quality of the results that we got from the infrared compositing system and wondered: "well certainly someone must've done this before?" It's such a good idea and eliminates a lot of problems with blue spill. And with a little bit of research we were able to find that this technique dates back as early as the 1940s using film.

We found that it was first proposed by Leonard Pickley in a paper called Composite Photography, and there was a paper in the Journal of the Society of Motion Picture and Television Engineers by Zoli Vidor describing a version of the process that was used on several films. It was actually quickly replaced by something called the sodium process, where you use sodium light behind the actor -- a yellow monochromatic light which you could then notch-reject from a color camera and bandpass-accept onto a second piece of film. Both the infrared and the sodium processes used old Technicolor cameras. Those were the first cameras that could record color, by running three strips of black-and-white film through at once. Once we had color film they weren't as necessary, but people would use them for other tricks, such as visual effects, or running infrared film through one of the slots and a strip of color film down another. The sodium process was used, for example, on Mary Poppins, with Dick Van Dyke being surrounded by penguins, and it was used for Hitchcock's movie The Birds, which has incredible matte work if you're able to get the DVDs of those. They were eventually replaced by the blue screen process because the color difference method made it possible to get a good quality matte from a single regular motion picture camera, which is a lot easier to use than one of these old Technicolor cameras. Of course, you don't have to use the infrared compositing process with our technique; you could use a bluescreen process if you wanted to.

And Professor Masa Inakage at Keio University has been working with a light stage device here and has used a front-projection blue screen process, where there's a retroreflective material back there and a camera with a ring of blue LEDs, so the actor is seen with what looks like a relatively traditional bluescreen behind them -- except that it comes from the retroreflectivity of the background. And you can do the composite that way as well.

I'd like at this point to talk about a number of the calibration processes that we went through to get the composites that we're showing, and that's actually what a lot of the paper concerns itself with. We really tried to get correct radiometric consistency between the background and the actor, and so we carefully calibrated the color response of the light sources. We found the Color Kinetics lights actually have a gamma curve of about 1.9 when we send the 8-bit values in. That was enormously helpful for us because it effectively extends the dynamic range of the devices to closer to maybe 10 or 12 bits.
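In practice that means the desired linear light output gets run through the inverse of the measured gamma before being sent to a fixture. A sketch, assuming an ideal power-law response of 1.9 over 8-bit values (the real calibration in the paper is measured, not assumed):

    def linear_to_drive(intensity, gamma=1.9, max_value=255):
        # 8-bit drive value that produces the requested linear output (0..1)
        # from a fixture whose output follows (value / max_value) ** gamma.
        intensity = min(max(intensity, 0.0), 1.0)
        return round((intensity ** (1.0 / gamma)) * max_value)

    def drive_to_linear(value, gamma=1.9, max_value=255):
        # Linear light output (0..1) produced by an 8-bit drive value.
        return (value / max_value) ** gamma

Because gamma-encoded code values are spaced more finely at the dim end of the fixture's output range, an 8-bit drive signal gives much finer control over low light levels than a linear one would, which is presumably why it behaves more like a 10- or 12-bit linear control.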

We also calibrated the Sony color camera, as well as the infrared camera, for their response curves using our HDR Shop software. And for both of the cameras as well, we geometrically calibrated them. The first thing we did was to find out where their nodal points were -- their centers of projection. And we have a technique that we talked about in the paper that was able to figure out the center of projection of both of the cameras I think to within about a millimeter in terms of how far it was behind the lens using just a piece of cardboard and a ruler. It's pretty easy to implement. And using that technique we found the relative distance of the nodal points behind the front surface of the lens and that allowed us to place these two cameras at the distances behind the front surface of the glass such that the cameras were effectively directly on top of each other, and the two images should theoretically line up very well ...

except for the fact that we have two different lenses which have approximately the same focal length. But they probably also had different radial distortion characteristics and such. So if you take a look at the two images that you would get for the matte and the actor, you wouldn't necessarily expect them to line up with each other. Of course, since the cameras are nodally aligned, we'd expect that only a two-dimensional transformation would be needed to map the matte image to the color image.

In order to determine that, we simply put a checkerboard grid in the light stage illuminated with incandescent light so that we could photograph it simultaneously in the color spectrum and in the infrared spectrum.

Here's the color image there and here's the infrared image, and you can see they don't line up; in fact they're not even technically the same number of pixels. But we annotated one of the squares, and by clicking on these four points here we seeded an algorithm that then automatically detected all of the corners in the infrared grid and the color grid. We then correspond all of these different grid squares and compute a series of homographies that map each square of the infrared image to the matching square of the color image. And that allowed us to take the infrared image and produce a warped infrared image like this that lines up very well with the original color image.
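A sketch of that per-square warp using OpenCV. The corner detection itself isn't shown, and the array layouts (grids of corresponding corner positions) are assumptions for illustration:

    import cv2
    import numpy as np

    def warp_ir_to_color(ir_image, ir_corners, color_corners, color_shape):
        # ir_corners, color_corners: (rows x cols x 2) arrays of corresponding
        # checkerboard corner positions, in (x, y) pixel coordinates.
        rows, cols = ir_corners.shape[:2]
        warped = np.zeros(color_shape[:2], dtype=ir_image.dtype)
        for r in range(rows - 1):
            for c in range(cols - 1):
                src = ir_corners[r:r+2, c:c+2].reshape(4, 2).astype(np.float32)
                dst = color_corners[r:r+2, c:c+2].reshape(4, 2).astype(np.float32)
                # Homography mapping this IR grid square onto the color image.
                H, _ = cv2.findHomography(src, dst)
                cell = cv2.warpPerspective(ir_image, H,
                                           (color_shape[1], color_shape[0]))
                # Keep the warped pixels only inside the destination square.
                mask = np.zeros(color_shape[:2], dtype=np.uint8)
                quad = dst[[0, 1, 3, 2]].astype(np.int32)  # around the square
                cv2.fillConvexPoly(mask, quad, 1)
                warped[mask == 1] = cell[mask == 1]
        return warped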

Using this technique we're able to take the original matte image that we get, warp it so it lines up with the original color image, and we can show that it's basically lining up with the color image of the actor, and use that to create a "hold out" matte for cutting the actor out of the background. Since the actor is originally photographed on black, the actor is what's called "self-matting", which means you simply have to add that image in to that background image in order to form the composite. We're also careful to make sure we don't add the light sources into this image by requiring a threshold amount of brightness in the matte before placing the actor in. So there's the final composite.
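A sketch of that additive compositing step; the matte threshold used to keep the visible light fixtures out of the foreground is an illustrative value:

    import numpy as np

    def self_matting_composite(actor_rgb, matte, background_rgb,
                               matte_threshold=0.1):
        # actor_rgb:      actor photographed on black in the stage (linear)
        # matte:          warped holdout matte, 1 = actor, 0 = background
        # background_rgb: rendered or photographed background plate (linear)
        alpha = matte[..., None]
        # Only add the actor frame where the matte says the actor is, so the
        # bright light fixtures visible in the frame don't get added in.
        keep_fg = (alpha > matte_threshold).astype(actor_rgb.dtype)
        return background_rgb * (1.0 - alpha) + actor_rgb * keep_fg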

There's a couple of other very important color calibration processes that we used in the paper that I'll just touch on here. One thing that was very important was the color calibration of the device. The lights have a certain red-green-blue spectrum, the camera has a certain red-green-blue spectral sensitivity, and we had to find a way of driving the lights so they would show up as the appropriate colors when photographed by the particular color camera that we had. The way that we ended up doing that was by placing a white reflectance standard inside the light stage -- you can see it here -- observing what color the reflectance standard ended up being for the colors we drove the lights with, and computing a 3-by-3 transformation matrix from the lights' color space to the camera's color space. We also had to compensate for the fact that the light sources are a little bit like spotlights: they're brighter in the center than on the sides. The way that we calibrated that out was to take a big white card, place it into the light stage, and turn all the lights on at the same time, and that gave us this falloff image or vignetting image. Then, since we have a white card in there, we normalize all the images we get off the color camera by dividing their pixel values by the pixel values that are in this image here.
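A sketch of those two calibrations: a least-squares 3-by-3 matrix fit from pairs of (light drive color, observed camera color of the reflectance standard), and the flat-field division by the all-lights-on white-card image. The fitting approach and function names here are assumptions, not necessarily what the paper's implementation does:

    import numpy as np

    def fit_color_matrix(drive_rgbs, observed_rgbs):
        # Least-squares 3x3 matrix M such that camera_rgb ~= M @ drive_rgb,
        # fit from N measured (drive color, observed color) pairs.
        drive = np.asarray(drive_rgbs, dtype=float)        # N x 3
        observed = np.asarray(observed_rgbs, dtype=float)  # N x 3
        X, *_ = np.linalg.lstsq(drive, observed, rcond=None)
        return X.T

    def desired_to_drive(desired_rgb, M):
        # Drive color that should photograph as the desired camera color.
        return np.linalg.solve(M, np.asarray(desired_rgb, dtype=float))

    def flat_field(frame, white_card_image, eps=1e-6):
        # Divide out the spotlight falloff using the image of a white card
        # lit by all of the stage's lights at once.
        return frame / np.maximum(white_card_image, eps)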

So for our final example I'd like to show something where we composited an actor into a three-dimensional virtual set where they're actually walking around in the environment and the light changes on them depending on where they walk. The set that we had was a set of the Greek Acropolis; in particular the idea we had was to have the actor walk along the colonnade of the Parthenon here. This is the model that we constructed and then illuminated with an image-based lighting environment. In this case we actually went and photographed the sky itself and used that as the illumination on this environment and then rendered it in the Arnold renderer by Marcos Fajardo so that the final renderings we get of the background plates are computed with global illumination with respect to the actual radiometry.

So the basic elements that we had then were the background plate with the camera tracking backwards inside the virtual environment down the colonnade, then to get the lighting we took a virtual mirrored ball, put it at the position that the actor's head would be as they walked forward and did an ambient rendering of this mirrored ball and we saved these out in high dynamic range so they essentially are a virtual light probe of the environment moving forward. And we took that light probe movie and played it back in the light stage as our actor walked in place. And did a final composite. So if we can go back to the video I'll show this example.

So what we'll see first here is the background plate. The idea is that it's late afternoon and the sun's coming through the columns, and the actor will walk into shadow and into sunlight as they move forward. Here's the virtual light probe sequence moving forward. Here's Elana doing an admirable job of looking like she's walking forward, even though she's just walking in place (this was her first time doing it). And you can see the light is going on and off as she walks forward ... that's her walking into the sun and into the shadows. Here's her matte image as she walks forward ... here's the derived matte that we used. And finally, this is Elana walking forward in the Parthenon. There's one other thing that I'd like to point out: we put a couple of colored banners here on the side to see if we were also getting indirect illumination characteristics. The idea is that when she walks by the red banner she'll receive a little bit of red indirect light on that side of her face, and when she walks by the blue banner she gets a little bit of a blue tinge on that side of her face as well. So we're getting both the direct and indirect lighting in the environment.

So for future work, one of the things that we would like to be able to do is to reproduce near-field incident illumination as well. When Elana is walking forward and she steps into the shadow of the Parthenon column, she actually darkens all at the same time. Really what should happen is that she should have a visible shadow crossing along her face as she goes through. We can't reproduce that with these light sources, each of which illuminates the entire actor. One way you could do that is by replacing all those light sources with video projectors, which would then reproduce these spatially varying incident fields of illumination. The other thing that we'd like to do is to add some artistic control to our software, so that you can not only get exactly the correct light for where the actor is, but then augment it using the standard tools of cinematography. In any feature film that you see, the lighting is always being manipulated very carefully and artistically by the cinematographer: placing rim lights, fill lights, and bounce cards all around the actors. We can do all of these same things in our light stage as well, using the correct answer as what we start with and then artistically moving away from that as a cinematographer might see fit. We also would really like to have more lights so we don't have doubled shadows; another thing that you'll see in the paper is a picture of a close-up of somebody's eye, illuminated in the light stage, and you can actually see all the different lights there. More lights would solve that problem. We would also like to have brighter lights. We were really pushing the brightness sensitivity of our cameras, particularly in lighting environments where a lot of the illumination is concentrated in one particular area, because in order to simulate that correctly all the rest of the lights have to be quite dim and you're really not using that much of the total lighting power of the stage, so you end up having to turn up the gain of your cameras, which can cause some noise in your composites. So, more and brighter lights is a good idea.

And also, Light Stage 3 is really just a prototype. You can only do basically a medium close-up of an actor in it. Ideally, you'd like to be able to get entire actors, and multiple actors, into scenes where they can actually walk around for real. So the next step is to build a larger stage: this is a version of Light Stage 4, which is basically 50 feet wide. You can see a couple of actors standing there at the bottom. We have a motion control camera.

And this is a simulation of what might be going on in it. We also used a special, particularly bright light source for simulating the Sun. We're hoping to get this together at some point, and we're working with some people who would be interested in making it happen.

So with that I'd like to conclude the talk and put up some credits. Thank you very much!


Paul Debevec / paul@debevec.org