Acquiring the Reflectance Field of a Human Face

Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark Sagar

SIGGRAPH 2000 Conference Proceedings

Updated 4/10/04

Abstract:

We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.


The Paper — SIGGRAPH 2000 Conference Proceedings: debevec-siggraph2000-high.pdf (3.6MB)
The Movie — SIGGRAPH 2000 Electronic Theater: 320x240 QuickTime (19.6MB), 640x480 DivX (148MB)
The Demo — SIGGRAPH 2000 Creative Applications Laboratory: Facial Reflectance Field Demo
SIGGRAPH 2000 PowerPoint Slides (15.4MB)
SIGGRAPH 2000 Papers Video (DivX, 54MB)

The Light Stage. The primary apparatus used in our work is the Light Stage. Version 1.0 of the Light Stage is designed to move a small spotlight around a person's head (or a small collection of objects) so that it is illuminated from all possible directions in a short period of time. From this data, we can then simulate the object's appearance under any complex lighting condition. A set of stationary digital video cameras records the person's or objects' appearance as the light moves around, and for some of our models we precede the lighting run with a geometry capture process using structured light from a video projector.

Light Stage 1.0 consists of a wooden superstructure and a two-bar mechanism made of ABS plastic tubing to rotate the light in azimuth (theta) and inclination (phi). The primary theta axis is attached to the superstructure at the top using two lazy Susan bearings which form an axle. The 300' power cord to the light is wound around this axle, which allows the theta axis to be spun by pulling the cord. The axle triggers an electronic beep each revolution, which is recorded by the digital video cameras and is later used to register the light source directions. The phi-axis bar is lowered manually by a marked string (actually a ball-chain) which runs through the axle and attaches to the phi bar near the light. The phi operator lowers the axis 180/32 or 180/64 degrees per theta revolution. The lighting process takes approximately 80 seconds to record a 64x32 sampling of lighting directions. The materials to build the rig were bought in 10 successive trips to Home Depot for a total of $1,000. Light Stage 1.0 was operational from December 1999 to January 2000 in generously loaned space from Professor Shawn Brixey of the UC Berkeley Art Practice Department.
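The theta/phi sampling described above can be turned into unit lighting directions for registration and relighting. The sketch below is an illustrative assumption, not the paper's exact convention: it takes theta as azimuth in [0, 2pi), phi as inclination from the pole in [0, pi], and places each sample at the center of its grid cell.

```python
import numpy as np

def light_direction(theta_idx, phi_idx, n_theta=64, n_phi=32):
    """Map a (theta, phi) light-stage sample index to a unit direction.

    Assumed convention (hypothetical, for illustration): theta is
    azimuth in [0, 2*pi), phi is inclination measured from the +y
    pole in [0, pi], and samples sit at cell centers.
    """
    theta = 2.0 * np.pi * (theta_idx + 0.5) / n_theta
    phi = np.pi * (phi_idx + 0.5) / n_phi
    return np.array([np.sin(phi) * np.cos(theta),
                     np.cos(phi),
                     np.sin(phi) * np.sin(theta)])
```

In practice the per-revolution beep recorded by the cameras would supply the theta index for each video frame, while the revolution count supplies the phi index.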

Light Stage 1.0
Chris Tchou adjusts the video projector unit used for structured lighting, which is on wheels so that it can be moved to three locations and then removed during a capture session. H.P. Duiker sets up one of the two Sony VX1000 mini-DV cameras used for reflectance capture. Tim Hawkins marks off lengths on the phi-axis ball-chain at the top of the Light Stage. Just left of him are the theta axle and power cord, currently in need of re-winding.

H.P. sits in the rig before a capture session, while Tim and Chris take the reins of the theta and phi axes. The theta-cord is now wound. A 1-minute-exposure photograph of a reflectance field capture session; the path of the spotlight is traced out as a spiraling luminous streak. A photograph of a set of objects being captured; the exposure lasted for three theta revolutions of the 64-revolution capture session.

While the rig runs, the cameras record an image of the person under every possible direction of illumination. We can recombine this set of images to generate a rendering of what the person would look like under any complex form of illumination, in which light arrives from every direction with different colors. Most importantly, this includes illumination that we have sampled from the real world using the light probe lighting capture technique. Below are five images of a subject's face illuminated with light from different environments:
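Because light is additive, this recombination is just a weighted sum: each basis image is scaled by the color of the environment's light from that direction, and the results are accumulated. A minimal sketch, assuming the basis images and the sampled environment share the same 32x16-style (phi, theta) grid (any per-direction solid-angle weighting, proportional to sin(phi), is assumed to be folded into the environment values already):

```python
import numpy as np

def relight(basis_images, env_map):
    """Relight a subject from light-stage data.

    basis_images: (n_phi, n_theta, H, W, 3) array, one RGB image per
        lighting direction.
    env_map: (n_phi, n_theta, 3) sampled incident illumination on the
        same direction grid (solid-angle weights assumed pre-applied).

    Returns an (H, W, 3) rendering: the sum over all directions of
    each basis image scaled by the environment's color there.
    """
    return np.einsum('ptc,pthwc->hwc', env_map, basis_images)
```

A useful sanity check of the linearity is that relighting with an environment that is 1.0 in a single direction and 0 elsewhere reproduces the corresponding basis image exactly.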

The environments are, left to right, St. Peter's Basilica, the UC Berkeley Eucalyptus Grove, The Uffizi Gallery in Florence, Grace Cathedral in San Francisco, and a synthetic test environment. These lighting environments are available in the Light Probe Image Gallery. Below each face rendering is the lighting environment resampled to the same resolution as the Light Stage lighting direction data at 64 × 32 pixels.
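Resampling a light probe down to the 64 x 32 grid of lighting directions can be sketched as block-averaging a latitude-longitude image. This is a simplified illustration under assumed conventions (rows are inclination, columns are azimuth, and the probe dimensions divide evenly); a more careful version would weight texels within each cell by solid angle, proportional to sin(phi).

```python
import numpy as np

def downsample_latlong(probe, n_theta=64, n_phi=32):
    """Box-average a latitude-longitude environment map to n_theta x n_phi.

    probe: (H, W, 3) lat-long image; H rows span inclination, W columns
    span azimuth. Assumes H and W are multiples of n_phi and n_theta.
    Plain averaging here; solid-angle weighting is omitted for brevity.
    """
    H, W, C = probe.shape
    bh, bw = H // n_phi, W // n_theta
    blocks = probe[:n_phi * bh, :n_theta * bw].reshape(
        n_phi, bh, n_theta, bw, C)
    return blocks.mean(axis=(1, 3))
```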


In our work we also obtain a range scan of the face, which allows us to project the re-illuminated appearance onto the geometry and render the face from arbitrary viewpoints. We use a second view of the face, from the mirror-image direction, to obtain coverage of the entire face. We use a reflectance analysis technique to detect and re-render the specular reflection of the face so that it is consistent with the novel viewing directions. Some of these results are shown on page 10 of the paper above. The model is actress Jessica (a.k.a. De'Lila) Vallot, star of Pacific/Title Mirage's SIGGRAPH 99 and 2000 Electronic Theater hits "The Jester" and "Young at Heart".



Paul Debevec / paul@debevec.org