Elastic Surface Deformation with Subsurface Scattering,
Bump/Elasticity/Group/Anchor Mapping + Skin Detection

 
 
regular
[t]exture
group
skin
height
anchor
[p]hong
[b]ump
[p+t]
[p+b+t]
[s]ubsurface
[s+b+t]

Rogerio Feris + Alex Olwal
{rferis, olwal}@cs.ucsb.edu

About the Project

We have developed a system for intuitive elastic surface deformation, rendered with subsurface scattering and bump mapping. Our automatic preprocessing step supports generic image processing, and our skin segmentation algorithm lets the system automatically make skin vertices more elastic than the other parts of the model.

The system uses a multi-flash camera for automatic bump map acquisition; other, manually created maps are used to group vertices and lock geometry.

The subsurface scattering provides sophisticated rendering as the user deforms the model.

While the examples shown here are models of human heads, we wish to emphasize the generality of the method. Most components do not depend on the specific model used: the multi-flash camera can create a bump map of any object, the group and anchor maps are general by construction, and so are the shaders applying the realistic effects. The skin segmentation can be replaced or combined with any image processing algorithm, providing the potential for very powerful systems. Image processing could, for instance, provide edge and feature maps, and we envision a fully automatic system that generates group and anchor maps as well as other data of interest.


Bump mapping & subsurface scattering

Data Acquisition / Modelling  

We used a multi-flash camera for image acquisition. Unlike [1], we used a large camera-flash baseline in our setup, which allows us to detect not only depth edges in the face but also important details such as hair, wrinkles, and beard. We obtained images from both frontal and profile views of a person and created a 3D model from them.

Multi-Flash Imaging

Input frontal and profile images and corresponding NPR outputs (see [1] for details)

3D Modelling

We generate a 3D model from the frontal and profile views using the 3DmeNow application. We use the glm library to read Wavefront .obj model files into OpenGL. The mesh model is separated into three groups, face, eyes, and mouth, which are rendered and processed separately.
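
For reference, a minimal sketch of loading and drawing the model with glm (Nate Robins' library) is shown below; the function names follow glm's public API, though the exact calls in our code may differ.

   #include <GL/glut.h>
   #include "glm.h"

   GLMmodel* model = NULL;

   void loadModel(char* path)
   {
       model = glmReadOBJ(path);        // parses vertices, texture coords, and groups
       glmUnitize(model);               // scale and translate the model into a unit cube
       glmFacetNormals(model);          // per-face normals
       glmVertexNormals(model, 90.0f);  // smooth per-vertex normals (90 degree crease angle)
   }

   void drawModel(void)
   {
       // Draw with smooth normals and texture coordinates. The face, eyes, and
       // mouth groups stored in the .obj file are handled separately by the
       // deformation and rendering code.
       glmDraw(model, GLM_SMOOTH | GLM_TEXTURE);
   }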

 

 

Bump Mapping   

Human faces have small-scale bumpy features that are too fine to represent efficiently with tessellated geometry. We used bump mapping with our NPR output as a height field to (1) create fine-detail 3D models automatically and (2) create non-photorealistic rendering illustrations.

Generating the Normal Map

We used our NPR output obtained with multi-flash imaging as a height field to create a normal map. The main advantage of our approach is that the NPR image captures bumpy features such as hair, wrinkles, and beard, allowing us to create fine-detail 3D models and NPR illustrations automatically.

In order to create the normal map, we first negate the NPR texture image so that darker regions of the height field are lower and lighter regions are higher. Then we compute the normals from the partial derivatives of the height-field surface, as described in the Cg book [2], page 203.
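
A CPU-side sketch of this conversion is shown below, assuming the NPR image has already been negated and loaded as a height field in [0, 1]; the central-difference scheme and the scale parameter are illustrative and follow the partial-derivative construction in [2].

   #include <cmath>
   #include <vector>

   struct RGB { unsigned char r, g, b; };

   // h holds w x ht height values in [0, 1] (the negated NPR image);
   // scale controls how strongly the height field perturbs the normals.
   std::vector<RGB> heightToNormalMap(const std::vector<float>& h, int w, int ht, float scale)
   {
       std::vector<RGB> out(w * ht);
       for (int y = 0; y < ht; ++y) {
           for (int x = 0; x < w; ++x) {
               // Central differences approximate the partial derivatives dh/dx and dh/dy.
               float dx = h[y * w + (x + 1) % w] - h[y * w + (x - 1 + w) % w];
               float dy = h[((y + 1) % ht) * w + x] - h[((y - 1 + ht) % ht) * w + x];
               // Normal of the surface z = scale * h(x, y).
               float nx = -scale * dx, ny = -scale * dy, nz = 1.0f;
               float len = std::sqrt(nx * nx + ny * ny + nz * nz);
               nx /= len; ny /= len; nz /= len;
               // Pack components from [-1, 1] into [0, 255], as usual for normal maps.
               out[y * w + x] = { (unsigned char)((nx * 0.5f + 0.5f) * 255.0f),
                                  (unsigned char)((ny * 0.5f + 0.5f) * 255.0f),
                                  (unsigned char)((nz * 0.5f + 0.5f) * 255.0f) };
           }
       }
       return out;
   }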

Tangent-space Bump Mapping

Since we are dealing with arbitrary geometry, we need to align the coordinate system of the normal vectors in the normal map (tangent space) with that of the light vector. This is done by creating a rotation matrix for each vertex, with columns given by the corresponding normal, binormal, and tangent vectors (see [2], page 225, for how to compute these vectors). We transform both the light vector and the normal-map vectors into the same consistent eye-space coordinate system.
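
A sketch of constructing this per-vertex basis is below; the helper types are minimal stand-ins, and the column order (tangent, binormal, normal here) is a storage convention that simply has to match how the normal-map components are interpreted.

   #include <cmath>

   struct Vec3 { float x, y, z; };

   static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
   static Vec3  cross(Vec3 a, Vec3 b)  { return { a.y * b.z - a.z * b.y,
                                                  a.z * b.x - a.x * b.z,
                                                  a.x * b.y - a.y * b.x }; }
   static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
   static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
   static Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

   // Builds the 3x3 rotation matrix (row-major) whose columns are the per-vertex
   // tangent, binormal, and normal; it rotates normal-map vectors out of tangent
   // space, and composing it with the modelview rotation brings them into the
   // same eye space as the light vector.
   void tangentBasis(Vec3 normal, Vec3 tangent, float m[3][3])
   {
       Vec3 n = normalize(normal);
       Vec3 t = normalize(sub(tangent, scale(n, dot(tangent, n))));  // Gram-Schmidt orthogonalize
       Vec3 b = cross(n, t);                                         // binormal completes the frame
       m[0][0] = t.x;  m[0][1] = b.x;  m[0][2] = n.x;
       m[1][0] = t.y;  m[1][1] = b.y;  m[1][2] = n.y;
       m[2][0] = t.z;  m[2][1] = b.z;  m[2][2] = n.z;
   }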

Results


From left to right: 3D model, bump mapping with small scale, bump mapping with large scale.


From left to right: Texture-mapped 3D model, Fine-detail 3D model with bumpy features created automatically, Non-photorealistic illustration with large scale bump mapping.

 

 

Subsurface Scattering   

We implemented a real-time approximation of subsurface scattering for skin rendering.

Surfaces like skin, paint, and marble are best modeled with subsurface scattering, where most of the light enters the surface layer, interacts with particles inside it, and then exits the layer. Several real-time approximations to subsurface scattering have been proposed [4], including methods based on wrap lighting, depth maps, and texture blurring. We implemented an algorithm based on an approximation of the model proposed by Hanrahan and Krueger [5]. Our implementation closely follows the algorithms described in [3], which were also used in the skin rendering module of the NVIDIA SDK (part of our code was obtained from the SDK).

Phase Functions

When modeling scattering within the layer, we use a phase function to describe the result of light interacting with particles in the layer: it gives the scattered distribution of light after a ray hits a particle. We use the Henyey-Greenstein phase function (see [3], page 55), which depends on the incident and outgoing rays and takes an asymmetry parameter g ranging from -1 to 1, spanning the range from strong retro-reflection to strong forward scattering.

The phase function, together with the scattering albedo, determines the BRDF that describes single scattering from the medium (see [3], page 56). Multiple scattering is empirically approximated by adding together three single-scattering terms with different values of the asymmetry parameter g.
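
A CPU-side sketch of the phase function and the three-lobe sum is below; the asymmetry values are illustrative placeholders, not the constants used in our shader.

   #include <cmath>

   // Henyey-Greenstein phase function; cosTheta is the cosine of the angle
   // between the incident and outgoing rays, g the asymmetry parameter in [-1, 1].
   float henyeyGreenstein(float g, float cosTheta)
   {
       const float pi = 3.14159265f;
       float denom = 1.0f + g * g - 2.0f * g * cosTheta;
       return (1.0f - g * g) / (4.0f * pi * std::pow(denom, 1.5f));
   }

   // Empirical multiple-scattering approximation: three single-scattering lobes
   // with different asymmetry parameters added together.
   float multiLobePhase(float cosTheta)
   {
       return henyeyGreenstein(-0.25f, cosTheta)   // mild back-scattering
            + henyeyGreenstein( 0.30f, cosTheta)   // mild forward scattering
            + henyeyGreenstein( 0.70f, cosTheta);  // strong forward scattering
   }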

Fresnel Effect

We also need to account for the Fresnel effect, which occurs when the light ray enters and exits the surface. It determines the incoming and outgoing directions (and intensities) of the light rays inside the medium, so that the BRDF/scattering is properly computed.
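
As one common approximation (not necessarily the form used in [3] or in our shader), Schlick's formula gives the reflected fraction at the boundary; the index of refraction for skin below is an assumed typical value.

   #include <cmath>

   // Schlick's approximation of the Fresnel reflectance; cosTheta is the cosine
   // of the angle between the ray and the surface normal.
   float fresnelSchlick(float cosTheta, float etaAir = 1.0f, float etaSkin = 1.4f)
   {
       float r0 = (etaAir - etaSkin) / (etaAir + etaSkin);
       r0 *= r0;                                              // reflectance at normal incidence
       return r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
   }
   // The transmitted fraction (1 - F) attenuates light entering and exiting the
   // layer; the refracted directions follow Snell's law.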

Results

From left to right: subsurface scattering with mostly backscattering (note the glow effect), same as before with bump mapping, subsurface scattering with mostly forward scattering.


Subsurface scattering tends to smooth the lighting. We assume a constant surface thickness for the face; modeling thickness properly would produce reddish effects (due to light interacting with blood) in thin facial features such as the ears and nostrils.

 

 

Maps   

We make heavy use of different images to map functionality onto the 3D model. Besides regular texture and bump mapping, we found it interesting and useful to explore the use of several other types of maps.

a] GROUP MAP
groups vertices based on a shared color that is unique to each group.
b] SKIN MAP
image processing on the original texture image allowed us to implement a skin detector: the texture is converted to YCbCr space, where our algorithm extracts skin-colored regions (a sketch follows this list).
c] ANCHOR MAP
sets vertices as anchors in the deformation model.
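
Below is a sketch of the skin detector. It uses OpenCV's current C++ API (the project predates it), and the Cb/Cr thresholds are typical illustrative values rather than the exact ones we tuned.

   #include <opencv2/opencv.hpp>

   cv::Mat detectSkin(const cv::Mat& textureBGR)
   {
       cv::Mat ycrcb;
       cv::cvtColor(textureBGR, ycrcb, cv::COLOR_BGR2YCrCb);  // OpenCV channel order: Y, Cr, Cb
       cv::Mat mask;
       // Keep pixels with Cr in [133, 173] and Cb in [77, 127]; luma Y is ignored.
       cv::inRange(ycrcb, cv::Scalar(0, 133, 77), cv::Scalar(255, 173, 127), mask);
       // Remove small speckles from the binary mask.
       cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                        cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));
       return mask;  // 255 = skin, 0 = non-skin
   }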

 

 

Deformation   

The deformation is based on a spring model, which proved very suitable for freeform 3D deformation of elastic materials. The acceleration of each vertex is determined by the forces that act on it from other vertices. The spring model can be interactively deformed, with the forces being evaluated and applied to the model at each frame. Several maps were implemented to improve the user experience: the GROUP, ANCHOR, and ELASTICITY maps.

THE SPRING MODEL

The spring force is described by

   F = -k * ( x [ current ] - x [ rest ] )

A force is created as a vertex is moved from its rest position ( x [ rest ] ).


IMPLEMENTING SPRINGS IN THE MESH

We chose a vertex-centric approach, where each vertex is responsible for maintaining its relationships to other vertices. We preprocess the geometry and store a list of all neighbours per vertex; the neighbours are taken from each triangle that the vertex is part of. Each vertex is considered at rest (F = 0) when the model is loaded. Forces are created when a vertex moves, which affects its neighbours. All forces are evaluated and applied at each frame, with the vertex acceleration derived from Newton's second law of motion:

   F = m * a
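
A minimal sketch of the per-frame update is shown below. The Vertex record, the neighbour-spring term, and the explicit Euler integration with damping are one plausible realization; the constants and exact force terms in our implementation may differ.

   #include <vector>

   struct Vec3 {
       float x, y, z;
       Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
       Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
       Vec3 operator*(float s)       const { return { x * s, y * s, z * s }; }
   };

   struct Vertex {
       Vec3 pos, rest, vel;           // current position, rest position, velocity
       std::vector<int> neighbours;   // indices gathered from shared triangles (preprocessed)
       float k;                       // spring coefficient (softer for skin, see elasticity mapping)
       bool anchor;                   // anchors create forces but are never displaced
   };

   void step(std::vector<Vertex>& verts, float mass, float damping, float dt)
   {
       for (Vertex& v : verts) {
           if (v.anchor) continue;                          // fixed vertices are skipped
           Vec3 force = (v.rest - v.pos) * v.k;             // F = -k * (x[current] - x[rest])
           for (int n : v.neighbours) {
               // Spring to each neighbour, measured against the rest-pose offset.
               Vec3 offset = (v.pos - verts[n].pos) - (v.rest - verts[n].rest);
               force = force - offset * v.k;
           }
           Vec3 accel = force * (1.0f / mass);              // a = F / m (Newton's second law)
           v.vel = (v.vel + accel * dt) * damping;          // explicit Euler with damping
           v.pos = v.pos + v.vel * dt;
       }
   }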


ANCHORS

Anchors are vertices that are part of the spring model, but with a fixed position. They create forces, but are not displaced by their neighbours. We use anchors in two ways:

a] The user can interact with the model by dragging a group of vertices. As the vertices are selected, they temporarily become anchors whose positions the user controls; the other vertices follow the anchors elastically.

b] We implemented a special ANCHOR MAP to fix geometry that is not to be moved. The ANCHOR MAP (see maps) is a binary texture, where vertices mapped to white pixels are treated as anchors.

ELASTICITY MAPPING

The spring parameters were experimentally determined and initially applied uniformly to all vertices. It is, however, desirable to vary the spring coefficients over the mesh, and obviously undesirable to specify them manually on a per-vertex basis. To address this, we implemented a skin detection algorithm that outputs a binary mask, which is used to set the elasticity in the deformation model differently depending on the surface material.
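
As a sketch of the mapping itself, per-vertex spring coefficients can be derived directly from the binary skin mask; the two coefficient values below are illustrative placeholders, with skin vertices receiving softer springs so they deform more.

   #include <vector>

   // skinMask is a w x h binary image (255 = skin); (u, v) are per-vertex texture
   // coordinates in [0, 1]; k receives one spring coefficient per vertex.
   void applyElasticityMap(const unsigned char* skinMask, int w, int h,
                           const std::vector<float>& u, const std::vector<float>& v,
                           std::vector<float>& k)
   {
       const float kSkin  = 0.5f;   // softer spring: skin deforms more elastically
       const float kOther = 2.0f;   // stiffer spring: everything else
       for (size_t i = 0; i < k.size(); ++i) {
           int x = int(u[i] * (w - 1));
           int y = int(v[i] * (h - 1));
           k[i] = skinMask[y * w + x] ? kSkin : kOther;
       }
   }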

 

 

Interaction   

Several methods were investigated to provide intuitive interaction with the spring model. The fundamental problem is allowing the user to select a group of vertices and deform the model as these vertices are displaced.

While the deformation model is described above, it was not clear how to provide fluid interaction with the geometry. (Selecting single vertices would of course not give satisfactory results: it is not only an unnatural selection method, but it also creates strong discontinuities, which give rise to unpredictable behavior and oscillations that can break down the simulation.)

We investigated three methods, two of which were implemented:

RECURSIVE NEIGHBOUR SELECTION

This method uses the existing spring data structure and recursively selects a given level of neighbouring vertices. It works well for models with uniform tessellation, but gives unpredictable results for models with varying resolution. The models we created have a high concentration of vertices around facial features such as the eyes, nose, and mouth, whereas the rest of the head is fairly coarse. This makes it hard to get a feel for how large an area the user will select, and it also consumes a lot of computation in dense areas if the neighbours are not preprocessed.
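
A minimal sketch of this selection over the preprocessed adjacency lists, with the recursion depth as the selection level:

   #include <set>
   #include <vector>

   // neighbours[i] lists the vertices adjacent to vertex i (from shared triangles).
   void selectNeighbours(const std::vector<std::vector<int>>& neighbours,
                         int vertex, int depth, std::set<int>& selection)
   {
       selection.insert(vertex);
       if (depth == 0) return;
       for (int n : neighbours[vertex])
           selectNeighbours(neighbours, n, depth - 1, selection);
   }
   // With uneven tessellation the same depth covers a tiny patch around the eyes
   // or nose but a large patch on the coarser back of the head.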


GROUP MAP

This method uses a separate texture map in which facial areas have been grouped by unique colors. Our preprocessing step creates a hash table of vertex groups keyed on color; upon selection of a vertex, we can quickly look up its group and obtain the associated vertices. This method works very well and is the one used in the current version of the system. It does not require the grouped vertices to be adjacent, which gives greater flexibility in grouping. The drawbacks are that the map has to be created beforehand (currently manually, although the process could be automated) and that selection near group borders can be ambiguous.
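
A sketch of the preprocessing step is below; std::unordered_map stands in for the hash table, and the image layout and naming are illustrative.

   #include <cstdint>
   #include <unordered_map>
   #include <vector>

   // Packs an 8-bit RGB color into a single hash key.
   static uint32_t colorKey(unsigned char r, unsigned char g, unsigned char b)
   {
       return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
   }

   // groupMap is a w x h RGB image; (u, v) are per-vertex texture coordinates in [0, 1].
   void buildGroups(const unsigned char* groupMap, int w, int h,
                    const std::vector<float>& u, const std::vector<float>& v,
                    std::unordered_map<uint32_t, std::vector<int>>& groups)
   {
       for (size_t i = 0; i < u.size(); ++i) {
           int x = int(u[i] * (w - 1));
           int y = int(v[i] * (h - 1));
           const unsigned char* p = groupMap + 3 * (y * w + x);
           groups[colorKey(p[0], p[1], p[2])].push_back(int(i));  // same color -> same group
       }
   }
   // At selection time, the picked vertex's color key retrieves its whole group
   // with a single lookup: groups[colorKey(r, g, b)].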


DISTANCE NEIGHBOUR SELECTION (not implemented)

This method addresses the problem of varying selection areas in the recursive method. The recursive search for neighbouring vertices could be controlled by a minimum and maximum distance. Preprocessing this information would yield a per-vertex list of neighbouring vertices within a certain distance range. We plan to implement this method in the next version and use it in combination with the group map.

 

 

Performance test

           CPU                  RAM       GPU
laptop     Pentium M 1.8 GHz    256 MB    NVIDIA GeForce FX Go5200 (32 MB)
desktop    Pentium 4            3.25 GB   NVIDIA GeForce 6800 GT (256 MB)

Test model: Wavefront OBJ file
11,346 vertices
21,408 triangles
512 x 512 texture (24 bit)

                                  Pentium M (FPS)   Pentium 4 (FPS)
OpenGL lighting (no shader)       25                60
Bump mapping (shader)             10                30
Subsurface scattering (shader)     2                30
 

 


 

Environment

The application was developed under Windows XP SP2 and Fedora using OpenGL and the Cg shading language on NVIDIA GeForce FX cards. Additional libraries used:

GLUT (GL Utility Toolkit)
OpenCV (image processing)
OglExt (GL extension management)
NVIDIA Cg SDK (shaders)


Source code [80 kB]


Do not distribute without the authors' permission

 

References

[1] R. Raskar, K. Tan, R. Feris, J. Yu, M. Turk, "Non-Photorealistic Camera: Depth Edge Detection and Stylized Rendering with Multi-Flash Imaging", ACM SIGGRAPH 2004
[2] R. Fernando, M. Kilgard, "The Cg Tutorial", NVIDIA, Addison-Wesley, 2003
[3] M. Pharr, "Layered Media for Surface Shaders", SIGGRAPH RenderMan course, 2001
[4] R. Fernando (ed.), "GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics", NVIDIA, Addison-Wesley, 2004
[5] P. Hanrahan, W. Krueger, "Reflection from Layered Surfaces due to Subsurface Scattering", ACM SIGGRAPH 1993

Links

[a] Exploring Spring Models. http://www.gamasutra.com/features/20011005/oliveira_01.htm
[b] NVIDIA SDK. http://developer.nvidia.com/object/sdk_home.html
[c] OglExt. http://www.julius.caesar.de/oglext/
[d] OpenCV. http://sourceforge.net/projects/opencvlibrary/
[e] OpenGL. http://www.opengl.org/