Citation
Computer vision based material property extraction and data-driven deformable object modeling

Material Information

Title:
Computer vision based material property extraction and data-driven deformable object modeling
Creator:
Wilber, Steven C
Publication Date:
Language:
English
Physical Description:
1 electronic file

Subjects

Subjects / Keywords:
Computer simulation ( lcsh )
Computer animation ( lcsh )
Model-integrated computing ( lcsh )
Computer animation ( fast )
Computer simulation ( fast )
Model-integrated computing ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
Of all the options available to computer animators, physics engines are typically used to develop animations where realism is important. When simulating deformable bodies, animators first model the object and then find the object's material parameters in a manual, iterative manner which is both inaccurate and time consuming. This thesis defines a method for finding the physics engine coefficients of an object by examining recordings of the object under various stress tests. It also investigates how these coefficients are coupled to an object's geometry by applying them to 3D models with very different geometries. To accomplish these goals, this thesis defines a set of deterministic experiments to put various types of stress on the object, develops a computer vision algorithm which takes the recordings and extracts the important behavioral information, develops a simulation of the object under the same type of stress using the Bullet physics engine, and defines a feedback loop algorithm for optimizing the simulation until it matches the criteria set by the extracted behavioral information. This thesis then uses the coefficients obtained to simulate an object with a different geometry. Using color and shape cues of the recorded experiments, this thesis is successful in defining novel methods for extracting the size and location of the object over time. The framework for simulating the object, while it is still in its infancy, has an algorithm which successfully creates a homogeneous, isotropic version of the object mimicking the global behavior and local deformation of the recorded object. While future work would improve the results, this thesis has successfully shown that extracting material properties from a recorded video (recorded under a strict experimental environment) and using that information to data-drive a simulation of that object is achievable and produces good results.
Bibliography:
Includes bibliographical references.
Statement of Responsibility:
by Steven C. Wilber.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
859788133 ( OCLC )
ocn859788133

Full Text
COMPUTER VISION BASED MATERIAL PROPERTY EXTRACTION AND DATA-DRIVEN DEFORMABLE OBJECT MODELING
By
Steven C. Wilber
B.S. Computer Science, Metropolitan State College of Denver, 2005
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science and Engineering
2012


This MS thesis for the Master of Science degree by
Steven Wilber
has been approved by
Thesis Advisor
MS Committee
Professor Ellen Gethner


Wilber, Steven C. (M.S., Computer Science and Engineering)
Computer Vision Material Property Extraction and Data-Driven Deformable Object
Modeling
Thesis directed by Associate Professor Min-Hyung Choi, Ph.D.
Of all the options available to computer animators, physics engines are typically
used to develop animations where realism is important. When simulating deformable
bodies, animators first model the object and then find the object's material parameters in a
manual, iterative manner which is both inaccurate and time consuming. This thesis
defines a method for finding the physics engine coefficients of an object by examining
recordings of the object under various stress tests. It also investigates how these
coefficients are coupled to an object's geometry by applying them to 3D models with
very different geometries. To accomplish these goals, this thesis defines a set of deterministic
experiments to put various types of stress on the object, develops a computer vision
algorithm which takes the recordings and extracts the important behavioral information,
develops a simulation of the object under the same type of stress using the Bullet physics
engine, and defines a feedback loop algorithm for optimizing the simulation until it
matches the criteria set by the extracted behavioral information. This thesis then uses the
coefficients obtained to simulate an object with a different geometry.
Using color and shape cues of the recorded experiments, this thesis is successful
in defining novel methods for extracting the size and location of the object over time.
The framework for simulating the object, while it is still in its infancy, has an algorithm
which successfully creates a homogeneous, isotropic version of the object mimicking the
iii


global behavior and local deformation of the recorded object. While future work would
improve the results, this thesis has successfully shown that extracting material properties
from a recorded video (recorded under a strict experimental environment) and using that
information to data-drive a simulation of that object is achievable and produces good
results.
The form and content of this abstract are approved. I recommend its publication.
Approved: Min-Hyung Choi, Ph.D.
iv


DEDICATION
I dedicate this work to my loving wife, Mindy Rae Wilber.
It is your patience, encouragement, and positive attitude that gave me the
motivation I needed while working towards this goal.


TABLE OF CONTENTS
Chapter
1. Introduction..............................................................1
1.1 Background.........................................................3
1.2 Motivation.........................................................9
2. Present State of Knowledge in the Field..................................11
2.1 Data-Driven Physics Based Animation...............................11
2.2 Computer Vision Material Property Extraction......................15
3. Experiment Definition....................................................17
3.1 Defining the Experiments..........................................17
3.2 Material Property Capturing System................................18
3.2.1 Nikon D5000................................................. 19
3.2.2 Casio Exilim EX-ZR200........................................20
4. Global Deformation Experiment............................................21
4.1 Material Property Extraction......................................22
4.1.1 Computer Vision..............................................23
4.1.1.1 Reducing Image Noise....................................23
4.1.1.2 Canny Edge Detector....................................25
4.1.1.3 Image Segmentation by Color Matching....................26
4.1.1.4 Image Cropping..........................................28
4.1.2 Capturing Data Points over Time..............................28
4.1.3 The Global Deformation Capture Application...................31
4.2 Finding Simulation Coefficients...................................33
4.2.1 Bullet.......................................................33
4.2.2 Modeling the Ball............................................34
vi


4.2.2.1 Mass-Spring Sphere........................................35
4.2.2.2 Tetrahedral Sphere........................................35
4.2.3 TetView........................................................37
4.2.4 Capture Data Points over Time..................................38
4.2.5 The Coefficient Finding Algorithm..............................38
4.2.6 Global Deformation Simulation Application......................40
5. Local Deformation Experiment................................................43
5.1 Material Property Extraction........................................44
5.1.1 Recording the Experiment.......................................45
5.1.2 Capturing the Control and Deformed Frames......................45
5.1.3 Material Properties Extracted..................................46
5.1.4 The Local Deformation Capture Application......................46
5.2 Finding Simulation Coefficients.....................................48
5.2.1 The Board-Ball Simulation......................................49
5.2.2 Optimization Problem...........................................50
5.2.3 Capturing Data Points..........................................51
5.2.4 The Coefficient Finding Algorithm..............................51
5.2.5 The Local Deformation Simulation Application...................52
6. Coupling Object Geometry with Elasticity Coefficients.......................54
6.1 Global Deformation..................................................54
6.2 Local Deformation...................................................55
7. Results.....................................................................57
7.1 Global Deformation Material Property Extraction.....................57
7.2 Global Deformation Simulation.......................................60
7.3 Local Deformation Material Property Extraction......................61
vii


7.4 Local Deformation Simulation
62
7.5 Investigation of Object Geometries..................................63
7.5.1 Global Deformation.............................................63
7.5.3 Local Deformation..............................................66
8. Discussion..................................................................68
8.1 Computer Vision.....................................................68
8.2 Simulation..........................................................70
8.3 Other Interesting Tests.............................................71
9. Conclusion..................................................................72
References.....................................................................74
viii


LIST OF TABLES
Table
Table 1 Comparing the Deformation of each Ball Type.......................61
Table 2 Captured vs. Simulated Compression of Nerf Ball...................62
Table 3 Captured vs. Simulated Compression of Stress Ball.................63
ix


LIST OF FIGURES
Figure
Figure 1 One Goal of this Thesis................................................1
Figure 2 Object Material Properties..................................................4
Figure 3 Defining the Geometry of a Simulated Object.................................5
Figure 4 Extracting Material Properties from a Video.................................6
Figure 5 Modifying Object Behavior by Modifying Mesh Density.........................8
Figure 6 Matching Object Behavior by Modifying Linear Elasticity.....................8
Figure 7 13x13 Gaussian Matrix......................................................24
Figure 8 Ball Drop Experiment with Edges Detected...................................26
Figure 9 Using Canny edge detector with color segmentation scheme to find the ball... 26
Figure 10 Error in Color Segmentation Scheme........................................28
Figure 11 Graphs of the ball height over time and the width/height ratio over time.29
Figure 12 Description of the Global Deformation Material Property Extraction
Application.........................................................................31
Figure 13 Geometry Parameters for a Mass-Spring Sphere..............................35
Figure 14 Tetrahedrons with High and Low Radius/Edge Length Ratios..................36
Figure 15 Spheres with Differing Radius / Edge Length and Volume Constraints........37
Figure 16 Overview of the Global Deformation Simulation Application.................41
Figure 17 The Local Deformation Experiment..........................................44
Figure 18 Extracting the Ball's Dimensions for the Local Deformation Experiment.44
Figure 19 Overview of the Local Deformation Material Property Extraction Application
....................................................................................47
Figure 20 The Simulated Ball being Compressed by a Simulated Weighted Board.....49
Figure 21 Overview of the Local Deformation Simulation Application..................52
x


Figure 22 Object Behavior Differences by Adjusting the Geometry....................56
Figure 23 Graphs of the Ball Positions over Time when Dropped from 1 Meter.......58
Figure 24 Graphs of the Ball Positions over Time when Dropped from 1.6 Meters....59
Figure 25 Captured vs. Simulated Ball Height over Time.............................60
Figure 26 Graphing the Ball heights of Two Spheres with Differing Resolutions......64
Figure 27 Graphing the Ball Heights of Two Spheres with Different Radius / Edge
Length Constraints..................................................................64
Figure 28 Graphing the Ball Heights of Two Spheres with Different Volume Constraints
.................................................................................64
Figure 29 Chart Comparing Compression of Two Spheres with Different Resolutions. 66
Figure 30 Chart Comparing Compression of Two Spheres with Different Radius / Edge
Length Constraints..............................................................66
Figure 31 Chart Comparing Compression of Two Spheres with Different Volume
Constraints.....................................................................67
Figure 32 Graph of Width / Height Ratio over Time...............................69
xi


1. Introduction
Figure 1 One Goal of this Thesis.
The first goal of this thesis is to define, execute, and record a set of experiments,
extract the important information from the recordings and develop a simulation of that
recorded object using the extracted information.
When discussing the broad topic of computer animation, there are many different
methods of solving the same problem, and depending on the goal of the individual
creating the animation, there are many choices which go into making an animation. If the
animator of the children's movie Cars is developing a crash scene, they're not looking
for realism, but for a cartoony feel where the car might deform in a way not possible in our
world as dictated by the laws of nature. However, when the developers of a new line of
family sedans develop a car crash scene, realism is absolutely crucial as the lives of their
customers are at stake.
If realism is the goal, there are several choices that need to be made.
Traditionally, there is a tradeoff between realism and speed. Interactive applications
require that the animation be rendered at 20-30 frames per second. Any less and our eyes will
not see smooth transitions between frames. Moreover, if haptic responses are important,
the literature states that our senses require update rates of 1000 frames per second to feel
realistic [28]. Realism requires more discrete primitives in the animation scene to handle
the sharp edges and irregular contours an object might have. Calculations are performed
1


on each primitive in the scene. Therefore, adding more primitives slows render time. In
this thesis, modeling accuracy and deformation realism are crucial, while rendering speed is
secondary.
When simulating a real-life object using a physics engine, there are three details
that need to be interpreted and used correctly to effectively simulate that real-life object.
The first is the governing equation. This equation describes the relationship between the
force applied to the object, and the resultant movement of the object. Also included in
the governing equation is the time integration mechanism which allows the animation to
move forward at a set time interval. Secondly, the coefficients of the governing equation
describe the material properties of the object being simulated. For instance, a baseball
and tennis ball will have different coefficients but the same governing equation. It is
these coefficients that tell the tennis ball to be bouncier and the baseball to be harder.
Finally, there is the geometry of the object. The geometry is defined by the primitive
polyhedrons or vertex-mesh that describes the shape of the object. This thesis focuses on
the latter two.
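As a concrete illustration (a generic mass-spring form chosen for exposition, not the exact equations used by Bullet or by this thesis), such a governing equation and its time integration might be written in LaTeX as:

    M \ddot{x} = f_{ext} + f_{int}(x, \dot{x}), \qquad
    f_{int,i} = -\sum_{j} k_s (\|x_i - x_j\| - L_{ij}) \hat{d}_{ij} - k_d (\dot{x}_i - \dot{x}_j)

    v_{t+h} = v_t + h M^{-1} f(x_t, v_t), \qquad x_{t+h} = x_t + h v_{t+h}

Here the stiffness k_s, damping k_d, and rest lengths L_{ij} play the role of the material coefficients described above, and the semi-implicit Euler update with step h is one common time integration mechanism.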
To this end, this thesis provides a mechanism to extract certain elastic properties
of an object. It does this by performing a global and local experiment on the object, both
of which apply contact forces to the object's surface, all the while recording the
deformation using a single high-speed camera. Using this recording and various
computer vision techniques, I programmatically determined both the local
deformation and the global behavior of the object as it is interacted with in the various
experiments.
2


This thesis also develops a novel method for transforming these extracted material
properties into elasticity coefficients to be plugged into the simulation's governing
equation.
To this day, the object's geometry and coefficients are treated as disjoint entities.
This means that first the geometry is created and secondly the coefficients are found.
This thesis investigates how changing the object's geometry affects the elasticity of the
simulated object. In other words, in this thesis I treat the various geometric properties as
if they are coefficients of the governing equation, thus coupling the two. From this, I
performed tests which investigated how the elasticity changes when using a small volume
tetrahedral mesh compared to a mesh with larger tetrahedrons. I also investigated how
changing the tetrahedron dimensions affected the global and local elasticity of the object.
1.1 Background
Data-driven physically-based computer animation is a budding field full of
possibilities. We need only think of ways to get data points which can be plugged into an
equation representing the behavior of the object. This may range from mechanically
measuring the object, to feedback loops that constantly update the animation until it is
close enough, to recording an object behaving in some seemingly random fashion. As
important as data-driven animation will be to the future of this industry, this type of
animation cannot be thoroughly researched without an understanding of the simulated
geometry. This geometry is tightly coupled to the coefficients of the governing equation.
Changing one will always require a similar change to the other to maintain the same
behavior.
3


Figure 2 Object Material Properties
This image shows four objects which all have the same geometry but very different
material properties. As we look at an object's material properties, we are concerned with
both how the object looks and how it behaves when interacting with the world.
As we look at and judge real-life objects simulated by a computer animator, there
are several aspects of the animation which we pay careful attention to. The keen observer
will notice the shape, color, reflectance, and size of a stationary object. A moving object
will have properties such as weight, friction, and momentum. If the object is not
absolutely rigid, it will deform when contact forces are applied to the surface. How the
object deforms, the amount of force required to achieve some deformation, and whether
the object retains volume when deformed are all visual clues telling the observer whether
the animated object reflects the behavior of the real object.
Animating deformable objects has been an area of research since the mid-1980s
when Demetri Terzopoulos first utilized elastic theory to develop a system of ordinary
differential equations which model the behavior of objects with elastic properties [34].
The ideas developed by these pioneers are still in use today in that we still develop a
deformable model with a governing equation representing the strain/stress relationship
between adjacent primitives of the model. The differential equations are solved for the
force function, which gives the velocity of a piece of the model at a future time
step and ultimately the position of the model at that time step.
4


While the same basic principles are sustained for physically-based systems, the
governing equations have gotten more complex, the geometric models are denser and
more complex, and the ordinary differential equations are solved in more efficient and
stable ways. By complicating the governing equations, geometric models, and
integration schemes, we're building a system which is more durable and has an increased
level of realism.
Figure 3 Defining the Geometry of a Simulated Object
Two methods for defining the geometry of an object are to generate a tetrahedral mesh
(left) or a symmetric wireframe mesh (right). Each type has speed and accuracy benefits.
With these tools in place, we can deform objects, but it is still up to the animator's
trained eye to develop, using those tools mentioned above, an object which closely
matches some real-life object. For instance, the tools allow the animator to create a ball
which can deform, but the animator needs to determine what type of ball is to be simulated.
A rubber super-ball will behave differently from a tennis ball which is different from a
pool ball. Likewise, an inflated soccer ball will have a different material behavior than a
deflated soccer ball. These differences make up the material properties of the object. It
is the job of the animator to provide the correct coefficients (parameters) to the governing
equation to assign these material properties to the simulated object.
This is not always an easy task. Traditionally, the animation director would
model an object with quasi-random coefficients and would tweak the coefficients until


the simulated object's behavior was reasonably close to their perception of the real
object's behavior. While this may have suited the director's need at the time, one of the
goals of computer animation is to eliminate this guess-and-tweak method of simulating
realistic objects.
Data-driven animation is the idea that we can gather the material properties
directly by performing experiments on the actual object. We then take those
measurements and convert them into parameters to be plugged into the governing
equation and used in the simulation.
Figure 4 Extracting Material Properties from a Video
This thesis describes a method to extract an object's material properties by recording
the object under certain stress tests, then building a simulation of that object.
One deficiency in the field of computer animation is that these three areas of
deformable modeling are treated as disjoint areas of research. Major achievements have
been made in each of the three areas separately. As mentioned, governing equations have
become much more complex, allowing for nonlinear internal and external forces, more
accurate collision detection schemes, and more efficient and stable integration schemes.
With respect to defining the coefficients of the governing equation, major advances have
6


been made to more accurately find the coefficients which define the stress/strain
relationship within the object, whether that be through mechanically measuring the
displacement of real objects as some local force is applied, or through recording the experiment and
finding this displacement using computer vision techniques. As for the geometry and
connectivity of a 3D model, the advent of more and more efficient graphics processors
has allowed for objects with denser geometries. These denser models allow more
finely tuned control of the deformation of the object. Along with this, research has been
done to generate better polyhedron meshes, giving the animator advanced control over
the type of mesh generated.
When an animator wants to define a new type of object, they traditionally need to
think hard about these three areas. They'll want to choose a governing equation, keeping
in mind that there is a tradeoff between accuracy and speed, the set of coefficients of the
governing equation to coarsely define how the particular object will behave, and a
geometry for the object which is dense enough to define the intricacies of the material,
but sparse enough to be rendered efficiently.
Looking at these three areas, it is clear that the governing equation and the
coefficients of those governing equations are tightly coupled together, but is it clear that
the geometry is tightly coupled to the coefficients? When defining the coefficients of a
deformable object, we'll be defining the springiness and how much energy is lost due to
the deformation. Both of these are functions of the magnitude of a vertex's displacement
from its equilibrium position. It should be clear that as the mesh becomes
denser, the displacement of each vertex (when the same amount of force is applied)
should be smaller if the same type of deformation is desired. The elasticity coefficients
7


will need to be adjusted to have this smaller local deformation but the same global
deformation.
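A simplified one-dimensional argument (an illustration added here, not a derivation taken from the thesis) makes the needed adjustment concrete. For n identical springs of stiffness k connected in series,

    k_{eff} = k / n

so refining the mesh so that an edge is replaced by n shorter springs softens the global response unless the per-spring stiffness is scaled up roughly in proportion to n; this is exactly the kind of coefficient adjustment described above.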
Figure 5 Modifying Object Behavior by Modifying Mesh Density
With all other coefficients the same, these are spheres with 64, 128, 256, 512, 1024,
and 2048 vertices, respectively. It is easy to see that modifying the geometry of the
object also changes the material behavior of that object.
Figure 6 Matching Object Behavior by Modifying Linear Elasticity
This time, we have two spheres with 64 and 2048 vertices, respectively. By
increasing the coefficient defining linear elasticity in the denser object, we build an object
that appears to have similar deformation under its own weight.
8


As the governing equation and coefficients are coupled together, as well as the
geometry and coefficients, this thesis investigates how the object's geometry, like the
coefficients, affects the object's behavior. For instance, when using a tetrahedral mesh,
how does increasing or decreasing the tetrahedron volume affect the elasticity of the
object? If the tetrahedrons are larger on the surface than in the center, does that make the
object more or less elastic?
1.2 Motivation
As mentioned above, there are two aspects of this thesis, both with their own merits
and benefits to the research community.
Determining the coefficients of the simulation by directly extracting the material
properties of an object is a relatively new field with a large amount of potential. As this
area of computer graphics advances, animators will no longer have to use their flawed
perceptions of the physical world to judge a simulated object. No longer will they have to
sit around and guess whether a simulated object matches what they expect to be the behavior
of a real object. They will know that it matches as the material properties were
accurately and scientifically gathered and transformed into the governing equation
coefficients to produce the best possible results.
There are many different mechanical methods for measuring material properties.
These may include stretching cloth in different directions to measure stretching and
shearing, using a robotic arm to drop a glass ball and measuring the displacement and
shatter pattern, using a spring to push an object across a diffuse surface and measuring
the friction by the displaced spring, etc. All of these methods have merits of their own,


but without an automated method to measure the results, they all suffer from losses due to the
limits of human measurement. However, by using an automated means of measuring, we
will ensure the kind of accuracy only afforded by the enormous processing power of modern
computers.
This thesis describes a framework which was built, along with a global and local
deformation test, to determine the material properties of the object and to convert those
material properties into values which can be plugged into the simulation. Furthermore,
by using computer vision techniques to measure the outcomes of those tests, the
techniques described in this thesis are able to measure material properties to an accuracy
which cannot be replicated by the naked eye.
Since the dawn of deformable object simulation, animators have used a simulation
system where there are three closely coupled parts. These parts are the geometry of the
object, the governing equation, and the material coefficients of the governing equation.
Animators are forced to manually define each of the three separately, whereby tweaking
any one of them may cause the animation to shift considerably due to the invalidation of
the other two. As long as animators confine themselves to a system where each of
the three areas is treated separately, this will remain the standard scenario.
If there were a way to bring two or more of these into harmony with each other, we
would not only make the life of the animator that much easier, but would also open the door
to many other possibilities with respect to deformable body animation. For instance, we
can optimize the animation for speed by reducing the geometry while maintaining the
realism of the same object obtained with a dense mesh.
10


2. Present State of Knowledge in the Field
In this chapter, I present the different areas of computer graphics and computer vision
which directly influenced this thesis work. The two areas I focus on are data-driven
animation and certain computer vision techniques used to track an object in a recorded
video.
The third area of this thesis is the coupling of the object's geometry with the
coefficients of the governing equation. The work of this thesis, with respect to the
coupling of the coefficients to the geometry, is completely novel. In this thesis, I
describe the experiments I used to investigate the elasticity coefficients and how they,
along with the geometry of the object, affect the displacement of the mesh's surface in
response to contact forces.
2.1 Data-Driven Physics Based Animation
As an animator begins to attack a new type of deformable object, say a flag waving
in the wind, an ocean crashing on the rocks, or the windshield of a Corvette shattering
after crashing into another car, they generally use some type of real-time physics engine
such as Bullet or NVIDIA PhysX which have predefined governing equations for
different types of objects. The only chore of the animator is to define the object's
geometry, that is, the vertices and connections that make up the 3D object, and the
coefficients of the governing equation, that is, the relationship between the
internal/external forces and the movement of the object.
Traditionally, the animator will try some values and run the simulation. If the
object behaves how they perceive it should, they move on to texturing or call it a day. If
11


not, they tweak the coefficients, and re-run the simulation. This can happen over and
over again until the director of the animation gives his or her thumbs up. Clearly, this
method has some major flaws. First, it relies on the trained eye of the director, which
may differ from person to person depending on experiences and training. The precision
of the animation depends on the keenness of the human eye and the patience of the
director trying to run these iterative experiments. Also, this method is very time
consuming. Not all animation can be performed in real-time. For highly detailed scenes
it may take hours to render a single frame and days to render a long enough clip to
determine the object's behavior.
In 2003, Kiran S. Bhat and his team [4] designed a method for
estimating cloth material parameters automatically, using computer vision techniques; these were
then plugged into a simulation of the cloth and compared computationally to verify
results. Basically, the goal was to define the gradient vector field of the real image to
determine how the cloth folds. Then, to begin the parameter estimation, they started
randomly choosing coefficients for the cloth simulation and used a simulated annealing
algorithm to modify those parameters until the simulated gradient vector matched closely
to the gradient vector of the real image. While they do define a method for duplicating
cloth folds, there are other important properties of cloth, such as stretching and shearing,
that are not covered. Similar work with video capture of cloth material properties was
done by Kunitomo [24].
A new direction in cloth parameter estimation, developed by Bradley et al. [8],
involved a markerless approach to capture off-the-shelf garments via stereo video. This
research focused on the ability to capture the cloth material as well as fill in the occluded
12


holes on the geometry not captured by the stereo video. This work was a corollary to
previous work by White et al. [49], whereby that team used a swatch of cloth and
designed a unique marker system which was suitable for finding the material
properties of the cloth. White then used a data-driven approach to filling in the occluded
sections of cloth. While this work made leaps in the areas of cloth folding and hole-
filling, the important area of cloth stretching was not covered by this research.
Recently, the idea of generating these coefficients by measuring the object under
different types of stress to determine the material's strain/stress relationship has been
proposed. Wang, O'Brien, and Ramamoorthi [48] came up with a method to
mechanically measure the stretching and bending of cloth by performing a series of
controlled tests. They attached a measured swatch of cloth to an apparatus with a grid
backdrop. Using a weight and pulley system, they stretched the cloth in multiple
directions and manually measured the cloth displacement using a grid backdrop. They
then measured the bending of the cloth as different lengths of cloth were draped over
their own weight. Using these, along with an optimization algorithm, they defined the
coefficients of the governing equation for several types of cloth material. These steps
were done for multiple types of cloth producing a library of cloth parameters, which the
authors claim can be reused in future simulations. This novel work provided great
examples on how to mechanically measure the strain/stress relationship of an object
through some type of local deformation experiment; however, missing from this was a
global deformation experiment such as draping cloth over an object.
Also in 2011, Faure et al. [7] developed a novel method for modeling complex
deformable meshless objects by using a material property map (stiffness map) to define
13


how the control points (mesh nodes) are distributed across the object. While this is not
data-driven in the sense that the material properties are measured mechanically, it is still
useful to the proposed research in that the collected (scanned) data is put through an
algorithm to build parameters which are then applied to the simulation.
In 2001, Jochen Lang published his doctoral dissertation [25] in which he describes
the ACME system for deforming an object using a robotic manipulator with an attached
force gauge. Along with a trinocular stereo camera and a point-cloud scanning device,
the ACME system was able to build a 3D mesh of the object under various stages of the
deformation. Using the known force applied locally to the object, as well as the
displacements of the vertices of the object, Lang was able to estimate the Green's strain
functions which can then be used in Finite Element or Boundary Element simulations to
build a simulated object which deforms in the same way as the real object.
Similar to this work, Becker et al. [3] developed a quadratic programming
optimization routine for estimating the Young's modulus and Poisson's ratio of a
homogeneous isotropic object simulated using the linear finite element method when a
force / displacement measurement is known beforehand.
Bickel et al. [5] developed a system for capturing non-linear material elasticity using
a trinocular stereo camera system and a marker system for the object under test to
measure the strain undergone during a force experiment. The strain is measured at
arbitrary points on the object and the strain/stress relationship is interpolated across the
whole object using these reference points.
Syllebranque and Boivin [43] developed a system for finding the Young's modulus
and Poisson's ratio of a homogeneous isotropic object by simulating the object in an
14


iterative fashion. Each iteration, an error value is calculated. If this value is above some
threshold, the Young's modulus and Poisson's ratio parameters are updated appropriately
using simulated annealing.
Barbara Frank et al. [17] used a robotic arm attached to a force gauge to apply
localized pressure to a single point on the surface of the object. With a depth camera,
they were able to determine the displacement of the object's surface during all phases of
the experiment. As their goal was to find the Young's modulus and Poisson's ratio, they
too developed an iterative method of simulation in which these two parameters are
adjusted each time. Their method is slightly different as they defined an error function
which uses the iterative closest point (ICP) algorithm to align the simulated deformed
mesh with the captured deformed mesh and then returned the differences between
simulated and deformed points.
All of the previous work deals with deforming the object in some localized manner
and directly measuring the strain/stress activity to gather data points for generating the
Green's function to be used in a FEM or BEM simulation. My work defines a local
deformation experiment, but it also defines a global deformation experiment. That is, the
goal is to see how the whole object responds to some strain/stress activity which happens
within the object. In this, I am able to see whether, after the object is deformed, the object's
residual movement is appropriate.
2.2 Computer Vision Material Property Extraction
The first problem in extraction is to locate the object in the recorded images. Over
the previous decades, many techniques have been developed to extract the features of an
object, compare those features to a database of objects whose features have already
15


been extracted, and then create a histogram of likely matches. Some early work in
object matching research includes the Hausdorff distance algorithm, which determines how
close an edge-based image is to some target image's edge-based image [41]. Another
similar matching scheme uses the Chamfer distance [45]. While both are useful in their
own right, neither does well unless the inspected object is very similar to the target
object. Slight deformation, rotations, or scaling, which we would expect in this research,
would cause these algorithms to erroneously fail.
With respect to deformable object recognition, novel work has been done by [37]
which defines a system for recognizing a deformable object in images by segmenting the
edges of the object around the areas of high curvature. They then allow each segment to
be individually transformed and compared for possible matches. In [16], the authors
broke the image into contour segments which are then inputted into an edgel chain
network where the network links are determined by a set of rules. When recognizing an
object, we need only find continuous paths through the network. This allows the authors
to recognize objects, which can be partially occluded, efficiently. This work continues
the work of [32], which defines the Berkeley edge detector. This method can find the
edges of objects in cluttered scenes.
In all of these areas, the goal is to find a given object within an image. This thesis
extends this idea in that it not only searches for the object, but also extracts important
material information about the object.
16


3. Experiment Definition
Throughout the development of this thesis, four applications were implemented
which utilize tools from several different areas of computer graphics and computer vision
to accomplish the goals described in the previous chapter. Chapters 3 through 6 describe
these applications, the logic that went into developing them, previous work that contributed
to their implementation, and the third-party tools utilized in each. Furthermore, it is the
goal of these chapters to explain how the applications developed contributed to solving
the goals stated earlier.
3.1 Defining the Experiments
One task of an animator is to take some real-life object and convert it into a 3D
object which can be used in video games, movies, or TV commercials. Typically, as we
go through this process manually, we might look at the object to determine shape, hold
the object to determine weight, squeeze the object to determine elasticity and plasticity,
and drop the object on the floor to see how it reacts to the ground, which might help us
understand the elasticity of the object. For instance, a water balloon, tennis ball, and
crystal ball will all have very different responses to this type of experiment.
This thesis focuses on the elasticity of the object; therefore, no experiments were
done to measure weight or shape. However, the work done could easily be extended to
include these items. Computer vision algorithms are not necessary to calculate weight,
and while computer vision can be used to determine the shape of the object, it would
have to take a recording of all sides of the object and patch them together to make a 3D
model. Furthermore, a computer vision solution would require edge and corner detection
17


along with complex algorithms for analyzing color gradients to map the topography of
the object. A better solution would be to utilize a range-scanner positioned at well-
defined locations around the object to gather precise shapes and contours.
As capturing elasticity is the goal, it is beneficial to define experiments which
describe how the object behaves when interacting with the world in some global sense.
This means seeing how the object behaves when interacting with the world, without the
experimenter directly interacting with it. Also, it makes sense to define experiments in which the
object is interacted with directly. This means applying some directed localized force and
determining how the object responds to that.
3.2 Material Property Capturing System
As explained in previous sections, several different mechanisms were used to capture
the material behaviors of the objects under experimentation. Wang [38] developed a
system that captured the material properties of cloth by stretching the cloth in different
directions and measured the amount the cloth was displaced. Bhat [2] and Kunitomo [19]
recorded their experiment and then used computer vision algorithms to determine how
the cloth folds on itself when draped over its own weight. Other efforts were made to
record the experiments and use computer vision techniques to extrapolate the material
behavior of the cloth [5][39].
As a goal of this thesis is to build a framework that allows non-scientific animators
the ability to conduct their own experimentation, the logical choice was to record the
experiment and have the computer vision algorithm extract the measurement from the
recording. By doing this, we are reducing the work required by the experimenter,
minimizing human error, and utilizing the powerful processing capabilities of modern
18


computers. A drawback is that we have all of the limitations of the recording device and
a computer which only understands discrete images.
It is important to note that when using computer vision techniques to analyze images,
the subject of the image must be clear. This means the edges surrounding the object must be
sharp and there must not be a lot of noise in the image background.
3.2.1 Nikon D5000
Early in the research phase, it was thought that using a high quality SLR camera
should provide a high pixel density, and therefore, should be sufficient in recording the
experiments. The Nikon D5000 is a 12 Megapixel camera, has a 30 fps frame rate, and a
shutter speed of 1/4000 second.
After recording several experiments under this configuration, I found that the images
were large and the video was clear; however, computer vision algorithms look at individual
frames of the video over time, so it wasn't enough for the recording to look clear in playback. Individual
frames showed blurry objects, and we weren't able to clearly see the point of contact
when force was added to the object. The shutter speed is the amount of time the camera's
shutter is open to allow light to enter the lens and contact the photo receptors at the
back of the camera. If the ball is moving in front of the camera while the shutter is
open, the movement of the ball will be captured across those photo receptors, which makes
the ball look like it is streaked across the image, thus blurry. For this reason, I learned it
is important, when recording moving objects, to have a very fast shutter speed.
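A rough estimate illustrates the effect (the numbers here are illustrative assumptions, not measurements from the thesis). The streak length is approximately

    blur \approx v \cdot t_{exposure}, \qquad v = \sqrt{2gh}

so a ball dropped from h = 1 m reaches roughly 4.4 m/s at the floor; an effective exposure of 1/30 s would smear it across roughly 15 cm of its path, while an exposure of 1/4000 s limits the smear to about 1 mm.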
Also, the deformation of the object due to added force might only happen for a
moment. When reviewing the images for my test experiments I found that I could not
find the single frame that clearly showed the deformation of the object as the force was
19


applied. This is because the D5000 only captures 30 frames per second. The force would
be applied to the object sometime between image captures. I learned that, in order for my
experiments to be successful, I needed a camera that captures frames significantly faster
than the Nikon D5000.
3.2.2 Casio Exilim EX-ZR200
This camera was purchased to solve the problems faced with the Nikon D5000.
The EX-ZR200 has a 1/250000 second shutter speed and recording frame rates of 30 fps,
120 fps, 240 fps, 480 fps, and 1000 fps. The shutter speed improvement allowed me to
perform experiments that had high levels of motion and capture a sharp image at every
frame. Of course, there are still limitations to this technology, but for my thesis it proved
to be all I needed.
The frame rate of the EX-ZR200, while impressive, does have drawbacks. The
camera, while it has 16.1 megapixels when shooting photos, has a much lower
resolution when shooting at high speeds. For instance, the 1000 fps setting only shot
images with 14336 pixels. These images were grainy and did not capture the edges of the
images very well. In the end, I used the 120 fps setting (307K pixels), as it captured the deformation
not captured by the slower setting but did not have the graininess of the higher settings.
20


4. Global Deformation Experiment
A global experiment, for the purpose of this thesis, is an experiment where the
researcher has no direct influence over the deformation of the object. The researcher sets
some initial conditions and then lets time, space, and the forces of nature demonstrate
how the object behaves. This is an important type of experiment in capturing the
behavior of an object as it removes the limitations on how the object can behave. For
instance, if we were to capture the behavior of cloth by stretching it in different directions
using different amounts of stress, the cloth would stretch exactly as we expected, but that
doesn't help us understand how the cloth flaps in the wind or how it floats to the ground
when dropped.
As this is a deformation experiment, we're not concerned with how objects move, but
rather how the object's shape changes under different amounts of stress. Therefore, we
must define experiments that either directly measure the stress/strain relationship or
indirectly measure this relationship by measuring the response to some stress/strain
activity that occurs in the experiment. As this is a global experiment, we don't know the
exact amount of stress on the object as we're not directly applying force, but we can
measure the response to some strain/stress activity.
This thesis defines a global deformation experiment in which a ball is dropped
from some specific height and recorded until the ball stops bouncing. This experiment
satisfies the global deformation requirement as there is no force introduced by the
experimenter. The ball is set into position before the experiment starts and gravity, air,
the ball, and the ground surface colliding with the ball are the only factors affecting the
deformation of the object.
21


From the moment the ball is dropped, I tracked the height of the ball, with respect
to the ground the ball makes contact with. I also tracked the width and height of the ball for
the duration of the experiment. Through this type of experiment I hoped to see how
much energy is lost when contact is made with the ground. If there is a large
deformation, more energy should be lost. I also hoped to see how the ball's shape
changed as it collided with the ground.
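One standard way to quantify this energy loss from the tracked heights (a textbook relation, not a formula taken from the thesis) is the coefficient of restitution computed from consecutive bounce peaks,

    e = \sqrt{h_{n+1} / h_n}, \qquad \Delta E / E = 1 - h_{n+1} / h_n

where h_n and h_{n+1} are successive peak heights of the ball; a ball that deforms more and loses more energy on contact shows a smaller e.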
As precise experiments are important to get accurate results, I built a system
which provides the ability to drop objects from different heights. Basically, it was a
frame built with PVC piping and a box housing the ball with a trap door mechanism. The
experimenter can set the box to a specific height, center the ball in the box, then pull the
trigger to snap open the trap door releasing the ball. The frame is two meters tall
which allows the ball to be dropped anywhere from zero meters to two meters off the
ground.
I did a series of experiments with different types of balls, some highly
deformable, some not. All experiments were done from a single height. The balls used
were a softball, a wiffle ball, a ping-pong ball, a Nerf ball, a stress ball, a hollow rubber ball,
and a hollow plastic ball.
4.1 Material Property Extraction
After the experiment was conducted in front of a camera, the next step was to take the
recording and run it through some type of computer vision program to extract pertinent
information from the recorded experiment. The goal was to find the height and width of
the ball at all times throughout the recording. This allowed me to see how energy was
lost over time and how the ball deformed when it came into contact with the ground.
22


4.1.1 Computer Vision
OpenCV is an open source toolkit that provides a user the ability to perform computer
vision operations on an image. This resource provides numerous algorithms, from simple
to complex, giving the researcher an arsenal for tackling common problems in computer
vision. For instance, it has numerous edge detection, object segmentation, and object
recognition algorithms, all with simple APIs to work with. As it is open source, there
is an opportunity to extend an existing algorithm if needed, but they also provide an API
to iterate through the pixels of the image providing the researcher the ability to
implement a new computer vision algorithm. For this thesis, I used OpenCV to locate
the ball in the image and to find the dimensions of the ball at all times throughout the
recording.
4.1.1.1 Reducing Image Noise
In order to find the object in the image, we require the ability to find the edges in
the scene. Edge detection algorithms typically use color gradients, which give the
magnitude of the color intensity change over some space. If there is a large enough color
change, the algorithm marks it as an edge. The problem with edge finding is that there
may be many edges in the scene. The subject of the image may have many edges,
but the background of the image may also have edges, especially if the image is shot
outdoors or in a cluttered room.
By blurring the image slightly, the steep color gradients in the background
become shallower. Of course, the color gradients of the subject also become smaller
making the subject more difficult to detect, but if the camera is focused on the subject, it
will have the steepest color gradients to start with.
23


For the purpose of this thesis I used a Gaussian mask to reduce the image noise
[10]. The Gaussian mask is an n x n matrix where n = 3, 5, 7, and so on. In this scheme, the matrix
defines a weighting system for blending neighboring pixels together. The middle matrix
item represents the pixel being changed and thus has the highest weight. The further a
matrix element is from the center, the lower the weight defined for that element. The
standard deviation of the Gaussian is used to determine how the weighting drops off as
you get further from the center element. The Gaussian matrix is passed over the image
from top-left to bottom-right, and every pixel is redefined by multiplying the Gaussian
matrix by the nxn matrix of pixels neighboring the pixel being updated.
0 0 0 0 1 2 2 2 1 0 0 0 0
0 0 1 3 6 9 11 9 6 3 1 0 0
0 1 4 11 20 30 34 30 20 11 4 1 0
0 3 11 26 50 73 82 73 50 26 11 3 0
1 6 20 50 93 136 154 136 93 50 20 6 1
2 9 30 73 136 198 225 198 136 73 30 9 2
2 11 34 82 154 225 255 225 154 82 34 11 2
2 9 30 73 136 198 225 198 136 73 30 9 2
1 6 20 50 93 136 154 136 93 50 20 6 1
0 3 11 26 50 73 82 73 50 26 11 3 0
0 1 4 11 20 30 34 30 20 11 4 1 0
0 0 1 3 6 9 11 9 6 3 1 0 0
0 0 0 0 1 2 2 2 1 0 0 0 0
Figure 7 13x13 Gaussian Matrix
The size of the filter depends on the type of image being recorded. For instance,
if there is a lot of background noise, a larger Gaussian mask would be necessary. Thus,
in my application, I give the user the ability to change the Gaussian mask size.
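A minimal sketch of this step using OpenCV's Python bindings (the thesis applications use OpenCV, but the function shown here and its default values are illustrative assumptions, not the thesis's code):

```python
import cv2

def reduce_noise(frame, ksize=13, sigma=2.0):
    # Blur the frame with a user-selectable ksize x ksize Gaussian kernel (ksize odd).
    # A larger kernel suppresses more background noise at the cost of softer edges.
    return cv2.GaussianBlur(frame, (ksize, ksize), sigma)
```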
24


4.1.1.2 Canny Edge Detector
It should be noted that the first step to finding the edges in an image is to convert
it to grayscale. The reason we do this is because we're concerned with color intensity
changes, not the change from one color to another. Color intensity would mean a dark
color compared to a light color. Finding the edges of an object in an image is an area
which has received due attention over the years. Many algorithms use a simple first-order
derivative in the x and y directions [10]. We can then see the magnitude due to the change
in color intensity.
The Canny edge detection scheme does precisely this. Then, it uses non-maxima
suppression to blacken every pixel whose intensity gradient is not higher than
the gradients of the pixels on either side of it along the gradient direction. Finally, the algorithm goes through the
remaining pixels and, if the gradient is above some threshold, declares the pixel an edge point.
The threshold depends on the application of the Canny edge detection scheme;
therefore, my application gives the user the ability to change the threshold to customize
the algorithm for their recorded experiment.
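A sketch of the grayscale-plus-Canny step in OpenCV's Python bindings (the user-adjustable threshold mirrors the description above; the 2:1 high/low threshold ratio is a common heuristic and an assumption here, not a value from the thesis):

```python
import cv2

def detect_edges(frame, low_threshold=50, ksize=13, sigma=2.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)           # work on intensity only
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), sigma)  # noise reduction (Section 4.1.1.1)
    return cv2.Canny(blurred, low_threshold, 2 * low_threshold)
```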
For my experimentation I went a step further to assist the algorithm in finding the
edges of the object. All experiments were conducted in front of a flat white screen. The goal
is to minimize the amount of background noise so the edge detection scheme will only
find the ball in the image. Second, the balls were painted a bright red color. This
created a large color contrast between the wall/floor and the object under
experimentation.
25


Figure 8 Ball Drop Experiment with Edges Detected
4.1.1.3 Image Segmentation by Color Matching
The goal of the previous step was to remove everything from the image except for the
ball being dropped. Of course, the Canny edge detector can do a lot, but there are still
edge artifacts that make it onto the edge image. Therefore, I needed a method to
extract the ball's location and size from an edge image that may or may not contain spurious
edges.
Figure 9 Using Canny edge detector with color segmentation scheme to find the ball
I decided to implement a color segmentation scheme. The idea is to choose the
color of the ball, then write a computer vision algorithm which scans through the image
looking for edge colors which match the selected color. The ball is not flat; therefore,
there is a myriad of colors within the ball. However, as I painted the ball bright red, this
limited the colors I had to find. Even so, the ball still ranged from bright
26


red to dark red, so selecting and finding a single color would not find the entire ball. I
developed a threshold scheme in which the user controls how closely a pixel must match
the selected color for it to be considered part of the ball.
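A sketch of one way to implement such a tolerance-based segmentation with OpenCV (the thesis matches edge pixels against a user-selected color; the HSV range test and the tolerance values used here are assumptions, not the thesis's exact scheme):

```python
import cv2
import numpy as np

def segment_by_color(frame, selected_bgr, hue_tolerance=15):
    # Compare in HSV so the bright-to-dark red variation mostly changes value, not hue.
    # (Hue wrap-around near pure red is ignored in this simplified sketch.)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    target = cv2.cvtColor(np.uint8([[selected_bgr]]), cv2.COLOR_BGR2HSV)[0, 0].astype(int)
    lower = np.clip(target - [hue_tolerance, 100, 100], 0, 255).astype(np.uint8)
    upper = np.clip(target + [hue_tolerance, 100, 100], 0, 255).astype(np.uint8)
    return cv2.inRange(hsv, lower, upper)  # 255 where the pixel counts as "ball colored"
```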
I was able to find the height of the ball at all times throughout the recording; however,
there were some troubling areas. For instance, the ball, while painted red, was brighter
on the side that faced the light source than the opposite side. A possible solution to this
would be to have three light sources all on different sides of the ball. This even light
distribution to all areas of the ball would be an attempt to make the ball a solid color, thus
easier to pick up with this thresholding scheme. Another problem I had was the light
reflecting off the ball and hitting the white table surface. As the ball became closer and
closer to the ground, more and more of the red color was shown on the white cloth. The
thresholding scheme often picked up this color and considered the affected ground as part
of the ball's size.
One problem I had with this method stems from one of the goals of the Canny edge
detection scheme: the edge found is as close as possible to the actual boundary between the
ball and the background. This means that the one-pixel-thick line representing the
edge will have some ball-colored elements and some background-colored elements. The color
segmentation algorithm only knows the ball's color; therefore, it does not include some
sections of the edge which have more background color than ball color.
Figure 10 Error in Color Segmentation Scheme
As shown in the above figure, the color segmentation sometimes fails to include sections of the edge which contain more of the lighter background color than the red of the ball.
4.1.1.4 Image Cropping
Because the Canny edge detection algorithm left unwanted edges, and the color segmentation picked up the ground when the color of the ball reflected off of it, I needed a method to discard unwanted edges and focus the color segmentation algorithm on only the section of the video where the experiment was taking place.
For this I developed an image blackening solution which allows the user to drag their mouse across the area of the image they're interested in; the algorithm then blackens everything outside of it. This blackening occurs on the image created by the Canny edge detection algorithm, as blackening the original image would cause the Canny algorithm to treat the boundary of the box as an edge.
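A minimal sketch of such a blackening step is shown below, assuming the rectangle comes from the capture UI's mouse-drag handler; the function name is hypothetical.

#include <opencv2/opencv.hpp>

// Sketch: zero out everything in the edge image outside the user-dragged rectangle.
void blackenOutsideRoi(cv::Mat& edgeImage, const cv::Rect& roi)
{
    cv::Mat kept = edgeImage(roi).clone(); // keep the region of interest
    edgeImage.setTo(cv::Scalar(0));        // blacken the whole edge image
    kept.copyTo(edgeImage(roi));           // restore the cropped section
}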
4.1.2 Capturing Data Points over Time
Once the color segmentation was in place, I had a method of capturing the location and size of the ball over time. The goal of the global deformation experiment was to find and save the height and width over time.
The two data points I captured were the height over time and the width/height ratio over time. As the experiment played back and the computer vision algorithms mentioned above calculated the location and size of the ball, the application graphed these values. As the application captures this data it takes into account the camera frame rate, so that a 30 fps video yields results comparable to a 120 fps video.
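The frame-rate handling amounts to converting frame indices to seconds. A trivial sketch of this conversion (my own helper, not the application's code) is:

// Convert a frame index to seconds so that recordings at different frame
// rates produce comparable time axes.
double frameToSeconds(int frameIndex, double framesPerSecond)
{
    return frameIndex / framesPerSecond; // 30 fps and 120 fps videos align in time
}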
The section of the graph that we are concerned with runs from the instant the ball is dropped to the instant the ball comes to rest on the ground. Therefore there are sections of the graph that we don't want included in the final results set, and I developed a mechanism for cutting off the sections of the graph we're not interested in.
Figure 11 Graphs of the ball height over time and the width/height ratio over time
The top-left image is the height, from the ground, of the ball captured the entire
recording. The top-right is the same data where the unwanted data is pruned. The
bottom-left image is the width/height ratio over time graph for the duration of the video.
The bottom-right is the pruned graph, where we're only considering the portion of the video in which the ball is moving.
As is shown in the width/height ratio over time graph, there tends to be a lot of noise due to the imperfections of the Canny edge detection and color segmentation algorithms
described above. Some future work might be to run this data through a low-pass filter in order to remove some of the noise and better understand the ball's behavior when it collides with the ground.
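A simple version of the suggested low-pass step could be a centered moving average over the width/height ratio samples; the sketch below is my own, and the window size is an assumption.

#include <vector>

// Smooth a noisy signal with a centered moving average of the given window size.
std::vector<double> movingAverage(const std::vector<double>& samples, int window)
{
    std::vector<double> smoothed(samples.size(), 0.0);
    int half = window / 2;
    for (int i = 0; i < (int)samples.size(); ++i) {
        double sum = 0.0;
        int count = 0;
        for (int j = i - half; j <= i + half; ++j) {
            if (j < 0 || j >= (int)samples.size()) continue; // skip out-of-range neighbors
            sum += samples[j];
            ++count;
        }
        smoothed[i] = sum / count;
    }
    return smoothed;
}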
Finally, once the data is captured and pruned, I developed a method to serialize the data structure which stores the captured data. Pressing the appropriate button serializes the object and stores it to the clipboard. This serialized object is used when we simulate the object using the Bullet physics engine.
4.1.3 The Global Deformation Capture Application
Figure 12 Description of the Global Deformation Material Property Extraction
Application
1 A radio button to choose an AVI file which contains the recorded experiment
2 A radio button to choose an attached webcam to record the experiment
3 A frame containing the original image, but also displaying the cropped section
of video
4 Shows the color used by the color segmentation algorithm. To select a color,
right-click on the frame (3)
5 To enable or disable the cropping of the edge image
6 Play button for starting the video from the current frame.
7 Restart button for starting the video from the first frame.
8 Stop button to pause the video playback
9 The graph which displays the data for the entire video, or since (11) was pressed
10 Slider to choose the left-most data point to display in the (16) graph
11 The button to start saving height and width data. This also starts the graphing of
the data
12 Slider to choose the right-most data point to display in the (16) graph
13 The button to stop saving height and width data. This also stops the graphing of
the data.
14 Clears all saved height and width data. Also resets the graph
15 The button which serializes the cropped data store and saves it to the clipboard
16 The graph which displays the data for the cropped section of video, as
determined by (10) and (12)
17 The height at which the ball was dropped.
18 The number of frames per second in the recording
19 The frame containing the cropped edge image. Also shows a box around the
segmented ball.
20 A slider for adjusting the color segmentation threshold
21 A slider for adjusting the canny edge detection algorithm gradient threshold
22 A slider for adjusting the blurring Gaussian matrix size. This can be 3x3, 5x5 or
7x7.
23 A button which opens a file choosing dialog for selecting the recorded
experiment AVI file.
4.2 Finding Simulation Coefficients
Using the ball height over time, as well as the ball height and width over time, I developed an application which simulates a ball dropping using a physics engine and automatically adjusts the coefficients of that simulation until the behavior of the simulated ball matches that of the recorded ball from the capture step. In a final step, the application allows the user to transfer the coefficients obtained to a different object to see how that object behaves. My goal with this step was to determine how the object's behavior changes when only the geometry (internal structure) of the object changes, with all other coefficients remaining exactly the same.
4.2.1 Bullet
Bullet is an open source physics engine which has special support for rigid body
dynamics and collision detection. By physics engine, I mean the developer models a 3D
object in its undeformed shape. This shape will consist of vertices and edges between
them. The developer then places it in the scene, adds some initial conditions, and then
sets the engine in motion. The engine determines the position and velocity of the each
vertex in the 3D object at each frame by taking the position and velocity at a previous
frame, and then uses some general physics principles to get the position and velocity of
each vertex at the current animation frame.
Newton's second law of motion, describing the relationship between force and acceleration, can be summed up by the following second order differential equation, where $F()$ is a function accumulating all of the internal and external forces [27]:

$$\ddot{x} = F(x, \dot{x}, t)$$

Physics engines turn this into two first order differential equations,

$$\dot{x} = v, \qquad \dot{v} = F(x, v, t),$$

which are integrated numerically from the state $(x_{n-1}, v_{n-1})$ at the previous frame.
Solving these differential equations has been an area of much attention and is outside the scope of this thesis. However, there are many integration schemes currently employed, each with benefits and drawbacks. Usually, the choice of integration scheme is determined by speed requirements and object elasticity.
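For concreteness, one common choice is semi-implicit (symplectic) Euler; the sketch below is illustrative only and is not necessarily the scheme Bullet uses internally. Unit mass is assumed, and the Vec3 type is my own.

struct Vec3 { double x, y, z; };

// Semi-implicit Euler for one vertex: update velocity from the accumulated force,
// then update position from the new velocity.
void stepVertex(Vec3& x, Vec3& v, const Vec3& force, double dt)
{
    v.x += dt * force.x;  v.y += dt * force.y;  v.z += dt * force.z; // v_n = v_{n-1} + dt*F
    x.x += dt * v.x;      x.y += dt * v.y;      x.z += dt * v.z;     // x_n = x_{n-1} + dt*v_n
}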
Bullet has many material coefficients which define the internal and external force functions. For instance, there are elasticity coefficients which describe the constraints between two vertices connected by an edge. The elasticity describes the stress/strain relationship between the two connected vertices: the stress is the force pulling the vertices in opposite directions, and the strain is the amount by which the distance between them changes. As the goal of the simulation is to create a ball which has the same bouncing behavior as the recorded object, the elasticity coefficients are paramount in achieving it.
4.2.2 Modeling the Ball
Before we can start the simulation, we must model the 3D ball. The global
deformation simulation application uses two methods for building a 3D model of a ball.
4.2.2.1 Mass-Spring Sphere
A mass-spring modeling system builds a 3D mesh of vertices connected with edges. The vertices are the mass objects and the edges between them define the springs, or the elasticity relationships, between adjacent masses. In this application, the mass-spring balls are defined by a surface mesh only. Some future work might be to explore how a mass-spring object behaves when it is not hollow but has an internal network of masses and springs.
Figure 13 Geometry Parameters for a Mass-Spring Sphere
With respect to the object's geometry, when using the mass-spring object, the only options for modifying the object are its radius and its vertex density. We can increase or decrease the sphere radius, or increase or decrease the resolution of the object; the higher the resolution, the more vertices and edges define the surface mesh of the sphere.
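A minimal sketch of creating such a surface-only mass-spring sphere with Bullet's soft-body helpers is shown below. The stiffness value, mass, starting position, and resolution are placeholders of mine, not values taken from the thesis application.

#include <BulletSoftBody/btSoftBodyHelpers.h>

// Sketch: build a surface mass-spring sphere and set its linear stiffness.
btSoftBody* makeMassSpringSphere(btSoftBodyWorldInfo& worldInfo,
                                 btScalar radius, int resolution)
{
    btSoftBody* ball = btSoftBodyHelpers::CreateEllipsoid(
        worldInfo,
        btVector3(0, radius, 0),                         // placeholder starting position
        btVector3(radius, radius, radius),
        resolution);                                     // resolution = vertex density
    ball->m_materials[0]->m_kLST = 0.5;                  // linear stiffness of the edge "springs"
    ball->setTotalMass(1.0, true);                       // distribute mass over the vertices
    return ball;
}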
4.2.2.2 Tetrahedral Sphere
A tetrahedron is a three dimensional object composed of four triangular faces. This application uses TetGen to construct a tetrahedral mesh. TetGen is an application which takes a 3D surface mesh and generates the non-hollow 3D tetrahedralization of that
surface mesh employing Delaunay triangulation [40]. TetGen accepts several parameters which define how it will tetrahedralize the surface mesh.
The first is the quality constraint on the generated mesh. The TetGen manual notes that this constraint exists because the Finite Element Method (FEM) for soft body dynamics is more accurate when the aspect ratio of each tetrahedron is as small as possible. The aspect ratio of a tetrahedron is defined as the ratio of its circumscribed radius to the length of its shortest edge; therefore, thin and flat tetrahedra tend to have a large aspect ratio [39].
Figure 14 Tetrahedrons with High and Low Radius/Edge Length Ratios
The three tetrahedrons on the left are low quality while the three on the right are
higher quality. [39]
The second parameter passed to TetGen is the volume constraint. This value tells TetGen the maximum volume allowed for any generated tetrahedron. Therefore, if large tetrahedrons are desired we give a large value for the volume constraint; if smaller ones are needed, we provide a smaller volume constraint.
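Both constraints are typically passed to TetGen through its switch string; the sketch below is an assumption of how this could be done, and the specific numbers are placeholders rather than the values used in this thesis.

#include "tetgen.h" // TetGen's C++ interface

// Sketch: tetrahedralize a surface mesh already loaded into 'in'. In the switch string,
// 'p' keeps the input surface, 'q1.414' is the radius/edge-length quality bound, and
// 'a0.05' is the maximum tetrahedron volume (both numbers are placeholders).
void buildTetMesh(tetgenio& in, tetgenio& out)
{
    char switches[] = "pq1.414a0.05";
    tetrahedralize(switches, &in, &out);
    // out.numberoftetrahedra and out.tetrahedronlist now describe the solid mesh.
}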
To affect the geometry when using tetrahedrons, the coefficients which can be adjusted are the resolution (vertex density), the tetrahedral quality constraint, the tetrahedral volume constraint, and the number of clusters defined for that tetrahedral mesh. This final coefficient, defined by Bullet, combines tetrahedrons together to form rigid groupings. For instance, if we defined only a single cluster, all of the tetrahedrons of the sphere would be combined into one cluster,
and thus a rigid object. The application also allows for each tetrahedron to be its own
cluster.
Figure 15 Spheres with Differing Radius / Edge Length and Volume Constraints
From left to right: the first sphere has a high-quality constraint, the second sphere a low-quality constraint, the third sphere a high-volume constraint, and the fourth sphere a low-volume constraint.
4.2.3 TetView
During the Bullet simulation, when tetrahedrons are used, the only part of the geometry visible to the user is the outside. Sometimes, if there is enough force when colliding with the ground, the tetrahedrons will collapse on themselves. Often, a geometric coefficient is changed but there doesn't appear to be any physical change to the outside of the sphere. For both of these reasons, I added the use of TetView, the tetrahedral mesh viewer designed by Hang Si of the Weierstrass Institute for Applied Analysis and Stochastics [39].
TetView provides options to load a tetrahedral mesh, rotate it, and, most usefully, cut it down the middle to see the internals of the object. At any time while the global deformation simulation application is running, a button can be pressed which will launch TetView and load the mesh currently being used by the application.
4.2.4 Capture Data Points over Time
Similar to the global deformation capture application, this application has the ability to capture and graph data points over time. The data points used for this application are the height of the ball (off the ground) over time and the width/height ratio of the ball over time. The goal was to see the trajectory of the bouncing ball over time, but also to see how the ball deforms when it collides with the ground and as it pushes off the ground.
As well as capturing data points over time, this application can import data points
from the global deformation capture application. Doing this allows the application to
find the coefficients which make the simulated ball bounce like the real ball.
4.2.5 The Coefficient Finding Algorithm
Finding the optimal coefficients to make the simulated ball behave like the recorded ball is an optimization problem whose objective is to minimize the difference between the bounce heights of the first n bounces of the simulated ball and those of the real ball. The constraints are the numeric limits of the coefficients: most coefficients have values between 0 and 1, while some only have a lower limit.
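Written out, the objective can be stated roughly as follows (my paraphrase, not a formula from the thesis), where $h_i^{\mathrm{sim}}(c)$ and $h_i^{\mathrm{rec}}$ are the $i$-th bounce peaks of the simulated and recorded ball and $c$ is the vector of adjustable coefficients:

$$\min_{c} \sum_{i=1}^{n} \left| h_i^{\mathrm{sim}}(c) - h_i^{\mathrm{rec}} \right| \quad \text{subject to} \quad c_{\min} \le c \le c_{\max}$$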
In order to optimize the coefficients, I developed an algorithm which runs the
simulation, modifies a single coefficient, and runs the simulation again. This loop is
continued until a solution is found. Each time the loop is executed, the algorithm tracks
the ball height over time. Using this information, it is able to capture the bounce heights
by looking for peaks in the ball height data. A peak in the data is defined as a ball height
where the heights directly before and directly after are smaller.
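The peak test described above translates directly into a small helper; this sketch is mine, not the application's code.

#include <vector>

// A sample is a bounce peak when its neighbors on both sides are strictly smaller.
std::vector<double> findBouncePeaks(const std::vector<double>& heights)
{
    std::vector<double> peaks;
    for (size_t i = 1; i + 1 < heights.size(); ++i)
        if (heights[i] > heights[i - 1] && heights[i] > heights[i + 1])
            peaks.push_back(heights[i]);
    return peaks;
}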
In this loop, I developed a method to determine whether the simulation is converging on the optimal solution. This is done by saving the run history and comparing the current run with a past run to see if the bounce height differences are getting smaller. If the bounce heights become closer to the optimal values, the algorithm continues, as it is still converging on the solution.
In the first iteration of the loop, the algorithm modifies the radius of the ball to match the radius of the recorded ball. In the second iteration, the starting y position of the ball is updated. Subsequent iterations work through one elasticity coefficient at a time. Each elasticity coefficient is modified as long as the simulation continues to converge on the optimal solution; once it begins to diverge, the algorithm saves the best coefficient state found and moves on to the next coefficient.
The elasticity coefficients start in some default position. The position does not
matter as the algorithm can find the optimal value regardless of the starting value. The
simulation is run with the coefficient at this starting position to get a baseline as to how
close the simulation is to the optimal bounce heights.
As increasing an arbitrary coefficient could make the object more elastic or less elastic, the next step is to determine whether to increase or decrease the coefficient. To do this, the algorithm moves the coefficient halfway between the default and the maximum value. If this change causes the simulation to diverge, the algorithm instead moves the coefficient halfway between the minimum value and the default value. Once the direction has been determined, the algorithm searches for the optimal coefficient value by modifying the coefficient in a binary manner.
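A hedged sketch of this per-coefficient search is shown below; it is my reading of the heuristic described above, not the thesis application's exact code. runAndScore() is a placeholder that runs the bounce simulation with the candidate value and returns the summed bounce-height error against the recorded data (smaller is better).

double tuneCoefficient(double defaultValue, double minValue, double maxValue,
                       double (*runAndScore)(double), int iterations)
{
    double best = defaultValue;                  // best value found so far
    double bestError = runAndScore(best);        // baseline with the default value

    // Direction test: probe halfway toward the maximum; if that diverges,
    // search toward the minimum instead.
    double bound = maxValue;
    if (runAndScore(0.5 * (defaultValue + maxValue)) > bestError)
        bound = minValue;

    // Refine in a binary manner between the best value so far and the bound.
    for (int i = 0; i < iterations; ++i) {
        double mid = 0.5 * (best + bound);
        double err = runAndScore(mid);
        if (err < bestError) { bestError = err; best = mid; } // converging: accept
        else                 { bound = mid; }                 // diverging: pull back
    }
    return best;
}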
There are many coefficients, some which have nothing to do with the elasticity of
the object. Therefore, I limited the set of coefficients to adjust to only those which affect
elasticity. These include linear stiffness, angular stiffness, volume stiffness, pressure,
volume conservation, and collision impulse splitting.
As these coefficients are updated one at a time and never revisited, it is important to find the correct order, because once a coefficient has been updated the algorithm does not return to it later. The purpose of updating the coefficients individually, one at a time, is to isolate the effect of a single coefficient on the behavior of the object. Because different coefficients have similar or opposite effects on the object's local and global elasticity, an algorithm which modifies multiple coefficients each iteration would be infeasible: there would be no way to determine which coefficient was responsible for converging on, or diverging from, the solution.
In the future, it would be beneficial to utilize an external multi-variate
optimization routine for finding these coefficients of the governing physics equation. It
would be the responsibility of the application to define the error function which would
tell the optimization routine whether the simulation was converging on or diverging from
the optimal solution.
4.2.6 Global Deformation Simulation Application
This application was written using C++, fltk 1.3, Bullet 2.79, OpenGL 2.1.0, and ChartDirector 5.0.3 on a Windows XP SP3 system with an Intel onboard graphics card.
Figure 16 Overview of the Global Deformation Simulation Application
1 Launches TetView and displays the sphere mesh from (2)
2 The simulation window which shows the sphere generated with the coefficients
defined (14)
3 Stops the simulation on its current frame
4 Restarts the simulation. First it regenerates the scene with a new sphere using
the coefficients as it is currently set.
5 Starts the simulation from the current frame.
6 The graph of the height over time and width/height ratio data points
7 The button to start saving height and width data. This also starts the graphing of the data
8 Stop button to pause the video playback
9 Imports data points from the clipboard and graphs it. This is the data from the
global deformation capture application.
10 Clears all saved height and width data. Also resets the graph
11 Radio buttons to determine which object to display in the right simulation
window (15)
12 Button which starts the algorithm for finding the coefficients to make the
object being simulated match the imported data points.
13 Button which stops the algorithm for finding the coefficients of the sphere.
14 The coefficients, which can be adjusted manually or by the coefficient finding algorithm.
15 The simulation window which shows the object defined in (11). If the object is
one of the spheres, it will use the coefficients defined in (14), but it will change
one coefficient to change the geometry of the object.
16 Tells the algorithm to use a tetrahedral mesh to define the sphere
17 Tells the algorithm to use a mass-spring mesh to define the sphere
18 Launches TetView and displays the sphere mesh from (15)
5. Local Deformation Experiment
In previous sections, the thesis was focused on experiments where the object was
placed in some initial state and in an instant all forces applied by the experimenter were
removed leaving only the natural internal and external forces to act on the object. In this
next section, I present a different kind of experiment.
When we think about how an animator might test a deformable object to see how it behaves, one obvious test would be to hold the ball in their hand and squeeze it. This gives the animator an idea of the stress/strain relationship, as they know how much stress they're applying and they can see the ball displace from its resting configuration (the strain). They can also see how volume is conserved by noticing whether the area not under stress expands while the stressed area compresses.
Some other interesting tests an animator might do would be to poke their finger at a
single location on the object to determine the area of the affected region or to see how
deep the finger enters the object.
In these types of tests, the animator has direct control over the stress placed on the object, and it is these kinds of tests on which this section will focus.
The goal of my local deformation experiment was to see how a ball displaces when a specific amount of directed force is applied to it. There was a limitation to the kinds of experiments I could do since the ball was round (e.g., I could not set the ball on the ground and poke a stress-measuring device into it, as it would roll away). I settled on placing the ball on a table, then setting a flat board on top with several weights atop the board. Finally, I used a level to make sure the board was parallel to the table. I
used my hands to ensure the board didn't fall off and to adjust it until it was level, but I did not add any force to the ball under test while doing this.
Figure 17 The Local Deformation Experiment
A nerf ball undergoing the local deformation experiment
5.1 Material Property Extraction
Similar to the global deformation experiment material property extraction, the local deformation capture tool uses Gaussian masking to blur the image (described in section 4.1.1.1), Canny's edge detection algorithm (described in section 4.1.1.2), a color segmentation scheme where the user can right-click on the object to set the color (described in section 4.1.1.3), and image cropping to focus the experiment on the part of the image which contains the ball under test (described in section 4.1.1.4).
Figure 18 Extracting the Ball's Dimensions for the Local Deformation Experiment
The nerf ball from Figure 17 after undergoing the material property extraction algorithm.
While the method for finding the ball in the image is the same, the difference lies in how the experiment is conducted, how the application is used to extract the material properties, and which material properties are captured.
5.1.1 Recording the Experiment
I set up a table with a tall object over which a white cloth could be draped as my recording background. On the table I laid a length of white particle board to ensure the experiment background would be one color while the object under test would be a contrasting color. I did this to help the color segmentation algorithm find only the ball.
I started the recording, making sure the camera was at the same level as the middle of the ball. By doing this I was able to see where the boards met the top and the bottom of the ball.
It was important to capture both the undeformed shape and the deformed shape in the video in order for the algorithm to compare the two. I started the recording with the ball sitting on the table with no external forces acting on it. Then, I placed the board on the ball, followed by the weights and the level. Once I saw the board was level I stopped the recording.
5.1.2 Capturing the Control and Deformed Frames
This application differs from the global deformation application in that we only need two data points: the height and width of the ball in its equilibrium state and in its deformed state. Therefore, as the video plays back there are two buttons: one to press at the frame that best shows the control state, and one to press at the frame that best shows the deformed state.
5.1.3 Material Properties Extracted
For this experiment, all we care about is the width and height of the ball in its control state and in its deformed state. When using the application, the user must enter the amount of weight placed on the object, along with the height of the ball.
The edge detection, color segmentation, and cropping algorithms took care of finding the ball in the frame. I used the information entered by the user to calculate the width and height of the control and deformed ball in meters.
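The conversion amounts to using the user-entered real height of the undeformed ball as a calibration length. A sketch of how this could work is shown below; the struct and function names are my own.

struct Dimensions { double width, height; };

// Convert measured pixel dimensions to meters, calibrated by the real height
// of the undeformed (control) ball entered by the user.
Dimensions pixelsToMeters(double widthPx, double heightPx,
                          double controlHeightPx, double realHeightMeters)
{
    double metersPerPixel = realHeightMeters / controlHeightPx;
    Dimensions d;
    d.width  = widthPx  * metersPerPixel;
    d.height = heightPx * metersPerPixel;
    return d;
}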
Similar to the global deformation capture application, this application has a button
which will serialize the extracted data and copy it to the clipboard.
5.1.4 The Local Deformation Capture Application
This application was written using C++, fltk 1.3, and OpenCV 2.1 on a Windows XP SP3 system with an Intel onboard graphics card.
Figure 19 Overview of the Local Deformation Material Property Extraction
Application
1 Radio button which tells the application the experiment is a recorded AVI file
2 Radio button which tells the application the experiment will be done via a
web cam connected to the device.
3 The frame which displays the recording as it was recorded
4 Button stops the playback of the recorded experiment
5 Button restarts the playback of the recorded experiment from the first frame
6 Button starts the playback of the recorded experiment from the current frame
7 A box which shows the currently selected object color
8 A checkbox to enable/disable the cropping of the image
9 A field which the user enters the size (in the y direction) of the object
10 A field which the user enters the weight of the board placed on the object
11 The captured width and height for the control ball. This gets set when (21) is
pressed
12 Takes the captured data, serializes it, and copies it to the clipboard.
13 The captured width and height for the deformed ball. This gets set when (18) is pressed
14 A slider to modify the color segmentation threshold
15 A slider to modify the Gaussian blur matrix size. This can be 3x3, 5x5, or 7x7
16 A slider to modify the Canny edge detection gradient threshold.
17 Button to discard the previously set deformed object image
18 Button to set the deformed object image
19 The frame which displays the set deformed edge image
20 Button to discard the previously set control object image
21 Button to set the control object image
22 The frame which displays the set control edge image
23 Button which opens a file dialog where the recorded experiment can be
selected. The experiment must be an AVI file format.
5.2 Finding Simulation Coefficients
Once the width, height, and weight information has been serialized to the clipboard, it can be brought into the local deformation simulation application so that the ball's coefficients can be found. This application uses all of the same technologies as the global deformation simulation application. Bullet and OpenGL are used to simulate the
undeformed ball and the compressed ball (described in section 4.2.1). TetGen is used to modify the ball's geometry when tetrahedral meshes are chosen (described in section 4.2.2). TetView is used for closer inspection of the tetrahedralized sphere; it allows us to see the internals of the object (described in section 4.2.3).
There are several key differences between the two applications. First, while the global deformation simulation application tried to optimize one behavior of the ball (bounce height), this application attempts to optimize four values (control width, control height, deformed width, deformed height). Second, the method by which the data is captured differs: instead of capturing data points over time, we're capturing data points at a single snapshot in time.
Figure 20 The Simulated Ball being Compressed by a Simulated Weighted Board
5.2.1 The Board-Ball Simulation
Simulating a board squishing a ball is not a simple problem. This is because, in a
typical simulation, the animator sets up the scene (placing objects of differing masses,
sizes, and shapes in different locations in the scene) and then starts the simulation, letting
the physics engine take care of the details.
The end goal of this simulation is to place a board on top of the ball with the same amount of force as in the recorded experiment. In the recorded experiment I set the board
on the ball and balanced it with my hands. This allowed the board to rest on the ball until both had zero velocity, thus adding no kinetic energy to the system at that point.
One problem with this is that we can't start the simulation with the objects already in the deformed equilibrium state. Deformed states occur because of collisions, which occur when two objects intersect each other. When the physics engine notices two objects intersecting, it adds an artificial impulse force to both objects, with a magnitude that depends on how deep the intersection is. When two objects start a simulation already intersecting, the physics engine adds a large impulse force which usually causes the objects to fly apart when the simulation is started.
For this thesis, I started the simulation with the weighted board slightly above the top of the ball. Then, when the simulation starts, the board drops onto the ball, causing a deformation of the soft body. The goal was to minimize the amount of kinetic energy brought into the experiment by the board dropping; however, using this mechanism it was impossible to remove all of it.
5.2.2 Optimization Problem
The purpose of the local deformation experiment I chose was to see how the object deforms when squished between two boards with a certain amount of force. As the camera is two dimensional, the available information was the amount the object displaced in the y and x directions. As the ball is symmetric, this means the height and width, respectively.
The displacement is calculated as the difference, in position, of the ball in its
deformed state minus the position of the ball in its undeformed (or equilibrium) state.
Therefore, the application finds the width and height of the ball in both the equilibrium
and deformed state.
The purpose of the simulation application was to optimize the coefficients so that the
width and height of the ball in the equilibrium state matched that of the recorded ball, and
the width and height of the ball in the deformed state matched that of the recorded ball.
5.2.3 Capturing Data Points
Unlike the global deformation experiment, the goal was not to track the behavior of the ball over time, but to capture the ball's behavior at one moment in time: we want the width and height of the ball when it is most deformed.
As each frame is rendered, the algorithm tracks the minimum and maximum x and y values of the entire object. From these we compute the width and height, and we save that information if the width is bigger than it has been in any previous frame or the height is smaller than it has been in any previous frame.
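The per-frame bookkeeping this describes is a small running min/max; the sketch below is mine, not the application's code.

// Remember the largest width and smallest height the deforming ball reaches.
struct DeformationExtremes {
    double maxWidth;
    double minHeight;
    DeformationExtremes() : maxWidth(0.0), minHeight(1e30) {}

    void update(double width, double height) {
        if (width  > maxWidth)  maxWidth  = width;   // widest the ball has been
        if (height < minHeight) minHeight = height;  // most compressed so far
    }
};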
The application renders two scenes simultaneously: first, a scene with just the ball; second, a scene with the same ball and a weighted board dropped on it. This allows us to calculate the maximum width and minimum height of both the control ball and the deformed ball for that simulation.
ball and deformed ball for that simulation. This is exactly what was done in the local
deformation capture application.
5.2.4 The Coefficient Finding Algorithm
This optimization problem is similar to the one defined in the global deformation simulation application. The difference here is that the optimization problem has four target values: the width and height of the control ball as well as of the
deformed ball. That said, the algorithm is identical; the only difference is how we determine whether the simulation is converging on the solution. If either a width or a height difference is smaller than in the previous run, the algorithm marks that run as converging on the solution.
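Written as code, this convergence test is a simple comparison of the four errors; the sketch below and its field names are assumptions of mine.

#include <cmath>

// Errors for control/deformed width and height relative to the recorded values.
struct RunError { double cw, ch, dw, dh; };

// A run is treated as converging if any of the four absolute errors shrank.
bool isConverging(const RunError& prev, const RunError& curr)
{
    return std::fabs(curr.cw) < std::fabs(prev.cw) ||
           std::fabs(curr.ch) < std::fabs(prev.ch) ||
           std::fabs(curr.dw) < std::fabs(prev.dw) ||
           std::fabs(curr.dh) < std::fabs(prev.dh);
}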
5.2.5 The Local Deformation Simulation Application
This application was written using C++, fltk 1.3, Bullet 2.79, and OpenGL 2.1.0 on a Windows XP SP3 system with an Intel onboard graphics card.
Figure 21 Overview of the Local Deformation Simulation Application
1 The simulation window which shows the sphere generated with the
coefficients defined (14)
2 The simulation window which shows the sphere generated with the
coefficients defined (14) with a weighted box dropped on it.
3 Stops the simulation on its current frame
4 Restarts the simulation. First it regenerates the scene with a new sphere
using the coefficients as it is currently set.
5 Starts the simulation from the current frame.
6 The width and height of the simulated control sphere (1)
7 The width and height of the simulated deformed sphere (2)
8 Imports width/height data from the clipboard and sets it in the fields (9) and
(10)
9 The imported width and height of the control object.
10 The imported width and height of the deformed object. Also the amount of
weight set on top the deformed object.
11 Radio buttons to determine which object to display in the right simulation
window (15)
12 Button which starts the algorithm for finding the coefficients to make the
object being simulated match the imported data points.
13 Button which stops the algorithm for finding the coefficients of the sphere.
14 The coefficients, which can be adjusted manually or by the coefficient finding algorithm.
15 Tells the algorithm to use a tetrahedral mesh to define the sphere
16 Tells the algorithm to use a mass-spring mesh to define the sphere
17 Launches TetView and displays the sphere mesh from (2)
18 Launches TetView and displays the sphere mesh from (1)
6. Coupling Object Geometry with Elasticity Coefficients
The goal of the previous experiments was to build a simulation of an object whose
material behavior closely resembles the material behavior of some known object. We did
this by recording the object under several tests, then running those recordings through a
few computer vision algorithms to extract the material behavior of the object from the
recordings. Then, these extracted behaviors are transferred to a physics simulation
engine which attempts to create an object with those same material behaviors.
Before starting the coefficient finding algorithm, the animator must generate the
geometry of the object. Then, the coefficients are found without changing the geometry.
The next stage of the thesis is to take two objects with identical material coefficients,
modify their geometries and see how it affects the material behaviors. The goal of this
stage of the thesis is to determine whether and how the coefficients of a simulation and
the geometry are coupled to each other.
Similar to the material behavior matching experiments done in the previous sections,
I developed a global deformation and a local deformation solution to test out these
theories.
6.1 Global Deformation
In a global deformation experiment, we are looking at how the whole object responds to contact forces applied to its surface. All previous research focuses on local deformation, that is, how a certain area on the object's surface displaces due to some contact force being applied at that location. This section discusses the
experiments performed to measure the global behavior difference of two objects with
different geometries. The global behavior focused on by this thesis is the bounce height
of an object, that is, how much energy is lost in the collision with the ground.
To accomplish the global deformation goals, I developed a method to simulate
two spheres simultaneously. Both spheres have the same size, starting position, mass,
and elasticity coefficients. One sphere is the control, while the second sphere is identical
to the control except the geometry is different.
There are three geometric properties which can be adjusted on the second sphere. First, the resolution of the sphere, that is, the number of vertices and edges on the surface of the sphere, can be made smaller or greater. Second, the radius/edge length ratio constraint can be changed; setting a lower value for this constraint causes the tetrahedral mesh generator to allow fewer long, skinny tetrahedra. Third, the volume constraint can be adjusted; when this value is lower, it forces the tetrahedral mesh generator to build smaller tetrahedra. These geometric properties are discussed in more detail in section 4.2.2.2. By adjusting these properties, this thesis tests spheres with larger surface tetrahedrons and smaller internal tetrahedrons, as well as spheres with smaller surface tetrahedrons and larger internal tetrahedrons.
The application tracks the ball bounce height and the dimensions of the ball for both the control sphere and the sphere with the adjusted geometry. With this, I compare the bounce heights of the two spheres.
6.2 Local Deformation
With respect to the local deformation experiment, I also implemented a method to
simulate two spheres with identical material coefficients but slightly different geometries.
The geometrical properties which can be changed are the resolution, radius/edge length
constraint, and the volume constraint. The geometric constraints are described in the
previous section.
Figure 22 Object Behavior Differences by Adjusting the Geometry
With all other coefficients being the same, objects are tested to see how identical
localized forces affect the behavior of the objects with different mesh resolutions.
Each sphere has an identical weighted board dropped on it. The height and width of
the control and geometrically-altered spheres are captured until the moment the width is
at its largest value and the height is at its smallest value.
With this information, I am able to compare the deformation of two objects with
identical coefficients but different geometries side by side.
7. Results
During the course of this thesis there were five tasks, each with individual results, which are shared in this section. The goal of the material property extraction tasks was to successfully extract the material behavior of the ball under both the local deformation and global deformation tests. The goal of the simulation tasks was to use the extracted material behaviors to automatically generate the coefficients of the simulation which enable the simulated ball to have the same material behavior as the real ball. The final task was the investigation of how modifying the geometry, without modifying the coefficients, affected the material behavior of the object, using the same local and global experiments.
7.1 Global Deformation Material Property Extraction
The goal of this task is to take a recorded experiment of a ball being dropped from some height, parse that video, then extract and track the object's y position and dimensions for the duration of the experiment. The captured information needs to be formatted in such a way that all data points not relevant to the experiment can be pruned.
To analyze the computer vision, data analysis, and tracking logic in the global deformation material property extraction research, I used four balls made of different materials which behave differently when bounced: a ping-pong ball, a wiffle ball, a softball, and a highly deformable air-filled children's bouncy ball.
The experiments were conducted in front of a Casio Exilim EX-ZR200 camera
recording at 120 frames per second. The background and ground in each experiment was
covered in a white sheet and each of the balls was painted red. The balls were dropped
from 1 meter and 1.6 meters.
Figure 23 Graphs of the Ball Positions over Time when Dropped from 1 Meter
These graphs show the y position of the top of the ball for the duration of the experiment which drops the ball from 1.0 meters. They represent the positions of the (a) softball, (b) wiffle ball, (c) air-filled children's ball, and (d) ping-pong ball.
Figure 24 Graphs of the Ball Positions over Time when Dropped from 1.6 Meters
These graphs represent the y position of the balls in the experiment which drops them from 1.6 meters. They are the (a) softball, (b) wiffle ball, (c) air-filled children's ball, and (d) ping-pong ball.
These results match what was expected prior to running the tests. The softball has low elasticity and reaches its final resting position faster than the other three ball types. The hollow wiffle ball bounces higher than the softball but not as high as the air-filled ball, and the ping-pong ball bounces the highest and longest of all the balls tested. This is expected, as the ping-pong ball is built to bounce well.
It should be noted that the data collected produces a smooth graph of the y position of the ball. This tells us that the frame rate used is high enough to capture the apex of each bounce. It also tells us that using the top of the ball to measure position is an acceptable solution for this type of experiment.
7.2 Global Deformation Simulation
The goal of the simulation step is to estimate the coefficients of the governing equation which cause the simulated ball to have the same bounce heights as the recorded ball. For this thesis, success means the algorithm changes the coefficients such that the bounce heights of the simulated sphere converge on the bounce heights of the recorded ball. The simulated ball's behavior does not always exactly match the recorded ball's behavior, but the algorithm always changes the coefficients such that the simulation is closer to the optimal solution than when it was run with default values.
It appears that when using tetrahedrons, the solid object which gets created dissipates much of the kinetic energy attained by the ball at the moment of collision. Because of this, the energy doesn't convert into an upward momentum great enough to propel the object to the bounce heights of the wiffle ball, air-filled ball, or ping-pong ball. The reason for the energy loss is the topic of another research effort; however, the author speculates it is due to the moving parts inside a solid tetrahedral mesh. The algorithm was, however, able to find elasticity coefficients which make the simulated ball behave like the softball.
Figure 25 Captured vs. Simulated Ball Height over Time
After 25 iterations, the algorithm was able to find the coefficients which made the
simulated ball have the same bounce heights as the captured ball
7.3 Local Deformation Material Property Extraction
In the local deformation material property extraction work, the goal was to find the width and height of the ball both under its own weight and under the weight of a heavy board. To test the computer vision width and height extraction algorithm, I used four types of ball, each of a different material: a softball, a wiffle ball, a nerf ball, and a stress ball. The softball is made of a solid polyurethane core with a leather wrapping, the wiffle ball is hollow with a hard plastic shell, the nerf ball is a solid, light, sponge-like material, and the stress ball is a soft plastic casing filled with a gel-like substance.
Table 1 Comparing the Deformation of each Ball Type
The width and height of the balls under experimentation along with the percentage
change after the weighted board is placed on the object.
Ball          Measure   Control    Deformed   % Changed
Softball      width     0.198969   0.196907   -1.04%
              height    0.2        0.198696   -0.65%
Wiffle ball   width     0.156757   0.156757    0.00%
              height    0.16       0.158919   -0.68%
Nerf ball     width     0.16       0.156991   -1.88%
              height    0.16       0.127536   -20.29%
Stress ball   width     0.161053   0.166316    3.27%
              height    0.16       0.111579   -30.26%
As expected, the width and height of the softball and wiffle ball do not change appreciably when weight is applied, as they're made of hard substances that do not deform. The nerf ball's height decreases by 20% as it deforms under the weight, but its width does not
change appreciably, as the nerf material does not preserve volume when deformed. The stress ball is also highly deformable, losing 30% of its height when the weight is applied. Finally, this ball gets wider, as it does preserve volume when deformed.
7.4 Local Deformation Simulation
The local deformation simulation application was responsible for finding the coefficients of the governing equations which cause the simulated experiment of a board compressing a ball to behave like the recorded experiment. The objects under test were a softball, a wiffle ball, a nerf ball, and a stress ball.
The algorithm developed works well with highly deformable spheres but does not work well when the object under test has little or no deformation. Therefore, the algorithm was unable to find the elasticity coefficients for the softball and wiffle ball. However, the algorithm was successful with the spongy nerf ball and the stress ball.
Table 2 Captured vs. Simulated Compression of Nerf Ball
The simulated control width and height, along with the deformed height, all come very close to matching the recorded values. The simulated deformed width is greater than the captured width, which tells us the tetrahedral mesh tends to preserve volume as the object is being deformed.
                        Control                 Deformed
                        width      height       width      height
Nerf ball   Recorded    0.25       0.25         0.244048   0.197619
            Simulated   0.250632   0.242885     0.261936   0.197754
            % Diff      0.25%      -2.85%       7.33%      0.07%
Table 3 Captured vs. Simulated Compression of Stress Ball
All simulated width and height values are within acceptable thresholds (4%
difference between simulated and captured) for the stress ball.
                         Control                 Deformed
                         width      height       width      height
Stress ball  Recorded    0.102581   0.1          0.103226   0.069032
             Simulated   0.100048   0.097808     0.105659   0.066867
             % Diff      -2.47%     -2.19%       2.36%      -3.14%
7.5 Investigation of Object Geometries
The goal of this task was to determine how the behaviors of two spheres change when they have the same coefficients of the governing equation but differing geometries. This was tested both with the global deformation experiment, that is, dropping a ball from some height and tracking the height and frequency of its bounces, and the local deformation experiment, that is, compressing a ball between the ground and a weighted board to see how the object's height and width displace.
7.5.1 Global Deformation
For this part of the thesis, I focused only on the tetrahedral spheres, as they have more geometric properties that can be adjusted than the mass-spring sphere, for which only the object resolution can be adjusted.
The geometric properties of the tetrahedral sphere which can be adjusted are the resolution, that is, the number of vertices and edges which make up the surface of the object; the radius/edge length ratio constraint, that is, long-skinny tetrahedra versus evenly proportioned tetrahedra; and the volume constraint on the tetrahedra.
Figure 26 Graphing the Ball heights of Two Spheres with Differing Resolutions
The graph shows how the bounce height is affected when the resolution of the object
is adjusted. The sphere with the larger number of tetrahedrons tends to lose less energy
in the collision.
Figure 27 Graphing the Ball Heights of Two Spheres with Different Radius / Edge
Length Constraints
This graph shows how the bounce height is affected when the radius / edge length ratio is adjusted. While the sphere with the ratio closer to one does lose less energy in the collision than the other, the effect on bounce height is minimal.
Figure 28 Graphing the Ball Heights of Two Spheres with Different Volume
Constraints
This graph shows how the bounce is affected when the volume constraint is the only geometric difference between two spheres. The bounce heights are the same for the two objects; however, the sphere with the lower volume constraint compresses more when it strikes the ground.
Increasing the resolution increases the number of polygons on the surface of the sphere, but it is still the volume constraint and the radius / edge length ratio which determine the size and shape of the tetrahedra inside the object. In the first graph, two balls were dropped, one with a low resolution and one with a high resolution. Both spheres had identical elasticity coefficients as well as identical radius / edge length ratio and volume constraints, as evidenced by the same-sized tetrahedra on the inside of the object. From this information, we can see that an object with a surface mesh denser than its internal mesh will lose less of the sphere's kinetic energy when it collides with the ground.
My results show that changing the tetrahedra's radius / edge length ratio has minimal effect on the object's bounce height. The object with a value closer to one lost slightly less energy in the collision with the ground and thus bounced slightly higher than the ball with a higher radius / edge length ratio.
Finally, the tests showed that decreasing the mesh generation volume constraint coefficient had no effect on the energy lost during the collision. However, an interesting takeaway from this test was that the ball with the lower volume constraint compressed more when it collided with the ground but still reached the same height as the ball which didn't compress as much.
Interestingly, all of the tests confirmed that changing the geometric coefficients, with none of the elasticity coefficients changed, alters the elastic behavior of the object.
7.5.2 Local Deformation
With respect to local deformation, the test was to drop identical boards onto two spheres with identical elasticity coefficients but slightly different geometric properties. The goal is to investigate how the displacement of the spheres' width and height differs when their geometries change.
Resolution     Control Width    Deformed Width    % Change
128            0.599457         0.603917          0.74%
608            0.599435         0.604202          0.80%
% Difference   0.00%            0.05%

Resolution     Control Height   Deformed Height   % Change
128            0.5907           0.516605          -12.54%
608            0.59041          0.556264          -5.78%
% Difference   -0.05%           7.68%
Figure 29 Chart Comparing Compression of Two Spheres with Different
Resolutions
This chart represents how the balls' heights and widths displaced when identical boards were placed on top of them. The balls are identical except for the resolution of the sphere.
Radius / Edge Length   Control Width    Deformed Width    % Change
1.072                  0.59937          0.602328          0.49%
2.818                  0.599348         0.602709          0.56%
% Difference           0.00%            0.06%

Radius / Edge Length   Control Height   Deformed Height   % Change
1.072                  0.59172          0.520906          -11.97%
2.818                  0.592422         0.524481          -11.47%
% Difference           0.12%            0.69%
Figure 30 Chart Comparing Compression of Two Spheres with Different Radius /
Edge Length Constraints
This chart represents how the balls' heights and widths displaced when identical boards were placed on top of them. The balls are identical except that the radius / edge length constraints differed between the two balls.
Volume Constraint   Control Width    Deformed Width    % Change
0.0166              0.599657         0.602192          0.42%
0.1                 0.600578         0.602767          0.36%
% Difference        0.15%            0.10%

Volume Constraint   Control Height   Deformed Height   % Change
0.0166              0.588583         0.520549          -11.56%
0.1                 0.575359         0.543256          -5.58%
% Difference        -2.25%           4.36%
Figure 31 Chart Comparing Compression of Two Spheres with Different Volume
Constraints
This chart represents how the balls' heights and widths displaced when identical boards were placed on top of them. The balls are identical except for the volume constraint placed on the mesh generator which created the tetrahedral sphere.
Through experimentation, it is shown that the resolution of the ball has a large effect on the stress/strain relationship. In the first chart we see that when the objects were in their most-deformed state, the height of the sphere with the lower resolution was 7.68% lower than that of the denser sphere. This tells us that, when using tetrahedra, more, smaller tetrahedra give the object a more solid behavior than fewer, bigger tetrahedra.
While investigating the effect the radius / edge length ratio constraint had on object deformation, I found that while there were slight differences in the displacements as the force of the board was being added, the difference wasn't big enough to see in the simulation itself.
Finally, when testing the volume constraint parameter used by the tetrahedral mesh generator, I found that with a volume constraint about 10 times smaller than the control sphere's, the object's height compressed 4.36% more than the control sphere's. Furthermore, decreasing the volume constraint, more than any other geometric parameter, caused the width of the object to increase as the height decreased.
8. Discussion
The research conducted during the course of the past two semesters involved long
nights, weekends, and skipped holidays. While an enormous amount of work went into
it, there are still areas I have kept track of which could be improved upon to gain better
results.
8.1 Computer Vision
It was always the intention of this thesis to record experiments with an object and
to use computer vision techniques to extract the important behavioral information from
the recording.
To do this, I used a point-and-shoot camera to record high speed videos of the experiments. The camera, while much better than others I have used, still produced grainy video at any frame rate other than 30 frames per second. Even at the highest frame rate, it did not capture the 12.5 megapixel resolution it achieves when shooting a still picture. As a result of the grainy, low-resolution images, the edge detection algorithm had a harder time finding the true edges of the ball. I believe it is for this reason that I was not able to capture accurate height and width information for the ball in the global deformation material property extraction application; see the figure below for an example. I would expect the width/height ratio to stay at one except when the ball collides with the ground. However, because the image is grainy, the width and height are not precise at every frame.
Figure 32 Graph of Width / Height Ratio over Time
Graph of the Width/Height over time as tracked and calculated by the global
deformation material property extraction application.
Another idea to rectify this problem might be to implement a low-pass filter which removes the noise and, hopefully, leaves a line more representative of the width/height ratio of the ball over time.
The computer vision technique used to find the ball in the image, while it works for the type of experiments conducted, would not work under several circumstances: if there were a lot of noise in the background, if the top and bottom of the object were not visible, or if the background and ball did not have a high enough contrast.
There are better segmentation algorithms available today for finding an object in an image, though most of them require a library of images. The algorithm I used finds the edges of the ball using the Canny edge detection algorithm and then utilizes a color matching scheme to find edges of a certain color in the image. I have shown in an earlier section that this algorithm does not do well when a single pixel shares both the ball's color and the background color, which happens in many cases.
If I had two cameras, I could have shot the experiments in stereo, which captures much more three dimensional information than a single camera. Having one camera limited the experiments I could do to ones in which the object's behavior was visible in two
dimensions. With stereo cameras, I could have done an experiment in which I poked the
object with a certain force and found the radius and depth of the crater I made.
The experiments I did conduct would have been better captured with proper lighting. For all my experiments, I had overhead lights and scrounged a couple of lamps, but this was never enough. Some of the lights in my experiment were brighter than others; some had soft-white bulbs while others were fluorescent white. Also, the overhead light cast a shadow straight down which always showed in the video. A better setup would be to have three umbrella lights situated in a triangle in front of the object. That way the shadow would fall behind the object and all sides of the object would have an even color.
8.2 Simulation
For this thesis, I developed a new method for finding the coefficients of the
governing equation which made the simulation sphere behave like the real sphere. The
algorithm involves changing coefficients one at a time. Once a coefficient was moved
past, it would never be under consideration again. A study of multi-variable optimization
routines might allow this application to make better choices when updating coefficients.
It may also be the case that some of the difficult optimization work can be passed off to a
third party optimization routine altogether.
When using tetrahedral meshes for this thesis, it became clear that I was limited to the arguments I could pass to the tetrahedral mesh generator, namely the radius / edge length constraint and the volume constraint. It would be nice to have better control over the meshes I am creating. For instance, I would like to create a spherical mesh where the volume constraint starts out large near the border but gets smaller as it approaches the
center. The opposite configuration would also be interesting. The mesh generator also
doesnt provide the ability to generate hollow meshes. This would be an interesting test
as many of the balls I used in my experimentation were hollow. It was the pressurized air
inside that cause the ball to bounce.
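Returning to the two constraints that are available, the following is a minimal sketch of how they might be passed to TetGen from a script; the switch values and the input file name are illustrative only, not the settings used in this thesis:

    import subprocess

    def generate_sphere_mesh(poly_file="sphere.poly", radius_edge_ratio=1.4, max_volume=0.01):
        """Run TetGen with a radius/edge ratio bound (-q) and a maximum tetrahedron volume (-a)."""
        switches = "-pq{}a{}".format(radius_edge_ratio, max_volume)
        subprocess.run(["tetgen", switches, poly_file], check=True)

Both switches apply uniformly to the whole mesh, which is exactly the limitation noted above.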
8.3 Other Interesting Tests
As we look at an object to determine how rigid or soft it is, we might want to deform the object in some manner and track its vibration until it reaches an equilibrium state. As such, an interesting test might be to take a long thin strip of the material being tested, move it to the edge of a table, and secure one end of the strip to the table. With this setup, the experimenter could apply a certain force to the end of the strip protruding from the table and then suddenly remove the force, causing the strip to vibrate up and down.
With this type of experiment, a computer vision application could be written which measures the frequency and amplitude of the vibrating object. The application could also measure how quickly the vibrating object's amplitude decays, that is, how much energy is lost in the system due to the deformation of the object.
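A minimal sketch of that measurement, assuming the vertical position of the strip's free end has already been tracked per frame and centered on its resting height (the tracking itself and the variable names are assumptions):

    import numpy as np

    def frequency_and_damping(tip_y, fps):
        """Estimate the dominant vibration frequency and a damping ratio from a tracked tip height."""
        spectrum = np.abs(np.fft.rfft(tip_y))
        freqs = np.fft.rfftfreq(len(tip_y), d=1.0 / fps)
        dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]       # skip the DC bin

        # Logarithmic decrement between successive positive peaks gives the damping ratio.
        peaks = [tip_y[i] for i in range(1, len(tip_y) - 1)
                 if tip_y[i] > tip_y[i - 1] and tip_y[i] > tip_y[i + 1] and tip_y[i] > 0]
        if len(peaks) < 2:
            return dominant_hz, None
        delta = np.log(peaks[0] / peaks[-1]) / (len(peaks) - 1)
        damping_ratio = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
        return dominant_hz, damping_ratio

The decay of the peak amplitudes is exactly the energy-loss measure described above.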
It would also be interesting to investigate how the frameworks described in the earlier sections of this thesis could be modified to support material property extraction for arbitrarily shaped objects. In this case, I suspect the algorithm would need to find feature points on the object and track how the distances between those feature points change while the object under test is deformed.
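A rough sketch of that tracking step, using off-the-shelf corner detection and optical flow from OpenCV; the parameter values are guesses rather than tuned settings, and re-detection of lost points is omitted:

    import cv2
    import numpy as np

    def track_feature_distances(video_path):
        """Track corner features across frames and record the pairwise distances between them."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=10)
        distances_per_frame = []
        while True:
            ok, frame = cap.read()
            if not ok or pts is None or len(pts) < 2:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = new_pts[status.flatten() == 1]
            # Pairwise distances between tracked points expose local stretching and compression.
            dists = [float(np.linalg.norm(a - b))
                     for i, a in enumerate(good) for b in good[i + 1:]]
            distances_per_frame.append(dists)
            prev_gray, pts = gray, good.reshape(-1, 1, 2)
        cap.release()
        return distances_per_frame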


9. Conclusion
On a daily basis, animators transform the things they interact with into 3D models to be used in the movies, games, and research projects they are building. This thesis strove to build a framework which allows animators to accomplish these goals more quickly and more accurately than the traditional guess-and-execute method currently employed.
Through the previous semesters, I have developed a system which is a starting point
for future research in this area. The framework I built can simulate different types of
spheres, but it cannot yet simulate all possible spheres, not to mention the infinite other
shapes which an animator might want to simulate.
In the future, this work could be extended to different types of objects. For instance, a cube might be a good candidate for future experiments, as it is symmetric and similar material properties can be extracted using a single two-dimensional camera.
A longer-term goal is to build a library of materials, which means the experiments should not require a particular shape. Instead, if we were trying to obtain the material coefficients to simulate glass or hard rubber, we should be able to run the experiments with differently shaped objects of the same material.
Another possible area of future research is to take the coefficients which make a simulated sphere behave like the recorded rubber ball and apply them to objects of a different shape. This would show whether the approach is a good method for simulating an object with more complex geometry, such as a rubber car tire.
Another goal of the thesis was to investigate how geometric properties affect the material behavior of the object. To this point, researchers have treated an object's geometry and elasticity coefficients separately. With respect to tetrahedral meshes, I have identified which geometry properties affect object elasticity and which do not. This information gives animators and researchers additional tools for simulating real-life objects.


References
[1] Y. Wang, Y. Xiong, K. Xu, and D. Liu, Key Techniques in Surgery Simulation for Arthroscopic ACL Reconstruction, Computer Animation and Virtual Worlds, p. 53, 2011.
[2] R. Basri, L. Costa, D. Geiger, and D. Jacobs, Determining the similarity of
deformable shapes., Vision research, vol. 38, no. 15-16, pp. 2365-85, Aug. 1998.
[3] M. Becker, Robust and efficient estimation of elasticity parameters using the
linear finite element method, Proc. of Simulation and Visualization, 2007.
[4] K. S. Bhat, C. D. Twigg, J. K. Hodgins, P. K. Khosla, Z. Popović, and S. M. Seitz, Estimating Cloth Simulation Parameters from Video, Simulation, 2003.
[5] B. Bickel, M. Bächer, M. A. Otaduy, W. Matusik, H. Pfister, and M. Gross, Capture and modeling of non-linear heterogeneous soft tissue, ACM Transactions on Graphics, vol. 28, no. 3, p. 1, Jul. 2009.
[6] B. Bickel et al., Multi-scale capture of facial geometry and motion, ACM
Transactions on Graphics, vol. 26, no. 3, p. 33, Jul. 2007.
[7] G. Bousquet, D. K. Pai, B. Gilles, and F. Faure, Sparse Meshless Models of
Complex Deformable Solids, Computer.
[8] D. Bradley, T. Popa, A. Sheffer, W. Heidrich, and T. Boubekeur, Markerless
garment capture, ACM Transactions on Graphics, vol. 27, no. 3, p. 1, Aug. 2008.
[9] J. Canny, A computational approach to edge detection., IEEE transactions on
pattern analysis and machine intelligence, vol. 8, no. 6, pp. 679-98, Jun. 1986.
[10] T. Cootes and C. Taylor, Statistical models of appearance for computer vision,
World Wide Web Publication, February, 2001.
[11] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, Comparing Images Using the Hausdorff Distance, IEEE transactions on pattern analysis and machine intelligence, vol. 15, no. 9, p. 13, 1993.
[12] E. de Aguiar, C. Theobalt, C. Stoll, and H.-P. Seidel, Marker-less Deformable
Mesh Tracking for Human Shape and Motion Capture, 2007 IEEE Conference on
Computer Vision and Pattern Recognition, pp. 1-8, Jun. 2007.
[13] O. Faugeras, Fundamentals in computer vision: an advanced course, p. 132,
1983.


[14] P. F. Felzenszwalb and J. D. Schwartz, Hierarchical Matching of Deformable
Shapes, 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp.
1-8, Jun. 2007.
[15] V. Ferrari, F. Jurie, and C. Schmid, Accurate Object Detection with Deformable Shape Models Learnt from Images, 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, Jun. 2007.
[16] V. Ferrari and T. Tuytelaars, Object detection by contour segment networks,
Group, vol. 3953, no. section 4, p. 14, 2006.
[17] B. Frank and R. Schmedding, Learning the elasticity parameters of deformable
objects with a manipulation robot, Intelligent Robots, 2010.
[18] W. T. Freeman et al., Computer vision for interactive computer graphics, IEEE
Computer Graphics and Applications, vol. 18, no. 3, pp. 42-53, 1998.
[19] B. Gilles, G. Bousquet, F. Faure, and D. K. Pai, Frame-based elastic models,
ACM Transactions on Graphics (TOG), vol. 30, no. 2, p. 15, 2011.
[20] H. G. Barrow, J. M. Tenenbaum, R. C. Bolles, and H. C. Wolf, Parametric Correspondence and Chamfer Matching: Two New Techniques for Image Matching.
[21] M. Hauth, Corotational simulation of deformable solids, Science, pp. 2-9, 2003.
[22] D. Hoiem, A. A. Efros, and M. Hebert, Geometric context from a single image, Tenth IEEE International Conference on Computer Vision (ICCV '05) Volume 1, pp. 654-661 Vol. 1, 2005.
[23] R. Jarvis, A perspective on range finding techniques for computer vision,
Pattern Analysis and Machine Intelligence, IEEE Transactions on, no. 2, pp. 122-
139, 1983.
[24] S. Kunitomo, S. Nakamura, and S. Morishima, Optimization of cloth simulation parameters by considering static and dynamic features, in ACM SIGGRAPH 2010 Posters, 2010, p. 15.
[25] J. Lang, Deformable Model Acquisition and Validation, University of British
Columbia, 2001.
[26] J. Lee, J. Chai, P. S. A. Reitsma, J. K. Hodgins, and N. S. Pollard, Interactive control of avatars animated with human motion data, Proceedings of the 29th annual conference on Computer graphics and interactive techniques - SIGGRAPH '02, p. 491, 2002.


[27] Y. Li, T. Wang, and H.-Y. Shum, Motion texture: a two-level statistical model for
character motion synthesis, ACM Transactions on Graphics, vol. 21, no. 3, Jul.
2002.
[28] Q. Luo, Contact and deformation modeling for interactive environments,
Robotics, IEEE Transactions on, vol. 23, no. 3, pp. 416-430, 2007.
[29] Q. Luo, Geometric Properties of Contacts Involving a Deformable Object, 2006
14th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator
Systems, pp. 533-538, 2006.
[30] N. Magnenat-Thalmann and F. Faure, Simple yet Accurate Nonlinear Tensile Stiffness.
[31] N. Magnenat-Thalmann and H. E. C. Iro, Joint-Dependent Local Deformations for Hand Animation and Object Grasping, p. 12.
[32] D. R. Martin, C. C. Fowlkes, and J. Malik, Learning to detect natural image
boundaries using local brightness, color, and texture cues., IEEE transactions on
pattern analysis and machine intelligence, vol. 26, no. 5, pp. 530-49, May 2004.
[33] S. Martin, B. Thomaszewski, E. Grinspun, and M. Gross, Example-based elastic
materials, in ACM Transactions on Graphics (TOG), 2011, vol. 30, no. 4, p. 72.
[34] T. Moeslund, A Survey of Computer Vision-Based Human Motion Capture,
Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231-268, Mar.
2001.
[35] A. Nealen, M. Müller, R. Keiser, E. Boxerman, and M. Carlson, Physically Based Deformable Models in Computer Graphics, Computer Graphics Forum, vol. 25, no. 4, pp. 809-836, Dec. 2006.
[36] A. Opelt and A. Pinz, A boundary-fragment-model for object detection, Lecture
Notes in Computer Science, 2006.
[37] S. Ravishankar and A. Jain, Multi-stage contour based detection of deformable objects, Computer Vision - ECCV 2008, pp. 483-496, 2008.
[38] J. Shotton, A. Blake, and R. Cipolla, Contour-based learning for object detection, Tenth IEEE International Conference on Computer Vision (ICCV '05) Volume 1, pp. 503-510 Vol. 1, 2005.
[39] J. Shotton, A. Blake, and R. Cipolla, Multiscale categorical object recognition
using contour fragments., IEEE transactions on pattern analysis and machine
intelligence, vol. 30, no. 7, pp. 1270-81, Jul. 2008.


[40] H. Si, TetGen: A Quality Tetrahedral Mesh Generator and Three-Dimensional
Delaunay Triangulator. 2006.
[41] D. Sim and O. Kwon, Object matching algorithms using robust Hausdorff
distance measures, Image Processing, IEEE, vol. 8, no. 3, pp. 425-429, 1999.
[42] C. V. Stewart, Robust Parameter Estimation in Computer Vision 1 Introduction,
pp. 1-32, 1999.
[43] C. Syllebranque and S. Boivin, Estimation of mechanical parameters of
deformable solids from videos, The Visual Computer, 2008.
[44] D. Terzopoulos, J. Platt, A. Barr, and K. Fleischer, Elastically deformable
models, ACMSIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 205-214, Aug.
1987.
[45] A. Thayananthan, B. Stenger, P. H. S. Torr, and R. Cipolla, Shape context and chamfer matching in cluttered scenes, 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings., pp. I-127-I-133.
[46] A. Torralba and K. Murphy, Using the forest to see the trees: exploiting context
for visual object detection and localization, Communications of the ACM, 2010.
[47] P. Viola and M. J. Jones, Robust Real-Time Face Detection, International
Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, May 2004.
[48] H. Wang and J. F. O'Brien, Data-Driven Elastic Models for Cloth: Modeling and Measurement, ACM Transactions on Graphics, vol. 30, no. 4, 2011.
[49] R. White, K. Crane, and D. A. Forsyth, Capturing and animating occluded cloth, ACM Transactions on Graphics, vol. 26, no. 3, p. 34, Jul. 2007.
[50] L. Zhang, N. Snavely, and B. Curless, Spacetime Faces: High-Resolution Capture for Modeling and Animation, Data-Driven 3D Facial Animation, 2007.
[51] J. Zhao, Y. Wei, D. Zhu, S. Xia, and Z. Wang, Path Tracking with Time Constraints: A Convex Optimization Approach, Society, p. 8, 2011.
77


Full Text

PAGE 1

COMPUTER VISION BASED MATERIAL PROPERTY EXTRACTION AND DATA DRIVEN DEFORMABLE OBJECT MODELING B y Steven C. Wilber B.S. Computer Science, Metropolitan State College of Denver, 2005 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment O f the requirements for the degree of Master of Science Computer Science and Engineering 2012

PAGE 2

ii

PAGE 3

iii Wilber, Steven, C. (M.S., Computer Science and Engineering) Computer Vision Material Property Extraction and Data Driven Deformable Object Modeling Thesis directed by Associate Professor Min Hyu ng Choi, Ph.D. Of all the options available to computer animators, physics engines are typically used to develop animations where realism is important. When simulating deformable parameters in a manual, iterative manner which is both inaccurate and time consuming. T his thesis define s a meth od for finding the physics engine coefficients of an object by examining recordings of the object under various stress tests. It also investi gates how these very different geometries. To accomplish these, this thesis defines a set of deterministic experiments to put various types of stress on the object, develo p s a computer vision algorithm which takes the recordings and extracts the important behavioral information, develop s a simulation of the object under the same type of stress using the bullet physics engine, and define s a feed back loop algorithm for optim izing the simulation until it matches the criteria set by the extracted behavioral information. This thesis then use s the coefficients obtained to simulate an object with a different geometry. Using color and shape cues of the recorded experiments, th is thesis is successful in defining novel methods for extracting the size and location of the object over time. The framework for simulating the object, while it is still in infancy, has an algorithm which successfully creates a homogenous, isotropic version of the object mimicking the

PAGE 4

iv global behavior and local deformation of the recorded. While future work would improve the results, this thesis has successfully shown that extracting material properties from a recorded video (recorded under a strict experimental environment) and using that information to data drive a simulation of that object is achievable and produces good results. The form and content of this abstract are approved. I recommend its publication. Approved: Min Hyung Choi, Ph.D.

PAGE 5

v DEDICATION I dedicate this work to my loving wife, Mindy Rae Wilber. It is your patience, encouragement, and positive attitude that gave me the motivation I needed while working towards this goal.

PAGE 6

vi TABLE OF CONTENTS C hapter 1. Introduction ................................ ................................ ................................ .................. 1 1.1 Background ................................ ................................ ................................ ....... 3 1.2 Motivation ................................ ................................ ................................ ......... 9 2. Present State o f Knowledge in the Field ................................ ................................ .... 11 2.1 Data Driven Physics Based Animation ................................ .......................... 11 2.2 Computer Vision Material Property Extraction ................................ .............. 15 3. Experiment Definition ................................ ................................ ............................... 17 3.1 Defining the Experiments ................................ ................................ ............... 17 3.2 Material Property Capturing System ................................ .............................. 18 3.2.1 Nikon D5000 ................................ ................................ .......................... 19 3.2.2 Casio Exilim EX ZR200 ................................ ................................ ........ 20 4. Global Deformation Experiment ................................ ................................ ............... 21 4.1 Material Property Extraction ................................ ................................ ........... 22 4.1.1 Computer Vision ................................ ................................ .................... 23 4.1.1.1 Reducing Image Noise ................................ ................................ .. 23 4.1.1.2 Canney Edge Detector ................................ ................................ .. 25 4.1.1.3 Image Segmentation by Color Matching ................................ ...... 26 4.1.1.4 Image Cropping ................................ ................................ ............ 28 4.1.2 Capturing Data Points over Time ................................ .......................... 28 4.1.3 The Global Deformation Capture Application ................................ ...... 31 4.2 Finding Simulation Coefficients ................................ ................................ ..... 33 4.2.1 Bullet ................................ ................................ ................................ ...... 33 4.2.2 Modeling the Ball ................................ ................................ .................. 34

PAGE 7

vii 4.2.2.1 Mass Spring Sphere ................................ ................................ ...... 35 4.2.2.2 Tetrahedral Sphere ................................ ................................ ........ 35 4.2.3 TetView ................................ ................................ ................................ .. 37 4.2.4 Capture Data Points over Time ................................ .............................. 38 4.2.5 The Coefficient Finding Algorithm ................................ ....................... 38 4.2.6 Global Deformation Simulation Application ................................ ......... 40 5. Local Deformation Experiment ................................ ................................ ................. 43 5.1 Material Property Extraction ................................ ................................ ........... 44 5.1.1 Recording the Experiment ................................ ................................ ..... 45 5. 1.2 Capturing the Control and Deformed Frames ................................ ........ 45 5.1.3 Material Properties Extracted ................................ ................................ 46 5.1.4 The Local Deformation Capture Application ................................ ........ 46 5.2 Finding Simulation Coefficients ................................ ................................ ..... 48 5.2.1 The Board Ball Simulation ................................ ................................ .... 49 5.2.2 Optimization Problem ................................ ................................ ............ 50 5.2.3 Capturing Data Points ................................ ................................ ............ 51 5.2.4 The Coefficient Finding Algorithm ................................ ....................... 51 5.2.5 The Local Deformation Simulation Application ................................ ... 52 6. Coupling Object Geometry with Elasticity Coefficients ................................ ........... 54 6.1 Global Deformation ................................ ................................ ........................ 54 6.2 Local Deformation ................................ ................................ .......................... 55 7. Results ................................ ................................ ................................ ........................ 57 7.1 Global Deformation Material Property Extraction ................................ ......... 57 7.2 Global Deformation Simulation ................................ ................................ ...... 60 7.3 Local Deformation Material Property Extraction ................................ ........... 61

PAGE 8

viii 7.4 Local Deformation Simulation ................................ ................................ ....... 62 7.5 Investigation of Object Geometries ................................ ................................ 63 7.5.1 Global Deformation ................................ ................................ ............... 63 7.5.3 Local Defo rmation ................................ ................................ ................. 66 8. Discussion ................................ ................................ ................................ .................. 68 8.1 Computer Vision ................................ ................................ ............................. 68 8.2 Simulati on ................................ ................................ ................................ ....... 70 8.3 Other Interesting Tests ................................ ................................ .................... 71 9. C onclusion ................................ ................................ ................................ ................. 72 R e f e r e n c e s ................................ ................................ ................................ ......................... 74

PAGE 9

ix LIST OF TABLES T able Table 1 Comparing the Deformation of each Ball Type ................................ .................. 61 Table 2 Captured vs. Simulated Compression of Nerf Ball ................................ ............. 62 Table 3 Captured vs. Simulated Compression of Stress Ball ................................ .......... 63

PAGE 10

x LIST OF FIGURES Figure Figure 1 One Goal of this Thesis. ................................ ................................ ...................... 1 Figure 2 Object Material Properties ................................ ................................ ................... 4 Figure 3 Defining the Geometry of a Simulated Object ................................ .................... 5 Figure 4 Extracting Material Properties from a Video ................................ ...................... 6 Figure 5 Modifying Object Behavior by Modifying Mesh Density ................................ .. 8 Figure 6 Matching Object Behavior by Modifying Linear Elasticity ................................ 8 Figure 7 13x13 Gaussian Matrix with = 2 [10] ................................ ............................. 24 Figure 8 Ba ll Drop Experiment with Edges Detected ................................ ..................... 26 Figure 9 Using Canny edge detector with color segmentation scheme to find the ball ... 26 Figure 10 Error in Color Segmentation Scheme ................................ .............................. 28 Figure 11 Graphs of the ball height over time and the width/height ratio over time ....... 29 Figure 12 Description of the Global Deformation Material Property Extraction Application ................................ ................................ ................................ ........................ 31 Figure 13 Geometry Parameters for a Mass Spring Sphere ................................ ............ 35 Figure 14 Tetrahedrons with High and Low Radius/Edge Length Ratios ....................... 36 Figure 15 Spheres with Differing Radius / Edge Length and Volume Constraints ......... 37 Figure 16 Overview of the Global Deformation Simulation Application ....................... 41 Figure 17 The Local Deformation Experiment ................................ ................................ 44 Figure 18 Extracting the Ball's Dimensions for the Local Deformation Experiment ...... 44 Figure 19 Overview of the Local Deformation Material Property Extraction Application ................................ ................................ ................................ ................................ ........... 47 Figure 20 The Simulated Ball being Compressed by a Simulated Weighted Board ....... 49 Figure 21 Overview of the Local Deformation Simulation Application ......................... 52

PAGE 11

xi Figure 22 Object Behavior Differences by Adjusting the Geometry .............................. 56 Figure 23 Graphs of the Ball Positions over Time when Dropped from 1 Meter ........... 58 Figure 24 Graphs of the Ball Positions over Time when Dropped from 1.6 Meters ....... 59 Figure 25 Captured vs. Simulated Ball Height over Time ................................ ............... 60 Figure 26 Graphing the Ball heights of Two Spheres with Differing Resolutions ........... 64 Figure 27 Graphing the Ball Heights of Two Spheres with Different Radius / Edge Length Constraints ................................ ................................ ................................ ............ 64 Figure 28 Graphing the Ball Heights of Two Spheres with Different Volum e Constraints ................................ ................................ ................................ ................................ ........... 64 Figure 29 Chart Comparing Compression of Two Spheres with Different Resolutions 66 Figure 30 Chart Comparing Compression of Two Spheres with Different Radius / Edge Length Constraints ................................ ................................ ................................ ............ 66 Figure 31 Chart Comparing Compression of Two Spheres with Different Volume Constraints ................................ ................................ ................................ ........................ 67 Figure 32 Graph of Width / Height Ratio over Time ................................ ....................... 69

PAGE 12

1 1. Introduction Figure 1 One Goal of this Thesis The first goal of this thesis is to define, execute, and record a set of experiments, extract the important information from the recordings and develop a simulation of that recorded object u sing the extracted information. When discussing the broad topic of computer animation there are many different methods of solving the same problem and depending on the goal of the individual creating the animation there are many choices which go into maki ng an animation. If t he for realism, but a cartoony feel where the car might deform in a way not possible in our world dictated by the laws of nature. However, when t he developers of a new line of family sedans develop a car crash scene, realism is absolutely crucial as the lives of their customers are at stake. If realism is the goal, there are several choices that need to be made. Traditionally, there is a tradeof f between realism and speed. Interactive applications require the animation is rendered at 20 30 frames per second. Any less and our eyes will not see smooth transitions between frames. Moreover, if haptic responses are important, the literature states o ur senses require update rates of 1000 frames per second to feel realistic [28]. Realism requires more discrete primitives in the animation scene to handle the sharp edges and irregular contours an object might have. C alculations are performed

PAGE 13

2 on each pr imitive in the scene T herefore adding more primitives slows render time. In this thesis, modeling accuracy and deformation realism is crucial while rendering speed is secondary. When simulating a real life object using a physics engine there are three details that need to be interpreted and used correctly, to effectively simulate that real life object. The first is the governing equation. This equation describes the relationship between the force applied to the object, and the resultant movement of t he object. Also included in the governing equation is the time integration mecha nism which allows the animation to move forward at a set time interval. Secondly, the coefficients of the governing equation describe the material properties of the object be ing simulated. For instance, a baseball and tennis ball will have different coefficients but the same gov erning equation. It is the se coefficients that tell the tennis ball to be bouncier and the baseball to be harder. Finally, there is the geometry of t he object. The geometry is defined b y the primitive polyhedrons or vertex mesh that describe s the shape of the object. This thesis focuses on the latter two. To this end, this thesis provide s a mechanism to extract certain elastic properti es of an object It does this by performing a global and local experiment on the object, both recording the deformation using a single high speed camera Using this recording and various computer vision techniques, I programmatically determine d the both the local deformation and the global behavior of the object as it is interacted with in the various experiments.

PAGE 14

3 This thesis also develop s a novel method for transforming these extracted material properties into elasticity equation. This means that first the geometry is created and secondly the coefficie nts a re found. This thesis investigate s the elasticity of the simulated object In other words, in this thesis I treat the various geometric properties as if they are coefficients of the governing equation, thus coupling the two. From this, I performed tests which investigated how the elasticity changes when using a small volume tetrahedral mesh compared to a mesh with larger tetrahedrons. I also investigated how changing the tetrahedron dimensions affected the global and local elasticity of the object. 1. 1 Background Data driven physically based computer animation is a budding field full of possibilities. We need only think of ways to get data points which can be plugged into an equation representing the beha vior of the object. This may include mechanically measuring the object, to feed back loops constantly updating the animation until it is important as data driven animat ion will be to the future of this industry, this type of animation cannot be thoroughly researched without an understanding of the simulated geometry. This geometry is tightly coupled to the coefficients of the governing equation. Changing one will alway s require a similar change to the other to maintain the same behavior.

PAGE 15

4 Figure 2 Object Material Properties This image shows four objects which all have the same geometry but very different material properties. As we look at an both how the object looks and how it behaves when interacting with the world. As we look at and judge real life objects simulated by a computer animator, there are several aspects of the animation which we pay careful attention to. The keen observer will notice the shape, color, reflectance, and size of a stationary object. A movi ng object will have properties such as weight, friction, and momentum. If the object is not absolutely rigid, it will deform when contact forces are applied to the surface How the object deforms, the amount of force required to achieve some deformation, and whether the object retains volume when deformed are all visual clues telling the observer whether the animated object reflects the behavior of the real object. Animating deformable objects has been an area of research since the mid 1980s when Demetri Terzopoulos first utilized elastic theory to develop a system of ordinary differential equations which model the behavior of objects with elastic properties [ 34]. The ideas developed by these pioneers are still in use today in that we still develop a def ormable model with a governing equation representing the strain/stress relationship between adjacent primitives of the model The differentia l equations are solved for the force function which give s the reader the velocity of a piece of the model at a future time step and ultimately the position of the model at that time step.

PAGE 16

5 While the same basic principles are sustained for physically based systems, the governing equations have gotten more complex the geometric models are denser and more complex, and the ordinary differential equations are solved in more efficient and stable ways. By complicating the governing equations, geometric models, and more durable and has an increased level of realism. Figure 3 Defining the Geometry of a Simulated Object Two methods for defining the geometry of an object is to generate a tetrahedral mesh (left) or a symmetric wireframe mesh ( right). Each type has speed and accuracy benefits. trained eye to, using those tools mentioned above, develop an object which closely matches some real life object. For instance, the tools allow the animator to create a ball which can deform, but the animator needs to determine what type of ball to be simulated. A rubber super ball will behave differently from a tennis ball which is different from a pool ball. Lik ewise, an inflated soccer ball will have a different material behavior than a deflated soccer ball. These differences make up the material properties of the object. It is the job of the animator to provide the correct coefficients (parameters) to the gov erning equation to assign these material properties to the simulated object. This is not always an easy task. Traditionally, the animation director would model an object with quasi random coefficients and would tweak the coefficients until

PAGE 17

6 the simulated goals of computer animation is to eliminate this guess and tweak method of simulating realist ic objects Data driven animation is the idea that we can gather the material properties directly by performing experiments on the actual object. We then take those measurements and convert them into parameters to be plugged into the governing equation and used in the simulation. Figure 4 Extracting Material Properties from a Video This thesis describes a method to the object under certain stress tests, then building a sim ulation of that object. One deficiency in the field of computer animation is that these three areas of deformable modeling are treated as disjoint areas of research. Major achievements have been made in each of the three areas separately. As mentioned, governing equations have become much more complex, allowing for nonlinear internal and external forces, more accurate collision detection schemes, and more efficient and stable integration schemes. With respect to defining the coefficients of the governi ng equation, major fronts have

PAGE 18

7 been made to more accurately find the coefficients which define the stress/strain relationship within the object, whether that be through mechanically measuring the displacement of real objects as some local force applied, to recording the experiment and finding this displacement using computer vision techniques. As for the geometry and connectivity of a 3D model, the advent of more and more efficient graphics processors have allowed for objects with more dense geometries. T hese denser models allow more finely tuned control of the deformation of the object. Along with this, research has been done to generate better polyhedron meshes, giving the animator advanced control over the type of mesh generated. When an animator wants to define a new type of object, they traditionally need to choose a governing equation keeping in mind that there is a tradeoff between accuracy and speed, the se t of coefficients of the governing equation to coarsely define how the particular object will behave, and a geometry for the object which is coarse enough to define the intricacies of the material, but sparse enough to be rendered efficiently. Looking at these three area s, it is clear that the governing equation and the coefficients of those governing equations are tightly coupled together, but is it clear that the geometry is tightly coupled to the coefficients? When defining the coefficients of a deformable object, we from its equilibrium position. It should be clear that as the number of vertices becomes denser th e displacement of each vertex (when the same amount of force is applied) should be smaller if the same type of deformation is desired. The elasticity coefficients

PAGE 19

8 will need to be adjusted to have this smaller local deformation but the same global deformat ion. Figure 5 Modifying Object Behavior by Modifying Mesh Density With all other coefficients the same, these are spheres with 64, 128, 256, 512, 1024, and 2048 vertices, respectively. It is easy to see that modifying the geom etry of the object also changes the material behavior of that object. Figure 6 Matching Object Behavior by Modifying Linear Elasticity This time, we have two spheres with 64 and 2048 vertices, respectively. By increasing the coefficient defining linear elasticity in the denser object, we build an object that appears to have similar deformation under its own weight.

PAGE 20

9 As the governing equation and coefficients are coupled together, as well as the geometry and coefficients, thi s thesis investigate s how does increasing or decreasing the tetrahedron volume affect the elasticity of the object? If the tetra hedrons are larger on the surface than in the center, does that make the object more or less elastic? 1. 2 Motivation As mentioned above, there are two aspects of this thesis, both with their own merits and benefits to the research community. Determining the coefficients of the simulation by directly extracting the materia l properties of an object is a relatively new field with a large amount of potential. As this area of computer graphics advances, animators will no longer have to use their flawed perce ptions of the physical word to judge a simulated object. No longer will they have to sit around and guess whether a simulated object matches what they expect is the behavior of a real object. They will know that it matches as the material properties were accurately and scientifically gathered and transformed into the governing equation coefficients to produ ce the best possible results. There are many different mechanical methods for measuring material properties. These may include stretching cloth in d ifferent directions to measure stretching and shearing, using a robotic arm to drop a glass ball and measuring the displacement and shatter pattern, using a spring to push an object across a diffuse surface and measuring the friction by the displaced sprin g, etc. All of these methods have merits of their own,

PAGE 21

10 but without a mechanical method to measure the results, they all suffer from loss due to limits of human measurement. However, by using a mechanical means of measuring, we will ensure the type of acc uracy only given by the enormous processing power of modern computers. This thesis describes a framework which was built along with a global and local deformation test, to determine the material properties of the object and to convert those material prope rties into values which can be plugged into the simulation. Furthermore, by using computer vision techniques to measure the outcomes of those tests, the techniques descri b ed in this thesis are able to measure material properties to an accuracy which canno t be replicated by the naked eye. Since the dawn of deformable object simulation, animators have used a simulation system where there are three closely coupled parts These parts are the geometry of the object, the governing equation, and the material coe fficients of the governing equation. Animators are forced to manually define each of the three separately, whereby tweaking any one of them may cause the animation to shift considerably due to the invalidation of the other two. As long as animators are c onfining themselves to a system where each of the three areas are treated separately this will be a standard scenario. If there were a way to bring two or more of these into harmony with each other, we will not only make the life of the animator that much easier, but will be opening the door to many other possibilities with respect to deformable body animation. For instance, we can optimize the animation for speed by reducing the geometry while maintaining the realism of the same object obtained with a den se mesh.

PAGE 22

11 2. Present State of Knowledge in the Field In this chapter, I present the different areas of computer graphics and computer vision which directly influenced this thesis work. The two areas I focus on is data driven animation and certain computer vision techniques used to track an object in a recorded video. The third area of this thesis coefficients of the governing equation. The work of this thesis, with respect to the coupling of the coefficient s to the geometry, is completely novel. In this thesis, I describe the experiments I used to investigate the elasticity coefficients and how they, response to contact forces 2.1 Data Driven Physics Based Animation As an animator begins to attack a new type of deformable object, say a flag waving in the wind, an ocean crashing on the rocks, or the windshield of a C orvette shattering after crashing into another car, they generally use some type of real time physics engine such as Bullet or NVIDIA PhysX which have predefined governing equations for geometry, that is, the vertices and conne ctions that make up the 3D object, and the coefficients of the governing equation, that is, the relationship between the internal/external forces and the movement of the object. Traditionally, the animator will try some values and run the simulation. If t he object behaves how they perceive it should, they move on to texturing or call it a day. If

PAGE 23

12 not, they tweak the coefficients, and re run the simulation. This can happen over and over again until the director of the animation gives his or her thumbs up. Clearly, this method has some major flaws. First, it relies on the trained eye of the director, which may differ from person to person depending on experiences and training. The precision of the animation depends on the keenness of the human eye and th e patience of the director trying to run these iterative experiments. Also, this method is very time consuming. Not all animation can be performed in real time. For highly detailed scenes it may take hours to render a single frame and days to render a l ong enough clip to In 2003, an approach by Kiran S. Bhat and team [4 ] designed a method for estimating cloth material automatically, using computer vision techniques, which were then plugged into a simulation of the cloth a nd compared computationally to verify results. Basically, the goal was to define the gradient vector field of the real image to determine how the cloth folds. Then, to begin the parameter estimation, they started randomly choosing coefficients for the cl oth simulation and used a simulated annealing algorithm to modify those parameters until the simulated gradient vector matched closely to the gradient vector of the real image. While they do define a method for duplicating cloth folds, there are other imp ortant properties of cloth, such as stretching and shearing, that are not covered. Similar work with video capture of cloth material properties was done by Kunitomo [24]. A new direction in cloth parameter estimation, developed by Bradley et al. [8], invo lved a markerless approach to capture off the shelf garments via stereo video. This research focused on the ability to capture the cloth material as well as fill in the occluded

PAGE 24

13 holes on the geometry not captured by the stereo video. This work was a coro llary to previous work by White et al. [49], whereby that team used a swatch of cloth and designed a unique markering system which was suitable for finding the material properties of the cloth. White then used a data driven approach to filling in the occ luded sections of cloth. While this work made leaps in the area of the cloth folding and hole filling, the important areas of cloth stretching were not covered by this research. Recently, the idea of generating these coefficients by measuring the object u nder strain/stress relationship has been mechanically measure the stretching and bending of cloth by performing a series of contr olled tests. They attached a measured swatch of cloth to an apparatus with a grid backdrop. Using a weight and pulley system, they stretched the cloth in multiple directions and manually measured the cloth displacement using a grid backdrop. They then m easured the bending of the cloth as different lengths of cloth were draped over their own weight. Using these, along with an optimization algorithm, they defined the coefficients of the governing equation for several types of cloth material. These steps were done for multiple types of cloth producing a library of cloth parameters, which the authors claim can be reused in future simulations. This novel work provided great examples on how to mechanically measure the strain/stress relationship of an object through some type of local deformation experiment however missing from this was a global deformation experiment such as draping cloth over an object. Also in 2011, Faure et al. [7] developed a novel method for modeling complex deformable meshless objects by using a material property map (stiffness map) to define

PAGE 25

14 how the control points (mesh nodes) are distributed across the object. While this is not data driven in the sense that the material properties are measured mechanically it is still useful to the proposed researched in that the collected (scanned) data is put through an algorithm to build parameters which are then applied to the simulation. In 2001, Jochen Lang published his doctoral dissertation [25] in which he describes the ACME system for defor ming an object using a robotic manipulator with an attached force gauge. Along with a trionocular stereo cameras and a point cloud scanning device, the ACME system was able to build a 3D mesh of the object under various stages of the deformation. Using t he known force applied locally to the object, as well as the functions which can then be used in Finite Element or Boundary Element simulations to build a simulated o bject which deforms in the same way as the real object. Similar to this work, Becker et al [3] developed a quadratic programming homogenous isotropic object simulated using the linear finite element method when a force / displacement measurement is known beforehand Bickel et al., [5] developed a system for capturing non linear material elasticity using a trinocular stereo camera system and a markering system for the object under test to measure the strain undergone during a force experiment. The strain is measur ed at arbitrary points on the object and the strain/stress relationship is interpolated across the whole object using these reference points. us isotropic object by simulating the object in an

PAGE 26

15 iterative fashion. Each iteration an error value is calculated. If this value is above some threshold the using simulated annealing Barbara Frank et al [17 ] used a robotic arm attached to a force gauge to applied localized pressure to a single point on the surface of the object. With a depth camera, s of too developed an iterative method of simulation in which these two parameters are adjusted each time. Their method is slightly different as they defined an erro r function which uses the iterative closest point (ICP) algorithm to align the simulated deformed mesh with the captured deformed mesh and then returned the differences between simulated and deformed points. All of the previous work deals with deforming t he object in some localized manner and directly measuring the strain/stress activity to gather data points for generating the deformation experiment, but it also defines a glo bal deformation experiment. That is, the goal is to see how the whole object responds to some strain/stress activity which happens within the object. In this, I am able to see if after the object is deformed, the object residual move ment is appropriat e. 2.2 Computer Vision Material Property Extraction The first problem in extraction is to locate the object in the recorded images. Over the previous decades, many techniques have been developed to extract the features of an object, comparing those featu res to a database of objects whose feature have already

PAGE 27

16 been extracted, and then creating a histogram of likely matches. Some early work in the object matching research include the Hausdorff distance algorithm which determines how close edge based image similar matching scheme uses the Chamfer distance [45]. While both are useful in their own right, neither does well unless the inspected object is very similar to the target object. Slight defor mation, rotations, or scaling, which we would expect in this research, would cause these algorithms to erroneously fail. With respect to deformable object recognition, novel work has been done by [37] which define a system for recognizing a deformable obje ct in images by segmenting the edges of the object around the areas of high curvature. They then allow each segment to be individually transformed and compared for possible matches. In [16], the authors broke the image into contour segments which are th en inputted into an edgel chain network where the network links are determined by a set of rules. When recognizing an object, we need only find continuous paths through the network. This allows the authors to recognize objects, which can be partially occ luded, efficiently. This work continues the work of [32] which defines the Berkley edge detector. This method can find the edges of objects in cluttered scenes. In all of these areas the goal is to find a given object within an image. This thesis ext ends this idea in that it not only is searching for the object, but it extracts important material information about the object.

PAGE 28

17 3. Experiment Definition Throughout the development of this thesis four applications were implemented which utilize tools from several different areas of computer graphics and computer vision to accomplish the goals described in the previous chapter. Chapters 3 through 6 describe these applications, the logic that went into developing them, previous work that attributed to their implementation, and the thirdparty tools utilized in each. Furthermore, it is the goal of these chapters to explain how the applications developed contributed to solving the goals stated earlier 3 .1 Defining the Experiments One task of an animator is to take some real life object and convert it into a 3D object which can be used in video games, movies, or TV commercials. Typically, as we go through this process manually, we might look at the o bject to determine shape, hold the object to determine weight, squeeze the object to determine elasticity and plasticity, and drop the object on the floor to see how it reacts to the ground might help us understand the elasticity of the object. For instan ce, a water balloon, tennis ball, and crystal ball will all have very different responses to this type of experiment. This thesis focuses on the elasticity of the object, therefore no experiments were done to measure weight or shape, however the work done could easily be extended to include these items as computer vision algorithms are not necessary to calculate weight, and while computer vision can be used to determine the shape of the object, it would have to take a recording of all sides of the object an d patch them together to make a 3D model. Furthermore, a computer vision solution would require edge and corner detection

PAGE 29

18 along with complex algorithms for analyzing color gradients to map the topography of the object. A better solution would be to utili ze a range scanner positioned at well defined locations around the object to gather precise shapes and contours. As capturing elasticity is the goal, it is beneficial to define experiments which describe how the object behaves when interacting with the world in some global sense. This means without directly interacting with the object, see how it behaves when interacting with the world. Also, it makes sense to define experiments in which the object is interacted with directly. This means applying some directed localized force and determining how the object responds to that. 3 .2 Material Property Captur ing System As explained in previous sections, several different mechanisms were used to capture the material behaviors of the objects under experimentatio n. Wang [38] developed a system that captured the material properties of cloth by stretching the cloth in different directions a measured the amount the cloth was displaced. Bhat [2] and Kunitomo [19] recorded their experiment and then used computer visi on algorithms to determine how the cloth folds on itself when draped over its own weight. Other efforts were made to record the experiments and using computer vision techniques to extrapolate the material behavior of the cloth [5 ] [ 39]. As a goal of thi s thesis is to build a framework that allows non scientific animators the ability to conduct their own experimentation, the logical choice was to record the experiment and have the computer vision algorithm extract the measurement from the recording. By d oing this, we are reducing the work required by the experimenter, minimizing human error, and utilizing the powerful processing capabilities of modern

PAGE 30

19 computers. A draw back is that we have all of the limitations of the recording device and a computer whi ch only understands discrete images. It is important to note that when using computer vision techniques to analyze images, the subject of the image must be clear. This means the edges surrounding the object be sharp and there is not a lot of noise around the image background. 3 .2.1 Nikon D5000 Early in the research phase it was though t that using a high quality SLR camera should provide a high pixel density, and therefore, should be sufficient in recording the experiments. The Nikon D5000 is a 12 Megapix el camera, has a 30 fps frame rate, and a shutter speed of 1/40000 second. After recording several experiments under this configuration, I found that the images were large, the video was clear; however computer vision algorithms look at individual frames when force was added to the object. The shutter speed is the amount of time the cameras shutter is opened to allow light to enter the lens and contact the photo receptors at the back of the camera. If the ball is moving in front of the camera while the shutter is opened, the movement of the ball will be captured in those photo receptors whi ch makes the ball look like it is streaked across the image, thus blurry. For this reason, I learned it is important, when recording moving objects, to have a very fast shutter speed. Also, the deformation of the object due to added force might only happe n for a moment. When reviewing the images for my test experiments I found that I could not find the single frame that clearly showed the deformation of the object as the force was

PAGE 31

20 applied. This is because the D5000 only captures 30 frames per second. Th e force would be applied to the object sometime between image captures. I learned that, in order for my experiments to be successful, I need a camera that captures frames significantly faster that the Nikon D5000. 3 .2.2 Casio Exilim EX ZR200 This camera w as purchased to solve the problems faced with the Nikon D5000. The EX ZR200 has a 1/250000 second shutter speed and recording frames rates of 30fps, 120fps, 240fps, 480fps, and 1000fps. The shutter speed improvement allowed me to perform experiments tha t had high levels of motion and capture a sharp image at every frame. Of course, there are still limitations to this technology, but for my thesis it proved to be all I needed. The frame rate of the EX ZR200, while impressive, does have drawbacks. The ca mera, while it has a 16.1 Megapixels when shooting photos has a much lower resolution when shooting at high speeds. For instance, the 1000 fps setting only shot images with 14336 pixels. These images were grainy and did not capture the edges of the image s very well. In the end, I used the 120 fps setting as it did capture the deformation not captured by the slower setting, but did not have the graininess of the higher setting (307K pixels).

PAGE 32

21 4. Global Deformation Experiment A global experiment, for the purpose of this thesis is an experiment where the researcher has no direct influence over the deformation of the object. The researcher sets some initial conditions and then lets time, space, and the forces of nature demonstr ate how the object behaves. This is an important type of experiment in capturing the behavior of an object as it removes the limitations with how the object can behave. For instance, if we were to capture the behavior of cloth by stretching it in differe nt directions using different amounts of stress, the cloth would stretch exactly as we expected, but that when dropped. concerned with how objects move, but must define experiments that either directly measure the stress/strain relationship or indirectly measuring this relationship by m easuring the response to some stress/strain measure the response to some strain /stress activity. This thesis defines a global deformation experiment in which a ball is dropped from some specific height and recorded until the ball stops bouncing. This experiment satisfies the global deformation requirement as there is no force introduced by the experimenter. Th e ball is set into position before the experiment starts and gravity, air, the ball, and the ground surface colliding with the ball are the only factors affecting the deformation of the object.


From the moment the ball is dropped, I tracked the height of the ball with respect to the ground the ball makes contact with. I also tracked the width and height of the ball for the duration of the experiment. Through this type of experiment I hoped to see how much energy is lost when contact is made with the ground and how the ball's shape changes as it collides with the ground.

As precise experiments are important for getting accurate results, I built a system which provides the ability to drop objects from different heights. Basically, it was a frame built with PVC piping and a box housing the ball with a trap door mechanism. The experimenter can set the box to a specific height, center the ball in the box, then pull the trigger to snap open the trap door, releasing the ball. The frame is two meters tall, which allows the ball to be dropped anywhere from zero to two meters off the ground.

I did a series of experiments with different types of balls, some highly deformable, some not. All experiments were done from a single height. The balls used were a softball, a wiffleball, a ping pong ball, a nerf ball, a stress ball, a hollow rubber ball, and a hollow plastic ball.

4.1 Material Property Extraction

After an experiment was conducted in front of a camera, the next step was to take the recording and run it through some type of computer vision program to extract the pertinent information from the recorded experiment. The goal was to find the height and width of the ball at all times throughout the recording. This allowed me to see how energy was lost over time and how the ball deformed when it came into contact with the ground.


4.1.1 Computer Vision

OpenCV is an open source toolkit that provides the user the ability to perform computer vision operations on an image. This resource provides numerous algorithms, from simple to complex, giving the researcher an arsenal for tackling common problems in computer vision. For instance, it has numerous edge detection, object segmentation, and object recognition algorithms, all with simple APIs to work with. As it is open source, there is an opportunity to extend an existing algorithm if needed, but OpenCV also provides an API to iterate through the pixels of an image, giving the researcher the ability to implement a new computer vision algorithm. For this thesis I used OpenCV to locate the ball in the image and to find the dimensions of the ball at all times throughout the recording.

4.1.1.1 Reducing Image Noise

In order to find the object in the image, we require the ability to find the edges in the scene. Edge detection algorithms typically use color gradients, which tell the magnitude of the color intensity change over some space. If there is a large enough color change, the algorithm marks it as an edge. The problem with edge finding is that there may be many edges in the scene. The subject of the image may have many edges, but the background of the image may also have edges, especially if the image is shot outdoors or in a cluttered room. By blurring the image slightly, the steep color gradients in the background become shallower. Of course, the color gradients of the subject also become smaller, making the subject more difficult to detect, but if the camera is focused on the subject, it will have the steepest color gradients to start with.


For the purpose of this thesis I used a Gaussian mask to reduce the image noise [10]. The Gaussian mask is an n x n matrix where n = 3, 5, or 7. In this scheme, the matrix defines a weighting system for blending neighboring pixels together. The middle matrix element represents the pixel being changed and thus has the highest weight. The further a matrix element is from the center, the lower the weight defined for that element. The standard deviation of the Gaussian is used to determine how the weighting drops off as you get further from the center element. The Gaussian matrix is passed over the image from top left to bottom right, and every pixel is redefined by multiplying the Gaussian matrix by the n x n matrix of pixels neighboring the pixel being updated.

Figure 7: 13x13 Gaussian Matrix with σ = 2 [10]

The size of the filter depends on the type of image being recorded. For instance, if there is a lot of background noise, a larger Gaussian mask would be necessary. Thus, in my application, I give the user the ability to change the Gaussian mask size.
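As an illustration of this noise-reduction step, a minimal sketch using the OpenCV C++ interface is shown below. The function and variable names (other than the OpenCV calls) are my own, and the zero sigma simply lets OpenCV derive the standard deviation from the kernel size; the thesis application may compute it differently.

    #include <opencv2/opencv.hpp>

    // Blur the input frame with an n x n Gaussian mask, where maskSize is the
    // user-selected size (3, 5, or 7). Passing 0 for sigma lets OpenCV derive
    // the standard deviation from the kernel size.
    cv::Mat blurFrame(const cv::Mat& frame, int maskSize /* 3, 5, or 7 */)
    {
        cv::Mat blurred;
        cv::GaussianBlur(frame, blurred, cv::Size(maskSize, maskSize), 0);
        return blurred;
    }

The maskSize argument corresponds to the slider exposed in the capture application's user interface.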


4.1.1.2 Canny Edge Detector

It should be noted that the first step to finding the edges in an image is to convert the image to grayscale, since we are looking for changes in color intensity, not the change from one color to another. Color intensity here means how dark or light a color is. Finding the edges of an object in an image is an area which has received due attention over the years. Many algorithms use a simple first order derivative in the x and y directions [10]; we can then see the magnitude of the change in color intensity. The Canny edge detection scheme does precisely this. It then uses non-maxima suppression, blackening every pixel whose intensity gradient is not higher than the gradients of the pixels on either side of it along the gradient direction. Finally, the algorithm goes through the pixels, and if the gradient is above some threshold, the pixel is declared an edge point. The threshold depends on the application of the Canny edge detection scheme; therefore, my application gives the user the ability to change the threshold to customize the algorithm for their recorded experiment.

For my experimentation I went a step further to assist the algorithm in finding the edges of the object. First, all experiments were made in front of a flat white screen. The goal was to minimize the amount of background noise so the edge detection scheme would only find the ball in the image. Second, the balls were painted a bright red color. This created a large color contrast between the wall/floor and the object under experimentation.
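A minimal sketch of this step with the OpenCV C++ interface is given below. It is written against a recent OpenCV C++ API (constant names differ slightly from the OpenCV 2.1 release used for the thesis), and the choice of a low threshold equal to half the high threshold is only an example, not the value used by the application.

    #include <opencv2/opencv.hpp>

    // Run the Canny edge detector on a blurred color frame. The user-adjustable
    // gradient threshold is used as the high threshold; the low threshold is
    // set to half of it here purely for illustration.
    cv::Mat detectEdges(const cv::Mat& blurredFrame, double gradientThreshold)
    {
        cv::Mat gray, edges;
        cv::cvtColor(blurredFrame, gray, cv::COLOR_BGR2GRAY); // intensity only
        cv::Canny(gray, edges, gradientThreshold * 0.5, gradientThreshold);
        return edges;
    }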


Figure 8: Ball Drop Experiment with Edges Detected

4.1.1.3 Image Segmentation by Color Matching

The goal of the previous step was to remove everything from the image except for the ball being dropped. Of course, the Canny edge detector can do a lot, but there are still edge artifacts that make it onto the edge image. Therefore, I needed a method to extract the ball's location and size from the edge image, where the ball may or may not have complete edges.

Figure 9: Using the Canny edge detector with the color segmentation scheme to find the ball

I decided to implement a color segmentation scheme. The idea is to choose the color of the ball, then write a computer vision algorithm which scans through the image looking for edge pixels whose colors match the selected color. The ball is not flat, therefore there is a myriad of colors within the ball; however, as I painted the ball bright red, this limited the colors I had to find. Even so, the ball still ranged from bright red to dark red, so selecting and finding a single color would not find the entire ball.


I developed a threshold scheme in which the user controls how closely a pixel must match the selected color to be considered part of the ball. I was able to find the height of the ball at all times throughout the recording; however, there were some troubling areas. For instance, the ball, while painted red, was brighter on the side that faced the light source than on the opposite side. A possible solution would be to have three light sources, each on a different side of the ball. This even light distribution to all areas of the ball would be an attempt to make the ball a solid color, and thus easier to pick up with this thresholding scheme. Another problem I had was the light reflecting off the ball and hitting the white table surface. As the ball got closer and closer to the ground, more and more of the red color was cast onto the white cloth. The thresholding scheme often picked up this color and considered the affected ground as part of the ball.

One more problem with this method stems from one of the goals of the Canny edge detection scheme, namely that the edge found is as close as possible to the actual boundary between the ball and the background. This means that the one pixel thick line representing the edge will contain some ball-colored pixels and some background-colored pixels. The color matching scheme can therefore miss sections of the edge which contain more background color than ball color.
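A sketch of the thresholded color match is shown below. It assumes a plain Euclidean distance in BGR space against the user-selected color; the actual application may weight or compare the channels differently, so treat the metric and the names as assumptions.

    #include <cmath>
    #include <opencv2/opencv.hpp>

    // Keep only those edge pixels whose color in the original frame is within
    // `threshold` of the user-selected ball color.
    cv::Mat matchEdgeColor(const cv::Mat& frame, const cv::Mat& edges,
                           cv::Vec3b ballColor, double threshold)
    {
        cv::Mat matched = cv::Mat::zeros(edges.size(), CV_8UC1);
        for (int y = 0; y < edges.rows; ++y)
            for (int x = 0; x < edges.cols; ++x)
            {
                if (edges.at<uchar>(y, x) == 0) continue;   // not an edge pixel
                cv::Vec3b c = frame.at<cv::Vec3b>(y, x);
                double db = double(c[0]) - ballColor[0];
                double dg = double(c[1]) - ballColor[1];
                double dr = double(c[2]) - ballColor[2];
                if (std::sqrt(db * db + dg * dg + dr * dr) <= threshold)
                    matched.at<uchar>(y, x) = 255;          // keep as ball edge
            }
        return matched;
    }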


Figure 10: Error in the Color Segmentation Scheme

As shown in the figure above, the color segmentation sometimes fails to include sections of the edge which have more of the lighter background color than the red ball.

4.1.1.4 Image Cropping

As the Canny edge detection algorithm left unwanted edges, and the color segmentation picked up the ground when the color of the ball reflected off of it, I needed a method to discard unwanted edges and focus the color segmentation algorithm on only the section of the video where the experiment was taking place. For this I developed an image blackening solution which allows the user to drag a box around the region the experiment was conducted in; the algorithm then blackens everything outside of that box. The blackening is applied to the image created by the Canny edge detection algorithm, as blackening the original image instead would cause the Canny algorithm to treat the boundary of the box as an edge.

4.1.2 Capturing Data Points over Time

Once the color segmentation was in place, I had a method for capturing the location and size of the ball over time. The goal of the global deformation experiment was to find and save the height and width over time.
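Returning briefly to the cropping step of section 4.1.1.4: a minimal sketch of blackening everything outside the user-drawn box is shown below. The rectangle is assumed to come from the user interface and to lie inside the image; the OpenCV region-of-interest and copyTo calls are standard.

    #include <opencv2/opencv.hpp>

    // Copy only the user-selected region of the edge image into an otherwise
    // black image, so edges outside the experiment area are discarded.
    cv::Mat cropEdgeImage(const cv::Mat& edges, const cv::Rect& box)
    {
        cv::Mat cropped = cv::Mat::zeros(edges.size(), edges.type());
        edges(box).copyTo(cropped(box));
        return cropped;
    }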


The two data points I captured were the height over time and the width/height ratio over time. As the experiment plays back and the computer vision algorithms mentioned above calculate the location and size of the ball over time, the application graphs these values. As the application captures this data it takes the camera frame rate into account, so that a 30 fps video will yield similar results to a 120 fps video. The section of the graph we are concerned with runs from the instant the ball is dropped to the instant the ball comes to rest on the ground. Therefore, there are controls which allow the data to be pruned so that only this section is kept.

Figure 11: Graphs of the ball height over time and the width/height ratio over time

The top left image is the height, from the ground, of the ball captured over the entire recording. The top right is the same data with the unwanted data pruned. The bottom left image is the width/height ratio over time graph for the duration of the video. The bottom right is the same data pruned to the section of the video where the ball is moving.

As is shown in the width/height ratio over time graph, there tends to be a lot of noise due to the imperfections of the Canny edge detection and color segmentation algorithms described above.


Some future work might be to run this data through a low pass filter to remove this noise while still preserving the change in the width/height ratio that occurs when the ball collides with the ground. Finally, once the data is captured and pruned, I developed a method to serialize the data structure which stores the captured data. Pressing the appropriate button serializes the object and stores it to the clipboard. This serialized object is used when we simulate the object using the Bullet physics engine.
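The low pass filtering suggested above was not implemented in this thesis; a simple centered moving average, like the sketch below, is one way it could be done. The window size of five samples is an arbitrary choice for illustration.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Smooth the width/height ratio samples with a centered moving average.
    // A small, odd window keeps the collision-frame deformation visible while
    // suppressing single-frame segmentation noise.
    std::vector<double> movingAverage(const std::vector<double>& ratios,
                                      std::size_t window = 5)
    {
        std::vector<double> smoothed(ratios.size(), 0.0);
        const std::size_t half = window / 2;
        for (std::size_t i = 0; i < ratios.size(); ++i)
        {
            std::size_t lo = (i >= half) ? i - half : 0;
            std::size_t hi = std::min(ratios.size() - 1, i + half);
            double sum = 0.0;
            for (std::size_t j = lo; j <= hi; ++j) sum += ratios[j];
            smoothed[i] = sum / double(hi - lo + 1);
        }
        return smoothed;
    }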


4.1.3 The Global Deformation Capture Application

Figure 12: Description of the Global Deformation Material Property Extraction Application

1. A radio button to choose an AVI file which contains the recorded experiment.
2. A radio button to choose an attached webcam to record the experiment.
3. A frame containing the original image, also displaying the cropped section of video.
4. Shows the color used by the color segmentation algorithm. To select a color, right click on the frame (3).
5. Enables or disables the cropping of the edge image.


6. Play button for starting the video from the current frame.
7. Restart button for starting the video from the first frame.
8. Stop button to pause the video playback.
9. The graph which displays the data for the entire video, or since (11) was pressed.
10. Slider to choose the left most data point to display in the (16) graph.
11. The button to start saving height and width data. This also starts the graphing of the data.
12. Slider to choose the right most data point to display in the (16) graph.
13. The button to stop saving height and width data. This also stops the graphing of the data.
14. Clears all saved height and width data. Also resets the graph.
15. The button which serializes the cropped data store and saves it to the clipboard.
16. The graph which displays the data for the cropped section of video, as determined by (10) and (12).
17. The height at which the ball was dropped.
18. The number of frames per second in the recording.
19. The frame containing the cropped edge image. Also shows a box around the segmented ball.
20. A slider for adjusting the color segmentation threshold.
21. A slider for adjusting the Canny edge detection algorithm gradient threshold.
22. A slider for adjusting the blurring Gaussian matrix size. This can be 3x3, 5x5, or 7x7.


23. A button which opens a file choosing dialog for selecting the recorded experiment AVI file.

4.2 Finding Simulation Coefficients

Using the ball height over time, as well as the ball width and height over time, I developed an application which simulates a ball dropping using a physics engine and automatically adjusts the coefficients of that simulation until the behavior of the simulated ball matches that of the recorded ball from the capture step. In a final step, the application allows the user to transfer the coefficients obtained to a different object to see how the behavior changes when only the geometry (internal structure) of the object changes. All other coefficients remain exactly the same.

4.2.1 Bullet

Bullet is an open source physics engine which has special support for rigid body dynamics and collision detection. By physics engine, I mean the following: the developer models a 3D object in its undeformed shape. This shape consists of vertices and the edges between them. The developer then places it in the scene, adds some initial conditions, and then sets the engine in motion. The engine determines the position and velocity of each vertex in the 3D object at each frame by taking the position and velocity at the previous frame and applying some general physics principles to get the position and velocity of each vertex at the current animation frame.


The relationship between force and acceleration can be summed up by the second order differential equation M x''(t) = F(x(t), x'(t)), where M is the mass and F() is a function accumulating all of the internal and external forces [27]. Physics engines turn this into two first order differential equations. Solving these differential equations has been an area of much attention and is outside the scope of this thesis. However, it can be said that there are many integration schemes currently employed, each with benefits and drawbacks. Usually, the choice of integration scheme is decided by speed requirements and object elasticity.

Bullet has many material coefficients which define the internal and external force functions. For instance, there are elasticity coefficients which describe the constraints between two vertices which are connected via an edge. The elasticity describes the stress/strain relationship between two connected vertices: the stress is the force pulling the vertices in opposite directions; the strain is the amount the distance between them displaces. As the goal of the simulation is to create a ball which has the same bouncing behavior as the recorded object, the elasticity coefficients are paramount in achieving it.

4.2.2 Modeling the Ball

Before we can start the simulation, we must model the 3D ball. The global deformation simulation application uses two methods for building a 3D model of a ball.
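To make the splitting into two first order equations from section 4.2.1 concrete, the sketch below advances the vertex states with one semi-implicit (symplectic) Euler step. This is only an illustration of the general idea, not Bullet's actual internal solver, and all names are my own.

    #include <vector>

    struct Vec3 { double x, y, z; };

    struct Vertex
    {
        Vec3   position;  // x in M x'' = F(x, x')
        Vec3   velocity;  // v = x'
        double mass;      // the M entry for this vertex
    };

    // One integration step of the pair v' = F/M and x' = v. computeForce
    // stands in for the engine's accumulation of internal (spring/elasticity)
    // and external (gravity, collision) forces.
    void integrateStep(std::vector<Vertex>& verts, double dt,
                       Vec3 (*computeForce)(const Vertex&))
    {
        for (Vertex& v : verts)
        {
            Vec3 f = computeForce(v);
            v.velocity.x += dt * f.x / v.mass;
            v.velocity.y += dt * f.y / v.mass;
            v.velocity.z += dt * f.z / v.mass;
            v.position.x += dt * v.velocity.x;
            v.position.y += dt * v.velocity.y;
            v.position.z += dt * v.velocity.z;
        }
    }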


4.2.2.1 Mass Spring Sphere

A mass spring modeling system builds a 3D mesh of vertices connected with edges. The vertices are the mass objects and the edges between them define the springs, or the elasticity relationships, between the adjacent masses. In this application, the mass spring balls are only defined by a surface mesh. Some future work might be to explore how a mass spring object behaves when it is not hollow, but has an internal network of masses and springs.

Figure 13: Geometry Parameters for a Mass Spring Sphere

To affect the geometry when using the mass spring object, the only options are to increase or decrease the sphere radius, or to increase or decrease the resolution of the object. The higher the resolution, the more vertices and edges define the surface mesh of the sphere.

4.2.2.2 Tetrahedral Sphere

A tetrahedron is a three dimensional object composed of four triangular faces. This application uses TetGen to construct a tetrahedral mesh. TetGen is an application which takes a 3D surface mesh and generates the non-hollow 3D tetrahedralization of that surface mesh employing Delaunay triangulation [40].


TetGen allows several parameters to be passed in which define how it will tetrahedralize the surface mesh. The first is the quality constraint on the generated mesh. The TetGen manual tells us the reason for this constraint is that the Finite Element Method (FEM) for soft body dynamics is more accurate when the aspect ratio of a tetrahedron is as small as possible. The aspect ratio of a tetrahedron is defined as the ratio of the radius of its circumscribed sphere to the length of its shortest edge.

Figure 14: Tetrahedrons with High and Low Radius/Edge Length Ratios

The three tetrahedrons on the left are low quality while the three on the right are higher quality [39].

The second parameter passed to TetGen is the volume constraint. This value tells TetGen the maximum volume of any tetrahedron it generates. Therefore, if large tetrahedrons are desired we give a large value for the volume constraint; if smaller ones are needed, we provide a smaller volume constraint.

To affect the geometry when using tetrahedrons, the coefficients which can be adjusted are the resolution (vertex density), the tetrahedral quality constraint, the tetrahedral volume constraint, and the number of clusters defined for that tetrahedral mesh. This final coefficient, defined by Bullet, combines tetrahedrons together to form rigid groupings of tetrahedrons. For instance, if we defined only a single cluster, all of the tetrahedrons of the sphere would be combined together, making the entire sphere one cluster and thus a rigid object.


The application also allows each tetrahedron to be its own cluster.

Figure 15: Spheres with Differing Radius/Edge Length and Volume Constraints

From left to right: the first sphere has a high quality constraint; the second sphere a low quality constraint; the third sphere has a high volume constraint; the fourth sphere has a low volume constraint.

4.2.3 TetView

During the Bullet simulation, when tetrahedrons are used, the only part of the geometry which is viewable to the user is the outside. Sometimes, if there is enough force when colliding with the ground, the internal tetrahedrons will collapse on themselves without producing any visible physical change to the outside of the sphere. For both of these reasons, I added the use of TetView, the tetrahedral mesh viewer designed by Hang Si of the Weierstrass Institute for Applied Analysis and Stochastics [39]. TetView provides the options to load a tetrahedral mesh, rotate it, and, most usefully, cut it down the middle to see the internals of the object. At any time while the global deformation simulation application is running, a button can be pressed which launches TetView and loads the mesh currently being used by the application.
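As an illustration of the tetrahedralization described in section 4.2.2.2, the sketch below calls the TetGen library with a quality (radius/edge) bound and a maximum tetrahedron volume. The switch letters follow the TetGen manual as I recall it ("p" for tetrahedralizing a surface mesh, "q" for the quality bound, "a" for the volume bound); treat the exact switch string and the example values as assumptions rather than the settings used in the thesis.

    #include <cstdio>
    #include "tetgen.h"   // tetgenio and tetrahedralize() from the TetGen library

    // Tetrahedralize a surface mesh with a radius/edge quality bound and a
    // maximum tetrahedron volume. The resulting point and tetrahedron lists in
    // tetMesh are what get handed to the physics engine.
    void buildTetMesh(tetgenio& surfaceMesh, tetgenio& tetMesh,
                      double quality /* e.g. 1.414 */,
                      double maxVolume /* e.g. 0.01 */)
    {
        char switches[64];
        std::snprintf(switches, sizeof(switches), "pq%.3fa%.4f", quality, maxVolume);
        tetrahedralize(switches, &surfaceMesh, &tetMesh);
    }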


4.2.4 Capture Data Points over Time

Similar to the global deformation capture application, this application has the ability to capture and graph data points over time. The data points used for this application are the height of the ball (off the ground) over time and the width/height ratio of the ball over time. The goal was to see the trajectory of the bouncing ball over time, but also to see how the ball deforms when it collides with the ground and as it pushes off the ground. As well as capturing data points over time, this application can import data points from the global deformation capture application. Doing this allows the application to find the coefficients which make the simulated ball bounce like the real ball.

4.2.5 The Coefficient Finding Algorithm

Finding the optimal coefficients to make the simulated ball behave like the recorded ball is an optimization problem where the objective is to minimize the difference between the bounce heights of the first n bounces of the simulated ball and of the real ball. The constraints are the numeric limits of the coefficients. Most coefficients have values between 0 and 1, while some only have a lower limit. In order to optimize the coefficients, I developed an algorithm which runs the simulation, modifies a single coefficient, and runs the simulation again. This loop continues until a solution is found. Each time the loop is executed, the algorithm tracks the ball height over time. Using this information, it is able to capture the bounce heights by looking for peaks in the ball height data. A peak in the data is defined as a ball height where the heights directly before and directly after are smaller.
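A sketch of the peak search is shown below; it applies the definition above directly to the per-frame height samples. The function and variable names are mine, not those of the thesis application.

    #include <cstddef>
    #include <vector>

    // A bounce peak is a height sample strictly greater than the samples
    // directly before and after it. Returns the peak heights in order, which
    // are then compared against the recorded ball's bounce heights.
    std::vector<double> findBouncePeaks(const std::vector<double>& heights)
    {
        std::vector<double> peaks;
        for (std::size_t i = 1; i + 1 < heights.size(); ++i)
            if (heights[i] > heights[i - 1] && heights[i] > heights[i + 1])
                peaks.push_back(heights[i]);
        return peaks;
    }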


Within this loop, I developed a method to determine whether the simulation is converging on the optimal solution. This is done by saving the run history and then comparing the current run with a past run to see if the bounce height differences are getting smaller. If the bounce heights become closer to the optimal values, the algorithm continues, as it is still converging on the solution.

In the first iteration of the loop, the algorithm modifies the radius of the ball to match the radius of the recorded ball. In the second iteration, the starting y position of the ball is updated. Subsequent iterations work through one elasticity coefficient at a time. An elasticity coefficient is modified as long as the simulation continues to converge on the optimal solution. Once it begins to diverge, the algorithm saves the best coefficient state and moves on to the next coefficient.

Each elasticity coefficient starts at some default value. The starting value does not matter, as the algorithm can find the optimal value regardless of where it starts. The simulation is first run with the coefficient at this starting value to get a baseline for how close the simulation is to the optimal bounce heights. As increasing an arbitrary coefficient could make the object more elastic or less elastic, the next step is to determine whether to increase the coefficient or decrease it. To do this, the algorithm moves the coefficient to halfway between the default and the maximum value. If this change causes the simulation to diverge, the algorithm instead decreases the coefficient to halfway between the minimum value and the default value. Once the direction has been determined, the algorithm searches for the optimal coefficient value by modifying the coefficient in a binary manner.
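A condensed sketch of this search for one coefficient is given below. The error measure (sum of absolute bounce-height differences) and the helper names are illustrative, and the halving strategy is a simplified stand-in for the binary refinement described above; in the thesis application, a full Bullet drop simulation takes the place of runSimulation.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Sum of absolute differences between simulated and recorded bounce heights
    // over the first common bounces (the objective being minimized).
    double bounceError(const std::vector<double>& sim, const std::vector<double>& rec)
    {
        double err = 0.0;
        for (std::size_t i = 0; i < sim.size() && i < rec.size(); ++i)
            err += std::fabs(sim[i] - rec[i]);
        return err;
    }

    // Binary-style refinement of a single coefficient: keep stepping by half
    // of the remaining range while the error shrinks, and halve the step
    // (reversing direction) when it grows.
    double tuneCoefficient(double defaultVal, double minVal, double maxVal,
                           const std::vector<double>& recorded,
                           std::function<std::vector<double>(double)> runSimulation,
                           int iterations = 20)
    {
        double value   = defaultVal;
        double bestVal = defaultVal;
        double bestErr = bounceError(runSimulation(value), recorded);
        double step    = (maxVal - defaultVal) * 0.5;   // probe upward first

        for (int i = 0; i < iterations; ++i)
        {
            double candidate = std::min(maxVal, std::max(minVal, value + step));
            double err = bounceError(runSimulation(candidate), recorded);
            if (err < bestErr)         // still converging: accept and continue
            {
                bestErr = err;
                bestVal = candidate;
                value   = candidate;
            }
            else                       // diverging: shrink the step and reverse
            {
                step = -step * 0.5;
            }
        }
        return bestVal;
    }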


There are many coefficients, some of which have nothing to do with the elasticity of the object. Therefore, I limited the set of coefficients to adjust to only those which affect elasticity. These include linear stiffness, angular stiffness, volume stiffness, pressure, volume conservation, and collision impulse splitting. As these coefficients are updated one at a time and never repeated, it is important to find the correct order, since once a coefficient is updated the algorithm does not return to it later. The purpose of updating the coefficients individually, one at a time, is to isolate the effect of a single coefficient on the behavior of the object. As different coefficients affect the behavior in different ways, creating an algorithm which modifies multiple coefficients each iteration would be infeasible, as there would be no way to determine which coefficient was responsible for converging on, or diverging away from, a solution. In the future, it would be beneficial to utilize an external multivariate optimization routine for finding these coefficients of the governing physics equation. It would be the responsibility of the application to define the error function which tells the optimization routine whether the simulation is converging on or diverging from the optimal solution.

4.2.6 Global Deformation Simulation Application

This application was written using C++, fltk 1.3, Bullet 2.79, OpenGL 2.1.0, and ChartDirector 5.0.3 on a Windows XP SP3 system with an Intel onboard graphics card.


Figure 16: Overview of the Global Deformation Simulation Application

1. Launches TetView and displays the sphere mesh from (2).
2. The simulation window which shows the sphere generated with the coefficients defined in (14).
3. Stops the simulation on its current frame.
4. Restarts the simulation. First it regenerates the scene with a new sphere using the coefficients as currently set.
5. Starts the simulation from the current frame.
6. The graph of the height over time and width/height ratio data points.
7. The button to start saving height and width data. This also starts the graphing of the data.


8. Stop button to pause the video playback.
9. Imports data points from the clipboard and graphs them. This is the data from the global deformation capture application.
10. Clears all saved height and width data. Also resets the graph.
11. Radio buttons to determine which object to display in the right simulation window (15).
12. Button which starts the algorithm for finding the coefficients that make the object being simulated match the imported data points.
13. Button which stops the algorithm for finding the coefficients of the sphere.
14. The coefficients which can be adjusted manually or by the coefficient finding algorithm.
15. The simulation window which shows the object defined in (11). If the object is one of the spheres, it uses the coefficients defined in (14), but changes one coefficient to change the geometry of the object.
16. Tells the algorithm to use a tetrahedral mesh to define the sphere.
17. Tells the algorithm to use a mass spring mesh to define the sphere.
18. Launches TetView and displays the sphere mesh from (15).


5. Local Deformation Experiment

In previous sections, the thesis focused on experiments where the object was placed in some initial state and, in an instant, all forces applied by the experimenter were removed, leaving only the natural internal and external forces to act on the object. In this next section, I present a different kind of experiment. When we think about how an animator might test a deformable object to see how it behaves, one obvious test would be to hold the ball in their hand and squeeze it. This gives the animator an idea of the stress/strain relationship, as they know how much stress they are applying and can see how much the object compresses. They can also see how volume is conserved by noticing whether the area not under stress expands while the stressed area compresses. Some other interesting tests an animator might do would be to poke a finger at a single location on the object to determine the area of the affected region or to see how deep the finger enters the object. In these types of tests, the animator has direct control over the stress placed on the object, and it is these kinds of tests on which this section will focus.

The goal of my local deformation experiment was to see how a ball displaces when a specific amount of directed force is applied to the ball. There was a limitation to the kinds of experiments I could do since the ball was round (e.g., I could not set the ball on the ground and poke some stress calculating device into it, as it would roll away). I settled on placing the ball on a table, then setting a flat board on top with several weights atop the board. Finally, I used a level to make sure the board was parallel to the table. I did not add any force to the ball under test while doing this.


Figure 17: The Local Deformation Experiment

A nerf ball undergoing the local deformation experiment.

5.1 Material Property Extraction

Similar to the global deformation material property extraction, the local deformation capture tool uses Gaussian masking to blur the image (described in section 4.1.1.1), the Canny edge detection algorithm (described in section 4.1.1.2), a color segmentation scheme where the user can right click on the object to set the color (described in section 4.1.1.3), and image cropping to focus the experiment on the part of the image which contains the ball under test (described in section 4.1.1.4).

Figure 18: Extracting the Ball's Dimensions for the Local Deformation Experiment


The nerf ball from Figure 17 after undergoing the material property extraction algorithm.

While the method for finding the ball in the image is the same, the difference lies in how the experiment is conducted, how the application is used to extract the material properties, and what material properties are captured.

5.1.1 Recording the Experiment

I set up a table with a tall object over which a white cloth could be draped as my recording background. On the table I laid a length of white particle board to ensure the experiment background would be one color while the object under test would be a contrasting color. I did this to assist the color segmentation algorithm in finding only the ball. When recording, I made sure the camera was at the same level as the middle of the ball. By doing this I was able to see where the boards met the top and the bottom of the ball. It was important to get both the undeformed shape and the deformed shape in the video in order for the algorithm to compare the two. I started the recording with the ball sitting on the table with no external forces acting on it. Then, I placed the board on the ball, followed by the weights and the level. Once I saw the board was level I stopped the recording.

5.1.2 Capturing the Control and Deformed Frames

This application is different from the global deformation application in that we only need two data points: the height and width of the ball in its equilibrium state and in its deformed state. Therefore, as the video is playing, there are two buttons: one to select the frame that best shows the control state, and one to select the frame that best shows the deformed state.


5.1.3 Material Properties Extracted

For this experiment, all we care about is the width and height of the ball in its control state and the width and height of the ball in its deformed state. When using the application, the user must enter the amount of weight placed on the object. Along with that, the user must input the height of the ball. The edge detection, color segmentation, and cropping algorithms take care of finding the ball in the frame. I used the information input by the user to calculate the width and height of the control and deformed ball in meters. Similar to the global deformation capture application, this application has a button which serializes the extracted data and copies it to the clipboard.

5.1.4 The Local Deformation Capture Application

This application was written using C++, fltk 1.3, and OpenCV 2.1 on a Windows XP SP3 system with an Intel onboard graphics card.
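The conversion to meters in section 5.1.3 is not spelled out in detail; one plausible sketch, assuming the user-entered ball height is used to derive a meters-per-pixel scale from the control frame, is given below. The structure and names are hypothetical.

    // Convert pixel measurements to meters using the known physical height of
    // the ball. The scale is taken from the control (undeformed) frame and
    // reused for the deformed frame, which assumes the camera does not move.
    struct Dimensions { double width, height; };

    Dimensions toMeters(double widthPx, double heightPx,
                        double controlHeightPx, double ballHeightMeters)
    {
        const double metersPerPixel = ballHeightMeters / controlHeightPx;
        Dimensions d;
        d.width  = widthPx  * metersPerPixel;
        d.height = heightPx * metersPerPixel;
        return d;
    }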


Figure 19: Overview of the Local Deformation Material Property Extraction Application

1. Radio button which tells the application the experiment is a recorded AVI file.
2. Radio button which tells the application the experiment will be done via a webcam connected to the device.
3. The frame which displays the recording as it was recorded.
4. Button that stops the playback of the recorded experiment.
5. Button that restarts the playback of the recorded experiment from the first frame.
6. Button that starts the playback of the recorded experiment from the current frame.
7. A box which shows the currently selected object color.
8. A checkbox to enable/disable the cropping of the image.
9. A field in which the user enters the size (in the y direction) of the object.
10. A field in which the user enters the weight of the board placed on the object.


11. The captured width and height of the control ball. This gets set when (21) is pressed.
12. Takes the captured data, serializes it, and copies it to the clipboard.
13. The captured width and height of the deformed ball. This gets set when (18) is pressed.
14. A slider to modify the color segmentation threshold.
15. A slider to modify the Gaussian blur matrix size. This can be 3x3, 5x5, or 7x7.
16. A slider to modify the Canny edge detection gradient threshold.
17. Button to discard the previously set deformed object image.
18. Button to set the deformed object image.
19. The frame which displays the set deformed edge image.
20. Button to discard the previously set control object image.
21. Button to set the control object image.
22. The frame which displays the set control edge image.
23. Button which opens a file dialog where the recorded experiment can be selected. The experiment must be in AVI file format.

5.2 Finding Simulation Coefficients

Once the width, height, and weight information has been serialized to the clipboard, it can be brought into the local deformation simulation application, which works much like the global deformation simulation application. Bullet and OpenGL are used to simulate the undeformed ball and the compressed ball (described in section 4.2.1).


TetGen is used to generate the tetrahedral sphere (described in section 4.2.2). TetView is used for closer inspection of the tetrahedralized sphere; it allows us to see the internals of the object (described in section 4.2.3).

There are several key differences between the two applications. First, while the global deformation simulation application was trying to optimize one behavior of the ball (bounce height), this application attempts to optimize four behaviors (control width, control height, deformed width, deformed height). Second, the method in which the data is captured is different between the two applications. This time, instead of capturing data points over the entire simulation, only the equilibrium and most deformed states are captured.

Figure 20: The Simulated Ball being Compressed by a Simulated Weighted Board

5.2.1 The Board-Ball Simulation

Simulating a board squishing a ball is not a simple problem. In a typical simulation, the animator sets up the scene (placing objects of differing masses, sizes, and shapes at different locations in the scene) and then starts the simulation, letting the physics engine take care of the details. The end goal of this simulation is to place a board on top of the ball with the same amount of force as in the recorded experiment. In the recorded experiment I set the board on the ball and balanced it with my hands.


This allowed the board to rest on the ball until both had zero velocity, thus not adding any kinetic energy into the system at that point. In the simulation, however, we cannot simply start the scene with the board already resting on the ball in the deformed equilibrium state. Deformed states occur because of collisions, which occur when two objects intersect with each other. When the physics engine notices two objects intersecting, it adds an artificial impulse force to both objects. The magnitude of this impulse force depends on how deep the intersection is. When two objects start a simulation already intersecting, the physics engine adds a large impulse force which usually causes the objects to fly apart when the simulation is started. For this thesis I started the simulation with the weighted board slightly above the top of the ball. Then, when the simulation starts, the board drops onto the ball, causing a deformation of the soft body. The goal was to minimize the amount of kinetic energy brought into the experiment by the dropping board; however, using this mechanism, it was impossible to remove all of it.

5.2.2 Optimization Problem

The purpose of the local deformation experiment I chose was to see how the object deforms when it is squished between two boards with a certain amount of force. As the camera is two dimensional, the information available was the amount the object displaced in the y and x directions; as the ball is symmetric, this means the height and width, respectively. The displacement is calculated as the difference between the position of the ball in its deformed state and the position of the ball in its undeformed (or equilibrium) state.


Therefore, the application finds the width and height of the ball in both the equilibrium and the deformed state. The purpose of the simulation application was to optimize the coefficients so that the width and height of the ball in the equilibrium state matched those of the recorded ball, and the width and height of the ball in the deformed state matched those of the recorded ball.

5.2.3 Capturing Data Points

Unlike the global deformation experiment, the goal was not to track the behavior of the ball over time; instead, we want the width and height of the ball when it is most deformed. As every frame is rendered, the algorithm tracks the min/max x and y values of the entire object. As the width and height are found, the information is saved if the width is bigger than it had been in any previous frame and if the height is smaller than it had ever been in any previous frame.

The application renders two frames simultaneously. First it renders a scene with just the ball. Second, it renders a scene with the same ball and a weighted board dropped on it. This allows us to calculate the max width and min height of both the control ball and the deformed ball for that simulation. This is exactly what was done in the local deformation capture application.

5.2.4 The Coefficient Finding Algorithm

This optimization problem is similar to the one defined in the global deformation simulation application. The difference here is that the optimization problem has four independent variables, that is, the width and height of the control ball as well as of the deformed ball.


That being said, the algorithm is identical, with the only difference being how we determine whether the simulation is converging on the solution. If either the width or the height difference is smaller than in the previous run, the algorithm marks that run as converging on the solution.

5.2.5 The Local Deformation Simulation Application

This application was written using C++, fltk 1.3, Bullet 2.79, and OpenGL 2.1.0 on a Windows XP SP3 system with an Intel onboard graphics card.

Figure 21: Overview of the Local Deformation Simulation Application

1. The simulation window which shows the sphere generated with the coefficients defined in (14).


2. The simulation window which shows the sphere generated with the coefficients defined in (14), with a weighted box dropped on it.
3. Stops the simulation on its current frame.
4. Restarts the simulation. First it regenerates the scene with a new sphere using the coefficients as currently set.
5. Starts the simulation from the current frame.
6. The width and height of the simulated control sphere (1).
7. The width and height of the simulated deformed sphere (2).
8. Imports width/height data from the clipboard and sets it in the fields (9) and (10).
9. The imported width and height of the control object.
10. The imported width and height of the deformed object, along with the amount of weight set on top of the deformed object.
11. Radio buttons to determine which object to display in the right simulation window (15).
12. Button which starts the algorithm for finding the coefficients that make the object being simulated match the imported data points.
13. Button which stops the algorithm for finding the coefficients of the sphere.
14. The coefficients which can be adjusted manually or by the coefficient finding algorithm.
15. Tells the algorithm to use a tetrahedral mesh to define the sphere.
16. Tells the algorithm to use a mass spring mesh to define the sphere.
17. Launches TetView and displays the sphere mesh from (2).


18. Launches TetView and displays the sphere mesh from (1).

6. Coupling Object Geometry with Elasticity Coefficients

The goal of the previous experiments was to build a simulation of an object whose material behavior closely resembles the material behavior of some known object. We did this by recording the object under several tests, then running those recordings through a few computer vision algorithms to extract the material behavior of the object from the recordings. These extracted behaviors are then transferred to a physics simulation engine which attempts to create an object with those same material behaviors. Before starting the coefficient finding algorithm, the animator must generate the geometry of the object; the coefficients are then found without changing the geometry. The next stage of the thesis is to take two objects with identical material coefficients, modify their geometries, and see how that affects the material behaviors. The goal of this stage is to determine whether and how the coefficients of a simulation and the geometry are coupled to each other. Similar to the material behavior matching experiments done in the previous sections, I developed a global deformation and a local deformation solution to test these theories.

6.1 Global Deformation

In a global deformation experiment, we are looking at how the whole object responds to contact forces applied to the surface of the object; much previous research, by contrast, examines how a local region deforms due to some contact force being applied to that location. This section discusses the experiments performed to measure the global behavior difference of two objects with different geometries.


The global behavior focused on by this thesis is the bounce height of an object, that is, how much energy is lost in the collision with the ground. To accomplish the global deformation goals I developed a method to simulate two spheres simultaneously. Both spheres have the same size, starting position, mass, and elasticity coefficients. One sphere is the control, while the second sphere is identical to the control except that the geometry is different. There are three geometric properties which can be adjusted on the second sphere. First, the resolution of the sphere, that is the number of vertices and edges on the surface of the sphere, can be smaller or greater. Second is the radius/edge length ratio constraint; if we set a lower value for this constraint, the tetrahedral mesh generator will allow fewer long, skinny tetrahedrons. Third, the volume constraint can be adjusted; when this value is lower, it forces the tetrahedral mesh generator to build smaller tetrahedrons. These geometric properties are discussed in more detail in section 4.2.2.2. By adjusting these properties, this thesis tests spheres with larger surface tetrahedrons and smaller internal tetrahedrons; smaller surface tetrahedrons and larger internal tetrahedrons are also tested. The application tracks the ball bounce height as well as the dimensions of the ball for both the control sphere and the sphere with the adjusted geometry. With this, I compare the bounce heights of the two spheres.

6.2 Local Deformation

With respect to the local deformation experiment, I also implemented a method to simulate two spheres with identical material coefficients but slightly different geometries.


The geometric properties which can be changed are the resolution, the radius/edge length constraint, and the volume constraint. These constraints are described in the previous section.

Figure 22: Object Behavior Differences by Adjusting the Geometry

With all other coefficients being the same, objects are tested to see how identical localized forces affect the behavior of objects with different mesh resolutions.

Each sphere has an identical weighted board dropped on it. The height and width of the control and geometrically altered spheres are captured up to the moment the width is at its largest value and the height is at its smallest value. With this information, I am able to compare the deformation of two objects with identical coefficients but different geometries side by side.
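A sketch of the per-frame tracking described above is shown below. In the application the vertex positions would come from the soft body's node list in Bullet, but a plain vector of points is used here to keep the example self-contained.

    #include <algorithm>
    #include <limits>
    #include <vector>

    struct Point3 { double x, y, z; };

    // Running record of the deformation extremes: the largest width and the
    // smallest height seen in any frame so far.
    struct DeformationExtremes
    {
        double maxWidth  = 0.0;
        double minHeight = std::numeric_limits<double>::max();

        void update(const std::vector<Point3>& vertices)
        {
            double minX =  std::numeric_limits<double>::max();
            double maxX = -std::numeric_limits<double>::max();
            double minY =  std::numeric_limits<double>::max();
            double maxY = -std::numeric_limits<double>::max();
            for (const Point3& p : vertices)
            {
                minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
                minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
            }
            maxWidth  = std::max(maxWidth,  maxX - minX);
            minHeight = std::min(minHeight, maxY - minY);
        }
    };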


7. Results

During the course of this thesis there were five tasks, each with individual results which are shared in this section. The goal of the material property extraction tasks was to successfully extract the material behavior of the ball under both the local deformation and global deformation tests. The goal of the simulation tasks was to use the extracted material behaviors to automatically generate the coefficients of the simulation which enable the simulated ball to have the same material behavior as the real ball. The final task was the investigation of how modifying the geometry, without changing the coefficients, affected the material behavior of the object using the same local and global experiments.

7.1 Global Deformation Material Property Extraction

The goal of this task is to take a recorded experiment of a ball being dropped from some height and extract the ball's y position and dimensions for the duration of the experiment. The captured information needs to be formatted in such a way that all data points not relevant to the experiment are pruned.

To analyze the computer vision, data analysis, and tracking logic in the global deformation material property extraction research, I used four balls made of different materials which have different behaviors when bounced. These were a ping pong ball, a wiffleball, a softball, and a highly deformable air filled ball. The experiments were conducted in front of a Casio Exilim EX-ZR200 camera recording at 120 frames per second. The background and ground in each experiment were covered in a white sheet and each of the balls was painted red. The balls were dropped from 1 meter and 1.6 meters.


Figure 23: Graphs of the Ball Positions over Time when Dropped from 1 Meter

These graphs show the y position of the top of the ball for the duration of the experiment which drops the ball from 1.0 meters. They represent the position of the (a) softball, (b) wiffleball, (c) air filled ball, and (d) ping pong ball.


Figure 24: Graphs of the Ball Positions over Time when Dropped from 1.6 Meters

These graphs represent the y position of the balls in the experiment which drops them from 1.6 meters. They are the (a) softball, (b) wiffleball, (c) air filled ball, and (d) ping pong ball.

These results show what was expected prior to running the tests. The softball has low elasticity and reaches its final resting position more quickly than the other three ball types. The hollow wiffleball bounces higher than the softball, but not as high as the air filled ball, and finally the ping pong ball bounces the highest and longest of all the balls tested. This is expected, as the ping pong ball is built to bounce well.

It should be noted that the data collected creates a smooth graph of the y position of the ball. This tells us that the frame rate used is high enough to capture the peaks of the bounce curve. It also tells us that using the top of the ball to measure position is an acceptable solution for this type of experiment.


7.2 Global Deformation Simulation

The goal of the simulation step is to estimate the coefficients of the governing equation which cause the simulated ball to have the same bounce heights as the recorded ball. For this thesis, success would mean the algorithm changes the coefficients such that the bouncing heights of the simulated sphere converge on the bouncing heights of the recorded ball, or at least that the algorithm moves the coefficients so that the simulation is closer to the optimal solution than when it was run with the default values.

It appears that when using tetrahedrons, the solid object which gets created diffuses much of the kinetic energy attained by the ball at the moment of collision. Because of this, there was not enough energy remaining to propel the object to the bounce heights of the wiffleball, air filled ball, or ping pong ball. The reason for the energy loss is the topic of another research effort; however, the author speculates it is because of the moving parts inside a solid tetrahedral mesh. The algorithm was, however, able to find elasticity coefficients which make the simulated ball behave like the softball.

Figure 25: Captured vs. Simulated Ball Height over Time

After 25 iterations, the algorithm was able to find the coefficients which made the simulated ball have the same bounce heights as the captured ball.


7.3 Local Deformation Material Property Extraction

In the local deformation material property extraction work, the goal was to find the width and height of the ball both under its own weight and under the weight of a heavy board. To test the computer vision width and height extraction algorithm, I used four types of ball, each of a different material: a softball, a wiffleball, a nerf ball, and a stress ball. The softball is made of a solid polyurethane core with a leather wrapping, the wiffleball is hollow and made of a hard plastic shell, the nerf ball is a solid light sponge-like material, and the stress ball is made of a soft plastic casing filled with a gel-like substance.

Table 1: Comparing the Deformation of each Ball Type

The width and height of the balls under experimentation (in meters), along with the percentage change after the weighted board is placed on the object.

                         Control     Deformed    % Change
Softball     width       0.198969    0.196907     1.04%
             height      0.2         0.198696     0.65%
Wiffleball   width       0.156757    0.156757     0.00%
             height      0.16        0.158919     0.68%
Nerf ball    width       0.16        0.156991     1.88%
             height      0.16        0.127536    20.29%
Stress ball  width       0.161053    0.166316     3.27%
             height      0.16        0.111579    30.26%

As expected, the width and height of the softball and wiffleball change very little when the weight is applied.


The nerf ball loses about 20% of its height, but its width barely changes, as the nerf material does not preserve volume when deformed. The stress ball is also highly deformable, as it loses 30% of its height when the weight is applied. Finally, this ball gets wider, as it preserves volume when it is deformed.

7.4 Local Deformation Simulation

The local deformation simulation application was responsible for finding the coefficients of the governing equations which cause the simulated experiment of a board compressing a ball to behave like the recorded experiment. The objects under test were a softball, a wiffleball, a nerf ball, and a stress ball. The algorithm developed works well with highly deformable spheres but does not work well when the object under test has little or no deformation. Therefore, the algorithm was unable to find the elasticity coefficients for the softball and wiffleball. However, the algorithm was successful with respect to the spongy nerf ball and stress ball.

Table 2: Captured vs. Simulated Compression of the Nerf Ball

The simulated control width and height, along with the deformed height, all come very close to matching the recorded values. The simulated deformed width is greater than the captured width, which tells us the tetrahedral mesh tends to preserve volume as the object is being deformed.

                            Control               Deformed
                            width      height     width      height
Nerf ball   Recorded        0.25       0.25       0.244048   0.197619
            Simulated       0.250632   0.242885   0.261936   0.197754
            % difference    0.25%      2.85%      7.33%      0.07%


Table 3: Captured vs. Simulated Compression of the Stress Ball

All simulated width and height values are within acceptable thresholds (under 4% difference between simulated and captured) for the stress ball.

                             Control               Deformed
                             width      height     width      height
Stress ball   Recorded       0.102581   0.1        0.103226   0.069032
              Simulated      0.100048   0.097808   0.105659   0.066867
              % difference   2.47%      2.19%      2.36%      3.14%

7.5 Investigation of Object Geometries

The goal of this task was to determine how the behaviors of two spheres change when they have the same coefficients of the governing equation but differing geometries. This was tested both with the global simulation experiment, that is, dropping a ball from some height and tracking the height and frequency of the ball bounces, and with the local deformation experiment, that is, compressing a ball between the ground and some weighted board to see how much the ball deforms.

7.5.1 Global Deformation

For this part of the thesis I focused only on the tetrahedral spheres, as they have more geometric properties that can be adjusted than the mass spring sphere, in which only the object resolution can be adjusted. The properties of the tetrahedral sphere which can be adjusted are the resolution, that is, the number of vertices and edges which make up the surface of the object; the radius/edge length ratio constraint, that is, long skinny tetrahedrons versus evenly proportioned tetrahedrons; and the volume constraint on the tetrahedrons.


Figure 26: Graphing the Ball Heights of Two Spheres with Differing Resolutions

The graph shows how the bounce height is affected when the resolution of the object is adjusted. The sphere with the larger number of tetrahedrons tends to lose less energy in the collision.

Figure 27: Graphing the Ball Heights of Two Spheres with Different Radius/Edge Length Constraints

This graph shows how the bounce height is affected when the radius/edge length ratio is adjusted. While the sphere with the ratio closer to one does lose less energy in the collision than the other, the effect on bounce height is minimal.

Figure 28: Graphing the Ball Heights of Two Spheres with Different Volume Constraints

This graph shows how the bounce is affected when the volume constraint is the only geometric difference between two spheres. The bounce heights are the same between the two objects; however, the sphere with the lower volume constraint compresses more when it strikes the ground.


When the resolution is increased, this increases the number of polygons on the surface of the sphere, but it is still the volume constraint and the radius/edge length ratio which determine the size and shape of the tetrahedrons inside the object. In the first graph, two balls were dropped, one with a low resolution and one with a high resolution. Both spheres had identical elasticity coefficients as well as identical radius/edge length ratio constraints and volume constraints. This is evidenced by the same sized tetrahedrons on the inside of each object. From this information, we can see that an object with a denser surface mesh loses less energy in the collision with the ground than an object with a sparser surface mesh.

Adjusting the radius/edge length ratio constraint had a minimal effect on the bounce height: the ball with the ratio closer to one lost slightly less energy in the collision with the ground and thus bounced slightly higher than the ball with a higher radius/edge length ratio. Finally, the tests showed that decreasing the mesh generation volume constraint had no effect on the energy lost during the collision. However, an interesting takeaway from this test was that the ball with the lower volume constraint compressed more when it collided with the ground, while the ball with the higher volume constraint did not compress as much. Interestingly, all of the tests confirmed that changing the geometric coefficients, with none of the elasticity coefficients changed, changed the elastic behavior of the object.


7.5.3 Local Deformation

With respect to local deformation, the test was to drop identical boards onto two spheres with identical elasticity coefficients but slightly different geometric properties. The goal was to see how the objects' width and height differ when their geometries change.

Figure 29: Chart Comparing Compression of Two Spheres with Different Resolutions

This chart shows how much the balls' height and width displaced when identical boards were placed on top of them. The balls are identical except for the resolution of the sphere.

Figure 30: Chart Comparing Compression of Two Spheres with Different Radius/Edge Length Constraints

This chart shows how much the balls' height and width displaced when identical boards were placed on top of them. The balls are identical except that the radius/edge length constraints were different on the two balls.


Figure 31: Chart Comparing Compression of Two Spheres with Different Volume Constraints

This chart shows how much the balls' height and width displaced when identical boards were placed on top of them. The balls are identical except for the volume constraint placed on the mesh generator which created the tetrahedral sphere.

Through experimentation, it is shown that the resolution of the ball has a large effect on the stress/strain relationship. In the first chart we see that when the objects were in their most deformed state, the height of the sphere with the lower resolution was 7.68% lower than that of the denser sphere. This tells us that, when using tetrahedrons, using more, smaller tetrahedrons will give the object a more solid behavior than fewer, bigger tetrahedrons. While investigating the effect the radius/edge length ratio constraint had on object deformation, I found that, while there were slight differences in the displacements, they were small enough to be attributable to the simulation itself. Finally, when testing the volume constraint parameter used by the tetrahedral mesh generator (using a volume constraint about 10 times smaller than that of the control), I found that decreasing the volume constraint, more than any other geometric parameter, caused the width of the object to increase as the height decreased.


8. Discussion

The research conducted during the course of the past two semesters involved long nights, weekends, and skipped holidays. While an enormous amount of work went into it, there are still areas I have kept track of which could be improved upon to gain better results.

8.1 Computer Vision

It was always the intention of this thesis to record experiments with an object and to use computer vision techniques to extract the important behavioral information from the recording. To do this, I used a point and shoot camera to record high speed videos of the experiments. The camera, while much better than others I have used, still produced grainy videos when recording at any speed other than 30 frames per second. Even when using the highest frame rate, it still did not shoot at the 12.5 megapixel resolution it would when shooting a still picture. The result of the grainy, low resolution pictures was that the edge detection algorithms had a harder time finding the true edges of the ball. I believe it was for this reason that I was not able to capture precise height and width information of the ball in the global deformation material property extraction application. See the figure below for an example. I would expect the width/height ratio to stay at one except when the ball collides with the ground. However, because the image is grainy, the width and height are not precise at every frame.


Figure 32: Graph of Width / Height Ratio over Time. Graph of the width/height ratio over time as tracked and calculated by the global deformation material property extraction application.

Another idea to rectify this problem might be to implement a low-pass filter which removes the noise and, hopefully, leaves only a line better representative of the width/height ratio of the ball over time. The computer vision technique used to find the ball in the image, while it works for the type of experiments conducted, would not work under several circumstances; in particular, it fails if the colors of the ball and the background do not differ enough. There are better segmentation algorithms available today to find an object in an image, though most of them require a library of images. The algorithm I used finds the edges of the ball by using the Canny edge detection algorithm and then utilizing a color matching scheme to find edges of a certain color in the image (a rough sketch of this kind of pipeline, and of the low-pass filter idea, appears below). I have shown in an earlier section that this approach struggles when the ball color is close to the background color, which happens in a lot of cases. If I had two cameras, I could have shot the experiment in stereo, which can pick up much more three-dimensional information than a single camera. Having one camera limited me to the information visible in two dimensions.
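The following is a rough sketch of the kind of single-camera pipeline described above, not the thesis's actual implementation: Canny edges restricted to a color mask locate the ball, a bounding box gives the width and height per frame, and a moving-average filter stands in for the suggested low-pass filter. The HSV thresholds, dilation kernel, window size, and video file name are placeholders.

```python
import cv2
import numpy as np

# Placeholder HSV range for an orange ball; real thresholds would be tuned to the footage.
LOWER, UPPER = np.array([5, 120, 80]), np.array([20, 255, 255])

def ball_width_height(frame):
    """Keep only the Canny edge pixels that also match the ball's color,
    then return the bounding-box width and height of those pixels."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))   # tolerate small edge offsets
    ball_edges = cv2.bitwise_and(edges, mask)
    ys, xs = np.nonzero(ball_edges)
    if len(xs) == 0:
        return None
    return xs.max() - xs.min(), ys.max() - ys.min()

def smooth(signal, window=5):
    """Simple moving-average low-pass filter for the width/height ratio."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

cap = cv2.VideoCapture("bounce_test.mp4")   # assumed file name
ratios = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    wh = ball_width_height(frame)
    if wh is not None and wh[1] > 0:
        ratios.append(wh[0] / wh[1])
cap.release()
if ratios:
    ratio_smoothed = smooth(np.array(ratios))
```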


With stereo cameras, I could have done an experiment in which I poked the object with a certain force and found the radius and depth of the crater I made. The experiments I did conduct would have been better captured if I had had proper lighting. For all my experiments, I had overhead lights and scrounged a couple of lamps, but this was never enough. Some of the lights in my experiment were brighter than others; some had soft white light bulbs while others were fluorescent white. Also, the overhead light cast a shadow straight down which always showed in the video. A better setup would be to have three umbrella lights situated in a triangle in front of the object. That way the shadow would fall behind the object and all sides of the object would have an even color.

8.2 Simulation

For this thesis I developed a new method for finding the coefficients of the governing equation which made the simulated sphere behave like the real sphere. The algorithm involves changing coefficients one at a time; once a coefficient has been moved past, it is never under consideration again. A study of multivariable optimization routines might allow this application to make better choices when updating coefficients. It may also be the case that some of the difficult optimization work could be passed off to a third-party optimization routine altogether (see the sketch below). When using tetrahedral meshes for this thesis, it became clear that I was limited to the arguments I could pass into the tetrahedral mesh generator, namely the radius / edge length constraint and the volume constraint. It would be nice to have better control over the meshes I am creating. For instance, I would like to create a spherical mesh where the volume constraint starts out large near the border but gets smaller as it approaches the center; the opposite configuration would also be interesting.
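As a concrete illustration of handing the coefficient search to a third-party optimizer, the snippet below wraps a simulation call in an error function and lets scipy.optimize.minimize adjust all coefficients at once. This is only a sketch: run_simulation and the target values are stand-ins, and in practice the objective would compare the Bullet simulation against the measurements extracted from the recordings.

```python
import numpy as np
from scipy.optimize import minimize

def run_simulation(coeffs):
    """Stand-in for the thesis's Bullet-based drop test. A real implementation
    would build the tetrahedral sphere with these elasticity coefficients,
    run the drop, and return the measured bounce height and maximum compression."""
    # Purely illustrative surrogate so the sketch runs end to end.
    return 0.42 * coeffs[0], 0.08 * coeffs[1]

# Hypothetical targets extracted from the recorded video.
TARGET_BOUNCE, TARGET_COMPRESSION = 0.42, 0.08

def simulation_error(coeffs):
    """Mismatch between the simulated behavior and the video measurements."""
    bounce, compression = run_simulation(coeffs)
    return (bounce - TARGET_BOUNCE) ** 2 + (compression - TARGET_COMPRESSION) ** 2

initial_guess = np.array([0.5, 0.5])        # one entry per elasticity coefficient
result = minimize(simulation_error, initial_guess, method="Nelder-Mead")
print("best coefficients:", result.x)
```

Unlike the one-at-a-time search, a routine like this revisits every coefficient on each iteration, so an early, poor choice is not locked in.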


The mesh generator also only produces solid meshes, which limited this test, as many of the balls I used in my experimentation were hollow; it was the pressurized air inside that caused the ball to bounce.

8.3 Other Interesting Tests

As we look at an object to determine how rigid or soft it is, we might want to deform it and watch how it returns to its equilibrium state. As such, an interesting test might be to take a long thin strip of the material being tested, move it to the edge of a table, and secure one end of the object to the table. With this setup, the experimenter may apply a certain force to the end of the object protruding from the edge of the table and suddenly remove the force, causing the object to vibrate up and down. With this type of experiment, a computer vision application could be written which measures the frequency and amplitude of the vibrating object (a sketch of such a measurement appears at the end of this section). Also, the application could measure how quickly the vibration dies out, which indicates how much energy is lost in the system due to the deformation of the object. It would also be interesting to investigate how the frameworks described in the earlier sections of this thesis could be modified to support material property extraction for any shaped object. In this case, I suspect the algorithm would need to find feature points on the object and track how the distances between feature points change while the object under test is deformed.
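To make the vibration measurement concrete, the sketch below estimates the dominant frequency and amplitude of a tracked tip-displacement signal with an FFT. The frame rate, decay rate, and displacement values are made up for illustration; a real run would use the positions tracked by the computer vision application.

```python
import numpy as np

def dominant_frequency_and_amplitude(displacement, fps):
    """Estimate the dominant vibration frequency (Hz) and its amplitude from a
    time series of tip displacements sampled at the camera frame rate.  For a
    decaying oscillation this returns an amplitude averaged over the window."""
    centered = displacement - displacement.mean()            # remove the resting offset
    spectrum = np.fft.rfft(centered)
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    peak = np.argmax(np.abs(spectrum[1:])) + 1                # skip the DC bin
    amplitude = 2.0 * np.abs(spectrum[peak]) / len(centered)
    return freqs[peak], amplitude

# Hypothetical example: a 5 Hz decaying oscillation tracked at 240 frames per second.
fps = 240
t = np.arange(0, 2, 1 / fps)
tip = 0.03 * np.exp(-1.5 * t) * np.sin(2 * np.pi * 5 * t)
freq, amp = dominant_frequency_and_amplitude(tip, fps)
print(f"dominant frequency: {freq:.1f} Hz, average amplitude: {amp:.3f}")
```

The decay rate of the envelope, fitted separately, would give the energy-loss measurement mentioned above.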


9. Conclusion

On a daily basis, animators transform things they interact with into 3D models to be used in the movies, games, and research projects they are building. This thesis strove to build a framework which allows animators to accomplish these goals more quickly and more accurately than the traditional guess-and-execute method currently employed. Through the previous semesters, I have developed a system which is a starting point for future research in this area. The framework I built can simulate different types of spheres, but it cannot yet simulate all possible spheres, not to mention the infinite other shapes which an animator might want to simulate. In the future, this work could be expanded to work on different types of objects. For instance, a cube might be a good candidate for future experiments, as it is symmetric and similar material properties can be extracted using a single two-dimensional camera. A longer-term goal is to build a library of materials, that is, the coefficients needed to simulate materials such as glass or hard rubber; to build such a library, we should run experiments with differently shaped objects of the same material. Another possible future area of research is to take the coefficients which make a simulated sphere behave like the recorded rubber ball and use them on objects of different shapes. This way, we can determine whether this is a good method for simulating an object with a more complex geometry, such as a rubber car tire. Another goal of the thesis was to investigate how the geometric properties affect the elastic behavior of the object, treating the geometric and elasticity coefficients separately.


With respect to tetrahedral meshes, I have discovered which geometry properties affect object elasticity and which do not. This information gives animators and researchers additional tools to simulate real-life objects.
