Citation
Genetic selection amongst artificial life agents

Material Information

Title:
Genetic selection amongst artificial life agents
Creator:
Plotkin, Alex
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
x, 87 leaves ; 28 cm

Subjects

Subjects / Keywords:
Artificial life ( lcsh )
Microorganisms -- Simulation methods ( lcsh )
Natural selection -- Simulation methods ( lcsh )
Artificial intelligence -- Biological applications ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 86-87).
Thesis:
Computer science
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Alex Plotkin.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
60403716 ( OCLC )
ocm60403716
Classification:
LD1190.E52 2004m P56 ( lcc )

Full Text
AUTOMATING ESCHER TILING AND COLORING:
AN IMPLEMENTATION
by
Stephen C. Ogden
B. S., University of Colorado at Denver, 1997
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
2004


2004 by Stephen C. Ogden
All rights reserved.


This thesis for the Master of Science
degree by
Stephen C. Ogden
has been approved
by
Ellen Gethner
Gita Alaghband
Min-Hyung Choi
Date


Ogden, Stephen C. (M. S., Computer Science)
Automating Escher Tiling and Coloring: An Implementation
Thesis directed by Assistant Professor Ellen Gethner
ABSTRACT
An Escher tile is a creation most notably attributed to the famous artist
M. C. Escher. The tile is a square that contains a unique design. It is
used to cover a surface much as tiles cover a wall, for instance. The
design in the tile may seem to be unremarkable because it appears to
contain only a group of random polygons. But, when the tiles are placed
adjacent to one another to cover a surface, patterns appear on the surface.
These patterns are repeated in cycles, or periods, to create wallpaper
patterns on the surface. In addition, the patterns may or may not be
overlapping. The patterns may not be apparent just by looking at the
designs, or motifs, in the square tile.
The patterns are interesting but even more interesting when colored. A
question that begs an answer is: How can the patterns be colored so as to
receive unique colors? Unique coloring, in this case, means that each
wallpaper pattern receives a color that is different from patterns it may
overlap. There must also be a discrete number of colors used on the
wallpaper. This allows the color patterns to be repeated as well.
Fortunately, Ellen Gethner has discovered a method for the solution to
this problem.
Another question is: How can this method be automated on a computer?
That is the question that this thesis seeks to answer. Each step in the


process of coloring Escher tiles in a wallpaper is mathematically sound
and serves a purpose in the solution. However, for an implementation,
there are issues of data structures, algorithms, and procedures that are
necessary for engineering a solution. There is also the problem of simply
defining the problem in concrete terms so that a solution may be
implemented. The theoretical solution often does not account for
practical concerns. There must be some translation from the realm of
theory to the realm of practice. These are the issues that are discussed in
this thesis.
This abstract accurately represents the content of the candidate's thesis.
I recommend its publication.
Signed
Ellen Gethner


DEDICATION
This thesis is dedicated to every art and math lover who has come to
appreciate the work of M. C. Escher.


ACKNOWLEDGEMENT
I wish to thank my advisor and mentor Ellen Gethner who spent many
evenings with me explaining linear algebra, number theory, and those
pesky forbidden vectors. Thanks to Min-Hyung Choi for his help in
implementing a graphical interface for Escher tile visualizations. Also
thanks to the talented professors in the Computer Science department,
including John Clark, Bill Wolfe, Gita Alaghband, and Il-Kyun Ra, for
expanding my world once again. I especially wish to thank my fiancée,
Ann Gonzalez, for all her love, support, and devotion that made it possible
for me to complete this project and earn a degree.


CONTENTS
Figures..........................................................x
Tables...........................................................xi
Chapter
1. Introduction...............................................1
1.1 Motivation.................................................1
1.2 Problem Definition.........................................7
1.3 Related Work and Resources................................10
1.4 Discussion Format.........................................10
2. Method....................................................12
2.1 Main Algorithm............................................13
2.2 Label Motif Pieces........................................14
2.2.1 Implementation............................................14
2.3 Identify Connections......................................15
2.3.1 Implementation............................................19
2.3.2 3D Extension..............................................20
2.4 Identify Overlaps.........................................20
2.4.1 Implementation............................................21
2.4.2 3D Extension..............................................21
2.5 Derive the Spanning Trees.................................22
2.5.1 Implementation............................................22
2.5.2 3D Extension..............................................23
2.6 Create the Generating Set of Motif Pieces.................23
2.6.1 Implementation............................................25
2.6.2 3D Extension..............................................28
2.7 Record Removed Edges......................................28
2.7.1 Implementation............................................28
2.8 Identify Ghost Nodes......................................30
2.8.1 Implementation............................................33
2.8.2 3D Extension..............................................34
2.9 Lattice...................................................34
2.10 Compute Forbidden Vectors.................................36


2.10.1 Implementation............................................38
2.10.2 3D Extension..............................................39
2.11 Normalize Forbidden Vectors...............................40
2.11.1 Implementation............................................41
2.11.2 3D Extension..............................................42
2.12 Choose Collision-Free Vectors.............................45
2.12.1 Implementation............................................46
2.12.2 3D Extension..............................................47
2.13 Compute Big Tile Size.....................................48
2.13.1 Implementation............................................52
2.13.2 3D Extension..............................................52
2.14 Locate Color Vectors......................................56
2.14.1 Implementation............................................57
2.14.2 3D Extension..............................................62
2.15 Assign Colors to Lattice Points...........................65
2.15.1 Implementation............................................69
2.15.2 3D Extension..............................................70
2.16 Color the Tile............................................72
3. Conclusion................................................74
3.1 Design Tool...............................................74
3.2 Future Considerations.....................................75
Appendix
Appendix A. Algorithms...........................................76
Appendix B. Escher Tile Examples.................................81
References.......................................................85


FIGURES
Figure 1.1: An Escher Tile.........................................1
Figure 1.2: An Escher Wallpaper....................................2
Figure 1.3: A Colored Escher Wallpaper.............................3
Figure 1.4: Escher Design Tool.....................................5
Figure 1.5: 3D Escher Design Tool..................................7
Figure 1.6: Highlighted Unit Tile..................................8
Figure 1.7: Wallpaper Component....................................9
Figure 2.1: Labeled Escher Tile...................................14
Figure 2.2: Horizontal, Vertical, and Diagonal Connections........15
Figure 2.3: Period Graph..........................................18
Figure 2.4: Agglomerated Period Graph.............................19
Figure 2.5: Spanning Tree.........................................22
Figure 2.6: Embedded Spanning Tree................................25
Figure 2.7: Generating Set of Motif Pieces........................25
Figure 2.8: Ghost Node............................................31
Figure 2.9: Natural Period........................................31
Figure 2.10: Lattice Basis........................................35
Figure 2.11: Lattice Pattern......................................36
Figure 2.12: Overlap Graph........................................37
Figure 2.13: Lattice..............................................45
Figure 2.14: Normalized Lattice...................................49
Figure 2.15: Uncolored Big Tile...................................52
Figure 2.16: Open Parallelogram...................................58
Figure 2.17: Open Parallelepiped..................................62
Figure 2.18: Colored Big Tile.....................................68
Figure 2.19: Numbered Big Tile....................................69
Figure B.1: Trivially Periodic Tile...............................82
Figure B.2: Trivially Periodic Wallpaper..........................82
Figure B.3: Singly Periodic Tile..................................83
Figure B.4: Singly Periodic Wallpaper.............................83
Figure B.5: Doubly Periodic Tile..................................84
Figure B.6: Doubly Periodic Wallpaper.............................84


TABLES
Table 2.1: Connections.............................................17
Table 2.2: Overlaps................................................21
Table 2.3: Generating Set of Motif Pieces..........................24
Table 2.4: Removed Edges...........................................28
Table 2.5: Forbidden Vectors.......................................38
Table 2.6: Color Vectors...........................................57
Table 2.7: HNF Lattice Properties..................................59
Table 2.8: Open Parallelogram Definitions..........................59
Table 2.9: Lattice Angle Properties................................60
Table 2.10: Iteration Ranges.......................................61
Table 2.11: Parallelepiped Definitions.............................63
Table 2.12: Color Assignments......................................67


1. Introduction
Working with Escher tiles is fun. It involves the study of their
mathematical properties that span the areas of Number Theory, Graph
Theory, Linear Algebra, and Geometry. The added benefit is that this
study also involves the art of the great master artist M. C. Escher. It
seems incongruous that the study of mathematics would lead to a deeper
appreciation of art. This artist's work is awe inspiring from the
standpoint of beauty and form. It seems that most people are familiar
with and love his art. As if esthetics are not enough, it is also awe
inspiring to consider the beauty of the mathematics present in his work.
It is incredible how art and math were brought together so seamlessly by
this man.
1.1 Motivation
Figure 1.1: An Escher Tile
Figure 1.1 shows an example of an Escher tile. This is the basic building


block for creating patterns on a plane. By placing multiple copies of this
tile adjacent to one another a wallpaper is created. Figure 1.2 is an
example of tiling a wallpaper with the tile shown in Figure 1.1. The
patterns that emerge in the wallpaper are not readily apparent just by
looking at the tile alone. We can see patterns in the ribbons, or strands,
that are created by the tile. The preferred term for these strands is
wallpaper component. Each polygon in the tile is joined with another on
the wallpaper to create regular patterns.





Figure 1.2: An Escher Wallpaper
The patterns that show up in the wallpaper, almost as if by magic, are
interesting in their own right. They weave in and out amongst
themselves as if to create some exotic fabric. The individual wallpaper


components create a repeated, rhythmic design that guides our eye across
the canvas. They interlace and overlap to create a structure that makes it
seem as if they cannot be unraveled. And if they were unraveled, they
could all be part of the same wallpaper component. It would be much
more interesting if we could color the patterns so that we could better see
the rhythms and dependencies. Figure 1.3 is an example of one possible
coloring.

Figure 1.3: A Colored Escher Wallpaper
The coloring is definitely an improvement over the monotone scheme in
its predecessor. The coloring in Figure 1.3 is the result of using
Ellen Gethner's coloring algorithm for Escher tiles [Ge02]. The coloring
uses enough colors so that overlapping patterns do not use the same color


while allowing the color pattern to repeat. In Figure 1.3, for example, no
red wallpaper component overlaps another red wallpaper component.
Ellen's algorithm guarantees that no two wallpaper components of the
same color will overlap one another.
Now we know we can take an Escher tile, cover a wallpaper with it, and
color it as defined. It would be nice if we could stretch the possibilities.
There are a number of Escher tiles using an arrangement of motif pieces
such as seen in Figure 1.1. It would be nice if we could automate the
process for different tiles. Automation holds the promise of being able to
explore many new possibilities of creating Escher tiles. Software could be
implemented to allow a user to create new tiles and see the patterns
emerge, both geometrically and chromatically.
As part of a related project, software has been created to test the
feasibility of the following scenario. A tile designer sits down at a
computer and begins to add polygons, or motif pieces, to a design pane. As
each motif piece is added, a wallpaper pattern emerges in a separate
pane. A screen shot of a prototype for this software is shown in Figure
1.4.
This tool was created by the author as part of a project supervised by
Min-Hyung Choi. The aim of the project was to provide a graphical
interface as an aid in the visualization of Escher tiles. This tool is
planned to be used as a teaching aid for students of Combinatorics and
Geometry. A major use for the design tool is simply to test the
functionality of the implementation described here.


Figure 1.4: Escher Design Tool
The main body of the discussion in this thesis will deal with tiling a


wallpaper in ℝ². It turns out that the algorithm is easily extended to ℝ³.
As part of the same project mentioned above, a design tool for 3D
visualization was created. A screen shot of that prototype is shown in
Figure 1.5. The software was intended to help in
visualizing motif pieces in ℝ³. This extension into three dimensions should be
an aid in not only studying 3D Escher tiles, but also in defining just what
a 3D Escher tile really is.


Figure 1.5: 3D Escher Design Tool
1.2 Problem Definition
Now that we have some idea of the problem domain, it is time to define
the problem to be solved and implemented. First, we need to define some
terms.


The Escher tile (Figure 1.1) is called a unit tile. It is a unit tile because it
is the basis for all measurements on the wallpaper. All positions and
directions on the wallpaper are expressed in units of one tile. Since we do
not deal with fractions of unit tiles, only the set of integers will be needed
for calculations.
The polygons within the unit tile are called motif pieces. Motif pieces may
or may not overlap one another. Also, they may or may not touch the edge
of the unit tile. Figure 1.6 shows the motif pieces seen in Figure 1.1
highlighted with different patterns. There are nine motif pieces for this
particular example.
An area tiled with unit tiles is called a wallpaper. The motif pieces then
join with one another to form patterns. These patterns are called
wallpaper components. Figure 1.7 highlights one wallpaper component in
our example. We could have chosen any of an infinite number of identical
wallpaper components. Notice that all of the wallpaper components in
this example overlap each other. However, with other Escher tiles, some
wallpaper components may not overlap each other.
Figure 1.6: Highlighted Unit Tile


Figure 1.7: Wallpaper Component
Now that we have these basic definitions, the problem can be stated. The
problem is to color all wallpaper components in such a manner such that
no two overlapping components have the same color and using a finite
number of colors. The solution is attainable and may be solved using an
efficient algorithm [Ge02]. It would be nice to say that the number of
colors is minimal. But the number of colors is only guaranteed to be finite.


The first part of the problem, that no two wallpaper components that
overlap may have the same color, could be solved by simply using a
different color each time a new component is found. This would require
an infinite number of colors. That does not satisfy the second requirement,
that a finite number of colors be used. The second requirement implies
that, in addition to seeing geometric repetition in a colored wallpaper, we
would see a chromatic repetition as well. The solution gives us a palette
of colors that is finite, but is not minimal.
1.3 Related Work and Resources
The primary motivation for this project is Ellen Gethner's work with
Escher tiles. However, there are a few other resources available for
further investigation.
Doris Schattschneider explored the subject by investigating the counting
problem posed and answered by M. C. Escher [Sc97]. This problem was
alluded to earlier in work by Rick Mabry, Stan Wagon, and Doris
Schattschneider [MaWaSc96]. In the same article she also posed a problem in
three dimensions. Dan Davis showed the exact counting of different
Escher tiles using a base motif [Da97]. Ellen Gethner later generalized
the problem that Doris Schattschneider posed [Ge01]. An interactive web
site by Steve Passiouras lets the user experiment with the motifs to see
what patterns emerge [Pa01].
Finally, for actual drawings of tiles by M. C. Escher see Visions of
Symmetry, by Doris Schattschneider [Sc04]. This book shows many works
beside the tiles and gives insight into many of the patterns that Escher
was pursuing. Additionally, the work of Ellen Gethner is mentioned in
this book in relation to Escher tiles.
1.4 Discussion Format
As you can see, there is plenty of motivation for the study of Escher tiles.
This is an area of study that combines the disciplines of art, mathematics,
computer science, and engineering. The primary focus of this thesis is the
implementation of Ellen Gethner's algorithm. However, one aspect of the
implementation is how to transfer tile information into a form the computer
can use. This process is commonly called digitizing. Another area of
discussion is how to extend the algorithm so that it can be used in three
dimensions.
Digitizing is discussed during the primary steps of the
algorithm. It is also discussed at the last step, the point where colors may
be assigned to motif pieces.
The 3D extension could be a case of carrying along a third coordinate, or it
may require some extra work to conform to the theory. The discussion
of extending the algorithm into 3D is handled at nearly every step in the
algorithm. The 3D extensions are very important because that is where
the real implementation lies. Since 2D is a special case of 3D, the
implementation will be in 3D and then applied to scenarios in 2D.
Another note on format is the use of an example tile. We have seen the
tile in Figure 1.1. To help provide some continuity, this tile will be used
as we work our way through the steps in the algorithm. At each step, the
results of calculations will be shown to help illustrate the expected
results. These are the same results that must be used when performing
unit tests on the software.
Lastly, there is no discussion of actual programming languages here. All
the procedures are shown in pseudo code. This document is meant more
as a blueprint for implementation. The concepts and procedures could be
implemented just as readily in C, C++, Java, Perl, or Assembly if one
would like. The major consideration is that it becomes possible to create a
bridge from the theoretical realm to the practical realm.


2. Method
The main algorithm, as summarized from Ellen Gethner's doctoral thesis
[Ge02], is presented below. The method, and the theory behind it, are
described in her thesis. The summary here is included only to
illustrate the overall program flow. Terms such as motif piece,
forbidden vectors, natural periods, and collision-free vectors will
become clearer as they are presented and defined in the sections that
follow.


2.1 Main Algorithm
The main algorithm may be summarized by the following steps.
Algorithm: color an Escher wallpaper
Inputs:
An Escher tile in a wallpaper
Main
1   Label the motif pieces
2   Identify motif piece connections
3   Identify overlapping motif pieces
4   Derive spanning forest for all connected components
5   for (each connected component)
6       Create the generating set of motif pieces
7       Record removed edges
8       Identify ghost nodes and compute natural periods
9       if (the number of natural periods < 2)
10          Compute forbidden vectors
11          Normalize forbidden vectors
12          Choose collision-free vectors
13      Compute Big Tile size
14      Locate color vectors
15      Assign colors to lattice points
16      Color the tile
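For orientation, the sketch below shows how this control flow might be organized in code. It is only an illustration of the steps listed above; every helper name (label_motif_pieces, find_connections, and so on) is a hypothetical placeholder for a procedure developed in the sections that follow, not part of the published algorithm.

    # Hypothetical driver mirroring the main algorithm; each helper is a
    # placeholder for a procedure described in a later section.
    def color_wallpaper(tile):
        pieces = label_motif_pieces(tile)                      # section 2.2
        connections = find_connections(pieces)                 # section 2.3
        overlaps = find_overlaps(pieces, test_points(tile))    # section 2.4
        components = create_generating_sets(connections)       # sections 2.5-2.7

        for component in components:
            periods = natural_periods_of(component)            # section 2.8
            if len(periods) < 2:                               # trivially or singly periodic
                forbidden = forbidden_vectors_of(component, overlaps, periods)  # 2.10-2.11
                periods = choose_collision_free(periods, forbidden)             # 2.12
            big_tile = big_tile_size(periods)                  # section 2.13
            colors = locate_color_vectors(big_tile)            # section 2.14
            assign_colors(component, colors)                   # sections 2.15-2.16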


2.2 Label Motif Pieces
Figure 2.1: Labeled Escher Tile
Figure 2.1 shows an example of a labeled Escher tile. For convenience,
this particular tile will be used in all of the examples that follow. The tile
is a square that is one unit by one unit in dimension. It is used to tile the
Euclidean plane to create a wallpaper. This type of Escher tile will
sometimes be referred to as a unit tile. Figure 2.1 also illustrates one
possible numbering of motif pieces for this particular Escher tile. As
discussed earlier, a motif piece is a unique polygon within the unit tile.
Motif pieces may or may not intersect the edge of the unit tile. Motif
pieces may or may not overlap each other. It is not important which order
the motif pieces are numbered. It is only important that each distinct
motif piece gets a unique number.
2.2.1 Implementation
This has been included as part of the algorithm because this may also be


automated. The design tool mentioned in the introduction allows a user
to enter various polygons, such as we see in Figure 2.1. This user
interface recognizes each polygon as it is created and assigns it a unique
index. This step, and the next two, require transferring visual
information into digital information. Therefore, the first three steps are
considered digitizing steps.
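As a concrete illustration, a labeled motif piece can be represented as little more than a polygon plus an integer index. The Python sketch below is one possible representation; it assumes the design tool supplies each polygon as a list of grid-point vertices, and none of the names are taken from the thesis itself.

    from dataclasses import dataclass, field

    @dataclass
    class MotifPiece:
        index: int                                    # unique label, e.g. 1..9 for the example tile
        vertices: list = field(default_factory=list)  # polygon corners on the unit-tile grid

    def label_motif_pieces(polygons):
        """Assign a unique 1-based index to each polygon as it is entered."""
        return [MotifPiece(index=i, vertices=list(v)) for i, v in enumerate(polygons, start=1)]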
2.3 Identify Connections
When a wallpaper is tiled with the Escher tiles, they are placed adjacent
to other tiles. Two tiles are adjacent if they touch on their horizontal
edges, vertical edges or if they touch each other at their corners. There
are two adjacencies in the x direction, two in the y direction, and one for
each of the four corners. Therefore, one tile is adjacent to eight other tiles
for a total of nine tiles in all.
A connection occurs when a motif piece shares at least one point with
another motif piece in an adjacent tile. This means there are three types
of connections. Figure 2.2 shows a horizontal connection between motif
pieces m6 and m7, a vertical connection between motif pieces m3 and m2,
and a diagonal connection between motif pieces m1 and m8.
Figure 2.2: Horizontal, Vertical, and Diagonal Connections (a: horizontal
connection; b: vertical connection; c: diagonal connection)


After all the connections have been discovered, the directions of the
connections must be recorded. This associates each connection with a
direction vector. In IR2, the direction vector signifies connection in the x or
y directions.
For example, in Figure 2.2a, m6 has a horizontal connection to m7 with a
direction vector of [1,0]. This means that, if we were able to stand on m6
and travel to m7, we would have to travel one unit in the x direction. Of
course, to get back to m6 from m7, the direction vector would be [-1,0].
This highlights one property of connections: there is always a direction
vector [-x,-y], or a complementary connection, for every direction vector
[x,y] of each connection. The notation we will use for this connection is
[6,7,1,0], which signifies a connection from motif piece m6 to motif piece m7
in the direction of [1,0]. It is just as easy to use the complementary
4-tuple [7,6,-1,0] to describe the same connection.
In Figure 2.2b, m3 has a vertical connection to m2 with a direction vector
of [0,1] resulting in [3,2,0,1] and its complement [2,3,0,-1]. In Figure 2.2c,
m1 has a diagonal connection to m8 with a direction vector of [1,1]
resulting in [1,8,1,1] and its complement [8,1,-1,-1].
The complete set of connections for Figure 2.1 is listed below in Table 2.1.
Since the complementary connections may be derived from this set, they
have been omitted.


[1,2,1,0]
[1,6,0,1]
[1,8,1,1]
[1,9,0,1]
[2,3,0,-1]
[2,8,0,1]
[2,9,-1,1]
[3,4,-1,0]
[4,5,1,0]
[5,6,0,-1]
[6,7,1,0]
[8,9,-1,0]
Table 2.1: Connections
This set of 4-tuple vectors describes the relationship between any tile in
the plane and its eight adjacent tiles. This information will be used to
build the period graph. Each entry in the list has the form [mi, mj, x, y],
where mi represents a motif piece with label i. Figure 2.3 shows the
period graph for the connections listed in Table 2.1 including the
complementary connections.


Figure 2.3: Period Graph
It is useful to consider the underlying structure of the period graph by
combining all the edges (connections) between vertices (motif pieces) and
removing the directions. Figure 2.4 shows the agglomerated period graph
derived from the period graph. It will tell us the exact nature of the
wallpaper components that are created by the connected motif pieces.
Please see Ellen Gethner's doctoral thesis [Ge02] for a more complete
discussion of the period graph. The usage of the period graph and the
agglomerated period graph will become clear in subsequent sections.


The period graph represents one connected component. It is possible to
have multiple connected components for an Escher tile, each with their
own period graph. Separate components are discovered during creation of
the spanning tree.
2.3.1 Implementation
Note that, instead of using a set of 4-tuples, an adjacency matrix could
have been used to record the same information. The set of 4-tuples was
chosen because it makes more efficient use of memory. The difference in
efficiency when referencing matrix entries as opposed to a set (or list) of
entries is acceptable [CoLeRi01]. Additionally, the set of 4-tuples makes
locating entries easier to implement.
The design tool is able to recognize when polygons intersect an edge or a
corner. This is accomplished by constraining the endpoints of the line
segments that create the polygon to a regular grid. By testing each of the
grid points on the edges, the tool can determine which polygons (motif
pieces) intersect the edge, and where they intersect. When two polygons


share opposite edge points, either horizontally, vertically, or diagonally, a
connection is recorded.
2.3.2 3D Extension
One difference between 2D and 3D is the definition of edges. Instead of
connections existing only on line segments or corner points, connections
can occur at the faces of a unit cube as well. The unit cube could be called
an Escher cube, but a preferable term is a 3D Escher tile.
Another difference is the definition of a connection. There is still a
horizontal connection in the x direction, and a vertical connection in the y
direction. There is an additional depth wise connection in the z direction.
A diagonal connection is a connection between any two opposing edges in
the cube or any two opposing corners. One cube may be adjacent to 26
other cubes for a total of 27 cubes.
The last difference is that 5-tuples are used instead of 4-tuples to account
for connections that involve the z coordinate. The 5-tuple is described as
[mi, mj, x, y, z].
2.4 Identify Overlaps
Finding overlaps is very straightforward. Simply record the fact that any
pair of motif pieces have overlapped each other. This produces a set of
ordered pairs. Each pair represents the indices of the two motif pieces
that overlap one another. For instance, in Figure 2.1, m1 overlaps m3.
This produces an overlap of [1,3] and the complement [3,1]. One might
consider this an undirected graph where motif pieces are the vertices and
overlaps are the edges. Table 2.2 shows the overlaps for Figure 2.1.


[1,3]
[1,6]
[2,3]
[2,5]
[2,6]
[4,6]
Table 2.2: Overlaps
2.4.1 Implementation
Again, a set of pairs is used instead of an adjacency matrix. This is more
a matter of consistency throughout the implementation at this point.
Therefore, all n-tuples (vectors) that are used to contain information about
the Escher tile were implemented as a set, or list, of n-tuples.
The design tool uses a method similar to finding intersections with the
edges. A minimum set of points, not necessarily grid points, is used to
detect whether a motif piece intersects each test point. If two or more
motif pieces intersect with a test point, then an overlap is recorded.
This step may or may not be necessary depending on the Escher tile. It
will not be necessary if two natural periods are found in line 9 of the
main algorithm (section 2.1). This bit of inefficiency must be tolerated in
the design tool because we can never know in advance what the finished
product will be. Calculations are processed dynamically as the tile is
created. Given a fixed tile, however, we would skip this step until we
know that overlaps are needed.
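The test-point idea can be sketched as follows. The even-odd point-in-polygon routine and the choice of test points are assumptions made for this illustration; the thesis leaves those details to the design tool.

    from itertools import combinations

    def point_in_polygon(point, vertices):
        """Even-odd ray-casting test; vertices is a list of (x, y) polygon corners."""
        x, y = point
        inside = False
        n = len(vertices)
        for k in range(n):
            (x1, y1), (x2, y2) = vertices[k], vertices[(k + 1) % n]
            if (y1 > y) != (y2 > y):          # edge crosses the horizontal through y
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def find_overlaps(pieces, test_points):
        """Return overlap pairs (i, j) with i < j, using an assumed sampling of test points."""
        overlaps = set()
        for point in test_points:
            hits = [p.index for p in pieces if point_in_polygon(point, p.vertices)]
            # Any two pieces that contain the same test point overlap.
            for i, j in combinations(sorted(hits), 2):
                overlaps.add((i, j))
        return overlaps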
2.4.2 3D Extension
Whether we are considering ℝ² or ℝ³, only the fact that two motif pieces
overlap is important. An overlap is defined as when two motif pieces
contact each other in at least one point. Since overlaps do not record
direction or position, the fact that they occur in either ℝ² or ℝ³ is


inconsequential.
2.5 Derive the Spanning Trees
The next step is to find a spanning tree, or spanning forest, of the period
graph. The period graph may have multiple connected components.
Therefore, it is necessary to find a spanning tree for each connected
component. This is easily accomplished by performing a depth-first
search [We01]. The next steps following this one in the main algorithm
are performed for each connected component. Running a depth-first
search on the agglomerated period graph in Figure 2.4 reveals that there
is only one connected component. This produces the spanning tree found
in Figure 2.5. This spanning tree will be used to plot the relative
locations of motif pieces on the wallpaper.
2.5.1 Implementation
It will soon become apparent that the spanning tree is not an important


artifact. It is only the exercise of performing the depth first search that is
important. Therefore, no attempt is made to save any graph information
concerning the spanning tree. It is useful for human eyes to see the
spanning tree because it helps us to visualize what is happening. For
instance, we can see which motif pieces are connected to each other.
Recursive method calls were chosen to implement the depth-first search.
There is a danger of stack overflow when using recursive method calls.
But, since most period graphs (Escher tiles) are simple enough, that
danger is relatively low. If there is a problem with method call stack
overflow, the search may be implemented iteratively.
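Since the search may be implemented iteratively, here is a minimal sketch of an explicit-stack depth-first search that discovers the connected components of the agglomerated period graph; the adjacency-building step is an assumption made for the sketch.

    def connected_components(vertices, edges):
        """Iterative DFS over the agglomerated period graph.

        vertices: iterable of motif-piece labels.
        edges: iterable of (i, j) pairs (directions are ignored here).
        Returns a list of components, each a set of labels.
        """
        adjacency = {v: set() for v in vertices}
        for i, j in edges:
            adjacency[i].add(j)
            adjacency[j].add(i)

        unvisited = set(vertices)
        components = []
        while unvisited:
            start = unvisited.pop()
            stack, component = [start], {start}
            while stack:
                v = stack.pop()
                for w in adjacency[v]:
                    if w in unvisited:
                        unvisited.remove(w)
                        component.add(w)
                        stack.append(w)
            components.append(component)
        return components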
2.5.2 3D Extension
The 3D extension has no effect on the spanning tree except that there is
an additional direction of connection to consider when parsing the period
graph.
2.6 Create the Generating Set of Motif Pieces
In this step, we will use the direction vectors of the period graph to
produce a position vector for each one of the motif pieces. The position
vector is the position of a motif piece in relation to an origin piece to which
it is connected. The origin piece is an arbitrary piece that is chosen to
occupy position [0,0]. The initial origin piece is always mi. If there is
more than one connected component, then each component will have its
own origin piece other than mi.
The generating set is a set of 3-tuples that represent the motif piece and
its relative position vector, [mi, x, y]. Another notation for this is [mi, ai],
where mi is the motif piece and ai is its position vector [x, y]. The set is
called a generating set because it can be used to determine the
relationship between a motif piece and any other motif piece in the
wallpaper.
The generating set is produced during traversal down the spanning tree.
The direction vectors for the vertices on an edge are summed to produce a
position vector. The position vector is then assigned to the latest vertex to
create a generating motif piece. The generating set represents the


positions of all motif pieces relative to the origin piece for a set of
connected pieces.
For example, mi is automatically assigned position [0,0]. This is the
origin piece [1,0,0]. Checking the period graph, we see that to get from m1
to m2 on the spanning tree we must travel in the direction of [1,0]. Adding
[0,0] and [1,0], we get the result [1,0]. This creates generating piece
[2,1,0]. To get from m2 to m3, we must travel in the direction of [0,-1].
Adding [1,0] to [0,-1], we get [1,-1]. This creates generating piece
[3,1,-1].
The complete set of generating motif pieces is shown in Table 2.3. The
graph of the generating set, or embedded spanning tree, is shown in
Figure 2.6. The embedded spanning tree represents the motif pieces in
their relative locations. This set of motif pieces, or wallpaper component,
is a base pattern for a repeating set of patterns in the wallpaper. To get a
good idea of what the graph actually represents, a partial tiling is shown
in Figure 2.7. This illustrates how the individual motif pieces are located
according to the spanning tree in our example.
[1,0,0]
[2,1,0]
[3,1,-1]
[4,0,1]
[5,1,-1]
[6,1,-2]
[7,2,-2]
[8,1,1]
[9,0,1]
Table 2.3: Generating Set of Motif Pieces


Figure 2.7: Generating Set of Motif Pieces
2.6.1 Implementation
While the vertices have not been exhausted, first create a generating set
that contains an origin piece. The origin piece is the first available motif
piece. Its position vector is set to [0,0]. For example, the very first piece is
always mi. Thus, we start out with the initial set containing the origin
piece [1,0,0]. This vertex (motif piece) is removed from the set of available
vertices. Then a depth-first search is done on the period graph starting
with the origin piece. It is during the search that position vectors are
calculated. Then, the resulting generating set is associated with the
connected component from which it was derived. If there are any vertices
left at this point, they are part of different components. The procedure is
repeated until all the vertices are depleted.


Algorithm: Create generating set of motif pieces
Definitions:
C = the set of connected components
E = the set of edges from the period graph
V = the set of vertices from the period graph
Inputs:
P = the period graph
Mk = the generating set of motif pieces for component k
CreateGeneratingSet(P, Mk)
1   Add all vertices to V = {1, 2, 3, ...}
2   Copy period graph P -> E
3   k = 1
4   C = ∅
5
6   while (V ≠ ∅)
7       Mk = ∅
8       Create Ck = {Mk}
9       Get first available vertex v0 in V and remove it (V = V - v0)
10      Create origin motif piece m0 = [v0, 0, 0]
11      Mk = Mk + m0
12      Search(v0, 0, 0)          // adds motif pieces to Mk
13      C = C + Ck
14      k = k + 1
The efficiency of this algorithm, not including the search, is proportional
to the number of connected components in the tile, O(k).


Algorithm: depth-first search
Inputs:
E = the set of edges from the period graph
V = the set of vertices from the period graph
Mk = the generating set of motif pieces for component k
Search(i, xi, yi)
1   for (each edge eij = [i, j, xij, yij])
2       if (i = j)                      // self connection
3           E = E - (eij + eji)
4       else
5           if (vj ∈ V)                 // generating motif piece
6               V = V - vj
7               E = E - (eij + eji)
8               x0 = xi + xij
9               y0 = yi + yij
10              m = [j, x0, y0]
11              Mk = Mk + m
12              Search(j, x0, y0)
13          else if (eij ∈ E)           // removed edge
14              E = E - (eij + eji)
This is a recursive procedure. If a self-connection is encountered, then
remove that edge. Otherwise, if a new vertex (motif piece) is encountered,
(1) remove the new vertex from the list of unvisited vertices, (2) add the
direction vectors of the parent vertex and the new vertex to create a
position vector, (3) assign the position vector to the new vertex, and (4)
add the new generating piece to the current generating set. Then, start a
new search using the newly created generating piece. If the vertex is not
new, simply remove the edge.
The algorithm must traverse each edge in the graph twice, once for each
vertex on the edge. Therefore, the efficiency of this procedure is O(E),


where E is the number of edges in the period graph. The performance
may be enhanced by only checking edges where i < j on line 1.
2.6.2 3D Extension
The generating motif pieces will be 4-tuples instead of 3-tuples. The only
thing that changes in the algorithm is to use the z coordinate when
calculating the position vector for a generating motif piece.
2.7 Record Removed Edges
Removed edges are the edges that were removed to create the spanning
tree. They are used to determine the natural periods if any. Natural
periods and the use of removed edges will be explained in section 2.8. The
removed edges for our example are listed in Table 2.4.
[1,6,0,1]
[1,8,1,1]
[1,9,0,1]
[2,9,-1,1]
Table 2.4: Removed Edges
2.7.1 Implementation
The procedure is simply an extension to the search that was described in
section 2.6. The new behavior is to record the removed edges and link them
with the generating set. The efficiency is not
affected.


Algorithm: Create generating set of motif pieces and record removed
edges
Definitions:
C = the set of connected components
E = the set of edges from the period graph
V = the set of vertices from the period graph
Rk = the removed edges for component k
Inputs:
P = the period graph
Mk = the generating set of motif pieces for component k
CreateGeneratingSet(P, Mk)
1   Add all vertices to V = {1, 2, 3, ...}
2   Copy period graph P -> E
3   k = 1
4   C = ∅
5
6   while (V ≠ ∅)
7       Mk = ∅
8       Rk = ∅
9       Create Ck = {Mk, Rk}
10      Get first available vertex v0 in V and remove it (V = V - v0)
11      Create origin motif piece m0 = [v0, 0, 0]
12      Mk = Mk + m0
13      Search(v0, 0, 0)
14      C = C + Ck
15      k = k + 1


Algorithm: depth-first search
Inputs:
E = the set of edges from the period graph
V = the set of vertices from the period graph
Mk = the generating set of motif pieces for component k
Rk = the set of removed edges for component k
Search(i, xi, yi)
1   for (each edge eij = [i, j, xij, yij])
2       if (i = j)                      // self connection
3           E = E - (eij + eji)
4           Rk = Rk + eij
5       else
6           if (vj ∈ V)                 // generating motif piece
7               V = V - vj
8               E = E - (eij + eji)
9               x0 = xi + xij
10              y0 = yi + yij
11              m = [j, x0, y0]
12              Mk = Mk + m
13              Search(j, x0, y0)
14          else if (eij ∈ E)           // removed edge
15              E = E - (eij + eji)
16              Rk = Rk + eij
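For reference, a direct transcription of this search into Python might look like the sketch below. The graph representation, a dictionary of direction entries keyed by vertex, is an assumption made for the sketch rather than the thesis's own data structure, and the particular removed edges returned depend on the order in which edges are visited.

    def generating_set_and_removed_edges(period_graph, start):
        """Recursive DFS that assigns position vectors and records removed edges.

        period_graph: dict mapping vertex i to a list of (j, dx, dy) entries,
        with both orientations of every connection present.
        Returns (positions, removed): positions maps each reachable piece to its
        position vector relative to the start piece; removed holds the edges
        left out of the spanning tree (including self connections).
        """
        positions = {start: (0, 0)}
        removed = []
        visited_edges = set()

        def search(i, xi, yi):
            for (j, dx, dy) in period_graph.get(i, []):
                edge = frozenset([(i, j, dx, dy), (j, i, -dx, -dy)])
                if edge in visited_edges:
                    continue
                visited_edges.add(edge)
                if i == j or j in positions:
                    removed.append((i, j, dx, dy))    # self connection or back edge
                else:
                    positions[j] = (xi + dx, yi + dy)
                    search(j, xi + dx, yi + dy)

        search(start, 0, 0)
        return positions, removed

Running this on the period graph of Table 2.1, starting from m1, should reproduce the positions of Table 2.3; which edges end up in the removed set depends on the visit order.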
2.8 Identify Ghost Nodes
During the depth-first search, certain edges were removed to create the
spanning tree. The removed edges contain direction vectors that may
create a different position vector for a given motif piece. This leads to the
creation of ghost nodes. Ghost nodes are those that have a different
relative position than their counterparts in the generating set. A natural


period is the difference between these two positions.
For example, examine removed edge [1,6,0,1]. Its direction vector is [0,1].
We add this vector to the position vector for m1, [0,0], to get [0,1]. Then we
compare the sum to the position vector for m6, which is [1,-2]. [0,1] is not
equal to [1,-2], so we get the ghost of m6 at position [0,1]. The natural
period is the difference between the latter vectors. Therefore,
[0,1] - [1,-2] = [-1,3] is a natural period. Figure 2.8 shows this ghost
node, the only ghost for our example.
Figure 2.8: Ghost Node
Figure 2.9: Natural Period
To define a natural period, recall the definition of a wallpaper component.


A wallpaper component is simply a connected set of motif pieces. Figure
2.9 shows a highlighted wallpaper component with a vector superimposed
on top of it. The vector represents the natural period. The tail has been
set on a motif piece me and the head on the very next me. We can see that,
if we count the tiles between them, we have traveled a distance of [-1,3].
The significance of the natural period vector is that the tiles at the head
and tail receive the exact same coloring. So, if the wallpaper component
was to be translated by a natural period, it would land right back on top of
itself and not some other wallpaper component. If the tile was doubly
periodic, the translation could occur in two distinct directions and a
wallpaper component would still land on top of itself and not another.
In ℝ², there can never be more than two valid natural periods. However,
our algorithm so far could produce more than two results for natural
periods. If we find two natural periods, they must be linearly independent
as well. How do we solve the problem of finding more than two natural
periods and guarantee that they are linearly independent? We can use
Hermite Normal Form (HNF) to address this problem. (Please see
appendix A for an explanation of HNF). We place the vectors in a 2 × n
matrix where n is the number of potential natural periods. When this
matrix is run through the algorithm to create a HNF matrix, the result
will be one or two basis vectors [Co95].
It is necessary to run our results through HNF to get a basis or a partial
basis and to ensure that the vectors will be linearly independent. We will
see in section 2.14 that this will also aid in locating the colors for each
motif piece.
At this point, we will have either zero, one, or two natural periods. If we
have zero we call the tile trivially periodic. With one natural period, we
call the tile singly periodic and with two, doubly periodic. If the tile is
doubly periodic, we have a basis and are ready to start coloring.
Otherwise, we must choose one or two collision-free vectors to add to the
set of natural periods so that we may have a basis for a lattice. The
lattice will be discussed more fully in section 2.9. Therefore, in the
trivially or singly periodic case, we must continue to sections 2.10, 2.11,
and 2.12 to find forbidden vectors and choose collision-free vectors. In the


doubly periodic case, we can skip directly to selecting color vectors
starting at section 2.13. It might be instructive at this point to see the
examples of the different period types in appendix B.
2.8.1 Implementation
The procedure is to iterate over each removed edge. For each removed
edge [ijjcy], (1) find the position vector of generating piece i, (2) add that
position vector to the direction vector in the removed edge, (3) compare
the sum to the position vector of generating piece j. If the sum vector and
the position vector of piece j are different, find their difference. That
difference is a natural period, so add it to the set of natural periods N.
The efficiency is O(||Rk||).
Algorithm: calculate natural periods
Definitions:
ai = the position vector of generating motif piece mi
rij = [i, j, bij] = a removed edge from the period graph with direction vector bij
Inputs:
Nk = the set of natural periods for component k
Rk = the set of removed edges for component k
CalculateNatural(Nk, Rk)
1   Nk = ∅
2   for (each removed edge rij ∈ Rk)
3       gj = ai + bij            // potential ghost
4       if (gj ≠ aj)
5           n = gj - aj
6           Nk = Nk + n
7
8   Nk = HNF(Nk)
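A sketch of this step in Python is shown below. The final reduction is delegated to hnf_basis, a hypothetical callable standing in for the Hermite Normal Form routine of appendix A; the rest follows the pseudocode directly.

    def natural_periods_of(positions, removed_edges, hnf_basis):
        """Compute natural-period candidates and reduce them to a basis.

        positions: dict mapping a motif-piece label to its position vector.
        removed_edges: iterable of (i, j, dx, dy) edges left out of the spanning tree.
        hnf_basis: assumed callable reducing a list of integer vectors to an HNF basis.
        """
        candidates = []
        for (i, j, dx, dy) in removed_edges:
            xi, yi = positions[i]
            ghost = (xi + dx, yi + dy)            # where the removed edge places mj
            if ghost != positions[j]:
                xj, yj = positions[j]
                candidates.append((ghost[0] - xj, ghost[1] - yj))
        return hnf_basis(candidates) if candidates else []

For the example tile, only the removed edge [1,6,0,1] produces a candidate, (-1,3), in agreement with the natural period found above.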


2.8.2 3D Extension
Items related to extending the algorithm into ℝ³ have been fairly simple
up to this point. The most complicated item
has been to carry the hidden z coordinate along in our calculations. But
the procedure gets a little trickier here.
In ℝ², we know that when we have a trivially or singly periodic tile, we
must compute forbidden vectors and choose one or two collision-free
vectors. If the tile is doubly periodic, we skip those steps. In ℝ³, we need
to find a basis of three vectors. So if we have a doubly periodic tile, we
have a dilemma. How do we know if we are working in ℝ² or ℝ³ given
only a period graph? To make matters worse, we will be using
3-dimensional vectors for the direction vectors and position vectors for both
ℝ² and ℝ³.
The answer is that whenever the tile is less than triply periodic, we must
proceed to finding forbidden vectors and choosing collision-free vectors. If
we are dealing with a two dimensional tile, it turns out that we can choose
a collision-free vector that will fit for ℝ³ and allow the correct
computations for ℝ². The main algorithm from section 2.1 then needs to
be adjusted at line 9 to say that if the number of natural periods is less
than 3, continue, otherwise skip to finding color vectors.
2.9 Lattice
The wallpaper components are a graphical representation of a lattice. We
saw in Figure 2.9 how a natural period defines the period of a wallpaper
component. Assume that we have a tile that is doubly periodic. Then
both natural periods would map an Escher tile from one position in the
wallpaper to two others. The tile in the original position and both of the
tiles in the translated positions receive the same coloring. The
intervening tiles must receive different colorings.
For example, Figure 2.10 shows a wallpaper grid with tiles represented as
points on the grid. The vectors are the natural periods that
provide a basis for the lattice. The tile at the base of both vectors gets a
coloring scheme. The tiles at the tips of the vectors get the exact same
coloring scheme as the tile at the base. Additionally, the tile at the sum of
the vectors receives the same coloring as the previous three tiles.
Figure 2.10: Lattice Basis
The vectors form an open parallelogram in which lie a set of points, or
tiles. The set of points (tiles) defined by the parallelogram are those that
lie within it and those that coincide with the vectors, except at the vector
tips. Each point in the set receives a unique coloring scheme from the
others in the set. It is this set of tiles that is repeated infinitely
throughout the wallpaper. Figure 2.11 illustrates how the patterns are
repeated. Later, we shall see how the lattice will be used to derive a Big
Tile that is used to assign colors in the wallpaper.


Figure 2.11: Lattice Pattern
2.10 Compute Forbidden Vectors
We are at this step because we don't have enough natural periods to
create a lattice. We have a tile that is either trivially or singly periodic.
Forbidden vectors are used when choosing collision-free vectors. A
collision-free vector is the distance between two wallpaper components
that do not overlap one another. Therefore, forbidden vectors are the
vectors between two wallpaper components that do overlap one another.
We have to know which vectors are forbidden so we can choose one that is
collision-free. The method for finding forbidden vectors requires some
analysis of the relationship between motif pieces and overlaps. The
method may then be distilled into a few simple steps that are still correct.
The first step in the analysis is to find the equivalent motif pieces.
Equivalent pieces are those that are a linear combination of the natural
period away from each other as expressed in equation 2.1.


aj - ai = k·n,  k ∈ ℤ    (2.1)
Equivalent motif pieces are grouped together on vertices in the overlap
graph. Figure 2.12 shows the overlap graph for our example. An edge in
the overlap graph connects two vertices if any motif piece overlaps any
motif piece in another vertex. If the tile is trivially periodic (zero natural
periods), then any two motif pieces with the same position vector are
equivalent without regard as to whether they overlap. This is equivalent
to letting n=0 in equation 2.1.
Figure 2.12: Overlap Graph
Now we can calculate forbidden vectors. Each motif piece mi on the
overlap graph is connected to some mj, and the difference between the
position vectors of the connected pieces is a forbidden vector. For example,
the difference [1,-2] - [0,-1] = [1,-1], so [1,-1] is a forbidden vector. For
m4 and m9, the difference is [0,1] - [0,-1] = [0,2], so [0,2] is also forbidden.
Note that m4 and m9 do not overlap each other. However, since m6 is
equivalent to m9, we must calculate the difference just the same. The two
forbidden vectors, [1,-1] and [0,2], are equivalent because their difference
is a linear combination of the natural period [-1,3].
Since one overlap provides a representative for an entire class of vectors,
it is not necessary in the previous example to calculate the difference
between m4 and m9. The overlap between m4 and m6 provided the
necessary forbidden vector. In the next step of the main algorithm, the
forbidden vectors will be normalized, therefore, one representative is
enough. In fact, it is only necessary to check the set of overlaps (not to be
confused with the overlap graph) to find all the representative forbidden
vectors.
Thus, the procedure is simplified. We just iterate over each pair of
overlaps while calculating the difference in the position vectors. If the
difference is 0, or is not a linear combination of n, then add the
difference to the set of forbidden vectors. The forbidden vectors (not
normalized) for our example tile are listed in Table 2.5.
[1,-2]
[0,1]
[1,-1]
[0,2]
Table 2.5: Forbidden Vectors
2.10.1 Implementation
The method begins by iterating over every overlap that has been detected.
The difference between the position vectors of the pair of motif pieces is
calculated. If the difference is either 0, or it is not a linear combination of
the natural period, then that vector is forbidden. Add it to the set of
forbidden vectors. The method is set up to account for the trivially, or
singly periodic cases by setting an appropriate vector to 0. This causes
the linear combination to be accurate by effectively removing a missing
natural period from the calculations. The efficiency of this operation is
O(||O||), where O is the set of overlaps. Normalization of the forbidden
vector is a simple calculation that does not affect the efficiency.


Algorithm: calculate forbidden vectors
Definitions:
oij = an overlap ∈ O of motif pieces i and j
F = the set of forbidden vectors
Inputs:
N = the set of natural periods
O = the set of overlaps
CalculateForbidden
1   F = ∅
2   if (||N|| = 0)  m = 0
3   if (||N|| = 1)  m = n
4
5   for (each oij ∈ O)
6       f0 = aj - ai                        // possible forbidden vector
7       if (f0 = 0 or f0 ≠ r·m, r ∈ ℤ)      // zero or not a linear combination
8           f = NormalizeForbidden(f0)
9           F = F + f
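The same computation can be sketched in Python as follows. The is_multiple_of helper is written for this illustration, and normalize_forbidden stands for the routine of section 2.11.

    def is_multiple_of(v, m):
        """True if v = r*m for some integer r (m may be the zero vector)."""
        if all(c == 0 for c in m):
            return all(c == 0 for c in v)
        for vc, mc in zip(v, m):
            if mc != 0:
                if vc % mc != 0:
                    return False
                r = vc // mc
                return all(a == r * b for a, b in zip(v, m))
        return False

    def forbidden_vectors_of(positions, overlaps, periods, normalize_forbidden):
        """Collect normalized forbidden vectors from the overlap set."""
        m = periods[0] if periods else (0, 0)
        forbidden = set()
        for (i, j) in overlaps:
            f0 = tuple(a - b for a, b in zip(positions[j], positions[i]))
            if all(c == 0 for c in f0) or not is_multiple_of(f0, m):
                forbidden.add(normalize_forbidden(f0))
        return forbidden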
2.10.2 3D Extension
The 3D extension is very simple. In ℝ², the tile may have either zero or
one natural period. In ℝ³, the tile may also have two natural periods.
This is accounted for in the initialization and also when determining if the
forbidden vector is a linear combination of the natural periods. The
changes are shown below. The efficiency is not affected in ℝ³.


Algorithm: calculate forbidden vectors
Definitions:
oij = an overlap ∈ O of motif pieces i and j
F = the set of forbidden vectors
Inputs:
N = the set of natural periods
O = the set of overlaps
CalculateForbidden
1   F = ∅
2   if (||N|| = 0)  m1 = m2 = 0
3   if (||N|| = 1)  m1 = n1, m2 = 0
4   if (||N|| = 2)  m1 = n1, m2 = n2
5
6   for (each oij ∈ O)
7       f0 = aj - ai                                  // possible forbidden vector
8       if (f0 = 0 or f0 ≠ r·m1 + s·m2, r,s ∈ ℤ)      // zero or not a linear combination
9           f = NormalizeForbidden(f0)
10          F = F + f
2.11 Normalize Forbidden Vectors
We need to reduce every forbidden vector to a minimal range.
Normalization requires finding the least residual of some coordinate in
the forbidden vector. For example, given f = [f1, f2] and a natural period
n = [n1, n2] (there is only one), we use greatest common divisor
(GCD) reduction to normalize f2 with respect to n2. If f2 = n2 = 0, then we
normalize f1 with respect to n1. In the trivially periodic case,
normalization does not affect the forbidden vector.

Assume we have a forbidden vector f = [f1, f2] and a natural period
n = [n1, n2]. All forbidden vectors equivalent to f are expressed as
G = {f + q·n : q ∈ ℤ}. We want to find some f0 ∈ G that minimizes f02.
Minimizing means finding the least residual f02 = f2 mod n2, or,
equivalently, f2 = f02 + q·n2. Then we choose q = ⌊f2/n2⌋. The
normalized forbidden vector is f0 = f - q·n.

The forbidden vectors in Table 2.5 are exactly the set of normalized
forbidden vectors. We can verify this by noting that the second coordinate
of each forbidden vector satisfies 0 < |f2| < |n2| = 3.
2.11.1 Implementation
The procedure checks the number of natural periods. In ℝ², the possible
number is zero or one. The forbidden vector does not get
normalized in the trivially periodic case. In the singly periodic case, we
find the least residual as applicable.
Algorithm: normalize forbidden vectors
Inputs:
N = the set of natural periods
f = a forbidden vector
NormalizeForbidden(f)
1   if (||N|| = 0)
2       q = 0
3   else                          // ||N|| = 1
4       if (f2 + n2 ≠ 0)
5           q = ⌊f2/n2⌋
6       else
7           q = ⌊f1/n1⌋
8
9   f = f - q·n


2.11.2 3D Extension
For ℝ³, there can be zero, one, or two natural periods. We will still employ
GCD reduction on the forbidden vector, but in a slightly different manner.
We need to find the least residual of some coordinate in the forbidden
vector with respect to a linear combination of the natural periods. As
before, if the number is zero, then normalization does not affect the
forbidden vector. If the number is one, then normalization occurs exactly
as before. However, if the number is two, then a linear combination of the
two natural periods must be taken into account. We can use the linear
combination of two vectors to handle the case of one natural period by
letting one vector be the zero vector.
Given a forbidden vector f = [f1, f2, f3] and natural periods n1 = [n11, n12, n13]
and n2 = [n21, n22, n23], all forbidden vectors equivalent to f are
expressed as G = {f + s·n1 + t·n2 : s, t ∈ ℤ}. We want to find some f0 ∈ G
that minimizes f03 subject to

    f03 = f3 - (s·n13 + t·n23).    (2.2)

Let e = gcd(n13, n23). Then e = g·n13 + h·n23, g, h ∈ ℤ, where e, g, and h are
the result of the extended Euclidean algorithm. If we let a = n13/e and
b = n23/e, then, substituting into equation 2.2,

    f03 = f3 - e[s(n13/e) + t(n23/e)]
        = f3 - e(sa + tb).    (2.3)

Next, we let q = ⌊f3/e⌋. Note that

    ga + hb = 1.    (2.4)

Therefore, we can multiply both sides of equation 2.4 by q to get
qga + qhb = q. Let s = qg and t = qh, so that q = sa + tb. The result
is that we find

    f03 = f3 - (sa + tb)e = f3 - ⌊f3/e⌋·e.    (2.5)

Equation 2.5 minimizes f03 because

    |f03| < e.    (2.6)

Note that equations 2.5 and 2.6 are just the division algorithm
a = bq + r, with f3 = a, e = b, ⌊f3/e⌋ = q, and f03 = r [Ro00]. We can now
find values for s and t and calculate the minimized vector by using
equation 2.7.

    f01 = f1 - (s·n11 + t·n21)
    f02 = f2 - (s·n12 + t·n22)    (2.7)
    f03 = f3 - (s·n13 + t·n23)
The procedure is much the same as in ℝ². We find some non-zero
coordinates to work with and use them to calculate a quotient. Then we
subtract the quotient times the linear combination of the natural periods
from the forbidden vector. This gives us the least residual and normalizes
the forbidden vector. Finding the GCD is a matter of using the extended
Euclidean algorithm [CoLeRi01]. (Please see appendix A for details of
this algorithm). The efficiency of this operation is that of the Euclidean
algorithm.


Algorithm: normalize forbidden vectors
Inputs:
N = the set of natural periods
f = a forbidden vector
NormalizeForbidden(f)
1   if (||N|| = 0)
2       q = 0
3   else
4       if (||N|| = 1)  m₁ = n₁, m₂ = 0
5       if (||N|| = 2)  m₁ = n₁, m₂ = n₂
6
7       if (m₁₃ ≠ 0 or m₂₃ ≠ 0)
8           (e, g, h) = ExtendedEuclid(m₁₃, m₂₃)
9           q = ⌊f₃ / e⌋
10          s = q·g
11          t = q·h
12      else if (m₁₂ ≠ 0 or m₂₂ ≠ 0)
13          (e, g, h) = ExtendedEuclid(m₁₂, m₂₂)
14          q = ⌊f₂ / e⌋
15          s = q·g
16          t = q·h
17      else
18          (e, g, h) = ExtendedEuclid(m₁₁, m₂₁)
19          q = ⌊f₁ / e⌋
20          s = q·g
21          t = q·h
22
23      f = f − s·m₁ − t·m₂
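A compact Python sketch of the 3D reduction follows (illustrative only; the names and the tuple representation are mine, and the extended Euclidean helper is the recursion listed in appendix A):

    import math

    def ext_euclid(a, b):
        """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
        if b == 0:
            return (a, 1, 0)
        g, x, y = ext_euclid(b, a % b)
        return (g, y, x - (a // b) * y)

    def normalize_forbidden_3d(f, periods):
        """Sketch: reduce the highest usable coordinate of f against a
        linear combination of up to two natural periods."""
        if not periods:
            return tuple(f)
        m1 = periods[0]
        m2 = periods[1] if len(periods) > 1 else (0, 0, 0)
        # work on the highest coordinate in which the periods are not both zero
        for c in (2, 1, 0):
            if m1[c] != 0 or m2[c] != 0:
                e, g, h = ext_euclid(m1[c], m2[c])
                q = math.floor(f[c] / e)
                s, t = q * g, q * h
                return tuple(f[i] - s * m1[i] - t * m2[i] for i in range(3))
        return tuple(f)

With one period and a zero second vector this collapses to the 2D case, since ext_euclid(m, 0) returns (m, 1, 0).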
2.12 Choose Collision-Free Vectors
At this point, we know whether we are trivially or singly periodic, and we
have a set of normalized forbidden vectors. The correct choice of a
collision-free vector is one that is not a linear combination of the forbidden
vectors and the natural period. Note that the zero vector is always a
forbidden vector. It is at this point that we start adding vectors to the set
of natural periods. The set will continue to be called N even though it may not be solely comprised of natural periods. In our example, we can choose
[1,0] for the collision-free vector. This vector will be used to create the
lattice, shown in Figure 2.13, that is used for coloring motif pieces in the
wallpaper.
Figure 2.13: Lattice
2.12.1 Implementation
The task is to iterate over enough vector values until we have two valid
choices. If the number of natural periods is zero, the first choice is used as
if it were a natural period to ensure that both choices are linearly
independent.
There is a challenge in setting the limits for the iteration. Vector choices iterate over values of [a, b]. If we let the limits be infinite, one of the loop variables will never advance past zero, which invites the possibility of an endless loop. The solution is to place a maximum on the iteration for the a and b values. If we set these limits high enough, we will find a collision-free vector. If they are too low, we will not find a collision-free vector; if they are too high, performance will be affected.
For simple Escher tiles, a small limit (16) seems to be more than
adequate. But this might not be high enough in some extreme cases and a
vector would not be chosen. One possible solution is to have rolling limits.
Rolling limits would be implemented as follows. Each time both a and b
limits are exhausted, and the number of periods is still less than two,
reset the upper and lower limits and start again. This solution is not part
of the implementation to date. With this in mind, the worst case is when the maximum values for x and y are reached. Therefore, the efficiency of this operation is O(x_max · y_max).
Algorithm: choose collision-free vectors
Inputs:
N = the set of natural periods
F = {f₁, f₂, ..., fₙ} = the set of forbidden vectors
ChooseCollisionFree(N, F)
1   if (||N|| = 0)  m = 0
2   if (||N|| = 1)  m = n₁
3
4   OUTER_LOOP:
5   while (||N|| < 2)
6       for (a = 0 to maxa)
7           for (b = 0 to maxb)
8               v = [a, b]
9               for (i = 1 to ||F||)
10                  c = v − fᵢ
11                  if (c ≠ s·m, s ∈ ℤ)      // not a linear combination
12                      N = N + v
13                      if (m = 0)
14                          m = v
15                      continue OUTER_LOOP
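As a rough Python illustration of this search (a sketch under my own naming; unlike the pseudocode above it checks every forbidden vector before accepting a candidate):

    def is_multiple(c, m):
        """True if c = s*m for some integer s (m may be the zero vector)."""
        if all(x == 0 for x in m):
            return all(x == 0 for x in c)
        k = next(i for i, x in enumerate(m) if x != 0)
        if c[k] % m[k] != 0:
            return False
        s = c[k] // m[k]
        return all(ci == s * mi for ci, mi in zip(c, m))

    def choose_collision_free(periods, forbidden, max_a=16, max_b=16):
        """Add candidate vectors [a, b] to the period set until two periods exist.
        periods and forbidden are lists of 2-tuples of integers."""
        m = periods[0] if periods else (0, 0)
        while len(periods) < 2:
            found = None
            for a in range(max_a + 1):
                for b in range(max_b + 1):
                    v = (a, b)
                    if v in periods:
                        continue
                    diffs = [(v[0] - f[0], v[1] - f[1]) for f in forbidden]
                    if not any(is_multiple(d, m) for d in diffs):
                        found = v
                        break
                if found:
                    break
            if found is None:
                raise RuntimeError("iteration limits too low to find a collision-free vector")
            periods.append(found)
            if m == (0, 0):
                m = found
        return periods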
2.12.2 3D Extension
In ℝ³, the number of possible natural periods changes. Only minor adjustments are needed for the implementation in 3D; the changes are highlighted below. Since there is one more coordinate to account for in the iteration, the cost grows to O(x_max · y_max · z_max).
Algorithm: choose collision-free vectors
Inputs:
N = the set of natural periods
F = {f₁, f₂, ..., fₙ} = the set of forbidden vectors
ChooseCollisionFree(N, F)
1   if (||N|| = 0)  m₁ = m₂ = 0
2   if (||N|| = 1)  m₁ = n₁, m₂ = 0
3   if (||N|| = 2)  m₁ = n₁, m₂ = n₂
4
5   OUTER_LOOP:
6   while (||N|| < 3)
7       for (a = 0 to maxa)
8           for (b = 0 to maxb)
9               for (c = 0 to maxc)
10                  v = [a, b, c]
11                  for (i = 1 to ||F||)
12                      d = v − fᵢ
13                      if (d ≠ s·m₁ + t·m₂, s, t ∈ ℤ)      // not a linear combination
14                          N = N + v
15                          if (m₁ = 0)
16                              m₁ = v
17                          else if (m₂ = 0)
18                              m₂ = v
19                          continue OUTER_LOOP
2.13 Compute Big Tile Size
We have arrived at this step with the necessary number of period vectors,
either natural or collision-free. In ℝ², that number is two. Before we can
continue, we must first normalize the periods. We perform the HNF
algorithm on the matrix to normalize the basis. This will be very useful for the next step, when we look for color vectors. Equation 2.9 is the result of running the natural periods in equation 2.8 through the HNF algorithm. The normalized lattice is illustrated in Figure 2.14.
N = {[−1, 3], [1, 0]} =
    | −1   1 |
    |  3   0 |    (2.8)
HNF(N) =
    |  1   0 |
    |  0   3 |    (2.9)
Figure 2.14: Normalized Lattice
The Big Tile Size is the width and height of a set of unit tiles which will
be used to assign colors to the motif pieces. Each tile in the Big Tile
represents a location that will receive a different coloring than other tiles
in the Big Tile. Finding the Big Tile size (BTS) depends on the set of
periods, which contains either natural periods, collision-free vectors,
or both. The object is to find values that minimize the width and height.
We will begin by finding the width.
Given N = {n₁, n₂} =
    | n₁₁  n₂₁ |
    | n₁₂  n₂₂ | ,
let Δ = the determinant of N. Find a minimal x value that is a linear combination of the periods
x·i = r·n₁ + s·n₂,   r, s ∈ ℤ,   i = [1, 0].    (2.10)
Then, equation 2.10 may be expressed as [x, 0]ᵀ = N·[r, s]ᵀ, so that [r, s]ᵀ = N⁻¹·[x, 0]ᵀ. Recall that the inverse of N is
N⁻¹ = (1/Δ) · |  n₂₂  −n₂₁ |
              | −n₁₂   n₁₁ | ;
therefore,
[r, s]ᵀ = (1/Δ)·[x·n₂₂, −x·n₁₂]ᵀ.    (2.11)
Solving for r and s in equation 2.11 gives us
r = (n₂₂/Δ)·x    (2.12)
and
s = −(n₁₂/Δ)·x.    (2.13)
The smallest x ∈ ℤ⁺ that satisfies equations 2.12 and 2.13 is
x = |Δ| / gcd(n₁₂, n₂₂).    (2.14)
Next, we will find the height. We use the same reasoning for the y value as for the x value. Find a minimal y value that is a linear combination of the periods
y·j = r·n₁ + s·n₂,   r, s ∈ ℤ,   j = [0, 1].    (2.15)
Then, equation 2.15 may be expressed as
[0, y]ᵀ = N·[r, s]ᵀ, so that [r, s]ᵀ = N⁻¹·[0, y]ᵀ.    (2.16)
Substituting the inverse,
[r, s]ᵀ = (1/Δ)·[−y·n₂₁, y·n₁₁]ᵀ.    (2.17)
Again, solving for r and s in equation 2.17 gives
r = −(n₂₁/Δ)·y    (2.18)
and
s = (n₁₁/Δ)·y.    (2.19)
The smallest y ∈ ℤ⁺ that satisfies equations 2.18 and 2.19 is
y = |Δ| / gcd(n₁₁, n₂₁).    (2.20)
The width is the x value and the height is the y value. We use the results from equations 2.14 and 2.20 to calculate the Big Tile Size:
Big Tile Size BTS = width × height
BTS = ( |Δ| / gcd(n₁₂, n₂₂) ) × ( |Δ| / gcd(n₁₁, n₂₁) ).    (2.21)
Using equation 2.21 in our example, we find that |Δ| = 3 and BTS = 1 × 3. We also know that |Δ| is the total number of colors necessary for the coloring. The uncolored Big Tile is shown in Figure 2.15.
Figure 2.15: Uncolored
Big Tile
2.13.1 Implementation
Implementation is a matter of correctly implementing equation 2.21 using linear algebra [St03] and the Euclidean algorithm [CoLeRi01]. Linear algebra is used to find the determinant. The Euclidean algorithm is used to find the GCD of the vector values.
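A minimal Python sketch of equation 2.21 (an illustration, not the thesis implementation; the tuple representation of the period vectors is an assumption):

    from math import gcd

    def big_tile_size(n1, n2):
        """Width and height of the Big Tile from two period vectors (eq. 2.21).
        n1, n2 are 2-tuples; the determinant also gives the number of colors."""
        det = n1[0] * n2[1] - n2[0] * n1[1]        # determinant of [n1 n2]
        width = abs(det) // gcd(n1[1], n2[1])      # |det| / gcd(n12, n22)
        height = abs(det) // gcd(n1[0], n2[0])     # |det| / gcd(n11, n21)
        return width, height, abs(det)

    # Example from the text: N = {[1, 0], [0, 3]} gives (1, 3, 3),
    # a 1 x 3 Big Tile colored with 3 colors.
    print(big_tile_size((1, 0), (0, 3)))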
2.13.2 3D Extension
The Big Tile Size in ℝ² is just a special case of the Big Tile Size in ℝ³. The reasoning is almost exactly the same. The difference is that now we use the z coordinate to express the depth. Again, we start by finding the width.
Given
N = {n₁, n₂, n₃} =
    | n₁₁  n₂₁  n₃₁ |
    | n₁₂  n₂₂  n₃₂ |
    | n₁₃  n₂₃  n₃₃ | ,
let Δ = the determinant of N. Find a minimal x value that is a linear combination of the periods
x·i = r·n₁ + s·n₂ + t·n₃,   r, s, t ∈ ℤ,   i = [1, 0, 0].    (2.22)
Then, equation 2.22 may be expressed as
[x, 0, 0]ᵀ = N·[r, s, t]ᵀ, so that [r, s, t]ᵀ = N⁻¹·[x, 0, 0]ᵀ.    (2.23)
Let the transpose of the cofactor matrix of N be
N^ct = | A  D  G |
       | B  E  H |
       | C  F  I |    (2.24)
where
A = n₂₂n₃₃ − n₂₃n₃₂    D = n₂₃n₃₁ − n₂₁n₃₃    G = n₂₁n₃₂ − n₂₂n₃₁
B = n₁₃n₃₂ − n₁₂n₃₃    E = n₁₁n₃₃ − n₁₃n₃₁    H = n₁₂n₃₁ − n₁₁n₃₂    (2.25)
C = n₁₂n₂₃ − n₁₃n₂₂    F = n₁₃n₂₁ − n₁₁n₂₃    I = n₁₁n₂₂ − n₁₂n₂₁
Therefore, since N⁻¹ = (1/Δ)·N^ct,
[r, s, t]ᵀ = (1/Δ)·N^ct·[x, 0, 0]ᵀ = (1/Δ)·[x·A, x·B, x·C]ᵀ.    (2.26)
Solving for r, s and t in equation 2.26 gives us
r = (A/Δ)·x = ((n₂₂n₃₃ − n₂₃n₃₂)/Δ)·x,    (2.27)
s = (B/Δ)·x = ((n₁₃n₃₂ − n₁₂n₃₃)/Δ)·x, and    (2.28)
t = (C/Δ)·x = ((n₁₂n₂₃ − n₁₃n₂₂)/Δ)·x.    (2.29)
The smallest x ∈ ℤ⁺ that satisfies equations 2.27, 2.28, and 2.29 is
x = |Δ| / gcd(A, B, C) = |Δ| / gcd(n₂₂n₃₃ − n₂₃n₃₂, n₁₃n₃₂ − n₁₂n₃₃, n₁₂n₂₃ − n₁₃n₂₂).    (2.30)
Next, we will find the height. Find a minimal y value that is a linear combination of the periods
y·j = r·n₁ + s·n₂ + t·n₃,   r, s, t ∈ ℤ,   j = [0, 1, 0].    (2.31)
Then, equation 2.31 may be expressed as [0, y, 0]ᵀ = N·[r, s, t]ᵀ, so that [r, s, t]ᵀ = N⁻¹·[0, y, 0]ᵀ. Substituting the inverse,
[r, s, t]ᵀ = (1/Δ)·N^ct·[0, y, 0]ᵀ = (1/Δ)·[y·D, y·E, y·F]ᵀ.    (2.32)
Again, solving for r, s and t in equation 2.32 gives us
r = (D/Δ)·y = ((n₂₃n₃₁ − n₂₁n₃₃)/Δ)·y,    (2.33)
s = (E/Δ)·y = ((n₁₁n₃₃ − n₁₃n₃₁)/Δ)·y, and    (2.34)
t = (F/Δ)·y = ((n₁₃n₂₁ − n₁₁n₂₃)/Δ)·y.    (2.35)
The smallest y ∈ ℤ⁺ that satisfies equations 2.33, 2.34, and 2.35 is
y = |Δ| / gcd(D, E, F) = |Δ| / gcd(n₂₃n₃₁ − n₂₁n₃₃, n₁₁n₃₃ − n₁₃n₃₁, n₁₃n₂₁ − n₁₁n₂₃).    (2.36)
Next, we will find the depth. Find a minimal z value that is a linear combination of the periods
z·k = r·n₁ + s·n₂ + t·n₃,   r, s, t ∈ ℤ,   k = [0, 0, 1].    (2.37)
Then, equation 2.37 may be expressed as
[0, 0, z]ᵀ = N·[r, s, t]ᵀ, so that [r, s, t]ᵀ = N⁻¹·[0, 0, z]ᵀ.    (2.38)
Substituting the inverse,
[r, s, t]ᵀ = (1/Δ)·N^ct·[0, 0, z]ᵀ    (2.39)
           = (1/Δ)·[z·G, z·H, z·I]ᵀ.    (2.40)
Again, solving for r, s and t in equation 2.40 gives us
r = (G/Δ)·z = ((n₂₁n₃₂ − n₂₂n₃₁)/Δ)·z,    (2.41)
s = (H/Δ)·z = ((n₁₂n₃₁ − n₁₁n₃₂)/Δ)·z, and    (2.42)
t = (I/Δ)·z = ((n₁₁n₂₂ − n₁₂n₂₁)/Δ)·z.    (2.43)
The smallest z ∈ ℤ⁺ that satisfies equations 2.41, 2.42, and 2.43 is
z = |Δ| / gcd(G, H, I) = |Δ| / gcd(n₂₁n₃₂ − n₂₂n₃₁, n₁₂n₃₁ − n₁₁n₃₂, n₁₁n₂₂ − n₁₂n₂₁).    (2.44)
The width is the x value, the height is the y value, and the depth is the z value for the Big Tile Size.
Big Tile Size BTS = width × height × depth
BTS = ( |Δ| / gcd(A, B, C) ) × ( |Δ| / gcd(D, E, F) ) × ( |Δ| / gcd(G, H, I) ).    (2.45)
We use linear algebra and the Euclidean algorithm to implement equation 2.45 in the 3D extension. Notice that if we let n₁₃ = n₂₃ = n₃₁ = n₃₂ = 0 and n₃₃ = 1, then equation 2.45 reduces to equation 2.21.
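A Python sketch of equation 2.45 (illustrative only; the vector layout and names are mine):

    from math import gcd

    def big_tile_size_3d(n1, n2, n3):
        """Width, height, depth of the Big Tile from three period vectors (eq. 2.45).
        Vectors are 3-tuples forming the columns of N."""
        (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = n1, n2, n3
        # rows of the transposed cofactor (adjugate) matrix of N
        A, D, G = b2*c3 - b3*c2, b3*c1 - b1*c3, b1*c2 - b2*c1
        B, E, H = a3*c2 - a2*c3, a1*c3 - a3*c1, a2*c1 - a1*c2
        C, F, I = a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1
        det = a1*A + b1*B + c1*C            # expansion along the first row of N
        width  = abs(det) // gcd(gcd(A, B), C)
        height = abs(det) // gcd(gcd(D, E), F)
        depth  = abs(det) // gcd(gcd(G, H), I)
        return width, height, depth

    # Example: periods {[1,0,0], [0,3,0], [0,0,1]} give a 1 x 3 x 1 Big Tile.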
2.14 Locate Color Vectors
Now that we have found the lattice, or set of periods, we can begin to color
the motif pieces. But first we need to identify the colors. This procedure
involves choosing points (vectors) inside the open parallelogram created
by the period vectors. Valid points are points that are located on the
period vectors but not at their tips, and inside the parallelogram but not
on the open boundary. Remember that we will use the normalized lattice
as shown in Figure 2.14. The choices for color vectors are listed in Table
2.6.
[0,0]
[0,1]
[0,2]
Table 2.6: Color Vectors
2.14.1 Implementation
At this point, we have found the natural periods and collision-free vectors.
The next step is to identify the set of color vectors so that individual motif
pieces may be colored correctly. The color vectors are those points within the half-open parallelogram in ℝ² (or the half-open parallelepiped in ℝ³). Figure 2.16 is a geometric representation of some general parallelogram in ℝ². An explanation of terms will follow. We begin the discussion by examining the conditions in ℝ² using vectors in ℝ³. After that, we will extend the discussion to cover the conditions for ℝ³.
Figure 2.16: Open Parallelogram
We start by defining the problem domain. The set of vectors is the set of natural periods and/or collision-free vectors:
V = {v₁, v₂, v₃} =
    | v₁₁  v₂₁  v₃₁ |
    | v₁₂  v₂₂  v₃₂ |
    | v₁₃  v₂₃  v₃₃ | .    (2.46)
We apply the Hermite Normal Form algorithm to this matrix to derive the matrix N; N describes a lattice equivalent to V:
N = HNF(V) =
    | n₁₁  n₂₁  n₃₁ |
    |  0   n₂₂  n₃₂ |
    |  0    0   n₃₃ | .    (2.47)
This gives us a set of vectors with the properties shown in Table 2.7.
n₁ coincides with the x-axis.
n2 is anywhere in the XY plane except the x-axis.
n3 is anywhere in 3-space except the XY plane.
Table 2.7: HNF Lattice Properties
Note that we let n₃ = [0, 0, 1] when we want to consider only ℝ². Table 2.8 defines the terms we saw in Figure 2.16.
R  = the region inside the parallelogram defined by n₁ and n₂
c  = n₁ + n₂
p  = a point inside R or on n₁ or n₂
     (not including the points at the ends of n₁ and n₂)
q  = c − p
i  = [1, 0, 0] (unit vector)
α  = the angle between i and n₁
β  = the angle between i and n₂
α′ = the angle between −i and −n₁
β′ = the angle between −i and −n₂
γ  = the angle between i and p
γ′ = the angle between −i and q
Table 2.8: Open Parallelogram Definitions
Note that the angles β and β′ are equal. This is not necessarily the case with γ and γ′. Additionally, α = α′ = 0.
Upon analyzing the aforementioned angles we can see that the
relationships shown in Table 2.9 are true.
α = α′ = 0 (because of HNF)
β = β′
cos(α) = cos(α′) = 1
cos(β) = cos(β′)
0 ≤ cos(β) = cos(β′) < 1
Table 2.9: Lattice Angle Properties
From this we find the important inequalities in equations 2.48 and 2.49. For a point to be inside the open parallelogram, both of these inequalities must be true for any point we test:
cos(β) ≤ cos(γ) ≤ 1    (2.48)
cos(β′) ≤ cos(γ′) ≤ 1    (2.49)
Since we know the values of the vectors, we can compute the cosines:
cos(β) = n₂₁ / √(n₂₁² + n₂₂²)    (2.50)
cos(γ) = p₁ / √(p₁² + p₂²)    (2.51)
cos(γ′) = (n₁₁ + n₂₁ − p₁) / √((n₁₁ + n₂₁ − p₁)² + (n₂₂ − p₂)²)    (2.52)
Now that we can compute the cosine values, we need a range of points to test. The maximum range is defined by the vectors n₁, n₂, and c. So, we let x and y range over the intervals in Table 2.10.
x_min ≤ x ≤ x_max
y_min ≤ y ≤ y_max
where
x_min = min(0, n₂₁)
x_max = max(n₁₁, c₁)
y_min = min(0, n₂₂)
y_max = n₂₂
Table 2.10: Iteration Ranges
As we iterate over these x and y values, we let p=[x, y, 0] and evaluate
p against the angle intervals described earlier.
We are now ready to define an algorithm that will work in ℝ². After a discussion of the algorithm in ℝ², we will extend the algorithm to work for both ℝ² and ℝ³. If we let d_x = x_max − x_min and d_y = y_max − y_min, then the efficiency of this operation is O(d_x · d_y).
Algorithm: locate color vectors
Inputs:
N = the set of period vectors
LocateColors(N)
1   for (x = x_min to x_max)
2       for (y = y_min to y_max)
3           p = [x, y, 0]
4           q = n₁ + n₂ − p
5           calculate cos(γ)
6           calculate cos(γ′)
7           if (cos(β) ≤ cos(γ) ≤ 1 and cos(β′) ≤ cos(γ′) ≤ 1)
8               add p to the set of color vectors
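For comparison, the same point set can be collected by testing the basis coefficients directly, which avoids the special cases of the cosine comparisons. This Python sketch is an illustration with names of my own choosing, not the thesis code:

    def locate_colors_2d(n1, n2):
        """Integer points p = a*n1 + b*n2 with 0 <= a < 1 and 0 <= b < 1,
        i.e. the half-open parallelogram selected by the angle tests above."""
        det = n1[0] * n2[1] - n2[0] * n1[1]
        c = (n1[0] + n2[0], n1[1] + n2[1])
        colors = []
        for x in range(min(0, n1[0], n2[0], c[0]), max(0, n1[0], n2[0], c[0]) + 1):
            for y in range(min(0, n1[1], n2[1], c[1]), max(0, n1[1], n2[1], c[1]) + 1):
                a = (n2[1] * x - n2[0] * y) / det     # coefficient on n1
                b = (n1[0] * y - n1[1] * x) / det     # coefficient on n2
                if 0 <= a < 1 and 0 <= b < 1:
                    colors.append((x, y))
        return colors

    # Example: locate_colors_2d((1, 0), (0, 3)) returns [(0, 0), (0, 1), (0, 2)],
    # the color vectors of Table 2.6.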
2.14.2 3D Extension
Figure 2.17: Open Parallelepiped
Up to now, we have not discussed the third vector, n₃. For ℝ², it is implied that n₃ = [0, 0, 1] for reasons that will soon become clear. This last vector is anywhere in ℝ³ except for the XY plane. Referring to Figure 2.17, we see the projection of n₃ onto the XY plane labeled n₃′. We need the definitions in Table 2.11.
n₃′ = the projection of n₃ onto the XY plane
θ = the angle between i and n₃′
Table 2.11: Parallelepiped Definitions
The three vectors in N describe an open parallelepiped in which we must search for color vectors. We now know how to search for points when z = 0. In ℝ³, this is a matter of repeating the search over the x and y values of points for all the other values of z. For example, imagine a plane parallel to the XY plane and passing through z = 1. The intersection of this plane and the parallelepiped creates a nearly exact duplicate of the original parallelogram. The only difference is that this new parallelogram might be shifted by some amount in the x and y directions in relation to the original. The amount of shift is defined by the angle θ and the value of z. Equation 2.53 defines a shift vector that tells us where the new parallelogram is in relation to the one at z = 0 for each new value of z:
e = [z·cos(θ), z·sin(θ), z].    (2.53)
Note that if n₃ = [0, 0, n₃₃], then e = [0, 0, z].
We need the shift vector for two purposes: to adjust the vectors for each point p we test, and to shift the range of x and y as we iterate over z values. The shift vector tells us where R′, the projection of the parallelogram onto the XY plane, will be located.
We adjust the vectors p and q by subtracting e from them:
p′ = p − e    (2.54)
q′ = q − e    (2.55)
This can cause p′ and q′ to have non-integer coordinates whenever e does not have integer coordinates, which is the case if R′ lies between integer coordinates in either the x or y direction.
We also adjust the range for x and y by adding the corresponding e coordinates to the minimum and maximum values of the ranges:
x_lower = ⌊x_min + z·cos(θ)⌋,  x_upper = ⌊x_max + z·cos(θ)⌋    (2.56)
y_lower = ⌊y_min + z·sin(θ)⌋,  y_upper = ⌊y_max + z·sin(θ)⌋    (2.57)
Therefore, as the parallelogram shifts for each new value of z, we may
have a new set of x and y points to test. This also keeps us from pointing
too far away with our test points whenever R' is between x and y integer
coordinates.
We calculate the cosine and sine of the angle of the projection vector n₃′ = [n₃₁, n₃₂, 0] as shown in equations 2.58 and 2.59:
cos(θ) = n₃₁ / √(n₃₁² + n₃₂²)    (2.58)
sin(θ) = n₃₂ / √(n₃₁² + n₃₂²)    (2.59)
We evaluate the angles (α, β, γ, etc.) as before, except that we use the adjusted points p′ and q′ to calculate them. If p′ and q′ are accepted as valid within the angle ranges, then we add p to the set of color vectors. Equations 2.60 and 2.61 show the new calculations:
cos(γ) = p′₁ / √(p′₁² + p′₂²) = (p₁ − z·cos(θ)) / √((p₁ − z·cos(θ))² + (p₂ − z·sin(θ))²)    (2.60)
cos(γ′) = (n₁₁ + n₂₁ − p₁ − z·cos(θ)) / √((n₁₁ + n₂₁ − p₁ − z·cos(θ))² + (n₂₂ − p₂ − z·sin(θ))²)    (2.61)
The cost of the operation is multiplied by the extent of n₃ in the z direction. Therefore, in ℝ³, the efficiency is O(d_x · d_y · n₃₃), where d_x and d_y are as defined in section 2.14.1.
Algorithm: locate color vectors
Inputs:
N = the set of period vectors
LocateColors(N)
1   for (z = 0 to n₃₃ − 1)
2       e = [z·cos(θ), z·sin(θ), z]
3       x_lower = ⌊x_min + z·cos(θ)⌋
4       x_upper = ⌊x_max + z·cos(θ)⌋
5       y_lower = ⌊y_min + z·sin(θ)⌋
6       y_upper = ⌊y_max + z·sin(θ)⌋
7       for (x = x_lower to x_upper)
8           for (y = y_lower to y_upper)
9               p = [x, y, z]
10              q = n₁ + n₂ − p
11              p′ = p − e
12              q′ = q − e
13              calculate cos(γ) at p′
14              calculate cos(γ′) at q′
15              if (cos(β) ≤ cos(γ) ≤ 1 and cos(β′) ≤ cos(γ′) ≤ 1)
16                  add p to the set of color vectors
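As an equivalent cross-check (not the shift-vector procedure itself), the half-open parallelepiped can also be enumerated directly from the basis coefficients. The helper names below are mine:

    from itertools import product

    def det3(u, v, w):
        """Determinant of the 3x3 matrix whose columns are u, v, w."""
        return (u[0]*(v[1]*w[2] - v[2]*w[1])
                - v[0]*(u[1]*w[2] - u[2]*w[1])
                + w[0]*(u[1]*v[2] - u[2]*v[1]))

    def locate_colors_3d(n1, n2, n3):
        """Integer points a*n1 + b*n2 + c*n3 with 0 <= a, b, c < 1, i.e. the
        half-open parallelepiped the shifted-slice search above selects."""
        det = det3(n1, n2, n3)
        corners = [tuple(a*v1 + b*v2 + c*v3 for v1, v2, v3 in zip(n1, n2, n3))
                   for a, b, c in product((0, 1), repeat=3)]
        lo = [min(p[i] for p in corners) for i in range(3)]
        hi = [max(p[i] for p in corners) for i in range(3)]
        colors = []
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    # Cramer's rule for the coefficients of p in the basis n1, n2, n3
                    ca = det3((x, y, z), n2, n3) / det
                    cb = det3(n1, (x, y, z), n3) / det
                    cc = det3(n1, n2, (x, y, z)) / det
                    if 0 <= ca < 1 and 0 <= cb < 1 and 0 <= cc < 1:
                        colors.append((x, y, z))
        return colors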
2.15 Assign Colors to Lattice Points
Returning to the uncolored Big Tile in Figure 2.15, we see every motif
piece that we need to color. We will color the Big Tile then use the Big
Tile to tile the wallpaper. Almost as if by magic (not really, though),
every wallpaper component will be colored so that a wallpaper component
will have a different color than another wallpaper component it happens
to overlap.
Given periods N = {n₁, n₂} =
    | n₁₁  n₂₁ |
    | n₁₂  n₂₂ | ,
color vectors K = {k₁, k₂, ..., kₙ},
and r, s ∈ ℤ, a Big Tile position of a motif piece is defined as
p = kᵢ + r·n₁ + s·n₂.    (2.62)
It follows that
[r, s]ᵀ = N⁻¹·[p₁ − kᵢ₁, p₂ − kᵢ₂]ᵀ.    (2.63)
Recall that the inverse of N is
N⁻¹ = (1/Δ) · |  n₂₂  −n₂₁ |
              | −n₁₂   n₁₁ | .
Therefore,
[r, s]ᵀ = (1/Δ)·[n₂₂(p₁ − kᵢ₁) − n₂₁(p₂ − kᵢ₂),  n₁₁(p₂ − kᵢ₂) − n₁₂(p₁ − kᵢ₁)]ᵀ.
Let
r′ = n₂₂(p₁ − kᵢ₁) − n₂₁(p₂ − kᵢ₂)
s′ = n₁₁(p₂ − kᵢ₂) − n₁₂(p₁ − kᵢ₁).    (2.64)
Then test
r′ ≡ 0 (mod Δ_N)  and  s′ ≡ 0 (mod Δ_N).    (2.65)
The worksheet in Table 2.12 shows all the assignments that need to be
made. Figure 2.18 shows a colored Big Tile. We know that there are
three unit tiles in our Big Tile. Each unit tile has nine motif pieces.
Therefore, there are 27 color assignments to make, nine for each Big Tile
location. The determinant tells us that three colors will be used, so each
color must be used a total of nine times in the Big Tile. In general, each
color must be used exactly the same number of times as every other color. This fact provides a way to double-check the calculations.
BT loc    Motif        Sum      Linear combo
[a, b]  − [mᵢ, x, y]  =  p      r′, s′ per color vector      choose
[0, 0] [1, 0, 0] [0,0] 0 -3 -3 0 1 2 [0,0J
[0,0] [2, 1, 0] [-1,0] -3 -6 -6 0 1 2 [0,0]
[0,0] [3,1, -1] [-1,1] -3 -6 -3 1 2 3 to,2| '
[0,0] [4, 0, -1] [0,1] 0 -3 -3 1 2 3 ~to;2T
[0,0] [5, 1, -1] [-1,1] -3 -6 -3 1 2 3 to, 2]
[0,0] [6, 1, -2] [-1, 21 -3 -6 -6 2 3 4 [0, 1]
[0,0] [7, 2, -2] [-2, 2] -6 -3 -9 2 3 4 [0,1] '
[0,0] [8, 1, 1] [-1, -1] -3 -6 -6 -1 0 1 [0, lj
[0,0] [9, 0, 1] [0, -1] 0 -3 -3 -1 0 1 [0,1]
[0,1] [1, 0, 0] [0, 1] 0 -3 -3 1 2 3 to,2l
[0,1] [2, 1, 0] [-1, 1] -3 -6 -3 1 2 3 10, 2J
[0,1] [3, 1, -1] [-1, 2] -3 -6 -6 2 3 4
[0,1] [4, 0, -1] [0, 2] 0 -3 -3 2 3 4 iSjtr
[0,1] [5, 1, -1] [-1, 2] -3 -6 -6 2 3 4
[0,1] [6, 1, -2] [-1, 3] -3 -6 -6 3 4 5 (Ml
[0,1] [7, 2, -2] [-2, 3] -6 -9 -9 3 4 5 r WM~
[0,1] [8, 1, 1] [-1, 0] -3 -6 -6 0 1 2
[0,1] [9, 0, 1] [0, 0] 0 -3 -3 0 1 2 10,0;
[0, 2] [1, 0, 0] [0, 2] 0 -3 -3 2 3 4 10, 1]
[0, 2] [2, 1, 0] [-1, 2] -3 -6 -6 2 3 4 [0,1]
[0, 2] [3, 1, -1] [-1, 3] -3 -6 -6 3 4 5 TOT"
[0, 2] [4, 0, -1] [0, 3] 0 -3 -3 3 4 5 TCiol
[0, 2] [5, 1, -1] [-1, 3] -3 -6 -6 3 4 5
[0, 2] [6, 1, -2] Tl,4] -3 -6 -6 4 5 3 tOP
[0, 2] [7,2, -2] [-2, 4] -6 -9 -9 4 5 T FT2T
[0, 2] [8, 1, 1] [-1, 1] -3 -6 ne~ 1 2 |3| to sr
[0, 2] [9, 0, 1] [0, 1] 0 -3 ^- 1 2 [0,2]
Table 2.12: Color Assignments
Figure 2.18: Colored
Big Tile
This is almost the last step in the coloring algorithm. The information
that seeds this algorithm may come from the design tool that was
mentioned in the beginning of this thesis. It is now time to pass the
coloring information back to the coloring machinery of the design tool. In
effect, the coloring can happen in real time while a tile is being designed.
Recall that the coloring information at this point is for one connected
component. This has to be taken into account when choosing colors to fill
a palette. The strategy is to disallow the choice of a color for any other
motif piece, in any other component, once it has been assigned within
another component. For example, if red, blue, and green have been used
for component A, then they are not available for component B of the same
tile. This choice could be implemented here, but it seems better to let the object actually in charge of coloring make that decision. Thus the
design tool receives generic color vectors and decides which colors to take
from the palette.
Figure 2.19: Numbered
Big Tile
We begin by assigning each unit tile in the big tile a location vector. This
assignment is illustrated in Figure 2.19. For each Big Tile location, each
motif piece must get the correct color. A Big Tile location is the sum of a
color vector plus a linear combination of the periods. The procedure for
color assignment is to iterate over every motif piece at every Big Tile
location and plug in each color vector. If a linear combination is found for
a color during iteration, then assign that color to the current motif piece.
The efficiency for this procedure is O(A · B · ||M|| · ||K||).
Algorithm: assign colors
Inputs:
BTS = A × B
Δ = the determinant of N (the set of periods)
K = {k₁, k₂, ..., kₙ} = the set of color vectors
M = {m₁, m₂, ..., mₙ} = a generating set of motif pieces
(Recall mᵢ = [i, x, y] = [i, ωᵢ])
AssignColors(A, B, Δ, M, K)
1   for (a = 0 to A − 1)
2       for (b = 0 to B − 1)
3           for (i = 1 to ||M||)
4               for (j = 1 to ||K||)
5                   p = [a, b] − ωᵢ
6                   calculate r′ and s′
7                   if (r′ ≡ 0 and s′ ≡ 0 (mod Δ_N))
8                       assign kⱼ to mᵢ in position [a, b]
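A small Python sketch of the assignment loop follows (illustrative only; the data layout, the sign convention for the motif offsets ω, and all names are assumptions):

    def assign_colors(A, B, n1, n2, motifs, color_vectors):
        """Assign a color vector to every motif piece at every Big Tile location.

        motifs        -- list of (motif_id, offset) pairs, offset = (x, y)
        color_vectors -- list of 2-tuples k
        Assumes the period determinant is nonzero.
        Returns a dict keyed by (location, motif_id)."""
        det = n1[0] * n2[1] - n2[0] * n1[1]
        assignment = {}
        for a in range(A):
            for b in range(B):
                for mid, (ox, oy) in motifs:
                    for k in color_vectors:
                        p = (a - ox, b - oy)               # position of this piece
                        d1, d2 = p[0] - k[0], p[1] - k[1]
                        r_prime = n2[1] * d1 - n2[0] * d2  # eq. 2.64
                        s_prime = n1[0] * d2 - n1[1] * d1
                        if r_prime % det == 0 and s_prime % det == 0:
                            assignment[((a, b), mid)] = k
                            break
        return assignment

Exactly one color vector passes the modulus test for each piece, so the inner loop can stop at the first match.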
2.15.2 3D Extension
There is a little more work involved in carrying out the calculations in ℝ³, but the algorithm is only slightly changed to account for the extra coordinate. Note that, in both ℝ² and ℝ³, it is necessary to check the full set of modulus equations (r′, s′, and t′ (mod Δ)).
Given periods N = {n₁, n₂, n₃} =
    | n₁₁  n₂₁  n₃₁ |
    | n₁₂  n₂₂  n₃₂ |
    | n₁₃  n₂₃  n₃₃ | ,
color vectors K = {k₁, k₂, ..., kₙ}, and r, s, t ∈ ℤ, a position of a motif piece is defined as p = kᵢ + r·n₁ + s·n₂ + t·n₃. Therefore,
[r, s, t]ᵀ = N⁻¹·[p₁ − kᵢ₁, p₂ − kᵢ₂, p₃ − kᵢ₃]ᵀ.    (2.66)
Using the transpose of the cofactor matrix of N from equations 2.24 and 2.25,
[r, s, t]ᵀ = (1/Δ)·N^ct·[p₁ − kᵢ₁, p₂ − kᵢ₂, p₃ − kᵢ₃]ᵀ.    (2.67)
Let
r′ = (p₁ − kᵢ₁)·A + (p₂ − kᵢ₂)·D + (p₃ − kᵢ₃)·G
s′ = (p₁ − kᵢ₁)·B + (p₂ − kᵢ₂)·E + (p₃ − kᵢ₃)·H    (2.68)
t′ = (p₁ − kᵢ₁)·C + (p₂ − kᵢ₂)·F + (p₃ − kᵢ₃)·I.
Then test
r′ ≡ 0 (mod Δ_N),  s′ ≡ 0 (mod Δ_N),  and  t′ ≡ 0 (mod Δ_N).    (2.69)
The algorithm is similar to the case in ℝ² and is highlighted below. In this case, the revised efficiency is O(A · B · C · ||M|| · ||K||).
Algorithm: assign colors
Inputs:
BTS = A × B × C
Δ = the determinant of N (the set of periods)
K = {k₁, k₂, ..., kₙ} = the set of color vectors
M = {m₁, m₂, ..., mₙ} = a generating set of motif pieces
(Recall mᵢ = [i, x, y, z] = [i, ωᵢ])
AssignColors(A, B, C, Δ, M, K)
1   for (a = 0 to A − 1)
2       for (b = 0 to B − 1)
3           for (c = 0 to C − 1)
4               for (i = 1 to ||M||)
5                   for (j = 1 to ||K||)
6                       p = [a, b, c] − ωᵢ
7                       calculate r′, s′ and t′
8                       if (r′, s′ and t′ ≡ 0 (mod Δ_N))
9                           assign kⱼ to mᵢ in position [a, b, c]
2.16 Color the Tile
This is the step where we complete the loop in relation to the design tool.
We finally get to transmit color information back to the design tool. The
design tool will receive and interpret the color assignments, and assign
colors to the polygons that have been built. A possible software design
might allow the user interface to be decoupled from the algorithm
implementation in a client-server relationship. In other words, the color
calculations could be kept separate from the graphical rendering in the
design tool. Thus, performance could be enhanced by allowing rendering
and computations to be performed in separate environments.
If there are multiple wallpaper components, they all have their own Big
Tile size. If all the Big Tile sizes are identical, then they will all fit into
the same Big Tile. However, the case may be that different wallpaper
components have different Big Tile sizes. In that case, a big Big Tile size
needs to be found so that all the Big Tiles will fit within it. To find the big
Big Tile size the least common multiple (LCM) of all the corresponding
dimensions for each Big Tile must be calculated. For example, in ℝ³, given two Big Tile Sizes BTS₁ = A₁ × B₁ × C₁ and BTS₂ = A₂ × B₂ × C₂, the big Big Tile Size is
bBTS = lcm(A₁, A₂) × lcm(B₁, B₂) × lcm(C₁, C₂).    (2.70)
Recall that lcm(a, b) = ab / gcd(a, b) [Ro00]. We can find the GCD, and by extension the LCM, of two integers using the Euclidean algorithm.
This is a computation that should be added to the design tool so that
rendering may be adjusted as necessary.
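A short Python sketch of this combination step (names are mine, not part of the design tool):

    from math import gcd

    def lcm(a, b):
        """Least common multiple via the Euclidean algorithm: lcm(a, b) = a*b / gcd(a, b)."""
        return a * b // gcd(a, b)

    def big_big_tile_size(sizes):
        """Combine the Big Tile Sizes of several wallpaper components (eq. 2.70).
        sizes is a list of (A, B) or (A, B, C) tuples of equal length."""
        combined = sizes[0]
        for s in sizes[1:]:
            combined = tuple(lcm(x, y) for x, y in zip(combined, s))
        return combined

    # Example: components with Big Tiles 1 x 3 and 2 x 2 share a 2 x 6 big Big Tile.
    print(big_big_tile_size([(1, 3), (2, 2)]))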
3. Conclusion
Now that we see how to compute the coloring for an Escher tile, we can
wonder where it will lead. Looking back, we touched on implementing
algorithms for dealing with vectors and matrices utilizing principles found
in linear algebra. We saw how to manipulate the vectors using GCD
reduction and the Hermite Normal Form that draws on techniques from
number theory. We mentioned the design tool that uses graphics
programming to aid in visualization and creation of tiles in both IR2, and
IR3. We drew on techniques from basic geometry to help identify points in
a lattice.
An important aspect of the implementation is that all the steps involved
are efficient. This means that we can expect the overall run time to be in
a manageable range. While we are not looking for real time
computations, we will be happy that the algorithm will finish quickly.
Most Escher tiles are relatively simple. They have few motif pieces and
few connections. That means that the software implementation should be
fast enough so that a user of the design tool will not have to wait a
noticeable amount of time for coloring.
Looking forward, it is easy to imagine a wide range of possibilities. The
design tool may be used to create strange and wonderful new Escher tiles.
Many of M. C. Escher's tiles were formulated on a basic pattern within a
5x5 grid. This motif pattern may be used in several permutations to
create a lot of interesting patterns. We do not have to be limited to a grid
or even straight lines when creating a tile. Unit tile patterns, for
instance, do not have to be polygons. They can be a set of lines, or a
combination of lines and polygons. We are not even constrained to create
two dimensional tiles. When we look into three dimensional layouts, we
add three dimensional solids to the list of possible motif pieces. The
aesthetic possibilities abound.
3.1 Design Tool
The design tool holds much hope for possibilities. With this tool, a
designer or artist is free to create new tiles that have not been seen
before. The tool was developed as part of a related project. To be truly
automated, the coloring algorithm needs to have some sort of input. In
fact, the first steps down the road to coloring require that we label motif
pieces, and find their relationship to one another through connections and
overlaps. The last step implicit in the algorithm is that a person actually
colors a physical wallpaper. The design tool is intended to fulfill those
steps in the algorithm.
3.2 Future Considerations
As humans, we can sit down with a sketch pad, draw tiles, then
wallpapers based on that tile. A few talented souls, M. C. Escher being a
notable example, have the time, talent, and resources to produce such
creations. The hope for the design tool is to make this type of endeavor
accessible to a wider range of people and possibilities. It should make
creating tiles easier so that people can create pleasing wallpapers. New
wallpapers mean that students and scholars might gain new insight by
having a concrete example to manipulate.
Earlier we mentioned extending Escher tiles into three dimensions. This
holds promise for further exploration. Problems such as using tiles other
than square tiles could be explored. For instance, the Euclidean plane can
be tiled by squares, regular hexagons, and equilateral triangles. Motif
pieces could be drawn on hexagons or triangles and then used to tile an
area. The problem of how to define connections and boundaries could be
posed. With square tiles the Big Tile is an area outlined by straight lines. It may be that Big Tiles using triangles, and certainly hexagons, would not have straight edges. Triangles could be problematic because
they must be translated so they can tile a plane. This could possibly make
connections harder to define. Escher also created work that explored the
hyperbolic plane. This is an interesting area of exploration as well.
There are many areas that beg to be explored when it comes to Escher
tiles as it is with the rest of M. C. Escher's work. He inspires us not only
with beauty and form but with the infinite possibilities that exist in his
patterns.
APPENDIX A
Algorithms
The following algorithms are listed in this appendix to provide an
illustration of their implementation. They have been used as part of
the implementation but not described in the text.
A.1 Extended Euclidean
Given a, b, c ∈ ℤ, the greatest common divisor (GCD) c is the greatest integer that divides both a and b. Additionally, c may be expressed as a linear combination of a and b: given f, g ∈ ℤ, then c = fa + gb [Ro00]. The extended Euclidean algorithm computes c as well as providing the two factors f and g [CoLeRi01].
The algorithm depends on the fact that gcd(a, b) = gcd(b, a mod b) [CoLeRi01]. If b = 0, then a is the GCD. Otherwise, find the remainder r = a mod b and recurse with b and r. As the algorithm ascends out of the recursion, the quotient q = ⌊a/b⌋ is used to calculate valid factor values at each step. When the method completes, f and g are valid factors for c = fa + gb. The efficiency of this algorithm is proportional to O(log b) for a > b > 0.
Algorithm: extended Euclid
Inputs:
a, b ∈ ℤ
ExtendedEuclid(a, b)
1   if (b = 0)
2       (c, f, g) = (a, 1, 0)
3   else
4       r = a mod b
5       (d, x, y) = ExtendedEuclid(b, r)
6       q = ⌊a/b⌋
7       t = x − q·y
8       (c, f, g) = (d, y, t)
9
10  return (c, f, g)
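A direct Python transcription of this recursion (a sketch; the function name is mine):

    def extended_euclid(a, b):
        """Return (c, f, g) with c = gcd(a, b) and c = f*a + g*b."""
        if b == 0:
            return a, 1, 0
        d, x, y = extended_euclid(b, a % b)
        q = a // b
        return d, y, x - q * y

    # Example: extended_euclid(12, 9) returns (3, 1, -1), since 3 = 1*12 + (-1)*9.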
A.2 Hermite Normal Form (HNF)
Hermite Normal Form is a form of matrix and also refers to the algorithm for normalizing matrices [Co95]. In general, the result is a matrix that, given the matrix A = (a_ij), satisfies
1. a_ij = 0 for i > j,
2. a_ii ≥ 0 for all i, and
3. 0 ≤ a_ij < a_ii for j > i,
or, as an m × n matrix with m < n:
A = | 0 ... 0  *  *  ...  * |
    | 0 ... 0  0  *  ...  * |
    | 0 ... 0  0  0  ...  * |
In our explicit case for the Escher coloring, m = n = 3, so the result matrix has the form
A = | *  *  * |
    | 0  *  * |
    | 0  0  * |
0 0 *
The algorithm starts at the bottom right and works its way up the
diagonal. The zero values are the result of performing GCD reduction on
the columns to the left of the current diagonal element. The result is that
the diagonal entry contains the non-negative GCD of every element to the
left in the row or zero. After the every element to the left is zero, a final
reduction is performed on the columns to the right.
The initialization of the algorithm is just setting indices for iteration.
After initialization, the algorithm performs the reductions using column
operations.
Since the algorithm is computing a GCD the efficiency is related to that of
the Extended Euclidean algorithm. Assuming m n 1 operations on the last row at each iteration until every value
except the last is zero. So efficiency for that row is 0 ((n -1) log *),
where x is the least value in that row. For the next row, efficiency is
0((n2)logx), and so on. Every column operation involves m
operations for each element in the column. Therefore, overall efficiency is
0(ra2nlogx).
Algorithm: Hermite normal form (HNF)
Definitions:
A = an m × n matrix
i, j, k, l, d ∈ ℤ = matrix indices
Aⱼ = a column in A
min = temporary variable for a minimum value
allzero = boolean status of a row
Inputs:
A = the matrix to be reduced
Initialize:
1   i = k = 1
2   if (m ≤ n)
3       b = m
4   if (m > n)
5       b = n
HermiteNormalForm(A)
1   while (k ≤ b)
2       s = 1
3       while (s ≠ 0)
4           min = 0
5           d = −1
6           for (j = 0 to k − 1)            // find the least |a_ij| ≠ 0, j < k
7               if (a_ij ≠ 0 and (min = 0 or min > |a_ij|))
8                   min = |a_ij|;  d = j
9           if (d ≠ −1)                     // found a pivot candidate a_id
10              if ((a_ik = 0 and a_id ≠ 0) or (a_id ≠ 0 and |a_id| < |a_ik|))
11                  swap(A_d, A_k)
12              if (a_ik < 0)               // keep the diagonal entry non-negative
13                  A_k = −A_k
14
15          s = 0
16          for (j = 0 to k − 1)            // column reductions
17              q = ⌊a_ij / a_ik⌋
18              A_j = A_j − q·A_k
19              s = s + |a_ij|
20
21      if (a_ik = 0)
22          k = k − 1
23      else                                // final reductions on the columns to the right
24          for (j = k + 1 to n − 1)
25              q = ⌊a_ij / a_ik⌋
26              A_j = A_j − q·A_k
27
28      i = i + 1
29      k = k + 1
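For reference, the following Python sketch computes an upper-triangular, column-operation Hermite Normal Form of the kind described above (an illustration, not the thesis code; it processes rows from the bottom up and keeps the pivots non-negative):

    def hnf(mat):
        """Column-operation HNF, upper-triangular variant: each row gets zeros
        left of its pivot, a positive pivot, and entries right of the pivot
        reduced modulo the pivot. mat is a list of integer rows; a copy is returned."""
        A = [row[:] for row in mat]
        m, n = len(A), len(A[0])
        k = n - 1
        for i in range(m - 1, -1, -1):
            if k < 0:
                break
            for j in range(k):                     # zero out row i left of column k
                while A[i][j] != 0:                # Euclidean column reduction
                    q = A[i][k] // A[i][j]
                    for r in range(m):             # column_k -= q * column_j
                        A[r][k] -= q * A[r][j]
                    for r in range(m):             # swap columns j and k
                        A[r][j], A[r][k] = A[r][k], A[r][j]
            if A[i][k] == 0:
                continue                           # row already zero; keep this pivot column
            if A[i][k] < 0:                        # make the pivot positive
                for r in range(m):
                    A[r][k] = -A[r][k]
            for j in range(k + 1, n):              # reduce entries to the right
                q = A[i][j] // A[i][k]
                for r in range(m):
                    A[r][j] -= q * A[r][k]
            k -= 1
        return A

    # Example from section 2.13: hnf([[-1, 1], [3, 0]]) returns [[1, 0], [0, 3]].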
APPENDIX B
Escher Tile Examples
The following are examples of Escher tiles representing each of the period
types in ℝ². These examples are meant to aid in understanding what
natural periods actually describe geometrically and chromatically.
B.1 Trivially Periodic
Figure B.1: Trivially Periodic Tile
Figure B.2: Trivially Periodic Wallpaper
B.2 Singly Periodic
Figure B.3: Singly Periodic Tile
B.3 Doubly Periodic
Figure B.5: Doubly Periodic Tile
Figure B.6: Doubly Periodic Wallpaper
REFERENCES
[Co95]      Henri Cohen, A Course in Computational Algebraic Number Theory, 2nd ed., Springer-Verlag, New York, 1995.
[CoLeRi01]  Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein, Introduction to Algorithms, 2nd ed., The MIT Press, Cambridge, Massachusetts, 2001.
[Da97]      Dan Davis, On a Tiling Scheme From M. C. Escher, The Electronic Journal of Combinatorics, v. 4, no. 2, (1997) #R23.
[Ge01]      Ellen Gethner, On a Generalization of a Combinatorial Problem Posed by M. C. Escher, Congressus Numerantium, Baton Rouge, LA, 2001.
[Ge02]      Ellen Gethner, Computational Aspects of Escher Tilings, PhD thesis, University of British Columbia, 2002.
[MaWaSc96]  Rick Mabry, Stan Wagon, Doris Schattschneider, Automating Escher's Combinatorial Patterns, Mathematica in Education and Research, v. 5, no. 4, (1996) 38-52.
[Pa01]      Steve Passiouras, Escher Tiles, http://www.eschertiles.com, 2001.
[Ro00]      Kenneth H. Rosen, Elementary Number Theory, 4th ed., Addison Wesley Longman, Inc., Reading, Massachusetts, 2000.
[Sc04]      Doris Schattschneider, M.C. Escher: Visions of Symmetry, 2nd ed., Harry N. Abrams, Inc., New York, 2004.
[Sc97]      Doris Schattschneider, Escher's Combinatorial Patterns, The Electronic Journal of Combinatorics, v. 4, no. 2, (1997) #R17.
[St03]      Gilbert Strang, Introduction to Linear Algebra, 3rd ed., Wellesley-Cambridge Press, Wellesley, Massachusetts, 2003.
[We01]      Douglas B. West, Introduction to Graph Theory, 2nd ed., Prentice-Hall, Inc., Upper Saddle River, New Jersey, 2001.
GENETIC SELECTION AMONGST ARTIFICIAL LIFE AGENTS by Alex Plotkin B.S., Colorado School of Mines, 2000 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Computer Science Fall 2004
This thesis for the Master of Science degree by Alex Plotkin has been approved by

Plotkin, Alex (M.S. Computer Science) Genetic Selection Amongst Artificial Life Agents Thesis directed by Professor John Clark ABSTRACT We consider simulating the behavior selection and evolution of biological organisms utilizing the computer as the ultimate test bench. Instead of trying to create a complex system to represent an Artificial Intelligence entity we look at the field of Artificial Life which chooses the approach of creating simplified artificial creatures or agents that can be evolved and subjected to selection. Such creatures sometimes exhibit rather complex behavior without the initial development typically involved in the Artificial Intelligence approach. We investigate the field of Artificial Life by studying some of the techniques used to simulate the genetic makeup of simple Artificial Life forms The project also looks at the possible ways to simulate natural selection amongst these life forms. The intent of this exercise is to gain an understanding of the techniques currently available to accomplish the tasks similar to those of biological selection. Within the recent years a numerous environments have been developed to aid in simulating the Artificial Life objects and their surroundings. To complete this exercise we have used one such environment, called Breve, and provide some insight into why we chose Breve for our work. We use the platform to create a board-like environment in which Artificial Life Agents move around searching for food and other Agents while the fittest individuals are chosen. The premise for the selection is provided by utilizing a Genetic Algorithm. We conclude with our findings and some recommendations for the future work. This abstract accurately represents the content of the candidate's thesis I recommend its publication. Signed lll

DEDICATION I dedicate this thesis to my mother father and sister for their support and belief in me I also want to thank dear Robin and my good friend Walter for their support and constant motivation to keep me going.

ACKNOWLEDGMENT The project would not be successfully completed without the software so generously offered to the public by Jon Klein, while the patience and understanding of the engineering team at WildBlue Communications have made it possible for me to utilize the necessary hardware. I am grateful to Andy Romero who has been so patient in helping me with the expenses for the program I am also grateful to Dr. Clark for his support and patience with me for the last 2 semesters.

CONTENTS
Figures .............................................. ix
Tables ................................................ x
CHAPTER
1. INTRODUCTION ....................................... 1
   Accomplishments .................................... 4
2. BACKGROUND ......................................... 7
   Environmental Background .......................... 10
3. SIMULATION DESIGN ................................. 11
   Idea Formulation .................................. 11
   The Super Agent ................................... 12
   The Sub Agent ..................................... 13
   Concurrency and Time .............................. 17
   Agent Evolution ................................... 18
   Gene Representation ............................... 20
4. ENVIRONMENTS AND TOOLS EXAMINED ................... 22
   Breve ............................................. 22
   Swarm ............................................. 24
   Framsticks ........................................ 24
5. SIMULATION IMPLEMENTATION ......................... 26
   Simple Agent Implementation ....................... 26
   Feeders ........................................... 28
   Further Refinement and Readjustment of Goals ...... 29
   Getting the Agents to Eat ......................... 31
   Adding Intelligence to the Agents ................. 34
   Physics of the Simulation and Collisions .......... 34
   Neighbors ......................................... 35
   Designing the Agent Goals ......................... 36
   Design and the Use of Traits ...................... 37
   Refinement and Implementation of the Agent Genome . 38
   Algorithm Used .................................... 39
   Sorting the Agents ................................ 40
   Breeding of the Agents ............................ 41
   Trait Mutation .................................... 41
6. RUNNING THE SIMULATION ............................ 43
   Successful Runs ................................... 44
   Run 1 ............................................. 44
   Run 2 ............................................. 46
   Run 3 ............................................. 47
   Run 4 ............................................. 50
7. CONCLUSION ........................................ 51
   Future Recommendations ............................ 52
APPENDIX
   A. ENVIRONMENT ATTEMPT ............................ 55
   B. CODE LISTING ................................... 57
   C. AGENT_GENOME.TZ LISTING ........................ 58
   D. AGENT.TZ LISTING ............................... 64
   E. DATA_COLLECTOR.TZ LISTING ...................... 72
   F. FEEDERS.TZ LISTING ............................. 74
   G. HIVE.TZ LISTING ................................ 76
   H. SMART_AGENTS.TZ LISTING ........................ 79
BIBLIOGRAPHY ......................................... 86
FIGURES
Figure 1: A sketch for the Super Agent idea .......................... 13
Figure 2: A sketch for the Sub Agent idea ............................ 14
Figure 3: The Selection Cycle ........................................ 18
Figure 4: Initial simplistic Agents in the Environment ............... 27
Figure 5: Two Agents on the board with the Feeders ................... 32
Figure 6: Snapshot of the running tally of the Agents' energies ...... 33
Figure 7: Only one Agent remains now ................................. 33
Figure 8: Simulation output sample ................................... 43
Figure 9: Energy chart for the best Agent and the worst Agent ........ 45
Figure 10: Energy chart for the best Agent and the worst Agent ....... 47
Figure 11: Agents on the board with Feeders .......................... 48
Figure 12: Energy chart for the best Agent and the worst Agent ....... 49
Figure 13: Energy chart for the best Agent and the worst Agent ....... 50
Figure 14: Output from the initial implementation .................... 56
Figure 15: The simplistic initial GUI implementation ................. 56
TABLES
Table 1: Simplified representation of the Agent's Gene ............... 20
Table 2: Refined representation of the Agent's Gene .................. 20
Table 3: Traits of the Agent ......................................... 38
CHAPTER 1 INTRODUCTION Through the mind (reason, intellect) a person observes oneself, but he or she knows oneself only through consciousness. Without consciousness of oneself any sort of observation as well as application of the mind is unthinkable." 1 After reading the above statements I thought, "Show me a system, application, a single piece of code which is conscious of itself." I am sure many have said so before without giving it much follow-up and so in my (very) humble opinion, the fundamental approach to Artificial Intelligence (AI) should be in developing an entity, which is conscious. Surely this is something that is no longer a discovery to most scientists and researches in the area of AI. The old paradigm of "I think therefore I am" should no longer be applicable -perhaps, "I am conscious of myself therefore I am" is more tangible. For too long, it seems ability to think has long being confused with ability to acknowledge one's existence. The interesting fact about our existence and the manner in which we wade our way through life and the world around us is perhaps described by the words of the same author: 1 LeoN. Tolstoy, War and Peace (City of Samara: City of Samara House of Printing, 1996) Vol. 3 and 4, pg. 654. 1

" .. [One] does not need to learn absolutely all the space rules, which in turn provides a certain degree of freedom." 2 Then is it with some unknown innate ability we learn and follow some unknown set of flexible rules to formulate each person's set of knowledge individually with seemingly no engrained basic knowledge? And so in this project I proceed to scratch a surface on a slightly different approach to imitating life and intelligence-Artificial Life. Throughout the course of this endeavor I am not trying to attempt to sway anyone's opinion in any particular direction, but instead by showing my trials and resulting successes and errors, perhaps, stimulate some interest for the others to pursue the subject matter further. It seems that attempts to create a system which has the Artificial Intelligence end up imitating a lot of what we, humans, do and since we don't have a complete understanding of ourselves yet, most attempts, while achieving some rather interesting and spectacular results, still do not reproduce an intelligent self-conscious being. I believe that not unlike the biological evolution, an accidental mix of resources and circumstances will spark the evolution of Artificial Life in a context 2 LeoN. Tolstoy War and Peace (City of Samara : City of Samara Hou se of Printing 1996) Vol. 3 and 4, pg 669. 2

that truly resembles Biological Life, which we are a part of. While the idea of renegade lines of code may seem a little far fetched, there are enough lines of code out there, combined with hardware to provide a "seed" of resources which could start something out of our control. Imagine a server farm getting struck by lightning, but instead of electronics getting fried, something else will happen. The circuits will get altered in just the right way and the code will be read in just the right sequence to form something that will seem like a computer virus at first, but will start acting in a very different way-a one day old child trying to learn what is it and where is it. And so the first true Artificial Life form will be created. Philosophy aside while conducting the research for this document, I realized that I do not posses the sufficient knowledge or the time to implement an environment, which could imitate something similar to what I discussed above. Instead, the idea of simple little creatures performing simple tasks seems more achievable and to the point. Can a piece of hardware with some lines of code act in a manner, which resembles evolution? Can it act irrationally without the backing of search algorithms and still achieve a goal, which can be easily formulated, and be pertinent to a live creature trying to survive in a real environment? I will discuss my experiments along with the conclusions I arrived to and any possible future recommendations on the following pages. 3

Accomplishments The flow of this paper may seem almost irrational sometimes; however the intent of the author was not to confuse the reader but to try to follow the flow of the development and refinement of the experiment's coming to life. To alleviate any confusion this section serves as a compressed overview of what was done as well as any future recommendations. After some brain-storming I formulated the following idea for the environment implementation. A group of Agents will scour a board-like environment trying to achieve a common goal. In the context for this paper an Agent is simply an entity that is represented by a data-structure, which enables it to perform simple tasks such as eating moving, acquiring one's neighbors, etc Originally I had planned to pit two Hives of Agents against each other with opposing sides able to fight and steal resources from each other to insure the respective Hive s survival and well-being The best performing Agents would be selected and bred to propagate their superior traits. The original spawning of Agents would carry a penalty on the Hive so each Hive could not spawn an unlimited number of Agents. Agents would suffer penalty in the form of energy loss during collision with other Agents with a more severe penalty upon collision with the enemy Agents. 4

Instead, due to the arising complexity, I settled for the following as the final implementation. A set number of Agents would be spawned with random character traits and they will then be allowed to move around the board, which has a limited number of Feeders on it. Feeders will act as the energy sources for Agents, which come in to contact with them. Agents would suffer energy loss from collision with other Agents. However, the Agents would still be drawn toward other Agents based on the value of the socialization indices. Basically an Agent is drawn to anything within its sight distance based on their level of "curiosity." This was my attempt to start implementing group dynamics for a population of Agents with differing or common goals based on some of the literature discussed in the background section. The Agents gain a certain amount of energy from contact with a Feeder and are allowed to remain on a Feeder as long as they please. The Agent with the highest amount of energy is selected to the top of the population at which point the top contender and the member just below are bred together to produce a child, which is placed in to the last member position. The method used to produce a child is a onepoint crossover. 3 The population members' traits can then be mutated at random and the Agents are updated with the new Genes and proceed with moving about the board. After a while the simulation served as a spectacular example of the 3 Langdon W. B. and Poli R Better Trained Ants for Genetic Programming. Site of William Langdon. Ed. William Langdon 1 Sep. 2004. School of Computer Science, University of Birmingham 10 Nov 2004 . 5

applicability of Genetic Algorithms (GA) to the field of Artificial Life (AL). The most rewarding thing for me was to see that the principles of several fields of Computer Science and other research disciplines such as Biology, came together to produce a tangible and applicable result. As far as the future possibilities and recommendation, which will be discussed at the end of the paper, one should consider the further implementation to reflect the original intent of having two opposing sides compete for the resources and domination of the environment. I would like to suggest further refinement of the way the traits are used in determining the Agents behavior. To facilitate a more life-like learning behavior, perhaps Neural Network (NN) concepts could be implemented in the future. The simulation could be used as a starting point for a number of real-life simulations such as market dynamics, socialization studies, war games While it is only a simplified representation I found even the simple concepts I've implemented to be extremely useful in showing the functional application of theoretical concepts and to serve as a proof of concept as the section discussing the results will show. 6

CHAPTER2 BACKGROUND What originally sparked my interest in implementing a type of environment such as the one discussed in this paper was reading some of the works in the proceedings of the Santa Fe Institute Studies in the Sciences of Complexity. 4 While no particular article seemed to cover exactly what I wanted to try out, the overall direction of the presented research and discussion coincided with my general interests Some of the articles would look in to simulating a chemical process utilizing the computer as a new kind of a test bench, while others would discuss how to teach a robot to walk. The courses in Artificial Intelligence (hereafter referred to as AI) only seemed to further fuel my interest for something other then predicate calculus and binary tree search. After some searching for the topics related to Artificial Life (AL) on the internet I came across several sites, which either discussed a more specific idea in AL implementation or actually provided an environment for one's study. Several of these environments will be discussed in more detail later and I will go in to more detail about Breve -the environment used to implement my simulation. At this point I must extend my gratitude to Dr. Clark, who stopped my wanderings in a 4 Christopher G. Langton, Artificial Life: The Proceedings Volume in the Santa Fe Institute Studies in the Sciences of Complexity: Vol. l, 2, 3 (Redwood City : Addison-Wesley Publishing Company Inc 1987 1992) 7

timely manner by pointing out Breve as one of the possible tools I could use instead of attempting to implement things from scratch. In "Maturation and the Evolution of Imitative Learning in Artificial Organisms" the authors look in to the issues of increased costs of juvenile mortality due to early maturation. 5 The paper goes through the process of parents passing their knowledge on to their offspring and the effect of the age at which the offspring is considered mature on their survival rates. One idea which I became interested in during the reading was that of learning through imitation-basically instead of explicit instructions the candidate would simply follow the teacher (parents in this case) and hopefully produce appropriate behavior or response to that demonstrated by the teacher in similar situations as they arise at a later time. This is one of the possible future projects, which could be implemented utilizing Neural Networks (NN). In "Genetic Algorithms and Artificial Life" the authors provide an overview of how a chromosome could be represented by a string of bits. 6 This proved to be an important building block during the implementation of my experiment to represent the traits in the Genome of an Agent. 5 Cecconi, Federico, Filippo Menzer and Richard K. Belew, Maturation and the Evolution of Imitative Learning in Artificial Organisms (San Diego: University of California 1995) 2. 6 Melanie Mitchell and Stephanie Forrest Genetic Algorithms and Artificial Life (Artificial Life 1, 1987 : 267-289) 267. 8

While searching for ways to formulate how the Agents would be selected I came across an article with the same name as the one discussed above by Ziv Kedem in which the author points out a possible way to determine the fitness of an Agent. 7 The fitness is then used in selecting the top contender in the population. Ironically another idea for the implementation and future directions for the project came from an article on simulating market dynamics. 8 The article looks in to ways to demonstrate the effect of the buyer's psychology on the market. For example, a case when someone buys something based on the fact that their friend bought a similar product can have a resounding effect on a group of individuals governed by such a behavior while making purchases While I was not interested in the market effects of AL Agent's behavior, this steered me to wonder about the group dynamics of several individuals placed in an environment and attempting to achieve a common goal as opposed to fighting with each other. While there were a number of other "snippets" which helped me in shaping the final idea for what I wanted to do, the final piece was the web page for Breve environment. 9 While there is no wealth of articles on the site itself, through the generous examples and some of the linked articles I was able to get a very good 7 Ziv Kedem Genetic Algorithms and Artificial Life (Tel Aviv: Tel Aviv University 1998) 2. 8 Marco A. Janssen and Wander Jager, Simulating Market Dynamics : Interactions between Consumer Psychology and Social Networks (Bloomington: Indiana University, 2003). 9 Jon Klein The Breve Simulation Environment . 9


While there is no wealth of articles on the site itself, through the generous examples and some of the linked articles I was able to get a very good understanding of what the environment could do for me and what I could produce to help me in conducting my experiment.

Environment Background

During the examination of a number of environments in which the authors have tried to simulate an emerging and evolving population of agents/creatures, one thing became increasingly obvious: they all involved some sort of board-like environment with agents represented by rudimentary structures trying to crawl around and not die. That being said, I attempted to start an implementation of my own environment which would act in the manner described above. But after having to re-learn Java and struggling to create a simple Graphical User Interface (GUI) to display the results, I gained much more respect for those who have succeeded in even the simplest of functioning implementations. A section on my attempt to implement the environment from scratch can be found in Appendix A.


CHAPTER 3

SIMULATION DESIGN

Idea Formulation

The underlying premise for my idea is similar to numerous others: simplistic agents scour the environment, which is represented by a board-like structure, and perform simplistic tasks while simulated evolution picks the fittest ones for the job and for future generations. However, instead of having a single entity cross the field and get killed, imagine a Super Agent which is capable of spawning some number of smaller Sub Agents. The initial idea I had was as follows:

1. Instead of risking the Super Agent to achieve a goal, the Super Agent is broken up into Sub Agents. Losing a Sub Agent constitutes some structural damage to the Super Agent, but does not completely eliminate it. This is different from the concept of the Hive, discussed later, where the Hive is nothing more than an abstract representation of a hangar, per se, from which the Sub Agents are launched.

2. Each Super Agent can only generate a limited number of Sub Agents.


3. The board can be represented as either a 2D or a 3D collection of locations (identical in basic properties, i.e., for now keep the discrete size of each board location equal).

4. The environment can contain a number of entities other than the Agents, such as sources of energy or food and randomly introduced viruses.

One of the premises for such a simulation would not be the good guy versus the bad guy, but instead the prevalence of the more creative opponent.

The Super Agent

The goals of the Super Agent would include:

1. Exploring the board/field.

2. Reaching a certain location.

3. Possibly resource domination.

4. Perhaps searching out allies and potential enemies.

These possible tasks would allow for exploration of the behaviors of the Sub Agents and for observing how their simple knowledge functions would allow the Super Agent to achieve a goal more or less efficiently.


Figure 1: A sketch for the Super Agent idea.

The Sub Agent

Initially the Sub Agent was to act as the basic building structure, or block, of the Super Agent; however, after a while it became obvious to me that a simplistic life form would be needed to represent any of the participants in the simulation.


One of the questions I had pondered for a while is: what if the knowledge is embedded in each simple creature's components or body parts, as opposed to having a central repository for that knowledge? This brings us to Strings and Nodes. No matter how simplified one tries to be in representing the basic entity, it always becomes too complex, and it seems almost impractical to attempt to cram all of the capabilities into one unit, whatever it is.

Figure 2: A sketch for the Sub Agent idea.

Imagine each Node being capable of everything that a Sub Agent is capable of, based on the Genes contained in the Genetic Pool for the species. However, past a certain level of genetic development, the Node is no longer capable of carrying additional genetic information. For example, it could only have three traits (sight, hearing, the ability to crawl) and a limited amount of memory.


Therefore, mutation will displace existing traits, which could improve the survival abilities of that Node or lead to its extinction. Of course, this is just one possible scenario. At this point we bring the Strings in. A String gives any Node the ability to connect to another Node, with some interesting side effects. Think of the String as a communication channel and a physical link. Two or more Nodes connected via a String or a collection of Strings can discover each other's abilities. This should prevent Nodes from wasting their limited internal resources on acquiring new genetic traits if other Nodes they are connected to via a String already carry those traits.

Let's develop this idea further. A Node which is part of a group of Nodes connected via Strings is attacked by a virus and does not have immunity to it. At this point that Node should send a distress signal to the other Nodes, and the connecting String should rupture or destroy itself instead of allowing the virus to propagate. Depending on the virus's abilities, it could still pursue the second Node, or sit dormant waiting for another host or target and expire after some time. Such behavior by the Agents could be viewed as crude communication (the String's rupture could be viewed as a distress signal, which is communication).


Various scenarios for sicknesses and viruses could be examined: attacking only the immediate Node, propagation of the disease via the String to the whole Sub Agent, killing the infected Node and then propagating to every Node connected by the String, along with simulation of fast versus slow viruses. Another example would be a virus taking over control of the host Sub Agent and masking itself as one of the friendly entities, attacking other Sub Agents after connecting to them via the Strings and thus growing in size. Eventually the virus masking as a Sub Agent could find its way back to the Super Agent. The infected Super Agent would either eventually vanish or mutate into something undefined by the original parameters of the simulation. The scope of such a simulation is not to favor the good or the bad guy, but simply to attempt to simulate the uninhibited behavior of a system of life-like creatures/agents. Some of the topics to examine would be:

1. Can the Sub Agent learn agility? For example, if the board has a location perceived as dangerous, can a Sub Agent comprised of two Nodes and a String learn to reach or jump over the obstacle instead of taking the longer route of crawling around it?

2. By losing a Node which lands on the dangerous location of the board, can the Sub Agent stop other Nodes from traveling to that location?

3. Can it also learn how far it can stretch before the String breaks? Such an event could constitute the Sub Agent's death or injury, upon which the two separated Nodes would either start acting as independent Agents or, perhaps, try to reconnect.


Another possible direction to take with the String-based Agents is the examination of how such agents would move based on some simple rules:

1. The Agent is stable as long as the Nodes are as far away from each other as possible (similar to a molecule, but with some adjustments, perhaps, to make the Agent's behavior more applicable to this project).

2. The String connectors are flexible but will break if over-stretched.

3. The Nodes can be detached to seed new Sub Agents, or new Nodes can be spawned if favorable conditions exist (of course nothing is for free: the old Nodes will lose some weight and will have to replenish their nutrition).

4. Multiple Sub Agents can re-combine into new Sub Agents.

Part of the reasoning behind the String-Node approach is that every Node would carry only a certain Gene, and molecule-like structures would be formed to select the best configuration of the Sub Agent for the environment.

Concurrency and Time

It became obvious early on in the project that simply checking on each Sub Agent in turn would be inefficient, so each Sub Agent would have to be a thread or behave in a thread-like manner. Basically, time would have to be real, as opposed to taking steps every time a loop iterated, for example.


Agent Evolution

After some experimentation and reading on the subject, it became more and more apparent that I would have to implement some sort of process to select Agents out. While I was not able to implement the environment from scratch, as discussed previously, the original idea for selecting the Agents was still applicable. Interestingly enough, after sketching down my ideas and later reading material regarding GA programming, there were some undeniable similarities.

Figure 3: The Selection Cycle. (The sketch shows a Pool of Genes feeding Genetic Selection within the Pool, producing a Sub-Agent with properties that tries to get evaluated for fitness; on a pass the gene is kept with some percentage/rating, on a fail the gene recedes.)

After reading some GA introductory papers, I realized that the above sketch presents the basic premise for the operation of most Genetic Algorithms. I will elaborate further on the details of the implementation of the selection of Agents; the sketch below also illustrates the cycle in code form.
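To make the cycle in Figure 3 concrete, the following is a minimal Python sketch of that generic selection loop. It is an illustration only, not the steve/Breve code used in this thesis, and the gene pool, the random fitness function, and the pass threshold are assumed values chosen for the example.

import random

POOL = ["sight", "hearing", "crawl", "memory", "speed"]   # assumed starting pool of genes

def spawn_agent(pool, genome_size=3):
    # Genetic selection within the pool: draw a genome for a new Sub Agent.
    return random.sample(pool, genome_size)

def fitness(genome):
    # Placeholder evaluation; in the actual simulation the Agent's remaining energy plays this role.
    return sum(random.random() for _ in genome)

def selection_cycle(pool, population_size=10, pass_threshold=1.5):
    kept, receded = [], []
    for _ in range(population_size):
        genome = spawn_agent(pool)
        score = fitness(genome)
        # Pass: the gene combination is kept with its rating; fail: the gene recedes.
        (kept if score >= pass_threshold else receded).append((genome, score))
    return kept, receded

kept, receded = selection_cycle(POOL)
print("kept:", len(kept), "receded:", len(receded))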


My initial "plan" was to provide a pool of genes allowed for a particular species, which of course could be criticized as limiting the possibilities of evolution. However, the pool would only be the starting point: new genes could be added to it as they were created via crossbreeding or mutation. Such an approach may seem unrealistic, since one could argue that in natural environments there is no repository of all the possible genes for a species. For one, this is not the real world, and I needed a way of tracking all of the possible iterations of Agent Genes. Initial agents would perform an incidental search for Genes from the Gene Pool. Later offspring could have mutated Genes. The mutation could occur over several generations or be instantaneous; an instantaneous mutation would represent a catastrophic change in the environment. This approach puts any researcher on slippery ground, as I found from reading various documents on the topic of genotypes versus phenotypes.

One issue that started to interest me more and more is how the propagation of learned knowledge to future generations is facilitated. One future consideration would be to study the propagation of acquired knowledge from parent to offspring through imitation or a direct memory implant. This is where an NN implementation could prove useful as a simulation of real-life memory and learning abilities.
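Returning to the gene-pool idea above, the following Python sketch keeps a registry of every gene variant produced so far, whether it came from the starting pool or from mutation. It is only an illustration of that bookkeeping; the class name and the bit-flip mutation rule are assumptions rather than part of the thesis implementation.

import random

class GenePool:
    # Registry of every gene variant seen so far (starting pool plus new variants).
    def __init__(self, starting_genes):
        self.known = set(starting_genes)

    def register(self, gene):
        # New genes created via crossbreeding or mutation are added to the pool.
        self.known.add(gene)
        return gene

    def incidental_search(self):
        # Initial agents draw a gene at random from whatever the pool currently holds.
        return random.choice(sorted(self.known))

def mutate(gene, catastrophic=False):
    # A gradual mutation flips one bit; a "catastrophic" mutation rewrites the whole gene.
    if catastrophic:
        return "".join(random.choice("01") for _ in gene)
    i = random.randrange(len(gene))
    return gene[:i] + ("1" if gene[i] == "0" else "0") + gene[i + 1:]

pool = GenePool({"0101", "1100", "0011"})
child = pool.register(mutate(pool.incidental_search()))
print("pool now holds", len(pool.known), "gene variants; newest:", child)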


Gene Representation

The gene representation, for the sake of simplicity, should be in the following format:

Sex
011

Table 1: Simplified representation of the Agent's Gene.

However, this yields some issues: the representation is too oversimplified, and there is not enough resolution to represent the granularity of each trait/gene. For example, having only the values 0 (off) and 1 (on) to represent memory would only imply having or not having a memory, but would not specify the level of its development. So I propose the following gene representation for more flexible and, hopefully, more realistic levels of representation:

Sex   Sight   Speed   Jump   Memory
01    010     010     010    0110

Table 2: Refined representation of the Agent's Gene.

The traits would be defined as follows (keep in mind that everything is subject to change with each person's interpretation):


1. Sex: 00 is female, 10 is male, and 11 is asexual.

2. Sight: 000 is blind, 111 is extremely good sight, and everything else is in between.

3. Speed: 000 is the slowest and 111 is the fastest.

4. Jump: again, 000 is the smallest and 111 is the highest.

5. Memory: this is still subject to change at this point, but, for example, a convention of 2 could mean no memory and 21212121 the highest memory capacity (this could be used for remembering locations visited, etc.). A sketch of encoding and decoding such a gene string is given below.

Before I designed a concept which would not be possible to implement, I decided to explore what tools I could use to perform the desired experiments. First I will discuss several environments which I came across, and then I will follow with a recollection of the successes and modifications made during the implementation, as well as some of the lessons learned and recommendations for the future.
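As an illustration of the refined layout in Table 2, here is a minimal Python sketch that packs and unpacks the Sex, Sight, Speed, and Jump fields of a gene string. It is only a sketch of the encoding idea, not code from the thesis; the Memory field is left out because its convention was still undecided.

# Field widths follow Table 2: Sex (2 bits), Sight (3), Speed (3), Jump (3).
FIELDS = [("sex", 2), ("sight", 3), ("speed", 3), ("jump", 3)]

def encode(traits):
    # Pack a dict of integer trait levels into one bit string.
    return "".join(format(traits[name], "0" + str(width) + "b") for name, width in FIELDS)

def decode(gene):
    # Unpack a bit string back into named trait levels.
    traits, pos = {}, 0
    for name, width in FIELDS:
        traits[name] = int(gene[pos:pos + width], 2)
        pos += width
    return traits

gene = encode({"sex": 0b01, "sight": 0b010, "speed": 0b010, "jump": 0b010})
print(gene)          # 01010010010
print(decode(gene))  # {'sex': 1, 'sight': 2, 'speed': 2, 'jump': 2}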


CHAPTER 4

ENVIRONMENTS AND TOOLS EXAMINED

Breve

Breve is an OpenGL-based open-source environment which has a number of useful features built in.10 After some initial evaluation, I chose it as my tool of choice. The reasons will become apparent as I discuss the implementation of the final project. The environment provides the ability to build simple and relatively complex 3D or 2D objects with relative ease, while enabling significant physics capabilities such as collision handling and realistic physical behavior. The author has also been designing and implementing capabilities for genetic programming and simplistic neural-network features. Tasks which would take a significant amount of effort to program from scratch take a few lines of code. The underlying platform is complex and has taken the developer a significant amount of time to produce. This is part of the reason why the functionality is outstanding. At the same time, there are some limitations worth mentioning.

10 Jon Klein, The Breve Environment Web Page <http://www.spiderland.org/breve/>.


The environment has no issues running on newer machines: a 2+ GHz processor and 512 MB of Random Access Memory (RAM), along with a reasonably new graphics card, are highly recommended for almost flawless performance. However, even with newer hardware there are some performance issues which seem to arise out of some of the graphical features or the calculations needed to establish the behavior of the Agents. It was hard to judge, but, for example, in some cases where Agents got closer to each other the simulation would slow down until the Agents spread further apart. Also, if you are planning to implement more complex entities, which require more calculations to function in the environment, spawning a larger number of such entities will probably result in slower performance. That being said, Breve is an outstanding platform, as it takes care of a lot of the concurrency, physical simulation, and even some of the concepts of GA and NN programming, while providing tremendous ease of use and a short learning curve compared to a lot of the alternatives. I would recommend it as part of an AI curriculum, as it could prove a valuable tool for learning the concepts by translating the theory into simple, interactive, and engaging graphical demonstrations. The only recommendation I would have is perhaps streamlining some of the graphical components to allow for better performance on lower-end hardware, which may no longer be of concern to a lot of the potential users with newer hardware.


Before the completion of this project the author released a new version of Breve with some impressive add-ons for Genetic Programming, which shows that development is not stale, and suggestions could be made to extend some of the GA and NN facilities to make for an extremely useful teaching and development platform.

Swarm

While Swarm11 is also a multi-agent system capable of handling concurrent execution of tasks, I was promptly swayed towards Breve due to the ease of use and short learning curve which Breve had.

Framsticks

Framsticks12 is an environment which I, unfortunately, discovered towards the ending phase of my implementation with Breve. It offers some interesting features which paralleled some of my hypothetical suggestions for representing the Sub Agents as multi-node creatures connected via Strings, where the final composition of the creature would represent the evolution which it has endured. Framsticks allows for creating creatures with various sensory and physical abilities, which are represented by actual physical components such as muscles, touch or smell sensors, etc.

11 Paul Johnson, The Swarm Development Group.

12 Maciej Komosinski and Szymon Ulatowski, Framsticks - Artificial Life - 3D Evolution and Simulation.


However, since my final project did not reach the level of complexity I envisioned, due to lack of time and the ambiguity of the task I undertook, Framsticks is something that should be explored in the future to further the capabilities of the Sub Agents as originally envisioned.

The above environments are a good representation of what is currently available as far as free tools for Artificial Life and Complex Systems simulations go. I am sure there are numerous others in the works which I have missed; however, Breve and Framsticks really shine, since they are kept updated and provide a professional yet simple-to-follow API to their features, while allowing the creation of a rather complex final product.


CHAPTER 5

SIMULATION IMPLEMENTATION

Simple Agent Implementation

Having used Breve for a short period of time, it became apparent that I should have been using the environment from the beginning. After only a day I had an implementation with two Agents moving in the environment, while I had still been unable to get to this point when attempting to create the environment from scratch. The initial implementation was rather simplistic: the locations and directions for the Agents were chosen randomly at runtime. This was a good lesson in not reinventing the wheel... The next step was to actually implement a group of Agents with intelligence and a goal, to be used in my experiment.


Figure 4: Initial simplistic Agents in the Environment.

One of the first issues I ran into was keeping the Agents on the board. This was a simple issue which caused me a lot of headache: before too long the first implementations of the Agents were wandering off into the vast space without any controls to bring them back. The problem was later addressed by using a drifting function adapted from code by Lee Spector and Jon Klein, sketched below.13 However, at this time I had the first piece of "realism" built into them: after each iteration of the simulation their life energy was decremented by a unit, and once it reached zero they died and were removed from the board.

13 Lee Spector, Evolutionary Dynamics Discovered via Visualization in the BREVE Simulation Environment.
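For readers unfamiliar with the drifting approach, the idea (as used in the Agents.tz listing in Appendix B) is to move a fraction of the remaining distance toward the target each iteration and to snap to the target once close enough. The following is a minimal Python sketch of that behavior over a simple three-component position; it is not the steve code itself, although the 0.06 rate and the 0.001 snap threshold mirror the values in the listing.

def drift_step(position, target, rate=0.06, snap=0.001):
    # Move a fraction of the remaining distance toward the target; snap when very close.
    delta = [t - p for p, t in zip(position, target)]
    distance = sum(d * d for d in delta) ** 0.5
    if distance < snap:
        return list(target), True                     # arrived: stop drifting
    return [p + rate * d for p, d in zip(position, delta)], False

pos, arrived = [0.0, 0.0, 0.0], False
while not arrived:
    pos, arrived = drift_step(pos, [5.0, 0.0, 3.0])
print(pos)  # effectively at the target after enough iterations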


Next we need something to feed the agents with and to make them compete for resources. Typically there is some sort of resource in most of the simulations which I have looked at. One that caught my attention in particular was work done by Lee Spector and Jon Klein.14 The refined version of the code is now one of the demos in the newest release of Breve. In this case there are two Feeder platforms which drift around at random times. I have used some of the code from their implementation to simulate the Feeders in my implementation and to provide for smoother movement of my Agents.

Feeders

Feeders are the sources of energy and have a limited supply of it: an Agent which comes in contact with a Feeder will absorb energy in some pre-set increment. For future implementations the amount of energy which the Agent can carry off could be evolved as well... Once the Feeder's energy supply is exhausted it will vanish. This is easily achieved by simply using the following command:

free self.

14 Lee Spector, Evolutionary Dynamics Discovered via Visualization in the BREVE Simulation Environment.


The first step was to get the Feeders up and floating around. These represent our resources, but instead of simply having a square designated as "food", this adds a fun dimension to the simulation: the food resources actually move around and so represent more of a challenge to discover and make contact with. The simulation built by Lee Spector and Jon Klein demonstrated the behavior of evolving swarms. There are two groups which are allowed to roam a 3D field on which there are two Feeders. After a while the competing groups start to exhibit particular behaviors which are more than just random.15 This proved to be a great stepping-stone for my simulation. Although the only code I re-used was that used for smoothly drifting the moving objects, in the case of both the Feeders and the Agents, the general look and feel of their simulation greatly helped me formulate what I wanted to do in my own.

Further Refinement and Readjustment of Goals

At this point I realized the ambiguity and ambition of my goals and had to re-formulate my original intent for the implementation.

15 Lee Spector, Evolutionary Dynamics Discovered via Visualization in the BREVE Simulation Environment.


Originally I had planned to have two Super Agents, each of which would spawn some number of Sub Agents, which would achieve a goal and evolve; therefore, the Super Agent with more fit or crafty Sub Agents should prevail. This was only the first layer of the flora and fauna to be represented in the environment. The next layer would include the resources, viruses, and obstacles. I quickly realized I had embarked on an ambitious, ill-defined task which would be impossible to finish within my current constraints of time, resources, and skills. So I changed the goal of the experiment to the following.

Part of the original intent for the experiment was to examine the genetic selection of the optimal structure of the Agents, i.e., the number of Sub Agents which are a resource of the Super Agent, their sensory abilities, and other characteristics, such as mobility, size, life expiration, etc. For example, one could start out by allowing the Super Agent to spawn an unlimited number of Sub Agents, but with some taxing parameter which could be used in the evaluation function. A good median should be established after a while; for example, it is better to have fewer Sub Agents which are faster and have better vision, as opposed to four times as many Sub Agents which are slower and blind and get nowhere. However, I had to settle for the following setup:

1. The Agents are spawned from the Hive, which is stationary and really has no use other than initiating the Agents on the Board.


2. The Agents have some traits which affect how they move around the board, and those traits are drawn from each Agent's individual Genome. All of the Agent Genomes have the same traits, which are set within a specified range using the random number generator.

3. The Agents have a limited amount of initial energy and lose energy while moving or fighting with other Agents, but gain energy from encounters with Feeders.

4. The two population members with the most energy are bred together, replacing the worst.

5. The Feeders are not replenished upon exhaustion and will be removed from the board.

Getting the Agents to Eat

One of the first simple tasks an Agent had to do was to eat, and to die if its energy supply was exhausted. This is something that may seem trivial in initial planning, but it turned out to be a nightmare once I tried to implement it. One of the issues was to have a way of learning whether an Agent came in contact with another object or Agent in the environment. After starting to implement the simulation in Breve this was no longer a problem; it was achieved by simply using the following line of code:

self handle-collisions with-type "Feeders" with-method "eat".


Figure 5: Two Agents on the board with the Feeders.

At this point the Agents and the Feeders were capable of moving around in a random fashion, but every time an Agent comes in contact with any of the Feeders on the board it obtains some amount of energy from the Feeder. At the same time the Feeder loses the same amount of energy. Any object whose energy falls below zero is removed from the simulation via the free method. The sketch below summarizes this energy bookkeeping.
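The following Python sketch summarizes the energy exchange just described: an eating collision transfers a fixed increment from Feeder to Agent, and anything whose energy drops below zero is removed. The increment of 1 mirrors the eat method in the Agents.tz listing; the class and function names are otherwise illustrative assumptions.

class Entity:
    def __init__(self, name, energy):
        self.name, self.energy = name, energy

def eat(agent, feeder, increment=1):
    # On an Agent/Feeder collision, energy moves from the Feeder to the Agent.
    feeder.energy -= increment
    agent.energy += increment

def sweep(entities):
    # Anything whose energy has fallen below zero is "freed" from the simulation.
    return [e for e in entities if e.energy >= 0]

agent, feeder = Entity("agent", 10), Entity("feeder", 2)
world = [agent, feeder]
for _ in range(3):           # three eating collisions exhaust the Feeder
    eat(agent, feeder)
world = sweep(world)
print([(e.name, e.energy) for e in world])  # the Feeder is gone, the Agent gained 3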


Agents (09076B60) 2843
Agents (09072A00) 430
Agents (09076B60) 2874
Agents (09072A00) 461
Agents (09076B60) 2905
Agents (09072A00) 492
Agents (09076B60) 2936
Agents (09072A00) 490

Figure 6: Snapshot of the running tally of the Agents' energies. Notice the fluctuations in energy levels for the same Agent; energy was added upon an encounter with a Feeder.

Figure 7: Only one Agent remains now; the second Agent has run out of energy.


Adding Intelligence to the Agents

The next thing I wanted the Agents to do was to fight. Since this is just a simple simulation, taking away some energy from the colliding Agents simulates the fight. The next step would be to create some sort of rule where the bigger Agent, or the Agent with more energy, would win by taking away more energy from the smaller Agent; however, that is a recommendation for the future.

Physics of the Simulation and Collisions

Breve has a number of built-in physics abilities which allow for relatively realistic handling of collision velocities and acceleration, as well as joint simulation. However, I was not able to figure out the best way to utilize a value for acceleration to set the acceleration abilities of an Agent, thereby imitating agility (how fast an Agent can switch directions and get to a potential source of food or avoid a foe). The collision handling is elegant in its syntax and the setup involved:

self handle-collisions with-type "Feeders" with-method "eat".
self handle-collisions with-type "Agents" with-method "fight".


The first line points to the method which handles collisions with objects of type Feeders, and the second line handles collisions with the type Agents. Of course, I had to implement the functions eat and fight; however, compared to having to handle the physics from scratch, this was trivial. For some reason, at first the Agents would handle the collisions with Feeders very gracefully, yet I could not use the collision handler to recognize a collision with another Agent. So I tried to use the neighbor-finding abilities and set some sort of minimum distance which would dictate when a collision had occurred with another Agent. After additional tweaking the collisions with Agents were recognized, and collisions with both the Feeders and other Agents were gracefully handled, so I scrapped the idea of searching for the possible neighbors.

Neighbors

Handling neighbors is equally simple:

self set-neighborhood-size to vision_range.

The above code essentially sets how far an Agent can "see" and was perfect for simulating the sensory abilities of an Agent.


Instead of turning the process of finding an object's neighbors into a procedure with O(n^2) cost, Breve has a built-in update-neighbors method which allows each object to re-discover its neighbors at each iteration. This works rather well in a smaller simulation such as mine, but in larger-scale environments one would still have to perform a search over larger vision ranges, and so a larger computational penalty would still hold. Now the Agent is able to find the Food and other Agents around it if they are within its field of "view." One small issue which took me a while to figure out is that the list of neighbors found also included objects of the Stationary type, which is used to represent immobile objects such as the floor. In the initial versions the Agents would keep going into the floor, and I could not figure out why until I traced the finding of each neighbor and found the Stationary type amongst them. At this point I had to modify the collision response functions to exclude Stationary types as potential targets, which fixed the problem. However, this led me to another interesting possibility to be considered in the future: evolving Agents could camouflage themselves by adopting, e.g., the surrounding colors...
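To show what such a neighborhood query amounts to conceptually, here is a brute-force Python sketch of a neighbor search within a vision range, with Stationary objects filtered out as described above. It illustrates the quadratic-cost search that Breve's built-in neighbor handling packages up; it is not the steve code, and the dictionary representation of objects is an assumption made for the example.

import math

def neighbors_within(agent, others, vision_range):
    # Return every non-Stationary object within the agent's vision range.
    found = []
    for other in others:
        if other is agent or other["type"] == "Stationary":
            continue  # skip self and immobile objects such as the floor
        if math.dist(agent["location"], other["location"]) <= vision_range:
            found.append(other)
    return found

world = [
    {"type": "Agents", "location": (0.0, 0.0, 0.0)},
    {"type": "Feeders", "location": (2.0, 0.0, 1.0)},
    {"type": "Stationary", "location": (0.0, -1.0, 0.0)},   # the floor
    {"type": "Agents", "location": (9.0, 0.0, 9.0)},
]
print(neighbors_within(world[0], world, vision_range=5.0))  # only the nearby Feeder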


Designing the Agent Goals

Initially, for simplicity's sake, I planned to test the Agents on achieving a common group goal, such as accumulating and bringing back the highest overall cumulative energy resources. As simplistic as this may sound, the simulation gets more and more complex with even the simplest considerations built in. Here is the general premise for the simulation as it stands right now:

1. Each Agent is able to feed from any resource.

2. All Agents fight with each other.

3. Any Agent whose energy reaches zero is removed.

The overall goal is to examine which combination of traits will allow an Agent to survive the longest. A Genetic Algorithm is used to select the Agents which are the best performers, based on their energy levels.

Design and the Use of Traits

After going back to the idea of representing the Agent Gene String with binary numbers, I refined it to simply use decimal values to represent the intensity of each trait. I used float values (int could be used with similar effectiveness) to simulate:

1. Vision.

2. Agility (attempted).

3. How much memory the Agent has (attempted).

4. How social and aggressive the Agent is.

Of course, other traits could be tried, but the idea remains the same.


Breve provides the ability to store the Genome information in XML format and to retrieve that information from previously stored XML files. While I only made an attempt at collecting and saving the data in XML format, perhaps in the future work will be done to use this data for a more comprehensive comparison of the starting and final versions of the Agent Genomes.

Refinement and Implementation of the Agent Genome

The trait values were represented by the variables in the following table, where upon initialization of the Agent Genome each value was created randomly within the preset range.

trait_max (float)              The maximum to be used in the random number generator for trait values.
vision_distance (float)        This value will simulate how far the Agent can see.
acceleration_ability (int)     This will simulate how agile the Agent is.
aggression_index (float),      These will allow simulating the aggressiveness
coward_ness_index (float)      of the Agent.
socialize_index (float),       These will simulate the group behavior of an
individualistic_index (float)  Agent.

Table 3: Traits of the Agent.

Further, the Agent Genome would provide the facilities for mutating particular traits based on the value of a random number generator, along the lines of the sketch below.
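As a rough picture of how such a trait genome behaves, the following Python sketch initializes each trait randomly within a preset maximum and re-randomizes a random subset of traits on mutation, mirroring the role Agent_Genome.tz plays in Appendix C. The class name and the exact mutation rule here are illustrative assumptions, not the steve implementation, which instead uses ranges of a single random draw to decide which trait group changes.

import random

class TraitGenome:
    # Float-valued traits drawn randomly within [0, trait_max].
    TRAITS = ["vision_distance", "acceleration_ability", "aggression_index",
              "coward_ness_index", "socialize_index", "individualistic_index"]

    def __init__(self, trait_max=10.0):
        self.trait_max = trait_max
        self.values = {t: random.uniform(0, trait_max) for t in self.TRAITS}

    def mutate(self):
        # Pick a random subset of traits and give each a fresh random value.
        n = random.randint(0, len(self.TRAITS))
        for trait in random.sample(self.TRAITS, n):
            self.values[trait] = random.uniform(0, self.trait_max)

genome = TraitGenome()
genome.mutate()
print(genome.values)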


At this point the whole project became a rather interesting lesson in how to simulate the behavior of an Artificial Creature and how to evolve that behavior. Even after the Agents started behaving in a fashion which had some resemblance to the real world, I was not able to clearly formulate the best way of using the traits to drive the Agents' behavior. Nor was I able to come up with the best way of mutating the traits once the simulation was running. One of the issues was my inability to provide neutral handling of the trait values: I did not write the functions for adjusting the traits in such a way that no particular trait would be favored, and therefore some trait always dominated the final evolution of the objects in the simulation.

The Agent Genome would also provide the ability to access and modify particular traits at run time. This was familiar and easy-to-use functionality, as the Steve language, which is used when working with Breve, has relatively straightforward syntax and is object-oriented.

Algorithm Used

Although initially I planned to have creatures which learned and achieved goals, I was never able to formulate how they would be selected. After some reading and examination of other projects of a similar nature, it became more apparent that a Genetic Algorithm might provide the facilities to simulate the selection process.


The results, which will be demonstrated later, were rather interesting. The basic algorithm for the selection is as follows:

1. A fixed number of Agents is spawned within the Hive.

2. Each Agent is impregnated with a randomly initialized Genome for:

   a. Vision (to simulate the sensory abilities).

   b. Agility (this part still needs refinement).

   c. Social behavior. This could also be refined, but the general traits introduced are the likelihood of going over to an object within the Agent's vision and the tendency to follow its own kind.

3. The Agents then go through the iterations of the simulation, where they can move (with some penalty), eat, or fight with each other.

4. The energy of each Agent is used to sort the population in descending order.

5. Following the sort, the top two members of the population are bred together and their child replaces the last member in the list.

The algorithm described above was the first implementation which showed some promise, and it allowed me to become familiar with a simple Genetic Algorithm implementation as well as the tools available in Breve to support GA programming. A condensed sketch of this selection loop follows the Trait Mutation section below.

Sorting the Agents

The sorting of the Agents is performed with the following line of code:


sort agents with compare-energy.

The compare-energy function returns the difference between two values of energy. We now have the Agents positioned from the most "healthy" to the most "sick." The top population members can now be bred.

Breeding of the Agents

The breeding is performed using the one-point crossover method, which is actually built in to the Genome class in Breve.16 The operation is performed in the following manner:

agents{0} breed with agents{1} to-child agents{population_size - 1}.

Now the top two members produce a child, which is then used to replace the worst performer. Interestingly, the code is dynamic enough to allow two Agents to breed and then replace the second one with its own child.

Trait Mutation

The next item, which took me some time to learn, was how to handle the connection between the Agent's traits and the new mutations of its genome.

16 Robin Biesbroek, Crossover.


While I was able to successfully "impregnate" the Agents with the traits from their respective genes during initialization, the traits did not seem to change following mutation. I fixed this by having an update method which reads in the Agent's Genome information at every iteration. While this may prove wasteful in terms of computing resources for a larger-scale simulation, in this case the performance was not perceivably affected. While in the future it may make more sense to perform the mutation of traits based on certain criteria, such as the amount of time which has passed, in the current implementation the mutation is performed at every iteration as long as there is more than one Agent on the board. The mutation is performed after the members of the population have been sorted according to their energy values. Every member of the population is mutated except for the top specimen.
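Putting the sorting, breeding, and mutation steps together, here is a minimal Python sketch of one iteration of the selection scheme described above: sort by energy, breed the top two members with one-point crossover, replace the worst member's genome with the child, and mutate everyone except the top specimen. It is a conceptual rendering of the controller logic, with illustrative genome and mutation functions; it is not the thesis's steve code, where the crossover is provided by Breve's Genome class.

import random

def one_point_crossover(parent_a, parent_b):
    # One-point crossover over list-valued genomes, reproduced here for illustration.
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(genome, trait_max=10.0):
    genome = list(genome)
    genome[random.randrange(len(genome))] = random.uniform(0, trait_max)
    return genome

def selection_step(population):
    # population: list of dicts with "energy" and "genome" keys.
    # 1. Sort from most "healthy" to most "sick" by energy.
    population.sort(key=lambda agent: agent["energy"], reverse=True)
    # 2. Breed the top two; the child replaces the worst performer's genome.
    child = one_point_crossover(population[0]["genome"], population[1]["genome"])
    population[-1]["genome"] = child
    # 3. Mutate every member except the top specimen.
    for agent in population[1:]:
        agent["genome"] = mutate(agent["genome"])
    return population

population = [{"energy": random.randint(0, 100),
               "genome": [random.uniform(0, 10) for _ in range(6)]} for _ in range(4)]
print(selection_step(population)[0]["energy"])  # energy of the current best member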


CHAPTER 6

RUNNING THE SIMULATION

Below is a snapshot of one of the first successful runs of the simulation with population mutation and sorting working.

Agent 1 10268
Agent 2 9982
Agent 3 9949
Agent 4 9949
mutated item 18 of Agent_Genome (052CFCD8)
mutated item 13 of Agent_Genome (052E8A90)
mutated item 26 of Agent_Genome (052CC0C0)
53
Agent 1 10267
Agent 2 9981
Agent 3 9948
Agent 4 9948
mutated item 26 of Agent_Genome (052CFCD8)
mutated item 10 of Agent_Genome (052E8A90)
mutated item 17 of Agent_Genome (052CC0C0)
54

Figure 8: Simulation output sample.

Four Agents would be spawned from the Hive, and they would proceed to move around the board based on the vicinity of the Feeders or other Agents. Interestingly, in some of the runs the Agents would increase their energy above what they were originally spawned with.


Since the first versions of the simulation did not have proper handling of updating the Agent's Genome after each mutation had occurred, I believed that to be a fluke or an erroneous implementation. Upon fixing the errors and performing further runs, this turned out to be more than just a fluke. Since the behavior of the Agents was slightly more than random and the movement of the Feeders was random, the selection of the best specimens to the top and the preservation of their Genomes would later prove to be rather effective, as the results will show.

Successful Runs

Although any number of runs could be cited, the following results show what became a trend during the majority of runs. The graphs plot the best member of the population versus the lowest member of the population.

Run 1

A good number of repeated runs would exhibit this result, where both the best member and the worst member start off with equal amounts of energy; however, very rapidly there is a significant spread in the subsequent energy values. Please recall that an Agent loses energy every time it moves or collides with another Agent, and it gains energy every time it collides with a Feeder.


Another thing to note is that for all of the presented cases the starting number of Agents was nine and there was no breeding to increase the population, which could be another future consideration. The breeding was only done to replace the worst member of the population.

Figure 9: Energy chart for the best Agent and the worst Agent (energy values for the best and worst members, plotted against iteration).

The interesting events to notice on this graph are the step-ups in the value of energy for the worst member. The value jumps every time the worst member loses all of its energy and is dropped from the simulation; the next worst member then takes over as the plotted value and the cycle continues.


The best member of the population would consistently prevail; however, it seemed that no better children were bred. I do not have enough evidence to refute or prove that, other than the continuity of the graph for the best energy value. Essentially, I did not have enough tools built in to my code or the simulation to make a distinct connection between the initial best member and the final surviving Agent. This is something that could, perhaps, be implemented in the future.

Run 2

In this case an interesting event occurred: it seemed that the worst member was winning out at first. However, at some point selection kicked in and the winning Genome was pushed to the top, providing a much longer lifetime for the Agent at the top. The key thing to note is that while several thousand iterations may not seem like much, this is only due to the nature of how I implemented the simulation. Some number of Agents is mutated, and some number is always moving and colliding with objects, therefore losing energy (or gaining it in the case of a collision with a Feeder). In a fixed-population environment, for one Agent to outlive all others is rather impressive. While the encounters with objects still had some random components to them, this goes beyond simple random luck.


Figure 10: Energy chart for the best Agent and the worst Agent (the annotation on the chart notes that the worst member initially seemed to surpass the best member, but then selection took over).

Below is a screenshot of what the simulation looks like at this point. The small yellow disks (or simply the lighter-colored ones in the case of black-and-white reproduction) represent the Feeders, which are spawned in fixed numbers and move around randomly. The upward-facing cones represent the Agents. The screenshot demonstrates a number of features and issues which emerged during repeated runs and led to some of the modifications discussed earlier.


Figure 11: Agents on the board with Feeders, with potential targets indicated by the lines.

The lines from one of the Agents indicate the objects which it can "see" within its current range of vision. Please note that one of those lines leads to the center of the floor. In earlier implementations, if an Agent was in the vicinity of the center of the floor, which is of Stationary type, and the center of the floor was within its vision range, there was a possibility of the Agent moving to that location, which would look like the Agent sinking into the floor.


In some cases the Agent would "decide" to sit there, and when other Agents came around and "saw" either the center of the floor or the Agent which was sitting there, they would move there as well. This made for a rather boring simulation in some cases. I addressed the issue, as mentioned earlier in the paper, by preventing the Agents from choosing the center of the floor as their target location.

Figure 12: Energy chart for the best Agent and the worst Agent (the annotation on the chart notes that, again, the best member actually started to gain extra energy).

The graph above represents another event worth noting: in some cases the best member of the population would actually increase its energy above what it was spawned with.


As this simulation is rather constricted in its abilities, perhaps in future revisions the top members from each run could be saved and then introduced to tiers of challenges, while their genes would be passed on to spawn a more and more fit population, which could then be used to complete the final challenge...

Run 4

Figure 13: Energy chart for the best Agent and the worst Agent.

Again, in this case the best member of the population gained some extra energy initially, and while the final worst competitor came close to the best in the end, the best gene prevailed.


CHAPTER 7

CONCLUSION

The experiment has shown the possibility of utilizing the computer as a test bench for simulating genetic selection amongst Artificial Life creatures. While the algorithms and concepts utilized were mere approximations of the actual biological phenomena, the ability to see a graphical and numerical representation within a much shorter amount of time than biological processes would permit really helps in connecting the concepts with real life. The graphical representation of the Agents' energies over the course of the various simulation runs helped me see the significant difference the GA made in enforcing the selection of the strongest individual. While the algorithm is simplified and is only a sequence of controlled and discrete steps, the concept of selection is really underlined and easy to observe. The graphical representation provided by Breve also aided me in understanding the concepts further. The experiment conducted here was of great value to me in understanding a number of concepts in computer science and other sciences, ranging from object-oriented programming and Genetic Algorithms to phenotypes and market dynamics.


This leads me to believe that future investigations into the area of Artificial Life should draw more and more from the various disciplines. At the same time, it is really helpful that some computer scientists have done such a tremendous service to us all by investing their time and talent into providing useful tools such as Breve.

Future Recommendations

While the graphs make it rather obvious that the best-fit individual prevails for longer periods of time, they do not show how introducing the newly bred child affected the re-sorting of the genes. There is no indication that mutation of the genes had any effect at all. Perhaps, due to chance, the best combination of traits was picked from the beginning, and since the best member was protected from mutation it simply stayed at the top without ever being really challenged. As mentioned in the implementation section, Breve provides the ability to output the traits in the form of an XML file, and perhaps this data could be used to provide a better understanding of the exact process of selection throughout the simulation. However, I eventually realized that while I was replacing the Agent Genes of the worst member of the population with the child of the two best performers, the energy level remained the same. In future implementations one should investigate what happens in a population where children are either introduced with reset levels of energy or spawned as unique individuals, therefore creating the potential for growing the population.


In addition, if the runs were sustained for longer periods of time, perhaps the best member of the population and the remaining worst members would eventually converge.

Another issue briefly mentioned was the usage and adjustment of the traits so as to represent the behavior of the Agents in a more realistic manner. From some of the literature I have gathered that this topic is an endless discussion in itself. However, I would like to see further refinement of my behavior functions to a state where one can be certain that the Agent will not evolve in a manner favored by the developer, but instead the simulated evolution will force the changes dictated by the particular combination of environment and goals. Additional refinement could allow for the fitness function itself to be optimized and evolved as part of the simulation. This is where the "Queen Bee" could come in and evaluate the efficiency with which resources are brought back to the Hive. While adding a significant level of complexity, such studies could lead into the area of self-refining code and architecture, adding a higher level of adaptation.

Further, there are some interesting possibilities in the area of shaped Agents. As mentioned in the section discussing the environments which are available, one environment that seemed to be of particular interest is Framsticks. The objects in that simulation can evolve into multi-link entities and can utilize task-specific links to become specialized.


For example, to exercise the smelling sensory ability the creature would have to have a link with a smell sensor on the end. This could be used to further develop my intent to produce multi-body Agents with various capabilities dependent on their intelligence as well as their physical characteristics, which would tightly integrate with further study into the development of particular phenotypes.

Extensions to the simulation allowing for two or more competing groups would enable further study of group behavior, as well as add more factors to fitness-function evaluation and refinement. More complexity could be added to the simulation by making the Agents search for the return path to the Hive and try not to spill the energy which they have to bring back for the Hive to survive. This would make for a more interesting exercise of the Agents' socialization traits. In future work, perhaps the Agents could also learn on the fly, utilizing an NN structure for a brain and memory cells combined with their physical makeup, which would present a more involved development challenge. However, a successful implementation would be an extremely useful study platform for the evolution of much higher complexity creatures and environments. One of the ideas that came to me at the end of the simulation is: what if the Agents evolved to fake the color of the surrounding environment to fool their prey and foes? What if...


APPENDIX

A. Environment Attempt

After effectively wasting about one and a half months' time, and having some simple objects which would proclaim their existence and fake a heartbeat, along with scraps of a GUI which did not talk to any of the aforementioned objects, with help from my advisor I redirected my efforts and decided to utilize someone else's tools for the base framework and instead concentrate on performing an experiment with some meaningful results.

Heart Beat Started!
Agent Tasks scheduled.
(text rendering of the board grid)
Cell (4, 3, 0) values are:
Location: (4 3 0)
Condition: true
Description: Valid
Agent: Agent ID 0
Agent Name: Sub Agent


Agent Time of Birth: Sub Agent
Agent Type: Sub Agent Type
Agent Gene String: 000
Beep! Beep! Beep!
Dead!

Figure 14: Output from the initial implementation.

Figure 15: The simplistic initial GUI implementation (a window with a "Ready to start..." status line and Reset and Stop buttons).


B. Code Listing

The project code consists of: Agent_Genome.tz, Agents.tz, Data_Collector.tz, Feeders.tz, Hive.tz, and Smart_Agents.tz. Agent_Genome.tz contains the implementation of the Agent Gene; Agents.tz contains the implementation of the Agent, while Feeders.tz represents the Feeders. Hive.tz was an attempt at the implementation of the Super Agent, which was later re-defined as the Hive. Data_Collector.tz provides some basic data collection functionality, and Smart_Agents.tz contains the controller for the implementation along with the Genetic Selection Algorithm. To run the simulation, enter the following in the directory containing all the files:

breve.exe Smart_Agents.tz


C. Agent_Genome.tz Listing

# Agent_Genome.tz

@include "Genome.tz"

Genome : Agent_Genome {
    + variables:
        trait_max(float).

        # this will simulate how far the Agent can see
        vision_distance(float).

        # this will simulate how stupid the Agent is
        memory_capacity(int).
        # use later to maybe remember the number of friends or food locations
        # we'll see...

        # this will simulate how agile the Agent is
        acceleration_ability(int).

        # this will simulate the personality of the Agent
        # aggression
        aggression_index(float).
        coward_ness_index(float).

        # group behavior
        socialize_index(float).
        individualistic_index(float).

    # randomize for initialization
    + to init:


        trait_max = 10.0.
        self randomize.

    + to randomize:
        # create the random vision distance
        vision_distance = random[trait_max].

        # create the random memory capacity
        memory_capacity = random[trait_max].

        # create the random acceleration
        acceleration_ability = random[trait_max].
        while acceleration_ability == 0: {
            acceleration_ability = random[trait_max].
        }

        # create the random personality traits
        # aggression
        aggression_index = random[trait_max].
        coward_ness_index = random[trait_max].

        # group behavior
        socialize_index = random[trait_max].
        individualistic_index = random[trait_max].

    # mutate for mutations after spawning
    + to mutate:
        n (int).

        # decide which of the elements in this genome to mutate
        # again for simplicity's sake in this case everything is random
        n = random[30].

        # I initially thought of only changing one quality at a time by
        # limiting the permission to mutate only if n fell between
        # two numbers, but then realized (with help from the work of Lee Spector)


        # that as the n value got larger fewer parameters would be changed; therefore
        # making for more interesting mutations with the possibility of only one or all
        # or anything in between parameters changing with each mutation.

        # change the vision
        if n < 5: {
            vision_distance = random[trait_max] + n.
            while vision_distance < 0: {
                # can't have negative sight
                vision_distance = random[trait_max] + n.
            }
        }

        # change the memory
        else if n < 10: {
            memory_capacity = random[trait_max] + n.
            while memory_capacity < 0: {
                # can't have negative memory
                memory_capacity = random[trait_max] + n.
            }
        }

        # change the acceleration_ability
        else if n < 15: {
            acceleration_ability = random[n].
            while acceleration_ability < 0: {
                # can't have negative acceleration_ability
                acceleration_ability = random[n].
            }
        }


        # aggression
        else if n < 25: {
            aggression_index = n.
            coward_ness_index = n - random[n].
            while coward_ness_index < 0: {
                # can't have negative coward_ness_index
                coward_ness_index = n - random[n].
            }
        }

        # group behavior
        else if n < 30: {
            socialize_index = random[n].
            individualistic_index = n - socialize_index.
            while individualistic_index < 0: {
                # can't have negative individualistic_index
                individualistic_index = n - socialize_index.
            }
        }

        # print "mutated some traits of $self".

    # getters
    + to get_vision:
        return vision_distance.

    + to get_memory:
        return memory_capacity.

    + to get_acceleration:
        return acceleration_ability.

    + to get_aggression:
        return aggression_index.


    + to get_coward_ness:
        return coward_ness_index.

    + to get_social_index:
        return socialize_index.

    + to get_individualism_index:
        return individualistic_index.

    # setters
    + to set_vision to-value-of vision(int):
        vision_distance = vision.

    + to set_memory to-value-of memory(int):
        memory_capacity = memory.

    + to set_acceleration to-value-of acceleration(int):
        acceleration_ability = acceleration.

    + to set_aggression to-value-of aggression(int):
        aggression_index = aggression.

    + to set_coward_ness to-value-of coward_value(int):
        coward_ness_index = coward_value.

    + to set_social_index to-value-of social(int):
        socialize_index = social.

    + to set_individualism_index to-value-of individual(int):
        individualistic_index = individual.

    # get the gene string
    + to get_the_gene_string:
        trait(int).
        gene_string_list(list).

        print "Gene String:".
        gene_string_list = {


            vision_distance,
            memory_capacity,
            acceleration_ability,
            aggression_index,
            coward_ness_index,
            socialize_index,
            individualistic_index
        }.

        foreach trait in gene_string_list: {
            print trait.
        }
}
# end of Genome : Agent_Genome


D. Agents.tz Listing

@include "Object.tz"
@include "Mobile.tz"
@include "Agent_Genome.tz"

# define the Agents' behavior
Mobile : Agents {
    + variables:
        # Agent properties
        agent_id(int).
        agentShape(object).
        agent_genome(object).
        agent_color(vector).
        vision_range(double).
        acceleration(double).
        memory(int).
        aggression(float).
        coward_ness(float).

        # group behavior
        socialize(float).
        individualistic(float).
        go_for_it(int).

        # I used some of Lee Spector's code for the drifting, since my
        # original implementation was rather jumpy: instead of floating
        # to the target destination the Agents would simply instantly jump there.


        drifting(int).

        # where the Agent is at
        agent_location(vector).

        # how much energy is the Agent carrying
        amount_of_energy(int).

    + to init:
        # AGENT SHAPE
        agentShape = new Shape.
        agentShape init-with-polygon-cone radius 0.5 sides 6 height 1.0.
        self set-shape to agentShape.
        self set-velocity to (0, 0, 0).
        agent_color = (1.0, 0.0, 0.0).
        self set-color to agent_color.
        # self set-color to random[(1.0, 1.0, 0.0)].

        # amount of ENERGY to be born with; may move to the genome later
        # amount_of_energy = random[25000].
        amount_of_energy = 100000.

        # the GENETICS of the Agent
        agent_genome = new Agent_Genome.
        # this does not really help as it works for archiving but not for
        # update of the properties
        self add-dependency on agent_genome.

        agent_genome get_the_gene_string.
        agent_genome save-as-xml file "$self Starting Genome.xml".


        # impregnate with genetic composition
        self set-genetics.

        # how to deal with hitting things
        self handle-collisions with-type "Feeders" with-method "eat".
        self handle-collisions with-type "Agents" with-method "fight".

    + to set-genetics:
        vision_range = agent_genome get_vision.
        acceleration = agent_genome get_acceleration.
        memory = agent_genome get_memory.
        aggression = agent_genome get_aggression.
        coward_ness = agent_genome get_coward_ness.

        # group behavior
        socialize = agent_genome get_social_index.
        individualistic = agent_genome get_individualism_index.

        # how to find things around the Agent
        # here is where the gene characteristics come in: how far
        # can the Agent see based on its genetics
        self set-neighborhood-size to vision_range.

    + to move-around:
        neighbors(list).
        number_of_neighbors(int).
        target_object(object).
        target_index(int).
        target_type(string).
        target_location(vector).

        neighbors = self get-neighbors.
        number_of_neighbors = neighbors + 0.
        # print number_of_neighbors.


        go_for_it = 0.

        # handle the case of no neighbors
        if number_of_neighbors > 0: {
            target_index = random[number_of_neighbors] - 1.
            # just look at the very first neighbor if we get a negative index.
            if target_index < 0: {
                target_index = 0.
            }

            target_object = neighbors{target_index}.
            # print target_object.

            # it turns out all objects, including the Stationary type, will be included
            # by get-neighbors, so a lot of the time the Agent would go for the floor
            # and not other Mobiles. Took me a while to figure that out!
            target_type = target_object get-type.
            while target_type == "Stationary": {
                target_index = random[number_of_neighbors] - 1.
                # just look at the very first neighbor if we get a negative index.
                if target_index < 0: target_index = 0.

                target_object = neighbors{target_index}.
                target_type = target_object get-type.
            }

            target_location = target_object get-location.
            # print target_location.

            # simulate the likelihood of the Agent moving towards another
            # Mobile object: if aggression is higher it should be more likely
            # to go for it


            # this is a subject for a lot of refining and tweaking
            if target_type == "Agents": {
                go_for_it = socialize - individualistic.
                go_for_it = go_for_it + aggression - coward_ness.
                # this step is to simply use the acceleration somehow
                go_for_it = go_for_it + acceleration.
                # don't want any negative values
                if go_for_it < 0: go_for_it = 10.
            }

            # attempt to simulate the difference between attacking other Agents
            # and flocking towards the Feeders: when talking about food
            # the Agent should be more "selfish" and less driven to socialize, but eat.
            if target_type == "Feeders": {
                # selfish - always want to eat
                go_for_it = individualistic.
                # aggression is towards any object, not just other agents
                go_for_it = go_for_it + aggression.
                # the faster to get there the better
                go_for_it = go_for_it + acceleration.
                # don't want any negative values
                if go_for_it < 0: go_for_it = 10.
            }

            # this is a contradiction of sorts: the higher the go_for_it value,
            # the less likely the Agent will move - so need to re-write the
            # go_for_it calculations
            if random[go_for_it] == 0: {
                # still trying to figure out how the vector pointing works
                self point vertex (0, 0, 0) at target_location.
                # set velocity to (acceleration, 0, acceleration)


                self drift to target_location.
            }
        }

    + to drift to location (vector):
        drifting = 1.
        agent_location = location.

    + to iterate:
        if amount_of_energy < 1: {
            free self.
        }

        # move around
        self move-around.
        # penalty for moving
        self change-agent-energy by-value-of -1.

        if drifting: {
            self offset by .06 * (agent_location - (self get-location)).
            if ( | agent_location - (self get-location) | < .001 ): {
                self move to agent_location.
                drifting = 0.
            }
        }

        # re-impregnate with genetic composition
        self set-genetics.

    + to fight with agent (object):
        # need to figure out a good way of simulating the fight -
        # for now the attacker gets less damage.
        # print "Bam!"


        agent change-agent-energy by-value-of -2.
        self change-agent-energy by-value-of -1.
        return 1.

    + to eat from feeder (object):
        # print "I just got some!"
        feeder change-feeder-energy by-value-of -1.
        self change-agent-energy by-value-of 1.
        return 1.

    + to set-agent-location to-value-of location (vector):
        agent_location = location.

    + to set-agent-id to-value-of id (int):
        agent_id = id.

    + to get-agent-location:
        return agent_location.

    + to set-agent-energy to-value-of energy (int):
        if amount_of_energy < 1: {
            return 0.
        }
        amount_of_energy = energy.

    + to change-agent-energy by-value-of energy (int):
        if amount_of_energy < 1: {
            return 0.
        }
        amount_of_energy += energy.

    + to breed with otherAgent (object) to-child child (object):
        (child get-agent-genome) crossover from-parent-1 (otherAgent get-agent-genome) from-parent-2 (self get-agent-genome).

    + to set-agent-color to-value-of color (vector):


        self set-color to color.

    + to get-agent-energy:
        return amount_of_energy.

    + to get-agent-id:
        return agent_id.

    + to get-agent-genome:
        return agent_genome.

    + to get-agent-color:
        return agent_color.
}
# end of Mobile : Agents
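The move-around method above builds a drive score, go_for_it, from the genome traits and then moves only when random[go_for_it] comes up 0. As the listing's own comment notes, this inverts the intent: the larger the score, the less likely the move. The following sketch is not part of the thesis code; it is a minimal illustration, written in Python rather than steve, of one way to turn the drive score into a movement probability that increases with the score. The saturation constant k and the function names are assumptions made only for this example.

    import random

    def move_probability(go_for_it, k=50.0):
        # Map a non-negative drive score to a probability in [0, 1).
        # Higher scores mean a higher chance of moving; k controls how
        # quickly the probability saturates (a score equal to k gives 0.5).
        score = max(go_for_it, 0)
        return score / (score + k)

    def should_move(go_for_it, k=50.0):
        # Replace the inverted random[go_for_it] == 0 test with a draw
        # against the score-dependent probability.
        return random.random() < move_probability(go_for_it, k)

With k set to 50, drive scores of 10, 50, and 200 yield move probabilities of roughly 0.17, 0.5, and 0.8, so more aggressive, social, or faster genomes pursue their targets more often, which is the behavior the comments describe.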


E. Data_Collector.tz Listing

# Data_Collector.tz
@include "Object.tz"
@include "Data.tz"

Data : Data_Collector {
    + variables:
        # iteration (int).
        # results (list).
        iterations (list).
        best_value (list).
        worst_value (list).

    + to init:
        # iteration = 0.

    + to update-this-iteration-values by-value-of list-of-values (list):
        current_values (list).

        current_values = { 0, 0, 0 }.
        current_values{0} = list-of-values{0}.
        current_values{1} = list-of-values{1}.
        current_values{2} = list-of-values{2}.

        iterations{current_values{0}} = current_values{0}.
        best_value{current_values{0}} = current_values{1}.
        worst_value{current_values{0}} = current_values{2}.


        # results{iteration} = current_values.
        # print results{iteration}.
        # iteration ++.

        print current_values.
}
# end of Data : Data_Collector


F. Feeders.tz Listing

# Feeders.tz
@include "Object.tz"
@include "Mobile.tz"

# this is the class to build the feeder objects
Mobile : Feeders {
    + variables:
        drifting (int).
        driftLocation (vector).
        amount_of_energy (int).

    + to init:
        self set-shape to ((new Shape) init-with-polygon-disk radius 5 sides 6 height 0.01).
        self move to random[(20, 0, 20)] - (10, 0, 10).
        self set-velocity to (0, 0, 0).
        self set-color to (1, 1, 0.3).
        amount_of_energy = random[2500].

    + to maybe-teleport:
        random_x (float).
        random_z (float).

        if random[250] == 0: {
            random_x = random[40] - 10.
            random_z = random[40] - 10.
            self drift to (random_x, 0.5, random_z).
        }


    + to iterate:
        # Move the feeders occasionally and update how much energy they
        # contain on each iteration - if the feeder's energy is exhausted
        # then it is removed from the board.
        # this provides some options:
        # 1. feeder's energy could be diminished with each Agent that feeds
        # 2. or feeder's energy could be diminished at each iteration
        # 3. or feeder's energy could be diminished by some other criteria
        if amount_of_energy < 1: {
            free self.
        }

        # move around
        self maybe-teleport.

        if drifting: {
            self offset by .06 * (driftLocation - (self get-location)).
            if ( | driftLocation - (self get-location) | < .001 ): {
                self move to driftLocation.
                drifting = 0.
            }
        }

    + to drift to location (vector):
        drifting = 1.
        driftLocation = location.

    + to change-feeder-energy by-value-of energy (int):
        if amount_of_energy < 1: {
            return 0.
        }
        amount_of_energy += energy.

    + to get-feeder-energy:
        return amount_of_energy.
}
# end of Mobile : Feeders
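In maybe-teleport above, a feeder relocates only when random[250] comes up 0, so relocation is a rare event and the waiting time between teleports is geometrically distributed. The sketch below is not from the thesis; assuming the steve random[n] call behaves like a uniform integer draw over roughly n values, it simply demonstrates in Python that the average wait works out to about 250 iterations.

    import random

    def iterations_until_teleport(chance=250):
        # Count the iterations until a one-in-`chance` test fires.
        count = 1
        while random.randrange(chance) != 0:
            count += 1
        return count

    samples = [iterations_until_teleport() for _ in range(10000)]
    print("mean wait:", sum(samples) / len(samples))   # close to 250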


G. Hive.tz Listing

# Hive.tz
@path "classes"

# include the underlying classes for all of the objects in this
# simulation
@include "Object.tz"
@include "Stationary.tz"
@include "Mobile.tz"
@include "Genome.tz"
@include "Agents.tz"

Stationary : Hive {
    + variables:
        number_of_bees (int).
        bees (list).
        bee_location (vector).
        hive_location (vector).
        row (int).
        column (int).
        row_counter (int).
        column_counter (int).
        item (object).

    # initiate the Hive
    + to init:
        self set-hive-location to-value-of random[(30, 0.5, 30)] - (10, 0, 10).


        row = 3.
        column = 3.
        row_counter = 0.
        column_counter = 0.
        number_of_bees = row * column.

        # Create Agents / Bees.
        bees = number_of_bees new Agents.
        foreach item in bees: {
            bee_location = self get-hive-location.
            bee_location += (column_counter * 3, 0, row_counter * 3).
            item set-agent-location to-value-of bee_location.
            item move to bee_location.
            column_counter ++.
            if column_counter % column == 0: {
                row_counter ++.
                column_counter = 0.
            }
        }

    # what to do on each iteration
    + to iterate:
        # Compute neighbors for later use.
        # self update-neighbors.

        # watch for no bees
        if bees == 0: {
            print "Hive is Empty".
        }

    # what to do with each agent in the simulation
    + to set-hive-location to-value-of location (vector):


        hive_location = location.

    + to get-hive-location:
        return hive_location.
}
# end of Stationary : Hive
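The Hive init method above spreads its starting bees over a row-by-column grid around the hive location, stepping a column counter and wrapping it with a modulus test. The sketch below is not part of the thesis code; it reproduces the same offset arithmetic in Python using the listing's constants (a 3 by 3 grid with 3-unit spacing) so the resulting placement is easy to inspect.

    def bee_offsets(rows=3, columns=3, spacing=3):
        # Offset of each bee from the hive location, mirroring Hive.tz:
        # bee_location = hive_location + (column_counter * spacing, 0, row_counter * spacing).
        offsets = []
        row_counter = column_counter = 0
        for _ in range(rows * columns):
            offsets.append((column_counter * spacing, 0, row_counter * spacing))
            column_counter += 1
            if column_counter % columns == 0:   # wrap to the next row
                row_counter += 1
                column_counter = 0
        return offsets

    print(bee_offsets())   # (0, 0, 0), (3, 0, 0), (6, 0, 0), (0, 0, 3), ...

Each bee therefore occupies its own cell of a 3 by 3 square anchored at the hive location rather than centered on it.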


H. Smart Agent.tz Listing

# Evolving_Agents
# Code for Smart Agents
# Based on swarm Breve code by Jon Klein,
# and based on the evolving swarm code (c) 2002 by
# Lee Spector (lspector@hampshire.edu).

# The underlying principle of this simulation is similar to that
# of many other simulations out there:
# 1. Some sort of a smart agent tries to accomplish a goal
# 2. It takes place in/on a board-like environment
# However, I wanted to look at a Super Agent which can spawn
# simpler agents which in turn will try to accomplish a goal.
# More than anything this experiment is a learning exercise for me.

# Breve related materials can be found at:
# http://www.spiderland.org/breve
# Evolving swarm related materials can be found at:
# http://hampshire.edu/lspector/alife8-visualization.html

@path "classes"

# include the underlying classes for all of the objects in this
# simulation
@include "Object.tz"
@include "Stationary.tz"
@include "Control.tz"
@include "Genome.tz"
@include "Feeders.tz"
@include "Agents.tz"
@include "Hive.tz"
@include "Data_Collector.tz"


Controller Smart_Agents.

Control : Smart_Agents {
    + variables:
        data_tracker (object).
        floor (object).
        selection (object).
        iteration (int).
        drawEveryFrameMenu (object).
        skipFramesIfNecessaryMenu (object).
        feeders (list).
        agents (list).
        # red_hive (object).
        blue_hive (object).

    + to click on item (object):
        if selection: selection hide-neighbor-lines.
        if item: item show-neighbor-lines.
        selection = item.
        super click on item.

    # initiate the simulation
    + to catch-key-s-down:
        data_tracker save-as-xml file "output.xml".

    + to init:
        data_tracker = new Data_Collector.
        iteration = 0.

        # set up the lighting


        self enable-lighting.
        self move-light to (0, 30, 0).

        # draw every frame menus
        self add-menu-separator.
        drawEveryFrameMenu = (self add-menu named "Draw Every Frame" for-method "set-drawEveryFrame").
        skipFramesIfNecessaryMenu = (self add-menu named "Skip Frames if Necessary" for-method "set-skipFramesIfNecessaryMenu").
        drawEveryFrameMenu check.
        skipFramesIfNecessaryMenu uncheck.

        # Set up the background.
        # on my computer at home (1.3 GHz Athlon with Nvidia TNT II card)
        # I could barely get anything running except for maybe
        # simplistic cube agents with no background texture
        self set-background-texture-image to (new Image load from "clouds.sgi").

        # Add a huge floor - simply use a built-in cube with the right size.
        floor = new Stationary.
        floor register with-shape (new Shape init-with-cube size (100, 2, 100)) at-location (0, -2, 0).
        floor catch-shadows.

        # Create the feeders.
        feeders = 10 new Feeders.

        # Create Agents. You can create as few or
        # as many as you want... the simulation speed gets pretty
        # painful at around 1000.
        blue_hive = new Hive.
        # red_hive = new Hive.

        # still have to come up with a way to neatly position the


        # hives opposite to each other and where I want them to be
        # red_hive set-hive-location to-value-of (15, 0.5, 15).
        # red_hive move to (15, 0.5, 15).

        # Set some camera/display options.
        self offset-camera by (25, 30, 25).
        self enable-shadows.
        # self enable-reflections.
        self disable-text.
        # self enable-outline.

    # functions to handle some of the menu options
    + to set-drawEveryFrame:
        drawEveryFrameMenu check.
        skipFramesIfNecessaryMenu uncheck.
        self enable-draw-every-frame.

    + to set-skipFramesIfNecessaryMenu:
        drawEveryFrameMenu uncheck.
        skipFramesIfNecessaryMenu check.
        self disable-draw-every-frame.

    + to iterate:
        # location (vector).
        # velocity (vector).
        # topDiff (double).

        # Compute neighbors for later use.
        # DO NOT FORGET THIS - nothing will find the neighbors without this!!!
        self update-neighbors.

        # identify all the Agents on the board to start with
        agents = all Agents.

        # watch for no agents
        if agents == 0: {


} print "No Agents Remaining" self pause #if agents> 1 && iteration%100 == 0: { if agents > 1 : { self breed-new-agents } feeders= all Feeders. # watch for no feeders if feeders== 0: { } print "No Feeders Present" # self pause. iteration ++. # Call the superclass iterate method to step the simulation forward super iterate. + to breed-new-agents: index(int). n(int). population_size(int). best_energy(int). worst_energy(int). current_ values(list). temp_agent_color(vector). current_ values = { 0, 0 0}. agents = all Agents population_size = agents + 0 sort agents with compare-energy. 83


        # print out the energy for each individual tested
        # for n = 0, n < population_size, n ++: {
        #     index = n + 1.
        #     print "Agent $index", (agents{n} get-agent-energy).
        # }

        # mark the best Agent
        agents{0} set-agent-color to-value-of (0.0, 0.0, 1.0).

        # this was done to re-paint the rest of the Agents to the default color,
        # as initially several Agents could be painted as best
        # for n = 1, n < population_size, n ++: {
        #     agents{n} set-agent-color to-value-of (1.0, 0.0, 0.0).
        # }
        for n = 1, n < population_size, n ++: {
            temp_agent_color = agents{n} get-agent-color.
            if temp_agent_color != (0.0, 0.0, 1.0): {
                agents{n} set-agent-color to-value-of (1.0, 0.0, 0.0).
            }
        }

        best_energy = agents{0} get-agent-energy.
        worst_energy = agents{population_size - 1} get-agent-energy.

        current_values{0} = iteration.
        current_values{1} = best_energy.
        current_values{2} = worst_energy.
        data_tracker update-this-iteration-values by-value-of current_values.

        # print only the energy for the best candidate.


#print "$iteration\t$best_energy\t$worst_energy". # printf "\n "$iteration\t$best_energy\t$worst_energy". # breed the two best replacing the worst. agents { 0} breed with agents { 1 } to-child agents { population_size -1 } (agents{population_size-1} get-agent-genome) save-as-xml file "Child agents{2} Genome xml" #give each individual a mutation excluding the top member for n = 1, n < population_size n ++: { (agents{n} get-agent-genome) mutate. } (agents{O} get-agent-genome) save-as xml file "Best agents{O} Genome xml" +to compare-energy of a (object) with b (object): result (int). result= (b get-agent-energy)(a get-agent-energy). return result. } # end of Control : Smart_Agents 85


BIBLIOGRAPHY

Biesbroek, Robin. "Crossover." GA Tutorial Home Page. Ed. Robin Biesbroek. 22 Dec. 1999. European Space Agency. 3 October 2004.

Cecconi, Federico, Filippo Menczer, and Richard K. Belew. "Maturation and the Evolution of Imitative Learning in Artificial Organisms." Technical Report CSE 506. San Diego: University of California, 1995.

Evolutionary Dynamics Discovered via Visualization in the BREVE Simulation Environment. Ed. Lee Spector. 12 Jan. 2003. 2 Sep. 2004.

Framsticks Artificial Life - 3D Evolution and Simulation. Ed. Maciej Komosinski and Szymon Ulatowski. 2004. Sep. 2004.

Janssen, Marco A., and Wander Jager. Simulating Market Dynamics: Interactions between Consumer Psychology and Social Networks. Bloomington: Indiana University, 2003.

Kedem, Ziv. Genetic Algorithms and Artificial Life. Tel Aviv: Tel Aviv University, 1998.

Langdon, W. B., and R. Poli. "Better Trained Ants for Genetic Programming." Site of William Langdon. Ed. William Langdon. 1 Sep. 2004. School of Computer Science, University of Birmingham. 10 Nov. 2004.

Langton, Christopher G. Artificial Life: The Proceedings Volume in the Santa Fe Institute Studies in the Sciences of Complexity: Vol. 1, 2, 3. Redwood City: Addison-Wesley Publishing Company, Inc., 1987-1992.

Mitchell, Melanie, and Stephanie Forrest. "Genetic Algorithms and Artificial Life." Artificial Life 1 (1987): 267-289.


The Breve Simulation Environment. Ed. Jon Klein. 15 Oct. 2004. 1 Sep. 2004.

The Swarm Development Group. Ed. Paul Johnson. 6 Mar. 2004. 7 Jul. 2004.

Tolstoy, Leo N. War and Peace. City of Samara: City of Samara House of Printing, 1996.