Citation
"Linguistic geometry methods for autonomous, mobile robot control"

Material Information

Title:
"Linguistic geometry methods for autonomous, mobile robot control"
Creator:
Fletcher, Christopher Martin
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
1996
Language:
English
Physical Description:
131 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Artificial intelligence ( lcsh )
Linguistic geometry ( lcsh )
Mobile robots ( lcsh )
Robotics ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaf 131).
Thesis:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, computer science
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Christopher Martin Fletcher.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
37311907 ( OCLC )
ocm37311907
Classification:
LD1190.E52 1996m .F54 ( lcc )

Full Text


"LINGUISTIC GEOMETRY METHODS FOR AUTONOMOUS, MOBILE ROBOT CONTROL" by Christopher Martin Fletcher A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Computer Science 1996


This Thesis for the Master of Science degree by Christopher Martin Fletcher has been approved by

Boris Stilman

Tom Altman


Fletcher, Christopher Martin (M.S., Computer Science)
Linguistic Geometry Methods for Autonomous, Mobile Robot Control
Thesis directed by Professor Boris Stilman

ABSTRACT

Autonomous robots have been a practical goal of artificial intelligence research since the beginning of the field. Dexterous, decision-making automatons will permit reconnaissance of hazardous environments and enhance the exploration of space. While robots have made inroads into the factory to execute repetitive tasks, widespread use of mobile intelligent robots has not been realized. This has been due chiefly to the inability of a robot to successfully interact in a dynamic environment. A key factor is that the software has not been adept at reacting to changes in the domain. Moreover, there is a lack of formal methods to represent knowledge and systematic changes in this class of problems. A linguistic approach is proposed in this thesis as the basis for robot control in complex environments. Linguistic geometry provides a formal mechanism for representing knowledge and reasoning in the general class of problems of controlling movement in a complex system. Practical applications include robotics, scheduling, control systems, military gaming, etc. The approach is rooted in the theory of formal languages as well as the theories of problem solving and planning. This thesis presents a linguistic approach for geometric reasoning and applies the technique to a simulated mobile, autonomous robot operating in a dynamic environment. The software application demonstrates the ability of this approach to successfully generate solutions to complex scenarios encountered by a mobile robot, and shows how these geometric reasoning methods can be successfully applied to a realistic, intelligent robotic system.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.

Boris Stilman


CONTENTS

1. Introduction .... 1
2. Linguistic Geometry Methods .... 5
2.1 Knowledge Representation .... 5
2.2 State Transition in the System .... 7
3. Mobile Robot Planning and Motion Generation .... 12
3.1 Knowledge Representation .... 12
3.2 Robot Path Planning in a Simple Environment .... 16
3.2.1 Software Design .... 16
3.2.1.1 Objects .... 16
3.2.1.2 Methods .... 20
3.2.1.3 Classes .... 21
3.2.2 Test Results .... 24
3.3 Robot Path Planning in an Environment with Obstacles .... 28
3.3.1 A Language of Admissible Trajectories .... 29
3.3.2 Design Augmentation .... 32
3.3.3 Objects .... 32
3.3.3.1 Methods .... 33
3.3.3.2 Classes .... 35
3.3.4 Test Results .... 38
3.3.4.1 "Walls" Scenario .... 39
3.3.4.2 "Rooms" Scenario .... 47
3.3.4.3 Invisible Obstacles Scenario .... 54
3.4 Robot Path Planning with Static and Dynamic Obstacles .... 57
3.4.1 Trajectory Networks .... 59
3.4.2 Design Augmentation .... 61
3.4.2.1 Objects .... 61
3.4.2.2 Methods .... 63
3.4.2.3 Classes .... 65
3.4.3 Test Results .... 67
3.4.3.1 Single Dynamic Obstacle Scenario .... 68
3.4.3.2 Multiple Dynamic Obstacle Scenario .... 73
3.4.3.3 Static and Dynamic Obstacle Scenario .... 78
4. Summary and Conclusion .... 87
Appendix A Mobile Robot Simulation Software .... 91
Appendix B Source Code .... 96
Bibliography .... 131


1. INTRODUCTION

Research into autonomous, mobile robots has flourished recently due to the tremendous reduction in the size and cost of components, especially sufficiently powerful computers. However, the problems encountered by mobile robotic systems remain considerable. A mobile robot operates in the physical world. This world can be uncompromisingly dynamic and unpredictable. Consequently, an intelligent agent must continually monitor a situation and adapt its current and planned activity to a changing environment. In addition to the volatility of the environment, the area of operation may be too complex to fully represent. Thus, an agent may be capable of comprehending only a portion of the domain. Timeliness is another constraint facing autonomous robots. Mobile robots must perform in real-time. Obviously, the criticality is paramount in certain applications, e.g. an autonomously operating airplane versus a mail delivery robotic system, but some measure of real-time capability is required in all such systems.

Why choose to implement totally autonomous systems? There are applications where manual (remote) or semi-autonomous control of a vehicle operation is sufficient. Many applications, however, inherently prevent human interaction. Today, mobile robotic systems are being designed to work in conditions extremely detrimental to humans and where external communications are hampered by the environment. One such system, detailed in [MMAG, 1991], is an inspection robot that operates in a hazardous waste facility, evaluating storage container integrity. Additionally, there are scenarios where the situation changes rapidly and at great distances from any possible human intervention. Applications common to this domain are exploration and reconnaissance. The exploits of the robot Dante exploring an active volcano recently generated front page news. Finally, there is a class of mostly military systems where communications may be impossible due to potential interference or detection by an opposing force. Applications in this domain include covert surveillance, and search and rescue operations. A primary theme shared in all such systems is the removal of direct human involvement due to the danger and remoteness involved. Furthermore, they require an intelligent, decision-making capability independent of human intervention.


The application investigated in this thesis is a simulated autonomous robotic vehicle operating in a complex environment. This mobile robot is assigned a task: to reach a pre-determined destination or point of interest in the most optimal manner possible. The vehicle operates under some real-world constraints. It may only travel at a limited velocity on a finite 2-D plane. The robot possesses only limited knowledge about the operational area. This restriction is manifested as a limited field of view. A robot may be re-tasked to a new destination based on changing priorities, so it must be flexible, allowing for new tasking at any point along its travels. Finally, there are obstacles in the area that inhibit free movement. The obstacles may possess any shape and size. They may also move, disappear, and reappear in new locations at will. Under these constraints the mobile robot plans paths to a point of interest and executes movements along selected paths until the task is complete.

There are several considerations for an intelligent agent operating in these surroundings. One factor is the representation of knowledge. Information concerning the area of operation, the obstacles, the robot itself, and the decision-making process must be represented in a manner that streamlines path generation and execution. Functioning in a dynamic environment presents difficulties in knowing how the knowledge base is affected by change, that is, reflecting what has and what has not changed without reconsidering every piece of knowledge. Elements of change in the scenario include obstacle location and size, destination, and robot location. Finally, there is the search problem. From any location in the area many paths can be considered. Which path takes the robot closer to the destination? Which is the best path? A comprehensive approach, considering all possible paths, often fails real-time criteria. A strategy is needed to quickly generate only the most promising paths. The approach must consider the short-term goal of optimal movement within the field of view as well as the long-term goal of optimal movement towards the destination.

One of the basic ideas in finding solutions to a system is to break it into smaller sub-problems, solve those, and then combine the solutions to resolve the whole system. This approach generally suffers due to the complexity of real-world systems: the subsystems are seldom independent, and the solutions to these subsystems are therefore dependent on the solutions to other subsystems.


This research presents an approach for formulating paths, and executing movement along them, that permits the vehicle to reach its destination in an optimal manner. At the core of this proposal is Linguistic Geometry, a concept for reasoning in this class of problems. Linguistic geometry formalizes a mathematical model for the representation of general heuristic knowledge and provides the search infrastructure for deriving an optimal solution. The theory traces its roots back to the early 1960s, with the development of a syntactic approach to natural language. The development of formal grammars by Chomsky (1963) led to the application of this research in other new areas. In particular, grammars were utilized for pattern recognition by Fu (1982), Narasimhan (1966), and Pavlidis (1977), and for picture description languages by Shaw (1969), Feder (1971), and Rosenfeld (1979). Stilman applied similar techniques to hierarchical complex systems, evolving into linguistic geometry. The PIONEER project provided the early framework for linguistic geometry. PIONEER is a system that investigated applying sophisticated human heuristics to computer-based chess. This research resulted in an implementation that produced highly selective searches. This framework was also successfully adapted to a power control and maintenance planning project [Stilman, 1992]. Very recent applications include a real-time fire vehicle routing application currently being developed into a commercial project under the auspices of the Lockheed Martin corporation, and a high integrity software engineering application to provide the computer-assisted generation of mathematical proofs for software programs. The latter research is being conducted at the Sandia National Labs.

The remainder of this thesis explores linguistic geometry as applied to the mobile robot scenario. Section 2, Linguistic Geometry Methods, presents the theory of linguistic geometry. This includes a description of the system variables and a description of movement and state transition. A derivation of a grammar of shortest trajectories is also presented. Section 3, Mobile Robot Planning and Motion Generation, introduces the mobile robot application. It is here that the linguistic technique is applied to the robotic system. The application is examined in this discussion from the representation of knowledge to the software design and implementation.


The shortest trajectory algorithm is augmented through the introduction of the grammar of admissible trajectories. This extension generates optimal (shortest) trajectories as well as sub-optimal paths. Sub-optimal paths may be required in the presence of obstacles blocking the optimal routes. Various test cases are illustrated and the results are evaluated. Appendix A, following the conclusion of the thesis, describes the robot simulator software used for robot software testing and visualization.


2. LINGUISTIC GEOMETRY METHODS

This chapter introduces the reader to a linguistically based, mathematical tool for heuristic knowledge representation and search generation in a complex system. A complex system is a class of problems that can be represented as a set of elements and positions where elements transition from one state to another. A static system with a single goal is replaced by dividing the problem into a hierarchy of dynamic subsystems. The goals of each of the subsystems are independent, but coordinated with the system goal. Linguistic geometry represents the hierarchy of dynamic subsystems with a hierarchy of formal languages. Each sentence, a group of words or symbols of a lower-level language, corresponds to a word of the next higher-level language. The first-level grammar, the language of trajectories, yields a set of symbols and parameters as illustrated below.

a(x1)a(x2)a(x3) ... a(xn)

The variables x1 through xn are the domain-specific knowledge of the system. For example, in the mobile robotic system control application, the variables might represent discrete map locations on the planned path of the robot. Second- and third-level languages build on the strings produced by the language of trajectories to produce higher-level decision strings applicable to the environment. Initially, we concentrate on the language of trajectories as a method to create paths.

2.1 KNOWLEDGE REPRESENTATION

To begin, there must exist some techniques for formally representing knowledge.

Definition of a Complex System

A definition of a complex system [Stilman, 1993] is described by the following 8-tuple:

(X, P, Rp, ON, V, Si, St, TRANSITION)

where:

X = {xi} is a finite set of points that define locations in an area.


P = {pi} is a finite set of elements that define the dynamic objects of the model. P is the union of two non-intersecting subsets, P1 and P2.

Rp(x,y) is a set of binary functions of reachability in X, where x, y are cells from X and p is a member of P. Rp(x,y) is true if element p located at cell x can reach cell y. Otherwise, it is false.

ON(p) = x, where ON is a partial function of placement from P onto X.

V is an evaluation or cost function for the set P depicting a value associated with each member.

Si is the description of the set of initial states of the system by a certain collection of Well Formed Formulas of the first order predicate calculus: {ON(pi) = xi}.

St is the description of the set of target states of the system.

TRANSITION(p,x,y) is the description of the operators for transition of the system from one state to another. If an element p is to transition from its current location x to a new location y, the operator is described as follows:

precondition: ON(p) = x & Rp(x,y)
delete: ON(p) = x
add: ON(p) = y
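
Read as executable logic, TRANSITION is a guarded update of the placement function ON. The short C++ fragment below is a minimal sketch of that reading; the names Point, Placement, and transition are illustrative and not taken from the thesis implementation, and the reachability test Rp is assumed to be supplied by the caller.

#include <functional>
#include <map>

struct Point {
    int x, y;
    bool operator<(const Point& o) const { return x < o.x || (x == o.x && y < o.y); }
    bool operator==(const Point& o) const { return x == o.x && y == o.y; }
};

// ON is a partial function of placement from P onto X: each element of P that
// is currently placed maps to the point of X it occupies.
using Placement = std::map<int, Point>;

// TRANSITION(p, x, y): precondition ON(p) = x & Rp(x, y);
// delete ON(p) = x; add ON(p) = y.
bool transition(Placement& on, int p, Point x, Point y,
                const std::function<bool(Point, Point)>& Rp) {
    auto it = on.find(p);
    if (it == on.end() || !(it->second == x) || !Rp(x, y)) return false;  // precondition fails
    on.erase(it);   // delete: ON(p) = x
    on[p] = y;      // add:    ON(p) = y
    return true;
}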


Representation of Distance

Geometric properties of the system are a key representation concept in linguistic geometry. In linguistic geometry, distance, measured as the minimum amount of time required to reach a given location, is represented in a mapping (MAP) function. This function uses the notion of Rp(x,y), the reachability of locations in the domain. Critical to MAP is the concept that a set of locations is reachable in a certain number of steps and is not reachable in any fewer. Figure 2.1 illustrates this concept. The set Mkx,p is a finite subset of points from the set X, specific to element p and a given location x. Its membership is made up of cells that are reachable in k steps from x and are not reachable in k-1 or fewer steps. Stated formally, MAPx,p is the set of all Mkx,p (k = 1, ..., n), where Rp(x,y) is true and n is the number of steps from x to y. We apply this function in the next section to help in constructing a grammar of shortest trajectories.

Figure 2.1 Reachability for a MAP Function

2.2 STATE TRANSITION IN THE SYSTEM

A Language of Shortest Trajectories

Assume a robot must generate optimal trajectories from a start location xo to a destination location yo. The robot possesses a MAP of distances between locations in a fixed domain. We wish to generate strings of locations that describe the optimal path(s) a robot may travel to reach a pre-determined destination. Table 2.1 is a presentation of the controlled Grammar Gt(1) of Shortest Trajectories that is capable of generating the strings [Stilman, 1993].


Table 2.1 Grammar of Shortest Trajectories Gt(1)

L    Q     Kernel
1    Q1    S(x, y, l) → A(x, y, l)
2i   Q2    A(x, y, l) → a(x) A(nexti(x, l), y, f(l))
3    Q3    A(x, y, l) → a(y)

VT = {a}, VN = {S, A}, Pred = {Q1, Q2, Q3}

Q1(x, y, l) = (MAPx,p(y) = l) ∧ (l > 0)
Q2(x, y, l) = (l ≥ 1)
Q3(x, y, l) = T

Var = {x, y, l}; Con = {xo, yo, lo, p}
Func = {f, next1, next2, ..., nextn} (n = |X|), f(l) = l - 1, D(f) = Z+ \ {0}
At the beginning of the derivation: x = xo, y = yo, l = lo.

The function nexti is defined through the set MOVE:

MOVEl(x) = SUM ∩ ST1(x) ∩ STlo-l+1(xo)

where SUM = {v ∈ X | MAPxo,p(v) + MAPv,p(yo) = lo} and STk(w) = {v ∈ X | MAPw,p(v) = k}.

if MOVEl(x) = {m1, m2, ..., mr} ≠ ∅ then      the MOVE set at length l is not empty
    nexti(x, l) = mi for i ≤ r                 next returns each member of the set per reference
else
    nexti(x, l) = x                            MOVE is empty; the robot has no next move
end if

In order to facilitate understanding of Gt(1), consider the following example.

Figure 2.2 6x6 Example Domain (MAP distances from the start location (1,1)):

y=6:  5 5 5 5 5 5
y=5:  4 4 4 4 4 5
y=4:  3 3 3 3 4 5
y=3:  2 2 2 3 4 5
y=2:  1 1 2 3 4 5
y=1:  0 1 2 3 4 5
      x=1 2 3 4 5 6

The robot starts at location (1,1) and must reach the destination (5,4). The MAP distance between them, and therefore the initial length lo, is 4, so the derivation begins by applying production 1 and then production 2 repeatedly. At this stage, we encounter two functions. The function f is trivial, simply subtracting 1 from the current length of the trajectory. The next function produces a member of the set of next possible locations in the trajectory. This is an iterating function that returns a new value with each application, until there are no more. Iteration is indicated by the i subscript. Function next is the result of intersecting three sets. The first, SUM, contains cells that are on-the-way to the destination. At least one trajectory will pass through each of these SUM cells. A cell is on-the-way when the MAP'ed distance from the start to the cell, summed with the distance from the same cell to the destination, is exactly the total distance from the start cell to the destination cell. In our example, this set is:

{(1,1); (2,1); (2,2); (3,2); (3,3); (4,3); (4,4); (5,4)}

The second set, STk, contains those cells that are reachable from the starting location in exactly lo-l+1 steps. In the example, k = 1. This set is:

{(2,2); (2,1); (1,2)}

The final set, ST1, contains cells that are reachable from the current cell in exactly one step. Of course, at step = 1 this is the same set as STk. So, the intersection of the sets is:

{(2,1); (2,2)}

The application of production 2, combined with the next evaluation, produces:

A((1,1),(5,4),4) → a((1,1))A((2,1),(5,4),3)
                 → a((1,1))A((2,2),(5,4),3)

Predicate Q2 is a check for l ≥ 1. With l = 3, this evaluates to true and we jump to branch two. Production 2, expressed in this manner, is meant to indicate that new productions should be applied in parallel to all non-terminals generated from previous applications.


The grammar continues in this fashion until the value of length decrements to 0. At this stage, repeated application of production 2 has resulted in the following strings:

→ a(1,1)a(2,1)a(3,2)a(4,3)A((5,4),(5,4),0)
→ a(1,1)a(2,2)a(3,3)a(4,4)A((5,4),(5,4),0)
→ a(1,1)a(2,2)a(3,2)a(4,3)A((5,4),(5,4),0)
→ a(1,1)a(2,2)a(3,3)a(4,3)A((5,4),(5,4),0)

Each derivation now fails the predicate check, since Q2(0) does not satisfy (l ≥ 1), and moves to production 3. This production adds the destination a(5,4) to the string. The result of executing this grammar is that the robot has planned all optimal paths from the initial location (1,1) to the destination location (5,4). For this scenario, each of the paths is as optimal as the next. So, to reach the destination all that remains is for the robot to select a path upon which to move and to generate movement on the trajectory.
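
The derivation above can be reproduced mechanically. The following C++ fragment is an illustrative sketch, not the thesis implementation: it assumes the king-move MAP distance of Figure 2.2, builds the SUM and ST sets as described, applies production 2 in parallel while l ≥ 1, and lets production 3 append the destination. Compiled and run, it prints the same four shortest trajectories derived above.

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

struct Pt { int x, y; };

// MAP distance for an element that may move one cell in any direction per step.
int mapDist(Pt a, Pt b) { return std::max(std::abs(a.x - b.x), std::abs(a.y - b.y)); }

const Pt start{1, 1}, dest{5, 4};   // xo and yo of the example
const int len0 = 4;                 // lo = mapDist(start, dest)

// SUM: cells v with MAPxo,p(v) + MAPv,p(yo) = lo, i.e. cells on some shortest path.
bool inSUM(Pt v) { return mapDist(start, v) + mapDist(v, dest) == len0; }

// nexti(x, l): members of SUM that are in ST1(x) (one step from x) and in
// STlo-l+1(xo) (exactly lo - l + 1 steps from the start).
std::vector<Pt> nextSet(Pt x, int l) {
    std::vector<Pt> out;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy) {
            if (dx == 0 && dy == 0) continue;
            Pt v{x.x + dx, x.y + dy};
            if (v.x < 1 || v.x > 6 || v.y < 1 || v.y > 6) continue;   // 6x6 work area
            if (inSUM(v) && mapDist(start, v) == len0 - l + 1) out.push_back(v);
        }
    return out;
}

// Production 2 (while l >= 1) emits a(x) and expands every nexti in parallel;
// production 3 (when l = 0) appends the destination a(yo).
void derive(Pt x, int l, std::string s) {
    if (l >= 1) {
        s += "a(" + std::to_string(x.x) + "," + std::to_string(x.y) + ")";
        for (Pt v : nextSet(x, l)) derive(v, l - 1, s);
    } else {
        s += "a(" + std::to_string(dest.x) + "," + std::to_string(dest.y) + ")";
        std::cout << s << "\n";
    }
}

int main() { derive(start, len0, ""); }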


3. MOBILE ROBOT PLANNING AND MOTION GENERATION

This section introduces a robot planning and motion generation application utilizing a linguistic geometry control model. The application employs a robot that must plan and execute a path from its current location to a destination location. The robot is staged in increasingly complex environments, from no obstacles, to static obstacles, to dynamic obstacles. Each enhancement to the model is measured with regard to its impact on the design and implementation, as well as the performance of the design.

We measure performance in several ways. First, the robot must be able to execute its tasks in the most optimal manner supported by the environment. A key feature of this requirement is the ability of the software to plan and execute a task without search inefficiencies or backtracking. Second, the robot shall be capable of responding to changes in the domain. A key feature of this requirement is the effectiveness of the software in recognizing systematic changes and reacting. The first requirement still applies in these situations, so the software must implement changes in the most optimal manner available. Third, the robot shall execute tasks in a timely fashion. Success at each stage will be demonstrated via computer simulation. Features of the robot (field of view, speed) and of the work area (size, obstacle location) are parameters of this simulation. Appendix A details the simulation software.

Section 3.1 is a review of the key concepts for representing knowledge in the linguistic geometry model. The concepts will be explored with regard to the requirements of a robot control application. Sections 3.2 through 3.5 are the design and implementation details of the robot control application.

3.1 KNOWLEDGE REPRESENTATION

Representation of a Complex System

Earlier, a linguistic geometry model was defined for a generic complex system. This concept is now focused on the specifics of the proposed mobile robot control paradigm.


A complex system is defined as the subsequent 8-tuple:

(X, P, Rp, ON, V, Si, St, TRANSITION)

where:

X = {xi} is a finite set of points that define the arena of operation. X represents the work area where the robots operate: a 2-dimensional space divided into equally sized, atomic cells. The work area dimensions for this application will be a parameter of the simulation. Although the work area is somewhat benign in definition, calculating the distance between discrete cells, determining reachability, and representing obstacles are key design issues that are tightly coupled to the design of the work area.

P = {pi} is a finite set of elements that define the dynamic objects of the model. P is the union of two non-intersecting subsets, P1 and P2. P represents the robot in the application. In the general model presented earlier, P was the sum of two sets, each representing an opposing side in a gaming or interception scenario. Initially, P will consist of the single robot under test. Dynamic obstacles, essentially functioning as an opposing element to the robot, will be introduced in the final manifestation of the application. The robot is the source of intelligent activity in the model. It operates on the work area using methods that measure distance, determine reachability, derive trajectories, etc.

Rp(x,y) is a set of binary functions of reachability in X, where x, y are cells from X and p is a member of P. Rp(x,y) is true if element p located at cell x can reach cell y. Otherwise, it is false. This definition directly applies to the robot control paradigm. In the simplest stage of the model, with no obstacles, the reachability relation is true for all cells that the robot can reach given the robot's speed. When obstacles are introduced in later stages of the design, the reachability function must mark cells occupied by obstacles, but still within the robot's ability, as unreachable. Cells that are entirely shut off from access by obstacles, as illustrated in the example below, are also classified as unreachable.


Figure 3.1 Unreachable cell not containing an obstacle

The reachability function cannot be as rigidly defined in the presence of dynamic (moving) obstacles. A cell may be unreachable in one time interval only to be re-evaluated as reachable in a subsequent time interval as an obstacle or robot moves. In this environment, the robot control implementation must provide a reachability function that incorporates another variable: the particular state of a cell at the time the function is to be applied.

ON(p) = x, where ON is a partial function of placement from P onto X. This function is used to describe the cell x currently occupied by an element of P, i.e. a robot or a dynamic obstacle. A robot can occupy only one cell, while a dynamic obstacle may occupy one or more cells, depending upon its size. A cell is either occupied or not occupied, i.e. there is no partial occlusion.

V is an evaluation function on the set P describing the value of each member. The evaluation function does not apply in this model.

Si is the description of the set of initial states of the system by a certain collection of Well Formed Formulas of the first order predicate calculus: {ON(pi) = xi}. This set represents the initial locations in X of a robot and dynamic obstacles in the application. Both are provided their initial location as a parameter of the simulation.


St is the description of the set of target states of the system. This set represents the destination location in X of a robot in the application. A robot will be provided its destination location as a parameter of the simulation. Obstacles in this model do not possess target states.

TRANSITION(p,x,y) is the description of the operators for transition of the system from one state to another. TRANSITION characterizes change in the system. At each time interval, the change in the state of the work area is reflected in two lists. The remove list contains the current locations of the dynamic elements in the model, while the add list contains the new locations after a time step increment has occurred. Dynamic elements characterize change through their locations on the work area.

Measurement of Distance

Distance measurement is a simple but key concept in geometric reasoning. Earlier, a map component was introduced to provide elements operating in an area with the capability of determining the distance from one location to another. This paradigm carries over to the implementation of such a system. In the robot control application, a function is provided to a robot to map the distance from one cell in the work area to another. As in the theoretical presentation, distance is defined as the smallest number of time intervals required to reach a given cell from a start cell.


3.2 ROBOT PATH PLANNING IN A SIMPLE ENVIRONMENT

The initial robot control application introduced in this section contains a single robot operating in a 30x30 work area containing no obstacles. A robot is defined with two parameters. Velocity is measured in units of work area cells the robot may travel in a single time interval. There are no restrictions dictating direction of travel, i.e. the robot can change direction without a loss of velocity. Field of view, the second parameter, measures how far a robot may see. This parameter, also measured in cells, defines the visible horizon of the robot. This is a critical robot characteristic in that it defines a local arena in which the robot plans and moves. Information about cells outside the field of view is limited to simple distance. The field of view defines a square area around the current robot location, essentially simulating an omni-directional sensor capability. The robot's view stops at the edges of the work area, creating a more rectangular view in those instances.

Two robots with slightly different functional characteristics will be presented in separate test cases. The first robot travels two cells in any direction in one time interval. It has a field of view of six cells. The robot in the second example travels three cells in any direction in one time interval. It has a smaller field of view of only three cells.

3.2.1 Software Design

The concepts here represent the basis of the design that will be augmented in later sections as a more complex environment is introduced. An object oriented methodology characterizes elements of the design using the following steps:

- Identify and classify objects and methods from the requirements
- Group objects and methods into classes
- Demonstrate class interaction

3.2.1.1 Objects

Location

A Location object identifies a unique position in the work area. The two-dimensional representation of the work area drives the x and y attributes of the object. Examples of different instantiations of robot Locations are: start, current, destination.


Cell

The cell object is a single, atomic component of the work area. Prior to path planning, a cell consists only of its location tag. When a path is planned through a cell, however, it acquires other attributes describing its position in the trajectory(s). This is detailed in the following discussions of trajectory and plan.

Work Area

The work area is a matrix of cells. The work area dimension is set by x and y size elements supplied as parameters to the simulation. The robot is strictly confined to this arena. At the start of a job, the work area cells are independent elements representing only locations. As a trajectory is built up defining the path plan, the cells are bound to form a network of locations over which the robot may travel to reach the job destination.

Robot

A robot object generates all activity in the work area. It integrates and controls all of the previously presented objects, using those objects to plan and execute movement to a goal. A robot is characterized by its speed and field of view.

Map

The map object is a critical and powerful element of the design. It is the foundation of the robot control implementation. Map represents distance from a location in the work area to another location in the work area. There are two basic approaches to designing the map. A static map plots all of the distances from a location to any other location in a pre-determined data structure. This design incorporates a relative distance map that places an arbitrary location at the center of the data structure. Other elements of the data structure represent delta x's and delta y's from the center of the structure (delta x = 0, delta y = 0). Each of the elements of the structure contains a distance relative to the center location. Distance is determined from operational robot parameters such as speed and direction. When a distance calculation is needed, the starting location is mapped to the center element of the data structure. All locations adjacent to the starting location are mapped to those cells adjacent to the center (delta x = 1, delta y = 0; delta x = 0, delta y = 1; ...) and so on until all of the cells are assigned relative locations in the structure. For example, assume the cells adjacent to the center location are labeled with distances as illustrated in Figure 3.2.


An arbitrary location (x=10, y=20) is assigned to the center location. The distances to all locations from (10,20) are immediately known based on each location's delta x, delta y from (10,20). Keep in mind the assignment of (10,20) to the center location is variable. If the robot is in a new location, for example (21,12), and distances are required, (21,12) is simply assigned to the center location and all distances from (21,12) are known. The representation of the static map must be four times as large as the work area, as illustrated in the figure. This is so locations at the extremes of the work area can also be placed in the center location and still map the full extent of the work area. Also, work area boundaries must be considered when mapping relative locations to absolute locations.

Figure 3.2 Static Map Representation of distances from (10,20)

A dynamic map requires that only operational parameters, e.g. speed and direction coefficients of a robot, be stored. Distances are computed based on the difference between the x coordinates and the y coordinates of two locations, factoring in the speed of the robot. A simple algorithm for dynamic distance is presented below.

Max_Diff = MAX(ABS(Start.x - Finish.x), ABS(Start.y - Finish.y))
Dist = Max_Diff / Robot_Speed + (Max_Diff % Robot_Speed != 0 ? 1 : 0)

There are advantages to both kinds of Map representation. The static map is particularly useful for finding all cells in a particular set. For example, to determine all cells that are a distance 5 from location (20,20) involves assigning (20,20) to the center location and searching the static structure to find all locations that are a distance of 5. With the dynamic map, a distance must be calculated from the start location to each location in the work area (or within the robot field of view) to determine if that location is in the set.
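
Both map designs can be made concrete with a short sketch. The C++ fragment below is illustrative only (the class and function names are not from the thesis source): the dynamic form computes the number of time intervals directly from the robot speed, and the static form pre-computes the same value for every (delta x, delta y) offset from a centre element, so that a later query is a single table lookup once the start location is mapped onto the centre.

#include <algorithm>
#include <cstdlib>
#include <vector>

// Dynamic distance: time intervals needed to cover the larger of the x and y
// offsets at 'speed' cells per interval (the ceiling of Max_Diff / speed).
int dynamicDistance(int sx, int sy, int fx, int fy, int speed) {
    int maxDiff = std::max(std::abs(sx - fx), std::abs(sy - fy));
    return maxDiff / speed + (maxDiff % speed != 0 ? 1 : 0);
}

// Static map: relative distances stored once for every offset from the centre
// location; roughly four times the size of the work area so that any absolute
// location can be placed at the centre.
class StaticMap {
public:
    StaticMap(int areaSize, int speed) : n_(areaSize), d_(side() * side()) {
        for (int dy = -(n_ - 1); dy <= n_ - 1; ++dy)
            for (int dx = -(n_ - 1); dx <= n_ - 1; ++dx)
                d_[index(dx, dy)] = dynamicDistance(0, 0, dx, dy, speed);
    }
    // Map 'from' onto the centre element and read the entry for the offset.
    int distance(int fromX, int fromY, int toX, int toY) const {
        return d_[index(toX - fromX, toY - fromY)];
    }
private:
    int side() const { return 2 * n_ - 1; }
    int index(int dx, int dy) const { return (dy + n_ - 1) * side() + (dx + n_ - 1); }
    int n_;
    std::vector<int> d_;
};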


A static map also distinguishes the extent of blocking by an obstacle. The dynamic map provides a greater advantage when we consider a more complex environment. Since the static map works off of relative locations and not absolute ones, it cannot consider obstacles in the environment. Obstacles also affect distances in the static map. More detail will be provided in later sections when obstacles are considered. So, for the simple environment presented here, we will use the static map.

Trajectory

The notion of a trajectory was introduced in previous sections. In the implementation, a trajectory is a path on the work area from a start location to a destination. It is formed by linking cells in a parent-child relationship. A parent is a predecessor cell from which a given cell can be reached in a trajectory. A child is a successor cell that a given cell can reach. The decision logic to link cells in a parent-child relationship is the result of the linguistic geometry path planning algorithm.

Figure 3.3 Cells Linked to Form a Trajectory

The relationship between cell, trajectory, and path plan objects is the central theme of the implementation.

(Path) Plan

A path plan object is a bundle of trajectories from a start location to a destination for a given job. This is the network that is formed by generating all of the trajectories that provide a shortest path. Trajectories may coincide with other trajectories, along the same path, at a particular cell. It is important to the design that these coinciding cells be the same in one trajectory as in another. That is, a cell must be instantiated once in a plan. All trajectories passing through that location must be passing through the same cell and not a copy. There are two reasons for this. First, there is a gain in efficiency in only expanding a cell's children once. If another trajectory passes through the same cell, then the children, and their children, are already in place. An additional benefit is gained when obstacles are introduced. If a cell is blocked from access, it must inform only one set of parents.


Time Interval

A time interval is perhaps more appropriately presented as part of the simulation. It is mentioned in this design section since it is the attribute that drives action in the application. Time is an artificial notion in the design that does not typify a temporal object as much as it represents a functional object. In this regard, time is closely related to the Reachability function. A robot's speed is defined in terms of how many cells it can travel in a single time interval.

3.2.1.2 Methods

Generate Transition Locations

Since a robot may not have the sensor capability to completely plan to the destination location, there must exist a method to select intermediate locations along the way to the final destination. Through the Map object, the robot possesses the knowledge of relative distances between cells in the work area. This method must select optimal transitional cells that are within the robot field of view and bring the robot optimally closer to the destination. The most optimal transition locations are those that utilize the full extent of the field of view and that bring the robot the nearest to the destination location.

Plan a Path

Planning a path is the method through which trajectories are generated from a start to the selected transition locations. In this implementation, this is accomplished through the language of shortest trajectories.


Execute a Path

Upon executing the method to plan a path, the robot has not yet moved. The Execute method moves the robot on a path to the destination location. At this point in the application, with no obstructions in the work area, any of the trajectories chosen suffices as a shortest path.

Calculate Distance

A method to calculate distance is required in many steps of planning a path. The characterization of the map object makes this method a table search.

3.2.1.3 Classes

The objects and methods of the robot control design are compiled into the classes illustrated below. The robot class drives the generation of, and movement along, a path in this scenario. Upon user selection, the simulator creates a Robot object, defining as its parameters speed and field of view. The robot creates a Robot Map object for itself, providing the work area size as a parameter. The Area Map constructor creates the static map, calculating relative distances from the center location. The absolute area, an array of cells, is created and initialized. Two utility classes, implemented as templates, are utilized in the planning algorithm. The List class is a doubly linked list used as a container for Cells in the algorithm. A Set container inherits from List. The Set class provides methods for the intersection and union of cells.
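
The two utility containers can be sketched briefly. This C++ fragment is illustrative rather than the thesis source (which uses a hand-written doubly linked list template; std::list stands in for it here), and it assumes the element type supports equality comparison.

#include <algorithm>
#include <list>

// List: a doubly linked container for cells used during planning.
template <typename T>
class List {
public:
    void add(const T& item) { items_.push_back(item); }
    bool contains(const T& item) const {      // requires operator== on T
        return std::find(items_.begin(), items_.end(), item) != items_.end();
    }
    const std::list<T>& items() const { return items_; }
protected:
    std::list<T> items_;
};

// Set: a List that also offers intersection and union of its members.
template <typename T>
class Set : public List<T> {
public:
    Set intersect(const Set& other) const {
        Set result;
        for (const T& item : this->items())
            if (other.contains(item)) result.add(item);
        return result;
    }
    Set unite(const Set& other) const {
        Set result;
        for (const T& item : this->items()) result.add(item);
        for (const T& item : other.items())
            if (!result.contains(item)) result.add(item);
        return result;
    }
};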


Figure 3.4 Class Diagram of the Robot Software (Robot Simulator, Robot, and Area Map classes with their Create/Delete, Execute Move, and Get Path Info messages)

The Execute Move method automatically re-tasks the planning algorithm when the robot reaches a field of view boundary. So, since the planning algorithm produces cells that are along a shortest trajectory and no others, the Execute Move method is guaranteed an optimal path to the destination location.


3.2.2 Test Results

Implementation results are presented in two test cases. The first simulation initially places a single robot at grid location (14,14). The robot in this test is capable of moving two cells in a single time interval and has a horizon of six in all directions. The destination location is set to (0,5). So, at two cells per time interval, the robot should, optimally, attain this location in seven time steps. The test results illustrated below demonstrate the planning and movement done to attain the goal. In the first frame, the robot has performed the initial planning required to move towards the destination location. Since the destination is not within the field of view, the robot can only travel to the frontier of the extent and then must replan. The dark, thin lines in the frame represent the planned trajectories. The lighter, thicker single line illustrates the selected path moved upon by the robot. In the second frame, the robot has reached the view frontier and has planned again. This time, the destination is within the robot view and all trajectories terminate at the destination. The third frame shows the final path of the robot from start to destination.

Figure 3.5 Test Case 1/Frame 1: Planning from the initial location


Figure 3.6 Test Case 1/Frame 2: Planning from the field of view boundary

Figure 3.7 Test Case 1/Frame 3: Final path


In the second simulation a robot is placed at location (1,0). This robot can travel faster than the robot in the first simulation: three cell locations in a single time interval. The field of view for the robot is modified to only three omni-directional increments. With this combination of speed and field of view, the robot must replan at the conclusion of every move. The destination location chosen for this robot is the last cell of the work area, (29,29). The time required to travel a shortest trajectory in this scenario is 10 time intervals. In the figures below, the progression of planning trajectories and movement along those trajectories is illustrated.

Figure 3.8 Test Case 2/Frame 1: Planning from the initial location


Figure 3.9 Test Case 2/Frame 2: Last planning stage from the field of view boundary

Figure 3.10 Test Case 2/Frame 3: Final path


3.3 ROBOT PATH PLANNING IN AN ENVIRONMENT WITH OBSTACLES

Almost every non-trivial application of this technology contains territories that a robot must be capable of avoiding. A mobile robot navigation scenario that omits the possibility of obstacles is unrealistic. Obstacles can be static, i.e. they are placed in the environment in given locations and stay in those locations for the duration. The robot can detect the presence of obstacles only within its sensor capabilities, defined by the field of view. A variation on this theme is obstacles that are invisible to the robot until it encounters them in the process of moving along a path. This situation may be due to faulty sensors on the robot that initially failed to detect the blockage. It could also be the result of one set of obstacles obscuring another, preventing sensor readings. Whatever the reason, a robot may encounter, and must plan for, undetected impediments within the field of view. This section applies static obstacles to the implementation. Mobile obstacles are elements that are capable of movement. These can be other robots, vehicles, people, etc. that interact within the work area in a very dynamic fashion. This modification is applied in the next section.

We provide an additional capability to the robot to work within the static obstacle environment. If the robot is unable to "see" the extent of the obscura within the assigned field of view, it is allowed to expand that view until a path is located. This is analogous to a robot possessing a variable sensor capability. Under most situations, a sensor that provides a limited view of the area is desirable. This capability uses fewer resources, provides for faster movement, etc. In certain situations, however, the robot must select a high performance sensor to examine a larger extent. While this may cost the robot in resources and time, it eliminates guess-work on the part of the robot as to the most optimal path. A guess made by the robot could potentially be much more costly than utilizing the more resource-expensive sensor.

Key to the introduction of obstacles to this environment is the possibility that an optimal path is not attainable. The Linguistic Geometry model generates optimal and non-optimal paths with a new paradigm: the Language of Admissible Trajectories [Stilman, 1993].


3.3.1 A Language of Admissible Trajectories

Assume a robot has determined through the language of shortest trajectories that an optimal path to a destination is not possible due to obstacles. In these circumstances the robot must have the means to plan a path to the destination along a longer, less optimal trajectory. Table 3.1 is a presentation of the programmed Grammar Gt(2) of Shortest and Admissible Trajectories.

Table 3.1 Grammar of Shortest and Admissible Trajectories Gt(2)

L    Q     Kernel                                                                                FT      FF
1    Q1    S(x, y, l) → A(x, y, l)                                                               two     ∅
2    Q2    A(x, y, l) → A(x, medi(x,y,l), lmedi(x,y,l)) A(medi(x,y,l), y, l - lmedi(x,y,l))      three   three
3i   Q3    A(x, y, l) → a(x) A(nexti(x, l), y, f(l))                                             three   4
4    Q4    A(x, y, l) → a(y)                                                                     three   5
5    Q5    A(x, y, l) → ε                                                                        three   ∅

VT = {a}, VN = {S, A}, Pred = {Q1, Q2, Q3, Q4, Q5}

Q1(x,y,l) = (MAPx,p(y) ≤ l < 2 * MAPx,p(y)) ∧ (l < 2n)
Q2(x,y,l) = (MAPx,p(y) ≠ l)
Q3(x,y,l) = (MAPx,p(y) = l) ∧ (l ≥ 1)
Q4(y) = (y = yo)
Q5(y) = (y ≠ yo)

Var = {x, y, l}
Con = {xo, yo, lo, p}
Func = Fcon ∪ Fvar
Fcon = {f, next1, next2, ..., nextn, med1, med2, ..., medn, lmed1, lmed2, ..., lmedn} (n = |X|), f(l) = l - 1, D(f) = Z+ \ {0}
Fvar = {xo, yo, lo, p}
E = Z+ ∪ X ∪ P is the subject domain.
Parm: at the beginning of the derivation x = xo, y = yo, l = lo; xo, yo ∈ X; lo ∈ Z+; p ∈ P.

For this language, two new functions were created (in addition to the next function, a carry-over from the language of shortest trajectories). They are defined as:

medi(x, y, l)    Domain: X × X × Z+ × P

Define a set: DOCK(x) = {v | v ∈ X, MAPxo,p(v) + MAPyo,p(v) = l}

if DOCK(x) = {v1, v2, v3, ..., vm} ≠ ∅ then      the DOCK set is not empty
    medi(x, y, l) = vi for 1 ≤ i ≤ m              med returns a unique DOCK point for each reference
else
    medi(x, y, l) = x                             DOCK is empty; the robot stays at position x
end if

lmedi(x, y, l)    Domain: X × X × Z+ × P

lmedi(x, y, l) = MAPx,p(medi(x, y, l))            the MAP distance from x to the DOCK point

The Language of Shortest and Admissible Trajectories extends the Language of Shortest Trajectories into an algorithm that allows the consideration of less than optimal paths. The thrust of this new grammar is to identify cells in the work area that serve as intermediate points accessible on shortest paths from the start location and from the destination location. These DOCK cells are not necessarily on a shortest path; that is, they are not elements of the SUM set. Graphically, this can be depicted as the combination of two shortest trajectories, as shown in the figure below.

Figure 3.11 Two Shortest Trajectories Combined to Form a Non-Optimal Trajectory

It is important to note that shortest trajectories are also produced by this language. Thus, if a shortest trajectory does exist, then it will be generated from this grammar. In this instance, the length is the shortest distance between locations and the DOCK set and the SUM set are equivalent. If optimal trajectories are not spawned due to obstructions, the length between two locations is longer and less direct derivations are attempted.
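
The DOCK computation lends itself to a direct implementation. The C++ fragment below is an illustrative sketch, not the thesis code: given MAP distances from the start and from the destination (supplied by the caller), it returns the DOCK points for a trial length l. The grammar then splices a shortest trajectory from the start to a DOCK point with a shortest trajectory from that point to the destination, exactly as depicted in Figure 3.11.

#include <functional>
#include <vector>

struct Cell { int x, y; };

// DOCK(l): cells v with MAPxo,p(v) + MAPyo,p(v) = l.  For l equal to the
// unobstructed distance this is the SUM set; for larger l it yields the
// intermediate points of admissible (longer) trajectories.
std::vector<Cell> dock(int width, int height, int l,
                       const std::function<int(Cell)>& mapFromStart,   // MAPxo,p
                       const std::function<int(Cell)>& mapFromDest) {  // MAPyo,p
    std::vector<Cell> result;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            Cell v{x, y};
            if (mapFromStart(v) + mapFromDest(v) == l) result.push_back(v);
        }
    return result;
}

// medi(x, y, l) returns the i-th DOCK point, or x itself when DOCK is empty;
// lmedi(x, y, l) is then the MAP distance from x to that point.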


3.3.2 Design Augmentation

The basic objects described in the simple model also exist in this design, although they must be modified to accommodate the new environment. A new object, obstacle, is introduced that describes the areas of the work area that a robot cannot occupy or travel through. Methods will undergo modification as the planning strategy now allows the possibility that a path is not achievable for a particular distance. In summary, the algorithm is modified to consider different levels of trajectories.

3.3.3 Objects

Map

The Map object represents distance from a location in the work area to any other location. In the previous manifestation of the design, we introduced the concepts of a static map and a dynamic map. With the introduction of obstacles, the distances represented by the static map are no longer accurate. Indeed, a given location on the map may not be reachable at all. The static map was a relative mapping independent of absolute coordinate assignment. Since the obstacles are absolute objects, the static map cannot incorporate this knowledge into the database. Early designs attempted to work around this restriction by incorporating the obstacle knowledge into the planning algorithm instead of the map. In experimental testing, these designs proved to be easily defeated by non-trivial obstacle patterns. An accurate local distance map proved to be a critical feature of a good design.

The dynamic map object calculates distances in real-time based on the state of the work area within the robot field of view. A list is formulated in the following manner. Working in a radial fashion outward from the current robot location, a Map is formed by considering simply what is "adjacently reachable" from a given location. Adjacently reachable means the set of locations exactly one cell away from the current location that do not contain obstacles. These locations are tagged with the current distance and are added to the list to be considered in the next expansion. When the field of view is reached, the algorithm completes, leaving in its wake a calculated true distance for each cell.
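
The radial expansion described above is, in effect, a breadth-first search over the cells inside the field of view. The C++ sketch below is illustrative, not the thesis source: it assumes a robot speed of one cell per interval, computes the true distance of every visible, unblocked cell from the robot location, and records the parent relationship that the path planning phase later uses to read trajectories back out of the network.

#include <cstdlib>
#include <map>
#include <queue>
#include <vector>

struct Cell {
    int x, y;
    bool operator<(const Cell& o) const { return x < o.x || (x == o.x && y < o.y); }
};

struct MapEntry { int distance; Cell parent; };

// Breadth-first expansion outward from 'robot', limited to the field of view
// and skipping cells flagged as obstacles (obstacle[y][x] == true).  Cells
// absent from the result are unreachable within the current view.
std::map<Cell, MapEntry> dynamicMap(Cell robot, int fieldOfView,
                                    const std::vector<std::vector<bool>>& obstacle) {
    int h = (int)obstacle.size(), w = (int)obstacle[0].size();
    std::map<Cell, MapEntry> out;
    std::queue<Cell> frontier;
    out[robot] = {0, robot};
    frontier.push(robot);
    while (!frontier.empty()) {
        Cell c = frontier.front(); frontier.pop();
        int d = out[c].distance;
        if (d == fieldOfView) continue;                  // stop at the visible horizon
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                Cell n{c.x + dx, c.y + dy};
                if ((dx == 0 && dy == 0) || n.x < 0 || n.y < 0 || n.x >= w || n.y >= h) continue;
                if (obstacle[n.y][n.x]) continue;        // adjacently reachable only if free
                if (std::abs(n.x - robot.x) > fieldOfView ||
                    std::abs(n.y - robot.y) > fieldOfView) continue;
                if (out.count(n)) continue;              // already tagged with a distance
                out[n] = {d + 1, c};                     // parent link used by Plan a Path
                frontier.push(n);
            }
    }
    return out;
}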


A form of the static Map object must also be retained to reflect distances of locations outside the field of view. The object is needed because the robot requires some knowledge of distances to locations outside the sensor range, such as final destination locations. This knowledge is incomplete since the robot has no indication of obscura outside the field of view that would affect distance. It can be used, however, to rate the path potential of a cell relative to another cell within the field of view. In this manifestation of the design, the static Map will be similar to the dynamic distance calculation presented in the simple model. The algorithm will incorporate knowledge of the local distances into the calculation to obtain the most accurate total distance possible.

Obstacle

The obstacle object manifests itself in the design as the state of a cell, a single atomic component of the work area. In the previous implementation, a cell contained a static structure for location as well as dynamic information describing any path information (descendants and ancestors) that passed through the cell. In the modified design, we add a dynamic feature that allows the cell to be flagged as containing an obstacle.

3.3.3.1 Methods

Calculate Distance

Using the radial technique described in the above Map object, this method calculates a true (dynamic) distance for all locations within the robot field of view. The algorithm considers the current obstacle configuration. As an artifact of this algorithm, a parent relationship is established between cells in the field of view. This is done to facilitate trajectory generation in the path planning phase. The method also retains the capability to calculate static distances to those locations outside the field of view. The static algorithm factors local obstacle interference into the computation; however, it cannot take into consideration obstacle interaction outside of the robot sensor range.


Generate Transition Locations

This method produces intermediate locations that are within the current field of view for the robot and are also on the most optimal path to the final destination. The algorithm executed here is similar to the technique used to generate DOCK locations in the grammar of admissible trajectories. The method makes use of the true distances to the boundary locations to determine which of the cells have the greatest potential for expansion to the final destination. The algorithm adds the dynamic Map distances to the transitional locations and the static Map distances from the transitional locations to the final destination. The smallest of the sums point to the most promising transitional locations to expand. If all of the boundary locations have potential that is less than the current location, then the path planning algorithm will simply plan to all boundary locations. This describes an environment in which the obscura is extensive enough that the robot cannot "see" around it with the assigned field of view. In such situations, the move method will invoke special processing to compensate.

Plan a Path

In this implementation, the plan is produced by formulating paths from the start location to the best transitional locations. Thus, the method uses the two algorithms described above to generate the dynamic Map and the optimal transition locations. Using the parent relationship established when the dynamic map was produced, this algorithm works backwards from the optimal transition locations to the parent cells. The parent cells are expanded to their ancestors, and so on until the start location is reached. This forms the trajectories on which the robot will travel.

Execute a Path

In the absence of obstacles, this method simply moved the robot on a pre-planned trajectory until it reached the end of the field of view or reached the final destination. If the destination was not attained, the algorithm would kick off another planning session. With the advent of obstacles, in particular the so-called invisible obstacles, this method looks ahead at each cell along the trajectory to ascertain whether that cell contains a hidden obstacle. If all cells are blocked on the planned path, then the method must formulate new trajectories from the current location by calling the planning method.
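
The transition-location selection described above reduces to ranking the visible boundary cells by the sum of two numbers: the true (dynamic) distance from the robot to the boundary cell, and the static estimate from that cell to the final destination. The C++ fragment below is an illustrative sketch with assumed names (it pairs naturally with the dynamic map sketch given earlier); the smallest sums identify the most promising cells, and an empty result corresponds to the case where no boundary cell improves on the current location.

#include <map>
#include <vector>

struct Cell {
    int x, y;
    bool operator<(const Cell& o) const { return x < o.x || (x == o.x && y < o.y); }
};

// Rank candidate boundary cells: potential = dynamic distance from the robot
// to the cell + static distance from the cell to the final destination.
std::vector<Cell> bestTransitionCells(const std::map<Cell, int>& dynamicDist,   // true distances in view
                                      const std::map<Cell, int>& staticToDest,  // estimates to destination
                                      const std::vector<Cell>& boundary) {
    std::vector<Cell> best;
    int bestPotential = -1;
    for (const Cell& c : boundary) {
        auto d = dynamicDist.find(c);
        auto s = staticToDest.find(c);
        if (d == dynamicDist.end() || s == staticToDest.end()) continue;   // unreachable or unknown
        int potential = d->second + s->second;
        if (best.empty() || potential < bestPotential) { best.clear(); bestPotential = potential; }
        if (potential == bestPotential) best.push_back(c);
    }
    return best;   // empty, or the boundary cells with the smallest sum
}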

PAGE 41
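A minimal sketch of the boundary scoring just described, assuming two hypothetical distance callbacks that stand in for the Map object's dynamic and static calculations; the cells with the smallest combined sum are the most promising transition locations:

```cpp
#include <functional>
#include <limits>
#include <vector>

struct Location { int x, y; };

// Score each boundary cell by dynamic distance (robot travel within the field of
// view) plus static distance (estimate from the cell to the final destination),
// and return the cells with the smallest sum.
std::vector<Location> generateTransitionLocations(
        const std::vector<Location>& boundary,
        const std::function<int(const Location&)>& dynamicDistance,
        const std::function<int(const Location&)>& staticDistanceToDestination) {
    int best = std::numeric_limits<int>::max();
    std::vector<Location> transitions;
    for (const Location& cell : boundary) {
        int score = dynamicDistance(cell) + staticDistanceToDestination(cell);
        if (score < best) { best = score; transitions.clear(); }
        if (score == best) transitions.push_back(cell);
    }
    return transitions;
}
```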

The move algorithm must also invoke special processing in case the obstacle layout prevents the robot from determining the best course. The processing is triggered by a message from the planning algorithm that all boundary location potentials are worse than the starting location. In this situation, the robot will expand the field of view from the nominal state provided at robot creation. After each expansion, the planning algorithm is invoked to determine the potential of the new boundary locations. Once the potential exceeds that of the start location, the robot begins movement on the planned trajectories. The field of view is then reset to the nominal state. Finally, this method plans after every full robot movement. This is done to take advantage of the gain in the field of view attained whenever the robot moves. Experimentally, it was determined that this permitted the robot to see blind alleys earlier in the path than if the robot simply followed the older trajectories.

3.3.3.2 Classes

The attributes and methods of the augmented robot control design are compiled into the classes illustrated below. The robot simulator now updates obstacle locations with a set/reset obstacle location message to the Area Map. A new message from Robot to the Area Map allows the robot to inquire about obstacle presence at a given location. Existing methods and attributes were modified as described in the above discussion.

Figure 3.12 Class Diagram of Robot Software in Static Obstacle Environment

A state diagram presented below demonstrates the planning and movement state transitions performed within the robot class.

Figure 3.13 Planning and Path Execution State Diagram

The start-up scenario is similar to what was presented earlier. On startup, a creation message is sent to Area Map providing the user-defined size of the work area. Using the Robot simulator, a user selects a starting location and a destination location for a robot. The user also supplies the functional capabilities of the robot: speed and field of view. The robot simulator sends a create message to a Robot providing an Area Map object as well as the speed and field of view. The user controls the movement of the robot from the simulator. Based on the field of view distance, the robot builds a network of trajectories from the start (current) cell to the selected transition cells. The network is fashioned by the planning algorithm. Motion is generated along one of the planned trajectories. The move process is repeated until the current location is the destination cell or the field of view limit is reached. The Execute Move method will automatically re-task the planning algorithm when the robot reaches a field of view boundary, or an invisible obstacle prevents further movement. The Execute Move method also invokes changes to the field of view as described in the above discussion.

3.3.4 Test Results

Experimental results are presented in three sets of test cases. The first simulation places the robot into an environment with numerous, overlapping walls. The second simulation requires that the robot plan to exit a room and gain entry to another room in order to reach the destination. The final test case introduces invisible obstacles. These are blockages that are not perceived by the robot until it is close to the obstacle.

3.3.4.1 "Walls" Scenario

The start and destination locations are located on opposite sides of the work area: location (0,16) and location (26,10) respectively. The robot is created with a velocity of 3 cells per time interval and a field of view of 6 cells in all directions. In the initial frame, figure 3.14, the planning algorithm identifies three optimal boundary locations: (4,22) at a distance of 6 from the start location, (6,11) at a distance of 8, and (6,10) also at a distance of 8. Note that the distance labels on the far side of the obstacle reflect the actual travel time required to reach the cells and not a straight-line distance. Also note that the boundary locations directly in front of the robot are rejected by the planning algorithm due to local blockage. Thus, the robot seeks a route around the obstacle with the planned trajectories.

Figure 3.14 Test Case 4/Frame 1 Initial Planning and Motion

In the second frame, figure 3.15, the robot has completed its fourth time interval. At location (10,10), it has cleared the first two walls and successfully planned around a third. The move execution algorithm has selected a more satisfactory route based on better proximity to a (theoretical) straight-line path to the destination.

Figure 3.15 Test Case 4/Frame 2 Fourth Time Interval

The next frame, figure 3.16, illustrates the robot situation at time interval 6. The planning algorithm has selected three optimal boundary locations: (18,14), (18,13), and (17,6). Once again the robot selects the northern route, demonstrated in figure 3.17. At this stage, time = 10, the destination is within the field of view and thus is the only identified transition location. The final frame illustrates the as-executed path for this scenario. The robot reached the destination in 12 time intervals, traveling over 36 cells. Without the blockage, the robot would have required 9 time intervals, traveling over 26 cells.

Figure 3.16 Test Case 4/Frame 3 Sixth Time Interval

Figure 3.17 Test Case 4/Frame 4 Final Planning Stage

Figure 3.18 Test Case 4/Frame 5 As Executed Path Display

The next "walls" scenario demonstrates the effect of local blockage on the overall path chosen by the robot. In this experiment, the start location is moved up one cell to (0,17), while the destination remains at location (26,10). The obstacle locations are also unchanged. With the one-cell change to the start location, the planning algorithm initiates a completely different plan than in the previous scenario. A single transition location was determined to be optimal: (5,23). This boundary location is six cells from the start location and offers the greatest potential. The cells selected in the previous scenario, below the initial location, are either blocked (5,12) or proffer a worse potential (3,11) than (5,23).

Figure 3.19 Test Case 5/Frame 1 Initial Planning and Motion

In the second frame, the robot is well into the scenario. The robot has the destination in the field of view but can not reach it. The final frame illustrates the as-executed path for this scenario.

Figure 3.20 Test Case 5/Frame 2 Destination in Field of View and Not Reachable

Figure 3.21 Test Case 5/Frame 3 As Executed Path Display

The next "walls" scenario finds the robot trapped behind an obstacle that it is unable to plan around given a nominal, omni-directional field of view of 6 cells. The robot applies extra resources and expands the field of view to 7 cells to permit planning to a cell of greater potential than the initial location.

Figure 3.23 Test Case 6/Frame 2 Reestablish Original Field of View

Figure 3.24 Test Case 6/Frame 3 As Executed Path Display

3.3.4.2 "Rooms" Scenario

The next set of scenarios requires the robot to navigate out of a room with only one exit and proceed to a destination in another room with only one entrance. In the first experiment the robot is given a speed of 2 and a field of view of 5 cells in all directions. The robot is placed at an initial position of (13,4), while the selected destination is assigned to (8,26). Note from figure 3.25 that both the initial robot location and the destination are placed inside of confined areas that the robot must plan around in order to accomplish its goal. In the initial plan, illustrated in figure 3.25, the planning algorithm immediately plans a route out of the room. All paths are directed to the left of the initial location because the field of view restricts the robot's view of what is outside the far right wall of the room. The algorithm did not require an expansion of the field of view for its initial planning.

Figure 3.25 Test Case 7/Frame 1 Initial Planning and Motion

The robot's second advance, presented in the second frame (figure 3.26), shows the effect of re-planning each motion. The robot, now capable of seeing outside the right wall of the room, rejects the original transition locations and paths to the left of the room as being less satisfactory than those to the right. Basically, these new trajectories avoid traveling the length of the bottom wall of the room in order to clear the obstruction.

Figure 3.26 Test Case 7/Frame 2 Improve Path after First Motion

By the third frame, the robot has successfully navigated the obstacles to find the destination in the field of view. It can not, however, map a path to the destination due to obscura. In order to improve the path potential relative to the current location, the planning algorithm expands the field of view from 5 to 6 cells. Note that the entrance to the room containing the destination (4,28) is now within the field of view of the robot and a trajectory is generated to the destination. This is illustrated in the fourth frame, figure 3.28. The final frame, figure 3.29, presents the as-executed robot path for this experiment. The robot navigated through the obstacles over 47 cells in 24 time intervals.

Figure 3.27 Test Case 7/Frame 3 Approach Task Destination

Figure 3.28 Test Case 7/Frame 4 Plan to Entrance to Room

Figure 3.29 Test Case 7/Frame 5 As Executed Path Display

The second "room" scenario greatly increases the complexity of the domain. In this scenario, the robot must first exit a room and then navigate through a maze of rooms to finally attain the destination. The robot retains the characteristics of the previous experiment: a velocity of 2 cells per time interval and a field of view of 5 cells in all directions. The initial location of the robot is placed at (24,20) and the destination is selected at (2,29). In the first step, the robot must expand the field of view (to 7 cells) just to locate transition locations that give the robot a greater potential than the initial location. This expansion is depicted in frame 1, figure 3.30. The second frame (figure 3.31) shows the robot heading for the "room maze" with a trajectory through the entrance.

Figure 3.30 Test Case 8/Frame 1 Initial Planning and Motion

Figure 3.31 Test Case 8/Frame 2 Find Entrance to Room

The third frame (figure 3.32) depicts an interesting robot plan. The planning algorithm formulated a trajectory to location (5,21) in an attempt to gain access to the destination from the south (0..4, 21). Frame four illustrates how, on the next plan, the algorithm rejects this solution and instead expands the field of view to 8. This permits access to the destination location, a path which the robot follows to complete its goal (figure 3.34). From location (6,20) the robot would have attained a more direct path to the destination through (6,21) than through (5,20). The attempt to gain access to the destination generated a one-cell perturbation towards the final destination. Of course, the perturbation confirmed that a more direct route was not possible.

Figure 3.32 Test Case 8/Frame 3 Direct Route is Blocked

Figure 3.33 Test Case 8/Frame 4 Expand Field of View to Destination

Figure 3.34 Test Case 8/Frame 5 As Executed Path Display

3.3.4.3 Invisible Obstacles Scenario

The final experiment for static obstacles leverages off of the previous scenario. In this version, however, we introduce the so-called "invisible" obstacles into the environment. Recall that these obstacles are not detected in the planning phase and can be encountered blocking planned trajectories. The movement generation algorithm must detect this unplanned obscura and, if necessary, replan around it. The robot path display simulates invisible obstacles by supporting the placement of obscura while the robot is in the process of planning and moving. The initial state of this scenario was precisely as it was in the previous experiment. In the first frame (figure 3.35), the robot is 7 steps into the plan when, from the robot simulator, new obstacles are defined in rows 19 and 21 as noted on the graph. The robot executes two more time steps of the current plan before realizing that the trajectories are now blocked. The second and third frames of the scenario depict the next two time intervals of the experiment and demonstrate the perturbations introduced by the new obstacles. Frame 2 shows the robot, with a nominal field of view, attempting to bypass the invisible obstacles to the left. Frame 3 shows that after movement along that trajectory, the robot can not improve its position any further and expands the field of view, allowing navigation through the invisible obstacles. The remainder of the scenario is similar to the previous presentation.

Figure 3.35 Test Case 9/Frame 1 Invisible Obstacles Placed in Area

Figure 3.36 Test Case 9/Frame 2 Plan Rejects Route to Destination

Figure 3.37 Test Case 9/Frame 3 Plan Around Invisible Obstacles

3.4. ROBOT PATH PLANNING WITH STATIC AND DYNAMIC OBSTACLES

Additional mobile elements are introduced to the environment in this section. In general, mobile elements can relate to our robot in several different ways. They may be of an adversarial nature, attacking the robot in an attempt to capture or destroy it. Mobile elements may also be cooperating systems working on the same or different tasks. The last category, dynamic obstacles, is the type of system we will introduce to our environment. Mobile elements of this type simulate several different real-life situations encountered by a task-oriented robot. In a factory-oriented application, obstacles represent mobile equipment, people, or other robotic systems. In a space-based application, mobile obstacles might represent satellites operating in the same area. These systems are not inherently opposed to the robot, i.e. they do not directly attack. They do, however, occupy space and move concurrently within the same area as the robot. Similar to the static obstacle scenarios, the robot must avoid the moving impediments in the process of completing a task as quickly as possible. The planning problem is made more difficult by the mobile nature of the obscura. Since the obstacles move concurrently with the robot, the planning algorithm must consider how each obstacle travels during a given time interval as well as the starting and destination cells. That is, the robot must account for the route taken by mobile obstacles.

Before presenting the robot system response to this environment, it is important to define the behavior of the dynamic obstacles used in the model. The obstacles may be of any shape and size that can be practically contained in the work area. Similar to the robot, they are assigned a velocity describing how fast and far they may move in a single time interval. Mobile obstacles are assigned an "area of operation". Their movement is restricted to this area. When any portion of a mobile obstacle attempts to move outside the designated area of operation, it will change direction and move back into the area. Mobile obstacles are allowed to occupy the same space as other mobile obstacles and static obstacles without affecting movement.

The robot must also follow certain rules for interaction within the new environment. Similar to the static obstacle scenario, the robot can only consider dynamic obstacles that are inside the field of view.

Moreover, if only a part of a dynamic obstacle is within the field of view, the robot may only consider that portion of the obscura. Obviously, the robot can not coexist in the same location as a dynamic obstacle. Because of concurrent movement, the robot can not travel over the same route taken by a dynamic obstacle to reach its new location in the same time interval. The route is inclusive of the starting locations of the dynamic obstacle. Two examples shown below demonstrate legal and illegal movement on the part of the robot; a small sketch of this rule check follows the figure. In both examples the obstacle is moving two cells in the y direction, as indicated by the arrows emanating from the obstacles in the figure. This makes the cell directly under the current robot location inaccessible.

Figure 3.38 Examples of legal robot movement and illegal robot movement
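As an illustration of this movement rule (a minimal sketch with hypothetical types, not the thesis code), a candidate robot move can be rejected if the target cell lies in the set of cells an obstacle occupies or sweeps over during the current time interval:

```cpp
#include <set>
#include <utility>

using Cell = std::pair<int,int>;   // (x, y) location in the work area

// Cells an obstacle group occupies at the start of the interval, the cells it
// sweeps over while moving, and the cells it occupies when the move completes.
struct ObstacleFootprint {
    std::set<Cell> start, swept, finish;
};

// A robot move into 'target' is legal only if the cell is untouched by the
// obstacle during the same time interval (start, route, and new location).
bool isLegalRobotMove(const Cell& target, const ObstacleFootprint& obstacle) {
    return obstacle.start.count(target) == 0 &&
           obstacle.swept.count(target) == 0 &&
           obstacle.finish.count(target) == 0;
}
```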

To accommodate a dynamic environment, the planning algorithm must consider current and future interaction between trajectories and moving obstacles. The Linguistic Geometry model provides a set of tools for this paradigm.

3.4.1. Trajectory Networks

So far in the exploration of Linguistic Geometry we have derived tools for expressing the movement of a single element in a system. This is a lower-level operation that does not necessarily provide system-level solutions. Tools are required to express the interaction between multiple agents in the system. This is the basis for breaking down a system into smaller subsystems. The subsystems in this case are the inter-connected trajectory networks formed by the movement activity of elements in an area. The elements may be attempting to accomplish a goal, preventing an element from accomplishing a goal, or supporting an element. The general idea for network generation will be demonstrated with the scenario in figure 3.39.

Figure 3.39 Trajectory Network Example

Element P0 is planning to move along trajectory (1,1), (2,2), (3,3), (4,4) to reach a goal destination of (5,5). Elements Q0 and Q1 are opposed to this activity and can intercept the P0 trajectory at (3,3) and (4,4) respectively. Elements P1 and P2 support the goal and can inhibit the interception of P0 by controlling locations along the Q0 and Q1 trajectories. We state that a trajectory connection relation, C(t1, t2), exists between two trajectories if the end link of t1 coincides with an intermediate link of t2. In the above example the Q1 and P0 trajectories are connected at (4,4). The connectivity relation can be indirect also: the P1 and P0 trajectories can be considered connected through the Q1 trajectory. A sketch of the direct connection test appears below.
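A minimal sketch of the direct connection relation C(t1, t2) under these definitions (trajectory types are hypothetical; intermediate links exclude the start and end of t2):

```cpp
#include <utility>
#include <vector>

using Cell = std::pair<int,int>;
using Trajectory = std::vector<Cell>;   // ordered links from start to end

// C(t1, t2): the end link of t1 coincides with an intermediate link of t2.
bool connected(const Trajectory& t1, const Trajectory& t2) {
    if (t1.empty() || t2.size() < 3) return false;
    const Cell& end = t1.back();
    for (size_t i = 1; i + 1 < t2.size(); ++i)   // skip t2's start and end links
        if (t2[i] == end) return true;
    return false;
}
```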

This is considered a degree 2 connection, since it is not a direct connection to the main trajectory, but it is part of the network that will determine whether P0 can complete its goal. The degree of the connection is determined by how far the trajectory is removed from the main trajectory. The Q0 trajectory is attempting to control a location along the main trajectory; therefore, it is a degree 1 relation to the main trajectory.

We formally describe a trajectory network, W, relative to a trajectory t0 as a finite set of trajectories t0, t1, ..., tk from the language LtH(S) (the Language of Trajectories) that have the following property: for every trajectory from W there is transitive closure to the main trajectory. That is, each trajectory from the network W is connected to t0 [Stilman, 1996]. A family of trajectory network languages Lc(S) in a state S of a complex system definition is the family of languages that produce strings of the form:

t(t0, param) t(t1, param) ... t(tm, param)

where param is defined by the specific parameters of the language. The strings produced by a network language should look vaguely familiar, since they roughly resemble the strings produced by the Language of Trajectories. In the same way that trajectory languages describe one-dimensional objects in a system by forming a string of symbols based on a reachability relation, a network language describes higher-level objects using the trajectory connection relation [Stilman, 1996]. Different grammars can be generated from this family of languages that correspond to a particular solution. One grammar, Zones (Gz), is particularly useful in describing systems similar to our path planning problem. The detailed derivation of this grammar can be found in [Stilman, 1993]. It will serve here informally as the theoretical basis for a solution to a dynamic obstacle environment. A Language of Zones produces strings of the type:

t(p0, t0, τ0) t(p1, t1, τ1) ... t(pk, tk, τk)

where p represents the elements, t represents the trajectories of those elements, and τ represents the time allocated for motion along the trajectory to either intercept or support the main trajectory. If the length of the trajectory is greater than the amount of time available to affect the main trajectory, then that trajectory can be eliminated from consideration, as sketched below.
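A minimal sketch of that elimination rule (hypothetical types; this is not the zone grammar itself): an intercepting trajectory stays in the zone only if its length does not exceed the time allocated to affect the main trajectory:

```cpp
#include <vector>

// One bundle t(p, t, tau) of a zone string: an element, the length of its
// trajectory in cells, and the time tau available to affect the main trajectory.
struct ZoneEntry {
    int element;            // identifier of the element p
    int trajectoryLength;   // number of links in its trajectory t
    int timeAllocated;      // tau
};

// Keep only the entries that can actually reach the main trajectory in time.
std::vector<ZoneEntry> pruneZone(const std::vector<ZoneEntry>& zone) {
    std::vector<ZoneEntry> kept;
    for (const ZoneEntry& e : zone)
        if (e.trajectoryLength <= e.timeAllocated)   // otherwise eliminate it
            kept.push_back(e);
    return kept;
}
```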

In our system, the goal is roughly the same: to avoid intercepting elements (dynamic obstacles) in the process of accomplishing a task. There are not, however, supporting elements to block the interceptors. In contrast to the more adversarial environments, dynamic obstacles will simply pass through a particular interception point on the main trajectory and will generally not wait to attack the robot. The concept of incorporating a time during which intercepting trajectories may affect the main trajectory will play an important role in the software design. The implementation of this concept is detailed in the following section.

3.4.2. Design Augmentation

The basic objects described in earlier presentations also apply to this environment. Several objects and methods will undergo significant modification to accommodate defining and generating motion for dynamic obstacles. In general, the robot software must respond to a much more dynamic environment than in previous implementations. The planning algorithm will incorporate predictions of where dynamic obstacles will be in future time intervals. This data affects which transition locations and trajectories are admissible. The movement algorithm will also incorporate current dynamic obstacle activity to avoid collisions and institute new planning when required. The following sections detail additional objects and methods and changes to existing objects and methods.

3.4.2.1. Objects

Obstacle

This object, in the previous design a simple indicator describing the state of a cell, must now be expanded to engender movement. Obstacle now defines a group of obstacles that share common movement criteria. From time interval to time interval, the locations that the obstacles occupy change, thus changing the state of the cells at those locations. All state changes to cells are communicated through the Map object so there is one repository for work area cell status. Obstacle contains several attributes that describe the initial state of the group. This includes: starting locations of the obstacles, the amount of movement allowed in the x and in the y direction, and the allowable area of movement (min/max x, min/max y).

Additional attributes describe the dynamic state of the group: the current location of the group members, the current direction in x and y, and the set of locations that the obstacle traveled over in execution of the current time interval. A sketch of these attributes as a record appears below.
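A minimal sketch of how such an obstacle group might be recorded (field names are hypothetical; the class diagram in figure 3.40 lists the actual attributes):

```cpp
#include <utility>
#include <vector>

using Cell = std::pair<int,int>;

// Static definition plus dynamic state of one group of obstacle cells that move
// together with a common velocity inside a common area of operation.
struct ObstacleGroup {
    // Initial state
    std::vector<Cell> initialLocations;           // starting cells of the group
    int deltaX = 0, deltaY = 0;                   // cells moved per time interval
    int minX = 0, maxX = 0, minY = 0, maxY = 0;   // allowable area of movement

    // Dynamic state
    std::vector<Cell> currentLocations;           // where the group is now
    int directionX = 1, directionY = 1;           // +1 forward, -1 back, 0 none
    std::vector<Cell> intermediateLocations;      // cells swept over this interval
};
```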

3.4.2.2. Methods

Execute Obstacle Movement

This is the first stage of a two-stage algorithm to engender movement for a dynamic obstacle. Obstacle movement is staged to simulate simultaneous movement of obstacles and robot. This stage of the algorithm executes prior to robot planning and movement in a given time interval and performs three basic functions. First, it determines whether, with the next application of the movement criteria, any portion of the obstacle will fall outside the defined area of movement. If it detects such a condition, it reverses the direction of movement in whichever coordinate is affected (x or y). The second function calculates, for each member of the obstacle group, the intermediate locations over which the member will travel to reach its new destination. These locations are identified to the Map object and are set to a special cell state indicating that an obstacle has moved over this area in the current time interval. Finally, this method calculates the final destination for this time interval for each member of the obstacle group. These locations are also identified to the Map object to update the location of the obstacle on the map in the second stage of the dynamic obstacle movement algorithm.

Complete Obstacle Movement

In the second stage of obstacle movement the object completes the movement cycle. By this point the robot has completed its planning and movement for the given time interval. The obstacle is still on the map in its old location, with the cells identified as to its new locations and the route it traveled to reach them. This method removes the obstacle tag from the old cells and resets the indicators for the traveled-upon intermediate locations. Finally, the new cell locations are tagged as containing obstacles. These rather complex steps create a facsimile of simultaneous movement to the robot. The same obstacle potentially shows up in several locations, but with different state flags identifying old travel and new position. The robot planning algorithm uses this information to plan its path accordingly. A sketch of the two stages appears below.
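A minimal sketch of the two-stage staging described above, building on the hypothetical ObstacleGroup record sketched earlier; the Map and CellState stand-ins are also assumptions, since the real updates go through the Area Map object:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using Cell = std::pair<int,int>;   // same alias as in the ObstacleGroup sketch

// Minimal stand-ins for the Area Map interface assumed by this sketch.
enum class CellState { Free, SweptByObstacle, DynamicObstacle };
struct Map { void setCellState(const Cell&, CellState) { /* record state for a cell */ } };

// Stage 1: runs before robot planning in the interval. Reverse direction if any
// group member would leave the area of operation, then tell the map which cells
// will be swept over during the move.
void executeObstacleMovement(ObstacleGroup& g, Map& map) {
    bool bounceX = false, bounceY = false;
    for (const Cell& c : g.currentLocations) {
        int nx = c.first  + g.directionX * g.deltaX;
        int ny = c.second + g.directionY * g.deltaY;
        if (nx < g.minX || nx > g.maxX) bounceX = true;
        if (ny < g.minY || ny > g.maxY) bounceY = true;
    }
    if (bounceX) g.directionX = -g.directionX;
    if (bounceY) g.directionY = -g.directionY;

    g.intermediateLocations.clear();
    for (const Cell& c : g.currentLocations) {
        int steps = std::max(g.deltaX, g.deltaY);
        for (int s = 1; s <= steps; ++s) {          // every cell on the run to the new spot
            Cell mid{c.first  + g.directionX * std::min(s, g.deltaX),
                     c.second + g.directionY * std::min(s, g.deltaY)};
            g.intermediateLocations.push_back(mid);
            map.setCellState(mid, CellState::SweptByObstacle);
        }
    }
}

// Stage 2: runs after the robot has planned and moved. Clear the old tags and
// swept indicators, then tag the new cell locations as containing obstacles.
void completeObstacleMovement(ObstacleGroup& g, Map& map) {
    for (const Cell& c : g.currentLocations)      map.setCellState(c, CellState::Free);
    for (const Cell& m : g.intermediateLocations) map.setCellState(m, CellState::Free);
    for (Cell& c : g.currentLocations) {
        c.first  += g.directionX * g.deltaX;
        c.second += g.directionY * g.deltaY;
        map.setCellState(c, CellState::DynamicObstacle);
    }
}
```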

Calculate Distance

The basic thrust of this method stays the same in this implementation. That is, at each time interval it dynamically calculates distances from the start location to all locations within the field of view. The algorithm must now consider the movement of dynamic obstacles in the current time interval when determining distance. A particular unblocked cell, adjacent to the start location, may not be assigned a distance value of 1; if a dynamic obstacle is moving into that cell at the current time, the distance value may incorporate robot movement around the obstacle. Alternatively, a cell that is currently blocked may have a distance assigned if it will be reachable when the dynamic obstacle leaves the cell.

Predict Obstacle Location

This new method predicts where blockage will occur in future time intervals. This data is incorporated into the trajectory forming phase of the planning algorithm. In general, a robot operating in this type of scenario must look at obstacle movement over time to ascertain the dynamic characteristics of the object. In this application, however, the robot is allowed to query any dynamic obstacle (through the Map object) in the field of view to get the velocity and the direction of obstacle movement. The robot does not know the area in which a given dynamic obstacle operates, nor does it know the complete geometry of the obstacle unless it is all contained in the field of view. With the knowledge it does possess, though, this algorithm computes a predicted location for each component of the obstacle group for robot distances as determined by the Calculate Distance method. This is accomplished using the following equations:

Predicted(x) = current(x) + (time * direction(x) * velocity(x))
Predicted(y) = current(y) + (time * direction(y) * velocity(y))

Where:
current(n) = n (x or y) coordinate of the current location.
direction(n) = Direction of movement in n (1 = forward, -1 = back, 0 = none).
velocity(n) = Speed of the obstacle in the n coordinate.
time = Delta time from current.

If a cell is predicted to contain an obstacle, a special flag is set in association with the time that an obstacle is expected to be at that cell.
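Translated directly into code, the prediction is a simple linear extrapolation per coordinate (a minimal sketch; the names are hypothetical):

```cpp
#include <utility>

using Cell = std::pair<int,int>;

// Predicted location of one obstacle cell 'time' intervals from now, given its
// per-interval velocity and direction (+1 forward, -1 back, 0 none) in x and y.
Cell predictObstacleLocation(const Cell& current, int time,
                             int directionX, int velocityX,
                             int directionY, int velocityY) {
    return { current.first  + time * directionX * velocityX,
             current.second + time * directionY * velocityY };
}
```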

Plan a Path

This method makes use of the distance information and the predicted obstacle location data to form trajectories that give the robot the best chance to optimize movement to the destination and to avoid colliding with obstacles. Recall from previous implementations that trajectories were formed by determining the best transition locations to the destination and then working back to the start location. The trajectory path links were established using the distance information for adjacent cells. This basic algorithm is retained in this implementation with added enhancements. The predicted obstacle data is used to determine if an obstacle trajectory will intercept a robot trajectory to a transition location. If this does occur, that path is eliminated from the set of possible moves (see the sketch below). In certain circumstances, all paths to transition locations may be cut off. Here, there are still options. If available, the method can construct a partial path on the way to a transition location and determine if a change in the field of view offers a better path. It may, if the robot is not in danger of having an obstacle collide with it, elect to stay at the current location until circumstances improve. We will discuss this method in some detail when we examine specific test cases.
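A minimal sketch of the interception test just described (hypothetical types, building on the prediction routine above): a candidate robot trajectory is rejected if any of its cells is predicted to be occupied at the time the robot would reach it:

```cpp
#include <set>
#include <utility>
#include <vector>

using Cell = std::pair<int,int>;
using TimedCell = std::pair<Cell,int>;   // (cell, time interval when it is occupied)

// A trajectory is admissible if no link is predicted to hold an obstacle at the
// moment the robot arrives there; 'speed' is cells traveled per time interval.
bool trajectoryAdmissible(const std::vector<Cell>& trajectory, int speed,
                          const std::set<TimedCell>& predictedObstacles) {
    int s = speed > 0 ? speed : 1;
    for (size_t i = 0; i < trajectory.size(); ++i) {
        int arrival = (static_cast<int>(i) + s - 1) / s;   // interval the robot reaches link i
        if (predictedObstacles.count({trajectory[i], arrival}))
            return false;   // predicted interception: eliminate this path
    }
    return true;
}
```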

3.4.2.3. Classes

The attributes and methods of the augmented robot control design are compiled into the classes illustrated in figure 3.40. A new class is added, Obstacle, to define a dynamic obstacle. It receives Create, Execute Move, and Complete Move messages from the robot simulator. It also registers itself with the Area Map class. When changes occur in obstacle location or direction, it sends Update messages to the Map. The Area Map class is modified to keep a list of dynamic obstacles in the current scenario. The Set/Reset Obstacle method is modified to reflect the many different states that can be assigned to a cell: static obstacle, current dynamic obstacle, on path of dynamic obstacle, new dynamic obstacle, and predicted obstacle. It is renamed Set Cell State. The Report Obstacle Presence method is now called Report Cell State. The Robot class is modified to send a Dynamic Obstacle Query message to the Area Map class to obtain data for those dynamic obstacles in the field of view. Robot also adds the ability to set the state of a cell to a predicted obstacle.

Figure 3.40 Class Diagram

3.4.3. Test Results

Experimental results are presented in three sets of test cases with increasing complexity. The first simulation shows robot interaction with a single dynamic obstacle. This test will demonstrate the planning concepts discussed previously in an uncluttered environment. The second test case adds additional dynamic obstacles to the environment and examines robot response to more difficult situations. The final test case re-introduces static obstacles with more dynamic obstacles to create a very complex environment. The robot simulator display was modified slightly for dynamic obstacles. These obstacles are cast in a dark gray with black borders to help the user distinguish between the static and mobile obscura.

3.4.3.1. Single Dynamic Obstacle Scenario

In this initial example, one dynamic obstacle group is introduced into the environment. It is formed at locations {(2,25) through (6,25), (5,26), (5,24)} as illustrated in figure 3.41. The obstacle group is assigned a velocity of 2 cells in the y direction and 0 cells in the x direction per time interval. The obstacle is allowed to range over the entire work area in the y coordinate. So, in the first time interval the dynamic obstacle will move from its current locations through {(2,26) through (6,26), (5,27), (5,25)} to a new set of locations: {(2,27) through (6,27), (5,28), (5,26)}. The robot starts this scenario at location (3,27) with a task destination at the bottom of the work area at (3,1). The robot field of view is 5 cells and its speed is 2. Notice that if the robot does not move in the first time interval, it will be struck by the dynamic obstacle moving into its current location. The robot also can not move into the locations directly below (3,27), since they are in the travel path of the dynamic obstacle.

Figure 3.41 Test Case 10/Frame 1 Initial Assignment

In frame 2, figure 3.42, the robot has planned and executed its initial move. Carefully note how the robot planned its paths. Although the dynamic obstacle now obscures the initial location of the robot, the robot planned initial movement up from (3,27) to (2,28). This particular movement means the robot can avoid the initial dynamic obstacle movement, and it apparently clears the robot from further interference from the obstacle on the next time interval. From (2,28), the planning algorithm identifies a lateral movement to (1,27) or down and over to (2,27). It may be odd to see a path planned through an obstacle; however, the dynamic nature of the obstacle means the planning algorithm can construct a path through an existing obstacle location with the understanding that the obstacle will not be in that location at a later time. So, with the assigned distance of 3 and a robot speed of 2, it is predicted that the obstacle will have vacated the (2,27) cell by the second time interval. The blank space in the field of view is the space just vacated by the obstacle. Recall that trajectory generation is not allowed in this area.

Figure 3.42 Test Case 10/Frame 2 Initial Movement for Obstacle and Robot

Frame 3 (figure 3.43) shows the dynamic obstacle changing direction. On this time interval a portion of the obstacle (5,28) would fall outside the work area if the delta y (2) were added. Therefore, the entire group changes direction and moves down the work area. The robot path planning reflects this new condition. Since the obstacle has changed direction and can match the speed of the robot, it will prevent the robot from planning a path inward on the work area to line itself up with the destination.

Figure 3.43 Test Case 10/Frame 3 Obstacle Changes Direction

Finally, in time interval 11 the robot has the destination location in the field of view and has planned a direct path to that location. The obstacle, illustrated in figure 3.44, will overlay the destination in the next time interval, preventing the robot from reaching that location. In this situation, if the robot is not in danger of being struck by the obstacle, it can simply wait for the obstacle to clear the area and then resume its path to the destination. This is illustrated in figure 3.45. This particular behavior is special processing added to the movement generation method as a result of problems discovered in test.

If a dynamic obstacle overlaid the destination just as the robot was poised to complete the goal, erratic activity on the part of the robot was noted. In general, special checking was avoided so as not to dilute the basis of the algorithm.

Figure 3.44 Test Case 10/Frame 4 Obstacle Overlays the Destination

Figure 3.45 Test Case 10/Frame 5 Robot Waits for Obstacle to Clear Destination

Figure 3.46 Test Case 10 As Executed Path

3.4.3.2. Multiple Dynamic Obstacle Scenario

In the second of our three scenarios the robot is given a tougher gauntlet to run. Additional and more complex dynamic obstacles are placed in the environment. Table 3.2 presents a definition of the mobile obstacles that are operating in the scenario. In order to facilitate identification of the mobile obstacles while the test is executing, a letter indicator (A through D) is assigned to each group. The robot is placed at an initial location of (23,4) and given a destination goal of (8,24). The purpose of this test is to see how the robot plans avoidance with multiple mobile obstacle groups in the field of view.

Table 3.2 Dynamic Obstacles Definition for Test Case 11

Group   Initial Locations       Area of Operation   Velocity
B       (15,13) ... (16,18)     10 through 23
C       (9,15) ... (12,20)      5 through 14
D       (26,24) ... (26,27)     0 through 29        [0,3]

Figure 3.47 Test Case 11/Frame 1 Initial State

We pick up the robot situation seven time intervals into the scenario. At this stage (figure 3.48), the robot is moving away from the destination to avoid colliding with obstacle C from table 3.2, which is moving towards the robot. Obstacle B is also in the field of view, moving up on a parallel course with the robot. In the next time interval (figure 3.49), the planning algorithm has encountered a situation where all of the boundary transition locations are either directly blocked or are predicted to be blocked at the point when the robot will reach those locations. In this situation, the planning algorithm processes children nodes of the original transition locations as the most optimal cells in which to move. This is illustrated in figure 3.49 by a set of trajectories planned only to a distance of 3.

Figure 3.48 Test Case 11/Frame 2 Two Dynamic Obstacles in Field of View

Figure 3.49 Test Case 11/Stage 3 Blocked Boundary Transition Locations

In the final two frames of this scenario we see the robot apparently on the brink of attaining the assigned goal location (8,24). Obstacle D, however, will overlay the destination on the next time interval. The planning algorithm projects a trajectory around the destination and waits for the obscura to clear the area before moving to the goal.

Figure 3.50 Test Case 11/Stage 4 Unable to Reach Destination Location

Figure 3.51 Test Case 11/Stage 5 Project Trajectory Around Blocked Destination

3.4.3.3. Static and Dynamic Obstacle Scenario

In the final test scenario the complexity of the environment is again increased. In addition to placing more dynamic obstacles in the area, static obstacles are once again introduced into the domain. Table 3.3 details the movement characteristics of the mobile groups.

Table 3.3 Dynamic Obstacles Definition for Test Cases 12 and 13

Group   Initial Locations       Area of Operation               Velocity [x,y]
A       (15,15) ... (16,17)     12 through 19, 12 through 20    [1,1]
B       (20,20)                 19 through 21                   [1,0]
C       (21,21)                 19 through 21                   [1,0]
D       (2,25) ... (5,26)       0 through 29                    [0,2]
E       (2,0) ... (6,1)         0 through 12, 0 through 15      [1,1]
F       (15,8) ... (17,13)      5 through 20, 5 through 20      [1,1]
G       (27,0) ... (29,4)       0 through 29, 0 through 29      [1,2]
H       (28,26) ... (29,27)     0 through 29                    [3,0]

Our first test case in the new environment places the robot inside of a large room at (25,19). The goal location is placed inside a smaller room at (20,22). The smaller room is blocked by two one-cell dynamic obstacles that traverse the entrance to the room. This test demonstrates the planning algorithm's capability to integrate static and mobile obstacle avoidance. Figure 3.53 illustrates the activity that took place in the first time interval. Note the robot expanded the field of view from 5 cells to 8 cells in order to locate an exit to the room. The planning algorithm invoked this option since it could not find a location that improved its position relative to the destination.

Figure 3.52 Test Case 12/Frame 1 Initial State

Figure 3.53 Test Case 12/Frame 2 Robot Expands Horizon

Several time intervals later (figure 3.54), the robot is still moving along the expanded path created in the first time interval. Recall from scenarios presented in previous sections that the execute movement method will not invoke new planning until it has moved to a location that is better than the current location relative to the destination. When this plan was generated, obstacle G was not in the field of view, while in the current time interval the motion of the obstacle has it blocking the path just outside the exit location of the room. Note that the robot could not run at full speed for this time interval: from (25,2) to (26,2). This frame also illustrates for the first time that obstacles may overlay each other. Recall from our initial discussion that dynamic and static obstacles can share the same space.

Figure 3.54 Test Case 12/Frame 3 Waiting for Obstacle G to Clear Exit

Frames 4 and 5 show the robot planning to enter the room containing the goal location. In both frames (figures 3.55 and 3.56), the robot is blocked from entry by the two guarding obstacles. In frame 5, however, the planning algorithm should see a path around the obstacles through (19,20) and (19,21). In the final frame this path was indeed planned and executed by the robot.

Figure 3.55 Test Case 12/Frame 4 Blocked From Entry into Room

Figure 3.56 Test Case 12/Frame 5 Set-up Entry into Room

Figure 3.57 Test Case 12/Frame 6 Plot a Trajectory Past the "Sentries"

In a second test case utilizing this obstacle group setup, we place the robot and the goal destination at (25,27) and (0,12) respectively. At this particular starting point, the robot is in immediate jeopardy from obstacle group H, which moves at 3 cells per time interval. To make the initial movement more difficult, a row of obstacles blocking any escape to the south is put in place using the robot simulator static obstacle capability. The purpose of this test is to examine the planning and movement executed by the robot over a long distance in a complex environment.

Figure 3.58 Test Case 13/Frame 1 Initial State

Frames 2 through 4 (figures 3.59, 3.60, 3.61) are the first 3 time intervals of the test. They demonstrate the robot's ability to avoid a faster moving obstacle. In the initial time interval the planning method recognizes the futility of a downward route and plans paths up in the work area, while obstacle H gains a cell on the robot in each time interval.

Figure 3.59 Test Case 13/Frame 2 Avoid Faster Element, Time = 1

Figure 3.60 Test Case 13/Frame 3 Avoid Faster Element, Time = 2

Figure 3.61 Test Case 13/Frame 4 Avoid Faster Element, Time = 3

In frame 5 (figure 3.62), we pick up the robot towards the conclusion of the test. The robot plans a path around obstacle E, which is overlaying the destination location. Two time intervals later (figure 3.63), the obstacle clears the destination and the robot replans a direct path to the destination.

Figure 3.62 Test Case 13/Frame 5 Blocked Destination

Figure 3.63 Test Case 13/Frame 6 Replan After Obstacle Clears Destination

4. SUMMARY AND CONCLUSION

This report makes contributions to methods for autonomous mobile robot control. Foremost is the development of geometric reasoning techniques for planning and moving in a complex and dynamic environment. The architecture has demonstrated:

- Efficient planning and motion for what the domain allows.
- Responsiveness and adaptability to changes in the environment.
- Timely completion of all planning activities.
- Efficiency in the application of resources.
- Scalability to larger, more complex problem domains.

Efficient Planning and Motion

Examination of test cases 1, 2, and 3 easily reveals optimized path planning for the simple environment. With the introduction of static obstacles (test cases 4 through 9) it is somewhat more difficult to demonstrate optimization. Essentially, the planning software optimizes within the field of view. That is, only local obscura can be considered when finding an optimal path. Global inefficiencies are mitigated by executing the planning algorithm for each motion. This gives the robot the ability to discover global problems more quickly and accommodate them into the plan. Only in test cases 7 and 8 do we see the robot actually backtracking along a path. In both of these cases, the algorithm was seeking a more direct path to the destination outside the field of view, akin to probing ahead for a better route. As soon as it was determined a better path was not available, the algorithm replanned for the available route.

Dynamic obstacles add a new criterion to efficient planning: obstacle avoidance. This criterion is of equal value with movement efficiency. Results from test cases 10 through 13 demonstrate obstacle avoidance used in conjunction with movement efficiency. In test case 10, the robot halts movement towards the goal to allow an obstacle to clear the destination location. In that particular situation, movement in any direction would offer less potential. In test case 11 the robot moves away from the destination in frame 3 to avoid an on-coming dynamic obstacle. The movement is minimal, and the robot recovers and moves back towards the task goal when the collision threat is clear.

In all test cases the robot plans efficiently for what the environment allows and executes the most effective movement towards the destination.

Responsiveness to Change

Adaptability to a fluid domain is demonstrated in the response to invisible obstacles and with dynamic obstacle interaction. Invisible obstacles are handled by the software in two ways. First, invoking the planning algorithm with each motion allows the robot to detect previously undetected obscura even within the field of view. Second, the motion algorithm always verifies that a previously planned path is still available to it prior to executing motion. If necessary, the motion algorithm can send a "plan" message to invoke the planning algorithm if no paths remain available. Responsiveness to invisible obstacles was successfully demonstrated in test case 9 without loss of efficiency. Dynamic obstacles are more complex. For these situations, the planning software predicts the movement of the obstacles and incorporates the information into the plan. Here again the motion algorithm always verifies path availability prior to engendering movement along the trajectory. This is done in case the situation develops that an obstacle changes direction while a concurrent move occurs. New planning can be invoked at this time to consider the new information.

Timely Completion of Planning

Absolute time is, of course, a function of the hardware: CPU, compiler efficiency, etc. A more interesting comparison is the relative timeliness of the software in the different environments. Table 4.1 presents a comparison of the times, in milliseconds, required to execute the planning algorithm in a simple environment, an environment with static obstacles, and an environment with dynamic obstacles. Since the field of view can also affect the results, several variations are provided. The robot speed in all cases was set to two cells per time interval. The tests were conducted on an 80486 executing at 33 MHz. The compiler was Borland C++ version 3.1.

Table 4. 1 Scenario Timing Field of View 3 5 7 Scenario No Obstacles (Test Case 2) 48 51 82 Static Obstacle (Test Case 4) 93 115 109 Dynamic Obstacles (Test Case 11) 95 111 121 Efficient Application of Resources This concept is demonstrated through the judicious use of the "enhanced sensor" capability to expand the robot field of view. As demonstrated in test cases 6 and 8, the robot only invokes the expanded field of view when it is required to obtain an unambiguous path. And, also demonstrated in test cases 6 and 8, once the unambiguous path is obtained, the robot reverts back to the original field of view. In this application the only resource to which the usage concept applied is the field of view. This could easily be expanded within this architecture to incorporate: speed, turning capabilities, etc. Scaleable Architecture A scalar architecture allows for the easy expansion of the software design and implementation into more complex problem domains. In this case, a scalar architecture provided an impetus for the use of object oriented design and programming methodologies. Object oriented programming techniques such as: inheritance, polymorphism and function overloading were used in this design to facilitate reuse and expansion of the design. Design modifications could include: complex robot characteristics (directional field of view, variable speeds), multiple robots, a three dimensional work area, etc. The scalar nature of the architecture was demonstrated by the way in which dynamic obstacles were added to the design. In general, this research shows the applicability of linguistic geometry techniques to understanding the geometric properties and movement in dynamic hierarchical systems 89


such as mobile robot control. These techniques provide a formalized, domain-independent approach to such problems. Domain independence offers a rich set of applications in which to apply these techniques. The robot control architecture presented in this report is potentially a model for further applications in scheduling and planning, integrated circuit layout, and space vehicle navigation, as well as multiple-robot (cooperative and opposing) control scenarios.


APPENDIX A
MOBILE ROBOT SIMULATION SOFTWARE

The software used to monitor the progress of the robot and to control the environment was developed by the author of the thesis in parallel with the development of the application. The simulator is a Microsoft Windows 3.1 based program that graphically depicts the environment and the progress of the robot under test throughout an experiment.

The simulator essentially creates a factory-like environment for the robot. Static obstacles may represent equipment and work stations operating in the factory. The static obstacles can also be shaped into walls and rooms, simulating a partitioned factory floor with several different work areas to which the robot must navigate. Dynamic obstacles represent mobile elements operating in the factory; they may be other autonomous or remote-controlled robots, or some kind of mobile delivery system (e.g., forklift, truck, etc.). The robot destination location represents tasking given to the robot. This task may be a job to accomplish, a product to examine, or a part to deliver.

The simulator provides three broad categories of scenario control. Environmental control defines the area in which the robot operates; this includes establishing obstacles and path display. Robot definition establishes a robot object in the work area and characterizes its capabilities. Robot control directs how the simulator moves the robot through the environment.

ENVIRONMENT CONTROL

Environmental control offers options to add obstacles in the work area and to specify display options.

Obstacle

Obstacles may be placed on the work area in two ways. A data file may be specified on simulator start-up that defines the dynamic and static obstacles associated with the scenarios. An example data file is illustrated in Figure A-1. Static obstacles may also be placed in the environment by selecting the Obstacle button and then pointing at the


desired location(s) on the work area and clicking the left mouse button. An obstacle is registered with the Map object and the location is turned black for visual indication. An obstacle can be removed in the same manner. Obstacles cannot be placed on top of a current, start, or destination robot location. Static obstacles may be placed in the environment at any stage of an experiment.

4                                      Number of dynamic obstacle cells
(15, 15) (16, 15) (16, 16) (16, 17)    Initial locations of the dynamic obstacle cells
1 1 12 19 12 20                        Delta X, Delta Y, Min/Max X, Min/Max Y

Figure A-1  Example of a Dynamic Obstacle File Record

(A sketch of how such a record could be read appears at the end of this subsection.)

Path / Grid

The user can control two specific display options through the simulator software. Path toggles the display of the trajectories planned by the robot software. These trajectories are portrayed as numbered locations connected with thin dark lines (see the Display Icons section below for an example of the icons described here). The values at the locations represent distances, calculated by the robot software, from the previous robot location. The extent of the numbering illustrates the horizon of the robot in the current time interval. A thicker green line shows the as-executed robot path. The location containing the green icon specifies the robot location in the current time interval. The as-executed path and the current robot location are always displayed and are not controlled from the simulator software.
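To make the record format of Figure A-1 concrete, the following fragment is a minimal sketch of how such a record could be read and handed to the Obstacle class listed in Appendix B. It is an illustration only: the routine name ReadObstacleRecord and the assumption that fields are whitespace-separated with cell coordinates written as "(x, y)" are not taken from the simulator source, whose file-reading code is not reproduced in this thesis.

#include <stdio.h>
#include "Obstacle.h"
#include "robotMap.h"

// Hypothetical reader for one dynamic obstacle record of the form shown in
// Figure A-1.  Returns a new Obstacle registered with 'map', or NULL if the
// record is malformed.
Obstacle* ReadObstacleRecord(FILE *fp, robotMap *map)
{
    int cellCnt;
    if (fscanf(fp, "%d", &cellCnt) != 1 || cellCnt <= 0)
        return NULL;                              // number of dynamic obstacle cells

    Location_t *cells = new Location_t[cellCnt];
    for (int i = 0; i < cellCnt; i++)             // initial cell locations "(x, y)"
        if (fscanf(fp, " (%d, %d)", &cells[i].x, &cells[i].y) != 2) {
            delete [] cells;
            return NULL;
        }

    int dx, dy, xMin, xMax, yMin, yMax;           // delta X/Y and min/max X/Y
    if (fscanf(fp, "%d %d %d %d %d %d", &dx, &dy, &xMin, &xMax, &yMin, &yMax) != 6) {
        delete [] cells;
        return NULL;
    }

    // The Obstacle constructor (Appendix B) copies the cell locations and
    // registers the group with the map, so the temporary array can be freed.
    Obstacle *obs = new Obstacle(map, cellCnt, cells, dx, dy, xMin, xMax, yMin, yMax);
    delete [] cells;
    return obs;
}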


ROBOT DEFINITION

A robot is fundamentally defined by its speed, horizon, and start and destination locations.

Speed / Field View

The Speed and Field View parameters characterize the capabilities of the robot in the current experiment. Speed indicates maximum velocity in units of cells per time interval. The robot is allowed total maneuverability without loss of velocity; that is, the robot may turn in any direction or even go backwards without losing any capability to move at full speed. Field View specifies the nominal horizon for the robot, represented in units of cells in all directions. The extent of the horizon is unaffected by obstacle interference. It is, however, limited by the edges of the work area, i.e., there is no "wrap around" on the work area. Once the robot has begun an experiment, these parameters cannot be modified via the simulation software. The horizon may be temporarily expanded by the robot software under the circumstances described in Sections 3.3 and 3.4.

Start / Destination

Start and Destination create the respective locations on the work area. Once the button is selected, the user simply points at the desired location in the work area and selects the left mouse button. These locations can be modified in this manner up until experiment execution begins. The destination goal can be modified while the robot is actively pursuing a goal; this simulates the robot receiving higher-priority tasking while conducting a mission. The simulation software will not engender movement in the work area unless a start and a destination location are specified on the work area.
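The Speed and Field View parameters correspond directly to the constructor arguments of the robotMap and Robot classes listed in Appendix B. The fragment below is only a sketch of how a scenario might be set up programmatically; the function name and the particular start and destination coordinates are hypothetical, and the simulator's actual Windows user-interface code is omitted.

#include "robotMap.h"
#include "Robot.h"

// Illustrative scenario set-up: a 30 x 30 work area (matching the areaSizeX
// and areaSizeY constants in Robot.h) with a robot that may move up to two
// cells per time interval and nominally sees five cells in every direction.
int setUpAndRun()
{
    robotMap *workArea = new robotMap(30, 30);
    Robot *robot = new Robot(2, 5, workArea);      // speed, field of view, map

    Location_t start  = {  2,  2 };                // hypothetical start location
    Location_t destin = { 25, 27 };                // hypothetical task destination

    // Each simulated time interval the simulator sends the robot a Move
    // message; Move returns DONE when the destination has been reached, OK
    // while still en route, and STUCK when no admissible path exists.
    return robot->Move(start, destin);
}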


ROBOT CONTROL

The software offers two methods to control the movement of the robot: Execute and Step. An additional function, Reset, terminates the current experiment and allows for the definition of a new scenario. (A sketch of the per-interval control loop driven by these commands appears at the end of this appendix.)

Execute

Execute causes the robot to plan and move to the conclusion of the task. Built into the simulation is a one-second delay between time intervals to allow the user to view the progress of the experiment. Upon selection of this option, the simulator software verifies that a start and a task destination are established. Once verified, the one-second timer is started. When the timer expires, a Move message is sent to the robot; the robot's Move method determines if and when new planning is required. The simulation software queries the robot object for trajectory and path information, as well as querying the Map object for obstacle movement. While in the execute mode, static obstacles and display features may be modified between time intervals using the environmental controls discussed earlier.

Step

Step executes a single time interval in the simulator. At the conclusion of the time interval the simulator stops, waiting for more direction. This feature gives the user the capability of examining the selected path and planned trajectory data before moving on to the next time interval. As in Execute mode, robot trajectory, path, and obstacle movement data are collected and displayed after the step completes. The user can utilize the environmental controls to display path data, add static obstacles, etc. between steps. The robot task destination can also be changed between stepping actions.

Reset

Reset causes the current experiment to complete. If this action is taken while the robot is in transit to a task destination, it eliminates current path information and


immediately moves the robot start location to the task destination. If the robot has already reached the task destination, it performs a similar function, basically making the current location the new start location. It is important to note that this control is mandatory between experiments.

DISPLAY ICONS

[Dynamic obstacle icon] This icon represents a dynamic obstacle group of four cells. These four cells move in a certain direction and at a certain velocity throughout an experiment.

[Static obstacle icon] This icon represents a static obstacle occupying three locations. Static obstacles do not move in an experiment.

[Start icon] This icon represents the robot start location. This appears only at the start of a task.

[Destination icon] This icon represents the robot task destination. This is displayed for the duration of the task.

[Trajectory and path icons] These icons represent the planned trajectories, the as-executed robot path, and the current location of the robot.
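To relate the Execute and Step controls to the classes of Appendix B, the following is a minimal sketch of one simulated time interval. It is only an illustration: the function name simTimeInterval and the use of a List of Obstacle objects are assumptions, the ordering of obstacle and robot moves shown here is one plausible arrangement, and the display updates and Windows message handling performed by the real simulator are omitted.

#include "Robot.h"
#include "Obstacle.h"
#include "List.h"

// One simulated time interval, as driven by the Execute timer or a single
// Step command.  The robot, the obstacle list, and the start/destination
// locations are assumed to have been created during scenario definition.
int simTimeInterval(Robot *robot, List<Obstacle> *obstacles,
                    Location_t start, Location_t destin)
{
    // 1. Advance every dynamic obstacle; Obstacle::Move marks the cells the
    //    group will occupy (and pass through) on the shared area map.
    Obstacle *obs = obstacles->GetFirst();
    while (obs) {
        obs->Move();
        obs = obstacles->GetNext();
    }

    // 2. Send the robot its Move message; the robot replans internally
    //    whenever its previously planned paths are no longer available.
    int status = robot->Move(start, destin);

    // 3. Complete the obstacle moves by clearing the cells they vacated.
    obs = obstacles->GetFirst();
    while (obs) {
        obs->CompleteMove();
        obs = obstacles->GetNext();
    }

    // The simulator would now query the robot for trajectory and path data
    // and redraw the work area before the next one-second timer expires.
    return status;                                 // DONE, OK, or STUCK
}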


APPENDIX B
SOURCE CODE

Project.h

#ifndef _PROJECTH_
#define _PROJECTH_
/**************************************************************************
**
** project.h
** define structures and macros used by all classes in the model.
**************************************************************************/
#include
#include
#include
#include
#include "Set.h"
#include "List.h"

// find the maximum or minimum of two values
#define MAX(a,b) ((a) < (b) ? (b) : (a))
#define MIN(a,b) ((a) < (b) ? (a) : (b))

// define a single byte
typedef unsigned char Byte_t;

// define a 2-D location
typedef struct {
    int x;
    int y;
} Location_t;

// define a cell structure in a graph
typedef struct {
    Location_t loc;
    Byte_t obstacle;        // flag indicates a static obstacle
    Byte_t dist;
    Byte_t blocked;         // flag indicates a dynamic obstacle
    Byte_t predBlock;       // flag indicates if robot predicts dynamic obs.
                            // shall be in this location
    Byte_t willBeBlocked;   // flag indicates this cell will be blocked by
                            // completion of time increment
    Byte_t pathBlock;       // flag indicating this cell was used by an
                            // obstacle to get to a new location
    Byte_t numChild;
    Byte_t numParent;
    void *children;         // this is cast as graphList in operations
    void *parents;          // also cast as graphList in operations
    void *nxtOnPath;        // signifies next node on chosen robot path
} Graph_t;


// define a structure for distance display
typedef struct {
    int x;
    int y;
    int dist;
} Dist_t;

// define a structure for obstacle information
typedef struct {
    Byte_t id;
    Byte_t dynamic;
    int deltaX;
    int deltaY;
    int elemCnt;
    Location_t *loc;
} ObsDef_t;

const int PATH_CLEAR = 4;
const int PATH = 3;
const int NEW = 2;
const int SET = 1;
const int CLEAR = 0;

// create templates for Lists and Sets of graph cells
typedef Set<Graph_t> graphSet;
typedef List<Graph_t> graphList;

#endif


robotMap.h

#ifndef _ROBOTMAP
#define _ROBOTMAP

#include "project.h"
#include
#include

/***************************************************************************/
class robotMap
{
public:
    /* constructor - create the work area and assign distances */
    robotMap(int sizeX=100, int sizeY=100);

    /* destructor - delete allocated mapping structures */
    ~robotMap();

    // toggle the state of a location on the graph
    void obsState(Location_t loc);

    // explicitly set the state of a location on the graph
    void obsState(Location_t loc, int state);

    // check for obstacle in graph
    int obsCheck(Location_t thisLoc);

    // clear trajectory information from all cells
    void clearAbs(void);

    // clear path information from all cells
    void ClearPath(void);

    // measure the theoretical (unimpeded) distance between 2 points
    int MeasDist(Location_t from, Location_t to);

    // initialize the distance member of area structure
    void DistInit(void);

    // get all distances
    Dist_t* GetDist(int *cnt);

    // generate a list of cells adjacent to input location
    graphList* GenAdj(Location_t loc, int robSpeed);

    // convert a location to a cell on the graph
    Graph_t* compPtr(Location_t loc);


    // add a parent cell to list
    int AddParent(Graph_t*, Graph_t*);

    // add a child cell to list
    int AddChild(Graph_t*, Graph_t*);

    // get a parent list from a cell
    graphList* GetParent(Graph_t*);

    // return the size of the work area
    void GetWorkAreaSize(int *xSize, int *ySize);

    // clear the predicted obstacle flag for the range of locations
    void ClearPredObs(int xMin, int xMax, int yMin, int yMax);

    // set the predicted obstacle flag for the given location
    void SetPredObs(Location_t loc);

    // make a list of obstacle information available
    List<ObsDef_t>* GetObsList();

    // set up a list of obstacles
    int SetObsList(Byte_t dyn, int groupSz, int deltaX, int deltaY, Location_t *def);

    // update obstacle location for the specified id
    void UpdateObsList(Byte_t handle, Location_t *loc, int state, int upDx, int upDy);

private:
    Graph_t *absArea;           // work area
    List<ObsDef_t> *obsList;    // list of dynamic obstacles
    int obsCnt;                 // count of registered dynamic obstacles
    int maxmX;                  // maximum allowable size of X dimension
    int maxmY;                  // maximum allowable size of Y dimension
};
#endif


robotMap.cpp #include "robotMap.h" #include #include #include II Area map constructor robotMap::robotMap(int xSize, int ySize) { int x,y,minDist, maxDist,llX,llY,urX,urY,xff,i,j; char nxt0bs(80]; Graph_t *absPtr; Byte_t *obsFlag; II create the work area absArea =new Graph_t[xSize*ySize]; II create the dyanmic obstacle list obsList = new List; obsCnt = 0; II set x.y maximums maxmX = xSize; maxmY = ySize; memset(absArea, 0, xSize*ySize*sizeof(Graph_t)); II initialize the work area absPtr = absArea; for(y=O; y < ySize; y++) for(x=O; x < xSize; x++) { absPtr->loc.x = x; absPtr->loc.y = y; absPtr->dist = 255; absPtr++; II area map destructor robotMap: :-robotMapO { delete absArea; delete obsList; II toggle the state of given location between obstacle/clear void robotMap::obsState(Location_t loc) { Graph_t *obsCell; obsCell = compPtr(loc); if(obsCell->obstacle = 1) 100


obsCell->obstacle = 0; else obsCell->obstacle = 1; II set the state of a location void robotMap::obsState(Location_t loc, int state) { Graph_t *obsCell; obsCell = compPtr(loc); if(state = NEW && !obsCell->obstacle) { obsCell->willBeBlocked = 1; } else if(obsCell->blocked >= 1 && state = CLEAR) { obsCell->blocked--; } else if(state = SET) { obsCell->willBeBlocked = 0; obsCell->blocked++; } else if(state =PATH) obsCell->pathBlock = 1; else if(state = PATH_CLEAR) obsCell->pathBlock = 0; II convert a locaiton into a pointer to the cell containing that location Graph_t* robotMap::compPtr(Location_t loc) { return absArea + ((loc.y maxmX) + loc.x); II resolve if the location blocked by an obstacle (static/dynamic) int robotMap: :obsCheck(Location_t thisLoc) { Graph_t *thisCell; thisCell = compPtr(thisLoc); if(thisCell->obstacle) return 1; else if (thisCell->blocked) return 2; else return 0; II clear the map of all trajectory information void 101


robotMap: :clearAbsO { Graph_t *nxtCell; int cnt = maxmX maxmY; while(cnt) { cnt--; nxtCell = absArea + cnt; if(nxtCell->children) delete nxtCell->children; if(nxtCell->parents) delete nxtCell->parents; nxtCell->children = 0; nxtCell->parents = 0; nxtCell->numChild = 0; nxtCell->numParent = 0; II clear the map of all as-executed path information void robotMap:: ClearPathO { Graph_t *nxtCell; int cnt = maxmX maxmY; while(cnt) { cnt--; nxtCell = absArea + cnt; nxtCell->nxtOnPath = NULL; II reset all location distances void robotMap:: Distlnit(void) { int maxm = maxmX maxmY, i; for(i=O; i < maxm; i++) (absArea+i)->dist = 255; II return a list of distances for display Dist_t* robotMap::GetDist(int *cnt) { Dist_t *dists =new Dist_t[900]; inti, x, y; *cnt = 0; for(y=O; y < maxmY; y++) { for(x=O; x < maxmX; x++) { if((absArea+{(y maxmX) + x))->dist < 255) { dists[*cnt].x = x; 102


} dists[*cnt].y = y; dists[*cnt].dist = (absArea+((y maxrnX) + x))->dist; (*cnt)++; return dists; II measure distances from the current location graphList* robotMap::GenAdj(Location_t cur, int robSpeed) { graphList *adjList =new graphList; Graph_t *newLoc, *curCell; inti, j, curDist; curCell = absArea + ((cur.y maxrnX) + cur.x); curDist = curCell->dist + 1; II do for all adjacent cells for (i=cur.y-1; i <= cur.y + 1; i++) { } II is location on the arena? if(i >= 0 && i < maxrn Y) { forG=cur.x-1; j <= cur.x + 1; j++) { II on the arena & don't reprocess current location again if(G >= 0 && j < maxmX) && G != cur.x I I i 1= cur.y)) { newLoc = absArea + ((i maxrnX) + j); II does location contain an obstacle? if(!newLoc->obstacle && !newLoc->blocked && (curDist > robSpeed II 1newLoc->pathBlock) && (curDist > robSpeed II !newLoc->willBeBlocked) ) if(new Loc->dist = 255) { } adj List-> Addltem (new Loc); AddParent(curCell, newLoc); II add a new parent link II to curCell else if(newLoc->dist = curDist) AddParent(curCell, newLoc); II add a new parent link II to curCell return adjList; 103


II add a parent to call int robotMap::AddParent (Graph_t *parCell, Graph_t *chldCell) { if(!chldCell->numParent) chldCell->parents = new graphList; if(((graphList *)chldCell->parents)->Addltem(parCell)) { } else chldCell->numParent++; return 1; return 0; II add a child to a parent cell int robotMap::AddChild (Graph_t *parCell, Graph_t *chldCell) { if(!parCe ll->n um Child) parCell->children = new graphList; if(((graphList *)parCell->children)->Addltem(chldCell)) { } else parCell->numChild++; return 1; return 0; II retrieve the parent list from a cell graphList* robotMap::GetParent (Graph_t *cell) { if(cell->numParent) return ((graphList *)cell->parents); else return (graphList *)NULL; II perform a simple column/row distance measurement int robotMap::MeasDist(Location_t from, Location_t to) { int absX, absY, max; II use the max of I xI I y I as distance absX = abs(from.x-to.x); absY = abs(from.y-to.y); max= MAX(absX, absY); return max; 104


II return the work area size void robotMap::GetWorkAreaSize (int *xSize, int *ySize) { *xSize = maxmX; *ySize = maxm Y; II clear the predicted obstacle flags for a range of locations void robotMap::ClearPredObs (int xMin, int xMax, int yMin, int yMax) { int x,y; Graph_t *cell; Location_t loc; II do for range of x,y locations for(y=O; y < maxmY; y++) { for(x=O; x < maxmX; x++) { II make up a location structure loc.x = x; loc.y = y; II get the cell data associated with this location cell= compPtr(loc); cell->predBlock = 0; II set the predicted obstacle flag for a given location void robotMap: :SetPredObs(Location_t loc) { Graph_t *cell; cell= compPtr(loc); cell->predBlock = 1; II register an obstacle int robotMap::SetObsList(Byte_t dyn, int groupSz, int deltaX, int deltaY, Location_t *deO { ObsDef_t *obsSt; Location_t *cLoc; inti; II extract the pertinent fields obsSt =new ObsDef_t; obsSt->id = ++obsCnt; obsSt->dynamic = dyn; 105


obsSt->elemCnt = groupSz; obsSt->deltaX = deltaX; obsSt->delta Y = delta Y; obsSt->loc =new Location_t[obsSt->elemCnt]; memcpy(obsSt-> loc, def,obsSt->elem Cnt sizeof(Location_t)); II define the initial obstacle location on the this map cLoc = obsSt->loc; for (i=O; i < groupSz; i++) { obsState(*cLoc, SET); cLoc++; obsList-> Addltem(obsSt); return obsSt->id; II update the parameters for a registered obstacle void robotMap::UpdateObsList(Byte_t handle, Location_t *loc, int state,int upDx, int upDy) { ObsDef_t *obs; inti; Location_t *cLoc; II find the obstacle matching this handle obs = obsList->GetFirstO; while(obs) { if(obs->id =handle) break; else obs = obsList->GetNextO; } if(obs) { II now set the new obstacles and update in the list cLoc = obs->loc; for(i=O; i < obs->elemCnt; i++) { } *cLoc = *(loc+i); obsState(*cLoc, state); cLoc++; obs->deltaX = upDx; obs->delta Y = upDy; II make up a list of obstacle information from registered obstacles List* robotMap: :GetObsListO { return obsList; 106


Obstacle.h

#ifndef _OBSTACLE_
#define _OBSTACLE_

#include "Project.h"
#include "robotMap.h"
#include
#include

/***************************************************************************
   Define an Obstacle class
***************************************************************************/
class Obstacle
{
public:
    /* constructor for moving obstacle */
    Obstacle(robotMap *areaMap, int groupSz, Location_t *groupDef,
             int deltaX, int deltaY, int xMin, int xMax, int yMin, int yMax);

    /* destructor */
    ~Obstacle();

    /* movement method */
    int Move(void);

    void Reset(void);

    /* complete the move by removing the obstacle from the locations where
       it was at the beginning of the move cycle */
    void CompleteMove(void);

private:
    inline int GetSign(int val)
    {
        if(val > 0)
            return 1;
        else if(val < 0)
            return -1;
        else
            return 0;
    };

    Obstacle();                   // No argument constructor is not legal

    Byte_t myId;                  // the handle for this obstacle
    int obGroupSz;                // size of obstacle group
    robotMap *obAreaMap;          // local copy of area map
    Location_t *startDef;         // starting locations for obstacle
    Location_t *curGroupDef;      // current obstacle locations


    Location_t *shGroupDef;       // shadow of obstacle after a move
    Location_t *intGroupDef;      // intermediate locations occupied by
                                  // obstacle on way to new location
    int obDeltaX;                 // movement in X direction
    int obDeltaY;                 // movement in Y direction
    int startDeltaX;              // starting delta X
    int startDeltaY;              // starting delta Y
    int obxMin, obxMax, obyMin, obyMax;   // area of movement
    int areaX, areaY;             // local copy of arena definition
    int moveSize;                 // number of intermediate locations in a move
    int intCnt;                   // size of intermediate area
};
#endif


Obstacle.cpp /************************************************************ ** Define the obstacle class methods. ***********************************************************/ #include "Obstacle.h" II Dynamic Obstacle Obstacle::Obstacle(robotMap *areaMap,int groupSz, Location_t *groupDef, int deltaX, int deltaY, int xMin, int xMax, int yMin, int yMax) II set up obstacle defintion for this object obAreaMap = areaMap; obGroupSz = groupSz; obDeltaX = startDeltaX = deltaX; obDeltaY = startDeltaY = deltaY; moveSize = MAX(obDeltaX, obDeltaY); obxMin = x.Min; obxMax = xMax; obyMin = yMin; obyMax = yMax; II get and store the work area dimensions obAreaMap->GetWorkAreaSize(&areaX, &area Y); II record obstacle initial location, current, shadow struct for moving startDef =new Location_t[groupSz]; memcpy(startDef, groupDef, groupSz sizeof(Location_t)); curGroupDef =new Location_t[groupSz]; memcpy(curGroupDef, groupDef, groupSz sizeof(Location_t)); shGroupDef =new Location_t[groupSz); memcpy(shGroupDef, groupDef, groupSz sizeof(Location_t)); II create a structure to maintain the intermediate locations through II which an obstacle moves to get to its next location. if(moveSize > 1) { intGroupDef =new Location_t[moveSize groupSz]; II add obstacle to areaMap list my Id = obAreaMap->SetObsList(l ,groupSz,deltaX,delta Y ,curGroupDef); II destructor Obstacle: :-Obstacle 0 { if(curGroupDef) delete curGroupDef; if(startDef) delete startDef; if(shGroupDef) delete shGroupDef; if(moveSize > 1) 109


delete intGroupDef; void Obstacle:: ResetO { int inti; Location_t *cLoc; for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); obAreaMap->obsState(*cLoc, CLEAR); memcpy(curGroupDef, startDef, obGroupSz sizeof(Location_t)); II update obstacle location on the area map obAreaMap->UpdateObsList(myld, curGroupDef, SET, startDeltaX, startDeltaY); memcpy(shGroupDef, startDef, obGroupSz sizeof(Location_t)); obDeltaX = startDeltaX; obDeltaY = startDeltaY; 0 bstacle:: Move(void) { int dx, dy, i, xCnt, yCnt, xSign, ySign; Location_t *cLoc, *intLoc; int magDeltX = abs(obDeltaX); int magDeltY = abs(obDeltaY); dx = obDeltaX; dy = obDeltaY; II determine if any part of II obstacle leaves the assigned area of the work area itself II reverse the direction of the obstacle if so for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); if({(cLoc->x + dx) < obxMin) I I ((cLoc->x + dx) > obxMax) I I ((cLoc->x + dx) < 0) I I ((cLoc->x + dx) > areaX)) obDeltaX = -dx; if(((cLoc->y + dy) < obyMin) I I ((cLoc->y + dy) > obyMax) I I ((cLoc->y + dy) < 0) I I ((cLoc->y + dy) > area Y)) obDelta Y = -dy; II copy current definition of obstacle to shadow locations memcpy(shGroupDef, curGroupDef, obGroupSz sizeof(Location_t)); II if the obstacle speed is greater than 1 in x or y then II must calculate here the path the obtacle will travel to get to 110


II its new locations. xSign = GetSign(obDeltaX); xCnt = xSign; ySign = GetSign(obDeltaY); yCnt = ySign; intCnt = 0; while((abs(xCnt) < magDeltX) I I (abs(yCnt) < magDeltY)) { II compute intermediate locations and set as blocked for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); intLoc = (intGroupDef + intCnt); intLoc->x = cLoc->x + xCnt; intLoc->y = cLoc->y + yCnt; intCnt++; obAreaMap->obsState(*intLoc, PATH); II increment to next set of x.y locations xCnt += xSign; yCnt += ySign; II one more time through the loop, this time move the obstacle II in the direction and range set by obDeltaX and obDelta Y for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); cLoc->x += obDeltaX; cLoc->y += obDelta Y; II update obstacle location on the area map obAreaMap->UpdateObsList(myld, curGroupDef, NEW, obDeltaX, obDeltaY); return 1; II remove where the obstacle was in the current move cycle void Obstacle:: CompleteMove(void) { inti; Location_t *cLoc; for(i=O; i < obGroupSz; i++) { cLoc = (shGroupDef+i); obAreaMap->obsState(*cLoc, CLEAR); for(i=O; i < obGroupSz; i++) { 111


cLoc = (curGroupDef+i); obAreaMap->obsState(*cLoc, SET); II clear off intermediate locations over which obstacle travelled for(i=O; i < intCnt; i++) { cLoc = (intGroupDef+i); obAreaMap->obsState(*cLoc, PATH_ CLEAR); 112


Robot.h

#ifndef _ROBOT_
#define _ROBOT_

#include "Project.h"
#include "robotMap.h"
#include
#include

/***************************************************************************/
const int STUCK = -1;
const int OK = 0;
const int DONE = 1;

const int areaSizeX = 30;
const int areaSizeY = 30;

typedef enum {
    transBlocked,
    transBack,        // blocked from max range of fov
    transFree,
    transDest         // transition cell is the destination
} mode_t;

class Robot
{
public:
    /* constructor */
    Robot(int robotSpeed, int argMaxExtent, robotMap *argMap);

    /* destructor */
    ~Robot(void);

    /* forecast mobile obstacle movement in FOV */
    void PrdctDynObs(int curDist, int fovMinX, int fovMaxX, int fovMinY, int fovMaxY);

    /* plan the path(s) that are admissible for this situation */
    void Plan(Location_t start, Location_t destin);

    /* generate robot movement along an optimal path */
    int                                // return reached/not reached destination, or stuck
    Move(Location_t, Location_t);

    /* return the locations passed thru in the last move */
    Location_t*                        // return number of locations
    path2Loc(int *);

    /* return the head of the graph for the robot path */
    inline Graph_t* getStartNet() { return graphHead; };

    // clear existing path data


    inline void clearPaths(void) { myMap->clearAbs(); };

    // return set up from last planning
    inline void GetPlanExt(int *minX, int *maxX, int *minY, int *maxY)
    {
        *minX = minExtX;
        *maxX = maxExtX;
        *minY = minExtY;
        *maxY = maxExtY;
    };

private:
    // determine linear distance between two points using coordinates
    inline int calcDist(Location_t from, Location_t to);

    // make the empty virtual Move call private
    int Move(void);

    robotMap *myMap;            // Local copy of area map
    int mySpeed;                // Velocity
    Location_t myDestin;        // Current destination
    Graph_t *graphHead;         // Start of path
    Graph_t *currGrph;          // Current cell on path
    Location_t *lastMove;       // List of last locations visited
    int moveCnt;                // number of locations visited
    int maxExtent;              // nominal field of view
    int useExtent;              // current field of view
    int minExtX, maxExtX, minExtY, maxExtY;   // min/max horizon
    mode_t planMode;            // planning mode
    Queue<Graph_t> curQue, ndLst;   // Planning queues
};
#endif


Robot.cpp #include "Robot.h" #include #include II constructor create the data structures for this robot Robot::Robot(int robotSpeed, int argMaxExtent, robotMap *argMap) { mySpeed = robotSpeed; II number of nodes reachable in one time step myMap = argMap; lastMove =new Location_t[robotSpeed]; moveCnt = 0; planMode = transFree; graphHead = NULL; currGrph = NULL; maxExtent = argMaxExtent; II maximum extent for path planning useExtent = maxExtent; II destructor delete this robot Robot::-RobotO { delete lastMove; II forecast mobile obstacle movement in the current field of view void Robot::PrdctDynObs(int curDist, int fovMinX, int fovMaxX, int fovMinY, int fovMaxY) { Location_t locPred; int i, timeCnt; ObsDef_t *obs; List *obsList; Location_t *obsLoc; II get a list of all obstacle definitions. The nature of II this function (i.e. what is returned) must change if the robot has II to figure out the obstacle movement pattern. obsList = myMap->GetObsListO; II compute locations at previous time step since obstacle already moved timeCnt = curDist-1; obs = obsList->GetFirstO; II while mobile obstacle not procesed wh.ile(obs) { II if current obstacle is mobile if(obs->dynamic) { II compute location at time step-1 (time step -1 *delta (x andy) if(timeCnt >= 0) { obsLoc = obs->loc; 115


II for each element of mobile obstacle for(i=O; i < obs->elemCnt; i++) { II if current location is inside field of view then if((obsLoc+i)->x >= fovMinX && (obsLoc+i)->x <= fovMax.X && (obsLoc+i)->y >= fovMin Y && (obsLoc+i)->y <= fovMaxY) locPred.x = (obsLoc+i)->x + (timeCnt obs->deltaX); locPred.y = (obsLoc+i)->y + (timeCnt obs->deltaY); II if location is inside field of view then if(locPred.x >= 0 && locPred.x <= 29 && locPred.y >= 0 && locPred.y <= 29) II set predicted flag for cell at new location my Map->SetPredObs(locPred); II plan robot trajectory paths void Robot::Plan (Location_t start, Location_t destin) { int tDist, cnt, done=O, startDist, mapDist, sumDist, i, chkDone=O; int fovMinX, fovMaxX, fovMinY, fovMaxY; int compDist, maxDist=O, distDest, ddist, minDest=255; int lastDist, travelDist, adjDist = 0; Graph_t *grPtr, *adjPtr, *maxLoc, *transCell, *curCell, *parCell, *anothCell; Graph_t *startCell; graphList *adjList, *mapl..ist, *transl..ist, *parl..ist, *backList, *anothList; Location_t curLoc; II clear any existing path (link) information myMap->clearAbsQ; II initialize distance in work area my Map-> DistlnitO; graphHead = myMap->compPtr(start); II current node is header node of this plan currGrph = graphHead; II compute area boundary for field of view (xmin, xmax, ymin, ymax) curLoc = currGrph->loc; if((fovMinX = curLoc.x-use Extent) < 0) fovMinX = 0; if((fovMaxX = curLoc.x +use Extent)> 29) fovMaxX = 29; if((fovMin Y = curLoc.y-use Extent)< 0) 116


fovMinY = 0; if((fovMaxY = curLoc.y + use Extent) > 29) fovMaxY = 29; II initialize min/max x,y extents minExtX = 30; maxExtX = 0; minExtY = 30; maxExtY = 0; II calculte theoretical (unimpeded) distance from start to destination tDist = myMap->MeasDist(start, destin); II create a list to contain the boundary locations as transition locs mapList = new graphList; backList = new graphList; grPtr = myMap->compPtr(start); grPtr->dist = 0; II do for each entry in the list while(grPtr) { startDist = grPtr->dist + 1; II get start distance for this set if(grPtr->dist > maxDist) maxDist = grPtr->dist; II generate list cells that are adjacent to this location that are II not obstacles adjList = myMap->GenAdj(grPtr->loc, mySpeed); chkDone = 0; II flag to post a boundary condition only once II for each cell in adjacent list adjPtr = adjList->GetFirstO; while(adjPtr) { II if still in field of view mapDist = myMap->MeasDist(start, adjPtr->loc); if(mapDist <= useExtent) { adjPtr->dist = startDist; II place into proper node list for further expansion ndLst. put(adjPtr); II create an adjacent list until boundary location is found if(startDist > adjDist) { adjDist = startDist; II destroy the old list, create a new, add the item if(backList) delete backList; back.List = new graphList; backList-> Addlte m (adj Ptr); 117


} else { else if(startDist = adjDist) { backList->Addltem(adjPtr); II check for minimum/maximum x,y condition if(grPtr->loc.x < minExtX) minExtX = grPtr->loc.x; if(grPtr->loc.x > maxExtX) maxExtX = grPtr->loc.x; if(grPtr->loc.y < minExtY) minExtY = grPtr->loc.y; if(grPtr->loc.y > maxExtY) maxExtY = grPtr->loc.y; II put boundary location in transition list if(!chkDone) { chkDone = 1; mapList->Addltem(grPtr); if((distDest = myMap->MeasDist(grPtr->loc, destin)) < minDest) minDest = distDest; II get next cell in adjacent list adjPtr = adjList->GetNextO; delete adjList; grPtr = ndLst.getO; ndLst.clearO; II moving into the path generation phase. First determine the best tran/1 sition points. This is accomplished by finding locations that were II mapped in the above algorithm closest to the final destination. II Paths are generated from start location to these "best" transtion //locations. transCell = myMap->compPtr(destin); planMode = transFree; II if destination is not reachable in the current field of view if(transCell->dist = 255) { II if destination location is being blocked and distance to II destination is within speed of robot and robot will not get clobbered 118


if((transCell->blocked I I transCell->willBeBlocked I I transCell->pathBlock) && (tDist <= mySpeed) && !currGrph->willBeBlocked) transList = NULL; II is minimum dist to destination >=theoretical distance from start? else if(minDest >= tDist) { if(minDest < 255) { } II this means our best transition locations are all in worse II theoretical position than our start. Generate paths to these II locations anyway, since they are the best we could get. if(backList) delete backList; transList = mapList; planMode = transBlocked; II if no maplist was created then robot was blocked from the II maximum extent of travel. Use the backup list to get as far II as we can. else { planMode = transBack; transList = backList; II else find best locations as transition points else { if(backList) delete backList; distDest = 255; transList = (graphList *)NULL; transList =new graphList; II do until best distance is found maxLoc = mapList->GetFirstO; while(maxLoc) { II compute total distance from current point ddist = myMap->MeasDist(maxLoc->loc, destin); II this check replaces an algorithm that would determine the II true distance of a transition cell to the destination (at II least thru the fov). if(ddist < tDist) { compDist = maxLoc->dist + ddist; if(compDist < distDest) { II destroy the existing "best" list if( trans List) delete transList; 119


} else { } } transList = new graphList; transList->Addltem(maxLoc); distDest = compDist; else if (compDist = distDest) { II put new location on best list transList->Addltem(maxLoc); II next member maxLoc = mapList->GetNextQ; delete mapList; delete mapList; planMode = transDest; transList = new graphList; transList-> Add! tern (transCell); II for each transition cell, generate the parent-child relationship if(transList) { II clear all previous predicted obstacle flags in the field of view myMap->ClearPredObs(fovMinX, fovMaxX, fovMin Y, fovMaxY); lastDist = 0; startCell = myMap->compPtr(start); transCell = transList->GetFirstQ; w bile (transCell) { curCell = transCell; II do until start location is obtained w hile(curCell) { II compute the affect of dynamic obstacle interaction at travel time II into scenario travelDist = (curCell->dist/mySpeed) + (curCell->dist% mySpeed); if(travelDist != lastDist) { II clear all previous predicted obstacle flags in the field of view myMap->ClearPredObs(fovMinX, fovMaxX, fovMin Y, fovMaxY); if(travelDist > 0) II predict future locations only (already moved) PrdctDynObs(travelDist, fovMinX, fovMaxX, fovMinY, fovMaxY); lastDist = travelDist; 120


} if(!curCell->predBlock) { } else { II for each parent of current cell, add a child parList = myMap->GetParent(curCell); if(parList) { parCell = parList->GetFirstO; while(parCell) { if(myMap->AddChild(parCell, curCell)) curQue.put(parCell); parCell = parList->GetNextO; II add child cells to the transition list if(transCell->numParent) { anothList = (graphList *)transCell->parents; if(anothList) anothCell = anothList->GetFirstO; while(anothCell) { if(! anoth Cell->willBe Blocked) transList->Addltem(anothCell); anothCell = anothList->GetNextO; curCell = curQue.getO; II get next transition cell transCell = transList->GetNextO; curQue.clearO; II reintiahze the queue delete transList; II make the robot move along a planned trajectory int Robot::Move(Location_t start, Location_t destin) { canst int maxEval = -4096; canst int childFactor = 0; canst int willBeFactor = -100; canst int blockedFactor = -200; canst int onPathFactor = -200; 121


graphList *currList; Graph_t *currNode, *bestNode, *startCell; int bestEval, linDist,n=O, clrPath=O, curEval, currDist, destDist; int pathDist, testDist; static int totMove = 0, firstTime=O, chkDist; static Graph_t *destNode; moveCnt = 0; II if in transDest mode see if the destination is blocked if(planMode = transDest) { destNode = myMap->compPtr(destin); if(destNode->blocked I I destNode->willBeBlocked) planMode = transFree; II if this is the first move for the robot if((currGrph =NULL) I I (destin.x 1= myDestin.x I I destin.y != myDestin. y)) { myDestin =destin; totMove = 0; destNode = myMap->compPtr(myDestin); myMap->ClearPathO; II plan the initial path to the destination Plan(start, destin); II verify that robot is not already at the destination else if((currGrph->loc.x = myDestin.x) && (currGrph->loc.y = myDestin.y)) { *(lastMove+moveCnt) = currGrph->loc ; moveCnt++; totMove++; return DONE; II do I have a move from the current position or is it time to replan? else if(planMode 1= transDest I I currGrph->numChild = 0) llif(currGrph->numChild = 0) { II add here to replan (Plan) from current location to destination totMove = 0; Plan(currGrph->loc, myDestin); II ii plan is blocked from destination if(planMode = transBlocked) { destDist = myMap->MeasDist(currGrph->loc, myDestin); if(destDist <= mySpeed) { destNode = myMap->compPtr(destin); if(destNode->blocked I I destNode->willBeBlocked) { 122


planMode = transFree; return OK; while(planMode = transBlocked) { } II increment search extent and replan use Extent++; if(useExtent >= 29) { useExtent = maxExtent; return STUCK; II fail this search Plan(currGrph->loc, my Destin); planMode = transDest; firstTime = 1; useExtent = maxExtent; II reset to normal field of view *(lastMove) = currGrph->loc; currDist = calcDist(currGrph->loc, my Destin); startCell = currGrph; n =0; while(startCell && (n < mySpeed)) { II examine current choices for movement currList = (graphList *)currGrph->children; if(!currList) break; currNode = currList->GetFirstQ; if(!currNode) break; best Node = currGrph; II best is current position bestEval = maxEval; while(currNode) { II eliminate this node from consideration if it contains an II obstacle, if my start node will be blocked and this node II is currently blocked, if this node will be blocked and this II is my final move for this plan. if(currNode->obstacle I I (startCell->willBeBlocked && currNode->blocked) I I (currNode->willBeBlocked && (n+l) = mySpeed)) II check to see if this is the destination if(currNode->loc.x = myDestin.x && currNode->loc.y = myDestin.y) return STUCK; 123


} II evaluate this node compared to others. Each location starts II with a measurement of how much closer it brings the robot to II the destination than the current location. The cell then gets II points for the number of children it has and loses points for II being warped in a certain way: willBeBlocked, blocked, previous II and previous path. else { } II calculate how much closer this takes robot to destination linDist = calcDist(currNode->loc, my Destin); if(linDist = 0) II currNode is the destination { II found the destination, so pick this one and exit currGrph = currNode; *(lastMove+moveCnt) = currGrph->loc; moveCnt++; totMove++; currGrph = NULL; return DONE; testDist = currDist linDist; II add in the number of children on this node testDist += (currNode->numChild childFactor); II take away for abnormalities if(currNode->willBeBlocked) testDist += willBeFactor; if(currNode->blocked) testDist += blockedFactor; if(currNode->nxtOnPath) testDist += onPathFactor; if(testDist > bestEval) { bestEval = testDist; bestNode = currNode; currNode = currList->GetNextQ; II did we fmd no acceptable nodes'? if(bestEval = maxEval && startCell->willBeBlocked && moveCnt = 0) { II try replanning out of this one if(totMove > 0) { } II clear paths may exist but are not planned, better replan currGrph->blocked = 0; totMove = 0; Plan(currGrph->loc, myDestin); else II I have no paths from the current location { 124


} moveCnt = 1; break; else if(bestEval = maxEval) II try again next time break; else { II the path to follow is thru currNode currGrph->nxtOnPath = bestNode; if(bestNode->nxtOnPath) bestNode->nxtOnPath = NULL; currGrph = bestNode; *(lastMove+moveCnt) = currGrph->loc; moveCnt++; totMove++; n++; II if we are on a dedicated transition location path if(planMode = transDest) { } if( first Time) { else { firstTime = 0; chkDist = myMap->MeasDist(currGrph->loc, destin); II calculate current distance to destination and compare to last II time through. II Reset flag if distance is now better pathDist = myMap->MeasDist(currGrph->loc, destin); if(pathDist >= chkDist) chkDist = pathDist; else planMode = transFree; return OK; Location_t* Robot: :path2Loc(int *numLocs) { *numLocs = moveCnt; return lastMove; inline int Robot::calcDist(Location_t from, Location_t to) { return (((from.x-to.x) (from.x-to.x)) + ((from.y to.y) (from.y to.y))); 125


List.h

// List.h - defines the template for a list class
#ifndef _LIST_
#define _LIST_

// A template class for doubly linked lists
template <class Tc> class List
{
    // class Node;
public:
    List()                      // Constructor (default) - no arguments required
        : ListHead(0), ListTail(0), Pointer(0) { };

    ~List()                     // Destructor
    {
        Node *n1, *n2;
        n1 = ListHead;
        while (n1 != NULL) {
            n2 = n1->next;
            delete n1;
            n1 = n2;
        }
    };

    int AddItem(const Tc *t)    // Add an element to the list only if it is
    {                           // not already present
        Node *n;
        n = ListHead;
        while (n) {
            if (n->item == t)
                return 0;
            // get next
            n = n->next;
        }
        n = new Node;
        n->item = (Tc *) t;
        n->next = 0;
        n->prev = ListTail;
        if (ListTail) {
            ListTail->next = n;
            ListTail = n;
        }
        else
            ListHead = ListTail = n;
        return 1;
    };


    void DeleteItem(Tc *t)      // Delete element from the list
    {
        Node *n;
        int cnt = 0;
        n = ListHead;
        while (n) {
            if (n->item == t) {
                if (n->prev)
                    n->prev->next = n->next;
                else
                    ListHead = n->next;
                if (n->next)
                    n->next->prev = n->prev;
                else
                    ListTail = n->prev;
                if (n == Pointer)
                    Pointer = n->next;
                delete n;
                break;
            }
            // get next
            n = n->next;
        }
    };

    Tc *GetFirst()              // Get the first item in the list (reset pointer)
    {
        Pointer = ListHead;
        if (Pointer)
            return Pointer->item;
        else
            return 0;
    };

    Tc *GetNext()               // Get the next item in the list
    {
        if (Pointer)
            Pointer = Pointer->next;
        else
            Pointer = ListHead;
        if (Pointer)
            return Pointer->item;
        else
            return 0;
    };

    Tc *GetItem() const         // Get the current item in the list
    {
        if (Pointer)
            return Pointer->item;
        else
            return 0;
    };


private:
    struct Node {               // Nodes of the (doubly linked) list
        Tc *item;               // Pointer to the user's data
        Node *next;             // Pointer to next item in the list
        Node *prev;             // Pointer to previous item in the list
    };

    Node *ListHead;             // The list of data elements
    Node *ListTail;             // Pointer to last element in the list
    Node *Pointer;              // Current position in the list
};
#endif


Queue.h

// Queue.h - defines the template for a Queue class
#ifndef _QUEUEH_
#define _QUEUEH_

const int MAX_QUEUE_SIZE = 900;

// A template class for queues.  Use a simple pointer array as the underlying
// structure.
template <class Tc> class Queue
{
public:
    Queue()                     // Constructor (default) - no arguments required
        : qOut(0), qIns(0) { };

    ~Queue() {};                // Destructor

    // insert a new element into the queue
    int put(Tc *item)
    {
        if (qIns < MAX_QUEUE_SIZE) {
            ptrQueue[qIns] = item;
            qIns++;
            return 1;
        }
        else
            return 0;           // the queue is filled
    };

    // extract an element from the queue
    Tc* get(void)
    {
        if (qOut == qIns)
            return NULL;
        else
            return ptrQueue[qOut++];
    };

    int isEmpty()
    {
        if (qOut == qIns)
            return 1;
        else
            return 0;
    };

    void clear()
    {
        // reset pointers
        qIns = 0;


        qOut = 0;
    };

private:
    Tc *ptrQueue[MAX_QUEUE_SIZE];
    int qIns;
    int qOut;
};
#endif


BIBLIOGRAPHY

1. Craig, J. J. Introduction to Robotics: Mechanics and Control. 2nd Edition. New York: Addison-Wesley. 1989.
2. Ellis, M. A. and Stroustrup, B. The Annotated C++ Reference Manual. New York: Addison-Wesley. 1990.
3. Jones, J. L. and Flynn, A. M. Mobile Robots: Inspiration to Implementation. Wellesley, MA: AK Peters Ltd. 1993.
4. Martin Marietta Astronautics Group (MMAG). Technical Proposal: Intelligent Mobile Sensor System for Autonomous Monitoring and Inspection. Denver, CO. (1991): 11-1-11-25.
5. Parker, L. E. Heterogeneous Multi-Robot Cooperation. Doctoral Thesis, Massachusetts Institute of Technology. 1994.
6. Petzold, C. Programming Windows 3.1. 3rd Edition. Redmond, WA: Microsoft Press. 1992.
7. Rich, E. and Knight, K. Artificial Intelligence. 2nd Edition. New York: McGraw-Hill. 1991.
8. Stilman, B. "From Serial to Concurrent Motions in Multiagent Systems: A Linguistic Geometry Approach." Journal of Systems Engineering (To Appear 1996): 1-33.
9. Stilman, B. "Translations of Network Languages." An International Journal: Computers and Mathematics with Applications. Vol. 27, No. 2 (1994): 65-98.
10. Stilman, B. "A Syntactic Hierarchy for Robotic Systems." Integrated Computer Aided Engineering. Vol. No. 1 (1993): 57-82.
11. Stilman, B. "A Linguistic Approach to Geometric Reasoning." An International Journal: Computers and Mathematics with Applications. Vol. 26, No. 8 (1992): 29-58.