Citation
Batch spatial query processing using R-tree over solid state drives (SSD) : leveraging internal parallelism

Material Information

Title:
Batch spatial query processing using R-tree over solid state drives (SSD) : leveraging internal parallelism
Creator:
Wanjerkhede, Mrutunjayya
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English

Thesis/Dissertation Information

Degree:
Master's (Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Computer Science and Engineering, CU Denver
Degree Disciplines:
Computer science and engineering
Committee Chair:
Banaei-Kashani, Farnoush
Committee Members:
Alaghband, Gita
Altman, Tom

Notes

Abstract:
Spatial data management has become an integral part of many applications such as Geographic Information Systems (GIS). Range queries are among the most important queries in spatial databases, and spatial indexing techniques like the R-Tree are applied to improve their performance. A database system often receives multiple range queries and, in order to improve overall performance, processes them in batches, considering the spatial characteristics of the incoming requests. However, the performance of batch processing and of operating the index structure depends on the underlying storage hardware, such as hard disk drives (HDDs). Over the past three decades, the database community has spent considerable time and energy optimizing its systems around HDD features. Now, however, flash-based Solid State Drives (SSDs) are emerging as mainstream storage devices, with completely different and better performance characteristics than HDDs. With this new technology, we need to rethink the batch processing of spatial queries. In this work, we propose a new structure for R-Tree nodes on SSDs and, based on it, develop two batching methods: a general method and a static-allocation-specific method. Both methods avoid the resource contention that occurs due to the long data movement time caused by interleaving, and improve overall performance. With our R-Tree node representation, spatial queries can be batched without considering the spatial characteristics of the incoming requests. We achieved an overall 80% performance gain compared to non-batching methods over SSDs.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
Copyright Mrutunjayya Wanjerkhede. Permission granted to University of Colorado Denver to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.



Full Text
BATCH SPATIAL QUERY PROCESSING USING R-TREE OVER SOLID STATE
DRIVES (SSD): LEVERAGING INTERNAL PARALLELISM
by
MRUTUNJAYYA WANJERKHEDE
B.E, Visvesvaraya Technological University, Belgaum, India, 2008
A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science, Computer Science & Engineering
2016


© 2016
MRUTUNJAYYA WANJERKHEDE
ALL RIGHTS RESERVED


This thesis for the Master of Science degree by
Mrutunjayya Wanjerkhede
has been approved for the
Computer Science and Engineering
by
Farnoush Banaei-Kashani, Chair & Advisor
Gita Alaghband
Tom Altman
17-Dec-2016


Mrutunjayya Wanjerkhede (M.S., Computer Science)
Batch Spatial Query Processing Using R-Tree Over Solid State Drives (SSD): Leveraging Internal Parallelism
Thesis directed by Assistant Professor Farnoush Banaei-Kashani
ABSTRACT
Spatial data management has become an integral part of many applications such as Geographic Information Systems (GIS). Range queries are among the most important queries in spatial databases, and spatial indexing techniques like the R-Tree are applied to improve their performance. A database system often receives multiple range queries and, in order to improve overall performance, processes them in batches, considering the spatial characteristics of the incoming requests. However, the performance of batch processing and of operating the index structure depends on the underlying storage hardware, such as hard disk drives (HDDs). Over the past three decades, the database community has spent considerable time and energy optimizing its systems around HDD features. Now, however, flash-based Solid State Drives (SSDs) are emerging as mainstream storage devices, with completely different and better performance characteristics than HDDs. With this new technology, we need to rethink the batch processing of spatial queries. In this work, we propose a new structure for R-Tree nodes on SSDs and, based on it, develop two batching methods: a general method and a static-allocation-specific method. Both methods avoid the resource contention that occurs due to the long data movement time caused by interleaving, and improve overall performance. With our R-Tree node representation, spatial queries can be batched without considering the spatial characteristics of the incoming requests. We achieved an overall 80% performance gain compared to non-batching methods over SSDs.
The form and content of this abstract are approved. I recommend its publication.
Advisor: Farnoush Banaei-Kashani


ACKNOWLEDGEMENT
I have put great effort into this research; however, it would not have been possible without the kind support and help of many individuals and the CSE BD Lab members. I would like to extend my sincere thanks to all of them.
I am highly indebted to Dr. Farnoush Banaei-Kashani for his guidance and constant supervision, for providing the necessary information regarding the thesis, and for his support in completing this research work.
I would like to express my gratitude to the department chair, Dr. Gita Alaghband, and to the Department of Computer Science for their kind cooperation and encouragement, which helped me complete this project.


TABLE OF CONTENTS
CHAPTER
I INTRODUCTION....................................................................1
II BACKGROUND: SSD INTERNALS........................................................6
2.1 NAND Flash Memory............................................................6
2.2 SSD Internal Architecture....................................................7
2.3 Parallelism in SSD Architecture..............................................9
2.4 Flash Translation Layer (FTL) and Allocation Schemes.......................10
2.5 Native Command Queueing (NCQ)...............................................15
III RELATED WORK SURVEY............................................................16
3.1 Evaluation of SSD...........................................................16
3.2 General Query Processing on SSD.............................................16
3.3 Spatial Data Management on SSD..............................................17
IV PROBLEM DEFINITION AND FORMALIZATION...........................................19
4.1.1 Problem Definition........................................................19
4.1.2 R-Tree Query Workflow and Allocation over SSD............................19
4.1.3 Problem Formalization.....................................................21
V PROPOSED SOLUTION..............................................................24
4.1 Proposed Solution...........................................................24
4.1.1 Overview: Page Conflict Graph.............................................24
4.1.2 Page Read Request Packing (PRRP) Algorithm................................24


4.1.3 Analysis and Proof of Optimality of PRRP Algorithm.......................26
4.1.4 Case-1: Conflict Graph Derivation PAGE-Level................................29
4.1.5 Case-2: Page Conflict Graph Derivation DIE-Level............................31
VI EXPERIMENTS AND RESULTS...........................................................35
5.1 Setup and Configuration........................................................35
5.2 Workloads and Performance Metrics..............................................37
5.3 Results and Analysis...........................................................38
5.3.2 Performance of Naive vs Plane-Level vs Die-Level............................38
5.3.3 Number of Queries, Query Distribution and Request Distribution...............42
5.3.4 Why Plane-Level Scheduling?..................................................45
VII CONCLUSION AND FUTURE WORK........................................................48
REFERENCES............................................................................49


LIST OF FIGURES
Figure
1. HDD vs SSD Price------------------------------------------------------------------------------2
2. SSD Internal Architecture [21]----------------------------------------------------------------3
3. Flash Memory Package--------------------------------------------------------------------------7
4. HDD vs SSD Physical Structure-----------------------------------------------------------------7
5. Internal Architecture of an SSD---------------------------------------------------------------8
6. Plane Level Parallelism and Interleave operation---------------------------------------------10
7. Channel and Way Parallelism------------------------------------------------------------------10
8. 1 of 24 Ways of Allocation Scheme, CWDP------------------------------------------------------12
9. 1 of 24 Ways of Allocation, WDPC-------------------------------------------------------------12
10. CPWD, Set of pages in a DIE------------------------------------------------------------------14
11. CPDW, Set of pages in a DIE------------------------------------------------------------------14
12. Native Command Queuing in HDD----------------------------------------------------------------15
13. Spatial Data partitioned into MBR------------------------------------------------------------20
14. R-Tree for the MBRs in Figure 13---------------------------------------------------21
15. R-Tree Allocation over SSD and Node Structure------------------------------------------------21
16. Case 1 - Conflict Graph Derivation--------------------------------------------------------------31
17. Range query performance with scheduling for uniformly distributed data---------------------39
18. Range query performance with scheduling for normal distribution of data--------------------40
19. No. of Interleave Operations for Uniform Distribution-----------------------------------------40
20. No. of Interleave Operations for Normal Distribution-----------------------------------------41
21. Point query performance, uniform data distribution------------------------------------------42
22. Point query performance, normal data distribution-------------------------------------------42
23. Read distribution across SSD----------------------------------------------------------------43
24. Impact of number of queries, 25 to 400------------------------------------------------------44


25. Performance with query distribution Uniform vs Normal vs Random---------------------------------44
26. No. of Interleave Ops with query distribution Uniform vs Normal vs Random-----------------------45
27. Plane-Scheduling vs Other Scheduling-----------------------------------------------------------46
28. Plane-Scheduling vs Other Scheduling-----------------------------------------------------------46


LIST OF TABLES
Table
1. Allocation Schemes with the Same Pattern.................................................13
2. Symbols and Notations....................................................................22
3. Experimental SSD Configuration Parameters................................................36
4. Experimental Workloads...................................................................37
5. Workload Details.........................................................................37


CHAPTER I
INTRODUCTION
Spatial data management has become an integral part of many applications like image processing, geographic information systems (GIS), and computer-aided design (CAD). Range queries are among the most important queries in such spatial applications. In general, the task is to find all spatially close neighbors. The simplest way to answer such range queries is to scan the complete database, which leads to disk access overhead and a large number of distance computations. Database management systems (DBMS) adopt spatial indexing schemes to avoid this disk access overhead. The most prominent spatial index structure is the R-Tree [17] and its variant the R*-Tree [17]. R-Tree index structures provide an average logarithmic search time, but the dimensionality and distribution of the spatial data have a negative impact on the search time.
R-Tree index structures are often stored in an array, as a traditional B-Tree is, and queries are answered with a sequential scan. The sequential access performance of hard disk drives (HDDs) is leveraged for these sequential scans. In general, a DBMS receives a very large number of range queries. To reduce the average query time, the database system batches or combines these multiple queries. Batching these queries requires a criterion, and such criteria consider the spatial properties of the incoming queries, for example sorting the queries according to a space filling curve (Hilbert sorting) [3, 4]. This increases the probability of accessing nearby requests sequentially. The system then batches queries to reduce random reads and increase sequential accesses. Most database research has evolved around the optimization of these queries for HDD storage systems.
The performance of the R-Tree index structure is determined not only by the properties of the spatial data but also by the underlying storage hardware. The database community has spent the previous two to three decades optimizing database systems for the underlying HDD as a storage device.
Efforts have been made to address issues like long seek latencies, high power consumption, and reliability. Due to the mechanical nature of HDDs, it is difficult to eliminate such issues completely. But now flash-based Solid State Drives (SSDs) are emerging as mainstream storage devices, due to their performance, lower latency, higher throughput for random I/O, lower power consumption, and price advantages over HDDs. Recently Samsung released an SSD [26] that can store about 15TB of data. This tremendous change in storage may soon make HDDs obsolete. In recent years the price of SSDs has been going down; Figure 1 shows the price comparison between SSDs and HDDs. Considering the cost-effectiveness of SSDs, we need to reconsider our applications and DBMSs to leverage the characteristics of an SSD.
Figure 1. HDD vs SSD Price (Source: IDC, Worldwide Solid State Drive Forecast, 2015-2019, doc #256038, May 2015)
Figure 2 shows the internal architecture of an SSD. An SSD is arranged into multiple flash memory packages that are connected to a flash controller through multiple parallel channels. The flash controller can operate on each of these channels independently and in parallel. A package is composed of multiple dies, which are in turn composed of multiple planes. The planes are in turn divided into a large number of physical data pages.
Each die in a package can be operated independently and receives interleaved operation commands from the controller. From an architecture perspective, SSDs exhibit multiple levels of parallelism: channel, package, die, and plane [21]. Since multiple components of an SSD can be activated simultaneously, multiple pages can be operated on (read or written) in parallel. This parallel access improves the performance of random I/O, for which SSDs have proved superior to HDDs.
Figure 2: SSD Internal Architecture [21]
The random read speed of SSDs can be utilized to improve the performance of spatial queries. Leveraging this randomness for a single query or a small number of range queries is straightforwardly beneficial. The Native Command Queueing (NCQ) component of the SSD re-arranges the requests so that they can be executed in parallel, i.e., distributed across multiple channels. But the size of the NCQ, which varies from 32 to 128 requests, is a limitation: for a large number of queries the parallelism does not help much, due to the NCQ size limit. A large number of requests can create hot-spots among the SSD resources (channel, package, die, or plane); that is, multiple requests may use the same channel, package, or die. Distributing the requests across the SSD resources by scheduling them can avoid or reduce these hot-spots.
To improve performance, the large number of page requests generated by queries must therefore be re-arranged or batched at the application level so that this parallelism is leveraged. Since an SSD provides several levels of parallelism and 24 ways of allocating pages, we need to rethink the batching of page requests over SSDs compared to HDDs. In this work, we present methods for packing and scheduling the read requests generated by spatial queries at the application level that leverage the maximum degree of parallelism.
Major performance gains in spatial query processing on flash-based SSDs can be achieved if we leverage positive characteristics such as parallelism and avoid negative characteristics such as out-of-place updates. In this work, we concentrate on improving query performance by leveraging the internal parallelism of SSDs. Depending on the configuration of an SSD, the controller can execute n flash operations in parallel, i.e., it can read or write n pages in parallel. Maximizing the utilization of this n gives us better query performance. The contributions of this work are:
1. Allocating the HDD-based R-Tree to an SSD system using the 24 static allocation schemes.
2. Generic methods for identifying and separating sequential or non-parallel requests which create hot-spots and impact the performance (Case-1).
3. Further optimizing this identification process based on the underlying allocation scheme (Case-2).
4. A new optimal Page Read Request Packing (PRRP) method to pack incoming read requests into a minimum number of batches and improve performance by avoiding hot-spots.
5. Implementation of "RSSDSimulator", which simulates all 24 ways of allocating an R-Tree and spatial query processing over an SSD.
6. Since no commercial SSD manufacturer provides details of its allocation schemes, we have used the hardware-validated SSDSim simulator [24], modified it to support all 24 allocation schemes, and measured the overall performance improvement of spatial queries over an SSD.
Most of the related work focuses on the construction of the R-Tree over SSDs by optimizing the "out-of-place updates" [4, 5, 6, 7]; other works concentrate on the B-Tree [28] and hash indexing over SSDs [22] without considering internals such as allocation schemes. Existing work on a parallel implementation [10] of the R-Tree over a distributed HDD architecture proposes efficient spatial data partitioning for high-performance spatial data access. We compared the performance of our solution with the naive (no scheduling) method. With our scheduling and packing approach we achieved about 70% improvement over naive scheduling by packing read requests according to the plane configuration of the SSD (with no consideration of the allocation scheme). Optimizing the packing based on the allocation scheme improved performance by a further 10%. Overall, we obtained an 80% performance improvement compared with naive scheduling.


CHAPTER II
BACKGROUND: SSD INTERNALS
To meet our research goal we must first understand the SSD architecture and its important internal properties. In this chapter, we discuss the internal architecture of an SSD.
2.1 NAND Flash Memory
There are two kinds of flash memory, NOR and NAND [1]. NAND flash memory has higher density and larger capacity compared to NOR flash memory. There are also two types of NAND flash memory: Single-Level Cell (SLC), which stores one bit per cell, and Multi-Level Cell (MLC), which stores two or more bits per cell. To provide more storage capacity, several flash chips are packed together into a module called a package [3]. Each package contains two or more dies. Each die is composed of multiple planes. Each plane consists of thousands of flash blocks and one or more data/cache registers used as I/O buffers. Each block consists of 64 or 128 pages. Figure 3 shows a typical flash memory package.
Two other important characteristics of flash memory are write-after-erase and the erase cycle [2]. A write operation changes the value of a bit from '1' to '0'. A part of a page cannot be updated: once a page is written it must be erased, setting all the bits back to '1', before the page or any part of it can be rewritten. Each block allows only a limited number of erase operations. Typically an MLC block has an erase-cycle limit of about 10K (more in the latest SSDs) and an SLC block about 100K [1]. The page size of NAND flash typically varies among 2KB, 4KB, and 8KB.


2.2 SSD Internal Architecture
Figure 4 compares the physical structure of an SSD and an HDD. HDDs have various moving parts, making them susceptible to shock and damage, while SSDs use a non-mechanical design of NAND flash mounted on a circuit board.
Figure 4: HDD vs SSD Physical Structure


Figure 5 [21] depicts the different components of a Solid State Drive (SSD). An SSD is composed of an array of flash memory packages, a processor (CPU), an internal RAM buffer, a controller, and a host interface. SSDs expose logical block addresses (LBAs) as an interface to the host. Host I/O requests are handled by the SSD controller, which mimics the behavior of a hard disk drive (HDD). The controller uses the flash translation layer (FTL) mapping information to map (or translate) an incoming LBA request to the actual physical page. Using the FTL information, it issues commands like read, write, and erase to the flash memory packages via the flash memory controller layer. The flash memory controller connects to the flash packages via multiple channels, and each channel connects to 2-8 flash packages.
Figure 5: Internal Architecture of an SSD
Generally, the number of channels varies from 2 to 10; this number is limited by the tolerable peak current and the area available in the SSD controller. The flash controller can issue independent I/O requests to multiple channels simultaneously, which gives us channel-level parallelism. The number of flash memory packages sharing a channel gives us the number of ways. I/O parallelism can be achieved across multiple packages within a channel by issuing interleaved I/O commands, which gives us way-level parallelism. More about SSD parallelism is discussed in the next section.
2.3 Parallelism in SSD Architecture
In an SSD, the number of channels and ways defines the number of NAND operations that the controller can process in parallel. An SSD system provides four levels of parallelism [3, 4] (a small sketch of this geometry follows the list):
1. Channel-level parallelism: Each channel can process commands from the controller independently and simultaneously. The throughput of page reads can be significantly increased by leveraging channel-level parallelism.
2. Package-level parallelism: Each channel in an SSD is shared by multiple packages. These packages can be operated independently and simultaneously, which leverages package-level parallelism. However, to optimize utilization of shared resources such as the control bus, operations to packages under the same channel are interleaved.
3. Die-level parallelism: Within a package, two dies can operate simultaneously. Here too the only restriction is that commands to each die cannot be served at the same time, i.e., the commands need to be interleaved.
4. Plane-level parallelism: Inside a die, multiple planes can operate simultaneously. There are a couple of restrictions on this level of parallelism [5]. First, the same operation (read/write/erase) must run on all planes. Second, the operated pages must have the same package, die, block, and page address; i.e., only page 1 from plane 0 and page 1 from plane 1 can be operated simultaneously.
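As a rough illustration of these four levels, the following sketch models the geometry with a small struct (the struct and field names are our own, not an SSD API) and computes an upper bound on the number of pages that can be operated on at once; interleaving and multi-plane restrictions reduce the achievable number in practice.

#include <cstdint>
#include <iostream>

// Hypothetical geometry descriptor; field names are ours, not an SSD API.
struct SsdGeometry {
    uint32_t channels, packagesPerChannel, diesPerPackage, planesPerDie;

    // Upper bound on pages operated at once if every channel, package, die
    // and plane is kept busy (restrictions can lower the achievable number).
    uint32_t maxParallelPages() const {
        return channels * packagesPerChannel * diesPerPackage * planesPerDie;
    }
};

int main() {
    SsdGeometry g{8, 2, 2, 2};                 // e.g. the configuration in Table 3
    std::cout << g.maxParallelPages() << "\n"; // 64 planes in total
}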
Figure 6 depicts plane-level parallelism and the interleave operation across dies.
Figure 6: Plane Level Parallelism and Interleave Operation
2.4 Flash Translation Layer (FTL) and Allocation Schemes
The FTL is responsible for mapping between the logical world of the host (LBA) and the physical flash world (PBA). It (1) maintains the logical-to-physical address mapping data structure, (2) keeps track of invalid pages for garbage collection, and (3) evenly distributes write and erase operations over blocks (wear leveling). Numerous methods of implementing the FTL have been published; they consider different levels of mapping granularity, i.e., page mapping, block mapping, and hybrid mapping.
Figure 7: Channel and Way Parallelism


An allocation scheme (or page allocation strategy) determines how a physical page is chosen for mapping. Allocation schemes are classified into two categories, static and dynamic. Static allocation assigns a logical page to a pre-determined channel, package, die, and plane. Dynamic allocation assigns a logical page to any free physical page. The allocation strategies depend on how a page is selected from the pool: two orders, channel-first and way-first [6], determine how a page is selected. Figure 7 depicts both channel and way parallelism. Way-type parallelism can be further segregated into package-, die-, and plane-level parallelism.
Channel-first allocates pages based on channel-level striping, which benefits intra-request parallelism. Way-first allocates pages taking advantage of way pipelining, which benefits inter-request parallelism; way pipelining leverages die- and plane-level parallelism. Considering the four levels of parallelism, we can have twenty-four allocation strategies, i.e., the different orderings of CWDP (Channel-Way-Die-Plane, where W denotes package). Figures 8 and 9 show 2 sample allocation schemes out of the 24.
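The twenty-four strategies are simply the 4! = 24 orderings of the four letters C, W, D, and P. A minimal sketch (ours, not part of SSDSim) that enumerates them:

#include <algorithm>
#include <iostream>
#include <string>

int main() {
    // Start from the lexicographically smallest ordering so that
    // std::next_permutation cycles through all 4! = 24 allocation orders.
    std::string order = "CDPW";
    int count = 0;
    do {
        std::cout << order << ' ';
        ++count;
    } while (std::next_permutation(order.begin(), order.end()));
    std::cout << "\ntotal: " << count << "\n";   // prints 24
}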
In both sample allocation schemes (Figures 8 and 9), we observe that the set of pages in a plane is the same. Also, the difference between two consecutive pages in a plane is constant and equal to the number of planes in the SSD configuration. In the example allocation schemes, as marked, pages {5, 21} are allocated in one plane, and the difference between the pages is 16, which is equal to the number of planes, i.e., 16. We also observed that, for all 24 allocation schemes, the set of pages in a plane and the distance between pages remain constant.
Remark-1: For a given SSD configuration, the set of pages in a plane is fixed for any of the allocation schemes, and the difference between two consecutive page-IDs in a plane is equal to the number of planes.
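In symbols (our notation, written in LaTeX), Remark-1 says that two page-IDs share a plane exactly when they are congruent modulo the number of planes:

% Remark-1: same plane <=> congruent modulo the number of planes.
\[
  \mathrm{plane}(p) = p \bmod n_{PL},
  \qquad
  p_i,\, p_j \text{ share a plane} \iff p_i \equiv p_j \pmod{n_{PL}} .
\]
% Example from the text: with $n_{PL} = 16$, pages 5 and 21 satisfy
% $21 - 5 = 16 = n_{PL}$, so $\mathrm{plane}(5) = \mathrm{plane}(21) = 5$.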
Figure 8: 1 of 24 ways of allocation scheme, CWDP
Figure 9: 1 of 24 ways of allocation, WDPC
The same kind of observations, as for planes, can be made at the die level. In Figures 10 & 11 we can notice that the set of pages in a die is the same. But this pattern is not applicable to all the allocation schemes, as it is for the plane level. However, groups of allocation schemes do exhibit the same pattern at the die level. Table 1 lists the allocation schemes sharing a common pattern. Figures 10 & 11 show that the two allocation schemes CPWD and CPDW have the same pattern of pages in a die.

Table 1. Allocation Schemes with the Same Pattern

Group No. Allocation Schemes with the Same Die-Level Pattern
1 CWDP, CDWP, WCDP, WDCP, DCWP, DWCP
2 CPWD, CPDW
3 WPCD, WPDC, DPCW, DPWC
4 WDPC, DWPC
5 CDPW, CWPD, WCPD, DCPW
6 PCDW, PCWD, PDCW, PDWC, PWCD, PWDC

Remark-2: For a given SSD configuration, the set of pages in a die is fixed, but it depends on the allocation scheme.

Remark-2.1: A group of allocation schemes has the same pattern, i.e., the schemes in the group allocate the same set of pages in a die.


Figure 10: CPWD, Set of pages in a DIE
Figure 11: CPDW, Set of pages in a DIE


2.5 Native Command Queueing (NCQ)
Native Command Queuing (NCQ) allows hard disk drives to internally optimize the order of read and write requests. This can reduce the amount of unnecessary drive head movement, resulting in increased performance for workloads where multiple simultaneous read/write requests are outstanding.
SSD devices also support NCQ. In an SSD, NCQ optimizes read and write requests by distributing them efficiently across the available channels, resulting in a significant performance improvement. The NCQ can queue up to 32 requests.
Figure 12: Native Command Queuing in HDD
For a large number of requests generated by the application, however, NCQ cannot optimize the requests: with many requests, the probability that all of them go to the same channel increases, and the NCQ optimization is of no use because the queue size is limited. We propose our scheduling scheme at the application level to avoid this contention of requests on a single channel.


CHAPTER III
RELATED WORK SURVEY
We have categorized the related work into (1) work related to SSD design and evaluation, (2) general query processing on SSD systems, and (3) spatial data management on SSDs. As our current work is related to the R-Tree, we concentrate mostly on R-Tree indexing.
3.1 Evaluation of SSD
Much of the SSD-related research concentrates on the design and evaluation of SSD internal structures [22]. Commercial SSD manufacturers like Intel, Samsung, and SanDisk do not provide details about internal structure such as page size or FTL mapping. Feng Chen et al. [23] provide insights about how to understand or extract the internal characteristics of a closed SSD; this is a kind of reverse engineering in which different workloads are run and the characteristics of the SSD are inferred from the resulting statistics. Other works [22, 24, 31] provide insight into evaluating the internal parallelism of an SSD.
3.2 General Query Processing on SSD
For the past three decades researchers have worked on optimizing database query processing with respect to the performance characteristics of traditional HDDs. SSDs, however, exhibit a new set of properties compared to HDDs, and researchers now have to rethink query processing techniques over SSDs. Recent works have concentrated on leveraging the benefits of random reads in SSDs [8, 9]. Dimitris Tsirogiannis et al. [8] propose FlashScan and FlashJoin, flash-aware scan and join implementations for business intelligence and other data analysis queries over a column-based page layout; their results show performance improvements by a factor of six for scans, multiway joins, and complex TPC-H queries. In another work, Pedram Ghodsnia et al. [9] leverage the parallel I/O characteristics of SSDs and build a parallel-I/O-aware query optimization model called QDTT.
3.3 Spatial Data Management on SSD
Multidimensional spatial data structures are defined by hierarchical subdivisions of space, which give rise to tree-based search structures. Conventional index types do not efficiently handle spatial queries such as how far two points differ, or whether points fall within a spatial area of interest. Spatial indices are used by spatial databases to optimize spatial queries; common spatial index methods include the quadtree [16], the kd-Tree [17], and the R-Tree [18]. The quadtree [16] is a tree data structure in which each internal node has exactly four children. Quadtrees are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions; the regions may be square, rectangular, or of arbitrary shape. The kd-Tree [17] is a space-partitioning data structure for organizing points in a k-dimensional space. The R-Tree [18] was proposed by Antonin Guttman in 1984. The key idea of the data structure is to group nearby objects and represent them with their minimum bounding rectangle in the next higher level of the tree; the "R" in R-Tree stands for rectangle. Since all objects lie within this bounding rectangle, a query that does not intersect the bounding rectangle cannot intersect any of the contained objects. At the leaf level, each rectangle describes a single object; at higher levels, each rectangle aggregates an increasing number of objects.
The majority of existing work on spatial data management over SSDs addresses the efficient construction and evaluation of the R-Tree. Most of these works consider optimizing around the SSD's "out-of-place update", which is triggered when a node split occurs during construction of the R-Tree [4] [5]. The work of Chin-Hsien Wu et al. targets an efficient spatial data management system for mobile systems, considering the performance and energy consumption of the system; it also considers the "out-of-place update" limitation of flash memory. In a similar work, Yanfei Lv [6] presents a novel flash-aware variant of the R-Tree named the LCR-Tree (Log Compact R-Tree). The LCR-Tree deploys a compact log mechanism to improve write performance significantly with only a small increase in read overhead; this implementation can achieve up to 3X gains over the R-Tree and the existing flash-based RFTL [4]. Variations of the R-Tree such as the R*-Tree have also been studied: Tobias Emrich et al. [7] have studied real-time performance improvement of the R*-Tree on flash SSDs.
In recent years, parallel implementations of R-Trees on modern many-core processors such as GPUs have been studied intensively. Simin You et al. [12] provide a very detailed implementation of a parallel R-Tree on a GPU and a detailed analysis of spatial query processing (including spatial joins) using the parallel R-Tree. In our survey we also came across parallel access methods [10] and parallel bulk loading of the R-Tree [11]. The work of Christos Faloutsos et al. [10] provides insight into the implementation of a parallel R-Tree over a distributed environment consisting of multiple HDDs, with the spatial data partitioned across the nodes. The work also provides efficient (proximity-based) methods of partitioning the spatial data so as to maximize parallelism for large queries, such as range queries that tend to access a large portion of the spatial data, and to reduce the number of disk accesses for small queries, such as point queries that require only a small amount of data. A similar work by Daniar Achajeev et al. [11] considers a scalable MapReduce approach to parallel loading of the R-Tree using a level-by-level design. References [29, 30] discuss methods of spatial batch processing of multiple range queries; they provide insight into selecting the spatial characteristics of incoming queries for batch processing.


CHAPTER IV
PROBLEM DEFINITION AND FORMALIZATION
4.1.1 Problem Definition
For a given batch of multiple spatial queries (point and range) Q1, Q2, ..., Qm, schedule the queries to optimize Tq, the total query response time.
Before we propose a solution to this problem in the context of SSDs, we need to understand R-Tree allocation and the R-Tree access pattern over an SSD. Section 4.1.2 gives the details.
4.1.2 R-Tree Query Workflow and Allocation over SSD
Figure 13 shows sample spatial data partitioned into multiple hierarchical MBRs for spatial indexing, and Figure 14 shows the corresponding R-Tree structure of these hierarchical MBRs. Assume that range queries Q1, Q2, and Q3 are executed on this sample spatial data; the flow of these queries over the R-Tree is marked in Figure 14. The range queries degenerate into accesses to R-Tree nodes and traverse multiple paths of the tree. This multiple-path traversal creates the need to access more, and duplicate, R-Tree nodes, so the allocation of these R-Tree nodes over the SSD should be such that multiple nodes can be fetched in parallel. Figure 15 depicts the way the R-Tree is allocated over the SSD. Each node number of the R-Tree, assigned in breadth-first (BFS) order in Figure 15, refers to an index of the FTL, and in turn the FTL index maps to a page in the SSD. That is, every node of the R-Tree is the size of a page and maps to a page on the SSD. Since each query degenerates into accesses to R-Tree nodes, i.e., pages, we redefine our problem as:
For a given batch of spatial queries (point and range) Q1, Q2, ..., Qn, accessing the set of R-Tree pages Spr, schedule the page read accesses to optimize the total query response time Tq.
In the context of an SSD, our goal is to schedule these page accesses to improve performance by leveraging parallel page access. As the node number maps to a page on the SSD, we can determine, at each level of the R-Tree, the next-level pages to be fetched. To support this node numbering, we introduce a new structure for R-Tree nodes over SSD. Figure 15 depicts the new node structure: each node consists of a Node-ID and the list of Next-Node-IDs referenced by the current node.
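A minimal sketch of the page-sized node layout just described; the field names and the MBR type are our assumptions, not the thesis implementation:

#include <cstdint>
#include <vector>

// Axis-aligned minimum bounding rectangle (2-D for illustration).
struct MBR {
    float minX, minY, maxX, maxY;
};

// One R-Tree node as stored on the SSD: the node occupies exactly one flash
// page, its nodeId doubles as the FTL index (logical page number), and it
// lists the ids of the child nodes/pages it references.
struct RTreeSsdNode {
    uint32_t nodeId;                   // == logical page number of this node
    std::vector<uint32_t> nextNodeIds; // child node ids (next pages to fetch)
    std::vector<MBR> childBoxes;       // one MBR per child, used to prune
    bool isLeaf;
};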
Figure 13: Spatial Data partitioned into MBRs
With a BFS traversal we can represent the tree in array format and write it onto the SSD. Typically, the array representation of a tree physically allocates the nodes in sequential order, but the SSD allocation schemes distribute the R-Tree nodes across different resources, which enables parallel access to the nodes. However, this sequential (array) representation of the tree is still available at the logical level of the SSD, i.e., at the FTL level.
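One way to produce the BFS/array layout described above (a sketch; the in-memory node type is assumed):

#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical in-memory node built by the usual R-Tree construction.
struct MemNode {
    std::vector<MemNode*> children;
};

// Assign BFS order numbers: node i of the traversal becomes logical page i,
// which is what lets a parent know which pages the next level will need.
std::vector<MemNode*> bfsOrder(MemNode* root) {
    std::vector<MemNode*> order;          // index in this vector == Node-ID/page-ID
    std::queue<MemNode*> q;
    if (root) q.push(root);
    while (!q.empty()) {
        MemNode* n = q.front(); q.pop();
        order.push_back(n);
        for (MemNode* c : n->children) q.push(c);
    }
    return order;                          // written out page-by-page to the SSD
}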


Figure 14: R-Tree for the MBRs in Figure 13
4.1.3 Problem Formalization
Table 2 describes the notations used in formalizing the problem. A batch of queries using the R-Tree index structure degenerates into accesses to tree nodes that are spread across different subtree paths, and these nodes are mapped to pages in the SSD. A spatial data management system receives a large number of such queries, and these queries generate a large number of page read requests. Such a large number of page read requests can create resource contention in the SSD and develop hot-spots among the SSD resources. Hot-spots have a negative impact on the total query time, as many requests then need to be answered in sequence. Avoiding resource contention is therefore important, and it can be achieved by rescheduling the page read requests. At each level of the tree, we can pack together the page requests that do not request the same SSD resource, i.e., the requests in a pack can be accessed in parallel. As mentioned in the background chapter, for a given SSD configuration, A pages can be operated in parallel. Based on this, we formalize our original problem as packing the page read requests at each level of the tree.
Problem Formalization: For the given set Spr at each level of the R-Tree, partition Spr into a minimal number of packs bi of parallel-accessible pages, such that 1 ≤ |bi| ≤ A and i ≪ |Spr|.
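Restated in set notation (our phrasing, in LaTeX, of the same formalization):

% Given the read set S_pr for one R-Tree level and the parallel degree A,
% find a partition into the fewest packs of mutually non-conflicting pages.
\[
  \text{minimize } |B| \quad \text{s.t.} \quad
  B = \{b_1, \dots, b_{|B|}\},\;
  \bigcup_i b_i = S_{pr},\;
  b_i \cap b_j = \emptyset \ (i \neq j),\;
  1 \le |b_i| \le A,
\]
\[
  \text{and no two requests in the same } b_i \text{ map to the same conflicting SSD resource (plane or die).}
\]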
Table 2: symbols and Notations
Notations Description
A Number of parallel pages accessible
nPL Total Number of Planes
nCH Total Number of Channels
nPKG Total Number of Packages
nDlE Total Number of Dies
Pc Set of connected pages
Pdi Set of pages in a Die
Pconnected Set of all connected components, i.e. Pc ⊆ Pconnected
Spr Set of page read requests
pr Page read request (Page-ID or Node-ID)


bi Pack/set of parallel page read requests pr, with i ≪ |Spr|
B Set of all packs bi


CHAPTER V
PROPOSED SOLUTION
4.1 Proposed Solution
In this section, we provide details about our proposed solution, the Page Read Request Packing (PRRP) algorithm, along with the data structures used in implementing it.
4.1.1 Overview: Page Conflict Graph
The problem defined in the previous chapter can be formulated as a graph problem. In a set Spr of page read requests, two requests are connected if they must be read from the same SSD resource, and disconnected otherwise. The network model for partitioning Spr is given by a graph G = (V, E), where V is the set of page requests, V = Spr, and an edge e ∈ E is drawn between two read requests when they need resources that cannot be used simultaneously, i.e., the two read requests are connected. We call this graph the page conflict graph, or simply the conflict graph. The conflict graph of page read requests is computed at each level of the R-Tree. The general heuristic for solving a problem over a graph data structure is to perform an exhaustive search of the entire graph, which can add overhead for very large graphs. Considering SSD resource allocation characteristics such as Remark-1, we can reduce or eliminate the graph traversal overhead in our problem. In the coming sections, we discuss the methods for reducing this overhead.
4.1.2 Page Read Request Packing (PRRP) Algorithm
The graph approach gives the sets of pages (vertices) that are connected, i.e., that cannot be batched together because they request the same resource and hence cannot leverage the parallelism of the SSD. Using graph search heuristics we compute all connected components Pc, collected in Pconnected; each Pc is a set of sequential page read requests. On this Pconnected set we apply our proposed Algorithm-1, Page Read Request Packing (PRRP), to determine the minimal number of packs of page read requests. The elements (page read requests) within each pack (or batch) are disjoint, i.e., they do not request the same resource and can be operated on in parallel in the SSD.
Algorithm-1. Page Read Request Packing (PRRP)
Input: Pconnected
Output: B, set of packs of disjoint page read requests
• Step-1: Sort Pconnected in descending order of cardinality of Pc
• Step-2: Pop one element from each Pc of Pconnected (in decreasing order of cardinality) and pack into bi, where
|bi| = A if |Pconnected| ≥ A
|bi| < A if |Pconnected| < A
• Step-3: Repeat Step-1 & 2 until all the page read requests are packed.
Step-1 of the algorithm sorts Pconnected based on the cardinality of each Pc (set of connected vertices). After sorting Pconnected, the algorithm iterates through each connected set, pops one element from each Pc, and packs it into batch bi. It pops elements from the components until the size of the batch is A or all elements are exhausted. Steps 1 and 2 are repeated until all elements of Pconnected are exhausted, i.e., all page read requests are packed into a minimal number of batches. Figure 16 shows a working example of the PRRP algorithm.
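The following C++ sketch is our rendering of the three steps above; the container choices and the use of back()/pop_back() as the "pop" are our assumptions, not the thesis code.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Page Read Request Packing, following Steps 1-3 above.
// Input: the connected components (one vector of page-IDs per plane/die),
// and A, the number of pages the SSD can serve in parallel.
std::vector<std::vector<uint32_t>> prrp(std::vector<std::vector<uint32_t>> comps,
                                        std::size_t A) {
    std::vector<std::vector<uint32_t>> batches;
    auto notEmpty = [](const std::vector<uint32_t>& c) { return !c.empty(); };
    while (std::any_of(comps.begin(), comps.end(), notEmpty)) {
        // Step-1: largest components first, so the longest serial chains shrink earliest.
        std::sort(comps.begin(), comps.end(),
                  [](const auto& a, const auto& b) { return a.size() > b.size(); });
        // Step-2: take at most one request per component until the batch holds A pages.
        std::vector<uint32_t> batch;
        for (auto& c : comps) {
            if (batch.size() == A) break;
            if (!c.empty()) { batch.push_back(c.back()); c.pop_back(); }
        }
        batches.push_back(std::move(batch));   // Step-3: repeat until everything is packed
    }
    return batches;
}

With the example components of Figure 16 and A = 4, this packing yields five batches, matching the figure.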
Figure 16: PRRP Example. Final PRRP batches: {17,6,4,13} {25,22,12,21} {49,30,15,28} {57,14,39,29} {33,26,27,40} (5 batches). Without the PRRP algorithm: {4,13,15,6} {12,21,39,22} {28,29,30,26} {14,27,40,17} {25} {49} {57} {33} (8 batches).
4.1.3 Analysis and Proof of Optimality of PRRP Algorithm
The running time of the PRRP algorithm depends on the configuration of the underlying SSD. Since A is very small compared to |Spr|, it has a negligible effect on the runtime; selecting A elements from Spr takes constant time. |Pconnected| is equal to the number of planes in the SSD and is also small compared to the number of read requests |Spr|. Since the elements of Spr are partitioned into |Pconnected| sets, sorting Pconnected is also effectively a constant-time operation, and the running time of PRRP would be O(|Spr|). But for a large number of planes, as in an SSD configuration like the one mentioned in [26], sorting the |Pconnected| elements would have an impact on the overall runtime. Let nPL be the number of planes; then sorting Pconnected takes O(nPL log nPL) per round. Therefore, the running time of PRRP is bounded by (|Spr|/nPL) * nPL log nPL, i.e., O(|Spr| log nPL).
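One way to make the counting explicit (our reconstruction of the argument, in LaTeX, assuming roughly balanced components):

% Each round pops at most one request from each of the |P_connected| ~ n_PL
% components, so with balanced components the number of rounds is about
% |S_pr| / n_PL; each round re-sorts the components.
\[
  T_{\mathrm{PRRP}} \;\lesssim\;
  \underbrace{\left\lceil \frac{|S_{pr}|}{n_{PL}} \right\rceil}_{\text{rounds}}
  \cdot
  \underbrace{O\!\left(n_{PL}\log n_{PL}\right)}_{\text{sort per round}}
  \;=\; O\!\left(|S_{pr}|\log n_{PL}\right).
\]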
Lemma: For a given Pconnected, PRRP generates the minimum number of packs, that is, |B| is minimal.
Proof: In the best-case scenario |B| = |Spr|/A, such that |bi| = A. This indicates that PRRP produces batches such that each pack bi in B has the maximum size A. Since all packs are of maximal size, the solution produced is optimal.
When all |Spr| requests are connected, i.e., |Pconnected| = 1, they have to be processed serially: all read requests go to a single SSD resource. In that situation the best solution PRRP can produce has packs of size 1, i.e., |bi| = 1 and |B| = |Spr|. This is also an optimal solution.
Let us assume that B is not optimal. In that case there exist at least two packs A and C such that ai ∈ A, ci ∈ C, and 1 ≤ i < A. There are two cases in which A and C would be generated.
1. ai and ci belong to the same connected component Pk, 1 ≤ k ≤ |Pconnected|. That means they request the same resource and must be serialized, so they cannot share a pack; the two packs are unavoidable.
2. ai and ci belong to different connected components Pk and Pm, where k ≠ m, and ci was not batched with the requests of A. This indicates that PRRP did not combine requests of different connected components together in the same pack. That would happen only when |Pk| < |Pm|, i.e., more requests are present in Pm than in Pk. This would mean that the set of connected components is not sorted, which is not the case, as PRRP sorted all components beforehand. Therefore it is clear that |Pm| = |Pk|; that is, both components are of the same size, and that is why PRRP generated two packs. Hence we prove that the solution produced by PRRP is optimal.
The general scheduling problem is NP-complete, and it is difficult to obtain an optimal solution. However, heuristics like the Wave Front Method (WFM), Critical Path Merge (CPM, or longest path), Heavy Edge Merge (HEM), and Dominant Sequence Clustering (DSC) [32] can obtain suboptimal solutions (schedules) in polynomial time. Parameters like task priority, execution time, and precedence dependencies make scheduling a complex task. Our read request problem looks similar to a scheduling problem; however, the graph structure and the parameters that build this graph simplify our problem. The conflict graph derived here is always disconnected, because each read request is discrete and cannot access more than one resource at a time. Since each request can access only one resource, the set of read requests accessing the same resource is connected and forms a complete graph, i.e., each Pc is a complete graph and Pconnected is a set of complete graphs. Hence, the conflict graph is a disconnected graph Gc = (Vc, Ec) made of the set of complete graphs Pconnected. It is also important to note that the parameters deriving the conflict graph are finite: the number of planes or dies in an SSD configuration decides |Pconnected|, i.e., the number of complete graphs, and the number of vertices in each complete graph equals the number of pages in a plane or die.
In our read request packing problem, all tasks have the same execution time and priority, and there is no communication and no precedence dependency between them. Since a set of read requests requiring the same resource will be serialized, we model this resource sharing as a dependency. For example, for a given set of read requests, the resulting directed acyclic graph (DAG) is shown in the figure below; our goal is to pack the sets of tasks that can be accessed in parallel. The right side of the figure shows the required form of the DAG.
Spr = {12, 13, 15, 21, 22, 26, 6, 14, 27, 28, 29, 30, 39, 57, 33, 40, 4, 17, 25, 49}, A = 4
Each path of the graph represents a connected component Pc. In fact, these connected components form complete graphs, and in a complete graph all the vertices have the same degree. In a complete graph, visiting any node first gives us the maximum length of each path in the DAG, and this maximum length is nothing but the cardinality of Pc. In PRRP, we use this cardinality to pick the read requests to pack. Since each path of the DAG is a complete graph, a single visit to each path gives us the length of the path and avoids the overhead of a full path traversal. Also, there is no dependency between tasks on different paths of the DAG. This further simplifies our scheduling problem.
4.1.4 Case-1: Conflict Graph Derivation PAGE-Level
The set of pages in a plane is fixed for any static allocation scheme, for a given SSD configuration (Remark-1). In other words, a set of page read requests is connected if the requests refer to the same plane. The Node-ID in our R-Tree allocation is the Page-ID, i.e., the index into the FTL. Using this Node-ID we determine connected read requests, i.e., we can determine whether the requested Node-IDs refer to pages in the same plane (connected) or not. Equation (1) determines whether two pages are connected. We repeat this test for all incoming read requests.
Ppl = set of connected components, i.e., pages in the same plane:
Ppl = { pri, prj ∈ Spr | (pri − prj) mod nPL = 0, i ≠ j }   (1)
The set of pages in a plane forms a connected component; hence, for an SSD configuration, the number of connected component sets equals the number of planes. In the following example, each connected set corresponds to a plane of the SSD. For example, let nPL = 8, nCH = 2, nPKG = 2, nDIE = 4, A = 4 and Spr = {12,13,15,21,22,26,6,14,27,28,29,30,39,57,33,40,4,17,25,49};
then Pconnected = {4,12,28} {13,21,29} {15,39} {6,22,30,14} {26} {27} {40} {17,25,49,57,33}.
This approach is not efficient, as it searches the entire graph to find the connected components. Using the SSD configuration characteristics, namely that the number of connected components equals the number of planes nPL and that the distance between two Page-IDs in a plane is a multiple of nPL (Remark-1), we can eliminate the exhaustive search. Instead of checking all elements of Spr iteratively to find the connected components, we use a hash function, i.e., the modulo method. The following Algorithm-2, PlaneConnected, finds all the connected components using the modulo method.
Algorithm-2. PlaneConnected
Input: Spr, set of page read requests
Output: Pconnected, set of all connected component sets, i.e. Pc ⊆ Pconnected
// Map of lists of connected components
1. Map ConnectedSets[nPL]
2. foreach pri in Spr do:
3.   pIndex = pri mod nPL
4.   ConnectedSets[pIndex].push_back(pri)
5. Pconnected = ConnectedSets
ConnectedSets[nPL] at line 1 implements a map (hash table) that stores the connected components for each plane. Line 3 computes the hash table key by taking the page read request modulo the number of planes nPL. For example, for pri = 28 we get 28 % 8 = 4, which means the read request for page-ID 28 corresponds to plane 4. Figure 17 shows an example of plane-level conflict graph derivation.
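A direct C++ rendering of Algorithm-2 (a sketch; the container types are our choice):

#include <cstdint>
#include <unordered_map>
#include <vector>

// Group page read requests by plane: pages whose IDs are congruent modulo the
// number of planes land in the same plane (Remark-1) and therefore conflict.
std::unordered_map<uint32_t, std::vector<uint32_t>>
planeConnected(const std::vector<uint32_t>& spr, uint32_t nPL) {
    std::unordered_map<uint32_t, std::vector<uint32_t>> connectedSets;
    for (uint32_t pr : spr)
        connectedSets[pr % nPL].push_back(pr);   // key = plane index
    return connectedSets;
}
// Example from the text: nPL = 8, pr = 28  ->  28 % 8 = 4, i.e. plane 4.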
Figure 17: Case 1 - Conflict Graph Derivation (plane-level grouping of Spr across planes PL-0 to PL-7)
4.1.5 Case-2: Page Conflict Graph Derivation DIE-Level
The plane-level scheduling method does not consider the underlying allocation scheme. Naively applying the plane-level schedule already increases performance, as we eliminate the sequential accesses within a plane. But two planes in a die cannot operate simultaneously; that is, if two requests refer to the same die, they have to be serialized. Consequently, the packs produced by the PRRP algorithm may contain requests that must be answered sequentially within a die, which impacts the overall performance of the system. So we need a method to avoid this serial access within a die.
We know that the set of pages in a die is fixed, but it depends on the allocation scheme for a given SSD configuration (Remark-2). A group of allocation schemes exhibits a similar page allocation pattern within a die (Remark-2.1). Based on these observations we extend Case-1 to find the connected components at the die level: page requests are connected if they are in the same die, and disconnected otherwise. As the pattern changes across the groups of allocation schemes, we devised a different method for each group to find the connected components. In our method, for each allocation group, we first find the starting page, or HeadPage, of each die. The HeadPage of a die is the smallest page-ID in that die. For example, for the WDPC allocation scheme in Figure 18, the HeadPages for each die are marked in red, and the HeadPage for Die-0 of Channel-1/Package-1 is 1. With the HeadPage we can relate a plane to the other plane in the same die: first get the plane-level connected components by applying Algorithm-2, and then apply Algorithm-3 to find the related planes in each die.
Algorithm-3, DieConnected-V1, computes all connected components in a die for a given allocation scheme. SetHeadPage at line 1 stores all possible HeadPages for an allocation scheme. PlConnect and ConnectedSets at lines 2 and 3 store the connected components for the plane and die levels, respectively. Line 4 first computes all the connected components at the plane level. At lines 5, 9, 16, 24, 31, and 37 the algorithm computes the connected components for allocation groups (Table 1) 1-6, respectively.
Figure 18: HeadPage for WDPC allocation scheme
Except for group 1, the other groups require the plane-level connected component set, which is computed at line 4 by applying Algorithm-2, PlaneConnected.


Algorithm-3. DieConnected-V1:
Input: Spr, set of page read requests
Output: Pconnected
1. List SetHeadPage[nDIE]
2. Map PlConnect[nPL]
3. Map ConnectedSets[nDIE]
4. PlConnect = PlaneConnected(Spr)
5. CASE: CWDP, CDWP, WCDP, WDCP, DCWP, DWCP
6.   foreach pri in Spr do:
7.     pIndex = pri mod nDIE
8.     ConnectedSets[pIndex].push_back(pri)
9. CASE: CPWD, CPDW
10.   for j = 1 step 1 to nCH do:
11.     for i = j step nPKG to nPL do:
12.       SetHeadPage.push_back(i)
14.   foreach I in SetHeadPage do:
15.     ConnectedSets[I] = Union(PlConnect[I], PlConnect[I + nCH])
16. CASE: WPCD, WPDC, DPCW, DPWC
17.   for j = 1 step 1 to (nDIE / nPKG) do:
18.     for i = j step (nPL / nCH) to nPL do:
19.       SetHeadPage.push_back(i)
20.       SetHeadPage.push_back(i + (nDIE / nCH))
21.   foreach I in SetHeadPage do:
22.     pl = (nPL / nDIE)
23.     ConnectedSets[I] = Union(PlConnect[I], PlConnect[I + pl])
24. CASE: WDPC, DWPC
25.   for j = 1 step 1 to (nDIE / nCH) do:
26.     for i = j step (nPL / nCH) to nPL do:
27.       SetHeadPage.push_back(i)
28.   foreach I in SetHeadPage do:
29.     pl = (nPL / nPKG)
30.     ConnectedSets[I] = Union(PlConnect[I], PlConnect[I + pl])
31. CASE: CDPW, CWPD, WCPD, DCPW
32.   for i = 1 step 1 to nPKG do:
33.     SetHeadPage.push_back(i)
34.     SetHeadPage.push_back(i + nDIE)
35.   foreach I in SetHeadPage do:
36.     ConnectedSets[I] = Union(PlConnect[I], PlConnect[I + nPKG])
37. CASE: PCDW, PCWD, PDCW, PDWC, PWCD, PWDC
38.   for i = 1 step 1 to nPL do:
39.     SetHeadPage.push_back(i)
40.   foreach I in SetHeadPage do:
41.     ConnectedSets[I] = Union(PlConnect[I], PlConnect[I + 1])
42. Pconnected = ConnectedSets
Die-level connected components for group 1 (lines 5 to 9) are computed using the modulo method, just as at the plane level, because the difference between two page-IDs in a die is nDIE. Groups 2 to 6 compute the die-level connected components by joining the respective plane-level connected component sets. Each of these groups first determines the HeadPage for each die: lines 10-12, 18-20, 25-27, 32-34, and 38-39 determine the HeadPages for groups 2-6, respectively. For example, for the WDPC allocation scheme (lines 25 to 27) shown in Figure 18, the HeadPages for the dies are 1, 2, 3, 4, 9, 10, 11, and 12. In the first iteration, the algorithm computes the HeadPage of die-0 of package-0 of each channel; in the second iteration, the HeadPage of die-1 of package-1 of each channel. The SetHeadPage list holds the plane numbers that are used to join the two corresponding planes in a die. Lines 15, 23, 30, 36, and 41 join the plane-level connected components for groups 2-6, respectively. For the WDPC group of allocation schemes, the difference between the planes within a die is nPL/nPKG; using this criterion we join the plane-level connected components.
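A condensed C++ sketch of this die-level grouping: group 1 is again a plain modulo, and for the other groups two plane-level sets are merged per die at a scheme-dependent offset, following Algorithm-3 (the helper names and the offset parameter are ours):

#include <cstdint>
#include <unordered_map>
#include <vector>

using Groups = std::unordered_map<uint32_t, std::vector<uint32_t>>;

// Group 1 (plane-last schemes such as CWDP): die index is pageID mod nDIE.
Groups dieConnectedGroup1(const std::vector<uint32_t>& spr, uint32_t nDIE) {
    Groups sets;
    for (uint32_t pr : spr) sets[pr % nDIE].push_back(pr);
    return sets;
}

// Groups 2-6: each die holds two planes, so the die set is the union of the
// HeadPage plane and its partner plane at a scheme-dependent offset
// (e.g. nCH for CPWD/CPDW, nPL/nPKG for WDPC/DWPC, per Algorithm-3).
Groups mergePlanesIntoDies(const Groups& planeSets,
                           const std::vector<uint32_t>& headPlanes,
                           uint32_t partnerOffset) {
    Groups dies;
    for (uint32_t h : headPlanes) {
        auto& d = dies[h];
        if (auto it = planeSets.find(h); it != planeSets.end())
            d.insert(d.end(), it->second.begin(), it->second.end());
        if (auto it = planeSets.find(h + partnerOffset); it != planeSets.end())
            d.insert(d.end(), it->second.begin(), it->second.end());
    }
    return dies;
}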


CHAPTER VI
EXPERIMENTS AND RESULTS
5.1 Setup and Configuration
Our research goal is to leverage the internal parallelism provided by the SSD. This internal parallelism can only be exploited if we know how the LBAs and physical pages are mapped, i.e., the FTL mapping or the allocation schemes. Since commercial SSD manufacturers hide these details, we cannot explore our research question on real SSD hardware; the alternative is to use an SSD simulator. Three kinds of open-source SSD simulators are available [27]:
1. Cycle-accurate simulators:
Examples: NANDFlashSim and CPS-SIM.
2. Performance-model simulators:
Examples: DiskSim 3.0 + SSD extension, FlashSim, EagleTree, and SSDSim.
3. Behavior-model simulators:
Examples: OpenNFM.
The simulator best suited to our research is SSDSim [24], an open-source, hardware-validated simulator. It supports different FTL schemes, allocation schemes, buffer management algorithms, and scheduling algorithms. The current implementation supports only 6 allocation schemes with channel priority; we added the remaining 18 allocation schemes. SSDSim takes HDD trace files as input for simulating performance over an SSD.
To support and simulate the R-Tree over a SSD, we implemented a module "RSSDSimulator". RSSDSimulator simulates the R-tree read/search pattern in SSD memory layout and generates the traces required for SSDSim. It also implements the 24-different ways of allocating R-Tree. A
35


scheduler is implemented as part of the RSSDSimulator. A trace generator, part of RSSDSimulator,
writes the flash memory R-tree traversal pattern as trace file. Following Figure 19 depicts different components for RSSDSimulator.
RSSDSimulator
â– 
R-Tree Allocator
Scheduler
SSDSim TraceGen
f
SSD-Simulator
Figure 19: RSSDSimulator Components
SSD simulator Parameters listed in Table 3 were used in our experiments. Simulations were carried out on a host system with 32-core Intel Xeon, 128 GB RAM and 1TB Disk Space.
Table 3 Experimental SSD Configuration Parameters
Parameter Value
Number of channels 8
Number of packages/chips per channel 2
Number of dies per chip/package 2
Number of planes per die 2
Number of blocks per plane 2048
Number of pages per block 64
Page size 4KB
Page read time 55ps
Request queue depth 32
FTL scheme Page-Mapping (24 Allocation Schemes)
36


5.2 Workloads and performance metrics
We used both synthetic and real world workloads in our experiments. Table 4 gives details of the synthetic and real world workloads. R-Tree for each workload is generated offline and loaded into simulator based on the allocation scheme.
Table 4: Experimental Workloads
Name #Data-Rectangles Uniform & Normal Distributions
Synthetic-1 100,000
Synthetic-2 500,000
Synthetic-3 1,000,000
Synthetic-4 10,000,000
AIS Brest, France 1,048,576
In our experiments we measured the total response time and request distribution across the SSD. Compared the total response time of two packing methods with the naive or no-packing method, we used as baseline for our analysis. We measure and analyze the performance of our scheduling algorithm across all 24-allocation schemes. Table-5 gives detail about the workload we used in our experiments.
Table 5: Workload Details
Workload No. of Queries Query Type Data Distribution Query Distribution Description
1 400 Range Uniform & Normal Random Variable query size
2 200 Range Uniform & Normal Uniform, Normal & Random Fix query size
3 25-400 Range Uniform Random Fix query size
4 5000 Point Uniform Random NA
37


5.3 Results and Analysis
In this section we discuss, analyze and reason the performance of the proposed packing algorithms.
5.3.2 Performance of Naive vs Plane-Level vs Die-Level
Figure 20, shows the performance comparison between the naive scheduling, Plane-Level and Die-Level scheduling for range query over uniformly distributed data. About 400 range queries with variable range sizes were executed. On average, we achieved 67% and 78% of performance gain for Plane-Level and Die-Level scheduling, respectively, compared to Naive packing. With uniform distribution CDWP (82%), PDCW (92%) for Plane-Level and Die-Level respectively show more perform improvement. CDWP and PDCW allocation schemes execute more interleave operations for naive scheduling. Plane-level and die-level scheduling reduces these interleave operations, which improves the overall performance for that allocation scheme.
Similarly, Figure 21, shows the performance for same number of range queries over normal distribution data. Here we achieved 76% and 81% of performance gain for Plane-Level and Die-Level packing, respectively, compared to Naive packing. PCWD (87%) for plane-level and PWCD (89%) for die-level show more performance improvement. All 6-allocation schemes with Plane-priority perform better. Data with uniform distribution show less improvement compared to the data with normal distribution. For uniform distribution of data, less R-Tree MBRs are overlapped compared to normal distribution. Few MBR overlaps means less number page access and probability of the page access lying same die or package is minimized. With normal distribution of data, more MBRs overlap. Due to more MBR overlap queries activate more page read access. More page read access trigger the more interleave operation. But our scheduling approach reduces these interleave operations. This reduction of interleave operation is more in data with
38


normal distribution. Also for plane priority allocation schemes the number of interleave
operations are more regardless of the data distribution. Plane priority, allocates the R-Tree pages into the closer resources i.e. In the same die and package. This triggers more interleave operations.
Range Query | Uniform Distribution | 1 Million Data Points

^ o' Allocation Schemes
I No Scheduling â–  Plane Level â–  Die Level
Figure 20: Range query performance with scheduling for uniformly distributed data
Figure 22 and 23, give us the reasoning details about why there is performance gain. The plots show comparison of the number of interleaved operations executed in each allocation scheme for given data distribution. The statistics in the tables below the plots show that reduction in the number of interleaved operations enhances performance. Why this reduction in the interleaved operations gives performance boost? Our scheduling algorithms reorders the page read requests at application level to distribute them across SSD, but the request queue NCQ within SSD reorders their priority i.e. it orders requests into first channel and second way priority. For example, eight requests in a pack, are going into two channels, four in each channel. NCQ schedules them as Channel-Way priority, i.e. first two requests will distributed across channels. Controller picks the channel first requests and assign to a channel round-robin fashion. By the time it picks the second requests for a particular channel (in round-robin) the command line will be free, hence no
39


waiting for controller. But with Way first priority request need to wait until the interleave operation is complete. This improves the performance by reduction of the number of interleave operations.
Range Query | Normal Distribution |1 Million Point
16
Allocation Scheme
â–  No Scheduling â–  Plane Level â–  Die Level
Figure 21: Range query performance with scheduling for normal distribution of data
Uniform Distribution | PDCW | Read Request Distribution
Across SSD
20 Q. 18
O 16 g 14 â„¢ 12
CH-1 CH-2 CH-3 CH-4 CH-5 CH-6 CH-7 CH-8
â–  No Scheduling 0 9 9 3 19 11 9 0
â–  Plane Level 0 2 0 0 4 0 0 0
â–  Die Level 0 0 1 0 3 0 0 0
Channel
â–  No Scheduling â–  Plane Level â–  Die Level
Figure 22: No. of interleave operation for uniform distribution
40


Normal Distribution | PDCW | Raed Request Distribution
Across SSD
CO
Q_
O
CD
>
ro
Q)
&_
CD
+-»
C
30
25
20
15
10
5
i
1.1. â– -1.1
Channel
l No Scheduling â–  Plane Level â–  Die Level
No C CH-1 CH-2 CH-3 CH-4 CH-5 CH-6 CH-7 CH-8
â–  No Scheduling 28 7 11 9 0 9 10 5
â–  Plane Level 4 0 2 3 4 0 0 4
â–  Die Level 1 3 1 1 2 3 1 0
Figure 23: No. of interleave operation for normal distribution
Experiments were also performed with point queries on both uniform and normal distribution of data. We observed similar performance behavior as before in this set of experiments also. That is uniform distribution performed better compared to the normal distribution of data in case of naive packing. But with plane and die level packing there was no much impact of data distribution on the performance. With this we can conclude that packing the read requests, which leverage the parallelism feature of SSD, will improve the performance regardless the workload characteristics. Figure 24 & 25 shows the point query performance.
41


to u (D on 50
_c 40
(D E 30
i- (D 20
to c o 10
Q. to (D cc 0
Point Query (5K): Uniform Distribution | 1 Million Data Points
1.1

Ll h LI I LI L â–  II LI h ,.l LI 1. ,.l L Ii
Allocation Scheme
r l No Scheduling â–  Plane Level â–  Die Level
Figure 24: Point query performance, uniform data distribution
Point Query (5K) | Normal Distribution | 1 Million Data Points
30
to
a! 25 on
.E 20 w 15
•p 10
Si 5 0
c
o
Q.
to
(D
cc
L
I.
ii

h ii 1 L ii

1 ii t 1 I. h 1, ii
Ii

& Allocation Scheme
s> <5 l No Scheduling â–  Plane Level â–  Die Level
Figure 25: Point query performance, normal data distribution
5.3.3 Number of Queries. Query Distribution and Request Distribution
In our experiments, we tuned queries to create resource contention (or hot-spots) and verified our hypothesis that our scheduling algorithms distributes the requests across channel, dies and planes. Figure 26, shows the distribution of requests across SSD, Channel-1 is where the hot-spot occurs. For Naive approach the average number of requests served by channel-1 is about 7. But with scheduling approaches we were able to reduce this to 3.5, that is about 50% reduction.
42


With request distribution across SSD, we were able to diminish the size of the hot-spot this gave
us the opportunity to improve the overall performance of the system.
7
>
£ 6
Request Distribution across SSD (Hot-Spot size reduction)
4 5 6
CHANNEL NUMBER
Die Level
Plane Level
•No Scheduling
Figure 26: Read distribution across SSD
The number of input queries impact the performance of the system. As the number of queries increase, number page access request increase. Figure 27 plots response time changes when the number of queries increases. Naive way of query answering takes more time when the number of queries increase. But the time to get query results two packing methods remain approximately same. This clearly indicates that our scheduling methods distribute the requests across resources and reduce the hot-spots occurrences.
43


Figure 27: Impact of number of queries, 25 to 400 The underlying distribution of the queries have some impact on the performance. In order to
verify this assumption we executed about 200 range queries of same size with uniform, normal and random distribution. The underlying R-Tree is constructed with uniform data distribution.
Query Distribution Uniform vs Normal vs Random
Naive Plane Die
Scheduling
â–  Uniform â–  Normal â–  Random
Figure 28: Performance with query distribution Uniform vs Normal vs Random
As expected, the normal distribution performs better compared naive case. One observation that can be made about the performance gain at Die-level packing that the performance of all the
44


query distributions is approximately same. This shows that our packing method distributes the
requests across SSD such that for any given query distribution the response is minimized to its lowest value.
Query Distribution Uniform vs Normal vs Random
25
cn
c
â–  Uniform â–  Normal â–  Random
Figure 29: No. of Interleave Ops with query distribution Uniform vs Normal vs Random 5.3.4 Why Plane-Level Scheduling?
The Case-1 i.e. Plane-Level packing, tries to separate sequential access of pages within a plane and tries to distribute the read across other planes of the SSD. Number of planes in an SSD configuration are constant, we use modulo method to segregate the request going in to the same plane. Since other resources like Channel, Package and Die are constant, the question here is why can't we apply the same modulo rule other level of resources as well? Figure 30 & 31 answers this question.
In Figure 32 and 33, the performance of plane-level modulo method is consistent and better compared to naive packing all 24-allocation schemes. On the other hand, modulo method does not perform consistently for other types of resources.
45


Plane-Scheduling vs Other Level
40
Q.
â–  Naive â–  Channel â–  Package I Die â–  Plane Figure 30: Plane-Scheduling vs Other Scheduling
Reason being, modulo for other resources does not guarantee the elimination of hot-spots. For example, if read request consist of NodelDs {1, 33, 17, 65} for CWDP allocation scheme; and if modulo method is applied at channel level then, all request will go in to one connected component set and hence creates a hot-spot. If plane level packing is applied, {1, 65}, {33} {17} would be the connected components sets. From this we can eliminate hot-spot by reading the requests 1, 33, and 17 in parallel.
Plane-Scheduling vs Others
40
Allocation Scheme
â–  Naive â–  Channel â–  Package aDle â–  Plane
Figure 31: Plane-Scheduling vs Other Scheduling The goal of the packing the read requests is to improve overall performance by distribution of the requests across the SSD to avoid the number of interleaving operations. More number of
46


interleave operations means hot-spots and our algorithm was able to reduce these hot-spots
significantly. With our experiments, we demonstrate that the proposed PRRP algorithm does improve the performance significantly. The conflict graph derivation methods for Plane-Level and Die-Level does support the PRRP method in distributing the requests across SSD and hence reduce the hot-spots occurrence in SSD. In general the distribution of data affect the performance of the system. But, with our approach, we were able to reduce that dependency of performance on the data distribution.
47


CHAPTER VII
CONCLUSION AND FUTURE WORK
In this work we have studied the effect on performance of spatial queries, using R-Tree, by batching over Solid State Drives (SSD). We proposed a new structure of R-Tree nodes for SSDs and based on that developed a general Plane-Level and Die-Level allocation specific packing methods. Both methods avoid the resource contention occurred due to long data movement time caused by die interleaving. By distribution of requests across SSD this resource contention is addressed. As Plane-Level packing does not require knowledge of underlying allocation scheme, we believe this method can be applied to any available SSD. Our packing algorithm has opportunity to be part of database systems, operating systems or SSD system itself. Also with our approach, R-Tree node representation, we do not need to think of the spatial characteristics of incoming requests in batch.
As part of future work, we plan to bring batching and scheduling more closely towards the database systems that is supporting query arrival and priority order in PRRP algorithm. Further evaluate the batching mechanism at package and channel level to bring more granularity in resource contention avoidance. As the Plane-Level scheduling does not require the understanding of allocation, we plan to apply this on a real commercial SSD. Additionally, we plan to build an adaptive packing and scheduling design i.e. based on the workload characteristics select the allocation scheme and resource priority for batching. Further, implement Spatial Joins, multiple index structure (two or more table) allocation.
48


REFERENCES
[1] F. Chen, D. A. Koufaty, and X. Zhang, "Understanding intrinsic characteristics and system implications of flash memory based solid state drives," SIGMETRICS Perform. Eval. Rev.,vol. 37, pp. 181-192, 2009.
[2] F. Chen, R. Lee, and X. Zhang, "Essential roles of exploiting internal parallelism of flash memory based solid state drives in high-speed data processing," presented at the Proceedings of the 2011 IEEE 17th International Symposium on High Performance Computer Architecture, 2011.
[3] S. Zertal, "Exploiting the Fine Grain SSD Internal Parallelism for OLTP and Scientific Workloads," (HPCC, CSS, ICESS)
[4] C.-H. Wu, L.-P. Chang, and T.-W. Kuo, "An efficient R-tree implementation over flash memory storage systems," presented at the Proceedings of the 11th ACM international symposium on Advances in geographic information systems, New Orleans, Louisiana, USA, 2003.
[5] I. Koltsidas and S. Viglas, "Spatial Data Management over Flash Memory," in Advances in Spatial and Temporal Databases, vol. 6849, D. Pfoser, Y. Tao, K. Mouratidis, M. Nascimento, M. Mokbel, S. Shekhar, and Y. Huang, Eds., ed: Springer Berlin Heidelberg, 2011, pp. 449-453.
[6] Y. Lv, J. Li, B. Cui, and X. Chen, "Log-Compact R-Tree: An Efficient Spatial Index for SSD," in Database Systems for Adanced Applications, vol. 6637, J. Xu, G. Yu, S. Zhou, and R. Unland, Eds., ed: Springer Berlin Heidelberg, 2011, pp. 202-213.
[7] T. Emrich, F. Graf, H.-P. Kriegel, M. Schubert, and M. Thoma, "On the impact of flash SSDs on spatial indexing," presented at the Proceedings of the Sixth International Workshop on Data Management on New Hardware, Indianapolis, Indiana, 2010.
49


[8] D. Tsirogiannis, S. Harizopoulos, M. A. Shah, J.L. Wiener, and G. Graefe, "Query processing
techniques for solid state drives," presented at the Proceedings of the 2009 ACM SIGMOD International Conference on Management of data,Providence, Rhode Island, USA, 2009.
[9] P. Ghodsnia, I. T. Bowman, and A. Nica, "Parallel I/O aware query optimization," presented at the Proceedings of the 2014 ACM SIGMOD international conference on Management of data, Snowbird, Utah, USA, 2014.
[10] I. Kamel and C. Faloutsos, "Parallel R-trees," SIGMOD Rec., vol. 21, pp. 195-204, 1992. [11] D. Achakeev, M. Seidemann, M. Schmidt, and B.
[11] Seeger, "Sort-based parallel loading of R-trees," presented at the Proceedings of the 1st ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, Redondo Beach, California, 2012.
[12] S. You, J. Zhang, and L. Gruenwald, "Parallel spatial query processing on GPUs using R-trees," presented at the Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, Orlando, Florida, 2013.
[13] J. I. Munro, T. Papadakis, and R. Sedgewick, "Deterministic skip lists," presented at the Proceedings of the third annual ACM-SIAM symposium on Discrete algorithms, Orlando, Florida, USA, 1992.
[14] W. Pugh, "Skip lists: a probabilistic alternative to balanced trees," Commun. ACM, vol. 33, pp. 668-676, 1990.
[15] W. G. Aref and FI. Samet, "Efficient Window Block Retrieval in Quadtree-Based Spatial Databases," Geoinformatica, vol. 1, pp. 59-91, 1997.
50


[16] J. L. Bentley, "Multidimensional binary search trees used for associative searching," Commun.
ACM, vol. 18, pp. 509-517, 1975.
[17] A. Guttman, "R-trees: a dynamic index structure for spatial searching," presented at the Proceedings of the 1984 ACM SIGMOD international conference on Management of data, Boston, Massachusetts, 1984.
[18] D. Eppstein, M. T. Goodrich, and J. Z. Sun, "The skip quad tree: a simple dynamic data structure for multidimensional data," presented at the Proceedings of the twenty-first annual symposium on Computational geometry, Pisa, Italy, 2005.
[19] L. Arge, D. Eppstein, and M. T. Goodrich, "Skipwebs: efficient distributed data structures for multi-dimensional data sets," presented at the Proceedings of the twenty-fourth annual ACM symposium on Principles of distributed computing, Las Vegas, NV, USA, 2005.
[20] The Truth About SSDs That HDD Vendors Do Not Want You To Know:June 30, 2015, http://itblog.sandisk.com/truth-ssds-hdd-vendors-do-not-want-vou-to-know/
[21] Ashok Anand, Aaron Gember-Jacobson, Collin Engstrom, and Aditya Akella. 2014. Design patterns for tunable and efficient SSD-based indexes. (ANCS '14).
[22] Nitin Agrawal, Vijayan Prabhakaran, Ted Wobber, John D. Davis, Mark Manasse, and Rina Panigrahy. 2008. Design tradeoffs for SSD performance. In USENIX 2008
[23] Feng Chen, Rubao Lee, and Xiaodong Zhang. 2011. Essential roles of exploiting internal parallelism of flash memory based solid state drives in high-speed data processing.(HPCA 'll).
[24] Yang Hu, Hong Jiang, Dan Feng, Lei Tian, Hao Luo, and Shuping Zhang. 2011. Performance impact and interplay of SSD parallelism through advanced commands, allocation strategy and data granularity.(ICS 'll).
51


[25] Myoungsoo Jung and Mahmut Kandemir. 2012. An evaluation of different page allocation
strategies on high-speed SSDs.(HotStorage'12)
[26] http://www.computerworld.com/article/2971482/cloud-securitv/samsung-unveils-15tb-ssd-based-on-densest-flash-memorv.html
[27] https://www.linkedin.com/pulse/3-types-ssd-simulators-iason-ma
[28] Hongchan Roh, Sanghyun Park, Sungho Kim, Mincheol Shin, and Sang-Won Lee. 2011. B+-tree index optimization by exploiting internal parallelism of flash-based solid state drives. Proc. VLDB Endow. 5, 4 (December 2011), 286-297
[29] Peter Chovanec and Michal Kratky. 2013. On the efficiency of multiple range query processing in multidimensional data structures. In Proceedings of the 17th International Database Engineering & Applications Symposium (IDEAS '13)
[30] Apostolos N. Papadopoulos , Yannis Manolopoulos. Multiple range query optimization in spatial databases. In Proc. of the 2nd East European Symp. on Advances in Databases and Information Systems (ADBIS)
[31] S. y. Park, E. Seo, J. Y. Shin, S. Maeng and J. Lee, "Exploiting Internal Parallelism of Flash-based SSDs," in IEEE Computer Architecture Letters, vol. 9, no. 1, pp. 9-12, Jan. 2010. doi: 10.1109/L-CA.2010.3
[32] https://parasol.tamu.edu/groups/amatogroup/research/scheduling/scheduling algorithms/
52


Full Text

PAGE 1

nrrrrr nr rnrrrnnnrn rrr r !"#$"#%&%%$'()*+*,!'%+)!#$&"!$+,%./ )0!% 1223 -($"!""./!--$0-*-($ 4%'.+-*5-($&%0.%-$'(**+*5-($ )!#$&"!-*5*+*&%0*!)6%&-!%+5.+5!++/$)*5-($&$7.!&$/$)-"5*&-($0$,&$$*5 %"-$&*5'!$)'$ */6.-$&'!$)'$8r),!)$$&!), 129:

PAGE 2

!! ;129: rrr nnrrr

PAGE 3

!!! (!"-($"!"5*&-($%"-$&*5'!$)'$0$,&$$ &.-.)<%%%)<$&=($0$ (%"$$)%66&*#$05*&-($ */6.-$&'!$)'$%)0r),!)$$&!), 4%&)*."(%)%$!%"(%)! (%!&80#!"*& !-%+%,(%)0 */+-/%) 9>$'129:

PAGE 4

!# &.-.)<%%%)<$&=($0$ */6.-$&'!$)'$%-'(6%-!%+.$&&*'$""!),"!),&$$#$&* +!0-%-$&!#$"n$#$&%,!),)-$&)%+ %&%++$+!"/($"!"0!&$'-$0""!"-%)-&*5$""*&4%&)*."(%) %$!%"(%)!6%-!%+0%-%/%)%,$/$)-(%"$'*/$%)!)-$,&%+6%&*5/%)%66+!'%-!*)"".'(%"$*,&%6(!' )5*&/%-!*)"-$/"%),$7.$&!$"%&$*)$*5 -($/*"-!/6*&-%)-7.$&!$"!)"6%-!%+ 0%-%%"$"6%-!%+!)0$?!),-$'()!7.$"+!=$&$$ %&$%66+!$0-*!/6&*#$-($6$&5*&/%)'$*5".'( 7.$&!$"5-$)0%-%%"$""-$/&$'$!#$"/.+-!6+$&% ),$7.$&!$" %)0!)*&0$&-*!/6&*#$-($ *#$&%++6$&5*&/%)'$ !-6&*'$""$"-($/!)%-'($" '*)"!0$&!),"6%-!%+'(%&%'-$&!"-!'"*5!)'*/!), &$7.$"-"*@$#$& -(!"6$&5*&/%)'$*5%-'(6&*'$" "!),%)0*6$&%-!),!)0$?"-&.'-.&$0$6$)0 *)-($.)0$&+!),"-*&%,$(%&0@%&$ ".'(%"(%&00& !#$"%-%%"$'*//.)!-(%""6$)!-"#%+.%+$-!/$ !)6%"--(&$$0$'%0$" %)0$)$&, !)*6-!/!A!),!-"""-$/"!)%''*&0%)'$@!-( 5$%-.&$".)*@5+%"(%"$0*+!0-%-$&!# $""%&$$/$&,!),%"/%!)"-*&%,$ 0$#!'$"($(%#$'*/6+$-$+0!55$&$)-%)0$--$& 6$&5*&/%)'$'(%&%'-$&!"-!'"'*/6%&$0-* !-(-(!")$@-$'()*+*, @$)$$0-*&$-(!)= -($%-'(6&*'$""!),*5"6%-!%+7.$&!$") -(!"@*&= @$6&*6*"$%)$@"-&.'-.&$*5&$$)*0 $"5*&"%)0%"$0*)-(%-0$#$+*6% ,$)$&%+%)0"-%-!'%++*'%-!*)"6$'!5!'%-'(!),/$(*0"*-(/$-(*0"%#*!0-($&$"*.&'$ '*)-$)-!*)-(%-*''.&"0.$-*+*),0%-%/*#$/$)--! /$'%."$0!)-$&+$%#!),%)0!/6&*#$-($ *#$&%++6$&5*&/%)'$!-(*.&%66&*%'(*5&$$)* 0$&$6&$"$)-%-!*) "6%-!%+7.$&!$"'%)$ %-'($0@!-(*.-'*)"!0$&!),-($"6%-!%+'(%&%'-$&!" -!'"*5!)'*/!),&$7.$"-"$@$&$%+$-* %'(!$#$*#$&%++32B,%!)!)-($6$&5*&/%)'$%"'*/6 %&$0-*)*)%-'(!),/$-(*0"*#$&"($5*&/%)0'*)-$)-*5-(!"%"-&%'-%&$%66&*#$0 &$'*//$)0!-"6.+!'%-!*)0#!"*&4%&)*."(%)%$!%"(%)!

PAGE 5

# nrrr(%#$-%=$)$55*&-"!)-(!"&$"$%&'(*@$ #$& !-@*.+0)*-(%#$$$)6*""!+$@!-(*.--($ =!)0".66*&-%)0($+6*5/%)!)0!#!0.%+"%)0r n%/$/$&"@*.+0+!=$-*$?-$)0/ "!)'$&$-(%)="-*%++*5-($/%/(!,(+!)0$-$0-*&4%&)*."(%)%$!%"(%)!5*&(!",.!0%)'$%)0'*)"-%)".6$&#!"!*)%"@$++%"5*&6&*#!0!),)$'$""%&!)5* &/%-!*)&$,%&0!),-($-($"!"8%+"*5*&(!" ".66*&-!)'*/6+$-!),-(!"&$"$%&'(@*&=@*.+0+!=$-*$?6&$""/,&%-!-.0$-*@%&0 "0$6%&-/$)-'(%!&&!-%+%,(%)0%)0 0$6%&-/$)-*5'*/6.-$&"'!$)'$5*&-($!&=!)0'**6 $&%-!*)%)0$)'*.&%,$/$)-@(!'(($+6/$ !)'*/6+$-!*)*5-(!"6&*<$'-

PAGE 6

#! rnnrr 9 rn : 194+%"($/*& : 11)-$&)%+&'(!-$'-.&$ > 1C%&%++$+!"/!)&'(!-$'-.&$ D 1E4+%"(&%)"+%-!*)n%$&4n%)0++*'%-!*)' ($/$" 92 1F%-!#$*//%)0.$.$!), 9F rnrr 9: C9r#%+.%-!*)*5 9: C1$)$&%+.$&&*'$""!),*) 9: CC6%-!%+%-%%)%,$/$)-*) 9> nrr44nG 9D E99&*+$/$5!)!-!*) 9D E91&$$.$&*&=5+*@%)0++*'%-!*)*#$& 9D E9C&*+$/4*&/%+!A%-!*) 19 rn 1E E9&*6*"$*+.-!*) 1E E99#$&#!$@%,$*)5+!'-&%6( 1E E91%,$$%0$7.$"-%'=!),+,*&!-(/ 1E

PAGE 7

#!! E9C)%+"!"%)0&**5*56-!/%+!-*5+,* &!-(/ 1: E9E%"$9*)5+!'-&%6($&!#%-!*)rn$#$+ 1D E9F%"$1%,$*)5+!'-&%6($&!#%-!*)rn$ #$+ C9 rHrrrn CF F9$-.6%)0*)5!,.&%-!*) CF F1*&=+*%0"%)0$&5*&/%)'$$-&!'" C> FC$".+-"%)0)%+"!" C3 FC1$&5*&/%)'$*5%I#$#"+%)$n$#$+#"!$n $#$+ C3 FCC./$&*5.$&!$" .$&!"-&!.-!*)%)0$ 7.$"-!"-&!.-!*) E1 FCE(+%)$n$#$+'($0.+!),J EF n4r E3 r4rrr ED

PAGE 8

#!!! nr4!,.&$9 r 11 rnrrK19L CC 4nrr >E nr >F rnrr4 3: nrnrrnnnrnrnrrr 92> rnnnrn 923 941E4nnrr 91D 941E4nn 9192 r4rr 9E99 r4rr 9E91 rr 9F9C nr 129E rr4r4r9E 199F rrnnrrr 199: r94nr C99> rrr4rrn44n rCD93 rrr4rrn4n 4E29D 4rnrrr44 E212 4rnrrr4n E919 rr4r 4 E111 rr4r n E11C r EC1E 4r4rr 1FE22 EE

PAGE 9

!? 1F r4rr4nEE1: 4rnrrr4nEF1> nrrnrrn E:13 nrrnrrn E:

PAGE 10

? nr%+$ 9 nnrrrrr 9C 1 n 11 C rHrrn4rr C: E rHrrnn C> F nrn C>

PAGE 11

9 r nn6%-!%+0%-%/%)%,$/$)-(%"$'*/$%)!)-$, &%+6%&-*5/%)%66+!'%-!*)"+!=$!/%,$ 6&*'$""!), ,$*,&%6(!'!)5*&/%-!*)""-$/" ' */6.-$&%!0$00$"!,)%)0"**)%),$ 7.$&!$"%&$/*"-!/6*&-%)-7.$&!$"!)".'("6%-!%+ %66+!'%-!*)"),$)$&%+ -($-%"=!"-*5!)0%++ "6%-!%++'+*"$)$!,(*&")*&0$&-*%)"@$&".'( &%),$7.$&!$" -($"!/6+$"-@%!"-*"'%)-($ '*/6+$-$0%-%%"$(!"+$%0"-*0!"=%''$""*#$&($ %0%)0%+%&,$)./$&*50!"-%)'$ '*/6.-%-!*)"%-%%"$/%)%,$/$)-""-$/"%0 *6-"6%-!%+!)0$?!),"'($/$"-*%#*!0 0!"=%''$""*#$&($%0($/*"-6&*/!)$)-"6%-!%+!) 0$?"-&.'-.&$!"&$$K9>L%)0!-"#%&!%)-M &$$K9>L-&$$!)0$?"-&.'-.&$"6&*#!0$%)%#$& %,$+*,%&!-(/!'"$%&'(-!/$ .-"6%-!%+0%-% 0!/$)"!*)"%)0-($0!"-&!.-!*)"(%#$%)$,%-!#$!/ 6%'-*)-($"$%&'(-!/$ &$$!)0$?"-&.'-.&$"%&$*5-$)"-*&$0!) %)%&&%%"-&%0!-!*)%+&$$%)07.$&!$"%&$ %)"@$&$0@!-(-($"$7.$)-!%+"'%)($"$7.$)-!%+% ''$""6$&5*&/%)'$*5(%&00!"=0&!#$"" !"+$#$&%,$05*&-($"$"$7.$)-!%+"'%)"),$)$&%+ %&$'$!#$"%#$&+%&,$)./$&*5&%),$ 7.$&!$"*&$0.'$-($%#$&%,$7.$&-!/$ 0%-%%"$ ""-$/%-'(*&'*/!)$-($"$/.+-!6+$ 7.$&!$"%-'(!),*5-($"$"7.$&!$"&$7.!&$"%'&!$&!*)($"$'&!-$&!*)"'*)"!0$&-($"6%-!%+ 6&*6$&-!$"*5-($!)'*/!),7.$&!$" +!=$"*&-!),-( $7.$&!$"%''*&0!),-*"6%'$5!++!),'.&#$*& !+$&-*&-!),KC EL(!"!)'&$%"$"-($6&*%! +!-*5%''$""!),-($)$%&&$7.$"-""$7.$)-!%++ 4.&-($& -($""-$/%-'($"7.$&!$"-**6-!/!A$-($ &%)0*/&$%0"%)0!)'&$%"$-($"$7.$)-!%+ %''$""$"*"-*5-($0%-%%"$&$"$%&'((%"$#*+#$0 %&*.)0-($*6-!/!A%-!*)*5-($"$7.$&!$" %"$0*)"-*&%,$""-$/"*-*)+(%#$-($6&*6$&-!$"*5"6%-!%+0%%0$-$&/!)$0-($6$&5*&/%)'$*5-($&$$!)0$? "-&.'-.&$ %+"*-($.)0$&+!),"-*&%,$(%&0@%&$% -%%"$'*//.)!-(%"&$"$%&'($06&$#!*."

PAGE 12

1 -@*-(&$$0$'%0$"*)*6-!/!A!),0%-%%"$""-$/"-* -($.)0$&+!),%"%"-*&%,$0$#!'$ r55*&-"(%#$$$)6.--*%00&$""!"".$"+!=$*6-!/! A!),-($+*),"$$=+%-$)'!$" (!,(6*@$& '*)"./6-!*) %)0&$+!%!+!-.$-*-($/$'(%)!'%+ )%-.&$*5-($" !-!"0!55!'.+--*$+!/!)%-$ ".'(!"".$"'*/6+$-$+.-)*@5+%"(%"$0*+!0 -%-$&!#$""%&$$/$&,!),%"/%!) "-*&%,$0$#!'$" 0.$-*-($!&6$&5*&/%)'$ +*@$&+% -$)' (!,($&-(&*.,(6.-5&*/&%)0*/N +$""6*@$&'*)"./6-!*) %)06&!'$%0#%)-%,$"*#$& $'$)-+%/".),&$+$%"$0".'(%) K1:L-(%-'%)"-*&$%*.-9F*50%-%(!"-& $/$)0*."'(%),$!)-($"-*&%,$/%"**) /%=$*"*+$-$)&$'$)-$%&"-($6&!'$*5-($ !",*!),0*@)4!,.&$9"(*@"-($6&!'$ '*/6%&!"*)$-@$$)-($%)0*)"!0$&!),-($ '*"-$55$'-!#$)$""*5@$)$$0-* &$'*)"!0$&*.&%66+!'%-!*)*&-*/$$-N+$#$&%,$ -($'(%&%'-$&!"-!'"*5%) 4!,.&$9#"&!'$ 4!,.&$1"(*@"-($!)-$&)%+%&'(!-$'-.&$*5 %)!"%&&%),$0!)-*/.+-!6+$5+%"(/$/*& -(%-%&$'*))$'-$0-*%5+%"('*)-&*++$&-(&*.,(/ .+-!6+$6%&%++$+ n ($5+%"( '*)-&*++$&'%)*6$&%-$*)$%'(*5-($"$'(%))$+"!) 0$6$)0$)-+%)0!)6%&%++$+6%'=%,$!"

PAGE 13

C '*/6*"$0*5/.+-!6+$ -(%-%&$!)-.&)'*/6*"$0*5/.+-!6+$ n ($6+%)$"%&$%,%!) 0!#!0$0!)-*%+%&,$)./$&*56("!'%+0%-%6%,$" r%'(0!$!)-($6%'=%,$'%)$*6$&%-$0!)0 $6$)0$)-+%)0&$'$!#$"!)-$&+$%#$0*6$&%-!*) '*//%)0"5&*/-($'*)-&*++$&4&*/%)%&'(!-$'-.&$ 6$&"6$'-!#$ "$?(!!-/.+-!6+$+$#$+"*5 6%&%++$+!"/'(%))$+ 6%'=%,$ 0!$%)06+%)$K19L !)'$/.+-!6+$'*/6*)$)-"*5%)'%)$ %'-!#%-$0"!/.+-%)$*."+ /.+-!6+$6%,$"'%)$*6$ &%-$0&$%0@&!-$!)6%&%++$+(!"6%&%++$+ %''$""!/6&*#$"-($6$&5*&/%)'$*5&%)0*/N 5*& @(!'((%#$6&*#$0".6$&!*&'*/6%&$0 -* 4!,.&$1 )-$&)%+&'(!-$'-.&$K19L ($&%)0*/&$%0"6$$0*5"'%)$.-!+!A $0!)!/6&*#!),-($6$&5*&/%)'$*5"6%-!%+ 7.$&!$"n$#$&%,!),-(!"&%)0*/)$""5*&%"!),+$*& %"/%++)./$&*5&%),$7.$&!$"'*.+0$ $)$5!'!%+%-!#$*//%)0.$.$!),'*/6*)$)*5-($-%=$"'%&$*5&$%&&%),!), -($&$7.$"-"-(%-'%)$$?$'.-$0!)6%&%++$+!$ 0!"-&!.-$0%'&*""/.+-!6+$'(%))$+".--($"!A$ *5-(!"!"%+!/!-%-!*)-#%&!$"5&*/C1913 &$7.$"-".-5*&%+%&,$)./$&*57.$&!$"-($

PAGE 14

E 6%&%++$+!"/0*$")*-"$$/-*($+6/.'( 0.$-*-($ "!A$+!/!-"n%&,$)./$&*5&$7.$"-" '*.+0'&$%-$(*-"6*-"@!-(!)-($&$"*.&'$"'( %))$+ 6%'=%,$ 0!$*&6+%)$(%-!" /.+-!6+$ &$7.$"-"'*.+0."$-($"%/$'(%))$+ 6%'=%,$*&0!$ !"-&!.-!),-($&$7.$"-"%'&*""-($ &$"*.&'$""'($0.+!),-($/'*.+0%#*!0*&&$0.'$ -($(*-"6*-" $%&&%),!),*&%-'(!),%+%&,$)./$&*5 6%,$&$7.$"-"!)7.$&!$"%--($%66+!'%-!*)+$#$+ -*+$#$&%,$-($6%&%++$+!"/!"&$7.!&$0-*!/6&*#$ -($6$&5*&/%)'$!)'$6&*#!0$"0!55$&$)+$#$+"*56%&%++$+!"/%)01E@%"*5%++*'%-!),-($ 6%,$"@$)$$0-*&$-(!)=-($%-'(!),*56%,$ &$7.$"-"*#$&'*/6%&$0-*)-(!"@*&= @$ 6&$"$)-/$-(*0"*56%'=!),%)0"'($0.+!), -($&$%0&$7.$"-",$)$&%-$0-($"6%-!%+7.$&!$" %-%66+!'%-!*)+$#$+@(!'(+$#$&%,$"-($ /%?!/./0$,&$$*56%&%++$+!"/($/%<*&6$&5*&/%)'$!)"6%-!%+0%-%7.$& 6&*'$""!),*)5+%"(%"$0"'%)$%'(!$#$0 !5@$+$#$&%,$6*"!-!#$'(%&%'-$&!"-!'".'(%"6%&% ++$+!"/%)0%#*!0)$,%-!#$'(%&%'-$&!"-!'"".'( %"*.-*56+%'$.60%-$")-(!"@*&= @$'*)'$)-&% -$*)!/6&*#!),-($7.$&6$&5*&/%)'$ +$#$&%,!),-($!)-$&)%+6%&%++$+!"/*5"$6$)0 !),*)-($'*)5!,.&%-!*)*5%) -($ '*)-&*++$&'%)$?$'.-$ )./$&*55+%"(*6$&%-!*)"!)6%&%++$+!$!-'%) &$%0N@&!-$ 6%,$"!) 6%&%++$+6-!/!A!),%)0/%?!/!A!),.-!+!A%-!*)*5 -(!" r ,!#$."-($$--$&7.$&6$&5*&/%)'$ *)-&!.-!*)*5-(!"@*&=%&$ 9 ++*'%-!),-($%"$0&$$-*%)""-$/." !),1E"-%-!'%++*'%-!*)"'($/$" 1 $)$&!'/$-(*0"5*&!0$)-!5!),%)0"$6%&%-!),"$7. $)-!%+*&)*)6%&%++$+&$7.$"-"@(!'( '&$%-$(*-"6*-"%)0!/6%'--($6$&5*&/%)'$%"$9 C 4.&-($&*6-!/!A!),-(!"!0$)-!5!'%-!*)6&*'$""%"$ 0*)-($.)0$&+!),%++*'%-!*)"'($/$ %"$1

PAGE 15

F E )$@*6-!/%+%,$$%0$7.$"-%'=!),/$-( *0-*6%'=!)'*/!),&$%0&$7.$"-" !)-*%/!)!/./)./$&*5%-'($"%)0!/6&*#$6$&5*& /%)'$%#*!0!),(*-"6*-" F /6+$/$)-%)O!/.+%-*&P !-"!/.+%-$"%++1E@% "*5%++*'%-!),%)&$$%)0"6%-!%+ 7.$&6&*'$""!),*#$& : !)'$)*'*//$&'!%+/%).5%'-.&$&"6&*#!0$0$-%! +"*5-($%++*'%-!*)"'($/$" @$ (%#$."$0!/K1EL(%&0@%&$#%+!0%-$0"!/.+%-*& *0!5!$0!--*".66*&-%++-($1E %++*'%-!*)"'($/$"%)0/$%".&$-($*#$&%++6$&5*&/% )'$!/6&*#$/$)-*5"6%-!%+7.$&!$" *#$& *"-*5-($&$+%-$0@*&=!"5*'."$0*)-($ '*)"-&.'-!*)*5&$$*#$&*6-!/!A!),-($ O*.-*56+%'$.60%-$"PKE F : >L%)0*-($&"'*)' $)-&%-$0*)&$$K13L%)0%"()0$?!),*#$& K11L@!-(*.-'*)"!0$&!),0$-%!+"*5!)-$&)%+"+ !=$%++*'%-!*)"'($/$"r?!"-!),@*&=*)6%&%++$+ !/6+$/$)-%-!*)K92L*5&$$*#$&0!"-&!.-$0 %&'(!-$'-.&$6&*6*"$-($$55!'!$)-"6%-!%+ 0%-%6%&-!-!*)!),5*&(!,(6$&5*&/%)'$"6%-!%+0%-% %''$""$'*/6%&$0*.&"*+.-!*) 6$&5*&/%)'$@!-(-($)%I#$*&)*"'($0.+!),/$-(*0 !-(*.&"'($0.+!),%)06%'=!),%66&*%'( @$@$&$%+$-*%'(!$#$%*.->2B!/6&*#$/$)'*/6 %&$0-*)%I#$"'($0.+!), 6%'=!),&$%0 &$7.$"-"'*)"!0$&!),-($6+%)$'*)5!,.&%-!*)!) *'*)"!0$&%-!*)*5%++*'%-!*)"'($/$ 6-!/!A%-!*)*5-($6%'=!),%"$0*)%++*'%-!*)"'( $/$!/6&*#$0-($6$&5*&/%)'$5.&-($& 92B#$&%++@$@$&$%+$-*,$-32B!/6&*#$/$)' */6%&$0@!-()%I#$"'($0.+!), !) 6$&5*&/%)'$

PAGE 16

: r nr*/$$-*.&&$"$%&'(,*%+@$/."-5!&"-.)0$&"%)0-($%&'(!-$'-.&$%)0!-"!/6*&-%)!)-$&)%+6&*6$&-!$")-(!"'(%6-$& @$0!"'.""-( $!)-$&)%+%&'(!-$'-.&$*5%) !" ($&$!"-@*=!)0"*55+%"(/$/*&%)0K9L 4+%"(/$/*&(%"(!,($& 0$)"!+%&,$&'%6%'!-'*/6%&$0-*4+%"(/$/* &,%!)-($&$%&$-@*-6$*54+%"( /$/*& !),+$n$#$+$++n@(!'("-*&$"*)$! -6$&'$++%)0.+-!n$#$+$++n@(!'( "-*&$"-@**&/*&$!-"6$&'$++*(%#$/*&$"-*& %,$'%6%'!-"$#$&%+5+%"('(!6"%&$6%'=$0 -*,$-($&%"%/*0$+'%++$0 r KCLr%'(6%'=%,$1*&/*&$ r%'(0!$!"'*/6*"$0*5 /.+-!6+$ n r%'(6+%)$'*)"!"-"*5-(*."%)0"*55+%"(+*'=" %)0*)$*&/*&$0%-%N'%'($ &$,!"-$&."$0%"N.55$&r%'(+*'='*)"!"-"*5 :E*&9136%,$"4!,.&$C"(*@"%-6!'%+5+%"( /$/*&6%'=%,$@**-($&!/6*&-%)-'(%&%'-$&!"-!'"*55+%" (/$/*&%&$ r %)0 rnr K1L@&!-$*6$&%-!*)'(%),$"-($#%+.$*5$%'( !-5&*/Q9R-*Q2R6%&-*5-($6%,$'%))*-$ .60%-$0)'$%6%,$!"@&!--$)!-/."-$$&%"$0 @($&$%++-($!-"%&$"$--*Q9R $5*&$&$ @&!-!),-($6%,$*&.60%-!),-($6%&-*5-($6%,$ r%'(+*'=(%"%+!/!-$0)./$&*5$&%"!), %++*@$06!'%++%)n(%"$&%"$''+$%*.-92 /*&$!)-($+%-$"-#$&"!*)"*5%)0n (%"%*.-922K9L($6%,$"!A$*5%-6!'%+ +#%&!$"5&*/1 E %)03

PAGE 17

> 4!,.&$C4+%"($/*&%'=%,$ #$!#!%&$%$'! 4!,.&$E'*/6%&$"-($6("!'%+"-&.'-.&$*5*-( %)0(%#$#%&!*."/*#!), 6%&-"/%=!),-($/"."'$6-!+$-*"(*'=%)00%/%,$ @(!+$"."$%)*)/$'(%)!'%+0$"!,)*5 5+%"(/*.)-$0*)%'!&'.!-*%&0 4!,.&$E #"("!'%+-&.'-.&$

PAGE 18

3 "4!,.&$FK19L0$6!'-"0!55$&$)-'*/6*)$) -"*5%*+!0-%-$&!#$)!" '*/6*"$0*5%)%&&%*55+%"(/$/*&6%'=%,$" &*' $""*& !)-$&)%+.55$& *)-&*++$& %)0%(*"-!)-$&5%'$"6&*#!0$+*,!'%++*'=%0 0&$""n%"%)!)-$&5%'$-*-($(*"-($(*"N&$7.$"-"%&$(%)0+$0-($'*)-&*++$& '*)-&*++$&/!/!'-($$(%#!*&*5%(%&00!"= 0&!#$*)-&*++$&."$"-($5+%"(-&%)"+%-!*) +%$&4n/%66!),!)5*&/%-!*)-*/%6*& -&%)"+%-$-($!)'*/!),n&$7.$"--*-($%'-.%+6 ("!'%+6%,$"!),-($4n!)5*&/%-!*)!-!"".$" '*//%)0"+!=$&$%0 @&!-$ $&%"$$-'-*5+%"(/$/* &6%'=%,$"#!%5+%"(/$/*&'*)-&*++$&+%$& 4+%"(/$/*&'*)-&*++$&'*))$'-"-*5+%"(6%'=%,$" #!%/.+-!6+$'(%))$+"r%'('(%))$+'*))$'-" -*/.+-!6+$5+%"(6%'=%,$" 135+%"(6%'=%,$" 4!,.&$F)-$&)%+&'(!-$'-.&$*5%) $)$&%++ -($)./$&*5'(%))$+"#%&!$"5&*/192 .--(!")./$&!)!"+!/!-$0-($ -*+$&%+$6$%='.&&$)-%)0-($%&$%%#%!+%!+!-!) -($'*)-&*++$&4+%"('*)-&*++$&'%)!"".$ !)0$6$)0$)-N&$7.$"--*/.+-!6+$'(%))$+" "!/.+ -%)$*."+(!",!#$"."-($'(%))$++$#$+ 6%&%++$+!"/)./$&*55+%"(/$/*&6%'=%,$"%'&* ""%'(%))$+,!#$"."-($)./$&*5@%"

PAGE 19

D N6%&%++$+!"/'%)$%'(!$#$0%'&*""/.+-!6+$6%' =%,$"@!-(!)%'(%))$+!"".!),-($!)-$&+$%#!), N'*//%)0"(!",!#$"."-($@%+$#$+6%&%++$+! "/*&$%*.-6%&%++$+!"/!"0!"'.""$0!) -($)$?-"$'-!*)(!&&#!%&$%$'! )%))./$&*5'(%))$+"%)0@%"0$5!)$"-($) ./$&*6$&%-!*)"-(%-% '*)-&*++$&'%)6&*'$""!)6%&%++$+)""-$/6 &*#!0$"5*.&+$#$+"*56%&%++$+!"/KC EL 9 nnnrnnn r%'('(%))$+'%)$*6$&%-$0*&'%)6&*'$""'*//%) 05&*/ '*)-&*++$&!)0$6$)0$)-+%)0"!/.+-%)$*."($-(&* .,(6.-*56%,$&$%0"'%)$ "!,)!5!'%)-+!)'&$%"$0+$#$&%,!),'(%))$++$#$+ 6%&%++$+!"/ nnrnnnr r%'('(%))$+!)%)!""(%&$0/.+-!6+$6%'=%, $"($"$ 6%'=%,$"'%)$*6$&%-$0!)0$6$)0$)-+%)0"!/.+-%) $*."+(!"+$#$&%,$"-($6%'=%,$ +$#$+6%&%++$+!"/.--**6-!/!A$&$"*.&'$.-!+!A% -!*)".'(%"'*)-&*+." -($*6$&%-!*)" -*-($6%'=%,$".)0$&"%/$'(%))$+%&$!)-$&+$%#$0 r nnrnnn !-(!)%6%'=%,$ -@*0!$"'%)*6$&%-$"!/.+-%)$*. "+$&$-** -($*)+&$"-&!'-!*)!"-(%-'*//%)0"-*$%'(0!$' %))*-$"$&#$0%--($"%/$-!/$!$ -($'*//%)0")$$0-*$!)-$&+$%#$0 r nnnrnnnr )"!0$%0!$ /.+-!6+$6+%)$"'%)*6$&%-$"!/.+-%)$ *."+($&$ %&$'*.6+$*5&$"-&!'-!*)"%*.--(!"+$#$+*56%&% ++$+!"/KFL4!&"-($"%/$*6$&%-!*)+!=$ &$%0N@&!-$N$&%"$&.)*)/.+-!6+$6+%)$"$'*)0 *6 $&%-!),6%,$"/."-(%#$-($"%/$ 6%'=%,$ 0!$ +*'=%)06%,$%00&$""$)+6%, $95&*/6+%)$2%)06%,$95&*/6+%)$ 9'%)$*6$&%-$0"!/.+-%)$*."+ r rr

PAGE 20

92 4*++*@!),4!,.&$:0$6!'--($6+%)$+$#$+6%&%++$+! "/%)0!)-$&+$%#$*6$&%-!*)0!$" 4!,.&$: +%)$n$#$+%&%++$+!"/%)0)-$&+$%#$*6$&%-!*) )!#$& #"!*+#, %$& #% 4n&$"6*)"!+$5*&!)-$&5%'!),*&/%66!),-($+*,! '%+@*&+0*5-($(*"-n%)06("!'%+ 5+%"(@*&+0-6$&5*&/"9/%!)-$)%)'$+*,! '%+-*6("!'%+%00&$""/%66!),0%-%"-&.'-.&$ 1=$$6-&%'=*5!)#%+!06%,$"5*&,%&%,$'*++$'!*)C$#$)+0!"-&!.-$@&!-$%)0$&%"$ *6$&%-!*)"*#$&+*'="@$%&+$#$+$&./$&*."/$(*0"*5!/6+$/$)-!),-($4n(%#$$$) 6.+!"($0($"$/$-(*0"'*)"!0$&-($0!55$&$)-+$# $+*5/%66!),,&%).+%&!-!$6%,$/%66!), +*'=/%66!), %)0(&!0/%66!), 4!,.&$> (%))$+%)0%%&%++$+!"/

PAGE 21

99 ++*'%-!*)"'($/$*&6%,$%++*'%-!*)"-&%$,0$-$&/!)$"(*@-*'(**"$%6("!'%+6%,$ 5*&/%66!),++*'%-!*)"'($/$"%&$'+%""!5!$0!)-* -@*'%-$,*&!$" "-%-!'%)00)%/!'-%-!' %++*'%-!*)%""!,)"%+*,!'%+6%,$-*-($6&$0$-$&/ !)$0'(%))$+ 6%'=%,$ 0!$%)06+%)$)%/!' %++*'%-!*)%""!,)"%+*,!'%+6%,$-*%)5&$$6("! '%+6%,$($%++*'%-!*)"-&%-$,!$"0$6$)0*) (*@%6%,$!""$+$'-$05&*/-($6**+@*@%" nrrrr K:L 0$-$&/!)$(*@ %6%,$!""$+$'-$04!,.&$>0$6!'-"*-((%))$+% )0%6%&%++$+!"/($%-6$6%&%++$+!"/ '%)$5.&-($&"$,&$,%-$0!)-*%'=%,$ !$ %)0+ %)$+$#$+6%&%++$+!"/ (%))$+5!&"-%++*'%-$"6%,$"%"$0*)'(%) )$++$#$+"-&!6!), $)$5!-"!)-&%&$7.$"6%&%++$+!"/%5!&"-%++*'%-$"6%,$"-%=!),%0#%) -%,$"*5@%6!6$+!)!),(!"$)$5!-"!)-$& &$7.$"-6%&%++$+!"/($@%6!6$+!)!),+$#$&%,$"0 !$%)06+%)$+$#$+6%&%++$+!"/*)"!0$&!),-($ 5*.&+$#$+"*56%&%++$+!"/@$'%)(%#$ r @%"*5%++*'%-!*)"-&%-$,!$"!$0!55$&$)'*/!)%-!*)*5 (%))$+%!$+%)$!"% '=%,$4!,.&$92899"(*@"1"%/6+$ %++*'%-!*)"'($/$"*.-*51E%++*'%-!*)"'($/$")*-(%++*'%-!*)"'($/$" %)*"$&#%-!*) -(%-"$-*56%,$"!)%6+%)$%&$"%/$+"* -($ 0!55$&$)'$$-@$$)-@*'*)"$'.-!#$6%,$""$$/"-* $'*)"-%)-%)0!"$7.%+-*)./$&*56+%)$" !)'*)5!,.&%-!*))-($$?%/6+$%++*'%-!*)"'( $/$" %"/%&=$0 6%,$"SF 19T%&$!)-($ %++*'%-$0!)*)$6+%)$%)00!55$&$)'$$-@$$)-($6 %,$"!)9:%)0!"$7.%+-*)./$&*56+%)$" !$9:$%+"**"$&#$0-(%-5*&%++1E%++*'%-!* )"'($/$" -(!""$-*56%,$"!)%6+%)$%)0 0!"-%)'$$-@$$)6%,$"&$/%!)"'*)"-%)!rrr""r #rrrrrrr nrr$rrrrr nnrrrr%rr r&rrrnrr' nrr %r rn

PAGE 22

91 4!,.&$3 9*51E@%"*5%++*'%-!*)"'($/$ 4!,.&$D 9*51E@%"*5%++*'%-!*)

PAGE 23

9C %/$=!)0*5*"$&#%-!*)" %"5*&6+%)$" '%) $/%0$%--($0!$+$#$+)4!,.&$9189C@$ '%))*-!'$-(%-"$-*56%,$"%&$"%/$!)%0!$.-(!"6%--$&)!")*-%66+!'%+$-*%++-($1E %++*'%-!*)"'($/$"%"!-!"5*&-($6+%)$+$#$+* @$#$& %,&*.6*5%++*'%-!*)0*$"0$6!'--($"%/$ 6%--$&)%-0!$+$#$+%+$+!"-%++*'%-!*)"'($/$" "(%&!),%'*//*)6%--$&)4!,.&$9189C"(*@ -(%--@*%++*'%-!*)"'($/$"%)0"%/$6%-$&)*56%,$"!)%0!$ %+$9 r ++*'%-!*)'($/$"@!-(-($%/$%--$&) r ! '%$& #% .&$ &/ $$!# 9 1 C E F : n ( rr)nnrrrrrr rrnnrrrrr rrrr n rr)nnrrr*#r#r#r#r #r+#r*#r +#r*###+#r*#+#r*# ####+r rrrrrrnnrrr rrrrrrr r

PAGE 24

9E 4!,.&$92 $-*56%,$"!)%r r 4!,.&$99 $-*56%,$"!)%r

PAGE 25

9F 0$&/ #,1''*1+ %-!#$*//%)0.$.!),%++*@"(%&00! "=0&!#$"-*!)-$&)%++*6-!/!A$-($&$%0@&!-$ &$7.$"-(!"'%)&$0.'$-($%/*.)-*5.))$'$""%& 0&!#$($%0/*#$/$)&$".+-!),!)!)'&$%"$0 6$&5*&/%)'$5*&@*&=+*%0"@($&$/.+-!6+$"!/.+-%)$* ."&$%0N@&!-$&$7.$"-"%&$*.-"-%)0!), 0$#!'$"%+"*".66*&--(!") *6-!/!A$"-($&$%0@&!-$&$7.$"-" 0!"-&!.-!),-($/%'&*""%#%!+%+$'(%))$+"$55!'!$ )-+ &$".+-!),!)"!,)!5!'%)-6$&5*&/%)'$ !/6&*#$/$)-($'%)7.$.$.6-*C1&$7.$"-" 4!,.&$91 %-!#$*//%)0.$.!),!) */$-!/$"5*&%+%&,$)./$&*5&$7.$"-",$ )$&%-$0-($%66+!'%-!*) '%))**6-!/!A$-($&$7.$"-"!-(%+%&,$)./$&*5&$7.$ "-"6&*%!+!-*5%++-($&$7.$"-,*!),-* "%/$'(%))$+!)'&$%"$%)0-(!"/%=$")*."$*5 *6-!/!A%-!*)%"7.$.$"!A$!"+!/!-$0$ 6&*6*"$*.&"'($0.+!),"'($/$%--($%66+!'%-!*)+$ #$+-*%#*!0-(!"'*)-$)-!*)*5&$7.$"--*-($ "!),+$'(%))$+

PAGE 26

9: r rrn3r4$(%#$'%-$,*&!A$0-($&$+%-$0@*&=!)-*9 @*&=&$+%-$0-*0$"!,)%)0$#%+.%-!*)1 $)$&%+7.$&6&*'$""!),*)""-$/C6%-!%+0 %-%/%)%,$/$)-*)"*.&'.&&$)-@*&= !"&$+%-$0-*&$$ @$@!++$'*)'$)-&%-!),/*&$ *)&$$!)0$?!),(r/'$& # 5 %)*5-($&$+%-$0&$"$%&'('*)'$)-&%$*)-($0$"!,)%)0$#%+.%-!*)*5-($ !)-$&)%+"-&.'-.&$"K11L*//$&'!%+/%).5%'-.& $&"+!=$)-$+ %/".), %)!"=$-'0*$")*6&*#!0$-($0$-%!+"%*.-!)-$&)%+"-&.'-.&$+!=$6 %,$"!A$ 4n/%66!),$-')-($!&@*&=4$), ($)$-%+K1CL6&*#!0$"!)"!,(-"%*.-(*@-*.)0$ &"-%)0*&$?-&%'--($!)-$&)%+'(%&%'-$&!"-!'" *5%'+*"$0%)-!"=!)0*5&$#$&"$$),!)$$&! ),&.))!),0!55$&$)-@*&=+*%0%)0%"$0*) -($"-%-!"-!'"!)-.!-!#$+6&$0!'--($'(%&%'-$&!"!'"*5%)-($&@*&="K11 1E C9L-($6&*#!0 $ !)"!,(-*5$#%+.%-!),-($!)-$&)%+6%&%++$+!"/*5% )(#!1'!"! % # %"--(&$$0$'%0$"&$"$%&'($&"(%#$@*&=$0*)*6-!/ !A!),-($0%-%%"$7.$&6&*'$""!), @!-(&$"6$'--*-($6$&5*&/%)'$%)0'(%&%'-$&!"-!'" *5-&%0!-!*)%+".-)*@$?(!!-)$@ "$-6&*6$&-!$"'*/6%&$0-*-($%)0&$"$%&'($&" )*@(%#$-*&$-(!)=7.$&6&*'$""!), -$'()!7.$"*#$&"$'$)-@*&="(%#$'*)'$)-&%-$ 0*)+$#$&%,!),-($$)$5!-"5&*/&%)0*/ &$%0"!)K3 DL!/!-&!""!&*,!%))!"$-%+K3L 6&*6*"$4+%"('%)%)04+%"(*!)%5+%"(%@%&$ "'%)%)0<*!)!/6+$/$)-%-!*)5*&."!)$""!)-$++!,$ )'$%)0*-($&0%-%%)%+"!"7.$&!$"5*&% '*+./)%"$06%,$+%*.-$".+-""(*@6$&5*&/%)'$ !/6&*#$/$)-%5%'-*&*5"!?5*&"'%)" /.+-!@%<*!)" %)0'*/6+$?7.$&!$")%)*-( $&@*&=$0&%/(*0")!%$-%+KDL+$#$&%,$

PAGE 27

9> -($N6%&%++$+'(%&%'-$&!"-!'"*5-($"%)0. !+06%&%++$+N%@%&$7.$&*6-!/!A%-!*)/*0$ '%++$0((-$&$#2#$ # .+-!0!/$)"!*)%+"6%-!%+0%-%"-&.'-.&$"%&$0$5!)$ 0(!$&%&'(!'%+".0!#!"!*)"*5"6%'$" @(!'(,!#$"&!"$-*-&$$%"$0"$%&'("-&.'-.&$" *)#$)-!*)%+!)0$?-6$"0*)*-$55!'!$)-+ (%)0+$"6%-!%+7.$&!$"".'(%"(*@5%&-@*6*!)-"0 !55$& *&@($-($&6*!)-"5%++@!-(!)%"6%-!%+ %&$%*5!)-$&$"-6%-!%+!)0!'$"%&$."$0"6%-! %+0%-%%"$"-**6-!/!A$"6%-!%+7.$&!$"*//*) "6%-!%+!)0$?/$-(*0"!)'+.0$7.%0-&$$K9:L =0& $$K9>L &$$K93L.%0-&$$K9:L!"%-&$$ 0%-%"-&.'-.&$!)@(!'($%'(!)-$&)%+)*0$(%"$?%' -+5*.&'(!+0&$).%0-&$$"%&$/*"-*5-$) ."$0-*6%&-!-!*)%-@*0!/$)"!*)%+"6%'$&$'.&" !#$+".0!#!0!),!-!)-*5*.&7.%0&%)-"*& &$,!*)"($&$,!*)"/%$"7.%&$*&&$'-%),.+%& *&/%(%#$%&!-&%&"(%6$"0&$$K9>L!" %"6%'$6%&-!-!*)!),0%-%"-&.'-.&$5*&*&,%)!A!), 6*!)-"!)%=0!/$)"!*)%+"6%'$&$$K93L($ @%"6&*6*"$0)-*)!).--/%)!)9D3E($=$!0 $%*5-($0%-%"-&.'-.&$!"-*,&*.6)$%& *<$'-"%)0&$6&$"$)--($/@!-(-($!&/!)!/./*.)0 !),&$'-%),+$!)-($)$?-(!,($&+$#$+*5 -($-&$$U-($VV!)-&$$!"5*&-($&$'-%),+$ !)'$%++*<$'-"+!$@!-(!)-(!"*.)0!),&$'-%),+$ %7.$&-(%-0*$")*-!)-$&"$'--($*.)0!),&$'-%) ,+$%+"*'%))*-!)-$&"$'-%)*5-($'*)-%!)$0 *<$'-"--($+$%5+$#$+ $%'(&$'-%),+$0$"'&!$ "%"!),+$*<$'-U%-(!,($&+$#$+"-($%,,&$,%-!*) *5%)!)'&$%"!),)./$&*5*<$'-" ($/%<*&!-*5$?!"-!),@*&=*)"6%-!%+0%-%/%)%, $/$)-*#$&!)'+.0$"-($$55!'!$)'*)"-&.'-!*)%)0$#%+.%-!*)*5&$$*"-*5-($ @*&="'*)"!0$&*6-!/!A!),-($"O*.-6+%'$ .60%-$P@(!'(!"%'-!#%-$0@($)%)*0$"6+!-*''.&" @(!+$'*)"-&.'-!*)*5-($&$$"KELKFL (!)"!$).$-%+*&=-%&,$-"-*!/6+$/$)-%)$ 55!'!$)-"6%-!%+0%-%/%)%,$/$)-""-$/5*& /*!+$""-$/"'*)"!0$&!),-($6$&5*&/%)'$%)0$)$& ,'*)"./6-!*)*5-($""-$/(!"@*&=

PAGE 28

93 %+"*'*)"!0$&"-($*6-!/!A%-!*)*5O*.-6+%'$.60%$P+!/!-%-!*)*55+%"(/$/*&)%"!/!+%&@*&= %)5$+n#K:L6&$"$)-"%)*#$+5+%"(%@%&$#%&!%)*5&$$)%/$n&$$n*,*/6%'-$$ ($n&$$0$6+*"-($'*/6%'-+*,/$'(%)!"/-*! /6&*#$-($@&!-$6$&5*&/%)'$"!,)!5!'%)-+ @!-(%)*)++!--+$!)'&$/$)-*5&$%0*#$&($%0(! "n&$$!/6+$/$)-%-!*)'%)%'(!$#$.6-* CH,%!)"*#$&-($&$$%)0$?!"-!),5+%"(%"$0 4nKEL0!55$&$)-#%&!%-!*)*5&$$ !/6+$/$)-%-!*)+!=$M&$$%+"*(%"$$)"-.0!$0 *!%"r/&!'(K>L$-%+(%#$"-.0!$0&$%+-!/$ 6$&5*&/%)'$!/6&*#$/$)-*5M&$$*)5+%"(")&$'$)-$%&"6%&%++$+!/6+$/$)-%-!*)*5&$$" *)/*0$&)/%)'*&$6&*'$""*&+!=$ (%"$$)"-.0!$0!)-$)"!#$+!/!)*.$-%+K91LL 6&*#!0$#$&0$-%!+!/6+$/$)-%-!*)*56%&%++$+ -&$$*)%%)00$-%!+%)%+"!"*5"6%-!%+7.$& !)'+.0!),-($"6%-!%+<*!)"6&*'$""!),."!), -($6%&%++$+&$$'*)"-&.'-$0)*.&".&#$@$% +"*'%/$%'&*""-($6%&%++$+%''$""/$-(*0" K92L%)06%&%++$+.+=+*%0!),*5&$$K99L)($@*&=*5(&!"-*"4%+*.-"*"%)0$-%+6&*#!0$ !)"!,(-!)-*-($!/6+$/$)-%-!*)*56%&%++$+&$$ *#$&0!"-&!.-$0$)#!&*)/$)-'*)"!"-"*5 /.+-!6+$"%)0-($"6%-!%+0%-%!"6*&-!*)$0%'& *""-($)*0$"($@*&=%+"*6&*#!0$$55!'!$)/$-(*0"6&*?!/!-/$-(*0*56%&-!-!*)!),-($"6%!%+0%-%-*%'(!$#$,*%+-*/%?!/!A$-($ 6%&%++$+!"/5*&+%&,$7.$&!$" +!=$&%),$7.$&!$"@ (!'(-$)0-*%''$""-($+%&,$6*&-!*)*5-($ "6%-!%+0%-%%)0&$0.'$-($)./$&*50!"=%''$""5 *&"/%++7.$&!$"+!=$6*!)-7.$&!$"@(!'( &$7.!&$"/%++%/*.)-*50%-%%''$"""!/!+%&@*&= 6&*6*"$0%)!%&'(%<$$#%)0$-%+K99L '*)"!0$&""'%+%+$%6$0.'$%66&*%'(!)6%&%++$++ *%0!),*5&$$."!),%+$#$++$#$+0$"!,) K1D C2L!"'.""-($/$-(*0"*5"6%-!%+%-'(6&*'$ ""!),*5/.+-!6+$&%),$7.$&!$"($6&*#!0$ !)"!,(-*)"$+$'-!),-($"6%-!%+'(%&%'-$&!"-!'"*5 !)'*/!),7.$&!$"5*&%-'(6&*'$""!),

PAGE 29

9D r3 nrrn n6n)! 75&#&$& # !rr%rr nnrnr' #rr r#r rr nr' rr , n#rrnr' rrr $5*&$@$6&*6*"$-($"*+.-!*)-*-($%*#$6&*+$/ !)'*)-$?-@!-($)$$0-*.)0$&"-%)0 (*@-($&$$%++*'%-!*)%)0!-"%''$""6%--$&)*# $&$'-!*)E91,!#$"0$-%!+"%*.-!-)8!1'!" !95 .#, %$& # /! )5!,.&$9C "(*@)%"%/6+$"6%-!%+0%-%6 %&-!-!*)$0!)-*/.+-!6+$(!$&%&'(!'%+"5*& "6%-!%+!)0$?!),4!,.&$9E"(*@"-($'*&&$"6*)0!), -($&$$"-&.'-.&$*5(!$&%&'(!'%+" ""./$ *)-(!""%/6+$"6%-!%+0%-% &%),$7.$&!$" 9 1 %)0C%&$$?$'.-$0($5+*@*5-($"$ 7.$&*#$&&$$%&$/%&=$0!)5!,.&$9E($&%), $7.$&!$"0$,$)$&%-$"!)-*%''$""!),-($ &$$)*0$"%)0-&%#$&"$/.+-!6+$6%-("*5-($-&$$ (!"/.+-!6+$6%-(-&%#$&"%+*5-($7.$&!$" '&$%-$"-($)$$0*5%''$""!),/*&$%)00.6+!'%-$ &$$)*0$"*-($%++*'%-!*)*5-($"$&$$ )*0$"*#$&"(*.+0$".'(-(%-/.+-!6+$)*0$"' %)$5$-'($0!)6%&%++$+4!,.&$9F 0$6!'-" -($@%*5%++*'%-!),-($&$$*#$&r%'()*0 $)./$&*5-($&$$ -&%#$&"$0!)-($ &$%0-(5!&"-@%4 !)5!,.&$9F &$5$&"-*%) !)0$?*54n%)0!)-.&)-($4n!)0$?/%6"-*% 6%,$!)(%-!"$#$&)*0$*5&$$!"-($"! A$*5%6%,$%)0/%6"-*%6%,$*)!)'$ -($$%'(7.$&0$,$)$&%-$"!)-*%''$""%)&$$*& %6%,$ @$&$0$5!)$*.&6&*+$/%" r !rr%rrnr' r-rr .r rr/0rr r#r nrrrrr,rrnr' rrr nrr

PAGE 30

12 )-($'*)-$?-*5 *.&,*%+!"-*"'($0 .+$-($"$6%,$%''$""-*!/6&*#$-($6$&5*&/%)'$ +$#$&%,!),-($6%&%++$+6%,$%''$"""-($)*0$ )./$&/%6"-*-($6%,$*)-($ @$'%) 0$-$&/!)$-($ %-$%'(+$#$+*5-($&$$ )$?-+$ #$+6%,$"-*$5$-'($0*".66*&--(!")*0$ )./$& @$!)-&*0.'$%)$@"-&.'-.&$5*&&$$)*0 $"*#$&4!,.&$9F0$6!'-"-($)$@)*0$ "-&.'-.&$*5-($&$$r%'()*0$'*)"!"-"*5*0 $%)0+!"-*5$?-*0$"&$5$&$)'$0 -($'.&&$)-)*0$ 4!,.&$9C 6%-!%+%-%6%&-!-!*)$0!)-* !-(4-&%#$&"%+@$'%)&$6&$"$)--($-&$$!)%) %&&%5*&/%-%)0@&!-$!-*)-*6!'%++ %&&%&$6&$"$)-%-!*)*5%-&$$ 6("!'%++ %++*'%$"-($)*0$"!)"$7.$)-!%+*&0$&.%++*'%-!*)"'($/$"0!"-&!.-$"-($&$$)*0$"%'& *""0!55$&$)-&$"*.&'$"(!"$)%+$"-($ 6%&%++$+%''$""*5)*0$"*@$#$& -(!""$7.$)-!%+ *&%&&%&$6&$"$)-%-!*)*5-($-&$$!""-!++ %#%!+%+$%--($+*,!'%++$#$+*5!$%-4n+ $#$+

PAGE 31

19 4!,.&$9E &$$5*&-($"!)4!,.&$9E 4!,.&$9F &$$++*'%-!*)*#$&%)0*0$-&.'-.&$ )(! 7 !&:$& # %+$1$"'&!$"-($)*-%-!*)"&$5$&&$0! )5*&/%+!A!),-($6&*+$/%-'(*57.$&!$" ."!),&$$!)0$?"-&.'-.&$ 0$,$)$&%-$"!)-*%''$ ""!),-($-&$$)*0$"@(!'(%&$"6&$%0%'&*"" 0!55$&$)-".-&$$6%-(%)0-($"$)*0$"%&$/%66$0*%6%,$!)6%-!%+0%-%/%)%,$/$)-

PAGE 32

11 ""-$/&$'$!#$"%+%&,$)./$&*5".'(7.$&!$"%)0 -($"$7.$&!$",$)$&%-$%+%&,$)./$&*5 6%,$&$%0%''$""&$7.$"-(!"+%&,$6%,$&$%0&$7. $"-'*.+0'&$%-$%&$"*.&'$'*)-$)-!*)!) %)00$#$+*6%(*-"6*-%/*),-($&$"*.&'$"* -"6*-"(%#$%)$,%-!#$!/6%'-*)-($-*-%+ 7.$&-!/$%"/%)7.$&!$")$$0-*$%)"@$&$0!)" $7.$)'$#*!0!),&$"*.&'$'*)-$)-!*)!" &$%++!/6*&-%)-%)0'%)$%'(!$#$0&$"'($0.+!) ,6%,$&$%0&$7.$"-"-$%'(+$#$+*5-($ -&$$ @$'%)6%'=-($6%,$&$7.$"-"@(!'(0*)*-&$ 7.$"--($"%/$&$"*.&'$-(%-!"&$7.$"-" !)%6%'='%)$%''$""$0!)6%&%++$+"@$/$)-!* )$0!)-($%'=,&*.)0@*&= 5*&%,!#$ '*)5!,.&%-!*) )./$&*56%,$"'%)$*6$&%-$0!)6%&%++$+%"$0 *)-(!"@$5*&/%+!A$*.& *&!,!)%+6&*+$/!)-*6%'=!),*5-($6%,$&$%0&$7. $"-"%-$%'(+$#$+*5-($-&$$ rn !rr r rrnnrrr/#rr rrrnr %rrr rnnnr%nr#r rr r r %+$1"/*+"%)0*-%-!*)" $$& # %!&-$& # 1 %rrnnnrr%n 0nr1 %rrn 0nr1 %rrn 0nr1 %rr !" 0nr1 %rr # "rr r $ "rrrrr %&'#('$ "rrnnrr ) # * %&'#('$ r "rrr / r ' + r /r/' r -&rr1&.

PAGE 33

1C 2"rrnnnr + r r , "rrr

PAGE 34

1E r3 nnrnn) ! '$& # )-(!""$'-!*) @$6&*#!0$0$-%!+"%*.-* .&6&*6*"$0"*+.-!*) %,$$%0$7.$"-%'=!), %+,*&!-(/5*&&$%0&$7.$"-"6%'=!),)%00! -!*)0$-%!+"%*.--($0%-%"-&.'-.&$."$0!) !/6+$/$)-!),-($"*+.-!*)) n/!/&.2 #5&%$!&*+$/0$5!)$0!)6&$#!*."'(%6-$&'%)$ 5*&/$0%"%,&%6(6&*+$/)%"$-./ *56%,$ &$%0&$7.$"-"'*))$'-$0!5-($%&$&$%0!)%) &$"*.&'$ *-($&@!"$0!"'*))$'-$0($ )$-@*&=/*0$+5*&-(!"6&*+$/ 6%&-!-!*)!), r !",!#$),&%6( 012345 @($&$ 3 !"-($"$*56%,$&$7.$"W -./%)0%)$0,$ 674 !"0&%@)$-@$$)-@*&$%0&$7.$"-"@($)-($)$$0 &$"*.&'$"-(%-'%))*-$."$0"!/.+-%)$*."+ !$ -($-@*&$%0&$7.$"-"%&$'*))$'-$0$'%++ -(!",&%6(%"6%,$'*)5+!'-,&%6(*&"!/6+'*)5+!' -,&%6((!"'*)5+!'-,&%6(*56%,$&$%0&$7.$"!"'*/6.-$0%-+$#$+*5-($&$$$)$&%+($.&!"!'"-*"*+#$6&*+$/@!-(,&%6(0%-%"-&.'-.&$ !"-*6$&5*&/$?(%."-!#$"$%&'(*5$)-!&$,&%6(( !"'*.+0%00*#$&($%05*&#$&+%&,$,&%6(" *)"!0$&!),&$"*.&'$%++*'%-!*)'(%&%'-$&!"-!'" +!=$$/%&=9 @$'%)&$0.'$*&$+!/!)%-$ ,&%6(-&%#$&"%+*#$&($%0!)*.&6&*+$/)'*/!), "$'-!*) @$0!"'.""-($/$-(*0"-*&$0.'$ ".'(*#$&($%0) 2,;'$%9*+2 !&$ ($,&%6(%66&*%'(,!#$"-($"$-"*56%,$" #$&-!'$"-(%-%&$'*))$'-$0!$-($'%))*-$ %-'($0-*,$-($&%"-($&$7.$"-5*&-($"%/$&$"*. &'$$)'$0*)*-+$#$&%,$-($6%&%++$+!"/ 5$%-.&$*5%)"!),,&%6("$%&'(($.&!"-!'"@ $'*/6.-$%++"$-"'*))$'-$0 # '*/6*)$)-" %&'#('$(%-!""$-*5"$7.$)-!%+6%,$&$%0&$7.$"-") -(!" %&'#('$"$-@$%66+*.&

PAGE 35

1F 6&*6*"$0%+,*&!-(/9 %,$$%0$7.$"-%'=!), -*0$-$&/!)$-($/!)!/%+)./$&*5 6%'="*56%,$&$%0&$7.$"-"r%'($+$/$)6%,$&$% 0&$7.$"*5-($"$6%'="*&%-'($"%&$ 0!"<*!)-!$ -($0*)*-&$7.$"-5*&-($"%/$&$" *.&'$%)0'%)$*6$&%-$0!)6%&%++$+!) 2 !&$82,;'$%9*+ %&'#('$ , nrnnr • %&'#('$rrrnrn r8 • n #9:%&'#('$ rnr nrnn ;< ;<1<= %&'#('$ > ;
PAGE 36

1: 4!,.&$9: r?%/6+$)( #"&#,! 5 5n-$&&$" 52 !&$ ($&.))!),-!/$*5-($%+,*&!-(/0$6$ )0"*)-($'*)5!,.&%-!*)*5.)0$&+!), !)'$-($ !"#$&"/%++'*/6%&$0-*-($ r !-(%")$,+!,!+$$55$'-*)-($&.)-!/$$+$'-!) , $+$/$)-"5&*/ r@!++$%'*)"-%)--!/$($ %&'#('$ !"$7.%+-*-($)./$&*56+%)$"!) %)0!"%+"*"/%++'*/6%&$0-*-($)./$&*5&$% 0&$7.$"-" r !)'$-($$+$/$)-"*5 r %&$ 6%&-!-!*)$0!)-*%&'#('$ 6%'=" "*&-!),*5 %&'#('$ @!++%+"*$%'*)"-%)--!/$*6$&%-!*) $)'$-($&.))!),-!/$*5@*.+0$ @2r5 .-5*&+%&,$)./$&*56+%)$"!) '*)5!,.&%-!*)5*&+!=$-($*)$/$)-!*)$0!)K1:L ($"*&-!),*5 %&'#('$ $+$/$)-@*.+0(%#$ !/6%'-*)-($*#$&%++&.)-!/$n$$-($)./$&*56+%)$" -($)"*&-!), %&'#('$@*.+0

PAGE 37

1> -%=$ @2A9B5 ($&$5*&$ -($&.))!),-!/$*5@!++$ CD EFGD H8 IJKLMA !$ @2 D rD KLM5 !r %&'#('$ #r//rr r %rrrrr , rrnr r )$"-'%"$"'$)%&!* , 1+NO ".'(-(%1 -!)0!'%-$"-(%-6&*0.'$" %-'($"".'(-(%-$%'(6%'= !) , *5/%?!/./"!A$ !)'$%++6%'="%&$*5*6-!/%+"!A$ -($ "*+.-!*) , 6&*0.'$0!"*6-!/%+ ($)%++ r &$7.$"-"%&$'*))$'-$0!$ %&'#($1 -($(%#$-*$6&*'$""$0!)"$&!%+ (%-!" %++&$%0&$7.$"-,*-*"!),+$&$"*.&'$ )".'("!-.%-!*)-($$"-"*+.-!*)6&*0.'$0 (%"6%'=""!A$*59!$ 1 ".'(-(%, 1r (!"!"%+"*%)*6-!/%+"*+.-!*) n$-R"%""./$-(%, !")*-*6-!/%+)".'('%"$-($&$$?!"-%-+$%"-@*6%'=" P %)0 ".'(-(%7P Q7 %)0 ? ($&$%&$-@*'%"$"!)@(!'( P %)0 @*.+0$,$)$&%-$0 9 r%'( %)0 Q $+*),-*"%/$'*))$'-$0'*/6*)$)R S%&'#($ (%/$%)"$%'( %)0 Q%&$!)-($"%/$6%'=%)0-($"$&$7.$"-"(%#$-*$ 6&*'$""$0 "$6%&%-$+$)'$6.--($/!)-@*0!55$&$)-6% '="6&*0.'!),*6-!/%+%-'($" 1 r%'( $+*),"-*'*))$'-$0'*/6*)$)R%)0 Q $+*),"-*'*))$'-$0'*/6*)$)@($&$ S?TTD EFGD (%-!" %-+$%"-*)$&$7.$"-!) $+*),"-*%'*))$'-$0 '*/6*)$)--(%-!")*-%-'($0!)&$7.$"-"*5 P (!"!)0!'%-$"-(%-0!0)*-'*/!)$ &$7.$"-"*50!55$&$)-'*))$'-$0'*/6*)$)-"-*,$-($& !)"%/$6%'=(%-@*.+0(%66$) *)+@($) R? !$ /*&$&$7.$"-"%&$6&$"$)-!) -(%) R(!"@*.+0/$%) -(%--($"$-*5'*))$'-$0'*/6*)$)-"!")*-"*&-$0 (!'(!")*--($'%"$ %""*&-$0 %++'*/6*)$)-"$5*&$(%)0($&$5*&!-!"'+$%&-( %W R (%-!" *-('*/6*)$)-"

PAGE 38

13 %&$"%/$"!A$%)0-(%-!"@(,$)$&%-$0-@*6% '="$)'$@$6&*#$-(%--($ "*+.-!*)6&*0.'$0!"*6-!/%+ ($"'($0.+!),6&*+$/!"'*/6+$-$%)00! 55!'.+--**-%!)*6-!/%+"*+.-!*)*@$#$& ($.&!"-!'"+!=$%#$4&*)-$-(*04 &!-!'%+ %-($&,$*&+*),$"-6%-( $%#r0,$ $&,$r%)0*/!)%)-$7.$)'$+."-$&!), KC1L'%)*-%!)".*6-!/%+"*+.-!*)" "'($0.+$"!)6*+)*/!%+-!/$%&%/$-$&"+!=$%" =6&!*&!$?$'.-!*)-!/$ 6&$'$0$)'$ 0$6$)0$)'!$"/%=$"'($0.+!),%'*/6+$?-%"=.&&$ %0&$7.$"-6&*+$/%+"*+**=""!/!+%&-* "'($0.+!),6&*+$/*@$#$& -($,&%6("-&.'-.&$%) 06%&%/$-$&.!+0!),-(!",&%6("!/6+!5*.& 6&*+$/($'*)5+!'-,&%6(0$&!#$0!"%+@%"0!"'* ))$'-$0 $'%."$$%'(&$%0&$7.$"-!"0!"'&$-$ %)0'%))*-%''$""/*&$-(%)*)$&$"*.&'$%-%-!/$ !)'$$%'(&$7.$"-'%)%''$""*)+*)$ &$"*.&'$ "$-*5&$%0&$7.$"-"%''$""!),-($"%/$& $"*.&'$%&$'*))$'-$0%)05*&/%'*/6+$-$ ,&%6( !$$%'( # !"%'*/6+$-$,&%6(%)0 %&'#('$!""$-*5'*/6+$-$,&%6("$)'$ -($ '*)5+!'-,&%6(!"%0!"'*))$'-$0,&%6( #W U#"#/%0$*5"$-*5'*/6+$-$,&%6( %&'#('$ +"*!-!"!/6*&-%)--*)*-$-(%--($6%&%/$-$&0$& !#!),-($'*)5+!'-,&%6(%&$5!)!-$n!=$)./$& *56+%)$"*&0!$"!)%'*)5!,.&%-!*)0$'!0$X %&'#('$X!$)./$&*5'*/6+$-$,&%6(" X U#&r8XW)./$&*56%,$"!)6+%)$*&0!$ )*.&&$%0&$7.$"-6%'=!),6&*+$/ $%'(%"="(%#$"%/$$?$'.-!*)-!/$ 6&!*&!)* '*//.)!'%-!*)%)0)*6&$'$0$)'$0$6$)0$)'!$"!)'$ "$-*5&$%0&$7.$"-"&$7.!&$"%/$ &$"*.&'$-($@!++$"$&!%+!A$0!-(-(!"&$"*.&' $"(%&!), @$%""./$-(%--($&$!"0$6$)0$)' 4*&$?%/6+$ 5*&,!#$)"$-*5&$%0&$7.$"-"-($0!& $'-$0%''+!',&%6(!""(*@)!)5!,.&$ .&,*%+!"6%'=-($"$-*5-%"="-(%-'%)$%''$" "$0!)6%&%++$+!,(-"!0$*5-($!/%,$"(*@" -($%'-.%+&$7.!&$05*&/*5

PAGE 39

1D -./1 V WXYWWWWZZ[W\W]W^X_X^ Y\XX[_[\WY[^ ` 1[ r%'(6%-(*5-($,&%6(&$6&$"$)-"-($"$-* 5'*))$'-$0'*/6*)$)-" #)%'-.%+ -($"$ '*))$'-$0'*/6*)$)-"5*&/%'*/6+$-$,&%6(%)0!)% '*/6+$-$,&%6(%++-($#$&-!'$"(%#$"%/$ 0$,&$$)%'*/6+$-$,&%6(#!"!-!),%))*0$5!&",!#$"."/%?!/./+$),-(*5-($$%'(6%-(!) (!"/%?!/./+$),-(!")*-(!),.--($'%&0!)% +!# ) @$'*)"!0$&-(!"'%&0!)%+!-*6!'=-($&$%0&$7.$"--*6%'=!)'$$%'(6%-(* 5-($!"'*/6+$-$,&%6( <."-*)$#!"!-*5 $%'(6%-(@*.+0,!#$."-($+$),-(*5-($6%-(%)0 !--($*#$&($%0*55.++6%-(-&%#$&"%++"* -($&$!")*0$6$)0$)'$-@$$)-($-%"="%'&*""-($ 0!55$&$)-6%-("*5(!"5.&-($& "!/6+!5!$"*.&6&*+$/*5"'($0.+!),)) 8 #5&%$!-!&/$& #r8/ $-*56%,$"!)%6+%)$%&$5!?$05*&%)" -%-!'%++*'%-!*)"'($/$ 5*&%)*)5!,.&%-!*) $/%&=9+-$&)%-!#$+ "$-*56%,$&$%0&$7.$""%&$'*))$'-$0!5-($&$5$&-($"%/$6+%)$ *0$*5*.&-&$$%++*'%-!*)&$5$&"-($%,$ *&-($!)0$?*54n"!),-(!"*0$@$

PAGE 40

C2 0$-$&/!)$'*))$'-$0&$%0&$7.$"-"!$@$'%)0$-$& /!)$@($-($&&$7.$"-*0$"&$5$&-($ 6%,$"!)-($"%/$6+%)$!$'*))$'-$0*&)*-r7.% -!*)90$-$&/!)$"@($-($&-@*6%,$"%&$ '*))$'-$0*&)*-$&$6$%--(!"6&*'$""5*&%++-( $!)'*/!),&$%0&$7.$"-" r81)a9:9)Qa)Q9T+9)ab)+B)bac)bT) +A) r81 V +7rD d +e+fg h1_>ij 25 $-*56%,$"!)%6+%)$/%=$"'*))$'-$0'*/6*)$)-" ($)'$5*&%'*)5!,.&%-!*)")./$&*5 "$-"*5'*))$'-$0'*/6*)$)-"@!++$$7.%+-*-($) ./$&*56+%)$")-($5*++*@!),$?%/6+$ $%'('*))$'-$0"$-'*&&$"6*)0"-*%6+%)$!)-($ 4*&$?%/6+$ 1] 1W 1 1 !"1[ 1[r1 k WXYWWWWZZ[W\W]W^X_X^Y\XX[_[\ WY[^ j -($) %&'#('$1 SE 91 13TS9C 19 1DTS9F CDTS: 11 C2 9ETS1:TS1>T SE2TS9> 1F ED F> CCT (!"%66&*%'(!")*-$55!'!$)-%"!-"$%&'($"-($$ )-!&$,&%6(-*5!)0-($'*))$'-$0'*/6*)$)-" "!),-($'*)5!,.&%-!*)'(%&%'-$&!"-!'" -($). /$&*5'*))$'-$0'*/6*)$)-"!"$7.%+-* )./$&*56+%)$" %)00!"-%)'$$-@$$)-@*%,$"!)6+%)$!"/.+ -!6+$*5 $/%&= 9 @$'%)$+!/!)%-$-($$?(%."-!#$"$%&'()"-$%0 *5'($'=!),%++$+$/$)-"*5 r!-$&%-!#$+-* 5!)0-($'*))$'-$0'*/6*)$)-" @$."$%(%"(5.)'-! *)*&/*0.+*/$-(*04*++*@!),+,*&!-(/ 9 n 5!)0"%++-($'*))$'-$0'*/6*)$)-"."!),/*0.+*/$ -(*0 2 !&$8# ##%$, r nr %&'#('$ )a9:AA9)Qa)9T+9)ab)ab)#*%&'#('$ $$%nr #%n&'()r* + "# nr + r ,#-r./ +T9 0#)r*-r.+#12n + 3# %&'#('$/)r

PAGE 41

C9 "3 4 %-n!)$9!/6+$/$)-"%/%6*&(%"(-%+$-*"-*&$" '*))$'-$0'*/6*)$)-" 5*&$%'(6+%)$n!)$C'*/6.-$"-($(%"(-%+$=$ ."!),-($/*0.+*-*6%,$&$%0&$7.$"-@!-( )./$&*56+%)$" 4*&$?%/6+$ +W13-($)13B3WE -(!"/$%)"136%,$&$%0&$7 .$"'*&&$"6*)0"-*6+%)$34!,.&$9>"(*@"%)$?%/6+$ *56+%)$+$#$+'*)5+!'-,&%6(0$&!#%-!*) 4!,.&$9> %"$9*)5+!'-&%6($&!#%-!*))0 82 #5&%$!-!&/$& #r8/ +%)$n$#$+"'($0.+!),/$-(*00*$")*-'*)" !0$&-($.)0$&+!),%++*'%-!*)"'($/$%!#$+ 0*!),-($+%)$n$#$+"'($0.+$@*.+0!)'&$%"$-($6 $&5*&/%)'$%"@$$+!/!)%-$-($"$7.$)-!%+ %''$""$"!)%6+%)$.--@*6+%)$"!)%0!$'%))**6$&%-$"!/.+-%)$*."+(%-!" !5-@*&$7.$"&$5$&-*-($"%/$0!$-($)-($(%#$-*$"$&!%+!A $0* -($%+,*&!-(/6%'="@*.+0'*)-%!) &$7.$"-"-(%-)$$0-*$%)"@$&$0"$7.$)-!%++!)% 0!$(!"@*.+0!/6%'--($*#$&%++ 6$&5*&/%)'$*5-($""-$/*@$)$$0-*(%#$%/$(*0-*%#*!0-(!""$&!%+%''$""@!-(!)%0!$ $=)*@-(%--($"$-*56%,$"!)%!$%&$ 5!?$0.-(!"0$6$)0"*)-($%++*'%-!*)"'($/$ 5*&%'*)5!,.&%-!*)$/%=1,&*.6*5%++*' %-!*)"'($/$"$?(!!-"!/!+%&6%,$%++*'%-!*) 6%--$&)!)%!$$/%&=19%"$0*)-($"$*"$& #%-!*)"@$$?-$)0%"$-*5!)0-($'*))$'-$0

PAGE 42

C1 '*/6*)$)-"%-0!$+$#$+%,$"&$7.$"-"%&$'*))$'$0!5-($%&$!)-($"%/$0!$ *-($&@!"$ 0!"'*))$'-$0"-($6%--$&)'(%),$"%'&*""-($,&* .6*5%++*'%-!*)"'($/$" @$'%/$@!-( 0!55$&$)-/$-(*0"5*&$%'(,&*.6-*5!)0'*))$'-$0 '*/6*)$)-")*.&/$-(*0 5*&$%'( %++*'%-!*),&*.6 @$5!&"-5!)0-($"-%&-!),6%,$ *& 5#r 5*&$%'(0!$$%0%,$*5$%'(0!$ @!++$-($"/%++$"-6%,$!)-(%-0!$4*&$?%/6+ $ !)4!,.&$93%++*'%-!*)"'($/$ -($ $%0%,$5*&$%'(0!$%&$/%&=$0!)&$0%)05*&!$ 2*5(%))$+9N%'=%,$9!"9!-( $%0%,$@$'%)&$+%-$!--**-($&6+%)$!)-($0!$ 4!&"-,$--($6+%)$+$#$+'*))$'-$0 '*/6*)$)-"%66+!),+,*&!-(/1%)0%66++,*&! -/C-*5!)0-($&$+%-$06+%)$"!)0!$ +,*&!-(/C $*))$'-$09'*/6.-$"%++ '*))$'-$0'*/6*)$)-"!)%0!$5*&%,!#$) %++*'%-!*)"'($/$ %-+!)$9"-*&$"%++6*""!+$$%0%,$"5*&%)%++* '%-!*) "'($/$ %)0 %-+!)$18C"-*&$"-($'*))$'-$0'*/6*)$)-5*& 6+%)$%)00!$+$#$+&$"6$'-!#$+-+!)$E5!&"-' */6.-$%++-($'*))$'-$0'*/6*)$)-"%-6+%)$ +$#$+-n!)$F D 9: 1E C9%)0C>%+,*&!-(/'*/6. -$"-($'*))$'-$0'*/6*)$)-"5*&%++*'%-!*) ,&*.6"%+$95&*/9:&$"6$'-!#$+ 4!,.&$93 $%,%,$5*&%++*'%-!*)"'($/$ r?'$6--($,&*.69 *-($&,&*.6"&$7.!&$-($+%)$ +$#$+'*))$'-$0'*/6*)$)-"$@(!'(!" '*/6.-$0%-+!)$E%66+!),-($%+,*&!-(/9 n

PAGE 43

CC 2 !&$8(& ##%$,83 rnnr %&'#('$ #'&-(4nrn* !" + "#%n&-'()*lmn+ ,#%n&-'()r*lop4+ 0#)/ r r 3# 5# nr + r 6#-r./ +rlop4 7#)r*-r.+#12n + 8# 9# n / lqr # n / lms0 lmn "#4nrn#12n 0# nr 4nrn 3#)r*-+/:)*-+) *-;lqr + 5# 6# n / lop4lms05 t 7# n / lmnulqr5 lmn 8#4nrn#12n"9#4nrn#12n;lop4ulqr " # nr 4nrn ""#-/lmn$lop4 ",#)r*-+/:)*-+) *-;-+ "0# "3 n / lop4lqr5 "5# n / lmnulqr5 lmn "6#4nrn#12n"7# nr 4nrn "8#-/lmn$lms0 ,9#)r*-+/:)*-+) *-;-+ , # ,"# n / lms0 ,,#4nrn#12n,0#4nrn#12n;lop4 ,3# nr 4nrn ,5#)r*-+/:)*-+ )*-; + ,6# ,7# n / lmn ,8#4nrn#12n09# nr 4nrn 0 #)r*-+/:)*-+) *-; + 0"# %&'#('$/)r

PAGE 44

CE !$n$#$+'*))$'-$0'*/6*)$)-"5*&&*.69 +!)$F-*D%&$'*/6.-$0."!),/*0.+* /$-(*0"%/$%"-($+%)$n$#$+(!"$'%."$-($0! 55$&$)'$$-@$$)-($-@*%,$!" !" &*.61-*:'*/6.-$-($!$n$#$+'*))$'-$0'*/6*) $)-"<*!)!),&$"6$'-!#$+%)$n$#$+ '*))$'-$0'*/6*)$)-"$-r%'(,&*.6 5&*/1-*: 5 !&"-0$-$&/!)$"-($$%0%,$5*&$%'(0!$ n!)$"9291 9312 1F1> C1CE%)0C3CD0$-$&/!) $"-($$%0%,$5*&,&*.6"1:&$"6$'-!#$+ r?%/6+$ 5*&%++*'%-!*)"'($/$ r 5&*/+!)$1F-*1> r "(*@)!)4!,.&$9D $%0%,$"5*& $%'(0!$%&$9 1 C E D 92 99%)0914*&5!&"-!$&%-!*)" %+,*&!-(/'*/6.-$"-($$%0%,$*50!$ 2*56%'=%,$2*5$%'('(%))$+)"$'*)0!-$&%-!*) $%0%,$*50!$9*56%'=%,$9*5$%'( '(%))$+($ +!"-(%"-($6+%)$)./$&-(%-@!++$."$0-*<*!) -@*'*&&$"6*)0!), 6+%)$"!)%0!$-+!)$"9F 1C C2 C:%)0E9<*! )-($+%)$n$#$+'*))$'-$0'*/6*)$)-"5*& ,&*.6"1:&$"6$'-!#$+4*& r ,&*.6%++*'%-!*)"'($/$0!55$&$)'$$-@$$)6+%)$"@ !-(!)% 0!$!"lmn$lms0"!),-(!"'&!-$&!*)@$<*!)-($+%)$n$#$+'*))$' -$0'*/6*)$)-"

PAGE 45

CF r3 rL($%&$ 9 '+$%''.&%-$"!/.+%-*& r?%/6+$"4+%"(!/%)0 1 $&5*&/%)'$/*0$+"!/.+%-*& r?%/6+$"!"="!/C2Yr?-$)"!*) 5+%"("!/ r%,+$ &$$%)0!/ C $(%#!*&/*0$+"!/.+%-*& r?%/6+$"*6$))5/ )6%&-!'.+%&%)0$"-".!-%+$5*&*.&&$" $%&'(!""!/K1EL!/!"%)*6$)"*.&'$ (%&0@%&$#%+!0%-$0"!/.+%-*&-".66*&-"0!55$&$)4n"'($/$" %++*'%-!*)"'($/$" .55$& /%)%,$/$)-%+,*&!-(/"%)0"'($0.+!),%+,*&!-(/". &&$)-!/6+$/$)-%-!*)".66*&-"*)+: %++*'%-!*)"'($/$"@!-((%))$+6&!*&!-$%00$0 &$/%!)!),93%++*'%-!*)"'($/$"!/ -%=$"-&%'$"5!+$"%"!)6.-5*&"!/.+%-!),-($ 6$&5*&/%)'$*#$&%) *".66*&-%)0"!/.+%-$-($&$$*#$&% @$!/6+$/$)-$0%/*0.+$O!/.+%-*&P !/.+%-*&"!/.+%-$"-($-&$$&$%0N"$%&'(6%-$&)!)/$/*&+%*.-%)0,$)$&%-$"-($ -&%'$"&$7.!&$05*&!/-%+"*!/6+$/$)-"-($ 1E0!55$&$)-@%"*5%++*'%-!),&$$

PAGE 46

C: "'($0.+$&!"!/6+$/$)-$0%"6%&-*5-($!/.+%*&-&%'$,$)$&%-*& 6%&-*5!/.+%-*& @&!-$"-($5+%"(/$/*&-&$$-&%#$&"%+6%--$&)%" -&%'$5!+$4*++*@!),4!,.&$9D0$6!'-"0!55$&$)'*/6*)$)-"5*&!/.+%-*& 4!,.&$9D !/.+%-*&*/6*)$)-" "!/.+%-*&%&%/$-$&"+!"-$0!)%+$C @$&$."$0!)*.&$?6$&!/$)-"!/.+%-!*)"@$&$ '%&&!$0*.-*)%(*"-""-$/@!-(C1'*&$)-$+H$* ) 913%)09!"=6%'$ %+$Cr?6$&!/$)-%+*)5!,.&%-!*)%&%/$-$&" !$! 3' ./$&*5 ' (%))$+" 3 . /$&*5 6 %'=%,$" N' (!6" 6$&' (%))$+ 1 ./$&*50!$"6$&'(!6N6%'=%,$ 1 ./$&*56+%)$"6$&0!$ 1 ./$&*5 +*'="6$&6+%)$ 12E3 ./$&*56%,$"6$&+*'= :E %,$" !A$ E %,$&$%0!/$ FFZ" $7.$"-7 .$. $ 0 $6-( C1 4n" '($/$ %,$ %66!), 1E++*'%-!*)'($/$"


Datasets and Workloads

We used both synthetic and real-world workloads in our experiments. Table 4 gives the details of the synthetic and real-world workloads. The R-Tree for each workload is generated offline and loaded into RSimulator based on the allocation scheme (a toy sketch of query generation follows Table 5).

Table 4. Experimental Workloads
Workload | Number of data points
Synthetic 1 | 100,000
Synthetic 2 | 500,000
Synthetic 3 | 1,000,000
Synthetic 4 | 10,000,000
Real world (France) | 1,048,576

In our experiments we measured the total response time and the request distribution across the SSD. We compared the total response time of the two packing methods with the naive (no packing) method, which we used as a baseline for our analysis. We measure and analyze the performance of our scheduling algorithm across all 24 allocation schemes. Table 5 gives details about the query workloads we used in our experiments.

Table 5. Workload Details
No. of queries | Query type | Data distribution | Query distribution | Query size
400 | Range | Uniform & Normal | Random | Variable query size
200 | Range | Uniform & Normal | Uniform, Normal & Random | Fixed query size
25-400 | Range | Uniform | Random | Fixed query size
5000 | Point | Uniform | Random | -
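As an illustration of how such query workloads can be generated offline (the actual generator inside RSimulator is not shown in the text; the extent and size parameters below are arbitrary):

import random

def make_range_queries(n, distribution, extent=10_000.0, size=50.0):
    """Return n axis-aligned range queries as (x_lo, y_lo, x_hi, y_hi) tuples
    whose centers follow the requested distribution ("uniform" or "normal")."""
    queries = []
    for _ in range(n):
        if distribution == "uniform":
            cx, cy = random.uniform(0, extent), random.uniform(0, extent)
        elif distribution == "normal":
            cx, cy = random.gauss(extent / 2, extent / 10), random.gauss(extent / 2, extent / 10)
        else:
            raise ValueError("unsupported distribution: " + distribution)
        queries.append((cx - size / 2, cy - size / 2, cx + size / 2, cy + size / 2))
    return queries

# Example: 200 fixed-size range queries with normally distributed centers, as in Table 5.
queries = make_range_queries(200, "normal")
print(len(queries), queries[0])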


RESULTS AND DISCUSSION

In this section we discuss, analyze and reason about the performance of the proposed packing algorithms.

Performance of Plane Level and Die Level Scheduling

Figure 20 shows the performance comparison between naive scheduling, Plane Level and Die Level scheduling for range queries over uniformly distributed data. About 400 range queries with variable range sizes were executed. On average, we achieved 67% and 78% performance gain for Plane Level and Die Level scheduling, respectively, compared to naive packing. With the uniform distribution, the schemes that gain the most reach 82% and 92% for Plane Level and Die Level, respectively; these allocation schemes execute more interleave operations under naive scheduling, and Plane Level and Die Level scheduling reduce these interleave operations, which improves the overall performance for those allocation schemes. Similarly, Figure 21 shows the performance for the same number of range queries over normally distributed data. Here we achieved 76% and 81% performance gain for Plane Level and Die Level packing, respectively, compared to naive packing, with the best schemes reaching 87% for plane level and 89% for die level. All 6 allocation schemes with plane priority perform better. Data with a uniform distribution show less improvement compared to data with a normal distribution: for a uniform distribution of data, fewer R-Tree nodes overlap than for a normal distribution. Fewer overlaps mean fewer page accesses, and the probability of page accesses falling on the same die or package is minimized. With a normal distribution of data, more nodes overlap; due to more overlap, queries trigger more page read accesses, and more page read accesses trigger more interleave operations. Our scheduling approach reduces these interleave operations, and this reduction of interleave operations is greater for data with a normal distribution.


Also, for plane priority allocation schemes, the number of interleave operations is higher regardless of the data distribution. Plane priority allocates the R-Tree pages into closer resources, i.e., the same die and package, and this triggers more interleave operations.

Figure 20. Range query performance with scheduling for uniformly distributed data (response time in seconds per allocation scheme; No Scheduling, Plane Level, Die Level; 1 million data points)

Figures 22 and 23 give the reasoning details about why there is a performance gain. The plots show a comparison of the number of interleaved operations executed in each allocation scheme for a given data distribution. The statistics in the tables below the plots show that the reduction in the number of interleaved operations enhances performance; it is this reduction that gives the performance boost. Our scheduling algorithm reorders the page read requests at application level to distribute them across the SSD, but the request queue within the SSD re-orders their priorities: it orders requests with channel priority first and way priority second. For example, if eight requests in a batch go into two channels, four in each, they are scheduled channel-first, i.e., the first two requests are distributed across the channels. The controller picks each channel's first request and assigns it to a channel in round robin fashion (a toy sketch of this issue order follows); by the time it picks the second request for a particular channel in the round robin, the command line is free, and hence there is no waiting for the controller.
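The channel-first, round-robin issue order described above can be sketched as a toy model (this is not SSDSim's actual queue logic; channel_of is a hypothetical mapping from a page to its channel):

from collections import defaultdict, deque

def issue_order(requests, channel_of):
    """Return the order in which page requests are issued when the controller
    buckets them per channel and then serves the channels round robin."""
    buckets = defaultdict(deque)
    for r in requests:
        buckets[channel_of(r)].append(r)
    order = []
    channels = sorted(buckets)
    while any(buckets[c] for c in channels):
        for c in channels:
            if buckets[c]:
                order.append(buckets[c].popleft())
    return order

# Eight requests over two channels, four per channel (channel = page % 2):
print(issue_order([0, 1, 2, 3, 4, 5, 6, 7], lambda page: page % 2))
# issued as ch0, ch1, ch0, ch1, ... so no request waits on a busy command bus.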


But with way-first priority, a request would need to wait until the interleave operation is complete. This improves the performance through a reduction of the number of interleave operations.

Figure 21. Range query performance with scheduling for normally distributed data (response time in seconds per allocation scheme; No Scheduling, Plane Level, Die Level)

Figure 22. No. of interleave operations for uniform distribution (interleave operations per channel; No Scheduling, Plane Level, Die Level)


Figure 23. No. of interleave operations for normal distribution (interleave operations per channel; No Scheduling, Plane Level, Die Level)

Experiments were also performed with point queries on both uniform and normal distributions of data. We observed similar performance behavior as before in this set of experiments. Here too, the uniform distribution performed better than the normal distribution of data in the case of naive packing, but with plane and die level packing there was not much impact of the data distribution on the performance. From this we can conclude that packing the read requests, which leverages the parallelism feature of the SSD, will improve the performance regardless of the workload characteristics. Figures 24 and 25 show the point query performance.


Figure 24. Point query performance, uniform data distribution (5,000 point queries; response time in seconds per allocation scheme)

Figure 25. Point query performance, normal data distribution (5,000 point queries; response time in seconds per allocation scheme)

Request Distribution and Hotspots

In our experiments, we tuned queries to create resource contention, or hotspots, and verified our hypothesis that our scheduling algorithms distribute the requests across channels, dies and planes. Figure 26 shows the distribution of requests across the SSD. Channel 1 is where the hotspot occurs. For the naive approach the average number of requests served by channel 1 is about 70, but with the scheduling approaches we were able to reduce this to 35, that is, about a 50% reduction.


With the request distribution across the SSD, we were able to diminish the size of the hotspot. This gave us the opportunity to improve the overall performance of the system.

Figure 26. Read distribution across SSD (requests per channel; hotspot size reduction; No Scheduling, Plane Level, Die Level)

The number of input queries impacts the performance of the system: as the number of queries increases, the number of page access requests increases. Figure 27 plots how the response time changes when the number of queries increases. The naive way of query answering takes more time when the number of queries increases, but the time to get the query results for the two packing methods remains approximately the same. This clearly indicates that our scheduling methods distribute the requests across resources and reduce the occurrence of hotspots.


Figure 27. Impact of the number of queries, 25 to 400 (response time vs. number of queries; Naive, Plane, Die)

The underlying distribution of the queries has some impact on the performance. In order to verify this assumption, we executed about 200 range queries of the same size with uniform, normal and random distributions. The underlying R-Tree is constructed with a uniform data distribution.

Figure 28. Performance with query distribution: Uniform vs. Normal vs. Random (response time in seconds; Naive, Plane, Die)

As expected, the normal distribution performs better compared to the naive case. One observation that can be made about the performance gain at die level packing is that the performance for all the query distributions is approximately the same.


This shows that our packing method distributes the requests across the SSD such that, for any given query distribution, the response time is minimized to its lowest value.

Figure 29. No. of interleave operations with query distribution: Uniform vs. Normal vs. Random (interleave operations; Naive, Plane, Die)

Plane Level Packing vs. Other Levels

Case 1, i.e., Plane Level packing, tries to separate sequential accesses of pages within a plane and to distribute the reads across the other planes of the SSD. The number of planes in an SSD configuration is constant, so we use the modulo method to segregate the requests going into the same plane. Since other resources like the channel, package and die are constant as well, the question here is: why can't we apply the same modulo rule to the other levels of resources? Figures 30 and 31 answer this question: the performance of the plane level modulo method is consistent and better than naive packing across all 24 allocation schemes, whereas the modulo method does not perform consistently for other types of resources.


Figure 30. Plane scheduling vs. other scheduling (response time in seconds per allocation scheme; Naive, Channel, Package, Die, Plane)

Reasoning: applying modulo for the other resources does not guarantee the elimination of hotspots. For example, if a read request consists of nodes {1, 33, 17, 65} and the modulo method is applied at the channel level, then all requests go into one connected components set and hence create a hotspot. If plane level packing is applied, {1, 65}, {33} and {17} would be the connected components sets (the short sketch at the end of this subsection reproduces this grouping). From this we can eliminate the hotspot by reading the requests 1, 33 and 17 in parallel.

Figure 31. Plane scheduling vs. other scheduling (response time in seconds per allocation scheme; Naive, Channel, Package, Die, Plane)

The goal of packing the read requests is to improve the overall performance by distributing the requests across the SSD to reduce the number of interleaving operations. More interleave operations mean hotspots, and our algorithm was able to reduce these hotspots significantly.
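The grouping in the example above can be reproduced with a simple modulo rule; the stride of 64 pages between pages of the same plane is an assumed value chosen so the sketch matches the example, not a figure stated in the text:

from collections import defaultdict

def plane_components(pages, stride=64):
    """Treat pages whose indices are congruent modulo the assumed per-plane stride
    as one connected component (they hit the same plane and cannot be read in parallel)."""
    groups = defaultdict(list)
    for p in pages:
        groups[p % stride].append(p)
    return list(groups.values())

print(plane_components([1, 33, 17, 65]))
# [[1, 65], [33], [17]]: requests 1, 33 and 17 can be read in parallel, while
# 1 and 65 are serviced one after the other within their plane.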


With our experiments, we demonstrate that the proposed algorithm does improve the performance significantly. The conflict graph derivation methods for Plane Level and Die Level support the method in distributing the requests across the SSD and hence reduce the occurrence of hotspots in the SSD. In general, the distribution of the data affects the performance of the system, but with our approach we were able to reduce that dependence of the performance on the data distribution.


CONCLUSION AND FUTURE WORK

In this work we have studied the effect of R-Tree batching over Solid State Drives on the performance of spatial queries. We proposed a new structure of R-Tree nodes for SSDs and, based on that, developed a general Plane Level and a Die Level allocation-specific packing method. Both methods avoid the resource contention that occurs due to the long data movement time caused by die interleaving; by distributing the requests across the SSD, this resource contention is addressed. As Plane Level packing does not require knowledge of the underlying allocation scheme, we believe this method can be applied to any available SSD. Our packing algorithm has the opportunity to be part of database systems, operating systems or the SSD itself. Also, with our approach of R-Tree node representation, we do not need to consider the spatial characteristics of the incoming requests in a batch.

As part of future work, we plan to bring batching and scheduling closer to the database system, that is, supporting query arrival and priority order in the algorithm. Further, we plan to evaluate the batching mechanism at the package and channel level to bring more granularity into resource contention avoidance. As the Plane Level scheduling does not require an understanding of the SSD allocation, we plan to apply it on a real commercial SSD. Additionally, we plan to build an adaptive packing and scheduling design, i.e., based on the workload characteristics, select the allocation scheme and resource priority for batching. Further, we plan to implement spatial joins and multiple-index-structure (two or more tables) allocation.


REFERENCES
[1] F. Chen, D. Koufaty and X. Zhang, "Understanding intrinsic characteristics and system implications of flash memory based solid state drives," SIGMETRICS Perform. Eval. Rev., vol. 37, pp. 181-192, 2009.
[2] F. Chen, R. Lee and X. Zhang, "Essential roles of exploiting internal parallelism of flash memory based solid state drives in high-speed data processing," in Proceedings of the 2011 IEEE 17th International Symposium on High Performance Computer Architecture, 2011.
[3] Zertal, "Exploiting the fine grain SSD internal parallelism for OLTP and scientific workloads."
[4] C.-H. Wu, L.-P. Chang and T.-W. Kuo, "An efficient R-tree implementation over flash-memory storage systems," in Proceedings of the 11th ACM International Symposium on Advances in Geographic Information Systems, New Orleans, Louisiana, 2003.
[5] I. Koltsidas and S. Viglas, "Spatial data management over flash memory," in Advances in Spatial and Temporal Databases, vol. 6849, D. Pfoser, Y. Tao, K. Mouratidis, M. Nascimento, M. Mokbel, S. Shekhar and Y. Huang, Eds. Berlin, Heidelberg: Springer, 2011, pp. 449-453.
[6] Y. Lv, J. Li, B. Cui and X. Chen, "Log-compact R-tree: an efficient spatial index for SSD," in Database Systems for Advanced Applications, vol. 6637, J. Xu, G. Yu, S. Zhou and R. Unland, Eds. Berlin, Heidelberg: Springer, 2011, pp. 202-213.
[7] T. Emrich, F. Graf, H.-P. Kriegel, M. Schubert and M. Thoma, "On the impact of flash SSDs on spatial indexing," in Proceedings of the Sixth International Workshop on Data Management on New Hardware, Indianapolis, Indiana, 2010.


[8] D. Tsirogiannis, S. Harizopoulos, M. A. Shah, J. L. Wiener and G. Graefe, "Query processing techniques for solid state drives," in Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data, Providence, Rhode Island, 2009.
[9] P. Ghodsnia, I. T. Bowman and A. Nica, "Parallel I/O aware query optimization," in Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, Snowbird, Utah, 2014.
[10] I. Kamel and C. Faloutsos, "Parallel R-trees," SIGMOD Rec., vol. 21, pp. 195-204, 1992.
[11] D. Achakeev, M. Seidemann, M. Schmidt and B. Seeger, "Sort-based parallel loading of R-trees," in Proceedings of the 1st ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, Redondo Beach, California, 2012.
[12] S. You, J. Zhang and L. Gruenwald, "Parallel spatial query processing on GPUs using R-trees," in Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, Orlando, Florida, 2013.
[13] J. I. Munro, T. Papadakis and R. Sedgewick, "Deterministic skip lists," in Proceedings of the Third Annual ACM-SIAM Symposium on Discrete Algorithms, Orlando, Florida, 1992.
[14] W. Pugh, "Skip lists: a probabilistic alternative to balanced trees," Commun. ACM, vol. 33, pp. 668-676, 1990.
[15] W. Aref and H. Samet, "Efficient window block retrieval in quadtree-based spatial databases," GeoInformatica, vol. 1, pp. 59-91, 1997.


[16] J. L. Bentley, "Multidimensional binary search trees used for associative searching," Commun. ACM, vol. 18, pp. 509-517, 1975.
[17] A. Guttman, "R-trees: a dynamic index structure for spatial searching," in Proceedings of the 1984 ACM SIGMOD International Conference on Management of Data, Boston, Massachusetts, 1984.
[18] D. Eppstein, M. T. Goodrich and J. Z. Sun, "The skip quadtree: a simple dynamic data structure for multidimensional data," in Proceedings of the Twenty-First Annual Symposium on Computational Geometry, Pisa, Italy, 2005.
[19] L. Arge, D. Eppstein and M. T. Goodrich, "Skip-webs: efficient distributed data structures for multi-dimensional data sets," in Proceedings of the Twenty-Fourth Annual ACM Symposium on Principles of Distributed Computing, Las Vegas, Nevada, 2005.
[20] "The truth about SSDs: what HDD vendors do not want you to know," June 30, 2015. http://itblog.sandisk.com/truth-ssds-hdd-vendors-do-not-want-you-to-know/
[21] A. Anand, A. Gember-Jacobson, C. Engstrom and A. Akella, "Design patterns for tunable and efficient SSD-based indexes," in ANCS '14, 2014.
[22] N. Agrawal, V. Prabhakaran, T. Wobber, J. Davis, M. Manasse and R. Panigrahy, "Design tradeoffs for SSD performance," in USENIX Annual Technical Conference, 2008.
[23] F. Chen, R. Lee and X. Zhang, "Essential roles of exploiting internal parallelism of flash memory based solid state drives in high-speed data processing," in HPCA '11, 2011.
[24] Y. Hu, H. Jiang, D. Feng, L. Tian, H. Luo and S. Zhang, "Performance impact and interplay of SSD parallelism through advanced commands, allocation strategy and data granularity," in ICS '11, 2011.


[25] M. Jung and M. Kandemir, "An evaluation of different page allocation strategies on high-speed SSDs," in HotStorage '12, 2012.
[26] "Samsung unveils 15TB SSD based on densest flash memory," Computerworld. http://www.computerworld.com/article/2971482/cloud-security/samsung-unveils-15-ssd-based-on-densest-flash-memory.html
[27] J. Ma, "3 types of SSD simulators." https://www.linkedin.com/pulse/3-types-ssd-simulators-jason-ma
[28] H. Roh, S. Park, S. Kim, M. Shin and S.-W. Lee, "B+-tree index optimization by exploiting internal parallelism of flash-based solid state drives," Proc. VLDB Endow., vol. 5, no. 4, pp. 286-297, December 2011.
[29] P. Chovanec and M. Kratky, "On the efficiency of multiple range query processing in multidimensional data structures," in Proceedings of the 17th International Database Engineering & Applications Symposium (IDEAS '13), 2013.