Citation
A study on renaming problem in asynchronous systems

Material Information

Title:
A study on renaming problem in asynchronous systems
Creator:
Matam, Shashank
Publication Date:
Language:
English
Physical Description:
ix, 54 leaves : ; 28 cm

Subjects

Subjects / Keywords:
Electronic data processing -- Distributed processing ( lcsh )
Electronic data processing -- Distributed processing ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 52-54).
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Shashank Matam.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
747706561 ( OCLC )
ocn747706561
Classification:
LD1193.E52 2011m M37 ( lcc )

Full Text
A STUDY ON RENAMING PROBLEM IN ASYNCHRONOUS SYSTEMS
by
Shashank Matam
B. Tech, Jawaharlal Nehru Technological University, 2007
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science


This thesis for the Master of Science
degree by
Shashank Matam
has been approved
by
Bogdan S. Chlebus
Tom Altman
Date


Matam, Shashank (M.S., Computer Science)
A Study on Renaming Problem in Asynchronous Systems
Thesis directed by Associate Professor Bogdan S. Chlebus
ABSTRACT
This thesis gives a survey of the state of the art of research on the problem of
renaming. The goal is to assign unique virtual names to a group of processes that
participate in computation, so that the range of new names is as small as possible
while minimizing the number of auxiliary registers. This problem has been
extensively studied in distributed computing due to its importance in unreliable
asynchronous systems. I concentrated on the case of asynchronous systems with
processes prone to crashes that communicate by reading from and writing to shared
registers.


This abstract accurately represents the contents of the candidate's thesis. I
recommend its publication.
Signed
Bogdan S. Chlebus


ACKNOWLEDGEMENT
Foremost, I would like to express my sincere gratitude to my advisor, Associate Prof.
Bogdan S. Chlebus, for the continuous support of my M.S. thesis, and for his patience,
motivation, enthusiasm, and immense knowledge. His guidance helped me throughout
the research and writing of this thesis. I could not have imagined having a better
advisor and mentor for my thesis study.
Besides my advisor, I would like to thank the rest of my thesis committee, Prof. Tom
Altman and Assistant Prof. Richard M. Osborne, for their encouragement and
insightful comments.
Last but not least, I would like to thank my family, my parents Satyanarayana and
Shantha, for bringing me into this world and supporting me throughout my life.


TABLE OF CONTENTS
Figures.............................................................viii
Tables................................................................ix
Chapter
1. INTRODUCTION........................................................1
1.1 Purpose of the study.............................................2
1.1.1 Asynchronous Computability.................................3
1.1.2 Problems Solvable in Asynchronous Systems..................5
1.2 Scope of the Study...............................................7
1.3 Structure of the Thesis..........................................7
2. HISTORY OF THE PROBLEM..............................................9
2.1 Early Research and Analysis......................................9
2.2 Most Recent Researches.........................................14
3. TECHNICAL PRELIMINARIES............................................16
3.1 Model Assumptions...............................................16
3.2 Process Model..................................................20
3.3 Failure Model..................................................21
3.4 Communication Model............................................21


3.5 The Renaming Problem...........................................22
3.5.1 Motivation.................................................23
3.5.2 Terminologies..............................................23
3.5.3 Performance of Renaming Algorithm..........................24
3.5.4 (2n-1) Wait Free Renaming Algorithm........................25
3.5.4.1 The Wait Free Case...................................26
3.5.4.2 Analysis of Renaming.................................29
3.5.4.3 Execution of the Renaming Algorithm..................29
3.5.4.4 The General Case.....................................40
3.5.5 (k, N)-Majority Renaming Algorithm........................41
3.5.6 Wait-Free M-Renaming Algorithm.............................45
4. CONCLUSION..........................................................51
REFERENCES.............................................................52


LIST OF FIGURES
Figure
3.1 Group of Free Names.....................................................24
3.2 P3's View for a Free Name.............................................25
3.3 Snapshot Object of P2 and P3...........................................30
3.8 A Splitter.............................................................46
3.9 A Renaming Grid........................................................48


LIST OF TABLES
Table
3.4 Execution of Renaming for k Processors....................................32
3.5 Execution of Renaming for 3 Processors....................................34
3.6 Execution of Renaming for 4 Processors....................................36
3.7 Execution of Renaming for 5 Processors....................................39


1. INTRODUCTION
In (fault-tolerant, dynamic, large-scale, etc.) distributed computing, finding
models that are realistic while remaining abstract enough to be tractable was, and
still remains, a real challenge. Distributed computing was born in the late seventies,
when people started taking into account the intrinsic characteristics of physically
distributed systems.
The field then emerged as a specialized research area distinct from networks,
operating systems and parallelism. A landmark of the early research was the
publication in 1978 of Lamport's most celebrated paper, Time, clocks and the
ordering of events in a distributed system. Since then, several major journals and
conferences have been devoted to distributed computing.
The problem of mastering multiprogramming was addressed in the late sixties
and early seventies. Through the work of pioneers such as Brinch Hansen, Dijkstra
and Hoare (among others), basic concepts for lock-based synchronization (e.g.,
semaphores and monitors) were developed, together with an associated methodology
based on invariants. These results helped in understanding multithreaded computing
and in analyzing multi-process programs in failure-free environments.


Today's computational reality is characterized by communities of networked
entities communicating with each other, cooperating towards common tasks or the
solution to a shared problem, and acting partially in an autonomous way. Said
differently, today's computational world is inherently distributed, as witnessed by the
many successful Internet-based distributed applications (e.g., peer-to-peer systems and clouds).
Distributed computing arises when one has to solve a problem in terms of
entities (usually called processes, agents, sensors, peers, actors, processors, nodes,
etc.) such that each entity has only a partial knowledge of the many parameters
involved in the problem that has to be solved. While parallelism and real-time can be
characterized by the words efficiency and on time computing, respectively, distributed
computing can be characterized by the word uncertainty.
This uncertainty is created by asynchrony, failures, unstable behaviors, non-
monotony, system dynamism, mobility, low computing capability, scalability
requirements, etc. Mastering one form or another of uncertainty is pervasive in all
distributed computing problems.
1.1 Purpose of the Study
This section summarizes asynchronous computability and the problems
solvable in asynchronous systems, and also discusses the renaming problem.


1.1.1 Asynchronous Computability
An important issue in fault-tolerant asynchronous computing is determining
the respective power of one object type with respect to another object
type (an object type corresponds to a problem). This question has received a lot of
attention, mainly in the context of the consensus problem.
The consensus problem is one of the most important problems encountered in
fault-tolerant distributed computing. Assuming that each process proposes a value, it
states that the processes have to agree on the very same value, that value being one of
the proposed values. Consensus is a basic building block when processes have to
agree.
As an example, the totally ordered broadcast problem requires that the
processes deliver all the messages they broadcast in the same order. This means that
totally ordered broadcast is both a communication problem (processes have to deliver
the same set of messages) and an agreement problem (the messages have to be
delivered in the same order at every process); the agreement part is an instance of the
consensus problem.
Following the authors of [11], the renaming problem can be described by
considering the air traffic control problem as an example, for clear understandability.
This is explained as follows.


Consider n+1 airplanes flying in all directions across the Atlantic. To
avoid possible collisions, a distinct altitude should be assigned to each plane, where
an altitude is measured in thousands of feet. This task is easy for a centralized air
traffic control system: simply sort the planes by flight number and assign the i-th
plane to the i-th altitude.
Suppose that, instead of assigning a distinct altitude to each plane centrally, I
want to design a protocol so that the planes themselves can coordinate to assign
altitudes. This problem can be called the renaming task: each plane starts out
with a unique ID taken from a large namespace (its flight number) and it halts with a
unique ID taken from a smaller namespace (its altitude). I am interested in
asynchronous protocols, because I do not know the participating planes in advance, and
interference may cause communications to be lost or delayed. In a real scenario
planes would communicate by message passing, but for ease we will assume that they
communicate via some shared read-write memory.
How many altitudes are needed to solve renaming? For instance, consider two
planes A and B. They share an array, with all entries initially null. The planes write
their flight IDs into memory and then take a snapshot.
If a plane sees only its own ID, it takes altitude 1000 feet.
If a plane sees both IDs, then it compares the IDs.


If its ID is less than the other's, it takes altitude 2000; if its ID is greater, it takes
altitude 3000.
Why are three altitudes needed for two planes? Suppose A's ID is less than B's. If
A sees B's ID, then B may or may not have seen A's ID; if B did not see A's ID, it
will choose 1000 feet, so A cannot safely take 1000. Could the two planes always
manage with only two altitudes? No, because if they could, then they could solve
two-process consensus, which we know is impossible. For two planes, three altitudes
are therefore both necessary and sufficient.
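To make the two-plane example concrete, the following small Python sketch (my own illustration, not part of the thesis; the lock that stands in for an atomic write-and-snapshot step is an assumption of the sketch) simulates the protocol and always produces two distinct altitudes from {1000, 2000, 3000}.

import threading

# Shared "memory": one slot per plane, initially None (no flight ID written yet).
board = [None, None]
lock = threading.Lock()   # stands in for an atomic write-then-snapshot step

def plane(slot, flight_id, altitudes):
    # Write own flight ID, then take a snapshot of the whole board.
    with lock:
        board[slot] = flight_id
        snapshot = list(board)
    others = [x for i, x in enumerate(snapshot) if i != slot and x is not None]
    if not others:                      # saw only its own ID
        altitudes[flight_id] = 1000
    elif flight_id < others[0]:         # saw both IDs and has the smaller one
        altitudes[flight_id] = 2000
    else:                               # saw both IDs and has the larger one
        altitudes[flight_id] = 3000

altitudes = {}
threads = [threading.Thread(target=plane, args=(0, 4711, altitudes)),
           threading.Thread(target=plane, args=(1, 42, altitudes))]
for t in threads: t.start()
for t in threads: t.join()
print(altitudes)   # two distinct altitudes drawn from {1000, 2000, 3000}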
Due to its very definition, the consensus notion is irrelevant for studying the
respective power of object types (problems) that are too weak to solve the consensus
problem. These problems are usually called sub-consensus types (or problems).
So, an important issue of asynchronous computability consists in establishing
connections (or the absence of connections) between sub-consensus problems. Several
results in that direction have been established in recent years. The following
section discusses the problems solvable in asynchronous systems.
1.1.2 Problems Solvable in Asynchronous Systems
There are, however, some problems that are solvable in asynchronous systems.


The first problem is set consensus, a weakening of the original consensus
problem in which a fixed number of different decisions is allowed.
The second problem is an alternate weakening of set consensus, called
approximate agreement, in which the decisions must lie within a sufficiently
small range of each other.
The third problem is renaming, in which processors must choose new
identifiers for themselves.
There is another problem, namely k-exclusion, a fault-tolerant variant of
mutual exclusion in which multiple processes can be in the critical section at a
time.
In the renaming problem, processes start with unique initial names taken from
a large name space and decide new names such that no two processes decide the same
new name and the new names are from a name space as small as possible.
Solutions to the renaming problem can be used to solve a variant of k-exclusion
in which processes must each occupy a specific place in the critical section.
Throughout this thesis I consider two computational models: the shared memory
model and the message passing model.


1.2 Scope of the Study
Exploring the limits of distributed computability is among the most exciting
research areas of distributed computing. In that spirit, this thesis focuses on the
renaming problem, which has received considerable interest since its introduction in
1987, when it was the first non-trivial problem known to be solvable in an
asynchronous distributed system despite process failures. Many algorithms for
renaming and variants of renaming have been proposed, and sophisticated lower
bounds have been proved, which have been a source of new ideas of general interest
to distributed computing. Renaming has consequently acquired a paradigm status in
distributed fault-tolerant computing.
1.3 Structure of the Thesis
This thesis is made up of three main sections.
Section 1 is a literature survey of the early research on the renaming problem;
it discusses the findings of contemporary researchers as well as the most
recent discoveries on the renaming problem.
Section 2 is an introduction to the renaming problem. The work focuses on the
most basic form of renaming, the one-shot version, although it discusses other
variants to give the reader a perspective on the renaming research area.
Section 3 discusses the renaming problem in the shared memory model of
computation. In the shared memory model, the processes communicate by
accessing a shared memory that consists of multi-writer/multi-reader atomic
registers.


2. HISTORY OF THE PROBLEM
During the past years, the renaming problem in asynchronous systems has become a
hot topic for researchers. Much of the past research on the renaming problem has
been done in the shared memory model, in which the processes communicate by
accessing a shared memory that consists of multi-writer/multi-reader atomic
registers.
Let us first study the early research on this topic.
2.1 Early Research and Analysis
The renaming problem was initially introduced from a theoretical point
of view [1] in the context of unreliable asynchronous message-passing systems. Then
the renaming problem received a lot of attention in the context of shared memory
systems (e.g., [2, 3, 4, 10, 11]).
In [1], Hagit Attiya, Amotz Bar-Noy and Danny Dolev did their research on two
pairs of nontrivial goals that are achievable in the presence of up to t < n/2 faulty
processors. The first pair deals with renaming processors so as to reduce the size of
the initial name space. The second pair deals with a multi-slot critical section
problem. The fault model considered in [1] is fail-stop, where a faulty processor may
suddenly stop functioning, no matter what state it is in. As indicated in the complete
version of [1], a processor may fail while being in a critical section, as well as in the
process of sending messages. The various algorithms can be extended to worse kinds
of faults.
Moir and Anderson state that a simple application of the renaming problem arises
when the processes perform a computation whose time complexity depends on
the size of their name space [2]. By first using a renaming algorithm to reduce their
name space, the time complexity can be made independent of the original name
space. The wait-free solution in [2] concerns the shared memory model. This solution
is surprisingly simple and elegant. It uses a building block called a splitter. This
building block was initially introduced by Lamport to provide fast mutual exclusion [6].
Wait-free computing was introduced in 1977 by Lamport [6]. Since then,
several problems have been solved with simple and elegant wait-free protocols (e.g.,
[5]). An implementation of an object is wait-free if every access by a non-faulty
process is guaranteed a response, regardless of whether the other processes are slow,
fast or have crashed [8]. As a result, a non-faulty process that invokes an operation
on a wait-free object always obtains a correct response; the other processes cannot
prevent its progress. Moreover, the maximal duration of an operation on a wait-free
object can be bounded, according to the object.


In his paper Wait-Free Objects for Real-Time Systems [16], Michel Raynal explains
that a multiprocessor system can provide, in increasing order of implementation
difficulty, wait-free implementations of three objects: a renaming object, a
store/collect object and a consensus object.
In [9] the renaming problem is explained as follows. Each of the n processes that
define the system has a distinct name taken from an unbounded domain. The
processes have to cooperate to choose new names from a name space of size M such
that no two processes get the same name. As described in [9], the renaming problem
is trivial when no process can suffer a crash failure. In contrast, it has been shown
that there is no solution to the M-renaming problem when M < n + f, where f is an
upper bound on the number of processes that can crash [7].
He also states in [9] that the processes basically compete to acquire new
(distinct) names. The net effect of process asynchrony and process crashes creates an
uncertainty in the system that a renaming protocol has to cope with. Additionally, the
fact that the solution has to be wait-free makes the problem far from trivial.
To illustrate solutions to the M-renaming problem, Michel Raynal proposed two
algorithms in [9]. The first algorithm, based on Moir and Anderson [2], considers a
shared memory system and solves the problem for M = n(n+1)/2. It is based on a grid
of splitters, a new data/control structure specially suited to wait-free computing,
which is explained in [2]. Several renaming protocols have been designed for shared
memory systems.
The second algorithm, based on Attiya et al. [1], considers a message passing
system where at most f < n/2 processes can crash, and it solves the problem for
M = (n - f/2)(f + 1). Both protocols should allow the reader to get a better
understanding of the difficulties and subtleties of the renaming problem.
In the message passing model, the processors communicate by sending and
receiving messages through channels. The channels need not be FIFO, but they must
not lose, corrupt or create messages. Moreover, there is no assumption on the
message transfer delays.
Combinatorial Topology and Distributed Computing [11] by Maurice Herlihy,
Dmitry Kozlov and Sergio Rajsbaum describes the plane traffic control problem
(explained in the introduction). The authors explain that the planes would
communicate by message passing, but for ease of presentation they assume the planes
communicate through some kind of shared read-write memory.
To rule out trivial solutions, the authors of [11] require that any renaming
protocol be anonymous: in any execution, the name chosen by a process can depend
only on the name it was originally issued and on how its protocol steps are
interleaved with the others'. In this renaming protocol, output names from the range
0, ..., 2n are assigned to n + 1 processes. An array A is shared by the processes,
where A[i] is a pair consisting of P_i's input name and its proposed output name.
Each process starts by writing its input name together with the proposed output name
0. An output name is said to be claimed once it has been written to A.
The process then takes a snapshot of A. If no other process has claimed the same
output name, the process decides on that output name. Otherwise, the process picks a
new name that has not been claimed by any other process, and starts over.
A process chooses the next name to claim in two steps. In the first step, it computes
the rank, r, of its input name among the input names of the processes that have
written to A. In the second step, it selects the r-th ranked output name among the
output names that were not claimed in its most recent snapshot.
For this renaming protocol, Maurice Herlihy, Dmitry Kozlov and Sergio Rajsbaum
prove in Combinatorial Topology and Distributed Computing [11] a lemma stating
that no two processes decide the same output name, and another lemma stating that
if m + 1 processes participate in the protocol, all names chosen will lie in the range
0, ..., 2m. They also show that this renaming protocol is wait-free.


2.2 Most Recent Researches
A central concern in the renaming problem is reducing the output name space
as much as possible. When the size M of the new name space should be as small as
possible as a function of n, the total number of processes, the problem is called
non-adaptive renaming.
Adaptive renaming is more demanding: the size of the new name space should
be as small as possible as a function of the actual number of processes participating in
an execution. Most algorithms solving renaming are adaptive, but proving lower
bounds for non-adaptive renaming is substantially more difficult than proving lower
bounds for adaptive renaming.
Long-lived renaming: in the long-lived renaming problem, a process can
(repeatedly) acquire a new name and then release it [2]. Long-lived renaming can be
useful in systems in which processes acquire and release identical resources. Each
new name then gives access to a resource (e.g., its address), and the renaming
algorithm controls access to the resources.
Group renaming: a generalization of the renaming problem for groups of
processes has been proposed in [12] and later investigated in [13]. In this variant,
each process belongs to a group and knows the original name of its group. Each
process has to choose a new name for its group in such a way that two processes
belonging to distinct groups choose distinct new names.


In very recent years, efficient hardware-based architectures and algorithms have been
developed for renaming problems in asynchronous systems, mostly in multiprocessor
communication systems. Although many renaming algorithms have been established,
improving the efficiency or the simplicity of the design of renaming algorithms
remains a challenge. Finally, researchers have studied how to use renaming to solve
other problems, as well as the relation between renaming and other problems.


3. TECHNICAL PRELIMINARIES
This section starts by defining the model of computation of interest to the
thesis. It then defines the renaming problem and discusses the main renaming
variants. Finally, it discusses some basic results about renaming and gives an intuition
of the difficulty of the renaming problem.
3.1 Model Assumptions
I make the following model assumptions in discussing the renaming problem and its
algorithms:
asynchronous processes;
shared memory with read/write registers;
at most f crash failures of processes;
and a few asides into message passing.
Shared memory:
In the shared memory model, processes communicate among themselves
through objects called shared variables, or registers. A shared variable is a container
that stores a value. Registers may be considered as playing a role similar to that of
messages in the message passing model, in the sense that the value stored by a shared
variable corresponds to the contents of a message. We assume there are n processes
in the system and a number of registers.
Processes communicate by retrieving the values stored in shared variables
and by changing those values. In an interaction between a process and a register, the
process is active and the register passive. The register responds with feedback to the
operation invoked by the process, typically the value stored at the register when the
operation occurred, but possibly just an acknowledgement of performing the operation.
It is good to have registers, as they enhance the system and make programming
simpler. Shared variables are either provided by hardware or obtained by simulation;
in particular, certain shared variables can be simulated on top of message passing.
When shared registers are provided by hardware, this improves the computational
power of the system, for example in terms of its fault tolerance, in a general
asynchronous environment.
Atomic snapshot object:
The model considered here is that of asynchronous shared memory. Registers
are only of the read-write kind. There are n processes, which may fail by crashing.


We develop an implementation of an object, built from read-write registers, that
allows its segments to be modified independently by all participating processes, but
also allows the object to be read as a whole in an atomic fashion. A sequence of
values of the shared variables, as they exist instantaneously at some point in an
execution, makes a snapshot of this region of shared memory. Since consensus is not
solvable in an asynchronous environment with even one crash, one could suspect that
only trivial problems are solvable in such environments. This is not the case, and the
atomic snapshot is an example. Atomic snapshot objects are used as building blocks
in shared memory algorithms.
There is a number of read-write registers partitioned into n segments. A
multi-reader single-writer register is said to be owned by the process that can write
to it. Each process i owns a segment memory[i]; that is, each register in the segment
memory[i] is readable by all, but only process i can write to it.
A segment consists of registers designated to store values relevant to an
application, and also of special registers used in the implementation at hand. We want
to develop two operations: updates of segments by their owners, and learning the
values in all segments as they existed simultaneously at some point of an execution.
This is what intuitively provides a snapshot object. Our goal is to obtain an
implementation that has two qualities. First, it is wait-free, with any process able to
complete its operations independently of the actions of the remaining processes.
Secondly, it is atomic, in the sense that all operations occurring in an execution could
be ordered in a way that preserves the ordering of non-overlapping operations, and
the results of the operations are consistent with the specification.
A precise specification of a snapshot object is as follows. There are two kinds
of operations, denoted scan_i and update_i; the subscript indicates that process i
invokes them. A response to an invocation of scan_i returns a view, which is a vector
of values from all the segments. An invocation of update_i(v) has the i-th segment
modified in such a way that v is stored in memory[i] as the new value for the i-th
segment.
So what we want to achieve is an implementation of these two operations.
Observe the similarity between implementation and simulation; the difference is in
terminology rather than in goals. To argue about correctness, we consider various
levels of implementing executions, together with simulated executions. One can
visualize a distributed environment in which scans and updates occur instantaneously
as interactions between processes and shared objects; this is the top level of an
implemented execution, exactly as in a simulated execution. An execution
represented as a sequence of invocations and responses of the operations scan and
update is the middle level of interpretation of an implementing execution. The bottom
level is an execution viewed as consisting of events occurring at the level of running
the procedures launched by invoking the implemented operations scan and update.


To define correctness of an implementation, we apply a methodology similar
to the one used in discussing simulations. We require correctness of an implementation
to hold in the strong form of linearizability of all implementing executions, which
means that we want to implement an atomic object. The semantics of a snapshot
object is an extension of that of a read-write register, in the sense that it is a
composite object that can be read by taking a snapshot and written by performing
an update. A view assigned to a serialization point is to contain the results of the
most recent updates of all shared registers, and, if there were no updates of a memory
segment owned by some process, the initial value in this region of memory.
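As a concrete illustration of the scan/update interface just specified, here is a minimal Python sketch (my own, not from the thesis). A single lock makes each operation atomic, so this is a linearizable stand-in for a snapshot object rather than the wait-free read/write construction discussed above.

import threading
from typing import Any, List, Optional

class SnapshotObject:
    # Toy snapshot object with n single-writer segments.

    def __init__(self, n: int) -> None:
        self._segments: List[Optional[Any]] = [None] * n   # memory[0..n-1]
        self._lock = threading.Lock()                       # coarse-grained atomicity

    def update(self, i: int, value: Any) -> None:
        # update_i(v): only process i is supposed to call this on segment i.
        with self._lock:
            self._segments[i] = value

    def scan(self) -> List[Optional[Any]]:
        # scan_i(): returns a view, i.e. a copy of all segments as they
        # existed simultaneously at some point of the execution.
        with self._lock:
            return list(self._segments)

# Example: process 2 publishes a value; any process can read a consistent view.
snap = SnapshotObject(n=4)
snap.update(2, ("old_name_2", 5))
print(snap.scan())   # [None, None, ('old_name_2', 5), None]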
3.2 Process Model
The system consists of n sequential processes denoted p_1, p_2, ..., p_n.
The integer i is called the index of p_i.
The initial names belong to the totally ordered set [1..N], with N much larger
than n, and each process p_i has an initial name denoted old_name_i. A process does
not know the initial names of the other processes; it only knows that no two
processes have the same initial name. An initial name can be seen as a particular
value that uniquely identifies a process, for example its IP address, defined in p_i's
initial context.


The processes are asynchronous. This means that the relative execution
speeds of different processes are completely arbitrary, and there is no bound on the
time a process takes to execute a step.
3.3 Failure Model
During an execution a process may crash, i.e., halt prematurely. After it has
crashed, a process executes no further steps; thus a process executes correctly until
it possibly crashes. A process that does not crash in a run is correct in that run;
otherwise it is faulty in that run.
The failure model is called wait-free when any number of processes may crash.
It is called so because it is useless for a process to wait for events related to other
processes, such as waiting until another process writes a value to the shared memory.
Thus, in a wait-free solution to the renaming problem, a process has to choose its new
name in a finite number of its own steps, independently of the steps taken by other
processes.
3.4 Communication Model
The processes communicate with each other by accessing atomic read/write
shared registers. Atomic means that each read or write operation appears as if it has
been executed instantaneously at some point of the time line between its begin and
end events [6]. Each atomic register is a single-writer/multi-reader register, denoted
1WnR.
This means that a single process, statically determined, can write it,
but every process can read it. Atomic registers are denoted with uppercase letters and
are structured into arrays. If X[1..n] is such an array, X[i] denotes the register
of the array that p_i is allowed to write.
This communication model provides a convenient abstraction level. More
elementary communication means, such as single-writer/single-reader registers, or
message passing channels, can be used to construct single-writer/multi-reader
registers.
3.5 The Renaming Problem
This section discusses the renaming problem in detail, together with its performance,
the main results on renaming, and the associated lemmas.
A process has to perform two main steps: it has to acquire a unique
new name, and it may later release it. The range of new names should be small,
depending only on the number k of active processes, and it must contain at least 2k-1
names. Renaming is a building block for adaptive algorithms: processes first obtain
names in an adaptive range and then apply an ordinary algorithm using these names.


In the renaming problem each process starts with a unique name from a large
domain, and the processes should pick new names that are still distinct but come from
a smaller domain.
3.5.1 Motivation
Suppose the original names are serial numbers that may have many digits, but I
would like the processes to do some kind of time slicing; this can then be done on the
basis of their new, short IDs.
3.5.2 Terminologies
The renaming problem can be defined on the basis of four properties, explained
below.
Termination. The invocation of new_name() by a correct process returns it
a new name.
Validity. Each new name is an integer in the set [1..M].
Agreement. No two processes obtain the same new name.
Index Independence. The new name obtained by a process is independent
of its index.
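The validity and agreement properties can be checked mechanically on the outcome of any renaming execution. The following Python sketch (my own, purely illustrative) does exactly that.

def check_renaming(new_names, M):
    # new_names maps each participating process to the new name it decided on.
    names = list(new_names.values())
    valid = all(isinstance(y, int) and 1 <= y <= M for y in names)   # Validity
    agreement = len(names) == len(set(names))                        # Agreement
    return valid and agreement

# Example: three processes renamed into the space [1..5] = [1..2*3-1].
print(check_renaming({"p1": 1, "p2": 3, "p3": 5}, M=5))   # True
print(check_renaming({"p1": 1, "p2": 1, "p3": 5}, M=5))   # False: two processes share a name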


3.5.3 Performance of Renaming Algorithm
The performance of a renaming algorithm can be explained step by step as follows.
First, new names should be drawn from {1, 2, ..., M}. Second, I would like M to be
as small as possible. Uniqueness implies that M must be at least n. Finally, due to the
possibility of failures, M will actually be larger than n.
Example: P3's view at some iteration, with processors P1, ..., P6. The group of free
names in this view is {2, 4, 5, 6, 8, ...}.
Figure 3.1: Group of Free Names


Given this view, P3 suggests the value 5.
Figure 3.2: P3's View for a Free Name
At most n-1 integers were suggested by other processors; therefore at least n names in
{1, ..., 2n-1} are available.
The rank of a processor is at most n.
The maximal proposal value is therefore 2n-1.
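The suggestion rule sketched in Figures 3.1 and 3.2 can be written as a one-line helper; the Python sketch below (my own illustration) reproduces P3's choice of the value 5.

def pick_suggestion(my_rank, taken, upper):
    # Return the my_rank-th smallest integer in [1..upper] that is not
    # currently suggested by another processor (the set `taken`).
    free = [v for v in range(1, upper + 1) if v not in taken]
    return free[my_rank - 1]           # ranks are counted from 1

# Example matching the figures: the other processors suggested {1, 3, 7};
# with n = 6 processors the name space is [1..2*6-1] = [1..11], and a
# processor of rank 3 picks the 3rd free name, which is 5.
print(pick_suggestion(3, {1, 3, 7}, upper=11))   # -> 5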
3.5.4 (2n-1) Wait Free Renaming Algorithm
The main requirements of the renaming problem are:
Termination: for every non-faulty processor p_i, y_i is eventually assigned a value.
Uniqueness: for all distinct non-faulty processors p_i and p_j, y_i != y_j.
Here the goal is to minimize M, the size of the new name space. A superficial solution
is to let processors choose their indices, that is, processor p_i takes i as its new name;
the new name space is then of size n. However, this solution is not good if the indices
are larger than the actual number of processors. To rule it out, there is an additional
requirement:
Anonymity: the code executed by processor p_i with original name x is exactly the
same as the code executed by processor p_j with original name x.
3.5.4.1 The Wait Free Case
In the wait-free case, namely f = n-1, there is a renaming algorithm whose output
domain contains n+f = 2n-1 names, i.e., M = 2n-1. This algorithm is relatively simple
because it uses a larger name space.
Let us get an idea of how the processors behave in this algorithm, for better
understanding. Processors communicate using an atomic snapshot object containing,
for each processor, its original name and the new name it suggests for itself.


Step 1: Each processor p_i starts the algorithm by writing its original name to its
segment in the snapshot object.
Step 2: Then p_i scans the snapshot object and picks a new name that has not yet been
suggested by another processor.
Step 3: Processor p_i suggests this name by writing it to its segment in the snapshot
object, and scans the snapshot object again.
Step 4: If no other processor suggests this name, p_i decides on it.
Step 5: Otherwise it picks another new name and suggests it again.
The uniqueness condition implies that M must be at least n. Here the renaming
algorithm is shown with M = n + f, where f is the number of crash failures to be tolerated.
In the algorithm proposed below, the rule for picking a new name is to choose the
r-th ranked integer among the free numbers in the range [1..2n-1], where r is the rank
of the processor's original name among all the original names of participating
processors. The algorithm uses a single atomic snapshot object, whose name is left
implicit in the calls to the update and scan procedures. The i-th segment of the
snapshot object contains a pair of values: p_i's original name x_i and p_i's current
suggestion s_i for its new name.


Wait-Free Renaming Algorithm: code for processor p_i, 0 <= i <= n-1:

s := 1                // suggestion for my new name
while true do
    update p_i's segment of A to be [x, s], where x is p_i's original name
    scan A
    if s is also someone else's suggestion then
        let r be the rank of x among the original names of the non-empty segments
        let s be the r-th smallest positive integer not currently suggested by another processor
    else
        y := s        // decide on s as the new name
        terminate
In the above algorithm, the rule for picking a new name is to choose the r-th ranked
integer among the free (not suggested) numbers in the range [1..2n-1], where r is the
rank of the processor's original name among all the original names of participating
processors.
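The following Python sketch (my own illustration, not code from the thesis or from [1]) simulates the algorithm above for a handful of processors. A lock makes each update-and-scan pair atomic, standing in for the atomic snapshot object, so the termination, validity and agreement properties can be observed directly.

import threading

def wait_free_renaming(original_names, M):
    # Simulate the (2n-1)-renaming algorithm; returns {original_name: new_name}.
    A = {}                      # snapshot object: original name -> current suggestion
    lock = threading.Lock()     # stands in for atomic update/scan on the snapshot
    decided = {}

    def proc(x):                # x is this processor's original name
        s = 1                   # first suggestion for the new name
        while True:
            with lock:
                A[x] = s                        # update my segment
                view = dict(A)                  # scan
            others = {name: sug for name, sug in view.items() if name != x}
            if s in others.values():            # someone else suggests s too
                r = sorted(view).index(x) + 1   # rank of x among written names
                free = [v for v in range(1, M + 1) if v not in others.values()]
                s = free[r - 1]                 # r-th smallest free integer
            else:
                decided[x] = s                  # decide on s
                return

    threads = [threading.Thread(target=proc, args=(x,)) for x in original_names]
    for t in threads: t.start()
    for t in threads: t.join()
    return decided

new_names = wait_free_renaming([17, 4, 256, 42, 99], M=2 * 5 - 1)
print(new_names)                                # five distinct names, all in [1..9]
assert len(set(new_names.values())) == len(new_names)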
3.5.4.2 Analysis of Renaming
Uniqueness:
Suppose, for contradiction, that p_i and p_j choose the same new name s. Consider
p_i's last update before deciding, in which it suggests s, and p_i's last scan before
deciding on s; consider similarly p_j's last update and last scan before deciding on s.
One of the two last scans follows the other processor's last update, so it sees s as the
other processor's suggestion, and that processor therefore does not decide on s, which
is a contradiction.
Name range: the rank of a processor is at most n, and at most n-1 integers are already
suggested by other processors, so the highest integer a processor may suggest is 2n-1.
Because a processor decides only on a name it has previously suggested, every new
name lies in [1..2n-1].
3.5.4.3 Execution of the Renaming Algorithm
I present an execution of the (2n-1)-renaming algorithm proposed by Attiya, Bar-Noy,
Dolev, Peleg and Reischuk.
Preliminaries:


Processors communicate via a snapshot object.
Each processor iteratively stores its original name and (suggested) new name.
If the suggested name is in conflict with some other proposal, the processor
next suggests the k-th free name, where k is the rank of its original name.
Using an example with 4 processors, namely p1, p2, p3, p4, I can illustrate an
admissible execution of the wait-free algorithm.
Each processor starts the algorithm by writing its original name to its segment
in the snapshot object.
At some point in time, suppose P3 suggests the same name already suggested by P2
(Figure 3.3: Snapshot Object of P2 and P3). Then P3 will choose a new name from the
name space and suggest it instead.
For better understanding, I discuss the schematic execution of the renaming algorithm
proposed by Attiya et al., as explained below.


An execution of the renaming algorithm in which some process takes name 2k-1,
where k is the number of processors competing for a new name.
Consider 5 processes, and assume we start from left to right, i.e., from P1 to P5.
In the first step, processor P1 suggests 1 as its new name. In the 2nd step, P2 also
suggests 1 and sees that the same name has already been suggested, so P2 has to
choose the next name available in the name space according to its rank. When a
process has already chosen 1 as its new name, then, depending on their ranks, the
other processes choose the next available names. Here 1 has already been chosen, so
the free names available are {2, 3, 4, 5, 6, 7, ...}, and the rank of processor P2 is 2,
so it chooses the 2nd name from the free space, which is 3. Similarly, all the processes
choose their new names based on their rank and the free names available at that instant.
In the 4th step P3 is introduced and sees that there is a collision, as P1 has already
chosen 1 as its new name, so it goes for the next available name according to its rank.
In the 5th step P3 chooses 5 as its new name. Then P4 is introduced in the 6th step,
sees a collision, and chooses 7 as its new name in step 7. In the 8th step the last
processor P5 is introduced and sees that P1 has already suggested the same name, so
it chooses 9 as its new name in the 9th step.


The last process P5 gets new name 9, which is the (2k-1)-th name when k (the number
of processes) is 5. From this we can conclude that an execution may use names up to
2k-1, where k is the number of processes.
P1   P2   P3   P4   P5
1
1 1
1 3
1 3 1
1 3 5
1 3 5 1
1 3 5 7
1 3 5 7 1
1 3 5 7 9
Table 3.4: Execution of Renaming for k Processors


An execution of the renaming algorithm in which some processes take an exponential
number of steps before deciding.
Proof:
I demonstrate such executions of the renaming algorithm as follows.
I start with 3 processors and then follow a similar procedure for 4 and for 5
processors. Based on the total number of steps taken before the processes decide their
new names in each execution, we can conclude that some processes take an
exponential number of steps before deciding.
Consider first the case of 3 processors. For 3 processors we have:
P1   P2   P3
1 1
1(2) 1(3)
1(2) 1(2) 1(3)
2 2 1(3)
2(3) 2 1(3)


3(4) 2 3(5)
4 2 3(5)
4 2 5
Table 3.5: Execution of Renaming for 3 Processors
In presenting executions we use the following convention: an entry x(y) means that
the process has read x from the snapshot and has enabled writing y.
Assume there are 3 processors and we start from right to left, i.e., from P3 to P1.
In the first step we consider two processors, P3 and P2, which both suggest ID 1; they
then see that there is a collision, so they opt for 2 and 3 respectively, based on their
assigned ranks, whenever they get a chance to write the new ID.
In the 3rd step, P1 is introduced and tries for ID 1, but it sees that P2 and P3 have
also suggested the same ID, so it goes for ID 2 too; whenever it has a chance, based
on its rank, it will update its ID. Now processors 1 and 2 update their new IDs to 2,
while P3, being a slow processor, still intends to write 3 as its name.
In the 5th step P1 sees that P2 has already suggested 2 as its name, so it will go for 3
whenever it gets its chance, while processors 2 and 3 remain the same.


In step 6, processors 1 and 3 update 3 as their new name and encounter a collision;
based on their ranks they then choose ID 4 and ID 5 respectively.
In the 7th step P1 chooses ID 4, P2 stays with 2, and P3 prepares to update to ID 5.
Finally, P1, P2 and P3 have IDs 4, 2 and 5 respectively.
The same procedure is followed when considering 4, 5, 6, ... processors.
Therefore, for 3 processors the execution takes 8 = 2^3 steps.
For 4 processors we have:
P1   P2   P3   P4
1 1
1(2) 1(3)
1(2) 1(2) 1(3)
2 2 1(3)
1 2(3) 2(4) 1(3)
1(2) 2(3) 2(4) 1(3)


2 2(3) 2(4) 1(3)
2(3) 2(3) 2(4) 3
3 2(3) 2(4) 3
3(4) 2(3) 2(4) 3(5)
4 2(3) 4 3(5)
4(6) 2(3) 4(7) 3(5)
6 2(3) 4(7) 3(5)
6 3 4(7) 3(5)
6 3 7 3(5)
6 3 7 5
Table 3.6: Execution of Renaming for 4 Processors
As we can see from the above table, it takes 16 = 2^4 steps for 4 processors.
For 5 processors we have


P1   P2   P3   P4   P5
1 1
1(2) 1(3)
1 1(2) 1(3)
1(2) 1(2) 1(3)
1(2) 2 1(3)
2 2 1(3)
2(3) 2(4) 1(3)
1 2(3) 2(4) 1(3)
1(2) 2(3) 2(4) 1(3)
2 3 2(4) 1(3)
2(3) 3 2(4) 1(3)
1 2(3) 3 2(4) 1(3)
1(2) 2(3) 3 2(4) 1(3)
2 2(3) 3 2(4) 1(3)


2(3) 3 3 2(4) 1(3)
2(3) 3(4) 3(5) 2(4) 1(3)
3 3(4) 3(5) 2(4) 1(3)
3(4) 3(4) 3(5) 2(4) 1(3)
4 3(4) 3(5) 2(4) 1(3)
4 3(4) 3(5) 2(4) 3
4 3(4) 3(5) 2(4) 3(6)
4 4 3(5) 2(4) 3(6)
4(5) 4(6) 3(5) 2(4) 3(6)
5 4(6) 3(5) 2(4) 3(6)
5 6 3(5) 2(4) 3(6)
5 6 5 2(4) 3(6)
5(6) 6 5(7) 2(4) 3(6)
6 6 5(7) 2(4) 3(6)
6(8) 6(9) 5(7) 2(4) 3(6)


8 6(9) 5(7) 2(4) 3(6)
8 9 5(7) 4 3(6)
8 9 7 4 6
Table 3.7: Execution of Renaming for 5 Processors
Similarly, by the above table, it takes 32 = 2^5 steps for 5 processors.
When the pattern is repeated for 6, 7, 8, ... processors, it takes approximately 64, 128,
256, ... steps.
The series looks like {8, 16, 32, 64, 128, 256, ...}, which can be generalized to 2^k.
From this we can conclude that an execution takes an exponential number of steps
before deciding, i.e.:
For 3 processors it takes 8 = 2^3 steps.
For 4 processors it takes 16 = 2^4 steps.
For 5 processors it takes 32 = 2^5 steps.
So if there are k processors, then a process may take about 2^k steps, which is
exponential in k.


3.5.4.4 The General Case
Now let us consider the general case of an arbitrary f, that is, a renaming algorithm
with n+f new names. Although the wait-free algorithm will obviously work in this
case as well, we pay a price in terms of an unnecessarily large name space when f is
smaller than n-1. Thus I am interested in more efficient algorithms for smaller
numbers of failures.
This algorithm extends the idea of the previous algorithm by restricting the
number of processors that are proposing names at the same time. A processor
suggests a name only if its original name is among the f+1 lowest names of the
processors that have not decided yet.
The only addition to the snapshot object is a bit with which a processor
announces that it has decided. The uniqueness property follows by the same argument
as before: no two processors decide on the same name. Clearly, at most n-1 integers
are suggested or chosen by processors other than p_i itself in any view returned by a
scan. A processor suggests a name only if its rank is at most f+1; thus the name
suggested by a processor is at most (n-1) + (f+1) = n+f. Since a processor decides
only on a name it has previously suggested, its new name must be in the range [1..n+f].
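A minimal sketch of the participation test added in the general case (my own illustration; it assumes the scan returns a view mapping each written processor's original name to a pair of its current suggestion and its decided bit, as described above):

def may_suggest(x, view, f):
    # x may suggest a name only if it is among the f+1 lowest original names
    # of the processors that have not yet announced a decision in `view`.
    undecided = sorted(name for name, (_sugg, done) in view.items() if not done)
    return x in undecided[:f + 1]

# Example with f = 1: only the two lowest-named undecided processors compete.
view = {5: (None, False), 9: (None, False), 2: (3, True), 12: (None, False)}
print(may_suggest(5, view, f=1))    # True  (5 and 9 are the two lowest undecided)
print(may_suggest(12, view, f=1))   # False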


3.5.5 (k, N)-Majority Renaming Algorithm
In this renaming problem, any set of k < n processes, which hold their
original names from a large range [N] = {1, ..., N}, contend to acquire unique integers
as new names in a smaller range [M] using some r shared registers. We refer to k as
the contention. We consider one-time renaming problems, in which each contending
process needs to acquire a name starting from the very beginning of an execution, and
no name is ever released to be possibly reused. When an algorithm knows some of the
parameters k and N, in the sense that they can be a part of its code, we indicate this by
attaching these parameters at the front of the name of the problem and its solutions.
There are four cases, with each of k and N either known or not. In this thesis I
consider only the case when k is known while N is not known [15].
For a better understanding, let us go through two algorithms and the corresponding
theorems: algorithm Cascade-Rename and algorithm Tight-Rename.
Algorithm Cascade-Rename(k, N):
It solves (k, N)-Renaming with M = e^14 k as an upper bound on the magnitude
of new names. It proceeds through a sequence of epochs. A process starts an epoch
with its current original name, and it ends the epoch by acquiring a new name to be
used as the original name in the next epoch. In the first epoch a process executes
Basic-Rename(k, N) using its original input name. Throughout the iterations, a
process executes Basic-Rename(k, N_j) in epoch j, where N_1 = N and
N_{j+1} = 24 e^7 k lg(N_j / k) are bounds on the magnitude of the original names.
The epochs continue through epoch j* such that N_{j*} <= e^14 k. The ultimate output
name is the name acquired by a process during the last epoch. Each execution of an
instantiation of algorithm Basic-Rename uses its own set of auxiliary shared registers.
Theorem: Algorithm Cascade-Rename(k, N)
solves (k, N)-Renaming with M = e^14 k as a bound on the range of new names, in
O(log k (log N + log k log log N)) local steps, and with O(k log(N/k)) auxiliary
shared registers.
Proof:
Let us estimate the rate of decrease of the bounds on the range of the original names
in consecutive epochs by resorting to a lemma, call it {a}, which states that algorithm
Basic-Rename(k, N) solves (k, N)-Renaming with M = 24 e^7 k lg(N/k) as a bound on
the range of new names, in O(log k log N) local steps, and with O(k log(N/k))
auxiliary shared registers.


For an integer j >= 2, the ratio of consecutive bounds can be estimated by
N_{j+2} / N_{j+1} = (24 e^7 k lg(N_{j+1}/k)) / (24 e^7 k lg(N_j/k)) = lg(N_{j+1}/k) / lg(N_j/k) <= 471/500,
provided that N_j > e^14 k. Thus after j* <= 2 + log_{500/471}(N_2 / (e^14 k)) = O(log log N)
epochs we obtain a bound on the range of the original names in Basic-Rename that is
at most e^14 k. The number of registers needed for the first iteration of Basic-Rename
is O(k log(N/k)); it is O(k log log(N/k)) registers for the second iteration,
O(k log log log(N/k)) registers for the third one, and so on, for a total of
O(k log(N/k)) registers. The number of local steps is
O(log k log N + sum_{j=2..j*} log k log N_j), which is at most
O(log k log N + log^2 k log log N), by the lemma {a} and by the bound of
O(log log N) on the number of epochs j*.
Renaming when only k is known while N is not:
Let MA(k) be an algorithm given by Moir and Anderson [2] which solves k-Renaming
with a bound on the new names that is O(k^2), in O(k) local steps, and with O(k^2)
auxiliary shared registers. Let AF(k, N) be the algorithm of Attiya and Fouren [14]
that solves (k, N)-Renaming with 2k - 1 as a bound on new names, in O(N) local
steps, and with O(N^2) auxiliary shared registers.
We combine algorithm Cascade-Rename with algorithms AF and MA to solve
k-Renaming with 2k - 1 as a bound on the magnitude of new names, for any
contention k. This is achieved by running algorithm Tight-Rename(k), which operates
as follows.


First run MA(k) with processes using the original names, with M_1 = O(k^2) as a
bound on the range of new names. Then proceed by running Cascade-Rename(k, M_1)
with processes using the new names acquired from MA(k), and with M_2 = e^14 k
as a bound on new names.
Finally, execute AF(k, M_2) with processes using the new names acquired from
Cascade-Rename(k, M_1). A name obtained when executing AF(k, M_2) serves as the
final name. Each of the three executions uses its own dedicated set of registers,
disjoint from the others. [15]
Theorem: Algorithm Tight-Rename(k)
solves k-Renaming with 2k - 1 as a bound on the magnitude of new names, in O(k)
local steps, and with O(k^2) auxiliary shared registers.
Proof:
When we have the three algorithms synchronized such that output names
serve as original names in the next algorithm, the only requirement is to set the
bounds on new and original names according to the specifications. This we do so that
names are assigned properly by the pipeline. The magnitude of the new names is first
reduced by MA to M_1 = O(k^2), next to M_2 = O(k) by Cascade-Rename, and
finally to 2k - 1 by AF.
The characteristics of the obtained algorithm Tight-Rename rely on the theorem
for algorithm Cascade-Rename, on the properties of algorithm MA derived in [2], and
on the properties of algorithm AF shown in [14]. We obtain by direct calculations
that the local step complexity of algorithm Tight-Rename(k) is
O(k + log^2 k log log k + M_2) = O(k), and that the number of registers needed is
O(k^2 + k log(k^2 / k) + M_2^2) = O(k^2).
3.5.6 Wait-Free M-Renaming Algorithm
To illustrate solutions to the M-renaming problem, Michel Raynal proposed
two algorithms in [9]. The first algorithm, based on Moir and Anderson [2],
considers a shared memory system and solves the problem for M = n(n+1)/2. It is
based on a grid of splitters, a new data/control structure specially suited to wait-free
computing, which is explained in [2]. Several renaming protocols have been designed
for shared memory systems.


A splitter is characterized by the following global property: if x processes
access the splitter, then at most one receives the value stop, at most x-1 receive the
value right, and at most x-1 receive the value down.
Figure 3.8: A Splitter (of the x processes that enter, at most x-1 obtain right, at most
x-1 obtain down, and at most 1 obtains stop)
Algorithm: procedure direction(move_i)
(1)  X := id_i                    // write your identifier
(2)  if Y then move_i := right
(3)  else Y := true
(4)       if (X == id_i)          // check identifier
(5)       then move_i := stop
(6)       else move_i := down
The splitter is trivially wait-free, as there is no loop. Assume that x processes
access the splitter object. Let us first observe that, due to the initialization of Y, not
all of them can get the value right: for a process to obtain right, another process
has to first set Y to true. Let us now consider the last process that executes line 1.
If it does not crash, this process cannot get the value down (due to the test at line 4),
hence not all processes can get the value down. Let p_i be the first process that finds
X = id_i at line 4 (consequently, p_i gets the value stop if it does not crash). This
means that no process p_j has modified X while p_i was executing lines 1-4. It
follows that any p_j != p_i that later modifies X (at line 1) will find Y = true (at
line 2), and consequently cannot get the value stop.


Figure 3.9: A Renaming Grid (a triangular grid of splitters for n = 5; each grid
position is associated with a distinct new name)
The effective and simple Moir-Anderson solution consists in a grid made up
of n(n-1)/2 renaming splitters (the figure above depicts such a grid for n = 5). A
process p_i first enters the grid at its left corner. Then it moves along the grid, either
down or right according to the values it obtains from the splitters, until it gets the
value stop. Finally, the value associated with the splitter where it stops is its new
name. The property attached to each splitter ensures that no two processes stop at the
same splitter.
Wait-Free Renaming Algorithm for M = n(n+1)/2:

function get_name(id_i)
    k_i <- 0; l_i <- 0; term_i <- false;
    while ((k_i + l_i < n-1) and (not term_i)) do
        X[k_i, l_i] <- id_i;
        if Y[k_i, l_i] then l_i <- l_i + 1        %move right%
        else Y[k_i, l_i] <- true;
             if (X[k_i, l_i] = id_i)
             then term_i <- true                   %stop%
             else k_i <- k_i + 1                   %move down%
             endif
        endif
    enddo;
    return (n * k_i + l_i - k_i(k_i - 1)/2)
    %the name is determined by the position [k_i, l_i] in the grid%


The resulting Moir-Anderson protocol is described in the algorithm above.
X[0..(n-2), 0..(n-2)] and Y[0..(n-2), 0..(n-2)] are upper-left triangular matrices of
shared variables (the entries of Y are initialized to false). k_i, l_i and term_i are local
variables of the invoking process p_i.
The update k_i <- k_i + 1 (resp. l_i <- l_i + 1) implements a down move (resp. a
right move). Setting term_i to true implements a stop. No process takes more than
(n-1) iteration steps. Moreover, as no two processes arrive at the same grid position
after taking (n-1) steps, the splitters of the diagonal (n-1, 0), (n-2, 1), ..., (0, n-1) of
X and Y are not used. It is relatively easy to see that the worst-case time complexity
is 4(n-1), that is, the maximum number of shared memory accesses.
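To tie the splitter and the grid together, here is a Python simulation of the Moir-Anderson grid renaming (my own sketch, not code from the thesis or from [2]; each individual read or write of X and Y is made atomic with a lock, and the grid is stored in dictionaries instead of triangular arrays).

import threading

def grid_renaming(ids, n):
    # Simulate Moir-Anderson grid renaming for the processes in `ids` (len(ids) <= n).
    X = {}                                  # X[(k, l)]: last identifier written at splitter (k, l)
    Y = {}                                  # Y[(k, l)]: set to True by the first non-right path
    lock = threading.Lock()
    names = {}

    def splitter(k, l, my_id):
        # Return 'stop', 'right' or 'down' for the splitter at position (k, l).
        with lock:
            X[(k, l)] = my_id               # line 1: write your identifier
        with lock:
            y = Y.get((k, l), False)
        if y:                               # line 2
            return 'right'
        with lock:
            Y[(k, l)] = True                # line 3
        with lock:
            x = X[(k, l)]                   # line 4: check identifier
        return 'stop' if x == my_id else 'down'

    def proc(my_id):
        k = l = 0
        while k + l < n - 1:
            move = splitter(k, l, my_id)
            if move == 'stop':
                break
            if move == 'right':
                l += 1
            else:                           # 'down'
                k += 1
        names[my_id] = n * k + l - k * (k - 1) // 2   # name derived from the grid position

    threads = [threading.Thread(target=proc, args=(i,)) for i in ids]
    for t in threads: t.start()
    for t in threads: t.join()
    return names

out = grid_renaming([101, 7, 55, 23, 88], n=5)
print(out)                                  # five distinct names in [0 .. n(n+1)/2 - 1]
assert len(set(out.values())) == len(out)

Since the names are taken directly from grid positions, at most n(n+1)/2 values are ever used, which matches the bound stated above.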


4. CONCLUSION
Throughout this thesis I have presented an introduction to asynchronous
computability and to the problems solvable in asynchronous systems. I presented the
wait-free renaming problem in detail, together with an overview of recent research
results about renaming and its relation to other problems in shared memory systems.
The thesis also gives an intuition of the difficulty of the renaming problem, of the
techniques used in this area, and of the underlying algorithmic principles. It presents
the state of the art of the renaming problem and is mainly devoted to clearly
describing the difficulties of the renaming problem in the shared memory model. It
also presents the algorithms and proofs for these models.


REFERENCES
[1] H. Attiya, A. Bar-Noy, D. Dolev, D. Peleg, R. Reischuk, Renaming in an
Asynchronous Environment. Journal of the ACM, 37(3):524-548, 1990.
[2] Moir M. and Anderson J.H., Wait-Free Algorithms for Fast, Long-Lived
Renaming. Science of Computer Programming, 25:1-39, 1995.
[3] M.J. Fischer, N.A. Lynch, M.S. Paterson, Impossibility of Distributed Consensus
with One Faulty Process. Journal of the ACM, 32(2):374-382, April 1985.
[4] D. Dolev, C. Dwork and L. Stockmeyer, On the Minimal Synchronism Needed for
Distributed Consensus. Proc. 24th Symp. on Foundations of Computer Science, 1983.
[5] Peterson G.L., Concurrent Reading while Writing. ACM TOPLAS, 5(1):46-55,
1983.
[6] Lamport L., A Fast Mutual Exclusion Algorithm. ACM Transactions on
Computer Systems, 5(1): 1-11, 1987.
[7] Herlihy M.P. and Shavit N., The Topological Structure of Asynchronous
Computability. Journal of the ACM, 46(6):858-923, 1999.
[8] Jayanti P., Chandra T.D. and Toueg S., Fault-Tolerant Wait-Free Shared Objects.
Journal of the ACM, 45(3):451-500, 1998.


[9] Michel Raynal, An Introduction to the Renaming Problem. Proc. Ninth Pacific
Rim International Symposium on Dependable Computing (PRDC'02), p. 121, 2002.
[10] Armando Castaneda, Sergio Rajsbaum, Michel Raynal, The Renaming Problem
in Shared Memory Systems. November 2010.
[11] Maurice Herlihy, Dmitry Kozlov, Sergio Rajsbaum, Combinatorial Topology and
Distributed Computing. January 18, 2011.
[12] Gafni E., Group Solvability. Proc. 18th Int'l Symposium on Distributed
Computing (DISC'04), Springer Verlag LNCS #3274, pp. 30-40, 2004.
[13] Afek Y., Gamzu I., Levy I., Merritt M. and Taubenfeld G., Group Renaming.
Proc. 12th Int'l Conference on Principles of Distributed Systems (OPODIS'08),
Springer Verlag LNCS #5401, pp. 58-72, 2008.
[14] H. Attiya and A. Fouren. Adaptive and efficient algorithms for lattice agreement
and renaming. SIAM Journal on Computing, 31:642-664, 2001.
[15] Bogdan S. Chlebus, Dariusz R. Kowalski, Asynchronous Exclusion Selection.
Journal of the ACM, 2008.
[16] Michel Raynal, Wait-Free Objects for Real-Time Systems (Position Paper). Proc.
5th Int'l IEEE Symposium on Object-Oriented Real-Time Distributed Computing
(ISORC'02), IEEE Computer Press, pp. 413-420, Washington DC, 2002.


[17] Eli Gafni, Achour Mostefaoui, Michel Raynal and Corentin Travers, From
Adaptive Renaming to Set Agreement. Theoretical Computer Science, 410 (2009)
1328-1335.