PEER TO PEER MESSAGING SYSTEM FOR DISTRIBUTED
AND PARALLEL APPLICATIONS
Geoffrey Dix Jr.
B.S., Metropolitan State College of Denver, 2002
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
This thesis for the Master of Science in Computer Science degree by
Geoffrey Dix Jr.
has been approved
Dix Jr., Geoffrey (M.S., Computer Science)
Peer to Peer Messaging System for Distributed and Parallel Applications
Thesis directed by Assistant Professor Ilkyeun Ra
Traditional parallel applications are commonly built using Message Passing
Interface (MPI) or Parallel Virtual Machine (PVM) or a similar message-passing
library. Using one of these parallel computing libraries, developers are able to start
processes on remote machines and gain the computing power of those machines.
Both MPI and PVM require a priori knowledge of participating machines in addition
to the requirement that applications be copied to and compiled on each machine. This
limitation does not have a negative effect on developers when all of the computers
involved are on the same local network, but if there are remote machines outside the
network that could also be utilized, the process becomes more complex. The fact that
programs need to be compiled for each machine architecture as well as for each
operating system, combined with the fact that IP addresses must be resolved
beforehand, minimizes the use of resources outside the local computing environment.
The peer-to-peer computing paradigm offers a solution to this problem. Using
a peer-to-peer network, resources can be located dynamically to offer location
transparency to the requesting machine. This means no a priori machine information
is necessary before the application is run. This still leaves the problem of having to
distribute and compile the application on all participating nodes. Selecting the Java
programming language to implement the application can alleviate this problem.
Because Java is mostly platform independent, the application code can be packaged,
distributed, and run on any platform.
In this thesis we introduce ParaPeer, a system that attempts to address these
shortcomings by providing a convenient API for parallel and distributed computing
application development. It allows developers to create distributed applications that
are bundled in a Java JAR file and run them on participating peer machines. With
JXTA as the peer network, ParaPeer creates a cooperative work environment that
allows for dynamic discovery of participants, automatic distribution of application
code, and Java object serialization for ease of application development.
This abstract accurately represents the content of the candidate's thesis. I recommend
its publication.
I dedicate this thesis to my wife for always believing in me and supporting me
throughout this whole process.
I would like to thank my advisor, Ilkyeun Ra, for the direction he provided me over
the past year and a half.
1.2 Problem Statement
2. Related Work
2.1 Parallel Computing
2.1.1 MPI (Message Passing Interface)
2.1.2 PVM (Parallel Virtual Machine)
2.3 Peer-to-Peer Systems
2.3.1 Peer-to-Peer Computing
2.4 Peer-to-Peer Distributed Computing Systems
3. ParaPeer Distributed Computing System
3.3 Architecture and Design
3.3.1 High-Level Architecture
3.4 Implementation Specifics
3.4.1 ParaPeer Daemon Communication
3.4.2 ParaPeer Slave Application Management
3.4.3 ParaPeer Master Application
3.5 A ParaPeer Client Application Example
3.5.2 Setting Up the Daemon
3.5.3 ParaPeer Application addNumbers
4.1 Comparison of Alternatives
4.2 Performance Analysis
4.2.1 Analysis Environment
4.2.2 Object Serialization
4.2.3 Data Transfer Comparisons
6. Future Work
Figures
3.1 ParaPeer Protocol Stack
3.2 ParaPeer Architecture
3.3 ParaPeer Package Diagram
3.4 JxtaPipeManager Class Diagram
3.5 Application Management Class Diagram
3.6 ParaPeerJob Class Diagram
3.7 ParaPeer Startup Screen
4.1 Serialization Times
4.2 PTP Data Transfer Times
4.3 Group Data Transfer Times
4.4 Point-to-Point Socket Transfer Times
Tables
3.1 JxtaPipeManager Methods
4.1 Communication Comparisons
4.2 Distributed Systems Comparisons
Listings
3.1 addNumbers Initialization
3.2 addNumbers Slave Start Method
3.3 addNumbers Master Job Start
1. Introduction
Most parallel computing libraries and environments were built before the
proliferation of platform independent languages such as Java and before computing
resources outside of the local networks were seriously considered for their available
unused computing power. The use of parallel computing systems like Message
Passing Interface (MPI) and Parallel Virtual Machine (PVM) has shown that
additional speedup can be gained by distributing work to cooperative computers on a
network. High performance scientific applications have long made use of the
availability of clusters of workstations (CoW) rather than always relying on
supercomputer power. The focus of these applications has been to use computers
already on the same network and to specify the set of machines on which to run the
application in parallel. Recent developments in Internet technology and peer-to-peer
networks means that applications of this type do not necessarily need to limit
themselves to running on only known machines and only on local networks.
The Java programming language provides additional flexibility in that it does
not require that client applications be compiled for each target architecture. Since
Java compiles into platform-neutral byte code, it can run on any
machine that has access to a JVM (Java Virtual Machine). This significant portability
feature of Java makes it a particularly good choice as an implementation language for
building distributed applications. The increased use and popularity of Java is
partially due to its "compile once, run anywhere" nature.
Additionally, peer-to-peer networks have emerged to provide a decentralized
architecture that eliminates a single point of failure. While eliminating the
centralized computing architecture, a number of peer-to-peer networks also provide
mechanisms for dynamic resource discovery in addition to dynamic cooperative
group formation. This combination lends itself to distributed application
development.
The area of parallel computing has typically relied on local resources to accomplish
performance increases over serial execution. Both MPI and PVM are good examples
of parallel computing systems that rely heavily on local resources as well as known
resources. Though this has proven effective, it fails to recognize or utilize existing
resources that may be available but might be outside the local network. One solution
to this would be to have distributed parallel applications request resources through a
central server. This would eliminate the need for each node to know all of the
participating nodes' IP addresses or be able to address them in some other way. Each
node would only have to know about the server. The problem with this solution is
that it does not take into account the load on the server. A single
server could be inundated with an inordinate number of requests to send and retrieve
data to and from different nodes. The server would become a bottleneck,
significantly reducing the gains hoped for by distributing the work in the
first place. An alternative to both the traditional parallel computing and client-server
paradigms is peer-to-peer communication. In peer-to-peer networks each node
operates as an equal to each other node. This eliminates the reliance on a central
server. In addition, typical peer-to-peer networks are built with the Internet in mind.
This provides disparate resource acquisition that is for the most part unavailable when
using a traditional parallel computing system. Another advantage to peer-to-peer
systems is that they can provide location transparency.
1.2 Problem Statement
Current application development in distributed computing is centered on a client-
server architecture. One example of this is the use of Java 2 Enterprise Edition
(J2EE) distributed applications. Typical J2EE applications are built to run on a
central application server and would be completely unavailable if the server were to
go down. Additionally, most resource sharing applications require a priori
knowledge of all participating nodes. This is required for the sake of application
compilation and distribution, as well as for data routing. Typical parallel computing
applications require a list of hosts on which the communication software is running
for the purposes of starting the application on the remote nodes.
In this thesis we will introduce a distributed computing system that allows
participating nodes to share computing resources and achieve increased performance
through partitioning of workload in addition to the automatic distribution of
application code. The dynamic nature of peer-to-peer networks will be exploited to
create an ad-hoc group of cooperative nodes on which distributed and parallel
applications can be run. The system will provide an Application Programming
Interface (API) that will enable developers to create applications that can be
automatically distributed at runtime. Another main advantage of the system is that it
allows for Java object serialization simply by passing the object through the API. Not
only will the system eliminate some of the static nature of parallel applications, it will
also allow for greater flexibility in the type of data that can be distributed to
participating nodes.
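The object-passing convenience described above can be sketched with standard Java serialization. The ParaPeer API itself is not shown here; the Task class and all method names below are invented for illustration only.

```java
import java.io.*;

public class SerializationSketch {
    // Hypothetical unit of work; any Serializable object could be passed
    // through a messaging API this way.
    public static class Task implements Serializable {
        public final int start, end;
        public Task(int start, int end) { this.start = start; this.end = end; }
    }

    // Serialize an object to bytes, as a sender would before transmission.
    public static byte[] toBytes(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialize bytes back into an object, as a receiver would.
    public static Object fromBytes(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Task received = (Task) fromBytes(toBytes(new Task(0, 100)));
        System.out.println(received.start + ".." + received.end); // prints 0..100
    }
}
```

Passing an object through an API of this kind frees the developer from designing a wire format by hand, at the cost of requiring Java on both ends.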
A literature review of existing parallel computing systems and libraries is presented in
Chapter II. The advantages and design features of both MPI and PVM will be
discussed. Additionally, the drawbacks of these systems help us define the
requirements for a new system. The chapter will conclude with topics on JXTA and
current research related to parallel and distributed computing in the peer-to-peer
environment. Chapter III introduces the ParaPeer system for parallel and distributed
computing using peer-to-peer communication methods. The prototype of the
ParaPeer system will be developed and the key features of the system will be
discussed. An evaluation of the system, including performance analysis will be
provided in Chapter IV. Chapter IV will also discuss Java object serialization to
Extensible Markup Language (XML) through the use of XStream and the XML Pull
Parser (XPP). This technology is important as it increases data transfer flexibility in
cooperating distributed applications. The conclusion in Chapter V will give an
overall breakdown of the system and its possibilities for use in distributed and parallel
applications. Additionally, Chapter VI will discuss the possibilities for future research
involving the ParaPeer system.
2. Related Work
Many parallel and distributed systems exist for use in application development. Two
of the most common parallel computing systems are MPI and PVM. These systems
offer programming interfaces to application developers enabling them to run
applications in parallel on known participating nodes. This chapter will discuss these
two parallel computing message passing systems and evaluate their architecture
among other things. The emergence of the Java Message Service (JMS) has led
to additional enterprise application development using messaging as the
communication method. The JMS specification will be discussed for its strengths
and weaknesses. Additionally, peer-to-peer systems including Pastry, Gnutella,
and JXTA will be introduced and their different architectures analyzed. The
development of peer-to-peer systems has led to the implementation of distributed
parallel computing applications on top of peer-to-peer networks. Two such
applications that will be discussed are P3 and JNGI.
2.1 Parallel Computing
Parallel computing offers a way to utilize (often local network) compute cycle
resources for the purpose of increased performance through divide and conquer
algorithms. Many compute intensive applications are built as parallel applications in
the absence of supercomputers. Parallel computing supports a number of different
models that define how the application is split up into multiple simultaneously
computable units. MPI and PVM are two of the most popular parallel computation
enabling systems. The two parallel programming models used by MPI and PVM
are SPMD (Single Program Multiple Data) and MIMD (Multiple Instruction Multiple
Data). The SPMD model is structured such that each participating node
performs the same function with a different set of data. This is the most common
parallel programming model and the one whose performance gains are most easily
measured. The MIMD model allows each participating node to perform a different
function with different data. This allows the nodes to be handling different aspects
of program execution at the same time, thus achieving a speedup due to the
distribution of separate tasks. PVM and MPI both support either of these two models.
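As a minimal illustration of the SPMD idea (this is not MPI or PVM code), the sketch below runs the same summing function over different slices of an array on separate Java threads; the class and method names are invented for the example.

```java
public class SpmdSketch {
    // Every "node" runs the same program: sum its own slice of the data.
    static long sumSlice(int[] data, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += data[i];
        return s;
    }

    // Split the array across nWorkers threads (one program, many data
    // partitions -- the SPMD model), then combine the partial results.
    public static long parallelSum(int[] data, int nWorkers) {
        long[] partial = new long[nWorkers];
        Thread[] workers = new Thread[nWorkers];
        int chunk = (data.length + nWorkers - 1) / nWorkers;
        for (int w = 0; w < nWorkers; w++) {
            final int id = w;
            final int from = Math.min(id * chunk, data.length);
            final int to = Math.min(from + chunk, data.length);
            workers[w] = new Thread(() -> partial[id] = sumSlice(data, from, to));
            workers[w].start();
        }
        for (Thread worker : workers) {
            try {
                worker.join();
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }
        long total = 0;
        for (long p : partial) total += p;
        return total;
    }

    public static void main(String[] args) {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(parallelSum(data, 4)); // prints 500500
    }
}
```

In a real MPI or PVM program, the "workers" would be processes on different machines and the final combination would be a reduction operation rather than a local loop.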
2.1.1 MPI (Message Passing Interface)
MPI was initially developed at Argonne National Laboratory with MPI Standard 1.1
being released in June 1995. The system provides a communication mechanism
for parallel applications. Some of the major design goals as stated in the MPI
Standard are:
- Design an application interface, not a library
- Heterogeneous implementation environment
- Semantics of interface should be language independent
These (and additional) design goals govern how implementations of MPI
behave. This guarantees a common interface and language bindings for both C and
FORTRAN.
The message-passing model employed by the MPI architecture allows collective
process group communication as well as point-to-point communication. MPI is able
to send and receive derived data types that are specified by the application
programmer. All participating MPI nodes are specified in a hosts file prior to
application startup. When calls are made against the MPI library the hosts file is used
to identify the appropriate nodes and address them accordingly. MPI supports both
synchronous and asynchronous communication through both the collective and point-
to-point communication models.
One of the most common implementations of MPI is MPICH. To create a
minimal parallel application using MPICH, the developer first initializes the MPI
system with a call to MPI_Init. This call includes command line arguments passed to
MPI. The next step would be to call the functions MPI_Comm_size and
MPI_Comm_rank to determine the number of nodes participating in the computation
and the rank of this specific node. After that, communication calls such as MPI_Send
or MPI_Bcast and MPI_Recv or MPI_Reduce would be made to perform point-to-
point or group communication. The final step is a call to MPI_Finalize to end the
MPI computation. In addition to the above synchronous communication methods
MPI also provides asynchronous communication.
Though the availability of MPI implementations on multiple computer architectures
allows for the possibility of heterogeneous development, to move between platforms
the application developer must consider the available libraries and system calls on
each architecture. Another drawback to the MPI architecture is that participating nodes
must be known at run-time. This results in a very static distributed computing
environment that takes significant effort to simply add an additional cooperating
node. In addition to requiring a priori knowledge of machines in the group, MPI
requires the distribution and possible compilation of application code on host
machines. This step is necessary because the MPI bindings, which are implemented
in C and FORTRAN, are not platform independent. This means that an application
compiled on a Sun Ultra that includes an Intel machine running Linux in the hosts file
requires a full recompilation on the Intel machine. At a minimum, in the case of a
completely homogeneous environment, the executable must still be distributed
to all target machines or made available to them through the use
of a shared disk or other resource.
2.1.2 PVM (Parallel Virtual Machine)
PVM is another message passing parallel programming system. PVM was developed
before MPI and therefore MPI borrows many concepts from PVM. As is evident in
the name (Parallel Virtual Machine) PVM attempts to provide developers with the
feel of using a single machine while distributing workload and additional resources
behind the scenes. In addition to the single virtual machine view that PVM attempts
to achieve, it also boasts the ability to accommodate a heterogeneous network of
computers.
Like MPI, PVM functionality is contained in daemons running on host machines and
support for application development is achieved through the use and availability of
user libraries. Similarly to MPI, PVM passes data structures specified by the system
itself. These data structures include the common data types (integers, reals, and
strings) as well as complex types such as C structures. Another advantage of
PVM is that at any point during execution any task can start or stop another task as
well as remove or add computers to the virtual machine. MPI and PVM also
share the similar host file setup that requires a priori knowledge of other machines.
One important difference between MPI and PVM in this respect is that PVM allows
for runtime addition and/or deletion of host machines. Like MPI, PVM has
language bindings for C, C++, and FORTRAN.
In the PVM model, each separately computable portion of code is segmented
out as a task. To properly address messages to specific host machines, PVM
issues each task a task id. This allows the calling code to send messages to a
specific task rather than a specific machine. Additionally, PVM applications can
address groups of hosts that have formed for use in processing a specific task as well
as groups that they are not themselves a part of. In PVM, data is sent to
participating nodes by first packing the data and then sending it. On the receiving
side, data needs to be unpacked before it can be used. These tasks are accomplished
through the pvm_pk*() and pvm_upk*() families of routines, respectively.
The pvm_spawn() function is used to start tasks on participating nodes. Additionally,
upon completion of the computation each node must make a call to pvm_exit() to
properly release all resources. The message sending and receiving routines in PVM
are similar to MPI, and like MPI, PVM supports point-to-point and group
communication as well as synchronous and asynchronous communication.
Though PVM supports heterogeneity, it still requires that each host have previously
compiled application code available for its architecture. Also, because of the data
structure format used to pass messages, PVM, like MPI, offers no
facility for passing objects to participating hosts. PVM's
support for adding or deleting nodes at runtime is definitely an improvement
on the MPI model, but it still requires knowledge of a machine's IP address or some
other way to locate and address messages to the computer.
2.2 JMS (Java Message Service)
The Java Message Service (JMS) is used for enterprise messaging. JMS is a
specification that provides a common form of messaging for creating, sending,
receiving, and reading enterprise messages. As is typical of most messaging
systems, JMS provides an interface for point-to-point messaging (queues) and
broadcast messaging (topics).
The JMS architecture provides a publish/subscribe model of message passing. This
supports one-to-one communication, one-to-many communication, many-to-one
communication, and if desired many-to-many communication. The most common
communication models are point-to-point (one-to-one) and broadcast (one-to-many).
These models map to queues and topics respectively in the JMS architecture.
According to the specification, "The JMS PTP model defines how a client works with
queues: how it finds them, how it sends messages to them, and how it receives
messages from them." The publish/subscribe or producer/consumer model of
communication defines a relationship similar to a broadcast message across a network.
The main difference is that the publish/subscribe model can accommodate multiple
publishers as well as multiple subscribers.
The two different communication models are defined as the Point-to-Point
domain and the Publish/Subscribe domain. Though JMS provides specific definitions
of how to connect to the specific domains, it is preferred that the domain neutral
specification be used for the purpose of maximum flexibility. The common
domain neutral interfaces are:
- ConnectionFactory: used to create connections to a JMS provider
- Connection: the connection to the JMS provider
- Destination: used to address a message to a specific endpoint or topic
- Session: created by a connection to produce and consume messages
- MessageProducer: used by a JMS client to send messages to a destination
- MessageConsumer: used by a JMS client to receive messages from a destination
The JMS provider is an implementation of the JMS specification that is usually run
on an application server.
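A real JMS client would use the javax.jms interfaces listed above against a running provider; since no provider is assumed here, the sketch below imitates only the point-to-point (queue) delivery pattern using a standard BlockingQueue, with all names invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSketch {
    // In the JMS PTP model each queued message is delivered to exactly one
    // consumer; a BlockingQueue gives the same one-receiver behavior.
    public static String roundTrip(String body) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        StringBuilder received = new StringBuilder();
        Thread consumer = new Thread(() -> {
            try {
                received.append(queue.take()); // like MessageConsumer.receive()
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        try {
            queue.put(body);                   // like MessageProducer.send()
            consumer.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return received.toString();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("job-1")); // prints job-1
    }
}
```

The essential PTP property shown is that a message placed on the queue is consumed exactly once; a topic (publish/subscribe) would instead copy the message to every subscriber.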
JMS provides a flexible way to communicate with many nodes at the same time while
also providing for point-to-point communication. Additionally, if all Java clients are
used, JMS could also support serialization of objects. A significant drawback of JMS
is the single point of failure. JMS providers are typically deployed on an application
server that is hosted by a single machine. It is also the case that a number of JMS
providers do not provide for dynamic destination creation, meaning that the server
must be configured ahead of time to handle messages to specific queues and topics.
2.3 Peer-to-Peer Systems
2.3.1 Peer-to-Peer Computing
File sharing applications such as Napster (though not a true peer-to-peer model)
and Gnutella popularized the peer-to-peer computing model. These applications
showed how software can take advantage of a peer-to-peer communication model to
provide services for interested parties. Peer-to-peer systems commonly define how
peers communicate, how common peer services are offered, and how peers discover
other peers. A common thread shared by peer-to-peer systems is that they are
decentralized, self-organized, adaptable, and scalable. Another commonality
between the different peer-to-peer systems is that each node is both a client and a
server at the same time. Additionally, because peer-to-peer systems are adaptive and
self-organizing, most of them require some type of overlay network to properly
address other peers without depending on IP addresses. Some of the more common
peer-to-peer networks in use today are: Pastry, Gnutella, and JXTA.
According to its creators, Pastry is "a generic peer-to-peer object location and
routing scheme, based on a self-organizing overlay network of nodes connected to the
Internet." As with most peer-to-peer systems, Pastry exhibits the qualities of a
decentralized, reliable, scalable, and fault-resilient system. Its purpose is to be
used as the core for various peer-to-peer applications.
The Pastry network assigns a 128-bit node id to each node in the network. This id
is used to properly identify and address each particular node. To communicate with
other nodes, each Pastry node maintains a routing table, a neighborhood set, and a
leaf set. Routing tables contain IP addresses of other nodes that have a certain
degree of commonality in the prefix of their node ids. The neighborhood set
contains the ids of neighboring nodes. The leaf set contains the node ids of the
closest nodes as determined by the value of their 128-bit node ids. To route
messages, when a node receives a message it first checks its leaf set to determine if it
has direct knowledge of the node the message is meant for; if not, the node
checks its routing table. In the routing table, Pastry tries to find a node whose
prefix match with the destination node id is at least one digit longer
than the current node's. If a node with a longer prefix match
does not exist, then a node with the same number of matching prefix digits as the
current node and a value closer to the destination node's 128-bit id is chosen.
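The routing rule just described can be sketched as pure logic: prefer a candidate whose id shares a longer prefix with the destination, otherwise one that is equally matched but numerically closer. This is an illustration only, using short hexadecimal ids instead of Pastry's 128-bit ids, and all names are invented.

```java
import java.util.List;

public class PrefixRoutingSketch {
    // Length of the common hex-digit prefix of two equal-length ids.
    static int sharedPrefix(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }

    // Absolute numeric distance between two hex ids.
    static long distance(String a, String b) {
        return Math.abs(Long.parseLong(a, 16) - Long.parseLong(b, 16));
    }

    // Choose the next hop for a message addressed to dest: prefer a node
    // with a strictly longer shared prefix than the current best; failing
    // that, a node with an equal prefix that is numerically closer.
    public static String nextHop(String current, String dest, List<String> candidates) {
        String best = current;
        for (String c : candidates) {
            int p = sharedPrefix(c, dest);
            int bestP = sharedPrefix(best, dest);
            if (p > bestP || (p == bestP && distance(c, dest) < distance(best, dest))) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> table = List.of("a1f0", "37c2", "65a1");
        // "a1f0" shares the prefix "a1" with the destination "a1b3".
        System.out.println(nextHop("65a1", "a1b3", table)); // prints a1f0
    }
}
```

Because the prefix match grows by at least one digit per hop (or the numeric distance shrinks), routing converges in a logarithmic number of hops, which is the property Pastry relies on.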
The two key API calls available in Pastry are: pastryInit(Credentials,
Application), which returns the node id assigned to the calling node, and route(msg,
key), which sends a message to the specified key. Additionally, applications that
are built on top of Pastry must implement the following three functions: (1)
deliver(msg,key), which is called when the current node is the final destination
of the message; (2) forward(msg,key,nextId), which is called as the message is
routed through the current node toward its destination; and (3) newLeafs(leafSet),
which alerts the current node to changes in its leaf set.
The process of new node arrival into the Pastry system is handled by an
adjacent node, which is either previously known by the arriving node or can be found
using a number of different searching methods. The arriving node then tells its
known node to forward the join request that indicates its arrival to the numerically
closest nodes as determined by the prefix in its newly assigned node id. Upon
receipt of the join request, the adjacent nodes respond to the arriving node with their
state tables so that the node may create its own tables. At this point the arriving
node also needs to ensure that all nodes in its routing table are aware of its state and
therefore sends that information to each node in the table. Departing nodes are
detected when neighboring nodes realize that the node is no longer accessible.
When this happens, the neighboring node requests the leaf table of the
node nearest the departed node. This table is used to modify the node's own leaf
table, replacing the departed node with a similarly close node. All neighbor nodes
that recognize the departure of the node proceed in the same manner.
Pastry is a good example of an effective peer-to-peer system that provides for and
efficiently handles the dynamic nature of peer-to-peer networks. Its ability to handle
node failures and quickly integrate new peers into the network is a necessity when
dealing with peer-to-peer systems. Though it has been around since 2001, Pastry
does not appear to have as cohesive a user community as Gnutella and JXTA enjoy.
This limitation is likely a result of the lack of a popular associated application (as
Gnutella has with file sharing) in addition to the fact that there seems to be greater
open source support for Java related projects (as the Java implementation of the
JXTA protocols has). This minimal user community leads to a lack of support and
advancement in the core of the Pastry network.
Though Gnutella was originally created for the specific purpose of peer-to-peer file
transfer and distributed searching, it defines a set of protocols that can be used in
general peer-to-peer interactions. Just like other peer-to-peer systems, each node
in the Gnutella network is both a client and a server. The Gnutella protocols define
the methods of communication between peers across a network .
The Gnutella discovery process is performed through the use of Ping and Pong
messages. The Ping messages are used to discover nodes on the network. The
Pong messages are the responses to Ping messages that include the addressable
location of the peer as well as information on currently available data. When a
peer enters the network, it announces its arrival by sending a message to another peer.
Though not part of the protocol, the arriving peer commonly uses a host cache service
to find a peer on the network. Connections are then made with other peers through
a direct TCP handshake.
Additional types of messages in the network include the Query, the
QueryHit, and the Push. The Query message initiates a search on the
network for the specified item. The QueryHit message is a response to a Query
notifying it of a potential match. The Push message is the message that enables
peers behind a firewall to make data available to the rest of the network. To
maintain the network paths and prevent unexpected messages from being propagated
through the network, Gnutella response messages (Pong and QueryHit) must
follow the same path to the requesting peer as was used to reach the responding peer.
If a response message is found that has no matching request, it is removed from
the network.
Each peer keeps track of the peers around it by saving data from the Ping
messages sent out. This allows the peer to forward any request messages from a peer
to all peers it knows about to ensure that the request will be fulfilled. When a
peer initiates a download of a resulting QueryHit, the connection is made outside of
the Gnutella network. The network does not handle any actual transfer of file data;
all of this effort is handled through an HTTP connection with the peer that has the
file available.
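The forwarding behavior described above can be sketched as a TTL-limited flood over a peer graph. The sketch below is illustrative only (no real Gnutella messages or sockets), and all names are invented.

```java
import java.util.*;

public class FloodSketch {
    // Peers reached when a Ping/Query is forwarded to every known neighbor,
    // decrementing a TTL at each hop and never revisiting a peer -- the
    // mechanism Gnutella uses to bound message propagation.
    public static Set<String> flood(Map<String, List<String>> neighbors,
                                    String origin, int ttl) {
        Set<String> visited = new HashSet<>();
        Deque<Object[]> frontier = new ArrayDeque<>(); // {peer, remaining ttl}
        visited.add(origin);
        frontier.add(new Object[] { origin, ttl });
        while (!frontier.isEmpty()) {
            Object[] entry = frontier.poll();
            String peer = (String) entry[0];
            int remaining = (Integer) entry[1];
            if (remaining == 0) continue;              // the message expires here
            for (String next : neighbors.getOrDefault(peer, List.of())) {
                if (visited.add(next)) {               // forward once per peer
                    frontier.add(new Object[] { next, remaining - 1 });
                }
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> net = Map.of(
                "a", List.of("b", "c"),
                "b", List.of("d"),
                "d", List.of("e"));
        // With TTL 2, peers a, b, c, and d are reached; e is three hops away.
        System.out.println(flood(net, "a", 2));
    }
}
```

The visited set plays the role of Gnutella's duplicate-message detection, and the TTL keeps a single Query from saturating a large network.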
The Gnutella network provides a dynamic peer-to-peer system for distributed
searches. The network is geared directly toward file and data transfer and is not
flexible enough to easily build alternative applications on top of it. The idea of
keeping bandwidth intensive data transfers off of the network is good for maximizing
the scalability of the system.
Project JXTA was started by Sun Microsystems with the main goals of platform
independence, interoperability, and ubiquity. JXTA is not an acronym; rather, it is
short for "juxtapose," as peer-to-peer computing stands side by side with client-server
and Web-based computing.
JXTA is a set of protocols that defines peer communication, group formation, and
discovery (among other things). It forms a virtual overlay network of groups and
resources that sits on top of the underlying hardware and actual transportation
software (TCP/IP, HTTP). The fact that it is simply a set of protocols affords JXTA
the language and operating system independence that were also two of the most
desired design goals of the project. Additionally, JXTA communication can be
specified as XML, thus allowing communication to be language as well as platform
independent. There are six core JXTA protocols:
Peer Discovery Protocol
Peer Resolver Protocol
Peer Information Protocol
Rendezvous Protocol
Pipe Binding Protocol
Endpoint Routing Protocol
These six protocols define peer-to-peer interaction and general group
formation and communication in the JXTA network. A brief description of each
protocol's functionality as a component of the JXTA network follows.
The Peer Discovery Protocol provides the ability to discover peer resources
(this includes peer groups, peers, pipes, and other services offered by the peer group).
The Peer Resolver Protocol is used to define and send generic queries to other
peers to inquire about the availability or status of peer resources. The Peer
Information Protocol is used to monitor other peers in the peer group. This allows for
checking whether a peer is still available using a ping message, as well as measuring
uptime and network congestion. The Rendezvous Protocol defines how rendezvous
peers propagate messages within a peer group. The Pipe Binding Protocol allows a
peer to bind a pipe advertisement to a pipe endpoint; a pipe "can be viewed as an
abstract, named message queue that supports a number of abstract operations." The
Endpoint Routing Protocol provides peers the ability to navigate firewalls and NAT.
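The "abstract, named message queue" view of a pipe can be modeled in a few lines. This is an illustrative model only, not the JXTA pipe API; the class and method names are invented:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative model of a pipe as a named message queue: senders enqueue
// messages by pipe name, and whichever peer has bound the pipe drains them.
class NamedPipes {
    private final Map<String, BlockingQueue<String>> pipes = new ConcurrentHashMap<>();

    // Binding a pipe advertisement to an endpoint is modeled here as
    // creating (or looking up) the named queue.
    BlockingQueue<String> bind(String pipeName) {
        return pipes.computeIfAbsent(pipeName, n -> new LinkedBlockingQueue<>());
    }

    // Asynchronous, unidirectional send: no reply channel is implied,
    // matching the default JXTA pipe semantics described below.
    void send(String pipeName, String message) {
        bind(pipeName).add(message);
    }

    String receive(String pipeName) throws InterruptedException {
        return bind(pipeName).take();
    }
}
```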
At the core, the protocols define how peers within a certain peer group
interact. Peer groups are defined as a collection of peers that provide a common set
of services. Additional properties of peer groups are:
Peers self-organize to create peer groups
Peers can implement a membership policy that limits access to the group
Peers can belong to multiple peer groups
Peer groups are identified by a unique group ID
All peers by default belong to the Net Peer Group
With these properties of peers in mind, we can now discuss individual peer
roles within a peer group. There are three distinct peer roles: edge peers,
rendezvous peers, and relay peers. Edge peers maintain a connection to a rendezvous
peer at all times. This is an edge peer's primary means of discovering resources
throughout the peer group. Rendezvous peers propagate messages throughout the
peer group and provide peers with a means to discover resources throughout the
group. Relay peers are used to route messages past firewalls, proxies, and NAT.
All resources in JXTA (e.g. peers, peer groups, pipes, or services) are
represented by an XML description document called an advertisement.
Advertisements uniquely (though not globally) identify a resource by specifying what
type of resource it is and by assigning it an ID. Resources are subsequently
discovered by searching for their advertisements. The following advertisement types
are defined by JXTA:
Peer Advertisement describes a peer resource
Peer Group Advertisement describes the different peer group related
services and resources
Pipe Advertisement describes the different pipe endpoints and the pipe type
Module Class Advertisement describes and documents the availability of
a module class
Module Spec Advertisement defines the specifications for a module class
Module Impl Advertisement describes and defines an implementation of
an existing module specification
Rendezvous Advertisement describes the availability of a peer that acts
as a rendezvous for other peers in the peer group
Peer Info Advertisement describes peer resource information (this
provides peer availability and usage statistics)
These advertisements define how peer group resources within JXTA are
discovered. Rendezvous peers keep an index to other peer machines that contain the
advertisements. The rendezvous peer does not actually store known advertisements,
rather it stores their locations.
Messages in JXTA can be represented in XML or in binary. The XML
representation of the message gives it the language and platform flexibility desired by
JXTA architects. The binary representation is used to compact the messages if
necessary. Additionally, binary data can be encoded and sent using Base64 encoding.
This is a particularly useful feature, especially when binary data must be carried
inside text-based (XML) messages.
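Base64 encoding of a binary payload for transport in a text message can be sketched with the standard Java library. The XML element name below is illustrative, not a JXTA message schema:

```java
import java.util.Base64;

// Sketch of carrying binary data inside a text (XML) message via Base64,
// as JXTA allows. Only the encoding step is JXTA-relevant; the surrounding
// element is a made-up example.
class Base64Payload {
    static String toXmlElement(byte[] binary) {
        String encoded = Base64.getEncoder().encodeToString(binary);
        return "<payload encoding=\"base64\">" + encoded + "</payload>";
    }

    static byte[] fromBase64(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }
}
```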
Pipes are the communication mechanism over which messages are passed. By
default they provide only asynchronous, unidirectional message passing capabilities.
There are, however, implementations of bi-directional pipes, and synchronous
message passing capabilities can be built using the basic pipe constructs. There
are three kinds of pipes: point-to-point pipe (connect one peer to another), propagate
pipes (connect one peer to multiple peers), and secure unicast pipes (provide secure
transfer of data using Transport Layer Security).
The nature of peer-to-peer computing makes fault tolerance an important
consideration when developing a parallel and distributed programming API. The fact
that peers may drop off and appear at any time makes it necessary to account for the
very dynamic nature of the network. At the same time, the ability to dynamically
discover resources allows for the development of a system that can increase compute
capacity without manually adding nodes.
2.4 Peer-to-Peer Distributed Computing Systems
As the peer-to-peer computing paradigm has proved to offer dynamic, decentralized,
and fault-resilient networks, it has increasingly been used as an alternative to the
client/server computing model for building the same kinds of applications. This is
evident in the
file-sharing transition from a simple FTP site, to a semi-peer-to-peer Napster model,
to a fully peer-to-peer Gnutella network. Additionally, the advantage of sharing
resources through the use of peer-to-peer networks is currently being explored.
Current projects include the distribution of computing power. Two such systems that
attempt to provide parallel computing facilities are Parallel Peer-to-Peer (P3) and
JNGI.
The P3 system is geared towards parallel computation using the peer-to-peer
computing model to implement the infrastructure. The system provides
portability through the use of Java. As with other peer-to-peer systems, the P3
system is autonomous and self-healing.
The P3 system creates its own distributed file-system, object space, and application
management system. The P3 system contains two different kinds of nodes:
manager nodes and compute nodes. According to the authors, this
hierarchy was chosen because previous research on Napster and
Gnutella concluded that scalability decreased in an environment of
complete decentralization. This resulted in a decision to create the manager nodes as
hybrid nodes that combine the peer-to-peer model and the traditional
client-server model. The manager nodes are responsible for keeping track of
routing information, file-system data, and application status. To provide high
availability and fault-tolerance, each primary manager node has at least one shadow
node that keeps consistency with the primary manager node. Compute nodes are
any nodes that are not currently manager nodes. They are used to provide disk space
and computation power. Compute nodes only contribute processing power to a
single P3 application at a time.
The shared file-system portion of P3 is also a shared Java object storage
system. Searching facilities are also provided, enabling the storage of
application code. To prevent name clashes with the shared objects residing in
the file system, each P3 application contains at least one private namespace that can
be used for its own object storage. In addition, the file system provides a
caching functionality that provides for high availability and deadlock avoidance.
The P3 system is built as a three-tier model. The bottom tier is the Kernel,
which provides for discovery, communication, and object access. The middle tier is
the Services tier and contains available standard and component services available in
the P3 system. The top tier is the Application tier, where P3
applications reside and gain their point of entry into the P3 system.
Application developers using the P3 system would provide the functions
p3compute and p3divide as overridden methods of the class P3Parallel (which must
be extended by any P3 application). These functions would be used to collect and
post-process results, and to divide the parallel data set, respectively. In addition
to these two methods, the developer must also override p3main and p3restart for the
purposes of starting and restarting the P3 application.
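The P3 developer API described above might look roughly as follows. The class and method names (P3Parallel, p3divide, p3compute, p3main, p3restart) come from the P3 description; the signatures and the abstract base class itself are invented here purely for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the P3 developer API: the real signatures are not
// specified in this text, so simple array-based ones are assumed.
abstract class P3Parallel {
    abstract List<int[]> p3divide(int[] data, int parts); // split the data set
    abstract int p3compute(int[] part);                   // process/collect one part
    abstract void p3main();                               // application entry point
    abstract void p3restart();                            // recovery entry point
}

// Example application: summing an array in parallel-sized chunks.
class SumApp extends P3Parallel {
    @Override List<int[]> p3divide(int[] data, int parts) {
        List<int[]> out = new ArrayList<>();
        int chunk = (data.length + parts - 1) / parts;  // ceiling division
        for (int i = 0; i < data.length; i += chunk)
            out.add(Arrays.copyOfRange(data, i, Math.min(i + chunk, data.length)));
        return out;
    }
    @Override int p3compute(int[] part) {
        int sum = 0;
        for (int v : part) sum += v;
        return sum;
    }
    @Override void p3main() { }     // would submit the divided parts as tasks
    @Override void p3restart() { }  // would resume from saved state
}
```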
The P3 system has many advantages, including the shared object space and
scalability. As far as this author knows, work is ongoing on the actual implementation
of the system. One drawback is that there is no mention of making use of existing
peer-to-peer infrastructures such as Pastry or JXTA. This means that the P3
system cannot leverage existing development efforts aimed at improving the
performance and security of these systems. It also adds yet another method of
peer-to-peer communication, increasing the number of competing peer-to-peer
communication methods. Additionally, the P3 system has the limitation that a
compute node may only work on one P3 application at a time. Although this is
claimed to lead to a decrease in resource competition, it could wind up
under-utilizing resources and creating resource bottlenecks when a single P3
application consumes all available compute nodes on the network, preventing new
P3 applications from making use of them.
JNGI is a distributed computing system that uses JXTA as the communication
infrastructure. It is made to distribute jobs to multiple peers for the purpose of
increasing performance in a fashion similar to parallel computing. According to the
project's authors, initial attempts at parallelization focused on fine-grained problems,
but today the bigger focus is in the area of coarse-grained parallelization such as
throughput
computations and ensemble averages. Issues addressed by the project include a
dynamic network with nodes arriving and departing during the duration of job
completion, redundancy, organization of resources, and the heterogeneous nature of a
peer-to-peer network.
The JNGI framework is built using three different JXTA peer groups: the monitor
group, the worker group, and the task dispatcher group. The monitor group
handles the high-level requests, including admission to the framework group
hierarchy and job submission. The worker group handles the actual completion of
job requests, and the task dispatcher group is used for the purpose of task
distribution. The submission of a job into the framework contains two parts: the
code repository and the data repository. Both of these are distributed among the
nodes in the framework, creating a virtual file system of code and data. Tasks are
distributed to worker nodes by the task dispatcher. The worker nodes continually
update the task dispatcher as to their available resources and previously cached
application code. The original job submitter is responsible for requesting results.
The results are returned as a whole as soon as all separate tasks that make up the
submitted job complete.
The JNGI framework does a good job of separating the functions of monitoring,
tasking, workers, and code repository by creating multiple groups responsible for the
different distinct tasks within the framework. One severe limitation of the system due
to this separation is that communication is only allowed at the beginning in the form
of data and code propagation. Intermediate results cannot be processed and
additional data dispersed based on partial calculations. This design limits the
functionality of the system and does not provide the flexibility available in even the
earlier parallel and distributed programming models of MPI and PVM. Another
drawback of JNGI is that there is an overall framework management system for the
purpose of code and data repositories. This provides limited flexibility on the part of
individual peers to configure the system to purge code and data at their own selected
intervals.
3. ParaPeer Distributed Computing System
In this chapter the ParaPeer system will be introduced. Its design and architecture
will be discussed as well as key decisions that helped form the functionality of the
system. The underlying implementation will also be presented and improvements
upon alternative systems mentioned in chapter 2 will be pointed out. The parallel
computing technologies of MPI and PVM are improved upon by providing dynamic
resource discovery. In addition, ParaPeer provides a truly architecture independent
interface that provides automatic code distribution. Because of the minimal
community support for Pastry, JXTA was chosen as the underlying peer-to-peer
infrastructure.
The requirements take into account the desire for a decentralized peer-to-peer system
that provides dynamic discovery of available resources:
1. Decentralized: each node in the system must handle its own connections
and must not be forced to answer to or update a central system manager.
2. Dynamic: nodes should be able to join the cooperative computation
group at any time and immediately contribute to or request compute
resources from the system.
3. Object Transfer Capability: the system should provide the developer with
the ability to transfer objects without any special effort.
4. Reliability: the system should reliably deliver data to participating peers
without interaction from the application developer.
5. Multiple Group Collaboration: the system shall allow the participation of
a single peer in multiple collaboration efforts simultaneously.
6. Architecture Independence: the system will not be dependent on a specific
architecture and, furthermore, will not require recompilation on different
platforms.
7. Automatic Code Distribution: the system will automatically distribute
code to the participating nodes without additional developer effort.
8. Leverage Existing Technologies: the system will be implemented using
existing technologies with strong user communities, allowing ParaPeer
to easily integrate future improvements in the core technologies.
The ParaPeer system aims to be a dynamic decentralized distributed computing
environment. Its focus is on parallel programming applications that can benefit from
the use of Java and can take advantage of its object-oriented nature. The ParaPeer
system is built on top of the Java reference implementation of the JXTA peer-to-peer
protocols. The use of JXTA enables decentralization by taking advantage of the
ability to dynamically discover participating peers. JXTA's specification provides
definitions for peer groups that aid in the formation of cooperative computing groups
in the ParaPeer system.
Figure 3.1 ParaPeer Protocol Stack.
As figure 3.1 shows, a daemon service connected to the JXTA network
runs at each participating node. The service waits for socket connections from client
applications on a specific port and utilizes this connection to communicate with the
JXTA network. The JXTA network in turn uses the lower level communication
methods provided by the operating system. The communication between the job
initiator and the helper peers goes through the local application client socket
connection and out onto the JXTA network. Existing bi-directional and propagation
pipes implemented in the JXTA network are used for the purposes of point-to-point
and group communication respectively.
3.3. Architecture and Design
3.3.1 High-Level Architecture
The high-level architecture of the ParaPeer system has two major parts: the ParaPeer
daemon, and ParaPeer applications. The high-level architecture of the ParaPeer
system is shown in figure 3.2.
Figure 3.2 ParaPeer Architecture
The high-level architecture shows how the ParaPeer system communicates
with client applications. The four ParaPeer daemons shown are all members of the
JXTA network. The nodes all communicate with each other via JxtaBiDiPipes. Each
peer node has a bi-directional pipe connection to each other peer in the network.
Additionally, when the daemon is started it creates a server used for local machine
client application connections. These client applications are the actual ParaPeer jobs.
In addition to the ability of the system to accommodate multiple client programs at
one time, each individual daemon node can handle multiple local machine client
applications simultaneously. A client application starts up, connects to the local
ParaPeer daemon, initializes its job in the system with the desired number of peers,
and runs the job. The origination peer is able to send data before and during
application execution. This functionality provides the ability to perform intermediate
processing on results retrieved from remote ParaPeer nodes. When a job is initiated,
if it requests fewer peers than are available in the network, then messages are only
sent to the peers that have been chosen to participate in the computation. This
prevents the possibility of flooding the network and consuming unnecessary
bandwidth. On the side of a slave peer (job-responding peer rather than job-initiating
peer), the application is run in the same process space as the ParaPeer daemon rather
than going through a Java socket and then to the application. This minimizes the
number of communication channels and gives the peer direct access to the communication
facilities necessary to perform mid-computation data reporting and processing.
The design of the ParaPeer system is geared towards satisfying the requirements
discussed at the beginning of this chapter. Additionally, the system is built to be
flexible, providing a convenient framework for implementing future extensions. The
following package diagram shows the organization of ParaPeer.
Figure 3.3 ParaPeer Package Diagram.
The package diagram in figure 3.3 shows the seven different ParaPeer
packages. Each package provides a distinct functionality that enables key features
necessary to fulfill the requirements of the ParaPeer system. The following
descriptions and diagrams explain the utility of the individual ParaPeer packages.
3.3.2 Package advertisementmanager
The advertisementmanager package is used to manage advertisements discovered for
a particular group. This package provides the necessary support to send discovery
messages to the group via the JXTA network and react to discovery responses. The
current advertisement manager specifically queries the group for pipe advertisements
in an effort to establish active communication channels with all available peers. Each
group running on the ParaPeer daemon contains an advertisement manager that
regularly polls the group for new advertisements. This ensures that new peers joining
the group are found by the peers that are already members.
3.3.3 Package applicationmanagement
The applicationmanagement package is used to manage incoming application job
requests. The package does not contain many classes but the functions they perform
are central to the behavior of slave applications on all ParaPeer nodes. There are
three classes that make up the application management package:
- ApplicationManager: this class handles the creation of new slave jobs in
the ParaPeer node. In addition, the ApplicationManager is used by the
groupservices package to send and retrieve data to and from the slave
applications. The ApplicationManager also keeps track of the last use
time of a slave job so that, when it has expired, its associated jar file and
ApplicationData object may be removed. Additionally, the
ApplicationManager starts jobs when requested to do so by the application.
- ApplicationData: this class holds data on its way to the slave
application or being sent back to the master from the slave application.
It also stores job information such as the Java jar file location, the method
in which to begin execution, the last modification time of the jar file (used
to cache the jar file if possible), the last execution time, and a unique job
ID that is tied to the pipe.
- ApplicationProcessor: this class is used to save and load the
ApplicationData. ApplicationData objects' member data are converted to
XML and saved as an XML document. This ensures that upon shutdown
of a ParaPeer node, the information regarding slave applications is not lost.
3.3.4 Package clientInterface
The clientInterface package contains all the necessary code to connect to the ParaPeer
system. In addition, the class that is overridden to create ParaPeer jobs is a
member of the clientInterface package. Like the applicationmanagement package, the
clientInterface package does not contain many classes, but because they are key to the
operation of submitted jobs as well as an entry point into the ParaPeer system, each
class will be explained here:
- PPClientSocketInt: this is the client interface to the ParaPeer daemon. It
is not an Interface in the Java sense, but rather the communication entry
point. It creates and maintains a socket connection with the daemon for
the purpose of job initialization, job-related data transfer, and job start
requests. An additional point to note about this class is that it is the point
at which data going to (coming from) other peers is converted to (from)
XML; the XStream object serializer performs the conversion.
- JobCommunicator: this is ParaPeer's interface to the
PPClientSocketInt. It hides all the necessary management of the client
interface from the developer of a ParaPeer application.
- ParaPeerJob: this is the class that contains the application developer's
Application Programming Interface (API). To create a ParaPeer
application, the developer must extend this class to receive the necessary
inherited methods for initialization, communication, and results collection.
- PPClientMessageListener: this is an Interface used to handle incoming
message events. The Interface contains one abstract method (onMessage).
The interface is implemented by the client application developer if they
prefer a reactive form of message passing rather than the send/receive
style provided by ParaPeerJob. If the application developer needs this
behavior, they implement the interface and provide an onMessage method
that reacts to received messages. They then use ParaPeerJob.setListener to
inform the ParaPeer job that there is a listener for this application. The
onMessage behavior is mimicked on the slave ParaPeer daemons, so if the
client application developer requires a different behavior depending on
whether the application is running as a master or a slave, they need to put
checks in the onMessage method to react appropriately.
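The role check recommended above can be sketched as follows. The interface here mirrors PPClientMessageListener's single onMessage method, but the interface name, the master/slave flag, and the message handling are simplified placeholders, not ParaPeer's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the reactive listener style: one abstract method (onMessage),
// implemented by the application developer.
interface MessageListener {
    void onMessage(String message);
}

// The same onMessage runs on master and slave daemons, so the
// implementation branches on the role, as the text recommends.
class RoleAwareListener implements MessageListener {
    private final boolean isMaster;
    final List<String> handled = new ArrayList<>();

    RoleAwareListener(boolean isMaster) { this.isMaster = isMaster; }

    @Override public void onMessage(String message) {
        if (isMaster) {
            handled.add("master:" + message);  // e.g. merge a partial result
        } else {
            handled.add("slave:" + message);   // e.g. accept new input data
        }
    }
}
```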
3.3.5 Package communication
The communication package contains the ParaPeer daemon client interface server
code as well as JXTA Rendezvous node management code. There is additional code
that manages peer pipe and current data transfer information on a peer-by-peer basis.
The communication package consists of the following four classes:
- PeerConnectionManager: this is the class that manages each remote
peer's pipe connections. In addition to the pipe connections, this class keeps
track of whether this peer has an open input pipe that listens for propagate
messages and for which application(s) the peer has input pipes. Current
communication status is also available from the PeerConnectionManager.
- JxtaRendezvousManager: this class is responsible for finding existing
Rendezvous peers in the peergroup. Since each peergroup must have at
least one rendezvous peer, if one is not found then the querying peer
assumes the role.
- PPLocalClientServer: this class waits for ParaPeer client application
connections (local only) and kicks off instances of PPSocketThread to
handle individual application communication.
- PPSocketThread: this class is created when a local client application
initiates a ParaPeer daemon connection. PPSocketThread handles all
communication with the local application and makes calls into the
ParaPeer system to interact with other peers.
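The PPLocalClientServer/PPSocketThread pattern (accept a local connection, hand it to its own handler thread) can be sketched with plain Java sockets. This is a simplified illustration, not ParaPeer's code; port 0 asks the OS for a free port, whereas the real daemon listens on a fixed, configured port:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal local-only server: accepts a client connection and hands it to a
// handler thread, in the spirit of PPLocalClientServer + PPSocketThread.
class LocalClientServer {
    private final ServerSocket server;

    LocalClientServer() throws IOException {
        // Bind to loopback only, since the daemon serves local clients.
        server = new ServerSocket(0, 50, InetAddress.getLoopbackAddress());
    }

    int port() { return server.getLocalPort(); }

    // Accept one connection and serve it on its own thread.
    void serveOnce() {
        new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("ack:" + in.readLine());  // stand-in for real job handling
            } catch (IOException ignored) { }
        }).start();
    }
}
```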
3.3.6 Package groupservices
The groupservices package is made up of three classes that perform the core of the
JXTA-related communication and interaction. The functionality of this package is to
provide management of the groups this ParaPeer daemon is a member of, as well as
individual management of each group's communication methods with other ParaPeer
daemons. The three classes that make up the
groupservices package are:
- JxtaBase: this is the central class of the ParaPeer daemon. JxtaBase
provides the necessary methods to get data back and forth between master
and slave, as well as performing initialization of settings obtained from the
ParaPeer XML configuration file (this includes the names of the groups that
this ParaPeer daemon is a member of).
- JxtaGroupManager: this class manages the different groups that the
ParaPeer daemon is a member of. The main function of the group
manager is to provide access to the communication facilities used by the
client application to communicate with additional ParaPeer daemons, as
well as providing the facilities for the underlying JXTA communication
layer to communicate with the client application. Because this class
manages multiple groups, it also keeps track of the different group-
specific mappings to other group-dependent classes such as the
JxtaRendezvousManager, the JxtaAdvertisementManager, and the
JxtaPipeManager described next.
- JxtaPipeManager: this class performs all the incoming and outgoing
communication with other peers in the same peergroup. Each peergroup
that the ParaPeer daemon is a member of has a corresponding
JxtaPipeManager. The pipe manager handles all peer-to-peer
communication using bi-directional and propagate pipes. Additionally,
the pipe manager contains a number of inner classes that extend
java.lang.Thread to provide timers for necessary operations as well as
multi-threaded servers used to handle incoming connections.
3.3.7 Package settings
The settings package contains all the ParaPeer specific settings as well as the
necessary class used to parse those settings. Currently the settings for ParaPeer are
limited to the groups that the ParaPeer daemon is a member of, but can be extended to
provide for future settings that would better fit in a configuration file rather than being
defined on the command-line call used to start the ParaPeer daemon. All the current
functionality of the settings package is provided by two classes:
- JxtaSettings: this class holds any of the ParaPeer-specific settings
information. This currently only includes the peergroup names that the
ParaPeer daemon is a member of. Additional settings could be added.
- JxtaSettingsProcessor: this class is responsible for parsing the ParaPeer
settings file. As ParaPeer management tools are developed, this class
would also save off settings based on administrator management of the system.
3.3.8 Package utils
The utils package contains utility classes that do not fit naturally anywhere else, as
well as classes that provide extendable functionality. Currently the package contains
six classes. Only the classes that are central to
the ParaPeer system will be fully explained:
- paraPeer: this class is the driver for the ParaPeer daemon.
- FileIO: this class contains file utilities used when transferring jar files
and maintaining slave applications.
- JxtaConfigHelper: this class is used to assist in the initial configuration of
the peer. The class contains code providing a graphical interface for the
initial setup of a peer name as well as a password.
- MethodInvokerThread: this class is used by slave peers in the ParaPeer
system to invoke the application start method specified by the master
application client. The class allows the application to be run on its own
thread, thereby allowing the ParaPeer daemon to respond to other
communication requests as well as manage other applications currently
running within the same peergroup.
- PeerSelectionModelAbs: this class provides the overridable methods to
implement peer selection functionality. It is an abstract class that
defines one abstract method (getPeers) whose functionality is provided by
subclasses. PeerSelectionModelAbs allows developers to implement
an alternative way to select the peers that will contribute to the
computation of a client application.
- DefaultPeerSelectionModel: this class is the default peer selection model
provided with the ParaPeer system. It is a subclass of
PeerSelectionModelAbs and implements the getPeers method with a
random selection model.
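The PeerSelectionModelAbs/DefaultPeerSelectionModel pair can be sketched as follows. The getPeers contract comes from the text; the signature and the seed parameter (added for repeatable behavior) are assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of the abstract peer selection contract: subclasses decide which
// known peers participate in a computation.
abstract class PeerSelectionModel {
    abstract List<String> getPeers(List<String> known, int wanted);
}

// Sketch of the default model: pick peers at random, never more than exist.
class RandomPeerSelection extends PeerSelectionModel {
    private final Random rng;

    RandomPeerSelection(long seed) { rng = new Random(seed); }

    @Override List<String> getPeers(List<String> known, int wanted) {
        List<String> pool = new ArrayList<>(known);
        Collections.shuffle(pool, rng);  // random selection model
        return pool.subList(0, Math.min(wanted, pool.size()));
    }
}
```

An alternative subclass could instead rank peers by load or past reliability, which is exactly the extension point the abstract class is meant to provide.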
3.4 Implementation Specifics
3.4.1 ParaPeer Daemon Communication
All communication between peers for a specific peergroup is handled by the
JxtaPipeManager class. This class includes the functionality to send and receive
messages over both bi-directional pipes and propagate pipes. The
different methods used to send messages from peer to peer are shown in table 3.1.
Table 3.1 JxtaPipeManager Methods.
sendInitMessage: sends the application name, jar file name, and start method name to the remote peer.
sendJobJarFile: sends the jar file as Base64-encoded string data. The jar file is only sent if the remote peer does not have a current copy.
sendStartMessage: sends a message to the remote peer to start the application. This message can also be sent on the propagate pipe.
sendDataVec: sends object data that has been serialized to XML. To send multiple pieces of data to multiple peers at one time, the objects are stored in a Vector. These messages can also be sent on the propagate pipe.
sendCollectResultsMessage: sends a message to request results from the remote peers. This method is the initiator of a data receive on the side of the master. These messages can also be sent on the propagate pipe.
In addition to the send message methods, JxtaPipeManager also has corresponding
methods to handle these send messages on the receive side of the ParaPeer daemon.
JxtaPipeManager also manages the connections between each peer as well as the
propagate pipes used for a specific application.
Figure 3.4 JxtaPipeManager Class Diagram.
Figure 3.4 shows the inner class structure of the various threads running in a
single instance of JxtaPipeManager. The PipeThread is used to handle
communication for connection requests on bi-directional pipes. Each time a
connection request is made, the pipe manager kicks off a new PipeThread to handle
communication on the connection. The ResponseTimer is used when propagation
pipes are used for group communication. Because JXTA propagate pipes do not
provide reliable communication, the ResponseTimer is needed to assist in providing
that guarantee. When a group communication request is initiated, a ResponseTimer
is created and started for each peer with an input pipe listening to this peer's
propagation pipe. The ResponseTimer is set to time out after an amount of time
specified by an environment variable. If a peer does not respond to a propagate pipe
message before the timeout value is reached, the ResponseTimer initiates a repeat of
the same message using the DataSender thread. From this point on, the peer that
timed out will no longer receive messages on its input pipe that is listening to this
peer's propagate pipe. In fact, the timed-out peer will receive a message on its
bi-directional pipe requesting that it close its propagate pipe connection and only
listen on the bi-directional pipe from here on out for this specific client application.
This does not
stop the peer from continuing to receive propagate messages related to another client
application. The DataSender thread is simply responsible for sending the first
unconfirmed propagate message to the timed-out peer. If this message is in the middle
of a multi-message data transfer, then the timed-out peer will continue communication
over its bi-directional pipe. Additionally, the JxtaPipeManager contains any number
of PeerConnectionManagers. Each PeerConnectionManager is mapped to a single
peer currently known by this ParaPeer node. The class provides access to specific
peers bi-directional pipes as well as storing information about the use of propagation
pipes, propagation pipe response status, in addition to current data offset information
for use in multi-message data transfers.
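The timeout-and-fallback behavior described above can be sketched roughly as follows. This is an illustrative sketch only: the PeerLink interface and its method names are assumptions standing in for the actual ParaPeer classes (PeerConnectionManager and DataSender), not the real API.

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch of the ResponseTimer pattern described above: if a peer has not
// acknowledged a propagate-pipe message when the timeout fires, the message
// is resent and the peer is demoted to its bi-directional pipe.
public class ResponseTimer {
    // Hypothetical stand-in for ParaPeer's PeerConnectionManager/DataSender.
    interface PeerLink {
        boolean hasAcknowledged();          // did the peer confirm the message?
        void resendOverBidirectionalPipe(); // DataSender-style retransmission
        void demoteToBidirectionalPipe();   // stop using the propagate pipe
    }

    private final Timer timer = new Timer(true); // daemon timer thread

    public void start(final PeerLink peer, long timeoutMillis) {
        timer.schedule(new TimerTask() {
            public void run() {
                if (!peer.hasAcknowledged()) {
                    // Peer timed out: resend the first unconfirmed message and
                    // restrict this peer to point-to-point communication.
                    peer.resendOverBidirectionalPipe();
                    peer.demoteToBidirectionalPipe();
                }
            }
        }, timeoutMillis);
    }

    public void cancel() {
        timer.cancel();
    }
}
```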
3.4.2 ParaPeer Slave Application Management
Each ParaPeer node is responsible for managing applications run as slaves. These
applications are all of the client applications submitted by other ParaPeer
nodes. The data associated with each application includes a sub-directory of
the jobs directory (the jobs directory is created by the ParaPeer system upon
initial startup). Contained in this sub-directory is the actual Java code (in
the form of a jar file) required to start and run the application. In addition,
each application that is added as a slave creates an XML entry in the jobs
history file (the name and location of this file is specified on the command
line when running the ParaPeer daemon) that contains all the data (except
objects being sent or received) contained within the ApplicationData class.
The data stored in the XML entry includes the location of the jar file, the
method to call when starting the application, the last updated time (from the
master peer's perspective) of the jar file, the last time the application was
used (from this peer's perspective), the group the application was run under,
the master-specified name of the job, and the current status of the job (fault,
inited, running, or complete).
ApplicationManager
instance : ApplicationManager
myCurrentApps : HashMap
myAppKeys : Vector
myAppDataFilename : String
myExpChecker : ExpirationChecker
addSlaveApp(appName: String, jobID: String, groupName: String, jarName: String, startMethod: String, lastUpdate: String) : boolean
removeApp(key: String) : boolean
updateJar(jobID: String, lastTime: String) : boolean
getJarName(jobID: String) : String
addMasterApp(appName: String, groupName: String, jarName: String, startMethod: String) : boolean
loadJarAndClass(appData: ApplicationData) : Class
startApp(jobID: String) : boolean
isExpired(key: String, diffHrs: long) : boolean
acquireHashLock() : void
releaseHashLock() : void
getData(jobId: String) : Object
putData(jobId: String, obj: Object) : boolean
ApplicationData
log : Logger
myGroup : String
myJobId : String
myName : String
myJarFile : String
myStartMethod : String
myStatus : int
myRole : int
myResultObject : Object
myLastUpdate : Date
myLastUse : Date
myIncomingData : LinkedList
myOutgoingData : LinkedList
myIncomingLock : boolean
myOutgoingLock : boolean
Figure 3.5 Application Management Class Diagram.
Figure 3.5 shows the ApplicationManager's relationships with the actual
ApplicationData and the mechanism used to store and load the application-related
data. The ApplicationManager is a singleton class that is used no matter which
group is currently running an application. The manager may contain any number
of ApplicationData instances. ApplicationData instances are only removed if an
application's time since last use is greater than the amount of time an
application should be cached (this is set on the command line when starting the
ParaPeer daemon). The ApplicationManager also contains an expiration checker
class that determines if an application no longer needs to be cached. When an
application has expired, the jar file associated with the application is removed
from the file system, as is the directory that contained the jar file.
Additionally, the XML data saved in the jobs history file is removed and the
ApplicationData object instance is removed from the ApplicationManager. When
any kind of update occurs to an application (including a status change), the
existing ApplicationData is saved and updated in the jobs history file using
the ApplicationDataProcessor.
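The expiration check itself amounts to a simple time comparison. The following is a minimal sketch modeled on the isExpired(key, diffHrs) signature from the class diagram; the class and parameter names are assumptions, not the actual ParaPeer code.

```java
import java.util.Date;

// Rough sketch of the application-expiration check described above: an
// application expires when the time since its last use exceeds the cache
// window given on the command line (APP_EXPIRATION_HOURS).
public class ExpirationSketch {
    static final long MILLIS_PER_HOUR = 60L * 60L * 1000L;

    // Returns true when lastUse is more than diffHrs hours before 'now'.
    public static boolean isExpired(Date lastUse, long diffHrs, Date now) {
        return now.getTime() - lastUse.getTime() > diffHrs * MILLIS_PER_HOUR;
    }
}
```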
In addition to application management, the ApplicationManager is also
responsible for starting the slave application when requested to do so. This consists
of loading the Java jar file that contains the application code, locating the class that
contains the starting point for the application, creating an instance of the class, and
invoking the start method. The actual invocation of the start method is performed by
the MethodInvokerThread, which is part of the utils package.
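The load-instantiate-invoke sequence can be sketched with standard Java reflection. In ParaPeer the class loader would be built over the job's jar file (e.g. a URLClassLoader over the jar in the jobs sub-directory); this self-contained sketch reflects on a locally defined class instead, and the DemoJob class is purely illustrative.

```java
import java.lang.reflect.Method;

// Sketch of the slave-start sequence: load the class containing the
// application's entry point, create an instance, and invoke the start method
// by name. In ParaPeer the ClassLoader would wrap the job's jar file, e.g.
// new URLClassLoader(new URL[] { jarFile.toURI().toURL() }).
public class MethodInvokerSketch {
    // Stand-in for a class that would normally be loaded from a job jar.
    public static class DemoJob {
        public int startPeer() { return 42; }
    }

    public static Object invokeStartMethod(ClassLoader loader,
                                           String className,
                                           String methodName) throws Exception {
        Class<?> cls = loader.loadClass(className);  // locate the class
        Object instance = cls.newInstance();         // create an instance
        Method start = cls.getMethod(methodName);    // find the start method
        return start.invoke(instance);               // invoke it
    }
}
```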
3.4.3 ParaPeer Master Application
The master application (also called the client application) is a client to the
ParaPeer system. When a client application is developed, it subclasses the
entry point into the ParaPeer system, ParaPeerJob. The ParaPeerJob class
contains all the necessary methods to communicate with remote peers. This
class also encapsulates the same functionality from the slave's point of view,
even though the slave communicates with the master in a completely different
way. The ParaPeerJob class also attempts to make communication as easy and
straightforward as possible by allowing objects to be directly sent and
directly received. The following class diagram shows the major API functions
(in addition to others) that are available to someone developing an
application for use in the ParaPeer system.
Figure 3.6 ParaPeerJob Class Diagram.
Figure 3.6 shows the methods available to a developer of a ParaPeer
application. The figure also shows the relationship between a ParaPeer
application and the local client application interface with the daemon, as
well as how the slave peers interact with the ParaPeer system. Because a
ParaPeerJob object knows whether it is running as the master or as a slave, it
is able to use the appropriate object (an object of type JobCommunicator in
the case of the master, and an object of type ApplicationData in the case of a
slave) when sending or receiving data from the ParaPeer system. As stated in
the package description, an object of type PPClientSocketInt creates a socket
connection with the local ParaPeer daemon and uses this socket to transmit and
receive data to and from other peers. There is currently no support for a
slave application to communicate directly with other slave peers.
3.5 A ParaPeer Client Application Example
The goal of this section is to show how to develop a ParaPeer application and to
highlight the different features available. To do this, one must set up a
ParaPeer daemon, so the configuration and initialization of the ParaPeer
daemon also becomes one of the goals of this section. The example is a simple
program that
adds a series of integers together to come up with a final sum. Each peer will be sent
a start value and a stop value, and will be responsible for summing all numbers in the
series [start value, stop value). The master will contribute to the computation and will
collect the results, tally the final result, and display the final result on stdout for the
user to see.
3.5.2 Setting Up the Daemon
Though we will not actually be setting up the ParaPeer system to run as a daemon, it
is essentially a daemon. Before the initial startup, the first step is to create a
configuration file containing the names of groups that this ParaPeer daemon would
like to be a member of. A sample configuration file might look like:
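For example, assuming the file simply lists one peer group name per line (an assumption; the exact format is defined by the ParaPeer daemon), it might contain:

```
testGroup
addNumbersGroup
myLabPeers
```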
At this point the system is ready to be started for the first time. There are multiple
defines expected by the ParaPeer system on the command line. An example startup
command-line might look like:
/usr/share/j2sdk_nb/j2sdk1.4.2/bin/java -DPORT=9990 \
-DJOB_CHECK_TIME=10000 -DDATA_TIMEOUT=60000 \
-DAPP_HISTORY_FILE=jobs/jobsHistory.xml -jar ParaPeer.jar
The various command-line defines are used for the following purposes:
-DPORT: the port that the ParaPeer daemon will listen on for client
application connections
-DJXTA_HOME: the ParaPeer home directory
-DJOB_CHECK_TIME: the amount of time to wait before checking job
expiration dates
-DDATA_TIMEOUT: the amount of time to wait before returning null for a
master retrieve; this prevents the system from blocking indefinitely
-DPROP_DATA_TRANSFER_TIMEOUT: the initial timeout to use for messages
on the propagation pipe
-DAPP_EXPIRATION_HOURS: the amount of time the ParaPeer system should
wait between uses of a cached application before deleting its code and
runtime resources
-DAPP_HISTORY_FILE: the file used to store information about each
application the peer is requested to contribute to as a slave
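Defines of the form -DNAME=value are exposed to the daemon through Java system properties. A minimal sketch of how such values can be read follows; the default values shown are illustrative assumptions, not necessarily ParaPeer's actual defaults.

```java
// Reads the -D defines listed above via Java system properties. The fallback
// defaults here are illustrative only; the real daemon's defaults may differ.
public class DefineReader {
    public static int port() {
        return Integer.parseInt(System.getProperty("PORT", "9990"));
    }

    public static long jobCheckTime() {
        return Long.parseLong(System.getProperty("JOB_CHECK_TIME", "10000"));
    }

    public static String appHistoryFile() {
        return System.getProperty("APP_HISTORY_FILE", "jobs/jobsHistory.xml");
    }
}
```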
The first time the ParaPeer daemon is started, a login screen like the one shown
in figure 3.7 will appear.
Figure 3.7 ParaPeer Startup Screen.
The use of this initial login screen prevents the user from having to fill out fields in
the JXTA Configurator. The ParaPeer system uses a default configuration file that
contains IP addresses for the initial seeds of the rendezvous peers for TCP and HTTP
communication. Now that the ParaPeer daemon has been started (and we assume at
least two others have been started and are members of the same peer group), we
can write our ParaPeer application.
3.5.3 ParaPeer Application addNumbers
As stated earlier in this thesis, the entry point to the ParaPeer system from a
client application perspective is the class ParaPeerJob. To create an
application we must inherit from ParaPeerJob, and to communicate with the
ParaPeer system we must use the methods provided by ParaPeerJob. Once we have
a subclass of ParaPeerJob, there are at least three things we need to do (in
order) over the course of a ParaPeer job:
1. Initialize the job: this is where we request the number of peers we want to
use, send the local file system location of the Java jar file where the code
resides, specify the method that the slave peers should use to start the
application, and specify the name of the application.
2. Start the job: this is where we send the message to the other peers to
start the job using the start method provided in the initialization phase.
3. Kill the job: here we are not really killing the application; it should
have already completed successfully at this point. The purpose of this method
is to tell the system that it can release any resources associated with this
application.
These are the minimum methods used to run a ParaPeer application. Of course,
the remote peers will not have much to do without data, and the master
application will not gain anything from the distributed computation if it does
not request some kind of results. This leads us to two additional important
families of methods: send and receive.
The send and receive methods allow the master and slave applications to
communicate by sending objects back and forth. The master is able to send to
and receive from specific peers or from the requested group as a whole. The
slave is able to send objects directly to the master and receive objects that
came from the master. Additionally, the process of sending data can be
performed synchronously or asynchronously depending on the method called.
Send methods that do not include the word Async in their name are synchronous
method calls. Listing 3.1 shows the class declaration and initialization steps
necessary to create the addNumbers program.
Listing 3.1 addNumbers Initialization.
1  public class AddNumbers extends ParaPeerJob
2  {
3      public boolean init(int startNum, int stopNum,
4                          int numPeers, String jarFile)
5      {
6          Vector initResp = init(jarFile, "addNumbers",
7                                 "startPeer", numPeers);
8          if (initResp == null)
9          {
10             System.out.println("Could not initialize job");
11             return false;
12         }
13
14         Vector mainVec = new Vector();
15         double incrDouble = (stopNum - startNum) /
16             (double) numPeers;
17         incrDouble = Math.ceil(incrDouble);
18         int incr = (int) incrDouble;
19         for (int i = 0; i < numPeers; i++)
20         {
21             Vector startStopVec = new Vector();
22             Integer startInt =
23                 new Integer(startNum + (i * incr));
24             Integer stopInt =
25                 new Integer(startNum + ((i + 1) * incr));
26             if (stopInt.intValue() >= stopNum)
27             {
28                 stopInt = new Integer(stopNum + 1);
29             }
30             startStopVec.add(startInt);
31             startStopVec.add(stopInt);
32             mainVec.add(startStopVec);
33         }
34
35         if (!sendDataToAllPeers(mainVec))
36         {
37             System.out.println("Could not send data");
38             return false;
39         }
40         return true;
41     }
42 }
As line one of Listing 3.1 shows, to create a ParaPeer job the
communication class must extend ParaPeerJob. The next important lines are 6 and 7.
These lines show the initialization of the addNumbers job by passing in the fully
qualified path to the jar file containing the code, the name of the job
(addNumbers), the name of the method to start remote peers in, and the number
of peers requested. The returned Vector contains the addresses of the
participating peers (these are actually the peers' bi-directional pipe ids) so
that each one can be individually addressed if desired. The body of the method
then initializes the data to be passed to each peer. Since we are passing more
than one object (a different start and stop for each peer), we create a Vector
(mainVec) to hold the objects being passed to each peer. Then, for each peer,
we create another Vector that holds that peer's start integer and stop
integer. So, when we are done we have created a Vector that contains one
Vector per peer, each of which in turn contains two Integers: the start and
stop values. The final part of the initialization is the call to the ParaPeer
system to send the Vectors to the participating peers. The call is
synchronous, so we wait for a response before continuing on. Listing 3.2 shows
the method that is run by the remote slave peers and how the calculation is
performed.
Listing 3.2 addNumbers Slave Start Method.
1  public void startPeer()
2  {
3      Vector startStop = (Vector)getDataFromPeer();
4      Integer startInt = (Integer)startStop.get(0);
5      Integer stopInt = (Integer)startStop.get(1);
6      int start = startInt.intValue();
7      int stop = stopInt.intValue();
8
9      if (stop < start)
10     {
11         int tmp = stop;
12         stop = start;
13         start = tmp;
14     }
15
16     int res = 0;
17     for (int i = start; i < stop; i++)
18         res += i;
19
20     sendObjectAsync(new Integer(res));
21 }
Line three of Listing 3.2 is one of the most important lines in the method.
This is where the peer retrieves the Vector containing the start and stop
integers from the master peer. Lines 4-18 show the processing of the start and
stop values and the calculation of the summation of the series. Line 20 is
where the peer sends the results to the master application. Here the peer
sends the Integer object without waiting for a response; this allows the peer
to move on and not waste any more computing power than necessary. Listing 3.3
shows the master peer's call to start the job and to collect and post-process
the results.
Listing 3.3 - addNumbers Master Job Start.
1  public void runJob()
2  {
3      startJob();
4
5      Vector results = getDataFromAllPeers();
6
7      int seriesTotal = 0;
8      for (int i = 0; i < results.size(); i++)
9      {
10         Integer res = (Integer)results.get(i);
11         seriesTotal += res.intValue();
12     }
13
14     System.out.println("Received all results. " +
15         "Series total: " + seriesTotal);
16 }
Line three of Listing 3.3 is the call to the ParaPeer system telling the slave
peers to invoke their starting methods. Though it was not done in this case,
after calling startJob() we could have performed one of the series calculations
ourselves before returning to the runJob() method of AddNumbers to collect and
post-process the results. Line five contains our request to collect the
results from all of the peers participating in the computation. This call does
not return until we have received the results from all peers. The Vector
returned contains the results of each peer's call to sendObjectAsync(Object).
From our example we know that each Object in the Vector returned by our
getDataFromAllPeers() call is an instance of the Java type Integer. On lines
7-12 we add all our results together, and on lines 14 and 15 we report the
final result.
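The decomposition used by addNumbers can be checked with a small calculation: with startNum = 1, stopNum = 100, and 4 peers, each peer sums a half-open range [start, stop), and the partial sums must add up to 1 + 2 + ... + 100 = 5050. The sketch below reimplements the master's partitioning logic purely to verify that arithmetic; it is not ParaPeer code.

```java
// Verifies that the [start, stop) partitioning used by addNumbers covers the
// whole series exactly once: the per-peer partial sums add up to the
// closed-form total n(n+1)/2.
public class PartitionCheck {
    // Sum over the half-open range [start, stop), as each slave does.
    public static int sumRange(int start, int stop) {
        int res = 0;
        for (int i = start; i < stop; i++) res += i;
        return res;
    }

    // Reproduces the master's ceil-based partitioning, then adds up the
    // partial sums. Once the last range is capped at stopNum + 1, iteration
    // stops so no value is counted twice.
    public static int distributedSum(int startNum, int stopNum, int numPeers) {
        int incr = (int) Math.ceil((stopNum - startNum) / (double) numPeers);
        int total = 0;
        for (int i = 0; i < numPeers; i++) {
            int start = startNum + i * incr;
            int stop = startNum + (i + 1) * incr;
            if (stop >= stopNum) {          // final range covers stopNum itself
                total += sumRange(start, stopNum + 1);
                break;
            }
            total += sumRange(start, stop);
        }
        return total;
    }
}
```

For 1..100 over 4 peers, incr = ceil(99 / 4) = 25, giving the ranges [1, 26), [26, 51), [51, 76), and [76, 101), whose partial sums total 5050.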
Though this example did not show point-to-point style communication, it did
show the main methods used in creating and executing a ParaPeer application. The
development of the application is straightforward and fairly effortless. It is easy to
imagine how splitting up tasks on large amounts of data or a large number of
computations could benefit the overall execution time of the application. The next
section shows the potential of the ParaPeer system by incorporating a number of
different performance tests and evaluations.
In this chapter we evaluate the performance of the ParaPeer system. The first
section analyzes the different systems that could be used to create distributed systems
or already are distributed computing systems, and compares them to the ParaPeer
system. Additionally, the system is compared to the other message passing
systems MPI and JMS. The next section focuses on performance, with a comparison
of point-to-point communication speed followed by a group communication
comparison. For group communication, because of the unreliability of JXTA
propagate pipes, additional measures were taken to ensure reliability.
Finally, the performance of Java object serialization is compared to that of
the XML serialization tool XStream.
4.1 Comparison of Alternatives
There were a total of four different message-passing or peer-to-peer
communication systems presented in the literature review. Each one has positive
attributes that have contributed to its adoption among users. Table 4.1
describes the key drawbacks of three of these systems and explains how JXTA
improves upon these drawbacks or eliminates them altogether.
Table 4.1 Communication Comparisons.
JMS
Drawbacks: 1. A central messaging provider results in a single point of
failure. 2. Most providers do not allow dynamic destination creation.
JXTA Solutions: 1. JXTA is peer-to-peer, which protects it from a single
failure taking down the whole network. 2. Any JXTA Advertisement-addressable
resource can be created and propagated throughout the network on the fly.
Pastry
Drawbacks: 1. A minimal user community results in a lack of open source
development to leverage.
JXTA Solutions: 1. The JXTA community is supported by numerous open source
development efforts as well as the support of Sun Microsystems.
Gnutella
Drawbacks: 1. Not very flexible. 2. Made for the very specific purpose of peer
querying.
JXTA Solutions: 1. The JXTA network does not define specific message types;
rather, it defines message structures that may contain any kind of data.
2. Because JXTA was not made specifically for file search and transfer, it
does not suffer from the same limitations as Gnutella.
Table 4.1 shows that JXTA is one of the best, if not the best, choice for a
dynamic, decentralized, flexible message-passing system. The increasingly
mature code base of JXTA has contributed significantly to its success.
Additionally, the strong user community has produced numerous advancements
that have been rolled into the core of JXTA.
Table 4.2 shows a comparison of the different distributed computing solutions
that exist for the purpose of parallelizing applications. Each system's
drawbacks are presented, and the way the ParaPeer system improves upon or
eliminates each disadvantage is explained. Because they are similar in their
negative aspects, MPI and PVM are presented as one entry in the comparison
table.
Table 4.2 Distributed Systems Comparisons.
MPI/PVM
Drawbacks: 1. Available machines must be known at runtime. 2. Not operating
system independent. 3. Application code must be distributed to and compiled on
target machines. 4. Cannot make use of the Object-Oriented paradigm, in that
they do not provide object passing capabilities.
ParaPeer Solutions: 1. ParaPeer uses JXTA, which provides dynamic discovery
capabilities. 2. ParaPeer uses Java, which is operating system independent.
3. Application code distribution is automatic and no compilation is necessary.
4. ParaPeer provides interfaces for passing Java objects.
P3
Drawbacks: 1. Reinvents the peer-to-peer wheel (not using JXTA, Pastry, or
another existing peer-to-peer system). 2. Cannot leverage open source
peer-to-peer development efforts available for other peer-to-peer systems.
3. Can only run one job at a time, leading to significant resource contention.
ParaPeer Solutions: 1. ParaPeer uses the core Java JXTA reference
implementation. 2. Any future improvements to JXTA will by default benefit
ParaPeer. 3. Multiple jobs can be run at a time, as master or as slave, from
the same ParaPeer daemon; only resource limitations exist.
JNGI
Drawbacks: 1. Does not allow in-progress communication. 2. Code and data
storage managed at a group level limits the ability of individual peers to
tailor this resource use to their own preferences.
ParaPeer Solutions: 1. At any time during a computation in ParaPeer, a slave
can communicate with the master and the master can communicate with any one of
its slaves. 2. Application management is controlled on a per-peer basis,
allowing individualized control.
As is evident from Table 4.2, the ParaPeer system offers a more flexible,
dynamic environment than the popular parallel computing libraries MPI and PVM.
Additionally, the ability to transfer objects gives ParaPeer the advantage
that programming can be done in an object-oriented fashion. This also avoids
the need for object data parsing in the case that a slave peer needs to
reconstruct an object from individually passed data from the master. ParaPeer
also offers notable improvements upon the peer-to-peer distributed systems P3
and JNGI. The P3 system's single-program limitation not only results in
possibly crippling resource contention, it could also lead to resource
under-utilization if a job is running but not making use of compute resources.
ParaPeer also works to increase the possibilities for communication during all
phases of computation. As will be discussed in the conclusion of this thesis,
ParaPeer is also built so that adding the ability to communicate from slave to
slave during execution would not be a significant undertaking.
4.2 Performance Analysis
This section focuses on the performance of the ParaPeer system compared to the
alternatives of MPI and JMS. The next two sub-sections will provide performance
analysis of ParaPeer point-to-point and group communication.
4.2.1 Analysis Environment
One factor in the test environment is the use of the JXTA peer-to-peer network
in the ParaPeer system. Because of the way messages are propagated through the
system and because of the way advertisements are published and discovered, traffic at
an individual location cannot be controlled without completely closing off the
testing environment from the outside world. This means that although the tests
were run at times when usage would be expected to be low, there are no
guarantees that outside traffic did not enter the test environment.
The test environment consisted of a 10-node Linux cluster with a gigabit
Ethernet switch. Each node on the system is an AMD PC. Each node was used to
test the different message passing systems (ParaPeer, MPI, and JMS). The initial test
was performed to compare the point-to-point data rates of the different message
passing systems. An additional test that included ten nodes total showed the
performance comparison of the different options with respect to group
communication. One factor in all of the testing for the ParaPeer system was the
serialization of objects.
4.2.2 Object Serialization
Object passing and remote object access have become a common part of
today's distributed systems. The J2EE model provides RMI as one of the most
common methods for remote object access. One reason passing objects, rather
than passing strictly data, has become so popular is that it is more intuitive
in the Object-Oriented programming paradigm. The Java language accommodates
object passing through its serialization interface: Java provides a way to
convert basic Java objects into a stream of bytes.
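As a concrete illustration of the built-in mechanism, the following round-trips an object through Java serialization; this is standard Java, not ParaPeer-specific code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Round-trips an object through Java's built-in serialization: the object is
// converted to a stream of bytes and then reconstructed from those bytes.
public class SerializationDemo {
    public static byte[] toBytes(Object obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(obj);   // obj must implement java.io.Serializable
        out.close();
        return bytes.toByteArray();
    }

    public static Object fromBytes(byte[] data) throws Exception {
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(data));
        return in.readObject();
    }
}
```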
Another option for object serialization is the use of an XML serializing
library. XML serializing libraries convert Java objects straight to an XML
representation, making them human readable. One such library is XStream. The
XStream library supports most objects without the need for special handlers to
map existing Java objects to new ones created by an application developer.
Additionally, because XStream creates XML that avoids unnecessary duplication
of data that can be obtained through reflection, it creates serialized objects
that are more compact than those of native Java serialization. XStream also
provides a way to mitigate the slowdown inherent in any serialization method,
which is to use a performance-minded XML parser such as XPP3. XPP3 is an XML
Pull Parser that is geared toward fast parsing of XML data. When using
XStream, the use of XPP3 is under the hood and does not require the developer
to write any additional parsing code.
Because the ParaPeer system was the only one of the tested message passing
systems that serializes objects, it was the only system that suffered the
performance cost of object serialization. Since serialization time presented
an area of possible improvement, the two serialization options discussed above
(XStream vs. built-in Java serialization) were explored. Figure 4.1 shows the
comparison of the two serialization methods.
Figure 4.1 - Serialization Times.
According to the results, basic Java object serialization performs better for
smaller data transfer sizes, but as soon as the transfer size exceeds 1 KB,
XStream becomes much faster. Other factors, such as object complexity (more
specifically, the number and types of an object's member data), might also
need to be considered when selecting an object serialization method. Since
built-in Java serialization only offered minimal performance gains at the
smallest data transfer size and was significantly slower at the larger data
transfer sizes, XStream was chosen as the sole serialization method for the
ParaPeer system.
4.2.3 Data Transfer Comparisons
The data transfer comparison of the three different message passing systems
(MPI, JMS, ParaPeer) is shown in the following figures. Both point to point and
group comparisons were performed. The goal of the tests was to determine if data
transfer times were comparable across the three systems. Additionally, because
MPI uses TCP sockets and does not incur the same overhead experienced by JMS
and ParaPeer, which use Java sockets, a comparison of TCP sockets and Java
sockets is also included.
Figure 4.2 - Point-to-Point Data Transfer Times.
Figure 4.3 - Group Data Transfer Times (10 nodes).
Figure 4.4 - Point-to-Point Socket Transfer Times.
Figure 4.2 shows that MPI provides superior performance in the case of
point-to-point message passing. JMS comes in second, significantly slower than
MPI, while the ParaPeer system's performance is slower than JMS. This is also
the case for the
group data transfer time comparison. MPI definitely benefits from being optimized
for the local network environment and in this instance is only transferring a simple
data type (in this case a character array). One advantage that JMS and the ParaPeer
system hold over MPI is that they are geared toward the distributed computing
environment. This makes it easier to take advantage of available nodes that live
outside the local network and that might otherwise be unused. Another advantage of
the two distributed systems over MPI is that they do not require a priori knowledge of
participating nodes. Additionally, because both JMS and ParaPeer use Java as their
implementation language, they are able to take advantage of Java serialization of
objects allowing for complex data types to be easily transferred from one node to
another. In the group comparisons MPI's performance is closer to that of JMS
but still outperforms it. When comparing JMS and ParaPeer, we must consider
the cost of serialization discussed earlier in this chapter. If we factor the
cost of serialization into the performance of JMS, then for the point-to-point
comparison the values become very similar to those of the ParaPeer system. In
fact, if we perform a straight addition of the cost of serialization to the
point-to-point data transfer times for JMS, we come up with an average
performance gap of 25.24 ms. From figure 4.2 we can see that for transfer
sizes less than 32 KB the gap between JMS and ParaPeer is actually very small,
and indeed the average performance gap there is smaller still.
Moving on to the group data transfer comparisons, we see the same performance
ordering as in the point-to-point case, with MPI leading the way followed by
JMS and ParaPeer. In the group data transfers the performance gap between MPI
and JMS is much smaller, while the gap between JMS and ParaPeer is much
larger. Again we must consider the serialization time when comparing JMS and
ParaPeer, and doing so we come up with an average performance gap of 144.53 ms.
Just like the point-to-point comparisons, the group comparisons show a smaller
gap when the data transfer size is smaller. In fact, the average performance
gap for the 0.5 KB and 1 KB data transfers narrows to 54.93 ms.
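The "average performance gap" figures quoted here are simple means of the per-size differences between two systems' measured transfer times. The small sketch below shows that calculation; the sample arrays are made-up placeholder values, not the measured data from these experiments.

```java
// Computes the mean per-measurement gap between two series of transfer times,
// in the style of the JMS-vs-ParaPeer comparisons. The input arrays must be
// the same length, ordered by data transfer size.
public class GapCalc {
    public static double averageGap(double[] a, double[] b) {
        double total = 0;
        for (int i = 0; i < a.length; i++) {
            total += b[i] - a[i];  // gap of system b relative to system a
        }
        return total / a.length;
    }
}
```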
The additional performance statistics gained from the comparison of
point-to-point transfer times between TCP and Java sockets (figure 4.4)
indicate a performance penalty incurred by the use of Java. This gap helps to
explain MPI's performance superiority over both JMS and ParaPeer, which both
use Java sockets. Though the performance gap points to a negative impact from
using Java, other advantages such as architecture independence and object
serialization still make Java the best choice for the ParaPeer system. Java
allows the ParaPeer system to achieve the design goals set out at the
beginning.
Beyond the performance analysis we are left to consider the benefits of the
ParaPeer system that might outweigh the performance of data transfer. One
significant redeeming quality of ParaPeer is that it makes it very simple to create
parallel applications. By simply extending the ParaPeerJob class the developer
instantly gains access to the divide and conquer nature of a peer-to-peer network.
Objects can be sent to participating peers by simply making a method call. To use
JMS to accomplish the same feat would require the development of a parallel
computing infrastructure. Additionally, the performance of JMS would only be
hindered by such an infrastructure. The other significant drawback of using JMS is
that the server becomes a single point of failure. It is likely that backup and overflow
servers could and would be added if such a project were pursued, but then that would
defeat the purpose of using free resources provided by a cooperative peer group.
MPI also suffers in the case of object representation. Basic data types and
structures are the only possible ways to transfer data in MPI. Additionally, the need
to know all participating nodes in advance limits both the failover capabilities in
addition to the possibility for load balancing. Though the ParaPeer system in its
initial iteration does not have failover or load balancing capability, it is easy to see
how these important features could be implemented.
The comparison of the features of the ParaPeer system, with its use of JXTA,
against JMS, Pastry, and Gnutella illustrates the significant advantages it
enjoys. Among JXTA's advantages are its strong community and commercial (Sun
Microsystems) support. Also, the ability to dynamically create endpoints that
become instantly visible to fellow members provides a very flexible resource
network that can change to fit the current user's needs. In addition to the
comparison of underlying peer-to-peer technologies, a comparison of ParaPeer
to MPI/PVM, P3, and JNGI reveals how ParaPeer improves upon or solves their
respective drawbacks. Among the advantages that ParaPeer provides are
communication during execution, automatic distribution of application code,
and the ability to run more than one job at a time. Each of these items is a
limitation of one of the other parallel processing systems mentioned. Object
serialization was also discussed, and a comparison of two possible
serialization methods was offered. In explaining the differences between the
two methods, XStream emerged as the better overall choice. The data transfer
comparisons show that, especially with point-to-point communication, ParaPeer
can compete with JMS. This could come in handy if all peers are working on the
same problem but each one requires a different data set. Additionally, the
group comparison shows that for smaller data transfers ParaPeer has reasonable
performance when compared to JMS.
This thesis considered the current use of different parallel and message passing
systems. As is common with today's parallel computing systems, the libraries
mentioned (MPI and PVM) focus on local networks and directly addressable
computing resources. As distributed and enterprise systems become more common it
is apparent that ignoring the availability of additional computing power can hinder
technological development. The recent proliferation of peer-to-peer networks has led
to the rapid development of cooperation and collaboration that can be leveraged to
create groups of resources which are made available to trusted members.
One lacking area in parallel computing is that the collaborative group is not
dynamic: all participating nodes must be known in advance and cannot be chosen
dynamically based on user preferences. Another drawback is that parallel computing
systems typically support only basic data types and are not built for today's
object-oriented applications. The ability to pass objects, rather than flattening all
data into basic types, could increase adoption of such a technology through virtually
seamless integration with existing applications.
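To illustrate what object passing buys over flattening to basic types, the sketch below round-trips a small task object through Java's built-in serialization, one of the two methods the thesis compares (XStream's analogous calls, toXML and fromXML, produce human-readable XML instead of a binary stream). The Task class here is a hypothetical example, not a ParaPeer type:

```java
import java.io.*;

// Round-trip an object through Java's built-in binary serialization: the
// receiver gets back a fully formed object rather than a flat array of
// basic types it must reassemble by hand.
public class SerializationSketch {
    static class Task implements Serializable {
        final int[] data;
        Task(int[] data) { this.data = data; }
    }

    public static void main(String[] args) throws Exception {
        Task original = new Task(new int[] {1, 2, 3});

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);                // object -> byte stream
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Task copy = (Task) in.readObject();       // byte stream -> object
            System.out.println(copy.data[2]);         // 3
        }
    }
}
```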
In addition to studying message passing directly related to parallel computing, this
thesis examined enterprise message passing in the form of JMS. JMS improves upon
the parallel message-passing systems in that all messages are routed through a
central server, so senders need no knowledge of each receiving node; however, it
suffers from the fact that all communication goes through this central server,
creating the possibility of a significant bottleneck as well as a single point of failure.
Different peer-to-peer technologies were considered. Gnutella was ruled out due to
its very specific message structure and intended use, while Pastry and JXTA both
offered significant benefits for peer-to-peer computing. Ultimately JXTA was chosen
for its strong user community and the rich API of the Java implementation of the
JXTA protocols. Existing parallel systems built on peer-to-peer networks were found
to have deficiencies, ranging from an apparent lack of run-time communication
support to minimized resource usage from allowing only one job to run at a time.
Another significant drawback, in the case of P3, is the lack of reuse of existing
peer-to-peer systems.
In the case of ParaPeer, JXTA provided the dynamic nature necessary in a
distributed cooperative computing environment, allowing dynamic node selection as
well as the ability to create messaging endpoints specifically for the use of a single
job. The ParaPeer system is intended both to support the development of new
applications that take advantage of parallel computing resources and to let existing
applications realize performance improvements through distributed computing
resources. Additionally, ParaPeer makes it easy to add compute nodes without
notifying every other node directly. It is expected that the ParaPeer system would
provide a cost-effective way of utilizing idle resources to increase application
performance.
Application groups that would benefit from the use of the ParaPeer system include
traditional parallel algorithms such as matrix multiplication and merge sort.
Additionally, new distributed applications developed for the specific purpose of
increasing computing power without increasing owned resources could benefit from
the decentralized nature of peer-to-peer systems. Projects similar to SETI@home
and Rosetta@home could benefit from peer-to-peer interaction and could be
developed without a dependency on a central data server.
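As an illustration of how such an algorithm decomposes, the sketch below splits matrix multiplication row-wise across workers. Local threads stand in for remote peers, and the split is illustrative rather than ParaPeer's actual distribution code:

```java
// Row-wise split of C = A * B across workers: the kind of decomposition a
// master peer could farm out. Threads stand in for remote peers here, and
// joining them plays the role of gathering partial results.
public class ParallelMatMul {
    public static void main(String[] args) throws Exception {
        int n = 4;
        int[][] a = new int[n][n], b = new int[n][n], c = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                a[i][j] = i + j;
                b[i][j] = (i == j) ? 2 : 0;   // B = 2 * identity, so C = 2A
            }

        int workers = 2;
        Thread[] pool = new Thread[workers];
        for (int w = 0; w < workers; w++) {
            final int lo = w * n / workers, hi = (w + 1) * n / workers;
            pool[w] = new Thread(() -> {      // each "peer" owns rows [lo, hi)
                for (int i = lo; i < hi; i++)
                    for (int j = 0; j < n; j++)
                        for (int k = 0; k < n; k++)
                            c[i][j] += a[i][k] * b[k][j];
            });
            pool[w].start();
        }
        for (Thread t : pool) t.join();       // gather all partial results
        System.out.println(c[1][2]);          // 2 * a[1][2] = 6
    }
}
```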
6. Future Work
The development of the ParaPeer system has led to the possibility of exploring
further improvements and additions. One topic of future improvement would be the
ability to pass off master control of a job to another computer: a ParaPeer node
could start a job, hand it to an underutilized node to control, and continue
performing other tasks while waiting for results. This leads to another area of
possible future research. ParaPeer currently has a single default peer selection
model, but it is easy to envision an improved model that takes into account a peer's
current resource availability, a peer's track record in participating in group
computation, and a peer's overall availability over a period of time. These research
topics would contribute to the overall quality of the system as well as making it
more predictable and improving its performance.
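One way such a selection model might combine the three factors is a simple weighted score; the weights, field names, and Peer type below are purely illustrative assumptions, not part of ParaPeer:

```java
import java.util.*;

// Hypothetical weighted peer-selection score combining the three factors
// suggested above: current resource availability, past participation record,
// and long-term uptime. The weights are illustrative only.
public class PeerSelection {
    record Peer(String id, double freeCpu, double jobsCompletedRatio, double uptimeRatio) {
        double score() {
            return 0.5 * freeCpu + 0.3 * jobsCompletedRatio + 0.2 * uptimeRatio;
        }
    }

    static Peer best(List<Peer> peers) {
        return peers.stream().max(Comparator.comparingDouble(Peer::score)).orElseThrow();
    }

    public static void main(String[] args) {
        List<Peer> peers = List.of(
            new Peer("a", 0.9, 0.5, 0.6),   // score 0.72
            new Peer("b", 0.4, 0.9, 0.9));  // score 0.65
        System.out.println(best(peers).id()); // a
    }
}
```

A real model would refresh these inputs continuously from peer advertisements rather than using static values.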
References
 W. Gropp, E. Lusk, N. Doss, A. Skjellum, A High-Performance, Portable
Implementation of the MPI Message Passing Interface Standard, Parallel
Computing, vol. 22, no. 6, pp. 789-828, Sept. 1996.
 The MPI Specification 1.1, [Online Document] Jun. 1995, [2004 Nov 10],
Available at HTTP: http://www.mpi-forum.org/docs/mpi-11-html/mpi-
 Nikos Drakos, PVM: Parallel Virtual Machine, A User's Guide and Tutorial for
Networked Parallel Computing, [Online Document] Aug. 1994, [2004 Nov
17], Available at HTTP: http://www.netlib.org/pvm3/book/node1.html
 The Gnutella Protocol Specification v0.4 Document Revision 1.2, [Online
Document], [2004 Dec 13], Available at HTTP:
http://www9.limewire.com/developer/gnutella_protocol_0.4.pdf.
 A. Rowstron, P. Druschel, Pastry: Scalable, decentralized object location and
routing for large-scale peer-to-peer systems, in Proceedings of the 18th
IFIP/ACM International Conference on Distributed Systems Platforms, 2001.
 Sun Microsystems, Project JXTA v2.0: Java Programmers Guide, [Online
Document] Mar. 2004, [2004 Dec 13], Available at HTTP:
 Sun Microsystems, Project JXTA: A Technology Overview, [Online
Document], Oct. 2002, [2004 Dec 13], Available at HTTP:
http://www.jxta.org/project/www/docs/jxtaview_01nov02.pdf.
 B. Traversat, A. Arora, M. Abdelaziz, M. Duigou, C. Haywood, J.-C. Hugly, E.
Pouyoul, B. Yeager, Project JXTA 2.0 Super-Peer Virtual Network, [Online
Document], May 2003, [2005 Mar 10], Available at HTTP:
 Sun Microsystems, Java Object Serialization Specification, [Online Document],
2003, [2005 Mar 10], Available at HTTP:
 XStream, [Online Document], [2005 Mar 10], Available at HTTP:
 A. Slominski, XPP3, [Online Document], [2005 Mar 10], Available at HTTP:
 M. Baker, B. Carpenter, G. Fox, S. Hoon Ko, X. Li, mpiJava: A Java Interface
to MPI, [Online Document], May 2000, [2005 Jan 22], Available at HTTP:
 L. Oliveira, L. Lopes, F. Silva, P3: Parallel Peer to Peer An Internet Parallel
Programming Environment, presented at International Workshop on Peer-to-
Peer Computing, Pisa, Italy, 2002.
 J. Verbeke, N. Nadgir, G. Ruetsch, I. Sharapov, Framework for Peer-to-Peer
Distributed Computing in a Heterogeneous, Decentralized Environment,
[Online Document], [2005 Jan 24], Available at HTTP:
 M. Hapner, R. Burridge, R. Sharma, J. Fialli, K. Stout, Java Message Service,
[Online Document], Apr. 2002, [2005 Mar 10], Available at HTTP:
 Napster, [Online Document], [2005 Mar 10], Available at HTTP:
 D. Anderson, J. Cobb, E. Korpela, M. Lebofsky, D. Werthimer, SETI@home:
an experiment in public-resource computing, Communications of the ACM,
vol. 45, no. 11, pp. 56-61, Nov. 2002.
 Rosetta@home, [Online Document], [2005 Nov 13], Available at HTTP: