Stochastic diffusion model of the heterogeneous populations

Material Information

Stochastic diffusion model of the heterogeneous populations
Zakharyan, Armen
Publication Date:
Physical Description:
x, 164 leaves : ; 28 cm


Subjects / Keywords:
Particles -- Mathematical models ( lcsh )
Stochastic analysis ( lcsh )
Diffusion ( lcsh )
Diffusion ( fast )
Particles -- Mathematical models ( fast )
Stochastic analysis ( fast )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references (leaves 158-164).
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Armen Zakharyan.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
436442945 ( OCLC )
LD1193.L622 2008d Z34 ( lcc )

Full Text
Armen Zakharyan
M.S., UCD, 2006
A thesis submitted to the
University of Colorado at Denver
and Health Sciences Center
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Applied Mathematics

This thesis for the Doctor of Philosophy
degree by
Armen Zakharyan
has been approved
Barbara Bailey
Lisa Johansen
Kathe Bjork

Zakharyan, Armen (Ph.D., Applied Mathematics)
Stochastic diffusion model of heterogeneous populations
Thesis directed by Professor Karen Kafadar
Kolmogorov's first and second equations, formulated as partial differential equations, can be used to find transition probabilities in simple systems; while they can be solved for simple cases, it is virtually impossible to solve them for complex processes that involve more than three or four population types. In this work we propose a framework which allows exact simulation of such complex processes. This framework will aid our understanding of many processes in various areas of science. In this work we develop a generic approach to simulation of interactions between populations of particles in an arbitrarily shaped
closed volume. The work describes direct (mesoscopic) simulation of particles,
where each particle is represented as a computational unit in the program, and
stochastic simulation of particles, where particle behavior is modeled using probabilistic and statistical techniques. We simulate directly and stochastically the
following interactions: random flight, collision, chemoattraction/repulsion and
aging/death. By showing that the stochastic simulation algorithm (SSA) and the direct simulation algorithm (DSA) are equivalent, we justify applying the SSA to the simulation of large-scale populations. The complexity of the SSA is determined by the size of the finite-volume partitioning, whereas the complexity of the DSA is determined by the size of the populations, which is, in many simulation tasks, computationally intractable. Therefore, the aim of this work is to show that the SSA can be applied to simulation of large-scale populations and to provide an
illustrative example using a model of disease progression. The disease used in
the illustration is multiple sclerosis, a paralytic disease of the central nervous system initiated by myelin-specific autoaggressive T cells. Many processes in this
disease are well suited for simulating with SSA. Mechanisms triggering relapses
in multiple sclerosis are not well understood; they are the result of complex
interactions between immune and nervous systems containing large populations
of cells, proteins, etc. We apply the developed approach to simulation of some
of the processes in a closed volume that contains several populations interacting
with each other in the vascular, interstitial and endothelial compartments.
This work presents a new (faster) algorithm for simulations of interacting populations in space, a new implementation for statistical properties of collided particles, and a new implementation for calculating simple systems using Kolmogorov's first and second equations via generating functions. As part of this work we developed software based on the developed algorithm.
This work was motivated by the complexity of the processes that take place in multiple sclerosis. The complexity of processes that occur in different spaces at different times makes it computationally impossible to find
a solution in a closed form. Moreover, because the number of particles and the
number of reactions that are taking place in such problems are very large, it
is also very difficult to develop an algorithm that simulates all reactions in a

relatively short time.
The SSA model is based on branching processes and processes with particle interactions. We consider a community of different cell types that meets the following conditions. The entire community is divided into several populations. Each type of cell in the community is described by a set of parameters, which can change with the age of the cell. These parameters are the same for one species of a population and differ between populations of different species.
During their lifetimes, cells can divide, interact with other cells and be exposed
to the influence of external factors. The division property of a cell, and also
the intensity, with which it interacts with other types of cells, depend on cell
parameters. Cells could die either as a result of interactions with other cells or
because they have reached the end of their lifetimes.
This thesis describes the model for the case when the community consists of a single isolated population, and then the more general case involving an arbitrary number of populations. The model assumes that 1) the population is subject to certain continuous external shocks, whose intensity varies among specimens and is proportional to their weight, and 2) the interaction between individuals of the populations can be one of m different types.
Following the model specifications is a description of the modeling algorithm in
the SSA model. The DSA model employs Monte-Carlo simulation where each
individual particle is simulated, in contrast to the SSA model, where simulation
is executed at the level of finite-volume partitions. Deterministic solutions for
simple systems will be compared with the results when run under DSA.

This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.

This work is dedicated to my uncle Michael.

This thesis would not have been possible without the generous support of my adviser, Professor Karen Kafadar.

1. Introduction........................................................... 1
2. Compare solutions of different methods............................. 8
3. Statistical Properties of Population.................................. 11
4. Direct Simulation Algorithm........................................... 23
4.1 The Algorithm....................................................... 23
4.2 Random walk/diffusion............................................... 28
4.3 Chemo-attraction.................................................... 33
4.4 Reaction upon collision, birth and death of objects.............. 33
4.5 Tau Leaping Algorithm............................................... 49
4.6 Model of Infiltration Demyelination................................. 53
4.7 Identification...................................................... 55
5. Elements of theory and research tasks................................. 58
5.1 Markov branching processes with a finite number of types of particles 58
5.2 Markov process on N^n. Kolmogorov equations.................... 58
5.3 Branching process with n types of particles....................... 60
5.4 Multidimensional generating functions............................... 61
5.5 Equations of branching processes.................................... 63
5.6 Probabilities of extinction......................................... 65
5.7 Equations for generating functions ................................. 65
5.7.1 Equations for expectations......................................... 66

6. Transformations of the form A → γ1A + γ2B; B → 0.............. 68
6.1 Transformation A → A + B, 2A + B; B → 0.......................... 68
6.2 Transformation: A → 0, B, A + B, 2A; B → 0................... 71
6.2.1 System of non-linear equations .............................. 71
6.3 Transformation A → 0, A + B, 2A + B, 2A; B → 0................ 75
6.4 Transformation A → B, 2A + B; B → 0 .......................... 79
7. Transformations of the form A → γ1A + γ2B; B → 2B.............. 83
7.1 Transformation A → 2A + B; B → 2B............................ 83
7.2 Transformation A → 0, B, A + B, 2A; B → 2B.................. 86
7.3 Transformation A → B, 2A + B; B → 2B ................... 90
8. Other forms of transformations .................................... 93
8.1 Transformation A → B; B → A....................................... 93
8.2 Transformation A → A + B; B → A................................... 96
8.3 Transformation A → A + B; B → A + B............................... 99
8.4 Transformation A → 2A; B → A + 2B............................. 101
9. Conclusion......................................................... 106
A. The pseudocode..................................................... 108
A.0.1 Description of Vixdum........................................... 108
A.0.2 Pseudocode...................................................... 119
Figures .............................................................. 153
References............................................................ 158

1. Introduction
Kolmogorov's first and second equations, formulated as partial differential equations, can be used to solve for the transition probabilities in simple systems; while they can be solved for simple cases, it is virtually impossible to solve them for complex processes that involve more than three or four population types. In this work we
propose a framework that allows exact simulation of such complex processes.
Such a framework will aid understanding of many processes in various areas of science. For example, the processes that lead to infiltration of T-cells from the vascular space to the interstitial space are not well understood. Various types of objects such as cells, proteins, and peptides interact to transform into new states, produce proteins, bind with each other, etc. It is believed that this complex interplay among populations of these objects determines the location of infiltration of activated T-cells into the brain, which is followed by an attack of the infiltrated activated T-cells against the myelin, a protective layer around the axons of neural cells [82]. Numerous pathways of interactions among populations
of the objects have been described. Since it is their cooperative interaction that
results in a blood brain barrier breakdown, development of numerical methods
for simulation of these processes is of significant importance. We will attempt
to simulate some of the known processes in this simulation framework. Discrete stochastic algorithms for simulating reactions among populations are more computationally demanding than deterministic algorithms employing ordinary differential equations, due to the randomness that must be taken into account in stochastic models of reactions. However, discrete stochastic algorithms offer more realistic representations of the interactions, as they are better suited for modeling the fluctuations that result from the inherent stochasticity of the processes.
Algorithms for the simulation of reactions among populations of objects in homogeneous media have been developed earlier. Deterministic algorithms solve ordinary differential equations, while stochastic algorithms address the simulation from a probabilistic standpoint. Gillespie's algorithm [35], and the subsequent algorithms based on it, calculate the probability of every reaction using reaction rate constants and population sizes; based on these probabilities, random numbers are generated to determine the length of the time step and the reaction that fires. The StochSim algorithm [49] employs a constant time step; however, the time complexity of StochSim is comparable to that of Gillespie's algorithm. Hybrid approaches have also been implemented: fast reactions are simulated with deterministic or Langevin equations, and slow reactions with Gillespie's algorithm [35]. The basic problem with these algorithms is their assumption of homogeneity and their inability to represent 3D models and geometry. This problem was resolved later: Stundzia [71] in 1996 offered a finite-volume approach to the stochastic simulation algorithm. Cellular reproduction and death were implemented later. We adopted ideas from the above authors to develop finite-volume-based simulation algorithms for modeling reactions among populations, object reproduction, death, chemo-attraction, and random-flight movement. The current challenge for algorithms in this field is to take into account fluid dynamics [83]; in our case, plasma dynamics.
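As an illustration of the class of algorithms discussed above, the following is a minimal sketch of Gillespie's direct method for first-order reactions in a well-mixed (homogeneous) volume. The reaction chain and rate values below are illustrative choices, not taken from the thesis model:

```python
import random

def gillespie_direct(x0, reactions, t_end, seed=1):
    """Gillespie's direct method, restricted to first-order reactions.

    x0        : initial counts per species
    reactions : list of (rate_constant, reactant_index, state_change)
    Returns the trajectory as a list of (time, state) pairs.
    """
    rng = random.Random(seed)
    x = list(x0)
    t, traj = 0.0, [(0.0, tuple(x0))]
    while True:
        # Propensity of each first-order reaction: a_j = k_j * (reactant count).
        a = [k * x[i] for k, i, _ in reactions]
        a0 = sum(a)
        if a0 == 0.0:
            break                      # nothing left that can react
        t += rng.expovariate(a0)       # time to next reaction ~ Exp(a0)
        if t > t_end:
            break
        u, acc = rng.random() * a0, 0.0
        for (k, i, change), aj in zip(reactions, a):
            acc += aj                  # pick reaction j with probability a_j / a0
            if u <= acc:
                x = [xi + d for xi, d in zip(x, change)]
                break
        traj.append((t, tuple(x)))
    return traj

# Illustrative chain A -> B -> C with rates lam and mu (species order A, B, C).
lam, mu = 1.0, 0.5
reactions = [(lam, 0, (-1, +1, 0)),   # A -> B
             (mu,  1, (0, -1, +1))]   # B -> C
traj = gillespie_direct((100, 0, 0), reactions, t_end=50.0)
```

Each reaction here conserves the total particle count, so the population sizes remain bounded throughout the run.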

Objectives of this proposed activity are summarized as follows:
1. Implement a discrete particle dynamics algorithm for simulation of plasma in the vascular space of a blood vessel.
2. Integrate the implemented discrete particle dynamics algorithm with a prototyped discrete-time stochastic population simulation algorithm.
3. Apply the integrated algorithm to simulation of some known reactions and phenomena that cause lesions in the brain during autoimmune disorders such as multiple sclerosis.
4. Identify the scope of complexity needed to create a model with a mixed mesoscopic-probabilistic representation of objects in a space partitioned into finite volumes.
5. Solve simple models with differential equations and compare the results with the results of the discrete particle dynamics algorithm.
Numerous models of blood flow and plasma dynamics use computational fluid
dynamics [84], molecular dynamics [85], lattice Boltzmann model [86], etc.
While each approach has its advantages and disadvantages, application of a discrete particle dynamics technique seems attractive as it fits in the context of a simulation algorithm. The main advantage is that both models use discrete time steps, and both represent the system at the mesoscopic level. We have developed a prototype of a stochastic algorithm for simulation of interactions between populations of moving objects in 3D space. The prototyped

Figure 1.1: A sample screen showing objects performing random walks that collide within an invisible partition of finite volumes and undergo proliferation, death, and reactions with objects of other populations to transform into and produce other objects.
version lacks an accurate model of plasma dynamics. However, the prototyped model implements a stochastic algorithm with a discrete time step
for simulation of reactive interactions between populations and for simulation
of reproduction and programmed death of objects in populations. We propose
to integrate the prototyped model with a particle dynamics model of plasma in
a blood vessel. Such integration will enable a more realistic simulation of processes at the blood-brain barrier. In particular, we will be able to simulate the
processes that lead to infiltration and demyelination in autoimmune disorders
of the brain in the three compartments, where infiltration phenomenon occurs:
vascular, endothelial and interstitial spaces.
To simulate all processes that lead to the infiltration it is not sufficient to
simulate only one aspect of the phenomena. Therefore, it is necessary to model

interactions between dynamic populations of cells, proliferation and apoptosis of cells, chemo-attraction of one population of cells by another, and the dynamics of the plasma in which the populations are located. To address this multifaceted
problem we started with a basic model with a weak assumption that the ob-
jects move with a random flight in the space. There is an established theory
behind the random walk movement [6], and it can be extended easily to the
finite-volume simulation; therefore, random walk has been selected as a starting
point. The prototyped simulation algorithm [unpublished data] is similar to
the StochSim algorithm. The algorithm is extended to 3D space by the finite
volume technique, which partitions space into tiny volumes such that heterogeneous media can be considered as a collection of volumes with homogeneous
concentrations of population of objects. In the prototyped model, the objects
move as a random walk of constant step and constant velocity. Therefore, all objects of a population move with the constant velocity and step size that are assigned
to the population at the model design stage. New objects, which are created
at a collision triggering a reaction, move with a predefined velocity assigned to
the population. In contrast to collision-reaction, population reproduction and
death occur as a programmed process at the population level. During reproduction and death, energy conservation is preserved by corresponding changes in the velocities of the objects involved in the reproduction and death and of the other objects located in the same finite volume at that time. In this proposal we would like to take the next step, which is to replace
the random walk motion of the objects with a model that correctly represents
the plasma fluid. The task is to implement movement of the objects in this finite-volume space in such a way that the fluid dynamics in a vessel are accurately
represented and simulated together with simulation of other phenomena such
as reaction, production, transformation, destruction, proliferation, apoptosis,
and chemo-attraction processes. We are considering implementation of a well-described discrete particle method, specifically the dissipative particle dynamics introduced in [87], to simulate plasma dynamics in a vessel.
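To make the random-flight and finite-volume machinery above concrete, here is a minimal sketch (not the thesis software): particles take constant-length steps in uniformly random 3D directions inside a cubic closed volume, reflecting at the walls, and are then binned into cubic finite volumes. The domain size, step length, particle count, and grid resolution are arbitrary illustrative choices:

```python
import random

def random_flight_step(positions, step, size, rng):
    """Advance each particle by a constant-length step in a uniformly random
    3D direction, reflecting off the walls of a cubic closed volume."""
    out = []
    for pos in positions:
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]   # random direction vector
        norm = sum(c * c for c in d) ** 0.5 or 1.0
        p = [c0 + step * c / norm for c0, c in zip(pos, d)]
        # Reflect at the boundaries so particles stay inside [0, size]^3
        # (valid as long as step < size).
        p = [abs(c) if c < 0.0 else (2.0 * size - c if c > size else c) for c in p]
        out.append(tuple(p))
    return out

def bin_to_volumes(positions, size, n_per_axis):
    """Assign each particle to a cubic finite volume, so that heterogeneous
    media can be treated as homogeneous within each small volume."""
    h = size / n_per_axis
    counts = {}
    for pos in positions:
        key = tuple(min(int(c / h), n_per_axis - 1) for c in pos)
        counts[key] = counts.get(key, 0) + 1
    return counts

rng = random.Random(0)
size, n = 10.0, 200
pts = [tuple(rng.uniform(0.0, size) for _ in range(3)) for _ in range(n)]
for _ in range(100):
    pts = random_flight_step(pts, step=0.5, size=size, rng=rng)
counts = bin_to_volumes(pts, size, n_per_axis=5)
```

Per-volume counts such as these are exactly what a finite-volume SSA consumes in place of individual particle positions.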
There is a variety of models that can benefit from this software, such as applications of down-regulatory phenomena, central nervous system endothelial cell-polymorphonuclear cell interactions, T- and B-cell interactions in autoimmune syndromes, changing and testing the influence of cytokines and chemokines in autoimmune diseases, renal tubular epithelial and T-cell interactions in autoimmune renal disease, and T-cell vaccination in Experimental Autoimmune Encephalomyelitis. Other applications beyond molecular biology include population dynamics and epidemiological processes, though such applications may require modification of the ideas presented here. This thesis discusses
the use of this algorithm only for applications in molecular biology.
Like all software, this program has some limitations. First, it handles only three pre-defined compartments in the space: vascular, endothelial, and interstitial. This limitation is relatively easy to address: the number of compartments
could be 10 or more, depending on the computer power and the nature of the
problem. The number of compartments that can be created is a function of
the number of cells and the number of types of interactions in each compartment. Another limitation of this program is that it does not take advantage of multi-tasking or parallel processing. To extend the program to multi-tasking or parallel processing, the source code would need to be changed substantially. Adding multi-tasking or parallel processing would provide considerable additional power and speed, and would help increase the scale of the problem by calculating interactions in different compartments simultaneously, as well as by increasing the number of compartments, the number of cells in each compartment, and the number of interaction types.

2. Compare solutions of different methods
We compare solutions of different methods by solving simple models deterministically with differential equations and comparing the results with the discrete particle dynamics algorithm. Suppose there are three types of particles, A, B, and C. Transitions among these particles take place according to the following schema ($N = \{0, 1, 2, \dots\}$):
$$\{A \to \beta_1^1 A + \beta_2^1 B + \beta_3^1 C; \quad B \to \beta_2^2 B + \beta_3^2 C\} \qquad (2.1)$$

where $\beta_1^1, \beta_2^1, \beta_3^1, \beta_2^2, \beta_3^2 \in N$, which means that in the considered process a particle of type A can transform into a group consisting of $\beta_1^1$ particles of type A, $\beta_2^1$ particles of type B, and $\beta_3^1$ particles of type C; a particle of type B can transform into $\beta_2^2$ particles of type B and $\beta_3^2$ particles of type C. Particles of type C are final and cannot transform further. From the mathematical standpoint this model is a Markov process $\xi(t) = (\xi_1(t), \xi_2(t), \xi_3(t))$ with continuous time $t \in [0, \infty)$ and two types of interactions: $e^1 = (1, 0, 0)$ and $e^2 = (0, 1, 0)$. Here $N^3 = \{(\alpha_1, \alpha_2, \alpha_3),\ \alpha_1, \alpha_2, \alpha_3 = 0, 1, 2, \dots\}$ is the set of vectors with non-negative components, where $\alpha_1$ is the number of particles of type A, $\alpha_2$ the number of particles of type B, and $\alpha_3$ the number of particles of type C. The intensities of the transformations are $\lambda$ and $\mu$. The process $\xi(t)$ with transition probabilities

$$P_{\alpha}^{\beta}(t) = P\{\xi(t) = (\beta_1, \beta_2, \beta_3) \mid \xi(0) = (\alpha_1, \alpha_2, \alpha_3)\}$$

is a special class of Markov processes.

If the Markov process is in the initial state $\alpha = (\alpha_1, \alpha_2, \alpha_3)$, then after a random time $\tau^1(\alpha_1, \alpha_2, \alpha_3)$, with $P\{\tau^1(\alpha_1, \alpha_2, \alpha_3) < t\} = 1 - e^{-\lambda \alpha_1 t}$, a particle of type A transforms into a group of particles with probability distribution $p^1_{\beta_1 \beta_2 \beta_3}$, which means that the process moves to the state corresponding to the vector $(\alpha_1 + \beta_1 - 1,\ \alpha_2 + \beta_2,\ \alpha_3 + \beta_3)$. After a random time $\tau^2(\alpha_1, \alpha_2, \alpha_3)$, with $P\{\tau^2(\alpha_1, \alpha_2, \alpha_3) < t\} = 1 - e^{-\mu \alpha_2 t}$, a particle of type B transforms into a group of particles with probability distribution $p^2_{\beta_1 \beta_2 \beta_3}$, and the process moves to the state $(\alpha_1 + \beta_1,\ \alpha_2 + \beta_2 - 1,\ \alpha_3 + \beta_3)$; i.e., $\tau^1$ and $\tau^2$ are exponentially distributed with rates $\lambda \alpha_1$ and $\mu \alpha_2$, respectively. In the state $(\alpha_1, \alpha_2, \alpha_3)$ the process spends the random time

$$\min\{\tau^1(\alpha_1, \alpha_2, \alpha_3),\ \tau^2(\alpha_1, \alpha_2, \alpha_3)\} \qquad (2.2)$$
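The waiting time (2.2) can be checked numerically: the minimum of two independent exponential clocks is itself exponential, with rate equal to the sum of the two rates. A small sketch, where the two rate values stand in for λα₁ and μα₂ and are purely illustrative:

```python
import random

rng = random.Random(7)
rate1, rate2 = 1.2, 0.8   # illustrative stand-ins for lambda*alpha1 and mu*alpha2
n = 100000
# Sample min{tau1, tau2} from the two independent exponential clocks.
mins = [min(rng.expovariate(rate1), rng.expovariate(rate2)) for _ in range(n)]
mc_mean = sum(mins) / n
theory_mean = 1.0 / (rate1 + rate2)   # the minimum is Exp(rate1 + rate2)
```

With these rates both means come out near 0.5, confirming that the sojourn time in a state is exponential with the combined rate.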
Consider the monomolecular type of reaction $A \to B$. Let $X(t)$ denote the random number of particles in the system at time $t$, $t \geq 0$. The equation of total balance of the reaction is

$$P_x(t + \Delta t) = k(x+1)\,\Delta t\, P_{x+1}(t) + (1 - kx\,\Delta t)\, P_x(t) + o(\Delta t) \qquad (2.3)$$

where $P_x(t) = P(X(t) = x)$ and $k$ is the local coefficient of the underlying point process; i.e., the conditional probability of an event in the interval $(t, t + \Delta t)$ is $k\,\Delta t + o(\Delta t)$, and the probability of more than one event is $o(\Delta t)$. Note that $X(t)$, the number of particles in the system at time $t$, is itself not a Poisson process. Taking the limit $\Delta t \to 0$ leads to Kolmogorov's differential equation

$$\frac{dP_x(t)}{dt} = k(x+1)P_{x+1}(t) - kx P_x(t) \qquad (2.4)$$

Define the generating function $F(s, t)$ as follows:

$$F(s, t) = \sum_x P_x(t)\, s^x \qquad (2.5)$$

where $|s| \leq 1$, or the series will diverge. By itself $s$ is not interpretable, except to show when the series converges. When $s = 1$ the expectation and the variance can be found by differentiating (2.5). Equation (2.4) can then be presented in the form of a differential equation:

$$\frac{\partial F}{\partial t} = k(1 - s)\frac{\partial F}{\partial s} \qquad (2.6)$$

With the initial condition $F(s, 0) = s^{x_0}$, equation (2.6) has the solution

$$F(s, t) = \left(1 + (s - 1)e^{-kt}\right)^{x_0} \qquad (2.7)$$

Applying the expressions for the mean and variance,

$$M\{X(t)\} = \left.\frac{\partial F}{\partial s}\right|_{s=1}, \qquad D\{X(t)\} = \left.\frac{\partial^2 F}{\partial s^2}\right|_{s=1} + \left.\frac{\partial F}{\partial s}\right|_{s=1} - \left(\left.\frac{\partial F}{\partial s}\right|_{s=1}\right)^2,$$

we hence obtain

$$M\{X(t)\} = x_0 e^{-kt}, \qquad D\{X(t)\} = x_0 e^{-kt}\left(1 - e^{-kt}\right).$$
The deterministic solutions of equations of this type were obtained in the work of McQuarrie [89], where the monomolecular process is described by the differential equation $dx/dt = -kx$, with solution $x(t) = x_0 \exp(-kt)$, which fully coincides with the expectation derived in this work.
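The agreement between the stochastic moments and the deterministic solution can be checked with a short Monte Carlo experiment. The sketch below uses the fact that in the monomolecular reaction each particle survives past time t independently with probability exp(-kt); the parameter values are illustrative:

```python
import math
import random

def simulate_decay(x0, k, t, n_runs, seed=0):
    """Monte Carlo for the monomolecular reaction A -> B: each of the x0
    initial particles independently survives past time t with probability
    exp(-k t), so X(t) is the number of survivors in each run."""
    rng = random.Random(seed)
    p = math.exp(-k * t)
    return [sum(1 for _ in range(x0) if rng.random() < p)
            for _ in range(n_runs)]

x0, k, t = 1000, 0.3, 2.0
runs = simulate_decay(x0, k, t, n_runs=2000)
mean = sum(runs) / len(runs)
var = sum((c - mean) ** 2 for c in runs) / len(runs)
p = math.exp(-k * t)
# Theory: M{X(t)} = x0 * exp(-k t) and D{X(t)} = x0 * exp(-k t) * (1 - exp(-k t)),
# matching both the generating-function result and McQuarrie's deterministic mean.
```

The empirical mean and variance should track the formulas above to within Monte Carlo error.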

3. Statistical Properties of Population
In computational problems of systems biology, one must compute the probability of occurrence of certain interactions between populations at a mesoscopic level in a solution, given population characteristics such as concentration and volume. Moreover, the modern approach to such problems increasingly requires implementation of stochastic models describing the occurrence of an interaction based on the probabilistic nature of the processes involved. Combinatorial methods are required to keep the number of simulations as small as possible. For that reason it is critical to establish a theoretical platform that supports the development of various probabilistic models operating with the characteristics of populations and the reactions among them.
Since the number of cells in humans and other mammals is very large, we adopt the concept of finite volumes. This concept has already been applied successfully in systems biology to study various processes. Finite volumes are small volumes in which the distribution of populations across the volume can be assumed to be uniform. In assuming a uniform distribution within finite volumes we rely on the well-known work of Gillespie [35], with the same assumptions. In particular, a population of objects is assumed to be randomly distributed in the space of each finite volume, and the number of objects within each volume can be very large.
First, we determine the probabilistic picture of the population objects distributed in the volume, that is, the portion of the volume that is taken up by the population objects and its variance. It is easy to see that in such computational problems the issue becomes the placement of $r$ cells in $n$ volumes in such a way that each volume can contain up to $r$ cells. Each cell enters any given finite volume with probability $1/n$. We therefore apply combinatorial methods relevant to the problem, where $r$ is the number of cells, $n$ is the number of volumes, and $m$ is the number of empty volumes after placing $r$ cells into $n$ volumes.
The probability $P_m(r, n)$ of the number of empty volumes is expressed by the recurrent formula

$$P_m(r+1, n) = \frac{n-m}{n} P_m(r, n) + \frac{m+1}{n} P_{m+1}(r, n) \qquad (3.1)$$
which can be proved simply by the formula of total probability. First we calculate the expectation

$$E(r, n) = \sum_{m=1}^{n} m P_m(r, n) \qquad (3.2)$$

where $E(r, n)$ does not depend on whether $r < n$ or $r \geq n$. Multiplying both parts of (3.1) by $m$ and summing from $1$ to $n$ leads to

$$\sum_{m=1}^{n} m P_m(r+1, n) = \sum_{m=1}^{n} \frac{m(n-m)}{n} P_m(r, n) + \sum_{m=1}^{n} \frac{m(m+1)}{n} P_{m+1}(r, n) \qquad (3.3)$$

Let us examine the second sum by substituting $k = m + 1$:

$$\sum_{m=1}^{n} \frac{m(m+1)}{n} P_{m+1}(r, n) = \sum_{k=2}^{n} \frac{k(k-1)}{n} P_k(r, n) = \sum_{m=1}^{n} \frac{m(m-1)}{n} P_m(r, n)$$

Substituting the last expression into (3.3) leads to

$$\sum_{m=1}^{n} m P_m(r+1, n) = \sum_{m=1}^{n} \left[\frac{m(n-m)}{n} + \frac{m(m-1)}{n}\right] P_m(r, n) = \left(1 - \frac{1}{n}\right) \sum_{m=1}^{n} m P_m(r, n)$$

That is,

$$E(r+1, n) = \left(1 - \frac{1}{n}\right) E(r, n).$$

Applying this equation $r - 1$ times yields

$$E(r, n) = \left(1 - \frac{1}{n}\right)^{r-1} E(1, n).$$

Note that

$$E(1, n) = n - 1$$

because, with only one cell and $n$ volumes, the number of empty volumes is naturally equal to $n - 1$. Therefore

$$E(r, n) = (n - 1)\left(1 - \frac{1}{n}\right)^{r-1} = n\left(1 - \frac{1}{n}\right)^{r}.$$
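This closed form, together with recurrence (3.1) it came from, can be verified with exact rational arithmetic: start from P_{n-1}(1, n) = 1 and apply the recurrence r - 1 times. A small sketch, with arbitrary illustrative values of r and n:

```python
from fractions import Fraction

def empty_volume_pmf(r, n):
    """Exact distribution P_m(r, n) of the number of empty volumes, built by
    iterating recurrence (3.1) from the start P_{n-1}(1, n) = 1."""
    pmf = {n - 1: Fraction(1)}            # one cell leaves n - 1 volumes empty
    for _ in range(r - 1):
        nxt = {}
        for m, p in pmf.items():
            # Next cell lands in an occupied volume: m empties remain.
            nxt[m] = nxt.get(m, Fraction(0)) + Fraction(n - m, n) * p
            # Next cell lands in an empty volume: m - 1 empties remain.
            if m > 0:
                nxt[m - 1] = nxt.get(m - 1, Fraction(0)) + Fraction(m, n) * p
        pmf = nxt
    return pmf

r, n = 12, 5
pmf = empty_volume_pmf(r, n)
mean = sum(m * p for m, p in pmf.items())
```

The probabilities sum to one, and the mean reproduces E(r, n) = n(1 - 1/n)^r = (n-1)^r / n^(r-1) exactly, with no floating-point error.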

We could derive the same result by letting $M(R, N)$ be the number of empty volumes with $R$ cells and $N$ volumes; conditional on $R = r$ cells observed in $N = n$ volumes, $M$ takes the value $m$ with probability $P_m(r, n)$, so finally $E(M(r, n)) = n(1 - 1/n)^r$.
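A quick Monte Carlo check of this expectation, with arbitrary illustrative values of r, n, and the number of runs:

```python
import random

def empty_volume_counts(r, n, n_runs, seed=0):
    """Place r cells independently and uniformly into n volumes and record
    the number of volumes left empty, over n_runs repetitions."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        occupied = {rng.randrange(n) for _ in range(r)}
        results.append(n - len(occupied))
    return results

r, n = 50, 20
samples = empty_volume_counts(r, n, n_runs=20000)
mc_mean = sum(samples) / len(samples)
exact = n * (1 - 1 / n) ** r     # E(r, n) = n(1 - 1/n)^r
```

The sample mean agrees with the exact formula to within Monte Carlo error.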
To calculate the variance it is convenient to denote the number of empty volumes after placement of $r$ cells in $n$ volumes by $m(r, n)$; i.e.,

$$E(r, n) = M(m(r, n)) = n\left(1 - \frac{1}{n}\right)^{r} \qquad (3.4)$$

First, calculate the expectation

$$K(r, n) = M\big(m(r, n)(m(r, n) - 1)\big) = \sum_{m=2}^{n} m(m-1) P_m(r, n)$$

noting that the term at $m = 1$ is zero, so the sum may start at $m = 2$.
Using (3.1) multiplied by $m(m-1)$, and substituting $k = m + 1$ in the second sum as before:

$$K(r+1, n) = \sum_{m=2}^{n} m(m-1) P_m(r+1, n)$$
$$= \sum_{m=2}^{n} \frac{m(m-1)(n-m)}{n} P_m(r, n) + \sum_{m=2}^{n} \frac{m(m-1)(m-2)}{n} P_m(r, n)$$
$$= \sum_{m=2}^{n} \frac{m(m-1)(n-m) + m(m-1)(m-2)}{n} P_m(r, n)$$
$$= \left(1 - \frac{2}{n}\right) \sum_{m=2}^{n} m(m-1) P_m(r, n) = \left(1 - \frac{2}{n}\right) K(r, n).$$

Carrying on the iteration, we arrive at

$$K(r+1, n) = \left(1 - \frac{2}{n}\right)^{r} K(1, n);$$

Note that

$$K(1, n) = M\big(m^2(1, n)\big) - M\big(m(1, n)\big) = (n-1)^2 - (n-1) = (n-1)(n-2) = n(n-1)\left(1 - \frac{2}{n}\right)$$

so that

$$K(r, n) = n(n-1)\left(1 - \frac{2}{n}\right)^{r} \qquad (3.5)$$
Finally, we use (3.4) and (3.5) to calculate the variance as

$$D(m(r, n)) = M\big(m^2(r, n)\big) - \big(M(m(r, n))\big)^2$$
$$= M\big(m(r, n)(m(r, n) - 1)\big) + M\big(m(r, n)\big) - \big(M(m(r, n))\big)^2$$
$$= K(r, n) + E(r, n) - E^2(r, n)$$
$$= n(n-1)\left(1 - \frac{2}{n}\right)^{r} + n\left(1 - \frac{1}{n}\right)^{r} - n^2\left(1 - \frac{1}{n}\right)^{2r} \qquad (3.6)$$
It should be noted that equations (3.4) and (3.6) are exact, and that they determine the exact values of the expectation and variance for any $n$ and $r$. However, their calculation for large $n$ and $r$ (say $r > 1000$) is considerably more difficult. We use the assumption of a constant proportion of cells in volumes, i.e., $r/n = \mathrm{const}$. In reality the cell concentration is not perfectly constant, and one could consider $C = r/n$ to have a probability distribution; we regard such a problem as a future enhancement. Here we assume that the concentration $C = r/n$ is constant.
For large $n$ the expectation is equal to

$$E(r, n) = n \exp\left(-\frac{r}{n}\right) \qquad (3.7)$$

and the variance is equal to

$$D(r, n) = n \exp\left(-\frac{r}{n}\right)\left(1 - \left(1 + \frac{r}{n}\right)\exp\left(-\frac{r}{n}\right)\right) \qquad (3.8)$$

Analyzing formulas (3.7) and (3.8), we should pay special attention to the fact that the expectation and variance are proportional to $n$, with factors depending only on the concentration. Substituting the concentration $C = r/n$ gives

$$E(r, n) = n e^{-C}, \qquad D(r, n) = n e^{-C}\left[1 - (1 + C)e^{-C}\right]$$
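The quality of the large-n approximations can be checked against the exact formulas at a fixed concentration. A small sketch, where the concentration C = 2 and the values of n are arbitrary illustrative choices:

```python
import math

def exact_moments(r, n):
    """Exact expectation and variance of the number of empty volumes,
    equations (3.4) and (3.6)."""
    e = n * (1 - 1 / n) ** r
    k = n * (n - 1) * (1 - 2 / n) ** r
    return e, k + e - e * e

def approx_moments(r, n):
    """Large-n approximations (3.7) and (3.8), written with C = r/n."""
    c = r / n
    e = n * math.exp(-c)
    return e, e * (1 - (1 + c) * math.exp(-c))

rows = []
for n in (10, 100, 1000):
    r = 2 * n                    # hold the concentration C = r/n fixed at 2
    rows.append((n, exact_moments(r, n), approx_moments(r, n)))
```

As n grows at fixed C, the relative error of the approximation shrinks, consistent with (3.7) and (3.8) being large-n limits of (3.4) and (3.6).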
Note that if there are $pr$ cells in $pn$ volumes, then the entire set of $pn$ volumes can be divided into $p$ groups of $n$ volumes, the $i$-th group containing $S_i$ cells, with $\sum S_i = pr$. As the cells distribute themselves evenly, we can expect ultimately that

$$\frac{S_i}{S_j} \to 1 \qquad (i = 1, 2, \dots, p;\ j = 1, 2, \dots, p;\ i \neq j)$$

and, according to the limit theorems, $S_i \to r$ as $n \to \infty$. Then the expectation and variance of the number of empty volumes in group $i$ are $E(r, n)$ and $D(r, n)$, and hence the expectation and variance of the number of empty volumes in the total $pn$ volumes, $pE(r, n)$ and $pD(r, n)$, satisfy the assumptions for the CLT to apply. It is not difficult to prove that if $r, n \to \infty$ such that $E(r, n) \to \infty$ and $n - E(r, n) \to \infty$, the limiting distribution of the standardized random variable

$$\frac{m(r, n) - E(r, n)}{\sqrt{D(r, n)}}$$

is standard normal.
For the derivation of the generating function we first consider a case in which there are cells of two different types; the analysis is rather more complicated. Suppose that $n_1$ cells of type 1 and $n_2$ cells of type 2 are placed independently in $m$ volumes, so that there are altogether $n_3 = n_1 + n_2$ cells. A cell of type $j$ falls into the $i$-th volume with probability $p_i^{(j)}$ ($j = 1, 2$; $i = 1, \dots, m$) (multinomial model).
We now introduce new notation. Note that the $M_1$, $M_2$, and $M_3$ below are quite distinct from the $M$'s appearing earlier in this section.
Let $M_j$ denote the number of volumes not holding cells of type $j$ ($j = 1, 2$),
and let $M_3$ denote the number of volumes not holding any cells. We write

$$P\left(\mathbf{m} \mid \{p_i^{(j)}\}\right) = \Pr\left[\bigcap_{j=1}^{3} (M_j = m_j)\right]$$

Then the joint probability generating function (pgf) of $M_1$, $M_2$, and $M_3$ (given $n_1$ and $n_2$) is

$$G\left(x_1, x_2, x_3 \mid n_1, n_2, \{p_i^{(j)}\}\right) = \sum_{m_1} \sum_{m_2} \sum_{m_3} P\left(\mathbf{m} \mid \{p_i^{(j)}\}\right) \prod_{i=1}^{3} x_i^{m_i}$$

The joint exponential probability generating function (epgf) of $M_1$, $M_2$, and $M_3$ is

$$G^{*}\left(z_1, z_2; x_1, x_2, x_3 \mid \{p_i^{(j)}\}\right) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} \frac{(m z_1)^{n_1} (m z_2)^{n_2}}{n_1!\, n_2!}\, G\left(x_1, x_2, x_3 \mid n_1, n_2, \{p_i^{(j)}\}\right)$$
$$= \prod_{h=1}^{m} \left[ x_1 x_2 x_3 + x_1 \left\{ e^{z_2 p_h^{(2)} m} - 1 \right\} + x_2 \left\{ e^{z_1 p_h^{(1)} m} - 1 \right\} + \left\{ e^{z_1 p_h^{(1)} m} - 1 \right\}\left\{ e^{z_2 p_h^{(2)} m} - 1 \right\} \right]$$
where $z_1$ and $z_2$ are the arguments of the probability generating function, without specific meaning in themselves, but useful in defining further values of $x$ later. To derive this product form, first divide the $m$ volumes into two groups, one group containing a single volume (say, the $h$-th), and the other group containing the remaining $(m-1)$ volumes. Let $n'_g$ be the number of cells of type $g$ in the single volume and $n''_g = n_g - n'_g$ the number of cells in the remaining $(m-1)$ volumes. Using the equation of total probability,

$$P\left(\mathbf{m} \mid \{p_i^{(j)}\}\right) = \sum_{n'_g + n''_g = n_g\ (g=1,2)} \prod_{g=1}^{2} \binom{n_g}{n'_g} \left(p_h^{(g)}\right)^{n'_g} \left(1 - p_h^{(g)}\right)^{n''_g}$$
$$\times \sum_{m'_g + m''_g = m_g\ (g=1,2)} \delta\left(m'_1, m'_2; n'_1, n'_2\right) P\left(\mathbf{m}'' \,\middle|\, \left\{\frac{p_i^{(j)}}{1 - p_h^{(j)}}\right\},\ i \neq h\right) \qquad (3.10)$$

where

$$\delta\left(m'_1, m'_2; n'_1, n'_2\right) = \begin{cases} 1 & \text{if } m'_1 n'_1 = m'_2 n'_2 = 0 \\ 0 & \text{otherwise} \end{cases}$$

Here $m_g = m'_g + m''_g$, where $m'_g$ and $m''_g$ are the numbers of volumes in the two groups that do not contain any cells of type $g$. Note that $m'_g$ must be 0 or 1, since there is only one volume in the first group. (The indicator function $\delta(\cdot)$ is introduced because if $n'_g > 0$ then $m'_g = 0$, while if $n'_g = 0$ then $m'_g = 1$.)
Multiplying both sides of (3.10) by

$$\frac{(m z_1)^{n_1} (m z_2)^{n_2}}{n_1!\, n_2!}\, x_1^{m_1} x_2^{m_2} x_3^{m_3}$$

and summing over $n_1$, $n_2$, $m_1$, $m_2$, $m_3$, the left-hand side takes the form

$$G^{*}\left(z_1, z_2; x_1, x_2, x_3 \mid \{p_i^{(j)}\}\right)$$

while, after carrying out the summations over $n'_g$, $n''_g$, $m'_g$, and $m''_g$, the right-hand side factorizes as

$$G^{*}\left(z_1\left(1 - p_h^{(1)}\right), z_2\left(1 - p_h^{(2)}\right); x_1, x_2, x_3 \,\middle|\, \left\{\frac{p_i^{(j)}}{1 - p_h^{(j)}}\right\},\ i \neq h\right) \times G\left(z_1 p_h^{(1)}, z_2 p_h^{(2)}; x_1, x_2, x_3 \mid 1, 1\right) \qquad (3.13)$$


Proceeding inductively, we express the first G(\cdot) function on the right-hand side
of (3.13) as a product of two G(\cdot) functions, and so on. Eventually, we obtain

G(z_1, z_2; x_1, x_2, x_3 \mid \{p_i^{(j)}\}) = \prod_{h=1}^{m} G\!\left( z_1 p_h^{(1)}, z_2 p_h^{(2)}; x_1, x_2, x_3 \,\middle|\, 1, 1 \right)    (3.14)
Finally, we observe that

\delta(m'_1, m'_2; n'_1, n'_2) = \begin{cases} 1 & \text{if } m'_1 n'_1 = m'_2 n'_2 = 0 \\ 0 & \text{otherwise} \end{cases}

Hence G(z_1 p_h^{(1)}, z_2 p_h^{(2)}; x_1, x_2, x_3 \mid 1, 1) is the sum of terms:
(i) with n'_1 > 0, n'_2 > 0 : m'_1 = 0, m'_2 = 0, m'_3 = 0 :
\{\exp(z_1 p_h^{(1)} m) - 1\} \{\exp(z_2 p_h^{(2)} m) - 1\}
(ii) with n'_1 > 0, n'_2 = 0 : m'_1 = 0, m'_2 = 1, m'_3 = 0 : x_2 \{\exp(z_1 p_h^{(1)} m) - 1\}
(iii) with n'_1 = 0, n'_2 > 0 : m'_1 = 1, m'_2 = 0, m'_3 = 0 :
x_1 \{\exp(z_2 p_h^{(2)} m) - 1\}
(iv) with n'_1 = n'_2 = 0 : m'_1 = m'_2 = m'_3 = 1 : x_1 x_2 x_3

These terms sum to give

G\!\left( z_1 p_h^{(1)}, z_2 p_h^{(2)}; x_1, x_2, x_3 \,\middle|\, 1, 1 \right)
= x_1 x_2 x_3 + x_1 \{\exp(z_2 p_h^{(2)} m) - 1\}
+ x_2 \{\exp(z_1 p_h^{(1)} m) - 1\}
+ \{\exp(z_1 p_h^{(1)} m) - 1\} \{\exp(z_2 p_h^{(2)} m) - 1\}
A similar formula has been obtained by Bolotnikov [16]. This formula ap-
plies to a situation where the assignment probabilities change at certain points.
For the first n_1 cells, they are p_1^{(1)}, ..., p_m^{(1)}; for the next n_2, they are p_1^{(2)}, ..., p_m^{(2)};
and so on, concluding with the last n_t cells, for which they are p_1^{(t)}, ..., p_m^{(t)}.
Bolotnikov [16] considered the random variables K_j = the number of empty
volumes after the (n_1 + n_2 + \cdots + n_j)-th cell has been assigned (j = 1, ..., t) and
evaluated the generating function

H(z_1, z_2, ..., z_t; x_1, x_2, ..., x_t \mid \{p_i^{(j)}\}) = \sum_{n_1=0}^{\infty} \cdots \sum_{n_t=0}^{\infty} \prod_{j=1}^{t} \frac{z_j^{n_j}}{n_j!} \, G(\mathbf{x} \mid \mathbf{n}, \{p_i^{(j)}\})

where G(\mathbf{x} \mid \mathbf{n}, \{p_i^{(j)}\}) = \sum_{k_1} \cdots \sum_{k_t} \Pr\left\{ \bigcap_{j=1}^{t} (K_j = k_j) \right\} \prod_{j=1}^{t} x_j^{k_j}
It is also possible to solve the same problem with a different approach. In
general terms the problem is the following: we have t particles, which should
be allocated to N volumes of k types, with n_1, n_2, ..., n_k volumes of each type
and probabilities p_1, p_2, ..., p_k (\sum_{j=1}^{k} p_j = 1, \sum_{j=1}^{k} n_j = N). In essence this is a
polynomial generalization, as discussed above, regarding the distribution of empty
volumes after t trials. Further, let us assume that in the j-th type of volume there
are t_j particles (it is obvious that \sum_{j=1}^{k} t_j = t), and according to the polynomial
distribution:

P(t_1, t_2, ..., t_k) = \binom{t}{t_1, t_2, ..., t_k} p_1^{t_1} p_2^{t_2} \cdots p_k^{t_k}
Let n_j denote the number of non-empty volumes among the volumes of type
j, and denote the probability function of that count by f_{t_j, n_j}(u_j), where f_{t,n}
can be found through the coefficients a_r(t) defined by the recurrence

a_{t+1}(t+1) = a_1(t+1) = 1, \qquad a_r(t+1) = r \, a_r(t) + a_{r-1}(t), \quad r = 2, ..., t

(the Stirling numbers of the second kind), via

f_{t,n}(u) = \Pr(\text{exactly } u \text{ of the } n \text{ volumes are non-empty after } t \text{ placements}) = \binom{n}{u} \frac{u!}{n^t} \, a_u(t)
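The recurrence can be implemented directly. The sketch below (illustrative function names) builds the table of coefficients a_r(t) and the resulting occupancy probability function, which must sum to 1 over its support:

```python
import math

def stirling2_table(t_max):
    """a_r(t) from the recurrence a_r(t+1) = r*a_r(t) + a_{r-1}(t),
    with a_{t+1}(t+1) = a_1(t+1) = 1 (Stirling numbers, second kind)."""
    a = {(0, 0): 1}
    for t in range(t_max):
        for r in range(1, t + 2):
            a[(t + 1, r)] = r * a.get((t, r), 0) + a.get((t, r - 1), 0)
    return a

def occupancy_pmf(t, n):
    """P(exactly r of the n volumes are non-empty after t placements),
    i.e. f_{t,n} evaluated on its whole support."""
    a = stirling2_table(t)
    return {r: math.comb(n, r) * math.factorial(r) * a.get((t, r), 0) / n ** t
            for r in range(1, min(t, n) + 1)}
```

For example, `occupancy_pmf(5, 3)` distributes all of its mass over r = 1, 2, 3 non-empty volumes.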
Thus the joint probability function for the numbers of non-empty volumes among
the volumes of each type is

f_{t; n_1, n_2, ..., n_k}(u_1, u_2, ..., u_k) = \sum_{t_1 + \cdots + t_k = t} \binom{t}{t_1, t_2, ..., t_k} p_1^{t_1} p_2^{t_2} \cdots p_k^{t_k} \, f_{t_1 n_1}(u_1) \cdots f_{t_k n_k}(u_k)

In our case, when there are only two types of interactions, this equation takes a
much simpler form:

f_{t; n_1, n_2}(u_1, u_2) = \sum_{t_1=0}^{t} \binom{t}{t_1} p_1^{t_1} (1 - p_1)^{t - t_1} \, f_{t_1 n_1}(u_1) \, f_{t - t_1, n_2}(u_2)
The expectation and variance can be easily found:

E[X] = n_2 \left[ 1 - \left( 1 - \frac{1}{n_2} \right)^{t_2} \right]

\mathrm{Var}[X] = n_2 (n_2 - 1) \left( 1 - \frac{2}{n_2} \right)^{t_2} + n_2 \left( 1 - \frac{1}{n_2} \right)^{t_2} - n_2^2 \left( 1 - \frac{1}{n_2} \right)^{2 t_2}

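The occupancy moments for a single volume type can be checked by direct simulation of the placement scheme. The sketch below (illustrative names; equiprobable volumes assumed) compares the formula for E[X] with a Monte Carlo estimate:

```python
import random

def occupancy_mean(n, t):
    """E[X]: expected number of non-empty volumes after t placements."""
    return n * (1.0 - (1.0 - 1.0 / n) ** t)

def occupancy_var(n, t):
    """Var[X] for the number of non-empty volumes after t placements."""
    q1 = (1.0 - 1.0 / n) ** t
    q2 = (1.0 - 2.0 / n) ** t
    return n * (n - 1) * q2 + n * q1 - n * n * q1 * q1

def occupancy_mc(n, t, reps=40000, seed=7):
    """Monte Carlo: place t cells uniformly into n volumes, count non-empty."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        total += len({rng.randrange(n) for _ in range(t)})
    return total / reps
```

Under the heterogeneous probabilities p_i^{(j)} the same check applies with weighted draws.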

4. Direct Simulation Algorithm
4.1 The Algorithm
There are several methods for modeling interaction of populations using a
priori determined rates for reactions and rates of diffusions of the populations.
Methods based on the chemical master equation, the chemical Langevin equation
and the reaction rate equations rely on the known rates, reactions and
propensity functions of the occurring reactions. There is a large body of papers
published in this area, with most of them using the approaches pioneered
by Gillespie [35]. The obvious disadvantage of an approach that is based on
predefined rates is that it requires knowledge of every interaction, which is
almost impossible to achieve when designing large systems of interacting popu-
lations. The approach that we offer is based on population dynamics: the
probabilities of collisions between populations are calculated, and from these the
probabilities of interactions are determined. Moreover, the probabilities of diffusion and
chemo-attraction are also calculated from each population's dynamics. A
tau leap algorithm [35] is then applied to model the entire system. Since we di-
vided the volume into finite elements, the fundamental assumption of the model
is that every finite volume is homogeneous; that is, all objects of all populations
in each finite volume are evenly distributed. The smaller the finite volume, the
more precise the result of the simulation.
A set of cubic finite volumes with side h is used to describe interactions between
populations in a large heterogeneous space. To achieve this, the space is di-
vided into K finite volumes V_1, ..., V_K. Each finite volume has its center at

Figure 4.1: Cross-section in XZ plane of the heterogeneous space represented
by the finite volumes. Enlarged finite volume shows the number of populations
that it represents.
(x_{V_k}, y_{V_k}, z_{V_k}) and the length of the side h. Every finite volume V_k contains
populations S_{1V_k}, ..., S_{nV_k}, and every population S_{iV_k} contains a number
N_{S_iV_k} of objects that can be greater than or equal to zero, such that the total number
of objects in each population at the start of simulation is expressed as follows:

N_{S_1} = N_{S_1V_1} + \cdots + N_{S_1V_k} + \cdots + N_{S_1V_{K-1}} + N_{S_1V_K}
\vdots
N_{S_{n-1}} = N_{S_{n-1}V_1} + \cdots + N_{S_{n-1}V_k} + \cdots + N_{S_{n-1}V_{K-1}} + N_{S_{n-1}V_K}
N_{S_n} = N_{S_nV_1} + \cdots + N_{S_nV_k} + \cdots + N_{S_nV_{K-1}} + N_{S_nV_K}    (4.1)
The total number of objects in the volume across all populations is

N_S = \sum_{i=1}^{n} N_{S_i} = \sum_{i=1}^{n} \sum_{k=1}^{K} N_{S_iV_k}    (4.2)
The joint probability distribution for the number of interacting population
objects of different types in the entire system is based on the state of each
population in each finite volume:

P = P(N_{S_1V_1}, ..., N_{S_1V_K}, ..., N_{S_iV_1}, ..., N_{S_iV_K}, ..., N_{S_nV_1}, ..., N_{S_nV_K})    (4.3)
The state of the model is described by the probability of having objects
of population S_i in each finite volume. Below we develop the foundation for exact
simulation of population interactions using finite volumes.
Let us assume that the following events A can occur in every finite vol-
ume: random walk/diffusion, collision/reaction, birth/death, and chemo-attrac-
tion:

A = \{ R_{11}, ..., R_{1M_r}, D_{11}, ..., D_{1M_d}, Ch_{11}, ..., Ch_{1M_{ch}}, De_{11}, ..., De_{1M_{de}}, B_{11}, ..., B_{1M_b},
\;\ldots,\;
R_{K1}, ..., R_{KM_r}, D_{K1}, ..., D_{KM_d}, Ch_{K1}, ..., Ch_{KM_{ch}}, De_{K1}, ..., De_{KM_{de}}, B_{K1}, ..., B_{KM_b} \}
= \{ R, D, Ch, De, B \}

R = \{ R_{11}, ..., R_{1M_r}, ..., R_{K1}, ..., R_{KM_r} \}

is the set of all M_r reactions in the space across all finite volumes. Our main
assumption is that the population objects are distributed evenly in a given finite
volume, and a reaction in a finite volume does not (directly) affect any process in
neighboring volumes.
Let D = \{ D_{11}, ..., D_{1M_d}, ..., D_{K1}, ..., D_{KM_d} \} be the set of all M_d diffusion events in the
finite volumes. Here a diffusion event means that an object left a finite volume.
Since the event of leaving one finite volume is equivalent to the event of entering
another finite volume (see the boundary conditions below, which imply
that no leaving event happens without a corresponding entering event), by
modeling the events of leaving the finite volumes we also model the events of
entering the volumes. The treatment of this condition is provided in the same
section below. Next (the notation comes from birth-and-death processes),

De = \{ De_{11}, ..., De_{1M_{de}}, ..., De_{K1}, ..., De_{KM_{de}} \}
B = \{ B_{11}, ..., B_{1M_b}, ..., B_{K1}, ..., B_{KM_b} \}

is the set of all M_{de} death and M_b birth processes of objects that occur in all
finite volumes of the space. Although the processes of death and birth can be
modeled using reactions of the first order, we will group these processes into a
separate group in the introduction, and then treat the two processes as reaction-
based processes.
Let Ch = \{ Ch_{11}, ..., Ch_{1M_{ch}}, ..., Ch_{K1}, ..., Ch_{KM_{ch}} \} be the set of all M_{ch} chemo-
attraction events. Similar to diffusion, modeling an event of leaving a finite
volume due to a chemo-attraction event, which is due to the presence of chemo-
attractants in the neighboring volumes, is equivalent to the event of entering a
volume with chemo-attractants. The boundary conditions described below
support this statement.
Events in the set A = \{ R, D, Ch, De, B \} fully describe all possible events
due to the processes of random walk, chemo-attraction, interaction, and birth and
death in the space of finite volumes. A state of the system will then be defined
by a sequence of events, and the sequence of the events depends on the probabilities
of occurrence of the events. The laws that govern the processes and define the probabilities
are described in the next section.
As diffusion and chemo-attraction involve interchanging objects between
finite volumes, event D is represented in the model by two events, D_+ and D_-.
There are constraints on the number of objects leaving finite volumes and
entering volumes due to diffusion:

D = \{ D_{11}, ..., D_{1M_d}, ..., D_{K1}, ..., D_{KM_d} \}
= \{ D_{11+}, ..., D_{1M_d+}, ..., D_{K1+}, ..., D_{KM_d+},
D_{11-}, ..., D_{1M_d-}, ..., D_{K1-}, ..., D_{KM_d-} \}

i.e., the number of objects leaving the finite volumes equals the number of
objects entering finite volumes through the course of the simulation due to
diffusion.

In addition, there are local restrictions at the boundaries of the finite volumes
that lead to constraints. For any given change of state due to an object leaving
a finite volume, the adjacent finite volumes must be updated to reflect the
migration of the object from one finite volume to another. That is, if only the
diffusion process occurs during the entire simulation, for a time period of any
length, then the number of objects in a population S_i remains unchanged.
Similar to diffusion, every chemo-attraction event is a process that changes
two events in adjacent finite volumes:
Ch = \{ Ch_{11}, ..., Ch_{1M_{ch}}, ..., Ch_{K1}, ..., Ch_{KM_{ch}} \}
= \{ Ch_{11+}, ..., Ch_{1M_{ch}+}, ..., Ch_{K1+}, ..., Ch_{KM_{ch}+},
Ch_{11-}, ..., Ch_{1M_{ch}-}, ..., Ch_{K1-}, ..., Ch_{KM_{ch}-} \}
4.2 Random walk/diffusion.
In a finite volume V_k, objects A_{1S_iV_k}, ..., A_{NS_iV_k} of the populations
S_1, ..., S_n move according to the laws of the random walk in three di-
mensions, and the position of a population object is distributed according to a
normal distribution of total variance

\sigma^2_{A_jV_kS_i} = \frac{t \, l^2_{S_i}}{\tau_{S_i}}

where t is the total time, l_{S_i} is the length of the step for objects of population
S_i, and \tau_{S_i} is the time between two consecutive steps.
Every population S_i has its own unique set of parameters for the random
walk. The random walk leads to diffusion of objects across the entire volume
through the finite volumes. The boundary conditions between the volumes are of
Dirichlet type [88] for population S_i in any finite volume V_k. The boundary
conditions of the experimental physical volume are of Cauchy type [88]; however,
we model the conditions as Neumann type [88] for simplicity.
Within each finite volume the distribution of population objects is uniform.
Therefore, for the inner volumes with boundary conditions of Dirichlet type,
the distribution of the distance in the random walk remains Rayleigh and the distri-
bution of each coordinate remains Gaussian, which extends to three dimensions
as follows:

F(x_i, y_i, z_i, x_{vk}, y_{vk}, z_{vk}, h, \sigma) = FD(x_i, x_{vk}, h, \sigma) \, FD(y_i, y_{vk}, h, \sigma) \, FD(z_i, z_{vk}, h, \sigma)    (4.5)

where FD(a_i, a_{vk}, h, \sigma) = \frac{1}{h} \, ERF\!\left( \frac{a_i - a_{vk} - h/2}{\sigma}, \; \frac{a_i - a_{vk} + h/2}{\sigma} \right)
Here FD is expressed through ERF, the error function encountered in in-
tegrating the normal distribution. This expression provides the probability of
finding a population object at location (x_i, y_i, z_i); the probability that a
population object remains inside the finite volume is therefore the integral of this
density over the cube

x_{vk} - \frac{h}{2} \le x \le x_{vk} + \frac{h}{2}, \quad y_{vk} - \frac{h}{2} \le y \le y_{vk} + \frac{h}{2}, \quad z_{vk} - \frac{h}{2} \le z \le z_{vk} + \frac{h}{2}:

\int_{x_{vk}-h/2}^{x_{vk}+h/2} \int_{y_{vk}-h/2}^{y_{vk}+h/2} \int_{z_{vk}-h/2}^{z_{vk}+h/2} FD(x, x_{vk}, h, \sigma) \, FD(y, y_{vk}, h, \sigma) \, FD(z, z_{vk}, h, \sigma) \, dx \, dy \, dz
The probability that a population object will be found at some point in space
is 1.0:

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} FD(x, x_{vk}, h, \sigma) \, FD(y, y_{vk}, h, \sigma) \, FD(z, z_{vk}, h, \sigma) \, dx \, dy \, dz = 1.0

Therefore the probability that a population object will leave the volume is ex-
pressed as follows:

P(D_i) = 1 - \int_{x_{vk}-h/2}^{x_{vk}+h/2} \int_{y_{vk}-h/2}^{y_{vk}+h/2} \int_{z_{vk}-h/2}^{z_{vk}+h/2} FD(x, x_{vk}, h, \sigma) \, FD(y, y_{vk}, h, \sigma) \, FD(z, z_{vk}, h, \sigma) \, dx \, dy \, dz    (4.6)

The ERF, a scaled cumulative normal p.d.f., with two arguments
supports both upper and lower limits. It can be re-written as

ERF(a_0, a_1) = ERF(a_1) - ERF(a_0)

To solve (4.6) we substitute the one-argument error function, and using the integral
of the error function as well as the property of the random walk to move
independently in the three dimensions, we obtain the following:

P(D_i) = 1 - \int_{x_{vk}-h/2}^{x_{vk}+h/2} FD(x, x_{vk}, h, \sigma) \, dx
\cdot \int_{y_{vk}-h/2}^{y_{vk}+h/2} FD(y, y_{vk}, h, \sigma) \, dy
\cdot \int_{z_{vk}-h/2}^{z_{vk}+h/2} FD(z, z_{vk}, h, \sigma) \, dz    (4.7)
For a single axis,

\int_{x_{vk}-h/2}^{x_{vk}+h/2} FD(x, x_{vk}, h, \sigma) \, dx
= \frac{1}{h} \int_{x_{vk}-h/2}^{x_{vk}+h/2} ERF\!\left( \frac{x - x_{vk} + h/2}{\sigma} \right) dx
- \frac{1}{h} \int_{x_{vk}-h/2}^{x_{vk}+h/2} ERF\!\left( \frac{x - x_{vk} - h/2}{\sigma} \right) dx

Substituting u = (x - x_{vk} \pm h/2)/\sigma and using \int ERF(u) \, du = u \, ERF(u) + \varphi(u),
where \varphi is the standard normal density, together with the antisymmetry
ERF(-u) = -ERF(u), the two integrals combine to

\int_{x_{vk}-h/2}^{x_{vk}+h/2} FD(x, x_{vk}, h, \sigma) \, dx
= \frac{2}{h} \left[ h \, ERF\!\left( \frac{h}{\sigma} \right) + \sigma \left( \varphi\!\left( \frac{h}{\sigma} \right) - \varphi(0) \right) \right]

Figure 4.2: This figure shows a two-dimensional section of the space. The center
finite volume contains a population of objects that experience two processes: dif-
fusion and chemo-attraction. Part A of the figure shows the process of diffusion, when
multiple diffusion events occur and the objects of the population leave the finite
volume (in the center) and enter the finite volumes adjacent to the center finite
volume. Part B of the figure shows the process of chemo-attraction, when mul-
tiple chemo-attraction events occur during the chemo-attraction process. Part
C of the figure shows the superimposed processes of diffusion and chemo-attraction.
The effects of all processes are additive, as the vectors of the movements due to
diffusion and chemo-attraction are summed.

Finally, the probability that a population object leaves the finite volume is:

P(D_i) = 1 - \frac{8}{h^3} \left[ h \, ERF\!\left( \frac{h}{\sigma} \right) + \sigma \left( \varphi\!\left( \frac{h}{\sigma} \right) - \varphi(0) \right) \right]^3
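As a numerical sanity check on this closed form, the sketch below (illustrative names; ERF taken as \Phi(u) - 1/2 and \varphi the standard normal density, per the definitions above) compares it with a direct Monte Carlo estimate of the probability that an object placed uniformly in the cube and displaced by an isotropic Gaussian ends up outside the cube:

```python
import math
import random

def phi(u):
    """Standard normal density."""
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

def erf_scaled(u):
    """The ERF of the text, taken here as Phi(u) - 1/2."""
    return 0.5 * math.erf(u / math.sqrt(2.0))

def p_leave(h, sigma):
    """Closed-form probability that an object uniformly placed in a cube
    of side h, displaced by a Gaussian of std sigma per axis, leaves it."""
    bracket = h * erf_scaled(h / sigma) + sigma * (phi(h / sigma) - phi(0.0))
    return 1.0 - (8.0 / h ** 3) * bracket ** 3

def p_leave_mc(h, sigma, reps=200000, seed=1):
    """Direct Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    left = 0
    for _ in range(reps):
        for _ in range(3):  # the three independent axes
            if abs(rng.uniform(-h / 2, h / 2) + rng.gauss(0.0, sigma)) > h / 2:
                left += 1
                break
    return left / reps
```

As expected, the leave probability tends to 0 as \sigma \to 0 and to 1 as \sigma grows relative to h.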
4.3 Chemo-attraction.
In addition to the diffusion/random walk, a population object changes its
position due to processes of chemo-attraction and/or chemo-repulsion. This is
a deterministic process of movement of a population object in the direction of
the highest concentration of the chemo-attractant. To model chemo-attraction
we assume that an object of the chemo-attractant population influences the
chemo-attracted object with a force proportional to r^{-a}:

f = \frac{e_1 e_2}{r^a}

where r is the distance between the objects and a > 0 is arbitrary; e_1 is the chemo-
attraction capacity of objects of the first population, which chemo-attracts the sec-
ond population, and e_2 is the chemo-attraction capacity of objects of the second
population towards the objects of the first population.
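Since the direction of movement is later described as that of the resultant of the pairwise force vectors, a minimal sketch of that computation follows (hypothetical helper; the pairwise magnitude e_1 e_2 / r^a is taken from the formula above):

```python
import math

def chemo_force(obj, attractants, e1, e2, a=2.0):
    """Resultant chemo-attraction force on `obj` (a 3D point) from the
    attractant positions: pairwise magnitude e1*e2 / r**a, each directed
    toward its attractant, with all the vectors summed."""
    fx = fy = fz = 0.0
    for ax, ay, az in attractants:
        dx, dy, dz = ax - obj[0], ay - obj[1], az - obj[2]
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        if r == 0.0:
            continue  # coincident objects exert no defined force
        mag = e1 * e2 / r ** a
        fx += mag * dx / r
        fy += mag * dy / r
        fz += mag * dz / r
    return (fx, fy, fz)
```

Two symmetric attractants cancel, while a single attractant pulls the object straight toward it, matching the gradient-type movement described below.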
4.4 Reaction upon collision, birth and death of objects.
For simplicity, let us consider reactions between two populations of objects.
We assume that objects of every population in the system move according to
the laws of the random walk. This assumption allows not only to model pro-
cesses of diffusion and chemo-attraction, but also process of reaction between
two interacting populations. The reaction between objects from two different
populations occurs only if there is a collision between the object. As the move-
ment of the objects is subject to the law of random walk the positions of the

Figure 4.3: This figure shows the processes that occur in one subsection. The
problem is to determine the state of the system for all sections at time t.
objects change constantly. As the positions of the objects change, new colliding
pairs form. Every colliding pair is then evaluated for the possibility of a reaction
using the rate provided for that reaction.
Let us now consider modeling a system of interacting, dynamic popula-
tions using ordinary differential equations. Every model is based on underlying
modeling assumptions, and there are several different mathematical modeling
methods that can be used to model a system of reacting populations, involved
in the processes of diffusion and chemo-attraction. It is possible to describe the
system using a set of linear ordinary differential equations, which describe all
possible states of the system. The states are described using probabilities, and
it is possible to describe the state of the system using a state vector: X(t) =
state of the system at time t, described by an n-dimensional vector.



Figure 4.4: This figure shows the processes of reaction, birth and death. Part A
shows a reaction between two populations. The reaction between objects occurs
upon collision. However, not all collisions lead to reactions: there is a prior
probability assigned to each reaction. This is analogous to having a reaction rate
in chemical reaction equations. Part B of the figure shows the process of birth in
the finite volume. The process of birth is modeled as a property of the population.
Every population has this property, which is implemented as a rate of introducing
a new object given the population size. Part C of the figure shows the process of
death in a population in the finite volume. Similar to the birth rate, every population
in the system has a death rate.

Then the system of populations S_{V_k} = \{ S_{0V_k}, ..., S_{nV_k} \} in a finite volume is
described by the states, not by the velocities and positions of the population objects.
For example, a diffusion event that moves an object outside a finite volume is
described by an n-dimensional state-change vector \zeta_{d_i}. As a result of diffusing
an object out of the finite volume, the state of the finite volume changes:

X(t + dt) = X(t) + \zeta_{d_i}

Chemo-attraction is an event in which, as in the case of diffusion, an object
leaves the finite volume.

The corresponding state change is reflected in this equation:

X(t + dt) = X(t) + \zeta_{ch_i}

An event of birth of an object in population S_i will change the system by
adding an additional object in the following way:

\zeta_{b_i} = (0, ..., 0, 1, 0, ..., 0)^T

where the 1 is at the i-th position of the vector.
An event of death of an object in population S_i will change the system by
removing the object in the following way:

\zeta_{de_i} = (0, ..., 0, -1, 0, ..., 0)^T

where the -1 is at the i-th position of the vector.
An event of reaction that involves 2 or more populations will result in a
vector where 2 or more populations change. For example, the following vector
shows a reaction that involves three populations:

\zeta_r = (0, ..., M_i, ..., M_j, ..., M_k, ..., 0)^T

where M_i, M_j, M_k are positive or negative depending on the nature of the reaction.
For example, for a reaction of the form

3 S_k \rightarrow 2 S_i + S_j

M_i = 2, \quad M_j = 1, \quad M_k = -3
States for chemo-attraction and diffusion of an object into the finite volume
are modeled similarly. For example, to model the reaction between two colliding
objects and simulate it, we need to estimate the number of colliding objects
between the two populations in time dt.

The birth and death processes can be approximated by first-order reac-
tions of the following types:

S_i \xrightarrow{c_{b_i}} 2 S_i, \qquad S_i \xrightarrow{c_{d_i}} \emptyset

where c_{b_i} and c_{d_i} define the rates of the birth and death processes correspond-
ingly, such that the probabilities of the birth and death processes (more precisely,
the probabilities that such a reaction takes place in the infinitesimal
time interval [t, t + dt]) are given by the following expressions:

P(B_i) = c_{b_i} X_i(t) \, dt
P(De_i) = c_{d_i} X_i(t) \, dt

Similarly, for the reactions involving two different populations, the probability
of a reaction taking place after a collision between the objects of the two populations
is given by the following expression:

P(R_{ij}) = c_{r_{ij}} X_i(t) X_j(t) \, dt

Extending the interactions to diffusion and chemo-attraction processes, we can
derive that the probability that an object of a population will exit or enter a finite
volume is given by the following expression [71]:

P(D_i) = d_{t_i} X_i(t) \, dt

where d_{t_i} is the diffusion rate for leaving the finite volume or for entering it from
the neighboring finite volumes. The law of total probability then gives the
probability that X(t) is in a given state x as the sum, over all (disjoint) events A,
of the conditional probability given A times the probability that event A occurs:

P(X(t) = x) = \sum_{A} P(X(t) = x \mid A) \, P(A)

where A ranges over the events in the set defined earlier.
A more detailed expression for the total probability accounts for each type
of process:

P(X(t) = x) = \sum P(X(t) = x \mid R) P(R) + \sum P(X(t) = x \mid Ch) P(Ch) +
\sum P(X(t) = x \mid D) P(D) + \sum P(X(t) = x \mid B) P(B) +
\sum P(X(t) = x \mid De) P(De)

For reactions between two populations, the conditional probability that the system
is in state X(t) given that reaction R_i occurred in time dt is the probability of
occurrence of a reaction R_i that leads to the state X(t):

P(X(t) = x \mid R_i) = c_i (X(t) - \zeta_{r_i}) \, dt
where \zeta_{r_i} is the state vector describing the reaction event.
Similarly, the probability of an object leaving the finite volume due to dif-
fusion is given by the following:

P(X(t) = x \mid D_i) = d_i (X(t) - \zeta_{d_i}) \, dt
The corresponding conditional probability for chemo-attraction is more complex
because the chemo-attraction coefficient depends on the concentrations in the
neighboring finite volumes. For simplicity we assume that, for any given volume,
chemo-attracting objects located farther away than the adjacent volumes do not
chemo-attract the objects in the given volume; the objects in the finite volume
experience chemo-attracting force from the neighboring volumes only.

P(X(t) = x \mid Ch_i) = ch_i (X(t) - \zeta_{ch_i}) \, dt

where ch_i is a chemo-attracting coefficient defined as a function of the concen-
trations in the neighboring volumes:

ch_{k,adj} = f\!\left( \frac{N_{S_iV_{j(k-1)}}}{V}, \frac{N_{S_iV_{j(k+1)}}}{V}, \ldots \right)

the arguments being the concentrations in the adjacent finite volumes. The function that
reflects the force of chemo-attraction is assumed to have a quadratic relationship
between the distance and the force. This assumption reflects the fact that the
closer an object is to the chemo-attracting objects, the faster it moves toward them. The
movement direction is assumed to be of gradient type; that is, the movement oc-
curs in the direction of the maximum force. The direction of the maximum force
is determined as the direction of the resultant vector formed by summing all
the vectors from the neighboring finite volumes.
The birth and death processes are described using the state vectors for the
birth and death processes as follows:

P(X(t) = x \mid B_i) = b_i (X(t) - \zeta_{b_i}) \, dt
P(X(t) = x \mid De_i) = de_i (X(t) - \zeta_{de_i}) \, dt

Since the processes of birth and death can be approximated by first-order
reactions, birth and death are modeled in the same fashion

as we model reactions between populations.
The total probability of finding the system of populations in state x at time
t + dt is given by:

P(X(t + dt) = x) = P(X(t + dt) = x \mid A) P(A) + P(X(t + dt) = x \mid A^c) P(A^c)

The probability that state x occurs at t + dt due to no event is the
probability that no interaction occurred in time [t, t + dt] times the probability of
being in state x at time t, that is P(X(t) = x):

P(X(t + dt) = x \mid \text{no event}) P(\text{no event}) =
\left( 1 - \sum r_i X(t) dt - \sum ch_i X(t) dt - \sum d_i X(t) dt - \sum de_i X(t) dt - \sum b_i X(t) dt \right) P(X(t) = x)

The probability that state x occurs at t + dt due to an event is defined
as follows:

P(X(t + dt) = x \mid A) P(A) =
\sum r_i (X(t) - \zeta_{r_i}) \, dt \, P(X(t) - \zeta_{r_i}) +
\sum d_i (X(t) - \zeta_{d_i}) \, dt \, P(X(t) - \zeta_{d_i}) +
\sum ch_i (X(t) - \zeta_{ch_i}) \, dt \, P(X(t) - \zeta_{ch_i}) +
\sum de_i (X(t) - \zeta_{de_i}) \, dt \, P(X(t) - \zeta_{de_i}) +
\sum b_i (X(t) - \zeta_{b_i}) \, dt \, P(X(t) - \zeta_{b_i})

Therefore, the total probability of finding the system of populations in
state x at time t + dt, in all finite volumes V containing populations S_{V_k} =
\{ S_{0V_k}, ..., S_{nV_k} \}, is given by:

P(X(t + dt) = x) =
\left( 1 - \sum_{i=1}^{M_r} r_i X(t) dt - \sum_{i=1}^{M_{ch}} ch_i X(t) dt - \sum_{i=1}^{M_d} d_i X(t) dt - \sum_{i=1}^{M_{de}} de_i X(t) dt - \sum_{i=1}^{M_b} b_i X(t) dt \right) P(X(t)) +
\sum r_i (X(t) - \zeta_{r_i}) \, dt \, P(X(t) - \zeta_{r_i}) +
\sum d_i (X(t) - \zeta_{d_i}) \, dt \, P(X(t) - \zeta_{d_i}) +
\sum ch_i (X(t) - \zeta_{ch_i}) \, dt \, P(X(t) - \zeta_{ch_i}) +
\sum de_i (X(t) - \zeta_{de_i}) \, dt \, P(X(t) - \zeta_{de_i}) +
\sum b_i (X(t) - \zeta_{b_i}) \, dt \, P(X(t) - \zeta_{b_i})
Rearranging, we have

\frac{P(X(t + dt) = x) - P(X(t) = x)}{dt} = \frac{dP(X(t))}{dt} =
\sum_{i=1}^{M_r} r_i (X(t) - \zeta_{r_i}) P(X(t) - \zeta_{r_i}) - \sum_{i=1}^{M_r} r_i X(t) P(X(t)) +
\sum_{i=1}^{M_d} d_i (X(t) - \zeta_{d_i}) P(X(t) - \zeta_{d_i}) - \sum_{i=1}^{M_d} d_i X(t) P(X(t)) +
\sum_{i=1}^{M_{ch}} ch_i (X(t) - \zeta_{ch_i}) P(X(t) - \zeta_{ch_i}) - \sum_{i=1}^{M_{ch}} ch_i X(t) P(X(t)) +
\sum_{i=1}^{M_{de}} de_i (X(t) - \zeta_{de_i}) P(X(t) - \zeta_{de_i}) - \sum_{i=1}^{M_{de}} de_i X(t) P(X(t)) +
\sum_{i=1}^{M_b} b_i (X(t) - \zeta_{b_i}) P(X(t) - \zeta_{b_i}) - \sum_{i=1}^{M_b} b_i X(t) P(X(t))

In the probability equation above we assume that P(X(t + dt) = x) - P(X(t) =
x) = dP as dt \to 0, which allows us to define a derivative for the probability. This

equation is equivalent to the Chemical Master Equation, with each state repre-
sented by an ordinary differential equation. The high dimensionality of this equation
makes it computationally intractable to solve for systems with a large number of
interacting populations. A system of ordinary differential equations (ODEs)
with one ODE for each possible state of the system leads to an intractably large
computational problem and therefore is not practical.
To avoid this we choose to simulate the system by tracing probabilistic paths
through the states, thus avoiding the expensive computation of all possible states
of the system. This approach is a widely accepted alternative and
is employed in many simulation systems. The differences between our system
and previously published systems are:
- We provide rigorous mathematical treatment of the simulation algorithms
to provide exact methods for the simulation.
- Based on the mathematical methods developed here we can achieve fast
and practical simulation of the population objects in the space.
Using the results achieved in the previous section, we will derive a foundation
for the stochastic simulation algorithm, which follows a probabilistic path across
the sequence of states; see Figure 4.5.
Only for demonstration purposes, we introduce the end result of the simula-
tion as a state in which the number of objects in a population comes to zero. This
may not be the case in many simulations; however, for illustration purposes, let
us consider a system modeled in such a way that one
of the populations terminates. This way we define more than one criterion for
measuring the ability of the system to reproduce the results. For example, al-

Figure 4.5: This figure shows one example path, out of the many possible,
that the stochastic simulation algorithm will simulate. Due to the nature of the
algorithm, every run will produce a new path; thus every run will most
likely be a unique simulation of the system of interacting populations. However,
due to the large number of objects in the populations, the end result of the
simulation is almost always identical (see details in text).

Figure 4.6: This figure shows the end results of an example simulation system
provided in Appendix C. As seen, the system arrives at 4 different end
results, which can be possible courses of treatment of a particular disease.
though the stochastic simulation algorithm uses different simulation paths, the
end result will, with high probability, be the same, or there can be several end
results at which the simulation system arrives at various times. This intuitively
matches the flow of processes in living organisms (for the course of a particular
disease) or in industrial production (for example, the lifetime of cars).
We introduce the quantity X(t) = number of events in [0, t] and

P(\text{no event}, \tau) = P\{ X(t + \tau) - X(t) = 0 \}

as the probability that no reaction takes place in the time interval [t, t + \tau).
The probability of no event over the following time period is:

P\{ X(t + \delta t) = 0 \} =
\left( 1 - \sum_{i=1}^{M_r} r_i X(t) dt - \sum_{i=1}^{M_{ch}} ch_i X(t) dt - \sum_{i=1}^{M_d} d_i X(t) dt - \sum_{i=1}^{M_{de}} de_i X(t) dt - \sum_{i=1}^{M_b} b_i X(t) dt \right) P\{ X(\tau) = 0 \}    (4.11)

P\{ X(t + \delta t) = 0 \} - P\{ X(\tau) = 0 \} =
- \left( \sum_{i=1}^{M_r} r_i X(t) + \sum_{i=1}^{M_{ch}} ch_i X(t) + \sum_{i=1}^{M_d} d_i X(t) + \sum_{i=1}^{M_{de}} de_i X(t) + \sum_{i=1}^{M_b} b_i X(t) \right) P\{ X(\tau) = 0 \} \, dt

The sum on the right-hand side of the equation is denoted a_{sum}(X(t)), and the left-
hand side, in the limit of the time increment, is the derivative:

\frac{dP\{ X(\tau) = 0 \}}{d\tau} = - a_{sum}(X(t)) \, P\{ X(\tau) = 0 \}

P\{ X(\tau) = 0 \} = e^{- a_{sum}(X(t)) \, \tau}
The probability that the next event in the time interval [\tau, \tau + d\tau) will be event
A_i of any of the types \{ R, D, Ch, De, B \}, given that the system is in the
state X(t), is:

P(\tau, A_i \mid X(t), t) \, d\tau = P\{ X(\tau) = 0 \} \, a_i(X(t)) \, d\tau

Here P\{ X(\tau) = 0 \} is the probability that no event occurred in [t, t + \tau),
and a_i(X(t)) \, d\tau is the probability that event A_i from \{ R, D, Ch, De, B \}
occurs in [t + \tau, t + \tau + d\tau).
Using the previously obtained result on the probability of having no event, we have

P(\tau, A_i \mid X(t), t) = a_i(X(t)) \, e^{- a_{sum}(X(t)) \, \tau}

To make the separation between the time and the event clearer and more convenient
to handle during simulation, we write:

P(\tau, A_i \mid X(t), t) = \frac{a_i(X(t))}{a_{sum}(X(t))} \, a_{sum}(X(t)) \, e^{- a_{sum}(X(t)) \, \tau}

The term a_i(X(t)) / a_{sum}(X(t)) can be used to select the next event in the system, and
a_{sum}(X(t)) \, e^{- a_{sum}(X(t)) \tau} can be used to derive the time when the next event will
occur. At every simulation step the rates for reactions, diffusion,
chemo-attraction, birth and death are updated based on the positions of the
population objects. Such simulation results in a very slow process due to the large
number of events, and simulating a system with many interactions is extremely
slow. Therefore, this approach is not practical.
The algorithm works as follows:
1. Evaluate the vector of rates for all possible events in the system of
interacting population objects across all finite volumes in the space. Rates are updated
for the diffusion, chemo-attraction, reaction, and birth and death processes for all
K volumes:

\{ r_{k,0}, r_{k,1}, ..., r_{k,M_r} \}, \{ d_{k,0}, d_{k,1}, ..., d_{k,M_d} \}, \{ ch_{k,0}, ch_{k,1}, ..., ch_{k,M_{ch}} \},
\{ de_{k,0}, de_{k,1}, ..., de_{k,M_{de}} \}, \{ b_{k,0}, b_{k,1}, ..., b_{k,M_b} \}

and the sum of the rates across all K volumes:

a_{sum} = \sum_{k} r_{k,sum} + \sum_{k} d_{k,sum} + \sum_{k} ch_{k,sum} + \sum_{k} de_{k,sum} + \sum_{k} b_{k,sum}

2. Using a uniform random number generator, create two random variables r_1
and r_2 for selecting the event and the time at which the event occurs. For
selecting the event index j we use the condition:

\sum_{i=1}^{j-1} a_i(X(t)) < r_1 \, a_{sum}(X(t)) \le \sum_{i=1}^{j} a_i(X(t))

For selecting the time we set \tau equal to:

\tau = \frac{1}{a_{sum}} \ln\!\left( \frac{1}{r_2} \right)

3. Using the selected time and event, advance the system by defining the new
state of the system as follows:

X(t + \tau) = X(t) + \zeta_{a_j}

As seen from the algorithm, the simulation of the system advances one event at
a time.
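The three steps above can be sketched as a single simulation step of the direct method (an illustrative implementation; representing each event as a pair of a rate function and a state-change vector is an assumption of this sketch):

```python
import math
import random

def gillespie_step(x, events, rng):
    """One step of the direct method: draw the waiting time tau from
    Exp(a_sum) and pick event j with probability a_j / a_sum, then
    apply that event's state-change vector (the zeta of the text)."""
    rates = [rate(x) for rate, _ in events]
    a_sum = sum(rates)
    if a_sum == 0.0:
        return list(x), float('inf')  # no event can ever occur again
    tau = -math.log(1.0 - rng.random()) / a_sum
    threshold = rng.random() * a_sum
    acc = 0.0
    for a_i, (_, change) in zip(rates, events):
        acc += a_i
        if threshold < acc:
            return [xi + ci for xi, ci in zip(x, change)], tau
    # guard against floating-point leakage: fire the last event
    return [xi + ci for xi, ci in zip(x, events[-1][1])], tau
```

For a single population with birth rate c_b X and death rate c_d X, the event list would be `[(lambda s: c_b * s[0], [1]), (lambda s: c_d * s[0], [-1])]`.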
4.5 Tau Leaping Algorithm
The tau leaping method avoids the problem by allowing larger steps at a
cost in simulation accuracy. However, the benefit of being able to actually
run the simulation all the way through, from the initial state to the end result,
seems to outweigh the disadvantage. In addition, it is always
possible to minimize the error by choosing \tau to be small.
Instead of simulating every event, the tau leaping method advances the sys-
tem by a given time \tau and updates the quantities of the populations by the number
of events that would have happened during the interval. The assumption is that
the rates of the events are constant during the time interval \tau:

X(t + \tau) = X(t) + \sum_{i} \zeta_i \, Y(a_i(X(t)), \tau)

where Y(a_i(X(t)), \tau) is the number of events of type i which occur during the tau-leap
step. The number of events Y depends on the length of the step (\tau) and the
rate a_i of the event, and the rate a_i depends on the population
state X(t). Obviously, if the rate changes only very slightly (which can happen when
a few objects are added or removed in a population with a large number of
objects), then using the method does not introduce large errors. Therefore, the
larger the number of objects, the larger the time step that can be adopted.

To evaluate how many events Y(a_i(X(t)), \tau) of type A_i happened over the
time period we do the following. Since we assume that the rate is constant
over the interval, the events form a Poisson process, and therefore the number of
events that happen during the time interval follows a Poisson distribution. The
number Y(a_i(X(t)), \tau) is thus estimated by drawing a sample from a Poisson
distribution with mean a_i(X(t)) \tau, the expected number of events of type A_i
in time \tau.
The algorithm works as follows:
1. Generate vectors of sample counts using Poisson distributions with
the following parameters:

\{ r_{k,0}(X(t))\tau, ..., r_{k,M_r}(X(t))\tau \}, \{ d_{k,0}(X(t))\tau, ..., d_{k,M_d}(X(t))\tau \},
\{ ch_{k,0}(X(t))\tau, ..., ch_{k,M_{ch}}(X(t))\tau \}, \{ de_{k,0}(X(t))\tau, ..., de_{k,M_{de}}(X(t))\tau \},
\{ b_{k,0}(X(t))\tau, ..., b_{k,M_b}(X(t))\tau \}

for each event in the system of interacting population objects across all finite
volumes in the space:

\{ Y_{r_{k,0}}, ..., Y_{r_{k,M_r}} \}, \{ Y_{d_{k,0}}, ..., Y_{d_{k,M_d}} \}, \{ Y_{ch_{k,0}}, ..., Y_{ch_{k,M_{ch}}} \},
\{ Y_{de_{k,0}}, ..., Y_{de_{k,M_{de}}} \}, \{ Y_{b_{k,0}}, ..., Y_{b_{k,M_b}} \}
2. Using the computed sample numbers, advance the system by defining the
new state of the system as follows:

X(t + \tau) = X(t) + \sum_{i} \zeta_{a_i} Y_i
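The two steps above can be sketched as follows (an illustrative implementation; the Poisson sampler is Knuth's method, adequate for the modest means of a sketch):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; fine for small and moderate means lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_step(x, events, tau, rng):
    """Fire Y_i ~ Poisson(a_i(x) * tau) copies of every event, with all
    rates frozen at the state at the start of the leap."""
    counts = [poisson(rate(x) * tau, rng) for rate, _ in events]
    new_x = list(x)
    for y, (_, change) in zip(counts, events):
        for idx, c in enumerate(change):
            new_x[idx] += c * y
    return new_x
```

Each call advances the system by the full interval \tau regardless of how many events fired, which is the source of both the speed and the approximation error.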
Switching from the Poisson distribution \mathrm{Poisson}(a_i(X(t))\tau) to the normal approximation a_i(X(t))\tau + \sqrt{a_i(X(t))\tau}\, Z_i is possible when the number of events occurring over the time interval is large, based on the limiting distribution of the Poisson for large mean (\mathrm{Poisson}(\mu) \approx N(\mu, \mu)):

X(t + \tau) = X(t) + \sum_i C_i \left[ a_i(X(t))\tau + \sqrt{a_i(X(t))\tau}\, Z_i \right]

In this case we arrive at the Chemical Langevin Equation, the Euler-Maruyama discretization of the continuous-time problem [74]:

dX(t) = \sum_i C_i \left[ a_i(X(t))\, dt + \sqrt{a_i(X(t))}\, dW_i(t) \right]
where W_i(t) are independent Brownian motions. The simulation algorithm using the Chemical Langevin Equation is identical to the tau-leaping algorithm:
1. Generate vectors of the sample numbers using normal distributions for each event in the system of interacting population objects across all finite volumes in the space:

\begin{pmatrix}
N_{s_1 v_1} & \cdots & N_{s_1 v_k} & \cdots & N_{s_1 v_K} \\
\vdots & & \vdots & & \vdots \\
N_{s_n v_1} & \cdots & N_{s_n v_k} & \cdots & N_{s_n v_K}
\end{pmatrix}
2. Using the computed sample numbers, advance the system by defining the
new state of the system as follows:
X(t + \tau) = X(t) + \sum_i C_i G_i
In both cases, the number of objects in the system should be large to produce
accurate results.
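The normal-approximation (Chemical Langevin) step differs from the tau leap only in how the event counts are sampled; a minimal Euler-Maruyama sketch in the same illustrative style (this is not the Vixdum implementation):

```python
import math, random

def langevin_step(x, events, tau, rng):
    """One Chemical Langevin (Euler-Maruyama) step:
    x_new = x + sum_i C_i * (a_i(x)*tau + sqrt(a_i(x)*tau) * Z_i)."""
    new_x = list(x)
    for rate_fn, change in events:
        mean = max(rate_fn(x), 0.0) * tau                 # a_i(X(t)) * tau, clipped at 0
        g = mean + math.sqrt(mean) * rng.gauss(0.0, 1.0)  # normal sample G_i
        for k, c in enumerate(change):
            new_x[k] += c * g
    return new_x

# Example: A -> B at rate lam * x_A; counts are treated as continuous.
rng = random.Random(1)
lam = 2.0
state = [10000.0, 0.0]
events = [(lambda x: lam * x[0], (-1, +1))]
for _ in range(1000):                                     # integrate to t = 1
    state = langevin_step(state, events, 0.001, rng)
```

As with the tau leap, the change vector (-1, +1) conserves the total, and with a population of 10000 the relative noise stays small, illustrating the remark that both approximations need large object counts.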
Dropping the stochastic part of the equation

X(t + \tau) = X(t) + \sum_i C_i \left[ a_i(X(t))\tau + \sqrt{a_i(X(t))\tau}\, Z_i \right]

yields a set of ordinary differential equations, which are widely used for modeling reaction-rate systems:

X(t + \tau) = X(t) + \sum_i C_i\, a_i(X(t))\tau,

\frac{X(t + \tau) - X(t)}{\tau} = \frac{dX(t)}{dt} = \sum_i C_i\, a_i(X(t))

as \tau \to 0.
Simulating systems with the reaction-rate ODEs computes the thermodynamic (deterministic) states of the system. All the methods (one-event-at-a-time stochastic simulation, tau leaping, Langevin equations, and reaction-rate ODEs) are closely related, as shown above.
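Dropping the noise term turns the Langevin loop into an explicit Euler integrator for the reaction-rate ODEs; a sketch for the single reaction A → B, whose exact mean A_0 e^{-\lambda t} provides a check (illustrative code only):

```python
import math

def reaction_rate_step(x, events, tau):
    """Deterministic limit of the tau leap: x_new = x + sum_i C_i * a_i(x) * tau."""
    new_x = list(x)
    for rate_fn, change in events:
        amount = rate_fn(x) * tau
        for k, c in enumerate(change):
            new_x[k] += c * amount
    return new_x

lam, tau, a0 = 2.0, 1e-4, 10000.0
state = [a0, 0.0]
events = [(lambda x: lam * x[0], (-1, +1))]
for _ in range(10000):                    # integrate to t = 1
    state = reaction_rate_step(state, events, tau)

exact = a0 * math.exp(-lam * 1.0)         # exact mean A_0 * e^{-lambda * t}
```

With step 10^{-4} the Euler solution agrees with the exact exponential to within a fraction of a particle, which is the sense in which the ODE method recovers the thermodynamic state.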

In our simulation system Vixdum we deploy Langevin equations, and the interaction rates are updated after every simulation time step.
4.6 Model of Infiltration and Demyelination
This model describes the penetration of T-cells from blood vessels into brain tissue [75], where they destroy the myelin layer of axons. Initially, activated T-cells are present only in the blood vessels. Activated T-cells do not have enough kinetic energy to pass through the wall of a blood vessel into the brain tissue; the wall of the blood vessel consists of very dense endothelial cells. Activated T-cells produce proteins called cytokines. The cytokines come into contact with the blood vessel wall, touching the endothelial cells and activating them. Activated endothelial cells begin to produce a special protein of very small size. The special feature of these proteins is that they construct a gradient field, which influences the cells that are sensitive to them; the proteins of this group are called chemokines. Macroglia are cells found in brain tissue that are chemo-attracted to the chemokines produced by endothelial cells. Because of diffusion and chemo-attraction, the chemokines travel into the brain tissue, where they contact and activate macroglia. Activated macroglia cells in turn produce chemokines which attract activated T-cells. These chemokines construct a gradient field which covers not only the brain tissue but also the walls of the vessels. Activated T-cells, moving in this gradient field and having more kinetic energy as a result of the chemo-attraction, penetrate through the walls of the blood vessels into the brain tissue and destroy the myelin.
4.7 Identification

Figure 4.7: Different stages of infiltration and demyelination in the model.

The program for the simulation was tested and verified. Nevertheless, with the aim of further research, we traced the simulated interaction of particles for those examples where an explicit solution is available through elementary functions. Consider first the transformation of type A \to B, the simplest scheme with two types of particles. Omitting the derivation from Kolmogorov's equations, we present the exact solutions for the mathematical expectations:

E_A(t) = A_0 e^{-\lambda t}, \qquad E_B(t) = A_0 \left( 1 - e^{-\lambda t} \right),

where \lambda is the rate of the transition and A_0 is the initial value of the process.
A somewhat more complex example is a transformation of type A \to B \to C with transformation rates \lambda, \mu. The expectations are

E_A(t) = A_0 e^{-\lambda t},

E_B(t) = A_0 \frac{\lambda}{\mu - \lambda} \left( e^{-\lambda t} - e^{-\mu t} \right),

E_C(t) = A_0 \left( 1 + \frac{\lambda e^{-\mu t} - \mu e^{-\lambda t}}{\mu - \lambda} \right),

where E_A + E_B + E_C = A_0 and \lambda \neq \mu. Figure 4.8 below presents the experimental data (obtained through the simulation) and figure 4.9 the estimated (theoretical) data, for A_0 = 10000, \lambda = 2, \mu = 1.5. The theoretical values of the variance (figure 4.9) are calculated by the equations

D_A(t) = A_0 e^{-\lambda t} \left( 1 - e^{-\lambda t} \right),

D_B(t) = A_0 \frac{\lambda}{\mu - \lambda} \left( e^{-\lambda t} - e^{-\mu t} \right) \left( 1 - \frac{\lambda}{\mu - \lambda} \left( e^{-\lambda t} - e^{-\mu t} \right) \right),

D_C(t) = A_0 \left( \frac{\lambda e^{-\mu t} - \mu e^{-\lambda t}}{\mu - \lambda} + 1 \right) \left( \frac{\mu e^{-\lambda t} - \lambda e^{-\mu t}}{\mu - \lambda} \right).

The figures show excellent agreement for a very typical stochastic realization of this type of transformation.
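This comparison is easy to reproduce programmatically: each initial particle moves through A → B → C independently, so the counts at time t follow a multinomial law with the expectations above; a sketch using \lambda = 2, \mu = 1.5, A_0 = 10000 as in the figures (the code is ours, for illustration only):

```python
import math, random

def abc_expectations(t, a0, lam, mu):
    """Exact expectations for A -> B -> C (lam != mu)."""
    ea = a0 * math.exp(-lam * t)
    eb = a0 * lam / (mu - lam) * (math.exp(-lam * t) - math.exp(-mu * t))
    ec = a0 - ea - eb
    return ea, eb, ec

def simulate_abc(t, a0, lam, mu, rng):
    """Each particle evolves independently: it leaves A at u ~ Exp(lam) and,
    having entered B, leaves B after a further Exp(mu) waiting time."""
    na = nb = nc = 0
    for _ in range(a0):
        u = rng.expovariate(lam)                 # time of the A -> B jump
        if u > t:
            na += 1
        elif u + rng.expovariate(mu) > t:        # still in B at time t
            nb += 1
        else:
            nc += 1
    return na, nb, nc

rng = random.Random(7)
ea, eb, ec = abc_expectations(1.0, 10000, 2.0, 1.5)
na, nb, nc = simulate_abc(1.0, 10000, 2.0, 1.5, rng)
```

The simulated counts fall within a few binomial standard deviations of the theoretical expectations, the same qualitative agreement shown in the figures.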

Figure 4.8: Experimental curves: the variance of the number of particles of each type, averaged over 20 simulations. Series 1, 2, 3 are cells of type A, B, C respectively.

Figure 4.9: Theoretical curves of the variance of the number of particles of each type. Series 1, 2, 3 are cells of type A, B, C respectively.

5. Elements of theory and research tasks
5.1 Markov branching processes with a finite number of types of particles
Let us denote by N = \{0, 1, 2, \ldots\} the set of all non-negative integers and by N^n the set of all vectors with non-negative integer components. Further, for \alpha, \beta, \gamma \in N^n we write \gamma = \alpha - \beta if \gamma_1 = \alpha_1 - \beta_1, \ldots, \gamma_n = \alpha_n - \beta_n; \alpha \geq \beta if \alpha_1 \geq \beta_1, \ldots, \alpha_n \geq \beta_n; |\alpha| = \alpha_1 + \ldots + \alpha_n; the sum \sum_{\alpha \in N^n} is denoted \sum_\alpha.
5.2 Markov process on N^n. Kolmogorov equations
The state of the Markov process is a vector \xi(t) = (\xi_1(t), \ldots, \xi_n(t)), t \in [0, \infty), whose k-th component \xi_k(t) shows that at time t there are \xi_k(t) particles of type T_k. \xi(t) is a Markov process if its transition probabilities

P_{\alpha\beta}(t) = P\{\xi(t_1 + t) = \beta \mid \xi(t_1) = \alpha\}

satisfy the conditions

P_{\alpha\beta}(t) \geq 0

for all \alpha, \beta \in N^n, t \in [0, \infty) (non-negativity condition);

\sum_\beta P_{\alpha\beta}(t) = 1

for any \alpha \in N^n, t \in [0, \infty) (normalization condition);

P_{\alpha\beta}(t + \tau) = \sum_\gamma P_{\alpha\gamma}(t) P_{\gamma\beta}(\tau)

for all \alpha, \beta \in N^n, t, \tau \in [0, \infty) (Markov property).

The initial condition is

\lim_{t \to 0+} P_{\alpha\beta}(t) = P_{\alpha\beta}(0) = \delta_{\alpha\beta} =
\begin{cases} 1, & \text{if } \alpha = \beta \\ 0, & \text{if } \alpha \neq \beta \end{cases}

It is assumed that the process \xi(t) is stochastically continuous; that is,

\lim_{t \to 0+} P_{\alpha\alpha}(t) = 1, \qquad \lim_{t \to 0+} P_{\alpha\beta}(t) = 0 \quad (\alpha \neq \beta).
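These conditions can be checked concretely on the simplest nontrivial example. For a two-state process with transition intensities a (state 0 to 1) and b (state 1 to 0) the transition matrix is known in closed form, and non-negativity, normalization, and the Markov (Chapman-Kolmogorov) property can all be verified numerically; the two-state example and the code are ours, for illustration only:

```python
import math

def two_state_P(t, a, b):
    """Transition matrix of the 2-state chain 0 <-> 1 with rates a (0->1), b (1->0)."""
    s, e = a + b, math.exp(-(a + b) * t)
    return [[(b + a * e) / s, (a - a * e) / s],
            [(b - b * e) / s, (a + b * e) / s]]

def matmul(P, Q):
    """2x2 matrix product, the right-hand side of the Chapman-Kolmogorov equation."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, t, tau = 1.3, 0.7, 0.4, 0.9
P_t = two_state_P(t, a, b)
P_tau = two_state_P(tau, a, b)
P_sum = two_state_P(t + tau, a, b)
PQ = matmul(P_t, P_tau)        # should equal P(t + tau) entrywise
```

The product P(t)P(\tau) reproduces P(t+\tau) to machine precision, which is exactly the semigroup identity stated above.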
There always exist finite or infinite limits

a_{\alpha\alpha} = \lim_{t \to 0+} \frac{P_{\alpha\alpha}(t) - 1}{t}, \qquad a_{\alpha\beta} = \lim_{t \to 0+} \frac{P_{\alpha\beta}(t)}{t} \quad (\alpha \neq \beta), \quad \alpha, \beta \in N^n.

These limits are called infinitesimal characteristics and can be written as

a_{\alpha\beta} = \left. \frac{dP_{\alpha\beta}(t)}{dt} \right|_{t=0},

so a_{\alpha\beta} can be interpreted as a density of the transition probability. If \alpha \neq \beta, then a_{\alpha\beta} is finite; a_{\alpha\alpha} is either finite or a_{\alpha\alpha} = -\infty. In all cases \sum_{\beta \neq \alpha} a_{\alpha\beta} \leq -a_{\alpha\alpha}. Using a_{\alpha\beta} we classify the states of the process: the state \alpha is called instantaneous if a_{\alpha\alpha} = -\infty. If the state is not instantaneous it is called regular when \sum_{\beta \neq \alpha} a_{\alpha\beta} = -a_{\alpha\alpha}.
The probabilistic meaning of the characteristics \{a_{\alpha\beta}, \alpha, \beta \in N^n\} is as follows. The Markov process \xi(t) stays in state \alpha for a random time \tau_\alpha which has an exponential distribution with mean -1/a_{\alpha\alpha}:

P\{\tau_\alpha < t\} = 1 - e^{a_{\alpha\alpha} t}.

If a_{\alpha\alpha} = 0 then the state is absorbing. If a_{\alpha\alpha} < 0, then at time \tau_\alpha, with probabilities

p_{\alpha\beta} = \frac{a_{\alpha\beta}}{-a_{\alpha\alpha}}, \quad \beta \neq \alpha, \qquad p_{\alpha\alpha} = 0,

the process transitions to the state \beta; in the state \beta the process spends time \tau_\beta; and a similar evolution of the process continues further.
Let all of the states of the process be regular and, for given \beta, let the condition \sum_\gamma a_{\alpha\gamma} P_{\gamma\beta}(t) < \infty be satisfied, and, for given \alpha, let \sum_\gamma P_{\alpha\gamma}(t) a_{\gamma\beta} < \infty also be satisfied. Then, for the given \alpha, \beta, the first (backward) and second (forward) systems of Kolmogorov's differential equations for the transition probabilities P_{\alpha\beta}(t) hold:

\frac{dP_{\alpha\beta}(t)}{dt} = \sum_\gamma a_{\alpha\gamma} P_{\gamma\beta}(t), \quad \alpha, \beta \in N^n \qquad (5.1)

\frac{dP_{\alpha\beta}(t)}{dt} = \sum_\gamma P_{\alpha\gamma}(t) a_{\gamma\beta} \qquad (5.2)

with initial conditions P_{\alpha\alpha}(0) = 1, P_{\alpha\beta}(0) = 0 for \alpha \neq \beta. It is not difficult to prove the conditions for existence and uniqueness of the solution of the system of equations (5.1) and (5.2).
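A process with explicitly known transition probabilities lets one verify these systems directly. For the pure death process, in which each of n initial particles dies independently at rate \lambda, one has P_{n,k}(t) = \binom{n}{k} e^{-k\lambda t}(1 - e^{-\lambda t})^{n-k}, and the forward equation reads dP_{n,k}(t)/dt = (k+1)\lambda P_{n,k+1}(t) - k\lambda P_{n,k}(t); a numerical sketch (the pure death example is ours, for illustration only):

```python
import math

def P(n, k, t, lam):
    """P_{n,k}(t): probability that k of n initial particles survive at time t."""
    p = math.exp(-lam * t)                       # single-particle survival probability
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def forward_rhs(n, k, t, lam):
    """Right-hand side of the forward Kolmogorov equation for the death process."""
    up = (k + 1) * lam * P(n, k + 1, t, lam) if k + 1 <= n else 0.0
    return up - k * lam * P(n, k, t, lam)

def dP_dt(n, k, t, lam, h=1e-6):
    """Central finite difference of P_{n,k} in t."""
    return (P(n, k, t + h, lam) - P(n, k, t - h, lam)) / (2 * h)

n, lam, t = 5, 1.7, 0.3
errors = [abs(dP_dt(n, k, t, lam) - forward_rhs(n, k, t, lam)) for k in range(n + 1)]
```

The finite-difference derivative matches the forward-equation right-hand side for every state k, confirming the system on this solvable case.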
5.3 Branching process with n types of particles
Assume there are n types of particles

A_1, \ldots, A_n.

If there is a set of particles consisting of \alpha_1 particles of type A_1, \alpha_2 particles of type A_2, \ldots, \alpha_n particles of type A_n, we say that the random process is in the state \alpha = (\alpha_1, \ldots, \alpha_n).

In the set of Markov processes we single out a special class: the branching processes, which are determined by the transition probabilities P^i_\gamma(t), equal to the probability that one particle of type A_i transforms in time t into the collection of particles S_\gamma = \gamma_1 A_1 + \ldots + \gamma_n A_n corresponding to the vector \gamma = (\gamma_1, \ldots, \gamma_n) \in N^n. The collection of particles S_\gamma generated as a result of the i-th particle's transformation is determined by a random vector \gamma with the probability distribution \{P^i_\gamma(t)\}, \sum_\gamma P^i_\gamma(t) = 1.

The main property that makes a Markov process also a branching process is that the particles existing at time t_1 produce, at any later moment t_1 + t, t > 0, their offspring independently of one another.

The transition probabilities of the branching process P_{\alpha\beta}(t) satisfy the branching condition

P_{\alpha\beta}(t) = \left( \{P^1_\gamma(t)\}^{*\alpha_1} * \cdots * \{P^n_\gamma(t)\}^{*\alpha_n} \right)(\beta) \qquad (5.3)

for all \alpha, \beta \in N^n, t \in [0, \infty), where * denotes the convolution of distributions on N^n. Equation (5.3) indicates that the distribution \{P_{\alpha\beta}(t)\} is the convolution of \alpha_1 copies of the distribution \{P^1_\gamma(t)\}, \alpha_2 copies of \{P^2_\gamma(t)\}, \ldots, and \alpha_n copies of \{P^n_\gamma(t)\}.
5.4 Multidimensional generating functions
Let g_\alpha be the value of a numeric function at \alpha, defined on N^n. The multidimensional generating function F(s), s = (s_1, \ldots, s_n), corresponding to \{g_\alpha\}, is the sum of the series

F(s) = \sum_\alpha g_\alpha s_1^{\alpha_1} \cdots s_n^{\alpha_n}. \qquad (5.4)

Going forward we use the shorter notation s^\alpha = s_1^{\alpha_1} \cdots s_n^{\alpha_n}. For a vector s = (s_1, \ldots, s_n) we write s = 0 if all components are equal to 0 and s = 1 if all s_i = 1; we write |s| for the vector (|s_1|, \ldots, |s_n|). The generating function F(s) is called positive if all g_\alpha \geq 0. A positive generating function is called probabilistic if F(1) = 1.

If the series (5.4) converges at some point s_0 \neq 0, then it converges for all |s| < |s_0|, and if s is complex, the sum F(s) is in that region an analytic function of all the variables s_1, \ldots, s_n. In this case the function F(s) is uniquely determined by the coefficients g_\alpha. Thus a one-to-one correspondence is established between \{g_\alpha\} and the generating function F(s).
Let \beta \in N^n. If F(s) is the generating function for \{g_\alpha\}, then \partial^\beta F(s)/\partial s^\beta is a generating function for \{\alpha^{[\beta]} g_\alpha\}, where we use the notation

\frac{\partial^\alpha F(s)}{\partial s^\alpha} = \frac{\partial^{|\alpha|} F(s)}{\partial s_1^{\alpha_1} \cdots \partial s_n^{\alpha_n}}.

A probabilistic function F(s) corresponds to an n-dimensional probability distribution \{g_\alpha\} on N^n. A probability generating function does not have to be tied to the distribution \{g_\alpha\} itself: instead it can be tied to a random vector \xi = (\xi_1, \ldots, \xi_n) having \{g_\alpha\} as its distribution. Using the random vector \xi gives an equivalent definition of the probabilistic generating function:

F(s) = E s^\xi.

The expectation E\xi^{[\beta]} is called the factorial moment of order |\beta| = \beta_1 + \ldots + \beta_n; it is also called the \beta-moment. We establish the equality

E\xi^{[\beta]} = \left. \frac{\partial^\beta F(s)}{\partial s^\beta} \right|_{s=1},

where the derivative at the point s = 1 is understood as the corresponding left derivative over all coordinates s_i, i = 1, \ldots, n. In particular, the expectation of a component of the random vector \xi is

E\xi_i = \left. \frac{\partial F(s)}{\partial s_i} \right|_{s=1}, \quad i = 1, \ldots, n,

and the expression for the variance takes the form

D\xi_i = \left[ \frac{\partial^2 F(s)}{\partial s_i^2} + \frac{\partial F(s)}{\partial s_i} - \left( \frac{\partial F(s)}{\partial s_i} \right)^2 \right]_{s=1}, \quad i = 1, \ldots, n.

The following property of generating functions is frequently used: if \xi, \eta are independent random vectors, then the generating function of the sum \xi + \eta is equal to the product of the generating functions of the summands; i.e.,

F_{\xi+\eta}(s) = F_\xi(s) F_\eta(s).
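These moment formulas are easy to verify numerically on a known distribution. For a binomial random variable \xi with PGF F(s) = (1 - p + ps)^n one has E\xi = np and D\xi = np(1-p); a minimal sketch using central finite differences for the derivatives at s = 1 (the binomial example and all names here are ours, for illustration only):

```python
def F(s, n=10, p=0.3):
    """PGF of Binomial(n, p): F(s) = E s^xi = (1 - p + p*s)^n."""
    return (1 - p + p * s) ** n

h = 1e-5
dF = (F(1 + h) - F(1 - h)) / (2 * h)            # dF/ds at s = 1  -> E xi
d2F = (F(1 + h) - 2 * F(1) + F(1 - h)) / h**2   # d2F/ds2 at s = 1 -> E xi(xi - 1)

mean = dF                       # E xi = n*p = 3
var = d2F + dF - dF**2          # D xi = F'' + F' - (F')^2 at s = 1, = n*p*(1-p) = 2.1
```

The finite-difference values reproduce np and np(1-p) to several digits, confirming both the first-moment and the variance formula.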
5.5 Equations of branching processes
Let us introduce the generating functions (z = (z_1, \ldots, z_n))

G_\beta(t; z) = \sum_\alpha P_{\alpha\beta}(t) \frac{z^\alpha}{\alpha!}; \qquad F_\alpha(t; s) = \sum_\beta P_{\alpha\beta}(t) s^\beta;

h_i(s) = \sum_\gamma p^i_\gamma s^\gamma, \quad i = 1, \ldots, n, \quad |s| \leq 1,

and the linear differential operator with constant coefficients

A = \sum_{i=1}^n \lambda_i z_i \left( h_i\!\left( \frac{\partial}{\partial z_1}, \ldots, \frac{\partial}{\partial z_n} \right) - \frac{\partial}{\partial z_i} \right).

Here \lambda_i > 0 and \{p^i_\gamma\} are the given probability distributions on N^n. The exponential generating function of the transition probabilities G_\beta(t; z) of the branching process, for all \beta \in N^n, satisfies the linear partial differential equation

\frac{\partial G_\beta(t; z)}{\partial t} = A G_\beta(t; z) \qquad (5.10)

with initial condition G_\beta(0; z) = z^\beta/\beta!. Equation (5.10) follows from the backward system of Kolmogorov's partial differential equations (5.1).
The generating function of the transition probabilities F_\alpha(t; s) of the branching process, for each \alpha \in N^n, satisfies the linear partial differential equation, for each |s| \leq 1,

\frac{\partial F_\alpha(t; s)}{\partial t} = \sum_{i=1}^n \lambda_i \left( h_i(s) - s_i \right) \frac{\partial F_\alpha(t; s)}{\partial s_i} \qquad (5.11)

with initial condition F_\alpha(0; s) = s^\alpha. Equation (5.11) follows from the second (forward) system of differential equations (5.2). The branching property (5.3) for the generating functions F_\alpha(t; s) can be written as

F_\alpha(t; s) = F_1^{\alpha_1}(t; s) \cdots F_n^{\alpha_n}(t; s), \qquad (5.12)

where F_i(t; s) is the generating function of the process starting with one particle of type i (the single-particle generating function). The single-particle generating functions satisfy the system of ordinary nonlinear differential equations

\frac{dF_i(t; s)}{dt} = \lambda_i \left( h_i(F_1(t; s), \ldots, F_n(t; s)) - F_i(t; s) \right), \quad i = 1, \ldots, n, \qquad (5.13)

with initial conditions F_1(0; s) = s_1, \ldots, F_n(0; s) = s_n.

The branching process corresponds to the following transformations:

A_1 \to \gamma_1 A_1 + \ldots + \gamma_n A_n;
\ldots
A_n \to \gamma_1 A_1 + \ldots + \gamma_n A_n.
5.6 Probabilities of extinction
If for a branching process \xi(t) = 0, it means that the process has become extinct by the time t. The probability P^i_0(t), i = 1, \ldots, n, is called the probability of extinction by time t for the process starting with one particle of type i.

If the branching process \xi(t) becomes 0 at some finite t, then the process becomes extinct. The probabilities of extinction q^i of the branching process are equal to the limits q^i = \lim_{t \to \infty} P^i_0(t).
The branching process for which q^1 = \ldots = q^n = 1 is called extincting. If for some i the probability of extinction is q^i < 1, then the process is not extincting.
The probability of extinction q = (q^1, \ldots, q^n) of the branching process satisfies the system of equations

h_i(q) = q^i, \quad i = 1, \ldots, n.

5.7 Equations for generating functions
In this work I consider branching processes over the set

N^2 = \{(\alpha_1, \alpha_2),\ \alpha_1, \alpha_2 \in N\}

(with two types of particles). For now, limit the types of transformations to:

A \to 0,\ B,\ A + B,\ 2A + B,\ 2A;
B \to 0,\ A,\ A + B,\ A + 2B,\ 2B.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. The backward partial differential equation takes the form

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_0 + p_1 \frac{\partial}{\partial z_2} + p_2 \frac{\partial^2}{\partial z_1 \partial z_2} + p_3 \frac{\partial^3}{\partial z_1^2 \partial z_2} + p_4 \frac{\partial^2}{\partial z_1^2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) +

+ \mu z_2 \left( q_0 + q_1 \frac{\partial}{\partial z_1} + q_2 \frac{\partial^2}{\partial z_1 \partial z_2} + q_3 \frac{\partial^3}{\partial z_1 \partial z_2^2} + q_4 \frac{\partial^2}{\partial z_2^2} - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2)

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_0 \geq 0, p_1 \geq 0, p_2 \geq 0, p_3 \geq 0, p_4 \geq 0, p_0 + p_1 + p_2 + p_3 + p_4 = 1; q_0 \geq 0, q_1 \geq 0, q_2 \geq 0, q_3 \geq 0, q_4 \geq 0, q_0 + q_1 + q_2 + q_3 + q_4 = 1.
The forward equation expressed through the generating function takes the form

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_0 + p_1 s_2 + p_2 s_1 s_2 + p_3 s_1^2 s_2 + p_4 s_1^2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} +

+ \mu \left( q_0 + q_1 s_1 + q_2 s_1 s_2 + q_3 s_1 s_2^2 + q_4 s_2^2 - s_2 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2} \qquad (5.15)

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}. The generating function F_\alpha(t; s_1, s_2) satisfies the non-linear branching condition (5.12); i.e., F_\alpha(t; s_1, s_2) = F_1^{\alpha_1}(t; s_1, s_2)\, F_2^{\alpha_2}(t; s_1, s_2). The system of equations (5.13) for the one-particle generating functions is

\frac{dF_1}{dt} = \lambda \left( p_0 + p_1 F_2 + p_2 F_1 F_2 + p_3 F_1^2 F_2 + p_4 F_1^2 - F_1 \right),

\frac{dF_2}{dt} = \mu \left( q_0 + q_1 F_1 + q_2 F_1 F_2 + q_3 F_1 F_2^2 + q_4 F_2^2 - F_2 \right) \qquad (5.16)

with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2.
5.7.1 Equations for expectations
The expectations of the numbers of particles are found by the formula

E_i(t) = E\xi_i(t) = \left. \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_i} \right|_{s=1}, \quad i = 1, 2.

Differentiating equation (5.15), we get the system of equations

\frac{dE_1(t)}{dt} = \lambda (p_2 + 2p_3 + 2p_4 - 1) E_1(t) + \mu (q_1 + q_2 + q_3) E_2(t),

\frac{dE_2(t)}{dt} = \lambda (p_1 + p_2 + p_3) E_1(t) + \mu (q_2 + 2q_3 + 2q_4 - 1) E_2(t)

with initial conditions E_1(0) = \alpha_1, E_2(0) = \alpha_2. It is easy to build exact solutions of the system of equations (5.16) using specifically tailored algorithms.
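The linear system for the expectations is also straightforward to integrate numerically; a minimal Euler sketch, where the parameter values and the pure-death choice p_0 = q_0 = 1 are ours, chosen so that the answer can be checked against the explicit exponentials E_1(t) = E_1(0)e^{-\lambda t}, E_2(t) = E_2(0)e^{-\mu t}:

```python
import math

def expectation_step(e1, e2, lam, mu, p, q, dt):
    """One Euler step of the system
    dE1/dt = lam*(p2 + 2*p3 + 2*p4 - 1)*E1 + mu*(q1 + q2 + q3)*E2
    dE2/dt = lam*(p1 + p2 + p3)*E1 + mu*(q2 + 2*q3 + 2*q4 - 1)*E2."""
    a11 = lam * (p[2] + 2 * p[3] + 2 * p[4] - 1)
    a12 = mu * (q[1] + q[2] + q[3])
    a21 = lam * (p[1] + p[2] + p[3])
    a22 = mu * (q[2] + 2 * q[3] + 2 * q[4] - 1)
    return e1 + (a11 * e1 + a12 * e2) * dt, e2 + (a21 * e1 + a22 * e2) * dt

lam, mu = 2.0, 1.5
p = [1.0, 0.0, 0.0, 0.0, 0.0]   # A -> 0 only (pure death of A)
q = [1.0, 0.0, 0.0, 0.0, 0.0]   # B -> 0 only (pure death of B)
e1, e2, dt = 100.0, 50.0, 1e-4
for _ in range(10000):           # integrate to t = 1
    e1, e2 = expectation_step(e1, e2, lam, mu, p, q, dt)

exact1 = 100.0 * math.exp(-lam)  # E1(1) for pure death of A
exact2 = 50.0 * math.exp(-mu)    # E2(1) for pure death of B
```

For general offspring probabilities the same loop integrates the coupled system, for which no such elementary check is available.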

6. Transformations of the form A \to \gamma_1 A + \gamma_2 B; B \to 0
6.1 Transformation A \to A + B, 2A + B; B \to 0
The transformation of particles of the branching process can be represented in the following form:

A \to A + B,\ 2A + B;
B \to 0.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. The backward partial differential equation has the following form:

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_2 \frac{\partial^2}{\partial z_1 \partial z_2} + p_3 \frac{\partial^3}{\partial z_1^2 \partial z_2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( 1 - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2),

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_2 \geq 0, p_3 \geq 0, p_2 + p_3 = 1.
The forward equation represented through generating functions is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_2 s_1 s_2 + p_3 s_1^2 s_2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (1 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2}

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}. The system of equations for the one-particle generating functions is

\frac{dF_1}{dt} = \lambda \left( p_3 F_1^2 F_2 + p_2 F_1 F_2 - F_1 \right), \qquad (6.1)

\frac{dF_2}{dt} = \mu (1 - F_2) \qquad (6.2)

with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2.
Solving (6.2) shows that F_2(t; s_2) = 1 - (1 - s_2) e^{-\mu t}. Equation (6.1) is a Bernoulli equation, which is solved in quadratures. The standard replacement

H(t) = \frac{1}{F_1(t; s_1, s_2)}

leads to the linear equation

\frac{dH}{dt} = \lambda \left( p_3 + p_2 (1 - s_2) e^{-\mu t} \right) H - \lambda p_3 \left( 1 - (1 - s_2) e^{-\mu t} \right).

This is an ordinary first-order differential equation, which can be solved by the method of variation of constants. The corresponding homogeneous equation

\frac{dH}{dt} = \lambda \left( p_3 + p_2 (1 - s_2) e^{-\mu t} \right) H

has the solution

H(t) = C \exp\left\{ \lambda p_3 t - a (1 - s_2) e^{-\mu t} \right\}, \qquad a = \frac{\lambda p_2}{\mu},

and C = C(t) is then obtained from the initial condition.
The final form is

F_1(t; s_1, s_2) = e^{\lambda p_3 t - a(1 - s_2) e^{-\mu t}} \Big[ \tfrac{1}{s_1} e^{-a(1 - s_2)} - {}_1F_1(b;\, b + 1;\, a(1 - s_2)) + (1 - s_2)\, {}_1F_1(b + 1;\, b + 2;\, a(1 - s_2)) + {}_1F_1(b;\, b + 1;\, a(1 - s_2) e^{-\mu t}) - (1 - s_2) e^{-\mu t}\, {}_1F_1(b + 1;\, b + 2;\, a(1 - s_2) e^{-\mu t}) \Big]^{-1},

where b = \lambda p_3/\mu.
The generating function of the transition probabilities of the branching process with arbitrary initial state (\alpha_1, \alpha_2) is determined by the branching property:

F_\alpha(t; s_1, s_2) = F_1^{\alpha_1}(t; s_1, s_2)\, F_2^{\alpha_2}(t; s_2).

The exact solution for this form of transformation cannot be expressed in elementary functions. The above system is simulated using single iterations with various rates, where r_m is a relative growth rate (in percent) and i_m is an interaction rate (in percent) for the populations:

A → A + B  ⇒ A relative growth r_2
A → A + B  ⇒ A produces B, i_3
A → 2A + B ⇒ A relative growth r_1
A → 2A + B ⇒ A produces B, i_2
B → 0      ⇒ B self-destructs, i_1

r_1 = 2, r_2 = 2, i_1 = 30, i_2 = 30, i_3 = 30;
r_1 = 2, r_2 = 2, i_1 = 50, i_2 = 50, i_3 = 50;
r_1 = 2, r_2 = 2, i_1 = 70, i_2 = 70, i_3 = 70;
r_1 = 7, r_2 = 7, i_1 = 30, i_2 = 30, i_3 = 30;
r_1 = 7, r_2 = 7, i_1 = 50, i_2 = 50, i_3 = 50;
r_1 = 7, r_2 = 7, i_1 = 70, i_2 = 70, i_3 = 70;
r_1 = 7, r_2 = 7, i_1 = 20, i_2 = 30, i_3 = 50;
r_1 = 5, r_2 = 5, i_1 = 50, i_2 = 10, i_3 = 10;
[Plots: simulated trajectories of populations A and B for the eight parameter settings listed above; horizontal axis t.]
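The transformation of this subsection can also be simulated exactly, one event at a time, by the Gillespie algorithm; a sketch in Python with illustrative parameter values (this is our own sketch, not the simulator used to produce the figures):

```python
import random

def gillespie(nA, nB, lam, p2, p3, mu, t_end, rng):
    """Exact event-driven simulation of
    A -> A + B  (rate lam*p2 per A particle),
    A -> 2A + B (rate lam*p3 per A particle),
    B -> 0      (rate mu per B particle)."""
    t = 0.0
    while True:
        rates = [lam * p2 * nA, lam * p3 * nA, mu * nB]
        total = sum(rates)
        if total == 0.0:
            return nA, nB
        t += rng.expovariate(total)          # exponential waiting time to next event
        if t > t_end:
            return nA, nB
        u = rng.random() * total             # choose which event fired
        if u < rates[0]:
            nB += 1                           # A -> A + B
        elif u < rates[0] + rates[1]:
            nA += 1
            nB += 1                           # A -> 2A + B
        else:
            nB -= 1                           # B -> 0

rng = random.Random(3)
nA, nB = gillespie(nA=10, nB=0, lam=1.0, p2=0.5, p3=0.5, mu=1.0, t_end=2.0, rng=rng)
```

Since both A-transformations keep or increase the A count, A is non-decreasing along any trajectory, which gives a simple invariant to check.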
6.2 Transformation A \to 0, B, A + B, 2A; B \to 0
6.2.1 System of non-linear equations
The schema of the transformation of the branching process is

A \to 0,\ B,\ A + B,\ 2A;
B \to 0.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. The backward equation expressed through generating functions can be written as

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_0 + p_1 \frac{\partial}{\partial z_2} + p_2 \frac{\partial^2}{\partial z_1 \partial z_2} + p_4 \frac{\partial^2}{\partial z_1^2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( 1 - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2)

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_0 \geq 0, p_1 \geq 0, p_2 \geq 0, p_4 \geq 0, p_0 + p_1 + p_2 + p_4 = 1. Kolmogorov's forward equation expressed in generating functions is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_0 + p_1 s_2 + p_2 s_1 s_2 + p_4 s_1^2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (1 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2}

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}. The non-linear system of equations for the one-particle generating functions takes the form

\frac{dF_1}{dt} = \lambda \left( p_4 F_1^2 + p_2 F_1 F_2 + p_1 F_2 + p_0 - F_1 \right),

\frac{dF_2}{dt} = \mu (1 - F_2)

with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2, so that

F_2(t; s_2) = 1 - (1 - s_2) e^{-\mu t}.
Replacing z = e^{-\mu t}, we obtain

\frac{dF_1}{dz} = -\frac{d}{z} F_1^2 + \frac{a + b + d + c(1 - s_2) z}{z} F_1 - \frac{a + b - b(1 - s_2) z}{z}, \qquad (6.3)

where a = \lambda p_0/\mu, b = \lambda p_1/\mu, c = \lambda p_2/\mu, d = \lambda p_4/\mu.

The Riccati equation (6.3) is reduced to a linear differential equation of second order by the substitution

F_1(z; s_1, s_2) = \frac{z}{d} \frac{1}{y(z; s_2)} \frac{dy(z; s_2)}{dz}.

The general solution of equation (6.3) takes the form

F_1(z; s_1, s_2) = \frac{z}{d} \frac{1}{y(z; s_2)} \frac{dy(z; s_2)}{dz} = \frac{C W_1(t; s_2) + W_2(t; s_2)}{C H_1(t; s_2) + H_2(t; s_2)}
with the notations

W_1(t; s_2) = (a + b)\, e^{-(a+b)\mu t}\, {}_2F_2\!\left(a + b + \tfrac{bd}{c},\ a + b + 1;\ a + b - d + 1,\ a + b;\ c(1 - s_2) e^{-\mu t}\right);

W_2(t; s_2) = d\, e^{-d\mu t}\, {}_2F_2\!\left(d + \tfrac{bd}{c},\ d + 1;\ -a - b + d + 1,\ d;\ c(1 - s_2) e^{-\mu t}\right);

H_1(t; s_2) = d\, e^{-(a+b)\mu t}\, {}_1F_1\!\left(a + b + \tfrac{bd}{c};\ a + b - d + 1;\ c(1 - s_2) e^{-\mu t}\right);

H_2(t; s_2) = d\, e^{-d\mu t}\, {}_1F_1\!\left(d + \tfrac{bd}{c};\ -a - b + d + 1;\ c(1 - s_2) e^{-\mu t}\right).

The constant C is derived from the initial condition F_1(0; s_1, s_2) = s_1:

C = \frac{s_1 H_2(0; s_2) - W_2(0; s_2)}{W_1(0; s_2) - s_1 H_1(0; s_2)}.
The generating function F_\alpha(t; s_1, s_2) for an arbitrary initial state is determined by the branching property:

F_\alpha(t; s_1, s_2) = F_1^{\alpha_1}(t; s_1, s_2)\, F_2^{\alpha_2}(t; s_2) = \left( \frac{C W_1(t; s_2) + W_2(t; s_2)}{C H_1(t; s_2) + H_2(t; s_2)} \right)^{\alpha_1} \left( 1 - (1 - s_2) e^{-\mu t} \right)^{\alpha_2}.

As a check, at s_1 = 1, s_2 = 1 one has W_2(t; 1) = d e^{-d\mu t}, H_2(t; 1) = d e^{-d\mu t}, and therefore F_1(t; 1, 1) = 1.
The exact solution for this form of transformation cannot be expressed in elementary functions. The above system is simulated using single iterations with various rates:

A → 0     ⇒ A self-destructs, i_4
A → B     ⇒ A transforms into B, i_3
A → A + B ⇒ A relative growth r_2
A → A + B ⇒ A produces B, i_2
A → 2A    ⇒ A relative growth r_1
B → 0     ⇒ B self-destructs, i_1

r_1 = 3, r_2 = 2, i_1 = 20, i_2 = 20, i_3 = 20, i_4 = 10;
r_1 = 3, r_2 = 2, i_1 = 20, i_2 = 50, i_3 = 50, i_4 = 10;
r_1 = 3, r_2 = 2, i_1 = 20, i_2 = 50, i_3 = 70, i_4 = 05;
r_1 = 4, r_2 = 2, i_1 = 20, i_2 = 20, i_3 = 50, i_4 = 05;
r_1 = 5, r_2 = 2, i_1 = 50, i_2 = 50, i_3 = 70, i_4 = 05;
r_1 = 7, r_2 = 2, i_1 = 70, i_2 = 70, i_3 = 20, i_4 = 05;
[Plots: simulated trajectories of populations A and B for the parameter settings listed above.]
6.3 Transformation A \to 0, A + B, 2A + B, 2A; B \to 0
Let the system of transitions of the branching process be:

A \to 0,\ A + B,\ 2A + B,\ 2A;
B \to 0.

The intensities of the transitions are \lambda and \mu, \lambda > 0, \mu > 0. Kolmogorov's backward partial differential equation in this case takes the form

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_0 + p_2 \frac{\partial^2}{\partial z_1 \partial z_2} + p_3 \frac{\partial^3}{\partial z_1^2 \partial z_2} + p_4 \frac{\partial^2}{\partial z_1^2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( 1 - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2)

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_0 \geq 0, p_2 \geq 0, p_3 \geq 0, p_4 \geq 0, p_0 + p_2 + p_3 + p_4 = 1.
Kolmogorov's forward equation, expressed through generating functions, is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_0 + p_2 s_1 s_2 + p_3 s_1^2 s_2 + p_4 s_1^2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (1 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2}

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}. The non-linear system for the one-particle generating functions, with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2, again gives F_2(t; s_2) = 1 - (1 - s_2) e^{-\mu t}. Replacing z = e^{-\mu t} in the first equation of the system:

\frac{dF_1}{dz} = -\frac{c + d - c(1 - s_2) z}{z} F_1^2 + \frac{a + c + d + b(1 - s_2) z}{z} F_1 - \frac{a}{z}, \qquad (6.5)

where a = \lambda p_0/\mu, b = \lambda p_2/\mu, c = \lambda p_3/\mu, d = \lambda p_4/\mu.
Equation (6.5) is a Riccati equation and can be transformed into an ordinary linear differential equation of the second order. The drawback is that the resulting equation is complicated, and the recurrence relation for determining the coefficients of the generalized series is a second-order relation with quadratic coefficients. For this reason, before transforming equation (6.5), we perform the substitution V = 1/F_1. Then for V(z; s_1, s_2) the equation becomes

\frac{dV}{dz} = \frac{a}{z} V^2 - \frac{a + c + d + b(1 - s_2) z}{z} V + \frac{c + d - c(1 - s_2) z}{z},

which is also a Riccati equation.

Introduce the notation

W_1(t; s_2) = a\, e^{a\mu t}\, {}_2F_2\!\left(a - \tfrac{ac}{b},\ a + 1;\ a + c + d + 1,\ a;\ b(1 - s_2) e^{-\mu t}\right);

W_2(t; s_2) = (c + d)\, e^{(c+d)\mu t}\, {}_2F_2\!\left(c + d - \tfrac{ac}{b},\ c + d + 1;\ -a - c - d + 1,\ c + d;\ b(1 - s_2) e^{-\mu t}\right);

H_1(t; s_2) = d\, e^{a\mu t}\, {}_1F_1\!\left(a - \tfrac{ac}{b};\ a + c + d + 1;\ b(1 - s_2) e^{-\mu t}\right);

H_2(t; s_2) = d\, e^{(c+d)\mu t}\, {}_1F_1\!\left(c + d - \tfrac{ac}{b};\ -a - c - d + 1;\ b(1 - s_2) e^{-\mu t}\right).

The general solution can be written as

F_1(t; s_1, s_2) = \frac{H_1(t; s_2) + C H_2(t; s_2)}{W_1(t; s_2) + C W_2(t; s_2)},

where the constant C is determined from the initial condition F_1(0; s_1, s_2) = s_1:

C = \frac{H_1(0; s_2) - s_1 W_1(0; s_2)}{s_1 W_2(0; s_2) - H_2(0; s_2)}.
The exact solution for this form of transformation cannot be expressed in elementary functions. The above system is simulated using single iterations with various rates:

A → 0      ⇒ A self-destructs, i_1
A → A + B  ⇒ A relative growth r_2
A → A + B  ⇒ A produces B, i_4
A → 2A + B ⇒ A relative growth r_1
A → 2A + B ⇒ A produces B, i_3
A → 2A     ⇒ A relative growth r_1
B → 0      ⇒ B self-destructs, i_2

r_1 = 3, r_2 = 2, i_1 = 20, i_2 = 20, i_3 = 20, i_4 = 20;
r_1 = 3, r_2 = 2, i_1 = 20, i_2 = 50, i_3 = 50, i_4 = 50;
r_1 = 3, r_2 = 2, i_1 = 20, i_2 = 50, i_3 = 70, i_4 = 70;
r_1 = 4, r_2 = 2, i_1 = 20, i_2 = 20, i_3 = 50, i_4 = 90;
r_1 = 4, r_2 = 2, i_1 = 50, i_2 = 50, i_3 = 70, i_4 = 90;
r_1 = 5, r_2 = 2, i_1 = 50, i_2 = 50, i_3 = 70, i_4 = 20;
r_1 = 3, r_2 = 4, i_1 = 70, i_2 = 70, i_3 = 70, i_4 = 70;
r_1 = 3, r_2 = 5, i_1 = 70, i_2 = 50, i_3 = 50, i_4 = 70;
[Plots: simulated trajectories of populations A and B for the eight parameter settings listed above.]

6.4 Transformation A \to B, 2A + B; B \to 0
The schema of the transformation of the branching process is

A \to B,\ 2A + B;
B \to 0.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. Kolmogorov's backward equation expressed through generating functions is

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_1 \frac{\partial}{\partial z_2} + p_3 \frac{\partial^3}{\partial z_1^2 \partial z_2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( 1 - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2),

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_1 \geq 0, p_3 \geq 0, p_1 + p_3 = 1. Kolmogorov's forward equation expressed through generating functions is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_1 s_2 + p_3 s_1^2 s_2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (1 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2} \qquad (6.6)

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}.

The system of equations for the one-particle generating functions is

\frac{dF_1}{dt} = \lambda \left( p_3 F_1^2 F_2 + p_1 F_2 - F_1 \right),

\frac{dF_2}{dt} = \mu (1 - F_2),

with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2, so that F_2(t; s_2) = 1 - (1 - s_2) e^{-\mu t}. Replacing z = e^{-\mu t} in the first equation gives

\frac{dF_1}{dz} = -\frac{b(1 - (1 - s_2) z)}{z} F_1^2 + \frac{a + b}{z} F_1 - \frac{a(1 - (1 - s_2) z)}{z},

where a = \lambda p_1/\mu, b = \lambda p_3/\mu. The substitution

F_1(z; s_1, s_2) = \frac{z}{b(1 - (1 - s_2) z)} \frac{1}{y(z; s_2)} \frac{dy(z; s_2)}{dz}

leads to the second-order linear equation

z^2 \frac{d^2 y(z; s_2)}{dz^2} + z\, \frac{1 - (a + b)(1 - (1 - s_2) z)}{1 - (1 - s_2) z}\, \frac{dy(z; s_2)}{dz} + ab (1 - (1 - s_2) z)^2 y(z; s_2) = 0.
The exact solution for this form of transformation cannot be expressed in elementary functions. The above system is simulated using single iterations with various rates:

A → B      ⇒ A produces B, i_3
A → 2A + B ⇒ A relative growth r_1, r_2
A → 2A + B ⇒ A produces B, i_2
B → 0      ⇒ B self-destructs, i_1

r_1 = 2, r_2 = 2, i_1 = 20, i_2 = 20, i_3 = 20;
r_1 = 2, r_2 = 2, i_1 = 20, i_2 = 50, i_3 = 50;
r_1 = 2, r_2 = 2, i_1 = 20, i_2 = 50, i_3 = 70;
r_1 = 2, r_2 = 2, i_1 = 20, i_2 = 20, i_3 = 50;
r_1 = 5, r_2 = 5, i_1 = 50, i_2 = 50, i_3 = 70;
r_1 = 5, r_2 = 5, i_1 = 70, i_2 = 40, i_3 = 40;
r_1 = 5, r_2 = 5, i_1 = 70, i_2 = 50, i_3 = 20;
r_1 = 5, r_2 = 5, i_1 = 70, i_2 = 20, i_3 = 50;
[Plots: simulated trajectories of populations A and B for the eight parameter settings listed above; horizontal axis t.]
7. Transformations of the form A \to \gamma_1 A + \gamma_2 B; B \to 2B
7.1 Transformation A \to 2A + B; B \to 2B
Transformations of the particles in the branching process occur by the following schema:

A \to 2A + B;
B \to 2B.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. Kolmogorov's backward equation expressed through generating functions is

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( \frac{\partial^3}{\partial z_1^2 \partial z_2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( \frac{\partial^2}{\partial z_2^2} - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2)

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Kolmogorov's forward equation in generating functions is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda (s_1^2 s_2 - s_1) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (s_2^2 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2}

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}. The system of non-linear equations for the one-particle generating functions is as follows:

\frac{dF_1}{dt} = \lambda (F_1^2 F_2 - F_1), \qquad (7.1)

\frac{dF_2}{dt} = \mu (F_2^2 - F_2), \qquad (7.2)

with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2.

Equation (7.2) is a Bernoulli equation, with solution

F_2(t; s_2) = \frac{s_2}{s_2 + (1 - s_2) e^{\mu t}} = \frac{b e^{-\mu t}}{1 + b e^{-\mu t}},

where b = s_2/(1 - s_2). Substituting into (7.1):

\frac{dF_1}{dt} = \lambda \frac{b e^{-\mu t}}{1 + b e^{-\mu t}} F_1^2 - \lambda F_1.

Substituting H(t) = F_1^{-1}(t; s_1, s_2):

\frac{dH}{dt} = \lambda H - \lambda \frac{b e^{-\mu t}}{1 + b e^{-\mu t}},

and with H(t) = C(t) e^{\lambda t},

\frac{dC}{dt} = -\lambda \frac{b e^{-(\lambda + \mu) t}}{1 + b e^{-\mu t}}. \qquad (7.3)

Using the sum of a geometric progression,

\frac{1}{1 + b e^{-\mu t}} = \sum_{k=0}^{\infty} (-1)^k b^k e^{-k\mu t},

and integrating (7.3):

C(t) = C_0 + b e^{-(\lambda + \mu) t} \sum_{k=0}^{\infty} \frac{\lambda}{\lambda + \mu(k + 1)} (-1)^k b^k e^{-k\mu t} = C_0 + \frac{\lambda b}{\lambda + \mu}\, e^{-(\lambda + \mu) t}\, {}_2F_1\!\left(\lambda/\mu + 1, 1;\ \lambda/\mu + 2;\ -b e^{-\mu t}\right),

where {}_2F_1(\alpha, \beta; \gamma; x) is the Gaussian hypergeometric function. The constant C_0 is determined from the initial condition F_1(0; s_1, s_2) = s_1:

C_0 = \frac{1}{s_1} - \frac{\lambda b}{\lambda + \mu}\, {}_2F_1\!\left(\lambda/\mu + 1, 1;\ \lambda/\mu + 2;\ -b\right).

Finally,

F_1(t; s_1, s_2) = \left[ \frac{e^{\lambda t}}{s_1} - \frac{\lambda s_2}{(\lambda + \mu)(1 - s_2)}\, e^{\lambda t}\, {}_2F_1\!\left(\lambda/\mu + 1, 1;\ \lambda/\mu + 2;\ -\frac{s_2}{1 - s_2}\right) + \frac{\lambda s_2}{(\lambda + \mu)(1 - s_2)}\, e^{-\mu t}\, {}_2F_1\!\left(\lambda/\mu + 1, 1;\ \lambda/\mu + 2;\ -\frac{s_2}{1 - s_2} e^{-\mu t}\right) \right]^{-1}.

The generating function of the transition probabilities of the branching process with an arbitrary initial condition (\alpha_1, \alpha_2) is determined by the branching property:

F_\alpha(t; s_1, s_2) = F_1^{\alpha_1}(t; s_1, s_2)\, F_2^{\alpha_2}(t; s_2).
The exact solution for this form of transformation cannot be expressed in elementary functions. The above system is simulated using single iterations with various rates:

A → 2A + B ⇒ A relative growth r_1
A → 2A + B ⇒ A produces B, i_1
B → 2B     ⇒ B relative growth r_1

r_1 = 0.1, i_1 = 0.2;
r_1 = 0.2, i_1 = 0.4;
r_1 = 0.2, i_1 = 0.8;
r_1 = 0.2, i_1 = 1;
r_1 = 0.2, i_1 = 3;
r_1 = 0.3, i_1 = 4;
r_1 = 0.3, i_1 = 5;
r_1 = 0.3, i_1 = 6;

[Plots: simulated trajectories of populations A and B for the parameter settings listed above.]
7.2 Transformation A \to 0, B, A + B, 2A; B \to 2B
Transformations of the particles in the branching process occur by the following schema:

A \to 0,\ B,\ A + B,\ 2A;
B \to 2B.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. Kolmogorov's backward equation expressed in generating functions is

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_0 + p_1 \frac{\partial}{\partial z_2} + p_2 \frac{\partial^2}{\partial z_1 \partial z_2} + p_4 \frac{\partial^2}{\partial z_1^2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( \frac{\partial^2}{\partial z_2^2} - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2)

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_0 \geq 0, p_1 \geq 0, p_2 \geq 0, p_4 \geq 0, p_0 + p_1 + p_2 + p_4 = 1. Kolmogorov's forward equation expressed in generating functions is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_0 + p_1 s_2 + p_2 s_1 s_2 + p_4 s_1^2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (s_2^2 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2}

with initial condition F_\alpha(0; s_1, s_2) = s_1^{\alpha_1} s_2^{\alpha_2}. The non-linear system of equations for the one-particle generating functions has the form

\frac{dF_1}{dt} = \lambda \left( p_4 F_1^2 + p_2 F_1 F_2 + p_1 F_2 + p_0 - F_1 \right),

\frac{dF_2}{dt} = \mu (F_2^2 - F_2),

with initial conditions F_1(0; s_1, s_2) = s_1, F_2(0; s_1, s_2) = s_2.
As before,

F_2(t; s_2) = \frac{f e^{-\mu t}}{f e^{-\mu t} - 1}, \qquad f = \frac{s_2}{s_2 - 1}.

Substituting z = e^{-\mu t}, the Riccati equation is

\frac{dF_1}{dz} = -\frac{d}{z} F_1^2 + \frac{a + b + c + d - c F_2(z)}{z} F_1 - \frac{a + b F_2(z)}{z}, \qquad F_2(z) = \frac{f z}{f z - 1},

where a = \lambda p_0/\mu, b = \lambda p_1/\mu, c = \lambda p_2/\mu, d = \lambda p_4/\mu.
Transforming this into a linear differential equation of the second order by the substitution

F_1(z; s_1, s_2) = \frac{z}{d} \frac{1}{y(z)} \frac{dy(z)}{dz}

gives

z^2 \frac{d^2 y}{dz^2} + z \left( 1 - a - b - d - \frac{c}{1 - f z} \right) \frac{dy}{dz} + d \left( a + \frac{b f z}{f z - 1} \right) y(z) = 0.

The solution is sought in the form of a generalized power series with undetermined coefficients:

y(z) = z^{\varepsilon} \sum_{k} A_k z^k, \qquad A_0 \neq 0.
The general solution can be written as

F_1(z; s_1, s_2) = \frac{z}{d} \frac{1}{y(z; s_2)} \frac{dy(z; s_2)}{dz} = \frac{C W_1(t; s_2) + W_2(t; s_2)}{C H_1(t; s_2) + H_2(t; s_2)},

with

W_1(t; s_2) = \left(v - \tfrac{\sqrt{u}}{2}\right) z^{v - \sqrt{u}/2}\, {}_3F_2\!\left(v - \tfrac{\sqrt{u}}{2} - d,\ v - \tfrac{\sqrt{u}}{2} - a - b,\ 1 + v - \tfrac{\sqrt{u}}{2};\ v - \tfrac{\sqrt{u}}{2},\ 1 + \sqrt{u};\ \frac{s_2}{1 - s_2} e^{-\mu t}\right);

W_2(t; s_2) = \left(v + \tfrac{\sqrt{u}}{2}\right) z^{v + \sqrt{u}/2}\, {}_3F_2\!\left(v + \tfrac{\sqrt{u}}{2} - d,\ v + \tfrac{\sqrt{u}}{2} - a - b,\ 1 + v + \tfrac{\sqrt{u}}{2};\ v + \tfrac{\sqrt{u}}{2},\ 1 - \sqrt{u};\ \frac{s_2}{1 - s_2} e^{-\mu t}\right);

H_1(t; s_2) = d\, z^{v - \sqrt{u}/2}\, {}_2F_1\!\left(v - \tfrac{\sqrt{u}}{2} - d,\ v - \tfrac{\sqrt{u}}{2} - a - b;\ 1 + \sqrt{u};\ \frac{s_2}{1 - s_2} e^{-\mu t}\right);

H_2(t; s_2) = d\, z^{v + \sqrt{u}/2}\, {}_2F_1\!\left(v + \tfrac{\sqrt{u}}{2} - d,\ v + \tfrac{\sqrt{u}}{2} - a - b;\ 1 - \sqrt{u};\ \frac{s_2}{1 - s_2} e^{-\mu t}\right),

where z = e^{-\mu t}. The constant C is determined from the initial condition F_1(0; s_1, s_2) = s_1:

C = \frac{W_2(0; s_2) - s_1 H_2(0; s_2)}{s_1 H_1(0; s_2) - W_1(0; s_2)}.
The exact solution for this form of transformation cannot be expressed in elementary functions. The above system is simulated using single iterations with various rates:

A → 0     ⇒ A self-destructs, i_2
A → B     ⇒ A produces B, i_1
A → A + B ⇒ A relative growth r_2
A → A + B ⇒ A produces B, i_1
A → 2A    ⇒ A relative growth r_2
B → 2B    ⇒ B relative growth r_1

r_1 = 0.1, r_2 = 0.1, i_1 = 0.2, i_2 = 0.2;
r_1 = 0.1, r_2 = 0.2, i_1 = 0.2, i_2 = 0.2;
r_1 = 0.2, r_2 = 0.2, i_1 = 0.2, i_2 = 0.2;
r_1 = 0.1, r_2 = 0.3, i_1 = 0.2, i_2 = 0.1;
r_1 = 0.1, r_2 = 0.3, i_1 = 0.2, i_2 = 0.2;
r_1 = 0.1, r_2 = 0.2, i_1 = 0.2, i_2 = 0.1;
[Plots: simulated trajectories of populations A and B for the parameter settings listed above.]
7.3 Transformation A \to B, 2A + B; B \to 2B
Transformations of the particles in the branching process occur by the following schema:

A \to B,\ 2A + B;
B \to 2B.

The intensities of the transformations are \lambda and \mu, \lambda > 0, \mu > 0. Kolmogorov's backward equation expressed in generating functions is

\frac{\partial G_\beta(t; z_1, z_2)}{\partial t} = \lambda z_1 \left( p_1 \frac{\partial}{\partial z_2} + p_3 \frac{\partial^3}{\partial z_1^2 \partial z_2} - \frac{\partial}{\partial z_1} \right) G_\beta(t; z_1, z_2) + \mu z_2 \left( \frac{\partial^2}{\partial z_2^2} - \frac{\partial}{\partial z_2} \right) G_\beta(t; z_1, z_2),

with initial condition G_\beta(0; z_1, z_2) = z_1^{\beta_1} z_2^{\beta_2}/(\beta_1! \beta_2!). Here p_1 \geq 0, p_3 \geq 0, p_1 + p_3 = 1. Kolmogorov's forward equation expressed through generating functions is

\frac{\partial F_\alpha(t; s_1, s_2)}{\partial t} = \lambda \left( p_1 s_2 + p_3 s_1^2 s_2 - s_1 \right) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_1} + \mu (s_2^2 - s_2) \frac{\partial F_\alpha(t; s_1, s_2)}{\partial s_2}