THE INVERSE PROBABILITY METHOD: AN INTERVAL ALGORITHM
FOR MEASURING UNCERTAINTY
by
Guerin Fritzlen Olsen
BSFS, Georgetown University, 1980
A thesis submitted to the
University of Colorado at Denver and Health Sciences Center
in partial fulfillment
of the requirements for the degree of
Master of Integrated Science
2005
This thesis for the Master of Integrated Science
degree by
Guerin Fritzlen Olsen
has been approved
by
Weldon Lodwick
Steve Sain
Date
Olsen, Guerin Fritzlen (M.I.S.)
The Inverse Probability Method: An Interval Algorithm for Measuring Uncertainty
Thesis directed by Professor Weldon Lodwick
ABSTRACT
An algorithm is proposed for obtaining upper and lower bounds on a cumulative distribution function (cdf). This method uses partitions on the domain
and interval analysis to guarantee enclosure of a cdf. It offers an alternative to
Monte Carlo simulations and other partition methods that measure risk. The
objective is an algorithm that computes the upper and lower bounds on a cdf using the fewest possible computations.
This abstract accurately represents the content of the candidate's thesis. I recommend its publication.
Signed
Weldon Lodwick
DEDICATION
This paper is dedicated to my family who have been particularly encouraging
of my math pursuits.
ACKNOWLEDGMENT
This thesis would not have been possible without the generous support of
Weldon Lodwick. He and Dave Jamison developed the original model. The
method described herein is an extension of their work.
CONTENTS
Figures ix
1. Introduction......................................................... 1
2. Overview of Partitioning, Interval, and Monte Carlo Methods..... 4
2.1 Partition Methods 4
2.2 Interval Arithmetic.................................................. 4
2.3 Monte Carlo Methods 6
3. Other Interval Methods Related to IPM 8
3.1 R.E. Moore's Method.................................................. 8
3.2 Daniel Berleant's Method 9
3.3 Fulvio Tonon's Method............................................... 10
3.4 Weldon Lodwick and K. D. Jamison's Method........................... 10
4. The Inverse Probability Method...................................... 12
4.1 Overview of IPM..................................................... 12
4.2 The IPM Algorithm .................................................. 15
4.2.1 Step 1. Partition X into m Hypercubes of Equal Probability..... 15
4.2.2 Step 2. Compute the Minimum and Maximum of the Partitions . 18
4.2.3 Step 3. Construct Upper and Lower Probability Bounds.......... 18
4.3 Examples of Algorithm.............................................. 19
4.3.1 Example of X1 + X2 = Y, for X1, X2 ~ U(0,1) and independent . 19
4.3.2 Example of the Sum of Two Normals............................. 25
5. Conclusions....................................................... 28
Appendix
A. MATLAB Code for a Monte Carlo Simulation................... 31
B. Example of Moore's Method in MATLAB Code................... 32
C. MATLAB Code for Example 1................................. 36
D. MATLAB Code for Example 2................................. 47
E. MATLAB Code for Example 3................................. 51
F. MATLAB Code for Example 4................................. 55
G. MATLAB Code for Example 5................................. 60
References.................................................... 65
FIGURES
Figure
2.1 Surface Area of X1 + X2 = Y 5
2.2 Monte Carlo Simulation of X1 + X2 = Y................................. 7
4.1 Vertices............................................................. 20
4.2 Function Values...................................................... 21
4.3 Vertices of the Sum of Two Normals................................... 26
4.4 Function Values for the Sum of Two Normals........................... 27
B.1 The Cdf for X1 + X2 on 60 Intervals................................. 35
C.1 The Cdf for Y = X1 + X2 on Increasing Intervals..................... 46
D.1 The Cdf for X1, ..., X100 for Increasing Intervals. 50
E.1 The Cdf of X1 · X2 for Increasing Subintervals...................... 54
F.1 The Surface Area of the Joint Cdf for Two Normals................... 56
F.2 The Cdf for X1 + X2, X1, X2 ~ N(0,1), for Increasing Subintervals... 59
G.1 The Cdf for sin(20X1) + sin(X2) for Increasing Subintervals......... 64
1. Introduction
This paper examines several different partition methods that measure uncertainty. In addition, a new approach is offered that requires fewer calculations.
The methods presented here are those of R.E. Moore, Fulvio Tonon, Daniel
Berleant, Weldon Lodwick and K.D. Jamison, and that of the author. These
methods offer either an estimate of, or an enclosure on, the joint cumulative distribution of a multivariate function.
Partition methods, which will be discussed in detail in Chapter 2, are an
extension of interval analysis. Interval analysis includes any form of math that
defines its solutions within certain bounds, by enclosing real numbers in an
interval and real vectors in boxes [6]. It has applications in error analysis,
computer programming and global optimization. Using interval arithmetic, the
input is a pair of numbers that represents the upper and lower values of the interval. The output, as a result, is guaranteed to be within a certain bound. Interval
methods were introduced by Rosalind Cicely Young in 1931 [5]. However, R.E. Moore became the most influential voice in the field [11, 10, 9]. Moore was prolific, creating volumes of work on interval analysis, including work in calculating risk. Interval methods seem well-suited for measuring uncertainty because
of their enclosure property. It is important, of course, that the bounds of the
enclosure be tight enough to yield a usable solution.
The various methodologies have many similarities. First, they all partition the domain in the same way, a method based on Moore's early work. The
procedure is as follows:
1. The entire area is partitioned into smaller volumes.
2. The boundaries are found.
3. The probability mass captured is measured.
4. The partitions are added back together, much like a Riemann sum.
Second, each method employs interval arithmetic to construct the partitions.
Last, most of the authors verify their results against, or compare them to, those
obtained by Monte Carlo simulations.
Monte Carlo methods are customarily used to measure a joint multivariate
distribution. They offer an approximation to the cdf by calculating a large number of
random samples for the inputs, and then counting the frequency of the outputs.
Many of the previously mentioned authors argue that there are situations when
partition methods are preferable to Monte Carlo methods. A few examples of
these situations are when Monte Carlo simulations are impractical, enclosure
of the cdf is required, or when there is a lack of information about either the
distribution or its dependencies.
There are also many differences among the authors, both in methodology
and objectives. Moore's method, the first to be developed, differs from all of the others in that it yields only an estimate of the cdf. His objective is to show
that interval methods require fewer calculations than a Monte Carlo simulation.
Tonon's and Berleant's partitioning is quite similar to Moore's, but each has a different objective. Tonon's is to bound a cdf, and Berleant's is to get a bound on a cdf where the dependencies of the random variables are unknown.
Initially, the Lodwick and Jamison method partitions in much the same way as
the others. However, to obtain tighter bounds, it takes internal measurements
inside the partitions, while the others do not.
The method of the author is similar to the others in many ways; however, it differs in one important aspect. Rather than determining probability from a measured interval, an interval is determined from the measurement of probability. Because of this fundamental difference, the method will be referred to as
the Inverse Probability Method or IPM for short. IPM requires potentially far
fewer calculations than the other methods, because it partitions the domain into
volumes of equal probability. Therefore, after the boundary points are found, the probability mass captured needs to be calculated only once. Since each partition contains the same amount of probability, only the number of partitions is counted. This streamlined technique should prove effective in cutting down the number of calculations needed to compute bounds on a cdf.
2. Overview of Partitioning, Interval, and Monte Carlo Methods
Partitioning, interval, and Monte Carlo methods are integral to all the methods considered here. Each method begins by dividing the distribution of the
random variables in the domain into intervals. These intervals create partitions
on the domain of the multivariate function. See the base of Figure 2.1. The
function is then evaluated using interval arithmetic. See the surface area of
Figure 2.1. Once a cdf of the multivariate function is constructed, Monte Carlo
simulations can be used to verify the results. See Figure 2.2.
2.1 Partition Methods
All methods partition the domain in the same manner. The domain consists
of n random variables, where each variable is defined by its cdf. The variables
are divided into intervals. Partitions are created by matching up the endpoints
of the intervals. The function is evaluated at these points, except in one model where internal points in the partitions are also computed. The amount
of probability in each partition is calculated. For some given value of y that is
in the range of the function, the probability is summed to derive the cdf of y.
2.2 Interval Arithmetic
Each of the methods described uses interval arithmetic to calculate the function values of the intervals and partitions. An abbreviated explanation is given
next.
Let I be an interval, such that I can be defined by its upper and lower bounds. Let x̲ denote the lower bound and x̄ the upper bound. In turn, I is
Figure 2.1: Surface Area of X1 + X2 = Y
denoted as [x̲, x̄]. For many functions evaluated on intervals, it is easy to obtain the minimum and maximum of the partition by using interval arithmetic. For example, if I1 = [x̲, x̄] and I2 = [y̲, ȳ], the following rules apply [12].

Addition:
I1 + I2 = [x̲ + y̲, x̄ + ȳ].

Subtraction:
I1 − I2 = [x̲ − ȳ, x̄ − y̲].

Multiplication, if x̲, y̲ > 0:
I1 · I2 = [x̲ · y̲, x̄ · ȳ].
The above operations are for the simplest algebraic functions. However, there are many analytical techniques which can guarantee global minima and maxima over an interval, even for the most complex functions.
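The rules above can be sketched in a few lines of code. The thesis's programs are in MATLAB; the small Python class below (the name Interval is ours) shows the same arithmetic:

```python
# A sketch of the interval rules above in Python (the thesis's appendices
# use MATLAB); the class name Interval is our own.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [x_lo + y_lo, x_hi + y_hi]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [x_lo - y_hi, x_hi - y_lo]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # valid only when both intervals are nonnegative
        return Interval(self.lo * other.lo, self.hi * other.hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

print(Interval(0, 0.5) + Interval(0.5, 1))   # [0.5, 1.5]
```

Any real number inside the two input intervals produces a sum inside the output interval, which is the enclosure property the methods below rely on.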
2.3 Monte Carlo Methods
A Monte Carlo algorithm is the conventional method for measuring the cdf of a multivariate function f. An approximation to the cdf is calculated by evaluating the function using a large sample of values from the distributions of the random variables. In the example X1 + X2 = Y, a Monte Carlo simulation on
10,000 trials would take two random variables from their given distributions
and add them together 10,000 times. The frequency with which the result falls
within an interval is divided by the number of samples, yielding a probability.
The cdf is calculated by taking the cumulative sum up to a given interval. Figure 2.2 shows the cdf of the function X1 + X2 = Y, created by a Monte Carlo simulation using 10,000 samples.
Pseudocode for a Monte Carlo simulation for X1 + X2 = Y
Input ms = number of samples (here 10,000).
Sum two random numbers, ms times.
Create histogram intervals that span the range of f ([0, 2]).
Use the histogram to count frequencies.
Find the cumulative sum of the histogram.
Divide by the number of samples. This equals the probability.
Plot y vs. probability.
For MATLAB code of this example, see Appendix A.
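A rough Python equivalent of the pseudocode above may also be useful for comparison (Appendix A holds the MATLAB original; the sample and bin counts here are illustrative choices):

```python
# A rough Python equivalent of the Monte Carlo pseudocode above
# (Appendix A holds the MATLAB original); counts are illustrative.
import random

def monte_carlo_cdf(n_samples=10_000, n_bins=40):
    """Approximate the cdf of Y = X1 + X2 for X1, X2 ~ U(0,1)."""
    samples = [random.random() + random.random() for _ in range(n_samples)]
    width = 2.0 / n_bins                  # the range of Y is [0, 2]
    counts = [0] * n_bins
    for y in samples:
        counts[min(int(y / width), n_bins - 1)] += 1
    cdf, running = [], 0
    for c in counts:                      # cumulative sum, then normalize
        running += c
        cdf.append(running / n_samples)
    return cdf

cdf = monte_carlo_cdf()
print(cdf[19])   # P(Y <= 1), which should be near the true value 0.5
```

The triangular distribution of X1 + X2 has cdf 0.5 at y = 1, so the printed value is a quick sanity check on the simulation.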
Figure 2.2: Monte Carlo Simulation of X1 + X2 = Y
3. Other Interval Methods Related to IPM
The methods of Moore, Berleant, Tonon, and Lodwick and Jamison give an overview of some of the most notable work on using partitions to measure uncertainty. Each method is described briefly in order to allow comparisons between the methods.
3.1 R.E. Moore's Method
R.E. Moore was one of the first to explore the use of interval methods to measure risk. In "Risk Analysis without Monte Carlo Methods" [11], he demonstrates an algorithm for obtaining an estimate of a cdf using interval calculations. This method is the foundation for all of the other partition methods presented here.
f is a function of the random variables. Each random variable is divided into intervals. The intervals contain a certain known amount of probability. For example, take two random variables, X1 and X2, and let each be uniformly distributed on [0,1]. If both variables are divided into two intervals, then the resulting intervals are [0, 0.5] and [0.5, 1]. The function is evaluated four times, at ([0,0.5],[0,0.5]), ([0,0.5],[0.5,1]), ([0.5,1],[0,0.5]), and ([0.5,1],[0.5,1]). The resulting values are also intervals, which contain the range of the function. If independence is assumed, the probability of the resulting interval is the product of its marginals. See Theorem 2.7 [13]. A distribution function is constructed by taking the intervals of the range versus the probability they contain.
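The four-evaluation example above can be sketched as follows; this is a Python illustration rather than the MATLAB of Appendix B, and the variable names are our own:

```python
# Moore's two-interval example for Y = X1 + X2, X1, X2 ~ U(0,1), sketched
# in Python (Appendix B holds the MATLAB version); names are our own.
from itertools import product

intervals = [(0.0, 0.5), (0.5, 1.0)]   # each variable split in half
prob = 0.5 * 0.5                       # product of the marginals (independence)

cells = []                             # (output interval, probability) pairs
for (a_lo, a_hi), (b_lo, b_hi) in product(intervals, intervals):
    # interval addition: [a_lo + b_lo, a_hi + b_hi]
    cells.append(((a_lo + b_lo, a_hi + b_hi), prob))

for iv, p in sorted(cells):
    print(iv, p)
```

Each printed pair is a range interval and the probability mass it carries; stacking these pairs gives Moore's estimate of the distribution function.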
Moore developed his method as an alternative to Monte Carlo simulations. He writes, "...the point to be made here is that, with a very small number of computations, the interval method gives results which would require much more computation using Monte Carlo methods." See [11], p. 17. See MATLAB code for Moore's method in Appendix B.
3.2 Daniel Berleant's Method
An interval-based approach, very similar to Moore's, is presented in Daniel Berleant and Chaim Goodman-Strauss's paper, "Bounding the Results of Arithmetic Operations on Random Variables of Unknown Dependency Using Intervals" [1]. In this method the variables are not assumed independent. Berleant's objective is to obtain an enclosure on the cdf where all possible dependency relationships are represented within the bounds. This is important in situations where there is no information about the correlation between the variables. His method encloses the cdf of random variables that are totally positively and negatively correlated. To obtain these bounds, he uses linear programming to minimize and maximize the probability in each partition, while keeping the marginals the same. See examples in [1].
Berleant's model assumes nothing about the shape of the distribution within an interval, only the total amount of probability the interval holds. The majority of the distribution could be clustered to one side of the interval. In this situation, further subdivision of the interval is impossible. This is also an important consideration in Tonon's method.
3.3 Fulvio Tonon's Method
In his paper, "On the Use of Random Set Theory to Bracket the Results of Monte Carlo Simulations" [16], Tonon follows Moore's method quite closely. However, Tonon finds an enclosure of, rather than an estimate of, the cdf. He does this by finding the minimum and maximum value of the function in each partition. He demonstrates his method using independent random variables, so the probability of a partition is the product of the marginals. To obtain upper and lower bounds, he uses the following formulas.
F_lower(y) = Σ_(Ai : y > sup f(Ai)) P(Ai).

F_upper(y) = Σ_(Ai : y > inf f(Ai)) P(Ai).
Here, y is a value in the range of the function. The formulas say: if y is greater than the maximum value of a partition, then the probability of the partition is added to the lower bound of the cdf at y; if y is greater than the minimum value of the partition, the probability of the partition is added to the upper bound. The actual probability of y inside the partition is unknown, because the distribution inside the partition is unknown. However, an upper and lower bound on the cdf of y can be guaranteed.
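The two bounding sums translate directly into code. The sketch below is in Python rather than MATLAB, and the cells list of (inf, sup, probability) triples is illustrative data taken from the X1 + X2 example:

```python
# The two bounding sums above, sketched in Python. Each partition Ai is
# an (inf_f, sup_f, probability) triple; the sample `cells` data is the
# X1 + X2 example with four partitions of probability 1/4 each.
def cdf_bounds(y, cells):
    lower = sum(p for _, sup_f, p in cells if y > sup_f)
    upper = sum(p for inf_f, _, p in cells if y > inf_f)
    return lower, upper

cells = [(0.0, 1.0, 0.25), (0.5, 1.5, 0.25), (0.5, 1.5, 0.25), (1.0, 2.0, 0.25)]
print(cdf_bounds(1.5, cells))   # (0.25, 1.0)
```

The lower bound counts only partitions whose entire range lies below y; the upper bound counts every partition that could contribute any mass below y.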
3.4 Weldon Lodwick and K. D. Jamison's Method
The Lodwick and Jamison method is described in "Estimating and Validating the Cumulative Distribution of Random Variables: Toward the Development of Distribution Arithmetic" [8]. The authors initially partition the domain in
the same manner as the others. However, this method goes further by dividing each partition into areas of equal probability. These measurements of the probability are referred to as β's. The following formula is then used to obtain an upper and lower bound on the probability of some value y being in the partition:
β^n ≤ F_(Y|A)(g(F_(X1|A)^(-1)(β), ..., F_(Xn|A)^(-1)(β))) ≤ 1 − (1 − β)^n,

where

β ∈ [0, 1].
By taking a value of β in a partition, the inverse can be derived. Using the above formula, the probability of this inverse value y being in the partition is enclosed by β^n and 1 − (1 − β)^n. These values are further multiplied by the probability of being in the partition. To obtain bounds on the cdf of y, the probability of y in each partition is calculated and then summed.
Comparisons of the above methods to IPM are given in the conclusion.
4. The Inverse Probability Method
IPM partitions the domain in the same manner as the other methods. However, for this algorithm to be computationally efficient, the partitions must contain equal amounts of probability. In situations where the inverses are derivable, this should be a straightforward process. In this section, an overview of the mathematics of the algorithm is given first. Then, the mathematics is described in more detail. Examples follow at the end.
4.1 Overview of IPM
The objective is to find the cdf of a function f,

Y = f(x1, ..., xn),

where the xi's are independent random variables. The domain consists of a set of continuous random variables,

X = {X1, ..., Xn},

for n equal to the number of variables. Each Xi is divided into m intervals of equal probability,

Xi = Xi^1 ∪ ... ∪ Xi^m.

An interval on Xi is [xi^(j-1), xi^j], where j = 1, ..., m. These intervals, in each random variable, construct partitions on the domain. The entire domain is referred to as A,

A = {A1, ..., A_(m^n)},
for m^n partitions. Let D be the domain,

D = A1 ∪ ... ∪ A_(m^n).

Each Ai is an n-dimensional hypercube. For convergence, we let d(Ai) equal the diameter of Ai, such that

lim_(m→∞) d(Ai) = 0.
Each Ai is defined by the boundary points of its partitioned domain, which are referred to as its vertices. The vertices of Ai are derived by obtaining every possible combination of the endpoints of the intervals in each Xi. Each vertex contains one interval endpoint from each Xi, and each Ai has 2^n vertices.
Let f be a mapping of the domain onto the real numbers,

f : D → R.

The function is evaluated at each of the vertices of Ai; these values are called the corner points of f over Ai. Let Bi be the set of the corner points of f over Ai, and let

B = {B1, ..., B_(m^n)}.
Using interval arithmetic, if f is a monotonically increasing function, the minimum and maximum of f on Ai can be determined: they will be at one of the corner points in Bi. For more complicated functions, exploitation of structure and other analytical interval techniques can be used.
After finding the minimum and maximum of each partition, the upper and lower bounds on the cdf can be constructed. A vector y = [y1, ..., yK] is created, with K the number of discrete values of the function f at which the cdf is evaluated.
To create an upper bound, if yk is greater than the minimum value of the partition, the entire probability of Ai is added to the cdf at yk. For a lower bound, if yk is greater than or equal to the maximum corner point of the partition, the probability of Ai is added to the cdf at yk.

F_upper(yk) = Σ_(yk > inf f(Ai)) P(Ai), k = 1, ..., K.

F_lower(yk) = Σ_(yk ≥ sup f(Ai)) P(Ai), k = 1, ..., K.
In conclusion, there are three broad steps for constructing upper and lower
bounds. They are as follows:
1. Partition X into m hypercubes of equal probability.
2. Compute the minimum and maximum of the partitions.
3. Construct upper and lower probability bounds on the cdf of the distribution.
Computer programs demonstrating the algorithm are written in MATLAB and are included in the appendices. They include examples of the algorithm demonstrated on several different functions. The sum of two variables with a uniform distribution on [0,1] is the easiest function on which to demonstrate the method. Also included are examples of a multidimensional sum of random variables with a uniform distribution on [0,1], the product of two random variables, the sum of two normal variables, and one not-so-well-behaved function in two variables.
For most functions, it is simple to find the minimum and the maximum of a partition using interval arithmetic. For a not-so-well-behaved function, other analytical techniques can guarantee bounds.
4.2 The IPM Algorithm
Let X = {X1, ..., Xn} be a set of continuous random variables, and let Xi be one such variable. Since Xi is a continuous random variable, it has an associated function f(xi), called the probability density function (pdf), such that

f(xi) ≥ 0, ∀ xi.

See Definition 4.1 in [13].
4.2.1 Step 1. Partition X into m Hypercubes of Equal Probability.
Let [a, b] be an interval of Xi such that

∫ from a to b of f(xi) dxi = P(a ≤ Xi ≤ b).

Divide the cdf of Xi into intervals, where m is equal to the number of intervals. Let

xi^0 = a

and

xi^m = b.

Further, let the jth interval equal [xi^(j-1), xi^j]. Since Xi has a probability density function that describes the shape of its distribution, it also has a corresponding cumulative distribution function,

cdf(Xi) = F_Xi.

The cumulative distribution function has an inverse function such that

F_Xi^(-1)(P(Xi ≤ xi^j)) = xi^j.
The next step is to determine how much of the probability area of each Xi needs to be measured. Of course, 100 percent would be optimal. However, many distributions have long tails, where tiny amounts of probability lie in intervals far from the mean, and it becomes almost impossible to calculate the minute amounts of probability that they hold. To cut off the tails at points where the probability becomes negligible, redefine the function to have positive values on the interval [a, b] and no probability outside it, so that

P(a ≤ Xi ≤ b) = .998.
However, in some cases there are finite supports, such as when the random variables are uniform on [0,1]. Let

F^(-1)(.001) = a

and

F^(-1)(.999) = b.
Determine the amount of probability to capture in each interval of each Xi. Note, it must be the same amount of probability per interval. Smaller intervals will require more computations, but that will result in tighter bounds. Based on the amount of probability per interval, create a vector v such that

v = [v1, ..., v_(m+1)],

where m + 1 equals the number of components of v, and

vj = (j − 1)/m, j = 1, ..., m + 1.

If each interval holds 20 percent of the probability,

v = [0, 0.2, 0.4, 0.6, 0.8, 1].
Since the amount of probability in each interval will most likely be the same for each variable, v can be used for every Xi. Next, take the inverse cumulative distribution function of the vj for each Xi, and let the result equal ui, so that

F_Xi^(-1)(vj) = ui^j.

The components of ui are the ui^j, for i ∈ {1, ..., n} and j ∈ {1, ..., m + 1}, where n is the number of variables and m + 1 is the number of components of ui. To find the vertices of the domain, take all of the vectors ui and find every possible combination of their components. Let S be the set of these combinations. Each element of S is a vertex. The vertices enclose the partitions of the domain,

A = {A1, ..., A_(m^n)}.
Since partitions share vertices, evaluate the function at the vertices once and later align the values with the partitions they enclose.
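Step 1 can be sketched as follows; this is a Python illustration (the thesis's programs are MATLAB), using the standard library's NormalDist for the inverse cdf and cutting the tails at F^(-1)(.001) and F^(-1)(.999) as described above. The function name is our own:

```python
# Step 1 sketched in Python (the thesis uses MATLAB): endpoints of m
# equal-probability intervals via the inverse cdf, with the tails cut at
# F^-1(.001) and F^-1(.999) as in the text. NormalDist is from the
# Python standard library; the function name is our own.
from statistics import NormalDist

def equal_prob_endpoints(inv_cdf, m, tail=0.001):
    """Endpoints u_j = F^-1(v_j) for v_j = (j-1)/m, with the two extreme
    quantiles replaced by the .001 and .999 tail cuts."""
    v = [j / m for j in range(m + 1)]
    v[0], v[-1] = tail, 1.0 - tail
    return [inv_cdf(p) for p in v]

u = equal_prob_endpoints(NormalDist().inv_cdf, 4)
print([round(x, 3) for x in u])   # [-3.09, -0.674, 0.0, 0.674, 3.09]
```

For a U(0,1) variable the inverse cdf is the identity, so the endpoints are simply the vj themselves, as in the first worked example below.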
4.2.2 Step 2. Compute the Minimum and Maximum of the
Partitions
Evaluate the function at the vertices. These values are referred to as the corner points. At this point, interval analysis needs to be employed to guarantee that the minimum and maximum function values of the partitions are known. In many cases, we will not need to calculate f(S) at every vertex of every Ai; using interval arithmetic, the minimum and maximum values can be found directly. If this does not apply, all the vertices need to be evaluated under f.
4.2.3 Step 3. Construct Upper and Lower Probability Bounds
Find the probability of one Ai. Only one is necessary, since each Ai contains the same amount. In most cases, it is the probability per interval raised to the number of variables; this assumes independence:

P(Ai) = (vj − v_(j−1))^n.
Create a vector y that spans from the minimum to the maximum of f(S). The cdf is evaluated at the components of y. For an upper bound, if yk is greater than the minimum corner point of Bi, the entire probability of Ai is added to the joint cdf. For the lower bound, if yk is greater than, or equal to, the maximum corner point of Bi, the entire probability of Ai is added to the joint cdf.
F_upper(yk) = Σ_(yk > inf f(Ai)) P(Ai), k = 1, ..., K.

F_lower(yk) = Σ_(yk ≥ sup f(Ai)) P(Ai), k = 1, ..., K.
By calculating both upper and lower bounds, an enclosure on the cumulative
distribution function of a multivariate joint distribution is obtained.
4.3 Examples of Algorithm
4.3.1 Example of X1 + X2 = Y, for X1, X2 ~ U(0,1) and independent
This is one of the easiest functions on which to demonstrate the method. However, in some ways it is so simple that it can lead to some confusion. The primary source of confusion is that x = F_X(x); in other words, the values of x equal their cdf. The domain is the joint distribution of the cdfs of X1 and X2. See Figure 2.1. The domain of the function is the square [0,1] × [0,1]. The range of the function is the interval [0, 2]. Since X1 and X2 are uniform on [0,1], let [a, b] contain 100 percent of the probability, where a and b are the endpoints of the distribution. These distributions have finite supports. Next, find F^(-1) at 0 and 1.
F^(-1)(0) = 0 = a

and

F^(-1)(1) = 1 = b.
Step 1
Determine the amount of probability in each interval. For simplicity's sake, in this example, let each interval contain 50 percent of the total probability. The probability intervals of X1 are equal to their inverses:

[x1^0, x1^1] = [0, .5]

and

[x1^1, x1^2] = [.5, 1].
Since X2 has the same distribution as X1, X2's intervals are the same:

[x2^0, x2^1] = [0, .5]

and

[x2^1, x2^2] = [.5, 1].

These intervals are used to form the partitions.
Figure 4.1: Vertices.
Since each variable is divided in half and there are two variables, the probability of each partition Ai is 1/4. This creates four partitions. Make a vector v using the boundary points of the intervals of Xi:

v = [0, 0.5, 1].
Find F_X1^(-1)(v) and F_X2^(-1)(v) to obtain u1 and u2. Since both X1 and X2 have the same cdf,

u1 = [0, 0.5, 1] = u2.

To find the vertices, take all the possible combinations of the entries of u1 and u2:

S = {(0,0), (0,0.5), (0,1), (0.5,0), (0.5,0.5), (0.5,1), (1,0), (1,0.5), (1,1)}.
If order does not matter, as in this function, (0, 0.5) and (0.5, 0) need to be calculated only once. This will reduce the number of required computations.
Step 2
Figure 4.2: Function Values.
Evaluate f(S) at the vertices. For the function X1 + X2 = Y, this is done at the following points:

f(0, 0) = 0.
f(0, 0.5) = 0.5.
f(0, 1) = 1.
f(0.5, 0.5) = 1.
f(0.5, 1) = 1.5.
f(1, 1) = 2.
Align the corner-point values with their corresponding box, Bi:

B1 = {0, 0.5, 0.5, 1}.
B2 = {0.5, 1, 1, 1.5}.
B3 = {0.5, 1, 1, 1.5}.
B4 = {1, 1.5, 1.5, 2}.
Find the minimum and maximum of the corner points of each Bi:

min/max{B1} = {0, 1}.
min/max{B2} = {0.5, 1.5}.
min/max{B3} = {0.5, 1.5}.
min/max{B4} = {1, 2}.
Step 3
To calculate upper and lower bounds on the joint cdf of f(S), create a vector y = [y1, ..., yK], for K equal to the number of components of y. In this example,

y = [0, 0.5, 1, 1.5, 2].
Construct the upper bound. If yk is greater than min(Bi), the probability of Ai is added to the cdf at yk. Begin with y1 = 0. This value of y is not greater than the minimum of any of the Bi's. Therefore,

F_upper(0) = 0.

At y2 = 0.5, y2 is greater than the minimum of B1 but of no other Bi. So

F_upper(0.5) = 1 × 0.25 = 0.25,

where 0.25 is the probability of one Ai. Continue the process for each value of yk. For

y = [0, 0.5, 1, 1.5, 2],
the upper bound is

F_upper(y) = [0, 0.25, 0.75, 1, 1].
Next, construct the lower bound. If yk ≥ max{Bi}, then the probability of the corresponding Ai is added to the cdf at yk. For y1 = 0, this is not greater than or equal to the maximum of any box. Therefore, F_lower(0) = 0. At y2 = 0.5, y2 is still not greater than or equal to the maximum of any partition. At y3 = 1, the probability of A1 is added to the cdf, because max{B1} = 1:

F_lower(1) = 1 × 0.25 = 0.25.
Continue the process for each value of yk. For

y = [0, 0.5, 1, 1.5, 2],

the lower bound is

F_lower(y) = [0, 0, 0.25, 0.75, 1].
The results bound the actual cdf. For y equal to 1, 0.75 is the upper bound and 0.25 is the lower bound. The actual cdf value is 0.5, so the bounds enclose the actual value. As the number of partitions increases, the bounds converge to the actual cdf. In this case, a crude check was used to validate the solution. However, for a more rigorous check, a Monte Carlo simulation written in MATLAB is used to see whether the Monte Carlo approximation is enclosed within the IPM bounds.
Pseudocode for IPM for X1 + X2
Input, let d = number of intervals.
The subinterval vector has d + 1 components.
Let p be a vector on [0, 1] in increments of 1 divided by the number of intervals.
Find the inverses of p for each random variable.
Make a matrix from the inverses of X1: replicate the vector horizontally to create a square matrix, M1.
Make a matrix from the inverses of X2: replicate the transpose of the vector vertically to create a square matrix, M2.
Add M1 and M2 to obtain the vertex values; store the sum in both M1 and M2.
Remove the last row and column of M1 (the minimum corners).
Remove the first row and column of M2 (the maximum corners).
Find min M1, the minimum of the range.
Find max M2, the maximum of the range.
Let y be a vector that spans from the minimum to the maximum of the range.
Create a histogram that bins M1 into the intervals of y.
Create a histogram that bins M2 into the intervals of y.
Raise (1 divided by the number of intervals) to the number of variables to get the probability in each partition.
Multiply the probability by each histogram.
Calculate the cumulative sum for both. Plot y versus both vectors.
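The steps above can be rendered in Python as a sketch (Appendix C is the MATLAB original); for U(0,1) variables the inverse cdf is the identity, and explicit loops stand in for the matrix manipulations:

```python
# A Python rendering of the IPM pseudocode above (Appendix C is the
# MATLAB original). For U(0,1) variables the inverse cdf is the identity.
def ipm_bounds(d, ys):
    """Upper and lower cdf bounds for Y = X1 + X2, X1, X2 ~ U(0,1),
    with each variable split into d equal-probability intervals."""
    u = [j / d for j in range(d + 1)]      # inverse cdf of the breakpoints
    p = (1.0 / d) ** 2                     # probability of each partition
    upper, lower = [], []
    for y in ys:
        n_up = n_lo = 0
        for i in range(d):
            for j in range(d):
                lo = u[i] + u[j]           # minimum corner of partition (i, j)
                hi = u[i + 1] + u[j + 1]   # maximum corner
                if y > lo:
                    n_up += 1
                if y >= hi:
                    n_lo += 1
        upper.append(n_up * p)
        lower.append(n_lo * p)
    return upper, lower

up, lo = ipm_bounds(2, [0, 0.5, 1, 1.5, 2])
print(up)   # [0.0, 0.25, 0.75, 1.0, 1.0]
print(lo)   # [0.0, 0.0, 0.25, 0.75, 1.0]
```

These match the hand-computed bounds of the worked example, and increasing d tightens the enclosure around the true cdf.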
4.3.2 Example of the Sum of Two Normals
The IPM algorithm can handle other distributions as easily as it handles uniform ones, because it needs only the minimum and maximum value of each partition.
Step One
For this example, divide the domain into 16 partitions of equal probability and find the inverses. See Figure 4.3.
Figure 4.3: Vertices of the Sum of Two Normals
Step Two
Evaluate the function at the vertices and find the minimum and maximum.
See Figure 4.4.
Step Three
Find the upper and lower bounds for y = −2, using the formulas. For the upper bound, −2 is greater than the minimum of seven partitions; therefore, the upper bound is 7/16 = .4375. For the lower bound, −2 is not greater than or equal to the maximum of any partition, so the lower bound is 0.
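This count can be checked numerically. The sketch below is in Python rather than MATLAB; the breakpoint values are the standard-normal quartile boundaries read off Figure 4.3, with the tails cut at −3 and 3:

```python
# A numeric check of the seven-partition count above (a Python sketch).
# The breakpoints are the standard-normal quartile boundaries from
# Figure 4.3, with the tails cut at -3 and 3.
b = [-3.0, -0.6745, 0.0, 0.6745, 3.0]   # equal-probability breakpoints
p = 1.0 / 16                            # each of the 16 partitions holds 1/16

y = -2.0
upper = sum(p for i in range(4) for j in range(4) if y > b[i] + b[j])
lower = sum(p for i in range(4) for j in range(4) if y >= b[i + 1] + b[j + 1])
print(upper, lower)   # 0.4375 0
```

Exactly seven partitions have a minimum corner b[i] + b[j] below −2, and none has a maximum corner at or below −2, confirming the bounds 7/16 and 0.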
The appendices include five MATLAB programs.
Figure 4.4: Function Values for the Sum of Two Normals
X1 + X2 = Y, uniform on [0,1]: Appendix C.
X1 + ... + X100 = Y, uniform on [0,1]: Appendix D.
X1 · X2 = Y, uniform on [0,1]: Appendix E.
X1 + X2 = Y, X1, X2 ~ N(0,1): Appendix F.
sin(20X1) + sin(X2) = Y, uniform on [0,1]: Appendix G.
5. Conclusions
IPM, although quite similar to the methods presented in this paper, was developed independently. The issues of underlying distributions and dependencies were not considered in the original problem. The original problem was to create an algorithm to quickly compute the cdf of a function of independent random variables.
Tonon's and Berleant's methods each calculate both the values of the function
and the probability of the partitions. This requires 2^n calculations for the
values of the function multiplied by the number of partitions. In addition, it
requires the probability to be calculated for each partition. The IPM has the
same number of calculations for the values of the function but only one
probability calculation. In Tonon's example, he does 119 calculations for the
values of the function and 100 probability computations; 100 is the number of
partitions in the example. IPM also requires 119 calculations for the values of
the function but only one probability computation. That is a difference of 99
calculations. This one minor difference will prove significant as the number of
variables increases. If the Tonon example had another variable, the number of
partitions would be 10^3 = 1000. In this case, the difference in the number of
calculations between the two methods is 999.
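The count comparison above can be sketched in a few lines (assuming, as in the example, d subdivisions per variable and n variables, hence d^n equal-probability partitions; the function name is ours):

```python
# Sketch of the probability-computation comparison: a partition-by-partition
# method does one probability computation per partition, while IPM, using
# equal-probability partitions, does a single one.

def probability_computations_saved(d, n):
    partitions = d ** n          # d subdivisions per variable, n variables
    per_partition_method = partitions
    ipm = 1                      # one probability value covers all partitions
    return per_partition_method - ipm

saved_two_vars = probability_computations_saved(10, 2)    # Tonon's example
saved_three_vars = probability_computations_saved(10, 3)  # one more variable
```

With d = 10 this reproduces the 99 and 999 figures quoted in the text.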
Despite the computational effectiveness of IPM, Tonon's method may prove
advantageous when there is a lack of knowledge about the shape of the
distribution inside the intervals. This is because IPM needs to divide the
domain into areas of equal probability. An interval cannot be arbitrarily
partitioned. If all the probability is clustered to one side or the other but
the location of the clustering is unknown, no further refinement is possible.
The examples given in this paper are simple, in order to show the workings
of the algorithm in its most basic form. For real-life problems the algorithm
will need to handle multiple variables along with correlation. It is the
author's belief that these demands can be handled effectively using the
internal mechanism of the algorithm along with interval methods. At present,
the disadvantage of this method is that the MATLAB program is not generic and
has to be adjusted for each model. This is because there are interval
operations that can be incorporated for certain functions to minimize the
number of calculations. As demonstrated in the paper, interval rules can be
used to cut down the number of computations of the program. If minimum and
maximum values can easily be found at the corner points, then the program
should run rather quickly. There are catalogs of interval arithmetic equations
that can be used to analyze the functions. The use of these rules must be
incorporated into the algorithm to make the solution valid.
For most real-life probability problems, the most important question to be
answered is the point of failure or ruin. It may be easier using IPM to search
out the intervals that include these critical points rather than searching
every possible outcome. Certainly, narrowing the area of interest cuts down on
the number of computations.
Finally, many situations only need broad bounds on the likelihood of an
event. The most important concern is guaranteeing enclosure in a bound. In
these cases, with the exception of Moore's, all the algorithms presented here
would be more useful than the result of a Monte Carlo simulation.
Appendix A. MATLAB Code for a Monte Carlo Simulation
%Code for a Monte Carlo simulation for the function X1+X2=Y.
ms=10000; %Number of simulations.
r=rand(2,10000); %Take two random variables 10000 times.
t=sum(r); %Sum the two variables.
y=(0:.01:2); %Create intervals to calculate the frequency.
h=histc(t,y); %Histogram t in y.
c=cumsum(h)/ms; %Find the cumulative sum of the frequency and divide
%by the number of simulations.
yr=y+.01; %Adjust y from left side of interval to right.
plot(yr,c,'r')
xlabel('y')
ylabel('cdf')
title('Monte Carlo Simulation for X1+X2=Y')
Appendix B. Example of Moore's Method in MATLAB Code
Moore's example: X1 + X2 = Y, uniform on [0,1]
nv=2 %number of variables X,Y
%Assuming independence among variables
%[0,0.5]+[0,0.5]=[0,1] prob=0.5*0.5=0.25
%[0,0.5]+[0.5,1]=[0.5,1.5] prob=0.5*0.5=0.25
%[0.5,1]+[0,0.5]=[0.5,1.5] prob=0.5*0.5=0.25
%[0.5,1]+[0.5,1]=[1,2] prob=0.5*0.5=0.25
%So each interval has probability of 1/4
nipv=60 %number of intervals per variable
ivec=0:1/nipv:1 %vector that indicates the
%end points for the intervals
k=0 %initialize k
for i=1:nipv;
for j=1:nipv;
k=k+1;
C(k,:)=ivec(i:i+1)+ivec(j:j+1);
%Combination of each interval in each variable
end
end
C %Print the resulting intervals
%Find the probability of each interval assuming independence.
pipv=1/nipv %probability of the interval per variable
ip=pipv*pipv %probability of the summed interval
%Next the intervals need to be refined in order to get rid
%of overlaps.
%Divide each interval in two.
mp=sum(C')/2 %midpoints of the intervals
nC1=[C(:,1) mp'; mp' C(:,2)] %refined intervals
%Since each interval is divided in half the prob of
%each is 1/4*1/2
rip=ip*.5 %the probability of each refined interval
%Find a mid point for each interval.
mi=sum(nC1')/2 %bisect interval
y=pipv:pipv:2 %Create a y vector
n=histc(mi,y) %number of intersections between intervals
lenn=length(n) %length of n
nr=n(1:lenn-1) %adjust n
pv=nr*rip %probability vs y
cs=cumsum(pv) %cumulative sum
yr=y(2:length(y)) %adjust y to left hand values
plot(yr,cs) %plot
%Compare to a Monte Carlo simulation
ms=50;
t=sum(rand(2,ms));
md=histc(t,y);
mid=cumsum(md)/ms;
plot(y,mid,'r')
Figure B.1: The Cdf for X1 + X2 on 60 Intervals.
Appendix C. MATLAB Code for Example 1
The IPM MATLAB program for Y = X1 + X2 is given in detail below.
The other examples use the same coding with minor adjustments. The program
that calculates this example was simplified using interval arithmetic. Using the
additive rule of interval arithmetic, the minimum values of the boxes will occur
at the lower left corner point, and the maximum values are at the upper right
corner. Therefore, the vertices at these points will be the only ones evaluated.
Each program begins with clear all so that variables can be reused in other
programs.
clear all
By putting tic at the beginning of the algorithm and toc at the end, it can
be seen whether IPM runs faster than a Monte Carlo simulation.
tic
Let d be the number of intervals. For wide bounds, the number of intervals,
d, is very small. For narrower bounds, the number of intervals, d, is large,
say between 20 and 100.
d = 2.
If d is 2, then the interval vector has 3 entries.
dd = d + 1.
36
The number of variables for this example is 2.
nv = 2.
This is the probability vector that contains the boundaries of the intervals.
input
pv = 0 : (1 /d) : 1.
output
pv =
0 0.5000 1.0000
Because, for the uniform distribution on [0,1], the inverse cdf of a probability is that probability itself, the pv vector is used directly.
In order to find all the pairs of vertices without using a for loop, we replicate
the vector entries of pv horizontally.
input
x1 = repmat(pv, dd, 1).
output
x1 =
0 0.5000 1.0000
0 0.5000 1.0000
0 0.5000 1.0000
Then pv is transposed and replicated d + 1 times, this time vertically
instead of horizontally.
input
x2 = repmat(pv', 1, dd).
37
output
0 0 0
x2 = 0.5000 0.5000 0.5000
1.0000 1.0000 1.0000
Now the two matrices combined contain all the vertices of the area.
input
v = x1 + x2.
output
0 0.5000 1.0000
v =
0.5000 1.0000 1.5000
1.0000 1.5000 2.0000
For this example, only calculate the lower left and upper right corner points of
the box. Next, we find the length of v.
lv = length(v).
By removing the maximum value of each row and column, the left lower corner
points are obtained.
input
lc = v(1 : lv-1, 1 : lv-1).
output
lc =
0 0.5000
0.5000 1.0000
By removing the minimum value of each row and column, the right upper corner
points are obtained.
38
input
rc = v(2 : lv, 2 : lv).
output
rc =
1.0000 1.5000
1.5000 2.0000
In order to find the span of y, find the minimum and maximum values
of the area.
ml = min(min(lc))
mr = max(max(rc)).
To bin using the MATLAB histogram function, extend the range by one
subinterval below the minimum and above the maximum value, so that the bins on
the ends are empty. The MATLAB command histc includes all values greater than
or equal to the left-hand edge and strictly less than the right-hand edge, and
aligns them with the left-hand edge.
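This left-closed, right-open binning can be illustrated in Python (a stand-in written for this note, not MATLAB's histc itself), and it is the reason the lower-bound edges are later shifted by .005:

```python
from bisect import bisect_right

# Left-closed, right-open binning: a value equal to a left edge belongs to
# that edge's bin, never to the bin below it.

def bin_index(edges, value):
    """Index i such that edges[i] <= value < edges[i+1]."""
    return bisect_right(edges, value) - 1

edges = [0.0, 0.5, 1.0, 1.5]
i_half = bin_index(edges, 0.5)   # lands exactly on the 0.5 edge
i_mid = bin_index(edges, 0.75)   # interior of the same bin
```

Both values fall in the bin that starts at 0.5; shifting the edges slightly upward, as the program does with ny = my + .005, makes a value equal to an edge count as strictly below the shifted edge.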
input
my = ml - (1/d) : (1/d) : mr + (1/d).
output
my =
-0.5000 0 0.5000 1.0000 1.5000 2.0000 2.5000
The left corner points are next put into the intervals of the my vector.
input
u = histc(lc, my).
39
output
u =
0 0
1 0
1 1
0 1
0 0
0 0
0 0
Because lc is a matrix, u is also a matrix. Sum the rows of u (via sum(u')) to
get the histogram values. These are then multiplied by the probability of the
partitions Ai. Then a cumulative sum is calculated.
input
up = cumsum(sum(u') * (1/d).^nv).
output
up =
0 0.2500 0.7500 1.0000 1.0000 1.0000 1.0000
Since the command histc does not include the maximum value of the interval,
adjust my for the calculation of the lower bound.
input
ny = my + .005.
output
ny =
-0.4950 0.0050 0.5050 1.0050 1.5050 2.0050 2.5050
40
Again, a histogram is run.
input
l = histc(rc, ny).
output
l =
0 0
0 0
1 0
1 1
0 1
0 0
0 0
The lower bound is then calculated in the same manner as the upper bound.
input
lb = cumsum(sum(l') * (1/d).^nv).
output
lb =
0 0 0.2500 0.7500 1.0000 1.0000 1.0000
For many functions this algorithm runs faster than a Monte Carlo simulation
run 10,000 times.
toc
output: elapsed time = 0.0460. Now time the Monte Carlo simulation.
tic
41
Number of simulations:
ms = 10000.
Create a loop:
for n = 1 : ms.
Add 2 randomly generated numbers together:
r = rand + rand.
Create a vector t to hold the summations:
t(n) = r.
End the loop:
end.
Process the results through a histogram:
md = histc(t, my).
The cumulative sum divided by the number of runs shows the cumulative
probability of being in the interval or below:
mid = cumsum(md)/ms.
Adjust the vector my by (1/d) in order to have it match up with the right-hand
values:
y = my + (1/d).
Clock the length of the run:
toc.
42
output: elapsed time = 0.2660. Plot the lower bound, upper bound, and Monte
Carlo probability vectors in the same plot. Then check to see whether the
algorithm's bounds contain the simulation.
plot(y, mid, 'r', y, lb, 'b', y, up, 'g')
clear all
%X1+X2
tic %begin run time
d=2 %number of subintervals
dd=d+1 %number of components of the interval vector
nv=2 %number of variables
pv=0:(1/d):1 %probability vector
x1=repmat(pv,dd,1) %vertices of x1
x2=repmat(pv',1,dd) %vertices of x2
v=x1+x2 %evaluate the function
lv=length(v) %length of function matrix
lc=v(1:lv-1,1:lv-1) %lower left corner points, mins
rc=v(2:lv,2:lv) %upper right corner points, maxs
ml=min(min(lc)) %min value of range
mr=max(max(rc)) %maximum value of range
my=ml-(1/d):(1/d):mr+(1/d) %vector that spans range, y
u=histc(lc,my) %histogram of mins vs y
up=cumsum(sum(u')*(1/d).^nv) %cumulative probability, upper
ny=my+.005 %adjust y for the lower bound because histc is
%strictly less than the upper value of the interval
l=histc(rc,ny) %histogram of maxs vs y
lb=cumsum(sum(l')*(1/d).^nv) %cumulative probability, lower
toc %end run time
tic %begin run time
ms=10000; %number of simulations
for n=1:ms; %for loop
r=rand+rand; %evaluate function
t(n)=r; %all 10,000 results
end
md=histc(t,my); %count frequency vs y
mid=cumsum(md)/ms; y=my+(1/d); %cumulative probability
toc %end run time
subplot(2,1,1),plot(y,mid,'r',y,lb,'b',y,up,'g') %plot
Figure C.1: The Cdf for y = X1 + X2 on Increasing Intervals (2 and 20 subintervals)
Appendix D. MATLAB Code for Example 2
MATLAB Program for X1 + ... + Xn
This program is somewhat contrived because it can only be used with sums
of variables that are uniform. However, it is included here because it may have
some use in later research. It uses a short loop that calculates the number
of partitions that have identical corner points, which is similar to a binomial
expansion.
For example, if the area has 2 subintervals per side on 3 variables, then it
has eight partitions. Using the coefficients of (a + b)^3, namely 1 3 3 1, only
4 partitions have different bounds. The large number of repetitions allows
for quick computation.
clear all
%X1+...+Xn
tic %begin run time
d=20; %number of subintervals
dd=d+1; %number of components of interval vector
nv=100; %number of variables
pv=0:(1/d):1; %probability vector
o=ones(1,d); %initialize o
Z=zeros(d,2*d-1); %initialize Z
for j=1:nv-1; %1 to number of variables less one
for i=1:d; %1 to number of divisions
Z(i,i:length(o)+i-1)=o; %put ones in Z
end
o=sum(Z); %sum Z, counts number of repetitions
end
g=(1/d).^nv; %probability for partition
c=cumsum(g*o); %cumulative probability
y=0:(1/d):nv; %y vector
lb=[zeros(1,nv) c]; %adjust c for lower bound
up=[0 c ones(1,nv-1)]; %adjust c for upper bound
toc %end run time
tic %begin run time
ms=10000; %number of simulations
for n=1:ms; %begin for loop
r=rand(1,nv); %random numbers
t(n)=sum(r); %function values
end
md=histc(t,y); %count frequency
mid=cumsum(md)/ms; my=y+(1/d); %cumulative probability
toc %end run time
subplot(2,1,2),plot(my,mid,'r',y,lb,'b',y,up,'g') %plot
Figure D.1: The Cdf for X1, ..., X100 for Increasing Intervals (4 subintervals)
Appendix E. MATLAB Code for Example 3
The multiplicative rule of interval arithmetic is used for X1, X2 >= 0:
[x] * [y] = [x_low * y_low, x_high * y_high]
Since for X1, X2 ~ U(0,1) both are nonnegative, the rule applies. Again,
we use the same program as in Example 1, because we get the min and max
values at the same corner points. If X1 or X2 were negative, we could use other
multiplicative rules to shorten the computations. See Neumaier [12].
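A small Python sketch contrasts the general multiplication rule, which needs all four endpoint products, with the nonnegative shortcut the program exploits (the function names are ours):

```python
# General interval multiplication takes the min and max of all four
# endpoint products; for nonnegative intervals the minimum is the product
# of the lower endpoints and the maximum the product of the upper ones.

def interval_mul_general(a, b):
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

def interval_mul_nonneg(a, b):
    return (a[0]*b[0], a[1]*b[1])   # valid only when a, b >= 0

x = (0.25, 0.5)
y = (0.5, 1.0)
general = interval_mul_general(x, y)
shortcut = interval_mul_nonneg(x, y)
```

For nonnegative boxes the two agree, which is why the program can read the minimum off the lower-left corner and the maximum off the upper-right corner.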
The MATLAB program for X1 * X2
clear all
%X1*X2
tic %begin run time
d=20 %number of subintervals
dd=d+1 %number of components in subinterval vector
nv=2 %number of variables
pv=0:(1/d):1 %probability vector
x1=repmat(pv,dd,1) %vertices of x1
x2=repmat(pv',1,dd) %vertices of x2
v=x1.*x2 %function values
lv=length(v) %length of v
lc=v(1:lv-1,1:lv-1) %lower left corner points
rc=v(2:lv,2:lv) %upper right corner points
ml=min(min(lc)) %min corner point
mr=max(max(rc)) %max corner point
my=ml-(1/d):(1/d):mr+(1/d) %y vector
u=histc(lc,my) %frequency count
up=cumsum(sum(u')*(1/d).^nv) %cumulative probability, upper
ny=my+.005 %adjusted y vector for lower bound
l=histc(rc,ny) %frequency count
lb=cumsum(sum(l')*(1/d).^nv) %cumulative probability, lower
toc %end run time
tic %begin run time
ms=10000; %number of simulations
for n=1:ms; %begin for loop
r=rand*rand; %function value for random inputs
t(n)=r; %all the function values
end %end loop
md=histc(t,my); %count frequency
mid=cumsum(md)/ms; %cumulative probability
y=my+(1/d); %adjust y to the max of the interval
toc %end run time
subplot(2,1,2),plot(y,mid,'r',y,lb,'b',y,up,'g') %plot
Figure E.1: The Cdf of X1 * X2 for Increasing Subintervals (4 subintervals)
Appendix F. MATLAB Code for Example 4
The MATLAB program for the sum of two normals.
Since the cdf of each variable is monotone and increasing, use the interval
arithmetic from Example 1:
[x] + [y] = [x_low + y_low, x_high + y_high]
Since the minimum values will be in the left lower corner and the maximums
in the right upper corners, use the MATLAB program from Example 1 for
this problem with only one minor change: the inverse cumulative probability is
calculated using the erfinv function.
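The inverse step can be mirrored in Python with the standard library's statistics.NormalDist, since for a standard normal the inverse cdf satisfies inv_cdf(p) = sqrt(2)*erfinv(2p - 1); the helper below (our naming) also clips the endpoints to .001 and .999 as the program does, because the inverse cdf is unbounded at 0 and 1:

```python
from statistics import NormalDist

# Compute the partition boundaries on the x axis from equal-probability
# boundaries on the cdf axis, clipping the infinite tails as in the program.

def normal_partition_points(d):
    probs = [i / d for i in range(d + 1)]
    probs[0], probs[-1] = 0.001, 0.999     # clip the tails
    inv = NormalDist().inv_cdf             # standard normal inverse cdf
    return [inv(p) for p in probs]

pts = normal_partition_points(2)           # d = 2 gives three boundaries
```

For d = 2 the middle boundary is 0 (the median) and the clipped endpoints are symmetric about it.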
clear all
%Sum of 2 normals
tic
d=2; %number of subintervals
dd=d+1; %number of components of subinterval vector
nv=2; %number of variables
p=0:(1/d):1; %probability vector
55
3
0 0
Figure F.l: The Surface Area of the Joint Cdf for Two Normals
pv=[.001 p(:,2:length(p)-1) .999]; %adjusted probability
n1=((2^.5).*erfinv(2.*(pv)-1)); %inverses of x1
n2=((2^.5).*erfinv(2.*(pv)-1)); %inverses of x2
x1=repmat(n1,dd,1); %vertices of x1
x2=repmat(n2',1,dd); %vertices of x2
v=(x1+x2); %function values
lv=length(v); %length of v
lc=v(1:lv-1,1:lv-1); %left corner points
rc=v(2:lv,2:lv); %right corner points
ml=min(min(lc)); %min function value
mr=max(max(rc)); %max function value
my=ml-(1/d):(1/d):mr+(1/d); %y vector
u=histc(lc,my); %count frequency, upper
up=cumsum(sum(u')*(1/d).^nv); %cumulative sum, upper
ny=my+.005; %adjust y for lower
l=histc(rc,ny); %count frequency
lb=cumsum(sum(l')*(1/d).^nv); %cumulative sum, lower
toc %end run time
tic
ms=1000; %number of simulations
for n=1:ms; %begin for loop
r=randn(1,2); %random numbers
t(n)=sum(r); %all function values
end %end loop
md=histc(t,my); %count frequencies
mid=cumsum(md)/ms; %cumulative sum
y=my+(1/d); %adjust y to interval max
toc %end run time
subplot(2,1,1),plot(y,mid,'r',y,lb,'b',y,up,'g') %plot
Figure F.2: The Cdf for X1 + X2 ~ N(0,1) for Increasing Subintervals (4 and 20 subintervals)
Appendix G. MATLAB Code for Example 5
The MATLAB program for sin(20*X1) + sin(X2)
Unlike the earlier examples, the minimum and maximum of the boxes of
this function are not at particular corner points. They may not fall at the
vertices at all. However, IPM still gives bounds. For this program, calculate
the values of the function at all of the vertices. Then align these values
with their partitions. MATLAB finds the minimum and maximum of each partition.
The rest of the algorithm is the same as in Example 1.
For functions where it is difficult to guarantee minimums and maximums,
good results are still obtained. To guarantee that the results are accurate,
other methods of finding global minimums and maximums are necessary. INTLAB
offers an algorithm for guaranteeing the minimum and maximum of an interval.
This may need to be a companion program when dealing with complicated
functions.
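The vertex-evaluation step described above can be sketched in Python (names of our choosing; as the text notes, for a non-monotone function the corner values give estimates, not guaranteed bounds, over each box):

```python
import math

# Evaluate sin(20*x1) + sin(x2) at the four corners of each box and take
# the corner min and max as estimates of the box's range.

def corner_min_max(f, x_edges, y_edges):
    boxes = []
    for i in range(len(x_edges) - 1):
        for j in range(len(y_edges) - 1):
            vals = [f(x_edges[a], y_edges[b])
                    for a in (i, i + 1) for b in (j, j + 1)]
            boxes.append((min(vals), max(vals)))
    return boxes

f = lambda x1, x2: math.sin(20 * x1) + math.sin(x2)
edges = [k / 4 for k in range(5)]      # 4 subintervals per variable
boxes = corner_min_max(f, edges, edges)
```

A verified variant would replace the corner scan with a guaranteed range enclosure, as the text suggests via INTLAB.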
clear all
%sin(20*x1)+sin(x2)
tic
d=20; %number of subintervals
dd=d+1; %number of components of subinterval vector
nv=2; %number of variables
pv=0:(1/d):1; %probability vector
pvl=length(pv); %length of probability vector
x1=repmat(pv,dd,1); %vertices of x1
x2=repmat(pv',1,dd); %vertices of x2
v=sin(20*x1)+sin(x2); %function values
s=0; %initialize s
%for loop to find all vertices of the partition
for i=1:pvl-1;
for j=1:pvl-1;
s=s+1;
mp(s,:)=[v(i,j) v(i,j+1) v(i+1,j) v(i+1,j+1)];
end
end
mv=[min(mp')' max(mp')']; %min and max values of each partition
smv=sort(mv); %put in order
m1=smv(:,1); %mins
m2=smv(:,2); %maxs
ml=min(m1) %min of range
mr=max(m2) %max of range
my=ml-(1/d):(1/d):mr+(1/d); %y
u=histc(m1,my); %count frequency
up=cumsum(u)*(1/d).^nv; %cumulative probability, upper
ny=my+.005; %adjusted y for lower
l=histc(m2,ny); %count frequency
lb=cumsum(l)*(1/d).^nv; %cumulative sum, lower
toc %end run time
tic %begin run time
ms=10000; %number of simulations
for n=1:ms; %begin for loop
r=sin(20*rand)+sin(rand); %function value on random numbers
t(n)=r; %all function values
end %end loop
md=histc(t,my); %count frequency
mid=cumsum(md)/ms; %cumulative probability
y=my+(1/d); %adjust y to right hand value
toc %end run time
subplot(2,1,2),plot(y,mid,'r',y,lb,'b',y,up,'g') %plot
Figure G.1: The Cdf for sin(20*x1) + sin(x2) for Increasing Subintervals (4 subintervals)
REFERENCES
[1] Daniel Berleant and Chaim Goodman-Strauss. Bounding the results of
arithmetic operations on random variables of unknown dependency using
intervals. Reliable Computing 4:147-165, 1998. Netherlands.
[2] George Corliss. Tutorial on validated scientific computing using interval
analysis. World Wide Web, http://www.eng.mu.edu/corissg/PARA04, 2004.
[3] Daniel Berleant, Lizhi Xie, and Jianzhong Zhang. Statool: A tool for
distribution envelope determination (DEnv), an interval-based algorithm for
arithmetic on random variables. Reliable Computing 9(2):91-108.
[4] Gareth I. Hargreaves. Interval analysis in MATLAB. Numerical Analysis
Report No. 416, Manchester Centre for Computational Mathematics, 2002.
http://www.ma.man.ac.uk/ nareports.
[5] B. Hayes. A lucid interval. American Scientist, Vol. 91, No. 6,
pp. 484-488, 2003.
[6] Luc Jaulin, Michel Kieffer, Olivier Didrit, and Eric Walter. Applied
Interval Analysis. Springer-Verlag Limited, London, England, 2001.
[7] Weldon A. Lodwick. Constrained interval arithmetic. Center for
Computational Mathematics, Report No. 138, 1999.
[8] Weldon A. Lodwick and K. David Jamison. Estimating and validating the
cumulative distribution of random variables: Toward the development of
distribution arithmetic. Reliable Computing 9(2):127-141, 2003. Netherlands.
[9] Moore, Strother, and Yang. Interval integrals. Technical Report Space Div.
Report LMSD703073, 1960. Lockheed Missiles and Space Co.
[10] Moore and Yang. Interval analysis I, 1959. Lockheed Missiles and Space
Co.
[11] R.E. Moore. Risk analysis without Monte Carlo methods. Freiburger
Intervall-Berichte 84/1, 1984.
[12] Arnold Neumaier. Interval Methods for Systems of Equations. Cambridge
University Press, Cambridge, England, 1990.
[13] Richard L. Scheaffer. Introduction to Probability and its Applications.
PWS-Kent Publishing, Boston, Massachusetts, 1990.
[14] Scott Ferson, Vladik Kreinovich, Daniel Berleant, and Weldon Lodwick.
Combining interval and probabilistic uncertainty: Foundations, algorithms,
challenges, an overview. World Wide Web,
http://www.cs.utep.edu/vladik/2005/tr0509a.pdf, 2005.
[15] Fulvio Tonon. Aggregation of evidence from random and fuzzy sets. ZAMM,
Z. Angew. Math. Mech. 84, No. 10-11, 700-709, 2004.
[16] Fulvio Tonon. On the use of random set theory to bracket the results of
Monte Carlo simulations. Reliable Computing 10:107-137, 2004.