Modeling uncertainty using probabilistic based possibility theory with applications to optimization

Material Information

Title:
Modeling uncertainty using probabilistic based possibility theory with applications to optimization
Creator:
Jamison, Kenneth David
Publication Date:
1998
Language:
English
Physical Description:
vii, 146 leaves ; 28 cm

Subjects

Subjects / Keywords:
Probability measures ( lcsh )
Fuzzy numbers ( lcsh )
Mathematical optimization ( lcsh )
Fuzzy numbers ( fast )
Mathematical optimization ( fast )
Probability measures ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 142-146).
General Note:
Department of Mathematical and Statistical Science
Statement of Responsibility:
by Kenneth David Jamison.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
41470592 ( OCLC )
ocm41470592
Classification:
LD1190.L622 1998d .J36 ( lcc )

Full Text
MODELING UNCERTAINTY USING
PROBABILISTIC BASED POSSIBILITY THEORY
WITH APPLICATIONS TO OPTIMIZATION
by
Kenneth David Jamison
M.S., University of California at Riverside, 1981
B.S., University of California at Riverside, 1978
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Applied Mathematics
1998


This thesis for the Doctor of Philosophy
degree by
Kenneth David Jamison
has been approved
by
Weldon Lodwick
Stephen Billups
Jan Mandel
Burt Simon
William Wolf
Date


Jamison, Kenneth David (Ph.D., Applied Mathematics)
Modeling Uncertainty Using Probabilistic Based Possibility Theory with
Applications to Optimization
Thesis directed by Professor Weldon Lodwick
ABSTRACT
It is shown that possibility distributions can be formulated within the
context of probability theory and that membership values of fuzzy set theory
can be interpreted as cumulative probabilities. The basic functions and opera-
tions of possibility theory are interpreted within this setting. The probabilistic
information that can be derived from possibility distributions is examined.
This leads to two functionals that provide estimates for the expected value of
a random variable, the expected average of a single possibility distribution and
the estimated expectation that requires two special possibility distributions
to compute. Secondly, the space of fuzzy numbers is examined. It is shown
that this space can be partitioned into a vector space and that the expected
average functional motivates a norm on this space. It is shown that for most
applications, Cauchy sequences converge in this space. Thirdly, applications of
this theory to problems in optimization are examined. The concept of a fuzzy
function is formulated. The minimum and minimizer of a fuzzy function are
described. Objectives in fuzzy optimization are examined. Lastly, the ideas


of the thesis are applied to the linear programming problem with uncertain
coefficients.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed
Weldon Lodwick


ACKNOWLEDGMENTS
I wish to thank Weldon Lodwick for his direction, support and en-
couragement, the fuzzy set theory study group (Weldon, Steve Russell, Chris
Mehl and Ram Shanmugan) for their inspiration, Burt Simon for his challeng-
ing comments and Steve Billups for his very thoughtful review of this thesis.
Most of all I thank my wife Liza O'Neill. Without her love and support I would
not have been able to persevere.


CONTENTS
1. Introduction and Overview............................................... 1
1.1 Overview ............................................................. 7
2. Possibility Theory..................................................... 10
2.1 Possibility Measures and Possibility
Distribution Functions.............................................. 13
2.2 Membership Values as Cumulative Probabilities........................ 16
2.3 Possibility Distributions when Membership
Values are Cumulative Probabilities................................. 26
2.4 Set Representations.................................................. 36
2.5 Functions of Possibility Distributions -
Extension Principles................................................ 42
2.6 Probabilistic Based Possibility Distributions
for Random Vectors ................................................. 52
2.7 The Information Contained in a
Probabilistic Based Possibility Distribution........................ 55
3. The Space of Fuzzy Numbers ............................................ 77
3.1 Fuzzy Numbers........................................................ 77
3.2 A Normed Space of Fuzzy Number
Equivalence Classes................................................. 80
3.3 An Isometry between (ℰ, ‖·‖) and BV[0,1] as a
Subspace of L1[0,1]................................................. 85
3.4 Convergence in (ℰ, ‖·‖) ....................................... 91
4. Optimization of Fuzzy Functions........................................ 94


4.1 Fuzzy Functions............................................. 94
4.2 The Minimum and Minimizer of a
Fuzzy Real-valued Function................................. 102
4.3 Method of Minimum Regrets ................................. 107
5. Fuzzy Linear Programming using a Penalty Method.............. 115
5.1 Problem Formulation........................................ 116
5.2 Properties of the Fuzzy Optimization
Problem.................................................... 119
5.3 An Algorithm............................................... 122
5.4 Example.................................................... 124
6. Conclusion................................................... 129
Appendix
A. Details of Fuzzy
Linear Programming Example................................... 132
B. Formulas for Implementation of
Fuzzy Linear Programming..................................... 138
References...................................................... 142


1. Introduction and Overview
Suppose a decision maker (DM) is interested in optimizing (in some
sense) the function
f(x) = a²x − b
(1.1)
where a and b are fixed but unknown (uncertain) parameters. How does the
DM model the unknown parameters a and b? Given the model, how does
the DM evaluate and interpret f(x) = a²x − b? How is a decision reached? Two
common techniques for estimating the parameters a and b are interval analysis
and probability theory. In this thesis we will consider a third technique, called
possibility theory.
The advantage of interval analysis is that the interval solution is guar-
anteed to contain all solutions (see Moore [32]). Interval analysis also offers
the advantage of being computationally tractable. It is often straightforward to
evaluate the function using the known methods of interval arithmetic (Moore
[32]). For example, if a£ [1,3] and b£ [4,6] then
/(l) £ [1,3]2 [4,6] = [1,9] [4,6] = [-5,5], (1.2)


The evaluation gives an interval which represents the range of possible val-
ues of the function, though most often this range is overestimated (see Moore
[32]). The DM is then faced with the problem of optimizing an interval val-
ued function, i.e. the DM must decide on an ordering of the set of intervals.
One approach is to assume a uniform probability distribution over any given
interval and optimize the expected value, i.e. optimize the midpoint of the
interval. For example, f(2) ∈ [1,3]²·2 − [4,6] = [2,18] − [4,6] = [−4,14] would be con-
sidered greater than f(1) ∈ [−5,5] since the midpoint (14−4)/2 = 5 is greater
than the midpoint (5−5)/2 = 0. A disadvantage of interval analysis is that it
gives the worst case, and interval solutions can be so wide as to be useless
unless care is taken and computationally intensive approaches are used. Another disadvantage is
that it gives no consideration to the likelihood of a particular outcome. It only
considers the range of all possible outcomes.
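The interval evaluation and the midpoint ordering just described can be sketched in a few lines of code. This is an illustrative sketch, not part of the thesis; the helper names (`interval_mul`, `f_interval`, and so on) are mine.

```python
def interval_mul(A, B):
    """Product of two intervals: the [min, max] of all endpoint products."""
    p = [A[0] * B[0], A[0] * B[1], A[1] * B[0], A[1] * B[1]]
    return (min(p), max(p))

def interval_sub(A, B):
    """Difference A - B of two intervals."""
    return (A[0] - B[1], A[1] - B[0])

def f_interval(x, A, B):
    """Interval extension of f(x) = a^2*x - b, with a in A (A >= 0) and b in B."""
    A_sq = (A[0] ** 2, A[1] ** 2)   # square of a nonnegative interval
    return interval_sub(interval_mul(A_sq, (x, x)), B)

def midpoint(I):
    return (I[0] + I[1]) / 2.0

A, B = (1.0, 3.0), (4.0, 6.0)
f1 = f_interval(1, A, B)    # (-5.0, 5.0), as in (1.2)
f2 = f_interval(2, A, B)    # (-4.0, 14.0)
assert midpoint(f2) > midpoint(f1)   # f(2) ranks above f(1) by midpoints
```

Under the uniform-distribution assumption, ordering intervals by midpoint is exactly ordering them by expected value, which is what the comparison above implements.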
Probability theory considers not only the range of possible outcomes
but the likelihood a particular outcome may occur. For example, suppose a
and b are modeled as the independent random variables X and Y with the
following probability density functions:
f_X(x) = x − 1 for x ∈ [1,2],  3 − x for x ∈ [2,3]
and
f_Y(y) = y − 4 for y ∈ [4,5],  6 − y for y ∈ [5,6]. (1.3)
Then the probability density function for the random variable Z = f(1) = X² − Y


is the following:
f_Z(z) =
∫_1^√(6+z) (w−1)(6−w²+z) dw                                                          z ∈ [−5,−4]
∫_1^√(5+z) (w−1)(w²−z−4) dw + ∫_√(5+z)^√(6+z) (w−1)(6−w²+z) dw                       z ∈ [−4,−3]
∫_√(4+z)^√(5+z) (w−1)(w²−z−4) dw + ∫_√(5+z)^√(6+z) (w−1)(6−w²+z) dw                  z ∈ [−3,−2]
∫_√(4+z)^√(5+z) (w−1)(w²−z−4) dw + ∫_√(5+z)^2 (w−1)(6−w²+z) dw + ∫_2^√(6+z) (3−w)(6−w²+z) dw    z ∈ [−2,−1]
∫_√(4+z)^2 (w−1)(w²−z−4) dw + ∫_2^√(5+z) (3−w)(w²−z−4) dw + ∫_√(5+z)^√(6+z) (3−w)(6−w²+z) dw    z ∈ [−1,0]
∫_√(4+z)^√(5+z) (3−w)(w²−z−4) dw + ∫_√(5+z)^√(6+z) (3−w)(6−w²+z) dw                  z ∈ [0,3]
∫_√(4+z)^√(5+z) (3−w)(w²−z−4) dw + ∫_√(5+z)^3 (3−w)(6−w²+z) dw                       z ∈ [3,4]
∫_√(4+z)^3 (3−w)(w²−z−4) dw                                                          z ∈ [4,5]
Using this approach, the DM might seek to optimize the expected value of f(x).
For example, the expected value of Z = f(1), E(Z), is −0.8333. A disadvantage of
this approach is that it may be impossible, difficult and/or impractical to
obtain the probability distribution for the random variable f(x) for each x to
be considered in the optimization problem. In addition, some problems do
not admit a probabilistic formulation (e.g. locating a tumor for radiation
therapy).
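As a numerical aside (mine, not the thesis's), the value E(Z) = −0.8333 is easy to confirm by Monte Carlo simulation of the densities in (1.3): X and Y are triangular on [1,3] (mode 2) and [4,6] (mode 5), so E(Z) = E(X²) − E(Y) = (E(X)² + Var(X)) − 5 = (4 + 1/6) − 5 = −5/6.

```python
import random

random.seed(0)
n = 200_000
# Sample Z = X^2 - Y with X ~ triangular(1, 3, mode 2), Y ~ triangular(4, 6, mode 5).
samples = (random.triangular(1, 3, 2) ** 2 - random.triangular(4, 6, 5)
           for _ in range(n))
estimate = sum(samples) / n

exact = -5.0 / 6.0       # E(X^2) - E(Y) = 25/6 - 5
assert abs(estimate - exact) < 0.02
```

The simulation only requires sampling, whereas the exact density f_Z above requires the piecewise integration displayed earlier; this contrast is precisely the computational burden the text is pointing at.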
In this thesis we examine a third approach called possibility theory.


It is argued that possibility theory is a compromise between interval analysis
and probability theory. In essence, we perform interval arithmetic over families
of intervals where the families of intervals are parameterized by a probability
distribution. The result is a family of intervals representing the possible values
of f(x) that preserves some of the characteristics of the probability distribu-
tion. There are two principal advantages to be gained from this approach. One
advantage is that problems formulated in possibilistic terms are more com-
putationally tractable. Another advantage is that possibility theory allows for
computations with imprecise probabilities. For example, a possibility distribu-
tion can be formulated from a family of confidence intervals. These advantages
will become clear in the sequel.
Consider the example above where a and b were modeled as indepen-
dent random variables X and Y with density functions given in (1.3). We will
show that two parameterized families of intervals can be constructed for X and
Y to arrive at the following:
X_L =
[1, 2√(1−α) + 1]                              for α ∈ [.75, 1]
[1, 2(1 − (1/2)√(2 − 2√(1−α))) + 1]           for α ∈ [0, .75]
and
Y_L =
[4, 2√(1−α) + 4]                              for α ∈ [.75, 1]
[4, 2(1 − (1/2)√(2 − 2√(1−α))) + 4]           for α ∈ [0, .75]


and
X_R =
[3 − 2√(1−α), 3]                              for α ∈ [.75, 1]
[3 − 2(1 − (1/2)√(2 − 2√(1−α))), 3]           for α ∈ [0, .75]
and
Y_R =
[6 − 2√(1−α), 6]                              for α ∈ [.75, 1]
[6 − 2(1 − (1/2)√(2 − 2√(1−α))), 6]           for α ∈ [0, .75]
Then two parameterized families of intervals can be constructed for Z = f(1)
using interval arithmetic as follows:
Z_R =
[(3 − 2√(1−α))² − (2√(1−α) + 4), 5]                                       for α ∈ [.75, 1]
[(3 − 2(1 − (1/2)√(2 − 2√(1−α))))² − (2(1 − (1/2)√(2 − 2√(1−α))) + 4), 5] for α ∈ [0, .75]
and
Z_L =
[−5, (2√(1−α) + 1)² − (6 − 2√(1−α))]                                      for α ∈ [.75, 1]
[−5, (2(1 − (1/2)√(2 − 2√(1−α))) + 1)² − (6 − 2(1 − (1/2)√(2 − 2√(1−α))))] for α ∈ [0, .75]
The DM must now rank these families of intervals in order to reach a
decision. We will show that in general these two families of intervals provide an


upper and a lower bound for the cumulative probability distribution function
of Z = /(l). If the goal of the DM is to optimize the expected value, this
characterization allows for the calculation of an estimated expected value (in
this case an actual value). For this example, the calculation of the estimated
expected value of Z, which we denote as EE(Z), is as follows:
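The parameterized families above can be sanity-checked numerically. The sketch below (mine, not the thesis's; the function names and the transcribed formulas are assumptions based on the families as printed) verifies that each profile is continuous at α = .75, covers the full support of X at α = 0, and collapses to a single endpoint at α = 1.

```python
import math

# Auxiliary quantity shared by the alpha-in-[0, .75] branches above.
def g(a):
    return 1 - 0.5 * math.sqrt(2 - 2 * math.sqrt(1 - a))

def x_l(a):
    """Right endpoint of X_L(alpha); the left endpoint is always 1."""
    return 2 * math.sqrt(1 - a) + 1 if a >= 0.75 else 2 * g(a) + 1

def x_r(a):
    """Left endpoint of X_R(alpha); the right endpoint is always 3."""
    return 3 - 2 * math.sqrt(1 - a) if a >= 0.75 else 3 - 2 * g(a)

# The two branches agree at alpha = .75 ...
for f in (x_l, x_r):
    assert abs(f(0.75 - 1e-12) - f(0.75)) < 1e-5
# ... at alpha = 0 the intervals recover the full support [1, 3] ...
assert abs(x_l(0) - 3.0) < 1e-9 and abs(x_r(0) - 1.0) < 1e-9
# ... and at alpha = 1 they collapse to the endpoints 1 and 3 respectively.
assert abs(x_l(1) - 1.0) < 1e-9 and abs(x_r(1) - 3.0) < 1e-9
```

The Y families behave identically after shifting by 3, so the same checks apply to Y_L and Y_R.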
The focus of this thesis is to explore the application of possibility
theory to the problem of incorporating uncertainty into math modeling, par-
ticularly problems in optimization. The thesis will examine the use of possibil-
ity distributions as a method of modeling unknown parameters in real valued


functions. The contributions of this work to the mathematical foundation and
application of possibility theory include (i) a formulation of possibility theory
within the framework of probability theory, (ii) a method for selecting among
families of possibility distributions over the real line based on an estimate of
the expected value, (iii) a norm on the space of fuzzy numbers based on this
estimated expected value, (iv) an isometry between this normed space and
the subspace of L1[0,1] (Lebesgue integrable functions on the interval [0,1])
consisting of functions of bounded variation, (v) convergence properties of this
normed space, (vi) a new definition of a fuzzy function and several implications
of this definition, and (vii) an approach to linear programming problems where
the coefficients are stated in terms of fuzzy numbers. Many of these results
are a compilation and continuation of research that will appear in publications
Jamison&Lodwick [16], Jamison [17] and Jamison&Lodwick [15]. Additional
aspects of this research have appeared in Lodwick&Jamison [28] and Lod-
wick&Jamison [29].
1.1 Overview
The second chapter of this paper develops topics in possibility theory
as used here. The chapter begins with known definitions and basic implications
from possibility theory. Then it is shown that a possibility distribution can be


interpreted as a cumulative probability distribution. Several results are pre-
sented showing how the image of a possibility distribution under an arbitrary
function is defined and how the resulting possibility distribution is interpreted.
Chapter three focuses on possibility distributions over the real line.
Of particular interest are fuzzy numbers, i.e. unknown numbers characterized
by possibility distributions. There are possibilistic distributions over linguistic
variables but these are not examined here. The mathematics of fuzzy numbers
is presented. It is shown how fuzzy numbers can be added, subtracted, multi-
plied, etc. An equivalence relation on the space of fuzzy numbers is presented
and it is shown that this results in a vector space. A norm on this space is
defined. This norm is motivated from an estimated expected value functional.
The convergence properties in this normed space are examined.
In chapter four, applications to certain optimization problems are ex-
amined. First, the concept of a fuzzy function between finite dimensional vector
spaces is discussed. A fuzzy function is defined as a possibility distribution,
satisfying certain properties, over the space of bounded functions. The image
of a fuzzy function is shown to be a fuzzy vector. In particular, if the range
space is the real line, the image is a fuzzy number.
Second, the case of minimizing an unconstrained fuzzy real-valued
convex function is examined. The minimum of a fuzzy function is defined


and it is shown to be a fuzzy number. The minimizer of a fuzzy function is
defined. It is shown that the possibility distribution for the minimizer has a
connectedness property. A minimization approach is examined based on the
estimated expected value functional of chapter two. It is called the method of
minimum regrets, and seeks to minimize the estimated expected value of the
possible error.
In chapter five, the theory and methods of chapters two, three and
four are applied to the linear programming problem where all of the coefficients
are replaced by fuzzy numbers. The problem is first formulated in possibilistic
terms and converted to a problem of maximizing the estimated expected value
of an unconstrained fuzzy function. It is shown that the resulting problem is
concave. Conditions to ensure a bounded solution are developed and an upper
bound on the decision variables is determined. An algorithm is provided.
Chapter six concludes by summarizing the results of this thesis and
suggesting areas of future research.


2. Possibility Theory
L. Zadeh [48] introduced the concept of a fuzzy set in the 1960s.
The basic idea is to extend the notion of set membership to include degrees
of partial membership. This is generally characterized by a membership
function, which is a mapping from the universe of discourse into the unit
interval. For example, if A is a fuzzy subset of set X, the membership function
for A is μ_A : X → [0,1]. For x ∈ X, μ_A(x) is called the membership value of
x and is interpreted as the degree to which x is a member of fuzzy set A, where
1 represents full membership and 0 represents complete lack of membership.
If μ_A(x) = 0 or 1 for all x, A is called a crisp set (in this case μ_A is just the
characteristic function of set A). Zadeh defined a set of rules of combination for
fuzzy sets, each of which is a generalization of the same concepts for crisp sets.
So, for example, the standard union, intersection and complement operations
for fuzzy sets A and B, defined in terms of their membership functions are
μ_{A∪B}(x) = max(μ_A(x), μ_B(x)), μ_{A∩B}(x) = min(μ_A(x), μ_B(x)), and
μ_{Aᶜ}(x) = 1 − μ_A(x), where Aᶜ is the complement of A. Note that these are the usual
operations if A and B are crisp sets.
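A small illustration (mine, not the thesis's) of these standard operations on a finite universe, using the membership-function formulas just stated; the example membership values are made up.

```python
X = ["a", "b", "c"]
mu_A = {"a": 0.2, "b": 0.75, "c": 1.0}
mu_B = {"a": 0.5, "b": 0.4, "c": 0.0}

# Standard fuzzy union, intersection, and complement, pointwise.
mu_union = {x: max(mu_A[x], mu_B[x]) for x in X}   # {'a': 0.5, 'b': 0.75, 'c': 1.0}
mu_inter = {x: min(mu_A[x], mu_B[x]) for x in X}   # {'a': 0.2, 'b': 0.4, 'c': 0.0}
mu_comp_A = {x: 1 - mu_A[x] for x in X}            # {'a': 0.8, 'b': 0.25, 'c': 0.0}
```

If every membership value were 0 or 1 (a crisp set), these formulas reduce to the ordinary set union, intersection, and complement, as the text notes.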


Possibility theory, also introduced by L. Zadeh [49], is a natural ex-
tension of his work with fuzzy sets. As a tool in optimization problems that
involve uncertainty, the basic idea is to consider the set of all possible alter-
natives for an unknown parameter of the problem. But the set of possible
alternatives may not be well defined. Thus, a fuzzy set of alternatives is used.
To do this, each element in the universal set is assigned a membership value
(or possibility level) in the interval [0,1]. In possibility theory, as formulated
by Zadeh, the membership value of an element is interpreted as the degree of
possibility that the parameter is that element. If the membership value is zero,
it is impossible that the parameter is the element. If the membership value is
one the element is considered a possible value of the parameter without reser-
vation. Membership values produce an ordering on the universal set in terms
of each element's acceptability as a possible value of the unknown parameter.
The mapping of each member of the universal set to its membership value is
called a possibility distribution function for the unknown parameter.
The typical process for utilizing possibility theory in optimizing a
function with ill-defined parameters is as follows: First, each unknown param-
eter of the function is represented using a possibility distribution. Second, for
a given point in the decision space the function is evaluated over the set of
possible parameter values using the rules of combination of possibility theory.


This produces a possibility distribution for the function evaluation. Third, the
possibility distribution is assigned a value. The process of assigning a value
to a possibility distribution is called "defuzzification" (a term taken from fuzzy set
theory). Fourth, this value is used to order the possibility distributions so that
an optimal decision can be made. It is the second of these steps that makes
possibility theory an interesting alternative to probability theory in modeling
uncertainty. For many problems, it is fairly straightforward to calculate the
image of a function of possibility distributions, whereas it may be very difficult
if probability distributions are used.
The process just described is controversial. The basic problem is that
membership values have not been well defined except as a tool for ordering
the universal set. Very different membership values can be used to arrive
at the same ordering. But most defuzzification methods are dependent on
the membership values. Thus the decision derived from the process may be
arbitrary, since it depends on what membership values were assigned. Also, it is
difficult to interpret the meaning of the number derived from defuzzification. In
In [12], Dubois&Prade state: "...when we scan the fuzzy set literature, including
Zadeh's own papers, there is no uniformity in the interpretation of what a
membership grade means. This situation has caused many a critique by fuzzy
set opponents, and also many a misunderstanding within the field itself. Most


negative statements expressed in the literature turn around the question of
interpreting and eliciting membership grades."
This thesis begins with an interpretation of membership values based
on probability theory. The fundamental properties of possibility theory are
then examined from this perspective. The advantage of this approach is that
it gives a concrete meaning to membership values and the value derived from
the defuzzification method proposed.
2.1 Possibility Measures and Possibility
Distribution Functions
Before introducing the probabilistic interpretation of membership val-
ues, the formal definitions and resulting implications from possibility theory are
provided. Note that the basis for selecting membership values is not discussed
in this section.
Definition 1 (Klir&Yuan [21]) Given a universal set X and a nonempty family
C of subsets of X, a fuzzy measure on (X, C) is a function
g : C → [0,1] (2.1)
that satisfies the following requirements:
(g1) g(∅) = 0 and g(X) = 1 (boundary condition)
(g2) ∀ A, B ∈ C, if A ⊆ B then g(A) ≤ g(B) (monotonicity)


(g3) for any increasing sequence A_1 ⊆ A_2 ⊆ ... in C, if ∪_{i=1}^∞ A_i ∈ C, then
lim_{i→∞} g(A_i) = g(∪_{i=1}^∞ A_i) (2.2)
(continuity from below)
(g4) for any decreasing sequence A_1 ⊇ A_2 ⊇ ... in C, if ∩_{i=1}^∞ A_i ∈ C, then
lim_{i→∞} g(A_i) = g(∩_{i=1}^∞ A_i) (2.3)
(continuity from above).
An immediate consequence of this definition is that if A, B, A∪B and
A∩B ∈ C, then every fuzzy measure satisfies the inequalities
g(A∩B) ≤ min(g(A), g(B)) and g(A∪B) ≥ max(g(A), g(B)). (2.4)
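Since probability measures are fuzzy measures, the inequalities in (2.4) can be exercised on a concrete example. The sketch below (mine, not from the text) takes a probability measure on a three-point universe with C the full power set and checks (g1), (g2), and (2.4) exhaustively; the point masses are made up.

```python
from itertools import combinations

X = {1, 2, 3}
p = {1: 0.25, 2: 0.25, 3: 0.5}     # made-up point masses

def g(A):
    """Probability measure: g(A) = sum of the point masses in A."""
    return sum(p[x] for x in A)

subsets = [frozenset(s) for r in range(4) for s in combinations(X, r)]
assert g(frozenset()) == 0 and g(frozenset(X)) == 1       # (g1)
for A in subsets:
    for B in subsets:
        if A <= B:
            assert g(A) <= g(B)                           # (g2)
        assert g(A & B) <= min(g(A), g(B))                # (2.4)
        assert g(A | B) >= max(g(A), g(B))
```

For a probability measure the inequalities in (2.4) can be strict (e.g. for disjoint A and B of positive mass); possibility and necessity measures, defined next, are the fuzzy measures for which one of them holds with equality.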
Fuzzy measures include probability measures as well as belief and
plausibility measures of evidence theory (see Klir&Yuan [21]). Possibility the-
ory is based on the following two fuzzy measures.
Definition 2 (Klir&Yuan [21]) Let nec denote a fuzzy measure on (X, C).
Then nec is called a necessity measure iff
nec(∩_{k∈K} A_k) = inf_{k∈K} nec(A_k) (2.5)
for any family {A_k | k ∈ K} in C such that ∩_{k∈K} A_k ∈ C, where K is an
arbitrary index set.


Definition 3 (Klir&Yuan [21]) Let pos denote a fuzzy measure on (X, C).
Then pos is called a possibility measure iff
pos(∪_{k∈K} A_k) = sup_{k∈K} pos(A_k) (2.6)
for any family {A_k | k ∈ K} in C such that ∪_{k∈K} A_k ∈ C, where K is an
arbitrary index set.
When C = P(X), the power set of X, possibility and necessity mea-
sures occur in pairs.
Theorem 1 (Klir&Yuan [21]) If pos is a possibility measure on P(X), then
its dual set function nec, which is defined by
nec(A) = 1 − pos(Aᶜ), (2.7)
is a necessity measure, called the necessity measure associated with pos.
Definition 4 (Klir&Yuan [21]) Given a possibility measure pos on power set
P(X), the function μ : X → [0,1] such that μ(x) = pos({x}) for all x ∈ X is
called the possibility distribution function associated with pos, and μ(x)
is called the possibility level or membership value of x.
Theorem 2 (Klir&Yuan [21]) Every possibility measure pos on a power set
P(X) is uniquely determined, for each A ∈ P(X), by the associated possibility
distribution function via the formula
pos(A) = sup_{x∈A} μ(x). (2.8)
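Theorem 2 and the duality of Theorem 1 are easy to verify exhaustively on a finite universe. The sketch below (mine, not the thesis's; the membership values are made up) builds pos from a distribution function μ via (2.8), builds nec via (2.7), and checks the defining sup/inf properties over every pair of subsets.

```python
from itertools import combinations

X = ["a", "b", "c"]
mu = {"a": 0.4, "b": 1.0, "c": 0.6}   # made-up values with sup mu = 1

def pos(A):
    """pos(A) = sup of the membership values in A, per (2.8)."""
    return max((mu[x] for x in A), default=0.0)

def nec(A):
    """Dual necessity measure nec(A) = 1 - pos(A^c), per (2.7)."""
    return 1 - pos([x for x in X if x not in A])

subsets = [tuple(s) for r in range(4) for s in combinations(X, r)]
for A in subsets:
    for B in subsets:
        AB = tuple(set(A) | set(B))
        assert pos(AB) == max(pos(A), pos(B))    # possibility of unions (2.6)
        AnB = tuple(set(A) & set(B))
        assert nec(AnB) == min(nec(A), nec(B))   # necessity of intersections (2.5)
```

The union check is immediate from (2.8), and the intersection check follows by De Morgan duality, which is exactly the content of Theorem 1.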


Theorem 3 (Wang&Klir [46]) If μ is a possibility distribution function for
possibility measure pos, then
sup_{x∈X} μ(x) = 1. (2.9)
Conversely, if a function μ : X → [0,1] satisfies (2.9), then μ determines a
possibility measure pos uniquely, and μ is the possibility distribution function
of pos.
Definition 5 (Dubois&Prade [10]) A possibility distribution is called normal
if ∃ a ∈ X such that pos({a}) = 1.
2.2 Membership Values as Cumulative Probabilities
Given a possibility distribution function, there are various ways mem-
bership values have been interpreted. For example, membership values have
been interpreted as probabilities over nested sets as in Dubois&Prade [11] or
as upper bounds on probability distributions as in Dubois&Prade [9]. A good
overview of the various interpretations of membership grades in fuzzy set theory
as well as possibility theory can be found in Dubois&Prade [12]. In this thesis
we adopt an interpretation that is closely related to the random set interpreta-
tion and the interpretation as the upper bounds on probability distributions.
This interpretation provides a straightforward method for constructing pos-
sibility distributions. This interpretation will give context to the operations


proposed in this thesis. The need for a concrete interpretation of membership
values was discussed in the introduction to this chapter.
The prototype for the following discussion is a parameterized real
valued function where the parameters are not known with precision. However,
other applications of possibility theory will be considered.
Definition 6 Let X be a space and x̃ a random vector on X. A possibility
nest for x̃ is a relation ≥_p on X × X such that
(1) ≥_p is a linear ordering of X with the property that ∀ x, y ∈ X either
x ≥_p y, y ≥_p x or x =_p y, and
(2) ∀ x ∈ X, {y ∈ X | y ≥_p x} is measurable with respect to the probability
measure for x̃, and
(3) ∀ B ⊆ X, ∃ an at most countable subset C ⊆ B with the property that
∀ x ∈ B ∃ x_n, x_m ∈ C such that x_n ≥_p x ≥_p x_m.

This third property requires some examination. The assumption is
not trivial since there exists an unbounded infinite well ordered set that has
the property that every countable subset is bounded above (see the set S_Ω in
Munkres [33]). However, the following theorem shows that the assumption is
not too restrictive.
Theorem 4 Let B ⊆ [0,1]. Then ∃ an at most countable subset C ⊆ B with
the property that ∀ x ∈ B ∃ x_n, x_m ∈ C such that x_n ≥ x ≥ x_m.


Proof:
We first show that there is a countable dense subset of B. For
each q ∈ Q ∩ [0,1] (where Q is the set of rational numbers) and each n, let
S(q, 1/n) = {x ∈ [0,1] | |x − q| < 1/n}. Let b_{q,n} ∈ S(q, 1/n) ∩ B if
S(q, 1/n) ∩ B ≠ ∅; otherwise set b_{q,n} = b′, an arbitrary element of B. Let
D = ∪_{q∈Q∩[0,1]} {b_{q,n} | n = 1, ...}.
Then D is at most countable since it is a countable union of countable sets. Let
b ∈ B. Pick n > 0. Since the rationals are dense in [0,1], ∃ q rational such that
|q − b| < 1/(2n). Then b ∈ S(q, 1/(2n)), so S(q, 1/(2n)) ∩ B ≠ ∅, which means
∃ b_{q,2n} ∈ D such that |q − b_{q,2n}| < 1/(2n). But then
|b − b_{q,2n}| ≤ |b − q| + |q − b_{q,2n}| < 1/(2n) + 1/(2n) = 1/n.
Hence D is dense in B.
Let C = D ∪ ({sup D} ∩ B) ∪ ({inf D} ∩ B). We claim C has the desired property
of the theorem. Assume not. Then without loss of generality, assume ∃ b ∈ B
such that ∀ c ∈ C, c < b. Since D is dense in B, ∃ {d_n ∈ D | n = 1, ...} such
that d_n → b. But then, by the assumption on b, b = sup D; thus b ∈ C, a
contradiction. □
We motivate the name "possibility nest" by noting that any such
ordering produces a nested family of measurable subsets of X.
Definition 7 Let ≥_p be a possibility nest for x̃. Define x to be possible if
x ≥_p x̃ and impossible otherwise.
The following theorem provides some of the motivation behind these
definitions.


Theorem 5 Let 𝒜 = {A_α | α ∈ Λ} be a nested family of confidence intervals
for random variable x̃, where α = prob(x̃ ∈ A_α). Then 𝒜 defines a possibility
nest for x̃.
Proof:
Define an ordering on R by setting x ≥_p y if sup{1 − α | x ∈ A_α} ≥
sup{1 − α | y ∈ A_α}, where sup ∅ = 0. This clearly establishes a linear ordering
on R since every two elements are comparable and the relation is transitive.
For given x ∈ R let 1 − γ = sup{1 − α | x ∈ A_α}. We claim that
{y | y ≥_p x} = ∩_{α∈Λ, α>γ} A_α  if inf{α ∈ Λ | α > γ} = γ,
{y | y ≥_p x} = ∩_{α∈Λ, α≥γ} A_α  otherwise.
Let z ∈ {y | y ≥_p x}; then z ≥_p x, so sup{1 − α | z ∈ A_α} ≥ 1 − γ. Since
the sets are nested and indexed by a probability measure, it must hold that
α ≥ β ⟹ A_α ⊇ A_β, or equivalently 1 − α ≤ 1 − β ⟹ A_α ⊇ A_β. Therefore
sup{1 − α | z ∈ A_α} ≥ 1 − γ means z ∈ A_α ∀ α > γ, since otherwise ∃ β > γ for
which z ∉ A_β, but then z ∉ A_α ∀ α ≤ β and hence sup{1 − α | z ∈ A_α} ≤ 1 − β <
1 − γ, which is a contradiction. Therefore, in all cases z ∈ ∩_{α∈Λ, α>γ} A_α.
Assume inf{α ∈ Λ | α > γ} = β > γ. If γ ∈ Λ and if z ∉ A_γ, then
z ∉ A_α ∀ α < β and sup{1 − α | z ∈ A_α} ≤ 1 − β < 1 − γ, which is a contradiction.
Therefore z ∈ ∩_{α∈Λ, α≥γ} A_α. If γ ∉ Λ then ∩_{α∈Λ, α≥γ} A_α = ∩_{α∈Λ, α>γ} A_α and the


prior case holds.
Assume inf{α ∈ Λ | α > γ} = γ. Let z ∈ ∩_{α∈Λ, α>γ} A_α. Then sup{1 − α | z ∈
A_α} ≥ 1 − γ and z ∈ {y | y ≥_p x}.
Assume inf{α ∈ Λ | α > γ} = β > γ. If z ∈ ∩_{α∈Λ, α≥γ} A_α then in
particular, if γ ∈ Λ, then z ∈ A_γ, so sup{1 − α | z ∈ A_α} ≥ 1 − γ and z ∈ {y | y ≥_p
x}. But it must be that γ ∈ Λ, since otherwise 1 − γ = sup{1 − α | x ∈ A_α}
would imply that inf{α ∈ Λ | α > γ} = γ, contradicting our assumption.
The third property of the definition of possibility nest follows from the
prior theorem and the fact that Λ ⊆ [0,1]. It remains to show that ∩_{α∈Λ, α>γ} A_α
and ∩_{α∈Λ, α≥γ} A_α are measurable. But there exist at most countable sub-
sets C and D of Λ such that ∩_{α∈Λ, α>γ} A_α = ∩_{c∈C} A_c and ∩_{α∈Λ, α≥γ} A_α = ∩_{d∈D} A_d,
and both of these latter sets are measurable since each A_α is measurable by
assumption. □
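Theorem 5's construction can be made concrete. The sketch below (my example, not the thesis's) takes a standard normal variable with the nested confidence intervals A_α = [−t, t], where α = prob(|x̃| ≤ t). The quantity sup{1 − α | x ∈ A_α} used in the proof then has the closed form 2(1 − Φ(|x|)), and sorting points by it realizes the possibility-nest ordering.

```python
import math

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def level(x):
    """sup{1 - alpha : x in A_alpha} for the nested intervals A_alpha = [-t, t]."""
    return 2 * (1 - Phi(abs(x)))

# Points nearer the center lie in more of the nested intervals, so they
# rank higher in the ordering x >=_p y iff level(x) >= level(y).
pts = [0.0, 0.5, -1.0, 1.96, -3.0]
ranked = sorted(pts, key=level, reverse=True)
assert ranked == [0.0, 0.5, -1.0, 1.96, -3.0]
assert level(0.0) == 1.0
```

The induced nested sets {y | y ≥_p x} are exactly the intervals [−|x|, |x|], illustrating why the proof's intersection formulas reduce to the confidence intervals themselves.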
We are interested in functions of multiple random vectors. Therefore,
we extend the notion of a possibility nest to a finite collection of random vectors
as follows:
Definition 8 Let X be a space and X̃ = {x̃^i | i = 1 to N} be a finite set of
symbols representing random vectors on X. Let Ψ = X̃ × X. A possibility
nest in the context of X̃ is defined to be a relation ≥_c on Ψ × Ψ such that
(1) ≥_c is a linear ordering of Ψ with the property that ∀ (x̃^i, a), (x̃^j, b) ∈ Ψ
either (x̃^i, a) ≥_c (x̃^j, b), (x̃^j, b) ≥_c (x̃^i, a) or (x̃^i, a) =_c (x̃^j, b), and
(2) ∀ a ∈ X and 1 ≤ i, j ≤ N, {b ∈ X | (x̃^j, b) ≥_c (x̃^i, a)} is measurable with
respect to the probability measure associated with random vector x̃^j, and
(3) ∀ B ⊆ Ψ, ∃ an at most countable subset C ⊆ B with the property that
∀ (x̃^i, a) ∈ B ∃ (x̃^j, a_n), (x̃^k, a_m) ∈ C such that (x̃^j, a_n) ≥_c (x̃^i, a) ≥_c (x̃^k, a_m).
A possibility nest for x̃^i in the context of X̃ is defined to be the relation
≥_p on X × X which is the restriction of ≥_c to X, obtained by setting b ≥_p a if
(x̃^i, b) ≥_c (x̃^i, a). Note that the properties of ≥_c ensure that ≥_p satisfies the
properties of Definition 6.
In a little while we will construct contextual possibility distributions
from contextual possibility nests. One can think of contextual possibility dis-
tributions as interim distributions on the way to constructing a possibility
distribution for a function of the random vectors in X̃. Examples of orderings
on Ψ and their usefulness are given throughout the rest of this chapter.
From this point forward, we will use (X, X̃, ≥_c) or (X, x̃, ≥_p) to denote
possibility nests. Note that (X, X̃, ≥_c) includes the special case (X, x̃, ≥_p).
Ψ will always be used to represent X̃ × X.
For each i, the symbol x̃^i represents an unknown element of X. For
some unique x_i ∈ X, x̃^i = x_i, i.e. x_i is the realization of x̃^i. For purposes of


the following definition we will distinguish between the symbol for the random
variable, x̃^i, and the actual value of the random variable, x_i. We will drop this
distinction after this definition.
Definition 9 Given (X, X̃, ≥_c), let ω = min{(x̃^i, x_i) | i = 1 to N}, where the
min is with respect to the ordering ≥_c. Given (x̃^i, a) ∈ Ψ, call x̃^i = a possible
in the context of X̃ if (x̃^i, a) ≥_c ω, and impossible in the context of X̃
otherwise.
When X̃ consists of a single variable, (x̃^1, x_1) = ω, and a is possible
in the context of X̃ iff a ≥_p x_1, i.e. a is possible. In this case the ordering is
a possibility nest for x̃ and the reference to the context is dropped.
Given (X, X̃, ≥_c), let
Ψ_γ = {ψ ∈ Ψ | ψ ≥_c γ} (2.10)
and let
π_i(Ψ_γ) = {a ∈ X | (x̃^i, a) ∈ Ψ_γ}. (2.11)
For γ ∈ Ψ, if γ >_c ω then for some i, x_i ∉ π_i(Ψ_γ). Recall that ω =
min{(x̃^i, x_i) | i = 1, ..., N}. We will show ω has the cumulative probability
distribution characterized by
F_ω(γ) = 1 − prob(x̃^1 ∈ π_1(Ψ_γ) | x̃^2 ∈ π_2(Ψ_γ), ..., x̃^N ∈ π_N(Ψ_γ)) ··· prob(x̃^N ∈ π_N(Ψ_γ)) (2.12)
where F_ω(γ) = prob(γ >_c ω). If the x̃^i's are independent, this reduces to
F_ω(γ) = 1 − prob(x̃^1 ∈ π_1(Ψ_γ)) ··· prob(x̃^N ∈ π_N(Ψ_γ)). (2.13)
Thus F_ω(γ) is the probability that γ is possible in the context of X̃. Then the
statement "the probability is α that x̃^i = a is possible in the context of X̃"
has this specific meaning, namely that prob((x̃^i, a) ≥_c ω) = α.
Theorem 6 Given (X, X, >_c), ω is a random element of Ψ and F_ω is the
cumulative distribution function for ω relative to the ordering >_c.

Proof:
The σ-field of subsets and probability measure follow from the def-
inition of possibility nest in the context of X, and F_ω is clearly the distribution
function. □

Note that if (x̃^i, r) >_c ω then this is also true for any (x̃^j, s) >_c (x̃^i, r).
For this situation,

prob((x̃^i, r) >_c ω and (x̃^j, s) >_c ω) = prob((x̃^i, r) >_c ω)   (2.14)

and

prob((x̃^i, r) >_c ω or (x̃^j, s) >_c ω) = prob((x̃^j, s) >_c ω).   (2.15)

Furthermore,

prob((x̃^i, r) <_c ω and (x̃^j, s) <_c ω) = prob((x̃^j, s) <_c ω) = 1 - prob((x̃^j, s) >_c ω)   (2.16)

and finally,

prob((x̃^i, r) <_c ω or (x̃^j, s) <_c ω) = prob((x̃^i, r) <_c ω) = 1 - prob((x̃^i, r) >_c ω).   (2.17)

Note also that (x̃^j, s) >_c (x̃^i, r) implies that

prob((x̃^j, s) >_c ω) ≥ prob((x̃^i, r) >_c ω)   (2.18)

and

prob((x̃^j, s) <_c ω) ≤ prob((x̃^i, r) <_c ω).   (2.19)

These observations generalize as follows.
Theorem 7 Given (X, X, >_c), the following identities hold for all B ⊆ Ψ with
B ≠ ∅:

prob(⋁_{(x̃^i,r)∈B} (x̃^i, r) >_c ω) = sup{prob((x̃^i, r) >_c ω) | (x̃^i, r) ∈ B}   (2.20)

and

prob(⋀_{(x̃^i,r)∈B} (x̃^i, r) >_c ω) = inf{prob((x̃^i, r) >_c ω) | (x̃^i, r) ∈ B}   (2.21)

and

prob(⋀_{(x̃^i,r)∈B} (x̃^i, r) <_c ω) = inf{prob((x̃^i, r) <_c ω) | (x̃^i, r) ∈ B}
                                   = 1 - sup{prob((x̃^i, r) >_c ω) | (x̃^i, r) ∈ B}   (2.22)

and

prob(⋁_{(x̃^i,r)∈B} (x̃^i, r) <_c ω) = sup{prob((x̃^i, r) <_c ω) | (x̃^i, r) ∈ B}
                                   = 1 - inf{prob((x̃^i, r) >_c ω) | (x̃^i, r) ∈ B}.   (2.23)

Proof:
For the first identity, note that

(⋁_{(x̃^i,r)∈B} (x̃^i, r) >_c ω) = ⋃_{(x̃^i,r)∈B} ((x̃^i, r) >_c ω)

and the events ((x̃^i, r) >_c ω) are nested because of the linear ordering >_c.
By definition, there is an at most countable subset C ⊆ B with the property that
for every (x̃^i, a) ∈ B there exist (x̃^j, a_n), (x̃^k, a_m) ∈ C such that (x̃^j, a_n) ≤_c
(x̃^i, a) ≤_c (x̃^k, a_m). This together with the nested property implies that

⋃_{(x̃^i,r)∈B} ((x̃^i, r) >_c ω) = ⋃_{n∈C} ((x̃^i, r_n) >_c ω)

and

prob(⋃_{(x̃^i,r)∈B} ((x̃^i, r) >_c ω)) = sup_{n∈C} {prob((x̃^i, r_n) >_c ω)}
                                      = sup_{(x̃^i,r)∈B} {prob((x̃^i, r) >_c ω)}.

The other identities are proven similarly, along with the fact that

inf{prob((x̃^i, r) <_c ω) | (x̃^i, r) ∈ B} = inf{1 - prob((x̃^i, r) >_c ω) | (x̃^i, r) ∈ B}
                                          = 1 - sup{prob((x̃^i, r) >_c ω) | (x̃^i, r) ∈ B},

with a similar result using sup in place of inf. □
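The nested-event identities of Theorem 7 can be illustrated numerically in the single-variable case. The sketch below is ours, not from the thesis: it uses a finite random variable with the natural ordering, where the events "a is possible" = (a >=_p x̃) are nested in a, so the probability of a union over B is the sup of the individual probabilities and the probability of an intersection is the inf.

```python
# Finite random variable with the natural ordering >=_p on its values.
values, probs = [1, 2, 3, 4], [0.1, 0.4, 0.3, 0.2]

def prob_possible(a):
    """prob(a >=_p x~): the probability that a is possible (Definition 9)."""
    return sum(p for v, p in zip(values, probs) if v <= a)

B = [1, 2, 3]
# The events are nested, so the union is the largest event in B and the
# intersection is the smallest event in B (Theorem 7).
union_prob = prob_possible(max(B))   # prob(at least one a in B is possible)
inter_prob = prob_possible(min(B))   # prob(every a in B is possible)
assert union_prob == max(prob_possible(a) for a in B)
assert inter_prob == min(prob_possible(a) for a in B)
```

For a general (non-nested) family of events only subadditivity would hold; the sup/inf identities are exactly what the linear ordering of a possibility nest buys.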
2.3 Possibility Distributions when Membership
Values are Cumulative Probabilities

In this section it is shown that possibility distributions can be formu-
lated from the cumulative probability distribution of ω as a random element of
Ψ. This gives a probabilistic interpretation to membership values, to the
possibility and necessity measures, and to other measures of possibility theory, as
will be shown.
Given (X, X, >_c), for (x̃^i, a) ∈ Ψ, define a function μ_{x̃^i} by the
equation

μ_{x̃^i}(a) = prob((x̃^i, a) >_c ω).   (2.24)

In other words, μ_{x̃^i}(a) is the probability that x̃^i = a is possible in the context
of X (or x̃ = a is possible, when X = {x̃}).
Thus μ_{x̃^i}(a) = F_ω((x̃^i, a)), where F_ω((x̃^i, a)) (see (2.12)) is the cumu-
lative distribution function for the distribution of ω when ω is considered a
random element of Ψ.


Theorem 8 Given (X, X, >_c), for x̃^i ∈ X the function μ_{x̃^i} is a possibility
distribution function.

Proof:
By definition, μ_{x̃^i} : X → [0,1]. Also

sup_{x∈X} μ_{x̃^i}(x) = sup_{x∈X} prob((x̃^i, x) >_c ω).

By definition there is an at most countable set C ⊆ X with the property that

sup_{x∈X} prob((x̃^i, x) >_c ω)
= sup_{x∈C} prob((x̃^i, x) >_c ω)
= prob(⋃_{x∈C} ((x̃^i, x) >_c ω))
= prob(⋃_{x∈X} ((x̃^i, x) >_c ω))
= 1

since (x̃^i, x_i) >_c ω by definition. □

With this interpretation of membership values, it is clear that possi-
bility distributions are context dependent in the following sense. If additional
random vectors are added to X and/or removed from X, then the possibil-
ity distribution for x̃^i will change. It will also change if the ordering of Ψ is
changed. In fact, for a given collection of random vectors X, there is a set of
possibility distributions (one for each ordering of Ψ).


The next theorem is a construction theorem for the case where the
random vectors in X are independent. It is straightforward to construct a
possibility distribution for a random vector x̃ if X contains only the singleton
x̃. In this case ω = (x̃^1, x_1) and the possibility nest forms a nested sequence
of subsets of X (the sets π(Ψ_{(x̃,a)}), see (2.11)). Then μ_x̃(a) = 1 - prob(x̃ ∈
π(Ψ_{(x̃,a)})) = prob(a >_p x̃). For example, in Theorem 5 we saw that a nested
sequence of confidence intervals can be a possibility nest for a real-valued
random variable. Once possibility distributions have been constructed in this
manner for a collection of independent random variables, the following theorem
shows how the possibility distributions can be combined to produce contextual
possibility distributions for each of the variables. By contextual we mean the
possibility distributions produced by an ordering on Ψ when X is the set of
random variables of interest.
Theorem 9 Let X = {x̃^i | i = 1 to N} be a collection of independent random
vectors on space X. Let {μ_{x̃^i} | i = 1 to N} be possibility distribution functions
constructed for each x̃^i alone. In other words, for each i an ordering has been
imposed on X such that μ_{x̃^i}(a) = prob(a >_p x̃^i) with respect to this ordering.
Then an order >_c can be imposed on Ψ = X × X by setting (x̃^i, a) >_c (x̃^j, b)
if μ_{x̃^i}(a) > μ_{x̃^j}(b) and (x̃^i, a) =_c (x̃^j, b) if μ_{x̃^i}(a) = μ_{x̃^j}(b). For fixed a ∈ X,
i ∈ {1, 2, ..., N}, and for each j = 1, 2, ..., N, let

α_j = 1 - sup{μ_{x̃^j}(b) | b ∈ X and μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)}.   (2.25)

Then with respect to >_c,

μ̂_{x̃^i}(a) = 1 - ∏_{j=1}^N α_j   (2.26)

is a possibility distribution function for x̃^i in the context of X.

Proof:
From (2.13),

μ̂_{x̃^i}(a) = 1 - prob(x̃^1 ∈ π_1(Ψ_{(x̃^i,a)})) ⋯ prob(x̃^N ∈ π_N(Ψ_{(x̃^i,a)}))

where Ψ_γ = {ψ ∈ Ψ | ψ >_c γ} and π_j(Ψ_γ) = {b ∈ X | (x̃^j, b) ∈ Ψ_γ}. But from
the order imposed,

Ψ_{(x̃^i,a)} = {(x̃^j, b) | μ_{x̃^j}(b) > μ_{x̃^i}(a)} and π_j(Ψ_{(x̃^i,a)}) = {b ∈ X | μ_{x̃^j}(b) > μ_{x̃^i}(a)},

and thus

prob(x̃^j ∈ π_j(Ψ_{(x̃^i,a)})) = prob(x̃^j ∈ {b ∈ X | μ_{x̃^j}(b) > μ_{x̃^i}(a)})
                            = 1 - prob(x̃^j ∈ {b ∈ X | μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)}).

But the event (x̃^j ∈ {b ∈ X | μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)}) = ⋃_{b : μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)} (b >_p x̃^j). Since
these events are nested, using property three of the definition of a possi-
bility nest we have

prob(⋃_{b : μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)} (b >_p x̃^j)) = sup{prob(b >_p x̃^j) | μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)}.

But prob(b >_p x̃^j) = μ_{x̃^j}(b). Therefore prob(x̃^j ∈ π_j(Ψ_{(x̃^i,a)})) = 1 - sup{μ_{x̃^j}(b) |
b ∈ X and μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)} = α_j, and (2.26) follows. □
Corollary 1 If, in addition to the other assumptions, the possibility distribu-
tion function μ_{x̃^i} in Theorem 9 maps onto the interval (0,1) for each i = 1 to
N, then

μ̂_{x̃^i}(a) = 1 - (1 - μ_{x̃^i}(a))^N   (2.27)

is a possibility distribution function for x̃^i in the context of X.

Proof: Given μ_{x̃^i}(a) ∈ (0,1), by the onto assumption there exists b ∈ X such that
μ_{x̃^j}(b) = μ_{x̃^i}(a). Therefore sup{μ_{x̃^j}(b) | b ∈ X and μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)} = μ_{x̃^i}(a).
If μ_{x̃^i}(a) = 1, we know from the definition of a possibility distribution function
that

sup{μ_{x̃^j}(b) | b ∈ X and μ_{x̃^j}(b) ≤ 1} = 1.

If μ_{x̃^i}(a) = 0, sup{μ_{x̃^j}(b) | b ∈ X and μ_{x̃^j}(b) ≤ 0} = 0, since either there exists b
such that μ_{x̃^j}(b) = 0 or {μ_{x̃^j}(b) | b ∈ X and μ_{x̃^j}(b) ≤ 0} = ∅ and we defined
sup ∅ = 0. In every case α_j = 1 - μ_{x̃^i}(a) in (2.25), and (2.26) gives (2.27). □
Let F_i : R → [0,1] be the cumulative distribution function for random
variable x̃^i (i.e. F_i(x) = prob(x̃^i ≤ x)). In addition, let G_i(x) = prob(x ≤ x̃^i).
These functions map R into the interval [0,1] and satisfy the property that
sup{F_i(x) | x ∈ R} = sup{G_i(x) | x ∈ R} = 1. Therefore these are possi-
bility distribution functions for the case X = {x̃^i} where the ordering on R
is either the natural ordering or the reverse of the natural ordering. Then
μ_{x̃^i}(x) = prob(x is possible) = prob(x >_p x̃^i) = prob(x̃^i ≤ x) = F_i(x), or
μ_{x̃^i}(x) = prob(x is possible) = prob(x >_p x̃^i) = prob(x ≤ x̃^i) = G_i(x). When
these functions satisfy the conditions of the previous corollary, they can be
used directly to construct contextual possibility distributions. These special
possibility distributions will be useful later when we examine the probabilistic
information that can be derived from a possibility distribution.

Corollary 2 If X consists of random variables, and for each element of X the
distribution functions F_i and G_i map onto the interval (0,1) (for example, if
each x̃^i is a continuous random variable), then selection of one each of F_i or G_i
induces an ordering of Ψ determined as in Theorem 9, and for each i = 1 to N,

μ̂_{x̃^i}(a) = 1 - (1 - H_i(a))^N   (2.28)

is a possibility distribution for x̃^i in the context of X, where H_i = F_i or G_i.

Proof:
Apply the previous corollary. □

Note that when X = {x̃}, this reduces to μ_x̃(a) = F(a) or μ_x̃(a) = G(a).
These distributions concentrate the possibility distributions on the left
or right hand side of the real line.
Example 1 Constructing Possibility Distributions - Application of
Theorem 9. In this example we will construct contextual possibility distri-
bution functions. The first step is to construct possibility distribution functions
for each random variable separately. Second, we use these distributions to con-
struct the set Ψ = X × X as in Theorem 9. Third, for each element of the set
Ψ we construct the cumulative distribution function for ω, F_ω. Finally, using
F_ω, the contextual possibility distribution functions for each element of X are
constructed.
Let x̃^1, x̃^2 be independent discrete random variables over the set X = {1, 2, 3, 4, 5}
where the distributions of x̃^1 and x̃^2 are:

r    prob(x̃^1 = r)    prob(x̃^2 = r)
1        .25              .3
2        .5               .5
3        .125             .2
4        .125             0
5        0                0

We will construct two possibility distributions in the context of X = {x̃^1, x̃^2},
one concentrating possibility to the left and one to the right side of the real
line. We will denote these using superscripts L and R. First construct possibility
distributions for each variable separately. Recall that μ(x) = prob((x̃, x) >_c ω),
which reduces to prob(x >_p x̃) when X consists of the single variable x̃. For
x̃^1, the ordering for the left distribution is 1 >_p 2 >_p 3 >_p 4 >_p 5 and for
the right distribution, 4 >_p 3 >_p 2 >_p 1 >_p 5. For x̃^2, the ordering for
the left distribution is 1 >_p 2 >_p 3 >_p 4 =_p 5 and for the right distribution,
3 >_p 2 >_p 1 >_p 4 =_p 5.

r    Lμ_{x̃^1}(r)    Lμ_{x̃^2}(r)
1        1               1
2        .75             .7
3        .25             .2
4        .125            0
5        0               0

r    Rμ_{x̃^1}(r)    Rμ_{x̃^2}(r)
1        .25             .3
2        .75             .8
3        .875            1
4        1               0
5        0               0

We can combine one distribution for each of the random variables. We will
combine Lμ_{x̃^1}(r) with Lμ_{x̃^2}(r) and Rμ_{x̃^1}(r) with Rμ_{x̃^2}(r) to get two sets of
possibility distributions. As in Theorem 9, we construct an order on Ψ =
{x̃^1, x̃^2} × X by setting (x̃^i, a) >_c (x̃^j, b) if μ_{x̃^i}(a) > μ_{x̃^j}(b), and set μ̂_{x̃^i}(a) =
1 - ∏_{j=1}^2 α_j where α_j = 1 - sup{μ_{x̃^j}(b) | b ∈ X and μ_{x̃^j}(b) ≤ μ_{x̃^i}(a)}. The
result is shown below, with the elements of Ψ listed from highest to lowest in
the ordering, along with prob(γ >_c ω) = 1 - prob(γ <_c ω):

Ψ for Lμ_{x̃^1} with Lμ_{x̃^2}                1 - prob(γ <_c ω)
(x̃^1,1) =_c (x̃^2,1)                          1 - (1 - 1)(1 - 1) = 1.0
(x̃^1,2)                                      1 - (1 - .75)(1 - .7) = .925
(x̃^2,2)                                      1 - (1 - .25)(1 - .7) = .775
(x̃^1,3)                                      1 - (1 - .25)(1 - .2) = .4
(x̃^2,3)                                      1 - (1 - .125)(1 - .2) = .3
(x̃^1,4)                                      1 - (1 - .125)(1 - 0) = .125
(x̃^1,5) =_c (x̃^2,4) =_c (x̃^2,5)             1 - (1 - 0)(1 - 0) = 0

Ψ for Rμ_{x̃^1} with Rμ_{x̃^2}                1 - prob(γ <_c ω)
(x̃^1,4) =_c (x̃^2,3)                          1 - (1 - 1)(1 - 1) = 1.0
(x̃^1,3)                                      1 - (1 - .875)(1 - .8) = .975
(x̃^2,2)                                      1 - (1 - .75)(1 - .8) = .95
(x̃^1,2)                                      1 - (1 - .75)(1 - .3) = .825
(x̃^2,1)                                      1 - (1 - .25)(1 - .3) = .475
(x̃^1,1)                                      1 - (1 - .25)(1 - 0) = .25
(x̃^1,5) =_c (x̃^2,4) =_c (x̃^2,5)             1 - (1 - 0)(1 - 0) = 0

The latter table gives us the following information:

P(x̃^1 = 4 and x̃^2 = 3) = 1 - .975 = .025
P(x̃^1 = 3 or 4 and x̃^2 = 3) = 1 - .95 = .05
P(x̃^1 = 3 or 4 and x̃^2 = 2 or 3) = 1 - .825 = .175
P(x̃^1 = 2, 3 or 4 and x̃^2 = 2 or 3) = 1 - .475 = .525
P(x̃^1 = 2, 3 or 4 and x̃^2 = 1, 2 or 3) = 1 - .25 = .75
P(x̃^1 = 1, 2, 3 or 4 and x̃^2 = 1, 2 or 3) = 1 - 0 = 1

The left and right possibility distributions for x̃^1 and x̃^2 in the context of X
are:

r    Rμ̂_{x̃^1}(r)    Rμ̂_{x̃^2}(r)
1        .25             .475
2        .825            .95
3        .975            1
4        1               0
5        0               0

r    Lμ̂_{x̃^1}(r)    Lμ̂_{x̃^2}(r)
1        1               1
2        .925            .775
3        .4              .3
4        .125            0
5        0               0
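The construction of Theorem 9, as exercised in this example, can be sketched in a few lines of code. This is an illustrative sketch, not code from the thesis; the function names `single_mu` and `contextual` are ours, and ties in an ordering are assumed to occur only among zero-probability values (as with 4 =_p 5 above).

```python
def single_mu(probs, order):
    """mu(a) = prob(a >=_p x~) for one variable alone.

    probs: value -> probability; order: the values listed from highest to
    lowest in the ordering >_p.
    """
    mu, mass_at_or_below = {}, 1.0
    for v in order:
        mu[v] = mass_at_or_below       # prob that x~ realizes at or below v
        mass_at_or_below -= probs[v]
    return mu

def contextual(mus, i, a):
    """mu-hat for variable i at value a in the context of all variables,
    via alpha_j = 1 - sup{mu_j(b) : mu_j(b) <= mu_i(a)} and (2.26)."""
    level = mus[i][a]
    prod = 1.0
    for mu_j in mus:
        s = max((m for m in mu_j.values() if m <= level), default=0.0)
        prod *= 1.0 - s
    return 1.0 - prod

# The two variables of Example 1 with their left orderings.
p1 = {1: .25, 2: .5, 3: .125, 4: .125, 5: 0}
p2 = {1: .3, 2: .5, 3: .2, 4: 0, 5: 0}
L1, L2 = single_mu(p1, [1, 2, 3, 4, 5]), single_mu(p2, [1, 2, 3, 4, 5])
```

For instance, `contextual([L1, L2], 0, 2)` reproduces Lμ̂_{x̃^1}(2) = .925, and the right orderings [4, 3, 2, 1, 5] and [3, 2, 1, 4, 5] reproduce the second table.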
Example 2 The Continuous Case - Application of Corollary 2. Let x̃^1
and x̃^2 be independent random variables with density functions

f_1(x) = x - 1 for x ∈ [1, 2]          f_2(x) = x - 4 for x ∈ [4, 5]
       = 3 - x for x ∈ [2, 3]    and          = 6 - x for x ∈ [5, 6]   (2.29)

so the cumulative distribution functions are:

F_1(x) = x²/2 - x + 1/2 for x ∈ [1, 2]
       = 3x - x²/2 - 3.5 for x ∈ [2, 3]

and

F_2(x) = x²/2 - 4x + 8 for x ∈ [4, 5]
       = -17 + 6x - x²/2 for x ∈ [5, 6]

and the distribution functions from the left (where G_i(x) = prob(x ≤ x̃^i)) are:

G_1(x) = 1/2 + x - x²/2 for x ∈ [1, 2]
       = 4.5 - 3x + x²/2 for x ∈ [2, 3]

and

G_2(x) = -7 + 4x - x²/2 for x ∈ [4, 5]
       = 18 - 6x + x²/2 for x ∈ [5, 6].   (2.30)

For this example, the combination rules of Corollary 2 apply, so Rμ̂_{x̃^i}(x) =
1 - (1 - F_i(x))² and Lμ̂_{x̃^i}(x) = 1 - (1 - G_i(x))². For example,

Rμ̂_{x̃^1}(x) = 1 - (1 - (x²/2 - x + 1/2))² for x ∈ [1, 2]
            = 1 - (1 - (3x - x²/2 - 3.5))² for x ∈ [2, 3]   (2.31)
            = 0 otherwise

is a right possibility distribution function for x̃^1 in the context of X. A left
possibility distribution is constructed similarly but using G_1.
2.4 Set Representations

Before examining functions of possibility distributions we consider a
method for representing a possibility distribution by a family of nested sets.
These representations, called set representations, will be useful when func-
tions of possibility distributions are examined. They are useful for computa-
tional as well as theoretical purposes. We will show in the next section that
the possibility distribution of a function of random variables represented by
possibility distributions can be determined by evaluating the function over the
set representations of the possibility distributions of the random variables.
Recall that possibility theory can be viewed as a subset of fuzzy set
theory. In both theories, membership functions and possibility distribution
functions are represented by the symbol μ, and in both theories μ maps the
universal set X to the unit interval. We begin this section with two definitions
from fuzzy set theory that are also used in possibility theory.

Definition 10 (Kaufmann & Gupta [20]) Given a fuzzy set x̃, its α-cut is

x̃_α = {y ∈ X | μ_x̃(y) ≥ α} for α ∈ [0,1]   (2.32)

and its strong α-cut is

x̃_{α+} = {y ∈ X | μ_x̃(y) > α} for α ∈ [0,1].   (2.33)

Definition 11 (Kaufmann & Gupta [20]) Given a fuzzy set x̃, the set x̃_{0+} =
{a | μ_x̃(a) > 0} is called the support of x̃.

The α-cut for a possibility distribution constructed using the methods
of this thesis consists of all elements whose probability of being possible is at
least α. The support of such a possibility distribution consists of all elements
that have non-zero probability of being possible.
α-cuts are used to form a representation for a possibility distribution
as a nested family of sets {x̃_α}_{α∈[0,1]}. These and similar representations will be
used quite extensively in the sequel. This notion can be extended.

Definition 12 (Kruse et al. [23]) The nested family of subsets (A_α)_{α∈[0,1]} is
called a set representation for a possibility distribution μ_x̃ for x̃ if for all a ∈ X,

μ_x̃(a) = sup{α | a ∈ A_α}   (2.34)

where by definition sup ∅ = 0.

Set representations have continuity properties.

Theorem 10 (Kruse et al. [23]) Let (A_α)_{α∈[0,1]} be a set representation for a
possibility distribution μ_x̃ for x̃. Then for all α ∈ (0,1], x̃_α = ⋂_{γ<α} A_γ and
x̃_{α+} = ⋃_{γ>α} A_γ.

Set representations are bounded above and below by the α-cuts and
strong α-cuts of a possibility distribution.

Theorem 11 (Kruse et al. [23]) The family of sets (A_α)_{α∈[0,1]} is a set
representation for a possibility distribution μ_x̃ for x̃ if and only if it holds that
x̃_{α+} ⊆ A_α ⊆ x̃_α for all α ∈ (0,1).


Example 3 For Example 1, the set representations for Lμ̂_{x̃^1} formed by α-
cuts and strong α-cuts are as follows, where Lx̃^1_α represents the α-cut of
the possibility distribution for x̃^1 when the possibility distribution for x̃^1 in the
context of {x̃^1, x̃^2} is Lμ̂_{x̃^1}:

α                 Lx̃^1_α             α                 Lx̃^1_{α+}
.925 < α ≤ 1      {1}                 α = 1             ∅
.4 < α ≤ .925     {1,2}               .925 ≤ α < 1      {1}
.125 < α ≤ .4     {1,2,3}             .4 ≤ α < .925     {1,2}
0 < α ≤ .125      {1,2,3,4}           .125 ≤ α < .4     {1,2,3}
α = 0             {1,2,3,4,5}         0 ≤ α < .125      {1,2,3,4}
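For a finite possibility distribution, α-cuts and strong α-cuts are one-line set comprehensions. The sketch below uses our naming and reproduces the table above for Lμ̂_{x̃^1}.

```python
def alpha_cut(mu, alpha, strong=False):
    """a-cut {v : mu(v) >= alpha}, or strong a-cut {v : mu(v) > alpha}."""
    return {v for v, m in mu.items() if (m > alpha if strong else m >= alpha)}

# Contextual left distribution for x~1 from Example 1.
L_mu1 = {1: 1.0, 2: .925, 3: .4, 4: .125, 5: 0.0}
```

For example, `alpha_cut(L_mu1, .5)` gives {1, 2}, while `alpha_cut(L_mu1, .4, strong=True)` drops the boundary element 3.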
Example 4 Constructing Set Representations. For Example 2, we will
form the set representations using the α-cuts. Rather than using the possi-
bility distribution functions developed earlier, we will create the set represen-
tations from first principles. We begin with the set representations for the left
distributions.
Let

Lx̃^1_α = [1, 2β + 1] and Lx̃^2_α = [4, 2β + 4] for β ∈ [0, 1].   (2.35)

Then the subscript α, as a function of β, can be determined. Noting that the
two distributions are identical except for a linear shift, observe that

prob(x̃^i ∈ Lx̃^i_α) = ∫_1^{2β+1} (x - 1) dx = 2β²                for i = 1, 2 and β ∈ [0, .5]
                   = 1/2 + ∫_2^{2β+1} (3 - x) dx = -1 + 4β - 2β²  for β ∈ [.5, 1].   (2.36)

Then from (2.13) and (2.24),

α = 1 - P(x̃^1 ∈ Lx̃^1_α) P(x̃^2 ∈ Lx̃^2_α) = 1 - (2β²)²             for i = 1, 2 and β ∈ [0, .5]
                                         = 1 - (-1 + 4β - 2β²)²   for β ∈ [.5, 1].

Suppose α = 1 - (2β²)²; then β = √(√(1-α)/2) for α ∈ [.75, 1].
Suppose α = 1 - (-1 + 4β - 2β²)²; then β = 1 - (1/2)√(2 - 2√(1-α)) for
α ∈ [0, .75]. Thus a set representation for the left possibility distributions for
x̃^1 and x̃^2 in the context of {x̃^1, x̃^2} is given by:

Lx̃^1_α = [1, 2√(√(1-α)/2) + 1]                    for α ∈ [.75, 1]
       = [1, 2(1 - (1/2)√(2 - 2√(1-α))) + 1]      for α ∈ [0, .75]

and

Lx̃^2_α = [4, 2√(√(1-α)/2) + 4]                    for α ∈ [.75, 1]
       = [4, 2(1 - (1/2)√(2 - 2√(1-α))) + 4]      for α ∈ [0, .75].

Now let

Rx̃^1_α = [3 - 2β, 3] and Rx̃^2_α = [6 - 2β, 6] for β ∈ [0, 1].   (2.37)

Using the same reasoning as above to determine the subscript α as a function
of β, we get:

Rx̃^1_α = [3 - 2√(√(1-α)/2), 3]                    for α ∈ [.75, 1]
       = [3 - 2(1 - (1/2)√(2 - 2√(1-α))), 3]      for α ∈ [0, .75]

and

Rx̃^2_α = [6 - 2√(√(1-α)/2), 6]                    for α ∈ [.75, 1]
       = [6 - 2(1 - (1/2)√(2 - 2√(1-α))), 6]      for α ∈ [0, .75].

An alternative construction is to form the α-cuts that concentrate the proba-
bility mass. We will use the left superscript U to distinguish this distribution,
meaning the points with higher probability density are concentrated in the
α-cuts with higher values of α. Let

Ux̃^1_α = [1 + 2β, 3 - 2β] and Ux̃^2_α = [4 + 2β, 6 - 2β] for β ∈ [0, .5].   (2.38)

Then the subscript α, as a function of β, can be determined. Noting again
that the two distributions are identical except for a linear shift, observe that

prob(x̃^i ∈ Ux̃^i_α) = 1 - 2∫_1^{1+2β} (x - 1) dx = 1 - 4β²  for i = 1, 2 and β ∈ [0, .5].

Then from (2.13) and (2.24),

α = 1 - P(x̃^1 ∈ Ux̃^1_α) P(x̃^2 ∈ Ux̃^2_α) = 1 - (1 - 4β²)²  for β ∈ [0, .5].

Let α = 1 - (1 - 4β²)²; then β = (1/2)√(1 - √(1-α)) for α ∈ [0, 1].
Thus set representations for the possibility distributions for x̃^1 and x̃^2 in the
context of {x̃^1, x̃^2} are given by:

Ux̃^1_α = [1 + √(1 - √(1-α)), 3 - √(1 - √(1-α))]

and

Ux̃^2_α = [4 + √(1 - √(1-α)), 6 - √(1 - √(1-α))]  for α ∈ [0, 1].
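The α(β) inversion in this example can be checked numerically. The sketch below uses our function names; it verifies that β(α) for the left representation recovers α = 1 - P(x̃^1 ∈ Lx̃^1_α) P(x̃^2 ∈ Lx̃^2_α), using the fact that the two cut probabilities are equal by the linear shift.

```python
import math

def beta_left(alpha):
    """Invert alpha = 1 - (2 b^2)^2 on b in [0, .5] (alpha in [.75, 1]) and
    alpha = 1 - (-1 + 4 b - 2 b^2)^2 on b in [.5, 1] (alpha in [0, .75])."""
    if alpha >= .75:
        return math.sqrt(math.sqrt(1 - alpha) / 2)
    return 1 - .5 * math.sqrt(2 - 2 * math.sqrt(1 - alpha))

def cut_prob(beta):
    """P(x~1 in [1, 2*beta + 1]) for the triangular density of Example 2."""
    c = 2 * beta + 1
    return (c - 1) ** 2 / 2 if c <= 2 else 1 - (3 - c) ** 2 / 2
```

By the shift argument, 1 - cut_prob(β)² should reproduce α for every α ∈ [0, 1]; the boundary α = .75 gives β = .5 from either branch.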
2.5 Functions of Possibility Distributions -
Extension Principles

Suppose there are two possibility distributions for unknown elements
of the real line, x̃ and ỹ, and the decision maker is interested in the unknown
element x̃ + ỹ. In particular, how can the possibility distribution for the sum
be determined from the distributions for x̃ and ỹ? This section shows how this
is done in terms of the contextual possibility distributions. It is also shown
how set representations can be used to arrive at the same result. In particular,
by applying the operation as a set function on the set representations of each
distribution, a set representation for the combined distribution is derived.
The most important property of possibility theory is the ease with
which different possibility distributions can be combined. Zadeh provided the
formula for constructing the possibility distribution for a function of a pos-
sibility distribution as follows:

Definition 13 (Extension Principle) (Zadeh [48]) Let f : X → Y be an arbi-
trary function and let x̃ be a fuzzy set over X. Define a fuzzy set over Y, which
we denote f(x̃), by setting μ_{f(x̃)}(b) = sup_{b=f(a)} μ_x̃(a).

The cumulative probability interpretation of membership values gives
justification for this definition. Recall from Theorem 7 that for B ⊆ Ψ,

prob(⋁_{(x̃^i,a)∈B} (x̃^i, a) >_c ω) = sup{μ_{x̃^i}(a) | (x̃^i, a) ∈ B}   (2.40)

and

prob(⋀_{(x̃^i,a)∈B} (x̃^i, a) >_c ω) = inf{μ_{x̃^i}(a) | (x̃^i, a) ∈ B}.   (2.41)

In words, the probability that at least one element of set B is possi-
ble is the supremum over the probability that any particular element in B is
possible, and the probability that all elements of B are possible is the infimum
over the probability that any particular element of B is possible.

Definition 14 Given (X, X, >_c), where X = {x̃^i | i = 1, ..., N}, and a mea-
surable function f : X^N → Y, call y ∈ Y possible in the context of
X if there exists ∏_{i=1}^N a_i ∈ X^N such that each a_i is possible in the context of X and
y = f(∏_{i=1}^N a_i).

This definition provides a basis for a probabilistic interpretation of
the extension principle.


Theorem 12 Given (X, X, >_c) where X = {x̃^i | i = 1, ..., N}, a mea-
surable function f : X^N → Y induces a possibility nest >_p on Y for the random
vector f(∏_{i=1}^N x̃^i) over Y, with contextual possibility distribution function given
by the extension principle, i.e. for all y ∈ Y,

μ_{f(∏_{i=1}^N x̃^i)}(y) = sup_{y=f(∏_{i=1}^N a_i)} min_{i=1 to N} (μ_{x̃^i}(a_i))   (2.42)

where μ_{f(∏ x̃^i)}(y) = prob(y is possible in the context of X).

Proof:
Let w̃ = f(∏_{i=1}^N x̃^i). Define a point-to-set mapping f̂ : Ψ → P(Y) by

f̂((x̃^i, a)) = {y ∈ Y | ∃ ∏_{j=1}^N a_j ∈ X^N with f(∏_{j=1}^N a_j) = y and (x̃^j, a_j) >_c (x̃^i, a) for each j}.

Since Ψ is linearly ordered, {f̂((x̃^i, a)) | (x̃^i, a) ∈ Ψ} is nested and thus linearly
orders Y. Let >_p be this linear ordering. For each y ∈ Y, {w̃ >_p y} =
f̂((x̃^i, a)) for some (x̃^i, a) ∈ Ψ. For this (x̃^i, a), ∏_{j=1}^N x_j ∈ f^{-1}(f̂((x̃^i, a)))
if and only if (x̃^j, x_j) >_c (x̃^i, a) for each j, so f^{-1}({w̃ >_p y}) =
∏_{j=1}^N {x | (x̃^j, x) >_c (x̃^i, a)}, and each factor is measurable with
respect to the random vector x̃^j by assumption on >_c. Therefore {w̃ >_p y} is
a measurable set in Y.

Let ỹ = f(∏_{i=1}^N x̃^i). Recall that ω = min{(x̃^i, x_i) | i = 1, ..., N}.
Then

prob(ỹ >_p y) = prob(∃ ∏_{i=1}^N a_i with f(∏_{i=1}^N a_i) = y and (x̃^i, a_i) >_c ω for all i = 1, ..., N).

In words, prob(ỹ >_p y) is the probability that there is an ∏_{i=1}^N a_i in
X^N such that each a_i is a possible candidate for x̃^i in the context of X and such
that y = f(∏_{i=1}^N a_i), i.e. it is the probability that ỹ = y is possible.
But the probability that each a_i is a possible candidate for x̃^i in the context of
X is equal to min_{i=1 to N} (μ_{x̃^i}(a_i)). Also

prob(∃ ∏_{i=1}^N a_i | f(∏_{i=1}^N a_i) = y and (x̃^i, a_i) >_c ω for all i = 1, ..., N)
= prob(⋃_{∏ a_i : f(∏ a_i)=y} ⋀_{i=1}^N ((x̃^i, a_i) >_c ω)),

and these events are nested by the linear ordering of Ψ. Therefore, using prop-
erty three from the definition of a contextual possibility nest,

prob(⋃_{∏ a_i : f(∏ a_i)=y} ⋀_{i=1}^N ((x̃^i, a_i) >_c ω)) = sup_{y=f(∏_{i=1}^N a_i)} min_{i=1 to N} (μ_{x̃^i}(a_i)). □
The usefulness of set representations is demonstrated by the following
theorem.

Theorem 13 Given (X, X, >_c) where X = {x̃^i | i = 1, ..., N}, let (A^i_α)_{α∈[0,1]}
be a set representation for a possibility distribution for random vector x̃^i
over X in the context of X, and let g : X^N → Y be measurable. Then

({g(∏_{i=1}^N a_i) | a_i ∈ A^i_α for all i = 1, ..., N})_{α∈[0,1]}

is a set representation for the possibilistic variable g(∏_{i=1}^N x̃^i) as defined in
Theorem 12.

Proof:
Since μ_{g(∏ x̃^i)}(y) = sup_{y=g(∏ a_i)} min_{i=1,...,N} (μ_{x̃^i}(a_i)), we need to show
that for all y ∈ Y,

sup_{y=g(∏_{i=1}^N a_i)} min_{i=1,...,N} (μ_{x̃^i}(a_i))
= sup{α | y ∈ {g(∏_{i=1}^N a_i) | a_i ∈ A^i_α for all i = 1, ..., N}}.   (2.43)

We consider two cases.
Case one:
Let y ∈ Y. Assume sup{α | y ∈ {g(∏ a_i) | a_i ∈ A^i_α for all i}} = r > 0. Then
for all γ < r, y ∈ {g(∏ a_i) | a_i ∈ A^i_γ for all i}, since
the A^i_α's are nested. So there exists ∏_{i=1}^N a_i such that y = g(∏_{i=1}^N a_i) and a_i ∈ A^i_γ. Thus
μ_{x̃^i}(a_i) ≥ γ for all i = 1, ..., N (by (2.34)), so min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≥ γ. But this is true
for all γ < r, so sup_{y=g(∏ a_i)} min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≥ r. On the other hand, if δ > r
then y ∉ {g(∏ a_i) | a_i ∈ A^i_δ for all i}. This means that for every ∏_{i=1}^N a_i
such that y = g(∏_{i=1}^N a_i), for some j, a_j ∉ A^j_δ. Therefore for this j, μ_{x̃^j}(a_j) ≤ δ,
which implies that min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≤ δ. Since this is true for all δ > r, it must
be that min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≤ r. Thus sup_{y=g(∏ a_i)} min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≤ r, and
equality follows.
Case two:
Let y ∈ Y. Assume sup{α | y ∈ {g(∏ a_i) | a_i ∈ A^i_α for all i}} = 0. By the previous
argument, sup_{y=g(∏ a_i)} min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≤ 0. But by
definition of μ, sup_{y=g(∏ a_i)} min_{i=1,...,N} (μ_{x̃^i}(a_i)) ≥ 0. □

This is an important result. It tells us that to evaluate a function
of possibility distributions we don't have to evaluate the supremum over the
minimums of the membership values. Instead, we can evaluate the function
over the α-cuts of the possibility distributions. For some problems, this
simplifies to performing interval arithmetic using the α-cuts of each possibility
distribution.
Example 5 Consider the random variable ỹ = x̃^1 + x̃^2, where x̃^1 and x̃^2 are as
given in Example 1. This variable is monotonic in each of x̃^1 and x̃^2.
Therefore, we can calculate the set representations formed by the α-cuts for
the left and right possibility distributions for ỹ from the α-cuts for the left
and right possibility distributions for x̃^1 and x̃^2 respectively. The α-cuts for
the left possibility distribution are:

α                 Lx̃^1_α        Lx̃^2_α        Lỹ_α
.925 < α ≤ 1      {1}            {1}            {2}
.775 < α ≤ .925   {1,2}          {1}            {2,3}
.4 < α ≤ .775     {1,2}          {1,2}          {2,3,4}
.3 < α ≤ .4       {1,2,3}        {1,2}          {2,3,4,5}
.125 < α ≤ .3     {1,2,3}        {1,2,3}        {2,3,4,5,6}
0 < α ≤ .125      {1,2,3,4}      {1,2,3}        {2,3,4,5,6,7}

This has the following meaning:
7.5% of the time (1 - .925), y = 2 is the only possible value for ỹ;
22.5% of the time (1 - .775), y = 2 or 3 are the only possible values for ỹ;
60% of the time (1 - .4), y = 2, 3 or 4 are the only possible values for ỹ;
70% of the time (1 - .3), y = 2, 3, 4 or 5 are the only possible values of ỹ;
87.5% of the time (1 - .125), y = 2, 3, 4, 5 or 6 are the only possible values of ỹ;
and 100% of the time, y = 2, 3, 4, 5, 6 or 7 are the only possible values for ỹ.

The α-cuts for the right possibility distribution are:

α                 Rx̃^1_α        Rx̃^2_α        Rỹ_α
.975 < α ≤ 1      {4}            {3}            {7}
.95 < α ≤ .975    {4,3}          {3}            {6,7}
.825 < α ≤ .95    {4,3}          {3,2}          {5,6,7}
.475 < α ≤ .825   {4,3,2}        {3,2}          {4,5,6,7}
.25 < α ≤ .475    {4,3,2}        {3,2,1}        {3,4,5,6,7}
0 < α ≤ .25       {4,3,2,1}      {3,2,1}        {2,3,4,5,6,7}
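Theorem 13 says that the sup-min extension principle and α-cut arithmetic must agree. The sketch below (our naming, not the thesis's code) computes Lμ̂_ỹ by sup-min directly from the contextual distributions of Example 1 and can be checked against the left α-cut table above.

```python
def extend(f, mu1, mu2):
    """Sup-min extension principle for two finite possibility distributions."""
    out = {}
    for a, m1 in mu1.items():
        for b, m2 in mu2.items():
            y = f(a, b)
            out[y] = max(out.get(y, 0.0), min(m1, m2))
    return out

# Contextual left distributions from Example 1.
L_mu1 = {1: 1.0, 2: .925, 3: .4, 4: .125, 5: 0.0}
L_mu2 = {1: 1.0, 2: .775, 3: .3, 4: 0.0, 5: 0.0}
L_muy = extend(lambda a, b: a + b, L_mu1, L_mu2)
```

For instance, `L_muy[4]` is .775, exactly the level at which 4 first enters Lỹ_α in the table, and the breakpoints .925, .775, .4, .3, .125 are recovered as the membership values of 3 through 7.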
Example 6 For the set representations from Example 4, consider ỹ = (x̃^1)² - x̃^2.
To construct the right possibility distribution for ỹ, we note that over
the support of x̃^1, y increases as x̃^1 increases, and over the support of x̃^2, y
increases as x̃^2 decreases. Therefore, to construct a set representation for the
right distribution for ỹ, we apply interval arithmetic to the set representation
for the right distribution for x̃^1 and the left distribution for x̃^2. This results in:

Rỹ_α = [(3 - 2√(√(1-α)/2))² - (2√(√(1-α)/2) + 4), 5]  for α ∈ [.75, 1]
     = [(3 - 2(1 - (1/2)√(2 - 2√(1-α))))²
        - (2(1 - (1/2)√(2 - 2√(1-α))) + 4), 5]         for α ∈ [0, .75]

and

Lỹ_α = [-5, (2√(√(1-α)/2) + 1)² - (6 - 2√(√(1-α)/2))]  for α ∈ [.75, 1]
     = [-5, (2(1 - (1/2)√(2 - 2√(1-α))) + 1)²
        - (6 - 2(1 - (1/2)√(2 - 2√(1-α))))]            for α ∈ [0, .75].

Using the upper possibility distribution, a set representation for another possi-
bility distribution for ỹ is:

Uỹ_α = [(1 + √(1 - √(1-α)))² - (6 - √(1 - √(1-α))),
        (3 - √(1 - √(1-α)))² - (4 + √(1 - √(1-α)))]    for α ∈ [0, 1].

A graph of the upper possibility distribution versus the probability density
function for ỹ is shown in Figure 2.1. A graph of the upper possibility distri-
butions for x̃^1, x̃^2 and ỹ is shown in Figure 2.2.

Figure 2.1. Graph of the upper possibility distribution function and proba-
bility density function for ỹ = (x̃^1)² - x̃^2.

Figure 2.2. Graph of upper possibility distributions for x̃^1, x̃^2, and ỹ =
(x̃^1)² - x̃^2.


2.6 Probabilistic Based Possibility Distributions
for Random Vectors

We consolidate the possibility distributions constructed in the pre-
ceding definitions into a single definition.

Definition 15 Let Y be a space and ỹ a random vector. We will call μ_ỹ a
probabilistic based possibility distribution for ỹ if there exists a contex-
tual possibility nest (X, X, >_c) and a measurable function f : X^N → Y such
that ỹ = f(∏_{i=1}^N x̃^i) and μ_ỹ(a) = prob(ỹ = a is possible in the context of X).

Note that this definition includes all of the possibility distribution
functions constructed earlier. For example, if X = {x̃}, Y = X and
f(x) = x (i.e. the identity map), then we have the possibility distribution
with membership function μ_x̃(a) = prob(a >_p x̃) (where >_p is the restriction
of >_c), and if X = {x̃^i | i = 1 to N} and f(x) = x (i.e. the identity map),
then we have the possibility distributions with membership functions μ_{x̃^i}(a) =
prob((x̃^i, a) >_c ω). With this in mind, from this point forward we drop the
phrase "in the context of X" from our definition of possibility, since it will
always be understood to exist.
Using the above definition, we can interpret the measures of possibility
theory from a probabilistic point of view.


Theorem 14 Let x̃ be a random vector with probabilistic based possibility
distribution function μ_x̃. The function pos_x̃ : P(X) → [0,1] given by

pos_x̃(A) = prob(⋁_{a∈A} (a is possible))   (2.44)

is the possibility measure associated with the possibility distribution function
μ_x̃ (2.24).

Proof:
From Theorem 7 (using the linear ordering of the events (a is possi-
ble) and the fact that the probability of the union is the sup of the probabili-
ties, from the third property of a possibility nest), prob(⋁_{a∈A} (a is possible)) =
prob(⋃_{a∈A} (a is possible)) = sup_{a∈A} {prob(a is possible)} = sup_{a∈A} {μ_x̃(a)},
and the theorem follows from Definition 3. □

In other words, pos(A) is the probability that at least one element of
A is possible.

Theorem 15 Let x̃ be a random vector with probabilistic based possibility
distribution μ_x̃. The function nec_x̃ : P(X) → [0,1] given by

nec_x̃(A) = prob(⋀_{a∈A^c} (a is impossible))   (2.45)

is the necessity measure associated with the possibility distribution function
μ_x̃.

Proof:
Again, from Theorem 7,

prob(⋀_{a∈A^c} (a is impossible)) = 1 - prob(⋁_{a∈A^c} (a is possible))
                                 = 1 - sup_{a∈A^c} {μ_x̃(a)}
                                 = 1 - pos_x̃(A^c).

Then (2.7) applies. □

In other words, nec_x̃(A) is the probability that every element that is
in A^c is impossible, i.e. that x̃ must be in A.
Two other set functions found in the literature on possibility theory
also have probabilistic interpretations.
The function Δ_x̃ : P(X) → [0,1] is called an uncertainty measure
and is defined as (see Kruse et al. [23])

Δ_x̃(A) = inf{μ_x̃(a) | a ∈ A} = prob(⋀_{a∈A} (a is possible)).

That is, every element of A has at least probability Δ_x̃(A) of being a possible
candidate for x̃.
The function ∇_x̃ : P(X) → [0,1] is called a guaranteed possibility
measure and is defined as (see Kruse et al. [23])

∇_x̃(A) = 1 - Δ_x̃(A^c) = prob(⋁_{a∈A^c} (a is impossible)).   (2.46)

Thus, every element that is not in A has probability at least 1 - ∇_x̃(A) of
being a possible candidate for x̃; equivalently, every element with probability
less than 1 - ∇_x̃(A) of being a possible candidate for x̃ is in A.
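For finite distributions, all four measures reduce to sups and infs of membership values. The sketch below uses our naming and takes the universe to be the keys of the membership dictionary (an assumption of the sketch); it is checked on the left contextual distribution for ỹ = x̃^1 + x̃^2 built in Example 5.

```python
def pos(mu, A):
    """Probability that at least one element of A is possible."""
    return max((mu.get(a, 0.0) for a in A), default=0.0)

def nec(mu, A):
    """Probability that every element outside A is impossible."""
    return 1.0 - max((m for v, m in mu.items() if v not in A), default=0.0)

def uncertainty(mu, A):
    """Probability that every element of A is possible (the measure Delta)."""
    return min((mu.get(a, 0.0) for a in A), default=1.0)

def guaranteed(mu, A):
    """Probability that at least one element outside A is impossible (nabla)."""
    return 1.0 - uncertainty(mu, {v for v in mu if v not in A})

# Left contextual distribution for y~ = x~1 + x~2 (Example 5).
L_muy = {2: 1.0, 3: .925, 4: .775, 5: .4, 6: .3, 7: .125}
```

For example, pos(L_muy, {2,3,4}) = 1 and nec(L_muy, {2,3,4}) = .6 bracket the true probability P(ỹ ∈ {2,3,4}) = .6875 from Example 1's tables, as Theorem 17 in the next section requires.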
2.7 The Information Contained in a
Probabilistic Based Possibility Distribution

Given a probabilistic based possibility distribution for a random vari-
able, what can be deduced about the probability distribution of the variable?
How can the possibility distribution be used to the advantage of a decision
maker? This section examines these questions for the case where X consists of a
finite collection of random variables and where we are interested in the random
variable consisting of a function of these random variables.
Let F_x̃ be the cumulative distribution function (c.d.f.) for random
variable x̃. We will denote the support of F_x̃ by supp(F_x̃), i.e.

supp(F_x̃) = {x ∈ R | dF_x̃(x) > 0}.

We will use E(x̃) to denote the expected value of x̃, i.e. E(x̃) = ∫ x dF_x̃(x),
if it exists.
Theorem 16 Let ỹ be a random variable with probabilistic based possibility
distribution function μ_ỹ. Let F_ỹ be the c.d.f. for ỹ. Then there exists {F_α | α ∈ (0,1)}
where:
(1) F_α is a c.d.f.,
(2) supp(F_α) ⊆ ỹ_α,
(3) if ỹ_α = ỹ_β then F_α = F_β,
(4) F_ỹ(y) = ∫_0^1 F_α(y) dα, and
(5) for all measurable A, ∫_A dF_ỹ = ∫_0^1 ∫_A dF_α dα.

Proof:
Associated with ỹ is the contextual possibility nest (X, X, >_c). Let
F_ω be the cumulative distribution function for ω as a random vector on Ψ (see
(2.12)).
(1)
Let F_α(y) = F_ỹ(y | ω = γ(α)), where γ(α) =_c inf{(x̃^i, x) ∈ Ψ | F_ω((x̃^i, x)) ≥ α}
and inf is with respect to the ordering >_c (note, if F_ω is continuous then
γ(α) = F_ω^{-1}(α)). This is well defined since F_ω being right continuous implies that
the inf is achieved for some γ(α) ∈ Ψ. We also know that the set is non-empty
since sup_Ψ F_ω = 1. We also know that this c.d.f. exists from Proposition 4.32 of [2].

(2) Recall that

ỹ_α = {y ∈ R | μ_ỹ(y) ≥ α} = {y ∈ R | prob(y is possible) ≥ α}
    = {y | ∃ ∏_{i=1}^N a_i ∈ X^N with y = f(∏_{i=1}^N a_i) and prob((x̃^i, a_i) >_c ω) ≥ α for all i}
    = {y | ∃ ∏_{i=1}^N a_i ∈ X^N with y = f(∏_{i=1}^N a_i) and F_ω((x̃^i, a_i)) ≥ α for all i}.

If ω = γ(α), the realization ∏_{i=1}^N x_i satisfies (x̃^i, x_i) >_c γ(α) for all i (with
equality for some j, since ω = min{(x̃^i, x_i) | i = 1 to N}), so F_ω((x̃^i, x_i)) ≥
F_ω(γ(α)) ≥ α for all i, and hence the realization of ỹ lies in ỹ_α. So if y ∉ ỹ_α
then y is not a possible realization given ω = γ(α), and dF_ỹ(y | ω = γ(α)) = 0;
thus supp(F_α) ⊆ ỹ_α.

(3) If ỹ_α = ỹ_β then γ(α) = γ(β), so F_α = F_β.

(4) Note that if α is uniformly distributed over (0,1), then the random ele-
ment γ(α) of Ψ has the distribution of ω. To see this, consider

F_ω(γ(α)) = prob(ω ≤_c inf{(x̃^i, x) | F_ω((x̃^i, x)) ≥ α}),

so by applying property three of a possibility nest we have

prob(ω ≤_c γ(α)) = inf{prob(ω ≤_c (x̃^i, x)) | F_ω((x̃^i, x)) ≥ α}.

Let β = inf{F_ω(γ) | F_ω(γ) ≥ F_ω(γ(α))} = F_ω(γ(α)). Then prob(α ∈ [0, β]) = F_ω(γ(α)).
Therefore, conditioning on ω,

F_ỹ(y) = ∫ F_ỹ(y | ω) dF_ω = ∫_0^1 F_ỹ(y | γ(α)) dα = ∫_0^1 F_α(y) dα.

(5) For any measurable set A, conditioning on ω, ∫_A dF_ỹ = ∫ (∫_A dF_ỹ(· | ω)) dF_ω =
∫_0^1 ∫_A dF_α dα. □
Example 7 Consider the left possibility distribution constructed for ỹ = x̃₁ +
x̃₂ in Example 5. The conditional probabilities are as follows:
Table of F_α(r)
α                  r=2     r=3     r=4     r=5     r=6     r=7
.925 < α ≤ 1       1       1       1       1       1       1
.775 < α ≤ .925    0       1       1       1       1       1
.4 < α ≤ .775      0       .3333   1       1       1       1
.3 < α ≤ .4        0       0       .3750   1       1       1
.125 < α ≤ .3      0       0       .2857   .8571   1       1
0 < α ≤ .125       0       0       0       .3000   .8000   1
∫_0^1 F_α(r)dα     .0750   .3500   .6875   .8875   .9750   1
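As a numerical check (a sketch, not part of the thesis), the bottom row ∫_0^1 F_α(r) dα of the table can be reproduced by weighting each row of conditional values by the length of its α-band; the result matches the c.d.f. of ỹ = x̃₁ + x̃₂ obtained from the convolution probabilities of Example 8:

```python
# Numeric check of Theorem 16(4) for the table above: integrating the
# conditional c.d.f.s F_alpha over alpha recovers the c.d.f. of y.
widths = [1 - .925, .925 - .775, .775 - .4, .4 - .3, .3 - .125, .125]
# rows of F_alpha(r) for r = 2..7, one row per alpha band (top band first)
F = [
    [1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1],
    [0, 1/3, 1, 1, 1, 1],
    [0, 0, 3/8, 1, 1, 1],
    [0, 0, 2/7, 6/7, 1, 1],
    [0, 0, 0, .3, .8, 1],
]
Fy = [sum(w * row[j] for w, row in zip(widths, F)) for j in range(6)]
# exact c.d.f. of y = x1 + x2 from the convolution probabilities of Example 8
probs = [.075, .275, .3375, .2, .0875, .025]
cdf = [sum(probs[:k + 1]) for k in range(6)]
print([round(v, 4) for v in Fy])  # [0.075, 0.35, 0.6875, 0.8875, 0.975, 1.0]
```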


This representation of the cumulative distribution function for y as a
parametrized family of conditional probability distributions allows us to esti-
mate various functions of the underlying probability distribution. The following
theorem shows that the measures pos and nec provide upper and lower bounds
for the probability measure.
Theorem 17 Let pos and nec be the possibility measure and its associated
necessity measure for a probabilistic based possibility distribution for random
variable x̃. Let A ⊆ X be a measurable set. Then
nec(A) ≤ prob(A) ≤ pos(A). (2.47)
Proof:
Consider the set {α ∈ (0,1] | A ∩ x̃_α ≠ ∅}, where x̃_α is the α-cut with
respect to the possibility distribution. The case when this set is empty is trivial
since then nec(A) = prob(A) = pos(A) = 0. Assume the set is not empty.
We first prove that prob(A) ≤ pos(A). Let γ = sup {α ∈ (0,1] | A ∩ x̃_α ≠ ∅}.
From Theorem 16, prob(A) = ∫_A dF_x̃ = ∫_0^1 ∫_A dF_α dα and supp(F_α) ⊆ x̃_α. So if
A ∩ x̃_α = ∅ then ∫_A dF_α = 0. Then prob(A) = ∫_0^γ ∫_A dF_α dα. This integral is
maximized if ∀α ≤ γ, supp(F_α) ⊆ A, in which case ∫_A dF_α = 1 and prob(A) =
∫_0^γ dα = γ. Therefore prob(A) ≤ γ. But
pos(A) = sup {μ_x̃(a) | a ∈ A}
= sup {sup {α | a ∈ x̃_α} | a ∈ A}
= sup {α | A ∩ x̃_α ≠ ∅}
= γ.
We now show that nec(A) ≤ prob(A). Let γ = 1 − sup {α | A^c ∩ x̃_α ≠ ∅}. Using
similar arguments as above, the probability of A is minimized if ∀α such that
A^c ∩ x̃_α ≠ ∅ we have supp(F_α) ∩ A = ∅, where A^c is the complement of A. Then
∫_A dF_x̃(x) = ∫_0^1 ∫_A dF_α(x) dα = ∫_{1−γ}^1 dα = 1 − (1−γ) = γ. Therefore γ ≤ prob(A).
But nec(A) = 1 − pos(A^c) = 1 − sup {α | A^c ∩ x̃_α ≠ ∅} = γ. □
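The inequalities of Theorem 17 can be illustrated numerically (a sketch using the left possibility distribution and the probabilities for ỹ = x̃₁ + x̃₂ from Examples 7 and 8; the test sets A are arbitrary choices):

```python
# Sketch: nec(A) <= prob(A) <= pos(A) for the discrete distribution of
# y = x1 + x2, with mu_y read off the left-distribution alpha-cuts.
mu   = {2: 1, 3: .925, 4: .775, 5: .4, 6: .3, 7: .125}
prob = {2: .075, 3: .275, 4: .3375, 5: .2, 6: .0875, 7: .025}

def pos(A): return max((mu[r] for r in A), default=0)
def nec(A): return 1 - pos([r for r in mu if r not in A])  # 1 - pos(complement)
def p(A):   return sum(prob[r] for r in A)

for A in [{4, 5}, {2, 3, 4}, {5, 6, 7}]:
    assert nec(A) <= p(A) <= pos(A)
print(nec({2, 3, 4}), round(p({2, 3, 4}), 4), pos({2, 3, 4}))  # 0.6 0.6875 1
```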
This bound can be tightened.
Corollary 3 Let x̃ be a random variable for which ∃ a probabilistic based
possibility distribution. Let (X̃, X, ≥_c) be the associated possibility nest. If
P is the set of all probabilistic based possibility distributions corresponding to
all orderings ≥_c of X̃ × X resulting in a possibility nest, then
sup_{μ∈P} nec_μ(A) ≤ prob(A) ≤ inf_{μ∈P} pos_μ(A) (2.48)
where pos_μ and nec_μ are the possibility measure and its associated necessity
measure for the possibility distribution μ.
We can use a probabilistic based possibility distribution to calculate
a bound on the expected value of the random variable as follows.
Theorem 18 Let x̃ be a random variable with probabilistic based possibility
distribution μ_x̃ with α-cuts x̃_α and c.d.f. F_x̃ for which supp(F_x̃) is bounded.
Then
E(x̃) = ∫_0^1 E_α(x) dα (2.49)
and
∫_0^1 inf x̃_α dα ≤ E(x̃) ≤ ∫_0^1 sup x̃_α dα (2.50)
where E_α(x) is the expected value of a random variable x with c.d.f. F_α of
Theorem 16.
Proof:
For the first result, by conditioning on ω̃ as a random element of
Γ and using the results of Theorem 16, we have E(x̃) = E(E(x̃ | ω̃)) =
∫_Γ (∫_{−∞}^∞ x dF_x̃(· | ω)) dF_ω̃ = ∫_0^1 (∫_{−∞}^∞ x dF_α) dα.
For the second result, note first that the integrals are well defined because
inf x̃_α and sup x̃_α are bounded (supp(F_x̃) is assumed bounded) and each is
monotonic as a function of α. Then since supp(F_α) ⊆ x̃_α, it must hold that
∀α, inf x̃_α ≤ E_α(x) ≤ sup x̃_α. □
The following theorem is useful for estimating the variance of the
distribution. If x̃ is a random variable, let Var(x̃) be the variance of x̃.
Theorem 19 Let x̃ be a random variable with c.d.f. F_x̃ for which supp(F_x̃) is
bounded. Assume ∃ a probabilistic based possibility distribution for x̃. Then
Var(x̃) = ∫_0^1 E_α(x²) dα − (∫_0^1 E_α(x) dα)² (2.51)
where E_α(x) is the expected value of a random variable x with c.d.f. F_α of
Theorem 16.
Proof:
Conditioning on ω̃ as a random element of Γ and using the results of
Theorem 16 we have
E(x̃²) = E(E(x̃² | ω̃)) = ∫_Γ (∫_{−∞}^∞ x² dF_x̃(· | ω)) dF_ω̃
= ∫_0^1 (∫_{−∞}^∞ x² dF_α) dα = ∫_0^1 E_α(x²) dα.
Then since Var(x̃) = E(x̃²) − E(x̃)² = ∫_0^1 E_α(x²) dα − (∫_0^1 E_α(x) dα)², the
theorem follows. □
For any given α, we may not wish to calculate, or may not be able to
calculate, the probability distribution F_α. Instead, an approximate distribution
can be used. A simple approximation for F_α is to assume it is uniformly
distributed over x̃_α. For example, assume that x̃_α = [x_α^−, x_α^+], a closed interval
on the real line. Then the expected value of x̃ can be approximated as
E(x̃) = ∫_0^1 E_α(x) dα ≈ (1/2) ∫_0^1 (x_α^− + x_α^+) dα (2.52)
and the variance can be approximated as
Var(x̃) ≈ (1/3) ∫_0^1 ((x_α^−)² + x_α^− x_α^+ + (x_α^+)²) dα − (1/4) (∫_0^1 (x_α^− + x_α^+) dα)² (2.53)
where we used the fact that x uniformly distributed over [x_α^−, x_α^+] implies that
E(x²) = (1/3)((x_α^−)² + x_α^− x_α^+ + (x_α^+)²) and E(x) = (1/2)(x_α^− + x_α^+) and Var(x) = E(x²) −
(E(x))². We will not examine the estimates of the variance using possibility
distributions further in this thesis. This is an area for future research. From
(2.50), we see that this estimate of the expected value is simply the midpoint
of the upper and lower bounds on the expected value that can be determined
from the possibility distribution.
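The approximations (2.52) and (2.53) can be sketched numerically. The α-cuts [2 + α, 4 − α] below are an assumed triangular example, not taken from the thesis:

```python
# Sketch of the uniform-on-cut approximations (2.52)-(2.53), applied to a
# hypothetical triangular fuzzy number with alpha-cuts [2 + a, 4 - a].
def lo(a): return 2 + a      # x_alpha^-  (assumed example)
def hi(a): return 4 - a      # x_alpha^+

n = 100000
mids = [(k + 0.5) / n for k in range(n)]            # midpoint rule on [0,1]
E   = sum((lo(a) + hi(a)) / 2 for a in mids) / n    # (2.52)
Ex2 = sum((lo(a)**2 + lo(a)*hi(a) + hi(a)**2) / 3 for a in mids) / n
Var = Ex2 - E**2                                    # (2.53)
print(round(E, 4), round(Var, 4))                   # 3.0 0.1111
```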
Given a single possibility distribution for a random variable, the es-
timate of the expected value in (2.52) suggests the following functional.
Definition 16 Let x̃ be a random variable with bounded support. Let x̃_α be
the α-cut of a possibility distribution for x̃. Define the expected average of
x̃ to be the functional
EA(x̃) = (1/2) ∫_0^1 (x_α^− + x_α^+) dα. (2.54)
Note that every ordering of X̃ × X that satisfies the properties of a pos-
sibility nest will produce a different possibility distribution for x̃, and each
possibility distribution for x̃ produces a closed interval bounding the expected
value of x̃ (2.50). Therefore, the bound on E(x̃) can be tightened.
Let ỹ be the random variable f(×_{i=1}^N x̃_i). If P is the set of all probabilis-
tic based possibility distributions corresponding to all orderings ≥_c of X̃ × X
resulting in a possibility nest, then
E(ỹ) ∈ ∩_{μ∈P} [∫_0^1 inf(ỹ_α) dα, ∫_0^1 sup(ỹ_α) dα]. (2.55)
This suggests another functional estimate of the expected value,
but first some additional theory is needed. In the examples presented earlier,
we developed left, right possibility distributions. We saw that the possibility
measure and its associated necessity measure give upper and lower bounds
on the probability measure. In the next theorem, we show that left, right
probabilistic based possibility distributions for a random variable give an upper
and lower bound on the c.d.f. for the random variable.
Definition 17 Let x̃ be a real valued random variable with bounded support.
A left possibility distribution for x̃ is a possibility distribution Lμ_x̃ such
that ∀x,y ∈ R, x < y implies Lμ_x̃(x) ≥ Lμ_x̃(y) (i.e. Lμ_x̃ is nonincreasing).
A right possibility distribution for x̃ is a possibility distribution Rμ_x̃ such
that ∀x,y ∈ R, x < y implies Rμ_x̃(x) ≤ Rμ_x̃(y) (i.e. Rμ_x̃ is nondecreasing).
Theorem 20 For any closed interval [a,b], pos_{Lμ}([a,b]) = Lμ_x̃(a) and
pos_{Rμ}([a,b]) = Rμ_x̃(b).
Proof: This is clear since (for example) pos_{Lμ}([a,b]) = sup {Lμ_x̃(x) |
x ∈ [a,b]} and Lμ_x̃ nonincreasing implies Lμ_x̃(a) is an upper bound on this set.
□
Theorem 21 Let Lμ_x̃ and Rμ_x̃ be probabilistic based possibility distributions
for random variable x̃ with c.d.f. F_x̃, where supp(F_x̃) is bounded. Then
1 − Lμ_x̃(x) ≤ F_x̃(x) ≤ Rμ_x̃(x). (2.56)
Proof:
Note that Lμ_x̃(x) gives the possibility that x̃ ∈ [x, x_{0+}^+], so by Theorem
17, prob(x̃ ∈ (x, x_{0+}^+]) ≤ prob(x̃ ∈ [x, x_{0+}^+]) ≤ pos([x, x_{0+}^+]) = Lμ_x̃(x). But
prob(x̃ ∈ (x, x_{0+}^+]) = 1 − F_x̃(x), so F_x̃(x) ≥ 1 − Lμ_x̃(x).
On the other hand, Rμ_x̃(x) gives the possibility that x̃ ∈ [x_{0+}^−, x], which by Theorem
17 implies that F_x̃(x) = prob(x̃ ∈ [x_{0+}^−, x]) ≤ Rμ_x̃(x). □
We can use this result to estimate the distribution function of x̃ and
the expected value of x̃.
Theorem 22 Let Lμ_x̃ and Rμ_x̃ be probabilistic based possibility distri-
butions for random variable x̃ with bounded support. If FE(x) is given by the
formula
FE(x) = lim_{ε→0+} (1/2)(1 − Lμ_x̃(x + ε) + Rμ_x̃(x + ε)) (2.57)
then FE(x) is a cumulative distribution function and the expected value of the
random variable X represented by FE(x) is given by the formula
E(X) = (1/2) ∫_0^1 (Lx_α^+ + Rx_α^−) dα. (2.58)
Proof: Note that Rμ_x̃ and Lμ_x̃ are monotone, so the right-hand
limit exists and FE(x) is well defined.
Let x < y. Then Rμ_x̃(x) ≤ Rμ_x̃(y) and 1 − Lμ_x̃(x) ≤ 1 − Lμ_x̃(y). Then
(1/2)(1 − Lμ_x̃(x) + Rμ_x̃(x)) ≤ (1/2)(1 − Lμ_x̃(y) + Rμ_x̃(y)),
thus FE is monotone increasing. Also, Lμ_x̃(x) → 0 as x → ∞ and Lμ_x̃(x) → 1
as x → −∞, and Rμ_x̃(x) → 1 as x → ∞ and Rμ_x̃(x) → 0 as x → −∞. This
gives FE(x) → 1 as x → ∞ and FE(x) → 0 as x → −∞. FE is continuous from
the right by definition. Therefore, FE is a distribution function. Let X be the
random variable represented by FE. Then E(X) = x_{0+}^− + ∫_{x_{0+}^−}^{x_{0+}^+} (1 − FE(x)) dx =
x_{0+}^− + ∫_{x_{0+}^−}^{x_{0+}^+} (1 − (1/2)(1 − Lμ_x̃(x) + Rμ_x̃(x))) dx, which reduces to (2.58) since
∫_{x_{0+}^−}^{x_{0+}^+} Lμ_x̃(x) dx = ∫_0^1 (Lx_α^+) dα − x_{0+}^−
and ∫_{x_{0+}^−}^{x_{0+}^+} Rμ_x̃(x) dx = x_{0+}^+ − ∫_0^1 (Rx_α^−) dα.
Substitution gives the desired result. Note that these last two relationships
follow from consideration of the area of the rectangle with base [x_{0+}^−, x_{0+}^+] and
height one and the areas under the curves Lμ_x̃(x) and Rμ_x̃(x) in R × [0,1] and
the curves Lx_α^+ and Rx_α^− in [0,1] × R. □
This gives us an alternative estimate of the expected value for those
situations where left, right probabilistic based possibility distributions are avail-
able or computable.
Definition 18 Let Lμ_x̃ and Rμ_x̃ be possibility distributions for random
variable x̃ with bounded support. Define the estimated expectation to be
the functional
EE(x̃) = (1/2) ∫_0^1 (Lx_α^+ + Rx_α^−) dα. (2.59)
It turns out that in some special situations this estimate of the ex-
pected value is exact. Defining these circumstances is an area of future research.
Example 8 For Example 1, the probability distributions for x̃₁, x̃₂ and ỹ =
x̃₁ + x̃₂ are
r    prob(x̃₁=r)   prob(x̃₂=r)   prob(ỹ=r)
1    .25          .3           0
2    .5           .5           (.25)(.3) = .075
3    .125         .2           (.25)(.5) + (.5)(.3) = .275
4    .125         0            (.5)(.5) + (.125)(.3) + (.25)(.2) = .3375
5    0            0            (.125)(.5) + (.5)(.2) + (.125)(.3) = .2
6    0            0            (.125)(.5) + (.125)(.2) = .0875
7    0            0            (.125)(.2) = .025
The expected value of y is
E(y) = 2(.075) + 3(.275) + 4(.3375) + 5(.2) + 6(.0875) + 7(.025) = 4.025.
Recall that the left, right distributions for ỹ are:
α                 Lỹ_α
.925 < α ≤ 1      {2}
.775 < α ≤ .925   {2,3}
.4 < α ≤ .775     {2,3,4}
.3 < α ≤ .4       {2,3,4,5}
.125 < α ≤ .3     {2,3,4,5,6}
0 < α ≤ .125      {2,3,4,5,6,7}

α                 Rỹ_α
.975 < α ≤ 1      {7}
.95 < α ≤ .975    {6,7}
.825 < α ≤ .95    {5,6,7}
.475 < α ≤ .825   {4,5,6,7}
.25 < α ≤ .475    {3,4,5,6,7}
0 < α ≤ .25       {2,3,4,5,6,7}
The upper estimate of the expected value, ∫_0^1 sup(Lỹ_α) dα, is
2(1 − .925) + 3(.925 − .775) + 4(.775 − .4)
+ 5(.4 − .3) + 6(.3 − .125) + 7(.125) = 4.525.
The lower estimate of the expected value, ∫_0^1 inf(Rỹ_α) dα, is
7(1 − .975) + 6(.975 − .95)
+ 5(.95 − .825) + 4(.825 − .475) + 3(.475 − .25) + 2(.25) = 3.525.
The estimated expected value is:
EE(ỹ) = (1/2)(4.525 + 3.525) = 4.025.
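The upper and lower estimates and EE(ỹ) above can be recomputed directly from the α-band tables (a sketch; the band endpoints are those listed above):

```python
# Sketch: recomputing the upper/lower estimates and EE(y) for Example 8
# from the alpha-band tables of the left and right distributions.
L_bands = [(.925, 1, 2), (.775, .925, 3), (.4, .775, 4),
           (.3, .4, 5), (.125, .3, 6), (0, .125, 7)]    # (lo, hi, sup of L-cut)
R_bands = [(.975, 1, 7), (.95, .975, 6), (.825, .95, 5),
           (.475, .825, 4), (.25, .475, 3), (0, .25, 2)]  # (lo, hi, inf of R-cut)
upper = sum((b - a) * v for a, b, v in L_bands)
lower = sum((b - a) * v for a, b, v in R_bands)
EE = (upper + lower) / 2
print(round(upper, 3), round(lower, 3), round(EE, 3))   # 4.525 3.525 4.025
```

Here EE(ỹ) happens to equal the exact expected value 4.025 computed above.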
The EE functional does not always equal the actual expected value. For
example, consider the random variable ỹ = x̃₁ · x̃₂.
r    prob(x̃₁=r)   prob(x̃₂=r)   prob(ỹ=r)
1    .25          .3           (.25)(.3) = .075
2    .5           .5           (.5)(.3) + (.25)(.5) = .275
3    .125         .2           (.125)(.3) + (.25)(.2) = .0875
4    .125         0            (.5)(.5) + (.125)(.3) = .2875
6    0            0            (.125)(.5) + (.5)(.2) = .1625
8    0            0            (.125)(.5) = .0625
9    0            0            (.125)(.2) = .025
12   0            0            (.125)(.2) = .025
The expected value of ỹ is
1(.075) + 2(.275)
+ 3(.0875) + 4(.2875) + 6(.1625) + 8(.0625) + 9(.025) + 12(.025) = 4.0375.
The left, right distributions for ỹ and the upper, lower estimates of the expected
value are:
α                 Lx̃₁_α        Lx̃₂_α      Lỹ_α
.925 < α ≤ 1      {1}          {1}        {1}
.775 < α ≤ .925   {1,2}        {1}        {1,2}
.4 < α ≤ .775     {1,2}        {1,2}      {1,2,4}
.3 < α ≤ .4       {1,2,3}      {1,2}      {1,2,3,4,6}
.125 < α ≤ .3     {1,2,3}      {1,2,3}    {1,2,3,4,6,9}
0 < α ≤ .125      {1,2,3,4}    {1,2,3}    {1,2,3,4,6,8,9,12}
The upper estimate of the expected value, ∫_0^1 sup(Lỹ_α) dα, is
1(1 − .925) + 2(.925 − .775)
+ 4(.775 − .4) + 6(.4 − .3) + 9(.3 − .125) + 12(.125) = 5.55.
α                 Rx̃₁_α        Rx̃₂_α      Rỹ_α
.975 < α ≤ 1      {4}          {3}        {12}
.95 < α ≤ .975    {4,3}        {3}        {9,12}
.825 < α ≤ .95    {4,3}        {3,2}      {6,8,9,12}
.475 < α ≤ .825   {4,3,2}      {3,2}      {4,6,8,9,12}
.25 < α ≤ .475    {4,3,2}      {3,2,1}    {2,3,4,6,8,9,12}
0 < α ≤ .25       {4,3,2,1}    {3,2,1}    {1,2,3,4,6,8,9,12}
The lower estimate of the expected value, ∫_0^1 inf(Rỹ_α) dα, is
12(1 − .975) + 9(.975 − .95)
+ 6(.95 − .825) + 4(.825 − .475) + 2(.475 − .25) + 1(.25) = 3.375.
Then EE(ỹ) = (1/2)(3.375 + 5.55) = 4.4625.
Note that in this case we do not get the expected value.
Example 9 The distribution of ỹ for Example 2 was shown in the introduction
to this thesis, where we saw that E(ỹ) = −.8333 = EE(ỹ). Using the upper
possibility distribution of Example 6, the EA estimate of the expected value is,
after some simplification,
EA(ỹ) = −.73333.
An upper and lower bound on the distribution function of ỹ, derived from the
left, right possibility distributions, is shown in Figure 2.3.
Figure 2.3. Graph of upper and lower bound on cumulative distribution of
ỹ = (x̃₁)² − x̃₂.
Example 10 Consider Z̃ = X̃²Ỹ where X̃ and Ỹ have the distributions of
Example 2. Then E(Z̃) = E(X̃²)E(Ỹ), with E(X̃²) = ∫_1^2 x²(x − 1) dx + ∫_2^3 x²(3 −
x) dx = 4.1667 and E(Ỹ) = ∫_4^5 y(y − 4) dy + ∫_5^6 y(6 − y) dy = 5.0, so E(Z̃) =
(4.1667)(5) = 20.834.
The EE functional estimate is calculated from the left, right possibility
distributions for Z̃, whose α-cut endpoints are obtained by inverting the
c.d.f.s of X̃ and Ỹ (the resulting endpoint formulas involve the square roots
arising from these inversions).
The integral over the left endpoints of the right distribution
(lower bound) is
∫_0^1 inf(RZ̃_β) dβ = 15.8142.
The integral over the right endpoints of the left distribution (upper bound) is
∫_0^1 sup(LZ̃_β) dβ = 27.1859,
giving EE(Z̃) = (1/2)(15.8142 + 27.1859) = 21.5.
Using the EA functional estimate, with the possibility distribution for Z̃ formed
from the upper possibility distributions for X̃ and Ỹ, we obtain
EA(Z̃) = 22.4.
Example 11 Let f(x) = 1/(1+x) and let x̃ have density e^(−x) for x ∈ [0,20] (we
truncate the distribution so the support is bounded). Note that the support of x̃ is
the interval [0,20]. Then E(ỹ) = ∫_0^20 e^(−x)/(1+x) dx = .59635.
Let Rx̃_α = [20α, 20] for α ∈ [0,1]. Then prob(X̃ ∈ [20α, 20]) = ∫_{20α}^{20} e^(−x) dx =
−e^(−20) + e^(−20α), so β = 1 − (−e^(−20) + e^(−20α)) = 1 + e^(−20) − e^(−20α) and α =
−(1/20) ln(−β + 1 + e^(−20)). Therefore Rx̃_β = [20(−(1/20) ln(−β + 1 + e^(−20))), 20], or
Rx̃_β = [−ln(−β + 1 + e^(−20)), 20] for β ∈ [0,1].
Now let Lx̃_α = [0, 20 − 20α]. Then prob(X̃ ∈ [0, 20 − 20α]) = ∫_0^{20−20α} e^(−x) dx =
−e^(−20+20α) + 1, so β = 1 − (−e^(−20+20α) + 1) = e^(−20+20α) and α = 1 + (1/20) ln β.
Therefore Lx̃_β = [0, 20 − 20(1 + (1/20) ln β)], or
Lx̃_β = [0, −ln(β)].
Now for ỹ = f(x̃), note that for an interval [a,b], f([a,b]) = [1/(1+b), 1/(1+a)].
Since f is decreasing, applying f to Rx̃_β gives cuts for a left possibility
distribution for ỹ,
Lỹ_β = [1/21, (1 − ln(−β + 1 + e^(−20)))^(−1)] for β ∈ [0,1].
An upper bound on E(ỹ) is ∫_0^1 (1 − ln(−β + 1 + e^(−20)))^(−1) dβ = .59635.
Similarly, applying f to Lx̃_β gives cuts for a right possibility distribution for ỹ,
Rỹ_β = [(1 − ln(β))^(−1), 1].
A lower bound on E(ỹ) is ∫_0^1 (1 − ln(β))^(−1) dβ = .59635.
The upper bound equals the lower bound equals the expected value, as it must
when there is a single random variable in the context. In this case left and
right possibility distributions are just cumulative probability distributions.
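The three integrals of this example can be checked numerically (a sketch; the midpoint-rule integrator and the truncation at 20 follow the example's setup):

```python
import math

# Sketch: E(y), the lower bound, and the upper bound of Example 11 all
# agree numerically, as the text asserts.
def midpoint(f, a, b, n=200000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

Ey    = midpoint(lambda x: math.exp(-x) / (1 + x), 0, 20)
lower = midpoint(lambda b: 1 / (1 - math.log(b)), 0, 1)
upper = midpoint(lambda b: 1 / (1 - math.log(1 + math.exp(-20) - b)), 0, 1)
print(Ey, lower, upper)  # all three are approximately 0.59635
```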
In the next chapter we examine single possibility distribution repre-
sentations of random variables and a norm on these representations motivated
by the EA functional. This allows us to examine the convergence properties
of estimating expected values using this functional. Future research will focus
on the EE functional and what it implies. We anticipate that both functionals
will ultimately have their place in applications.
3. The Space of Fuzzy Numbers
In this chapter we examine the space of fuzzy numbers. This space
includes possibility distributions for real valued random variables with bounded
supports. However, the space of fuzzy numbers is a more general concept.
3.1 Fuzzy Numbers
In this chapter the set of fuzzy sets over the real line is considered
(for example random variables represented by single possibility distributions).
Consideration will be further limited to fuzzy sets that capture the idea of
a fuzzy measurement of a real quantity. We wish to capture the idea of an
approximate number such as a number close to one. It is desirable to have
certain properties hold when defining fuzzy quantities and relationships on
them. This may be for intuitive as well as practical reasons. In particular, at
a given α-level of possibility, convexity offers a computational advantage. For
certain distributions it also has intuitive appeal. For example, if it is equally
likely that a measurement taking a value in R is x or y, then any z between x
and y should be of equal or greater possibility. It does not seem too restrictive
to expect that if there exists a quantity arbitrarily close to x that is at least
α possible, then x is at least α possible. Finally, the possible values that we
believe a single finite object can take should be bounded if we assume there exists
a single correct measurement in the underlying reality. The following standard
definition is given.
Recall that a fuzzy set x̃ is characterized by a membership function
μ_x̃ : X → [0,1]; x̃ is called normal if ∃ x ∈ X such that μ_x̃(x) = 1, and x̃_α = {x ∈
X | μ_x̃(x) ≥ α}.
Definition 19 (Klir&Yuan [21]) A fuzzy number, x̃, is a fuzzy subset of R
such that 1) x̃ is normal, 2) ∀α ∈ (0,1], x̃_α is a closed interval, and 3) x̃_{0+} is
bounded.
Two special types of fuzzy numbers are often found in the literature
as follows.
Definition 20 A trapezoidal fuzzy number is a fuzzy number x̃ charac-
terized by the quadruple (a,b,c,d), where a,b,c,d are real numbers with a < b < c < d
and x̃_α = [a + α(b − a), d + α(c − d)]. A triangular fuzzy number is a fuzzy
number x̃ characterized by the triple (a,b,c), where a,b,c are real numbers with
a < b < c and x̃_α = [a + α(b − a), c + α(b − c)].
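The α-cut formulas of Definition 20 can be written as a small helper (a sketch; the triple (2,3,4) is an arbitrary illustration, not from the thesis):

```python
# Sketch of the alpha-cut formulas in Definition 20.
def trap_cut(a, b, c, d, alpha):
    """Alpha-cut [a + alpha(b-a), d + alpha(c-d)] of the trapezoid (a,b,c,d)."""
    return (a + alpha * (b - a), d + alpha * (c - d))

def tri_cut(a, b, c, alpha):
    """Alpha-cut of the triangle (a,b,c); a trapezoid with the two middle
    parameters equal."""
    return trap_cut(a, b, b, c, alpha)

print(tri_cut(2, 3, 4, 0.5))  # (2.5, 3.5)
print(tri_cut(2, 3, 4, 1.0))  # (3.0, 3.0)
```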
Let £ be the space of fuzzy numbers. The extension principle is used
to define mathematics over £ (see Definition 13).
Definition 21 (Fuzzy Arithmetic) Let R be the real line, and let x̃ and ỹ be two
fuzzy numbers over R. Let * equal +, −, × or ÷. Define a fuzzy set for the
quantity x̃ * ỹ by the following formula:
μ_{x̃*ỹ}(c) = sup_{a*b=c} min(μ_x̃(a), μ_ỹ(b)) (3.1)
where sup ∅ = 0, and in the case where μ_ỹ(0) > 0, x̃ ÷ ỹ is left undefined.
Theorem 23 (for proof see Klir&Yuan [21]) £ is closed under the operations
of +, −, × and ÷ (provided the operation is defined).
Example 12 Consider the fuzzy numbers defined as follows:
μ_x̃(a) = a − 2 for a ∈ [2,3]; 4 − a for a ∈ [3,4]; 0 otherwise,
μ_ỹ(b) = b − 4 for b ∈ [4,5]; 6 − b for b ∈ [5,6]; 0 otherwise.
Then the fuzzy number z̃ = x̃ + ỹ is defined as follows:
μ_z̃(c) = .5(c − 6) for c ∈ [6,8]; .5(10 − c) for c ∈ [8,10]; 0 otherwise.
Applying Definition 12 to obtain a set representation for x̃ we get [2+α, 4−α]_{α∈[0,1]},
and similarly [4+α, 6−α]_{α∈[0,1]} for ỹ; the set representation for z̃ is
given by using interval arithmetic to arrive at [6+2α, 10−2α]_{α∈[0,1]}. However, as
mentioned before, fuzzy numbers are more general than interval numbers since
each fuzzy number is a weighted family of intervals (probability weighted when
the distribution is a possibility distribution as constructed in chapter two).
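The α-cut computation for Example 12 can be sketched as follows; `x_cut` and `y_cut` encode the set representations [2+α, 4−α] and [4+α, 6−α]:

```python
# Sketch: adding the fuzzy numbers of Example 12 alpha-cut by alpha-cut
# with interval arithmetic; the result should be [6 + 2a, 10 - 2a].
def x_cut(a): return (2 + a, 4 - a)
def y_cut(a): return (4 + a, 6 - a)

def add_cuts(cut1, cut2, a):
    (l1, u1), (l2, u2) = cut1(a), cut2(a)
    return (l1 + l2, u1 + u2)        # interval addition

print(add_cuts(x_cut, y_cut, 0.0))   # (6.0, 10.0)
print(add_cuts(x_cut, y_cut, 0.5))   # (7.0, 9.0)
print(add_cuts(x_cut, y_cut, 1.0))   # (8.0, 8.0)
```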
3.2 A Normed Space of Fuzzy Number
Equivalence Classes
Suppose that all a decision maker knows about some unknown quan-
tity is that it is an element of an interval [a,b]. The decision maker might
indicate the amount he would pay in exchange for the unknown amount. With
no additional information, the decision maker might assume a uniform prob-
ability distribution over [a,b] and choose the midpoint of the range, i.e. the
expected value. Another way the decision maker might approach the problem
is to find the number which balances the maximum possible gain against the
maximum possible loss. For example, if the decision maker pays 1.5 in ex-
change for the interval of possible values [1,2], then the largest possible gain of
.5 (2 − 1.5) is equal to the maximum possible loss of .5 (1.5 − 1). Using this logic,
the decision maker's utility for an interval of possible values is the midpoint
of the interval, and the decision maker would be neutral with respect to two
intervals with the same midpoint. The extension of this idea to fuzzy numbers
involves α-cuts, which is considered next.
Let £ = {x̃ | x̃ is a fuzzy number}. An equivalence relationship on £ ×
£ is defined as follows.
Definition 22 Let x̃_α = [x_α^−, x_α^+] and ỹ_α = [y_α^−, y_α^+]. Then x̃ ≡ ỹ iff ∀α ∈ (0,1]
∃ e_α ∈ R such that [x_α^− − e_α, x_α^+ + e_α] = [y_α^−, y_α^+].
It is clear that this is an equivalence relation (see Diamond [7]). If
a decision maker's utility for an interval is the midpoint of the interval, then
this decision maker is neutral with respect to every α-cut of all members
of an equivalence class. Let C be the partition of £ into equivalence classes
resulting from this relationship. If x̃ ∈ £ then <x̃> will denote the element
of C consisting of all fuzzy numbers equivalent to x̃. In other words, two
fuzzy numbers are equivalent if the midpoint of each α-cut is the same for
the two fuzzy sets.
Theorem 24 Let <x̃> ∈ C. Then ∃ m̃ ∈ <x̃> of minimal possibility, i.e.
if m̃_α = [m_α^−, m_α^+] and ỹ ∈ <x̃> with ỹ_α = [y_α^−, y_α^+], then ∀α ∈ [0,1], y_α^− ≤ m_α^− and
m_α^+ ≤ y_α^+.
Proof:
Let m̃_α = ∩_{ỹ∈<x̃>} [y_α^−, y_α^+]; then m̃_α is closed and convex and therefore
an interval. If β > α then [y_β^−, y_β^+] ⊆ [y_α^−, y_α^+], thus
∩_{ỹ∈<x̃>} [y_β^−, y_β^+] ⊆ ∩_{ỹ∈<x̃>} [y_α^−, y_α^+]
and m̃ is a fuzzy number with the desired property. □
Theorem 25 (Diamond [7]) C with addition and scalar multiplication of fuzzy
numbers forms a vector space over R.
Recall that for a given possibility distribution for the random variable
x̃, the function EA(x̃) of Definition 16 provides an estimate of the expected
value of x̃. Note that for all members of a given equivalence class <x̃> ∈ C,
this functional will result in the same estimate. Since EA(x̃) determines the
midpoint of the interval containing the actual expected value of x̃ when x̃ is a
random variable, this functional is consistent with the assumed utility of the
decision maker for an interval of possible values. This motivates the following
extension of this definition to C.
Definition 23 Let <x̃> ∈ C. The expected average of <x̃> is
EA(<x̃>) = (1/2) ∫_0^1 (x_α^− + x_α^+) dα (3.2)
where x̃ ∈ <x̃>.
Example 13 Let x̃ be the fuzzy number with set representation [1+α, 3−α]_{α∈[0,1]}.
Then EA(<x̃>) = (1/2) ∫_0^1 (1 + α + 3 − α) dα = ∫_0^1 2 dα = 2. This is not
surprising since the midpoint of each α-cut is 2.
Example 14 Let x̃ be the fuzzy number with set representation [2+α, 4−α²]_{α∈[0,1]}.
Then EA(<x̃>) = ∫_0^1 (1/2)(6 + α − α²) dα = 3.0833. The expected
average is to the right of 3, since the fuzzy number is weighted to the right of
3.
Example 15 Let x̃ be the fuzzy number with set representation equal to [1,2]
for α ∈ (1/2, 1] and [0,2] for α ∈ [0, 1/2]. Then EA(<x̃>) = ∫_{1/2}^1 1.5 dα + ∫_0^{1/2} 1 dα =
(1/2)(1.5) + (1/2)(1) = 1.25.
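The expected average of Definition 23 can be approximated numerically from any α-cut map (a sketch reproducing Examples 13-15 by midpoint-rule integration):

```python
# Sketch: EA(<x>) = (1/2) * integral_0^1 (x_alpha^- + x_alpha^+) d(alpha),
# approximated by the midpoint rule over the alpha-cut map.
def EA(cut, n=100000):
    total = 0.0
    for k in range(n):
        a = (k + 0.5) / n
        lo, hi = cut(a)
        total += lo + hi
    return total / (2 * n)

print(round(EA(lambda a: (1 + a, 3 - a)), 4))                 # 2.0   (Ex. 13)
print(round(EA(lambda a: (2 + a, 4 - a * a)), 4))             # 3.0833 (Ex. 14)
print(round(EA(lambda a: (1, 2) if a > 0.5 else (0, 2)), 4))  # 1.25  (Ex. 15)
```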
It has been shown that C is a vector space. A norm can be put on
this space using the expected utility concept.
Definition 24 On the vector space C, let
||<x̃>||_EA = (1/2) ∫_0^1 |x_α^− + x_α^+| dα (3.3)
where x̃ ∈ <x̃>.
This is well defined since x_α^− and x_α^+ are integrable on [0,1], since each
is monotonic (see 6.9 in Kolmogorov&Fomin [24]). This implies that x_α^− + x_α^+ is
integrable, which implies that |x_α^− + x_α^+| is integrable. ||<x̃>||_EA measures the
expected absolute value of the midpoint of each α-cut. As mentioned earlier,
the subscript EA is an abbreviation for expected average. A fuzzy number
that has membership value zero at all but a single point, where the membership
value is one, is called a crisp number. For a crisp number the expected average
norm reduces to the absolute value of the number.
Example 16 Consider the triangular fuzzy number x̃ = (−2,−1,1), i.e. its α-
cuts are given by x̃_α = [−2 + α, 1 − 2α]. Then ||<x̃>||_EA = (1/2) ∫_0^1 |−1 − α| dα = .75.
Next it will be shown that this function defines a norm on C. To do
this, the following lemma is needed.


Lemma 1 (Kruse et al. [23]) Let x̃ ∈ £. Consider x_α^− and x_α^+ as functions of
α. Then x_α^− and x_α^+ are continuous from the left.
Observe that these functions are monotonic on [0,1] and so are con-
tinuous except at countably many points in [0,1], and all discontinuities are of
the first kind (see Rudin [39] page 96).
Theorem 26 The function ||<x̃>||_EA defines a norm on C.
Proof:
(N1) ||<x̃>||_EA ≥ 0 since |x_α^− + x_α^+| ≥ 0 ∀α ∈ [0,1].
(N2) ||<x̃>||_EA = 0 ⇒ <x̃> = <0>.
Case (1): Assume for some γ > 0 we have |x_γ^− + x_γ^+| = r > 0. Since x_β^− → x_γ^− as
β → γ from below, and similarly for x_β^+, ∃ δ > 0 such that |x_β^− − x_γ^−| < r/3 and
|x_β^+ − x_γ^+| < r/3, and thus |x_β^− + x_β^+| > r/3 > 0 ∀β < γ with γ − β < δ.
Then ∫_{γ−δ}^γ |x_α^− + x_α^+| dα > 0 ⇒ ∫_0^1 |x_α^− + x_α^+| dα > 0, a contradiction. Thus
∀α > 0, |x_α^− + x_α^+| = 0 ⇒ x_α^− = −x_α^+, i.e. <x̃> = <0>.
Case (2): Let |x_{0+}^− + x_{0+}^+| = r > 0. Then for some γ arbitrarily close to 0 and
r ≥ s > 0, |x_γ^− + x_γ^+| = s > 0 and case (1) applies.
(N3) ||r<x̃>||_EA = |r| ||<x̃>||_EA. Clear since absolute value is a norm.
(N4) ||<x̃ + ỹ>||_EA ≤ ||<x̃>||_EA + ||<ỹ>||_EA:
||<x̃ + ỹ>||_EA = (1/2) ∫_0^1 |x_α^− + y_α^− + x_α^+ + y_α^+| dα
≤ (1/2) ∫_0^1 (|x_α^− + x_α^+| + |y_α^− + y_α^+|) dα = ||<x̃>||_EA + ||<ỹ>||_EA
since absolute value is a norm and by properties of the Lebesgue integral. □
3.3 An Isometry between (C, ||·||_EA) and BV[0,1] as a
Subspace of L₁[0,1]
For this section, the following definitions and results from the theory
of functions of bounded variation are needed.
Definition 25 (Kolmogorov&Fomin [24]) Let f : [a,b] → R. f is said to be of
bounded variation if ∃ C > 0 such that Σ_{i=1}^k |f(x_i) − f(x_{i−1})| ≤ C for every
partition a = x₀ < x₁ < … < x_k = b of [a,b]. We denote the set of all functions
of bounded variation on [a,b] by BV[a,b].
Definition 26 (Royden [38]) Let f : [a,b] → R be a function of bounded varia-
tion. The total, positive and negative variations of f on [a,b] are
V_a^b(f) = sup_p Σ_{i=1}^k |f(x_i) − f(x_{i−1})|,
P_a^b(f) = sup_p Σ_{i=1}^k max{0, f(x_i) − f(x_{i−1})},
N_a^b(f) = sup_p Σ_{i=1}^k max{0, f(x_{i−1}) − f(x_i)}
where p ranges over all partitions of [a,b].
Theorem 27 (Royden [38]) Let f : [a,b] → R be a function of bounded variation.
Then V_a^b(f) = P_a^b(f) + N_a^b(f) and f(b) − f(a) = P_a^b(f) − N_a^b(f).
Theorem 28 (Kolmogorov&Fomin [24]) Let f : [a,b] → R be a function of
bounded variation and a < b < c. Then V_a^c(f) = V_a^b(f) + V_b^c(f).
It is now shown that each element of C can be represented by an
equivalence class of functions from BV[0,1], the space of functions of bounded
variation on the interval [0,1], when BV[0,1] is considered a subspace of L₁[0,1] (the space
of Lebesgue integrable functions on [0,1] partitioned into functions which are
equivalent up to a set of measure zero). Using this representation, it will be
shown that C is isometric to this space under the EA-norm above.
Theorem 29 (Cadenas&Verdegay [5]) Let (A_α)_{α∈[0,1]} be a family of non-empty
closed nested intervals with α < β ⇒ A_β ⊆ A_α and A₀ = cls(∪_{γ∈(0,1]} A_γ). Define
μ_x̃(x) = sup{α | x ∈ A_α} with sup ∅ = 0. Then x̃ is a fuzzy number.
Definition 27 Let x̃ be a fuzzy number and let (A_α)_{α∈[0,1]} be a set represen-
tation of x̃ consisting of non-empty closed intervals with
A₀ = cls(∪_{γ∈(0,1)} A_γ).
Define f_x̃ : [0,1] → R, called a functional representative of x̃, by f_x̃(α) =
(a_α^− + a_α^+)/2 where A_α = [a_α^−, a_α^+].
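Definition 27 can be sketched as code: the functional representative is the α-cut midpoint map. The cuts [2+α, 4−α²] below are the set representation from Example 14, reused here as an illustration:

```python
# Sketch of Definition 27: the functional representative of a fuzzy number
# is the midpoint of each interval in its set representation.
def f_rep(cut):
    return lambda a: sum(cut(a)) / 2

f = f_rep(lambda a: (2 + a, 4 - a * a))   # cuts [2+a, 4-a^2] from Example 14
print(f(0.0), f(1.0))    # 3.0 3.0
print(round(f(0.5), 3))  # 3.125
```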
Theorem 30 Let f_x̃ be a functional representative of x̃. Then f_x̃ ∈ BV[0,1],
f_x̃(α−) = (β^− + β^+)/2 where [β^−, β^+] = ∩_{γ<α} A_γ, and f_x̃(α+) = (β^− + β^+)/2
where [β^−, β^+] = cls(∪_{γ>α} A_γ).
Proof:
Note that f_x̃(α) = (a_α^− + a_α^+)/2 = (f₁(α) − f₂(α))/2 where f₁(α) = a_α^−
and f₂(α) = −a_α^+. Since (A_α)_{α∈[0,1]} is nested, f₁ and f₂ are mono-
tone increasing real-valued functions, so f_x̃ is of bounded variation (see Theo-
rem 4 on page 100 of Royden [38]). Also, f_x̃(α−) and f_x̃(α+) exist (since
the left and right limits exist for monotonic functions, see Theorem 4.29 on
page 95 of Rudin [39]). Let [β^−, β^+] = ∩_{γ<α} A_γ; then a_γ^− → β^− and a_γ^+ → β^+
as γ → α−, since a_γ^− is monotone increasing and a_γ^+ is monotone decreasing in γ.
Thus f_x̃(α−) = (β^− + β^+)/2. Similarly for the limit from the right. □
Definition 28 Let f, g ∈ BV[0,1] and let f ~ g mean that f(x) = g(x) almost ev-
erywhere. This defines an equivalence relation over BV[0,1]. Let BVL₁[0,1]
denote BV[0,1] under this partition with respect to the L₁ norm.
Theorem 31 Let (A_α)_{α∈[0,1]} and (B_α)_{α∈[0,1]} be set representations of fuzzy
number x̃ consisting of non-empty closed intervals. Moreover, let f and g be
the corresponding functional representatives. Then f ~ g.
Proof:
Since x̃_α = ∩_{γ<α} A_γ = ∩_{γ<α} B_γ, we have f(α−) = g(α−) for every α by
Theorem 30. This implies that f ~ g, since the points at which a function of bounded
variation is discontinuous are countable and the measure of a countable set is zero. At
all points of continuity we have f(α) = f(α−) = g(α−) = g(α). □
Theorem 32 Let (A_α)_{α∈[0,1]} and (B_α)_{α∈[0,1]} be set representations of fuzzy
numbers x̃ and ỹ consisting of non-empty closed intervals, and assume x̃ ≡ ỹ.
Then if f and g are functional representatives for these two set representations
respectively, f ~ g.
Proof:
∀α ∈ (0,1], (x_α^− + x_α^+)/2 = (y_α^− + y_α^+)/2 where x̃_α = [x_α^−, x_α^+] and ỹ_α = [y_α^−, y_α^+],
by definition of our equivalence relation. Thus f(α−) = g(α−), and the same
argument as applied in the previous theorem holds. □
Theorems 30, 31 and 32 show that there is a single functional rep-
resentation (up to a set of measure zero) for each equivalence class of fuzzy
numbers. Thus, the map of fuzzy number equivalence classes to the space of
equivalence classes of functions of bounded variation is well defined. The next
step is to show that it is not only well defined but establishes an isomorphism.
The next theorem establishes that the map is one-to-one and the following one
establishes that it is onto.
Theorem 33 Let <x̃> ≠ <ỹ> be two fuzzy number equivalence classes.
Then their functional representations are not equal.
Proof:
Let f and g be the functional representations for <x̃> and <ỹ>
respectively. Since <x̃> ≠ <ỹ>, ∃α ∈ (0,1] such that (x_α^− + x_α^+)/2 ≠ (y_α^− + y_α^+)/2.
Then for this α, f(α−) ≠ g(α−). Then ∃ disjoint neighborhoods of f(α−) and
g(α−) with positive separation, so on a neighborhood to the left of α (a set with measure
greater than zero), f ≠ g. □
Theorem 34 Let f ∈ BV[0,1]. Then ∃ <x̃> ∈ C such that f is the functional
representation for <x̃>.
Proof:
For α ∈ (0,1], let [x_α^−, x_α^+] = [f(1) − 2P_α^1(f), f(1) + 2N_α^1(f)] and
[x₀^−, x₀^+] = cls(∪_{α>0} [x_α^−, x_α^+]). The claim is that the intervals [x_α^−, x_α^+] are a set
representation for a fuzzy number x̃. Since each interval is closed and non-
empty, by our prior theorem we only need to show that α < β ⇒ [x_β^−, x_β^+] ⊆
[x_α^−, x_α^+], but this is immediate from the definition of P_α^1(f) and N_α^1(f).
To show that this is a functional representation for <x̃>, note that
g(α) = (x_α^− + x_α^+)/2 = f(1) + (2N_α^1(f) − 2P_α^1(f))/2 = f(1) + (f(α) − f(1)) = f(α). □
Theorem 35 The space C is isomorphic to BVL₁[0,1].
Proof:
Theorems 33 and 34 establish a one-to-one correspondence between
C and BVL₁[0,1]. If <x̃>, <ỹ> ∈ C with functional representations f and
g, then f+g is a functional representation for <x̃> + <ỹ>. To see this, let
(A_α)_{α∈[0,1]} and (B_α)_{α∈[0,1]} be the set representations of x̃ and ỹ corresponding
to f and g. We know by Theorem 13 that (A_α + B_α)_{α∈[0,1]} is a set representation
for x̃ + ỹ. But the midpoint of A_α + B_α is just the sum of the midpoints of A_α and
B_α. So f+g is a functional representation for x̃ + ỹ. Similar reasoning shows
that cf is a functional representation for cx̃ for scalar c. Thus the mapping
preserves addition and multiplication by scalars and is an isomorphism. □
Theorem 36 The space (C, ||·||_EA) is isometric to BVL₁[0,1].
Proof:
Since an isomorphism between C and BVL₁[0,1] has already been
established in Theorem 35, we need only establish an equivalent norm. Note
that if <x̃> ∈ C and f_x̃ is a functional representation for <x̃>, then f_x̃ ~ g_x̃
where g_x̃ is the functional representation for <x̃> formed by the α-cuts of
x̃. But then ||<x̃>||_EA = ∫_0^1 |g_x̃(α)| dα = ∫_0^1 |f_x̃(α)| dα. Thus the norm is
independent of the choice of functional representation, since we have shown in
Theorem 31 that all functional representations for <x̃> are equal almost
everywhere. But f_x̃ is an element of BVL₁[0,1] as a subspace of L₁[0,1], and in
L₁[0,1], ||f_x̃||_{L₁} = ∫_0^1 |f_x̃(α)| dα. Therefore, the norms are equivalent. □
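The isometry of Theorem 36 can be illustrated numerically: for a particular fuzzy number, the EA-norm of its equivalence class equals the L₁ norm of its functional representative (a sketch using the triangular number (−2,−1,1) of Example 16):

```python
# Sketch: || <x> ||_EA equals the L1 norm of the alpha-cut midpoint function.
def cut(a): return (-2 + a, 1 - 2 * a)      # cuts of the triangle (-2,-1,1)

n = 100000
mids = [(k + 0.5) / n for k in range(n)]    # midpoint rule on [0,1]
ea_norm = sum(abs(sum(cut(a))) for a in mids) / (2 * n)
f = lambda a: sum(cut(a)) / 2               # functional representative
l1_norm = sum(abs(f(a)) for a in mids) / n
print(round(ea_norm, 4), round(l1_norm, 4))  # 0.75 0.75
```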
3.4 Convergence in (C, ||·||_EA)
It has been established that the equivalence classes of fuzzy numbers
are isometric to the space BVL₁[0,1], which is a subspace of L₁[0,1]. Thus
every Cauchy sequence of fuzzy number equivalence classes in the space C will
converge, but not necessarily to a fuzzy number equivalence class. However,
we have the following result, which shows that Cauchy sequences in the space
converge if our fuzzy numbers have a uniform bound in variation.
Theorem 37 Suppose {<x̃>_n} is Cauchy in C under the EA-norm and sup-
pose that ∃ M ∈ R such that ∀n, V_0^1(f_n) ≤ M, where f_n is a functional
representation for <x̃>_n. Then ∃ <x̃> ∈ C such that <x̃>_n → <x̃> in the
EA-norm.
Proof:
By assumption {f_n} is Cauchy in L_1[0,1]. Thus ∃ f ∈ L_1[0,1] such
that f_n(α) → f(α) in the mean, since L_1[0,1] is complete. We know that ∃ a
subsequence f_{n_k}(α) → f(α) almost everywhere (see Kolmogorov & Fomin [24],
page 388, problem 7c). Let S equal the subset of [0,1] on which we have pointwise
convergence. Assume that f is not of bounded variation on S. Then
∃ {α_i ∈ S | i = 1, ..., m} with α_1 < ... < α_m such that Σ_{i=1}^{m-1} |f(α_i) - f(α_{i+1})| >
2M. We can choose N large enough that ∀i, |f(α_i) - f_N(α_i)| < ε for ε
arbitrarily small. Then Σ_{i=1}^{m-1} |f_N(α_i) - f_N(α_{i+1})| ≥ Σ_{i=1}^{m-1} |f(α_i) - f(α_{i+1})| - 2mε. But
then, for ε small enough, V_0^1(f_N) > M, which is a contradiction. Therefore f is of bounded variation
on S. We need to show that there is a function of bounded variation on [0,1]
which is equal to f almost everywhere. For every α ∈ [0,1] \ S, let {x_n} be
a sequence from S such that x_n → α. Such a sequence exists since [0,1] \ S has
measure zero, so S is dense in [0,1]. Define f(α) = lim sup f(x_n). This limit exists and is finite since
f is of bounded variation on S and, therefore, bounded on S. But then f as just
defined is of bounded variation on [0,1], since ∀α ∈ [0,1] \ S we can find x ∈ S
arbitrarily close to α such that |f(x) - f(α)| < ε for arbitrarily small ε. □
Corollary 4 Let {x_n} be a sequence of fuzzy numbers with the property that
∀n, (x_n)_0^+ - (x_n)_0^- ≤ 2M. If {⟨x⟩_n}, the sequence of equivalence classes in
C, is Cauchy, then it converges.
Proof:
Let f̃_n be defined by f̃_n(α) = ((x_n)_α^- + (x_n)_α^+)/2, where
(x_n)_α = [(x_n)_α^-, (x_n)_α^+]. Thus f̃_n is the functional representative of ⟨x_n⟩
formed by the α-cuts of x_n. Applying the theorem, we will be finished if
we show that 2V_0^1(f̃_n) ≤ (x_n)_0^+ - (x_n)_0^- ≤ 2M.
For each n we define the fuzzy number m_n with set representation given by
(m_n)_α = [(m_n)_α^-, (m_n)_α^+] = [f̃_n(1) - 2P_α^1(f̃_n), f̃_n(1) + 2N_α^1(f̃_n)] ∀α ∈ (0,1], and
[(m_n)_0^-, (m_n)_0^+] = cls(∪_{α>0} [(m_n)_α^-, (m_n)_α^+]). We have shown that m_n ∈ ⟨x_n⟩.
We claim that m_n is the element of ⟨x_n⟩ with minimal possibility. As-
sume not. Then ∃ y ∈ ⟨x_n⟩ such that for some α ∈ [0,1], y_α^+ < (m_n)_α^+ and
(m_n)_α^- < y_α^-, i.e. y_α^+ - y_α^- < (m_n)_α^+ - (m_n)_α^- = 2V_α^1(f̃_n). But we know that
f̃_n(α) = (y_α^- + y_α^+)/2, which implies that 2V_α^1(f̃_n) ≤ V_α^1(y^+) + V_α^1(y^-). Since the
functions on the right-hand side are monotonic, their variation is determined by
their values at the endpoints. Thus 2V_α^1(f̃_n) ≤ (y_α^+ - y_1^+) + (y_1^- - y_α^-) ≤ y_α^+ - y_α^-,
and we have arrived at a contradiction.
Since m_n is the distribution of minimal possibility for ⟨x_n⟩, 2V_0^1(f̃_n) =
(m_n)_0^+ - (m_n)_0^- ≤ (x_n)_0^+ - (x_n)_0^- ≤ 2M. □
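The positive and negative variations P_α^1 and N_α^1 that build m_n can be illustrated numerically. The sketch below is my own (the helper name `pos_neg_variation` is hypothetical, and the grid-based sums only approximate the true Jordan decomposition of a function of bounded variation):

```python
import numpy as np

def pos_neg_variation(vals):
    """Positive/negative variation of a function sampled on a grid:
    P = sum of the upward jumps, N = sum of the downward jumps,
    so the total variation is V = P + N."""
    d = np.diff(vals)
    P = float(np.sum(d[d > 0]))
    N = float(-np.sum(d[d < 0]))
    return P, N

# A midpoint representative f(alpha) sampled on [0,1]:
# rises linearly to 0.5 and then falls back to 0.
alphas = np.linspace(0.0, 1.0, 1001)
f = np.where(alphas < 0.5, alphas, 1.0 - alphas)

P, N = pos_neg_variation(f)
print(round(P + N, 6))  # total variation V_0^1(f) = 1.0
```

Here P = N = 0.5, so the interval [f(1) - 2P, f(1) + 2N] from the construction of m_n has width 2(P + N) = 2V_0^1(f), matching the width used in the proof.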
Corollary 5 Let {x_n} be a sequence of fuzzy numbers with the property that
∀n, (x_n)_0 ⊆ B, where B is a bounded subset of R and (x_n)_0 is the support of
x_n. If {⟨x⟩_n}, the sequence of equivalence classes in C, is Cauchy, then it
converges.
Proof:
This follows from the prior corollary, since (x_n)_0 ⊆ B implies (x_n)_0^+ -
(x_n)_0^- ≤ 2M, where M is an upper bound on the absolute values of the elements
of B. □