Citation
Linear programming problems for generalized uncertainty

Material Information

Title:
Linear programming problems for generalized uncertainty
Creator:
Thipwiwatpotjana, Phantipa
Publication Date:
2010
Language:
English
Physical Description:
xii, 173 leaves ; 28 cm

Subjects

Subjects / Keywords:
Uncertainty (Information theory) -- Linear programming ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 167-173).
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Phantipa Thipwiwatpotjana.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
672319491 ( OCLC )
ocn672319491
Classification:
LD1193.L62 2010m T54 ( lcc )

Full Text

LINEAR PROGRAMMING PROBLEMS FOR GENERALIZED
UNCERTAINTY
by
Phantipa Thipwiwatpotjana
M.S., Clemson University, South Carolina, USA, 2004
B.S., Chiang Mai University, Chiang Mai, Thailand, 1999
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Mathematical and Statistical Sciences
2010


This thesis for the Doctor of Philosophy
degree by
Phantipa Thipwiwatpotjana
has been approved
by
Weldon Lodwick
Burt Simon


Thipwiwatpotjana, Phantipa (Ph.D., Applied Mathematics)
Linear Programming Problems for Generalized Uncertainty
Thesis directed by Professor Weldon Lodwick
ABSTRACT
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well-known model that can handle a linear programming problem with probability uncertainty is an expected recourse model. If uncertainties in the problem have possibility interpretations, an expected average model, which is comparable to an expected recourse model, will be used. These two models can be mixed when we have both probability and possibility uncertainties in the problem, provided these uncertainties do not occur in the same constraint.
This dissertation develops three new solution methods for a linear optimization problem with generalized uncertainty. These solutions are a pessimistic, an optimistic, and a minimax regret solution. The interpretations of uncertainty in a linear programming problem with generalized uncertainty are not limited to probability and possibility. They can also be necessity, plausibility, belief, random set, probability interval, probability on sets, cloud, and interval-valued


probability measure. These new approaches can handle more than one interpretation of uncertainty in the same constraint, which has not been done before. Lastly, some examples and realistic application problems are presented to illustrate these new approaches. Some comparisons between these solutions and a solution from the mixed expected average and expected recourse model, when the uncertainties are probability and possibility, are mentioned.
This abstract accurately represents the content of the candidate's thesis. I recommend its publication.
Signed
Weldon Lodwick


I dedicate this work to my parents,
Amporn and Wut.


ACKNOWLEDGMENT
I am most grateful to my adviser, Dr. Weldon A. Lodwick, who introduced me to the world of uncertainty, for his guidance, support, and commitment. His invaluable advice, patience, and encouragement made this work possible.
I am thankful to Dr. Stephen Billups, Dr. George Corliss, Dr. Alexander Engau, and Dr. Burt Simon for serving on my Ph.D. committee. I appreciate their time, comments, and suggestions, which were very helpful in developing the research. I express my special thanks for their corrections and suggestions that improved this dissertation.
I would like to thank the Thai government officers who took care of me while I was in the US. I extend my appreciation to the Bateman family and the Department of Mathematical and Statistical Sciences, University of Colorado Denver, who provided financial support throughout my study after the support from the Thai government ended.
Finally, I am most thankful to my mother, Amporn, for her care, love, and support. Warm thanks to my sister, Yui, and my brother, Yook, for sharing their opinions. Special thanks to Iris for her kindness to a Thai girl. To Tur, for his love, care, support, and help. Because of their encouragement, support, help, and love, my success is also theirs.


CONTENTS
Figures .................................................................... x
Tables..................................................................... xi
Chapter
1. Introduction to the dissertation....................................... 1
1.1 Problem statement...................................................... 4
1.2 Research direction and organization of the dissertation............... 9
2. Some selected uncertainty interpretations............................. 12
2.1 Possibility theory.................................................... 13
2.2 Random sets .......................................................... 24
2.2.1 Upper and lower expected values generated by random sets ... 39
2.2.2 The random set generated by an incomplete random set information 50
2.2.3 The random set generated by a possibility distribution or possibil-
ity of some subset of U.................................................... 50
2.2.4 The random set generated by a necessity distribution or necessity
of some subset of U ............................................... 52
2.2.5 The random set generated by a belief distribution................ 53
2.2.6 The random set generated by a plausibility distribution.......... 53
2.3 Interval-valued probability measures (IVPMs).......................... 54
2.3.1 Upper and lower expected values generated by IVPMs............... 66
2.4 Clouds................................................................ 70
2.5 Relationships of some selected uncertainty interpretations............ 75


3. Linear optimization under uncertainty: literature review ........... 80
3.1 Deterministic model ................................................... 81
3.2 Stochastic models...................................................... 83
3.2.1 Stochastic program with expected recourse......................... 85
3.3 Minimax regret model................................................... 90
3.4 Possibility model...................................................... 93
3.5 Mixed model of probability and possibility............................. 99
3.6 Mixed uncertainties model............................................. 101
4. Linear optimization under uncertainty: pessimistic, optimistic, and
minimax regret approaches.............................................. 107
4.1 Pessimistic and optimistic expected recourse problems .............. 109
4.1.1 Pessimistic and optimistic expected recourse problems generated
by random sets with finite realizations.......................... 115
4.1.2 Pessimistic and optimistic expected recourse problems generated
by IVPMs with finite realizations.................................. 121
4.2 Minimax regret of the expected recourse problems.................... 123
4.2.1 A relaxation procedure for minimax regret of the expected recourse
problems .......................................................... 125
4.3 Storage or selling costs in expected recourse problems............... 132
4.3.1 Selling costs in expected recourse problems....................... 133
4.3.2 Storage costs in expected recourse problems....................... 134
4.4 Comparison between pessimistic, optimistic, minimax regret ap-
proaches and the reviewed approaches ................................ 135
5. Examples and applications............................................. 139


5.1 Farming example................................................. 139
5.1.1 A random set version of the farming example..................... 143
5.2 An application for radiation shielding design of radiotherapy treat-
ment vaults........................................................... 149
6. Conclusion......................................................... 158
Appendix
A. How to obtain a basic probability assignment function m from a given
belief measure Bel ................................................ 161
B. The nestedness requirement of focal elements for a possibility measure 163
References............................................................ 167


FIGURES
Figure
1.1 Uncertainty interpretations: A ---> B represents that an uncertainty
interpretation B contains information given by an uncertainty inter-
pretation A, and A ===> B represents that A is a special case of
B................................................................... 10
2.1 Venn diagram of an opinion poll for a Colorado governors election. 18
2.2 Possibility measure defined on U = {u_1, u_2, ..., u_7}.................. 22
2.3 A finite representation of an infinite random set...................... 33
2.4 A random set generated by possibility distribution..................... 51
2.5 A cloud on R with an α-cut at α = 0.6.................................. 71
2.6 A cloud represents an IVPM............................................. 72
2.7 Uncertainty interpretations: A ---> B : there is an uncertainty in-
terpretation B that contains information given by an uncertainty inter-
pretation A, A ===> B : A is a special case of B, A <--> B : A and
B can be derived from each other, and A ~~> B : B generalizes A. 77
3.1 Feasible region and the unique optimal solution of the production
scheduling problem (3.1)............................................ 83
5.1 An example of a radiation room........................................ 150


TABLES
Table
2.1 Probability theory versus possibility theory: comparison of mathe-
matical properties for finite sets.................................. 23
2.2 The number of belief and plausibility terms required for finding two
density functions for the lowest and the highest expected values of u
by using Theorem 2.17 versus by solving two LP problems.......... 46
2.3 The differences between IVPMs and clouds ............................ 72
3.1 Productivity information shows the output of products per unit of
raw materials, the unit costs of raw materials, the demand for the
products, and the limitation of the total amount of raw materials. . 82
3.2 Computational results for different models of the production plan-
ning problem....................................................... 105
4.1 Computational Results for the new approaches and the reviewed ap-
proach of the production planning problem.......................... 137
5.1 Data for farming problem............................................ 140
5.2 An optimal solution based on expected yields...................... 140
5.3 The scenarios of yields for all crops............................... 141
5.4 An optimal solution based on the expected recourse problem (5.2). 142
5.5 Optimal solutions based on the pessimistic and optimistic expected
recourse of the farming problem. P = pessimistic, O = optimistic,
R = minimax regret................................................. 147


5.6 A pessimistic, an optimistic, a minimax regret, and an extreme sce-
nario solution for the radiation shielding design problem............. 157


1. Introduction to the dissertation
This dissertation develops a technique for solving linear optimization under uncertainty problems that have discrete realizations, for broad classes of uncertainty not studied and analyzed before. We study optimization under uncertainty in which the probability mass value of each realization is not known with certainty. Information attached to an uncertainty can be categorized into different interpretations. The interpretations of uncertainty information associated with this thesis are probability, belief, plausibility, necessity, possibility, random set, probability interval, probability on sets, cloud, and interval-valued probability measure (IVPM). For convenience, we call these interpretations of uncertainty 'PC-BRIN'. We develop an approach to compute a pessimistic, an optimistic, and a robust (minimum of maximum regret) solution for a linear programming (LP) problem with these uncertainty interpretations. These problems are solved based on the transformation of a linear optimization problem with uncertainty to a set of expected recourse models.
An expected recourse model is a paradigm for solving a stochastic programming problem. Stochastic programming is the study of practical procedures for decision making under uncertainty over time. Stochastic programs are mathematical programs (linear, integer, mixed-integer, nonlinear) where some of the data incorporated into the objective or constraints are uncertain with a probability interpretation. An expected recourse model requires that one make one decision now and minimize the expected costs (or evaluations) of the consequences of that decision. We consider a two-stage expected recourse model in this thesis. The first stage is a decision that one needs to make now, and the second stage is a decision based on what has happened. The objective is to minimize the expected costs of all decisions taken.
We refer to a probability interpretation of uncertainty when we can create or assume a probability for each realization from an experiment without using any prior knowledge. For example, we will not assume that a coin is fair when we know nothing about this coin. The other interpretations of uncertainty information mentioned above (except probability) share the same behavior, i.e., the information that leads to one of those interpretations is not enough to obtain a probability for each realization. Instead, an appropriate function is created based on the information provided.
Let U be a finite set of all realizations of an uncertainty u. A belief interpretation of uncertainty is given in the form of a belief function, Bel, which maps from an event (a subset of U) to a number between 0 and 1. For an event A, Bel(A) can be interpreted as one's degree of belief that the truth of u lies in A. The probability may or may not coincide with the degree of belief about an event of u. If we know the probabilities of events, then we will surely adopt them as the degrees of belief. But if we do not know the probabilities, then it would be an extraordinary coincidence for the degrees of belief to be equal to their probabilities. In general, the sum of the probabilities of two mutually disjoint events is equal to the probability of the union of those two events. This statement is relaxed when the function is a belief function. Intuitively, one's degree of belief that the truth lies in A_1 plus the degree of belief that the truth lies in A_2 is always less than or equal to the degree of belief that the truth lies in A_1 ∪ A_2.
G. Shafer [60] mentioned that one's belief that the truth of u lies in an event A is not fully described by one's degree of belief Bel(A). One may also have some doubts about A. The degree of doubt can be expressed in the form Dou(A) = Bel(A^c). A plausibility interpretation of uncertainty is closely related to a belief because a plausibility function, Pl, can be derived from a belief function, and vice versa, by using Pl(A) = 1 − Bel(A^c). One's degree of plausibility Pl(A) expresses that one fails to doubt A, or one finds A plausible. Hence, Bel and Pl convey the same information, as we shall see in many examples throughout the thesis. Necessity and possibility interpretations of uncertainty are special versions of belief and plausibility, respectively. We call belief and plausibility functions necessity and possibility functions, Nec and Pos, when for events A_1 and A_2, Bel(A_1 ∩ A_2) = min[Bel(A_1), Bel(A_2)] and Pl(A_1 ∪ A_2) = max[Pl(A_1), Pl(A_2)], respectively. The mathematical definitions and some properties of belief, plausibility, necessity, and possibility functions are provided in Section 2.1.
An uncertainty provided in the form of a random set interpretation has information as a set of probabilities that are bounded above and below by plausibilities and beliefs. A probability interval is an interval mapping from each element of U to its corresponding interval [a̲, ā], where [a̲, ā] ⊆ [0,1]. An IVPM interpretation of uncertainty has information as intervals on the probability of A, for every subset A of U. A cloud is defined differently from an IVPM. However, it turns out that a cloud is an example of an IVPM. More details on random set, cloud, and IVPM interpretations are in Chapter 2. Probability on sets is partial information of a probability, i.e., we know a value of P(A) for some A ⊆ U, but it is not enough to obtain the probability of each realization in U. Probability intervals and probability on sets can be viewed as examples of IVPMs. Thus, the uncertainty interpretations over finite realizations we include in our analysis are: probability, belief, plausibility, necessity, possibility, random set, probability interval, probability on sets, cloud, and IVPM, or PC-BRIN, in short.
1.1 Problem statement
The problem which is the focus of this thesis is
    min_x c^T x   s.t.   Ax ≥ b,  Bx ≥ d,  x ≥ 0.                    (1.1)
We sometimes call (1.1) an LP problem with (generalized or mixed) uncertainty. This dissertation tries to answer the question of how to solve (1.1). To date, the theory and solution methods for (1.1) have not been developed when A, b, and c have only one of the PC-BRIN uncertainty interpretations, except probability and possibility. Moreover, when A, b, and c are mixtures of all the PC-BRIN uncertainty interpretations within one constraint, there is no theory or solution method yet to deal with this case. The significance of what is presented is that problems possessing these uncertainty interpretations can be modeled and solved directly from their true, basic uncertainties. That is, the model is more faithful to its underlying properties. Secondly, the model is faithful to the data available.
We provide a simple LP problem with uncertainty, without solving it, for the purpose of showing that uncertainty information which does not have a probability interpretation may be an integral part of an LP model. Suppose that


a tour guide wants to minimize the transportation cost. There are usually 100
to 130 tourists the guide has to care for each day. However, the guide has to
rent cars in advance without knowing the total number of the tourists. A car
rental company provides the guide some information about the car types H and
T based on a questionnaire of its 3,000 customers about their opinions on how
many passengers the car types H and T could carry. The response from the
questionnaire is indicated in the table below.

    Passenger          Number of responders
    capacity        Car type H    Car type T
    Up to 4 people      250            -
    Up to 5 people      250           500
    Up to 6 people     2500          2500
This information is similar to our Example 3 on page 22 and can be presented as a random set. Suppose that the rental prices for the car types H and T are $34/day and $45/day, respectively. The guide assumes that the number of tourists is equally likely to be any number between 100 and 130 people. Therefore, without knowing the age, the size, or other information about the clients, the guide sets up a small LP problem with these uncertainties as

    min  34H + 45T
    s.t. a_1 H + a_2 T ≥ b,  and  H, T ≥ 0,

where a_1 can be 4, 5, or 6 and a_2 can be 5 or 6 with the random set information above. The number of tourists b is between 100 and 130 persons with equal


chance. There might be some other restrictions to make the problem more
complicated. For example, the deal from the car rental is that a customer needs
to rent at least a certain number of cars of type T to get some reduced price.
The guide also may need to please his/her clients by assigning family clients in
separate cars, and so on. We should be able to see now that there is an LP with
uncertainty, where the uncertainty may not be interpreted as probability.
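As an aside, the pessimistic and optimistic readings of this small model can be made concrete by solving it under the extreme realizations of a_1, a_2, and b listed above. The following sketch (ours, not part of the thesis) uses scipy.optimize.linprog, an assumed tool, and treats the numbers of cars H and T as continuous variables for illustration.

    # A minimal sketch of the tour-guide LP under two extreme scenarios.
    # Realizations: a1 in {4, 5, 6}, a2 in {5, 6}, b in {100, ..., 130}.
    from scipy.optimize import linprog

    cost = [34, 45]          # rental prices per day for car types H and T

    def solve(a1, a2, b):
        # Minimize 34*H + 45*T subject to a1*H + a2*T >= b and H, T >= 0;
        # linprog expects <= constraints, so the capacity row is negated.
        res = linprog(cost, A_ub=[[-a1, -a2]], b_ub=[-b],
                      bounds=[(0, None), (0, None)])
        return res.fun, res.x

    print(solve(4, 5, 130))  # pessimistic: small capacities, many tourists
    print(solve(6, 6, 100))  # optimistic: large capacities, few tourists

In a real rental decision H and T would be integers, and the methods of this thesis weigh all realizations through their (imprecise) probabilities rather than only the two extremes; the sketch is meant only to show the spread that the uncertainty induces.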
Linear program (1.1) with mixed uncertainty is a mathematical linear program where some of the parameters in the objective or constraints are uncertain with any of the interpretations mentioned earlier. Let us consider a linear programming problem with mixed uncertainty through a production planning problem, which minimizes the production cost and satisfies the demands at the same time, as a prototypical problem (1.1). Let c be an uncertain cost vector per unit of the raw material vector x, A be a matrix of uncertain machine capacities, and b be an uncertain demand vector. Here, we assume that components of A, b, and c may possess one of the PC-BRIN uncertainty interpretations. An LP problem with uncertainty stated as the system (1.1) is not well-defined until the realizations of A, b, and c are known.
Suppose that there is no uncertainty in the cost c of raw materials. The model (1.1) becomes

    min_x c^T x   s.t.   Ax ≥ b,  Bx ≥ d,  x ≥ 0.                    (1.2)

We apply a two-stage expected recourse model to an LP problem with uncertainty when all uncertainties have a probability interpretation. The first stage is to decide the amount of raw materials needed. Based on this decision, the consequent action is to make sure that these raw materials provide enough to


satisfy the demands. If not, a second action is needed, i.e., the amount of the shortage needs to be bought from a market at (fixed) penalty prices. In the case that there is an excess amount of products left after satisfying the demands, this excess amount can be sold in the market or stored at some storage price. Therefore, the expected recourse objective function is to minimize the cost of raw materials and the expected cost of the shortages together with one of the following: (1) the expected cost of storage, or (2) the negative of the expected profit from selling the excesses. These two cases do not happen at the same time when we have no further planning for the excesses, because we would rather sell to make a profit than spend out of the budget for storage.
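For concreteness, when the uncertain data have finitely many realizations (scenarios) s = 1, ..., S with probabilities p_s, the two-stage model just described can be written as a deterministic-equivalent LP along the following lines. This is a generic sketch, not the thesis's exact formulation; the penalty prices q and storage costs h are names we introduce here only for illustration.

    \begin{align*}
    \min_{x,\, w_s,\, v_s \,\ge\, 0} \quad
      & c^{T}x \;+\; \sum_{s=1}^{S} p_s \left( q^{T} w_s + h^{T} v_s \right) \\
    \text{s.t.} \quad
      & A_s x + w_s - v_s = b_s, \qquad s = 1, \dots, S,
    \end{align*}

where, under scenario s, w_s is the shortage bought at the penalty prices q and v_s is the excess stored at cost h; replacing h by the negative of the selling prices gives the selling variant described above.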
We introduce a variable z when some component of the cost c is uncertain with a probability interpretation, and transform the stochastic program (1.1) to

    min_{x,z} z   s.t.   Ax ≥ b,  z = c^T x,  x, z ≥ 0.              (1.3)

In this case, the first stage variables are x and z. The surplus y and slack v variables that control all realizations of the constraint z = c^T x are the second stage variables, in addition to the shortage variable w. The expected recourse objective function is to minimize the expected cost of raw materials and the shortage together with either the expected cost of storage or the negative of the expected profit.
An expected average model [24, 76], see also Section 3.4, is comparable to an expected recourse model. It is designed to handle the model (1.1) only when all uncertainties have a possibility interpretation. The possibility interpretation in [24] is actually a possibility distribution, which is the possibility measure of all singleton events only. If we have a possibility distribution of u, we also have the possibility measure of u, as explained in Chapter 2.
An interval expected value approach [36, 69] is the only approach so far that takes advantage of the knowledge that possibility and necessity measures of an uncertainty u convey the same information. Uncertainties in an LP problem with possibility uncertainty automatically have necessity interpretations. Possibility and necessity measures are recognized as the bounds on the cumulative distributions of probability measures. An interval expected value for a possibility uncertainty is an interval whose left and right bounds are the smallest and the largest expected values, respectively. An interval expected value approach transforms (1.1) to an interval linear program, where the coefficients are now these interval expected values. After we study relationships among the uncertainty interpretations in Chapter 2, we can use interval expected values to handle LP problems with mixed uncertainty, since each of the PC-BRIN uncertainty interpretations with finite realizations can be characterized as a set of probability measures, and hence, we can find an interval expected value of that uncertainty. We provide the details of an interval expected value approach in Section 3.6.
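In symbols, once an uncertainty u is characterized by a set 𝒫 of probability measures (as Chapter 2 develops), its interval expected value can be written as follows; the notation E̲, Ē is ours, introduced only for illustration.

    \left[\, \underline{E}[u],\; \overline{E}[u] \,\right]
      \;=\; \left[\, \min_{\Pr \in \mathcal{P}} E_{\Pr}[u],\;
                     \max_{\Pr \in \mathcal{P}} E_{\Pr}[u] \,\right].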
If all uncertainties in problem (1.1) are probability, then expected values of these uncertainties can be found. However, we do not use these expected values to represent (1.1). Instead, we represent (1.1) as a stochastic programming problem because, first of all, the expected value of each uncertainty may not even be one of the realizations of that uncertainty. Secondly, a solution obtained from the expected value representation is not the best decision in the long run. The interval expected value approach is not a good representation of an LP problem with uncertainty for similar reasons. Therefore, instead of the interval expected value approach, we suggest three treatments for an LP problem with uncertainty (1.1): (1) an optimistic approach, (2) a pessimistic approach, and (3) a minimax regret approach, after we are able to characterize each uncertainty interpretation as a set of probability measures.
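Writing Z(x, Pr) for the expected recourse objective value under a particular probability measure Pr drawn from the characterizing set 𝒫, one plausible way to formalize these three treatments (consistent with, though not copied from, the developments in Chapter 4) is:

    \begin{align*}
    \text{optimistic:} \quad & \min_{x}\; \min_{\Pr \in \mathcal{P}} Z(x, \Pr), \\
    \text{pessimistic:} \quad & \min_{x}\; \max_{\Pr \in \mathcal{P}} Z(x, \Pr), \\
    \text{minimax regret:} \quad & \min_{x}\; \max_{\Pr \in \mathcal{P}}
        \left( Z(x, \Pr) - \min_{x'} Z(x', \Pr) \right).
    \end{align*}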
1.2 Research direction and organization of the dissertation
We study some mathematical details of these different uncertainty interpretations, PC-BRIN, and the relationships among them, to be able to characterize each uncertainty interpretation as a closed polyhedral set of probability measures. Figure 1.1 illustrates that there is a random set that contains the information given by a probability, belief, plausibility, necessity, possibility, or probability interval interpretation of an uncertainty. Moreover, as we shall see, random set, probability interval, probability on sets, and cloud are special cases of the IVPM interpretation. A similar figure will be seen in Chapter 2 with more details.
A literature review of linear programming with uncertainty is given in Chapter 3. We conclude this section with a word about the limitations of the approaches found in the literature.
1. The approaches in the literature are limited to probability and possibility uncertainty interpretations in linear optimization problems. Moreover, these two interpretations cannot be in the same constraint.
2. Although Pos and Nec convey the same information, there is no method (except an interval expected value approach) in the literature to handle the necessity uncertainty interpretation.


Figure 1.1: Uncertainty interpretations: A ---> B represents that an uncertainty interpretation B contains information given by an uncertainty interpretation A, and A ===> B represents that A is a special case of B.
3. There is no approach (except an interval expected value approach) for solving a linear program with more than one uncertainty interpretation in one constraint.
4. An interval expected value approach is not a good representation of problem (1.1). The reason is stated in the last paragraph of the previous section.
The method presented in this dissertation overcomes these limitations, both theoretically and practically. The new approaches can handle probability, belief, plausibility, necessity, possibility, random set, probability interval, probability on sets, cloud, and interval-valued probability uncertainty interpretations in one problem.


These uncertainty interpretations tell us that although we do not know the actual probability for each realization of an uncertainty u, we know the region where it could be. In Chapter 4, we will find two probability density functions f̲ and f̄ that provide the smallest and the largest expected value of u, respectively. Therefore, the method presented in this dissertation is based on the stochastic expected recourse programs to find a pessimistic and an optimistic solution. We may have infinitely many expected recourse programs related to all possible probabilities. However, these two probabilities f̲ and f̄ for each uncertainty u in a linear programming problem with uncertainty lead to the smallest and the largest expected recourse values. Moreover, we find a minimax regret solution as the best solution in the sense that, without knowing the actual probability, this solution provides the minimum of the maximum regret.
Next, comparisons of our new treatments with the other models in the literature are provided through numerical examples. Some useful examples and applications illustrating the power and efficacy of the new approaches are contained in Chapter 5. Chapter 6 summarizes the results of this thesis and presents plans for further research.


2. Some selected uncertainty interpretations
This dissertation focuses on uncertainty as it applies to optimization. When
information has more than one realization, we call it an uncertainty, u. Math-
ematically, uncertainty not only contains the standard probability theory but
also other theories depending upon the information we have. Interpretations of
uncertainty are based on the theories and information behind them. For exam-
ple, a fair coin has probability 0.5 that a head (or a tail) will occur. However,
if we do not have information that this coin is fair, we should not assume that
it is fair. The information we really have here is only Pr({a head, a tail}) = 1,
and nothing else. In describing the outcome of a coin flip, where its fairness is
in question, we could say that it is possible that a head (or a tail) will occur
with degree of possibility 1. This tells us that the actual probability of a head
(or a tail) of this coin can be anything between 0 and 1, which will be known
only when we test the coin. Moreover, even though the degree of possibility for
a fair coin that a head occurs is equal to 1, the knowledge we have that the
coin is fair is much stronger than the possibility information. This knowledge
provides the exact probability, which is more useful than the range of possible
probabilities.
This chapter has the aim of describing the PC-BRIN interpretations of uncertainty. We will spend time on some details of these theories so that we can find an appropriate treatment to deal with linear optimization problems with mixed uncertainty interpretations, as we will see in Chapter 4. Some results in this chapter are known results. However, there also are some new contributions, which are explicitly pointed out within and at the end of the chapter. The chapter starts with a possibility measure, which is derived from a belief measure. Then, the definition and some examples of a random set are provided. Given a random set, we can construct a set of probability measures whose bounds are belief and plausibility measures. The smallest and the largest expected values of a random variable, whose probability is uncertain in the form of a random set, can be found using the density (mass) functions given in Subsection 2.2.1. The mathematical definitions of interval-valued probability measures and clouds are provided in Sections 2.3 and 2.4. We also point out that, for the case of a finite set of realizations, a probability interval can be used to create a random set (not necessarily unique). Finally, we conclude with the relationships among these uncertainty interpretations. Basic probability theory is assumed.
2.1 Possibility theory
Possibility theory is a special branch of Dempster [9] and Shafer [60] evidence theory, so we will provide some details of evidence theory first. Most of the material in this section can be found in [9, 28, 60], and [74].
Evidence theory is based on belief and plausibility measures. For a finite set U of realizations, where P(U) is the power set of U, a belief measure is a function

    Bel : P(U) → [0,1]                                               (2.1)

such that Bel(∅) = 0, Bel(U) = 1, and having a super-additive property for all possible families of subsets of U. The super-additive property for a belief function generated by a finite set U, where A_1, ..., A_n ⊆ U, is:

    Bel(A_1 ∪ ... ∪ A_n) ≥ Σ_j Bel(A_j) − Σ_{j<k} Bel(A_j ∩ A_k) + ...
                            + (−1)^{n+1} Bel(A_1 ∩ ... ∩ A_n).
When U is infinite, P(U) becomes a σ-algebra, σ_U. The function Bel also is required to be continuous from above in the sense that for any decreasing sequence A_1 ⊇ A_2 ⊇ ... in σ_U, if A = ∩_{i=1}^∞ A_i ∈ σ_U, then

    lim_{i→∞} Bel(A_i) = Bel(A).                                     (2.2)

The basic property of belief measures is thus a weaker version of the additive property of probability measures. Therefore, for any A, A^c ⊆ U, where A^c is the complement of the set A, we have

    Bel(A) + Bel(A^c) ≤ 1.                                           (2.3)
A plausibility measure, Pl, is defined by

    Pl(A) = 1 − Bel(A^c),   ∀A ∈ P(U).                               (2.4)

Similarly,

    Bel(A) = 1 − Pl(A^c),   ∀A ∈ P(U).                               (2.5)
The inequality (2.3) says that one's degree of belief that the truth lies in A together with his/her degree of doubt that the truth is not in A may not be able to capture the knowledge that s/he knows for sure the truth lies in U = A ∪ A^c. S/he will say it is plausible that the truth lies in A when s/he cuts off the doubt of A. The explanation will be clearer with an example.


Example 1. Consider an opinion poll for a Colorado governor's election. Let U = {a, b, c, d, e} be the set of candidates. There are 10,000 individuals providing their preferences. They may not have made their final choice, since the poll takes place well before the election. Suppose that 3,500 individuals support candidates a and b from the Republican party, and 4,500 people support candidates c, d, and e from the Democratic party. The remaining 2,000 persons have no opinion yet. Therefore, we believe that one among the candidates from the Democratic party will become the new governor with the degree of belief 0.45, and those who prefer that a Republican candidate win doubt that the Democrat will win to the degree 0.35. That is, Dou(Democratic) = Bel(Democratic^c) = Bel(Republican) = 0.35. Combining the degree of belief and the degree of doubt that the Democrat will win the Colorado governor's election, we obtain 0.45 + 0.35 = 0.70 < 1. It is also plausible that the Democrat will win with 0.45 + 0.20 = 0.65 degree of plausibility, when we assume that all 2,000 voters with no opinion finally choose one of the candidates from the Democratic party. This 0.65 degree of plausibility is obtained when we subtract the 0.35 degree of doubt from the total belief of 1 that the new governor is a person in the set U. ◊
Belief and plausibility measures also can be characterized by a basic probability assignment function m, which is defined from P(U) to [0,1], such that

    m(∅) = 0   and   Σ_{A ∈ P(U)} m(A) = 1.                          (2.6)

It is important to understand the meaning of the basic probability assignment function, and it is essential not to confuse m(A) with the probability of occur-


rence of an event A. The value m(A) expresses the proportion to which all available and relevant evidence supports the claim that a particular element u of U, whose characterization in terms of relevant attributes is deficient, belongs to the set A. For Example 1, an element u can be referred to as a candidate who will win the Colorado governor's election, that is, u ∈ {a, b, c, d, e}, where m({a,b}) = 0.35, m({c,d,e}) = 0.45, and m({a,b,c,d,e}) = 0.20. It is clear that {a,b} and {c,d,e} are subsets of U = {a,b,c,d,e}, but m({a,b}) and m({c,d,e}) are greater than m(U). Hence, it is allowed to have m(A) > m(B) even if A ⊆ B.
The value m(A) pertains solely to the set A. It does not imply any additional claims regarding subsets of A. One also may see m(A) as the amount of probability pending over elements of A without being assigned yet, by lack of knowledge. If we had perfect probabilistic knowledge, then for every element u in a finite set U, we would have m({u}) = Pr({u}), and Σ_{u∈U} m({u}) = 1. Thus, m(A) = 0 when A is not a singleton subset of U. Here are some differences between probability distribution functions and basic probability assignment functions.
- When A ⊆ B, it is not required that m(A) ≤ m(B), while Pr(A) ≤ Pr(B).
- It is not required that m(U) = 1, while Pr(U) = 1.
- No relationship between m(A) and m(A^c) is required, while Pr(A) + Pr(A^c) = 1.
A basic probability assignment function m is an abstract concept that helps
us create belief and plausibility measures. The reason to have such an abstract


concept is for the cases when the exact probability of all sets in the universe
is not known. When we do not know the probability on all elements of the
universe, but we have information on some collection of subsets, it is possible to
define a belief and a plausibility based on this information using the assignment
function. As long as we have an assignment function (2.6), we can construct
belief and plausibility functions (see (2.7) and (2.8) below).
We call a set A ∈ P(U), where m(A) > 0, a focal element of m, and denote by F the set of focal elements. Together with the associated basic assignment function m, the pair (F, m) is called a body of evidence, or a random set in [16]. More details about random sets are in the next section. We can formulate belief and plausibility measures uniquely from a given basic assignment m: ∀A ∈ P(U),

    Bel(A) = Σ_{B : B ⊆ A} m(B),                                     (2.7)

    Pl(A) = Σ_{B : A ∩ B ≠ ∅} m(B).                                  (2.8)
The basic assignment function m(A) characterizes the degree of evidence or belief that a particular element u of U belongs to the set A, while Bel(A) represents the total evidence or belief that the element u belongs to A as well as to the various subsets of A. The plausibility measure represents not only the total evidence or belief that the element in question belongs to set A or to any of its subsets, but also the additional evidence or belief associated with sets that overlap (have at least one element in common) with A. Hence,

    Pl(A) ≥ Bel(A),   ∀A ∈ P(U).                                     (2.9)


Example 2. Consider the group of 2,000 individuals who did not have any opinion at first, from Example 1. Suppose 500 of them admit that they will vote for candidate a, and another 500 will vote for either b or d, but they want to have a closer look before making a final choice. Let A_1 = {a,b}, A_2 = {c,d,e}, A_3 = {a}, and A_4 = {b,d}. Figure 2.1 shows the Venn diagram of these sets. From this latest information, we obtain m(A_1) = 0.35, m(A_2) = 0.45, m(A_3) = 0.05, m(A_4) = 0.05, and m(U) = 0.10. Then, for instance, Bel(A_3) = m(A_3) = 0.05, Bel(A_1) = m(A_1) + m(A_3) = 0.40, and Pl(A_1) = m(A_1) + m(A_3) + m(A_4) + m(U) = 0.55. ◊
Figure 2.1: Venn diagram of an opinion poll for a Colorado governor's election.
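Formulas (2.7) and (2.8) are easy to evaluate mechanically on a finite U. The short sketch below (ours, not from the thesis) encodes the basic assignment of Example 2 with Python frozensets and reproduces Bel(A_1) = 0.40 and Pl(A_1) = 0.55.

    # Basic probability assignment of Example 2; events are frozensets.
    m = {frozenset('ab'): 0.35, frozenset('cde'): 0.45,
         frozenset('a'): 0.05, frozenset('bd'): 0.05,
         frozenset('abcde'): 0.10}

    def bel(A):
        # (2.7): Bel(A) sums the masses of focal elements contained in A.
        return sum(w for B, w in m.items() if B <= A)

    def pl(A):
        # (2.8): Pl(A) sums the masses of focal elements intersecting A.
        return sum(w for B, w in m.items() if B & A)

    A1 = frozenset('ab')
    print(bel(A1), pl(A1))   # ~0.40 and ~0.55, up to floating-point rounding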
The inverse procedure to get m from Bel is also possible. Given a belief measure Bel, the corresponding basic probability assignment m is determined for all A ∈ P(U) by the formula

    m(A) = Σ_{B : B ⊆ A} (−1)^{|A − B|} Bel(B).                      (2.10)

Hence, the basic probability assignment of the set A_1 in Example 2 is m(A_1) = Bel(A_1) − Bel(A_3) = 0.40 − 0.05 = 0.35, as expected. The proof of (2.10) can


be found in Appendix A.
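Equation (2.10) can likewise be checked mechanically; the sketch below (ours, not from the thesis) enumerates the subsets of A with itertools and recovers m(A_1) = 0.35 from the Bel values of Example 2.

    from itertools import chain, combinations

    # Bel of Example 2, rebuilt from its basic assignment m.
    m = {frozenset('ab'): 0.35, frozenset('cde'): 0.45,
         frozenset('a'): 0.05, frozenset('bd'): 0.05,
         frozenset('abcde'): 0.10}
    bel = lambda A: sum(w for B, w in m.items() if B <= A)

    def subsets(A):
        s = list(A)
        return map(frozenset, chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)))

    def m_from_bel(bel, A):
        # (2.10): m(A) = sum over B subset of A of (-1)^{|A-B|} Bel(B).
        return sum((-1) ** len(A - B) * bel(B) for B in subsets(A))

    print(round(m_from_bel(bel, frozenset('ab')), 10))   # 0.35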
Now, we have enough material to develop some details of possibility theory. A possibility measure is a plausibility measure in which all focal elements are nested. The sets defined in Example 2 are not nested; therefore, the information in Example 2 is not considered to be possibility information. Let a possibility measure and a necessity measure be denoted by the symbols Pos and Nec, respectively. The basic properties of possibility theory are the result of the nestedness of focal elements, and the proof can be found in Appendix B. These properties are as follows: ∀A, B ∈ P(U),

    Nec(A ∩ B) = min{Nec(A), Nec(B)},                                (2.11)
    Pos(A ∪ B) = max{Pos(A), Pos(B)}.                                (2.12)

However, no one has mentioned that if we have properties (2.11) and (2.12), then the nestedness of focal elements is preserved. Therefore, we prove the stronger statement (see Appendix B) that Bel(A ∩ B) = min{Bel(A), Bel(B)} and Pl(A ∪ B) = max{Pl(A), Pl(B)} if and only if the focal elements are nested. The nestedness of focal elements guarantees that the meanings of necessity and possibility are preserved as in (2.11) and (2.12). A possibility measure also has a monotonicity property, which is in contrast with general basic probability assignment functions. That is,

    if A ⊆ B, then Pos(A) ≤ Pos(B).

This monotonicity property is a result of (2.12). It can be interpreted as: when more evidence is provided, the possibility can never be lower than the possibility when less evidence is given.


Necessity measures and possibility measures satisfy the following properties, since they are a special case of belief and plausibility measures:

    Nec(A) + Nec(A^c) ≤ 1,                                           (2.13)
    Pos(A) + Pos(A^c) ≥ 1,                                           (2.14)
    Nec(A) + Pos(A^c) = 1,                                           (2.15)
    min{Nec(A), Nec(A^c)} = 0,                                       (2.16)
    max{Pos(A), Pos(A^c)} = 1,                                       (2.17)
    Nec(A) > 0 ⇒ Pos(A) = 1,                                         (2.18)
    Pos(A) < 1 ⇒ Nec(A) = 0.                                         (2.19)
The function pos : U → [0,1] is defined on elements of the set U as

    pos(u) = Pos({u}),   ∀u ∈ U,                                     (2.20)

when a set-valued possibility measure Pos is given. It is called a possibility distribution function associated with Pos. We can construct a set-valued possibility measure from a possibility distribution in the following way. For each A ∈ P(U),

    Pos(A) = sup_{u∈A} pos(u).                                       (2.21)
Let us start the process of constructing a possibility distribution from a basic probability assignment function m. This will help us to see the relationship between possibility/necessity measures and random sets. For simplicity, consider a set U = {u_1, u_2, ..., u_n}. We will construct a possibility measure Pos defined on P(U) in terms of a given basic probability assignment m. Without loss


of generality, assume that the focal elements are some or all of the subsets in the sequence of nested subsets A_1 ⊂ A_2 ⊂ ... ⊂ A_n = U, where A_i = {u_1, u_2, ..., u_i}, i = 1, 2, ..., n. We have

    m(∅) = 0,   and   Σ_{i=1}^n m(A_i) = 1.

However, it is not required that m(A_i) ≠ 0 for all i = 1, 2, ..., n. We define a possibility distribution function for each u_i ∈ U using Equation (2.8) as

    pos(u_i) = Pos({u_i}) = Σ_{k=i}^n m(A_k),   i = 1, 2, ..., n.    (2.22)

Written more explicitly, we have

    pos(u_1) = m(A_1) + m(A_2) + m(A_3) + ... + m(A_i) + m(A_{i+1}) + ... + m(A_n)
    pos(u_2) = m(A_2) + m(A_3) + ... + m(A_i) + m(A_{i+1}) + ... + m(A_n)
    ...
    pos(u_i) = m(A_i) + m(A_{i+1}) + ... + m(A_n)
    ...
    pos(u_n) = m(A_n).

This implies

    m(A_i) = pos(u_i) − pos(u_{i+1}).                                (2.23)
For example, suppose that n = 7. Figure 2.2 (see also [74]) illustrates a possibility distribution constructed from the associated basic assignment defined as m(A_1) = 0, m(A_2) = 0.3, m(A_3) = 0.4, m(A_4) = 0, m(A_5) = 0, m(A_6) = 0.1, and m(A_7) = 0.2. The mathematical differences (for finite sets) between probability and possibility theories are summarized in Table 2.1, which can be found in [74].
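The passage from m to pos in (2.22) and back in (2.23) can be traced numerically on the n = 7 assignment above; the sketch below (ours, not from the thesis) does exactly that.

    # Masses m(A_1), ..., m(A_7) on the nested sets A_1 c ... c A_7 = U.
    m = [0.0, 0.3, 0.4, 0.0, 0.0, 0.1, 0.2]

    # (2.22): pos(u_i) = sum of m(A_k) over k >= i.
    pos = [sum(m[i:]) for i in range(7)]
    print(pos)   # [1.0, 1.0, 0.7, 0.3, 0.3, 0.3, 0.2], up to rounding

    # (2.23): m(A_i) = pos(u_i) - pos(u_{i+1}), with pos(u_8) taken as 0.
    recovered = [pos[i] - (pos[i + 1] if i < 6 else 0.0) for i in range(7)]
    print(recovered)   # matches m up to floating-point rounding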
Example 3. A car manufacturer asks 100 customers how many people they think should be able to ride in a new model car. The answers are that 4 (by 40 customers), 5 (by 30 customers), and 6 (by 30 customers) persons can ride in this car. We can interpret this information as possibility information. That this car can carry up to 4 persons corresponds to m({1,2,3,4}) = 0.4. Similarly, we have m({1,2,3,4,5}) = 0.3 and m({1,2,3,4,5,6}) = 0.3. Therefore, pos(1) = pos(2) = pos(3) = pos(4) = 1, pos(5) = 0.6, and pos(6) = 0.3. We can also check that the properties (2.11)–(2.19) are satisfied. ◊
Figure 2.2: Possibility measure defined on U = {u_1, u_2, ..., u_7}.
Klir [27] argued that probability is not adequate to capture the full scope of uncertainty, since probability theory is a subset of evidence theory in which the focal elements are singletons, while possibility theory is a subset of evidence theory in which the focal elements are nested. Hence, probability and possibility theories are the same only when the body of evidence consists of one focal element that is a singleton.
Table 2.1: Probability theory versus possibility theory: comparison of mathematical properties for finite sets

    Probability Theory                        | Possibility Theory
    ------------------------------------------+------------------------------------------------
    Based on measures of one type:            | Based on measures of two types: possibility
    probability measures, Pr                  | measures, Pos, and necessity measures, Nec
    Body of evidence consists of singletons   | Body of evidence consists of a family of
                                              | nested subsets
    Unique representation of Pr by a          | Unique representation of Pos by a possibility
    probability density function              | distribution function pos : U → [0,1]
    pr : U → [0,1] via the formula            | via the formula
    Pr(A) = Σ_{u∈A} pr(u)                     | Pos(A) = max_{u∈A} pos(u)
    Normalization: Σ_{u∈U} pr(u) = 1          | Normalization: max_{u∈U} pos(u) = 1
    Additivity: Pr(A ∪ B) =                   | Max rule: Pos(A ∪ B) = max{Pos(A), Pos(B)}
    Pr(A) + Pr(B) − Pr(A ∩ B)                 | Min rule: Nec(A ∩ B) = min{Nec(A), Nec(B)}
    Not applicable                            | Nec(A) = 1 − Pos(A^c)
                                              | Pos(A) < 1 ⇒ Nec(A) = 0
                                              | Nec(A) > 0 ⇒ Pos(A) = 1
    Pr(A) + Pr(A^c) = 1                       | Pos(A) + Pos(A^c) ≥ 1; Nec(A) + Nec(A^c) ≤ 1
                                              | max{Pos(A), Pos(A^c)} = 1
                                              | min{Nec(A), Nec(A^c)} = 0
    Total ignorance: pr(u) = 1/|U|            | Total ignorance: pos(u) = 1 for all u ∈ U
    for all u ∈ U                             |
The nestedness property in possibility theory guarantees the maximum axiom (rather than the additivity axiom in probability theory). Therefore, these two theories are not contained in one another; they are distinct. Moreover, probability and possibility theories involve different types of uncertainty and should be applied differently. For example, probability is suitable for characterizing the number of persons that are expected to ride in a particular car each day. Possibility theory, on the other hand, is suitable for characterizing the number of persons that can ride in that car at any one time. Since the physical characteristics of persons, such as size or weight, are intrinsically vague, it is not realistic to describe the situation by sharp probabilistic distinctions between possible and impossible instances. Readers interested in more discussion can find full details in [27].
We explained how a basic probability assignment function m relates to belief and plausibility functions, given a finite set U. In the next section, the mathematical definition of random sets and some examples that aid understanding of the definition are provided. Theorem 2.14 shows that the belief (plausibility) and the necessity (possibility) functions are the same when the focal elements are nested (regardless of the cardinality of U). We prove that belief and plausibility measures are the bounds on a set of probability measures generated by a random set. Finally, we show how to find the smallest and the largest expected values of a random variable with unknown probability from a given random set.
2.2 Random sets
We introduced in the previous section a random set as a body of evidence when U is a finite set of realizations. In this section, we give a more solid mathematical definition of a random set. Although we limit the scope of this dissertation to a finite set U, it is worth explaining random sets in this section as generally as possible, so that we have some useful material for future research. A random set is a random variable such that an element in its range is a subset of a set X (finite or infinite). The differences between random variables and random sets can be seen from their definitions in Definitions 2.4 and 2.5. First of all, we provide the definitions of a σ-algebra, a measurable space, a measurable mapping, and a probability space, which will be used in the definitions of a random variable and a random set. Definitions 2.1–2.3 can be found in any measure theory book.
Definition 2.1 Let Ω be a non-empty set. A σ-algebra on Ω, denoted by σ_Ω, is a family of subsets of Ω that satisfies the following properties:
- ∅ ∈ σ_Ω,
- B ∈ σ_Ω ⇒ B^c ∈ σ_Ω, and
- B_i ∈ σ_Ω, for any countable (or finite) collection {B_i} of elements of σ_Ω ⇒ ∪_i B_i ∈ σ_Ω.
A pair (Ω, σ_Ω) is called a measurable space.

Definition 2.2 Let (Ω, σ_Ω) be a measurable space. By a measure on this space, we mean a function μ : σ_Ω → [0, ∞] with the properties:
- μ(∅) = 0, and
- if B_i ∈ σ_Ω, ∀i = 1, 2, ..., are disjoint, then μ(∪_i B_i) = Σ_i μ(B_i).
We refer to the triple (Ω, σ_Ω, μ) as a measure space. If μ(Ω) = 1, we refer to it as a probability space and write it as (Ω, σ_Ω, Pr_Ω), where Pr_Ω is a probability measure.

Definition 2.3 Let (Ω, σ_Ω) and (U, σ_U) be measurable spaces. A function f : Ω → U is said to be a (σ_Ω, σ_U)-measurable mapping if f^{−1}(A) = {ω ∈ Ω : f(ω) ∈ A} ∈ σ_Ω, for each A ∈ σ_U.


Definition 2.4 (see Marin [43]) Let (Ω, σ_Ω, Pr_Ω) be a probability space and (U, σ_U) be a measurable space. A random variable X is a (σ_Ω, σ_U)-measurable mapping

    X : Ω → U.                                                       (2.24)

This mapping can be used to generate a probability measure on (U, σ_U). The probability Pr_U(A) for an event A ∈ σ_U is given by

    Pr_U(A) = Pr_Ω(X^{−1}(A)) = Pr_Ω({ω ∈ Ω : X(ω) ∈ A}),            (2.25)

where {ω ∈ Ω : X(ω) ∈ A} ∈ σ_Ω. It follows that, for A ∈ σ_U,

    Pr_U(A) := Σ_{ω ∈ X^{−1}(A)} Pr_Ω({ω})        (discrete case),   (2.26)

    Pr_U(A) := ∫_{X^{−1}(A)} dPr_Ω(ω)             (continuous case), (2.27)

where Pr_Ω(ω) in (2.27) refers to the cumulative distribution function with respect to Ω when ω ∈ Ω, while Pr_Ω({ω}) is the probability measure on {ω}, which is an element of σ_Ω.
Example 4. Let Ω be the set of all outcomes of tossing two fair dice, i.e., Ω = {(1,1), (1,2), ..., (6,6)}, with the probability Pr_Ω({(i,j)}) = 1/36, where i, j = 1, 2, ..., 6. A mapping from Ω to the sum of each outcome is a random variable X : (i,j) ↦ i + j, where the set U = {2, 3, ..., 12}. The sets σ_Ω and σ_U are the power sets of Ω and U, respectively. Let A ∈ σ_U. Then Pr_U(A) = Pr_Ω({(i,j) ∈ Ω : X((i,j)) ∈ A}). For instance, suppose A = {2, 12}; then Pr_U({2,12}) = Pr_Ω({(1,1), (6,6)}) = 2/36 = 1/18. ◊
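As a quick check of the push-forward formula (2.25), one can simply enumerate Ω; the sketch below (ours, not from the thesis) uses Python's fractions module to keep the arithmetic exact.

    from itertools import product
    from fractions import Fraction

    omega = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs
    pr_omega = Fraction(1, 36)

    def pr_U(A):
        # Pr_U(A) = Pr_Omega({(i, j) : i + j in A}), as in (2.25)-(2.26).
        return sum(pr_omega for (i, j) in omega if i + j in A)

    print(pr_U({2, 12}))   # 1/18, i.e. the mass of {(1,1), (6,6)}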


Definition 2.5 (see Marin [43]) Let (Ω, σ_Ω, Pr_Ω) be a probability space and (F, σ_F) be a measurable space, where F ⊆ σ_U, U ≠ ∅, and (U, σ_U) is a measurable space. A random set Γ is a (σ_Ω, σ_F)-measurable mapping

    Γ : Ω → F                                                        (2.28)
        ω ↦ Γ(ω).

We can generate a probability measure on (F, σ_F) by using the probability measure m(𝔊) = Pr_Ω({ω : Γ(ω) ∈ 𝔊}), where 𝔊 ∈ σ_F. Therefore, when F is finite and σ_F is the power set of F, we have Σ_{γ∈F} m({γ}) = 1. Finally, we use the notation (F, m) to represent a finite or infinite random set, depending on the cardinality of F.
Remark 2.6 The probability m in Definition 2.5 is called a basic probability assignment function, which is distinct from a regular probability on U. A basic probability assignment function is actually a probability on σ_F.

Remark 2.7 In practice, the set F can be given so that m({γ}) > 0 for all γ ∈ F. For ω ∈ Ω, a set γ := Γ(ω) ∈ F is called a focal element if m({γ}) > 0. The set {γ | m({γ}) > 0} is called a focal set.

Remark 2.8 Γ becomes a random variable, Γ(ω) = {u(ω)}, when U is a finite non-empty set and all elements of F are singletons, i.e., F = {{u} : u ∈ U}. However, in general we cannot know the exact value of Pr_U(A) for any A ∈ σ_U when we have information in the form of random sets, because elements of F can be any subsets of U.


There are two cases that generate a finite random set: 1) when U is finite, so that F is finite, and 2) when U is infinite (discrete or continuous), but F is finite. Hence, there is a finite number of focal elements; each of them can be a finite or an infinite set. An infinite random set is generated from an infinite set U, which also can be divided into two situations: 1) when U is infinite discrete, and 2) when U is a continuous set. Each of the infinitely many focal elements of an infinite random set can be either finite or infinite. The following examples help us understand the definition of random sets.
Example 5. Finite random set when U is finite.
Let us revisit Example 1. Consider an opinion poll for a Colorado governor's election. The set of candidates is U = {a, b, c, d, e}. There is a population Ω of 10,000 individuals that provide their preferences. They may not have made a final choice, since the poll takes place well before the election. The opinion of individual ω ∈ Ω is modeled by Γ(ω) ⊆ U. Therefore, each person ω of the 3,500 Republican supporters is modeled by Γ(ω) = {a, b}, each person ω′ of the 4,500 Democrat supporters is modeled by Γ(ω′) = {c, d, e}, and each person ω″ of the 2,000 undecided individuals is modeled by Γ(ω″) = {a, b, c, d, e} = U. Hence, the finite random set is (F, m), where F = {{a,b}, {c,d,e}, U}, such that m({{a,b}}) = 0.35, m({{c,d,e}}) = 0.45, and m({U}) = 0.20, where m({γ}) is the proportion of opinions of the form Γ(ω) = γ. ◊
Remark 2.9 If we use a different group of 15,000 people for the survey in Example 5 (different information), we may obtain a totally different random set from the one we got in Example 5. Therefore, we need to be clear about the sample space Ω we are using. However, if we add these 15,000 people to the other 10,000 that we asked for the survey (more information), we will increase the size of the sample space Ω, but the updated belief (or plausibility) still could be less than, greater than, or equal to the belief (or plausibility) obtained from the random set in Example 5, depending upon the survey of these 15,000 people.
On the other hand, consider the random set information of Example 5 again, where this time we receive more details on the 4,500 who support Democratic candidates: 2,000 will vote for candidate c, and the other 2,500 still do not have a final decision, so each of their votes could go to one of c, d, or e. In this case, the new information is based on the old information, but is more specific about who will vote for c. It should not be difficult to see that the updated belief will be greater than or equal to the belief from the old information. Moreover, the updated plausibility will be less than or equal to the old plausibility.
Remark 2.10 We provided an example of a coin whose fairness is in question at the beginning of this chapter. This coin has random set information as follows. Let H := head and T := tail. Then, Ω = {H, T} with Pr_Ω({H, T}) = 1. The set U also is {H, T}, which is the only focal element for the random set, i.e., m({U}) = m({{H, T}}) = 1. Therefore, Bel({H}) = Bel({T}) = 0, and Pl({H}) = Pl({T}) = 1. However, if we flip this coin 100,000 times and it turns up heads 43,000 times (and tails 57,000 times), this information is different from the information Pr_Ω({H, T}) = 1. Therefore, the clarification of the set Ω is very important in the conclusion of Bel and Pl values. For this experiment, Ω refers to the set {flip_1, flip_2, ..., flip_100000}, with Pr_Ω({flip_i}) = 10^{−5}. The set U = {H, T}, with m({H}) = 0.43 and m({T}) = 0.57. Since the focal elements now are singletons, we can conclude that Bel({H}) = Pl({H}) = Pr_U({H}) = 0.43, and Bel({T}) = Pl({T}) = Pr_U({T}) = 0.57. Based on these 100,000 trials, the (experiment-based) probability of a head is 0.43, and the (experiment-based) probability of a tail is 0.57. This probability is based on the experiment, and we will use it to represent this coin when the only information we have is this experiment. It may over- and/or under-estimate the real probability of this coin.
We conclude the thoughts from Remarks 2.9 and 2.10 as follows: 1) the clarification of the set Ω is important for the conclusion of Bel and Pl values, and 2) if we receive more information by refining the old information to become more specific, then Bel_old(A) ≤ Bel_updated(A) ≤ Pl_updated(A) ≤ Pl_old(A).
Example 6. Finite random set when U is infinite.
A round target is spinning with an unknown speed and placed in a dark room. It has the numbers 0, 2π/3, π, and 4π/3 marked to indicate angles in [0, 2π). John throws a dart at this target for a total of 1,000 trials. The result he delivers is that the darts land on the pie areas between [0, 2π/3), [2π/3, 4π/3), and [4π/3, 2π) a total of 350, 467, and 233 times, respectively. To be consistent with Definition 2.5, we consider Ω = {u | u = 1, 2, ..., 1,000}, representing trial numbers 1, 2, ..., 1,000, where Pr_Ω({u}) = 1/1,000. We also have U = [0, 2π) and F = {[0, 2π/3), [2π/3, 4π/3), [4π/3, 2π)}. The information John provided can be represented as m({[0, 2π/3)}) = 0.350, m({[2π/3, 4π/3)}) = 0.467, and m({[4π/3, 2π)}) = 0.233. This information is not enough to know the proportion of times the dart will lie in a selected area, e.g., the area between [π/4, π/2], for the next trial. However, we can say that the dart will point to the pie area between [π/4, π/2] with plausibility 0.350. ◊


Example 7. Infinite random set.
An urn contains N white and M black balls. Balls are drawn randomly, one at a
time, until a black one is drawn. If we assume that each selected ball is replaced
before the next one is drawn, each of these independent draws has probability
p = M/(N + M) of being a success (a black ball is drawn). Let Ω be the set of the
numbers of draws needed to obtain a black ball, i.e., Ω = {ω : ω = 1, 2, 3, ...},
with probability Pr_Ω({n}) = p(1 − p)^{n−1}, where n = 1, 2, 3, .... In addition, we
have a round target as in Example 6. This time, John provides the information
that he can relate the number of draws until getting a black ball to a pie area
that the dart will hit as follows:
ω = 1 ⇒ the dart can lie somewhere in the pie area [0, π] ⇒ Γ(1) = [0, π],
ω = 2 ⇒ the dart can lie somewhere in the pie area [0, 3π/2] ⇒ Γ(2) = [0, 3π/2],
ω = 3 ⇒ the dart can lie somewhere in the pie area [0, 7π/4] ⇒ Γ(3) = [0, 7π/4],
and so on. Thus, U = [0, 2π], and F = {[0, (2^n − 1)π/2^{n−1}] : n = 1, 2, 3, ...}. Hence,
m({[0, (2^n − 1)π/2^{n−1}]}) = p(1 − p)^{n−1}, and ∑_{n=1}^∞ m({[0, (2^n − 1)π/2^{n−1}]}) = 1. What is
the probability that the dart lies somewhere in the pie area [3π/2, 7π/4]? We cannot
get the answer to this question with the information we have. The only thing
we can say is that the dart may lie in that pie area with a degree of
possibility of 1 − p, the total mass of the focal elements that intersect it. ◊
Let U be a finite set, F be the power set of U, σ_F be the power set of F,
and m be a basic probability assignment function on σ_F. The other versions of
the belief and plausibility measures for a finite random set, given a finite set U,
can be derived as: ∀ A ⊆ U,

Bel_{(F,m)}(A) = ∑_{γ_i ⊆ A, γ_i ∈ F} m({γ_i})   (2.29)
             = m({γ_i : γ_i ⊆ A, γ_i ≠ ∅})   (2.30)
             = Pr_Ω{ω : Γ(ω) ⊆ A, Γ(ω) ≠ ∅},   (2.31)

and

Pl_{(F,m)}(A) = ∑_{γ_i ∩ A ≠ ∅, γ_i ∈ F} m({γ_i})   (2.32)
            = m({γ_i : γ_i ∩ A ≠ ∅})   (2.33)
            = Pr_Ω{ω : Γ(ω) ∩ A ≠ ∅}.   (2.34)
We add the subscript (F, m) to the Bel and Pl functions to make clear that these
functions are based on the random set (F, m). A random set (F, m) can be a
finite random set even when U is infinite. The belief and plausibility formulas
for a finite random set whose U is infinite are
the same as (2.29)–(2.34). We will sometimes say (F, m) is a finite random set
without mentioning the cardinality of U. An infinite random set can happen
only when U is an infinite set. The belief and plausibility measures of any set A ⊆ U
for an infinite random set also are provided by (2.30)–(2.31) and (2.33)–(2.34),
respectively.
Next we will use a continuous possibility distribution pos(u) on U ⊆ ℝ,
when U is a continuous set, to represent an infinite random set (F, m), where
F is the family of all α-cuts A_α = {u ∈ U : pos(u) ≥ α}, α ∈ (0,1].
Definition 2.11 Let U be a continuous subset of ℝ. For a given continuous pos-
sibility distribution pos(u) on U ⊆ ℝ, we define (F, m) as an infinite random
set generated by pos, where F := {A_α : α ∈ (0,1]}. Let G ⊆ (0,1], and
𝔊 = {A_α : A_α ∈ F, α ∈ G}. The corresponding basic probability assignment
m : σ_F → [0,1] is induced as

m(𝔊) = Pr({α ∈ (0,1] : A_α ∈ 𝔊}),   (2.35)

where Pr is the probability measure on ℝ corresponding to the uniform cu-
mulative distribution function. That is, Pr(ã ≤ a) = a, a ∈ [0,1], where
ã represents the random variable, and a is its realization. Furthermore, let-
ting n ∈ ℕ, we use (F_n, m_n) as a finite representation of this infi-
nite random set (F, m) by discretizing [0,1] into n subintervals, where
m_n({A_{i/n}}) = 1/n, i = 1, 2, ..., n. We also can see that ∑_{i=1}^n m_n({A_{i/n}}) = 1.

Figure 2.3: A finite representation of an infinite random set.

A finite representation of an infinite random set given a possibility distribution
is illustrated in Figure 2.3. The belief and plausibility measures for a finite
representation (F_n, m_n) of (F, m) are given by (2.29)–(2.34).
Theorem 2.12 (The strong law of large numbers, [2]) Let X₁, X₂, ... be a se-
quence of independent and identically distributed random variables, each having
a finite mean μ = E[X_i]. Then

lim_{n→∞} (X₁ + X₂ + ... + X_n)/n = μ, almost surely.   (2.36)
The strong law of large numbers is used in the proof of Theorem 2.13.
Theorem 2.13 (see Marin [43]) Let (F_n, m_n) be a finite representation of an
infinite random set (F, m) as defined in Definition 2.11. The belief (and
plausibility) of the random set (F_n, m_n) converges almost surely to the belief
(and plausibility) of the random set (F, m) as n → ∞, i.e.,

Bel_{(F,m)}(A) = lim_{n→∞} Bel_{(F_n,m_n)}(A), and   (2.37)
Pl_{(F,m)}(A) = lim_{n→∞} Pl_{(F_n,m_n)}(A),   (2.38)

almost surely, ∀ A ∈ σ_U.
The next theorem tells us that in general, if the focal elements are nested
(regardless of the cardinality of the focal set F), then the belief measure is the
necessity measure, and the plausibility measure is the possibility measure.

Theorem 2.14 (see Marin [43]) Let (F_n, m_n) be a finite representation of an
infinite random set (F, m) as defined in Definition 2.11. Then

Nec_U(A) = Bel_{(F,m)}(A) = lim_{n→∞} Bel_{(F_n,m_n)}(A), and   (2.39)
Pos_U(A) = Pl_{(F,m)}(A) = lim_{n→∞} Pl_{(F_n,m_n)}(A),   (2.40)

almost surely, ∀ A ∈ σ_U.
We skip the proofs of Theorems 2.13 and 2.14; the reader can find them
in [43]. Theorems 2.13 and 2.14 will be useful for future research, when we
consider the continuous case of U.
Let us define the set M^i of probability measures associated with A^i as
M^i = {Pr^i : Pr^i(A^i) = 1}. A finite random set (F, m) can be interpreted as the set of probability measures,
M_F, of the form M_F = {Pr : Pr(A) = ∑_{i=1}^L m(A^i)Pr^i(A), A ∈ σ_U}, where
each Pr^i belongs to the set M^i of probability measures on A^i, as claimed by
many authors, see [16] for example. Many papers, e.g. [13, 16, 55, 56], claim
that for all A ∈ σ_U, Bel(A) ≤ Pr(A) ≤ Pl(A), ∀Pr ∈ M_F. We state the
theorem and proof here. The details are as follows. The term m({γ}) will be
written as m(γ), ∀γ ∈ F, in the rest of the dissertation, for convenience.
Theorem 2.15 Let u be an uncertainty with a set of realizations U, where U
can be finite or infinite, and σ_U is given. Suppose that (F, m) is a finite random
set of u, where the focal set is F = {A^i ∈ σ_U : i = 1, 2, ..., L}, for some L ∈ ℕ.
Then, the random set can be interpreted as the unique set of probability measures
M_F of the form

M_F = {Pr | Pr(A) = ∑_{i=1}^L m(A^i)Pr^i(A), A ∈ σ_U},   (2.41)

where each Pr^i belongs to the set M^i of probability measures associated with A^i,
such that for all A ∈ σ_U, Bel(A) ≤ Pr(A) ≤ Pl(A), ∀Pr ∈ M_F. Additionally,
we have inf_{Pr∈M_F} Pr(A) = Bel(A), and sup_{Pr∈M_F} Pr(A) = Pl(A).

Proof: Please note that Pr^i is an arbitrary element of M^i. First, we verify
that Pr in (2.41) satisfies the probability axioms as follows:
Since m(A^i) and Pr^i(A) are in [0,1], for each A^i ∈ F, A ∈ σ_U, and
Pr^i ∈ M^i, 0 ≤ Pr(A) = ∑_{i=1}^L m(A^i)Pr^i(A) ≤ 1.
Pr(U) = ∑_{i=1}^L m(A^i) = 1, because Pr^i(U) = 1 for all Pr^i ∈ M^i, i =
1, 2, ..., L.
For any A₁, A₂ ∈ σ_U such that A₁ ∩ A₂ = ∅,
Pr(A₁ ∪ A₂) = ∑_{i=1}^L m(A^i)Pr^i(A₁ ∪ A₂)
= ∑_{i=1}^L m(A^i)(Pr^i(A₁) + Pr^i(A₂))
= ∑_{i=1}^L m(A^i)Pr^i(A₁) + ∑_{i=1}^L m(A^i)Pr^i(A₂)
= Pr(A₁) + Pr(A₂).
Define P̲r(A) = inf_{Pr∈M_F} Pr(A) and P̄r(A) = sup_{Pr∈M_F} Pr(A); therefore

P̲r(A) = inf{∑_{i=1}^L m(A^i)Pr^i(A) : Pr^i ∈ M^i}, A ∈ σ_U, and
P̄r(A) = sup{∑_{i=1}^L m(A^i)Pr^i(A) : Pr^i ∈ M^i}, A ∈ σ_U.
We note that these two functions may not follow the axioms of probability. We
can see that for a given A ∈ σ_U, the value of ∑_{i=1}^L m(A^i)Pr^i(A) depends on
the value of Pr^i(A). Therefore, ∑_{i=1}^L m(A^i)Pr^i(A) attains its smallest value when
Pr^i(A) = 0, as long as this does not violate the restriction of the set M^i. It is not
hard to see that the specific probability measures Pr₁^i, for each i = 1, 2, ..., L,
with

Pr₁^i(A) = 1 if A^i ⊆ A; 0 otherwise,

provide

P̲r(A) = ∑_{A^i ⊆ A} m(A^i) = Bel(A).
Similarly, ∑_{i=1}^L m(A^i)Pr^i(A) attains its largest value when Pr^i(A) = 1, as long as it
does not violate the restriction of the set M^i. If the set A is such that A^i ∩ A = ∅
and Pr^i(A) > 0, it implies that Pr^i(A^i) < 1, since Pr^i(A^i) ≤ 1 − Pr^i(A). Hence
this Pr^i ∉ M^i. Therefore, Pr^i(A) = 0, when A^i ∩ A = ∅ and Pr^i ∈ M^i.
Moreover, if A^i ∩ A ≠ ∅, there is Pr^i ∈ M^i such that Pr^i(A) = 1 = Pr^i(A^i ∩ A),
because this Pr^i satisfies Pr^i(B) = 1 whenever A^i ∩ A ⊆ B.
Hence,

P̄r(A) = ∑_{A^i ∩ A ≠ ∅} m(A^i) = Pl(A),

by using the specific probability measures Pr₂^i, for each i = 1, 2, ..., L,

Pr₂^i(A) = 1 if A^i ∩ A ≠ ∅; 0 otherwise.
Next, we show that M_F is the unique set of probability measures that has the
property Bel(A) ≤ Pr(A) ≤ Pl(A), ∀A ∈ σ_U. Suppose that P : σ_U → [0,1]
is a probability measure such that Bel(A) ≤ P(A) ≤ Pl(A), ∀A ∈ σ_U. Let
A ∈ σ_U. Then,

Bel(A) = ∑_{i=1}^L m(A^i)Pr₁^i(A) ≤ P(A) ≤ ∑_{i=1}^L m(A^i)Pr₂^i(A) = Pl(A).

Without loss of generality, we suppose that there exists l ≤ L such that A^i ⊆ A,
for each i = 1, 2, ..., l. Therefore, Bel(A) = ∑_{i=1}^l m(A^i) = ∑_{i=1}^l m(A^i)Pr₁^i(A).
Since A^i ⊄ A for all i = l+1, l+2, ..., L, we suppose further that there exists
l ≤ k ≤ L such that A^i ∩ A ≠ ∅, for all i = l+1, l+2, ..., k. Hence,

Bel(A) = ∑_{i=1}^l m(A^i)Pr₁^i(A) ≤ P(A) ≤ ∑_{i=1}^k m(A^i)Pr₂^i(A) = Pl(A).
If l = k, then l = k = L and P(A) = Bel(A) = Pl(A); we are done. Otherwise,
set a = P(A) − Bel(A) > 0. Pr^i(A) = a/((k − l)m(A^i)) = Pr^i(A ∩ A^i), ∀i =
l+1, l+2, ..., k, does not violate the restriction of M^i. Thus, this Pr^i ∈ M^i,
for each i = l+1, l+2, ..., k. Moreover, Pr^i(A) = 0, ∀i = k+1, k+2, ..., L, also
does not violate the restriction of M^i, i = k+1, k+2, ..., L, since A^i ∩ A = ∅.
Hence,

P(A) = Bel(A) + a + 0
= ∑_{i=1}^l m(A^i)Pr₁^i(A) + m(A^{l+1})Pr^{l+1}(A) + ... + m(A^k)Pr^k(A) +
m(A^{k+1})Pr^{k+1}(A) + ... + m(A^L)Pr^L(A)

satisfies Bel(A) ≤ P(A) ≤ Pl(A), ∀ A ∈ σ_U, and is in the form of (2.41). □
It follows from Theorems 2.13 and 2.14 that inf_{Pr∈M_F} Pr(A) and sup_{Pr∈M_F} Pr(A),
where M_F is interpreted from a (finite or infinite) random set induced by a
possibility distribution, are the belief and plausibility of A, ∀A ∈ σ_U. We state this
result in the following Corollary 2.16.

Corollary 2.16 Assume
H1: (Ω, σ_Ω, Pr_Ω) is a probability space,
H2: U is an arbitrary set of consideration, and
H3: (F, m) is a random set generated by a possibility distribution.
Then, we have the same conclusion as in Theorem 2.15.
Theorem 2.15 and Corollary 2.16 mean that a random set (F, m) can be inter-
preted as a set of probability measures M_F whose bounds on probability are
belief and plausibility measures. By these bounds, we can find a probability
density mass function that generates the smallest (largest) expected value of
our random variable.
2.2.1 Upper and lower expected values generated by random sets
Given a random set (F, m) of U = {u₁, u₂, ..., u_n}, and an evaluation func-
tion θ of U, where θ(u₁) ≤ θ(u₂) ≤ ... ≤ θ(u_n), the lowest and the largest
expected values of θ can be evaluated by using the following density functions
f̲ and f̄ of a random variable Ũ : Ω → U, where

f̲(u₁) = Bel({u₁, u₂, ..., u_n}) − Bel({u₂, u₃, ..., u_n}),
...
f̲(u_i) = Bel({u_i, u_{i+1}, ..., u_n}) − Bel({u_{i+1}, u_{i+2}, ..., u_n}),   (2.42)
...
f̲(u_n) = Bel({u_n}),

and

f̄(u₁) = Bel({u₁}),
...
f̄(u_i) = Bel({u₁, u₂, ..., u_i}) − Bel({u₁, u₂, ..., u_{i−1}}),   (2.43)
...
f̄(u_n) = Bel({u₁, u₂, ..., u_n}) − Bel({u₁, u₂, ..., u_{n−1}}).

These two functions f̲ and f̄ are indeed density functions because each of them
sums to 1 and is nonnegative. They are important because they are the density
functions that create the smallest and largest expected values E(θ), respectively.
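As an aside, the constructions (2.42) and (2.43) are straightforward to compute.
The following Python sketch (ours, for illustration only; the names bel and
density_bounds are not from the references) builds f̲ and f̄ for a finite random
set, assuming the realizations are listed in increasing order of θ:

    # Sketch of (2.42)-(2.43); assumes U is sorted so that
    # theta(u_1) <= ... <= theta(u_n).
    def bel(A, focal):
        """Belief of A: total mass of the focal elements contained in A."""
        A = set(A)
        return sum(m for B, m in focal if set(B) <= A)

    def density_bounds(U, focal):
        """Return (f_low, f_high) per (2.42) and (2.43)."""
        f_low, f_high = {}, {}
        for i, u in enumerate(U):
            f_low[u] = bel(U[i:], focal) - bel(U[i + 1:], focal)
            f_high[u] = bel(U[:i + 1], focal) - bel(U[:i], focal)
        return f_low, f_high

    # The random set of Example 9 below:
    U = [1, 2, 3, 4, 5, 6]
    focal = [((1, 2, 3), 1/3), ((1, 4, 5), 1/3), ((4, 6), 1/3)]
    f_low, f_high = density_bounds(U, focal)
    # f_low puts mass 2/3 on 1 and 1/3 on 4 (E = 2); f_high puts mass
    # 1/3 each on 3, 5, and 6 (E = 14/3).

Each call to bel scans the focal elements once, so only the 2(n − 1) belief
evaluations counted in Table 2.2 below are needed.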
We define Pr_g as the probability measure generated by a density function g.
It was proved by Nguyen in [54] that f̲ in (2.42) provides the smallest expected
value of θ. We combine his proof together with our contribution to show that f̄
in (2.43) provides the largest expected value in Theorem 2.17 below. Example
8 illustrates the use of Theorem 2.17.
Theorem 2.17 For a given random set (F, m) of U = {u₁, u₂, ..., u_n}, let
θ : U → ℝ be such that θ(u₁) ≤ θ(u₂) ≤ ... ≤ θ(u_n), and let E_f(θ) be the expected
value of the evaluation θ with respect to the density mass function f. Then
the density functions f̲ and f̄ defined in Equations (2.42) and (2.43) have the
property that

E_{f̲}(θ) = inf{E_f(θ) : Pr_f ∈ M_F},   (2.44)
E_{f̄}(θ) = sup{E_f(θ) : Pr_f ∈ M_F},   (2.45)

where Pr_{f̲} and Pr_{f̄} ∈ M_F, and M_F is defined by Theorem 2.15.
Proof: We first show that Pr_{f̲} and Pr_{f̄} are in M_F. From (2.42) and (2.43),
we have

f̲(u_i) = Bel({u_i, u_{i+1}, ..., u_n}) − Bel({u_{i+1}, u_{i+2}, ..., u_n})
      = ∑_{B⊆{u_i,...,u_n}} m(B) − ∑_{B⊆{u_{i+1},...,u_n}} m(B)
      = ∑_{B⊆{u_i,...,u_n}, u_i∈B} m(B),

and f̄(u_i) = Bel({u₁, u₂, ..., u_i}) − Bel({u₁, u₂, ..., u_{i−1}})
      = ∑_{B⊆{u₁,...,u_i}} m(B) − ∑_{B⊆{u₁,...,u_{i−1}}} m(B)
      = ∑_{B⊆{u₁,...,u_i}, u_i∈B} m(B).

Let A = {u_i, u_{i+k₁}, ..., u_{i+k_j}}, then

Pr_{f̲}(A) = ∑_{u_t∈A} f̲(u_t) = ∑_{B⊆{u_i,...,u_n}, u_i∈B} m(B) + ∑_{B⊆{u_{i+k₁},...,u_n}, u_{i+k₁}∈B} m(B) + ... + ∑_{B⊆{u_{i+k_j},...,u_n}, u_{i+k_j}∈B} m(B).   (2.46)
We will show that Pr_{f̲}(A) ≥ Bel(A) = ∑_{B⊆A} m(B). Let B ⊆ A.
If u_i ∈ B, then m(B) is included in the first term of (2.46).
If u_i ∉ B, and u_{i+k₁} ∈ B, then m(B) is included in the second term of (2.46).
If u_i, u_{i+k₁}, ..., u_{i+k_{j−1}} ∉ B, and u_{i+k_j} ∈ B, then m(B) is included in the
last term of (2.46).
Hence, Pr_{f̲}(A) ≥ Bel(A). Moreover, since B ∩ A ≠ ∅ for each set B in any
term of (2.46), Pr_{f̲}(A) ≤ Pl(A). Therefore, Pr_{f̲} ∈ M_F. Likewise,

Pr_{f̄}(A) = ∑_{u_t∈A} f̄(u_t) = ∑_{B⊆{u₁,...,u_i}, u_i∈B} m(B) + ∑_{B⊆{u₁,...,u_{i+k₁}}, u_{i+k₁}∈B} m(B) + ... + ∑_{B⊆{u₁,...,u_{i+k_j}}, u_{i+k_j}∈B} m(B).   (2.47)

We will show that Pr_{f̄}(A) ≥ Bel(A) = ∑_{B⊆A} m(B). Let B ⊆ A.
If u_{i+k_j} ∈ B, then m(B) is included in the last term of (2.47).
If u_{i+k_j} ∉ B, and u_{i+k_{j−1}} ∈ B, then m(B) is included in the second-to-last
term of (2.47).
If u_{i+k_j}, u_{i+k_{j−1}}, ..., u_{i+k₁} ∉ B, and u_i ∈ B, then m(B) is included in the
first term of (2.47).
Hence, Pr_{f̄}(A) ≥ Bel(A). Moreover, since B ∩ A ≠ ∅ for each set B in any
term of (2.47), Pr_{f̄}(A) ≤ Pl(A). Therefore, Pr_{f̄} ∈ M_F.
To prove (2.44) and (2.45), we apply the well-known expected value formula
(see [54] and [59]),

E_f(θ) = ∫₀^∞ Pr_f({u | θ(u) > t}) dt + ∫_{−∞}^0 (Pr_f({u | θ(u) > t}) − 1) dt.   (2.48)
Let {u | θ(u) > t} = {u_i, u_{i+1}, ..., u_n}, for some i = 1, 2, ..., n. We know that

Pr_{f̲}({u | θ(u) > t}) = Pr_{f̲}({u_i, ..., u_n}) = ∑_{j=i}^n f̲(u_j) = Bel({u_i, ..., u_n}),   (2.49)
Pr_{f̄}({u | θ(u) > t}) = Pr_{f̄}({u_i, ..., u_n}) = ∑_{j=i}^n f̄(u_j) = Pl({u_i, ..., u_n}).   (2.50)

The last equality of (2.50) holds, since

∑_{j=i}^n f̄(u_j) = (Bel({u₁, u₂, ..., u_i}) − Bel({u₁, u₂, ..., u_{i−1}})) +
(Bel({u₁, u₂, ..., u_{i+1}}) − Bel({u₁, u₂, ..., u_i})) + ... +
(Bel({u₁, u₂, ..., u_n}) − Bel({u₁, u₂, ..., u_{n−1}}))
= Bel({u₁, u₂, ..., u_n}) − Bel({u₁, u₂, ..., u_{i−1}})
= 1 − Bel({u₁, u₂, ..., u_{i−1}})
= Pl({u_i, u_{i+1}, ..., u_n}).

By Theorem 2.15, for every Pr_f ∈ M_F,

Pr_{f̲}({u | θ(u) > t}) ≤ Pr_f({u | θ(u) > t}) ≤ Pr_{f̄}({u | θ(u) > t}).

Without loss of generality, suppose that there exists k ≤ n such that θ(u_k) < 0
and θ(u_{k+1}) ≥ 0. We divide t into the following subintervals:

t ∈ (−∞, θ(u₁)) ⇒ {u | θ(u) > t} = U,
t ∈ [θ(u₁), θ(u₂)) ⇒ {u | θ(u) > t} = {u₂, u₃, ..., u_n},
...
t ∈ [θ(u_{k−1}), θ(u_k)) ⇒ {u | θ(u) > t} = {u_k, u_{k+1}, ..., u_n},
t ∈ [θ(u_k), 0) ⇒ {u | θ(u) > t} = {u_{k+1}, u_{k+2}, ..., u_n},
t ∈ [0, θ(u_{k+1})) ⇒ {u | θ(u) > t} = {u_{k+1}, u_{k+2}, ..., u_n},
...
t ∈ [θ(u_{n−1}), θ(u_n)) ⇒ {u | θ(u) > t} = {u_n},
t ∈ [θ(u_n), ∞) ⇒ {u | θ(u) > t} = ∅.

Applying f̲ to (2.48), we have

E_{f̲}(θ) = ∫₀^∞ Pr_{f̲}({u | θ(u) > t}) dt + ∫_{−∞}^0 (Pr_{f̲}({u | θ(u) > t}) − 1) dt
       = M₁ + M₂, where

M₁ = ∫₀^{θ(u_{k+1})} Bel({u_{k+1}, u_{k+2}, ..., u_n}) dt + ... +
∫_{θ(u_{n−1})}^{θ(u_n)} Bel({u_n}) dt + ∫_{θ(u_n)}^∞ Bel(∅) dt,

M₂ = ∫_{−∞}^{θ(u₁)} (Bel(U) − 1) dt + ∫_{θ(u₁)}^{θ(u₂)} (Bel({u₂, u₃, ..., u_n}) − 1) dt
+ ... + ∫_{θ(u_{k−1})}^{θ(u_k)} (Bel({u_k, u_{k+1}, ..., u_n}) − 1) dt
+ ∫_{θ(u_k)}^0 (Bel({u_{k+1}, u_{k+2}, ..., u_n}) − 1) dt.

We can see that M₁ is the smallest positive value and M₂ is the largest negative
value associated with the random set information. Hence Equation (2.44) holds.
Similarly, we also can apply f̄ to (2.48), and obtain that Equation (2.45) holds.
□
Example 8. Let Ω be the set of outcomes from tossing a die, where we know
only Pr_Ω({1,6}) = 1/3. Each face of this die is painted with one of the colors Black
(B), Red (R), or White (W). However, we cannot see the die because it is in a
dark box. Suppose that the only information we have is

Γ: {1,6} → {B, R, W} and {2,3,4,5} → {R},
i.e., we know only that faces 2, 3, 4, and 5 are all painted Red. However,
we do not know which color (B, R, or W) is used for painting face 1, and which
color (B, R, or W) is used for painting face 6. We will pay $1, $2, or $3 if B, R,
or W appears, respectively, i.e.,

θ(B) = $1, θ(R) = $2, and θ(W) = $3.

The random set (F, m) for this situation is F = {{R}, {B, R, W}}, where
m({R}) = 2/3 and m({B, R, W}) = 1/3. The focal elements are nested, so we
can create possibility and necessity measures using this random set.

Pos({B}) = 1/3          Nec({B}) = 0
Pos({R}) = 1            Nec({R}) = 2/3
Pos({W}) = 1/3          Nec({W}) = 0
Pos({B,R}) = 1          Nec({B,R}) = 2/3
Pos({B,W}) = 1/3        Nec({B,W}) = 0
Pos({R,W}) = 1          Nec({R,W}) = 2/3
Pos({B,R,W}) = 1        Nec({B,R,W}) = 1.

The density mass functions f̲ and f̄ require the calculation of four Bel's:
Bel({B}), Bel({W}), Bel({R,W}), and Bel({B,R}), i.e.,

f̲($1) = f̲({B}) = Bel({B,R,W}) − Bel({R,W}) = 1 − 2/3 = 1/3,
f̲($2) = f̲({R}) = Bel({R,W}) − Bel({W}) = 2/3 − 0 = 2/3,
f̲($3) = f̲({W}) = Bel({W}) = 0,
f̄($1) = f̄({B}) = Bel({B}) = 0,
f̄($2) = f̄({R}) = Bel({B,R}) − Bel({B}) = 2/3 − 0 = 2/3,
f̄($3) = f̄({W}) = Bel({B,R,W}) − Bel({B,R}) = 1 − 2/3 = 1/3.
With respect to the information we have, f̲ provides the smallest E(θ) = 1·(1/3) +
2·(2/3) + 3·0 = $1.67, and f̄ gives the largest E(θ) = 1·0 + 2·(2/3) + 3·(1/3) = $2.33.
However, if we were to use the LP problem (2.51) to find the lowest (and
largest) expected return value, we would have 2(2³ − 2) = 12 Pl/Bel terms to
calculate. Moreover, we would need to solve 2 linear programs.

min / max   $1·f(B) + $2·f(R) + $3·f(W)
s.t.   f(B) ∈ [Bel({B}), Pl({B})]
       f(R) ∈ [Bel({R}), Pl({R})]
       f(W) ∈ [Bel({W}), Pl({W})]
       f(B) + f(R) ∈ [Bel({B,R}), Pl({B,R})]
       f(B) + f(W) ∈ [Bel({B,W}), Pl({B,W})]
       f(R) + f(W) ∈ [Bel({R,W}), Pl({R,W})]
       f(B) + f(R) + f(W) = 1.   (2.51)   ◊
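For comparison, the two LPs in (2.51) can be solved with any LP solver. A
minimal sketch, assuming SciPy is available and using the Bel/Pl values of
Example 8 (the encoding below is ours, not from the thesis):

    from scipy.optimize import linprog

    theta = [1.0, 2.0, 3.0]                    # payoffs for B, R, W
    subsets = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
    bel = [0, 2/3, 0, 2/3, 0, 2/3]             # Bel of each subset
    pl = [1/3, 1, 1/3, 1, 1/3, 1]              # Pl of each subset

    # Encode Bel(S) <= sum_{i in S} f_i <= Pl(S) as A_ub x <= b_ub.
    A_ub, b_ub = [], []
    for S, lo, hi in zip(subsets, bel, pl):
        row = [1.0 if i in S else 0.0 for i in range(3)]
        A_ub.append(row); b_ub.append(hi)                  # <= Pl(S)
        A_ub.append([-r for r in row]); b_ub.append(-lo)   # >= Bel(S)

    A_eq, b_eq = [[1.0, 1.0, 1.0]], [1.0]                  # masses sum to 1
    low = linprog(theta, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * 3)
    high = linprog([-c for c in theta], A_ub=A_ub, b_ub=b_ub,
                   A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
    print(low.fun, -high.fun)   # approximately 1.67 and 2.33

This reproduces $1.67 and $2.33, but only after computing all twelve Bel/Pl
values first.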
Let U = {u₁, u₂, ..., u_n} be the set of all realizations of a random set uncer-
tainty u, and let θ be an evaluation of U such that θ(u₁) ≤ θ(u₂) ≤ ... ≤ θ(u_n).
Theorem 2.17 has an advantage for finding the lowest and highest expected values
of θ. Table 2.2 illustrates the number of belief and plausibility terms required
for finding f̲ and f̄ by using Theorem 2.17 versus by solving two LP problems.
It is clear that f̲ and f̄ obtained by the constructions (2.42) and (2.43), when
random set information is given, require much less calculation than solving the two
linear programs min / max_{f : Pr_f ∈ M_F} E_f(u), where M_F = {density function f on U :
Bel(A) ≤ Pr_f(A) ≤ Pl(A), ∀A ⊆ U}, because we need to find the beliefs and plau-
sibilities of all subsets A of U to be able to set up these LP problems.
Table 2.2: The number of belief and plausibility terms required for finding the two
density functions for the lowest and the highest expected values of θ by using
Theorem 2.17 versus by solving two LP problems.

             Theorem 2.17    2 LP problems
Bel terms    2(n − 1)        2ⁿ − 2
Pl terms     -               2ⁿ − 2
Total        2(n − 1)        2(2ⁿ − 2)
Moreover, M_F cannot be reduced to M = {density function f on U : Bel({u_i}) ≤
Pr_f({u_i}) ≤ Pl({u_i}), ∀u_i ∈ U}. Example 9 shows that optimal solutions of
min / max_{f∈M} E_f(u) may not satisfy the random set information, since these opti-
mal solutions may not be elements of M_F.
Example 9. Let U = {1, 2, 3, 4, 5, 6} be the set of all realizations of an uncertainty
u, and suppose (F, m) is a random set such that m({1,2,3}) = 1/3, m({1,4,5}) = 1/3,
and m({4,6}) = 1/3. Then, the constructions (2.42) and (2.43) provide the
smallest and the largest expected values of u by using the probability density mass
vectors f̲ = [2/3, 0, 0, 1/3, 0, 0]ᵀ and f̄ = [0, 0, 1/3, 0, 1/3, 1/3]ᵀ. Furthermore, we have
Bel({1}) = Bel({2}) = Bel({3}) = Bel({4}) = Bel({5}) = Bel({6}) = 0, and
Pl({1}) = 2/3, Pl({2}) = Pl({3}) = 1/3, Pl({4}) = 2/3, Pl({5}) = Pl({6}) = 1/3.
An optimal solution of max_{f∈M} E_f(u) is f = [0, 0, 0, 1/3, 1/3, 1/3]ᵀ. However, Theorem
2.15 says that Pr_f({1,2,3}) ∈ [Bel({1,2,3}), Pl({1,2,3})] = [1/3, 2/3], ∀f with
Pr_f ∈ M_F, but Pr_f({1,2,3}) = 0 ∉ [1/3, 2/3]. That is, optimal solutions for max_{f∈M} E_f(u)
may not satisfy the random set information. ◊
The next concern is about infinite random sets and finite random sets with
an infinite set U. We try to build a theorem similar to Theorem 2.17 for the
infinite case of U. Similar to Equations (2.49) and (2.50) in the proof of Theorem
2.17, if we can find density functions f̲ and f̄ such that

Pr_{f̲}({u | θ(u) > t}) = Bel({u | θ(u) > t}), and
Pr_{f̄}({u | θ(u) > t}) = Pl({u | θ(u) > t}),   (2.52)

and show that these Pr_{f̲} and Pr_{f̄} ∈ M_F, then we are done, because Bel and
Pl are lower and upper bounds on probabilities, as mentioned in Theorem 2.15.
A plausibility measure has the properties Pl({u | θ(u) ≤ t₁}) ≤ Pl({u | θ(u) ≤ t₂}),
∀t₁ ≤ t₂, and Pl(U) = 1. We can use a plausibility measure to represent a
cumulative probability distribution function of θ. Likewise, a belief measure can be
used to construct a cumulative probability distribution function of θ. Therefore,
we can get the density functions f̲ and f̄ from these cumulative probability
distribution functions,

Pr_{f̲}({u | θ(u) ≤ t}) = Pl({u | θ(u) ≤ t}),
Pr_{f̄}({u | θ(u) ≤ t}) = Bel({u | θ(u) ≤ t}),   (2.53)

respectively. It turns out that

Pr_{f̲}({u | θ(u) > t}) = 1 − Pr_{f̲}({u | θ(u) ≤ t})
                     = 1 − Pl({u | θ(u) ≤ t})   (2.54)
                     = Bel({u | θ(u) > t}), and
Pr_{f̄}({u | θ(u) > t}) = 1 − Pr_{f̄}({u | θ(u) ≤ t})
                     = 1 − Bel({u | θ(u) ≤ t})   (2.55)
                     = Pl({u | θ(u) > t}).
Therefore, we have the following theorem.

Theorem 2.18 Let a measurable space (U, σ_U) and a continuous function θ :
U → ℝ be given. Let (F, m) be a random set with F ⊆ σ_U. Then there exist den-
sity functions f̲ and f̄ ∈ M_F that provide the smallest and the largest expected
values of θ, and their probability distribution functions are Pr_{f̲}({u | θ(u) ≤ t}) =
Pl({u | θ(u) ≤ t}) and Pr_{f̄}({u | θ(u) ≤ t}) = Bel({u | θ(u) ≤ t}), respectively.

Proof: The proof of the smallest and largest expected values was presented
before stating the theorem. We still need to show that f̲ and f̄ ∈ M_F. Given
A ∈ σ_U, then A ⊆ U. We also can write A := {u | t₁ < θ(u) ≤ t₂} for some
t₁ < t₂ ∈ ℝ. We use Equations (2.53)–(2.55) to show that for any t₁ < t₂ ∈ ℝ,

Bel(A) ≤ Pr_{f̲}(A) ≤ Pl(A), and   (2.56)
Bel(A) ≤ Pr_{f̄}(A) ≤ Pl(A).   (2.57)
Since

Pr_{f̲}(A) = Pr_{f̲}({u | t₁ < θ(u) ≤ t₂})
= Pr_{f̲}({u | θ(u) > t₁}) − Pr_{f̲}({u | θ(u) > t₂})
= Bel(A₁) − Bel(A₂); where A₁ = {u | θ(u) > t₁}, A₂ = {u | θ(u) > t₂}
= m({γ ∈ F : γ ⊆ A₁}) − m({γ ∈ F : γ ⊆ A₂})
= m({γ ∈ F : γ ⊆ A₁, γ ⊄ A₂})
≥ m({γ ∈ F : γ ⊆ A₁ \ A₂})
= Bel({u | t₁ < θ(u) ≤ t₂})
= Bel(A),

and

Pr_{f̲}(A) = Pr_{f̲}({u | t₁ < θ(u) ≤ t₂})
= Pr_{f̲}({u | θ(u) ≤ t₂}) − Pr_{f̲}({u | θ(u) ≤ t₁})
= Pl(B₂) − Pl(B₁); where B₁ = {u | θ(u) ≤ t₁}, B₂ = {u | θ(u) ≤ t₂}
= m({γ ∈ F : γ ∩ B₂ ≠ ∅}) − m({γ ∈ F : γ ∩ B₁ ≠ ∅})
= m({γ ∈ F : γ ∩ B₂ ≠ ∅, γ ∩ B₁ = ∅})
≤ Pl({u | t₁ < θ(u) ≤ t₂})
= Pl(A),

we obtain (2.56). Similarly, we also can show (2.57). □
Uncertainty interpretations mentioned so far in this chapter are belief, plau-
sibility, necessity, and possibility measures, and random sets. A belief (plausibil-
ity, necessity, possibility) measure assumes that there is a belief (plausibility,
necessity, possibility) value for every subset of a non-empty finite set U. A ran-
dom set needs the sum of the basic probability assignment function over all focal
elements to be 1. Unfortunately, the information we receive may not contain all the
subset details needed to define a belief (or other) measure. Likewise, we may have
focal elements whose associated basic probability assignment values do not
add up to 1. Subsections 2.2.2–2.2.6, which also are our contribution in this
dissertation, address these issues, so that we can continue using the construc-
tions (2.42) and (2.43) to find the smallest and the largest expected values of u,
based on the partial information we have.
2.2.2 The random set generated by incomplete random set
information
Let U = {u₁, u₂, ..., u_n} be the set of all realizations of u. Incomplete
random set information is information containing focal elements and asso-
ciated basic probability assignment values that do not add up to 1. Suppose
this information is m(A_i) = a_i > 0, with ∑_{i=1}^L a_i < 1, for some L. This partial in-
formation can belong to any random set that has the A_i's among its focal elements with
m(A_i) = a_i. We can generate a random set from this partial information which
guarantees bounds on the unknown probability of each {u_k}, k = 1, 2, ..., n.
This random set has A₁, A₂, ..., A_L, U as the focal elements, with m(A_i) = a_i
and m(U) = 1 − ∑_{i=1}^L a_i. It is not difficult to see that this random set pro-
vides the largest set M_F, because it generates the widest interval
[Bel(A), Pl(A)], ∀A ⊆ U, and hence covers the unknown probability based
on this partial information.
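A one-line completion suffices in code. A minimal sketch (the function name is
ours), assuming the partial masses are given as (focal element, mass) pairs:

    # Complete incomplete random set information per Subsection 2.2.2:
    # the leftover mass 1 - sum(a_i) is assigned to U itself.
    def complete_random_set(U, partial):
        total = sum(m for _, m in partial)
        focal = list(partial)
        if total < 1:
            focal.append((tuple(U), 1 - total))
        return focal

    focal = complete_random_set((1, 2, 3, 4), [((1, 2), 0.5), ((2, 3), 0.3)])
    # -> [((1, 2), 0.5), ((2, 3), 0.3), ((1, 2, 3, 4), 0.2)]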
2.2.3 The random set generated by a possibility distribution or
possibilities of some subsets of U
In some situations, we may receive only a possibility distribution instead
of a possibility measure. We are still able to generate a unique random set
corresponding to this possibility distribution, because a possibility distribution
implies a possibility measure by (2.21); hence the necessity measure is obtained.
Then, we can use the formula m(A) = ∑_{B : B⊆A} (−1)^{|A\B|} Nec(B) to create the
random set. More constructively, let U = {u₁, u₂, ..., u_n}. Suppose 0 < α₁ <
α₂ < ... < 1, and
pos(u_{k_i}) = α₁, i = 1, 2, ..., t₁,
pos(u_{k_i}) = α₂, i = t₁ + 1, t₁ + 2, ..., t₂,
...   (2.58)
pos(u_{k_i}) = 1, i = t_l + 1, ..., n.

Let A_j = {u_{k_i} : i = t_{j−1} + 1, t_{j−1} + 2, ..., t_j}, where t₀ = 0. Then, the basic
probability assignment function values of the associated focal ele-
ments are

m(U) = α₁,
m(U \ A₁) = α₂ − α₁,
m(U \ A₁ \ A₂) = α₃ − α₂,
...   (2.59)
m(U \ A₁ \ A₂ \ ... \ A_l) = 1 − α_l.

Figure 2.4 illustrates the method to obtain a random set from a given possibility
distribution when U = {u₁, u₂, u₃, u₄, u₅, u₆}. The dashed lines indicate the
corresponding focal elements.

Figure 2.4: A random set generated by a possibility distribution; in the plotted
example, m({u₃, u₄}) = 1 − α₂, m({u₂, u₃, u₄, u₅}) = α₂ − α₁, and m(U) = α₁ − 0 = α₁.
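A minimal sketch of the construction (2.58)–(2.59) (ours; the function name is
hypothetical), which peels off the strict upper α-cuts of the possibility
distribution to obtain the nested focal elements:

    def random_set_from_pos(pos):
        """pos: dict u -> possibility in (0, 1], with max value 1."""
        focal = []
        support = set(pos)              # current focal element
        prev_alpha = 0.0
        for alpha in sorted(set(pos.values())):
            mass = alpha - prev_alpha   # m of the current focal element
            if mass > 0:
                focal.append((frozenset(support), mass))
            support = {u for u in pos if pos[u] > alpha}  # strict upper cut
            prev_alpha = alpha
        return focal

    pos = {'u1': 0.3, 'u2': 0.6, 'u3': 1.0, 'u4': 1.0, 'u5': 0.6, 'u6': 0.3}
    random_set_from_pos(pos)
    # -> m(U) = 0.3, m({u2,u3,u4,u5}) = 0.3, m({u3,u4}) = 0.4

With the distribution of Figure 2.4 (α₁ = 0.3, α₂ = 0.6, say), this returns
exactly the three focal elements indicated by the dashed lines.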
Uncertainty information may also be possibilities of some subsets of U. For
example, let U = {u₁, u₂, ..., u_n}; we may have only Pos(B₁) = b₁, Pos(B₂) =
b₂, ..., Pos(B_K) = b_K, for some K, as our information. Assume that the
information we have satisfies the possibility properties. (We need to make sure
that the information we have satisfies the possibility properties, because if U =
{1, 2, ..., 6} and we have Pos({2,4,5}) = 1/2 and Pos({1,3,6}) = 1/3, then 1 =
Pos(U) = Pos({2,4,5} ∪ {1,3,6}) = max[Pos({2,4,5}), Pos({1,3,6})] = 1/2, which is
wrong.) To create the largest M_F based on this partial information, we assume
that pos(u_i) = b_k, ∀u_i ∈ B_k, k = 1, 2, ..., K, and pos(u_i) = 1, ∀u_i ∉ ⋃_{k=1}^K B_k.
Then, we continue the processes (2.58) and (2.59) to obtain the random set that
generates the set M_F.
2.2.4 The random set generated by a necessity distribution or
necessities of some subsets of U
Suppose the information of an uncertainty u is given by a necessity distri-
bution. Let nec : U → [0,1] be a necessity distribution of an uncertainty u with
all realizations in U. Since a necessity measure is a special case of a belief measure,
m({u_k}) = nec(u_k), ∀k = 1, 2, ..., n. However, the focal elements need to be
nested. Hence, we cannot have more than one singleton focal element {u_i}, for
some i ∈ {1, 2, ..., n}; otherwise the focal elements are not nested. We can cre-
ate more than one random set that satisfies this necessity distribution. However,
we use the random set (F, m), where F = {{u_i}, U}, and m(U) = 1 − nec(u_i).
This random set provides the largest set M_F; hence, we cover all possible
expected values of u. If the uncertainty information is just necessities of some
subsets of U, we can turn them into possibilities of the complements of those
sets, using Nec(A) = 1 − Pos(A^c). Assuming that this information satisfies the
possibility properties, we can find the random set that provides the largest set
M_F by following the method in the previous subsection.
2.2.5 The random set generated by a belief distribution
Suppose the information of an uncertainty u is given by a belief dis-
tribution. Let bel : U → [0,1] be a belief distribution of an uncertainty
u with all realizations in U. Then, each {u_k} such that bel(u_k) > 0 is a fo-
cal element of a random set. We also can create more than one random
set that satisfies a given belief distribution. However, we use the random
set (F, m), where F = {U} ∪ {{u_k} : bel(u_k) > 0, k = 1, 2, ..., n}, and
m(U) = 1 − ∑_{k=1}^n bel(u_k), by reasoning similar to that provided in the last
subsection.
2.2.6 The random set generated by a plausibility distribution
Suppose the information of an uncertainty u is given by a plausibility dis-
tribution. Let pl : U → [0,1] be a plausibility distribution of an uncertainty u
with all realizations in U. We want to find the random set generated by a given
plausibility distribution which provides the widest bound on the unknown ex-
pected value. This can be done by setting Bel({u_k}) = 0, ∀k. There is more
than one random set that provides Pr({u_k}) ∈ [0, pl(u_k)], ∀k = 1, 2, ..., n, by
Lemma 2.23 (presented later in the next section). Among these random sets, we
can choose one that provides the smallest and the largest expected values, based
on the information Pr({u_k}) ∈ [0, pl(u_k)], ∀k = 1, 2, ..., n. However, it is easier
to find this biggest bound on expected values by solving two LP problems.
Subsections 2.2.2–2.2.6 suggest that the smallest and the largest expected
values of u are obtained by finding the two appropriate density mass functions
through the random set that provides the biggest set M_F, based on the partial
information we have. However, if the partial information is beliefs or plausibili-
ties of some subsets of U, then finding the random set that provides the biggest
set M_F based on this partial information may be more troublesome. Instead,
we may set up two LP problems to find the bounds on the expected value. Partial
information in the form of beliefs or plausibilities of some subsets of U
falls under the scope of an IVPM, so we will address these two in detail after
the section on IVPMs.
Belief and plausibility measures are considered to be the tightest bounds
on a set of probability measures, M_F, generated by a finite random set, as in
the conclusion of Theorem 2.15. However, an imprecise probability may not
be presented as random set information. It could be in the form of probability
intervals; see Example 10.
Example 10. Let U = {a, b, c}. Suppose someone provides the information
that Pr({a}) ∈ [1/6, 1/3], Pr({b}) ∈ [1/3, 1/2], and Pr({c}) ∈ [1/3, 2/3]. This information
is not considered to be a random set, and we can see that 2/3 is not the tightest
upper bound on the probability of {c}, since 1/6 + 1/3 + 2/3 > 1. ◊
There are some mathematical concepts that try to capture uncertainty in the
form of intervals of all possible probabilities, in general. Two concepts presented
in this thesis are interval-valued probability measures (IVPMs) and clouds. We
will first provide some basic details of these two concepts. Then we will explain
the reason for using a random set and an IVPM as the unified approach for
optimization with the kinds of uncertainty related to this thesis.
2.3 Interval-valued probability measures (IVPMs)
Let Int_{[0,1]} denote the set of all intervals contained in [0,1], i.e., Int_{[0,1]} =
{[a, b] | 0 ≤ a ≤ b ≤ 1}. The concept of an interval-valued probability measure
(IVPM) was first given by Weichselberger [75], who used the term R-probability
as an abbreviation of "reasonable" probability, since it describes a reasonable
range in which an unknown probability could lie.
Definition 2.19 (see Weichselberger [75]) Given a measurable space (U, σ_U),
an interval-valued function i : σ_U → Int_{[0,1]} is called an interval-valued prob-
ability measure (IVPM) (or an R-probability) if
1. i(A) = [i⁻(A), i⁺(A)] ⊆ [0,1], where A ∈ σ_U and i⁻(A) ≤ i⁺(A), and
2. there exists a probability measure Pr such that ∀A ∈ σ_U, Pr(A) ∈ i(A).

Definition 2.20 (see Weichselberger [75]) Given a measurable space (U, σ_U),
and an IVPM i on σ_U, the set ℳ = {Pr : Pr(A) ∈ i(A), ∀A ∈ σ_U} is called
the structure of i.

Definition 2.21 (see Weichselberger [75]) An R-probability is called an F-
probability if ∀ A ∈ σ_U: i⁺(A) = sup{Pr(A) : Pr ∈ ℳ}, and
i⁻(A) = inf{Pr(A) : Pr ∈ ℳ}.

F-probability is an abbreviation of "feasible" probability. In an F-probability,
none of the intervals [i⁻(A), i⁺(A)] is too wide, while this may be the case for a
general IVPM.
Theorem 2.15 shows that a finite random set is a special example of an IVPM
by using i(A) = [Bel(A), Pl(A)], ∀ A ⊆ U. Moreover, since M_F is the structure,
Bel(A) = inf{Pr(A) : Pr ∈ M_F}, and Pl(A) = sup{Pr(A) : Pr ∈ M_F}, a finite
random set generates an F-probability.
We may be able to find a (not unique)
random set when the only information we have is in the form of an R- or F-
probability of all singleton subsets of U (instead of all subsets of U). For
example, if we have information that the left and right bounds of the intervals [1/6, 1/3],
[1/3, 1/2], and [1/3, 1/2] create an F-probability of {a}, {b}, and {c}, respectively,
when U = {a, b, c}, then these bounds represent the beliefs and plausibilities of
a, b, and c. We can construct a random set by using these bounds, e.g., 1/6 =
Bel({a}) = ∑_{B⊆{a}} m(B) = m({a}). Similarly, m({b}) = 1/3 and m({c}) =
1/3. Together with m(U) = 1 − 1/6 − 1/3 − 1/3 = 1/6, the random set (F, m), where
F = {{a}, {b}, {c}, U}, preserves the plausibilities of a, b, and c as the right
bounds of the intervals of this F-probability, as it should. An R-probability
of all singleton subsets of U is also called a probability interval.
Definition 2.22 (see De Campos et al. [8]) Let U = {u₁, u₂, ..., u_n} be a fi-
nite set of realizations of an uncertainty u, and C = {[a̲_k, ā_k] : 0 ≤ a̲_k ≤
ā_k ≤ 1, ∀ k = 1, 2, ..., n}. We can interpret these intervals in C as a set of
bounds on probability by defining the set 𝒫 as 𝒫 = {Pr | a̲_k ≤ Pr({u_k}) ≤ ā_k,
∀ k = 1, 2, ..., n}. We will say that C is a set of probability intervals (a proba-
bility interval, in short), and 𝒫 is the set of possible probabilities associated
with C.

Assuming that 𝒫 ≠ ∅, a probability interval interpretation of an uncertainty u
also can be an example of an IVPM by setting

i(A) = [a̲_k, ā_k] if A = {u_k}, k = 1, 2, ..., n; i(A) = [0, 1] otherwise.   (2.60)
Therefore, there is a random set generating the F-probability of this IVPM.
De Campos et al. [8] showed how to generate an F-probability of all sin-
gleton subsets of U from a given IVPM of all singleton subsets of U (also called
a probability interval) when U is finite and |U| = n. The set of probabilities
associated with this F-probability is equal to the structure ℳ.
We will present in Lemma 2.23 how to obtain an F-probability of all {u_k} ⊆ U
from a given IVPM of {u_k}, k = 1, 2, ..., n, in a similar way as in [8], and use it
to generate a random set. Lemmer et al. [31] provide conditions under which the
bounds of a given probability interval of {u_k} represent beliefs and plausibilities
of {u_k}, ∀k = 1, 2, ..., n. However, our contribution results from Lemma 2.23,
and it is that we can tighten a given probability interval of {u_k}, ∀ k, to
[Bel({u_k}), Pl({u_k})], which is the tightest interval in the sense that the structure
equals {Pr | Pr({u_k}) ∈ [Bel({u_k}), Pl({u_k})], ∀k = 1, 2, ..., n}.
Lemma 2.23 Let a finite set U = {u₁, u₂, ..., u_n} and a corresponding R-
probability of {u_k}, k = 1, 2, ..., n, be given. We can construct a random set
(F, m) (may not be unique) from this probability interval (the R-probability of
{u₁}, {u₂}, ..., {u_n}), such that the F-probability of {u_k} is
[Bel({u_k}), Pl({u_k})], ∀k = 1, 2, ..., n.

Proof: Consider [i⁻({u_k}), i⁺({u_k})] of the given probability interval, ∀k =
1, ..., n. We try to get the tightest possible subintervals of [i⁻({u_k}), i⁺({u_k})]
that represent the corresponding structure as in (2.61). This tightest subin-
terval is the F-probability of {u_k}, ∀k. The structure is

ℳ = {Pr | Pr({u_k}) ∈ [i⁻({u_k}), i⁺({u_k})], ∑_{k=1}^n Pr({u_k}) = 1}.   (2.61)

Since there is a random set gener-
ating the F-probability of this probability interval, i⁻({u_k}) ≤ Bel({u_k}) and
i⁺({u_k}) ≥ Pl({u_k}), ∀k = 1, 2, ..., n. We note that ℳ ≠ ∅, by the defini-
tion of an IVPM. [8] provides conditions to check the emptiness of a probability
interval.
Some of the endpoints of these intervals [i⁻({u_k}), i⁺({u_k})], k = 1, ..., n,
can be probabilities Pr({u_k}). We categorize them into six cases.
Case 1. i⁻({u_k}) + ∑_{j≠k} i⁺({u_j}) < 1. We cannot increase the terms i⁺({u_j})
anymore to change "<" to "=", so we have to increase i⁻({u_k}). Hence,
i⁻({u_k}) < Bel({u_k}), and i⁺({u_j}) = Pl({u_j}). We get Bel({u_k}) =
1 − ∑_{j≠k} i⁺({u_j}).

Case 2. i⁻({u_k}) + ∑_{j≠k} i⁺({u_j}) > 1. We can reduce some terms of i⁺({u_j}),
so that ">" becomes "=". This implies i⁻({u_k}) = Bel({u_k}).

Case 3. i⁻({u_k}) + ∑_{j≠k} i⁺({u_j}) = 1. Then, i⁻({u_k}) = Bel({u_k}).

Therefore, we obtain Bel({u_k}), ∀ k = 1, 2, ..., n. Similar to these three cases
above, the latter three cases provide Pl({u_k}), ∀k = 1, 2, ..., n.

Case 4. i⁺({u_k}) + ∑_{j≠k} i⁻({u_j}) < 1. We can change some of the terms
i⁻({u_j}), so that "<" becomes "=". Therefore, i⁺({u_k}) = Pl({u_k}).

Case 5. i⁺({u_k}) + ∑_{j≠k} i⁻({u_j}) > 1. Since we cannot reduce any of the terms
i⁻({u_j}) further, the term i⁺({u_k}) has to be lowered. Hence, i⁺({u_k}) >
Pl({u_k}). We can get Pl({u_k}) by Pl({u_k}) = 1 − ∑_{j≠k} i⁻({u_j}).

Case 6. i⁺({u_k}) + ∑_{j≠k} i⁻({u_j}) = 1. Then, i⁺({u_k}) = Pl({u_k}).
j=i
j^k
Only one of the first three cases can happen for each a*,, while finding
Bel({uk}). Similarly, only one of Case 4, 5, or 6 occurs for each u^, in the
process of finding Pl({uk})- These cases of finding Bel({uk}) and Pl({uk})
may seem to interact to each other; for example, the updated Bel({uk}) from
Case 1 may results the updated Pl({uj}) from Case 5. However, the following
analysis shows that the updated Bel({uk}) does not effect the updated Pl({uj}).
If Case 1 happens for some k = 1, 2, ..., n.
Then i⁺({u_j}) = Pl({u_j}), ∀j ≠ k, and Bel({u_k}) = 1 − ∑_{j≠k} i⁺({u_j}). Thus,
for each j ≠ k,

i⁺({u_j}) = 1 − ∑_{l≠j,k} i⁺({u_l}) − Bel({u_k})   (2.62)
≤ 1 − ∑_{l≠j,k} i⁻({u_l}) − Bel({u_k}).   (2.63)

Bel({u_k}) is the new left bound of the interval [i⁻({u_k}), i⁺({u_k})]. Thus
(2.63) implies

i⁺({u_j}) + ∑_{l≠j} i⁻({u_l}) ≤ 1.

Therefore, Case 5 will never happen when j ≠ k. Hence, Pl({u_j}) is not
changed and equals i⁺({u_j}), ∀j ≠ k. If Case 1 happens for more than
one k, say k₁ and k₂, we can set j := k₁ and k := k₂ (and vice versa) in
(2.62)–(2.63). Hence, Case 5 will never happen for any k = 1, 2, ..., n, and
Pl({u_k}) = i⁺({u_k}). However, if Case 1 happens for only one k, then Case 5
might happen for u_k.
1. If Case 5 happens for u_k, then Pl({u_k}) = 1 − ∑_{j≠k} i⁻({u_j}). Moreover, for
each j ≠ k,

i⁻({u_j}) = 1 − ∑_{l≠j,k} i⁻({u_l}) − Pl({u_k})   (2.64)
≥ 1 − ∑_{l≠j,k} i⁺({u_l}) − Pl({u_k}),   (2.65)

which falls into Case 2 or 3. These two cases imply that Bel({u_j}) =
i⁻({u_j}), ∀j ≠ k. Therefore, we obtain the tightest subinterval
[Bel({u_k}), Pl({u_k})], ∀k = 1, 2, ..., n.

2. If Case 5 does not happen for u_k, then Pl({u_k}) = i⁺({u_k}). We still do
not know Bel({u_j}), j ≠ k. The case i⁻({u_j}) + ∑_{l≠j} i⁺({u_l}) < 1, ∃ j ≠ k,
will never happen, since we are considering when Case 1 happens just for
k. Hence, Bel({u_j}) = i⁻({u_j}), ∀ j ≠ k. We also obtain the tightest
subinterval [Bel({u_k}), Pl({u_k})], ∀k = 1, 2, ..., n.
If Case 1 has never happened for u_k, ∀ k = 1, 2, ..., n.
Then Bel({u_k}) = i⁻({u_k}), ∀k. If Case 5 also has never happened, then
Pl({u_k}) = i⁺({u_k}), ∀k. If Case 5 happens for some k = 1, 2, ..., n, then
Pl({u_k}) = 1 − ∑_{j≠k} i⁻({u_j}) and Pl({u_j}) = i⁺({u_j}), ∀j ≠ k. Moreover,
(2.64) and (2.65) are true for these k. Suppose that more than one
k satisfies (2.64) and (2.65), say k₁ and k₂. Setting j := k₁ and k := k₂
(and vice versa), we can see that Case 1 will never re-happen, and it results in
the unchanged Bel({u_k}) = i⁻({u_k}), ∀k. If Case 5 happens for only one k,
then Case 1 will never re-happen for u_j, ∀ j ≠ k, because of (2.64) and (2.65).
We also get the unchanged Pl({u_j}) = i⁺({u_j}), ∀j ≠ k, which means that
i⁻({u_k}) + ∑_{j≠k} i⁺({u_j}) remains unchanged and hence, it does not fall into Case
1. Therefore, Case 1 will never re-occur. We obtain the tightest subinterval
[Bel({u_k}), Pl({u_k})], ∀k = 1, 2, ..., n. We conclude from this analysis that
the updated Bel({u_k}) does not affect the updated Pl({u_j}).
Let ℳ̂ = {Pr | Pr({u_k}) ∈ [Bel({u_k}), Pl({u_k})], ∀k = 1, 2, ..., n}. It is
easy to see that ℳ̂ ⊆ ℳ. On the other hand, if Pr ∈ ℳ, then because of the
restriction ∑_{k=1}^n Pr({u_k}) = 1, Cases 1–6 imply that Pr ∈ ℳ̂. Thus, ℳ =
ℳ̂. Next, we construct a random set (F, m) from [Bel({u_k}), Pl({u_k})], k =
1, 2, ..., n, by first setting F = ∅, then adding {u_k} to F, with m({u_k}) =
Bel({u_k}) > 0, ∀ k = 1, 2, ..., n. So,

1 − ∑_{k=1}^n m({u_k}) = ∑_{B⊆U, |B|≥2} m(B).   (2.66)

Since, for each k = 1, 2, ..., n, Pl({u_k}) = ∑_{B∩{u_k}≠∅} m(B),

Pl({u_k}) − Bel({u_k}) = ∑_{B∩{u_k}≠∅, B≠{u_k}} m(B), ∀k = 1, 2, ..., n.   (2.67)

The system of Equations (2.66) and (2.67) has a total of n + 1 rows and
2ⁿ − n − 1 columns. We check that the right-hand-side matrix of this system
has full row rank n + 1 in Lemma 2.24. Therefore, there are infinitely many
solutions for the 2ⁿ − n − 1 terms m(B). We can construct one of the solutions of
the system as follows.

1. Set M := m(U) = min{Pl({u_k}) − Bel({u_k}), k = 1, 2, ..., n}. If M ≥
1 − ∑_{k=1}^n m({u_k}), set m(U) = 1 − ∑_{k=1}^n m({u_k}), and m(B) = 0, ∀B ⊂
U, B ≠ U, B ≠ {u_k}, k = 1, 2, ..., n; then we are done. Otherwise, let
Λ = {1, 2, ..., n} and Λ₁ = {k | M = Pl({u_k}) − Bel({u_k})}. We have
m(B) = 0, ∀B ⊂ U, B ≠ U, B ∩ {u_k} ≠ ∅, and B ≠ {u_k}, ∀k ∈ Λ₁.

2. Now, we consider the system (2.67) when k ∈ Λ − Λ₁. Choose an
unknown value m(B₁) for some B₁ ⊂ U, such that m(B₁) is a term
in ∑_{B∩{u_k}≠∅, B≠{u_k}} m(B), for each k ∈ Λ − Λ₁. Such a set B₁ always ex-
ists. Set m(B₁) = min{Pl({u_k}) − Bel({u_k}) − M, ∀ k ∈ Λ − Λ₁}. If
m(B₁) ≥ 1 − ∑_{k=1}^n m({u_k}) − M, set m(B₁) = 1 − ∑_{k=1}^n m({u_k}) − M,
and m(B) = 0 for the rest of the unassigned values m(B); then we are
done. Otherwise, set M := M + m(B₁), and Λ₂ := {k | M = Pl({u_k}) −
Bel({u_k})}. Again, we have m(B) = 0 for every unassigned value m(B)
in ∑_{B∩{u_k}≠∅} m(B), k ∈ Λ₂.

3. Set Λ₁ := Λ₁ ∪ Λ₂, and return to step 2.

We add the set B ⊆ U into F whenever m(B) > 0. Hence, we obtain a random
set (F, m). The procedure of constructing this random set will terminate in a
finite number of iterations, since U is finite. It is indeed a random set, since
∑_{A∈F} m(A) = 1. □
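Operationally, Cases 1–6 reduce to two closed-form formulas: Bel({u_k}) =
max(i⁻({u_k}), 1 − ∑_{j≠k} i⁺({u_j})) and Pl({u_k}) = min(i⁺({u_k}), 1 − ∑_{j≠k} i⁻({u_j})).
A minimal sketch (the function name is ours):

    # Tighten a probability interval [lo_k, hi_k] to [Bel({u_k}), Pl({u_k})]
    # per Cases 1-6 of Lemma 2.23.
    def tighten(intervals):
        lo = [a for a, _ in intervals]
        hi = [b for _, b in intervals]
        sum_lo, sum_hi = sum(lo), sum(hi)
        n = len(intervals)
        bel = [max(lo[k], 1 - (sum_hi - hi[k])) for k in range(n)]
        pl = [min(hi[k], 1 - (sum_lo - lo[k])) for k in range(n)]
        return list(zip(bel, pl))

    tighten([(1/6, 1/3), (1/3, 1/2), (1/3, 2/3)])   # Example 10's intervals
    # -> [(1/6, 1/3), (1/3, 1/2), (1/3, 1/2)]

Applied to Example 10, the loose upper bound 2/3 on Pr({c}) tightens to 1/2,
as anticipated there.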
Lemma 2.24 The system of Equations (2.66)–(2.67) has full row rank.
Proof: It is trivial when |U| = 1 or 2. Consider n = 3, so U = {u₁, u₂, u₃}.
The left-hand sides of Equations (2.66)–(2.67) are nonnegative numbers in
[0,1]. The right-hand sides can be expanded as:

m({u₁,u₂}) + m({u₁,u₃}) + m({u₂,u₃}) + m(U)
m({u₁,u₂}) + m({u₁,u₃})              + m(U)
m({u₁,u₂})              + m({u₂,u₃}) + m(U)
             m({u₁,u₃}) + m({u₂,u₃}) + m(U).   (2.68)

The terms m(B) in (2.68) are considered to be unknown variables. The matrix
A⁽³⁾ below represents the coefficient matrix of these unknown variables:

A⁽³⁾ = [1 1 1 1]
       [1 1 0 1]
       [1 0 1 1]
       [0 1 1 1].

It is not difficult to transform the matrix A⁽³⁾ to the 4×4 identity matrix, I₄. Thus,
the right-hand-side coefficient matrix of Equations (2.66)–(2.67), when n = 3,
has full row rank.

When n = 4, U = {u₁, u₂, u₃, u₄}. A part of the right-hand side of Equations
(2.66)–(2.67) is

m({u₁,u₂,u₃}) + m({u₁,u₂,u₄}) + m({u₁,u₃,u₄}) + m({u₂,u₃,u₄}) + m(U)
m({u₁,u₂,u₃}) + m({u₁,u₂,u₄}) + m({u₁,u₃,u₄})                 + m(U)
m({u₁,u₂,u₃}) + m({u₁,u₂,u₄})                 + m({u₂,u₃,u₄}) + m(U)
m({u₁,u₂,u₃})                 + m({u₁,u₃,u₄}) + m({u₂,u₃,u₄}) + m(U)
                m({u₁,u₂,u₄}) + m({u₁,u₃,u₄}) + m({u₂,u₃,u₄}) + m(U),

whose coefficient matrix can be written in block form as

A⁽⁴⁾ = [1 |         ]
       [1 |  A⁽³⁾   ]
       [1 |         ]
       [0 | 1 1 1 1 ].

Again, we can transform A⁽⁴⁾ to I₅ by using Gauss-Jordan elimination.
Hence, the right-hand-side coefficient matrix of Equations (2.66)–(2.67),
when n = 4, has full row rank.

In general, for |U| = k = 4, 5, ..., n − 1, we have U = {u₁, u₂, ..., u_k}.
A part of the right-hand side of Equations (2.66)–(2.67) can be written as the
following square matrix of dimension (k+1) × (k+1), which is associated with the k
unknown terms m(B), where |B| = k − 1, and m(U):

A⁽ᵏ⁾ = [1 |          ]
       [⋮ |  A⁽ᵏ⁻¹⁾  ]
       [1 |          ]
       [0 | 1 1 ⋯ 1 ],   ∀k = 3, 4, ..., n − 1.

By mathematical induction, suppose that A⁽ᵏ⁾ can be transformed to the identity
matrix, I_{k+1}, for each k = 3, 4, ..., n − 1. Thus, for k = n, the matrix A⁽ⁿ⁾ of
the same block form can be transformed to I_{n+1} by Gauss-Jordan elimination.
Hence, the right-hand-side coefficient matrix of Equations (2.66)–(2.67) has full
row rank when |U| = 3, 4, ..., n. □
Lemma 2.23 helps transform probability interval information into (not unique)
random set information. However, we may not get from this random set the two
probability density mass functions that give the largest bound on the ex-
pected value of u. To guarantee the smallest and the largest expected val-
ues of u given probability interval information, we solve the LP problems:

min / max E_f(u)   s.t.   f(u_i) ∈ [a̲_i, ā_i], ∑_{i=1}^n f(u_i) = 1, and f(u_i) ≥ 0, ∀i.
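For probability-interval information the box constraints can be passed directly
to an LP solver. A minimal sketch with SciPy (assumed available; the
realizations u = (1, 2, 3) are chosen only for illustration), using Example 10's
intervals:

    from scipy.optimize import linprog

    u = [1.0, 2.0, 3.0]
    box = [(1/6, 1/3), (1/3, 1/2), (1/3, 2/3)]   # [a_i, abar_i]
    A_eq, b_eq = [[1.0, 1.0, 1.0]], [1.0]        # sum f(u_i) = 1

    low = linprog(u, A_eq=A_eq, b_eq=b_eq, bounds=box)
    high = linprog([-c for c in u], A_eq=A_eq, b_eq=b_eq, bounds=box)
    print(low.fun, -high.fun)   # smallest and largest E(u): 2.0 and 7/3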
1=1
Let A = {Ax, A2,. , Ai} C V(U), for some i N, i < 2^ 1. Sup-
pose we know the values of probabilities on these sets in A, Pr(Ak) = pk,
k = 1,2, but the information is not enough to provide the values of
Pr({uk}),Vk. Then, we may not be able to find any random set to repre-
sent probability on sets information, because Mjx ^ M, in general, where
Ma = {Pr Pr(Ak) = Pk, V Ak A}, and M is the set of probability gen-
erated by a random set. Therefore, a probability on sets interpretation of an
uncertainty u is not a special case of a random set interpretation. It can be an
example of an IVPM by setting
i{A) =
/
[Pr(A),Pr(A)\;
[o,i];
V
Ae A,
A e V(U)\A.
(2.69)
To guarantee the smallest and the largest expected values of u given a probability
on sets information, we solve the LP problems:
min/max Ef(u) s.t. ^ f(v,i) = pk, V,4fe A, and ^/(iti) = 1.
UiZAfc i=l
Let Bel(A_k) = a_k and Pl(A_k) = β_k, ∀A_k ∈ 𝒜. Suppose we have partial
information in the form of beliefs (or plausibilities) of some subsets of U. Then, for
an unknown probability Pr that obeys this partial information, a_k = Bel(A_k) ≤
∑_{u_i∈A_k} Pr({u_i}), ∀A_k ∈ 𝒜 (or β_k = Pl(A_k) ≥ ∑_{u_i∈A_k} Pr({u_i}), ∀A_k ∈ 𝒜). We
can write beliefs of some subsets of U as the following IVPM:

i(A) = [Bel(A), 1] if A ∈ 𝒜; i(A) = [0, 1] if A ∈ 𝒫(U) \ 𝒜.   (2.70)

To guarantee the smallest and the largest expected values of u given beliefs
on some subsets of U, we solve the LP problems:

min / max E_f(u)   s.t.   ∑_{u_i∈A_k} f(u_i) ≥ a_k, ∀ A_k ∈ 𝒜, and ∑_{i=1}^n f(u_i) = 1.

A similar approach applies to partial information in the form of plausibilities of
some subsets of the set U.
2.3.1 Upper and lower expected values generated by IVPMs
We consider the case when the incomplete information is interpreted as
an IVPM, i.e., suppose that a finite set U of n realizations of u is defined by
U := {u₁, u₂, ..., u_n}, where u₁ < u₂ < ... < u_n, and we have the probability
information in a nonempty set 𝒫 as follows:

𝒫 = {(Pr₁, Pr₂, ..., Pr_n) | ∑_{k∈Λ_i} Pr_k = λ_i, i = 1, 2, ..., t;
∑_{k=1}^n Pr_k = 1; Pr_k ≥ 0, ∀k = 1, 2, ..., n}.   (2.71)

Λ_i in (2.71) is an index set which is a subset of {1, 2, ..., n}, where t is
the total number of index sets, i.e., t ∈ ℕ and t ≤ 2ⁿ − 2. The unknown
probability of {u_k} is Pr_k. The sum of these probabilities over Λ_i is λ_i ∈ [0,1].
Let Pr be any probability satisfying (2.71), and let E_Pr(u) denote the expected
value of u with respect to Pr. We show in Theorem 2.25 and Corollary 2.26
below that there are density mass functions Pr_* and Pr* such that the expected
values of u with respect to Pr_* and Pr* are the lower and the upper bounds on
all expected values of u with respect to any probability satisfying (2.71), i.e.,
E_{Pr_*}(u) ≤ E_Pr(u) ≤ E_{Pr*}(u).
Theorem 2.25 Suppose that the system (2.71) is nonempty. Then, there is a
fixed density mass Pr_* = (Pr_{1*}, Pr_{2*}, ..., Pr_{n*}), which is an optimal solution
for the problem P_U := min_{Pr∈𝒫} u₁Pr₁ + u₂Pr₂ + ... + u_nPr_n, for any uncertainty u
that has n realizations u₁, u₂, ..., u_n such that −∞ < u₁ ≤ u₂ ≤ ... ≤ u_n < ∞.

Proof: An optimal solution for the problem P_U exists because the set 𝒫 is a
closed and bounded nonempty polyhedral set. We construct a probability Pr_*
using a greedy algorithm as follows.
step 1. Choose all Pr's in 𝒫 that have the highest possible value of Pr₁. Define
the set of all these Pr's as 𝒫₁.
step 2. From 𝒫₁, choose all Pr's that have the highest possible value of Pr₂.
Define the set of all these Pr's as 𝒫₂.
step 3. From 𝒫₂, choose all Pr's that have the highest possible value of Pr₃.
Define the set of all these Pr's as 𝒫₃.
...
step i. From 𝒫_{i−1}, choose all Pr's that have the highest possible value of Pr_i.
Define the set of all these Pr's as 𝒫_i.
...
step n. We get one value of Pr_n by following these steps.
Hence, we obtain one probability measure from this construction; let us define
this probability as Pr_*. We suppose that this Pr_* is not an optimal solution for
P_U (and try to find a contradiction). Therefore, each component i is changed
from Pr_{i*} to Pr_{i*} − δ_i + Δ_i, where δ_i, Δ_i ≥ 0, ∀i. Suppose that this new changed
probability is optimal for P_U. Let Z be the minimal objective value of P_U. This
requires the analysis of n cases.

Case 1. Suppose that Pr_{1*} changes to Pr_{1*} − δ₁ + Δ₁.
We know that some other components i, i ≠ 1, also need to be changed, so that
this changed probability still satisfies the conditions in 𝒫. The 1st component
can only be reduced, i.e., Pr_{1*} − δ₁, for some δ₁ > 0, since Pr_{1*} is the highest
value by the construction above. Thus,

Z = u₁(Pr_{1*} − δ₁) + u₂(Pr_{2*} − δ₂ + Δ₂) + u₃(Pr_{3*} − δ₃ + Δ₃) + ... +
u_n(Pr_{n*} − δ_n + Δ_n).

As we know that Pr_{1*} − δ₁ can be increased, let us increase it by λ₁, where
0 < λ₁ ≤ δ₁, and, to be able to satisfy 𝒫, Pr_{i*} − δ_i + Δ_i may be reduced by
λ_i ≥ 0, such that ∑_{i=2}^n λ_i = λ₁. Hence,

Z₁ = u₁(Pr_{1*} − δ₁ + λ₁) + u₂(Pr_{2*} − δ₂ + Δ₂ − λ₂) + u₃(Pr_{3*} − δ₃ + Δ₃ − λ₃)
+ ... + u_n(Pr_{n*} − δ_n + Δ_n − λ_n)
= u₁(Pr_{1*} − δ₁) + u₂(Pr_{2*} − δ₂ + Δ₂) + u₃(Pr_{3*} − δ₃ + Δ₃) + ... +
u_n(Pr_{n*} − δ_n + Δ_n) + (u₁ − u₂)λ₂ + (u₁ − u₃)λ₃ + ... + (u₁ − u_n)λ_n
= Z + (u₁ − u₂)λ₂ + (u₁ − u₃)λ₃ + ... + (u₁ − u_n)λ_n.

Now, consider Θ = (u₁ − u₂)λ₂ + (u₁ − u₃)λ₃ + ... + (u₁ − u_n)λ_n. If u₁ < u₂ < ... <
u_n, then Θ < 0. Hence, Z₁ < Z. If u_i = u_{i+1} = u_{i+2} = ... = u_{i+j} = u, for some
i > 1 and i + j ≤ n, then Θ = (u₁ − u₂)λ₂ + (u₁ − u₃)λ₃ + ... + (u₁ − u_{i−1})λ_{i−1} +
(u₁ − u)(λ_i + λ_{i+1} + ... + λ_{i+j}) + (u₁ − u_{i+j+1})λ_{i+j+1} + ... + (u₁ − u_n)λ_n ≤ 0.
Hence, Z₁ ≤ Z.
Case 2. Suppose that Pr_{2*} changes to Pr_{2*} − δ₂ + Δ₂, where Δ₂ − δ₂ ≠ 0.
We know that some other components i, i ≠ 2, also need to be changed, so that
this changed probability still satisfies the conditions in 𝒫. The 1st component
can only be reduced, i.e., Pr_{1*} − δ₁, for some δ₁ > 0, since Pr_{1*} is the highest
value by the construction above. If Pr_{1*} is reduced, then it becomes the previous
case. If there are no changes in Pr_{1*}, then this changed probability is in 𝒫₁.
Pr_{2*} can only be decreased, otherwise this changed probability is not in 𝒫₁.
So, Pr_{2*} changes to Pr_{2*} − δ₂, for some δ₂ > 0. Thus,

Z = u₁Pr_{1*} + u₂(Pr_{2*} − δ₂) + u₃(Pr_{3*} − δ₃ + Δ₃) + ... + u_n(Pr_{n*} − δ_n + Δ_n).

By a similar pattern as in Case 1, we know that Pr_{2*} − δ₂ can be increased.
Let us increase it by λ₂, where 0 < λ₂ ≤ δ₂, and, to be able to satisfy 𝒫₁,
Pr_{i*} − δ_i + Δ_i may be reduced by λ_i ≥ 0, such that ∑_{i=3}^n λ_i = λ₂. Hence,

Z₂ = u₁(Pr_{1*}) + u₂(Pr_{2*} − δ₂ + λ₂) + u₃(Pr_{3*} − δ₃ + Δ₃ − λ₃)
+ ... + u_n(Pr_{n*} − δ_n + Δ_n − λ_n)
= u₁(Pr_{1*}) + u₂(Pr_{2*} − δ₂) + u₃(Pr_{3*} − δ₃ + Δ₃) + ... +
u_n(Pr_{n*} − δ_n + Δ_n) + (u₂ − u₃)λ₃ + (u₂ − u₄)λ₄ + ... + (u₂ − u_n)λ_n
= Z + (u₂ − u₃)λ₃ + (u₂ − u₄)λ₄ + ... + (u₂ − u_n)λ_n.

Hence, Z₂ ≤ Z. We apply a similar argument to the rest of the cases. Then,
the changed probability is not optimal, a contradiction. Thus, Pr_* is an optimal
solution for P_U. □
Corollary 2.26 Suppose that the system (2.71) is nonempty. Then, there is
a fixed density mass Pr* = (Pr₁*, Pr₂*, ..., Pr_n*) which is an optimal solution for the
problem P_V := max_{Pr∈𝒫} v₁Pr₁ + v₂Pr₂ + ... + v_nPr_n, for any uncertainty v that
has n realizations v₁, v₂, ..., v_n such that −∞ < v₁ ≤ v₂ ≤ ... ≤ v_n < ∞.

Proof: Apply Theorem 2.25 with u₁ = −v_n, u₂ = −v_{n−1}, ..., and u_n =
−v₁. □
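The greedy construction in the proof of Theorem 2.25 can be realized as a
sequence of LPs: at step i, maximize Pr_i over (2.71) with the components fixed
at the earlier steps. A minimal sketch (ours; SciPy assumed, index sets 0-based):

    from scipy.optimize import linprog

    def greedy_density(n, index_sets, lambdas):
        """Lexicographically maximize (Pr_1, ..., Pr_n) over the set (2.71)."""
        A_eq = [[1.0] * n] + [[1.0 if k in S else 0.0 for k in range(n)]
                              for S in index_sets]
        b_eq = [1.0] + list(lambdas)
        pr = []
        for i in range(n):
            c = [0.0] * n
            c[i] = -1.0                # maximize Pr_i
            bounds = [(pr[j], pr[j]) for j in range(i)] + [(0, 1)] * (n - i)
            pr.append(linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).x[i])
        return pr

    # Hypothetical information Pr_1 + Pr_2 = 0.5 on u_1 < u_2 < u_3:
    greedy_density(3, [{0, 1}], [0.5])   # -> [0.5, 0.0, 0.5]

The resulting vector is Pr_* of Theorem 2.25; reversing the realization order (or
negating u, as in Corollary 2.26) yields Pr*.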
Next, the concept of clouds is presented. We point out some differences
between clouds and IVPMs and give an example to show that a cloud can be
represented as an IVPM.
2.4 Clouds
The concept of a cloud, which tries to combine probability and possibility
into one conceptual basis, was introduced by Neumaier in [52]. The general
definition of a cloud is given below.

Definition 2.27 (see Neumaier [52]) Suppose all realizations of an uncertain
parameter are in a set U. A cloud over the set U is an interval mapping c : U →
Int_{[0,1]} such that
1. ∀u ∈ U: c(u) = [c̲(u), c̄(u)] with c̲(u), c̄(u) ∈ ℝ, 0 ≤ c̲(u) ≤ c̄(u) ≤ 1, and
2. (0,1) ⊆ ⋃_{u∈U} c(u) ⊆ [0,1].

In addition, a random variable Y taking values in U is said to belong to the cloud
c (written as Y ∈ c) if and only if

∀α ∈ [0,1]: Pr(c̲(Y) > α) ≤ 1 − α ≤ Pr(c̄(Y) ≥ α).   (2.72)
There is at least one random variable belonging to any given cloud. The
proof of this statement is given in [51]. This random variable is proved to exist,
not assumed to exist as in the definition of an IVPM. We can restate the property
(2.72) by saying that
the probability that a random variable Y ∈ c belongs to the upper α-cut
Ā_α := {u ∈ U | c̄(u) ≥ α} is at least 1 − α, and
the probability that a random variable Y ∈ c belongs to the lower α-cut
A̲_α := {u ∈ U | c̲(u) > α} is at most 1 − α.
Figure 2.5 shows a cloud over ℝ with an α-cut.
Definition 2.28 (see Jamison [24]) Let pos : U → [0,1] be a regular possibil-
ity distribution function, i.e., sup_{u∈U} pos(u) = 1, with associated possibility
measure Pos and necessity measure Nec (which need not be the dual of Pos). Then
pos is said to be consistent with a random variable Y if for every measurable
set A, Nec(A) ≤ Pr(Y ∈ A) ≤ Pos(A).

A pair of necessity and possibility measures as in Definition 2.28 is an example
of an IVPM. The reference [36] shows that every cloud c can be defined as
c(u) = [n(u), p(u)], where p and p̄ are regular possibility distribution functions
on U such that ∀u ∈ U: p(u) + p̄(u) ≥ 1 and n(u) = 1 − p̄(u), and their associated
possibility measures are consistent with every random variable belonging to c.
More specifically, Destercke et al. [11] prove that a cloud c is representable by
the pair of possibility distributions 1 − c̲ and c̄. We distinguish some differences
between the definitions of an IVPM and a cloud in Table 2.3.

Table 2.3: The differences between IVPMs and clouds

                  IVPMs                               Clouds
Domain            σ_U                                 U
Range             i(A) = [i⁻(A), i⁺(A)] ⊆ [0,1],      c(u) = [c̲(u), c̄(u)], ∀u ∈ U, and
                  ∀A ∈ σ_U                            (0,1) ⊆ ⋃_{u∈U} c(u) ⊆ [0,1]
Random variable   assumed as part of the definition   proved to exist in any cloud
Example           [Nec(A), Pos(A)]                    [nec(u), pos(u)], when pos is a
                                                      regular possibility distribution
Example 11. Suppose U = {1, 2, 3, 4, 5, 6}. A cloud over the set U is given in
Figure 2.6.

Figure 2.6: A cloud represents an IVPM; the upper and lower distributions c̄ and
c̲ take levels among 0.1, 0.5, 0.8, and 1 over U.

We can see that
α = 0 ⇒ Pr(U) ≤ 1 − 0 ≤ Pr(U),
α ∈ (0, 0.1) ⇒ Pr({2,3,4,5}) ≤ 1 − α ≤ Pr(U),
α = 0.1 ⇒ Pr({2,3,4,5}) ≤ 1 − 0.1 ≤ Pr({2,3,4,5}),
α ∈ (0.1, 0.5) ⇒ Pr({3}) ≤ 1 − α ≤ Pr({2,3,4,5}),
α = 0.5 ⇒ Pr({3}) ≤ 1 − 0.5 ≤ Pr({3,4}),
α ∈ (0.5, 0.8] ⇒ Pr({3}) ≤ 1 − α ≤ Pr({3,4}),
α ∈ (0.8, 1) ⇒ 1 − α ≤ Pr({3,4}).

Hence, Pr(U) = 1, Pr({2,3,4,5}) = 0.9, Pr({1,6}) = 0.1, Pr({3}) ≤ 0.2,
and Pr({3,4}) ≥ 0.2. This information can be considered as an IVPM, because
we can set i({3}) = [0, 0.2], i({3,4}) = [0.2, 1], and i(A) = [0, 1] whenever any
nonempty subset A of U is not one of the sets U, {2,3,4,5}, {1,6}, {3,4}, or
{3}. ◊
Any cloud can be represented as an IVPM, in general. Hence, we conclude
that clouds are special examples of IVPMs. Walley [73] introduced his imprecise
probability theories to characterize a set of gambles of interest, where a gamble
is a bounded real-valued function defined on a set U. However, his approach is
too general for the uncertainties of concern in this dissertation. We will not present
the details of Walley's imprecise probability in this thesis; interested readers
can find out more in [73]. A discussion of the relationships among these concepts, in the
sense of "theory A is more general than theory B", is well explained in [10] and
[11]. We are interested in the relationships of the different interpretations
of an uncertainty u. These interpretations are
1. possibility measure
(a) possibility distribution
(a) possibility distribution
(b) possibility on some subsets of U
2. necessity measure
(a) necessity distribution
(b) necessity on some subsets of U
3. plausibility measure
(a) plausibility distribution
(b) plausibility on some subsets of U
4. belief measure
(a) belief distribution
(b) belief on some subsets of U
5. random set
6. probability interval
7. probability on sets
8. cloud
9. IVPM.
We wish to use an appropriate approach to obtain two special probability density
mass functions that provide the smallest and the largest expected values of u.
2.5 Relationships of some selected uncertainty interpretations
Some results in this chapter are generalized to handle the continu-
ous case of U. However, for simplicity, we reduce the scope of this dissertation to
the finite case of U. Since an interval interpretation of uncertainty falls into the
continuous case of U, we will not consider interval uncertainty in this research;
the continuous case is left for further research.
We summarize the review of the interpretations of an uncertainty from Sec-
tions 2.1–2.4 as follows.
We can derive a unique random set from a possibility, necessity, plausibility,
or belief measure by using the formula (2.10).
We can derive a unique possibility, necessity, plausibility, or belief measure
by using the formula (2.7) or (2.8).
We can generate the random set that provides the largest set M_F from
given partial information of a random set, as explained in Subsection
2.2.2.
We can generate the random set that provides the largest set M_F from
a given possibility, necessity, plausibility, or belief distribution, as
explained in Subsections 2.2.3–2.2.6.
There is a random set generated by partial information in the form of
possibilities or necessities of some subsets of U. This random set also provides
the largest set M_F.
We can construct a random set from a given probability interval.
Random set, probability interval, probability on sets, and cloud are exam-
ples of IVPMs.
We consider partial information in the form of beliefs or plausibilities of
some subsets of U as IVPM information.
Probability is just a random set in which all focal elements are singletons.
We now provide a full relationship diagram of all the different uncertainty interpre-
tations related to this thesis in Figure 2.7, which enhances the basic diagram of
Figure 1.1 in Chapter 1.
We will never know with certainty the probability of a parameter when the
information we receive is one of the uncertainty interpretations mentioned in
this chapter. However, we can at least find two probability density/mass
functions from M that provide the smallest and the largest expected values.
There are two approaches to obtain these two probability density/mass functions.
The first approach is via the constructions (2.42) and (2.43), when an uncertainty
interpretation can be viewed as a random set. The second approach is to set
up two associated LP problems that solve for the minimum and the maximum expected
values of u, when an uncertainty interpretation is considered to be an IVPM.
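To make the second approach concrete, here is a minimal sketch (ours, in Python
with scipy; it is not part of the dissertation) that bounds the expected value of u
for the IVPM example given earlier in this chapter. The realization values are a
hypothetical choice: we assume U = {1, ..., 6} and that each outcome's numerical
value equals its label.

import numpy as np
from scipy.optimize import linprog

values = np.arange(1, 7, dtype=float)   # hypothetical realization values of u
n = len(values)

def indicator(subset):
    # 0/1 row selecting the outcomes in `subset` (labels 1..6)
    row = np.zeros(n)
    for k in subset:
        row[k - 1] = 1.0
    return row

# Known probabilities: Pr(U) = 1, Pr({2,3,4,5}) = 0.9, Pr({1,6}) = 0.1.
A_eq = np.array([indicator({1, 2, 3, 4, 5, 6}),
                 indicator({2, 3, 4, 5}),
                 indicator({1, 6})])
b_eq = np.array([1.0, 0.9, 0.1])

# Interval information from the IVPM: Pr({3}) <= 0.2 and Pr({3,4}) >= 0.2.
A_ub = np.array([indicator({3}),        #  Pr({3})   <= 0.2
                 -indicator({3, 4})])   # -Pr({3,4}) <= -0.2
b_ub = np.array([0.2, -0.2])

bounds = [(0, 1)] * n
lo = linprog(values, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
hi = linprog(-values, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("smallest E[u]:", lo.fun)         # the attaining pmf is lo.x
print("largest  E[u]:", -hi.fun)        # the attaining pmf is hi.x

Any other uncertainty interpretation that can be written as an IVPM can be bounded
the same way: one row per set with a known or interval-valued probability.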
Our contributions in this chapter are summarized below.
1. Remarks 2.9 and 2.10 provide the insight that Bel(A) and Pl(A), for each
A ⊆ U, depend on the set Θ. Moreover, if we receive more informa-
tion under which the old information becomes more specific, then
Bel_old(A) ≤ Bel_updated(A) ≤ Pl_updated(A) ≤ Pl_old(A).
Figure 2.7: Uncertainty interpretations: A ⟶ B : uncertainty interpretation B
contains the information given by uncertainty interpretation A; A ⊣ B : A is a
special case of B; A ⟷ B : A and B can be derived from each other; and
A ⇒ B : B generalizes A.
2. We provide a stronger statement to secure the meanings of possibility and
necessity, i.e., Nec(A ∩ B) = min {Nec(A), Nec(B)} and Pos(A ∪ B) =
max {Pos(A), Pos(B)} if and only if the focal elements are nested. The
proof of this statement is in Appendix B.
3. Based on a given uncertainty u with a random set interpretation, we prove
in Theorem 2.15 that the lower and upper functions of M are belief and
plausibility functions.
4. Theorems 2.17 and 2.18 guarantee that the probability density functions f̲ and
f̄ provide the lowest and the highest expected values for a given uncertainty
u with a random set interpretation. For a finite set of realizations of u, Table
2.2 illustrates the number of Bel and Pl terms needed to derive f̲ and f̄,
which is much less than the number of Bel and Pl terms needed to state
LP problems for finding the lowest and the highest expected values. (A
sketch of the allocation rule behind f̲ and f̄ is given after this list.)
5. Subsections 2.2.2-2.2.6 emphasize that if we have only partial information
about a belief (or other) measure or a random set, then we can still find the
random set generated by this partial information that provides the largest
set M_F.
6. We can find a (not necessarily unique) random set that contains the information
given by a probability interval; see Lemma 2.23.
7. Theorem 2.25 and Corollary 2.26 are used for finding f̲ and f̄, when u has
an IVPM interpretation.
8. We provide the relationships of the PC-BRIN interpretations of uncertainty
in Figure 2.7.
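The sketch promised in item 4 follows. It is our illustration (not the
dissertation's code) of the allocation rule that constructions such as (2.42) and
(2.43) implement for a finite random set: sending each focal element's mass to its
smallest-value (respectively, largest-value) realization yields probability mass
functions attaining the lowest (respectively, highest) expected value. The focal
elements and realization values below are hypothetical.

def extreme_pmfs(focal_masses, value):
    # focal_masses: dict mapping frozenset of outcomes -> mass m(A)
    # value: dict mapping outcome -> numerical realization of u
    f_low, f_high = {}, {}
    for A, m in focal_masses.items():
        lo = min(A, key=lambda w: value[w])    # worst-case outcome in A
        hi = max(A, key=lambda w: value[w])    # best-case outcome in A
        f_low[lo] = f_low.get(lo, 0.0) + m
        f_high[hi] = f_high.get(hi, 0.0) + m
    return f_low, f_high

# Hypothetical random set on U = {1,...,6}, with values equal to outcome labels:
m = {frozenset({3}): 0.2, frozenset({3, 4}): 0.3, frozenset({2, 3, 4, 5}): 0.5}
vals = {w: float(w) for w in range(1, 7)}
f_low, f_high = extreme_pmfs(m, vals)
expect = lambda f: sum(p * vals[w] for w, p in f.items())
print(expect(f_low), expect(f_high))   # lowest and highest expected values of u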
Chapter 4 explains how to solve an LP problem with uncertainty to get
a pessimistic and an optimistic result. It involves using these two probability
density/mass functions and two expected recourse models. Moreover, a minimax
regret approach for an LP problem with uncertainty is presented to provide a
minimax regret solution when the true probability of an uncertainty is unknown
but its bounds are known. The next chapter is devoted to a literature review of
linear programming problems with uncertainty. Uncertainty presented in the
review is limited to probability measure, possibility distribution, and interval
(which is not developed in this thesis).
3. Linear optimization under uncertainty: literature review
We provide a review of the literature dealing with modeling LP problems with
uncertainty. If the objective is to find a solution for an LP problem with un-
certainty that minimizes the maximum regret of using this solution due to
the uncertainty, we may apply a minimax regret model to the problem and
ignore any interpretations of uncertainty. However, if we are interested in pre-
dicting the average of the objective values in the long run, then we must consider
uncertainty interpretations. Toward this end, we categorize uncertainty in LP
problems depending on the information available about the uncertainties. They
fall into the following cases.
• Uncertainty interpretations in the problem are probability. The modeling
approach is an expected recourse model.
• Uncertainty interpretations in the problem are possibility. The modeling
approach is an expected average model.
• Uncertainty interpretations in the problem can be both probability and
possibility, but they are not in the same constraint. The modeling ap-
proach is an expected recourse-average model.
• Uncertainty interpretations in the problem can be both probability and
possibility in the same constraint. The modeling approach is an interval
expected value model.
To date, there is no modeling concept that captures LP problems with
uncertainty interpretations other than the ones listed above. We try to overcome
this limitation by using a new approach (based on an expected recourse
model), which uses the knowledge from Chapter 2 that each PC-BRIN
uncertainty can be represented as the set of all probability measures consistent
with its information. The details of the new approach are provided in Chapter
4.
In this chapter, the modeling concepts for the LP problems with uncertainty
categorized above are presented. We begin with an illustration of a deterministic
model of a production planning example. Then, after we explain the modeling
concept for each of the LP problems with uncertainty categorized above, we
provide an example for each concept by modifying this production planning
example, which leads us to different types of models depending on the inter-
pretations of the uncertainties. The solutions of these models are computed in
GAMS, which is a programming language for solving general optimization problems.
For simplicity, we assume that the information has already been classified
into probability and possibility interpretations of uncertainty. The differences
appear in the modeling and semantics of each example.
3.1 Deterministic model
A small production scheduling example is presented for ease and clarity of
presentation. From two raw materials x1 and x2 (for example, supplies of two
grades of mineral oil), a refinery produces two different goods, cream #1 and
cream #2. Table 3.1 shows the output of products per unit of the raw materials,
the unit costs of raw materials (yielding the production cost z), the demands
for the products, and the maximal total amount of raw materials (production
capacity) that can be processed. We assume that the mineral oil is mixed with
other fluids (e.g., water) at no production cost to manufacture. Hence, these
ingredients are not in the model.
Table 3.1: Productivity information: the output of products per unit of
raw materials, the unit costs of raw materials, the demands for the products, and
the limit on the total amount of raw materials.

Mineral oil    cream #1       cream #2       Costs        Amount processed
(fl.oz.)       (oz./fl.oz.)   (oz./fl.oz.)   ($/fl.oz.)   (fl.oz.)
x1             2              3              2            1
x2             6.1667         3              3            1
relation       ≥              ≥              =            ≤
demand/limit   174.83         161.75         z            100
A manager wants to know how many units of raw materials to order to
satisfy the demands and minimize the total production cost. Therefore, we
have the linear programming problem (3.1) with the unique optimal solution
(x1*, x2*) = (37.84, 16.08) and z* = $123.92. Figure 3.1 illustrates the feasible
region of the system (3.1) and its unique optimal solution.

min  z := 2x1 + 3x2
s.t. 2x1 + 6.1667x2 ≥ 174.83,
     3x1 + 3x2 ≥ 161.75,
     x1 + x2 ≤ 100,
     x1, x2 ≥ 0.        (3.1)
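As a quick check of these numbers, the following minimal sketch (ours, in Python
with scipy; the thesis itself solves its models in GAMS) solves (3.1). Since
linprog minimizes c^T x subject to A_ub x ≤ b_ub, the two ≥ constraints are
negated.

from scipy.optimize import linprog

c = [2, 3]
A_ub = [[-2, -6.1667],   # 2 x1 + 6.1667 x2 >= 174.83
        [-3, -3],        # 3 x1 + 3 x2      >= 161.75
        [1, 1]]          # x1 + x2          <= 100
b_ub = [-174.83, -161.75, 100]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)    # approximately [37.84, 16.08] and 123.92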
Figure 3.1: Feasible region and the unique optimal solution of the production
scheduling problem (3.1).

The values shown in Table 3.1 are fixed for the deterministic problem (3.1).
However, this is not always a realistic assumption. For instance, if the produc-
tivities and demands can vary within certain limits or even have different types
of uncertainty, and/or we need to formulate a production plan before knowing
the exact values of these data, then this deterministic model is not adequate.
We have to reformulate the system (3.1) so that it can produce a reasonable re-
sult. In Sections 3.2-3.6, we present different types of models, depending on the
interpretations of uncertainty, to place the main thrust of this thesis (pessimistic,
optimistic, and minimax regret solutions of expected recourse problems) in the
context of other approaches to optimization under uncertainty.
3.2 Stochastic models
The aim of stochastic programming is to find an optimal decision in problems
involving uncertain data with a probability interpretation. The terminology stochas-
tic, as opposed to deterministic, means that some data are random. There
are two well-known stochastic programming models. One is a stochastic program
with expected recourse, which transforms the randomness contained in a stochas-
tic program into an expectation of the distribution of some random vector. The
other is a stochastic program with chance constraints, in which the constraints
are required to hold with some probability or reliability level. One of these
models is chosen for a particular problem under appropriate assumptions. How-
ever, we will not present a stochastic program with chance constraints in this
dissertation. The details on this topic can be found in many books, e.g., [5, 26].
A general formulation for a stochastic linear program is the same as the
model (1.2), when A and b have probability interpretations. We restate this
problem here:

min_x  c^T x   s.t.  Ax ≥ b,  Dx ≥ d,  x ≥ 0,        (3.2)

where Ax ≥ b contains m uncertain inequality constraints. This problem is
not well defined because of the uncertainties. To be more specific, let us assume
the following in addition to what we had in the problem (3.1).
A1 The raw materials for the weekly production process rely on the supply of
two grades of mineral oil, denoted by x1 and x2, respectively.
A2 The refinery uses two grades of mineral oil (and other ingredients at no
cost) to produce cream #1 and cream #2.
A3 The weekly demands for cream #1 and cream #2 vary randomly and are
mutually independent (for simplicity).
A4 The production of cream #1 (the output of the product per unit of the raw
materials) varies randomly.
A5 The production per unit of cream #2 is fixed.
Given the above assumptions, the problem becomes the stochastic linear program
(3.3), where a11 and a12 are the production of cream #1 per unit of mineral oils x1
and x2, and b1 and b2 are random demands with known probability distributions
for cream #1 and cream #2, respectively. It is not clear how to solve
the minimization problem (3.3), since it is not a well-defined problem before
knowing the realizations of (a11, a12, b1, b2).

min  2x1 + 3x2
s.t. a11 x1 + a12 x2 ≥ b1,   3x1 + 3x2 ≥ b2,
     x1 + x2 ≤ 100,   x1, x2 ≥ 0.        (3.3)

We have to formulate a production plan under uncertainty, since we only have
the probability distributions of the random data.
3.2.1 Stochastic program with expected recourse
The terminology recourse refers to a second (or further) action that helps
improve a situation whenever the first action (taken before knowing the real-
ization of uncertainty) does not satisfy the requirements. An example of such a
situation and requirements in the model is when the manufacturer is required to
satisfy the customers' demands, which are uncertain, while trying to minimize
the production cost. The manager has to make a decision on the amounts of raw
materials (the first action) without knowing the actual demands. Later on, if
these amounts of raw materials do not satisfy the actual demands, the manager
needs to buy any shortage amount (the recourse variables, or second action) from
an outside market with some price attached.
An expected recourse model minimizes the cost of the first action and
the expected cost of the second action. Define the random vector ξ =
(a11, a12, ..., amn, b1, b2, ..., bm)^T, and the penalty price vector for the m constraints
as s = (s1, s2, ..., sm)^T ≥ 0. Let P = {x | Dx ≥ d, x ≥ 0}. An expected re-
course model has the general form

min_{x ∈ P}  c^T x + E_ξ Q(x, ξ),        (3.4)

where E_ξ Q(x, ξ) is the expected value of Q(x, ξ) with respect to the random vec-
tor ξ, and Q(x, ξ) = s^T (max{(b − Ax), 0}). More precisely, suppose there are
α_ij and β_i finite realizations of each uncertainty a_ij in the matrix A and b_i in the
vector b, respectively. Thus, there are ∏_{i,j} α_ij · ∏_i β_i = N scenarios of ξ, which
are denoted as ξ^k, for all k = 1, 2, ..., N. Each scenario has probability of occurrence
pr_k. Hence, Q(x, ξ) = (Q(x, ξ^1), Q(x, ξ^2), ..., Q(x, ξ^N)), where A^k and b^k are the
matrix and vector of the k-th scenario, and Q(x, ξ^k) = s^T (max[(b^k − A^k x), 0]).
Let y(ξ) = max{(b − Ax), 0}. The variable y(ξ) is a recourse variable vector of
shortages. Model (3.4) can be rewritten as

min_{x ∈ P}  c^T x + E_ξ (s^T y(ξ))   s.t.  y(ξ) ≥ b − Ax,  y(ξ) ≥ 0.        (3.5)
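Because y(ξ) may take a different value y^k in each scenario ξ^k, model (3.5) is
equivalent to one large LP, the standard deterministic equivalent (stated here as a
sketch in the notation above; it is not a formula quoted from this chapter):

\min_{x \in P,\; y^1, \dots, y^N \ge 0} \; c^T x + \sum_{k=1}^{N} pr_k \, s^T y^k
\quad \text{s.t.} \quad A^k x + y^k \ge b^k, \qquad k = 1, \dots, N.

This is the form actually handed to an LP solver: one copy of the recourse
variables per scenario, weighted by the scenario probabilities.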
We illustrate an example of an expected recourse problem by supposing that
the assumptions A1-A5 apply to this subsection, and we assume further as follows.
A6 Fixed amounts x1 and x2 of the two grades of mineral oil must be chosen in
advance of knowing the actual demands and cannot change during the week.
A7 The customers expect their actual demands for cream #1 and cream #2 to
be satisfied.
A8 Fixed penalty prices will be incurred if the customers' demands cannot be
covered by the production, since the amount of any shortage has to be bought
from a market.
We assume here that the penalty prices per unit of shortage are s1 = $7 and
s2 = $12 for cream #1 and cream #2, respectively. For simplicity, we suppose
further that there is no storage cost or selling cost for overproduction.
Assume that the realizations of the random productivity of cream #1 and
the random demands for both creams are finite with the following probabilities:
a11 = 1 with Pr({1}) = 1/4,        a12 = 5 with Pr({5}) = 1/6,
      2 with Pr({2}) = 1/2,              6 with Pr({6}) = 1/2,
      3 with Pr({3}) = 1/4,              7 with Pr({7}) = 1/3,

b1 = 149 with Pr({149}) = 5/12,    b2 = 138 with Pr({138}) = 1/4,
     180 with Pr({180}) = 1/3,          162 with Pr({162}) = 1/2,
     211 with Pr({211}) = 1/4,          185 with Pr({185}) = 1/4.
Define the random vector ξ = (a11, a12, b1, b2)^T with scenario set Ξ = {ξ^1 =
(1, 5, 149, 138)^T, ξ^2 = (1, 5, 149, 162)^T, ..., ξ^81 = (3, 7, 211, 185)^T}, and
introduce recourse variables y1(ξ) and y2(ξ) measuring the shortages of cream #1
and cream #2, respectively. The y_i(ξ), i = 1, 2, are themselves random variables,
since a shortage depends on the realization of the random vector ξ. We also
refer to the expected costs due to any shortage of production (or, in general,
due to the amount of violation in the constraints) as the expected recourse costs.
We can replace the stochastic problem (3.3) by the well-defined stochastic
program with expected recourse (3.7), using E_ξ as the expected value with respect
to the distribution of ξ:

min  2x1 + 3x2 + E_ξ Q(x1, x2, ξ)
s.t. x1 + x2 ≤ 100,
     x1, x2 ≥ 0,        (3.7)
where for each j = 1, 2, ..., 81,

Q(x1, x2, ξ^j) := 7 max{b1^j − a11^j x1 − a12^j x2, 0} + 12 max{b2^j − 3x1 − 3x2, 0}
               =  min  7y1 + 12y2
                  s.t. a11^j x1 + a12^j x2 + y1(ξ^j) ≥ b1^j,
                       3x1 + 3x2 + y2(ξ^j) ≥ b2^j,
                       y1, y2 ≥ 0.
The mathematical program (3.7) is a linear programming problem, which yields
an optimal solution of x1* = 31.80 and x2* = 29.87, with corresponding
production (first-stage) costs of $153.21 and a sum of the first-stage costs
and the expected recourse costs of $155.38. This optimal solution violates the
constraint x1 + 5x2 ≥ 211, which results in the recourse variable y1(ξ^j) = 29.85
for the scenarios with (a11, a12, b1) = (1, 5, 211). However, this constraint occurs
with the very small probability (1/4)(1/6)(1/4) = 1/96. Hence, we can
interpret this optimal solution as follows: in the long run, the manufacturer will
spend $153.21 per week on the production plan, plus an average of $2.17 in
recourse (shortage) costs, by using 31.80
units of mineral oil grade 1 and 29.87 units of mineral oil grade 2.
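As a cross-check of these figures, the following sketch (ours, in Python with
scipy; the thesis solves its models in GAMS) builds the deterministic equivalent
of (3.7) over all 81 scenarios, with variables x1, x2 followed by shortage
variables y1^k, y2^k for each scenario.

import itertools
import numpy as np
from scipy.optimize import linprog

a11 = [(1, 1/4), (2, 1/2), (3, 1/4)]          # productivity of cream #1 from x1
a12 = [(5, 1/6), (6, 1/2), (7, 1/3)]          # productivity of cream #1 from x2
b1 = [(149, 5/12), (180, 1/3), (211, 1/4)]    # demand for cream #1
b2 = [(138, 1/4), (162, 1/2), (185, 1/4)]     # demand for cream #2

scenarios = list(itertools.product(a11, a12, b1, b2))
N = len(scenarios)                            # 81
nvar = 2 + 2 * N                              # x1, x2, then y1^k, y2^k

c = np.zeros(nvar)
c[:2] = [2, 3]                                # first-stage costs
A_ub, b_ub = [], []
for k, ((v11, p11), (v12, p12), (d1, q1), (d2, q2)) in enumerate(scenarios):
    pr = p11 * p12 * q1 * q2                  # scenario probability
    c[2 + 2 * k] = 7 * pr                     # expected penalty, cream #1
    c[3 + 2 * k] = 12 * pr                    # expected penalty, cream #2
    # a11 x1 + a12 x2 + y1 >= d1, written as -(...) <= -d1; same for cream #2
    r1 = np.zeros(nvar); r1[0], r1[1], r1[2 + 2 * k] = -v11, -v12, -1.0
    r2 = np.zeros(nvar); r2[0], r2[1], r2[3 + 2 * k] = -3.0, -3.0, -1.0
    A_ub += [r1, r2]; b_ub += [-d1, -d2]
cap = np.zeros(nvar); cap[0] = cap[1] = 1.0   # x1 + x2 <= 100
A_ub.append(cap); b_ub.append(100.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * nvar)
print(res.x[:2], res.fun)   # should be near (31.80, 29.87) and 155.38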
Suppose that instead of using the expected recourse model, we apply the mean
values of the demands and of the productivity of cream #1. The problem be-
comes the deterministic model (3.1), and the supply solutions are x1 = 37.84
and x2 = 16.08. Then suppose further that it turns out later that the customer
needs 200 units of cream #1 and demands 155 units of cream #2. Moreover,
the realizations of the productivity for cream #1 are a11 = 2 and a12 = 5.
We, as the adviser to the manager of this production plan, would need to deal
with the excess of cream #2 and the shortage of cream #1 in some way. Un-