Citation
Application of extreme value theory and threshold models to hydrological events

Material Information

Title:
Application of extreme value theory and threshold models to hydrological events
Creator:
Pocernich, Matthew J
Publication Date:
2002
Language:
English
Physical Description:
71 leaves ; 28 cm

Subjects

Subjects / Keywords:
Extreme value theory ( lcsh )
Hydrology -- Mathematical models ( lcsh )
Floods -- Mathematical models ( lcsh )
Flood forecasting -- Mathematical models ( lcsh )
Extreme value theory ( fast )
Flood forecasting -- Mathematical models ( fast )
Floods -- Mathematical models ( fast )
Hydrology -- Mathematical models ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 70-71).
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Matthew J. Pocernich.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
51821139 ( OCLC )
ocm51821139
Classification:
LD1190.L622 2002m .P62 ( lcc )

Full Text
APPLICATION OF EXTREME VALUE THEORY AND THRESHOLD
MODELS TO HYDROLOGICAL EVENTS
by
Matthew J. Pocernich
B.S.E., University of Michigan, 1989
M.S.E., Colorado State University, 1995
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Applied Mathematics
2002


This thesis for the Master of Science
degree by
Matthew J. Pocernich
has been approved
by


Pocernich, Matthew J. (M.S., Applied Mathematics)
Application of extreme value theory and threshold models to hydrological events
Thesis directed by Professor Craig Johns
ABSTRACT
Quantifying the probability of very high stream-flows is an important problem
in hydrology. This paper examines the use of extreme value statistics in predicting
the probability of rare flood events. While the underlying statistical distribution
describing a process may be unknown, there is a large class of distributions for
which the extreme values can be modeled with a 3-parameter distribution called
the generalized extreme value (GEV) distribution.
Hydrologists traditionally fit the GEV distribution using annual maximum
flows measured at a gaging station. Instead of using annual maximum flow values,
one may choose to model the exceedances over a given threshold. These values can
be fitted to a generalized Pareto distribution. Independently, the arrival rate of
these exceedances can be modeled using a Poisson distribution. A 2-dimensional
point process model uses both the arrival rate of exceedances and the distribution
of these exceedances to provide GEV parameter estimates.
This paper uses a maximum likelihood estimate (MLE) method to fit param-
eters in a point process model. Typically linear moment methods are used with
annual maximum values. The MLE method enables covariates to be incorporated


in the model. For example, this paper considers time as a covariate in order to
determine if the magnitude and frequency of high flow events change with time.
In applying this method to data from Cherry Creek at Franktown, Colorado,
it was found that the parameter estimates did not vary significantly with time.
This paper discusses the difficulties in choosing a threshold. Furthermore, a series
of plots shows that varying this threshold has a great effect on the return levels.
This abstract accurately represents the content of the candidate's thesis. I rec-
ommend its publication.
Signed
Craig Johns
IV


DEDICATION
I would like to thank my Dad for his support and understanding in this effort
as well as my other endeavors.


ACKNOWLEDGMENTS
I would like to thank the people in the CU-Denver Math Department that make
it such a pleasant place to be.


CONTENTS
Figures ................................................................... x
Tables.................................................................. xi
Chapter
1. Introduction............................................................ 1
1.1 Hydrological Context.................................................. 1
1.2 Statistical Context................................................... 2
2. Extreme Value Theory, Relevant Distributions and Point-process Models 6
2.1 Generalized Extreme-Value Family...................................... 6
2.2 Generalized Pareto distribution....................................... 9
2.3 Point Process Models................................................. 10
2.3.1 Historical perspective and motivation.............................. 11
2.3.2 GPD- Poisson, point process model ................................. 12
2.3.3 Poisson-GPD model and GEV distribution parameters.................. 17
2.3.4 Incorporation of covariates........................................ 18
2.4 Methods of parameter estimation..................................... 20
2.4.1 L-moments Estimators............................................... 20
2.4.2 Maximum likelihood estimators...................................... 21
2.4.3 Comparison of methods.............................................. 23
2.5 Standard Error Estimation............................................ 24
2.6 Profile plots for return level and parameter confidence intervals ... 25
3. Model Development and Evaluation...................................... 26
vii


3.1 Seasonality ......................................................... 26
3.2 Threshold Estimation................................................. 27
3.3 Graphical guidance in determining a threshold value.................. 29
3.4 Model Structures..................................................... 31
3.5 Model Selection...................................................... 32
3.5.1 Likelihood ratio test.............................................. 32
3.5.2 Graphical Model Evaluation......................................... 33
3.5.2.1 Quantile Plots (QQ Plots)...................................... 33
3.5.2.2 W-statistic plot................................................. 35
3.5.2.3 Z-statistic plot................................................. 36
4. Streamflow Data ....................................................... 37
4.1 The relation between threshold selection and number of observations. 39
4.2 Return Periods....................................................... 43
4.3 Partial Duration Series and Annual Maximum Flood Series..... 43
4.4 Stream Gage Sites ................................................... 44
5. Case Studies and Implementation........................................ 46
5.1 De-clustering Data and Describing Seasonality........................ 46
5.2 Threshold Selection.................................................. 50
5.2.1 Stability of model as a function of threshold.................... 52
5.2.2 Return period confidence intervals ................................ 54
5.2.3 QQ plots .......................................................... 57
5.3 Exploring Trends in Parameters...................................... 59
6. Discussion............................................................. 63
Appendix
viii


A. Variables and Abbreviations......................................... 68
References............................................................. 70
IX


FIGURES
Figure
2.1 Effects of varying the parameters of the GEV distribution............ 19
3.1 QQ plot for Fountain Creek, CO....................................... 35
4.1 Sequential flows exceeding a threshold............................... 38
4.2 Relation between threshold and number of observations................ 40
4.3 Cherry Creek Data and selected thresholds............................ 41
4.4 A correlogram of clustered and de-clustered flow values.............. 42
5.1 Time Series Plot, Franktown Gage.................................. 48
5.2 Number of de-clustered observations as a function of threshold. ... 49
5.3 Boxplot of de-clustered daily flows at Franktown, grouped by month 50
5.4 Mean residual life plot Franktown at Cherry Creek.................. 51
5.5 Parameter estimates against threshold for Franktown data............. 52
5.6 Flows as a function of return periods Cherry Creek at Franktown . 53
5.7 Profile plot for a 70 cfs and 100 cfs thresholds..................... 55
5.8 Upper 95th percentile confidence interval and return levels.......... 56
5.9 QQ plots for Franktown data for a range of thresholds................ 58
5.10 Z-Statistic plot for the Cherry Creek at Franktown gage............. 60
5.11 W-Statistic plot for the Cherry Creek at Franktown gage............. 61
x


TABLES
Table
4.1 Days of flows exceeding 40 cfs Cherry Creek Data................. 41
5.1 Cherry Creek at Franktown model results (Threshold = 40 cfs.) . 62
A.1 Variables and definitions........................................... 68
A.2 Abbreviations and definitions....................................... 69
xi


1. Introduction
Hydrology and other branches of civil engineering are concerned with trends
in the magnitude and frequency of extreme flow events. This paper uses extreme
value statistics in the context of a point process model to answer the question of
whether the magnitude and frequencies of high flow events are changing. This
paper has two intended audiences: hydrologists and statisticians. This section
provides context to both parties as well as a description of the data used in the
application of these methods.
1.1 Hydrological Context
As a result of urbanization, pervious surfaces such as rangelands, wetlands
and other natural surfaces are replaced with less pervious or even impervious
surfaces. Less pervious surfaces absorb less water and consequently generate more
runoff which in turn contributes to peak stream flows. Other factors that can
affect peak flows include the construction of storm sewer systems and the paving
of drainages. Runoff delivered to a stream through engineered systems typically
is delivered more quickly. Streams and rivers controlled by dams typically result
in smaller ranges of minimum and maximum flows. Channel morphologies adapt
to handle these reduced flows and ultimately, a greater danger from floods may
result [21, p. 106]. Suddenly rising waters offer people in flood zones less time
to escape and pose a danger to people and property. These waters may
1


overwhelm hydraulic devices such as storm drains and dams, thus increasing the
area affected by floods.
Urbanization results in more frequent floods of greater magnitude [21], and
increases the frequency of low and intermediate floods. To mitigate these risks,
land development guidelines often seek to maintain original stream flow character-
istics. This is accomplished through the use of zoning regulations and hydraulic
structures such as detention basins. A trend in the distribution of high flow events
may indicate that existing or past flood control practices are inadequate or that
the frequency and magnitude of large storms are changing. In either case, testing
for trends is valuable.
For a case study, data from Cherry Creek near Franktown, Colorado, were
selected. Key features of this station are the long period of record (1939 to 2001)
and the fact that northern (lower) sections of the basin have experienced recent
urbanization.
1.2 Statistical Context
This problem raises several interesting statistical and mathematical ques-
tions. In quantifying hydrological risks, the assumption is made either implicitly
or explicitly that climatological and basin drainage conditions have not changed
during the study period. In engineering practice, professional judgment is fre-
quently used to decide whether this assumption is true. Large changes in a river
system are easy to detect. The construction of a canal or dam is an example of
a significant change that is easy to identify. Urbanization is a slow process that
takes place in small increments over long time scales and the effects are difficult
2


to detect. For this reason urbanization is often overlooked and conditions in the
stream are assumed to be constant. The following items make this an interesting
problem from a statistical perspective.
Parameters as functions of time (stationary vs nonstationary models).
Coles [1, p. 92] defines a stationary series as one that corresponds to a series
of data that may be mutually dependent, but whose distributions remain constant
with time. Alternatively, in a non-stationary series, the underlying distri-
bution changes systematically with time.
Historically, most models used to estimate hydrologic events assume that
data are from a stationary distribution. To challenge this assumption, data
from a non-stationary distribution is considered as a possibility. One way to
accomplish this is to treat time as a covariate in the parameter estimates.
This information may be included in the model in several ways. These
methods are discussed in detail in Chapter 3.
Making flow data independent and identically distributed.
Many statistical procedures require the assumption of independent, identi-
cally distributed (iid) events. Stream flow data is inherently dependent or
auto-correlated. In other words, it is likely that a day with high flow will be
followed by another day with high flow. This violates the assumption of in-
dependence as well as the assumption that the variables are identically
distributed. Separating high flow events into independent events is not a
trivial task. De-clustering techniques are used to find independent events
in dependent data. An appropriately chosen threshold and de-clustering
3


method produces data that can reasonably be assumed to be independent
and identically distributed (iid).
Peaks over threshold data versus annual maximum values.
Until recently annual maximum peak flows were used to estimate parame-
ters for the GEV distribution. It is plausible that annual maxima are in-
dependent. This makes the data easier to analyze. An alternative to using
maximum annual values is the use of flows greater than a given threshold.
The data set of values above a given threshold in hydrology is referred to
as a partial duration series [20, p. 18.37-40]. In analyzing high flows, this
method is more intuitive. For example, in an arid region during a dry year,
the peak flow may be very low. In a desert stream, it could conceivably be
zero. Therefore, considering this zero flow event on par with other annual
maxima is not consistent. Further discussion of the trade offs between these
two ideas is presented in Section 4.3.
Method of estimation: maximum likelihood vs. linear moments.
Historically, the method of linear moment (L-moments) estimators has been
used to estimate GEV parameters [4]. L-moments are easy to use, always
produce an estimate and the associated approximate confidence intervals
for parameter estimates are readily calculated. This method is currently be-
ing used in the update of the National Oceanic and Atmospheric Administration's
(NOAA) Rainfall Frequency Atlases.
allow covariates to be incorporated into a model. An alternative approach is
to use the maximum likelihood estimates (MLE). Historically, this method
4


has not been used because it is more complicated. The method may fail
to converge and generally requires large sample sizes [20, 4]. However, the
MLEs allow for the introduction of covariates such as time into the models.
Methods of finding solutions are discussed in detail in Section 2.4.
5


2. Extreme Value Theory, Relevant Distributions and Point-process
Models
This chapter discusses the properties of extreme values, the distributions of
extreme values, point process models and methods for estimating parameters for
these distributions.
2.1 Generalized Extreme-Value Family
Many traditional statistical estimation techniques focus on estimating the
parameters of a distribution to get the best fit by optimizing a data dependent
function. Typically all data are used to estimate the parameters for this base
function. When one is interested in modeling extreme values, a different function
is normally used. Fortunately, for a broad class of base distributions, extreme
values are successfully modeled by a family of distributions called the generalized
extreme value (GEV) distribution.
The following explanation of extreme values is given in the context of model-
ing high flows [1]. Let X_i be a random value of the maximum daily average flow
at a site for year i with a base distribution F. Let M_n = max{X_1, X_2, ..., X_n} be
the maximum value of n yearly maximum flow values. In studying extreme flow
values, we are interested in studying the distribution of M_n, not the distribu-
tion of X_i. Assuming that the streamflow data X_i are independent and identically
6


distributed, one can find the distribution of M_n in the following manner.

\Pr\{M_n \le z\} = \Pr\{X_1 \le z, X_2 \le z, \ldots, X_n \le z\}
               = \Pr\{X_1 \le z\}\,\Pr\{X_2 \le z\}\cdots\Pr\{X_n \le z\}
               = \{F(z)\}^n    (2.1)

where z is a high streamflow of interest. There are two problems with this
approach. First, since F(z) < 1, as n increases, F(z)^n goes to 0.
Second, the distribution F may be unknown. To address the first issue, the
values are re-normalized as M_n* = (M_n − b_n)/a_n, where a_n and b_n are
sequences of constants and a_n > 0.
For a wide class of base distributions F, there exist sequences a_n, b_n such that

\Pr\{(M_n - b_n)/a_n \le z\} \to G(z) \quad \text{as } n \to \infty    (2.2)

where G(z) is a non-degenerate distribution function that produces a stable dis-
tribution. We call the function G(z) a distribution function of the generalized
extreme value (GEV) distribution. What is amazing about this GEV family is
that its form does not require knowledge of the parameters of F, only the basic
form of F.
The general form of the GEV distribution has three parameters: μ (location),
σ (scale) and ξ (shape) and is written as

G(x) = \exp\left\{ -\left[ 1 + \xi\left( \frac{x - \mu}{\sigma} \right) \right]^{-1/\xi} \right\}    (2.3)

when ξ ≠ 0 and 1 + ξ(x − μ)/σ > 0.
7


If ξ = 0, the distribution becomes

G(x) = \exp\left\{ -\exp\left[ -\frac{x - \mu}{\sigma} \right] \right\}.
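To make equation 2.3 and its ξ = 0 limit concrete, the distribution function can be evaluated directly in R (the language used for the model fitting later in this paper). The following is a minimal sketch; the function name pgev_sketch and the example parameter values are illustrative and not taken from the thesis.

    # GEV cumulative distribution function (equation 2.3 and the xi = 0 limit).
    # mu = location, sigma = scale, xi = shape.
    pgev_sketch <- function(x, mu = 0, sigma = 1, xi = 0) {
      if (sigma <= 0) stop("sigma must be positive")
      if (abs(xi) < 1e-8) {
        return(exp(-exp(-(x - mu) / sigma)))   # Gumbel limit as xi -> 0
      }
      z <- 1 + xi * (x - mu) / sigma
      # Outside the support the cdf is 0 (xi > 0) or 1 (xi < 0).
      ifelse(z > 0, exp(-z^(-1 / xi)), ifelse(xi > 0, 0, 1))
    }

    # Example: probability that an annual maximum stays below 120 cfs
    # under a hypothetical fitted GEV.
    pgev_sketch(120, mu = 100, sigma = 15, xi = 0.2)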
There are three well known distributions that fall within the class of GEV
distributions: the Gumbel, Frechet and Weibull. These distributions are defined
by the value of the shape parameter ξ [6, p. 195]. Graphically, these distributions
can be visualized as having a heavy right tail, a heavy left tail or symmetric
tails. Alternatively, they are distinguished by their domains, either having a
fixed upper bound, a fixed lower bound or being unbounded. The three specific forms of
this distribution are named [17], [8]:
Gumbel (Type I)

G(x) = \exp\left\{ -\exp\left[ -\frac{x - \mu}{\sigma} \right] \right\}, \qquad \xi = 0,\; -\infty < x < \infty.

Frechet (Type II)

G(x) = \begin{cases} 0 & x \le \mu - \sigma/\xi \\ \exp\left\{ -\left[ 1 + \xi\left( \frac{x - \mu}{\sigma} \right) \right]^{-1/\xi} \right\} & \mu - \sigma/\xi < x < \infty \end{cases} \qquad \xi > 0.

Weibull (Type III)

G(x) = \begin{cases} \exp\left\{ -\left[ 1 + \xi\left( \frac{x - \mu}{\sigma} \right) \right]^{-1/\xi} \right\} & -\infty < x < \mu - \sigma/\xi \\ 1 & x \ge \mu - \sigma/\xi \end{cases} \qquad \xi < 0.
8


Smith [17] notes a few other properties of the GEV. The mean exists if ξ < 1.
The variance exists if ξ < 1/2. The mean and variance are

E(X) = \mu + \frac{\sigma}{\xi}\left\{ \Gamma(1 - \xi) - 1 \right\} \quad (\xi < 1),
\qquad
\operatorname{Var}(X) = E\{(X - E(X))^2\} = \frac{\sigma^2}{\xi^2}\left\{ \Gamma(1 - 2\xi) - \Gamma^2(1 - \xi) \right\} \quad (\xi < 1/2).

When ξ = 0, these expectations become

E(X) = \mu + \sigma\gamma, \qquad \operatorname{Var}(X) = \frac{\sigma^2 \pi^2}{6},

where γ ≈ 0.5772 (Euler's constant).
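As a quick numerical check of these formulas, the mean and variance can be computed directly; this small base-R helper (gev_moments is a name introduced here for illustration) returns NA when the stated existence conditions fail.

    # Mean and variance of the GEV; the mean requires xi < 1 and the
    # variance requires xi < 1/2.  The xi = 0 case uses Euler's constant.
    gev_moments <- function(mu, sigma, xi) {
      if (abs(xi) < 1e-8) {
        g <- 0.5772156649
        return(c(mean = mu + sigma * g, var = sigma^2 * pi^2 / 6))
      }
      m <- if (xi < 1)   mu + (sigma / xi) * (gamma(1 - xi) - 1) else NA
      v <- if (xi < 0.5) (sigma^2 / xi^2) * (gamma(1 - 2 * xi) - gamma(1 - xi)^2) else NA
      c(mean = m, var = v)
    }

    gev_moments(mu = 100, sigma = 15, xi = 0.2)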
2.2 Generalized Pareto distribution
Assume that F is a distribution whose extreme values can be re-normalized
so that the GEV distribution is a reasonable model. A threshold u can be chosen
such that the exceedances of the threshold are modeled well by the Generalized Pareto
distribution (GPD). Let y equal the magnitude of the exceedance of the threshold
u, that is, y = x − u. In other words, conditioned on the fact that x > u, the
distribution of values exceeding the threshold (y) [17, p. 710] is given by

G(y; \sigma^*, \xi) = 1 - \left[ 1 + \xi \frac{y}{\sigma^*} \right]^{-1/\xi}    (2.4)

where σ* = σ + ξ(u − μ) and the parameters μ and σ are from the GEV distri-
bution. The shape parameter ξ is common to both distributions and σ* is used
in the GPD distribution.
9


Like the GEV distribution, the support of the GPD distribution (2.4) is re-
stricted based on the sign of the shape parameter ξ.
If ξ > 0, then x ∈ (u, ∞).
If ξ < 0, then x ∈ (u, u − σ*/ξ).
If ξ = 0, then x ∈ (u, ∞) and the equation becomes

G(y; \sigma^*, 0) = 1 - \exp\left( -\frac{y}{\sigma^*} \right).

The expected value and the variance of a GPD distribution are

E(Y) = \frac{\sigma^*}{1 - \xi} \quad (\xi < 1),    (2.5)

\operatorname{Var}(Y) = \frac{(\sigma^*)^2}{(1 - \xi)^2 (1 - 2\xi)} \quad (\xi < 1/2),    (2.6)

and the mean excess over a higher level w is

E(Y - w \mid Y > w) = \frac{\sigma^* + \xi w}{1 - \xi},    (2.7)

where w is a value greater than 0.
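The GPD distribution function (2.4) and the mean excess relation (2.7) are straightforward to code; the sketch below uses the illustrative names pgpd_sketch and mean_excess. The linearity of the mean excess in w is exactly what the mean residual life plot of Chapter 3 examines.

    # GPD cdf for exceedances y = x - u (equation 2.4), with the xi = 0 limit.
    pgpd_sketch <- function(y, sigma_star, xi = 0) {
      if (abs(xi) < 1e-8) return(1 - exp(-y / sigma_star))
      1 - pmax(1 + xi * y / sigma_star, 0)^(-1 / xi)
    }

    # Theoretical mean excess over a higher level w (equation 2.7), valid for xi < 1.
    mean_excess <- function(w, sigma_star, xi) (sigma_star + xi * w) / (1 - xi)

    pgpd_sketch(c(10, 50, 100), sigma_star = 30, xi = 0.1)
    mean_excess(c(0, 10, 20), sigma_star = 30, xi = 0.1)   # linear in w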
2.3 Point Process Models
Chapter 6 of Coles' text [1] is entitled "A Point Process Characterization
of Extremes." In this chapter (as well as other sources including Leadbetter [8]
and Smith [17]), all extreme value models, such as the GEV for annual maxima,
threshold models and extreme value models with covariates, are discussed in the
context of point processes.
The utility of the point processes paradigm is that it models the occurrence
of extreme events. In modeling extreme flows, we use a model called a marked
10


point process or a GPD-Poisson model.1 Basically, when an event occurs, the
outcome of a related event is recorded. In this application, when a threshold is
exceeded, the magnitude of the exceedance is recorded. While there are many
distributions which model a point process, the most common is the Poisson dis-
tribution. In particular, the number of events that occur in a time period can be
shown to follow the Poisson distribution under mild regularity conditions.
2.3.1 Historical perspective and motivation
A historical perspective for the GPD Poisson model is presented by Onoz
[13]. Shane and Lynn [16] proposed conditions under which the occurrences
of exceedances follow a homogeneous-Poisson process and that the magnitudes
of these events follow an exponential distribution. Kirby [7] showed the Poisson
process could be justified as the limiting form of randomly spaced Bernoulli trials
with small probabilities.
With the Poisson process agreed upon as a justifiable model for the arrival
of exceedances, other work focused on the distribution of the magnitude of these
exceedances. Zelenhasic [22] considered a two-parameter gamma distribution (a gen-
eralization of the exponential distribution) as a distribution of the exceedances.
By assuming the annual number of events was modeled by the Poisson process,
the distribution of annual maxima was found to follow a Gumbel distribution
1 There is a lack of consistency in the terminology used to describe the application of a point
process as applied in this paper. In Smith's paper [17], the term "GPD-Poisson model" is used.
In some literature the term "marked point-process model" is used. In McNeil's paper [11] a
similar application is referred to as a "2-dimensional point process."
11


(a case of the GEV distribution.) Van Montfort and Witter [12] considered the
GPD for the distribution of the magnitude of the exceedances. In the article by
Davidson [3, p. 395], the utility of the GPD is motivated in the following ways.
The GPD is the limiting distribution for excesses over thresholds if and
only if the parent distribution is in the domain of attraction of one of the
extreme value distributions.
If X (the data) is from the GEV and the threshold (u) is greater than 0,
then the conditional distribution of the exceedances is a GPD.
If N (the number of exceedances in m years) is distributed as a Poisson
and the exceedances are independent and identically distributed, then
max(X_1, X_2, ..., X_N) follows a generalized extreme value distribution.
Finally, Rosbjerg et al [15] and Madsen et al [10] considered modeling the arrival
of peak events with a Poisson process and the magnitude of the exceedances
with the GPD distribution. This is the same model described by Smith [17] as a
GPD-Poisson model and is discussed further in this
paper.
2.3.2 GPD- Poisson, point process model
In modeling high flows, a GPD-Poisson point process may be used. This
model has the useful feature that parameters used to describe the arrival rate
of exceedances, (frequently denoted by A), are related to the scale and shape
parameters of the GPD distribution. These relations will be discussed later in
12


Section 2.3.3. In general, a point process is a stochastic rule for the occurrence
and position of point events [1, p. 124]. In the case of a one-dimensional point
process, let 𝒜 be a space representing time and let A denote a specified time
period where A ⊂ 𝒜. The number of events occurring in set A is expressed as
N(A).
In many random processes, the number of these events occurring in A is
approximated as a Poisson random variable with mean proportional to the length
of the interval. Furthermore, the time between these events is assumed to be
exponential. Coles [1, p. 126] notes that physical phenomena for which there
is a natural pattern in the spacing of events are not suitable for modeling with a point process. Examples
of such processes include the intervals between trees
and the occurrence times of rainfall events, which tend to occur in clusters. This
emphasizes the importance of de-clustering hydrological data.
Let t_1, t_2, ..., t_n be the specific times of events occurring within a region
of the time period 𝒜. Furthermore, assume that the number of occurrences of these
events follows the Poisson distribution with intensity function l(·; θ). In essence,
l(·; θ) is a function that produces the parameter for the Poisson distribution and
is referred to as the intensity function. L(·; θ) is called the intensity measure and
equals the expected number of exceedances within the specified time period [1,
p. 127].
This paper considers two possibilities. First, if λ (the parameter of the Pois-
son distribution) is constant with time, the process is called a homogeneous Pois-
son process. A homogeneous process has the following properties [1, p. 125].
For all A = [t_1, t_2] ⊂ 𝒜, N(A) ~ Poisson with parameter λ(t_2 − t_1).
13


The numbers of exceedances in non-overlapping intervals are independent.
The MLE of λ is λ̂ = n/(t_2 − t_1), where n is the number of events
exceeding the threshold within the time interval A = [t_1, t_2].
Second, if λ is a function of time, it is expressed as l(t; θ) and the process is known as a
non-homogeneous Poisson process. Therefore, if the arrival rate of these events
varies with time and A = [t_1, t_2] ⊂ 𝒜, then N(A) ~ Pois(L(A; θ)), where

L(A; \theta) = \int_{t_1}^{t_2} l(x; \theta)\, dx.    (2.8)
This formula for the Poisson rate can be generalized to higher dimensions.
Information from a point process is derived from two sources: the locations
where events occur as well as the locations where events did not occur. Let I_i be
the interval (t_i, t_i + δ_i) that contains an exceedance event, for i = 1, 2, ..., n, where δ_i
is arbitrarily small. The union of these intervals contains all the occurrences of
events (in this paper events are flows greater than a threshold). Let 𝓘 = 𝒜 \ ∪_{i=1}^{n} I_i
be the region that does not contain an event. Therefore

\Pr\{N(I_i) = 1\} \approx \exp\{-L(I_i; \theta)\}\, L(I_i; \theta)    (2.9)

where

L(I_i; \theta) = \int_{t_i}^{t_i + \delta_i} l(u)\, du \approx l(t_i)\,\delta_i.    (2.10)

These intervals, I_i, can be defined as being suitably small so that the chance that
an interval will contain more than one event is negligible. Since δ_i is arbitrarily
small (near zero), combining equations 2.9 and 2.10 produces

\Pr\{N(I_i) = 1\} \approx \exp\{-l(t_i)\delta_i\}\, l(t_i)\,\delta_i    (2.11)

For the region without events we have

\Pr\{N(\mathcal{I}) = 0\} = \exp\{-L(\mathcal{I})\} \approx \exp\{-L(\mathcal{A})\}    (2.12)
14


where the approximation holds if δ is small. Combining equations 2.11 and 2.12,
the likelihood for the regions with and without events is

\operatorname{Lik}(\theta; t_1, t_2, \ldots, t_n) = \Pr\{N(\mathcal{I}) = 0,\, N(I_1) = 1,\, N(I_2) = 1, \ldots, N(I_n) = 1\}
= \Pr\{N(\mathcal{I}) = 0\} \prod_{i=1}^{n} \Pr\{N(I_i) = 1\}
\approx \exp\{-L(\mathcal{A}; \theta)\} \prod_{i=1}^{n} l(t_i; \theta)\,\delta_i.    (2.13)

Dividing through by \prod_{i=1}^{n} \delta_i produces

\operatorname{Lik}(\theta; t_1, t_2, \ldots, t_n) = \exp\{-L(\mathcal{A}; \theta)\} \prod_{i=1}^{n} l(t_i; \theta),    (2.14)

where L(\mathcal{A}; \theta) = \int_{\mathcal{A}} l(x; \theta)\, dx.
Note that the expression to the right of the product sign is the intensity function
l(t_i; θ), the derivative of the intensity measure, which, as shown in equation 2.16, takes the form of the GPD.
An alternative explanation is provided by Smith [17]. Consider a series of
random variables X_i which come from a distribution whose extreme values belong
to the GEV distribution (2.2) with parameters (μ, σ, ξ). A sequence of point
processes can then be defined in ℝ² as

N_n = \{\, (i/(n + 1),\; (X_i - b_n)/a_n) : i = 1, \ldots, n \,\},

where i is the position of the datum in the series. Dividing these positions by (n + 1)
places all observations on the interval (0, 1). A threshold u is chosen and
defines the region A = [0, 1] × [u, ∞). Then, assuming the exceedances are from a
GPD distribution, each of the n points in the series listed above has the following
probability of being above the threshold,

p = \Pr\{(X_i - b_n)/a_n > u\} \approx \frac{1}{n}\left[ 1 + \xi\left( \frac{u - \mu}{\sigma} \right) \right]^{-1/\xi}.    (2.15)
15


When considering a specific value z greater than the threshold u, the occurrence
of an event greater or smaller than z can be treated as a binomial process. As n
gets large, this converges to a Poisson process with rate

L(A) = \left[ 1 + \xi\left( \frac{z - \mu}{\sigma} \right) \right]^{-1/\xi}.    (2.16)

If A = [t_1, t_2] is a subset of the interval represented above by [0, 1], then the
intensity measure can be represented by

L(A) = (t_2 - t_1)\left[ 1 + \xi\left( \frac{z - \mu}{\sigma} \right) \right]^{-1/\xi}.

The significance of these arguments is that if the threshold u is chosen to be
large enough, the number of events exceeding the threshold in an interval can
be approximated by a Poisson process.
Again, by scaling [t_1, t_2] to the interval (0, 1) and incorporating the number of
years of record n_y, the intensity measure 2.16 on an annual basis is

L(A) = n_y (t_2 - t_1)\left[ 1 + \xi\left( \frac{z - \mu}{\sigma} \right) \right]^{-1/\xi}.

The interval [t_1, t_2] becomes [0, 1] when divided by (t_2 − t_1). We then have a
general form of the Poisson likelihood function [1, p. 133]:

\operatorname{Lik}_A(\theta; t^*_1, t^*_2, \ldots, t^*_{N(A)}) = \exp\{-L(A)\} \prod_{i=1}^{N(A)} l(t^*_i; x_i)
\propto \exp\left\{ -n_y\left[ 1 + \xi\left( \frac{u - \mu}{\sigma} \right) \right]^{-1/\xi} \right\}
\prod_{i=1}^{N(A)} \frac{1}{\sigma}\left[ 1 + \xi\left( \frac{x_i - \mu}{\sigma} \right) \right]^{-1/\xi - 1},

where the t^*_i ∈ [t_1, t_2] are the times of the exceedances and the x_i are their magnitudes.
16


2.3.3 Poisson-GPD model and GEV distribution parameters
A special and useful feature of the Poisson-GPD model is that its parameters
are directly related to the parameters of the GEV distribution. Smith assumes a
homogeneous process and gives the following explanation in [17, pp. 711-712].
Let N be the number of events exceeding the threshold u in a given year. N
follows a Poisson distribution with a mean of λ. Given that there is at least
one exceedance per year, the exceedance values (denoted Y_1, Y_2, ..., Y_N) follow the
GPD distribution. Then

\Pr\left\{ \max_{1 \le i \le N} Y_i \le x \right\}
= \Pr\{N = 0\} + \sum_{n=1}^{\infty} \Pr\{N = n,\; Y_1 \le x, \ldots, Y_n \le x\}
= \exp\left\{ -\lambda\left[ 1 + \xi\left( \frac{x - u}{\sigma^*} \right) \right]^{-1/\xi} \right\},    (2.17)

where u is the threshold, x is a value greater than the threshold, λ is the rate
at which the threshold is exceeded, and (μ, σ, ξ) are the parameters for the GEV
distribution. When

\sigma^* = \sigma + \xi(u - \mu)    (2.18)

\lambda = \left[ 1 + \xi\left( \frac{u - \mu}{\sigma} \right) \right]^{-1/\xi},    (2.19)

the GPD is transformed into the generalized extreme value distribution (Equation
2.3).
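Equations 2.18 and 2.19 define a one-to-one mapping (for ξ ≠ 0) between the GEV parameters and the Poisson-GPD parameters; the following base-R sketch (function names are illustrative) applies the mapping and its inverse.

    # GEV parameters (mu, sigma, xi) plus threshold u  ->  Poisson-GPD parameters.
    gev_to_pp <- function(mu, sigma, xi, u) {
      sigma_star <- sigma + xi * (u - mu)                  # equation 2.18
      lambda     <- (1 + xi * (u - mu) / sigma)^(-1 / xi)  # equation 2.19
      c(lambda = lambda, sigma_star = sigma_star, xi = xi)
    }

    # Inverse mapping: recover (mu, sigma) from (lambda, sigma_star, xi) and u.
    pp_to_gev <- function(lambda, sigma_star, xi, u) {
      sigma <- sigma_star * lambda^xi
      mu    <- u - (sigma / xi) * (lambda^(-xi) - 1)
      c(mu = mu, sigma = sigma, xi = xi)
    }

    pp <- gev_to_pp(mu = 100, sigma = 15, xi = 0.2, u = 70)
    pp_to_gev(pp[["lambda"]], pp[["sigma_star"]], xi = 0.2, u = 70)  # recovers 100 and 15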
17


2.3.4 Incorporation of covariates
Estimating the parameters by maximizing a likelihood function allows covari-
ates to be incorporated into a point process model. This paper solely considers
time as a covariate, but other information such as climate data or information
about development within the watershed could be readily included.
Trends in the location and scale parameters for the GEV distribution are
considered. The effects of changes in these parameters can be seen in Figure 2.1.
An increase in the location parameter shifts the mass of the pdf curve right for
values greater than the threshold. Increases in the scale parameter flattens the
curve. Changes in the shape parameter produce the most dramatic results. A
negative shape value results in a finite upper bound. A positive value for the
shape parameter results in a right-heavy tailed distribution.
Trends in parameters cannot be intuitively related to changes in the magni-
tude and frequency of extreme events. Examining equation 2.19, one can see how
changes in more than one parameter could possibly offset changes in another. A
given rate of exceedance can be achieved using many combinations of μ, σ and ξ.
18


Figure 2.1: Effects of varying the parameters of the GEV distribution (Equation
2.3).
2.4 Methods of parameter estimation
19


There are several methods to estimate parameters. We focus on MLE meth-
ods because of nice asymptotic properties and because this method allows covari-
ates to be added to the model. In hydrology, L-moments are more typically used
as they are easy to compute and efficient for fitting the extreme value distribu-
tions. For this reason they are briefly discussed below.
2.4.1 L-moments Estimators
L-moment estimators are related to product moments and method of mo-
ments estimation. L-moment estimators are also referred to as probability
weighted moments (PWM). The method of moments is dependent on the ex-
pected values of powers of X, where X is a random variable. A general expres-
sion of these powers is /ir = E(X ii)r. Sample estimates of the moments are
mr = n-1 X)iLi(a:* x)r for r > 2 and toi = x. These sample moments can be
used to estimate parameters. A problem with the method of moments is that it is
not robust to outliers since it raises differences to a power[6]. A single point (X*)
that is greatly different than the mean (/i), greatly affects the moment estimate
when squared or cubed.
L-moments and sample L-moments differ from the method of moments esti-
mators in that they are based on ranked data and are made of linear combinations
of ranked observations. Other benefits include the following. With more than 20
observations, biases for most distributions are negligible [5]. L-moment
parameter estimators have asymptotic distributions that are multivariate nor-
mal [5], and they are more robust to outliers than ordinary moments as they
don't depend on squared or cubed values of the data [20].
20


Instead of products of moments, L-moments rely on the relation β_r =
E{X[F(X)]^r}. A general form for sample estimators of β_r is given by

b_r = \frac{1}{n} \sum_{j=1}^{n} p_{(j)}^{\,r}\, X_{(j)},    (2.20)

where X_{(j)} is the jth ordered value and p_{(j)} is a plotting position such as (j − 0.35)/n. An unbiased form of this equation is
also available. These values (b_r or β̂_r) are called probability-weighted moments
(PWM) and they are used in linear combination to estimate L-moments.
To estimate parameters for the GEV distribution, L-moments are used in
the following manner. The first three PWM estimators (b_0, b_1, b_2) are calculated
using equation 2.20. L-moment estimates of the mean and of scale (λ̂_1
and λ̂_2) are calculated as functions of the PWM estimates. Estimates of the L-moment
coefficients of skewness (τ̂_3) and kurtosis (τ̂_4) are calculated as ratios of
the L-moment estimates. These in turn are used to solve for the GEV parameters.
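The following base-R sketch computes unbiased sample PWMs b0, b1, b2 and the first L-moments from them using Hosking's standard relations (λ1 = b0, λ2 = 2b1 − b0, λ3 = 6b2 − 6b1 + b0). It is a minimal illustration of the machinery, not the exact estimator used in the thesis.

    # Unbiased sample PWMs and derived L-moments; t3 = l3 / l2 is the L-skewness.
    sample_lmoments <- function(x) {
      x <- sort(x)
      n <- length(x)
      j <- seq_len(n)
      b0 <- mean(x)
      b1 <- sum((j - 1) / (n - 1) * x) / n
      b2 <- sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
      l1 <- b0
      l2 <- 2 * b1 - b0
      l3 <- 6 * b2 - 6 * b1 + b0
      c(l1 = l1, l2 = l2, t3 = l3 / l2)
    }

    set.seed(1)
    sample_lmoments(rexp(100, rate = 1 / 50))   # e.g. a sample of exceedances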
2.4.2 Maximum likelihood estimators
Consider the following. Let {x_1, x_2, ..., x_n} be a set of data. Let f(x; θ) be the
pdf for a specified distribution with parameters θ. Assume that the parameters
θ for this distribution are known. For each data value x_i, the likelihood of a
given value can be found by using the pdf function f(x_i; θ). Assuming the data
are independent, the likelihood of a given data set occurring can be found by
multiplying these probabilities together as f(x_1)f(x_2)f(x_3)⋯.
Considering that the data are known and the parameters for the distribution
are unknown, the likelihood function can be expressed as L(θ; x) = \prod_{i=1}^{n} f(x_i; θ).
21


The method of maximum likelihood works by finding the values for the param-
eters 6 that maximize the value of the likelihood function. Equivalently, the
log-likelihood may be maximized.
With this in mind, we look to express the log-likelihood function of the point
process models and solve for the values of the parameters that maximize the
following function.
L(\mathbf{y}; \lambda, \sigma^*, \xi) = \frac{(\lambda T)^N e^{-\lambda T}}{N!}
\prod_{i=1}^{N} \frac{1}{\sigma^*}\left[ 1 + \frac{\xi y_i}{\sigma^*} \right]^{-1/\xi - 1},    (2.21)

where y_i is the magnitude of the ith exceedance over the threshold, N is the number of
observations, λ is the arrival rate of the exceedances and σ* and ξ are parameters
for the GPD distribution.
Equation 2.21 is developed using the following rationale. The Poisson model
for the arrival of exceedance events is combined with the GPD distribution of
the magnitude of such events exceeding a given value u. The first part of equa-
tion 2.21 describes the arrival rate of flows exceeding a given threshold (Poisson
distribution). These events arrive at a rate of λ. N is the number of ex-
ceedances over the T-year period. The second part of the equation deals with the
distribution of the magnitude of flows exceeding a given threshold (see equation
2.4).
The log-likelihood function then becomes

\ell_{N,Y}(\lambda, \sigma^*, \xi) = N \log \lambda - \lambda T - N \log \sigma^*
- \left( 1 + \frac{1}{\xi} \right) \sum_{i=1}^{N} \log\left( 1 + \frac{\xi Y_i}{\sigma^*} \right).    (2.22)
Numerical optimization methods are used to find the MLEs of σ*, λ and ξ. As
with any numerical optimization, it is important to consider the problems associated with
poor starting values and to assess whether a solution is a global rather than a local
maximum.
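A minimal sketch of the maximum likelihood fit of equation 2.22 using base R's optim. The exceedance vector y and the record length in years are simulated stand-ins (not the Cherry Creek data), and λ and σ* are log-transformed so the optimizer keeps them positive; this illustrates the approach rather than the exact routine used in this paper.

    # Negative log-likelihood of the Poisson-GPD model (equation 2.22).
    # par = (log(lambda), log(sigma_star), xi); y = exceedances; Tyears = record length.
    nll_pois_gpd <- function(par, y, Tyears) {
      lambda <- exp(par[1]); sigma_star <- exp(par[2]); xi <- par[3]
      z <- 1 + xi * y / sigma_star
      if (any(z <= 0)) return(1e10)              # outside the GPD support
      ll <- length(y) * log(lambda) - lambda * Tyears -
            length(y) * log(sigma_star) - (1 + 1 / xi) * sum(log(z))
      if (!is.finite(ll)) return(1e10)
      -ll
    }

    set.seed(2)
    y <- rexp(60, rate = 1 / 25)                  # hypothetical de-clustered exceedances
    fit <- optim(c(log(1), log(20), 0.1), nll_pois_gpd, y = y, Tyears = 63,
                 hessian = TRUE)
    c(lambda = exp(fit$par[1]), sigma_star = exp(fit$par[2]), xi = fit$par[3])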
2.4.3 Comparison of methods
The benefits and deficits of the MLE and L-moment methods of parameter
estimation have been much debated. Advantages for PWM and L-moments over
MLEs include:
MLEs require more data and maximization algorithms may not converge
or may converge to local, rather than global maxima [6].
PWM always produces a solution. Hosking also suggests that L-moments
produce a less biased solution in the hydrological context [4].
Smith lists several advantages of MLE methods over L-moments and probability
weighted moments (PWM). These advantages include the following [17, p. 731].
The PWM implicitly puts restrictions on the parameter values that the
MLE method does not require. If these restrictions are included in MLE
procedures, some of the advantages of the PWM method disappear.
The MLE method is a more general technique that readily incorporates
covariates. L-moments are more specialized techniques that do not allow
such covariates to be incorporated.
23


2.5 Standard Error Estimation
In order to get confidence intervals for the parameter estimates, several meth-
ods can be used. However, all of these methods are asymptotic and therefore rely
on large sample sizes. With a large sample, the distribution of the estimated pa-
rameters θ̂ is approximately multivariate normal. With less
than a large amount of data, one hopes that the multivariate normal distribution
provides an adequate approximation.
In most MLE applications, standard errors of the parameters are estimated
using the observed information matrix. For a large number of observations, the
asymptotic distribution is used as an approximate distribution for estimating
standard errors of estimators. Under broad regularity conditions, the asymptotic
distribution of the estimated parameters θ̂ is multivariate normal with mean θ
and covariance matrix equal to the negative of the inverse of the expected
value of the Hessian matrix of the log-likelihood function [17, p. 721]. The
square roots of the diagonal elements of this covariance matrix are taken as the
standard errors for the estimated parameters. The covariance matrix is estimated
by substituting the MLE values for the parameters into the Hessian.
In practical situations, the information matrix can be difficult to calculate.
For this reason, the matrix of the second partial derivatives of the negative log-
likelihood functions may be estimated using the delta method. The first derivative
with respect to the parameters can be approximated using the following relation:
\frac{\partial \ell_y}{\partial \theta_j} \approx \frac{\ell_y(\theta + \epsilon\, e_j) - \ell_y(\theta)}{\epsilon},
24


where ε is an arbitrarily small number, e_j is the unit vector in the jth direction,
and ℓ_y(θ) is the log-likelihood function given data point y. Using this equation for
each parameter and data point produces a matrix M that is n by p, where n is
the number of data points and p is the number of parameters.
The Hessian is approximated by MTM. The square roots of the diagonals of
the inverse of this matrix provide an estimate of the standard errors associated
with each parameter.
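The gradient-based approximation described above can be sketched in a few lines of R. Here loglik_i(theta, i) is assumed to be a user-supplied function returning the log-likelihood contribution of observation i at parameter vector theta; the helper name score_se is introduced only for illustration.

    # Approximate standard errors from numerically differenced per-observation
    # score vectors: build the n-by-p matrix M of first derivatives and use
    # the inverse of M'M, as described in the text.
    score_se <- function(theta_hat, loglik_i, n, eps = 1e-5) {
      p <- length(theta_hat)
      M <- matrix(0, nrow = n, ncol = p)
      for (i in seq_len(n)) {
        for (j in seq_len(p)) {
          step <- replace(numeric(p), j, eps)          # eps times unit vector e_j
          M[i, j] <- (loglik_i(theta_hat + step, i) - loglik_i(theta_hat, i)) / eps
        }
      }
      H <- t(M) %*% M                                  # approximate information matrix
      sqrt(diag(solve(H)))                             # standard errors
    }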
2.6 Profile plots for return level and parameter confidence intervals
Profile plots for parameter estimates are created by fixing one or more of the
parameters and finding the maximum partial log-likelihood function by varying
the remaining parameter(s). The value of the fixed parameter(s) of interest is var-
ied across a range of values. Unlike confidence intervals based on the assumption
of normality, these confidence intervals may be asymmetric.
While one may calculate confidence intervals for parameter estimates, this
may not be as intuitively useful as calculating confidence intervals for return
levels [1, pp. 31-42]. To accomplish this for a specific return period, the return
level is fixed and the parameter(s) of interest are varied across a range of values.
Partial MLEs are found by allowing the other distribution parameters to vary.
The related negative log-likelihood values are plotted. Next, the χ² distribution
is used to estimate the confidence intervals. To find the 95 percent confidence
interval, a horizontal line is plotted 0.5·c(1, 0.95) below the maximum of the profile log-likelihood, where c(1, 0.95) is the 95%
quantile of the χ²₁ distribution. Examples of profile plots are found in Section
5.2.2.
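A rough sketch of a profile-likelihood interval for the shape parameter, reusing the nll_pois_gpd function, the simulated exceedances y and the 63-year record length from the sketch in Section 2.4.2 (all of which are illustrative assumptions). The interval is read off where the profile log-likelihood is within 0.5 times the 0.95 quantile of the χ²₁ distribution of its maximum.

    # Profile log-likelihood for xi: fix xi on a grid, maximize over the other
    # parameters, and keep the grid values within the chi-squared cutoff.
    profile_xi <- function(xi_grid, y, Tyears) {
      prof <- sapply(xi_grid, function(xi0) {
        start <- c(log(length(y) / Tyears), log(mean(y)))
        opt <- optim(start, function(p) nll_pois_gpd(c(p, xi0), y, Tyears))
        -opt$value                                  # profile log-likelihood at xi0
      })
      cutoff <- max(prof) - 0.5 * qchisq(0.95, 1)
      range(xi_grid[prof >= cutoff])                # approximate 95% interval for xi
    }

    profile_xi(seq(-0.3, 0.6, by = 0.01), y = y, Tyears = 63)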
25


3. Model Development and Evaluation
Using the properties of generalized extreme value distribution and the point
process model, we now proceed to create a series of models that will determine
if there is a trend in the frequency and magnitude of high flow events. This sec-
tion addresses the issues of seasonality, selecting a suitable threshold, estimating
parameters, and finally choosing the most appropriate model.
3.1 Seasonality
The functions used in this paper to estimate parameters were developed by
Stuart Coles to accompany his book [1] and were ported into R by Alec
Stephenson (www.maths.lancs.ac.uk/~stephena/software.html). The func-
tions require that the threshold either be fixed or be a fully described function.
In the example to be presented, seasonality is removed by using a sinusoidal
function. Attempts to find a trend in the threshold by incorporating a covari-
ate representing time were not successful since adjusting the threshold changes
the size of the data set. These changes in the dataset result in an un-smooth
likelihood function. Furthermore, a trend in the threshold along with trends in
distribution parameters would be difficult to interpret.
Rivers often exhibit a large degree of seasonal behavior. For example, streams
and rivers in Colorado tend to have peak flows during the spring and summer
months. After the summer months, with no snow-melt or storm-flow runoff to fill
26


them, rivers become dry. There are several ways to handle seasonality in data.
One way is to remove the effects of seasonality from the data. Another method
is to analyze the data from different seasons separately. Davidson and Smith [3]
address these techniques and conclude that although accounting for seasonality
had a significant effect on the parameter estimates, it did not have a noticeable
effect on the return values. They suggest that modeling seasonality may not
result in a superior solution. Smith and Davidson [3] note that care must be
taken when modeling the seasonality because structures that may fit the central
portion of the data may not be valid for the extreme values.
Data from distinct seasons can be separated and the analysis performed on
data from each season separately. For both approaches a definitive basis for sea-
sonality should exist. Rasmussen and Rosbjerg [14] also recommend that in the
presence of strong seasonality, the most dominant season should be selected
and the remaining seasons ignored. Further attempts to model seasonality are
not recommended since additional parameters may introduce greater sampling er-
ror. This study separates the data into a high flow and a low flow season, then
analyzes the data from the high flow season.
3.2 Threshold Estimation
The first step in utilizing a point process model is choosing a threshold. Only
the data greater than this threshold² are modeled using the point process model.
²This threshold is used to isolate extreme values that follow a GPD distribution. In contrast,
the threshold discussed in the subsequent chapter is used to de-cluster independent values from
streamflow data. Creating independent hydrological data will be further discussed in Chapter 4.
27


There is a trade-off in choosing a threshold too high or too low. A high
threshold will eliminate much of the data, and the variance of the parameter estimates
will be high. Alternatively, a low threshold will include more of the data, and one
may be modeling the bulk of the data rather than the extreme values. This will
induce bias into the parameter estimates.
Ideally, a threshold should be chosen so that the number of occurrences of
such events follows a Poisson distribution and the magnitude of the exceedances
follow a GPD distribution. Lang [9] outlines several methods used for threshold
selection and I review them here.
Method 1: Mean number of over-threshold events This method involves
systematically increasing the threshold value and noting the number of
events chosen per year. Various authors have suggested an ideal number of
events per year of 1.65 [20], [2]. The justification for 1.65 is based on
work by Cunnane [2]. The author compares the theoretical variances for
the annual maximum value series and the partial duration series. For return
periods greater than 10 years, if more than 1.65 values per year are included,
the variance for the partial duration series is less than that for the annual maximum
value series.
Method 2: Mean exceedance above threshold This method was developed
by Davidson and Smith [3]. A threshold value should be selected in the
domain where the mean exceedance above the threshold,
(X̄_S − S), is a linear function of the threshold level S, where X̄_S is the mean
value of the data greater than S. Values for S from a
region of linearity support the hypothesis that the excesses are from the
GPD. (A small computational sketch of this quantity is given at the end of this section.)
Analogously, if the exceedances truly follow a GPD distribution, the relation
between the expected number of exceedances and the threshold should be
linear. Linearity is maintained by the adjustment of the location and scale
parameters [1, pp. 78-79]. Graphical tools assist in the threshold selection
process and are discussed below.
Method 3: Fixed Percentile A threshold may be chosen by selecting a fixed
percentile of data. Smith uses this method in [18] when studying data
from many weather stations. In addition to simplicity, an advantage of
this method is that an equal number of exceedances are selected from each
station.
Lang [9] recommends the following steps for selecting a threshold.
1. Identify an interval of threshold values that are in the linear region of a
mean exceedance plot.
2. Select within this interval the largest threshold which produces more than
2 or 3 events per year based on method 1.
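The mean exceedance quantity of Method 2 is easy to compute over a grid of candidate thresholds. The sketch below assumes a numeric vector flows of de-clustered peak flows (here simulated as a stand-in) and simply plots the empirical mean residual life curve; a region where the curve is roughly linear supports a GPD model above that threshold.

    # Empirical mean residual life: mean exceedance above each candidate threshold.
    mrl_curve <- function(flows, thresholds) {
      sapply(thresholds, function(u) mean(flows[flows > u] - u))
    }

    set.seed(3)
    flows <- rexp(500, rate = 1 / 30)              # stand-in for de-clustered flow data
    u_grid <- seq(10, 120, by = 5)
    plot(u_grid, mrl_curve(flows, u_grid), type = "l",
         xlab = "Threshold (cfs)", ylab = "Mean exceedance")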
3.3 Graphical guidance in determining a threshold value.
Several graphics have been proposed to assist in threshold selection. Exam-
ples of these graphs are found in the following section and are not reproduced
here. For each graph discussed, examples are referenced to figures used later in
this report.
29


Mean residual life plot Corresponding to the mean exceedance over threshold
method, a mean residual life plot can be constructed by plotting the average
of exceedances over a given threshold against the value of the threshold [1,
p. 79]. An example of this type of plot is Figure 5.4. (In this example, one
notes a change in the slope of the values near a threshold of 75 cfs.) As the
threshold is increased, the average value of the exceedances is calculated.
Parameter estimates versus threshold plots For a range of thresholds, pa-
rameters are estimated and the corresponding standard errors are calcu-
lated. These are plotted with threshold as the independent variable [1, p.
85]. When exceedances come from a GPD distribution, the shape parame-
ter should ideally be constant with respect to the changing threshold. The
location and scale parameters needn't be constant in the graph, but should
be relatively steady. An example of this type of plot can be found in Figure
5.5.
Maximum likelihood estimation in the GPD setting requires a minimum
amount of data for the likelihood to be stable and allow convergence. Often,
the upper limit of the range of these plots is dictated by the ability of the
numerical optimizer to converge and produce parameter estimates.
Return value plots Useful plots are created by varying the threshold, estimat-
ing distribution parameters, and calculating flows for a range of return
values. These plots illustrate the stability of the return values. An example
of this type of plot is found in Figure 5.6.
Upper confidence level as a function of threshold While the effect of thresh-
30


old on the return level is of interest, the confidence interval about this value
is also important. Plots containing the return values and the upper confi-
dence intervals as a function of threshold were created. These confidence
intervals were estimated based on profile plots. An example of a profile
log-likelihood plot is found in Figure 5.7.
If a point process model is non-homogeneous, estimation of the confidence
intervals for the return levels is difficult. This is because,
with time as a covariate, the profile plot becomes a two-dimensional
surface [1, p. 138].
3.4 Model Structures
In order to determine if trends exist in the frequency and magnitude of high
flow events, a series of models was created to permit trends in the location and
scale parameters. Following the notation used by McNeil [11], Model 0 contains
parameters (μ, σ, ξ) that are constant in time. Model 1 permits the location
parameter μ to vary with time. Model 2 permits the scale parameter σ to vary
with time. Model 12 permits both the location and scale parameters to vary
with time. A trend in the shape parameter was not considered because changes
in this parameter have such a dramatic effect on the distribution. The possibility
of a linear trend is considered in the location parameter. An exponential trend
is considered for the scale parameter. The exponential trend prohibits the scale
parameter from being negative.
31


The trends in parameters are made possible with the following substitutions:

\mu(t) = \alpha_1 + \beta_1 t    (3.1)

\sigma(t) = \exp(\alpha_2 + \beta_2 t)    (3.2)

where the α_i are the intercept parameters and the β_i are the slope coefficients. If the β_i
are found to be equal to 0, the model is equivalent to a model without trends in
the parameters.
Coles' S-Plus point process functions create a default starting point for mod-
els 1, 2 and 12. I found that these starting points produced solutions that had
smaller log-likelihood values than Model 0. For this reason, the MLE parameters
for Model 0 were found to be a superior starting point for Models 1, 2 and 12.
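To illustrate how a trend enters the likelihood, the following sketch writes the point process negative log-likelihood of Section 2.3.2 with the Model 1 substitution μ(t) = α₁ + β₁t. The intensity integral is approximated on a regular grid of scaled times (tgrid); this is a simplified stand-in for Coles' functions, not the exact code used in this paper, and all argument names are illustrative.

    # Point process negative log-likelihood with a linear trend in the location.
    # x  = exceedances of threshold u, tx = their times scaled to [0, 1],
    # ny = years of record, tgrid = regular grid on [0, 1] for the intensity integral.
    nll_pp_model1 <- function(par, x, tx, u, ny, tgrid) {
      a1 <- par[1]; b1 <- par[2]; sigma <- exp(par[3]); xi <- par[4]
      zx <- 1 + xi * (x - (a1 + b1 * tx)) / sigma      # terms at the event times
      zu <- 1 + xi * (u - (a1 + b1 * tgrid)) / sigma   # terms on the time grid
      if (any(zx <= 0) || any(zu <= 0)) return(1e10)
      intensity <- ny * mean(zu^(-1 / xi))             # approximate integrated intensity
      ll <- -intensity + sum(-log(sigma) - (1 / xi + 1) * log(zx))
      if (!is.finite(ll)) return(1e10)
      -ll
    }

    # Fitted with optim(); setting b1 = 0 reduces this to the stationary Model 0.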
3.5 Model Selection
This section discusses several methods that may be used to determine if
significant trends in parameters are present.
3.5.1 Likelihood ratio test
The likelihood ratio test may be used to determine if additional parameters
should be added to the model. A likelihood ratio statistic (LRS) can be used to
compare models with different numbers of parameters. Consider the log-likelihood
functions of two models. The model with more parameters has a log-likelihood
function of ℓ⁽¹⁾(θ̂⁽¹⁾) and the model with fewer parameters has a log-likelihood
function of ℓ⁽⁰⁾(θ̂⁽⁰⁾), where θ̂⁽¹⁾ and θ̂⁽⁰⁾ are the MLEs for the two models. To test
the hypothesis that the values of the larger model's excess parameters all equal
zero, the likelihood ratio statistic (sometimes referred to as the deviance) is used.
It is defined as

T = 2\left\{ \ell^{(1)}(\hat{\theta}^{(1)}) - \ell^{(0)}(\hat{\theta}^{(0)}) \right\}.

T is approximately distributed as a chi-squared distribution (T ~ χ²_q) where q is
the number of extra parameters in the larger model.
Model 0 may be compared to any of the other three models. The log-likelihood
value from Models 1 and 2 may not be compared with each other since they are
not nested models.
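In practice the test reduces to a few lines of R once the two models have been fitted; nll0 and nll1 below are assumed to be the minimized negative log-likelihoods of the smaller and larger models, and the numbers in the example call are purely illustrative.

    # Likelihood ratio (deviance) test for q extra parameters in the larger model.
    lr_test <- function(nll0, nll1, q = 1) {
      T_stat <- 2 * (nll0 - nll1)          # = 2 * (loglik_larger - loglik_smaller)
      c(statistic = T_stat, p.value = 1 - pchisq(T_stat, df = q))
    }

    lr_test(nll0 = 412.7, nll1 = 411.9, q = 1)   # illustrative values only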
3.5.2 Graphical Model Evaluation
Besides evaluating the negative log-likelihood values, there are several graphs
that help evaluate the model and provide insight into the data. These include
quantile plots of the residuals (QQ plots), W-statistic plots and Z-statistic plots.
3.5.2.1 Quantile Plots (QQ Plots)
A quantile (QQ) plot illustrates the relation between the ordered, measured
return levels and the estimated return levels based on an estimated probability
of i/(m + 1), where m is the number of data and i is the rank of the data. The
points on this type of graph are

\left\{ \left( G^{-1}\!\left( \tfrac{i}{m+1} \right),\; z_{(i)} \right) : i = 1, \ldots, m \right\}
33


where G⁻¹ is the inverse of the fitted distribution and z_{(i)} is the ith order statistic
from the data greater than the threshold. For the GEV distribution, the inverse
function is

G^{-1}(p) = \mu + \frac{\sigma}{\xi}\left[ (-\log p)^{-\xi} - 1 \right].

If the model accurately fits the data, these points should fall about the line x = y.
Examples of these types of plots, created for a range of thresholds are found in
Figure 5.9.
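Constructing such a plot directly takes only a few lines; the sketch below uses the GEV quantile function given above and plotting positions i/(m + 1). Function names are illustrative and z is assumed to be the vector of observations above the threshold.

    # QQ plot of observed values against GEV model quantiles.
    qgev_sketch <- function(p, mu, sigma, xi) {
      if (abs(xi) < 1e-8) return(mu - sigma * log(-log(p)))
      mu + (sigma / xi) * ((-log(p))^(-xi) - 1)
    }

    qq_gev <- function(z, mu, sigma, xi) {
      z <- sort(z)
      p <- seq_along(z) / (length(z) + 1)
      plot(qgev_sketch(p, mu, sigma, xi), z,
           xlab = "Model quantile", ylab = "Observed value")
      abline(0, 1)                          # points should fall near the line x = y
    }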
A drawback in using QQ plots with extreme values is that they may be
misleading. To illustrate this, a QQ plot was created using simulated values.
First, a point process model was created using data from Fountain Creek at
Colorado Springs and a threshold of 20 cfs. In 1997 and 1999 the site recorded
the 2 largest events of the period of record. The question was asked whether
these values could plausibly come from the modeled distribution. 5,000 simulated
datasets were created using the parameters fitted to the Colorado Springs data
without the two highest values. For each simulated dataset, parameters were
estimated and Q-Q plots were created. For these 5,000 Q-Q plots, the 5th,
95th and median quantiles were calculated. Figure 3.1 illustrates the results of
this simulation. This graph shows that if 20 cfs is the ideal threshold, the two
high flows measured in 1997 and 1999 are still possible. One might also note that
these values could be much larger and still fit within these intervals.
34


Figure 3.1: QQ plot for Fountain Creek, CO. The median of 5000 random
observations is the solid line and the 90% confidence intervals are the dashed
lines.
3.5.2.2 W-statistic plot
The W-statistic is useful for evaluating the assumption that the distribution
of the exceedances is GPD [17]. This plot shows the relationships, if any, be-
tween time and the parameters of the distribution. The W-Statistic is defined
35


as

W_k = \frac{1}{\xi} \log\left[ 1 + \frac{\xi (X_k - u)}{\sigma(T_k) + \xi (u - \mu(T_k))} \right],

where T_k is the time of the kth occurrence and X_k is the value of the kth occurrence.
If the distribution of the exceedances is constant with respect to time, these values
will be exponentially distributed with a mean of 1. A lowess line is also plotted
on the graph to assist the user in determining whether these statistics change
over time. Figure 5.11 is an example of a W-statistic plot.
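Computing the W-statistics from a fitted model is direct; in this sketch mu_fun and sigma_fun are assumed to be the fitted (possibly time-varying) location and scale functions, xk and tk the exceedance values and times, and all names are illustrative.

    # W-statistics; under a correct model these are approximately standard
    # exponential, so their mean should be near 1 and a lowess smooth against
    # time should be roughly flat.
    w_stat <- function(xk, tk, u, mu_fun, sigma_fun, xi) {
      (1 / xi) * log(1 + xi * (xk - u) /
                       (sigma_fun(tk) + xi * (u - mu_fun(tk))))
    }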
3.5.2.3 Z-statistic plot
The Z-statistic examines the consistency of the arrival rate of events λ(t)
with respect to time [11]. Substituting μ(t) and σ(t) for μ and σ respectively in
equation 2.19 produces

\lambda(t) = \left[ 1 + \xi\left( \frac{u - \mu(t)}{\sigma(t)} \right) \right]^{-1/\xi}.

The Z-statistic is subsequently defined as

Z_k = \int_{T_{k-1}}^{T_k} \lambda(s)\, ds, \qquad k = 1, \ldots, N,

where T_k is the time of the kth exceedance (and T_0 = 0). If the model is correct, Z will be
distributed exponentially with a mean of one. When λ is constant,
this simplifies to λ(T_k − T_{k−1}). A lowess line is also plotted on the graph
to assist the user in determining changes in these statistics. Figure 5.10 is an
example of this type of plot.
36


4. Streamflow Data
Many statistical methods assume events are independent. Streamflow data
has strong autocorrelation. To support an assumption of independence, it is im-
portant that data represent distinct events which can be considered independent.
This process is commonly referred to as de-clustering data.
Following a large storm event, flows in a river may rise and remain high for
several days. Thus, the flow gage will record several high values, even though they
are all associated with a single event. Selecting all flows above the threshold³ on
adjacent days would not create a data set that could reasonably be assumed
to be independent [17, p. 712] and [19].
3 Please note that the threshold discussed in this section is different from the threshold
selected to ensure the exceedances follow the GPD distribution.
37


Figure 4.1: Sequential flows exceeding a threshold
Identifying peak flows from a series of data may be complicated. As shown
in Figure 4.1, if a threshold were below X_min, only the largest flow (X(1)) would
be selected. Ideally, one would select the second peak if it can be viewed as
an independent event. Conversely, if the second peak is heavily dependent on
the occurrence of the first peak, both events should not be selected. The Water
Resources Council (USWRC, 1976) suggests the following guidance. Flood events
should be separated by at least 5 days plus the natural logarithm of the area in square miles
of the basin. Additionally, the flow between two peaks must drop below 75% of
the lower of these two peaks. Referring to Figure 4.1, the second peak would not
be considered generated by a distinct event if either
9 < 5 days +log(A) or (4.1)
38


X_min > (3/4) min[X(1), X(2)]    (4.2)
where A is the area of the drainage basin tributary to the site in square miles, θ is
the time span in days between the two peaks, and X_min is the minimum flow between
them (see Figure 4.1). Other methods discussed by Lang are based on the
shape of the storm hydrograph. This requires subjective professional judgment
to determine when a high flow begins and ends. Other methods are based on the
correlation between de-clustered peak values [9] while still other methods use the
extremal index [19].
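As an illustration, the two USWRC criteria can be coded directly; the function below and its argument names are illustrative and not taken from any standard package.

    # TRUE when two peaks may be treated as distinct events under the USWRC
    # (1976) guidance: far enough apart in time AND the intervening flow drops
    # below 75% of the smaller peak.
    distinct.events <- function(theta, area, x1, x2, x.min) {
        long.enough <- theta >= 5 + log(area)       # negation of equation 4.1
        deep.enough <- x.min < 0.75 * min(x1, x2)   # negation of equation 4.2
        long.enough & deep.enough
    }

    # Illustrative values for a basin of roughly 100 square miles:
    # distinct.events(theta = 12, area = 100, x1 = 83, x2 = 50, x.min = 20)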
4.1 The relation between threshold selection and number of
observations.
Figure 4.2 illustrates the effect of threshold selection on the number of ob-
servations [9]. If a threshold is selected in region 1, the threshold is below the
minimum value. Consequently, the entire data set is treated as a single obser-
vation. As the threshold increases (region 2), more observations are viewed as
individual exceedances and treated as distinct events. In region 3, the number
of observations declines as the threshold increases. In region 4, the threshold has
exceeded the largest observation, so there are no remaining exceedances. Ideally,
a threshold for de-clustering should be selected from region 3, because in this
region raising the threshold no longer reveals additional events that can be assumed
to be independent.


Figure 4.2: Relation between threshold and number of observations
The following example illustrates this de-clustering technique. Two hundred
days of data were selected from the Cherry Creek at Denver gaging station from
February to August, 1980. This is shown in Figure 4.3. Selecting the threshold
to be 20 cfs, 6 exceedances are noted, with all high flows between days 80 to 100,
grouped together as a single event. A threshold of 30 cfs yields 7 exceedances,
while a threshold of 40 cfs yields 6 exceedances. To ensure independence of events,
a suitable interval must exist between high flows. The area of the tributary
drainage basin is approximately 100 square miles. To satisfy equation 4.1, the
time between events must be greater than 5 + log(100), or 9.6 days. The day and
flow for these 6 days are listed in Table 4.1 below.


Figure 4.3: Cherry Creek data and selected thresholds (x-axis in days).
Table 4.1: Days with flows exceeding 40 cfs, Cherry Creek data

    Day          36   42   48   79   91   96
    Flow (cfs)   47   81   42   83   48   50
    Included?    N    Y    N    Y    N    Y
To remove the possibility of a sequence of events less than 9 days apart being
analyzed as a single event, a two-stage process is performed. After peaks that are
likely independent have been selected using all the data, these values are removed.
The process is then repeated to determine whether other peak flows exist that were
ignored because they were part of a sequence of peak flows less than 6 days apart.
Examining the sample data a second time typically produces a few additional data
points.
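A greedy variant of this de-clustering procedure can be written in a few lines of R; the function name, the vectors day and flow, and the spacing rule are assumptions for illustration only.

    # De-cluster daily flows: keep only exceedances of u that are separated by
    # at least min.gap days, always preferring the larger flow in a conflict.
    decluster <- function(day, flow, u, min.gap) {
        cand <- which(flow > u)
        cand <- cand[order(flow[cand], decreasing = TRUE)]   # largest peaks first
        keep <- integer(0)
        for (i in cand) {
            if (all(abs(day[i] - day[keep]) >= min.gap)) keep <- c(keep, i)
        }
        sort(keep)
    }

    # peaks <- decluster(day, flow, u = 40, min.gap = 9.6)
    # data.frame(day = day[peaks], flow = flow[peaks])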


To compare the autocorrelation of the data before and after de-clustering,
correlograms were created for both sets of data. In Figure 4.4 we see that the
original data are highly correlated; after de-clustering, the data become much
less correlated.
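A quick check of the de-clustering can be made with the sample autocorrelation function; here flow and flow.dc (the de-clustered series, with zeros in empty intervals) are assumed to be available.

    # Compare the autocorrelation of the series before and after de-clustering.
    par(mfrow = c(2, 1))
    acf(flow,    lag.max = 40, main = "Original data")
    acf(flow.dc, lag.max = 40, main = "De-clustered data")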
Figure 4.4: Correlograms of the clustered ("Original Data", top) and de-clustered (bottom) flow values from the Franktown gage.


Since only one exceedance value per θ days (equation 4.1) is possible, the max-
imum number of events per year is 365/θ. To create a complete time series, a
single value per interval was selected. Intervals without an exceedance are as-
signed a value of zero. Each interval was assigned a sequential value to represent
time.
4.2 Return Periods
In hydrology, the size of a flood or event is often placed into perspective
by referring to its expected recurrence period. A one hundred-year flood has
a probability of 0.01 of being equaled or exceeded in a given year. The probability
that the annual maximum flow does not exceed a fixed flow x is denoted
P(X ≤ x) = p, where X (a random variable) represents the maximum flow for a
given year. The relation between this recurrence period and the non-exceedance
probability p of a given flow is

T = 1/(1 − p)    (4.3)

where T refers to the T-year flood and p refers to the non-exceedance probability.
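For example, a one hundred-year flood corresponds to p = 0.99 under equation 4.3; the helper functions below (illustrative names) make the conversion explicit.

    # Convert between return period (years) and annual non-exceedance probability.
    return.period    <- function(p) 1 / (1 - p)
    prob.from.period <- function(T) 1 - 1 / T

    return.period(0.99)      # 100-year flood
    prob.from.period(100)    # 0.99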
4.3 Partial Duration Series and Annual Maximum Flood Series
In modeling extreme hydrologic events, two general approaches are typically
used. First, annual maximum flood values may be calculated for each year. This
series of data is known as an annual maximum flood series (AMF) and yields a
value for each year of record. It is plausible to think of these events as independent
and so this achieves the same goal as de-clustering the data discussed above.
A partial duration series (PDS) is a collection of events that exceed a given


threshold. This is alternatively referred to as the peaks-over-threshold (POT)
approach. This can yield more observations than number of years in the period
of record.
Historically, the AMF approach has been more popular than the POT method
because of its simplicity. As discussed in section 3.2, selecting a threshold can be
difficult. Finding the greatest daily flow for a year is easy. Another difficulty with
the POT method is the selection of a criteria to determine whether the events
are independent [9]. The AMF approach avoids this issue by choosing a single
event per year. This makes the AMF procedure more restrictive than the POT
method.
One benefit of the POT method is that more of the flow data is used in estimating
the parameters of the model. Additionally, the POT method may make more
sense when dealing with stream-flow data. During an extremely dry year in an arid
environment, flow might not be measured in an intermittent stream. The AMF
series would record 0 cfs as the maximum flow for that year, whereas the POT
method would simply contribute no data from that dry period.
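To make the contrast concrete, both series can be extracted from a daily record as sketched below; year, day and flow are hypothetical vectors, and decluster is the helper sketched in the de-clustering discussion above.

    # Annual maximum flood series: one value per calendar year.
    amf <- tapply(flow, year, max)

    # Partial duration (peaks-over-threshold) series: all de-clustered
    # exceedances of the threshold u, possibly several (or none) per year.
    peaks <- decluster(day, flow, u = 40, min.gap = 9.6)
    pds   <- flow[peaks]

    length(amf)   # one value per year of record
    length(pds)   # often larger; dry years contribute nothing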
4.4 Stream Gage Sites
Data used in this paper is available from the US Geological Survey's web
page (www.usgs.gov). Daily flow data was retrieved. Daily flow data is not the
best data for studying flood events because high flows of short duration may
be missed. Ideally, one would use data from a shorter time period. At stream
gage sites, data is originally recorded in 6 minute intervals. This data is not
readily available. From this 6-minute data, daily values are calculated and made


available on-line, while the finer-resolution data are archived. Discrepancies may result
when a peak flow occurs over the course of two days. Suppose a peak flow occurs
at midnight: daily flow data would split this event across the two days, whereas
finer resolution data could permit the 24-hour period to span from noon on the
first day until noon on the second day. In practice, adjustment factors are available
to increase estimates based on daily data. This paper does not consider these
factors as they are constant.


5. Case Studies and Implementation
In this chapter, we use the theory and techniques discussed earlier in the
paper to fit data from the USGS gaging station on Cherry Creek at Franktown,
CO (USGS 06712000). This station was selected for several reasons. The station
has been continuously in operation since 1939 giving it an unusually long period
of record. (Typically, a flow record longer than 30 years is considered adequate
for predicting extreme values). The northern end of the watershed has seen some
development over the last decade. One could hypothesize that this development
would result in larger, more frequent high flow events. Furthermore, there are no
diversions, control structures or large alluvial pumping upstream of the gage.
5.1 De-clustering Data and Describing Seasonality
Figure 5.1 contains a time series plot of the daily average flow at this station.
By convention, these values are expressed in cubic feet per second (cfs). As a
point of reference, 1 cfs is approximately 449 gallons per minute, or 1.98 acre-feet per day. This
figure includes all of the data. The solid diamonds represent the de-clustered
flow values. The method for de-clustering these values will be discussed in the
subsequent section.


The Franktown gage drains a tributary area of 169 square miles. Following the concepts
discussed in Section 4.1, a minimum interval of 10.12 days between events is used
to help ensure that data used in the analysis are independent. Therefore at most
36 exceedances per year are possible. When considering a high flow season, this
number is reduced to 18 quasi-independent events per season. Figure 5.2 shows
the effect of a de-clustering threshold on the resulting number of data. Figure
5.2 was used to select a threshold to de-cluster the data. Fifteen cfs is acceptable,
since any greater value will only lower the number of observations. The number of de-
clustered events greater than 15 cfs is 306 (from 22,229 days of flow data).
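The arithmetic behind these counts can be verified directly; a short sketch in R, using the basin area and season length stated above:

    # Minimum spacing between events for a 169 square mile basin (equation 4.1)
    min.gap <- 5 + log(169)          # roughly 10.1 days
    floor(365 / min.gap)             # at most 36 quasi-independent events per year
    floor((365 / 2) / min.gap)       # about 18 events in the six-month high-flow season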


Figure 5.1: Time series plot of daily average flow, Franktown gage.


Figure 5.2: Number of de-clustered observations as a function of threshold, Cherry Creek at Franktown (vertical line indicates the median flow value).
Figure 5.3 is a boxplot of the logarithm of the daily average flow, grouped by
month. This plot suggests seasonality is present and that the highest flows occur
in the months March through August. Data from these months are grouped
together to form the high flow season and are used in the following analysis. No
further attempt has been made to analyze or model the seasonality of the data.


Figure 5.3: Boxplot of de-clustered daily flows at Franktown, grouped by month.
5.2 Threshold Selection
A threshold needs to be selected in order to create a point process model. A
series of plots were created to assist in this process. A mean residual life plot is
shown in Figure 5.4. The basis for this plot is discussed in Section 3.3. Linearity
is evident from 15 cfs to approximately 120 cfs. In this range the exceedances may
be assumed to follow the GPD distribution. Above 120 cfs, each sharp change in
mean excess is a result of a few data points being eliminated by increasing the
threshold. In this region (say 125-250 cfs), the assumption of the exceedances
coming from a GPD distribution would not be valid.
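The quantity plotted in Figure 5.4 can be computed directly; the following base-R sketch uses an illustrative function name.

    # Mean residual life: for each candidate threshold u, the mean amount by
    # which the exceedances of u exceed it. Approximate linearity in u marks a
    # range where the GPD model for the exceedances is plausible.
    mrl <- function(flow, thresholds) {
        sapply(thresholds, function(u) mean(flow[flow > u] - u))
    }

    # u.grid <- seq(15, 250, by = 5)
    # plot(u.grid, mrl(flow, u.grid), type = "l",
    #      xlab = "Threshold (cfs)", ylab = "Mean excess (cfs)")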


Figure 5.4: Mean residual life plot, Cherry Creek at Franktown.
Figure 5.5 shows the changes in the GEV parameters estimated by a point
process as the threshold is incrementally increased between 15 and 120 cfs. The
range of thresholds checked for this plot is smaller than the mean residual life
plot (Figure 5.4) because a minimum amount of data is required for the likelihood
function to be well behaved and the optimizer to converge. As with the MRL
plot, the interpretation is subjective and can be difficult. This plot shows a large
jump in the standard error of the location and scale parameters as the threshold
is increased from 75 cfs to 80 cfs. The standard error of the shape parameter
increases gradually across this range of thresholds. As one is looking for a region


in which the shape parameter varies linearly with respect to a change in threshold,
a threshold should be chosen between 40 and 70 cfs.
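Plots such as Figure 5.5 can be produced by refitting the model over a grid of thresholds. The sketch below assumes that Coles' fitting routines, as distributed in the R package ismev (function pp.fit), are available; the function name, arguments and returned components should be checked against the installed package.

    # Fit the point process model over a grid of thresholds and collect the
    # GEV parameter estimates and their standard errors.
    library(ismev)    # assumed to provide pp.fit(xdat, threshold, npy, ...)

    u.grid <- seq(15, 120, by = 5)
    fits <- lapply(u.grid, function(u)
        pp.fit(flow, threshold = u, npy = 365, show = FALSE))
    est <- t(sapply(fits, function(f) f$mle))   # columns: location, scale, shape
    se  <- t(sapply(fits, function(f) f$se))

    # matplot(u.grid, est, type = "l", xlab = "Threshold (cfs)")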
Figure 5.5: Parameter estimates against threshold for Franktown data
5.2.1 Stability of model as a function of threshold.
From a theoretical perspective, the threshold needs to be chosen so that the
GPD distribution provides a reasonable model for the exceedances. On a more
pragmatic level, one might want to know how much the selection of a threshold
affects the estimated return levels for a range of return periods. Models were
created for a range of thresholds between 30 and 70 cfs. Using these models,


return levels were predicted for a range of return periods from 10 to 500 years.
Figure 5.6 shows these return levels plotted against the return periods. Note
that changes in thresholds are not directly correlated with return levels. As the
threshold increases, the return level at first increases but then decreases.
Figure 5.6: Flows as a function of return period, Cherry Creek at Franktown.


Thresholds chosen in this region produce relatively stable return levels. Note that
the order of the return level curves does not vary consistently with changes in threshold.
5.2.2 Return period confidence intervals
The confidence intervals for the return levels are estimated through the use
of profile plots (as discussed in Section 3.3). For a given return period, the
profile log-likelihood for the return level is plotted. The upper and lower confidence
limits are found at the intersections of this curve with a horizontal line drawn
0.5·χ²₁(0.95) (half the 0.95 quantile of a χ² distribution with one degree of freedom,
about 1.92) below the maximum log-likelihood value. Figure 5.7 shows profile
log-likelihood plots for thresholds of 70 and 100 cfs with a return period of 100 years. The
upper plot shows that for a threshold of 70 cfs, the return value is 4,232 and the
upper confidence limit is 33,600 cfs. With a threshold of 100 cfs, the return level is 2,039 cfs
and the upper confidence limit is 10,782 cfs. The lower confidence intervals may also
be estimated. However, from a practical point of view, these lower bounds are
not of interest.
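The cutoff used in these profile plots is simple to compute; a short sketch, assuming the profile log-likelihood has already been evaluated on a grid of candidate return levels (names are illustrative):

    # Profile-likelihood confidence limits for a return level: the candidate
    # values whose profile log-likelihood lies within 0.5 * qchisq(0.95, 1)
    # of its maximum.
    cutoff <- 0.5 * qchisq(0.95, df = 1)    # approximately 1.92

    # levels : grid of candidate return levels
    # ll.prof: profile log-likelihood evaluated at each grid point
    # inside <- ll.prof >= max(ll.prof) - cutoff
    # range(levels[inside])                 # lower and upper confidence limits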


Figure 5.7: Profile log-likelihood against return level for a 70 cfs threshold (top) and a 100 cfs threshold (bottom), with a return period of 100 years. The horizontal line is 0.5·χ²₁(0.95) units below the optimal log-likelihood value.
In order to illustrate the effect of the threshold on the upper confidence inter-
val, the upper 95% confidence intervals for the return levels were calculated for
a range of thresholds. Figure 5.8 illustrates the effect of the changing threshold.
This plot shows an overall decrease in the confidence intervals as the threshold in-


creases. The erratic nature of the intervals is in part because the dataset changes
with each change of the threshold. This graph suggests that a threshold greater
than 100 cfs might be acceptable, since both the upper confidence bound and the
estimated return value are smaller. From an engineering perspective, a smaller
return level and confidence interval would translate into less cost incurred con-
structing hydraulic structures, but a higher risk of failure. However, because the
model is valid only in the range between 40 and 70 cfs, a threshold of 40 cfs
appears to produce the lowest confidence intervals.
Figure 5.8: Upper 95th percentile confidence interval and return level of flow from profile plots, as a function of threshold (cfs), Franktown.


5.2.3 QQ plots
QQ plots were created using data screened at different thresholds. The basis
for this procedure is discussed in Section 3.5.2.1. From the plots in Figure 5.9
we can see that at lower thresholds the models tend to over-predict the largest values,
while at higher thresholds the largest values are under-predicted. A threshold of
approximately 40 cfs appears to do an adequate job of predicting most of the data,
with the exception of the most extreme value. Note that an exact fit of the points is
not necessarily required: even when a sample follows the population distribution, it
may exhibit a large degree of randomness. This is also discussed in
Section 3.5.2.1.


Figure 5.9: QQ plots for Franktown data for a range of thresholds: (a) threshold = 20 cfs; (b) threshold = 40 cfs.


Relying primarily on the MRL plot (Figure 5.4) and the return value plot
(Figure 5.6), 40 cfs was selected as a threshold. While selecting a threshold is
still somewhat of a subjective choice, this choice is supported by the following items.
The MRL plot (Figure 5.4) indicates that 40 cfs is in the region where the
mean residual is linear with respect to the threshold. This supports the
assumption that the distribution of the exceedances is GPD.
The plot of thresholds versus parameter estimates indicates that in the
range of approximately 40 to 70 cfs, changes in the shape parameters are
approximately constant.
In the vicinity of 40 cfs, the estimated return levels do not vary greatly
(Figure 5.6). This suggests that a threshold of 40 is a stable selection.
A threshold of 40 cfs provides the narrowest range of confidence intervals
within the range that supports the assumption that the exceedances follow
the GPD distribution (Figure 5.8).
From the QQ plots one finds that most events are neither greatly over- nor
under-estimated (Figure 5.9, panel b).
5.3 Exploring Trends in Parameters
Having selected a threshold, we proceed to study whether a trend exists in the
frequency or magnitude of high flow events. Trends in these factors are quantified
by finding trends in the parameters of the GEV distribution. As discussed in


section 2.3.4, a sequence of models were constructed to determine if there is a
trend in the location or scale parameter.
Before creating these models, W-statistic and Z-statistic plots may provide
useful intuition about the stationarity of the arrival rate of exceedance
events and the distribution of these exceedances. A Z-statistic plot is shown in
Figure 5.10. This plot suggests that the arrival rate may be changing non-linearly.
The solid line is a lowess fit to the data.
Figure 5.10: Z-Statistic plot for the Cherry Creek at Franktown gage. The
threshold was set at 40 cfs.


To examine the stationarity of the distribution of exceedances, a W-statistic
plot was created (Figure 5.11). This plot suggests that the distribution is nearly
stationary.
Figure 5.11: W-Statistic plot for the Cherry Creek at Franktown gage. The
threshold was set at 40 cfs
The models described in section 2.3.4 will detect a linear or near linear trend
in location and scale parameters. As discussed in Section 3.4, four models were
created. The first model considers the parameters to be fixed with respect to time.
The second model allows the location parameter to change linearly with time.
The third model allows the scale parameter to change exponentially with respect


Table 5.1: Cherry Creek at Franktown model results (threshold = 40 cfs). Standard errors in parentheses.

                              Model 0         Model 1               Model 2             Model 12
    Location                  84.51 (9.87)    84.27 (9.84)          84.27 (9.81)        84.59 (10)
    Location trend (x 1000)   NA              -1087.33 (12970.69)   NA                  4723.7 (35003.7)
    Scale                     4.4 (0.16)      4.4 (0.16)            4.4 (0.16)          4.4 (0.16)
    Scale trend (x 1000)      NA              NA                    4399.64 (157.52)    101.11 (554.85)
    Shape                     0.73 (0.16)     0.73 (0.16)           0.73 (0.16)         0.73 (0.16)
    Neg. log-lik.             735.08          735.08                735.07              735.06
to time while the fourth model allows both the scale and location parameters
to vary. The results of these four models are presented in Table 5.1. Note that, in
order to improve the performance of the optimization routine, the time covariate
was scaled and centered prior to optimization. The significance of the trend is
judged in two ways. First, the standard errors are used to determine whether
the trend parameters differ from zero. Second, the log-likelihood values for the
different models are compared. From this table, we conclude that the parameters
are not changing with time and that the additional parameters do not improve
the model fit of the data to the parameter. Letting p equal l/(Return Period),
the recommended model for the return level is
z_p = μ̂ − (σ̂/ξ̂)(1 − {−log(1 − p)}^(−ξ̂)) = 84.51 − (σ̂/ξ̂)(1 − {−log(1 − p)}^(−0.731))    (5.1)

with σ̂ the fitted scale parameter from Table 5.1.
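A short sketch evaluating this return-level expression for a chosen return period; the parameter names are placeholders to be filled from Table 5.1, and the scale must be supplied on the same scale on which it was fitted (an assumption to verify against the fitted model object).

    # GEV return level for a T-year return period, given location (mu),
    # scale (sigma) and shape (xi) estimates (equation 5.1 with p = 1/T).
    return.level <- function(T, mu, sigma, xi) {
        p <- 1 / T                               # annual exceedance probability
        mu - (sigma / xi) * (1 - (-log(1 - p))^(-xi))
    }

    # e.g. the 100-year flow under the fitted model:
    # return.level(100, mu = 84.51, sigma = sigma.hat, xi = 0.731)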


6. Discussion
Initially, I intended that the creation of the point process model be the be-
ginning of a more extensive project. With full confidence that I would detect an
increasing trend in the frequency and magnitude of flood events, I believed that
the real challenge would be to determine if this was due to trends in the weather
or possibly changes in the drainage areas. As it turned out, the next year was
spent learning and applying a point process model. While the end result is not
what I anticipated, it has been a learning experience. Key findings and areas for
further exploration follow.
Threshold Selection. The most difficult part of creating a point process
model is choosing a threshold. In the literature, I found most threshold selection
methods rely on graphical tools rather than analytic procedures. The MRL plot
and the plot of the standard errors versus the threshold require the user to identify
breaks in linearity. Return level plots with models created using a range of
thresholds illustrate the effects of threshold selection. They may be used to
identify regions of stability. Finally, these graphs potentially show the lack of
robustness of the point process models with respect to threshold selection. On
these plots, small changes in threshold may result in significant differences in the
return levels.
Plotting the return level and confidence intervals as a function of the thresh-
olds illustrates how sensitive the confidence intervals are to small changes in the
data set. Instead of the smooth lines often associated with confidence intervals,


these intervals are erratic and non-monotonic. I found this plot also to be some-
what difficult to interpret. The generally decreasing confidence intervals and
decreasing return levels suggest that a model with a high threshold is desirable
for the data set analyzed. Ultimately, I chose a threshold based on these latter
two graphics. As a side note, producing similar plots using Coles' rainfall
data [1] and Smith's River Nidd data [17] produced similarly erratic patterns.
De-clustering. Hydrology data is inherently heavily auto-correlated. De-
clustering requires a minimum interval between events. In the case of the Frank-
town data this interval between events is approximately 10 days. This effectively
reduces the maximum potential number of exceedances from 365 to 36 events.
Modeling the 6-month high flow period reduces the maximum number of events
from 36 to 18. This may partially explain the low overall number of exceedances.
Therefore, the recommendations often cited may not apply.
Datasets analyzed by Smith and Coles contain data that was not de-clustered.
Therefore, the assumption of independent events is less likely to be true. De-
clustering data has more impact on model estimates and confidence interval
widths when lower thresholds are chosen. On the other hand, higher thresh-
olds result in essentially the same data being used whether or not de-clustering
was performed.
Trends. Considering both the standard errors for the trend parameters and
the changes in the negative log-likelihood values, a significant linear trend was
not found in the Franktown data.
The second issue involves the use of a trend in the threshold itself. Consider
a non-homogeneous Poisson process. This model is sufficiently flexible to allow


the intensity function λ(t) to increase with time. However, a steady increase
in the location parameter μ causes the rate and magnitude of exceedances to
increase. After a period of time, most of the data will exceed a fixed threshold
and assumptions about the arrival rate and the distribution of the exceedances
will be violated. At this point, estimation is based on the bulk of the data
rather than the extreme values. Modeling a trend in the threshold so that the
exceedances follow the GPD and the arrival rate is Poisson would also be difficult.
Inclusion of a trend parameter in the likelihood is not readily possible, since small
changes in the threshold produce unsmooth changes in the likelihood function.
Optimization solving for MLEs. The functions provided by Coles
create a starting point for the optim algorithm. Initially, I had relied on these
default starting points. However, this produced apparently correct solutions with
sub-optimal likelihood values. For example, a model with more parameters had
a higher negative log-likelihood value than a model with fewer parameters. Without
using models in a nested manner, this would be very difficult to detect.
For a model without trend parameters, I found that adding noise to the
starting point resulted in the same solution (if it converged). For models with
trends in the parameters, noise in the starting point produced different negative
log-likelihood values. This suggests that the likelihood surface in the region of the
solution is irregular and that other nearby local minima are present. If it were not for
the fact that a sequence of models were created, this might not be noticed. This
problem was addressed by using the results from a model without trends in the
parameters as a starting point for models with trends in the parameters.
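A sketch of this restarting strategy; here nllh stands for the negative log-likelihood function being minimized and start for the solution of the no-trend model, both placeholders for illustration.

    # Restart optim from several jittered starting points and keep the best fit.
    best.fit <- function(nllh, start, n.starts = 20, jitter.sd = 0.1) {
        fits <- lapply(seq_len(n.starts), function(i) {
            init <- start + rnorm(length(start), sd = jitter.sd * abs(start))
            optim(init, nllh, method = "Nelder-Mead")
        })
        fits[[which.min(sapply(fits, function(f) f$value))]]
    }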


Applications to engineering problems. Many of the issues discussed
above make the point process modeling technique difficult for routine use. Meth-
ods such as selecting a threshold and dealing with lack of convergence need to be
formalized. I believe that the greatest deficiency in this approach is the lack of
robustness in the model as a function of the threshold. Deliberately or not, one
could choose a threshold to increase or decrease the return level. With graphs as
the only guidance, this approach could be problematic. These issues remain
to be addressed.
Areas for further study. The following items merit further consideration.
In the research setting, I think papers using point process models should illustrate
the effect of using alternative thresholds. This may include multiple return plots
and plots of confidence intervals of the return levels as a function of the thresh-
old. While all papers and references on this topic emphasize the importance of
choosing a threshold, none present models created using a range of thresholds.
This can instill in the reader a false sense of security about the final threshold
choice.
Although simulating realistic extreme values is difficult, simulation studies
could be conducted to determine at what threshold levels trends could be de-
tected. This might also provide guidance for modeling the parameters.
In the course of working on this paper, several additional environmental en-
gineering applications occurred to me. Low-flow periods in rivers are studied in
order to identify periods that may stress aquatic life. A point process could
be used to model trends in the frequency of these periods. A common type of
environmental data is a time series of chemical measurements. Often, many of


these values are below the detection limits of the laboratory equipment. In the
context of a point process model, a threshold could be chosen above the detection
limit. Data above this threshold could then be treated as extreme values. Trends
in these data are of interest. Additionally, explaining these values with covariates
is also useful.


A. Variables and Abbreviations
The following variables and abbreviations are used throughout the paper.
Table A.1: Variables and definitions.

    Variable   Description
    X, x       Flow data
    Y, y       Data (values over threshold)
    μ          Location parameter for GEV
    σ          Scale parameter for GEV
               Scale parameter for GPD
    ξ          Shape parameter for GEV and GPD
    u          Threshold
    F(x)       cdf for GEV distribution
    f(x)       pdf for GEV distribution
    G(x)       cdf for GPD distribution
    H(x)       cdf for Poisson-GPD model
    T          Return period
    λ          Arrival rate of exceedances


Table A.2: Abbreviations and definitions

    Abbreviation   Description
    AMF            Annual maximum flood
    CDF            Cumulative distribution function
    GEV            Generalized extreme value
    GPD            Generalized Pareto distribution
    MLE            Maximum likelihood estimate
    PDF            Probability density function
    PDS            Partial duration series
    POT            Peaks over threshold


References
[1] Stuart Coles. An Introduction to Statistical Modeling of Extreme Values.
Springer, New York, NY, 2001.
[2] C. Cunnane. A particular comparison of annual maxima and partial duration
series methods of flood frequency prediction. Journal of Hydrology, 18:257-271, 1973.
[3] A. C. Davison and R. L. Smith. Models for exceedances over high thresholds.
Journal of the Royal Statistical Society, Series B, 52:393-442, 1990.
[4] J.R.M. Hosking et al. Estimation of the generalized extreme value distribution
by the method of probability-weighted moments. Technometrics, 27:251-261, 1985.
[5] J.R.M. Hosking. L-moments: Analysis and estimation of distributions using
linear combinations of order statistics. Journal of the Royal Statistical Society,
Series B, 52(1):105-124, 1990.
[6] J.R.M. Hosking and J.R. Wallis. Regional Frequency Analysis. Cambridge
University Press, Cambridge, 1997.
[7] W. Kirby. On the random occurrence of major floods. Water Resources
Research, 5(4):778-789, 1969.
[8] M.R. Leadbetter, Georg Lindgren, and Holger Rootzen. Extremes and Related
Properties of Random Sequences and Processes. Springer-Verlag, New York, 1983.
[9] M. Lang, T.B.M.J. Ouarda, and B. Bobee. Effect of the occurrence process
of the peaks over threshold on the flood estimates. Journal of Hydrology,
225:103-117, 1999.
[10] H. Madsen, D. Rosbjerg, and P. F. Rasmussen. Comparison of annual maximum
series and partial duration series methods for modelling extreme hydrologic
events. Water Resources Research, 33(4):747-757, 1997.
[11] Alexander J. McNeil and Thomas Saladin. Developing scenarios for future
extreme losses using the POT model. Personal web site, pages 1-23, 1998.
[12] M.A.J. van Montfort and J.V. Witter. Testing exponentiality against generalized
Pareto distributed exceedances. Journal of Hydrology, 78:305-315, 1985.
[13] B. Onoz and M. Bayazit. Effect of the occurrence process of the peaks over
threshold on the flood estimates. Journal of Hydrology, 244:86-96, 2001.
[14] Peter F. Rasmussen and Dan Rosbjerg. Prediction uncertainty in seasonal
partial duration series. Water Resources Research, 27(11):2875-2883, 1991.
[15] D. Rosbjerg, H. Madsen, and P. F. Rasmussen. Prediction in partial duration
series with generalized Pareto distributed exceedances. Water Resources
Research, 28(11):3001-3010, 1992.
[16] R.M. Shane and W.R. Lynn. Mathematical model for flood risk evaluation.
Journal of the Hydraulics Division, ASCE, 90:1-20, 1964.
[17] Richard L. Smith. Extreme value statistics in meteorology and the environment.
Unpublished, pages 701-761, 2000.
[18] Richard L. Smith. Trends in rainfall extremes. Unpublished, pages 1-8, 2000.
[19] Richard L. Smith and I. Weissman. Estimating the extremal index. Journal
of the Royal Statistical Society, Series B, 56:515-528, 1994.
[20] J. R. Stedinger, R.M. Vogel, and E. Foufoula-Georgiou. Frequency analysis of
extreme events. In Handbook of Hydrology, chapter 18. McGraw-Hill, New York, 1992.
[21] Ellen E. Wohl. Anthropogenic impacts on flood hazards. In Ellen E. Wohl,
editor, Inland Flood Hazards, pages 104-144. Cambridge University Press,
Cambridge, UK, 2000.
[22] E. Zelenhasic. Theoretical probability distributions for flood peaks. Hydrology
Paper 42, Colorado State University, Fort Collins, CO, 1970.