Citation
Using return level in a statistical model for the joint distribution of the extreme values of equities

Material Information

Title:
Using return level in a statistical model for the joint distribution of the extreme values of equities
Creator:
Labovitz, Mark Larry
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
2009
Language:
English
Physical Description:
xviii, 290 leaves ; 28 cm.

Subjects

Subjects / Keywords:
Stocks -- Mathematical models ( lcsh )
Risk -- Mathematical models ( lcsh )
Extreme value theory -- Mathematical models ( lcsh )
Distribution (Probability theory) -- Mathematical models ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Colorado Denver, 2009. Mathematical and statistical sciences
Bibliography:
Includes bibliographical references (leaves 278-286).
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Mark Larry Labovitz.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
606828853 ( OCLC )
ocn606828853


Full Text

USING RETURN LEVEL IN A STATISTICAL MODEL FOR THE JOINT DISTRIBUTION OF THE EXTREME VALUES OF EQUITIES

by

MARK LARRY LABOVITZ
AB, The George Washington University, 1971
MS, The Pennsylvania State University, 1976
MA, The Pennsylvania State University, 1977
PhD, The Pennsylvania State University, 1978
ApSc, The George Washington University, 1981
MBA, The University of Pennsylvania, 1989
MS, Regis University, 2003
MS, The University of Colorado, 2008

A thesis submitted to the University of Colorado at Denver/Health Sciences Center in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Mathematical and Statistical Sciences, 2009

© 2009 by Mark Larry Labovitz. All rights reserved.

This thesis for the Doctor of Philosophy degree by Mark Larry Labovitz has been approved by

Stephan R. Sain
Peter G. Bryant
Daniel S. Cooley
Michael S. Jacobson
Craig J. Johns
Weldon A. Lodwick

Date

Labovitz, Mark L. (Ph.D., Mathematical and Statistical Sciences)
Using Return Level In A Statistical Model For The Joint Distribution Of The Extreme Values Of Equities
Thesis directed by Thesis Advisor Stephan R. Sain

ABSTRACT

Taking risk for the sake of financial reward is a long accepted reality of investing; however, it has been shown that investing using statistically more precise models of risk is associated with higher expected and realized returns. Two additional observations guide this research. Firstly, if financial returns are viewed as a frequency distribution, it is the downside or left-hand tail of returns which most concerns investors. Secondly, for most investors the joint risk inherent in a collection of securities held (a portfolio) is of greater interest than the risk associated with any given single security. Thus the concern is the joint behavior of downside returns from a portfolio. In this research, the following hypotheses were examined:

1) Generalized Extreme Value (GEV) distributions are suitable for describing the probabilistic nature of the "downside" return behavior of financial securities.

2) The three parameters of the GEV distribution, rather than being constant, are functions of financial indicators, and as such are time-varying.

3) Multivariate or joint extreme behavior of securities can be described and modeled as the joint behavior of time-varying return values, instead of a more commonly used dependence function, such as a copula.

4) Functions of the return values can be used to improve the characterization of risk and thereby the returns from portfolio construction.

For 3,000 equities randomly selected from those publicly traded on exchanges, daily performance measures were collected from January 2000 to August 2007. Weekly block minima of the returns were computed for each equity series. Time-varying GEVs were fitted using 44 financial covariates (down-selected from 139). 95% of the time-varying models showed significant improvement over the static model. Return values from these distributions were modeled satisfactorily as a Gaussian process, using both fixed and random effects representing important ancillary factors such as market capitalization, business sector and stock exchange. Finally, when the error or nugget variance is used in the description of risk for portfolio formation, the financial portfolios so formed outperformed conventionally used models by greater than 300% over the time frame of the study.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.

Signed,
Stephan R. Sain

Acknowledgements

When a fifty-something attempts a doctoral program, the threads of support are widely cast and numerous. It is no exception for this author. Firstly, the author would like to acknowledge the support of the faculty members in the Department of Mathematical and Statistical Sciences at the University of Colorado at Denver. I would like to call out three of the faculty (and former faculty) in particular: Dr. Craig Johns, with whom I spent long hours talking about statistics; Dr. Richard Lundgren, who was always there with a kind word of encouragement; and Dr. Stephan Sain, my advisor, without whose enthusiasm and insights I could not have succeeded in completing this research. The author wishes to thank his colleagues at Lipper and Thomson Reuters, in particular the Lipper COO Eric Almquist (who actively made certain I had everything I needed); the late Jed McKnight, my first boss at Lipper, an incredibly knowledgeable and supportive soul; Barb Durland, editor par excellence; Hank Turowski and Jeff Kenyon, who offered wise criticisms; and last but not least my colleague, my boss and my mentor in finance, Andrew Clark. To my family: thanks to my wife Susan for the love, time and support you gave me; this is my last degree, you now have it in writing. Thanks to my children Leah and Edward, who listened with semi-patience whenever I would babble on about a new discovery and would respond with an encouraging "whatever." I remember my late mother Florence Labovitz; while a poor woman, she gave me the most important gift of faith in people and causes, to champion the underdog and never forget where you came from. My mother was of the greatest generation, and I know that I am unlikely to meet another of her courage and grit in my lifetime. Finally, to my uncle Erwin Newman, an excellent mathematician, a tutor, and a very gentle man, who died much too young and whom I think about often. For these two and others unnamed who have departed and left their mark on me: "...even when they are gone, the departed are with us, moving us to live as, in their higher moments, they themselves wished to live. We remember them now; they live in our hearts; they are an abiding blessing." (From a Jewish meditation on the departed)

TABLE OF CONTENTS

Figures
Tables

Chapters

1. Background And Literature Review
   1.1 Overview Of The Chapter
   1.2 Introduction
   1.3 Financial Risk And Normality
   1.4 Models Of Risk-Reward
       1.4.1 Mean Variance Optimization
       1.4.2 VaR And CVaR
       1.4.3 Generalized Capital Asset Pricing Model (G-CAPM)
   1.5 Modeling Risk As It Effects Reward
   1.6 Extremes And The Central Limit Theorem
   1.7 Order Statistics
   1.8 Statistics Of Extremes
       1.8.1 Existence Of Extreme Value Distributions
       1.8.2 Extreme Value Distributions
       1.8.3 Unification Through GEV Distributions And Extreme Maximas
       1.8.4 Extreme Minima
   1.9 Extreme Value Generation Models
       1.9.1 Block Maxima Model
       1.9.2 Peaks Over Thresholds Model
       1.9.3 Poisson Generalized Pareto Model
       1.9.4 Relation Between Extrema Models
   1.10 Parameter Estimation
   1.11 Departures From Independence
       1.11.1 Threshold Clustering
       1.11.2 Serial Correlation Effects
   1.12 Return Values
   1.13 Multivariate Extreme Distributions
       1.13.1 Dependence Functions And Copulas
   1.14 Organization Of Remainder Of Dissertation
2. Statement Of The Problem And Outline Of The Proposed Research
   2.1 Overview Of The Chapter
   2.2 Research Threads
   2.3 Ingest, Clean, And Reformat Data
   2.4 Fitting Time-Varying GEVs
   2.5 Computing Return Values And Developing A Dependence Function Model
   2.6 Portfolio And Innovation Processing
3. Data, Data Analysis Plan, And Pre-Analysis Data Manipulations
   3.1 Overview Of The Chapter
   3.2 Classification Of Data Types
   3.3 Performance Data
   3.4 Ancillary Data
   3.5 Market Capitalization
   3.6 Equity Liquidity
   3.7 Covariates
       3.7.1 Sub-Selecting Covariates
   3.8 A Couple Of Final Words On Data Organization
4. Fitting Time-Varying GEVs
   4.1 Overview Of The Chapter
   4.2 GEV Reprise
   4.3 Non-Time-Varying Model
   4.4 Block Size And Distribution
   4.5 The Time-Varying Model
       4.5.1 Matrix Form Of Relationships Between Time-Varying Covariates And GEV Parameters
   4.6 Examining Covariates
       4.6.1 Time's Arrow And Periodic Covariates
       4.6.2 Financial Markets And Economic Covariates
   4.7 The Full Fitting Of Time-Varying GEVs
       4.7.1 Stepwise Model
   4.8 Analyzing The Covariate Models
5. Estimating Time-Varying Return Levels And Modeling Return Levels Jointly
   5.1 Overview Of The Chapter
   5.2 A Brief Recap To This Point
   5.3 Computing Return Value Levels
   5.4 The Variance Of Return Values
   5.5 Multivariate Model
   5.6 Model Building
       5.6.1 Fixed-Effect Models
   5.7 Examining Factors As Sources Of Variability
   5.8 Further Modeling
       5.8.1 Step 1: Further Fixed-Effect Modeling
       5.8.2 Step 2: Fitting Gaussian Processes Using Maximum-Likelihood Estimation
   5.9 The Selected Model, In Detail
6. Consequences Of The Research For Portfolio Formation And Innovation
   6.1 Overview Of The Chapter
   6.2 Tasks To Be Performed In The Chapter
   6.3 Test Data Sets
   6.4 Model Validation
       6.4.1 Expectations Of The Test Set
       6.4.2 Variability And Distribution In The Test Data Set
   6.5 Application Of Model Results To Portfolio Formation
       6.5.1 Construction Of Efficient Frontiers
       6.5.2 Applying The MVO Weights
   6.6 Predicting The Best Covariance Structure
7. Summary And Conclusions, Along With Thoughts On The Current Research
   7.1 Overview Of The Chapter
   7.2 Summary
   7.3 Conclusions
   7.4 Unique Aspects Of The Research
   7.5 Future Research

Appendix
A. List of Countries, Stock Exchanges, Industries And Sectors Used In This Research
B. Detailed Analyses Of The Covariate Models

Bibliography
Sources Of Data

LIST OF FIGURES

1.1 Example Of Efficient Frontier Built Upon Mean Variance Optimization
1.2 Probability Density Function Of Exponential Distribution With Rate λ = 1
1.3 Normal Density Function Superimposed On Histogram Of 10,000 Means Of Sample Size 50 Drawn From An Exp(1) Distribution
1.4 Normal Density Function Superimposed On Histogram Of 10,000 Maxima Of Sample Size 50 Drawn From An Exp(1) Distribution
1.5 Extreme-Value Distributions Depicted On The Same Plot
1.6 Relationships Between Return Values
2.1 High-Level Data Flow Diagram Depicting The Organization Of Research Elements In This Thesis
3.1 Log-Log Plot Of Market Caps Versus Rank For 51,590 Equity Series By Continent Of Domicile, Overlaid By Market Caps As Defined In Table 3.4
3.2 Notched Box And Whisker Plot Of Logged Market Capitalization Of 51,590 Equity Series
3.3 Log-Log Plot Of Market Caps Versus Rank For 15,528 Equity Series By Continent Of Domicile, Overlaid By Market Caps As Defined In Table 3.4
3.4 Notched Box And Whisker Plot Of Logged Market Capitalization Of 15,528 Equity Series
4.1 Plot Of Estimated Medians Of GEV Parameters Over Different Time Units Expressed In Weeks
4.2 Estimates Of GEV Median Means And Standard Deviations Over Different Time Units Expressed In Weeks
4.3 Estimates Of GEV Median Skewness And Kurtosis Statistics Over Different Time Units Expressed In Weeks
4.4 Histograms Of Extremes On Different Time Scales
5.1 Histograms Of Log Return Values For Selected Years For P[X ≤ x] = 0.019, Censored At A Value Of 2,000 Percent
5.2 Histograms Of Log Return Values For Selected Years For P[X ≤ x] = 0.019, Censored At A Value Of 1,000 Percent
5.3 Histograms Of Log Return Values For Selected Years For P[X ≤ x] = 0.019, Censored At A Value Of 500 Percent
5.4 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Continent For (A) Year 2001 And (B) Year 2007
5.5 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Sector For (A) Year 2001 And (B) Year 2007
5.6 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Market Cap For (A) Year 2001 And (B) Year 2007
5.7 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Exchange For (A) Year 2001 And (B) Year 2007
5.8 QQNorm Plots Of Standardized Residuals From Three-Factor Fixed-Effects Model For Years 2001 To 2007
5.9 Scatter Plot Of Standardized Residuals Versus Fitted Values From Three-Factor Fixed-Effects Model For 2001
5.10 Plot Of Mean And +/- Two Standard Deviations Of Logarithm Of Market Cap Versus Logarithm Of Mean Return Value
5.11 Plot Of Year Versus Logarithm Of Mean Return Value
5.12 Plot Of Market Capitalization Versus Standardized Residuals From Model Composed Of Earlier Three Factors Augmented By Market Capitalization As A Continuous Predictor
5.13 Selected QQNorm Plots Of Standardized Residuals From Various Market Capitalizations For 2001 Coming From The Selected Model
6.1 Plot Of Estimated Mean Response For The Test Dataset, Using Coefficients Based On The Training Set Fixed-Effects Model, Versus The Logged 52-Week Return Response
6.2 Mean Response And Observation Values Within 95% Prediction Interval Envelopes, Using Individually Derived And Scheffé-Adjusted Prediction Intervals
6.3 Graphs Depicting The Results In The Formation Of Efficient Frontiers For The Sigma Covariance Structure, Rebalance Year 2002
6.4 Graphs Depicting The Results In The Formation Of Efficient Frontiers For The Sigma, 26.071 Weeks Covariance Structure, Rebalance Year 2004
6.5 Graphs Depicting The Results In The Formation Of Efficient Frontiers For The Sigma, 256.42 Weeks Covariance Structure, Rebalance Year 2006
6.6 Cumulative Return Plot (VAMI) Over The Years Indicated For An Invest-And-Hold Strategy Under The Set Of Covariance Structures Described In The Text
6.7 Returns Normalized By Standard Deviation For An Invest-And-Hold Strategy Under The Set Of Covariance Structures Described In The Text
6.8 Cumulative Return Plot (VAMI) Over The Years Indicated For An Annual-Rebalance Strategy For Maximum, Minimum, And Base Covariance Structures For Risk
6.9 Plot Of Daily Returns From S&P 500 Versus Similar Measure Of The VIX For The Period From 2000-2007

LIST OF TABLES

1.1 Salient statistics for the two samples
3.1 Equity-based time series used in present research
3.2 Description of ancillary dataset used in the equity sample design
3.3 Reducing the number of securities by processing stage
3.4 Market capitalization classes and their cut points
3.5 Analysis of deviance table from stepwise logistic regression for weekly extreme-value series
3.6 Analysis of deviance table from stepwise logistic regression for monthly extreme-value series
3.7 Cross-tabs from the assignment of weekly extreme-value series to classes
3.8 Cross-tabs from the assignment of monthly extreme-value series to classes
3.9 Globally recognized benchmarks and indices
3.10 Number of covariates satisfying the high-grading criteria as a function of percentage variation explained and threshold value
3.11 Distribution of variation in loadings between covariates and factors for the factor structure arising from the selected high-grading results described in the accompanying text
3.12 Covariates selected for use in estimating GEV parameters
4.1 Correlations among GEV parameters for weekly and monthly data series
4.2 Tabulation of results from the empirical (bootstrap) analysis of maxima of different time units and estimation of related GEV parameters and statistics
4.3 Description of candidate periodic behaviors
4.4 Number of weekly extremal series failing to reject the null hypothesis of level stationarity under the KPSS test
4.5 Results from the permutation test from power spectra examining hypothesized salient periods
4.6 Results from white-noise test from power spectrum examining hypothesized salient periods
4.7 Correlation matrix from the covariate series created by crossing aggregation and time frame factors
4.8 Percentage representation of the number of models in which each of the covariate classes appears
4.9 Tally of stopping steps for the equities in training set (stopping rules used are discussed in the text)
5.1 Results from stepwise model construction for fixed-effects model by year
5.2 Coefficients of determination (R²) for the annually computed three fixed-effects model described in the text
5.3 ANOVA arising from Type III SoS for fixed-effect models containing additional discrete and continuous factors/predictors
5.4 Selected results from maximum-likelihood computations under assumptions and structure described in the text
5.5 Results from repeated performance of the Jarque-Bera test of normality by variance grouping and by model for first pair of treatments of nuggets
5.6 Results from repeated performance of the Jarque-Bera test of normality by variance grouping and by model for second pair of treatments of nuggets
5.7 Example of cell mean contrasts for discrete market capitalization predictor
5.8 Structure of the columns of the design matrix X
6.1 Identifiers/indices and names of 98 securities in test data set
6.2 P-values from KS test of the null hypothesis that the residuals from the fixed-effects model run on both the training and test sets have the same distribution
6.3 Rebalance strategy yielding highest returns under the set of covariance structures described in the text for an annual rebalance over the years indicated
6.4 Correlation between returns on VIX and returns on Sharpe portfolios for the years indicated
A.1 Countries used in this research
A.2 Stock exchanges used in this research
A.3 Industries used in this research
A.4 Sectors used in this research
B.1 Covariates grouped as per factors developed using a principal-components extraction and a varimax rotation; groups (factors) contain more than one covariate
B.2 Covariates grouped as per factors developed using a principal-components extraction and a varimax rotation; groups (factors) contain only one covariate
B.3 Overall tally of covariates entering models, broken down by parameters and broad themes
B.4 Tally of covariates entering models, broken down by parameters and time lag, not including contemporaneous observations
B.5 Tally of covariates entering models, broken down by parameters versus aggregate function and covariate groupings
B.6 Tally of covariates entering models, broken down by parameters versus covariate group/factor
B.7 Results from chi-square test for the presence of association between the values of the stated factor and the dependent variable
B.8 Directional representation of residuals from contingency table analyses between the Market Value factor and significant dependent variables as listed in Table B.7
B.9 Directional representation of residuals from contingency table analyses between the Trade Region factor and significant dependent variables as listed in Table B.7
B.10 Directional representation of residuals from contingency table analyses between the Exchange Region factor and significant dependent variables as listed in Table B.7
B.11 Directional representation of residuals from contingency table analyses between the Sector factor and significant dependent variables as listed in Table B.7
B.12 Q-Mode factor analysis: cases represented by dependent variable minimum aggregation (Min.ex) and properties represented by the data factor sector (Sect)
B.13 Results from Q-Mode factor analysis

1. Background And Literature Review

1.1 Overview Of The Chapter

In Chapter 1, the material first motivates the domain for the research by introducing and discussing risk in a financial context. Put succinctly, what most rational investors want from their investments is to maximize reward and minimize risk. In most "fair" financial markets, risk and expected reward are directly related: as expected reward increases, so does risk. The concept of reward as defined by return on investments is generally well understood; it is the concept of risk where the issues lie. Firstly, the risk appetite of individuals is difficult to connect directly to the most commonly used measure of risk, the standard deviation, and the exactness of the relationship between standard deviation and the likelihood of loss is really only understood in the context of the probability distribution of return. In this regard it is shown that the assumption of a Gaussian distribution of returns made by early investigators is not borne out by later research. In fact, departures from the Gaussian form are such that investors who are driven by concerns regarding the likelihood associated with the loss of various amounts of wealth (and many investors are motivated by this downside behavior, as it is called) are poorly served by assuming the probability in the tails of the Gaussian. So the focus becomes the behavior of extreme returns, along with a discussion examining various means of describing the distribution of extremes, showing the deficiencies in a number of models and commonly used statistical "tricks" or machinery.

The bulk of the chapter is spent on a review of the development of extreme value theory and the pertinent literature. The material includes Extreme Value Theory (EVT) and distributions, some of the key theorems and conditions under which the distributions exist, and the unification of the form of the three EVT distributions under the rubric of Generalized Extreme Value (GEV) distributions. Related topics are discussed which cover the models for identifying and defining extreme values, including the block maximum model and the point process approaches: the peaks over threshold (POT) and the Poisson-generalized Pareto (P-GPD) models. The relationships between the three models and the associated distributions are also reviewed. These sections are followed by a discussion of the methods and issues involved in estimating GEV parameters.

Other topics commonly associated with the modeling of extremes are reviewed or at least acknowledged. The impact of relaxing the initial assumption of identically and independently distributed behavior of the extreme random variables is briefly visited. Under this heading, topics such as clustering of extreme values and forms of dependence, both serial and spatial, are highlighted along with the relevant literature.

The final two topics of the chapter are of special relevance to the dissertation. These are the concept of return values and the construction of multivariate extreme distributions. Return values are the quantile values defined for a given distribution and return level of N time units (in this case weeks) such that the probability that the value of an extreme random variable from the distribution will exceed the quantile (return) value is equal to 1/N (see the sketch at the end of this section).

The discussion of multivariate extreme value distributions is focused on dependence functions, or functions for "tying" together a set of univariate extreme value distributions into a multivariate distribution. Of the dependence functions presently favored, the most popular is the copula. The copula is a multivariate distribution of unit (0,1) support and of dimensionality equal to the dimensionality of the desired multivariate distribution, defined such that each of its marginal distributions is uniform. The problem with copulas is not in defining them but in showing/knowing that the target joint distribution is the correct one for the circumstance. The thrust of this research, as will be spelled out in other chapters, is to use return values as a central element in the formation of a model to create multivariate extreme value distributions.
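Concretely, the N-unit return value of a fitted GEV is simply its (1 - 1/N) quantile. The following is a minimal sketch of that computation, not the thesis code; the GEV parameter values are hypothetical, chosen only to make the example run, and scipy's genextreme shape c is the negative of the usual GEV shape ξ.

```python
# Sketch: the N-week return value is the quantile z_N of the extreme-value
# distribution with exceedance probability 1/N.  GEV parameters here are
# hypothetical, purely to illustrate the computation.
from scipy import stats

mu, sigma, c = 0.02, 0.01, -0.1        # location, scale, scipy shape (c = -xi)
N = 52                                 # one-year return level, in weeks

z_N = stats.genextreme.ppf(1.0 - 1.0 / N, c, loc=mu, scale=sigma)
print(f"{N}-week return value: {z_N:.4f}")   # exceeded on average once per N weeks
```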

1.2 Introduction

No one likes to lose money. One way to avoid that is for investors to place their money in risk-free investment vehicles (for example, U.S. Treasury bills). However, the average return from such investments is often much less than desired by investors. To raise the expected return, investors must be prepared to take some risk; that is, they must be willing to chance losing some money. The questions for investors become: What is the measurement of risk they should be using, that is, how should risk be quantified? What do statements involving this risk quantity mean in a financial context? How do these measures and statements of risk help guide the investment selection process? As the dissertation advances, these questions will be explored more fully and lead to the development of a model to provide some additional guidance.

1.3 Financial Risk And Normality

Taking a statistical perspective, let us assume the observation of a random variable (RV) X which is measurable on a standard Borel σ-algebra, and that this random variable represents the return performance of a financial security over some given period. For this random variable, risk can be equated with uncertainty and the distribution of non-zero probability density. The extent to which we can characterize the distribution of non-zero probability density is the extent to which we can characterize risk. Let F be an arbitrary unknown probability distribution function of X. Assuming F has a finite second moment, we know from the central limit theorem (CLT) (Casella and Berger [2001]) that appropriately transformed sums and means of X converge in distribution to the normal distribution with mean equal to 0 and variance equal to 1 [N(0,1)]. Noting that characterizing risk under a normality assumption largely amounts to defining functions that are proportional to the standard deviation, Markowitz (1952) suggested that variance (and functions of variance) were "sufficient" descriptions of risk. This assumption by Markowitz is a key to his portfolio theory, commonly called the mean-variance portfolio theory or mean-variance optimization (MVO). [Footnote 1: For the purposes of this dissertation, a portfolio is defined as a set of financial securities owned by one entity. They may include equity or ownership in a business, bonds or debt which pays interest on money loaned, collective investment mutual funds, exchange-traded funds, unit investment trusts, and derivative securities (securities built upon equities and first-placed debt). In this dissertation the investigator will focus on, or restrict the discussion to, collections of equities as portfolios.] Using the CLT to assume the distribution of expected values is normal, we know a portfolio formed as a linear combination of asymptotically normal random variables is itself normal. Any level of risk/uncertainty can be estimated for any value or quantile of the portfolio random variable because of the tractable nature of such computations under a normal assumption. The use of the normality assumption and mean-variance portfolio theory was formalized further in the development of the capital asset-pricing model

(CAPM) and the concept of systematic risk embodied in the financial coefficient called β (Sharpe [1964] and Lintner [1965]). [Footnote 2: Beta is a parameter which represents how changes in the returns of the larger financial market as a whole relate to changes in the return of an individual security or group of securities, that is, a portfolio.]

While Costa et al. (2005) applied the multivariate normal in the process of evaluating risks in the joint distribution of portfolios, there is a substantial body of financial journal literature examining the skewness and leptokurtosis (fat-tailedness) of the distribution of financial returns (Affleck-Graves and McDonald [1989], Campbell et al. [1997], Dufour et al. [2003], Szego [2002], Jondeau and Rockinger [2003], and Tokat et al. [2003]). The nearly unanimous conclusion is that financial returns are not normally distributed. So, if distributions of returns are not normal and the investor is interested in protecting against, or computing, the risk associated with the extreme downside of portfolio returns, does the CLT help here?

1.4 Models Of Risk-Reward

In reviewing some of the major models for capturing risk, it should be evident that not all levels of risk are of equal concern. Downside extreme returns are of far greater concern than either returns which are centrally located in the distribution or upside extreme returns.

With regard to the upside extreme returns, while there are at present few complaints by investors because they made more money than expected, it is the author's view that it is just a matter of time. As methods of portfolio construction get more sophisticated, investors will demand that portfolio analysts consider both tail extremes in portfolio construction.

1.4.1 Mean Variance Optimization

As stated earlier, one of the earliest models of the modern portfolio theory era, Mean Variance Optimization (MVO), treats portfolio construction or asset allocation as a balance between return and risk, where expected return is defined as the arithmetic mean and risk is limited to the second (central) moment or variance. This is frequently represented as return penalized by risk in the following formula:

$$ w^T \mu - \lambda\, w^T \Sigma w \qquad (1.1) $$

where: $w$ is a weight vector describing the allocation of the $p$ assets in the portfolio, with $\sum_{i=1}^{p} w_i = 1$ and (often) $w_i \ge 0$; $\mu$ is the expected return vector of assets in the portfolio, frequently estimated by the arithmetic mean; $\Sigma$ is the covariance matrix, frequently estimated by the maximum likelihood estimator; and $\lambda$ is the risk aversion parameter. The formula above is the objective function which is to be maximized, subject to a set of constraints. The use of MVO is best suited to situations where the underlying asset returns are distributed normally.
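As a concrete illustration of eqn. (1.1), the sketch below maximizes the penalized objective under the budget and non-negativity constraints. The asset inputs (mu, Sigma, lam) are made-up values for demonstration, not data from this research.

```python
# Sketch: maximize the mean-variance objective  w' mu - lambda * w' Sigma w
# subject to sum(w) = 1 and w >= 0, as in eqn. (1.1).  All inputs are
# hypothetical values chosen only for illustration.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])          # hypothetical expected returns
Sigma = np.array([[0.10, 0.02, 0.04],      # hypothetical covariance matrix
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])
lam = 3.0                                  # risk-aversion parameter

def neg_objective(w):
    # Negate because scipy minimizes; eqn. (1.1) is maximized.
    return -(w @ mu - lam * w @ Sigma @ w)

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
bnds = [(0.0, 1.0)] * len(mu)
w0 = np.full(len(mu), 1.0 / len(mu))

res = minimize(neg_objective, w0, method='SLSQP', bounds=bnds, constraints=cons)
w = res.x
print("weights:", w.round(4))
print("expected return:", w @ mu, " variance:", w @ Sigma @ w)
```

Sweeping lam (or, equivalently, the target standard deviation) and re-solving traces out the efficient frontier described next.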

One of the most common visualization tools of MVO is the efficient frontier, a depiction of which is given in Figure 1.1. The efficient frontier (EF) is generated by repeated solutions of the MVO optimization over the range of values of the standard deviation which the object portfolio can take on. It represents the maximum expected return which the portfolio can achieve for a given standard deviation, i.e., risk. Another important element here is the capital asset pricing line, which is depicted as the dashed line in Figure 1.1.

[Figure 1.1: Example Of Efficient Frontier Built Upon Mean Variance Optimization. The plot shows randomly generated feasible solutions, the efficient frontier, the line representing CAPM, and the risk-free rate.]

We assume the existence of a risk-free investment such that the investor may lend or borrow at the risk-free rate. All other investments under consideration have returns which are not known a priori with certainty, and are collectively called the risky portfolio. The components of the risky portfolio can have only non-negative weightings (unlike the risk-free asset). We assume that all other budgetary and existence constraints remain the same. Then the investor can realize any combination of the risky portfolio existing in quadrant 1 or 4 of the $(\mu, \sigma)$ plane falling on the line $\mu = \mu_0 + x_0 \sigma$, with $\mu$ being the return of the investment portfolio, $\mu_0$ being the risk-free rate of return, $\sigma$ being the risk measure (in this case the standard deviation), and $x_0 \ge 0$ being the slope, which varies as a function of the efficient frontier and the universe of securities used. From Markowitz (1959), we find

$$ \sigma^2 = (1 - w_0)^2 \sigma_p^2 \quad\text{and}\quad \mu = \mu_0 + \frac{\mu_p - \mu_0}{\sigma_p}\,\sigma, $$

or more generally

$$ \mu = \mu_0 + \frac{\mu_p - \mu_0}{\lambda_{\alpha p}^{1/\alpha}}\; \lambda_\alpha^{1/\alpha} \qquad (1.2) $$

where (in addition to previous definitions): $w_0$ is the proportion of wealth in the risk-free investment; $\mu_p$ is the return on the risky portfolio;

$\sigma_p^2$ and $\sigma_p$ are the variance and standard deviation, respectively, of the risky portfolio; and $\lambda_\alpha^{1/\alpha}$ and $\lambda_{\alpha p}^{1/\alpha}$ are risk measures for the portfolio and the risky portion of the portfolio, respectively, of order $\alpha$, defined such that $\lambda_\alpha^{1/\alpha} = \sigma$ when $\alpha = 2$. What makes these results particularly interesting is that, under the same assumptions, we can realize the Generalized Capital Asset Pricing Line or Model (G-CAPM) in the same format by substituting the $n$th root of the cumulant of the $n$th order (see eqn. 1.2) for the standard deviation (Malevergne and Sornette [2001]).
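A small sketch of the line in eqn. (1.2) for the classical α = 2 case: the line passes through the risk-free rate with slope given by the tangency portfolio's Sharpe ratio. The numbers reuse the hypothetical mu and Sigma from the earlier MVO sketch; the unconstrained tangency formula w ∝ Σ⁻¹(μ − μ₀) is a standard textbook result, used here purely for illustration.

```python
# Sketch: the capital-market line mu = mu_0 + x_0 * sigma of eqn. (1.2),
# through the tangency (maximum-Sharpe) portfolio.  Inputs are hypothetical.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])
mu0 = 0.03                                  # hypothetical risk-free rate

# Unconstrained tangency weights: w ~ Sigma^{-1} (mu - mu_0), normalized.
w = np.linalg.solve(Sigma, mu - mu0)
w /= w.sum()

mu_p = w @ mu
sigma_p = np.sqrt(w @ Sigma @ w)
x0 = (mu_p - mu0) / sigma_p                 # slope of the line (Sharpe ratio)
print(f"tangency return={mu_p:.4f}, sd={sigma_p:.4f}, slope x0={x0:.4f}")
# Any point on mu = mu0 + x0 * sigma is attainable by mixing the risk-free
# asset with the tangency portfolio.
```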

1.4.2 VaR And CVaR

The focus of MVO upon downside returns is not explicit, but rather indirect, through the dispersion of the deviates as graphically depicted in the dispersion ellipsoid. The Value at Risk (VaR) and the Conditional Value at Risk (CVaR) measures are directly associated with the downside distribution of returns. VaR is, however, like the MVO, univariate in character, in that it is a measure made at the portfolio level and not at the level of the individual components making up the portfolio. It also performs best when the distribution of returns is unimodal. If we select a (low) probability, typically in the 0.01 to 0.10 range (call this probability α), then VaR is the smallest quantile value x such that Pr[X ≤ x] = α. Suppose X is a random variable describing the distribution of returns from a portfolio and we choose α = 0.05. Then the VaR is the return quantity x satisfying the conditions described, meaning that a return value of x or less will occur randomly 5 times in 100 returns. Given the distribution of asset returns and a designated level of risk that an investor is willing to accept (that is, the tail probability), one can compute the VaR for any portfolio composed of these assets. CVaR, or conditional value at risk, is a measure which answers the question: given the desired tail risk, what expected value of the measure (i.e., return) would an investor see, given that the return was less than the VaR value? Consequently, CVaR is also called expected shortfall or expected tail loss. CVaR is computed using established conditional probability identities:

$$ E[\text{Tail Loss}] = E[X \mid X \le x] = \int_{-\infty}^{x} t\, f_X(t)\, dt \Big/ \int_{-\infty}^{x} f_X(t)\, dt \qquad (1.3) $$

where: $E[\cdot]$ is the expected value operator; $X$ is a random variable defined on the distribution of returns; $x$ is the VaR value; and $f_X(t)$ is the function describing the distribution of probability density for portfolio returns. A figure depicting the VaR and CVaR elements is given in Figure 1.6 in conjunction with a discussion of return levels for extreme value distributions. Rockafellar and Uryasev (2002) developed a linear optimization which identifies the components and weights that probabilistically minimize CVaR and VaR for a given α and probability density or probability mass function.
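The definitions above translate directly into an empirical computation: VaR is a sample quantile and CVaR is the mean of the observations beyond it. A minimal sketch, using simulated fat-tailed returns rather than any data from this study:

```python
# Sketch: empirical VaR and CVaR (expected shortfall) at level alpha,
# following the definitions around eqn. (1.3).  Returns are simulated
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=100_000) * 0.01   # fat-tailed toy returns

alpha = 0.05
var = np.quantile(returns, alpha)          # smallest x with P[X <= x] = alpha
cvar = returns[returns <= var].mean()      # E[X | X <= VaR], the expected tail loss

print(f"VaR({alpha:.0%})  = {var:.4f}")
print(f"CVaR({alpha:.0%}) = {cvar:.4f}")   # CVaR is below (worse than) VaR
```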

1.4.3 Generalized Capital Asset Pricing Model (G-CAPM)

Let us assume that the distribution of returns being observed is leptokurtic (leaving aside distributions that are asymmetric or skewed). As stated earlier, leptokurtic behaviour means that, in comparison to the normal distribution, the leptokurtic distribution of returns has too much density in the central part of the distribution, too little density in the shoulder, and too much density in the tails (similar to the behaviour demonstrated by the t-distribution with few degrees of freedom). This means:

1. If the investor believes the distribution of returns for a given asset or portfolio is normal while it is indeed leptokurtic, and uses the sample of returns to estimate the distribution's parameters (namely the mean and variance), and sets a quantile value for some tail probability, the quantile will be wrong. Even worse, the quantile value will be wrong in a non-conservative fashion; that is, more density (meaning more probability) will be in the tail, and tail events will be more likely (see the sketch following this list).

2. While the first two cumulants of a normal are equal to the mean and variance and the cumulants beyond the second are zero, the same is not true for leptokurtic distributions.
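The sketch referenced in point 1: fat-tailed (Student-t) "returns" are fitted with a normal, and the nominal 1% quantile is checked against the realized tail frequency. The distributions and sample sizes are arbitrary choices for illustration.

```python
# Sketch of point 1: fit a normal to leptokurtic (Student-t) returns and
# compare the nominal 1% tail with the actual tail frequency.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=100_000)     # leptokurtic "returns"

mu_hat, sd_hat = x.mean(), x.std(ddof=1)   # normal fit by moments
q01 = stats.norm.ppf(0.01, mu_hat, sd_hat) # normal-based 1% quantile

actual = (x <= q01).mean()                 # how often the "1%" loss really occurs
print(f"normal 1% quantile: {q01:.3f}, actual exceedance rate: {actual:.4f}")
# The actual rate comes out above 0.01: the normal quantile is wrong in the
# non-conservative direction, exactly as described in the text.
```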

Malevergne and Sornette (2002) describe the impact of moments higher than the second moment in terms of the higher-degree cumulants. In the work cited, cumulants up to the 8th degree are examined. Malevergne and Sornette (M&S) pose the risk in the marginal distribution of assets as being made up of the non-zero cumulants of higher order. For a symmetric distribution, the odd cumulants of a symmetric leptokurtic distribution are zero. In examining higher order cumulants, we learn that the cumulants are polynomials of order equal to the cumulant order, so greater amounts of density in the tail of the distribution add considerably to the risk, considerably more risk than limiting the estimate of risk to the second cumulant or variance.

M&S show two important results for heavy-tailed distributions, examining risk as expressed by cumulants of up to order eight. These are:

1. As indicated in the discussion above on MVO, the results describing the Capital Asset Pricing Model can be generalized to include higher order cumulants.

2. While the efficient frontier constructed using lower order cumulants dominates the efficient frontier constructed from higher order cumulants, for the root 1/n, where n is the order of the cumulant, the return for the given "true" level of risk as approximated by higher order cumulants increases with the order of the cumulant. Hence, the results support the findings of Clark and Labovitz (2006), wherein higher returns are available with increasingly complex computation of risk.
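For readers who want to see higher-order cumulants concretely, the sketch below estimates sample cumulants up to order eight from raw sample moments via the standard moment-to-cumulant recursion. This is an illustrative computation of my own, not the M&S estimator.

```python
# Sketch: sample cumulants up to order 8 from raw sample moments, via the
# recursion  kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) * kappa_k * m_{n-k},
# where m_j is the j-th raw moment.
import numpy as np
from scipy.special import comb

def sample_cumulants(x, order=8):
    m = [1.0] + [np.mean(x**j) for j in range(1, order + 1)]  # raw moments
    kappa = [0.0]                                             # dummy index 0
    for n in range(1, order + 1):
        k_n = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                         for k in range(1, n))
        kappa.append(k_n)
    return kappa[1:]

rng = np.random.default_rng(2)
x = rng.standard_t(df=10, size=1_000_000)   # heavy-tailed toy sample
for n, k in enumerate(sample_cumulants(x), start=1):
    print(f"kappa_{n} = {k: .4f}")
# For a normal sample, kappa_3 and beyond would be near zero; the heavy-tailed
# sample shows a clearly non-zero kappa_4, with the even cumulants growing quickly.
```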

Having established the results for a marginal or univariate distribution, the question becomes one of estimating cumulants for portfolios, or random variables which are weighted sums of univariate random variables. Malevergne and Sornette (2001), using the transformation theorem and a normal copula, define an approach for going from an arbitrary marginal distribution to the characteristic or moment generating function of the joint distribution. Here we show the form for the moment generating function of the bivariate generalized Weibull distribution, given by

$$ \hat{P}_S(k) = \frac{1}{2\pi\sqrt{1-\rho^2}} \int\!\!\int dy_1\, dy_2\; \exp\!\left(-\tfrac{1}{2}\, y^T V^{-1} y\right) \exp\!\left[\, i k \left( w_1\, \chi_1\, \mathrm{sgn}(y_1)\, |y_1|^{q_1} + w_2\, \chi_2\, \mathrm{sgn}(y_2)\, |y_2|^{q_2} \right) \right] \qquad (1.4) $$

The moments (and hence the cumulants, which are functions of the moments) of this distribution are defined by

$$ M_n(q_1, q_2) = \sum_{p=0}^{n} \binom{n}{p}\, w_1^{\,p}\, w_2^{\,n-p}\; \gamma_{p,\,n-p}(q_1, q_2). $$

This method turns out to be somewhat intractable for large universes of candidate asset classes and, as discussed below, the use of copulas has some issues of its own.

1.5 Modeling Risk As It Effects Reward

Is there any advantage to pursuing such a line of inquiry; i.e., can various performance/risk measures produce systematically different and "better" results? Or, as Clark and Labovitz (2006) put it, is money being left on the table? Clark and Labovitz (2006) used both equity and bond fund universes to examine the question "Do portfolios formed using different risk/performance metrics and models yield systematically different levels of performance?" The authors examined portfolio formation under three risk-reward models:

1. Single-Index CAPM (Capital Asset Pricing Model): Assumes a one-factor return generating process, typically a function of overall market return, plus a firm-specific innovation (from Sharpe, 1974).

2. Multi-Factor Model: Fama and French added two factors, (i) small caps and (ii) stocks with a high book-value-to-price ratio (customarily called "value" stocks; their opposites are called "growth" stocks), to CAPM to reflect a portfolio's exposure to these two classes. Carhart added a momentum factor to the Fama/French model, and this four-factor model is what the authors used (from Day, et al., 2001).

3. Generalized CAPM: G-CAPM evokes higher moments (mainly kurtosis) to talk about risk, because returns tend to be more peaked and fat-tailed than assumed in either CAPM or multi-factor models. Minimizing the variance of the return portfolio will overweight an asset which is wrongly perceived as having little risk due to its small variance, or waist (from Malevergne and Sornette, 2001).

The authors answered with a rousing YES the question as to whether, based upon the risk measure used in portfolio formation, the investor was leaving money on the table. The authors found:

- Single-Index CAPM increases gross annual returns by 70-80 bp (basis points).
- Multi-factor models increase gross annual returns by 220 bp.
- Generalized CAPM, on average, increases gross annual returns by 300 bp or more.

Just as interesting was the ordering of the results: the greater the detail or complexity with which risk is modeled, the greater the increase in return which was realized. The authors concluded that modeling risk more accurately is indeed a valuable pursuit in improving return in portfolio formation.

1.6 Extremes And The Central Limit Theorem

Let us look at extremes and some common statistical machinery. Firstly, can the Central Limit Theorem be of use in working with extremes? Let us assume the returns, rather than forming a nearly symmetric unimodal distribution, appear to behave far more like an exponential distribution with λ = 1. Figure 1.2 depicts a probability density function of such a form. To form the following two figures, samples of size 50 were drawn from this exponential distribution, and the sample means and sample maxima were then computed.

[Figure 1.2: Probability Density Function Of Exponential Distribution With Rate λ = 1.]

Figure 1.3 shows the resulting distribution of means, while Figure 1.4 shows the resulting empirical distribution of maxima. The line plot on each figure is the normal distribution with parameters estimated from the sample. Despite the highly non-normal shape of the original distribution, the means suggested by the CLT in Figure 1.3 and Table 1.1 are nicely approximated by the normal distribution. This is not so for the sample distribution of maxima. The Jarque-Bera (JB) test has a null hypothesis that the data are from a normal distribution. The statistic is a function of the excess estimated skewness and kurtosis in the data. Under the null hypothesis of normality, the JB statistic is asymptotically distributed as $\chi^2_{df=2}$. While the statistics seem to indicate that the number of observations forming a mean is not yet large enough to be distributed normally (recall that the results are asymptotic), the means are clearly trending towards a normal distribution much more than the maximum values. The conclusions appear to be:

- The CLT and the normal distribution are not appropriate descriptions of the maximum values of the function.
- The maxima are asymmetric and leptokurtic (fat-tailed), i.e., they have too much density in the center and, more importantly, in the tails.
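The experiment behind Figures 1.3 and 1.4 is easy to reproduce. A sketch follows; the random seed and library choices are mine, not the thesis author's.

```python
# Sketch reproducing the experiment of Figures 1.3 and 1.4: 10,000 sample
# means and sample maxima, each from samples of size 50 drawn from Exp(1),
# compared to normality with the Jarque-Bera test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
samples = rng.exponential(scale=1.0, size=(10_000, 50))   # Exp(1), rate = 1

means = samples.mean(axis=1)
maxima = samples.max(axis=1)

for name, data in [("means", means), ("maxima", maxima)]:
    jb, p = stats.jarque_bera(data)
    print(f"{name:6s}: skew={stats.skew(data): .3f} "
          f"kurtosis={stats.kurtosis(data, fisher=False):.3f} "
          f"JB={jb:,.1f} (p={p:.3g})")
# The means come out close to normal (skew ~ 0, kurtosis ~ 3); the maxima stay
# strongly right-skewed and fat-tailed, matching the conclusions above.
```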

[Figure 1.3: Normal Density Function Superimposed On Histogram Of 10,000 Means Of Sample Size 50 Drawn From An Exp(1) Distribution.]

[Figure 1.4: Normal Density Function Superimposed On Histogram Of 10,000 Maxima Of Sample Size 50 Drawn From An Exp(1) Distribution.]

For this example, under assumptions of normality and the CLT, the following values should be observed: mean = 1.0, skewness = 0.0, kurtosis (by this definition) = 3.0.

[Table 1.1: Salient statistics for the two samples.]

Legend: Nobs = number of observations; Mean = arithmetic mean; Median = median; Var = variance; Std = standard deviation; Min = minimum value; Max = maximum value; Iqr = interquartile difference; Skew = skewness; Kurtosis = kurtosis; Jarque-Bera = Jarque-Bera test of normality.

1.7 Order Statistics

Another approach, or piece of statistical machinery commonly taught to statisticians, is the field of order statistics. Let $X_1, X_2, \ldots, X_n$ be an identically and independently distributed (i.i.d.) random sample of size n from the cumulative distribution function (cdf) F. We can define $Y_1 \le Y_2 \le \ldots \le Y_n$ as the order statistics of the $\{X_i\}$, where $Y_n = \max(\{X_i\})$. If F is known, it can be shown that the density of the order statistic $Y_i$ is

$$ f_{Y_i}(x) = \frac{n!}{(i-1)!\,(n-i)!}\; F(x)^{\,i-1}\, \bigl[1 - F(x)\bigr]^{\,n-i}\, f_X(x) \qquad (1.5) $$

where $f_X$ is the probability density function. However, the selection of the probability model (the choice of F) is often based upon heuristics (the world view of the researcher), and the estimates of parameter values are sensitive to model selection. But there is little theory, excepting some special cases, suggesting the magnitude of the potential bias/variance, which can be extremely large; nor are there asymptotic results associated with the situation (the circumstances do not improve with increased sample size). Carrying on from a classical statistical perspective, we have:

$$ P(Y_n \le x) = P(X_1 \le x,\, X_2 \le x,\, \ldots,\, X_n \le x) = P(X_1 \le x)\, P(X_2 \le x) \cdots P(X_n \le x) \ \text{[by independence]} = [F_X(x)]^n \ \text{[by previous definitions]} \qquad (1.6) $$
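Equation (1.6) can be checked numerically: simulate maxima of n i.i.d. draws and compare the empirical cdf at a point with [F(x)]^n. A sketch, reusing the Exp(1) example from Section 1.6:

```python
# Sketch: check eqn. (1.6) numerically -- the cdf of the sample maximum of n
# i.i.d. draws is F(x)^n.  Here F is Exp(1), continuing the earlier example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 50
maxima = rng.exponential(size=(20_000, n)).max(axis=1)

x = 5.0
empirical = (maxima <= x).mean()
theoretical = stats.expon.cdf(x) ** n       # [F(x)]^n from eqn. (1.6)
print(f"P(Y_n <= {x}) empirical={empirical:.4f} theoretical={theoretical:.4f}")
```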

However, $F_X(x)^n \to 0$ as $n \to \infty$ for all $x < x_{\sup}$, where $x_{\sup} = \inf\{x : P(X_i \le x) = 1\}$. That is, the distribution of $Y_n$, or $\max(\{X_i\})$, converges to a point mass at $x_{\sup}$. So the distribution function of the order statistic of the extreme value of X is degenerate as $n \to \infty$, which is not a particularly useful result in evaluating the distribution of extreme values.

1.8 Statistics Of Extremes

The development of the statistics of extreme values has a long history with respect to the modern theory of statistics, with the original development being attributed to Fisher and Tippett (1928). However, it was placed on a much stronger theoretical foundation by Gnedenko (1943), picking up speed in the latter half of the twentieth century, starting with the work of Gumbel (1958).

1.8.1 Existence Of Extreme Value Distributions
We start the discussion with univariate i.i.d. random variables. Much of the early work on the subject of the statistics of extreme values, including the early work of Gumbel (1958), tried to develop the theory via the classical theory of sums of random variables and CLTs. This alternative will be reviewed here, drawing heavily on the texts of Coles (2001) and Beirlant et al. (2004).

Let X be a random variable, and let the support of X be [a, b] ⊆ ℝ̄ (where ℝ̄ denotes the augmented reals, notated for simplicity as ℝ). Let F_X(x) be the cdf of X, otherwise notated as F(x) or just F, where F(x) = Pr(X ≤ x) ∈ [0, 1]. Similarly, let us assume f_X(x) exists and is the probability density function (pdf) for X: dF/dx = f_X, or F_X(x) = ∫_{−∞}^{x} f_X(t) dt. Let X_1, X_2, …, X_n be a random sample of size n from the cdf F. We can then define Y_n = max({X_i}). To develop useful extreme-value limit laws we need to find linear renormalization sequences of constants {a_n > 0} and {b_n}, with a_n, b_n ∈ ℝ for all n, such that

P[(Y_n − b_n)/a_n ≤ x] → G(x) as n → ∞,

where G is a nondegenerate distribution function. Then, G belongs to one of three families of distribution functions. In fact, these assumptions form the hypotheses for an important theorem concerning limits for the probability distribution of extreme values:

Theorem 1: If there exist sequences of constants {a_n > 0} and {b_n}, with a_n, b_n ∈ ℝ for all n, such that

P[(Y_n − b_n)/a_n ≤ x] → G(x) as n → ∞,   (1.7)

where G is a nondegenerate distribution function, then G(x) belongs to one of three distribution families:

Type I:   G(x) = exp{−exp[−((x − b)/a)]},  −∞ < x < ∞;

Type II:  G(x) = 0 for x ≤ b;  G(x) = exp{−[(x − b)/a]^{−α}} for x > b;   (1.8)

Type III: G(x) = exp{−[−((x − b)/a)]^{α}} for x < b;  G(x) = 1 for x ≥ b;

where −∞ < b < ∞, a > 0, and α > 0 for Types II and III.

The proof of Theorem 1 can be found in Leadbetter et al. (1983). An informal proof is provided by Coles (2001). The sufficient condition (if the distribution is an extreme-value distribution [EVD]³, then it is max-stable) is a bit of algebra for each distribution. The necessary condition (if the distribution is max-stable, then it is an EVD) relies upon some results from functional analysis.

Definition 1: Let H_{Y_n}(r) be the distribution function of Y_n = max({X_i}) for i = 1, 2, …, n, where the X_i are independent random variables, and let H_X(r) be the distribution function of the independent X_i. Then H_X is said to be max-stable iff (defined as if and only if) for all n > 1 there exist constants A_n > 0 and B_n such that

H_{Y_n}(A_n r + B_n) = [H_X(r)]^n  (identical in law).   (1.9)

Under max-stability, the distribution of the independent random variable is the same (identical in distribution or law) as the distributions of the extreme values. That is, if H_n is the distribution function of Y_n, the maximum order statistic of {X_i}, and these independent variables each have an identical distribution function H, then max-stability is a property that is satisfied by these distributions.

³ Throughout the discussion below the investigator will, as appears to be common usage, refer to the non-unified set of distributions as EVD, and to the unified form of the theory (discussed below), which describes all three EVDs under one distributional form, as the GEV or generalized extreme value theory.

The property is that a sample of maxima yields an identical distribution, up to a change of scale and location. The connection with the extreme-value limit laws is made by the following theorem:

Theorem 2: A distribution is max-stable if and only if it is a generalized extreme-value distribution (EVD).

Proof (informal; for a formal proof see Leadbetter et al. [1983]): Let us define Y_{nk} to be the maximum of a sequence of nk X_i, or the maximum of k maxima Y_{n,i}, i = 1, 2, …, k, with each Y_{n,i} = max(i-th set of n independent X's) = max({X_1, X_2, …, X_n}). Assume from the hypothesis that the limiting distribution of (Y_n − B_n)/A_n is H. By Theorem 1, as n gets larger,

P{(Y_n − B_n)/A_n ≤ r} → H(r).

Therefore, for any k > 0,

P{(Y_{nk} − B_{nk})/A_{nk} ≤ r} → H(r).

However, since Y_{nk} by independence is the max of k i.i.d. copies of Y_n, we have

P{(Y_{nk} − B_{nk})/A_{nk} ≤ r} = [P{(Y_n − B_{nk})/A_{nk} ≤ r}]^k.

By simple manipulation:

P(Y_{nk} ≤ r) → H((r − B_{nk})/A_{nk}) = H^k((r − B_n)/A_n)   (1.10)

Consequently, H and H^k are identical distributions, up to a change in location and scale. Therefore, H is max-stable and from the EVD family of distributions.

1.8.2 Extreme Value Distributions
Theorem 1 is called the extreme-value theorem (EVT); the three distributions are commonly called the extreme-value distributions (EVDs), and the setup model is called the block-maxima model (detailed for comparison purposes below). The beauty of this model is that its result holds true regardless of F, the distribution of the original observations. Types I, II, and III are commonly known as the Gumbel, Fréchet, and Weibull distributions. All three have location and scale parameters (b and a), and the Fréchet and Weibull distributions have a shape parameter (α). The formulation of the Weibull for the EVT is the left tail (finite or bounded upper value), whereas for reliability or failure studies the Weibull is reformulated to have a positive support.

Figure 1.5 Extreme Value Distributions Depicted On The Same Plot. (Legend: W == Weibull, F == Fréchet, G == Gumbel. The Gumbel is unlimited, the Fréchet has a lower limit, while the Weibull has an upper limit.)

The tail behavior depicted in Figure 1.5 gives more precise distinction to the three families. The Weibull distribution has a bounded upper tail and an infinite lower tail. The Gumbel distribution is infinite in both tails, with an exponential rate of decay. The Fréchet distribution has a bounded lower tail but an infinite upper tail that decays at a rate governed by a polynomial function. As such, for equal location and scale parameters there is more density in the upper tail of the Fréchet than in the upper tail of the Gumbel (Beirlant et al. [2004]).

1.8.3 Unification Through GEV Distributions And Extreme Maxima
Although Theorem 1 is a fairly astonishing result and has been cited by others as the CLT for extreme values, there exists a commonly occurring problem of more than one model being feasible. First, one must decide which of the three models is most appropriate for the circumstances and the data prevailing at the time the decision is made. As noted previously, our result suggests a distinction in tail behavior. Secondly, after the model has been selected, the researcher has the issue of parameter estimation, which is an art in itself.

Alternative formulations of Theorem 1 are provided by von Mises (1954) and Jenkinson (1955). These formulations combine the Gumbel, Fréchet, and Weibull distribution families to form a single unified family of distributions. Commonly called the GEV distribution, the distribution is of the form:

G(x; μ, σ, ξ) = exp{−[1 + ξ((x − μ)/σ)]_+^{−1/ξ}}   (1.11)

defined on {x : 1 + ξ((x − μ)/σ) > 0}, where [z]_+ = max(z, 0), −∞ < μ < ∞, σ > 0, and −∞ < ξ < ∞. The GEV has three parameters: a location parameter (μ), a scale parameter (σ), and a shape parameter (ξ).

Note that the equivalent Types II and III are the corresponding extremal distributions for ξ > 0 and ξ < 0. In fact, the shape parameter ξ governs the tail behavior of the component distribution family. As we shall see below, ξ > 0, ξ < 0, and ξ → 0 correspond to the Fréchet, Weibull, and Gumbel distributions, respectively. We can use these inequalities to establish the relationship between the EVT and GEV representations.

For the Weibull distribution, let ξ < 0 and σ > 0, defined on the set {x : 1 + ξ((x − μ)/σ) > 0}:

G(x) = exp{−[1 + ξ((x − μ)/σ)]_+^{−1/ξ}}  for ξ < 0
 = exp{−[1 − |ξ|((x − μ)/σ)]_+^{1/|ξ|}}
 = exp{−[(|ξ|/σ)(μ + σ/|ξ| − x)]_+^{1/|ξ|}}
 = exp{−[−((x − b)/a)]^{α}}  for x < b,   (1.12)

with a = σ/|ξ| > 0, b = μ + σ/|ξ|, and α = 1/|ξ| > 0, which is the Type III (Weibull) form.

For the Fréchet distribution, let ξ > 0 and σ > 0, defined on the set {x : 1 + ξ((x − μ)/σ) > 0}:

G(x) = exp{−[1 + ξ((x − μ)/σ)]_+^{−1/ξ}}  for ξ > 0
 = exp{−[(ξ/σ)(x − (μ − σ/ξ))]_+^{−1/ξ}}
 = exp{−[(x − b)/a]^{−α}}  for x > b,   (1.13)

with a = σ/ξ > 0, b = μ − σ/ξ, and α = 1/ξ > 0, which is the Type II (Fréchet) form.

The Type I or Gumbel family of distributions arises under the limit ξ → 0 and can readily be seen as a consequence of the limit definition e = lim_{n→∞}(1 + 1/n)^n, where we have

G(x) = exp{−exp[−((x − μ)/σ)]}.   (1.14)

1.8.4 Extreme Minima
The discussion up to this point reflects the extreme values of maxima. How does the theory change if there is interest in the minimum extrema rather than the maximum extrema? Let W_i = −X_i; then the W_i are i.i.d. And let Ỹ_n = max({W_i}) and Y_n* = min({X_i}) = −Ỹ_n; then, by the EVT and other intermediate results, we have:

P(Y_n* ≤ x) = P(−Ỹ_n ≤ x) = P(Ỹ_n ≥ −x) = 1 − P(Ỹ_n ≤ −x)
 → 1 − exp{−[1 + ξ((−x − μ)/σ)]_+^{−1/ξ}}
 = 1 − exp{−[1 − ξ((x − μ*)/σ)]_+^{−1/ξ}}  as n → ∞,   (1.15)

where μ* = −μ.

The minimum for the Gumbel is derived by once again taking the limit. Using the definition of e as a limit, we have:

G(x; μ, σ) = 1 − exp{−exp[(x − μ)/σ]}.   (1.16)

Now we have described the mathematics of both maximum and minimum extrema. Even though the research proposed here deals with minima, we provide the following definition: let a loss be a positive number (and a gain a negative number); e.g., a loss of 2% is defined as 0.02 and a gain of 3.2% is defined as minus 0.032. So, large losses are positive. In the following research we examine maxima rather than minima. From a purely mathematical perspective the analysis of the probability structure of the minimum of a set of random variables {X_i} is the same as that of the maximum of the transformed set of RVs {−X_i}. This definition was motivated by the researcher's initial work, wherein he found that a number of computations more easily decompose for maxima than for minima. Consequently, in the remainder of this presentation readers should assume that all references to maxima or minima in the context of data are after the above transformation.
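The transformation is trivial to carry out in practice; the toy values below are purely illustrative:

```python
import numpy as np

returns = np.array([0.012, -0.034, 0.005, -0.021, 0.017])  # hypothetical daily returns
losses = -returns                        # losses become positive, gains negative
assert losses.max() == -returns.min()    # max of the losses == minus the worst return
```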

1.9 Extreme Value Generation Models

1.9.1 Block Maxima Model
There are other representations of the uncertainty of extreme values. We may motivate those results by describing mechanisms by which the extrema may be generated. The block-maxima model is an extrema-generation mechanism associated with the family of GEV distributions. Let X_1, X_2, …, X_l be a set of independent random variables. This set is grouped or blocked into blocks of length n (n ≤ l). Without loss of generality we can assume there are m such blocks, and we may generate a set of maximum values from the maximum of each block, defined as Y_{n,1}, Y_{n,2}, …, Y_{n,m}. It is this set {Y_{n,i}}, under the hypothesis of Theorem 1, that follows a GEV distribution. In the present context blocks may be trading weeks (5 days), trading months (~21 days), or trading years (~252 days).
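As a minimal sketch of the block construction (the series, block length, and seed are stand-ins rather than the thesis data), daily losses can be reshaped into non-overlapping blocks and reduced to their maxima:

```python
import numpy as np

rng = np.random.default_rng(1)
daily_losses = rng.standard_t(df=4, size=252)   # stand-in for one trading year of losses
block = 5                                       # trading-week block length
m = daily_losses.size // block                  # number of complete blocks
weekly_maxima = daily_losses[: m * block].reshape(m, block).max(axis=1)
print(weekly_maxima.shape)                      # (50,) block maxima, ready for GEV fitting
```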

1.9.2 Peaks Over Thresholds Model
There are two other models of extreme-value behavior commonly used. Instead of defining a set of n random variables and the maximum of the set, define a high threshold value u (a choice which presents a similar selection challenge). Called "exceedance over thresholds" (Smith [2003]) or "peaks over thresholds" (POTs), the process works as follows. Fix u, then define the conditional distribution of the random variable X given that X > u. From previous results, for n large enough we have:

F^n(x) ≈ exp{−[1 + ξ((x − μ)/σ)]_+^{−1/ξ}} = G(x; μ, σ, ξ),
n log F(x) ≈ −[1 + ξ((x − μ)/σ)]_+^{−1/ξ}.

Using the well-known result that for values of y close to 1 (in particular y ∈ (0, 1)), log(y) ≈ −(1 − y), we have log(F(x)) ≈ −(1 − F(x)). By substitution,

1 − F(u) ≈ (1/n)[1 + ξ((u − μ)/σ)]_+^{−1/ξ}  for large u.

Define the RV Y = X − u; then, from the previous equation,

1 − F(u + y) ≈ (1/n)[1 + ξ((u + y − μ)/σ)]_+^{−1/ξ}  for y > 0.

From the definition of conditional probability,

P(Y ≤ y | Y > 0) = [F_X(u + y) − F_X(u)] / [1 − F_X(u)].   (1.17)

Hence,

P(X > u + y | X > u) = [1 − F(u + y)] / [1 − F(u)]
 = [1 + ξ((u + y − μ)/σ)]_+^{−1/ξ} / [1 + ξ((u − μ)/σ)]_+^{−1/ξ}
 = [1 + ξ(y/σ̃)]_+^{−1/ξ}
 = 1 − H(y; σ̃, ξ),   (1.18)

where σ̃ = σ + ξ(u − μ) and H(y; σ̃, ξ) = 1 − [1 + ξ(y/σ̃)]_+^{−1/ξ} is the Generalized Pareto Distribution (GPD). The results producing the GPD are after Pickands (1975).
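The GPD limit can be illustrated numerically. In the sketch below (a stand-in experiment, not the thesis data), exceedances of a heavy-tailed t-distributed sample over its 95th percentile are fitted with scipy's generalized Pareto distribution; for a t distribution with 4 degrees of freedom, the theoretical tail shape is 1/4:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = stats.t.rvs(df=4, size=100_000, random_state=rng)  # heavy-tailed stand-in losses
u = np.quantile(x, 0.95)                               # a high threshold
exceedances = x[x > u] - u                             # Y = X - u, given X > u

shape, _, scale = stats.genpareto.fit(exceedances, floc=0.0)
print(f"fitted GPD shape = {shape:.3f} (theory: 0.25), scale = {scale:.3f}")
```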

1.9.3 Poisson Generalized Pareto Model
A third model is called the Poisson-Generalized Pareto Distribution or P-GPD. This result is after Coles (2001) and Smith (1989). Let X_1, X_2, …, X_n be a random sample of i.i.d. RVs, and suppose there exist {a_n}, {b_n} such that the GEV model holds; then let Y_{i,n} = (X_i − b_n)/a_n, i = 1, 2, …, n. Let N_n denote the point process defined on ℝ² as

N_n = {(i/(n + 1), Y_{i,n}) : i = 1, 2, …, n}.   (1.19)

The first element rescales process times to lie in (0, 1), and the second element gives a form to the extremes. Form a region A in ℝ², where A = [0, 1] × [u, ∞) for some very large value of u. Then the probability that each point in N_n has of falling in A is (from a previous result) given by:

p_{i,n} = P[Y_{i,n} > u] ≈ (1/n)[1 + ξ((u − μ)/σ)]_+^{−1/ξ}.   (1.20)

Because the X_i are mutually independent, the points of N_n in A are distributed as per a binomial probability law with parameters (n, p). Using standard results from mathematical statistics, an RV distributed as a binomial converges to a limiting Poisson distribution as n → ∞ and p → 0; that is, the limiting distribution of N_n defined on A is Pois(Λ(A)), where Λ is the rate of the Poisson, which is a function of the region A where the points are located. We have from the previous result:

Λ(A) = [1 + ξ((u − μ)/σ)]_+^{−1/ξ}.   (1.21)

Because of the defined homogeneity of the process in time, for any region A′ = [t_1, t_2] × [u, ∞) composed of any interval of time [t_1, t_2] ⊆ [0, 1], the limiting distribution of N_n(A′) is Pois(Λ(A′)), with

Λ(A′) = (t_2 − t_1)[1 + ξ((u − μ)/σ)]_+^{−1/ξ}.   (1.22)

The resulting process is called by Smith (2003) a P-GPD model.

1.9.4 Relation Between Extrema Models
All three of the models (the block-maxima, the threshold model, and the P-GPD) are related. Let us demonstrate these relationships, starting each proof with the P-GPD. For the block-maxima model we have:

Let M = max{X_1, X_2, …, X_n} and N_n = {(i/(n + 1), (X_i − b_n)/a_n)}. Thus, the event {(M − b_n)/a_n ≤ z} is equivalent to N_n(A_z) = 0, where A_z = (0, 1) × (z, ∞):

P[(M − b_n)/a_n ≤ z] = P[N_n(A_z) = 0]
 = exp{−Λ(A_z)}
 = exp{−[1 + ξ((z − μ)/σ)]_+^{−1/ξ}}.   (1.23)

For an exceedance-over-threshold model, based upon the earlier discussion related to homogeneity over the Cartesian space for the point process, we can factor by coordinates the value of Λ over A_z. So, let Λ(A_z) = Λ_1([t_1, t_2]) Λ_2([z, ∞)), where Λ_1([t_1, t_2]) = t_2 − t_1 and Λ_2([z, ∞)) = [1 + ξ((z − μ)/σ)]_+^{−1/ξ}. Consequently,

P[(X_i − b_n)/a_n > u + z | (X_i − b_n)/a_n > u] = Λ_2([u + z, ∞)) / Λ_2([u, ∞))
 = [1 + ξ((u + z − μ)/σ)]_+^{−1/ξ} / [1 + ξ((u − μ)/σ)]_+^{−1/ξ}
 = [1 + ξ(z/σ̃)]_+^{−1/ξ},  where σ̃ = σ + ξ(u − μ).   (1.24)

One important question for each of these models is the size of each of the blocks, i.e., the number of observations in a block, the length of a block, or the value of u, the threshold for discerning the extreme-value region. In these contexts the issue can be thought of as a bias-variance tradeoff.

In the block-maxima model, if the blocks are too short (too small a value of n), we will realize a poor fit to the limiting behavior of the result and hence a biased estimate. On the other hand, too large an n yields too large an estimate of the variance. One way of overcoming this for the researcher is to turn to the subject matter domain under examination. In the analysis of financial data there are natural trading periods such as a week, a month, or a quarter, often examined over horizons of three, five, or ten years (subject of course to limitations on the availability of data, which may dictate the use of shorter time frames). Given the relationships among the models as described above, we might intuit that low or high estimates of the value u will have a similar result with respect to the impact on parameter estimation. Indeed, that turns out to be the case. Once again, too low a value for u will result in biased estimators, while setting too high a value of u will produce an overly large value for the estimate of the variance. A series of statistical diagnostics for threshold values is found in Davison (1984), Smith (1984), Davison and Smith (1990), and Coles (2001).

1.10 Parameter Estimation
With respect to parameter estimation, although method-of-moments techniques (MME; see Hosking et al. [1985]), methods based on order statistics (Dijk and de Haan, 1992), and Bayesian statistics have been used in the literature, likelihood techniques are most commonly used because of their greater adaptivity to, and utility in, complex model-building circumstances.

This does not mean there are not examples in which other estimation methods perform a credible if not superior job. For example, according to Madsen et al. (1997), the MME quantile estimators have smaller root mean square error when the true value of the shape parameter is within a narrow range around zero. Examples of the use of the Bayesian approach may be found in Stephenson and Tawn (2004) and Cooley et al. (2006a, 2006b).

Maximum-likelihood estimators (MLEs) of the parameters of GEV distributions were considered by Prescott and Walden (1980, 1983) and Smith (1985). However, because the end points of the GEV distributions are functions of the parameters being estimated, the model conditions violate common regularity conditions underlying theorems that prove MLEs asymptotically converge to the BLUE estimator (Casella and Berger [2001]). Smith (1985) produced the following results:

If ξ > −0.5, MLEGEV estimators (defined as the maximum-likelihood estimators of the GEV parameters) possess common asymptotic properties.
If −1 < ξ < −0.5, MLEGEV estimators may be computed, but they do not have regular or standard asymptotic properties.
If ξ < −1, MLEGEV estimators may not be obtained.

If {Z_i}_{i=1}^{m} is an i.i.d. random sample having a GEV distribution, the MLEGEV are defined through the use of the log-likelihood as follows:

ℓ(μ, σ, ξ) = −m log(σ) − (1 + 1/ξ) Σ_{i=1}^{m} log[1 + ξ((z_i − μ)/σ)] − Σ_{i=1}^{m} [1 + ξ((z_i − μ)/σ)]^{−1/ξ},   (1.25)

given that 1 + ξ((z_i − μ)/σ) > 0 for all i = 1, 2, …, m. If ξ = 0, we compute the log-likelihood of the Gumbel distribution, the limit of the GEV under this assumption:

ℓ(μ, σ) = −m log(σ) − Σ_{i=1}^{m} ((z_i − μ)/σ) − Σ_{i=1}^{m} exp[−((z_i − μ)/σ)].   (1.26)

Under the assumptions listed earlier in the paragraph, the approximate distribution of (μ̂, σ̂, ξ̂) is multivariate normal, with mean (μ, σ, ξ) and covariance equal to the inverse of the observed (computed) information matrix evaluated at the maximum-likelihood estimate (Coles [2001]). In the process of optimization under Newton-type methods this is equal to the inverse of the Hessian matrix evaluated at the local maximum of the log-likelihood function or the local minimum of the negative log-likelihood function.
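A direct implementation of (1.25)-(1.26) is short. The sketch below (simulated Gumbel data, an arbitrary seed, and a Nelder-Mead search are illustrative choices, not the fitting procedure of later chapters) minimizes the negative log-likelihood; the covariance of the estimates could then be approximated by inverting a numerically computed Hessian at the optimum, per the preceding paragraph:

```python
import numpy as np
from scipy import optimize

def gev_nll(theta, z):
    """Negative log-likelihood from eqs. (1.25)-(1.26); theta = (mu, sigma, xi)."""
    mu, sigma, xi = theta
    if sigma <= 0:
        return np.inf
    if abs(xi) < 1e-8:                         # Gumbel limit, eq. (1.26)
        t = (z - mu) / sigma
        return z.size * np.log(sigma) + t.sum() + np.exp(-t).sum()
    y = 1 + xi * (z - mu) / sigma
    if np.any(y <= 0):                         # outside the GEV support
        return np.inf
    return (z.size * np.log(sigma)
            + (1 + 1 / xi) * np.log(y).sum()
            + (y ** (-1 / xi)).sum())

rng = np.random.default_rng(3)
z = rng.gumbel(loc=1.0, scale=0.5, size=500)   # stand-in block maxima
fit = optimize.minimize(gev_nll, x0=[z.mean(), z.std(), 0.1],
                        args=(z,), method="Nelder-Mead")
print("MLE (mu, sigma, xi):", fit.x)
```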

Care must be exercised when estimating GEV parameters with MLE for small sample sizes, e.g., sample sizes less than 50. In such circumstances the MLE may be unstable and can give unrealistic estimates for the shape parameter (ξ) (see Hosking and Wallis [1997], Coles and Dixon [1999], and Martins and Stedinger [2000, 2001]). On the other hand, MLEs allow covariate information to be more readily incorporated into parameter estimates. Furthermore, obtaining estimates of standard error for parameter estimates using MLEs is easier compared to most alternative methods (Gilleland and Katz [2006]).

1.11 Departures From Independence
It should be clear from the previous discussion that the univariate EVT is reasonably well developed. However, there are many other threads of active research. For example, extreme-value research often starts with the assumption that the underlying RVs are i.i.d., that is, independently and identically distributed. Another important active area of research relates to problems involving the joint extreme behavior of uncertain phenomena.

With respect to the issue of independence, an important assumption underlying many of the previous results is stochastic independence, that is, cov(X_t, X_{t−j}) = 0.⁴ Realistically, financial data are serially correlated and maxima occur in clusters. Two related issues arise in the literature: (1) How are investigators to decide when extremes, particularly those defined as peaks above a threshold, represent new activity or are just a clustering phenomenon? (2) How do we go about making adjustments to the theory to take into account the presence of autocorrelation or dependency structures among the observations, including seasonal dependence?

⁴ t is an arbitrary value of the index sequence used as a reference for other values of the index in the sequence. In this context it refers to discrete points in time.

1.11.1 Threshold Clustering
In the presence of substantial serial correlation, and under a POTs model, it is likely exceedances over a threshold will cluster. A number of methods for dealing with this behavior have been developed and presented in the literature. The element many of the solutions have in common is the creation of a definition of a cluster and then the application of the POTs method to the peak value, or a function of the peak values, within the cluster. Solutions for defining and handling clusters, and often simultaneously estimating u (the appropriate value of the extremes threshold), are found in Leadbetter et al. (1989), Davison and Smith (1990), Smith and Weissman (1994), Walshaw (1994), and Smith (2003).
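One commonly used member of this family is runs declustering (see, e.g., Coles [2001]): exceedances of u separated by fewer than r consecutive non-exceedances are treated as a single cluster, and only each cluster's peak is retained. A minimal sketch follows; the run length r is a tuning choice, and the routine is an illustration rather than any specific published scheme:

```python
import numpy as np

def decluster_runs(x, u, r=5):
    """Return cluster peaks: a cluster of exceedances of u ends once
    r consecutive values fall at or below u."""
    peaks, peak, gap = [], None, r
    for v in x:
        if v > u:
            peak = v if peak is None else max(peak, v)
            gap = 0
        else:
            gap += 1
            if peak is not None and gap >= r:
                peaks.append(peak)
                peak = None
    if peak is not None:                 # close a cluster still open at the end
        peaks.append(peak)
    return np.array(peaks)
```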

1.11.2 Serial Correlation Effects
Within series or processes, dependency structure raises issues in several forms. These include the previously cited issue of clustering. Beyond clustering, we face a variety of analysis concerns arising from the requirements of stationarity and independence. As is the case for many temporally ordered sequences, strict stationarity, or the property of consistency in all of the moments without respect to start position for the sample, is a concern for extreme-value analysis. Strong stationarity requires that the stochastic properties of a series are homogeneous through time. This means that the joint distribution of X_t, X_{t+c_1}, X_{t+c_2}, …, X_{t+c_n} is the same as that of X_{t+l}, X_{t+c_1+l}, X_{t+c_2+l}, …, X_{t+c_n+l}, where l is a constant and all the subscripts resolve to an existing index value. In the event a probability distribution exists, stationarity implies the moments of the two distributions will be identical, suggesting the probability distribution will be the same. Many theorems assuming strong stationarity also work (asymptotically) under the assumption of what is known as "weak stationarity," wherein it is assumed that only the first two moments (the mean and the variance) are not changing, or are homogeneous, over time. So, as is often the case, the researcher in an extreme-value analysis may be satisfied with weak stationarity. A series may still be stationary or can easily be made stationary. Lack of weak stationarity for extreme-value analysis has been handled in the literature by common methods, including: use of inhomogeneous models; detrending (differencing, prewhitening, curve fitting, and filtering); and handling nonconstant variance (logging and weighting) (Chatfield [1996]).

The presence in the data of general serial or temporal dependence clearly impacts the results of the GEV model as presented. Of course, there are an infinite number of forms of dependence a series can take on. However, there are fundamentally two ways to adapt an analysis in which the researcher assumes independence when there appears to be a lack thereof.

One is to adjust the hypotheses of the given theorem or analysis approach to be used, as well as the consequences/results of same, for the lack of independence. In the case of serial dependence a researcher may need to change the structure of the variance-covariance matrix, for example indexing the elements of the variance-covariance matrix by time step, and then adjust the associated theorem or application accordingly.

A second approach is to take advantage of a consequence of stationary processes; that is, under a stationary process the strength of the dependence decreases both mathematically and empirically as the time between the random variables increases. This leads, for example, to approaches that attempt to create near-independence by selecting observations at some distance apart from each other. Coles (2001) calls this satisfying the D(u_n) condition, which holds iff, for all i_1 < … < i_p < j_1 < … < j_q with j_1 − i_p > l,

|P(X_{i_1} ≤ u_n, …, X_{i_p} ≤ u_n, X_{j_1} ≤ u_n, …, X_{j_q} ≤ u_n)
 − P(X_{i_1} ≤ u_n, …, X_{i_p} ≤ u_n) P(X_{j_1} ≤ u_n, …, X_{j_q} ≤ u_n)| ≤ α(n, l),   (1.27)

where α(n, l_n) → 0 for some sequence l_n such that l_n/n → 0 as n → ∞. The idea is that under the D(u_n) condition the difference between the true joint distribution and the approximate joint distribution approaches zero sufficiently fast (the quantity within the absolute value in the above inequality gets sufficiently close to zero), so that it has no effect on the limit of the GEV.

One means of effecting this result is to take samples at a sufficiently large distance apart from one another. Of course, such an approach cuts the number of observations available for an analysis and therefore may be impractical in many cases. Finally, serial dependence often is found in the form of seasonal dependence. Seasonal dependence of order d is such that Cov(X_t, X_{t+d}) ≠ 0 for all t. Both for environmental and financial data, seasonal dependencies are often present for market data that are a function of the time of year or of the position within a larger-scale business cycle. Extending the extreme formulations (block maxima, POTs, and P-GPD) to incorporate seasonal dependence can be accomplished by a number of methods, some of which are commonly used in the context of any type of serial correlation, including:
1. Removal of the seasonal effects, either through fitting trend models to the data, perhaps using periodic functions, or seasonally differencing the data.
2. Creation and fitting of inhomogeneous P-GPD process models, with the inhomogeneity defined by allowing the value of λ [eqn. 1.12] (and potentially other model parameters) to vary and be estimated separately by season.
3. Expanding the model by allowing either the expected values, or the deterministic portion of the model, or the error structure to be adjusted by the presence of covariates.

We also find some of these techniques useful in the environment of modeling multivariate extrema, and as a consequence we revisit these topics later in the thesis.

It should be clear from the previous discussion that stationarity and dependence are constructs that will be important elements in later results. In particular, the researcher calls attention now to a complication that arises later in the research in the guise of analyzing covariation and which involves stationarity and independence: the decomposition of variation over a random field into temporal and spatial components. There is clearly a temporal component to financial returns, provoking issues, as per the previous discussion, of both stationarity and independence. Amongst other purposes, the use of a Gaussian Process in the model for multivariate return values (Section 5.5) was chosen as a method to deal with these issues. Additionally, there are spatial dimensions, to be defined later in the research, to which financial returns will be indexed. The addition of spatial dimensions is very common in the study of extremes, especially among studies of extrema that focus on climatology or weather; for examples see Schlather and Tawn (2002, 2003), Heffernan and Tawn (2004), and Gilleland and Nychka (2005). Of course, for natural resources the spatial dimensions are more intuitive, commonly understood geographic ones, e.g., latitude, longitude, and state plane feet/coordinates, while the spatial dimensions for finance are considerably more esoteric. Regardless of the definition of spatial coordinates, many of the techniques used are the same.

1.12 Return Values
Extreme values of differing phenomena can be placed on a more comparable scale through the introduction of estimates of extreme quantiles. If we invert the GEV distribution function G(x), we obtain estimates of the extreme quantiles from the following equations:

x_p = μ − (σ/ξ)[1 − (−log(1 − p))^{−ξ}],  ξ ≠ 0,
x_p = μ − σ log(−log(1 − p)),  ξ = 0,   (1.28)

where p is the probability, such that G(x_p) = 1 − p. Therefore, we may define x_p as the return level associated with the 1/p return period. Another way of thinking about this result is as follows: it is expected that the quantile x_p will be exceeded on average once every (1/p) base periods (defined by the interval used to group the data and compute extrema, e.g., weeks, months, quarters, years, etc.).
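Equation (1.28) translates directly into code; the helper below is a sketch, and the example parameter values are hypothetical:

```python
import numpy as np

def return_level(p, mu, sigma, xi):
    """Quantile x_p of eq. (1.28), exceeded on average once per 1/p base periods."""
    if abs(xi) < 1e-8:                                   # Gumbel case
        return mu - sigma * np.log(-np.log(1 - p))
    return mu - (sigma / xi) * (1 - (-np.log(1 - p)) ** (-xi))

# the 20-period return level for hypothetical weekly-maximum-loss GEV parameters
print(return_level(p=0.05, mu=0.02, sigma=0.01, xi=0.1))
```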

One of the useful aspects (at least for the research pursued here) of working with the return value, return level, and return period is the importance of the analog of this construct in the financial world. Recent definitions of financial risk have moved beyond simple measures such as variance. Value at risk (VaR), popularized in the 1990s, is a means of looking more closely at the tail behavior of financial performance and defining portfolios more directly in terms of confidence (probability) of the magnitude of a loss. To define VaR (McNeil et al. [2005]), let α be a confidence value with α ∈ (0, 1). Fix α and fix a period m; then VaR_α is the smallest x such that the probability of X exceeding x is less than or equal to (1 − α), or more precisely:

VaR_α = inf{x : P(X > x) ≤ 1 − α},  α ∈ (0, 1).   (1.29)

So, VaR is a quantile value of a loss-probability distribution, which is defined by a period of time, a confidence value (typically 0.90, 0.95, or 0.99), and a loss distribution (where gains may be considered to be negative losses). Figure 1.6 identifies some key relationships between the concept of the return value and VaR. Conditional VaR (CVaR), also known as expected shortfall, is the expected value of the random variable X | X ≥ x, where x is VaR_α (with losses taken as positive, per the earlier convention). These relationships between VaR and the return value will be exploited later in this research.
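For a loss sample, both quantities reduce to a quantile and a tail mean. The sketch below uses a synthetic loss series (the distribution and scaling are arbitrary) and follows the loss-positive convention of eq. (1.29):

```python
import numpy as np

rng = np.random.default_rng(4)
losses = 0.01 * rng.standard_t(df=4, size=100_000)  # stand-in daily losses (positive = loss)

alpha = 0.95
var = np.quantile(losses, alpha)        # smallest x with P(X > x) <= 1 - alpha
cvar = losses[losses >= var].mean()     # expected shortfall beyond VaR
print(f"VaR = {var:.4f}, CVaR = {cvar:.4f}")
```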

1.13 Multivariate Extreme Distributions
A final (but very important) research thread to overview in this Chapter is the transitions or changes in theory that occur when moving from a univariate model of extremes to a bivariate or multivariate model of the stochastic nature of extremes. Moving from univariate to multivariate models might be described as moving from the probabilistic behavior of individual random variables, or random vectors made up of one random variable, to random vectors made up of two (bivariate) or more (multivariate) random variables. The probability laws describing the distribution of density or probability of two or more random variables simultaneously (if the random variables composing the random vector are continuous) are known as joint probability density or probability distribution functions. There are discrete-random-variable analogues; however, some of the terms are different than in the continuous case. In this research discrete random variables will not be used.

Figure 1.6 Relationships Between Return Values. (Figure annotations: Return Value = Value at Risk (VaR); Probability of the Return Value = 1 − confidence level of the VaR = 1/return period; CVaR is the corresponding tail expectation ∫ t f(t) dt of the loss density.)

While there are some well-known results in statistics, for specific probability laws or specific probability families, that readily define the formation of joint probabilities from marginal or univariate random variables, there are no such results that provide generalizations on the formation of joint GEV distributions from distributions of lower dimensions. Suggesting and applying a method to allow the researcher to join the uncertainty in the extreme behavior, in this case of financial equities, will be one of the key efforts in this research. While much of the previous research has dealt with the extremes of a single random variable within univariate circumstances, researchers such as Smith (1989) and Davison and Smith (1990) have developed regression models within an extreme-value context, allowing analysts to relate response variables to covariates. In point of fact there are some important results for the formation of bivariate probability distributions of extremes. Let's look at an example (after Cebrián et al. [2003]) for random vectors (X_1, X_2). The distributions of these random vectors are said to conform to an EVD with unit exponential margins if:

P[X_1 > y_1] = exp(−y_1) and P[X_2 > y_2] = exp(−y_2), for y_1, y_2 > 0,   (1.30)

and the joint cumulative distribution function defined as

F(x_1, x_2) = P(X_1 > x_1, X_2 > x_2)   (1.31)

possesses the scaling property, e.g., max-stability (while we initially defined max-stability as a univariate specification, it may be generalized to include multivariate environments [Smith et al., 1990]):

F^n(y_1, y_2) = F(n y_1, n y_2), for any y_1, y_2 and n ≥ 1.   (1.32)

By a bivariate extension of Theorem 2, F^n(y_1, y_2) in the limit is a bivariate EVD.

1.13.1 Dependence Functions And Copulas
An important element in forming a joint distribution is called a dependence function. If the above cdf can be expressed as

F(y_1, y_2) = exp{−(y_1 + y_2) A[y_2/(y_1 + y_2)]}, for any y_1, y_2 > 0,   (1.33)

where

A(w) = ∫_0^1 max[w(1 − q), (1 − w)q] dL(q)

for some positive finite measure L on [0, 1], then the function A is called the dependence function. A random bivariate vector follows an EVD if and only if F(y_1, y_2) can be expressed as the two equations above (Tawn [1988]). But results like these are few and far between for problems of higher dimensionality. An important set of dependence functions is the extreme-value copula. In the bivariate context we have:

C(u_1, u_2) = exp{[log(u_1) + log(u_2)] A[log(u_2)/(log(u_1) + log(u_2))]}   (1.34)

where A is a dependence function (defined earlier), 0 < u_1, u_2 < 1, and C is the extreme-value copula (Nelsen [1999]).

In the multivariate context only some of the bivariate results extend (Tawn [1990]). For example, the application of such results to spatial-multivariate analyses is problematic: "… In the analysis of environmental extreme value data, there is a need for models of dependence between extremes from different sources: for example at various sea ports, or at various points of the river" (Tawn [1988]). Further, and in a more general context, according to Tawn (1990) there is not a natural or consequential parametric model of the dependence behavior between two (or more) extreme-value marginal distributions. Despite growth in the research over the past 20 years, "copulas are well studied in the bivariate case, the higher-dimensional case still offers several open issues and it is far from clear how to construct copulas which sufficiently capture the characteristics of financial returns" (Fischer et al. [2009]).
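A concrete instance of eq. (1.34) is the logistic (Gumbel-Hougaard) family, whose dependence function is A(w) = [w^θ + (1 − w)^θ]^{1/θ} with θ ≥ 1; θ = 1 gives independence. A minimal sketch, offered only as an illustration of the bivariate construction:

```python
import numpy as np

def A_logistic(w, theta):
    """Dependence function of the logistic (Gumbel-Hougaard) family."""
    return (w ** theta + (1 - w) ** theta) ** (1 / theta)

def ev_copula(u1, u2, theta):
    """Bivariate extreme-value copula of eq. (1.34) with A = A_logistic."""
    s = np.log(u1) + np.log(u2)
    return np.exp(s * A_logistic(np.log(u2) / s, theta))

print(ev_copula(0.9, 0.95, theta=1.0))  # independence: equals 0.9 * 0.95
print(ev_copula(0.9, 0.95, theta=2.0))  # positive extremal dependence
```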

The conclusion we may draw is that there are no unique or generally agreed approaches for combining the uncertainty from two or more extreme RVs, and there are not a lot of generally applicable results for developing joint distributions for three or more RVs. In fact, Tawn's quote provided above could just as well be applied to the analysis of extrema for securities in general and equities in particular. Therefore, an important focus for the present research, which will be detailed in the next and subsequent Chapters, is the effort to recast the problem so as to characterize the joint uncertainty in the tails of the distributions as the behavior of a joint model of the RVs related to a specified set of return levels and return periods. We can thereby move away from the problem of forming the joint pdf of the RVs analyzed as extreme values or otherwise.

1.14 Organization Of Remainder Of Dissertation
The organization of the remainder of this thesis is as follows:
Chapter 2: Statement of the Problem and Outline of the Proposed Research
Chapter 3: Data, Data Analysis Plan, and Pre-Analysis Data Manipulations
Chapter 4: Fitting Time-Varying GEVs
Chapter 5: Estimating Time-Varying Return Levels and Modeling Return Levels Jointly
Chapter 6: Consequences of the Research for Portfolio Formation and Innovation
Chapter 7: Summary and Conclusions, Along With Thoughts on the Current Research

2. Statement Of The Problem And Outline Of The Proposed Research

2.1 Overview Of The Chapter
In Chapter 2 the problem being examined will be laid out, as well as the technology/statistical framework in which the research will be conducted. These include the goals of the research, the process and steps by which the research will be conducted, and the domain universe and consequential data sets to which the research will be applied.

The research will examine monetary returns, in particular extreme monetary returns of financial equities, because, as described in Chapter 1, Section 1.4, downside or negative returns are at the heart of portfolio design. Additionally, because a portfolio is typically composed of multiple securities (multiple equities in the present circumstance), it is the joint or multivariate behavior of such monetary returns, rather than the univariate behavior, which is of most interest.

To this end the research exploits extreme value theory to describe the random behavior of downside returns.

Many disciplines which make use of GEV theory find that static or non-time-varying models (models in which the distributional parameters do not change) make sense or are at least sufficient for the dimensions of time and space over which the research is being conducted. However, financial markets, especially at the end of the last decade and the beginning of this one, are very dynamic. Therefore, a key hypothesis in this research is that 1) financial and market conditions have a significant impact on the behavior of extreme monetary returns, and 2) a time-varying GEV model, in which the parameters are functions of the financial markets and the economy, is more appropriate.

As per issues raised in Chapter 1, Section 1.13.1, the researcher has chosen to use return values as the basic element for a joint model or multivariate model of downside behavior. Return values are a means for quantifying and selecting the investor's desired level of risk and uncertainty. A joint behavior model will be developed through the use of a Gaussian Process model (see Chapter 5, Section 5.5).

This Chapter carefully enumerates and introduces both the processing steps and the data types to be employed in the research. It concludes with a section on how the research may be used in the financial domain and in financial research.

2.2 Research Threads
Myriad threads can be derived from the earlier research reviewed in Chapter 1. Pertinent to the present research, these include:
1. Treating univariate processes as members of the generalized extreme-value/Poisson-Generalized Pareto Distribution (GEV/P-GPD) family of distributions. There are a number of formalisms for modeling an extremal environment that in turn lead to models describing the behavior of extreme-value random variables. These models and distributions have both clear conceptual and mathematical relationships to one another.
2. The choice between looking at the estimation of the extreme as a block maximum or as a value greater than a threshold, and the bias-variance tradeoff inherent in the estimation process.
3. The multi-faceted impact of serial correlation and the lack of stochastic independence, one of the very common consequences of which occurs in the cluster definition mechanism, wherein under a "peaks-over-thresholds" (POTs) model the exceedances are defined as belonging to clusters of exceedances. These clusters are a function of presumed positive autocorrelation; that is, if a value exceeds the threshold, it is more likely that the value at the next time step will exceed the threshold than if the previous value is lower than the threshold. An exemplary solution is provided in Smith et al. (2006).

4. The handling of nonstationarity as inhomogeneity in the base distributions through the use of covariates to describe the form of the inhomogeneity in terms of changes in the parameters; in other words, moving as much of the variation as possible into the model of the expected value of the parameters.
5. Issues in the definition of multivariate models as a combination of component-wise univariate extreme-value marginal distributions, combined with a dependence function or copula being used to functionally connect the independent univariate distributions to the joint probability density function.
6. The definition of return level and its interpretation as the means of relating the distribution of extremes to the behavior of phenomena under study within the domain of interest. This result is parlayed in the connection between return level and the financial concept of value at risk (VaR).
7. The increasingly popular incorporation of spatial covariation/correlation structures as a means of relating extreme-value point processes. For example, in the field of environmental or climatological studies the collection of observations at geographically distributed stations or spatially diverse locations provides the spatial network, with latitude, longitude, and altitude (or some related triple) forming the dimensions.

Using many of the research threads described above and in Chapter 1 as prologue, the goal of the present research is to develop a model to examine the joint behavior of a universe of financial securities.

With joint behavior we wish to describe, in a stochastic sense, extreme returns (to be defined later) over a large universe of securities, using extreme-value theory as well as market variables (also to be defined later) and covariation structures from each. The model and its elements will then be used as an aid to gain insight into, and perhaps perform, a number of important finance functions such as risk estimation, portfolio definition, and designing and forecasting returns. Therefore, it is hoped this model will provide both explanatory power, to aid in understanding the variation in returns, and predictive power, to aid in the formation of portfolios.

In developing the detailed discussion of the proposed research the researcher has identified the following goals or threads to be examined in the research. This is not meant to be a procedural list; procedures will be enumerated in this Chapter and subsequent chapters. Also, the details underlying these threads will be exposed elsewhere as appropriate. The goals of this research are:
1. Define joint extreme-value behavior of securities through the behavior of return levels.
2. Use this joint behavior and the introduction of covariate-driven parameters to define a model of return levels in space-selected market dimensions.
3. Adapt this model to derive portfolios of target return-level behavior, using VaR:return-level relationships.

4. Investigate the propagation of extreme values by introducing extreme innovations into portions of the market.

We frame answers to these questions, as well as provide guidance on how to generate an understanding of the phenomena of securities' returns, in the process of laying out the model. The model will be derived as an analogue, for the financial domain, of a model proposed by Smith et al. (2006) for the climatological domain, specifically the distribution of extreme precipitation events over the contiguous U.S. The current research is aimed at providing a solution to the problem of modeling extrema in financial returns that moves away from the formalisms that make use of dependence structures and copulas and the non-uniqueness formulations that are part and parcel of such approaches.

Figure 2.1 depicts a high-level data-flow diagram of the overall processing steps to be conducted in this thesis. The processing ovals take in data from outside sources, as well as data in the form of derived results from earlier processing steps, along with control data (not shown), and generate results and data to be used as input for later processing steps. Clearly, we could "drill down" into each processing oval to discover additional processing steps, along with considerable detail. For each of the processing ovals this drill-down is articulated in the summary in this Chapter but effected in detail in the named Chapter associated with each processing oval.

It is in the performance of these processing steps that the results necessary to accomplish the research threads and produce the accompanying results will be generated.

Figure 2.1 High-Level Data Flow Diagram Depicting The Organization Of Research Elements In This Thesis.

2.3 Ingest, Clean, And Reformat Data
Three major groupings of data will be used in this thesis:
1. Equity⁵ performance data, including:
• Equity identifier data
• Share price data (U.S. dollars)
• Total return data (returns adjusted for corporate actions, dividends, capital distributions, stock splits, etc.)
• Market capitalization net of Treasury shares
• Volume or number of shares traded
2. Covariate data used to create time-varying estimates of GEV parameters:
• Time's arrow
• Risk-free interest rate (vars. [various] countries)
• Rates (vars. countries and maturities)
• Commercial paper rates (vars. countries)
• Salient interest-rate spreads
• Consumer spending measures
• Inflation
• Unemployment rate
• Trigonometric functions à la Smith et al. (2006)
• Recognized indices in all major world markets
• Volatility measures

⁵ An equity is a financial security which provides the owner a portion (often a very small portion) of the ownership of the business it is issued against. Shares of common stock are an example of an equity. The price of the equity traded on the public market is the key characteristic of equity used in this research. The price is a (poorly understood) function of the traders' expectations of the success of the firm over a range of time frames.

3. Ancillary data used to segment equity data, and return values in particular:
• Sector
• Market capitalization
• Continent and country
• Exchange on which the equity is traded
• Geographies representing the issuing corporation's headquarters and stock exchange trading

The emphases of this processing step will be identification of data sources, cleaning of the data (i.e., missing-data processing, filtering time series based upon completeness), initial transformation or manipulation of the data to create derived series, and reformatting of data for further processing. This effort will be described in Chapter 3.

2.4 Fitting Time-Varying GEVs
Recall from Chapter 1, Section 1.8.3, that the extreme-value theorem (EVT) can be reformulated as a generalized extreme-value (GEV) distribution (reproduced here for convenience):

G(x; μ, σ, ξ) = exp{−[1 + ξ((x − μ)/σ)]_+^{−1/ξ}}   (2.1)

defined on the set {x : 1 + ξ((x − μ)/σ) > 0}, where [z]_+ = max(z, 0), −∞ < μ < ∞, σ > 0, and −∞ < ξ < ∞. Also recall that there are equivalences between the extrema-generation models described in Chapter 1, Section 1.9.4. In a preliminary analysis it was concluded that a block-maxima model was a superior model, both from a domain vantage point (focus upon trading periods such as weeks and months is a common framework used by traders in thinking about their trading activities) and because of the success in the earlier analysis performing parameter estimation using the block-maximum-related form of the GEV. To deal with the likely occurrence of nonstationarity in securities' returns, we generalize Equation 2.1 to allow for time-dependent inhomogeneity in the parameters μ, σ, and ξ; we reformulate Equation 2.1 as follows.

For ξ_t ≠ 0:

G(x; μ_t, σ_t, ξ_t) = exp{−[1 + ξ_t((x − μ_t)/σ_t)]_+^{−1/ξ_t}}   (2.2)

where [z]_+ = max(z, 0), 1 + ξ_t((x − μ_t)/σ_t) > 0, −∞ < μ_t < ∞, 0 < σ_t < ∞, and S_a ≤ ξ_t ≤ S_b, with S_a and S_b suggested by the limits offered by Smith (1985), as presented in Section 1.10.

For ξ_t = 0:

G(x; μ_t, σ_t) = exp{−exp[−((x − μ_t)/σ_t)]}   (2.3)

where −∞ < μ_t < ∞ and 0 < σ_t < ∞.

Let X_1, X_2, …, X_n be a time-ordered nonstationary set of extreme returns from a security, such that X_1 is the oldest random variable and X_n is the youngest random variable. Then we assume each X_t ~ GEV(μ_t, σ_t, ξ_t). As a consequence we postulate a model for an extrema-probability distribution (Equation 2.2 or 2.3) that is inhomogeneous in the model parameters. The inhomogeneity of the parameters of the distribution is in turn described as a function of covariates. As a result of identifying the functional relationship between the covariates and the parameters, changes over time in the covariates are used to describe changes in the parameters over time.
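As an illustration of what such a covariate-driven likelihood can look like, the sketch below uses hypothetical link functions (μ_t linear in a single covariate z_t, log σ_t linear in z_t, and ξ_t constant). These links, the simulated data, and the optimizer are illustrative assumptions, not the specification developed in Chapter 4:

```python
import numpy as np
from scipy import optimize

def tv_gev_nll(beta, x, z):
    """Negative GEV log-likelihood with time-varying parameters:
    mu_t = b0 + b1*z_t, log(sigma_t) = c0 + c1*z_t, xi_t = d0 (hypothetical links)."""
    b0, b1, c0, c1, d0 = beta
    mu = b0 + b1 * z
    sigma = np.exp(c0 + c1 * z)              # log link keeps sigma_t positive
    y = 1 + d0 * (x - mu) / sigma
    if abs(d0) < 1e-8 or np.any(y <= 0):     # keep the search inside the support
        return np.inf
    return np.sum(np.log(sigma) + (1 + 1 / d0) * np.log(y) + y ** (-1 / d0))

rng = np.random.default_rng(5)
t = np.arange(400)
z = np.sin(2 * np.pi * t / 52)                          # stand-in market covariate
x = rng.gumbel(loc=0.02 + 0.01 * z, scale=0.01)         # maxima with a drifting location
fit = optimize.minimize(tv_gev_nll, x0=[0.02, 0.0, np.log(0.01), 0.0, 0.1],
                        args=(x, z), method="Nelder-Mead")
print("estimated (b0, b1, c0, c1, d0):", fit.x)
```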

This parameterization allows each of the distribution parameters to be expressed in terms of covariates, the goal being to find the subset of covariates that, along with the data, maximizes the likelihood and yields good-quality measures for other model-fitting diagnostics. While this research is detailed in Chapter 4, it is worth noting here that the preliminary data examination provided some observations on model fitting that directed the manner in which the larger piece of research was conducted. Within the preliminary analysis the maximum-likelihood search proved very sensitive and catastrophically failed to converge when multiple (and, as it turned out, nonsignificant) parameters were entered into the model at the same time. To achieve a parsimonious model and avoid the convergence problems observed at the outset of the analysis, a step-wise strategy (forward and backward) was adopted.

2.5 Computing Return Values And Developing A Dependence Function Model
As per the material presented in Chapter 1, it is the researcher's contention that it will be more advantageous to work ultimately in terms of shortfall probability, commonly called just "shortfall." Shortfall is the probability that, for a given threshold and given probability law, the return will be lower than the threshold. Shortfall probability has, as noted in the first Chapter, Section 1.12, a return period and return level or quantile. As also pointed out in the same section, the shortfall probability is often used within the financial concept of VaR.

In the present research this shortfall probability concept is being translated into an examination of the return period or return level and the return value. Because the return level is created by computing the inverse of the GEV, the use of a time-varying GEV will have an impact on the methods used to compute the time-varying return level (detailed in Chapter 5, Section 5.3).

Having used covariates that reflect behavior in the market and the economy to estimate parameters of the extreme-value distributions (EVDs), these parameter estimates are used in turn to estimate the return values and the errors in their estimates. In Chapter 5, Section 5.3, results specific to each security from the GEV fittings are used to model return levels. In Chapter 5, Section 5.4, a method to compute the variance-covariance of return values is described.

Following and using these results, the research examines an interesting and useful question: are these return values functions of important physical and market dimensions? Let the return values represent a new set of random variables x_N^{Θ_i}, i = 1, 2, …, K (the number of securities), under the parameterization Θ̂_i for the N-th return level; the realizations of these random variables (the return values) are notated x̂_N^{Θ̂_i}. In Chapter 5 (Sections 5.5 through 5.9) the modeling of these (return) values as functions of physical and market dimensions is conducted.

Under such an outcome, an arbitrary but fixed vector of return values, perhaps a domain-meaningful vector, can be described as a function of a host of ancillary characteristics.

2.6 Portfolio And Innovation Processing
Chapter 6 returns the thesis more explicitly to the problem domain of finance and portfolio design and evaluation. Armed with the results of Chapters 4 and 5, Chapter 6 focuses on two areas of research:
1. What do these models have to say about the portfolio formation and modification problem?
2. What can we learn from these models about how extreme values (innovations) propagate through the system of equities and the model developed from them?
Combining the various datasets and models produces a pool of results we may use to describe the joint uncertainty of equities over a set of salient market dimensions. From the expected-return-value model and the joint distribution of return levels, an appropriately guided investor can select a portfolio that has a particular return level while minimizing the tail probability.

3. Data, Data Analysis Plan, And Pre-Analysis Data Manipulations

3.1 Overview Of The Chapter
The focus of Chapter 3 is upon the data which define the basic experimental unit of the research, that is, the publicly traded equity. The Chapter covers the types of data which define the properties of the publicly traded equity and the environment in which it "lives." In addition, the Chapter describes the universe of publicly traded equities examined in this research and the manipulations performed on the data prior to, and in preparation for, the fitting of time-varying Generalized Extreme Value distributions described in Chapter 4, Section 4.7.

The research involves data from publicly traded equities listed on all recognized stock exchanges worldwide for the period 2000-2007. Salient data concerning the markets and the economy, which form much of the environment in which these equities live, were also of interest. Where appropriate, the data used were collected on a daily basis. The number of equities in the universe was initially 76,424; through various filters which were required to facilitate the analysis, the number was reduced to 12,185, of which a sample of 3,000 was randomly selected for model construction (sometimes referred to as the training set) and 100 were randomly selected for the test set.

Three groups of data were used in the research. These were:
1. Performance Data,
2. Covariate Data, and
3. Ancillary Data.
The data are further defined according to their constituent members or variables and the values these variables may take on.

A number of processing steps were performed and reported upon in the Chapter. These include:
1. An analysis of the market value/capitalization data to determine reasonable cutoff values for a discrete market-value classification at the global and continental levels.
2. The development and application of a liquidity-of-trade evaluation. Liquidity in the trading of securities is a latent or hidden property of the security. A highly liquid security is one which a trader can buy or sell readily at a fair price. The liquidity test developed here was applied to the equity universe, eliminating those equities deemed illiquid. The test has now found its way into data processing at the Thomson Reuters index-generation business unit. (A schematic sketch of this kind of screen appears after this list.)

3. The reduction or sub-selection of the initially identified set of market and economic measures into a set of potential covariates for use in the fitting of time-varying GEV distributions. The covariates will be examined in Chapter 4, Section 4, for utility in modeling the time-varying GEV parameters. The method used in performing this analysis is based upon factor-analytic approaches and has now found its way into data processing at Thomson Reuters' index-generation business unit for the generation of optimal indices⁶.

⁶ Optimal indices are portfolios of securities wherein the security weighting scheme is generated by analytical research. This approach stands in opposition to the approach of passive index generation, in which the weighting scheme is taken from a common publicly known measure such as proportional market capitalization.
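Purely for concreteness, a liquidity screen of the general flavor described in item 2 might require a minimum fraction of active trading days and a minimum typical dollar volume. The rule and both thresholds below are invented placeholders; they are not the test actually developed for this research:

```python
import numpy as np

def passes_liquidity_screen(volume, price, min_active_frac=0.9, min_dollar_vol=50_000.0):
    """Hypothetical liquidity screen for one equity's daily volume and price series."""
    volume = np.nan_to_num(volume)            # treat missing volume as no trading
    active_frac = np.mean(volume > 0)         # share of days with any trading
    typical_dollar_vol = np.median(volume * np.nan_to_num(price))
    return active_frac >= min_active_frac and typical_dollar_vol >= min_dollar_vol
```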


3.2 Classification Of Data Types

Figure 2.1 portrays the data analysis plan for the proposed research. It consists of four major analysis stages, each of which contains myriad processing steps. In performing this research, the data used fall into one of three groups, briefly described in Chapter 2, Section 2.3:

1. Performance Data
2. Covariate Data
3. Ancillary Data

The basic experimental unit for the performance data is an equity issued by a publicly traded company. While it may be quite complex in structure, each share of equity at a first approximation represents a portion (usually a small portion) of ownership of the issuing company. The equity gives the holder a claim against the firm's owners' equity. Since owners' equity is the (potentially) volatile accumulation of undistributed profit, there is, unlike with bonds or fixed-income assets, no a priori fixed upper limit on the value of equity assets. Nor is there a lower limit, short of the physical value of zero (bankruptcy). Since bond holders stand in front of equity holders in the dissolution of a company, the probability of equity assets being zero is always greater than or equal to the probability of bond assets being zero. Prices of, and in turn returns on, equities tend to move more rapidly, and to reflect to a greater degree the goings-on in the marketplace and economy. This behavior led to a preference for using equities in the current analysis.


Secondly, by restricting the analysis to common-share equities and avoiding other asset classes and securities, the model was likely simplified at this initial stage of research.

3.3 Performance Data

All data source references are found in the section Sources Of Data, the last section of the dissertation. The sources of the performance data were the following systems: FactSet Research System (FactSet [2007]), Security Analysis and Validation Modeling (StockVal, Reuters [2007]), Lipper Analytical New Application (LANA, Lipper [2007]), and the Lipper security master file (Lipper [2008]). Daily data for four performance time series were downloaded for the period from January 2000 until the end of August 2007. These series, collectively known hereafter as the performance or market series, are given in Table 3.1.

Table 3.1 Equity-based time series used in the present research.

  Abbreviation/Name   Description
  PR, Equity Price    Daily closing price of a common share, provided in U.S. dollars.
  TR, Total Return    Daily return of an equity, adjusted to add back dividends and capital distributions.
  MV, Market Value    Daily market value of the equity-issuing firm, given in U.S. dollars.
  VT, Trade Volume    Daily number of shares traded.

3.4 Ancillary Data

Other points-in-time attributes/elements previously identified as ancillary data were also retrieved from the FactSet system and, to a smaller degree, from the Lipper security master file (SMF). These included:


Table 3.2 Description of the ancillary dataset used in the equity sample design.

  Name             Symbol   Description
  Continent        CN       The continent of security domicile: F = Africa, A = Asia, E = Europe, N = North America, S = South America.
  Country          CO       The country of security domicile. (See Appendix A for list.)
  Exchange         EX       Exchange on which the security is traded. (See Appendix A for list.)
  Exchange Region  ER       Region in which the exchange operates: Africa, Central Asia, Eastern Europe, Middle East, North America, Pacific Rim, South America, Western Europe.
  Trade Region     TR       Trade region of the organization issuing the security: the same eight regions as Exchange Region.
  Industry         IN       Industry in which the issuing firm participates. (See Appendix A for list.)
  Sector           SE       Sector in which the issuing firm participates. (See Appendix A for list.)


The data elements displayed in Table 3.2 were used in the design and selection of the securities sample, as described below.

In this data-preparation portion of the research, reducing the number of securities[7] included in the performance data set was desired. The goal of this data-reduction effort was to create a sample from which the pattern of change in joint extreme-value behavior could be discerned, particularly as a function of the financial and market factors embodied in the ancillary and covariate datasets,[8] but which was within the researcher's time and computer resource budget. Put more simply, the initial discussion in this Chapter focuses upon reducing the number of securities in the performance series to a much smaller number, so that the number of securities to be analyzed is tractable, while at the same time maintaining much of the structure of joint extreme-value behavior found in the larger set of securities. Table 3.3 provides counts associated with each of the funneling steps, which in turn are described below.

[7] The word securities, unless otherwise noted, in this dissertation means common shares or common equities.
[8] The covariate dataset is discussed in much greater detail below.


Table 3.3 Reducing the number of securities by processing stage.

  Processing Stage                                               Count      Remaining
  Original number of securities                                             76,424
  Having no return values over last four years                   (35,821)   40,603
  Not having at least 6 years of return values, 2002 to 2007     (16,481)   24,122
  Gaps of length longer than two                                 (1,270)    22,852
  Missing a market-cap value                                     (5,561)    17,291
  Not having disclosed sectors                                   (1,763)    15,528
  Usable series with respect to length, number of original
  observations, market-cap value, and identified industry
  sector                                                                    15,528

In a preliminary analysis, the researcher decided that the block-maxima model, using a week or a month as the block, provided both a sufficient number of observations in the context of six or seven years of data and stable, nondegenerate estimates of the various extreme-value parameters. Consequently, the first three processing thresholds were applied to filter the securities so that those passing through these selection criteria possessed the required length and time continuity of data. Additional reductions of 7,324 equity series were made because, in a sequential evaluation, there were no nearly complete market-capitalization values for 5,561 equity series and no industry or sector attributes for 1,763 equity series.


Even though this processing reduced the number of securities by two-thirds, it was still considered impractical by the researcher and the thesis committee to process 15,528 securities. Various committee members suggested that if the methodology worked for a universe as large as 15,528, it would work for a representative but smaller universe; although selected in a somewhat arbitrary fashion, the number of 3,000 was adopted. The need for a representative sample suggested a stratified sample. Four factors were selected for stratification. The first was the near-term market capitalization, commonly defined as the total value of a company's outstanding shares, calculated by multiplying the price of a common share of an equity by the number of shares outstanding in the marketplace (Investopedia [2007]). The other three factors used to stratify the samples in this analysis were the continent on which the security is domiciled, the sector classification under which the firm offering the security has its primary product or service offering, and the trade region in which the firm, against which the security is positioned, operates. The list of continents used in this analysis is given in Table 3.2, and the sectors are listed in Appendix A. While drilling down to a more granular level, such as country and industry, was a consideration, after examining the distribution of securities over this extended classification, the resulting schema appeared too sparsely covered. Trying to build models using this schema could result in biased estimates of model parameters, so the schema was pared back to the one described earlier.


3.5 Market Capitalization

The market capitalization of the security-issuing entity was a daily measure provided by FactSet (2007). The firms were grouped into six classes according to numerical market caps: mega-capitalization (hereafter abbreviated in many instances as cap), large-cap, mid-cap, small-cap, micro-cap, and nano-cap. According to Answers.com (2007), the market-capitalization cut points for 2006 were:

  Mega-Cap: market cap of $200 billion and greater
  Big-/Large-Cap: $10 billion to $200 billion
  Mid-Cap: $2 billion to $10 billion
  Small-Cap: $300 million to $2 billion
  Micro-Cap: $50 million to $300 million
  Nano-Cap: under $50 million

Allowing for an estimated growth of approximately 4.9%, according to the International Monetary Fund (2007), the cut points in Table 3.4 were used to divide the firms into capitalization classes. Computing an average market capitalization for the last quarter-year of the investigatory period (roughly covering June-August 2007), 51,590 members of the original universe were found to have sufficient observations. Figure 3.1 is a log-log plot of the market caps versus ranks for these 51,590 equity series. The axes for this plot are log rank of market capitalization versus log market capitalization in nominal dollars. Superimposed on the plot are the capitalization cutoffs per Table 3.4.


The mega-caps are clearly dominated by securities traded on North American and European exchanges.

Table 3.4 Market capitalization classes and their cut points. (B = billions, M = millions)

  Capitalization    Upper Cut Point   Lower Cut Point
  Mega Cap          --                $209.8B
  Big/Large Cap     $209.8B           $10.49B
  Mid Cap           $10.49B           $2.098B
  Small Cap         $2.098B           $314.7M
  Micro Cap         $314.7M           $52.45M
  Nano Cap          $52.45M           --

Figure 3.2 is a notched box-and-whisker plot of logged market capitalization for the same 51,590 equity series, by the continent of the exchange on which the security is traded. While the extreme values for each of the continents vary considerably, the distributions for each continent are fairly closely centered as measured by the sample medians. In fact, for Africa, Asia, Europe, and North America the notches appear to align, and the data fail to reject a significance test (p-value = 0.0931) of the null hypothesis that the medians are equal (Tukey [1977]).
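To make the classification rule concrete, the following is a minimal sketch (not the researcher's code) that maps an average market capitalization to one of the six classes using the growth-adjusted cut points of Table 3.4; the function name and the example value are illustrative.

```python
# Lower bound of each class in U.S. dollars, per Table 3.4.
CUT_POINTS = [
    ("Mega Cap",      209.8e9),
    ("Big/Large Cap", 10.49e9),
    ("Mid Cap",       2.098e9),
    ("Small Cap",     314.7e6),
    ("Micro Cap",     52.45e6),
    ("Nano Cap",      0.0),
]

def cap_class(avg_market_cap_usd):
    """Return the capitalization class for an average market cap."""
    for name, lower_bound in CUT_POINTS:
        if avg_market_cap_usd >= lower_bound:
            return name

print(cap_class(5.0e9))  # a $5 billion firm -> "Mid Cap"
```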


Figure 3.1 (figure not reproduced) Log-Log Plot Of Market Caps Versus Rank For 51,590 Equity Series By Continent Of Domicile, Overlaid By Market Cap Classes (Mega, Large, Mid, Small, Micro, Nano) As Defined In Table 3.4.


Figure 3.2 (figure not reproduced) Notched Box And Whisker Plot Of Logged Market Capitalization Of 51,590 Equity Series.

When the same plots are created for the reduced universe of 15,528, we can see some changes to the rankings. In Figure 3.3, the log-log plot of the market caps versus rank for the reduced universe, the North American equities dominate the top spots with respect to market cap, displacing the Europeans, who led the equity series in the larger universe. The median capitalization levels compared across the two universes are very close, and the medians across the continents for the reduced universe (Figure 3.4), while they show a bit more variation (p-value = 0.0279) than the larger universe, are still reasonably close. It is the researcher's conjecture that the reduction from the hypothetical universe to the available universe does not strongly bias the market-cap distribution.


Figure 3.3 (figure not reproduced) Log-Log Plot Of Market Caps Versus Rank For 15,528 Equity Series By Continent Of Domicile, Overlaid By Market Caps As Defined In Table 3.4.


3.6 Equity Liquidity

Examination of the return series shows that a value of 0 for the daily return was not uncommon. According to FactSet (FactSet staff, personal interview, November 5, 2007), these are accurate observations and are not indicative of missing or poor-quality observations. What does a 0 value for a daily return mean? A daily return is defined as

    R_t = \frac{P_t - P_{t-1}}{P_{t-1}}    (3.1)

where R_t is the total return for time t and P_t is the price, adjusted for corporate actions, at time t.

Figure 3.4 (figure not reproduced) Notched Box And Whisker Plot Of Logged Market Capitalization Of 15,528 Equity Series.


Therefore, a return value of 0 implies P_t = P_{t-1}. When substantial numbers of the daily values are 0, it is most likely that little or no trading of the security had taken place. Such a circumstance is known as illiquidity. Under illiquidity, "an asset or security cannot be converted into cash very quickly (or near prevailing market prices)" (Financial.Dictionary [2008]). A market-clearing price for a trade is not found as easily, or typically as rapidly, for an illiquid equity as for a liquid one. The price movement of an illiquid equity is "sticky" and not as regularized; also, the response of the return to the market factor is likely to be much more idiosyncratic. In the context of this research, a major consequence is that the fitting of a generalized extreme-value (GEV) model would be problematic because, as described in the next Chapter, the method of parameter fitting used in this research is a highly nonlinear, iterative search. The manifestation of this lack of liquidity would be a failure to converge to reasonable maximum-likelihood estimator (MLE) GEV parameters (as defined by Smith [1985] and reported on in Chapter 1, Section 1.10), or a failure to converge at all.

The GEV model-fitting activity will be developed in detail in Chapter 4. However, to look at the relationship between the distribution of zeros in the extreme-value series (liquidity) and the ability to obtain estimates confirming the MLE properties, non-time-varying estimates (again, discussed in Chapter 4) of the GEV location, scale, and shape parameters for the weekly and monthly extreme-value series were computed using the block-maxima model for all 15,528 equities resulting from the filtering detailed in Table 3.3.


If an equity's fit did not converge, or converged but did not meet the criteria laid out by Smith (1985), reproduced here for convenience, then a response variable describing the success of the fit was assigned the value 0; a successful convergence meeting the Smith criteria was assigned a 1:

  If \xi > -0.5, MLE-GEV estimators possess the common asymptotic properties.
  If -1 < \xi < -0.5, MLE-GEV estimators may be computed, but they do not have regular or standard asymptotic properties.
  If \xi < -1, MLE-GEV estimators may not be obtained.

Additionally, a number of independent (predictor) variables were created from the analysis of the extreme weekly and monthly series. These were:

  The largest number of consecutive zeros (Max.Gap)
  The percentage of zeros in the extreme-value series (Portion.Zero)
  The number of "zero gaps," namely, the number of sets of consecutive zeros separated by sets of one or more non-zeros (Num.Gaps)

A zero implies no change in price. The dependent random variable describing GEV model quality was modeled as a function of these functions of the sets of zeros, using a logistic regression model (Neter et al. [1996]).
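A minimal sketch of this liquidity screen, under stated assumptions: the three zero-based predictors are computed per series and fed to a logistic regression. The function names and the synthetic inputs are illustrative, and statsmodels stands in for whatever software the original analysis employed.

```python
import numpy as np
import statsmodels.api as sm

def zero_run_features(extremes):
    """Max.Gap, Portion.Zero, and Num.Gaps for one extreme-value series."""
    z = np.asarray(extremes) == 0.0
    runs, length = [], 0
    for is_zero in z:
        if is_zero:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    if length:
        runs.append(length)
    max_gap = max(runs) if runs else 0   # longest run of consecutive zeros
    return max_gap, z.mean(), len(runs)  # (Max.Gap, Portion.Zero, Num.Gaps)

# Illustrative usage with synthetic series; in the research, `converged`
# would be the 0/1 outcome of the MLE-GEV fit for each of 15,528 series.
rng = np.random.default_rng(0)
series_list = [rng.choice([0.0, 0.01, 0.03], size=300) for _ in range(200)]
converged = rng.integers(0, 2, size=200)
X = sm.add_constant(np.array([zero_run_features(s) for s in series_list]))
fit = sm.Logit(converged, X).fit(disp=0)
print(fit.params)  # coefficients on the three liquidity predictors
```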


Table 3.5 Analysis of deviance table from the stepwise logistic regression for the weekly extreme-value series.

  Source         Df   Deviance   Resid. Df   Resid. Dev   P(Q >= q)
  Intercept      NA   NA         15,527      10,349.14    NA
  Portion.Zero   1    9,170.26   15,526      1,178.88     0
  Max.Gap        1    61.59      15,525      1,117.29     <0.001

Table 3.6 Analysis of deviance table from the stepwise logistic regression for the monthly extreme-value series.

  Source                    Df   Deviance   Resid. Df   Resid. Dev   P(Q >= q)
  Intercept                 NA   NA         15,527      13,889.15    NA
  Portion.Zero              1    7,747.07   15,526      6,142.08     0
  Num.Gaps                  1    458.05     15,525      5,684.03     0
  Portion.Zero x Num.Gaps   1    125.55     15,524      5,558.48     0
  Max.Gap                   1    60.02      15,523      5,498.46     <0.001
  Portion.Zero x Max.Gap    1    82.96      15,522      5,415.50     0


The predictor selection method was a stepwise process using both forward and backward steps. The results (not shown here) exhibited a strong inverse relationship between convergence and the liquidity measures. Tables 3.5 and 3.6 are analysis of deviance (AOD) tables from the logistic regression for the weekly and monthly extreme-value series, respectively. For both series the coefficients of the predictors are significant at small values of p. Also, for both series the portion of zeros found in a series played the leading role in convergence: more zeros means the GEV fit is less likely to converge.

Table 3.7 Cross-tabs from the assignment of weekly extreme-value series to classes.

                  Pred.Failed   Pred.Converged   Total Row
  Obs.Failed      2,014         599              2,613
  Obs.Converged   730           12,185           12,915
  Total Column    2,744         12,784           15,528

Table 3.8 Cross-tabs from the assignment of monthly extreme-value series to classes.

                  Pred.Failed   Pred.Converged   Total Row
  Obs.Failed      1,487         117              1,604
  Obs.Converged   69            13,855           13,924
  Total Column    1,556         13,972           15,528


Tables 3.7 and 3.8 are cross-tabs relating the observed convergence and failure to converge (denoted Obs.Converged and Obs.Failed, respectively) to the predicted convergence and failure to converge (denoted Pred.Converged and Pred.Failed, respectively). The predicted classification was created by thresholding the predicted probability at 0.5. This value was chosen by default because there was no information to suggest reducing the uncertainty in a particular direction. Furthermore, when a series of threshold values on either side of 0.5 was tested, it was the misclassified cells which changed the most, swapping values in a direction dependent on the direction of the movement of the threshold away from 0.5; the correctly classified cells remained fairly constant under this examination. When the monthly and weekly datasets were examined separately, greater liquidity was seen in the monthly series, with 13,855 liquid series versus 12,185 for the weekly series. Further, all securities found in the weekly liquid set were also in the monthly liquid set. (This result supports the view that increasing the number of zeros in the set increases illiquidity, which in turn produces a degenerate extreme-value distribution, certainly under the block-maxima method.) Note that every monthly series having a maximum of zero over a given month resulted in four or five zeros in the weekly series. Not only were the illiquid series problematic for extreme-value parameter estimation, their existence contradicted the consequences of the efficient market hypothesis (Fama [1965]). Therefore, other idiosyncratic behavior could have been expected from them, particularly when modeling joint distributions of uncertainty. The usable universe of equities was reduced for further study to the 12,185 securities observed and predicted to converge in the cross-tabs of the liquidity analysis of the weekly extreme-value series.


3.7 Covariates

The remainder of this Chapter is dominated by a discussion of the preparatory data analysis and the selection of the covariates to be used in the time-varying estimates of the GEV parameters; that latter analysis will be discussed in detail in Chapter 4. All sources of covariate data are detailed in the section Sources Of Data, the last section of the dissertation.

An initial set of 139 covariate series was collected. Each series consisted of daily observations for the period from December 31, 1999, until August 31, 2007. The types of data found in this initial set were:


The series included globally recognized benchmarks and indices (Table 3.9), as well as various-term interest rates and selected other metrics from seven global central banks. The central banks were:

  Bank of Canada
  Bank of England
  Bank of Japan
  European Central Bank
  Swiss National Bank
  The Reserve Bank of Australia
  The U.S. Federal Reserve Bank

Table 3.9 Globally recognized benchmarks and indices.

  Index/Benchmark              Country
  Dow Jones Indus. Avg         USA
  S&P 500 Index                USA
  Nasdaq Composite Index       USA
  S&P/TSX Composite Index      Canada
  Mexico Bolsa Index           Mexico
  Brazil Bovespa Stock Index   Brazil
  DJ Euro Stoxx 50             EU
  FTSE 100 Index               UK
  CAC 40 Index                 France
  DAX Index                    Germany
  IBEX 35 Index                Spain
  S&P/MIB Index                Italy
  Amsterdam Exchanges Index    Netherlands
  OMX Stockholm 30 Index       Sweden
  Swiss Market Index           Switzerland
  Nikkei 225                   Japan
  Hang Seng Index              Hong Kong
  S&P/ASX 200 Index            Australia


Other covariates in the initial set were:

  The volatility measure of the Chicago Board Options Exchange (CBOE)
  Lehman Brothers fixed-income indices (various)
  Dollar futures indices of the New York Board of Trade (NYBOT)
  U.S. mortgage rates (various maturities)
  Rates for Moody's AAA and BBB corporate paper
  Russell equity indices (various)
  Interest-rate swap rates (various maturities)

3.7.1 Sub-Selecting Covariates

While this set of covariates was very rich, it was also too large to use in its entirety. It was also undoubtedly the case that many of the covariates possessed moderate to strong correlation with other members of the set. So the task became one of retaining much of the covariation within the set of covariates while reducing the dimensionality of the set. One procedure for doing this is to use a factor-analytic procedure to form factor scores and use these scores as the reduced set of covariates (Johnson and Wichern [2002]).


However, the problem with factor scores is that they leave something to be desired in terms of interpretation; that is, unless we can give a compelling interpretation to a factor, we lose understanding of what is driving the parameter from a phenomenological or domain perspective.

In another context the researcher developed a patent-pending method (Labovitz et al. [2007]) using a factor-analytic model within an iterative framework to extract a set of securities that reduces the number of securities under consideration but retains the diversification protection found in the universe of securities. Clearly, it is important to retain the identities of the securities, because the investor must select from the securities to form a portfolio. The method was adjusted to perform in the present circumstances so that, in an analogous fashion, while the dominating patterns of variation are retained in the set of covariates, the identities of the covariates' terms are also retained, in order to gain greater domain insights.

As a preprocessing step, those covariates not already presented in terms of returns or rates were converted to decimal returns using Equation 3.1, with index values taking the place of price as required. The covariates were then transformed from a percentage representation to a decimal representation where necessary. Finally, the value 1 was added to each covariate value, and the resulting quantities were logged. (The logarithm transformation is commonly used with returns as a variance-stabilizing transform, as well as a transform to drive the sample closer to a normal or Gaussian form.)


The modified "high-grading" procedure was applied to the data as follows.

Start with the formation of a data matrix, wherein there are n objects (e.g., days) and p properties (e.g., covariates). The data can then be depicted in a matrix of n rows and p columns. Consistent with matrix notation, let us call the matrix X, with x_{ij} the entry at the intersection of the ith row (i = 1, 2, ..., n) and the jth column (j = 1, 2, ..., p), which corresponds to the measurement of the jth property on the ith object.

The factor-analytic (FA) model (results from Johnson and Wichern [2002]) hypothesizes that X is linearly dependent upon unobservable (often called latent) random variables known as factors. For a detailed description of the FA model, the reader is referred to Johnson and Wichern [2002]. It is sufficient to say that the FA model operates on a positive semi-definite matrix, commonly the covariance or correlation matrix, which is a function of X and is commonly denoted \Sigma or R.

To compute the factors, a principal-components extraction (PCE) method (Johnson and Wichern [2002]) was performed. The PCE is based upon several theorems from linear algebra.


The underlying assumption is that the matrix being operated upon is symmetric and real; therefore, by the spectral decomposition theorem for real symmetric matrices, it has an orthonormal basis of eigenvectors and a full complement of real eigenvalues (Lay [2005]). Further, a corollary of the theorem is that such a matrix can be written as a sum of outer products of its eigenvectors, weighted by the corresponding eigenvalues. In our present circumstance we operate on R, a standardized data covariance matrix (that is, a correlation matrix); by the definition of distance under a Euclidean metric, R is positive definite, so all the eigenvalues are positive. From other results of linear algebra, it can be shown that for the FA model operating on a correlation matrix, the proportion of the variance explained by the model is a function of the eigenvalues, and the "loadings," the correlations between the factors and the properties (columns) of the data matrix X (or, more appropriately, its standardized form), are a function of the eigenvectors. However, the loadings are determined only up to an orthogonal matrix. From linear algebra we know that an orthogonal matrix corresponds to an orthogonal transformation, which in turn corresponds to a rigid rotation or reflection of the coordinate system. The objective of applying such a transformation is to maximize the loading of properties on one factor (i.e., approaching 1.0 or -1.0) and to minimize the loading of these properties on all other factors (i.e., approaching 0).
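As a concrete illustration of the extraction step, the sketch below eigendecomposes a correlation matrix and forms loadings as sqrt(eigenvalue)-scaled eigenvectors. It is a minimal reconstruction for illustration, not the dissertation's code; the subsequent orthogonal (e.g., varimax) rotation is omitted.

```python
import numpy as np

def pce_loadings(X, k):
    """Principal-components extraction for the FA model: eigendecompose
    the correlation matrix R of the n x p data matrix X and return the
    loadings of the first k factors plus their variance proportions."""
    R = np.corrcoef(X, rowvar=False)          # p x p correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # re-sort, largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # corr(factor, property)
    return loadings, eigvals[:k] / eigvals.sum()

# Example on simulated logged covariate returns (p = 10 properties).
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 10))
L, explained = pce_loadings(X, k=3)
# An orthogonal (e.g., varimax) rotation of L would follow; omitted here.
print(L.shape, explained.round(3))
```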


The reason to seek multiple properties with high, similarly signed loadings on the same factor is that such properties are varying together, and the use of any one of the properties from this set would bring the pattern of variation occurring in the set into the analysis. On the other hand, properties with high, oppositely signed loadings move inversely to one another. Finally, properties possessing high absolute-value loadings, but on different factors, tend to move independently of one another. It should be noted that the above discussion primarily holds in the context of linear or near-linear relationships among properties.

In selecting covariates to use in the examination of the GEV, the researcher wanted to find a set of covariates that might be common to "driving" changes in the extreme values of many securities, not one unique to one or a very few securities. Therefore the researcher, based on his experience in using factor analysis for this purpose and in concert with the variant of the high-grading process referred to above, sought covariates possessing commonly shared patterns of variation as those most likely to achieve the objective. Of course, there was the issue: how does one recognize the common pattern from the unique? While there are a number of rules of thumb for identifying the factors representing this distinction between common and unique sources of variation, one based on changes in the rate at which the number of covariates made it through the high-grading filter was chosen.


From examination of Table 3.10, a minimum number of 33 covariates, high-graded from 98% of the total variation, was selected. In selecting this set of covariates, up to three positively and three negatively correlated covariates per factor were permitted, and each covariate had an absolute value of correlation greater than or equal to 0.75. This selection came from 30 factors, with a distribution of variability as described in Table 3.11.

Table 3.10 Number of covariates satisfying the high-grading criteria as a function of percentage variation explained and threshold value.*

  Percentage of Variation   1 and -1   2 and -2   3 and -3   4 and -4   5 and -5
  90%                                             17         29         35
  95%                                             19         32         36
  98%                       22         29         33         36         37
  99%                       22         29         33         36         37

  * "x and -y" means the largest x positive covariates above the threshold, and -y means the y covariates with the smallest negative values less than the negative threshold.
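The retention rule just described (up to three positive and three negative loadings per factor, each with absolute loading at or above 0.75) can be sketched as follows; the function and variable names are hypothetical, and the snippet assumes a loadings matrix like the one produced in the previous sketch.

```python
import numpy as np

def high_grade(loadings, names, thresh=0.75, n_pos=3, n_neg=3):
    """Per factor, keep up to n_pos covariates with loadings >= thresh and
    up to n_neg with loadings <= -thresh; return the retained names."""
    keep = set()
    for f in range(loadings.shape[1]):
        col = loadings[:, f]
        desc = np.argsort(col)[::-1]   # most positive loadings first
        keep.update(j for j in desc[:n_pos] if col[j] >= thresh)
        asc = np.argsort(col)          # most negative loadings first
        keep.update(j for j in asc[:n_neg] if col[j] <= -thresh)
    return sorted(names[j] for j in keep)
```

Applied to rotated loadings over the 139 candidate covariates, a rule of this form, swept over thresholds and variance levels, would produce counts like those tabulated in Table 3.10.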


Table 3.11 Distribution of variation in loadings between covariates and factors for the factor structure arising from the selected high-grading results described in the accompanying text. (Legend: SS = sum of squares; Proportion Var = proportion of variability explained; Cumulative Var = cumulative proportion.)

  Factor   SS Loadings   Proportion Var   Cumulative Var
  1        34.9437       0.3236           0.3236
  2        19.8385       0.1837           0.5072
  3        8.6784        0.0804           0.5876
  4        9.0731        0.0840           0.6716
  5        2.6177        0.0242           0.6958
  6        2.7229        0.0252           0.7211
  7        3.7574        0.0348           0.7558
  8        2.1740        0.0201           0.7760
  9        2.1032        0.0195           0.7955
  10       1.7823        0.0165           0.8120
  11       2.1048        0.0195           0.8314
  12       1.3589        0.0126           0.8440
  13       2.1748        0.0201           0.8642
  14       1.1614        0.0108           0.8749
  15       1.0068        0.0093           0.8842
  16       1.0030        0.0093           0.8935
  17       0.9635        0.0089           0.9024
  18       0.8131        0.0075           0.9100
  19       0.7427        0.0069           0.9169
  20       1.2001        0.0111           0.9280
  21       0.8963        0.0083           0.9363
  22       0.5725        0.0053           0.9416
  23       0.7112        0.0066           0.9481
  24       0.4840        0.0045           0.9526
  25       0.5762        0.0053           0.9580
  26       0.4300        0.0040           0.9619
  27       0.7609        0.0070           0.9690
  28       0.3629        0.0034           0.9724
  29       0.5654        0.0052           0.9776
  30       0.3431        0.0032           0.9808

Table 3.12 lists the covariates used in the analyses presented in Chapter 4. The covariates selected clearly have representatives, in many cases multiple representatives, from each of the data domains described at the beginning of this section. Based upon conversations with domain experts, the minimum of 33 was augmented by 11, increasing the number of covariates used in further analysis to 44.


Table 3.12 Covariates selected for use in estimating GEV parameters.

  1 Year Swap Rate                                      FTSE 100 P IX
  10 Year Constant Maturity (CM)                        Spread: 3M ED - 3M TB
  10 Year Swap Rate                                     Hang Seng Hong Kong
  15 Year Mortgage                                      Japan Nikkei Average 225 Benchmarked
  Spread: 15 Year Mortgage Less 7 Year CM               KOSPI IX
  Spread: 30 Year Mortgage Less 10 Year CM              Lehman Muni Sec Total Return (TR) Inv
  3 Month Euro Dollar Return                            Libor Six Month
  Australian Bank Accepted Bills 180 Days               Libor Three Month
  Australian Bank Accepted Bills 30 Days                Moody AAA
  Australian Treasury Bonds 2 Years                     Moody BBB
  Austria ATX (Austrian Traded Index)                   NASDAQ 100 P IX
  Austria WBI (Wiener Börse Index) Benchmarked          NASDAQ Composite Index
  BOE (Bank of England) IUDVCDA (Daily sterling
    3-month certificate of deposit rate)                Russell 1000
  Brazil Bovespa                                        Russell 2000
  Canadian Rate V39072 (Prime corporate paper
    rate, 1 month)                                      Russell 3000
  CBOE (Chicago Board Options Exchange)
    U.S. Market Volatility                              Russell Mid Cap
  China Shanghai SE Composite                           S&P 1500 Supercomp TR IX
  DJ Ind Average Price (P) Index (IX)                   Spread: Invest Grade - 5 Year CM
  Dollar Futures Index NYBOT                            US Interest Rate 10 Years
  Euro STOXX 50                                         US Interest Rate 20 Years
  Federal Funds Rate                                    US Interest Rate 6 Months
  France CAC 40                                         VXO (old VIX)

3.8 A Couple Of Final Words On Data Organization

The researcher used large datasets in this research and generated additional large datasets; many of the individual datasets were in excess of 200 megabytes. In order to maintain "control" of a project with datasets of this size, several conventions were observed, including:


1. Organization of different types of analyses in separate databases or separate directory structures. Many of these directories observed a similar subdirectory structure: input data, code, comments, and output results.

2. A highly non-normalized data structure was observed. This resulted in records that were very redundant in the repetition of data elements, even data items that did not vary from record to record. However, the investigator could pick up any random record and know exactly what it referred to, without any additional context.

3. The researcher incorporated in each code module a means for automating the construction of a "meaningful" file name for each file. This allowed the file contents to be understood in detail just from reading the file name.

Taken together, the directory structure, the record format, and the file naming meant that no ancillary information was required to navigate the data side of the research.
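A file-naming helper in the spirit of convention 3 might look like the following sketch; it is entirely illustrative, as the actual naming scheme is not documented here.

```python
def make_file_name(analysis, block, universe_size, stage, ext="csv"):
    """Compose a self-describing file name from run metadata.
    All field names and the format are illustrative assumptions."""
    return f"{analysis}_{block}-block_{universe_size}univ_{stage}.{ext}"

print(make_file_name("gevfit", "weekly", 3000, "train"))
# -> gevfit_weekly-block_3000univ_train.csv
```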


4. Fitting Time-Varying GEVs

4.1 Overview Of The Chapter

In Chapter 4, the data preparation and pre-processing described in Chapter 3 are leveraged into one of the central analyses of the dissertation: the fitting of time-varying Generalized Extreme Value (GEV) distributions. Estimation of the parameters required by the fitting is performed using a maximum-likelihood approach.

In preparation for performing this analysis, two important preliminaries are put forward. Firstly, after some experimentation, a fitting strategy was adopted: use a Nelder/Mead (gradient-free) search (Nelder and Mead [1965]) in combination with the quasi-Newton BFGS (Broyden-Fletcher-Goldfarb-Shanno) method (Nocedal and Wright [1999]). It was discovered that for these block-maxima data, Nelder/Mead fitting had some significant deficiencies, among these slow convergence or even a failure to converge within the parametrically set number of iterations. Using the parameter estimates generated by Nelder/Mead as the initial values for BFGS improved the optimization. The result was an overall substantial improvement in the value of the log-likelihood (actually the negative log-likelihood) and a higher percentage of series whose parameter fitting converged.


Secondly, up to this point in the research, block sizes of a week and a month had been used side by side in the analyses. It was desired to carry only one of these data sets forward, and the preference was to use the weekly block-maxima data set, primarily because it possessed the greater number of observations. Using both simulation and theoretical approaches, it was concluded that the distribution of the weekly block-maxima data is reasonably represented by the GEV distribution. Furthermore, the parameters of the GEV fitted to the weekly block-maxima data sets changed appropriately in comparison to the parameters of GEVs fitted from data sets generated by block maxima for other units of time. The analyses showed that the distributions of block maxima whose lengths are multiples of the weekly block are themselves GEV distributions if the weekly block maxima are GEV, and that the parameters of these larger block sizes are functions of the weekly block-size parameters. Additionally, the shape parameter ξ remains fairly constant in moving from the weekly block data set to blocks of longer time periods, another indication that the weekly block-maxima series carry information similar to that contained in the larger blocks. So the analysis moved forward with the week as the basic unit of time, and a model is set out describing the time-varying parameters of the distribution as a linear function of a yet-to-be-determined subset of the proposed set of economic and financial covariates.


Prior to fitting the overall covariate set, an analysis was conducted of the utility of such covariates as a constant, a trend over time (commonly called time's arrow), and periodic functions of various meaningful cycles. The latter were introduced as sets of sine and cosine functions. A sizeable random sample of the block-maximal equity series (approximately 2,500 liquid block-maximal series) was examined for periodic behavior by analysis of their power spectra. The significance of the estimated coefficients of the sine and cosine functions was evaluated using both a permutation test and a test due to Jenkins and Watts (Jenkins and Watts [1968]). The conclusion was that such periodic functions demonstrated no explanatory power once a constant and trend were accounted for. Therefore, from these analyses, only the constant and trend terms were carried forward as candidate covariates.

The number of financial and economic covariate series had been substantially reduced in Chapter 3, Section 3.7, from 139 to 44. However, these were daily series while the returns were weekly; further, there is ample evidence in finance of lead/lag relationships amongst financial and economic measures. Because no guidance existed, each of the 44 daily financial covariate series was converted, for purposes of the analysis, into nine weekly series by crossing three levels of an aggregation factor (maximum value, minimum value, and median value) with three levels of a time-leading or offset factor (two-week lead, one-week lead, and time-coincident).


After fitting distributions to a randomly selected set of 200 equity series and examining the covariates which entered the models, it was decided that the median aggregation level could be dropped from the larger analysis. This reduced the number of series per covariate from nine to six and therefore yielded a total of 266 candidates (44 x 6 + trend + constant) with which to linearly model each of the GEV parameters. Additionally, a constant term was forced into the model of each parameter in order to create an "embedded model strategy" and thereby facilitate testing of the improvement which occurred with each term added to the model. It should also be clear that a constant-only model (for the three parameters) is equivalent to fitting a non-time-varying GEV.

A stepwise modeling approach allowing forward and backward steps (candidate introduction and elimination) was used. On the step after the constants were fitted, 265 combinations of predictor variables were tested one by one against each parameter to see which maximally improved the model (thereby effectively testing 795 covariate-parameter combinations). The following step fitted and evaluated 794 covariate-parameter combinations, and so forth. Entry, exit, and stopping rules were based upon BIC (the Bayesian Information Criterion) and a likelihood ratio test (taking advantage of model embedding).
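A sketch of the aggregation-by-lead expansion described above, assuming pandas and a business-day-indexed daily series; only the max and min aggregations ultimately retained are built, and the function and column names are illustrative.

```python
import pandas as pd

def weekly_covariates(daily, name):
    """Expand one daily covariate series (pandas Series with a DatetimeIndex)
    into the six weekly series retained in the analysis: {max, min}
    aggregation crossed with {0, 1, 2}-week offsets."""
    out = {}
    for agg in ("max", "min"):
        weekly = daily.resample("W").agg(agg)
        for lead in (0, 1, 2):  # time-coincident, one- and two-week leads
            out[f"{name}.{agg}.lead{lead}"] = weekly.shift(lead)
    return pd.DataFrame(out)

# Example with a synthetic daily rate series on business days.
idx = pd.date_range("2000-01-03", periods=250, freq="B")
rate = pd.Series(range(250), index=idx, dtype=float)
print(weekly_covariates(rate, "rate").head())
```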


The fitting of the covariate models and the estimation of the distribution parameters were accomplished by maximizing the likelihood (in fact, minimizing the negative log-likelihood) using the optimization strategy outlined above. The fitting of the 3,000 weekly block-maxima training series and the 100 weekly block-maxima test series was performed individually; it was very expensive computationally and required the use of approximately 90 processors for 10 days.

The Chapter concludes with a brief analysis of the results from the parameter fitting, with a very extensive analysis found in Appendix B. While they are not part of the central thread of the dissertation, these results are important to examine because they provide a domain sanity check upon the parameter estimation effort, which is central. Further, the results suggest new lines of inquiry which, in future research, can be articulated as hypotheses concerning impacts upon distribution parameters and their moments. Some of the salient results of the analysis of parameter fitting include:

1. The models for the distribution parameters had on average approximately seven covariates in total.
2. 95% of the time-varying models showed significant improvement over the static model.
3. Financial and economic covariates were as important as, or more important than, the constant and trend terms.
4. Of the financial/economic covariates, the plurality were used in estimating the location parameter μ, followed by the shape ξ and the scale σ.


5. Time-contemporaneous covariates made up less than half of the covariates, compared to time-lagged covariates (46.6% against 53.4%).
6. Covariates associated with lags of one or two weeks were equally divided, suggesting, from an overall perspective, the existence of a lead-lag relationship, but without a strong differentiation by time frame, at least for the first two weeks.
7. The minimum value of the covariate in the block entered the model more frequently than the maximum value of the covariate in the block: 62.1% versus 37.9%, respectively.
8. Distinct groupings of covariates exist with respect to when and in what form (aggregation) they entered models; for example, global market indices behave similarly.
9. When the results were observed from the perspective of market capitalization, model complexity (in terms of the number of components, the number of leading components, and the number of distributional parameters modeled by covariates beyond the constant) tended to increase with greater capitalization.
10. Africa, Eastern Europe, the Middle East, South America, and to a lesser extent North America can, at the gross level of this analysis, be modeled (in the sense described above) by less complex models.
11. The Pacific Rim and Western Europe, in the same sense as the previous item, tend to require more complex models.

An important take-away is that, in the end, the value of fitting the TV GEVs is, as developed throughout Chapters 5 and 6, to compute the return values from which are generated surrogates for the "risk levels" securities introduce into a portfolio.


4.2 GEV Reprise

While the topic and ultimate focus of the present Chapter is the fitting of time-varying generalized extreme-value (GEV) distributions, it should be clear that the value of time-varying GEVs lies fundamentally in their contrast to non-time-varying GEVs. The distinction between these two models lies in the nature of the function, its form, and the parameters it contains, which relate the covariates to the GEV parameters. Recall from Chapter 1, Section 1.8.3, that we are using a GEV of the form

    G(x; \mu, \sigma, \xi) = \exp\{-[1 + \xi((x - \mu)/\sigma)]_+^{-1/\xi}\}    (4.1)

defined on \{x : 1 + \xi(x - \mu)/\sigma > 0\}, where [z]_+ = \max(z, 0), -\infty < \mu < \infty, \sigma > 0, and -\infty < \xi < \infty. The GEV has three parameters: a location parameter (\mu), a scale parameter (\sigma), and a shape parameter (\xi). We shall call this the non-time-varying form; the parameters in this distribution are hypothesized to remain constant for all values of the random variable X.

Recall that in Chapter 2, Section 2.4, we introduced the notion of the time-varying distribution

    G_t(x_t; \mu_t, \sigma_t, \xi_t) = \exp\{-[1 + \xi_t((x_t - \mu_t)/\sigma_t)]_+^{-1/\xi_t}\}    (4.2)

where [z]_+ = \max(z, 0), 1 + \xi_t(x_t - \mu_t)/\sigma_t > 0, -\infty < \mu_t < \infty, 0 < \sigma_t, and -\infty < \xi_t < \infty. The model being suggested is that the GEV distribution is not a single distribution but a set of (albeit related) distributions whose parameters change with the random sequence X_t, indexed by time t. Further, as discussed in more detail later in this Chapter (Section 4.7), we hypothesize and fit the time-varying parameters \mu_t, \sigma_t, and \xi_t as linear functions of the subset of covariates identified in Chapter 3, Section 3.7. The functions relating the parameters and covariates are of the form

    p_{t,j} = f_{\theta_j}(C_t) = \theta_{0,j} + \theta_{1,j} C_{t,1} + \theta_{2,j} C_{t,2} + ... + \theta_{m,j} C_{t,m}    (4.3)

where:

  p_{t,j} is a parameter of the time-varying GEV (\mu_t, \sigma_t, or \xi_t) at time t,
  C_t is the time-varying vector of covariates,
  \theta_{i,j} is the coefficient multiplying the ith covariate (i = 1, 2, ..., m) associated with the jth parameter, and
  C_{t,i} is the value of the ith covariate at time t.

While we have started out treating non-time-varying models (NTVM) and time-varying models (TVM) as distinct from each other, it should be apparent from examining the above linear function that the two models are indeed related. If we set p_{t,j} = \theta_{0,j}, we see that the parameter p_{t,j} = p_j is not dependent on time; i.e., it is an NTVM. In fact, the NTVM is an embedded model with respect to the TVM, and fitting it along with TVMs generates fit statistics that allow us to evaluate the improved fit (if any) that the TVM provides over the NTVM. Therefore, this Chapter begins (Section 4.3) with an analysis of the NTVM results.
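A small sketch of these two building blocks, assuming ξ differs from 0 (the ξ → 0 Gumbel limit is not handled): the GEV cdf of equation (4.1) and the linear link of equation (4.3). Function names and the example values are illustrative, not the dissertation's code.

```python
import numpy as np

def gev_cdf(x, mu, sigma, xi):
    """GEV cdf of equation (4.1); valid where 1 + xi*(x - mu)/sigma > 0."""
    z = np.maximum(1.0 + xi * (x - mu) / sigma, 0.0)  # the [.]_+ operator
    return np.exp(-z ** (-1.0 / xi))

def tv_param(theta, c_t):
    """Linear link of equation (4.3): theta[0] is the intercept theta_0j,
    theta[1:] the covariate coefficients, c_t the covariate vector at t."""
    return theta[0] + np.dot(theta[1:], c_t)

# Illustrative: a time-varying location with two covariates.
theta_mu = np.array([0.03, 0.4, -0.2])   # made-up coefficients
c_t = np.array([0.01, 0.02])             # made-up covariate values at time t
mu_t = tv_param(theta_mu, c_t)
print(gev_cdf(0.05, mu_t, sigma=0.03, xi=0.11))
```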


4.3 Non-Time-Varying Model

As indicated in Chapter 3, GEV parameters were successfully estimated for 12,185 weekly and 13,855 monthly equity series of minimum extremes, respectively. These series represent those that passed through the liquidity analysis described in Chapter 3, Section 3.6.

Maximum likelihood was used to estimate the parameters. The likelihood estimation procedure used the data to estimate \mu, \sigma, and \xi such that the negative log of the likelihood was minimized. From the cumulative distribution function (cdf) of the GEV given above, the probability density function is

    f(x_i; \mu, \sigma, \xi) = \frac{1}{\sigma}\,[1 + \xi((x_i - \mu)/\sigma)]^{-1/\xi - 1} \exp\{-[1 + \xi((x_i - \mu)/\sigma)]^{-1/\xi}\}    (4.4)

where 1 + \xi(x_i - \mu)/\sigma > 0, -\infty < \mu < \infty, \sigma > 0, and -\infty < \xi < \infty. In turn, the likelihood function is

    L(\mu, \sigma, \xi; x) = \prod_{i=1}^{n} \frac{1}{\sigma}\,[1 + \xi((x_i - \mu)/\sigma)]^{-1/\xi - 1} \exp\{-[1 + \xi((x_i - \mu)/\sigma)]^{-1/\xi}\}    (4.5)


Taking the log of the likelihood yields

    \ell(\mu, \sigma, \xi) = -n \log(\sigma) - (1 + 1/\xi) \sum_{i=1}^{n} \log[1 + \xi((x_i - \mu)/\sigma)] - \sum_{i=1}^{n} [1 + \xi((x_i - \mu)/\sigma)]^{-1/\xi}    (4.6)

The objective is to minimize the negative of this value, subject to the constraints enumerated above. There are two additional points to note in this formulation:

1. The constraint 1 + \xi(x - \mu)/\sigma > 0 is effected by checking every candidate three-tuple (\hat{\mu}, \hat{\sigma}, \hat{\xi}) against each x in the sample; that is, 1 + \hat{\xi}(x_i - \hat{\mu})/\hat{\sigma} > 0 for i = 1, 2, ..., n (the number of weekly maxima).

2. The candidate values for \hat{\sigma} are chosen by exponentiating an underlying value taken from a search region ranging over an interval of the reals; in other words, we are optimizing over \log(\hat{\sigma}).

There are no closed-form estimates for the parameters, so an iterative search strategy must be adopted.
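The objective in (4.6) translates directly into code. The following is an illustrative implementation (not the dissertation's), handling only ξ different from 0 and enforcing the support constraint by returning +inf, which a minimizer treats as infeasible.

```python
import numpy as np

def gev_nll(params, x):
    """Negative log-likelihood from equation (4.6) for xi != 0. `params`
    holds (mu, log_sigma, xi): sigma enters on the log scale, as in point 2
    above, and the support constraint of point 1 is enforced by returning
    +inf for any infeasible candidate three-tuple."""
    mu, log_sigma, xi = params
    sigma = np.exp(log_sigma)
    z = 1.0 + xi * (x - mu) / sigma
    if xi == 0.0 or np.any(z <= 0.0):
        return np.inf
    return (len(x) * log_sigma
            + (1.0 + 1.0 / xi) * np.log(z).sum()
            + (z ** (-1.0 / xi)).sum())
```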


The search strategy selected by the author was suggested by Sain (Sain, S.R., personal interview, December 2007) and consists of two parts. The first part uses a Nelder-Mead search algorithm (Nelder and Mead [1965]) to enter the general region of the solution. The Nelder-Mead method, also called the downhill simplex or amoeba method, is commonly used in nonlinear searches for which a gradient cannot be computed, or for which it is difficult or undesirable to compute a gradient. Central to the Nelder-Mead method is the object called a simplex: a polytope of p + 1 vertices in p dimensions, where p is the number of parameters over which the search is being conducted. In each iteration the Nelder-Mead simplex expands and contracts, depending on the nature and degree of improvement found for the value of the function being optimized. Nelder-Mead seems to operate better when the function being searched over is steep. The algorithm can be slow to converge, and in many instances in the preliminary research effort it did not converge within the set number of iterations. These appeared to be occasions when the algorithm either encountered very large and relatively flat regions or was moving along long ridges in the space. In many such cases the algorithm failed.

This result was improved by cutting the number of iterations allowed and increasing the completion threshold tolerance level. Of course, the open question then remained whether or not the Nelder-Mead solutions could be improved further. To this end, several features were added to the search procedure. Chief among these was taking the results from the Nelder-Mead processing and using them as the initial starting point in an instantiation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.


BFGS is another method for solving nonlinear optimization problems (Nocedal and Wright [1999]). BFGS is a member of the family of hill-climbing methods that use a Newton-like method of optimization to identify a stationary point of the objective function, that is, a point where the gradient of the function is equal to zero. To this end we assume that, locally, the objective function can be approximated as a quadratic in the region around the optimum value; using results from calculus, we can apply the first and second derivatives to find the stationary point (Nocedal and Wright [1999]).

In point of fact, BFGS is called a quasi-Newton method because the matrix of second derivatives of the objective function with respect to the parameters, the Hessian, is not computed in its entirety but is approximated by successive additions, to the initial guess for the Hessian, of inner products of derivatives along the directions of greatest descent. The BFGS algorithm is expensive in terms of time, and speeding it up requires the user to provide gradients of the objective function up through the second derivative; in the present circumstance this required the calculation, and then repeated computation, of the three gradient equations for the first derivative and the nine gradient equations for the second derivative. Since an approximation of the Hessian comes out of the BFGS method, we can use the approximation at convergence to compute the Hessian inverse, which is the covariance matrix of the parameter estimates. Under the asymptotics of the maximum-likelihood estimator (MLE) procedure, the estimates (\hat{\mu}, \hat{\sigma}, \hat{\xi}) are distributed as multivariate normal, with mean equal to (\mu, \sigma, \xi) and covariance equal to the just-described inverse of the Hessian matrix (Nocedal and Wright [1999]).


The combined Nelder-Mead/BFGS approach was augmented with the use of a number of random starts. There were up to 15 random starts, and the values were chosen randomly from the range of values recommended by Smith (1985). An estimation of the parameters was considered to have converged if 75% or more of the random starts had converged, and the estimates used were those associated with the run yielding the best maximum value of the likelihood. In the results observed, the combined procedure worked far better than Nelder-Mead alone. For the equity extreme-value data sets that realized convergence of 75% or more of the random starts, 100% convergence was frequently observed. Finally, the algorithm was tested against several test sets found in the literature, and the coefficients generated compared extremely favorably with the published results.
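Put together, the two-stage search with random starts might look like the sketch below, using scipy.optimize rather than the software actually used; the starting-value ranges are assumptions, and gev_nll is the function from the earlier sketch. The inverse-Hessian approximation returned by BFGS (best.hess_inv) is what supplies the covariance of the estimates discussed above.

```python
import numpy as np
from scipy.optimize import minimize

def fit_gev(x, n_starts=15, seed=0):
    """Nelder-Mead to reach the region of the optimum, then BFGS from the
    Nelder-Mead solution; repeated over random starts, keeping the best fit."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        start = np.array([x.mean() + rng.normal(scale=0.01),
                          np.log(x.std()) + rng.normal(scale=0.1),
                          rng.uniform(-0.4, 0.4)])
        stage1 = minimize(gev_nll, start, args=(x,), method="Nelder-Mead")
        stage2 = minimize(gev_nll, stage1.x, args=(x,), method="BFGS")
        if best is None or stage2.fun < best.fun:
            best = stage2
    mu_hat, log_sigma_hat, xi_hat = best.x
    # best.hess_inv approximates the covariance matrix of the estimates.
    return mu_hat, np.exp(log_sigma_hat), xi_hat, best
```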


4.4 Block Size And Distribution

Carrying on from our initial discussion at the start of this section, the 12,185 weekly extreme-value return equity series were a proper subset of the 13,855 monthly extreme-value return equity series. (In future discussions the qualifiers "extreme-value" and "equity" will be dropped unless the context is confusing; the reader is to assume the qualifiers apply unless otherwise stated.) As was shown in Chapter 3, Section 3.6, such a result makes sense in light of the fact that the parameters of a return series do not tend to converge in the presence of zeros (i.e., no price change), and for every zero in the monthly series there will be four zeros in the weekly series. Further, the presence of the proper subset suggests we might look for other changes in the form of the distributions as a function of block size.

The purpose of this section is to examine whether or not the distributions of block maxima of different block sizes (one week, four weeks [a month], 26 weeks [a half year], etc.) are related. That is, if the set of random variables of the base block size of one week is GEV, then are the maxima of time units of greater length (which can be constructed as a sample of the base block size) also distributed as GEVs, and how do the parameters change? If this is indeed the case, then general results developed for one week are also likely to be applicable to larger blocks, and the selection of a one-week block maximum is less arbitrary from an analysis perspective.

To commence this analysis, we first examined the correlations between the GEV parameters for the weekly return data and compared these to the correlations among the GEV parameters for the monthly return data. These correlations are given in Table 4.1. A likelihood ratio test fails to reject the hypothesis that these correlation matrices are the same under any commonly observed p-value.


We expect the GEV parameters to change as we move between the weekly and monthly data sets. With respect to location, recall that we have structured the data so that the maxima are in actuality the minima. Because a monthly value is the maximum of four weeks, the monthly location parameter should be equal to or greater than the weekly location parameter. For the scale parameter, the high positive correlation with location and the researcher's intuition suggest that taking the maximum of approximately four weeks would introduce more variability; but care needs to be taken, inasmuch as the location and scale parameters are not the mean and variance of the GEV distribution. There is little insight into the manner in which ξ would systematically change with changes in time units. The examination of the GEV parameter estimates from the 12,185 weekly and 13,855 monthly distributions showed that the location and scale parameters increased significantly, and the shape to a much lesser degree, in moving from the weekly to the monthly time frame.

Table 4.1 Correlations among GEV parameters for weekly and monthly data series.

  Weekly    Mu        Sigma     Xi
  Mu        1.0000    0.9160    -0.2100
  Sigma     0.9160    1.0000    -0.1144
  Xi        -0.2100   -0.1144   1.0000

  Monthly   Mu        Sigma     Xi
  Mu        1.0000    0.9193    -0.3671
  Sigma     0.9193    1.0000    -0.2961
  Xi        -0.3671   -0.2961   1.0000


This problem was examined by both empirical and theoretical methods to see whether the analysis was sensible.

The empirical examination involved generating, or simulating, a large number of sets of a predetermined (block) size, formed from weekly extrema using a sample of the 12,185 liquid equity series. A randomly selected sample of 100 of these weekly block-maxima equity series was extracted. A bootstrap procedure (Efron and Tibshirani [1993]) was used to create/simulate multiple samples for each of the designated block sizes from the weekly maximum series, generating a sample of 50 observations at each block size. These 50 observations were used to estimate the parameters (μ, σ, ξ) of a GEV for the time unit designated. The bootstrap was performed 1,000 times for each of the series for each of the desired time units. The convenience here is that each of the time units examined is defined as a multiple of the basic week-length block: from sets of four weekly maxima an approximate monthly maximum was obtained, from 13 weekly maxima an approximate quarterly maximum was obtained, and so forth. This analysis resulted in 100,000 block-maxima samples of size 50 for each of the time units. From each of the estimated sets of μ, σ, and ξ, the mean, variance, skewness, and kurtosis were estimated. These estimates were computed for months (4 weeks), quarters (13 weeks), half years (26 weeks), and one, three, five, and ten years (52, 156, 260, and 520 weeks, respectively).
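One replicate of this resampling scheme is easy to sketch, under stated assumptions; the function name and the synthetic weekly series are illustrative, not taken from the dissertation.

```python
import numpy as np

def bootstrap_block_maxima(weekly_max, n_weeks, n_obs=50, rng=None):
    """One bootstrap replicate of the simulation described above: resample
    n_obs sets of n_weeks weekly maxima (with replacement) and take the
    maximum of each set, yielding approximate block maxima for the longer
    time unit. Repeating 1,000 times per series and block size, and fitting
    a GEV to each replicate, reproduces the design in the text."""
    rng = rng or np.random.default_rng()
    draws = rng.choice(weekly_max, size=(n_obs, n_weeks), replace=True)
    return draws.max(axis=1)

# Example: approximate monthly (4-week) maxima from one weekly series.
weekly = np.random.default_rng(1).gumbel(0.028, 0.029, size=400)
monthly_maxima = bootstrap_block_maxima(weekly, n_weeks=4)
```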


Table 4.2 and Figures 4.1 through 4.4 depict the median values from the parameter-and-moments analysis by time unit; a discussion follows the graphics.


Table 4.2 Tabulation of results from the empirical (bootstrap) analysis of maxima of different time units, and estimation of the related GEV parameters and statistics.

                              --- Parameters ---          ------------- Moments -------------
  Weeks   Time Unit      Mu       Sigma    Xi       Mean     Variance   Skewness   Excess Kurtosis
  1       Week           0.0284   0.0287   0.1135   0.0485   0.0019     2.0548     12.5326
  4       Month          0.0712   0.0336   0.1094   0.0947   0.0026     2.0088     12.0159
  13      Quarter        0.1136   0.0384   0.1116   0.1405   0.0034     2.0338     12.2947
  26      Six Months     0.1416   0.0415   0.1140   0.1708   0.0040     2.0602     12.5946
  52      One Year       0.1713   0.0449   0.1142   0.2029   0.0047     2.0627     12.6228
  156     Three Years    0.2233   0.0508   0.1172   0.2593   0.0061     2.0968     13.0227
  260     Five Years     0.2507   0.0538   0.1139   0.2886   0.0068     2.0591     12.5823
  520     Ten Years      0.2896   0.0582   0.1138   0.3305   0.0079     2.0583     12.5723


Figure 4.1 (figure not reproduced) Plot Of Estimated Medians Of GEV Parameters (Mu/Location, Sigma/Scale, Xi/Shape) Over Different Time Units Expressed In Weeks. (Abscissa: 1 = Week, 4 = Month, 13 = Quarter, 26 = Half Year, 52 = 1 Year, 156 = 3 Years, 260 = 5 Years, 520 = 10 Years.)


Figure 4.2 (figure not reproduced) Estimates Of GEV Median Means And Standard Deviations Over Different Time Units Expressed In Weeks. (Abscissa: 1 = Week, 4 = Month, 13 = Quarter, 26 = Half Year, 52 = 1 Year, 156 = 3 Years, 260 = 5 Years, 520 = 10 Years.)


[Figure: line plot; abscissa: Number of Weekly Units; ordinate: Parameter Values; series: Skewness, Kurtosis.]

Figure 4.3 Estimates Of GEV Median Skewness And Kurtosis Statistics Over Different Time Units Expressed In Weeks (Abscissa: 1 = Week, 4 = Month, 13 = Quarter, 26 = Half Year, 52 = 1 Year, 156 = 3 Years, 260 = 5 Years, 520 = 10 Years).


[Figure: overlaid probability-density histograms; abscissa: Extremes; ordinate: Probability Density Function.]

Figure 4.4 Histograms Of Extremes On Different Time Scales (Legend: M = Month, Q = Quarter, H = Half Year, 1 = One Year, 3 = Three Years, 5 = Five Years, T = Ten Years).


There appears to be a systematic, increasing relationship between the length of the time unit and the location parameter, but it is unclear whether this relationship extends to the scale parameter, and it clearly does not extend to the shape parameter estimated from the bootstrap analysis. The shape parameter, after an initial decrease, increases slightly, then returns to its starting level; it remains fundamentally unchanging over the time frames. Computing the moments from the simulation yields a similar result, namely: the mean increases with the length of the time unit, and the variance (standard deviation), if it increases at all, asymptotes quite rapidly. However, the estimates of skewness and excess kurtosis do not display the same behavior. The behavior of excess kurtosis over the time units is very similar to the plot of ξ̂. The changes in observed values of ξ̂ may be a function of the ancillary variables, such as the industries, the exchanges, the market cap, etc. However, this hypothesis was not examined here.

We may also look at the issue from the perspective of order statistics. Let X ~ GEV(μ, σ, ξ). From Casella and Berger (2001), the density of the i-th order statistic of a random sample of size n is

f_{X_{(i)}}(x) = \frac{n!}{(i-1)!\,(n-i)!}\,[F_X(x)]^{i-1}\,[1-F_X(x)]^{n-i}\,f_X(x)    (4.7)


Here

F_X(x) = \exp\big\{-\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi}\big\}

and

f_X(x) = \frac{1}{\sigma}\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi-1}\exp\big\{-\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi}\big\}    (4.8)

where \{x : 1+\xi(x-\mu)/\sigma > 0\}, \sigma > 0, -\infty < \mu < \infty, and -\infty < \xi < \infty.

So, for the largest-order statistic of a random sample of size n,

f_{X_{(n)}}(x) = n\,[F_X(x)]^{n-1} f_X(x) = \frac{n}{\sigma}\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi-1}\exp\big\{-n\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi}\big\}    (4.9)

Since we found that 0 < ξ < 1 in this analysis, let c^{-1/\xi} = n, which implies c = n^{-\xi}, resulting in a positive number in the range 0.25 < c < 1. Taking n inside the exponent and continuing from above:


With c = n^{-\xi}, so that n = c^{-1/\xi}, the exponent in (4.9) becomes

n\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi} = \big[c + c\,\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi}.

Letting d = 1 - c,

c + c\,\xi\big(\tfrac{x-\mu}{\sigma}\big) = 1 + \xi\Big(\frac{c(x-\mu)}{\sigma} - \frac{d}{\xi}\Big) = 1 + \xi\Big(\frac{x-\mu_n}{\sigma_n}\Big),

with \mu_n = \mu + \sigma d/(c\xi) and \sigma_n = \sigma/c. For the leading factor, note that 1+\xi((x-\mu)/\sigma) = c^{-1}[1+\xi((x-\mu_n)/\sigma_n)], so

\frac{n}{\sigma}\big[1+\xi\big(\tfrac{x-\mu}{\sigma}\big)\big]^{-1/\xi-1} = \frac{c^{-1/\xi}\,c^{1/\xi+1}}{\sigma}\big[1+\xi\big(\tfrac{x-\mu_n}{\sigma_n}\big)\big]^{-1/\xi-1} = \frac{1}{\sigma_n}\big[1+\xi\big(\tfrac{x-\mu_n}{\sigma_n}\big)\big]^{-1/\xi-1}.

Combining the pieces,

f_{X_{(n)}}(x) = \frac{1}{\sigma_n}\big[1+\xi\big(\tfrac{x-\mu_n}{\sigma_n}\big)\big]^{-1/\xi-1}\exp\big\{-\big[1+\xi\big(\tfrac{x-\mu_n}{\sigma_n}\big)\big]^{-1/\xi}\big\},
F_{X_{(n)}}(x) = \exp\big\{-\big[1+\xi\big(\tfrac{x-\mu_n}{\sigma_n}\big)\big]^{-1/\xi}\big\}    (4.10)

We have shown that the maximum order statistic of a size-four random sample from a random variable distributed as a GEV is itself distributed as a GEV. Since there is nothing in this development which restricts the result to any particular sample size, the result is applicable to the maximum of a random sample of any number of weeks. So if a weekly block maximum is distributed as a GEV, then the maximum of a block of n weekly maxima is also distributed as a GEV, with location \mu_n = (c\xi\mu + \sigma d)/(c\xi) = \mu + \sigma d/(c\xi), scale \sigma_n = \sigma/c, and shape \xi_n = \xi, where c and d are functions of n and ξ. Actually the max-stable definition and Theorem 2, Chapter 1, Section 1.8.1, would guarantee the above result, but conducting the proof also yielded the forms of the parameters as a function of the base time frame. The earlier simulation shows how the parameters tended to behave over the range of block lengths for the universe which is being analyzed. In the further analyses only the weekly block maximum will be used.
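The max-stability conclusion can also be checked numerically in R with the evd package; the sketch below is illustrative only, with arbitrary parameter values standing in for fitted ones.

    library(evd)   # rgev() and pgev() for the GEV distribution

    set.seed(1)
    mu <- 0.028; sigma <- 0.029; xi <- 0.11; n <- 4   # illustrative weekly values
    c.n <- n^(-xi); d.n <- 1 - c.n                    # c = n^(-xi), d = 1 - c
    mu.n    <- mu + sigma * d.n / (c.n * xi)          # location of the n-week maximum
    sigma.n <- sigma / c.n                            # scale of the n-week maximum

    # maxima of n weekly draws vs. the implied GEV(mu.n, sigma.n, xi)
    m <- apply(matrix(rgev(4e5, loc = mu, scale = sigma, shape = xi), nrow = n), 2, max)
    ks.test(m, pgev, loc = mu.n, scale = sigma.n, shape = xi)   # should not reject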


4.5 The Time-Varying Model

While approaching this modeling, an issue which loomed large was the dimensionality, or number of covariates, to be used in fitting the GEV parameter estimates. To this end, a strategy was adopted of breaking the analysis into multiple steps consisting of subsets of covariates, finding the "best" covariates in each step, combining the "best" covariates of earlier steps in further analyses, and repeating until the covariate set was "whittled down."


The data set on which the time-varying modeling was performed was a sample of 3,000 weekly extreme-value series covering the period from January 3, 2000 (the first business day in 2000), until August 31, 2007 (the last business day in August 2007). This period represented 1,927 trading days and 400 trading weeks. The series selected formed a stratified random sample built using the proportions of the five ancillary factors (identified in Chapter 3, Section 3.4) found in the 12,185 series created from the data preparation activities. The sample size was consistent with that suggested by the dissertation committee and represented 3,000/12,185, or nearly 25% (24.6%), of the available sample. Weekly data were chosen because of the results of the previous section, because they produced the largest set of observations of any time unit, and because, as found in a preliminary analysis, the GEV models based on a weekly block-maximum framework converged readily with reasonable results. As seen in the previous section, there is a strong relationship between weekly models and models with larger time units. While it is a reasonable hypothesis that changes in time scale have an effect on the behavior of financial market phenomenology, examination of monthly and longer extreme-value series is left for later study and perhaps for other researchers.


4.5.1 Matrix Form Of Relationships Between Time-Varying Covariates And GEV Parameters

In the present research the matrix form of the relationship between the GEV parameters and the covariates is given by:

\mu(\theta_1)_{t\times 1} = X_{t\times p}\,\theta_{1\,(p\times 1)}
\sigma(\theta_2)_{t\times 1} = \exp\big(Z_{t\times q}\,\theta_{2\,(q\times 1)}\big)    (4.11)
\xi(\theta_3)_{t\times 1} = W_{t\times r}\,\theta_{3\,(r\times 1)}

where X, Z and W are matrices of time-indexed covariate values containing p, q and r covariates, respectively; \theta_{1\,(p\times 1)}, \theta_{2\,(q\times 1)} and \theta_{3\,(r\times 1)} are coefficient vectors such that \Theta_{S\times 1} = [\theta_1^T, \theta_2^T, \theta_3^T]^T; and S = p + q + r.

The time-varying log-likelihood function is defined for the extrema random variable Y_t as follows.

For \xi_t \neq 0 \ \forall t:

\log L(y;\mu_t,\sigma_t,\xi_t) = -\sum_{t=1}^{T}\Big[\log\sigma_t + (1+1/\xi_t)\log\big(1+\xi_t(y_t-\mu_t)/\sigma_t\big) + \big(1+\xi_t(y_t-\mu_t)/\sigma_t\big)^{-1/\xi_t}\Big]    (4.12)


For \xi_t = 0 \ \forall t:

\log L(y;\mu_t,\sigma_t) = -\sum_{t=1}^{T}\Big[\log\sigma_t + (y_t-\mu_t)/\sigma_t + \exp\big\{-(y_t-\mu_t)/\sigma_t\big\}\Big]    (4.13)

for \mu_t(\theta_1), \sigma_t(\theta_2), \xi_t(\theta_3), and 1+\xi_t(y_t-\mu_t)/\sigma_t > 0 \ \forall t.

4.6 Examining Covariates

4.6.1 Time's Arrow And Periodic Covariates

One hypothesized set of covariates is a set built solely upon a time index. The behaviors to be examined included a level constant, a non-cycling time-based trend (also commonly called time's arrow), and a set of periodic patterns. These are described in Table 4.3.

Table 4.3 Description of candidate periodic behaviors.

Period (in weeks)*   Description
104.36               Half cycle per year (bi-annual)
52.18                One cycle per year (annual)
26.09                Two cycles per year (six months)
13.04                Four cycles per year (quarterly)
4.35                 Approximately 12 cycles per year (monthly)
2.00                 Two-week cycles

* Based upon 52.18 weeks per year.


An initial evaluation, estimating GEV parameters for 17 equities, was conducted based on covariates consisting of the constant and trend as well as a sine and cosine covariate equivalent to each candidate period listed in Table 4.3. The result, under the goal of posing general models, was that only the constant term was observed as significant in a high percentage of the equities. The next most commonly significant covariate was the trend covariate (albeit with a very small coefficient value), which could have been confused with long-period periodic behavior. However, there were only 17 equities in this initial analysis, and since the present dataset was much larger and covered the population in a far more complete fashion, we reexamined these time-based covariates, at least in a preliminary fashion.

Following the long-established decomposition of time series, each equity data set (of weekly maxima) was first examined for the presence of a trend or constant. To this end a KPSS test was run. Named after Kwiatkowski, Phillips, Schmidt, and Shin (1992), the KPSS test has a null hypothesis of level or trend stationarity against the alternative of a unit root or nonstationarity. While we assumed the use of a constant in any covariate model, we desired: (1) to see whether a trend was likely present, and (2) to see whether the extrema could be made level stationary by simple differencing. The level-stationary series would be used to test the fit of the periodic elements.


To perform the test, 2,500 equity series of the 12,185 weekly test series were randomly sampled. As in the initial study of 17 equities, the first 18 months of data were set aside to reduce transient effects of the processing as well as of missing values. Natural logarithms of the raw extrema series were taken, as well as first differences of both the logged and raw series. These four derived data sets from the 2,500 series were subjected to a KPSS test with the null hypothesis of level stationarity. Table 4.4 provides the results in terms of p-values > 0.01 (fail to reject the null) and p-values < 0.01 (reject the null).

Table 4.4 Number of weekly extremal series failing to reject the null hypothesis of level stationarity under the KPSS test.

Null hypothesis: series is level stationary
         Raw Data   Raw Diff'd Data   Logged Data   Logged Diff'd Data
Tally    730        2,500             730           2,500

It is clear from Table 4.4 that the extremal series are overwhelmingly first-difference level stationary, implying there is likely a trend component (or a very long periodic element, which we treat as a trend). As witnessed in the initial study, the coefficient of any long-term trend is very small and may be confounded with the estimate of the constant.
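A sketch of this screening step in R follows, using the KPSS implementation from the tseries package; the wrapper function and series name are hypothetical.

    library(tseries)   # provides kpss.test()

    # x: one weekly extremal series (first 18 months already set aside)
    kpss.screen <- function(x) {
      candidates <- list(raw      = x,
                         raw.diff = diff(x),
                         logged   = log(x),
                         log.diff = diff(log(x)))
      # null hypothesis: level stationarity; reject when p < 0.01
      sapply(candidates, function(s) kpss.test(s, null = "Level")$p.value)
    }

Note that kpss.test reports p-values only within the tabulated range of the test statistic, so in practice the returned values are simply compared against the 0.01 cutoff rather than interpreted exactly.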


Using the differenced series, power spectra were computed for each series, and tests were performed to get a sense of which sine and cosine terms were significant over the samples. Table 4.5 provides the results from a permutation test of the power spectra. Both logged and raw differenced series were examined. The values in the table represent the proportion of the 2,500 series for which the power spectrum value exceeded the 0.95 permutation critical value. Periods under examination greater than six months exhibited no power, while those less than a quarter of a year (especially for the raw differenced series) possessed increasing power, such that nearly one-third of the differenced raw series demonstrated significant power for a period of every two weeks.

Table 4.5 Results from the permutation test from power spectra examining hypothesized salient periods.

Period              Every Two Years  Every Year  Every Six Months  Every Quarter  Every Month  Every Two Weeks
Proportion-Raw      0                0           0                 0.0004         0.2580       0.3144
Proportion-Logged   0                0           0                 0              0.0648       0.0808

An alternative way of looking at significant power spectrum elements is to compare the observed spectrum to that of a white noise process (Jenkins and Watts [1968]). A white noise process is one for which the terms of the series are IID N(0,1).


Under this assumption the power spectrum values are distributed as \chi^2 with 2 degrees of freedom. Power spectra were computed for a standardized version of each time series, and the ordinates of the power spectrum were compared with the 95% critical value (\chi^2_{2,\,0.95} = 5.99). Table 4.6 tallies, by specific preselected periodic elements, the proportion of the 2,500 differenced raw and differenced logged series that exceeded the test value and thereby rejected the null hypothesis.

Table 4.6 Results from the white-noise test from power spectra examining hypothesized salient periods.

Period              Every Two Years  Every Year  Every Six Months  Every Quarter  Every Month  Every Two Weeks
Proportion-Raw      0                0           0                 0              0.0064       0.0372
Proportion-Logged   0                0           0                 0              0.0020       0.0096

Similar to, but more restrictive than, the permutation results, only the monthly and short periods showed any significant periodic elements, and these were minuscule in number: at best, 3.7% × 2,500 = 93 series were observed.

The conclusion reached with respect to the time-based covariates was that, while a constant and trend term may be used in the parameter fitting, none of the other initially hypothesized periodic covariates would be candidate predictors in this research.
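A sketch of the white-noise comparison in R is given below; the \chi^2(2) scaling assumes a standardized (unit-variance) series, and the period matching at the end is a simplified illustration.

    # Periodogram ordinates scaled so that, under IID N(0,1) white noise,
    # each ordinate is approximately chi-squared with 2 degrees of freedom.
    wn.spectrum.test <- function(x, alpha = 0.05) {
      x <- (x - mean(x)) / sd(x)             # standardize the series
      n <- length(x)
      j <- seq_len(floor((n - 1) / 2))       # Fourier frequencies j/n
      pgram <- (2 / n) * Mod(fft(x))[j + 1]^2
      data.frame(period = n / j,             # period in weeks
                 reject = pgram > qchisq(1 - alpha, df = 2))   # 5.99 at alpha = 0.05
    }

    # e.g., is the two-week period significant for one differenced series?
    # res <- wn.spectrum.test(diff(series)); res[which.min(abs(res$period - 2)), ]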


4.6.2 Financial Markets And Economic Covariates

4.6.2.1 A Preliminary Fitting

The set of financial covariates was examined next. The processing of the selected covariates for use in the model involved several data manipulations. First, daily returns were computed for the covariates. This involved a difference of present day minus previous day for the interest rate covariates; for the index covariates this difference was divided by the previous day's value. The daily return values from this computation were used in the analysis and were not logged.

The raw returns were grouped into the same weekly timeframe as the equity returns and aggregated. Since there was no real guidance in the literature as to how to aggregate the observations for use in a covariate model of this sort, three aggregates were computed for each week (the minimum, the maximum, and the median) to cover the likely range of sources of influence on the extremal equity events. Also, it was unclear whether the impact of the covariate upon the maximum-likelihood estimates would be week coincident, week leading, or week lagging. Lagging, or using covariates from a timeframe after the extremal value, was ruled out of the analysis as being of no great practical value. However, the coincident covariate values, as well as those from one and two weeks prior, were used. This meant that each covariate series could be placed in the model as nine series (three aggregates by three timeframes).
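A compact R sketch of this construction is shown below; the input data frame daily, its columns (week, ret), and the helper names are hypothetical.

    # Build the nine weekly covariate series: {median, min, max} x {lag 0, 1, 2}.
    make.weekly.covariates <- function(daily) {
      agg <- function(f) tapply(daily$ret, daily$week, f)
      wk  <- data.frame(ME = agg(median), NM = agg(min), PM = agg(max))
      lag.wk <- function(v, k) c(rep(NA, k), head(v, -k))   # k-weeks-prior values
      data.frame(R0 = wk,
                 R1 = lapply(wk, lag.wk, k = 1),
                 R2 = lapply(wk, lag.wk, k = 2))
    }

The resulting column names (R0.ME, R1.NM, and so on) follow the RX.Y convention used in Table 4.7 below.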


The correlations among these nine variables were computed and are provided in Table 4.7. While there are some moderate-strength correlations in the table, it was not felt that the results warranted eliminating any of the covariates at this juncture due to the potential presence of strong collinearity.


Table 4.7 Correlation matrix from the covariate series created by crossing aggregation and timeframe factors.

        R0.ME    R0.NM    R0.PM    R1.ME    R1.NM    R1.PM    R2.ME    R2.NM    R2.PM
R0.ME   1.0000  -0.1969   0.1679  -0.1039  -0.0017  -0.0948  -0.0111  -0.0454  -0.0286
R0.NM  -0.1969   1.0000   0.6518   0.0133   0.6772   0.6926  -0.0440   0.7062   0.6369
R0.PM   0.1679   0.6518   1.0000  -0.0702   0.6432   0.5857   0.0247   0.6477   0.6189
R1.ME  -0.1039   0.0133  -0.0702   1.0000  -0.1942   0.1547  -0.0802  -0.0130  -0.0842
R1.NM  -0.0017   0.6772   0.6432  -0.1942   1.0000   0.6571   0.0219   0.6812   0.6938
R1.PM  -0.0948   0.6926   0.5857   0.1547   0.6571   1.0000  -0.0554   0.6429   0.5868
R2.ME  -0.0111  -0.0440   0.0247  -0.0802   0.0219  -0.0554   1.0000  -0.1877   0.1715
R2.NM  -0.0454   0.7062   0.6477  -0.0130   0.6812   0.6429  -0.1877   1.0000   0.6445
R2.PM  -0.0286   0.6369   0.6189  -0.0842   0.6938   0.5868   0.1715   0.6445   1.0000

Legend: RX.Y, where X = 0 for time coincident, 1 = one week prior, 2 = two weeks prior; Y = ME for median, PM for maximum, and NM for minimum.


A small sample of 200 equity series was randomly selected, and GEV parameters were fitted using the stepwise methodology. This effort was designed to provide guidance for the decision to remove covariate types from the larger analysis. A tally was computed of the number of models by the occurrence of each of the following five types of covariates: constants, trends, median aggregations (ME), maximum aggregations (PM), and minimum aggregations (NM). Table 4.8 depicts these tallies presented as percentages. As an example, the constant term was found in 100% of the 200 models, while a median term was in 3.5% of the models. In fact, each of the three constant terms (one each for μ̂, σ̂, ξ̂) appeared in well over 90% of the models.

Table 4.8 Percentage representation of the number of models in which each of the covariate classes appears.

Constant   Trend    ME      PM     NM
100%       41.00%   3.50%   100%   100%

From this examination the model-generation strategy was modified to modestly reduce the effort in two ways: all models were started from a baseline model of the three constants, equivalent, as discussed above, to fitting a non-time-varying model (NTVM); and the median aggregation covariate was dropped from further analyses.


4.7 The Full Fitting Of Time-Varying GEVs

4.7.1 Stepwise Model

After the adjustment described above, as well as other adjustments specific to individual covariates, there were a total of 266 covariates to be examined against each of the three parameters under the maximum-likelihood model previously covered. A forward stepwise approach was adopted within the maximum-likelihood model. In this model each covariate was examined for feasibility (meeting the conditions of the GEV) as well as for forming a linear function for each GEV parameter. The space of coefficients for the covariates was searched using the Nelder-Mead and BFGS algorithms previously specified. The covariate and associated coefficient which resulted in the maximum value of the maximum likelihood was retained. The process was repeated, and the covariate generating the maximum likelihood, given that the first-step covariate was in the model, was retained. This process was repeated until a stopping rule for the process was triggered. (The stopping rules used will be discussed later in this section.) To provide a baseline equivalent to the non-time-varying GEV (as described earlier), and in response to results of the 200-series analyses, the initial stepwise model contained an estimated coefficient of a constant series (of 1s) for each of the GEV parameters. While a separate model was fitted to each series, up to ten additional steps of fitting one covariate at a time were allowed. The limitation of ten steps was the result of a preference or philosophy which runs throughout the analysis, namely that, subject to variation associated with the ancillary factors (described in Chapter 3, Section 3.4), general results are more interesting than results that are idiosyncratic to a specific equity series.
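For orientation, a minimal sketch of the objective evaluated at each stepwise stage is given below; the design matrices X, Z, W, the parameter packing, and the starting values are hypothetical stand-ins, and the Gumbel case (4.13) is omitted for brevity.

    # Negative of the time-varying GEV log-likelihood (4.12);
    # theta = c(theta1, theta2, theta3) per the packing in Section 4.5.1.
    gev.tv.nll <- function(theta, y, X, Z, W) {
      p <- ncol(X); q <- ncol(Z); r <- ncol(W)
      mu  <- X %*% theta[1:p]                   # (4.11): mu linear in X
      sig <- exp(Z %*% theta[(p + 1):(p + q)])  # exponential link keeps sigma > 0
      xi  <- W %*% theta[(p + q + 1):(p + q + r)]
      z <- 1 + xi * (y - mu) / sig
      if (any(z <= 0)) return(Inf)              # candidate outside the GEV support
      sum(log(sig) + (1 + 1 / xi) * log(z) + z^(-1 / xi))
    }

    # searched, as in the text, with the standard optimizers, e.g.:
    # optim(theta0, gev.tv.nll, y = y, X = X, Z = Z, W = W, method = "BFGS")

At each step the candidate covariate is appended to one of X, Z or W, the optimization is re-run, and the achieved likelihood is compared across candidates.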


With the inclusion of the constant term in the first step, in the second step there remained 265 covariates to be examined against the three parameters. Because each covariate could be incorporated into the model for each GEV parameter separately, the analysis needed to examine 3 × 265 = 795 covariate terms for the second step, (2 × 265) + 264 = 794 for the third step, and so forth. At each step a complete nonlinear optimization was applied. Additionally, since the maximum-likelihood model was highly nonlinear and the GEV parameters were not perfectly correlated, no guidance could be found in the literature or via personal communications that would support a priori elimination of large numbers of covariates from the analysis in later steps based upon results of earlier steps. Consequently, the analysis of each equity would take a considerable amount of time, depending upon the number of steps required. For example, using the computers available, a ten-step analysis would take three to four hours. To complete the analysis of the approximately 3,000 equity training sets and the 100 equity testing sets, over 90 processors (from various sources) were used over 10 days of processing.


Previously we alluded to stopping rules for the stepwise processing. In the present analysis two stopping rules were used in combination: a likelihood ratio test (LRT) and the Bayesian Information Criterion (BIC). The LRT, a consequence of the Neyman-Pearson Lemma (Casella and Berger [2001]), is the ratio of the likelihood (maximum likelihood) of an embedded model over the likelihood of a full or expanded model. In other words, all the estimators or values of estimators in the reduced model can be found in the full model, and then some. The null hypothesis for the LRT is that the parameters θ of the likelihood L(θ | x) are contained in a set Θ_0; the alternative hypothesis is that the parameters are contained in Θ_1, defined such that Θ_0 ∪ Θ_1 = Θ. Therefore, the numerator is the likelihood under H_0 and the denominator is the likelihood under H_1. Under this construction L(θ̂_0 | x) ≤ L(θ̂_1 | x); therefore, the value of the ratio varies between 0 and 1, with values closer to 0 leading to rejection of H_0. In the present context the LRT was conducted on the maximum likelihoods computed from maximizing the likelihood for step s versus that for step s-1; in other words, examining whether the additional covariate significantly increased the maximum-likelihood value. If the remaining set of covariates, as represented by the covariate that maximized the likelihood for the step, did not significantly increase the value of the maximum likelihood over the previous step, then the search was stopped and the set of covariates established in the previous step was used.


A test of significance arises as follows: let λ = L_0/L_1 and consider the transformation Λ = -2 log(λ); then, as the number of samples n → ∞, Λ̂ converges in distribution to \chi^2_\nu, with degrees of freedom ν = (the dimensionality of Θ_1) - (the dimensionality of Θ_0) (Casella and Berger [2001]), which in our tests was 1.

The BIC, also sometimes called the Schwarz Information Criterion (SIC) (Schwarz [1978]), is an asymptotic result derived under the assumption that the data distribution is in the exponential family. BIC is of the following form:

BIC = 2\log(\ell) - p\log(n)    (4.14)

where:
\ell is the maximized value of the model's likelihood,
p is the number of covariates in the model,
n is the number of observations.

The likelihood term of the BIC increases with more covariates, but the BIC is penalized for increasing the number of covariates used. In this way the BIC "biases" toward fewer covariates rather than more covariates in the model. Ideally, the BIC will increase for a covariate that adds more to the likelihood than is subtracted by the addition of a parameter. The stopping rule was to select the step (and therefore the covariates) prior to the step for which the BIC decreased in response to a greater number of covariates without a commensurate increase in likelihood.
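In code, the two rules combine roughly as follows; this R sketch assumes vectors loglik and p holding the maximized log-likelihood and covariate count recorded at each step, and is a simplified illustration rather than the production routine.

    # Decide whether the covariate added at step s should be kept.
    keep.step <- function(loglik, p, s, n, alpha = 0.05) {
      Lambda <- 2 * (loglik[s] - loglik[s - 1])    # -2 log(lambda), 1 df here
      lrt.ok <- Lambda > qchisq(1 - alpha, df = 1)
      bic    <- 2 * loglik - p * log(n)            # form (4.14): larger is better
      lrt.ok && (bic[s] > bic[s - 1])              # stop the search on failure
    }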


As mentioned earlier, both rules were used in concert to determine when to stop the stepwise analysis. Table 4.9 is a tally of the stopping steps for the training-set equities.

Table 4.9 Tally of stopping steps for the equities in the training set (stopping rules used are discussed in the text above).

                        Constants   4             5             6             7             8
                        Only        Coefficients  Coefficients  Coefficients  Coefficients  Coefficients
Count                   5           497           554           577           343           295
Proportion              0.002       0.166         0.185         0.192         0.114         0.098
Cumulative Proportion   0.002       0.167         0.352         0.544         0.658         0.756

                        9             10            11            12            >12
                        Coefficients  Coefficients  Coefficients  Coefficients  Coefficients  Total
Count                   204           143           115           137           132           3,002
Proportion              0.068         0.048         0.038         0.046         0.044         1.000
Cumulative Proportion   0.824         0.872         0.910         0.956         1.000

Despite the limitation of a total of 11 steps placed on the processing, more than 95% of the equities required models of 10 steps or fewer (12 covariates or less). For the 4.5% of the models requiring more steps, and hence more covariates, in the interest of generality the modeling of these equities was restricted to the 13 covariates selected by the process. On average, the models for the distribution parameters had approximately 7.0 covariates (6.991) in total.


4.8 Analyzing The Covariate Models

A detailed analysis of the results from the fitting of time-varying GEV distributions is found in Appendix B. Having obtained the parameter estimates needed as inputs later in the analysis, and other than as a sanity check on the estimation process, the analysis of the covariate models in and of themselves was not and is not a central research thread of this dissertation. It does hold hypotheses which the researcher believes are worthwhile examining in follow-up research efforts, which are noted in Chapter 7, Section 7.5. Therefore, this examination is "stubbed out" at this juncture. Some of the salient results of the analysis of the parameter fitting (supporting exhibits in Appendix B) include:

1. The models for the distribution parameters had on average approximately seven covariates in total.
2. 95% of the time-varying models showed significant improvement over the static model.
3. Financial and economic covariates were as or more important compared to constants or trend.
4. Of the financial/economic covariates, the plurality were used in the estimation of μ̂, followed by ξ̂ and σ̂.
5. Time-contemporaneous covariates made up less than half of the covariates compared to time-lagged covariates (46.6% against 53.4%).
6. Covariates associated with lags of one or two weeks were equally divided, suggesting, from an overall perspective, the existence of a lead-lag relationship, but no strong differentiation by time frame for at least the first two weeks.


7. The minimum values of the covariate in the block entered the model more frequently than the maximum values of the covariate in the block (62.1% versus 37.9%, respectively).
8. Distinct groupings of covariates exist with respect to when and in what form (aggregation) they entered models; for example, global market indices behave similarly.
9. When the results were observed from the perspective of market capitalization, model complexity (in terms of the number of components, the number of leading components, and the number of distributional parameters which were modeled by covariates beyond the constant) tended to increase with greater capitalization.
10. Africa, Eastern Europe, the Middle East, South America, and to a lesser extent North America can, at the gross level of this analysis, be modeled (in the sense described above) by less complex models.
11. The Pacific Rim and Western Europe, in the same sense as the previous, tend to require more complex models.

Whether or not the results from fitting these models will provide succinct insights into which financial covariates ultimately have an effect upon the GEV parameters, and in turn upon risk, is beyond this research. Nevertheless, this effort is "a means to an end," which is the estimation of the time-varying return values. We turn to this analysis in the next Chapter (Chapter 5).


5. Estimating Time-Varying Return Levels And Modeling Return Levels Jointly

5.1 Overview Of The Chapter

In Chapter 5, the time-varying GEV distributions which were fitted in Chapter 4, Section 4.7, are put to work, or at least their parameter estimates are. The fitting of the distributions was a means to an end, and that end, as has been cited in earlier Chapters, is to use the parameter estimates to estimate time-varying return values. The return values in turn are used, in association with the ancillary variables first introduced in detail in Chapter 3, Section 3.4, to build a model of the joint distribution of extreme return behavior. Chapter 5 follows along this collection of threads.

The meaning of return values and the associated terms, such as return levels and return periods, is the same in the time-varying circumstance as in the static or non-time-varying analysis. The return value is simply the quantile value (so it is measured in return units) selected such that the probability of a return greater than the return value equals 1/(return period). The return period is defined in terms of the time units of the analysis, in this case weeks. Another useful meaning of the return value within the present domain is that the investor will witness a return of the magnitude of the return value or greater on average once every return period.


Estimation of return values for a time-varying analysis is very similar to that for a non-time-varying model. The differences arise out of the presence of a sequence of values for each of the distribution parameters rather than a single value, which necessitates that the return value be solved for by iteration.

The variance of the return value is a quantity which may be thought of as the time-varying measurement error for the return value; it is alternatively denoted the time-varying nugget-type variance, or simply the nugget, in this discussion.(9) A function of this variance turns out to be used in describing the error variance of the joint distribution of return values and, as shall be described in Chapter 6, Section 6.5, the nugget variation proves useful in further defining the risk matrix. The computation of the nugget variance is presented in this Chapter (Chapter 5) and is shown to be a function of the partial derivatives of the return value with respect to the model covariates and the inverse Hessian, or inverse of the matrix of second partial derivatives, of the covariate estimates at the point of solution of the return value.

(9) This quantity is called a nugget-type variance because it is time varying, driven by financial covariates which are time indexed, rather than by time's arrow or time-periodic covariates as are used in similar TV climatic models or in conventional nugget variances associated with NTV models. However, as used in the financial domain, a TV return value (the basis for the TV nugget) will only be used for a fixed period of time, then discarded and re-estimated, as will be the nugget. So for these short periods of time the nugget may be thought of as approximately fixed.


A Gaussian process (GP) model is postulated as the form of the joint distribution of return values. To this end, a vector of return values is defined from a random sample of random variables and is expressed as a random vector distributed as a GP, with a mean defined as a function of the ancillary variables and a variance matrix which is broken into two pieces: one based solely upon variation associated with some or all of the ancillary factors which is not captured within the fixed effects of the expected values, and a second variance matrix defined as a function of the nugget variation. In the present model both matrices are assumed to be diagonal and uncorrelated with one another.

The remainder of the Chapter (Chapter 5) is devoted to describing efforts aimed at fitting this model. The fitting is accomplished in stages, and steps within stages. But first, an examination of the return value data suggests the need for transforming and trimming the data in order to better meet distributional assumptions. An examination of the transformed and trimmed data suggests that these manipulations have not materially changed the behavior of the sample, the sample size, or the relationship between categories within factors, even over the time of the study, but they have improved the distributional behavior, as was sought.


After the data manipulations, the first stage is to fit the expected values of the GP as a set of fixed effects developed from a subset of the ancillary factors in categorical form (nominal or ordinal scale) and represented in the model as a set of cell-means contrasts. This first-stage model results in the inclusion of fixed effects for sectors, stock exchanges, and market capitalizations. However, examination of the model diagnostics suggests the need to add factors in quantitative form, as well as the need to deal with the presence of heteroscedasticity.

Following on from these initial results, in a second stage of model building, predictors are added to the function estimating the expected values. These include discounted market capitalizations and a market capitalization by year interaction. Each of the new predictors, as well as the members of the legacy set, proved to be significant under examination using Type III Sums-of-Squares. With Type III Sums-of-Squares, each predictor is alternately tested for significance assuming all the other predictors are in the model.

Using the estimated values of the predictors from this latest analysis of the expected-value portion of the model as a starting point, an extensive examination of alternative models for each of the two postulated sources of variability is conducted. All of the models examined embed the expected-value parameters and the variance parameters into the parameters of a multivariate normal. Because of assumptions, previously articulated, regarding the form and relatedness of the variance matrices, this multivariate normal distribution decomposes into a set of sums, making parameter estimation using a maximum-likelihood approach fairly straightforward and computationally very tractable.


With respect to the two variance matrices: for the ancillary-factors-variation matrix (denoted in the text as H), variances which are a function of year by market capitalization are hypothesized and estimated; this results in 35 variances. For the portion of the variation which is a function of the nugget variation (denoted in the text as Σ), a nugget multiplier based upon the sector of which the equity is a member is hypothesized and estimated; consequently, 20 such multipliers are estimated. Therefore, the final model contains 103 parameters associated with the expected-value predictors (many of which are used for the cell-means contrasts, particularly those for the stock exchanges) and 55 parameters associated with defining the variance matrices. The selected model contains in total 158 parameters, estimated using in excess of 20,000 observations.


Diagnostics presented at the end of this Chapter (Chapter 5) and model validation performed at the beginning of Chapter 6 (see Chapter 6, Section 6.4) suggest that the model is satisfactory.

5.2 A Brief Recap To This Point

In the previous Chapter (Chapter 4) the parameters of the extreme-value distributions were modeled as a function of a set of financial/market covariates. In this Chapter (Chapter 5) the researcher will create and examine a multivariate model composed of fixed and random elements, with the purpose of creating a model of the joint behavior of equity returns.

First, some of the important model elements will be described prior to combining them to form the model. The discussion begins with a description of the time-varying (TV) return value.

5.3 Computing Return Value Levels

As the reader may recall from the description in Chapter 1 (Section 1.12), we may invert the generalized extreme-value (GEV) cumulative distribution function (cdf) and thereby obtain a quantity x_p, defined as the return level associated with the 1/p return period. A more accessible definition of the return level states that the quantile x_p will be exceeded on average once every 1/p base periods (in this context, weeks), where p = Pr[X ≥ x_p]. For example, if we let p = 0.001, then 1/p (also notated as N) = 1,000.


If we use the results of Chapter 1, Section 1.12, to find a value x_{0.001} such that Pr[X ≥ x_{0.001}] = 0.001, then we may conclude that on average X will be greater than or equal to x_{0.001} once every 1,000 weeks.

However, the computation provided in Chapter 1, Section 1.12, is based on a non-time-varying (NTV) model. Smith, et al. (2006) suggested an approach for estimating return values for an arbitrary but fixed return period of N weeks for the TV models examined in the previous Chapter (Chapter 4).(10) We need to find the quantile value x_{N,\Theta} such that

\prod_{t=1}^{T} \exp\Big\{-\Big[1+\xi_t\Big(\frac{x_{N,\Theta}-\mu_t}{\sigma_t}\Big)\Big]_+^{-1/\xi_t}\Big\} = 1 - \frac{1}{N}    (5.1)

or, by taking logs,

\sum_{t=1}^{T} \Big[1+\xi_t\Big(\frac{x_{N,\Theta}-\mu_t}{\sigma_t}\Big)\Big]_+^{-1/\xi_t} = -\log\Big(1-\frac{1}{N}\Big)    (5.2)

In this model the return value x_{N,\Theta} is arrived at by iteration; for the present circumstance a bisection method is used. Because of the straightforward inverse relationship between N and p, x can be indexed by either, as long as the index is clear.

(10) The use in this research of the equations for TV return value estimation taken from Smith (2006) requires a comment similar to that made in the footnote at the end of Chapter 4. The covariates in Smith's application of the model are time varying but deterministic, e.g., time's arrow and periodic functions. In this research the covariates are stochastic. However, the use of periodic rebalancing in the financial domain means that return levels and functions of returns will be re-estimated relatively frequently, and they may be thought of, and used, as fixed for relatively short periods of time.
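A minimal R sketch of the bisection solution of (5.2) follows; the bracketing interval is an illustrative assumption, and mu, sig, xi denote one security's fitted weekly parameter sequences.

    # Solve (5.2) for the N-week return value x by bisection.
    return.value <- function(mu, sig, xi, N, lower = 0, upper = 20, tol = 1e-8) {
      g <- function(x) {                        # LHS minus RHS of (5.2)
        z <- pmax(1 + xi * (x - mu) / sig, 0)   # the [.]_+ truncation
        sum(z^(-1 / xi)) + log(1 - 1 / N)
      }
      # with the positive shapes found here, g decreases in x, so halve the bracket
      for (i in 1:200) {
        mid <- (lower + upper) / 2
        if (g(mid) > 0) lower <- mid else upper <- mid
        if (upper - lower < tol) break
      }
      (lower + upper) / 2
    }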


The parameter vector Θ is included to remind the reader that x is a function of the underlying covariates through the values of the TV parameters μ_t, σ_t, and ξ_t defined previously, as is the non-negative function [z]_+.

5.4 The Variance Of Return Values

The estimate(11) of the variance of the return value is one of the random elements in the model. To compute this we need to bring together a couple of results. Let h be defined as

h(x_{N,\Theta};\Theta) = \sum_{t=1}^{T}\Big[1+\xi_t\Big(\frac{x_{N,\Theta}-\mu_t}{\sigma_t}\Big)\Big]_+^{-1/\xi_t}.

At the solution, h(x_{N,\Theta};\Theta) is pinned to the constant -\log(1-1/N), so its total derivative with respect to \theta_j is zero, where j varies over the components of Θ. By the chain rule,

0 = \frac{d\,h(x_{N,\Theta};\Theta)}{d\theta_j} = \frac{\partial h(x_{N,\Theta};\Theta)}{\partial\theta_j} + \frac{\partial h(x_{N,\Theta};\Theta)}{\partial x_{N,\Theta}}\,\frac{\partial x_{N,\Theta}}{\partial\theta_j}

(11) Variances estimated on an annual basis.


so that

\frac{\partial x_{N,\Theta}}{\partial\theta_j} = -\,\frac{\partial h(x_{N,\Theta};\Theta)/\partial\theta_j}{\partial h(x_{N,\Theta};\Theta)/\partial x_{N,\Theta}}    (5.3)

Let us set this result aside for a moment, set out some additional notation, and recall an important result from mathematical statistics. As per the results from the TV GEV and the associated notation, let Θ̂_k be the maximum-likelihood estimate of Θ_k, the S-length vector of covariate coefficients used to estimate the GEV parameters μ_t, σ_t, and ξ_t (estimates notated by μ̂_t, σ̂_t, and ξ̂_t) for security k, coming from the analysis presented in Chapter 4, Section 4.7. Let θ_{ik}, i = 1, 2, ..., S, be the individual elements or coefficients of the vector Θ_k, with θ̂_{ik} notating the maximum-likelihood estimator. Let the gradient of the log-likelihood function, ∇L_k(log L(x_k; Θ_k)), be denoted ∇L_k, which is equal to 0 when the function is maximized, and ∇L̂_k when Θ_k is replaced by Θ̂_k in the function. The matrix of second partial derivatives of the log-likelihood function is the Hessian matrix(12) and is notated by ∇²_k, and by ∇̂²_k when once again Θ_k is replaced by Θ̂_k. Finally, let us recall that by theorem maximum-likelihood estimators are asymptotically unbiased.

(12) The Hessian is, as indicated above, defined as the matrix of second partial derivatives of the log-likelihood function. It also has other properties: (1) the negative of its expectation is the information matrix, and (2) the inverse of the negative of the Hessian (if positive definite) yields the variances of the estimates of the parameters being estimated.


The result from mathematical statistics which the reader should recall is the delta method (Casella and Berger [2001]). Let φ_n be a sequence of RVs, where μ and σ² are finitely valued and \sqrt{n}(\varphi_n - \mu) \to N(0, \sigma^2). Then, for some function g satisfying the property that g′(φ) exists and is non-zero valued,

\sqrt{n}\,\big(g(\varphi_n) - g(\mu)\big) \to N\big(0,\ \sigma^2[g'(\mu)]^2\big).

In the present context we find h taking the role of g, and Θ_k (Θ̂_k) and its constituents θ_{ik} (θ̂_{ik}) substituting for φ. Applying a Taylor expansion of ∇L̂_k about Θ_k, we have

(\hat\Theta_k - \Theta_k) \approx -(\nabla^2_k)^{-1}\,\nabla\hat L_k.

Recognizing E[Θ̂_k] = Θ_k and by the definition of covariance, we have

\mathrm{Cov}(\hat\Theta_k, \hat\Theta_j) \approx (\hat\nabla^2_k)^{-1}\,\mathrm{Cov}(\nabla\hat L_k, \nabla\hat L_j)\,\big((\hat\nabla^2_j)^{-1}\big)^T,

where the maximum-likelihood estimate of the inverse of the Hessian has been substituted for the inverse Hessian. In particular, by the information identity \mathrm{Cov}(\nabla\hat L_k, \nabla\hat L_k) = -E[\nabla^2_k], taking j = k gives

\mathrm{Var}(\hat\Theta_k) \approx -(\hat\nabla^2_k)^{-1},


where (·)^T is defined as the transpose. Using another Taylor expansion, we have

\hat x_{N,\hat\Theta_k} \approx x_{N,\Theta_k} + \sum_{j=1}^{S}\frac{\partial x_{N,\Theta_k}}{\partial\theta_{jk}}\,(\hat\theta_{jk} - \theta_{jk}).

Recalling once again that E[\hat x_{N,\hat\Theta_k}] \approx x_{N,\Theta_k}, and using our previous results and the delta method, we have

\mathrm{Cov}(\hat x_{N,\hat\Theta_k}, \hat x_{N,\hat\Theta_j}) \approx \Big(\frac{\partial\hat x_{N,\hat\Theta_k}}{\partial\hat\Theta_k}\Big)^T\,\mathrm{Cov}(\hat\Theta_k, \hat\Theta_j)\,\Big(\frac{\partial\hat x_{N,\hat\Theta_j}}{\partial\hat\Theta_j}\Big),

and if k = j this becomes

\mathrm{Var}(\hat x_{N,\hat\Theta_k}) \approx \Big(\frac{\partial\hat x_{N,\hat\Theta_k}}{\partial\hat\Theta_k}\Big)^T\,\mathrm{Var}(\hat\Theta_k)\,\Big(\frac{\partial\hat x_{N,\hat\Theta_k}}{\partial\hat\Theta_k}\Big).

Substituting from the earlier result (5.3) for each component \partial x/\partial\theta_i, i = 1, 2, ..., S, we have the desired quantity, namely

\mathrm{Var}(\hat x_{N,\hat\Theta_k}) \approx \Big(-\frac{\partial h/\partial\theta_i}{\partial h/\partial x_{N,\Theta}}\Big)^T_{\hat\Theta_k}\,\mathrm{Var}(\hat\Theta_k)\,\Big(-\frac{\partial h/\partial\theta_i}{\partial h/\partial x_{N,\Theta}}\Big)_{\hat\Theta_k}    (5.4)

The results presented here will be used to compute one of the variance elements, namely the nugget variance, in the analysis below.
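A numerical sketch of (5.3)-(5.4) in R is given below; it assumes h.fun(x, theta) evaluates h for a given security (closing over its covariate matrices) and that hess is the Hessian of that security's log-likelihood at its maximum, both hypothetical names, and it uses simple finite differences for illustration.

    # Nugget (return-value) variance via the delta method, eqs. (5.3)-(5.4).
    nugget.var <- function(x.N, theta, h.fun, hess, eps = 1e-6) {
      dh.dtheta <- sapply(seq_along(theta), function(j) {
        tp <- theta; tp[j] <- tp[j] + eps
        (h.fun(x.N, tp) - h.fun(x.N, theta)) / eps
      })
      dh.dx  <- (h.fun(x.N + eps, theta) - h.fun(x.N, theta)) / eps
      grad.x <- -dh.dtheta / dh.dx          # eq. (5.3), component by component
      V <- solve(-hess)                     # inverse negative Hessian = Var(theta-hat)
      drop(t(grad.x) %*% V %*% grad.x)      # eq. (5.4)
    }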


5.5 Multivariate Model

In this section the multivariate model is detailed. The multivariate model will be used in further research to describe the joint behavior of the return values. This model is posed as an alternative means of describing the joint behavior of the downside tail of equities. It is proposed in the place of identifying other dependence functions and using said dependence functions to form a joint distribution of the univariate GEVs. The issues with the latter approach were detailed earlier in this research, in Chapter 1, Section 1.13.1.

The discussion starts with a review of the existing hypothesis, followed by a definition of the model in broad subject-matter (finance) terms, before proceeding on to a formal statistical definition. Let us define the following set of random variables, the return values X_{N,\Theta_k} for security k, as a function of the GEV parameters μ_t, σ_t, and ξ_t, which themselves are postulated to be a function of a set of financial/market covariates scaled by a set of coefficients contained in an S-length vector Θ, and for a specified return level N. (In the previous Chapter (Chapter 4) the development of a maximum-likelihood estimator Θ̂_k is described, where k = 1, 2, ..., K, the number of securities(13) [see Chapter 4, Section 4.5.1].) Earlier in this Chapter (Chapter 5, Sections 5.3 and 5.4) a method for estimating X̂_{N,\Theta_k} and Var(X̂_{N,\Theta_k}) at an arbitrary return level is described and implemented for each security in the sample.

(13) Please recall that because the only type of financial security being studied in the research is common stock, the words equity and security within the context of this research should be understood to mean common stock, unless explicitly stated otherwise.


So, one useful question is: can these return values be modeled as functions of salient market dimensions?

In building this model from a stochastic perspective, let

R ~ GP(Yβ, H + Σ)    (5.5)

where:
GP stands for Gaussian process,(14)
R is a random vector of extreme-value return values for a given return level,
Y is the design matrix of values of predictors of the expected values of the return values (fixed effects),
β is the vector of coefficients multiplying the predictor values,
H is the covariance matrix of the predictors (random effects),
Σ is the covariance matrix of equity return values (random effects).

In the present research we have R as a linear model composed of both fixed effects that comprise the mean, Yβ, and random effects that enter the model through the variance/covariance structure. Therefore,

R = Yβ + η + ε    (5.6)

where:
R, Y and β are defined as previously,
η is a random vector of innovations with variance Σ_η,
ε is a random vector of innovations with variance Σ_ε.

There are clearly two parts to the model.

(14) A Gaussian process is a stochastic process that generates random samples over time or space such that, no matter which finite linear combination of the random samples one takes, that linear combination will be normally distributed.


The fixed effects are defined as those predictors such that the values the predictors take on are the only ones at which the predictor occurs (typically a limited number), or the only ones (once again typically limited) in which the analyst is interested. In other words, the values of the predictor for a fixed effect do not represent a sample of values or levels drawn from, say, an infinitely large set of such values, but rather represent the population, or population of interest, of values. The consequence is that if some or all of the levels of a fixed-effect predictor are explaining all the variation observed in the response variable, the response is deterministic, or provides a fixed-quantity change from level to level of the fixed effect. The fixed effects are typically associated with the expected value of the response. There is no uncertainty or variability associated with the fixed effects.

In contrast, a random effect behaves in a contrary manner. The levels or values of the random effect observed in the sample represent a small number of the total levels that exist in the population. Further, the values of the random effect, as per their name, are chosen or occur at random. This random behavior is typically characterized or modeled by some probability law. We will return to a discussion of the broader Gaussian process later in this Chapter (Chapter 5, Sections 5.6 through 5.9).
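Because both covariance matrices in (5.5) are assumed diagonal, the multivariate normal log-likelihood separates into a sum of univariate terms; the short R sketch below illustrates this decomposition, with all object names hypothetical.

    # Log-likelihood of R ~ N(Y beta, H + Sigma) with H and Sigma diagonal.
    gp.loglik <- function(r, Y, beta, h.diag, sig.diag) {
      v <- h.diag + sig.diag                  # diagonal of H + Sigma
      resid <- as.vector(r - Y %*% beta)
      -0.5 * sum(log(2 * pi * v) + resid^2 / v)
    }

No matrix inversion is required under the diagonality assumption, which is consistent with the computational tractability noted in the overview above.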


5.6 Model Building

In identifying the form of the model, the researcher first examines the adequacy of fit of potential fixed effects before looking at potential random effects, since the latter tend to be more complex, more flexible in the form of their implementation, and therefore more useful in the analysis for adjusting the model for a variety of deficiencies.

5.6.1 Fixed-Effect Models

Initially introduced in Chapter 2 (Section 2.3) as the ancillary dataset, seven factors were examined, each quantified in a discrete form, for their utility in describing variation in the return levels. The factors once again were:

1. Continent of the security's domicile
2. Country of the security's domicile
3. Stock exchange of the security
4. Geographic trade area of the underlying issuing company (geographic location of its headquarters)
5. Exchange area in which the security is traded (geographic location of the stock exchange)
6. Sector of the issuing company
7. Discretized form of the market value


(Readers are referred to Chapter 3, Section 3.4, to reacquaint themselves with detailed descriptions of these potential predictors.)

In an initial analysis, the variation in quantile value for a return level of 52 weeks (a quantile value exceeded on average once every 52 weeks) over approximately 3,000 securities was modeled as a function of the seven ancillary data factors. The results indicated that the factors were remarkably without any substantive discriminatory power with respect to the values of the return level.

In the subsequent analyses major modifications were made to the data and the data analysis approach. Firstly, the analyses were broken up by year, i.e., a separate model was developed for each year from 2001-2007. A one-year return period, along with breaking up the data on an annual basis, was chosen because it aligns with the common advice in the financial industry that investors should review and rework their financial portfolios no more frequently than once a year. After separating the data by year, a histogram of the return values was examined, and it was decided to perform a Box-Cox analysis (Box and Cox [1964]) in search of a transformation that would yield a distribution of return values that was more symmetric, unimodal, and more closely approximating a normal distribution. (Transformations with this goal are commonly performed so that the data more closely conform to the underlying assumptions of the analytical steps.)
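A sketch of the transformation search in R, using the Box-Cox profile likelihood from the MASS package, follows; rv.year is a hypothetical vector of one year's (positive) return values, and the lambda grid is illustrative.

    library(MASS)   # provides boxcox()

    bc <- boxcox(lm(rv.year ~ 1), lambda = seq(-1, 1, by = 0.05), plotit = FALSE)
    lambda.hat <- bc$x[which.max(bc$y)]   # profile-likelihood-maximizing lambda
    # a lambda near zero supports the log transformation adopted below
    rv.trans <- if (abs(lambda.hat) < 0.1) log(rv.year) else
                (rv.year^lambda.hat - 1) / lambda.hat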


The Box-Cox analysis suggested that a log transformation could be applied to the data.

Histograms of the data after transformation are provided in Figures 5.1, 5.2 and 5.3; the data clearly conform more closely to the desired properties after transformation. These figures are, as the captions provide, histograms of logged return levels from the TV GEVs for a return period of one year (approximately 52 weeks). In examining the distributions under the log transformation, the effects upon the distributions of trimming the set of return values at selected large values of return were also examined. Discussion of the results of trimming continues in the text below Figure 5.3.


[Figure: four histogram panels, years 2001-2004; panel titles "Ln Ret.Val, for P>=0.019, <20"; values deleted per panel: 1, 0, 0, 0. Axes: Quantile (P >= 0.019) vs. Frequency.]

Figure 5.1 Histograms Of Log Return Values For Selected Years For P[X >= x] = 0.019, Censored At A Value Of 2,000 Percent. The Return Value Will Occur On Average No More Than Once In 52 Weeks. Numbers Of Deleted Observations Are Provided.


[Figure: four histogram panels, years 2001-2004; panel titles "Ln Ret.Val, for P>=0.019, <10"; values deleted per panel: 2, 4, 3, 3. Axes: Quantile (P >= 0.019) vs. Frequency.]

Figure 5.2 Histograms Of Log Return Values For Selected Years For P[X >= x] = 0.019, Censored At A Value Of 1,000 Percent. The Return Value Will Occur On Average No More Than Once In 52 Weeks. Numbers Of Deleted Observations Are Provided.


[Figure: four histogram panels, years 2001-2004; panel titles "Ln Ret.Val, for P>=0.019, <5"; values deleted per panel: 18, 21, 22, 25. Axes: Quantile (P >= 0.019) vs. Frequency.]

Figure 5.3 Histograms Of Log Return Values For Selected Years For P[X >= x] = 0.019, Censored At A Value Of 500 Percent. The Return Value Will Occur On Average No More Than Once In 52 Weeks. Numbers Of Deleted Observations Are Provided.


The data-censored histograms of the log return values were examined. Censoring was performed upon the weekly return values at arithmetic values of 5, 10, and 20 (that is, weekly return quantiles of 5 = 500%, 10 = 1,000%, and 20 = 2,000%). Further, an examination was made of notched box-plots(15) of each of the ancillary factors for each year. (Selected box-plots from this set are provided in Figures 5.4 through 5.7.) The examination of the box-plots suggests that the geometry of the relationships for return values, when differentiated by the ancillary factor levels, remained similar year over year. The change between years for a box-plot within an ancillary factor was dominated by an overall translation of the positions of the boxes. For those factors which possess a large number of levels, namely Sectors (Figure 5.5) and Exchanges (Figure 5.7), labels on the abscissas were removed, as not all could be legibly printed and the scale is nominal. However, the order of the boxes is the same across the years, and the point is that the similarity of the patterns is fairly compelling.

Under these limits the box-plot relationships remained the same; even at a censor value of 5, very few values were deleted (no more than about two dozen), with most of the deleted values coming from the North American (predominantly U.S.) micro-cap class.

(15) The notched box plot is a graphical representation of the data distribution with a confidence interval for the median provided by the notch (Tukey, 1977).


Since the return values were based upon the TV fits, it was felt that the logged return value of 500% was a more realistic upper value from a financial perspective, clearly improving distributional understanding while not affecting relationships within the ancillary factors.


[Figure: two panels of notched box-plots by continent (Africa, Asia, Eur, N_Amer, S_Amer); ordinate: log return value (Ret.Val, P >= 0.02).]

Figure 5.4 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Continent For (A) Year 2001 And (B) Year 2007. Data Censored At Return Value Of 500%.


[Figure: two panels of notched box-plots by sector (e.g., Commercial Services, Energy Minerals, Process Industries, Utilities, Miscellaneous, Transportation); ordinate: log return value (Ret.Val, P >= 0.02).]

Figure 5.5 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Sector For (A) Year 2001 And (B) Year 2007. Data Censored At Return Value Of 500%.


[Figure: two panels of notched box-plots by market-value factor (large, mega, micro, mid, small); ordinate: log return value (Ret.Val, P >= 0.02).]

Figure 5.6 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Market Cap For (A) Year 2001 And (B) Year 2007. Data Censored At Return Value Of 500%. The Mega-Cap Plot Is A Result Of The Confidence Interval Going Beyond The Quartile.


[Figure: two panels of notched box-plots by exchange (e.g., AMEX, Cairo, Frankfurt, Lima, Nicosia, SEAQ, XETRA, Lagos, Nairobi, OTC, Tunis); ordinate: log return value (Ret.Val, P >= 0.02).]

Figure 5.7 Notched Box-Plots Of 52-Week Logged Return Value Aggregated By Exchange For (A) Year 2001 And (B) Year 2007. Data Censored At Return Value Of 500%.


Using the data just described, a stepwise analysis of the seven ancillary factors was performed, and the Bayesian Information Criterion (BIC) (previously described in Chapter 4, Section 4.7) was used as the stopping value. The analysis was performed using the R lm function (R Development Core Team, 2009). The design matrix was formed using cell-means contrasts. Table 5.1 displays the steps in this analysis. This analysis was iterative, with each step including the most explanatory factor(s) from earlier steps. An additional factor entered the model according to whether or not the BIC decreased over the previous step. As can be seen in the Table, the modeling process was concluded for all years for additional main effects at the conclusion of the third iteration or step. An additional iteration examining selected first-order interactions was conducted, but yielded no additions to the model.

For the main fitted effects, three of the factors appeared to be significant: exchange, sector, and market value (in discretized form, denoted as F.Markt.Value). In a second experiment, exchange was removed from the analysis and the model was re-run. In this result, along with F.Markt.Value and sector, country entered the model. The substitution of country for exchange is interesting. The results of the analysis suggest that two factors [sector and market value] are carrying the economics and market trend factors into the return values, and the exchange or country represents the impact of regulatory, legal, social, and governmental effects.
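A condensed R sketch of this stepwise search is shown below; the data frame rv.df and the response name log.rv are hypothetical, and note that R's BIC extractor follows the smaller-is-better sign convention.

    # Forward selection of ancillary factors by BIC with cell-means contrasts.
    factors  <- c("Continent", "Country", "Exchange", "Exch.Region",
                  "Trade.Region", "Sector", "F.Markt.Value")
    current  <- "0"        # '0' suppresses the intercept for cell-means coding
    best.bic <- Inf
    repeat {
      if (length(factors) == 0) break
      cand <- sapply(factors, function(f)
        BIC(lm(reformulate(c(current, f), response = "log.rv"), data = rv.df)))
      if (min(cand) >= best.bic) break     # stop: BIC no longer improves
      chosen   <- names(which.min(cand))
      best.bic <- min(cand)
      current  <- c(current, chosen)
      factors  <- setdiff(factors, chosen)
    }
    # 'current' ends holding, e.g., Exchange, F.Markt.Value and Sector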


The model clearly has explanatory power. In fact, the degree of linear explanatory power is very strong, as witnessed in Table 5.2, wherein the coefficient of determination R² is quite high.

In examining the quality of the fit of this fixed-effect model, it was postulated, when the model was defined, that the residuals or errors, if the model is appropriate, will follow a normal distribution with a mean of zero and a single variance over all return levels, denoted as σ²_ε or, as commonly written, N(0, σ²_ε). From elementary mathematical statistics,

\hat\varepsilon_{i,j,k} = (\hat r_{i,j,k} - r_{i,j,k})/s_\varepsilon \sim N(0, 1)    (5.7)

where i, j, k are factor indices and s²_ε is an estimate of σ²_ε.

PAGE 190

Table 5.1 Results from stepwise model construction for the fixed-effects model by year. The header of each subtable lists the effect(s) already in the model. (In the original, yellow highlighting marked the BIC entries driving entry for each year, and red highlighting marked the subtable at which model construction stopped.)

Step 0, In: None
Factor          2001      2002      2003      2004      2005      2006      2007
Continent      -1171.4   -1131.5   -1060.6    -962.1    -754.3    -848.0    -838.8
Country        -1871.6   -1850.2   -1789.1   -1697.6   -1488.1   -1568.7   -1538.9
Exchange       -1947.9   -2004.4   -2109.3   -2062.5   -1839.8   -1875.7   -1941.8
Exch.Region    -1155.3   -1121.9   -1056.2    -963.6    -759.3    -851.3    -839.4
Trade.Region   -1154.9   -1118.3   -1055.0    -965.1    -760.1    -851.9    -839.9
Sector         -1362.0   -1306.8   -1233.0   -1120.2    -882.1    -976.2    -947.2
F.Markt.Value  -1933.4   -1967.5   -2083.6   -2098.2   -1864.7   -1931.3   -1833.7

Step 1, In: Exchange
Continent      -1917.3   -1974.0   -2078.2   -2030.9   -1807.9   -1844.2   -1910.0
Country        -1731.2   -1811.3   -1920.8   -1879.1   -1654.6   -1687.8   -1741.2
Exch.Region    -1917.7   -1980.4   -2081.5   -2026.9   -1801.2   -1836.1   -1905.6
Trade.Region   -1947.9   -2004.4   -2109.3   -2062.5   -1839.8   -1875.7   -1941.8
Sector         -2053.3   -2125.7   -2225.9   -2157.7   -1922.4   -1927.9   -2002.3
F.Markt.Value  -2222.4   -2368.4   -2589.9   -2635.3   -2417.3   -2415.6   -2423.8

Step 1, In: F.Markt.Value
Continent      -1981.3   -2047.3   -2142.4   -2151.4   -1923.1   -1955.2   -1893.7
Country        -1675.8   -1782.3   -1900.4   -1924.5   -1705.0   -1726.0   -1643.4
Exchange       -2222.4   -2368.4   -2589.9   -2635.3   -2417.3   -2415.6   -2423.8
Exch.Region    -1964.4   -2040.3   -2143.0   -2161.8   -1939.2   -1970.3   -1901.5
Trade.Region   -1958.0   -2032.9   -2133.0   -2149.4   -1925.7   -1961.1   -1891.1
Sector         -2081.9   -2100.1   -2207.9   -2209.3   -1953.7   -1979.3   -1899.0

Step 2, In: Exchange and F.Markt.Value
Continent      -2191.7   -2338.7   -2562.7   -2612.1   -2395.9   -2391.1   -2400.7
Country        -2002.6   -2166.9   -2396.7   -2451.6   -2233.9   -2227.3   -2225.5
Exch.Region    -2176.7   -2325.4   -2547.9   -2594.9   -2378.5   -2372.5   -2383.2
Trade.Region   -2222.4   -2368.4   -2589.9   -2635.3   -2417.3   -2415.6   -2423.8
Sector         -2325.7   -2485.8   -2708.9   -2736.6   -2507.6   -2463.4   -2480.0

Step 3, In: Exchange, F.Markt.Value and Sector (main effects)
Continent      -2294.1   -2454.8   -2679.2   -2709.4   -2481.7   -2435.1   -2452.6
Country        -2103.3   -2287.0   -2517.3   -2553.4   -2324.6   -2275.2   -2281.7
Exch.Region    -2277.9   -2439.8   -2662.9   -2691.7   -2464.5   -2416.6   -2435.0
Trade.Region   -2325.7   -2485.8   -2708.9   -2736.6   -2507.6   -2463.4   -2480.0

Step 3, In: Exchange, F.Markt.Value and Sector (first-order interactions)
Exchange X F.Markt.Value  -1634.8   -1791.5   -2032.0   -2059.2   -1821.1   -1759.2   -1762.7
Sector X F.Markt.Value    -1934.8   -2109.1   -2319.4   -2342.1   -2107.4   -2064.0   -2076.5
Exchange X Sector           334.0     147.1    -158.3    -214.5       6.3      73.8      98.8

"Standardized" residuals $\hat{\epsilon}_{(s),i,j,k} = \hat{\epsilon}_{i,j,k}/s_\epsilon$ were created for each model. These observations were evaluated against the Gaussian assumption as the means of examining the quality of the fit. QQnorm plots of the standardized residuals, as well as a plot of the standardized residuals versus the fitted values arising from this model, are provided in Figures 5.8 and 5.9. The plot of the standardized residuals versus the fitted values displays no particular systematic departure from a random scatter (Figure 5.9). However, as we can see in the QQnorm plots (Figures 5.8A-G), there are substantial and systematic departures at the upper end of all of the plots. These departures indicate that in each of the distributions there are too many observations at the positive end versus what the theoretical distribution suggests there should be. The positive skewness observed in the distributions depicted in Figures 5.1 to 5.3 is manifested in the standardized residuals.

Table 5.2 Coefficients of determination ($R^2$) for the annually computed three-factor fixed-effects model described in the text above.

Statistic   2001    2002    2003    2004    2005    2006    2007
R^2         0.831   0.844   0.874   0.889   0.890   0.886   0.899
R^2 Adj     0.826   0.839   0.870   0.885   0.887   0.882   0.895

Further, a Jarque-Bera test of the standardized residuals plotted in Figure 5.8 rejects the null hypothesis of normality at a p-value $< 2.2 \times 10^{-16}$. It was concluded that the postulated model is deficient.
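The test itself is available off the shelf. A minimal sketch in R, assuming the standardized residuals are collected in a vector eps_std (an illustrative name), uses the tseries implementation:

# Jarque-Bera test of H0: the standardized residuals are normal
# (zero skewness and zero excess kurtosis).
library(tseries)
jarque.bera.test(eps_std)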
Figure 5.8 A-G QQnorm Plots Of Standardized Residuals From Three-Factor Fixed-Effects Model For Years 2001 To 2007.
Figure 5.9 Scatter Plot Of Standardized Residuals Versus Fitted Values From Three-Factor Fixed-Effects Model For 2001. Factors Entered Into Design Matrix As Mean Contrasts.
In summary, the issue associated with these results is that, if we assume a Gaussian model, the residuals, particularly at the tails of the QQnorm plot, are deficient with respect to a normal (0,1) distribution.

5.7 Examining Factors As Sources Of Variability

Diagnostics from the previous section suggest that both the three-factor mean and the single variance are inadequate explanations for the behavior of the return levels. The next several figures, Figures 5.10 to 5.12, depict some results from the examination of the explanatory power of additional predictors and the stability of the variance over the entire dataset. Figures 5.10 and 5.12 suggest that market capitalization and year should be entered as continuous variables, because there appear to be at least linear elements which are not being captured in the model. The variance is not constant over the entire residual dataset; the residuals do not appear to be homoscedastic over market capitalization and year.
Figure 5.10 Plot Of Mean And +/- Two Standard Deviations Of Logarithm Of Market Cap Versus Logarithm Of Mean Return Value.
Figure 5.11 Plot Of Year Versus Logarithm Of Mean Return Value.
Figure 5.12 Plot Of Market Capitalization Versus Standardized Residuals From Model Composed Of Earlier Three Factors Augmented By Market Capitalization As A Continuous Predictor.
5.8 Further Modeling

A two-step strategy was adopted for additional model construction. These steps were:

1. Construct a more complete fixed-effects model, going beyond the incorporation of factors in the form of cell-mean contrasts to include continuous forms for quantitative factors.

2. Use this fixed-effects model as a starting point and complement it with hypothesized error structures to estimate the values of the parameters, using maximum-likelihood estimators within a proposed Gaussian process framework.

5.8.1 Step 1: Further Fixed-Effect Modeling

A series of additional fixed-effect models were examined. In these models, more factors in continuous forms, as well as selected interaction terms, were incorporated. Added to the model were a discrete form of year using cell means contrasts, a quantitative form of year using the value of year, a discounted quantitative market cap, and a year-by-market-cap interaction.

Table 5.3 is an ANOVA based upon a Type III or marginal sums of squares (SoS), meaning each source of variation (SOV) is treated in turn as if it were entering the model last. This means that the variation associated with each SOV is being tested
under an assumption that all of the other variables are in the model. If the SOV represents variation that, when tested against its error, is significantly greater as evaluated by the p-value, the table values yield greater confidence that the result is not due to the order of placement of the SOV in the model.
Response: 52 Week Return Value

Source of Variation                                   Df       Sum Sq      Mean Sq    F value     Pr(>F)
Discrete Mkt Cap                                      5        45,670.00   9,134.00   27,713.59   < 2.2E-16
Discrete Exchange                                     68       2,851.00    42.00      127.23      < 2.2E-16
Discrete Sector                                       19       554.00      29.00      88.54       < 2.2E-16
Discrete Year                                         6        446.00      74.00      225.51      < 2.2E-16
Discounted Continuous Mkt Cap                         1        6.00        6.00       17.59       2.75E-05
Continuous Year X Discounted Continuous Mkt Cap       1        5.00        5.00       14.19       1.66E-04
Residuals                                             20,599   6,789       0.33

Table 5.3 ANOVA arising from Type III SoS for fixed-effect models containing additional discrete and continuous factors/predictors.
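In R, marginal (Type III) tests of this kind can be obtained from a fitted lm object with the car package. This is a sketch, not the original script: the object name fit is an assumption, and for meaningful Type III tests the model should be fitted with sum-to-zero contrasts.

# Type III (marginal) ANOVA for an already-fitted model 'fit'; set
# options(contrasts = c("contr.sum", "contr.poly")) before fitting.
library(car)
Anova(fit, type = "III")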
So the further analysis, based upon the deficiencies of the earlier fixed-effects models, incorporated a number of continuous forms of predictors. The sources of variation indicated in Table 5.3 are the predictors to be used in the final model for estimating the expected value of the return level. It is understood that the potential for some over-fitting exists within this step, but the next step will be used to sort out additional issues of parameter significance.

5.8.2 Step 2: Fitting Gaussian Processes Using Maximum-Likelihood Estimation

The Gaussian process has been explained in an earlier section (Chapter 5, Section 5.5). Let us recall that the model for the random vector $R$, defined as the 52-week return values, is given by

$$R = Y\beta + c(\eta, \epsilon)$$ (5.8)

where:
$Y$ is the matrix of predictor values, commonly called the design matrix.
$\beta$ is the vector of coefficients.
$\eta$ is an innovation that is due to a factor effect or effects; it is distributed as $N(0, H)$, and $H$ is assumed to be a diagonal matrix.
$\epsilon$ is an innovation that is due to measurement error, commonly called a "nugget effect" (whose computation was addressed earlier in this Chapter, Section 5.4); it is distributed as $N(0, \Sigma)$, and $\Sigma$ is assumed to be a diagonal matrix.
$\eta$, $\epsilon$ are assumed to be uncorrelated.
$c(\eta, \epsilon)$ or $g(H, \Sigma)$ are functions (assumed to be simple functions of the innovations and the variance, for example a simple sum).

$R \sim N(Y\beta, g(H, \Sigma))$; so, in this situation the likelihood function is
$$L(\beta, g(H, \Sigma) \mid r) = (2\pi)^{-p/2} \, |g(H, \Sigma)|^{-1/2} \exp\{-\tfrac{1}{2}[(r - Y\beta)^T g(H, \Sigma)^{-1} (r - Y\beta)]\}$$ (5.9)

Maximum-likelihood estimation under a set of assumptions concerning the fixed effects[16] and the structure of the uncertainty elements was used to estimate the parameters ($\beta$, $H$ and $\Sigma$). To generate the estimates, the commonly used approach of maximizing the logarithm of the likelihood function (or, more accurately in this analysis, minimizing the negative of the log-likelihood function) was performed. BIC and AIC were used as aids in selecting the particular model. (Table 5.4 tallies some of the salient model details.) Several different factors or combinations of factors were examined in building $H$. These included year (Year), market capitalization (Mkt Cap), and year by market capitalization. The use of a specific combination of factors, say year by market capitalization, meant that instead of the one variance there were 35 variances postulated, one for every combination of year by market capitalization. With respect to bringing in the annual nugget variance, five alternatives were examined for the form of $\Sigma'$ (functions of $\Sigma$, the diagonal matrix of nugget variances) as means of bringing this element into the model:

1. Use no nugget at all (Models 1-3); that is, $g(H, \Sigma')$ is defined as $H + \Sigma'$ and $\Sigma'$ is defined as $0$.

[16] With respect to estimating the full model, the fixed-effects factors and their starting values for $\hat{\beta}$ are defined from the analysis which produced Table 5.4.
2. Estimate a single multiplier for all the nuggets (Model 4); that is, $g(H, \Sigma')$ is defined as $H + \Sigma'$ and $\Sigma'$ is defined as $\alpha\Sigma$.

3. Use the simple addition of the estimated nugget to the variance from the grouping structure (Model 5); that is, $g(H, \Sigma')$ is defined as $H + \Sigma'$ and $\Sigma'$ is defined as $\Sigma$.

4. Multiply the estimated nugget by the variance from the grouping structure (Model 6); that is, $g(H, \Sigma')$ is defined as $H\Sigma'$ and $\Sigma'$ is defined as $\Sigma$.

5. Estimate a multiplier for each nugget that changes based upon the sector classification of the equity (Model 7); that is, $g(H, \Sigma')$ is defined as $H + \Sigma'$ and $\Sigma'$ is defined as $T_\alpha\Sigma$, where an individual value of $\alpha$ is estimated for each sector.

The results provided in Table 5.4 support the use of the year-by-market-capitalization variance grouping with no nugget or, secondly, the year-by-market-capitalization grouping with sector multipliers.
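The minimization itself is routine once the diagonal structure is exploited. The following is a minimal sketch, not the dissertation's code, of a negative log-likelihood for Model 7; r, X, grp, sec, and nug are assumed names for the logged return values, the design matrix, the year-by-market-cap group index, the sector index, and the per-equity nugget variances of Section 5.4.

# Negative log-likelihood for eqn. (5.9) with diagonal g(H, Sigma');
# variances are parameterized on the log scale to keep them positive.
negloglik <- function(par, r, X, grp, sec, nug) {
  p     <- ncol(X)
  beta  <- par[seq_len(p)]
  sig2  <- exp(par[p + seq_len(max(grp))])              # H diagonal, by group
  alpha <- exp(par[p + max(grp) + seq_len(max(sec))])   # sector multipliers
  v     <- sig2[grp] + alpha[sec] * nug                 # diagonal of g(H, Sigma')
  res   <- as.vector(r - X %*% beta)
  0.5 * sum(log(2 * pi * v) + res^2 / v)
}
# fit <- optim(par0, negloglik, r = r, X = X, grp = grp, sec = sec,
#              nug = nug, method = "BFGS")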
Table 5.4 Selected results from maximum-likelihood computations under assumptions and structure described in the previous text.

Model Number  Factor Grouping for H  Model for Sigma'             Num Parameters  Num Grps in H  BIC
1             Year                   None                         110             7              -99794.52
2             Mkt Cap                None                         108             5                2927.71
3             Year X Mkt Cap         None                         138             35               3954.24
4             Year X Mkt Cap         Nug Add Single Multiplier    139             35               3275.92
5             Year X Mkt Cap         Nug Add No Multiplier        138             35               1439.36
6             Year X Mkt Cap         NugXVar                      138             35             -24327.02
7             Year X Mkt Cap         Nug Add Sector Multipliers   158             35               3394.73
Table 5.5 and Figure 5.13 provide some additional insights. "Standardized" residuals were computed for each of the models in Table 5.4. Under the assumptions laid out above, these residuals were of the form

$$(r - \hat{r})^T [g(\hat{H}, \hat{\Sigma}')]^{-1/2}$$

where:
$\hat{r} = Y\hat{\beta}$.
$\hat{H}$ is a diagonal matrix with entries $\hat{\sigma}^2_{H_i}$, where $i = 1, 2, \ldots$, number of groupings in the variance structure (models 3 and 7).
$\hat{\Sigma}'$ is a diagonal matrix with each diagonal entry being $\hat{\alpha}_j \hat{n}_k$, where for $\hat{\alpha}_j$, depending on the model, $j$ may be 1 or $j = 1, 2, \ldots$, number of sectors, and $\hat{n}$ is the nugget variance estimated for each equity.

From examination of Tables 5.5 and 5.6, it was concluded that, of all the models examined, the model including a sector-adjusted nugget multiplier most closely met the underlying structural hypothesis of normality[17]. The conclusion was drawn based upon the number of non-significant JB tests (i.e., of the normality hypothesis) by model. Given the results from Table 5.4, where models 3 and 7 were preferred, and from Tables 5.5 and 5.6, where model 7 was preferred, model 7 will be used in further analyses. Even though this model has the largest number of parameters, the additional parameters were allocated to the model elements which went into defining the variation in the model, $H$ and $\Sigma'$. Further, the allocation of parameters in this manner does have some domain rationality.

[17] All models had in excess of 20,000 observations, and the critical p-values of the tests were adjusted using an (admittedly conservative) Bonferroni adjustment (J. Neter, et al. [1996]), so that the family of tests $\alpha$ level was set at 0.05.
The factors of time and market capitalization did show up as potential sources of heteroscedastic variation in the earlier model deficiency analysis. The addition of nugget multipliers which are a function of sector may be reasonable under the hypothesis that different sectors possess different volatilities.

Finally, Figure 5.13 provides a visualization of the behavior of the standardized residuals from the model. The figure contains QQnorm plots of the standardized residuals for various market capitalizations (mega-caps excluded for reasons of sample size) for the selected model, which is model 7. While there clearly are some data departures from the theoretical values (represented by the straight line), the model is closing in on a normal-looking set of residuals (supported by results from the Jarque-Bera test).

In summary, the final model contains 103 parameters associated with the expected value predictors (many of which are used for the cell means contrasts, particularly those for the stock exchanges) and 55 parameters associated with defining the variance matrices. Therefore, the selected model contains in total 158 parameters estimated using in excess of 20,000 observations.

In the next Chapter (Chapter 6) the results developed in this and earlier chapters will be incorporated into a focus upon three subjects:
1. Validating this Chapter's (Chapter 5) results using a holdout sample
2. Using the model to perform portfolio optimization
3. Augmenting the model to perform some forecasting
Table 5.5 Results from repeated performance of the Jarque-Bera test of normality by variance grouping and by model for the first pair of treatments of nuggets. Bonferroni adjustment for controlling family of tests $\alpha$ error by model. Labels define the filter values for subsetting observations from the sample by year and market cap into groups which were then tested for normality.
Table 5.6 Results from repeated performance of the Jarque-Bera test of normality by variance grouping and by model for the second pair of treatments of nuggets. Bonferroni adjustment for controlling family of tests $\alpha$ error by model. Labels define the filter values for subsetting observations from the sample by year and market cap into groups which were then tested for normality.
Figure 5.13 Selected QQnorm Plots Of Standardized Residuals For Various Market Capitalizations (Small Caps, Mid Caps, Large Caps, Micro Caps) For 2001, Coming From The Selected Model.
5.9 The Selected Model, In Detail

In this section, the model selected in the previous sections is more formally detailed. Recall that the model formalism is given by eqns. 5.5 and 5.8:

$$R \sim GP(Y\beta, g(H, \Sigma')) \quad \text{and} \quad R = Y\beta + c(\eta, \epsilon)$$

where:
GP stands for Gaussian process.
$R$ is a random vector of extreme value return values for a given return level.
$Y$ is the design matrix of values of predictors of the expected values of the return values.
$\beta$ is the vector of coefficients multiplying the predictor values.
$H$ is the covariance matrix of the predictors.
$\Sigma$ is the covariance matrix of equity return values. Recall the initial matrix of nugget variances is denoted $\Sigma$; the nugget covariance augmented with sector multipliers is denoted $\Sigma'$.
$\eta$ is an innovation that is due to a factor effect or effects; it is distributed as $N(0, H)$, and $H$ is assumed to be a diagonal matrix.
$\epsilon$, $\epsilon'$ are innovations that are due to measurement error, commonly called a "nugget effect"; they are distributed as $N(0, \Sigma)$ and $N(0, \Sigma')$, respectively, and $\Sigma$, $\Sigma'$ are assumed to be diagonal matrices.
$\eta$, $\epsilon$, $\epsilon'$ are assumed to be uncorrelated.
$c(\eta, \epsilon)$ or $g(H, \Sigma')$ are functions (assumed to be simple functions of the innovations and the variance, for example a simple sum).

In this analysis, and for the selected model, $R$ is a vector of just under 21,000 (20,678) logged return values with a return period of approximately one year
(2,954 equities for seven years). The $X$ matrix is a design matrix consisting of 20,678 rows and 103 columns. The discrete or qualitative predictors are represented in the design matrix as cell mean contrasts. For example, the discrete form of the market capitalization predictor is coded as per Table 5.7. The columns of the $X$ matrix are divided up per Table 5.8.

Table 5.7 Example of cell mean contrasts for the discrete market capitalization predictor.

Micro  1 0 0 0 0
Small  0 1 0 0 0
Mid    0 0 1 0 0
Large  0 0 0 1 0
Mega   0 0 0 0 1

Table 5.8 Structure of the columns of the design matrix X.

Predictor Name                                             Number of Columns  Type of Predictor
Market Capitalization                                      5                  Qualitative
Stock Exchange                                             69                 Qualitative
Sector                                                     20                 Qualitative
Year                                                       7                  Qualitative
Continuous Market Capitalization                           1                  Quantitative
(Continuous Market Capitalization) X (Quantitative Year)   1                  Quantitative
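A minimal R sketch of this coding, using model.matrix with the intercept dropped (the factor name mktcap is illustrative), reproduces the indicator pattern of Table 5.7:

# Cell-means (one indicator column per level) coding of a qualitative predictor.
mktcap <- factor(c("Micro", "Small", "Mid", "Large", "Mega"),
                 levels = c("Micro", "Small", "Mid", "Large", "Mega"))
model.matrix(~ 0 + mktcap)   # 5 x 5 identity pattern, as in Table 5.7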
$\beta$ is a vector of length 103 of coefficients which are the "weightings" of the individual terms of the design matrix. These values were estimated using the maximum likelihood process described in Chapter 5, Section 5.8.2. When an individual row of the design matrix is multiplied by $\hat{\beta}$ (the maximum likelihood estimate of $\beta$) using the commonly observed rules of linear algebra, the result is the expected value for the associated entry of $R$.

For the random portion of the model there are, as outlined above, two innovation vectors (denoted $\eta$ and $\epsilon$), each of length 20,678, which are assumed to be uncorrelated. Both $\eta$ and $\epsilon$ are hypothesized to be generated by normal distributions, each having a mean of zero and variance-covariance matrices denoted as $H$ and $\Sigma'$ respectively. $H$ and $\Sigma'$ are diagonal matrices of size 20,678 x 20,678 and, as per similar assumptions for $\eta$ and $\epsilon$, $H$ and $\Sigma'$ are uncorrelated.

A considerable set of analyses, as detailed once again in Chapter 5, Section 5.8.2, was performed. In these analyses a range of alternatives was examined. For the selected model $H$ is posited as
$$H = \begin{bmatrix} \sigma^2_{ij} & 0 & \cdots & 0 \\ 0 & \sigma^2_{ij} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2_{ij} \end{bmatrix}$$

where: $\sigma^2_{ij}$ is the variance associated with the random effects from the combination of the $i$th market capitalization class ($i = 1, 2, \ldots, 5$) and the $j$th year ($j = 1, 2, \ldots, 7$). This results in 35 distinct estimates of variance for this effect. Clearly the matrix is likely to have numerous duplications of each of the values.

On the other hand, for the selected model, $\Sigma'$ is structured as follows:

$$\Sigma' = \begin{bmatrix} a_1\sigma_k & 0 & \cdots & 0 \\ 0 & a_2\sigma_k & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_N\sigma_k \end{bmatrix}$$

where: $a_m$ is the estimated nugget variance computed as per Chapter 5, Section 5.4, $m = 1, 2, \ldots, 20{,}678$; $\sigma_k$ is a multiplier of the nugget variance and a variance of the random effect due to industry sector, $k = 1, 2, \ldots, 20$. Once again there are likely to be numerous duplications of each of the estimated multipliers within $\Sigma'$.
The postulated version of the selected model was fitted as the parameters of a multivariate normal distribution, using the maximum likelihood process described in Chapter 5, Section 5.8.2. In this model the variance-covariance matrices are hypothesized to be diagonal and uncorrelated and to possess multiple duplicates of each of the individual quantities to be estimated. This modeling assumption simplified the estimation process enormously.
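To see why, note that with every covariance matrix diagonal, the multivariate normal log-likelihood of eqn. (5.9) collapses to a sum over observations; writing $v_m$ for the $m$th diagonal entry of $g(H, \Sigma')$ and $x_m^T$ for the $m$th row of the design matrix,

$$\ell(\beta, H, \Sigma') = -\frac{1}{2} \sum_{m=1}^{20{,}678} \left[ \log(2\pi v_m) + \frac{(r_m - x_m^T\beta)^2}{v_m} \right]$$

so no 20,678 x 20,678 matrix inversion or determinant computation is required.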
6. Consequences Of The Research For Portfolio Formation And Innovation

6.1 Overview Of The Chapter

In this Chapter (Chapter 6), the effort is to examine and apply the Gaussian process model and/or its components developed by the conclusion of Chapter 5 (see Section 5.9). The examination occurs principally in three thrusts. Firstly, the model is applied to the test data first introduced in Chapter 3, Section 3.6. Secondly, the nugget variation portion of the model is used to augment the commonly estimated MVO risk model in an effort to more precisely describe the risk and in turn construct portfolios which yield increased return, as suggested by earlier research (Clark and Labovitz [2006]). Thirdly, because the second research thread "requires" the investor to make a choice of which nugget to use (more clearly stated, which return level is appropriate), the rudiments of a guidance mechanism in this regard are suggested.

The test data consist of 100 equity series randomly selected from the 12,185 liquid series available. They were subjected to the same processing steps as was the larger training set of 3,000 series, and return values with a 52-week return level were extracted. Coefficient values developed from the training set for model 7
selected in Chapter 5 (Section 5.8.2) were applied to the test set predictors to estimate the expected returns. These results show that the expectations of these new values are well within appropriate confidence intervals. Having used the test data set to validate the fixed-effects portion of the model, an examination is made of the data's variability. The empirical distributions of both the training and test sets were compared by year and found to be the same for reasonable p-values.

Portfolio creation/optimization is at the heart of modern portfolio theory. As described in Chapter 1, Section 1.4, the commonality in such models is that increases in reward are associated with increases in risk. But investing using more accurate and more fully detailed descriptions of risk is associated with higher returns. The nugget variance structure as computed in Chapter 5, Section 5.4, was computed by year for seven return levels covering from just over one week out to six years. These were augmented with off-diagonal elements and combined, using simple addition, with the commonly used second central moment variances and covariances (denoted Sigma). These risk structures, also called covariance structures, were used alternatively in the commonly applied portfolio optimization model MVO (see Chapter 1, Section 1.4.1). The returns from the use of these various covariance structures on a well-defined portfolio known as the Sharpe portfolio (Sharpe [1966]) were examined using two rebalancing schemes as a back test of the strategy (see footnote 18 for a definition of back test). The results for both
rebalancing schemes were that the covariance structures of greater than 26 weeks outperformed MVO Sigma with respect to returns, while the covariance structures of less than 26 weeks generally underperformed MVO Sigma. The outperformance was between 100 percent and nearly 400 percent over a period of seven years, depending on the rebalancing scheme used.

This second research thread created a new open research question. Not all of the covariance structures, even those of length greater than 26 weeks, produced the same level of outperformance of Sigma, nor was one covariance structure dominant (with respect to MVO return) over all the others during the study time frame. Therefore, an investor had a choice at rebalance time of which covariance structure to use in portfolio optimization. Clearly the choice was very likely to make a difference in the returns enjoyed by the investor. In the third research thread examined within the Chapter (Chapter 6, Section 6.6), a time-leading or look-ahead signal is sought as a guide with respect to which covariance structure to select at rebalance time. Based upon a long-established inverse linear relationship between returns for the S&P 500 and returns from a volatility measure known as the VIX (le Roux [2007]), the VIX was examined for its potential as such a signal. The result was that the VIX tended to pick out the best covariance structure to use on a time-coincident basis. This meant that the correlation between returns of the VIX and returns from the Sharpe portfolios was most negative for the covariance structure
which yielded the greatest outperformance for the year. But the VIX is known as a 30-day look-ahead signal, and this study was performing rebalancing once a year. Using the VIX as a year-ahead signal yielded mixed results, but the findings of this research are promising and do suggest directions to pursue to obtain a look-ahead signal.
6.2 Tasks To Be Performed In The Chapter

In this Chapter (Chapter 6) the discussion will focus upon three research threads, representing completion of the substantive portion of the dissertation:

1. Performing a model validation for the expected-value portion of the model presented in Chapter 5, Section 5.8.1.
2. Using this model in portfolio construction and comparing the results with those from a commonly used method.
3. Suggesting a model extension to incorporate a forecasting mechanism.

The test dataset described in the next section was used to perform some of these research efforts.

6.3 Test Data Sets

As described initially in Chapter 3, Section 3.6, an independent test sample consisting of 100 securities was created from the overall set of 12,185 securities. These securities were put through a computational process identical to that for the training set. During this process two securities were lost because of an inability to generate an invertible Hessian matrix as part of the optimization. Therefore, the test set was reduced to 98 securities; the names and identifiers/indices are presented in Table 6.1.
6.4 Model Validation

As stated earlier, the test data were processed through the same set of procedures as was the training set. In this section both the expected values (or expected responses) and the residuals from the fixed-effects model are examined. The prepared test dataset was run through the fixed-effects model, using the coefficients estimated from the training set by the selected model set forth in Chapter 5, Section 5.8.2. Therefore, a design matrix for the seven years over the 98 securities in the test dataset was created, yielding 686 rows. If each row of this design matrix is designated $X_h$, $h = 1, 2, \ldots, 686$, the expected response associated with each of these row vectors is computed as

$$E[Y_h] = X_h \beta_{train}$$ (6.1)

Using the estimates of $\beta_{train}$, the estimated value of the expected response is given by

$$\hat{Y}_h = X_h b_{train}$$ (6.2)

where: $E[\hat{Y}_h] = E[X_h b_{train}] = X_h \beta_{train} = E[Y_h]$.
Table 6.1 Identifiers/indices and names of 98 securities in test data set.

Symbol/Ticker   Security Name
*ACD      Accord Financial Corp.
*DPM      Dundee Precious Metals Inc.
*DYG      Dynasty Gold Corp.
*EH       easyhome Ltd.
*GBE      Grand Banks Energy Corp.
*HEM      Hemisphere GPS Inc.
*LNK      ClubLink Corp.
*MCO.H    MCO Capital Inc.
*MEQ      Mainstreet Equity Corp.
*MLI.H    Millstreet Industries Inc.
*NWF.U    North West Company Fund
*SNC      SNC-Lavalin Group Inc.
*TCA*X    TRANSCANADA CORP.
*TN       True North Corp.
001750    F&C Global Smaller Cos. PLC
026938    DA Group PLC
029450    Edinburgh Dragon Trust PLC
052688    Gartmore European Investment Trust PLC
078797    Hansa Trust PLC
1739      Seed Co. Ltd.
1819      Taihei Kogyo Co. Ltd.
2220      Kameda Seika Co. Ltd.
2657      Internix Inc.
2804      Bull-Dog Sauce Co. Ltd.
2815      Ariake Japan Co. Ltd.
3201      Japan Wool Textile Co. Ltd.
3577      Tokai Senko K.K.
400212    Bouygues S.A.
406398    Autostrada Torino-Milano S.p.A.
4221      Okura Industrial Co. Ltd.
442383    Immobiliere Hoteliere S.A.
454444    Lisgrafica Impressao Artes Graficas S.A.
455959    Nel Lines
4614      Tohpe Corp.
465773    Corticeira Amorim SGPS S.A.
465873    Dromeas S.A.
474249    MOL Hungarian Oil and Gas Plc
474346    D.H. Cyprotels Public Ltd.
487891    Thrace Plastics Co. S.A.
490084    FORTIS INV MGMT FORTIS OBAM
493757    Valeo S.A.
500079    Duran Duboi
516007    Fresenius Medical Care pfd
611292    Consolidated Minerals Ltd.
612117    Lynas Corp. Ltd.
627363    ChemGenex Pharmaceuticals Ltd.
646557    Downer EDI Ltd.
691406    Bytes Technology Group Ltd.
ADI       Analog Devices Inc.
AEM       Agnico-Eagle Mines Ltd.
AID       Law Enforcement Associates Corp.
B16RK7    Wo Kee Hong (Holdings) Ltd.
B17MN5    nTorino Corp. Inc.
B17Q6Z    Dongwon Metal Co. Ltd.
B1G9ZL    See Corp. Ltd.
B1YBRK    CEMIG-Cia Ener Minas Ger
BHE       Benchmark Electronics Inc.
BPOP      Popular Inc.
BRBI      Blue River Bancshares Inc.
BTRNQ     Biotransplant Inc.
BTSR      BrightStar Information Technology Group Inc.
CAR       Avis Budget Group Inc.
CENX      Century Aluminum Co.
CHKE      Cherokee Inc.
CHMS      China Mobility Solutions Inc.
CTV       CommScope Inc.
DELL      Dell Inc.
DTIIQ     DT Industries Inc.
EDEN      EDEN Bioscience Corp.
ENGY      Enviro-Energy Corp.
EQY       Equity One Inc.
FCZA      First Citizens Banc Corp.
FITB      Fifth Third Bancorp
FL        Foot Locker Inc.
FMC       FMC Corp.
FRX       Forest Laboratories Inc.
HE        Hawaiian Electric Industries Inc.
HNR       Harvest Natural Resources Inc.
KNGS      Kingsley Coach Inc.
LIV       Samaritan Pharmaceuticals
PKTR      Packeteer Inc.
PPS       Post Properties Inc.
PRA       ProAssurance Corp.
PTEO      Proteo Inc.
QCRH      QCR Holdings Inc.
RGMI      RG America Inc.
RSG       Republic Services Inc.
RVI       Retail Ventures Inc.
SATC      Satcon Technology Corp.
SPA       Sparton Corp.
SSCC      Smurfit-Stone Container Corp.
STT       State Street Corp.
SUNW      Sun Microsystems Inc.
TTES      T-3 Energy Services Inc.
VSTY      Varsity Group Inc.
WMK       Weis Markets Inc.
XCHC      The X-Change Corp.
ZHNE      Zhone Technologies Inc.
Figure 6.1 is a plot of $\hat{Y}_h$ versus the observed values of the log of the 52-week return level. The plot suggests the observed values follow very well the response estimated using the coefficients from the training set. By eye, there appear to be small changes in variance over the estimated mean response value. Such a result would be consistent with the findings from the analysis of the training dataset.

6.4.1 Expectations Of The Test Set

In a related analysis with these data, prediction intervals were computed for the test data. In this analysis the data are treated as new observations; in that case the parameters or coefficients used to estimate the mean response were the ones estimated using the training set. The results, per Neter et al. (1996), were developed through the estimate of the matrix standard error for the coefficients, which is given by

$$s^2(b) = MSE \, (X^T X)^{-1}$$ (6.3)

where: MSE is the mean squared error for the training data model, and $X$ is the design matrix of the training set.
Using some elementary mathematical statistics, the estimated variance of the new response value led to the following result:

$$s^2(\hat{Y}_h) = X_h \, s^2(b) \, X_h^T$$ (6.4)

Figure 6.1 Plot Of Estimated Mean Response For The Test Dataset, Using Coefficients Based On The Training Set Fixed-Effects Model, Versus The Logged 52-Week Return Response. The red line is $y = x$.
with all quantities as previously defined. Finally, the prediction interval of a new observation was given by

$$s^2(pred) = MSE + s^2(\hat{Y}_h)$$ (6.5)

In order to control the $\alpha$ error in setting the $(1 - \alpha)$ prediction interval for a large number of new observations, the Scheffé prediction limits for $g$ different levels of $X_h$ (Neter et al. [1996]) were used, which are given by

$$\hat{Y}_h \pm S \, s(pred)$$ (6.6)

where:
$S^2 = g \, F(1 - \alpha; g, n - p)$.
$n$ is the number of observations in the training set.
$p$ is the number of parameters or coefficients used in the training set model.

Figure 6.2 is a plot of the estimated mean responses (Predicted) versus the observed log of 52-week return values (Obs), the same data as in Figure 6.1.
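A minimal R sketch of eqns. (6.3) to (6.6), under the assumption that X and Xh hold the training and test design matrices, mse the training mean squared error, and yhat the estimated mean responses (all illustrative names):

# Scheffe-adjusted prediction limits for the test-set responses.
s2b    <- mse * solve(t(X) %*% X)                    # eqn (6.3)
s2yhat <- rowSums((Xh %*% s2b) * Xh)                 # diag of Xh s2(b) Xh^T, eqn (6.4)
s2pred <- mse + s2yhat                               # eqn (6.5)
g      <- nrow(Xh)
S      <- sqrt(g * qf(0.95, g, nrow(X) - ncol(X)))   # Scheffe multiplier, alpha = 0.05
limits <- cbind(yhat - S * sqrt(s2pred),             # eqn (6.6)
                yhat + S * sqrt(s2pred))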
The blue lines are the 95% prediction intervals (individually derived response prediction intervals) and the green lines are the 95% Scheffé-adjusted prediction intervals using $g = 98$. Note that not only do the observations correspond well to their expectations, but the variation is well within the 95% individual prediction intervals and entirely within the 95% Scheffé-adjusted prediction intervals.

Figure 6.2 Mean Response And Observation Values Within 95% Prediction Interval Envelopes, Using Individually Derived And Scheffé-Adjusted Prediction Intervals. Plot Elements Are Denoted In Legend Located Within Graph.

This result is interpreted to mean that all the expected-value structure captured by the model developed in Chapter 5, Section 5.8.2 is applicable to the test set of data. It is believed that the fixed-effects model, which defines the mean response, is thereby validated.

6.4.2 Variability And Distribution In The Test Data Set

But what about the overall remaining structure in terms of the distributions? The two sets of residuals from the extended fixed-effects models, those of the training set and those from the test set, were compared for distributional similarity on a year-by-year basis as well as over the entire dataset. The test used was the Kolmogorov-Smirnov (KS) two-sample test, which has the following structure:

$H_0$: The two samples come from a common distribution.
$H_a$: The two samples do not come from a common distribution.
Test Statistic: The KS two-sample test statistic.

The KS test (NIST [2003]) is based on the empirical cumulative distribution functions (ECDF), each of which is defined as follows: let $y_1, y_2, \ldots, y_N$ be a set of $N$ observations; then

$$E(i) = P[Y \le y_i] = card(y_i)/N$$ (6.7)

where: $card(y_i)$ is the number of points less than $y_i$.
Then the statistic comparing the two samples was defined:

$$D = \max_{1 \le i \le N} |E_1(i) - E_2(i)|$$ (6.8)

$H_0$ is rejected and $H_a$ is accepted if $D$ exceeds a table value. In the present circumstance a Gaussian importance sampler (Durham, G., personal interview, October 2008) was used to generate the weights to select a sample from the residuals of the training set with which to perform the KS test. The results of the KS test for each year and over the entire analysis period are given in Table 6.2.

Table 6.2 P-values from the KS test of the null hypothesis that the residuals from the fixed-effects model run on both the training and test sets have the same distribution.

Year          P-Value
2001          0.6901
2002          0.1469
2003          0.0732
2004          0.3559
2005          0.5699
2006          0.4565
2007          0.5699
All Periods   0.3685

From this test there is no evidence to reject the null hypothesis for any of the study years or over the entire analysis period. That is, there is no evidence to suggest the distributions were unequal.
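As a minimal sketch in R, assuming res_train and res_test hold the two residual vectors and w the importance-sampling weights (illustrative names, not the original script):

# Two-sample KS test of eqn. (6.8) on a weight-selected training subsample.
idx <- sample(seq_along(res_train), size = length(res_test), prob = w)
ks.test(res_train[idx], res_test)   # returns D and its p-value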
The results from the analysis of the test data suggest a factor model has been created which allows a researcher or practitioner to examine the joint distribution of the tail behavior of equity returns over a variety of combinations of return levels under a multivariate lognormal assumption. Using this model the practitioner can set the probability or return level and estimate joint probabilities over vectors of quantiles as desired.

6.5 Application Of Model Results To Portfolio Formation

As a final line of inquiry in this dissertation, an examination was performed concerning whether or not the elements of the model just presented could be used to create and improve portfolios. Because there are many ways in which results from the return model could possibly be used in construction, this analysis focuses on performing a comparison with a well-recognized alternative. (In Chapter 7, Section 7.5, other research threads arising from the model which may be of use to the financial community are discussed.) The question which is focused upon is: Could these results be used to improve the performance of portfolios (defined in Chapter 1, footnote 1 within Section 1.3) generated using common mean-variance optimization (MVO)? MVO is defined in Chapter 1, Section 1.4.1. Could performance be improved while maintaining or
reducing the proportional amount of assumed risk? Specifically, could uncertainty captured within the "nugget" variation associated with specified extreme-return levels be used to more clearly define the risk in optimizing portfolios and to produce better performance in the out-of-sample years? At an overview level the procedure was as follows:

- Using previous results and methods for computing the time-varying return values and their variance given in Chapter 5 (Sections 5.3 and 5.4), compute the time-varying "nugget" variances for each equity by year for the following return levels: slightly greater than one week (1.0526 weeks), two weeks, four weeks, a half year, one year, three years, and six years. The probabilities of exceeding the quantiles associated with these return values were 0.95, 0.50, 0.25, 0.0383, 0.0192, 0.0064, and 0.0032, respectively.

- Estimate the annual correlations from the return data for the equities used in the portfolio optimization and use these to compute the remainder of the covariances, the off-diagonal elements.

- Combine these "extreme" covariance matrices with a conventionally computed or base covariance. For the present research, combining these two matrices was a rather naive element-by-element summation. (Chapter 7, Section 7.5 will suggest much more sophisticated
approaches for combining the uncertainty structures.) This uncertainty structure was computed on an annual basis. For this analysis the following eight covariance structures were examined:

1. Conventional covariance, that is, the matrix of estimated central second moments of security returns on the diagonal and estimated covariances of security returns off the diagonal (denoted in the following discussion as Sigma)
2. Sigma + 1.0526-week return level covariance matrix (denoted in the following discussion as Sigma, 1.0526 Wks)
3. Sigma + 2-week return level covariance matrix (denoted in the following discussion as Sigma, 2.0 Wks)
4. Sigma + 4-week return level covariance matrix (denoted in the following discussion as Sigma, 4.0 Wks)
5. Sigma + 6-month return level covariance matrix (denoted in the following discussion as Sigma, 26.071 Wks)
6. Sigma + one-year return level covariance matrix (denoted in the following discussion as Sigma, 52.142 Wks)
7. Sigma + three-year return level covariance matrix (denoted in the following discussion as Sigma, 156.42 Wks)
8. Sigma + six-year return level covariance matrix (denoted in the following discussion as Sigma, 312.85 Wks)
- Use a high-grading step (Labovitz [2007]) to reduce the number of equities going into the final portfolio step from approximately 3,000 to an average of 200 on an annual basis. The set of post-high-grading securities is herein referred to as the candidates, and the group collectively is referred to as the candidate pool.

- Apply quadratic optimization techniques to generate an efficient frontier of about 600 portfolios. The efficient frontier is discussed and defined in Chapter 1, Section 1.4.1. The optimization was of the form (a solver sketch follows the definitions below):

$$\max_a \left[ \mu_r^T a - \lambda a^T (\Sigma + \Sigma_{\epsilon,RL}) a \right] \quad \text{s.t.} \; \sum_{i=1}^{N} a_i = 1, \; a_i \ge 0, \; i = 1, 2, \ldots, numSec = N$$ (6.9)

where:
$a$ is a vector of weights of length equal to the number of securities; all weights must be greater than or equal to 0, and the sum of the weights must equal 1.
$\mu_r$ is a vector of expected returns for the securities in the candidate pool, which is to be estimated by $\hat{r}$.
$\lambda$ is a scalar used to capture risk aversion; it enters the present analysis through setting a target for the smallest and largest portfolio risk as well as an increment of additional risk for each portfolio.
$\Sigma$, $\Sigma_{\epsilon,RL}$ are, respectively, the conventional covariance matrix and the covariance matrix for a specific return level (listed above).
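As the forward reference above indicates, eqn. (6.9) is a standard quadratic program. A minimal sketch with the quadprog package, assuming mu, Sigma, and Sigma_RL hold one year's expected returns and the two covariance pieces (illustrative names, not objects from the dissertation):

# Solve eqn. (6.9) for one risk-aversion value; solve.QP minimizes
# (1/2) a' D a - d' a, so D = 2*lambda*(Sigma + Sigma_RL) and d = mu
# recover the maximization above.
library(quadprog)
solve_weights <- function(mu, Sigma, Sigma_RL, lambda) {
  N    <- length(mu)
  Dmat <- 2 * lambda * (Sigma + Sigma_RL)
  Amat <- cbind(rep(1, N), diag(N))   # first column: sum(a) = 1 (equality)
  bvec <- c(1, rep(0, N))             # remaining rows enforce a_i >= 0
  solve.QP(Dmat, dvec = mu, Amat = Amat, bvec = bvec, meq = 1)$solution
}
# Sweeping lambda over a grid traces out an efficient frontier.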
Although it is not indexed, to avoid notation overload, each of the quantities was estimated on a year-by-year basis.

- Create portfolios lying along the efficient frontier for each combined covariance structure for each year (2001-2007), using within each year the same candidate pool across the optimization runs for each covariance structure.

- Assume the invest-and-hold or annual-rebalancing strategies for each of the eight risk models.

The strategy described is one commonly used in back-testing[18] financial models. In this strategy, in-sample and out-of-sample components are differentiated. In certain ways these elements are analogous to the training- and test-set structure more familiar to statisticians, but they frequently more explicitly incorporate time into the analysis. The in-sample/out-of-sample (IS/OS) strategy works in the following manner: Any time unit for which the weight sets (the $a$ vectors defined above) are estimated is considered an in-sample element, and the process of building a weight set is called rebalancing. The reader should note that construction of a weight set has two consequences.

[18] Back testing is a procedure commonly used in financial model testing wherein a strategy developed at the present time, such as the portfolio formation/optimization procedure being explored here, is tested by simulation using historic data. Data and decisions going into the model test are restricted to those known at the investment horizon.
Firstly, the non-zero weights (the positive weights as per the constraints) define which candidate securities are actually in the portfolio. The securities that make the portfolio are called the constituents. Collectively, the constituent list, or the portfolio, differentiates this set from the candidate pool. Secondly, the weights define how much (what proportion of the portfolio) is invested in each security. So, for example, if the total value of the portfolio at time $t$ is given by $M_t$ and, under the present asset allocation (weighting) scheme, security $j$ in the portfolio has a weight of $a_j$, then $a_j M_t$ is the value security $j$ holds in the portfolio.

Any time unit to which weights are applied to returns realized during the duration of the time unit, but for which the returns are not used to estimate weights, is considered an out-of-sample element or test element. The manner in which the weights are applied to an OS element is to multiply each weight times its respective daily return and compound the daily returns (see the sketch following this discussion). Of course, out-of-sample elements can become in-sample elements. In fact, this is precisely the analysis approach adopted for back testing the strategies developed herein and described below. The analysis strategy used would be some variant on the following theme: Develop a set of portfolios (and weights) for each uncertainty model for year $t$ and then apply the weights unchanged to the returns for year $t+1$, yielding the performance for each portfolio at the end of year $t+1$. Repeat this process by developing a new set of
portfolios and weighting schemes based upon returns from year $t+1$ and then applying them to year $t+2$.

This strategy of alternating building the portfolio and estimating the performance has two important aspects. Firstly, it mimics a commonly recommended strategy involving investing and reevaluating the investment portfolio. This approach is the opposite of investment timing[19], a questionable strategy not recommended for the vast majority of investors, who are of the "invest and forget" mindset. The strategy also mimics the real-life practice of rebalancing a portfolio, namely changing the securities mixture (in this case equities) to achieve good performance in the marketplace in the short term until the next rebalance. It makes the assumption that in the short term, "What is past is prologue" (Shakespeare [2004]); in other words, what has happened in the recent term will continue to happen in the near short term. Secondly, this approach as practiced places investors in the position of experiencing precisely what would have happened had they invested at that time in the manner described. No information available at a time later than the investment time horizon was used in constructing the portfolios.

[19] A strategy for investing which tries to create and use signals regarding the price movements of markets and specific securities, typically over the short term (sometimes as short as intraday), and to trade based upon those signals. This is as opposed to investing for the long term and historic long-term market gains.
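The sketch promised above, with ret an illustrative T x N matrix of daily constituent returns for the out-of-sample year and a the weight vector fixed at the last rebalance:

# Apply fixed weights out of sample and compound the daily returns.
port_daily <- as.vector(ret %*% a)   # daily portfolio returns
growth     <- prod(1 + port_daily)   # compounded value of $1 over the year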
6.5.1 Construction Of Efficient Frontiers

As the reader may recall from Chapter 1, Section 1.4.1, the efficient frontier, a concept initially developed by Markowitz (1952), may be defined in one of two ways:

1. Select a set of volatilities (risks) that are positive and strictly monotone increasing. For each of the risks, identify the portfolio with the highest expected reward (return) among all portfolios possessing that level of volatility.
2. Select a set of expected rewards (returns) that is strictly monotone increasing, and for each of the levels of reward identify the portfolio with the lowest risk among all portfolios possessing that level of expected reward.

The two definitions are equivalent, and the set of portfolios satisfying each definition forms the efficient frontier, which is typically depicted graphically (for example, see Figure 1.1) in two-dimensional space with volatility/risk on the abscissa and reward/performance on the ordinate[20].

[20] It should be apparent to the reader that since the form of the objective is quadratic in the weights for which the investigator is seeking a solution, two solutions may possibly exist. In such instances the efficient frontier consists of the solution that possesses the maximum reward of the two solutions.
The set of portfolios lying along the efficient frontier is also known as the optimal portfolios, inasmuch as, in the absence of borrowing money and taking a short position[21] for a set of securities in the portfolio, the efficient frontier divides the space of feasible solutions (portfolios) from the space of infeasible solutions (portfolios). The latter is the space in which the combination of risk and reward does not exist in any portfolio.

To create the efficient frontier, the R functions from Wuertz (2009) were modified. The efficient frontier was created by defining a range of the minimum and maximum returns and an increment, such that 600 portfolios were computed. This is a point which bears highlighting: an efficient frontier represents an infinite number of portfolios (obviously finite in estimation) formed from the same set of securities. The difference portfolio to portfolio is the weighting of the securities, or the proportions of a security forming a specific portfolio. In further processing, the code removed any portfolio representing second solutions (see footnote 20) and any portfolio near the origin affecting the convexity of the efficient frontier. This reduced the number of portfolios from 600 to a number no less than 518 in all cases.

[21] A short position trade is one in which the seller sells a security s/he does not own. In this trade the seller sells to the buyer a quantity of a specific security, at a stated price, to be delivered on a specific date in the future. To meet regulatory requirements the seller must borrow and hold the designated quantity of the security. The seller makes money if the security falls in value before the delivery date, because the buyer will pay a higher-than-market price for the security. The seller delivers the borrowed securities and, with the proceeds, goes into the market and purchases replacement securities to satisfy the lending party, then pockets the difference as profit. Of course, the seller loses money if the price of the security goes up.
Figures 6.3 through 6.5 provide examples of the efficient frontiers over various years and covariance structures. The first image in each set is the raw solution to the quadratic optimization. The second image, labeled "Upper Quad Optim," is a plot of the quadratic optimization solution with second solutions removed. In the third plot, any "starting" solutions that do not form part of the convex solution were removed. The straight line in the third plot is an estimate of the capital-asset-pricing model (defined in Chapter 1, Section 1.4.1) for a risk-free return rate of 0.0. The point of tangency between the straight line and the convex efficient frontier is also known as the Sharpe portfolio (Sharpe [1966]), defined as

$$\max_{i \in EF} \left( (E[\mathrm{Reward}_i] - \mathrm{RiskFree}) / \mathrm{UnitRisk}_i \right) = \max_{i \in EF} \left( (\mu_{r_i} - r_{RF}) / \sigma_i \right)$$
Figure 6.3 Graphs Depicting The Results In The Formation Of Efficient Frontiers For The Sigma Covariance Structure, Rebalance Year 2002. The Line Represents The Capital Asset Pricing Model (CAPM); The Red X, The Tangent Between The CAPM And The Efficient Frontier, Is The Sharpe Portfolio. Other Data Provided Include The Number Of Portfolios Forming The Efficient Frontier, The Risk-Free Rate Of Return, And An Indication If The Covariance Matrix Was Factored.
Figure 6.4 Graphs Depicting The Results In The Formation Of Efficient Frontiers For The Sigma, 26.071 Weeks Covariance Structure, Rebalance Year 2004. The Line Represents The Capital Asset Pricing Model (CAPM); The Red X, The Tangent Between The CAPM And The Efficient Frontier, Is The Sharpe Portfolio. Other Data Provided Include The Number Of Portfolios Forming The Efficient Frontier, The Risk-Free Rate Of Return, And An Indication If The Covariance Matrix Was Factored.
-226Figure 6.5 Graphs Depicting The Results In The Formation Of E fficient Frontiers For The Sigma, 256.42 Weeks Covariance Structure, Rebalance Year 2006. The Line Represents The Capital Asset Pricing Model (CAPM), The Red X Is The Tangent Between The CAPM, And The Efficient Frontier Is The Sharpe Portfolio. Other Data Provided Include The Number O f Portfolios Forming The Efficient Frontier, The Risk-Free Rate Of Return, And An Indication If The Covariance Matrix Was Factored. 0.00.20.40.60.81.01.2 0.000.020.040.06 Quad Optim for 2006, Risk==Sigma, 156.42 WksFactorized = FALSE, No. of Portfolios = 600, Risk F ree Rate = 0 Standard Deviation 0.00.20.40.60.81.01.2 0.000.020.040.06 Upper Quad Optim for 2006, Risk==Sigma, 156.42 WksFactorized = FALSE, No. of Portfolios = 548, Risk F ree Rate = 0 Standard DeviationMean 0.00.20.40.60.81.01.2 0.000.020.040.06 Convx Frntr for 2006, Risk==Sigma, 156.42 Wks Factorized = FALSE, No. of Portfolios = 548, Risk F ree Rate = 0 Standard Deviation X


6.5.2 Applying The MVO Weights

The MVO weights generated as described above were applied. Figure 6.6 depicts an invest-and-hold strategy. Using the same candidate pool, Sharpe portfolios for a 2001 rebalance were identified for each of the covariance structures. Each of the 2001 Sharpe portfolios was held over the entire investment horizon of this research. Using this invest-and-hold strategy for the portfolios, the compounded portfolio returns over the seven-year investment horizon were computed. All of the covariance structures with return periods greater than or equal to 26.071 weeks (about 0.5 year) yielded investments better than or equal to the base covariance structure (Sigma). On the other hand, all of the covariance structures representing return periods less than 0.5 year did worse than the base covariance structure. To look at the entire sweep of the portfolios (all the portfolios on an efficient frontier, including the Sharpe portfolio), the researcher computed for each


covariance structure the average return over the entire set of portfolios for each of the out-of-sample years using the invest-and-hold strategy, and then "adjusted" (divided) the average by the standard deviation. The result is plotted in Figure 6.7. In this plot, the greater the value on the ordinate, the greater the reward per unit of risk. Once again, the adjusted means for portfolios with return periods of 0.5 year or greater exceed the base covariance model's adjusted means by a large amount. Over time, as expected without an interim rebalance, the advantage of the large return means declines.

While investing and holding is an investment strategy recommended by some professionals, more professionals recommend periodically rebalancing or reviewing a financial portfolio. A year is a common period between rebalances. Figure 6.8 depicts the results of an alternative strategy to invest and hold. Under this strategy, each year the holdings were "reinvested" in the Sharpe portfolio that yielded the best reward over all of the covariance structures, in the Sharpe portfolio that yielded the worst reward over all of the covariance structures, and in the Sharpe portfolio built upon the base covariance structure. The results are considerably different. The best reinvestment strategy produced a return of just under $35 for every dollar invested ($34.70), the worst strategy produced just under $9 for every dollar invested, and the base covariance structure produced just over $12 ($12.40) for every dollar invested. Of course, none of these returns were adjusted for transaction


costs. But there is a clear advantage to choosing the best strategy. Table 6.3 shows the best strategy for each year. Here the best strategy means the covariance structure which produced the highest return for the Sharpe portfolio.

Table 6.3 Rebalance strategy yielding highest returns under the set of covariance structures described in the text for an annual rebalance over the years indicated.

Year    Best Covariance Structure
2002    Sigma, 156.42 Wks
2003    Sigma, 312.85 Wks
2004    Sigma, 26.071 Wks
2005    Sigma, 156.42 Wks
2006    Sigma
2007    Sigma, 156.42 Wks
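The cumulative (VAMI) figures quoted above compound one year's return on the previous year's ending value. A minimal sketch of that bookkeeping follows; the return series shown is hypothetical, not the dissertation's data.

```python
import numpy as np

def vami(annual_returns, initial=1.0):
    """Value of $1 compounded through a sequence of annual returns
    expressed as decimals (e.g., 0.40 means +40%)."""
    return initial * np.cumprod(1.0 + np.asarray(annual_returns, dtype=float))

# Hypothetical annual Sharpe-portfolio returns for 2002-2007.
path = [1.10, 0.60, 0.45, 0.40, 0.55, 0.35]
print(vami(path))  # last entry is the final $ value per $1 invested
```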


Figure 6.6 Cumulative Return Plot (VAMI) Over The Years Indicated For An Invest-And-Hold Strategy Under The Set Of Covariance Structures Described In The Text. (Panel title: "Cumulative Return Sharpe Portfolio, Rebalance Year 2001"; abscissa: out-of-sample years 2002-2007; ordinate: Cumulative Return; one trace per covariance structure: 1 = Sigma, 2 = Sigma, 1.0526 Wks, 3 = Sigma, 2.0 Wks, 4 = Sigma, 4.0 Wks, 5 = Sigma, 26.071 Wks, 6 = Sigma, 52.142 Wks, 7 = Sigma, 156.42 Wks, 8 = Sigma, 312.85 Wks.)


Figure 6.7 Returns Normalized By Standard Deviation For An Invest-And-Hold Strategy Under The Set Of Covariance Structures Described In The Text. (Panel title: "Normalized Returns, Rebalance Year 2001"; abscissa: out-of-sample years 2002-2007; ordinate: Normalized Return; same eight covariance-structure traces as in Figure 6.6.)


Figure 6.8 Cumulative Return Plot (VAMI) Over The Years Indicated For An Annual-Rebalance Strategy For Maximum, Minimum, And Base Covariance Structures For Risk. (Panel title: "Cumulative Return Sharpe Portfolio, Sequence Rebalance Years"; abscissa: first out-of-sample years 2002-2007; ordinate: Cumulative Return (VAMI); traces M = maximum, m = minimum, S = base covariance structure.)


The observation to notice from Table 6.3 is that the dominant numbers of the best reinvest strategies are, as per the other graphics, coming from the covariance structures with return periods of 0.5 year or greater.

The obvious rub in the material just discussed is: are there any signals which can be used before the fact to suggest which covariance structure to use? In the next section one such signal is explored.

6.6 Predicting The Best Covariance Structure

A long-observed result has been the high negative correlation between the returns of the S&P 500 Index and the VIX (le Roux [2007]), where the VIX is the Chicago Board Options Exchange's (CBOE) volatility index, an estimate of implied volatility 30 days in advance. Figure 6.9 depicts a plot of daily returns from the VIX versus daily returns from the S&P 500 for the period from January 3, 2000, to December 31, 2007. The correlation is minus 0.75.


Figure 6.9 Plot Of Daily Returns From S&P 500 Versus Similar Measure Of The VIX For The Period From 2000-2007. (Panel title: "Plot of Log(Returns On VIX) Vs S&P 500 Returns"; abscissa: S&P 500 daily return; ordinate: log(return VIX).)


The VIX is computed using the implied volatilities of a wide range of S&P 500 Index puts and calls. The VIX, a widely observed measure of market risk, is meant to be forward looking, and it is often referred to as the "investor fear gauge" (Investopedia [2009]).

Correlations between the daily VIX and out-of-sample daily returns for each of the Sharpe portfolios developed from each of the covariance structures were examined; these results are given in Table 6.4. The following observations are evident. Firstly, the VIX is built upon S&P 500 options and therefore is likely to be much more sensitive to the behavior of large-cap (capitalization) and mega-cap securities. The portfolios generated by the investigator came from a candidate pool that included micro-, small-, mid-, large-, and mega-cap firms. The entries in Table 6.4 are correlations; the values marked with an asterisk are the best periodic rebalance strategies. Despite the basics of VIX construction, there appears to be a good alignment between the maximum negative value and the best strategy, especially if the observer considers just covariance structures with return periods greater than or equal to 0.5 year, i.e., the covariance structures based on return periods of approximately 26 weeks or more. In Table 6.4 these are the covariance structures in the lower half of the table.
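A sketch of the computation behind Table 6.4 follows: correlate the daily VIX return series with each Sharpe portfolio's daily out-of-sample returns and flag the most negative value. The names and the simulated series are hypothetical placeholders, not the dissertation's data.

```python
import numpy as np

def vix_alignment(vix_returns, returns_by_structure):
    """Correlation of daily VIX returns with each covariance structure's
    Sharpe-portfolio returns; returns the correlations and the structure
    with the most negative correlation."""
    corrs = {name: float(np.corrcoef(vix_returns, r)[0, 1])
             for name, r in returns_by_structure.items()}
    return corrs, min(corrs, key=corrs.get)

# Hypothetical daily series for two covariance structures.
rng = np.random.default_rng(1)
vix = rng.normal(size=250)
ports = {"Sigma": 0.1 * vix + rng.normal(size=250),
         "Sigma, 156.42 Wks": -0.4 * vix + rng.normal(size=250)}
corrs, best = vix_alignment(vix, ports)
print(corrs, best)
```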


Table 6.4 Correlation between returns on VIX and returns on Sharpe portfolios for the years indicated. Values marked with an asterisk represent the covariance structure providing the best returns for an annual rebalance strategy.

To Year   Sigma      Sigma, 1.0526 Wks   Sigma, 2 Wks   Sigma, 4 Wks
2002      -0.1797    -0.2300             -0.2270        -0.4095
2003      -0.1817    -0.1927             -0.1528        -0.1472
2004      -0.1358    -0.4042             -0.4008        -0.3946
2005      -0.0193    -0.0270             -0.1476        -0.1790
2006       0.0019*    0.0037              0.0149         0.0212
2007      -0.1025    -0.1137             -0.1689        -0.1781

To Year   Sigma, 26.071 Wks   Sigma, 52.142 Wks   Sigma, 156.42 Wks   Sigma, 312.85 Wks
2002      -0.1904             -0.1985             -0.2221*            -0.2155
2003      -0.2560             -0.2502             -0.2341             -0.2589*
2004      -0.3911*            -0.4091             -0.1003             -0.4105
2005      -0.1629             -0.1712             -0.1962*            -0.2088
2006       0.0231              0.0222              0.0198              0.0202
2007      -0.1651             -0.1495             -0.1650*            -0.1002

While this appears to be the beginning of a signal, it is still contemporaneous with the returns. Therefore, an open question is whether there exists a signal, available in advance of the investment decision, that an investor can observe at the time the annual rebalance decision is to be made. The VIX returns from the preceding year were examined as a signal to guide selection of a covariance structure in the following year. The results were mixed, with about half of the investment decisions


(counting by year) suggested by the signal being the correct one or the second-best choice; the other half was not as predictive. This is not totally a surprise: the VIX is advertised as an about-a-month look-ahead estimate of implied volatility, and it would be a bit surprising if it behaved as a year-ahead signal. However, the results observed lead to the belief that the search for such an "early-warning" signal is a worthwhile effort and is likely to bear fruit. Research threads aimed at further exploration of this look-ahead signal are discussed in the section on future research (see Chapter 7, Section 7.5). Regardless of whether the research continues to examine the utility of the VIX or pursues another measure as a substitute for, or for use in combination with, the VIX, the investigator would:

1. Look for an appropriate weighting scheme for the annual VIX series, giving more weight to the most current estimates of the VIX.
2. Reconsider the rebalance period to be more frequent (for example, quarterly) and use a signal series of appropriate length.
3. Pursue some combination of Approaches 1 and 2.

An interesting question is why the covariance structures with return periods of less than 0.5 year behave more poorly. A simply stated hypothesis is that the nugget covariances for these return periods add no additional information, over that provided by the base covariance, in terms of describing the risk in the portfolio. There is some support for this interpretation. The standard deviations of the base


covariance are very similar to those of the nuggets out to about four weeks. So, in fact, adding this shorter-return-period covariance is somehow "misdirecting" or "subtracting" from the ability of the base covariance to describe risk.

In this chapter (Chapter 6, Section 6.4) the researcher has provided validation for a model with a potentially wide set of applications, including the ability to estimate the joint probability of any given set of equities for an arbitrary set of return periods (return values). The chapter concluded (Chapter 6, Section 6.5) with a specific application of the computed model elements to the common and important problem of portfolio formation. It was shown that, in a context realistically mimicking an investing environment, material outperformance of a commonly used model could be had by an investor with the introduction of the covariance structure associated with longer return periods. This is true for investors with hold/forget strategies as well as periodic-rebalance strategies. Finally, an initial exploration was undertaken looking for a signal to predict which covariance structure an investor should select to get the most performance out of a periodic-rebalance strategy.

In the next chapter (Chapter 7) the researcher summarizes the research presented herein, the unique aspects of the dissertation, and future research that could offer very ample opportunities for understanding portfolio risk/return.


7. Summary And Conclusions, Along With Thoughts On The Current Research

7.1 Overview Of The Chapter

In this, the final chapter of the dissertation, there are four sections which tie together the research, speak to its unique and innovative aspects, and lay out some extensions to the research performed as well as some additional, new research threads. The chapter commences (Chapter 7, Section 7.2) with a summary of the research performed. This is followed by a section (Chapter 7, Section 7.3) which takes the summaries and articulates them in terms of the conclusions which they yielded. Behind the summary and conclusions, and perhaps leading to them, is a set of unique and novel features which underpin the research; these are enumerated in the third section of the chapter (Chapter 7, Section 7.4). The final section (Chapter 7, Section 7.5) brings together and organizes the next steps which will extend the presented research into areas which the researcher believes are promising.


7.2 Summary

It is clear that the characterization of risk is important in the design of financial investment strategies. A host of earlier investigators has shown that returns on equities are not normally distributed and that this departure from normality can mean that estimates of tail risk (the probability density in the distributions' tails) are greater, potentially far greater, than would be computed under the normal-distribution assumption. Many researchers have in fact concluded that the return on equities is in general leptokurtic (too much density in the center and in the tails and too little in the shoulders) and negatively skewed. Therefore, commonly occurring levels of returns, or returns in the center of the distribution, may appear Gaussian. It is the returns in the tails that are of most concern to investors, and in fact understanding tail behavior is perhaps the most important (legal) financial intelligence component relating to whether money will be made or lost. Sadly, an understanding of tail behavior is not widespread among investors (especially in the U.S.), and, to be fair, there are still a lot of unknowns. The purpose of the research in this paper is to study at least some of these unknowns in a new and novel fashion.

The starting perspective for this research, based on a host of other researchers' prior analyses, was that better characterizations of risk do matter in terms of improving returns and that the characterization of risk that produces consistently better returns is an important goal. The research focused upon worldwide equity


performance because of the desire to make general statements about performance/risk over a broad range of geographies and types of businesses. In so doing, the research looked at a very important set of securities and a large asset class, in fact a fundamental asset class with direct links to the investment business (equities are not complicated by all the overlying factors that impact derivatives or collective investments). For the initial research it was decided not to include other fundamental asset classes such as fixed income or bonds, since they could create unwarranted complications for the research. Daily performance data for 76,424 common stocks from 95 stock exchanges, covering the period January 2000 to August 2007, were gathered (see Chapter 3, Section 3.3). Along with the performance data, a set of ancillary data (some value statistics) for firms and exchanges was also collected. Finally, a substantial number of economic and market time series were gathered. These covered the same timeframes as the performance data and, taken together, represented a global view of the behavior of markets and economies. The equity performance series (in particular, the return series) were filtered for completeness of series, length of series, dead equities, and possession of liquidity. This latter property was determined with an approach developed by the researcher that is now being used by a major financial data reporting/dissemination organization. The number of performance series meeting the filtering criteria was 12,185. Of these,


3,000 were sampled, using a stratified random sample (proportions based on the 12,185 equities) to mirror the proportions found in the larger sample.

As indicated earlier, the research started from the perspective that it is worthwhile to examine the behavior of the tails. While there are results (cited earlier in the document) that take a proposed distribution for returns and examine its tails or maxima (minima), this research examined the distribution of tail values directly, using an area of active research that pursues these studies: extreme-value theory. The challenge, however, was not the examination of tail behavior as a set of univariate series but combining the univariate distributions to form a multivariate series, since there does not appear to exist a method for combining the univariate series that is reliable, applicable across a range of data domains, and supported by good diagnostic techniques that can be readily explained to business (nontechnical) managers. (Copulas have over the past decade been the mechanism of choice for quite a few analysts; however, this mechanism quite often does not meet the criteria just laid out.) So, another means of forming a joint distribution was proposed. This involved converting the problem from a joint distribution of generalized extreme values (GEVs) into a model of the distribution of return values estimated from the GEVs. (This model may be defined as a Gaussian process that has the advantage of following a multivariate Gaussian distribution (lognormal, after log transform), with a mean and variance dependent upon the ancillary data


characteristics of equities and a variance that is a function of the estimated GEV parameters.)

In setting up the analysis the researcher, based upon several preliminary studies, decided to use a block maximum method (with a block being a trading week) to create a minima extreme series for each of the equities. In an earlier experiment, which examined block maxima for one-month and six-month blocks as well as peaks-over-threshold (POT) mechanisms, it was found that the estimated parameters for the block maximum model yielded cleaner and more rapid convergence than POTs or Poisson-Generalized Pareto Distributions (P-GPDs). Further examination of block maxima-based models showed that the parameters of distributions of blocks of different lengths are functionally related, and use of a week block was selected because it yields the maximum number of observations with which to estimate parameters as well as provides an easier interpretation from a domain perspective. For ease of computation the analysis was changed from an analysis of extreme minima to an analysis of extreme maxima by multiplying the series by -1.

From the review of GEV models, those parameterized by only one value for each of the distribution parameters (μ, σ, and ξ) are labeled as static or non-time-varying GEVs. Given the nature of the subject under study, there was ample reason to


believe that a time-varying set of parameters might be a better fit to the data. To this end, a model was adopted wherein the GEV parameters were estimated as linear functions of covariates. The covariates examined included simple linear trends (time's arrow), constants, periodic (trigonometric) functions, and an algorithmically selected subset (44 series) of the economic and market time series alluded to earlier in the summary. The trigonometric functions proved to yield no significant coefficients. The constant-value estimates for the parameters were significant in the overwhelming percentage of samples of the maxima series. Using a constant only was equivalent to a static model, so a constant was included as an element of the linear covariate function for all samples. This strategy had the added advantage of creating an embedded (nested) model structure when looking at the efficiency of models containing economic and financial covariates. Examining the utility of these economic and financial covariates proved far more exhausting. Uncertainty about the manner in which a covariate was to enter the model led to the application of time lags and aggregation factors to covariates, so each covariate resulted in 9 values for each week, or 396 (9*44) in all. Some preliminary examinations allowed the number to be cut down to 264. Linear relationships between covariates and parameters were assumed, and parameter estimation was set within a maximum-likelihood estimator (MLE) framework.
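To make the estimation step concrete, the sketch below writes the GEV negative log-likelihood with each parameter a linear function of a covariate matrix (a constant column plus covariates), as the text describes, and fits it by Nelder-Mead. It is a hedged sketch under simplifying assumptions: the ξ = 0 (Gumbel) boundary is not handled specially, no step-wise covariate search is performed, and the simulated data and names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def gev_nll(beta, maxima, X):
    """Negative log-likelihood of block maxima under a GEV whose three
    parameters are linear in the same covariate matrix X."""
    k = X.shape[1]
    mu, sigma, xi = X @ beta[:k], X @ beta[k:2 * k], X @ beta[2 * k:]
    if np.any(sigma <= 0):
        return np.inf                       # scale must stay positive
    t = 1.0 + xi * (maxima - mu) / sigma
    if np.any(t <= 0) or np.any(xi == 0):
        return np.inf                       # outside GEV support; xi=0 skipped
    return float(np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(t)
                        + t ** (-1.0 / xi)))

# Hypothetical weekly maxima driven by one economic covariate.
rng = np.random.default_rng(0)
x = rng.normal(size=400)
X = np.column_stack([np.ones(400), x])
maxima = rng.gumbel(loc=0.01 + 0.002 * x, scale=0.02)
beta0 = np.array([0.0, 0.0, 0.02, 0.0, 0.1, 0.0])  # mu, sigma, xi blocks
fit = minimize(gev_nll, beta0, args=(maxima, X), method="Nelder-Mead")
print(fit.x.round(4))
```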


A (forward and backward) step-wise model was used to check over the 266 covariates for each of the three parameters (798 potential candidate covariates). A combination of Nelder/Mead and Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods was used to search over the space. Stopping rules were provided by examination of the Bayesian Information Criterion (BIC) and likelihood-ratio tests. Models were estimated for the 3,000 external series using 90 processors for a period of 10 days. The result was a linear model for each of the parameters for each sample of block maxima datasets. The average number of covariates found in the linear models was just under seven. A number of patterns were noted in the results (not examined further in this research, since they did not bear directly upon the major dissertation propositions, but they do represent a research thread to examine in the future).

Having obtained estimates of the time-varying parameters for the GEVs, these estimates were used, along with a domain-meaningful set of return periods, to obtain return values via a time-varying return-level computation. So, for each of the predetermined return periods (articulated in weeks), for each of the analysis years from 2001 to 2007, and for each security in the training set, a time-varying return value z_ij was computed. z_ij was defined such that

P[Z_i > z_ij] = 1/N_j    (7.1)

where Z_i is the return variable describing the maxima returns for equity i, following the time-varying EVD described above; z_ij is the arbitrary but fixed return value for equity i and return period j; and N_j is the return period in weeks.
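A sketch of one plausible version of the search for z_ij follows: solve P[Z_i > z] = 1/N_j numerically when the weekly GEV parameters vary over time. How exceedance probabilities are aggregated across weeks here (a simple average) is an assumption of this sketch, not a statement of the dissertation's exact algorithm, and the inputs are hypothetical. Note that scipy's shape convention is c = -ξ relative to the text's parameterization.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import genextreme

def return_value(mu_t, sigma_t, xi_t, period_weeks):
    """Numerically solve P[Z > z] = 1/N for z, averaging the weekly
    exceedance probabilities of a time-varying GEV."""
    mu_t, sigma_t, xi_t = map(np.asarray, (mu_t, sigma_t, xi_t))

    def excess(z):
        # scipy's genextreme uses shape c = -xi
        surv = genextreme.sf(z, c=-xi_t, loc=mu_t, scale=sigma_t)
        return surv.mean() - 1.0 / period_weeks

    return brentq(excess, -1.0, 1.0)   # bracket suited to weekly returns

# Hypothetical time-varying weekly parameters; a 26.071-week return period.
weeks = np.arange(52)
mu = 0.01 + 0.002 * np.sin(weeks / 8.0)
print(return_value(mu, np.full(52, 0.02), np.full(52, 0.1), 26.071))
```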


The return value was arrived at by using a search algorithm as described earlier. Finally, these return value results, along with other functions of the estimated time-varying parameters and assumptions about the behavior of the solution space in the neighborhood of the selected solution, were used to compute the measurement or nugget variance for each of the equities for each year by return period.

Using these results and the ancillary data set, a multivariate model to describe the behavior of any given set of return levels was defined. The model was adapted into the form of a Gaussian process. The mean of the process comprised fixed effects formed by market capitalization, stock exchange, and market sector. These fixed effects were selected as the result of a step-wise (forward and backward) modeling analysis performed over the entire set of ancillary variables, both as main effects and with some interaction. BIC was used as the test statistic and selection criterion. Two random effects were identified and incorporated in the form of covariance structures (assumed to be independent of one another): a covariance structure based upon market capitalization and time, and the nugget variance. The model diagnostics run on "studentized" residuals showed no discernible deficiency in the model against a normal-distribution hypothesis. The model was also applied


to a holdout sample of 100 equities. The coefficients generated by the training set were those used in the test-set model; in other words, the coefficients were not re-estimated for the test-set model. Once again, the studentized residuals failed to reject either the normal-distribution hypothesis or the hypothesis that the distributions of the training and test sets were identical.

This model provides an alternative means of describing the joint uncertainty, particularly in the downside tails, of equity return distributions. It is believed the model and its components form the basis for a number of lines of inquiry and applications. Future research to which this model might be applied is discussed in a later section of this chapter (Chapter 7, Section 7.5). This dissertation concludes with an examination of portfolio construction and what, if anything, these results add to the performance of this activity.

Construction of portfolios is an important activity in the financial domain; it may even be argued that constructing portfolio and trading strategies is central to the financial investment function. The methodologies for constructing portfolios are abundant; however, this research examined whether an element represented by the connected concepts of time-varying GEVs, the return values, and the nugget variation can add something to the description of the tail variation and in turn to the definition of financial portfolios. To this end the nugget variances


of a series of financially meaningful return periods for approximately 3,000 equities for each year from 2001-2007 were computed. These nugget covariances were combined with the standard or base covariance. In this research that combination was a rather naive addition operation that certainly could be expanded upon, as will be described in the section covering further research. Nonetheless, the set of covariances was used to form portfolios by means of mean-variance optimization (MVO), long considered the gold standard of optimization for this application.

Upward of 600 portfolios for each combination of year- and return period-based covariance were formed on the same set of equities, which were reduced in number using a variant of the high-grading methodology described earlier (see Chapter 3, Section 3.7.1). For purposes of comparison, MVOs also were run using the base covariance structure. The output of the analysis was an example of using well-known financial management strategies: invest and hold, wherein an investor invests in a portfolio and leaves the portfolio untouched over a long investment horizon, and annual rebalancing, wherein the investor reexamines and modifies the portfolio on an annual basis. An "in-sample/out-of-sample" analysis construction was utilized, so that the overall analysis was equivalent to an actual investment strategy and behavior. At each timeframe the computations used no information that would have been considered as occurring in advance of the


timeframe. For each of these analysis methods the Sharpe portfolio was used for the comparisons. The Sharpe portfolio is well-defined, is a commonly used benchmark, and provides a well-known point of reference. With respect to the invest-and-hold-approach portfolio, all the longer-return-period strategies outperformed the base strategy. The annual-rebalancing strategy generated even greater outperformance for the same return-period models. The annual-rebalancing strategy required a look-ahead element (utilization of a forward-looking market covariate) to be fully effective. In this case some success was realized using the measure of volatility commonly known as the VIX. (This is an area the researcher believes will yield clearly improved results with further research.)

7.3 Conclusions

Many of the conclusions arising from this research flow directly from the Summary. The research has established a platform for inquiry into and further characterization of the uncertainty of financial returns. This platform is built upon the previously unused timbers of time-varying GEV distributions and the related species of time-varying return values. The platform provides a means for incorporating into the modeling process a substantial portion of the variability and value of globally recognized financial measures. In this regard the GEV parameters tend to be a function of time-leading global equity indices, followed by well-observed longer-term sovereign and investible-grade corporate/nongovernment


debt interest rates. The return values generated from the time-varying GEVs are able to be modeled jointly as a Gaussian process using fixed and random elements based on industrial sector, exchange, and market capitalization. From a finance perspective these results could be interpreted as a new factor model that conforms to the results of earlier investigators, while adding new dimensions in terms of the specific forms and the manner in which the model is structured. The good-quality diagnostics generated from this model suggest the model should be investigated as an alternative to using copulas to understand joint equity behavior, particularly with respect to tail uncertainty of returns in general and the joint probability of extreme returns in specific. Finally, the platform has provided a launching pad for a new means of describing return uncertainty that could be incorporated into the formation of financial portfolios, leading to realization of better overall performance.

7.4 Unique Aspects Of The Research

The research described herein has a number of novel and unique aspects:

1. Breadth of the data, inasmuch as the dataset example used global equities on a heretofore unexamined scale.
2. Use of return values as the dependency function for building a joint distribution of tail behavior of equity returns.


3. Development and subsequent patenting of an application to deal with the problem of the number of properties being greater than the number of samples.
4. Application in finance of a complete time-varying extreme-value approach, including fitting model parameters using covariates and estimation of time-varying return values. (Such an approach is much more appropriate than the static or non-time-varying approaches used previously.)
5. Breadth of financial covariates examined as potential driving variables to describe the behavior of the extreme-value parameters.
6. Development of a new liquidity filter, now being used in the industry, to filter out illiquid equities from sets of candidate equities in constructing portfolios.
7. Development of a new portfolio generation mechanism using the Omega statistic.
8. Creation of a new factor model for describing joint tail behavior of equities.
9. A new portfolio construction method (to augment a well-known existing technique), using an estimate of risk that captures behavior in a more detailed manner.
10. Introduction of a signal to help the investor choose among a set of return period-driven covariance structures.


7.5 Future Research

In earlier sections of this chapter (Chapter 7, Sections 7.2 and 7.3) the reader was provided with a sense that the line of inquiry presented herein has great potential for very substantial expansion. In fact, the researcher believes what has really been developed in this research is the skeleton or support structure on which a number of potentially profitable inquiries may be built. In this section some major threads of possible future research based on the present study are enumerated with brief explanation:

1. Parameters of the EVDs and the covariates will undoubtedly need to be refined, if not revisited in very substantial proportion. This activity will lead to at least two threads of inquiry: first, an improvement in the efficiency of examining the covariate space, and second, confirming and deepening the understanding of which covariates are driving the parameters. In fact, the value of adding latents (latent variables) such as business cycles, geopolitical regimes, and stochastic volatility to the model should be examined.
2. Combining covariance structures in the present research was accomplished through simple addition of the covariances. Other methods worth investigating include incorporating an optimization-weighted version of the covariances and using more than two covariances, or moments other than just the second moment, to capture behavior in tail uncertainty.


3. The researcher initially sought to expand the factor model to include factors that have stronger spatial or temporal elements. An examination of the impact of temporal and spatial covariance (the latter in a more general sense than just geography) was conducted. However, the results of this examination (not reported on herein) were either nonexistent (for some factors examined) or confusing. It is the researcher's view that the investigation should include more careful delineation of factors, including more selective use of greater factor granularity as well as the addition of other conditioning variables, including latent effects such as stochastic volatility; these are amongst the directions to extend the model into the realm of spatial extreme-value models.
4. The value to the investor of a signal suggesting the return-maximizing covariance structure to select was demonstrated. This signal would be used by an investor as part of a periodic rebalance strategy. The research needs to be extended to improve upon the signal and its look-ahead capability.
5. At the outset of this research suggestions were made to the investigator that he might try to establish models to express the manner in which a financial storm ripples through the global financial system. (In climatology this is often defined in terms of a distinction between models that describe how weather propagates versus models that describe climatological relationships. It also may be thought of as the model impacts of introducing


shorter-term innovations versus long-term expected behavior.) The researcher did not achieve the goal of offering a "financial weather" model, but he did develop tools and insights (unreported herein) leading him to believe that a rudimentary form of the model is feasible, along with a direction in which to pursue it.
6. Finally, if these models are to be used ultimately as part of a financial system to make decisions, much of the modeling and models will need to be made more efficient and easier to modify than the set of models on which this research rests.

The researcher believes the material presented herein represents the basis for a set of research threads that will last a minimum of the next five to seven years and will ultimately yield a greater understanding of the character of risk in designing financial investment strategies.


Appendix A. List Of Countries, Stock Exchanges, Industries And Sectors Used In This Research

Table A.1 Countries used in this research.


Table A.2 Stock exchanges used in this research.


Table A.3 Industries used in this research.

Advertising/Mrketng Services, Aerospace & Defense, Agricultural Commodities/Milling, Air Freight/Couriers, Airlines, Alternative Power Generation, Aluminum, Apparel/Footwear, Apparel/Footwear Retail, Auto Parts: OEM, Automotive Aftermarket, Beverages: Alcoholic, Beverages: Non-Alcoholic, Biotechnology, Broadcasting, Building Products, Cable/Satellite TV, Casinos/Gaming, Catalog/Specialty Distribution, Chemicals: Agricultural, Chemicals: Major Diversified, Chemicals: Specialty, Coal, Commercial Printing/Forms, Computer Communications, Computer Peripherals, Computer Processing Hardware, Construction Materials, Consumer Sundries, Containers/Packaging, Contract Drilling, Data Processing Services, Department Stores, Discount Stores, Drugstore Chains, Electric Utilities, Electrical Products, Electronic Components, Electronic Equipment/Instruments, Electronic Production Equipment, Electronics Distributors, Electronics/Appliance Stores, Electronics/Appliances, Engineering & Construction, Environmental Services, Finance/Rental/Leasing, Financial Conglomerates, Financial Publishing/Services, Food Distributors, Food Retail, Food: Major Diversified, Food: Meat/Fish/Dairy, Food: Specialty/Candy, Forest Products, Gas Distributors, Home Furnishings, Home Improvement Chains, Homebuilding, Hospital/Nursing Management, Hotels/Resorts/Cruiselines, Household/Personal Care, Industrial Conglomerates, Industrial Machinery, Industrial Specialties, Information Technology Services, Insurance Brokers/Services, Integrated Oil, Internet Retail, Internet Software/Services, Investment Banks/Brokers, Investment Managers, Investment Trusts/Mutual Funds, Life/Health Insurance, Major Banks, Major Telecommunications, Managed Health Care, Marine Shipping, Media Conglomerates, Medical Distributors, Medical Specialties, Medical/Nursing Services, Metal Fabrication, Miscellaneous, Miscellaneous Commercial Services, Miscellaneous Manufacturing, Motor Vehicles, Movies/Entertainment, Multi-Line Insurance, Office Equipment/Supplies, Oil & Gas Pipelines, Oil & Gas Production, Oil Refining/Marketing, Oilfield Services/Equipment, Other Consumer Services, Other Consumer Specialties, Other Metals/Minerals, Other Transportation, Packaged Software, Personnel Services, Pharmaceuticals: Generic, Pharmaceuticals: Major, Pharmaceuticals: Other, Precious Metals, Property/Casualty Insurance, Publishing: Books/Magazines, Publishing: Newspapers, Pulp & Paper, Railroads, Real Estate Development, Real Estate Investment Trusts, Recreational Products, Regional Banks, Restaurants, Savings Banks, Semiconductors, Services to the Health Industry, Specialty Insurance, Specialty Stores, Specialty Telecommunications, Steel, Telecommunications Equipment, Textiles, Tobacco, Tools & Hardware, Trucking, Trucks/Construction/Farm Machinery, Unknown, Water Utilities, Wholesale Distributors, Wireless Telecommunications


Table A.4 Sectors used in this research.

Factset Sectors: Commercial Services, Communications, Consumer Durables, Consumer Non-Durables, Consumer Services, Distribution Services, Electronic Technology, Energy Minerals, Finance, Health Services, Health Technology, Industrial Services, Miscellaneous, Non-Energy Minerals, Process Industries, Producer Manufacturing, Retail Trade, Technology Services, Transportation, Utilities


Appendix B. Detailed Analyses Of The Covariate Models

To analyze results wherein 795 covariates could have been chosen, the investigator first remapped the results to a series of simple binary values. These included the presence or absence of a covariate in the model for each equity having the following attributes:

1. Was in the function for μ̂, σ̂, or ξ̂.
2. Was contemporaneous or lagged, given 1.
3. Was one-week or two-week lagged, given 1.
4. Was maximum or minimum, given 1.

Additional indices were built upon factor analyses of the covariates at the level of the individual covariate. The factor analyses of contemporaneous covariates yielded distinct groupings (see Tables B.1 and B.2). Using these groupings as a guide, two sets of indices were created for each time aggregation: one set was based upon simple membership or non-membership in a multi-covariate factor-analytic grouping; the second set was a more complex mapping in which multi-covariate groups were individually coded, while single-covariate groups were combined under one code.
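A minimal sketch of the presence/absence remapping described above is given below; the term representation (a dict per covariate with parameter, lag, and aggregation attributes) is a hypothetical structure chosen for the illustration.

```python
def covariate_indicators(terms):
    """Binary attributes for one equity's fitted model, following the
    remapping described in the text. Each term is a dict with keys
    'param' ('mu', 'sigma', or 'xi'), 'lag' (0, 1, or 2 weeks), and
    'agg' ('max' or 'min')."""
    return {
        "in_mu":    any(t["param"] == "mu" for t in terms),
        "in_sigma": any(t["param"] == "sigma" for t in terms),
        "in_xi":    any(t["param"] == "xi" for t in terms),
        "contemporaneous": any(t["lag"] == 0 for t in terms),
        "lagged":          any(t["lag"] > 0 for t in terms),
        "lag_one_week":    any(t["lag"] == 1 for t in terms),
        "lag_two_weeks":   any(t["lag"] == 2 for t in terms),
        "uses_max": any(t["agg"] == "max" for t in terms),
        "uses_min": any(t["agg"] == "min" for t in terms),
    }

# One equity's (hypothetical) selected covariates.
terms = [{"param": "mu", "lag": 1, "agg": "min"},
         {"param": "xi", "lag": 0, "agg": "max"}]
print(covariate_indicators(terms))
```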


Tables B.1-B.5 summarize the overall distribution of coefficients from the time-varying GEV analysis. Highlights from these tables include:

1. There was an average of 6.99 covariates per model (this included the three constants, one each for μ, σ, and ξ, forced into each model). Of these, about half were financial/economic covariates rather than constants or trends.
2. Of the financial/economic covariates, the plurality were used in the estimation of μ̂, followed by ξ̂ and σ̂.
3. Time-contemporaneous covariates made up less than half of the covariates compared to time-lagged covariates (46.6% against 53.4%). Events occurring in the previous two weeks had slightly more influence on parameters than events occurring in the same week.
4. Covariates associated with lags of one or two weeks were equally divided, suggesting, from an overall perspective, the existence of a lead-lag relationship but no strong differentiation by time frame for at least the first two weeks.
5. The minimum values of the covariate in the block (NM) more frequently entered the model than the maximum values of the covariate in the block (PM), 62.1% versus 37.9%, respectively.
6. Covariates associated with a multi-covariate group were more frequently in the model than independent covariates that formed their own factor.


Table B.1 Covariates grouped as per factors developed using a principal-components extraction and a varimax rotation; these groups (factors) contain more than one covariate (Groups 1-14). (More complete covariate names [where appropriate] may be found in Table 3.12; NM = minimum value in time block, PM = maximum value in time block.) The covariates forming Groups 1-14 are: DJ Ind Average P IX NM/PM; NASDAQ 100 P IX NM/PM; S P 1500 Supercomp TR IX NM/PM; Russell 1000 NM/PM; Russell 2000 NM/PM; Russell 3000 NM/PM; Russell Mid Cap NM/PM; NASDAQ Composite Index NM/PM; US Interest 10 Yr NM/PM; US Interest 20 Yr NM/PM; Moody aaa NM/PM; Moody bbb NM/PM; Austria ATX NM/PM; Austria WBI Benchmarked NM/PM; Libor Three Months NM/PM; Libor Six Months NM/PM; Aust Bank accepted Bills 180 day NM/PM; Aust Treasury Bonds 2 years NM/PM; Euro STOXX 50 NM/PM; France CAC 40 NM/PM; 10 Yr (CM) NM/PM; 15 Yr Mtg NM/PM; Spread: 30 Yr Mtg-10 Yr (CM) NM/PM; Spread 15 Yr Mtg-7 Yr (CM) NM/PM; Spread Invest Grade-5 Yr (CM) NM/PM; And 10 Year Swap Rate PM.


Table B.2 Covariates grouped as per factors developed using a principal-components extraction and a varimax rotation; these groups (factors) contain only one covariate. (More complete covariate names [where appropriate] may be found in Table 3.12; NM = minimum value in time block, PM = maximum value in time block.)

Ungrouped covariates: KOSPI IX NM/PM; Brazil Bovespa NM/PM; China Shanghai SE Composite NM/PM; CBOE U S Market Volatility NM/PM; VXO NM/PM; Fed_Fund_Rate NM/PM; Japan Nikkei Average 225 Benchmarked NM/PM; US Interest 6 mo NM/PM; Hang Seng Hong Kong NM/PM; Dlr Fut Ind NM/PM; One Yr Interest Rate Swap NM/PM; And 10 Year Swap Rate NM; 3M Euro Dollars Fut NM/PM; Can V39072 NM/PM; Lehman Muni Sec Tr Inv NM/PM; BOE IUDVCDA NM/PM; Aust Bank accepted Bills 30 day NM/PM; Spread 3M TB 3M Euro Dlrs NM/PM; FTSE 100 P IX NM/PM.


Table B.3 Overall tally of covariates entering models, broken down by parameters and broad themes.

                                        All Parms (Excl. Constants)  Contemporaneous No Trend (NT)  Lags NT  Trends
Mu Count                                4,377                        2,369                          1,793    215
Mu Average/Equity                       1.458                        0.789                          0.597    0.072
Propor'n of Col                         0.373                        0.474                          0.313    0.209
Sigma Count                             3,283                        1,230                          1,379    674
Sigma Average/Equity                    1.094                        0.410                          0.459    0.225
Propor'n of Col                         0.279                        0.246                          0.241    0.655
Xi Count                                4,089                        1,401                          2,548    140
Xi Average/Equity                       1.362                        0.467                          0.849    0.047
Propor'n of Col                         0.348                        0.280                          0.445    0.136
Column Total                            11,749                       5,000                          5,720    1,029
Average Over All Equities               3.669                        1.562                          1.786    0.321
Proportion All (exclude constant)                                    0.426                          0.487    0.088
Proportion All (exclude constant, trend)                             0.466                          0.534


Table B.4 Tally of covariates entering models, broken down by parameters and time lag, not including contemporaneous observations.

                                   Lag 1 NT   Lag 2 NT
Mu Count                           944        849
Mu Average/Equity                  0.314      0.283
Propor'n of Col                    0.330      0.297
Sigma Count                        698        681
Sigma Average/Equity               0.233      0.227
Propor'n of Col                    0.244      0.238
Xi Count                           1,215      1,333
Xi Average/Equity                  0.405      0.444
Propor'n of Col                    0.425      0.466
Column Total                       2,857      2,863
Average Over All Equities          0.892      0.894
Proportion All (exclude constant)  0.499      0.501


Table B.5 Tally of covariates entering models, broken down by parameters versus aggregate function and covariate groupings. (Legend: PM = maximum, NM = minimum; NG = covariate is a singleton in its factor, GR = covariate loads highly on a factor possessing more than one covariate; NT means that trend covariates were not included in the counts.)

                                          PM NT    NM NT    Total NG   Total GR
Mu Count                                  1,259    2,903    1,821      2,341
Mu Average/Equity                         0.419    0.967    0.607      0.780
Propor'n of Col                           0.310    0.436    0.398      0.381
Sigma Count                               1,035    1,574    1,072      1,537
Sigma Average/Equity                      0.345    0.524    0.357      0.512
Propor'n of Col                           0.255    0.236    0.234      0.250
Xi Count                                  1,768    2,181    1,686      2,263
Xi Average/Equity                         0.589    0.727    0.562      0.754
Propor'n of Col                           0.435    0.328    0.368      0.369
Column Total                              4,062    6,658    4,579      6,141
Proportion All (exclude constant, trend)  0.379    0.621    0.427      0.573


Table B.6 provides a breakdown of the model covariates by group affiliation, provided that the covariate is a member of a group. Note that the last line of each subsection of the table indicates, in descending order, the rank of the group in terms of the amount of variation explained by the factor forming the group. There appears to be a very strong relationship between the number of covariates from a specific group and the variation explained by the group. This suggests to the investigator that there is strong explanatory power in the model being postulated.


Table B.6 Tally of covariates entering models, broken down by parameters versus covariate group/factor.

                                          GR 1     GR 2    GR 3    GR 4    GR 5    GR 6    GR 7
Mu Count                                  461      368     293     87      231     134     86
Propor'n of Col                           0.379    0.389   0.392   0.361   0.385   0.400   0.394
Sigma Count                               317      224     175     74      141     90      45
Propor'n of Col                           0.261    0.237   0.234   0.307   0.235   0.269   0.206
Xi Count                                  438      354     280     80      228     111     87
Propor'n of Col                           0.360    0.374   0.374   0.332   0.380   0.331   0.399
Column Total                              1,216    946     748     241     600     335     218
Proportion All (exclude constant, trend)  0.198    0.154   0.122   0.039   0.098   0.055   0.035
SS Covariates Position                    1        2       3       5       4       7       9

                                          GR 8     GR 9    GR 10   GR 11   GR 12   GR 13   GR 14
Mu Count                                  124      83      89      88      104     94      99
Propor'n of Col                           0.366    0.381   0.385   0.318   0.385   0.369   0.401
Sigma Count                               87       46      55      83      66      73      61
Propor'n of Col                           0.257    0.211   0.238   0.300   0.244   0.286   0.247
Xi Count                                  128      89      87      106     100     88      87
Propor'n of Col                           0.378    0.408   0.377   0.383   0.370   0.345   0.352
Column Total                              339      218     231     277     270     255     247
Proportion All (exclude constant, trend)  0.055    0.035   0.038   0.045   0.044   0.042   0.040
SS Covariates Position                    6        10      11      8       12      13      14


An additional analysis was performed to examine the relationship between the time-varying models and the factors formed from the ancillary variables described in Chapter 3, namely market value (MV), sector (Sect), trade region (Tr.reg), exchange region (Reg), and exchange (Ex). The analysis was set up in terms of factors formed from the values of the ancillary variables versus what the author calls dependent variables. The dependent variables were formed for certain modeling characteristics by summations of the tallies created from the presence/absence data of the model covariates just described. Summations of the tallies were formed for these characteristics, creating the dependent variables:

Contemporaneous covariates (Con_NT)
Time-lagged covariates (Lags_NT)
Covariates that were members of a group of size greater than 1 (Grp)
Covariates that were members of a group of size = 1 (NoG)
Maximum extreme value of the covariate (Max.ex)
Minimum extreme value of the covariate (Min.ex)

The eight values of these dependent variables (beyond constant and trend terms) were:

1. ",,," = no covariates
2. "m,," = μ covariate(s) only (meaning covariates present only in functions of μ)


3. ",s," = σ covariate(s) only
4. ",,x" = ξ covariate(s) only
5. "m,s," = μ and σ covariates only
6. "m,,x" = μ and ξ covariates only
7. ",s,x" = σ and ξ covariates only
8. "m,s,x" = μ, σ, and ξ covariates

A cross-tabulation was created and a contingency-table analysis was performed. The examination tested for the presence of an association between the values of a specific factor and a dependent variable. The null hypothesis for each of these tests was the absence of an association, i.e., that the factor and dependent variable were independent of one another. This would indicate that the observed tallies in the cells were not significantly different from expected, where the expected value was formed by the sample-size adjustment of the product of the marginal probabilities. The alternative hypothesis was that the factor and the dependent variable were not independent. Of course, a rejection of the null hypothesis did not imply the specific form of the association or, even more interesting, that the association was meaningful in the present context. (A further examination of the results is needed for that determination.) A Bonferroni adjustment was made to the 30 chi-square


tests to control the overall α error level. Table B.7 reports on what the investigator interpreted as the significant results of the testing. It is of interest that the significant dependent variables are Con_NT, Grp, Min.ex, and, at a lesser frequency, NoG. Neither Lags_NT nor Max.ex appears as a significant dependent variable.

Table B.7 Results from chi-square tests for the presence of association between the values of the stated factor and the dependent variable.

Factor    Dependent   P(X > stat)
Ex        Con_NT      0.0005
Ex        Grp         0.0005
Ex        Min.ex      0.0005
Ex        NoG         0.0005
MV        Con_NT      0.0005
MV        Grp         0.0005
MV        Min.ex      0.0005
MV        NoG         0.0005
Reg       Con_NT      0.0005
Reg       Grp         0.0010
Reg       Min.ex      0.0005
Sect      Con_NT      0.0010
Sect      Min.ex      0.0005
Tr.reg    Con_NT      0.0005
Tr.reg    Grp         0.0005
Tr.reg    Min.ex      0.0005
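The testing procedure can be sketched as follows: run a chi-square test of independence on each factor-by-dependent contingency table and compare each p-value against a Bonferroni-adjusted level (α divided by the 30 tests). The table values below are hypothetical; only the procedure mirrors the text.

```python
import numpy as np
from scipy.stats import chi2_contingency

def bonferroni_chi2(tables, alpha=0.05, n_tests=None):
    """Chi-square independence tests over named contingency tables,
    flagging significance at the Bonferroni-adjusted level."""
    cutoff = alpha / (n_tests or len(tables))
    out = []
    for name, table in tables.items():
        stat, p, dof, _ = chi2_contingency(np.asarray(table))
        out.append((name, stat, p, p < cutoff))
    return out

# Hypothetical market-value-by-dependent tally; 30 tests overall.
tables = {"MV x Con_NT": [[30, 12, 8], [25, 20, 15], [10, 30, 20]]}
for name, stat, p, sig in bonferroni_chi2(tables, n_tests=30):
    print(name, round(stat, 2), round(p, 5), "significant" if sig else "ns")
```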


Tables B.8-B.11 were computed by creating a directional mapping of the residuals, or differences between the expected cell counts under the independence assumption and the observed cell counts. A negative residual count meant that the expected count was greater than the observed count, and vice versa. With directional mapping the residuals were mapped to -1, 0, or 1 based on cut-off values. From examining these tables the investigator suggests the following:

Micro-cap firms tend to be modeled by less complex models; models for small-, mid-, and large-cap firms tend toward greater complexity. (Since only one mega-cap firm was in the sample, it was dropped from the analysis.)

The models associated with trade region and exchange region were very similar in their gross patterns of covariate usage, and the results suggest investigating the possibility of dropping one or the other from further analysis.

Africa, Eastern Europe, the Middle East, South America, and to a lesser extent North America can, at the gross level of this analysis, be modeled by less complex models.

The Pacific Rim and Western Europe, in the same sense as the previous, tend to require more complex models.
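A sketch of the directional mapping follows, using Pearson residuals and a symmetric cut-off of 2; the residual definition and the cut-off value are assumptions of this sketch, since the text does not state the exact values used.

```python
import numpy as np
from scipy.stats import chi2_contingency

def directional_residuals(table, cutoff=2.0):
    """Map a contingency table's Pearson residuals to -1, 0, or +1
    using a symmetric cut-off, as in Tables B.8-B.11."""
    observed = np.asarray(table, dtype=float)
    _, _, _, expected = chi2_contingency(observed)
    pearson = (observed - expected) / np.sqrt(expected)
    coded = np.zeros(observed.shape, dtype=int)
    coded[pearson > cutoff] = 1
    coded[pearson < -cutoff] = -1
    return coded

# Hypothetical 3x3 tally of factor levels by dependent-variable values.
print(directional_residuals([[30, 12, 8], [25, 20, 15], [10, 30, 20]]))
```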


Table B.8 Directional representation of residuals from contingency-table analyses between the Market Value factor and significant dependent variables as listed in Table B.7. (Four panels: Contemporaneous, Group Membership, No Group Membership, and Minimum Aggregation; each tabulates directional codes for the market-value classes large, micro, mid, and small against the eight dependent-variable values ",,," through "m,s,x".)


Table B.9 Directional representation of residuals from contingency-table analyses between the Trade Region factor and significant dependent variables as listed in Table B.7. (Three panels: Contemporaneous Timeframe, Grouped Covariates, and Minimum Aggregation; rows are the trade regions Africa, Central Asia, Eastern Europe, Middle East, North America, Pacific Rim, South America, and Western Europe.)


Table B.10 Directional representation of residuals from contingency-table analyses between the Exchange Region factor and significant dependent variables as listed in Table B.7. (Three panels: Contemporaneous Timeframe, Grouped Covariates, and Minimum Aggregation; rows are the exchange regions Africa, Central Asia, Eastern Europe, Middle East, North America, Pacific Rim, South America, and Western Europe.)


Table B.11 Directional representation of residuals from contingency-table analyses between the Sector factor and significant dependent variables as listed in Table B.7. (Two panels: Contemporaneous and Minimum Aggregation; rows are the twenty sectors listed in Table A.4.)


Since the sector tables contain more data than the other tables, to gain additional insights a Q-mode factor analysis (Rummel [1970]) was performed, the results of which are presented in Table B.12. The two results, while built on fundamentally the same data, represent somewhat different structures and different degrees of sensitivity. These two results, as well as the others, will be used to guide the construction of the time/space models in the next section (see Table B.13).

Table B.12 Q-mode factor analysis: cases represented by the dependent variable minimum aggregation (Min.ex) and properties represented by the data factor sector (Sect).

Factor 1: Industrial Services (1), Miscellaneous (1), Utilities (1), Consumer Durables (-1), Non-Energy Minerals (-1)
Factor 2: Producer Manufacturing (1), Consumer Services (-1), Process Industries (-1)
Factor 3: Communications (-1), Technology Services (-1)
Factor 4: Energy Minerals (1), Electronic Technology (-1)
Factor 5: Transportation (1), Commercial Services (-1)
Factor 6: Distribution Services (-1), Health Technology (-1)
Factor 7: Health Services (-1)
Factor 8: Retail Trade (1)


Sector                    Factor   Sign
Industrial Services          1      +1
Miscellaneous                1      +1
Utilities                    1      +1
Consumer Durables            1      -1
Non-Energy Minerals          1      -1
Producer Manufacturing       2      +1
Consumer Services            2      -1
Process Industries           2      -1
Communications               3      -1
Technology Services          3      -1
Energy Minerals              4      +1
Electronic Technology        4      -1
Transportation               5      +1
Commercial Services          5      -1
Distribution Services        6      -1
Health Technology            6      -1
Health Services              7      -1
Retail Trade                 8      +1
Consumer Non-Durables       NSL
Finance                     NSL

[The eight dependent-variable residual columns of this table duplicate the Minimum Aggregation panel of Table B.11 and are omitted here; their cell values are not recoverable from the extracted text.]

Table B.13 Results from Q-Mode factor analysis. Results appended to directional representation of residuals from contingency table between the minimum aggregation dependent variable (Min.ex) and the sector data factor (Sect). (NSL = no substantive load, meaning that the sector did not load highly on any rotated factor.)
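To make the Q-mode step concrete, the following is a minimal R sketch under stated assumptions: the signed residual codes of Table B.11 are assumed to be held in a numeric matrix resid.mat (sectors in rows, dependent variables in columns, cells coded -1/0/+1, no constant rows), and the eigen-decomposition extraction, varimax rotation, and object names are illustrative choices, not a claim about the exact procedure used in this study.

    # Minimal Q-mode factor analysis sketch (illustrative assumptions; see text).
    # resid.mat: assumed matrix, sectors in rows, dependent variables in columns.
    q.cor <- cor(t(resid.mat))     # Q-mode: correlate the cases (sectors), not the variables
    q.eig <- eigen(q.cor)          # spectral decomposition of the case correlation matrix
    k <- 8                         # number of factors retained, as in Table B.12
    A <- q.eig$vectors[, 1:k] %*% diag(sqrt(pmax(q.eig$values[1:k], 0)))
    rot <- varimax(A)              # orthogonal rotation (stats package)
    fac <- apply(abs(rot$loadings), 1, which.max)                # dominant factor per sector
    sgn <- sign(rot$loadings[cbind(seq_len(nrow(q.cor)), fac)])  # loading sign, +1 or -1

A sector whose largest absolute loading falls below a chosen threshold would be reported as NSL (no substantive load), as in Table B.13.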


Bibliography

Affleck-Graves, J., & McDonald, B. (1989). Nonnormalities and tests of asset pricing theories. The Journal of Finance, 44(4), 889-908.

Answers.com. (2007). Business and finance: Market capitalization. Retrieved November 12, 2007, from the Answers.com website: http://www.answers.com/topic/market-capitalization?cat=biz-fin

Beirlant, J., Goegebeur, Y., Segers, J., & Teugels, J. (2004). Statistics of extremes: Theory and applications. West Sussex, England: John Wiley & Sons, Ltd.

Box, G.E.P., & Cox, D.R. (1964). An analysis of transformations (with discussion). Journal of the Royal Statistical Society, Series B, 26(2), 211-252.

Campbell, J.Y., Lo, A.W., & MacKinlay, A.C. (1997). The econometrics of financial markets. Princeton, New Jersey: Princeton University Press.

Casella, G., & Berger, R.L. (2001). Statistical inference. (2nd ed.). Pacific Grove, CA: Duxbury Press.

Cebrián, A.C., Denuit, M., & Lambert, P. (2003). Analysis of bivariate tail dependence using extreme value copulas: An application to the SOA medical large claims database. Belgian Actuarial Bulletin, 3(1), 33-41.

Chatfield, C. (1996). The analysis of time series: An introduction. (5th ed.). London: Chapman and Hall.

Clark, A., & Labovitz, M. (2006, October). Securities selection and portfolio optimization: Is money being left on the table? Paper presented at the Conference on Financial Engineering and Applications (FEA 2006), Cambridge, MA.

Coles, S.G. (2001). An introduction to statistical modeling of extreme values. London: Springer-Verlag.


Coles, S.G., & Dixon, M.J. (1999). Likelihood-based inference for extreme value models. Extremes, 2(1), 5-23.

Cooley, D., Nychka, D., & Naveau, P. (2006a). Bayesian spatial modeling of extreme precipitation return levels. Retrieved January 7, 2008 from the Colorado State University website: http://www.stat.colostate.edu/~cooleyd/Papers/frRev.pdf

Cooley, D., Naveau, P., & Jomelli, V. (2006b). A Bayesian hierarchical extreme value model for lichenometry. Environmetrics, 17(6), 555-574.

Costa, M., Cavaliere, G., & Iezzi, S. (2005). The role of the normal distribution in financial markets. Proceedings of the Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society (pp. 343-350). Bologna, Italy: University of Bologna.

Davison, A.C. (1984). Modelling excesses over high thresholds, with an application. In J. Tiago de Oliveira (Ed.), Statistical extremes and applications (pp. 461-482). Dordrecht, The Netherlands: Reidel.

Davison, A.C., & Smith, R.L. (1990). Models for exceedances over high thresholds. Journal of the Royal Statistical Society, Series B, 52(3), 393-442.

Day, T.E., Wang, Y., & Xu, Y. (2001). Investigating underperformance by mutual fund portfolios. Retrieved March 26, 2006 from the University of Texas at Dallas website: http://www.utdallas.edu/~yexiaoxu/Mfd.pdf

Diggle, P.J., & Ribeiro, P.J. (2007). Model-based geostatistics. New York: Springer Series in Statistics.

Dijk, V., & de Haan, L. (1992). On the estimation of the exceedance probability of a high level. In P.K. Sen & I.A. Salama (Eds.), Order statistics and nonparametrics: Theory and applications (pp. 79-92). Amsterdam: Elsevier.

Dufour, J.M., Khalaf, L., & Beaulieu, M.C. (2003). Exact skewness-kurtosis tests for multivariate normality and goodness-of-fit in multivariate regressions with application to asset pricing models. Oxford Bulletin of Economics and Statistics, 65(s1), 891-906.


Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. London: Chapman and Hall.

Fama, E.F. (1965a). Random walks in stock market prices. Financial Analysts Journal, 21(5), 55-59.

Fama, E.F. (1965b). The behavior of stock-market prices. Journal of Business, 38(1), 34-105.

Financial Dictionary. (2008). The definition of illiquidity. Retrieved February 2, 2008 from the Free Dictionary website: http://financial-dictionary.thefreedictionary.com/Illiquid

Fischer, M., Köck, C., Schlüter, S., & Weigert, F. (2009). An empirical analysis of multivariate copula models. Quantitative Finance, 9(7), 839-854.

Fisher, R.A., & Tippett, L.H.C. (1928). Limiting forms of the frequency distribution of the largest or smallest member of a sample. Proceedings of the Cambridge Philosophical Society, 24, 180-190.

Gilleland, E., & Katz, R.W. (2006). Analyzing seasonal to interannual extreme weather and climate variability with the extremes toolkit. Poster session presented at the 18th Conference on Climate Variability and Change, 86th Annual Meeting of the American Meteorological Society (AMS), Atlanta, GA.

Gilleland, E., & Nychka, D. (2005). Statistical models for monitoring and regulating ground-level ozone. Environmetrics, 16(5), 535-546.

Gnedenko, B. (1943). Sur la distribution limite du terme maximum d'une série aléatoire. The Annals of Mathematics, Series 2, 44(3), 423-453.

Gumbel, E.J. (1958). Statistics of extremes. New York: Columbia University Press.

Heffernan, J.E., & Tawn, J.A. (2004). A conditional approach for multivariate extreme values. Journal of the Royal Statistical Society, Series B, 66(3), 497-534.

Hosking, J.R.M., Wallis, J.R., & Wood, E.F. (1985). Estimation of the generalized extreme-value distribution by the method of probability-weighted moments. Technometrics, 27(3), 251-261.


Hosking, J.R.M., & Wallis, J.R. (1997). Regional frequency analysis: An approach based on L-moments. Cambridge: Cambridge University Press.

International Monetary Fund. (2007). Chapter 1: Global prospects and policy issues. Retrieved October 28, 2007 from the International Monetary Fund website: http://www.imf.org/external/pubs/ft/weo/2007/01/pdf/c1.pdf

Investopedia. (2007). Market capitalization. Retrieved December 8, 2007 from the Investopedia website: http://www.investopedia.com/terms/m/marketcapitalization.asp

Investopedia. (2009). VIX CBOE Volatility Index. Retrieved May 19, 2009 from the Investopedia website: http://www.investopedia.com/terms/v/vix.asp

Jenkins, G.M., & Watts, D.G. (1968). Spectral analysis and its applications. San Francisco: Holden-Day.

Jenkinson, A.F. (1955). The frequency distribution of the annual maximum or minimum values of meteorological events. Quarterly Journal of the Royal Meteorological Society, 81, 158-172.

Johnson, R.A., & Wichern, D.W. (2002). Applied multivariate statistical analysis. (5th ed.). Upper Saddle River, New Jersey: Prentice Hall.

Jondeau, E., & Rockinger, M. (2003). Testing for differences in the tails of stock-market returns. Journal of Empirical Finance, 10(5), 559-581.

Kwiatkowski, D., Phillips, P.C.B., Schmidt, P., & Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root. Journal of Econometrics, 54, 159-178.

Labovitz, M.L., Turowski, H., & Kenyon, J.D. (2007). Lipper Research Series: High-grading data: Retaining variation, reducing dimensionality. Retrieved November 4, 2007 from the Lipper website: http://www.lipperweb.com/research/fundIndustryOverview.asp

Lay, D.C. (2005). Linear algebra and its applications. (3rd ed.). Boston: Addison Wesley.


Leadbetter, M.R., Lindgren, G., & Rootzén, H. (1983). Extremes and related properties of random sequences and series. New York: Springer-Verlag.

Leadbetter, M.R., Weissman, I., de Haan, L., & Rootzén, H. (1989). On clustering of high values in stationary series. Paper presented at the International Meeting on Statistical Climatology, Rotorua, New Zealand.

Ledford, A.W., & Tawn, J.A. (1996). Statistics for near independence in multivariate extreme values. Biometrika, 83(1), 169-187.

Le Roux, M. (2007). A long-term model of the dynamics of the S&P500 implied volatility surface. North American Actuarial Journal, 11(4), 61-75.

Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. The Review of Economics and Statistics, 47(1), 13-37.

Madsen, H., Rasmussen, P.F., & Rosbjerg, D. (1997). Comparison of annual maximum series and partial duration series methods for modeling extreme hydrologic events: 1. At-site modeling. Water Resources Research, 33(4), 747-758.

Malevergne, Y., & Sornette, D. (2001). General framework for a portfolio theory with non-gaussian risks and non-linear correlations. Retrieved March 21, 2005 from the GloriaMundi.org website: http://www.gloriamundi.org/ShowTracking.asp?ResourceID=453055857

Malevergne, Y., & Sornette, D. (2002). Multi-moments method for portfolio management: Generalized capital asset pricing model in homogeneous and heterogeneous markets. Retrieved March 21, 2005 from the Cornell University Library website: http://arxiv.org/PS_cache/cond-mat/pdf/0207/0207475v1.pdf

Markowitz, H.M. (1952). Portfolio selection. Journal of Finance, 7(1), 77-91.

Markowitz, H.M. (1959). Portfolio selection: Efficient diversification of investments. New York: John Wiley and Sons.

Martins, E.S., & Stedinger, J.R. (2000). Generalized maximum-likelihood generalized extreme value quantile estimators for hydrologic data. Water Resources Research, 36(3), 737-744.


Martins, E.S., & Stedinger, J.R. (2001). Generalized maximum-likelihood Pareto-Poisson estimators for partial duration series. Water Resources Research, 37(10), 2551-2557.

McNeil, A., Frey, R., & Embrechts, P. (2005). Quantitative risk management: Concepts, techniques and tools. Princeton, NJ: Princeton University Press.

Nelder, J.A., & Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7, 308-313.

Nelsen, R.B. (1999). An introduction to copulas. New York: Springer-Verlag.

Neter, J., Kutner, M.H., Wasserman, W., & Nachtsheim, C.J. (1996). Applied linear statistical models. (4th ed.). Chicago: McGraw-Hill/Irwin.

NIST (National Institute of Standards and Technology). (2003). Kolmogorov-Smirnov two sample. Retrieved October 11, 2008 from the National Institute of Standards and Technology website: http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ks2samp.htm

Nocedal, J., & Wright, S.J. (1999). Numerical optimization. New York: Springer-Verlag.

Pickands, J. (1975). Statistical inference using extreme order statistics. Annals of Statistics, 3(1), 119-131.

Prescott, P., & Walden, A.T. (1980). Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. Biometrika, 67(3), 723-724.

Prescott, P., & Walden, A.T. (1983). Maximum likelihood estimation of the parameters of the three parameter generalized extreme-value distribution from censored samples. Journal of Statistical Computation and Simulation, 16(3), 241-250.

R Development Core Team. (2009). The R Stats package, R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.


Rockafellar, R.T., & Uryasev, S. (2002). Conditional value-at-risk for general loss distributions. Journal of Banking and Finance, 26(7), 1443-1471.

Rummel, R.J. (1970). Applied factor analysis. Evanston, Illinois: Northwestern University Press.

Sain, S. (2004). MATH 6026, Topics in Probability & Statistics: Spatial data analysis. Lecture series, University of Colorado at Denver, Denver, CO.

Schlather, M., & Tawn, J.A. (2002). Inequalities for the extremal coefficients of multivariate extreme value distributions. Extremes, 5(1), 87-102.

Schlather, M., & Tawn, J.A. (2003). A dependence measure for multivariate and spatial extreme values: Properties and inference. Biometrika, 90(1), 139-156.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461-464.

Shakespeare, W. (2004). The tempest. Washington, D.C.: Simon & Schuster. (Original work published 1623).

Sharpe, W.F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19(3), 425-442.

Sharpe, W.F. (1966). Mutual fund performance. Journal of Business, 39(1, Part 2: Supplement on Security Prices), 119-138.

Sharpe, W.F. (1974). Imputing expected returns from portfolio composition. Journal of Financial and Quantitative Analysis, 9(3), 463-472.

Smith, R.L. (1984). Threshold methods for sample extremes. In J. Tiago de Oliveira (Ed.), Statistical extremes and applications (pp. 621-638). Dordrecht, The Netherlands: Reidel.

Smith, R.L. (1985). Maximum likelihood estimation in a class of nonregular cases. Biometrika, 72(1), 67-90.

Smith, R.L. (1989). Extreme value analysis of environmental time series: An example based on ozone data. Statistical Science, 4(4), 367-393.


Smith, R.L., Tawn, J.A., & Yuen, H.-K. (1990). Statistics of multivariate extremes. International Statistical Review, 58(1), 47-58.

Smith, R.L. (2003). Statistics of extremes, with applications in environment, insurance and finance. Unpublished manuscript. Retrieved December 14, 2006 from the University of North Carolina website: http://www.stat.unc.edu/postscript/rs/semstatrls.ps

Smith, R.L., & Weissman, I. (1994). Estimating the extremal index. Journal of the Royal Statistical Society, Series B (Methodological), 56(3), 515-528.

Smith, R.L., Grady, A.M., & Hegerl, G.C. (2006). Trend in extreme precipitation levels over the contiguous United States. Unpublished manuscript.

Stephenson, A., & Tawn, J.A. (2004). Bayesian inference for extremes: Accounting for the three extremal types. Extremes, 7(4), 291-307.

Szego, G. (2002). Measures of risk. Journal of Banking and Finance, 26(7), 1253-1272.

Tawn, J.A. (1988). Bivariate extreme value theory: Models and estimation. Biometrika, 75(3), 397-415.

Tawn, J.A. (1990). Modelling multivariate extreme value distributions. Biometrika, 77(2), 245-253.

Tobler, W.R. (1970). A computer model simulation of urban growth in the Detroit region. Economic Geography, 46(2), 234-240.

Tokat, Y., Rachev, S.T., & Schwartz, E.S. (2003). The stable non-Gaussian asset allocation: A comparison with the classical Gaussian approach. Journal of Economic Dynamics and Control, 27(6), 937-969.

Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning: Data mining, inference and prediction. New York: Springer.

Tukey, J.W. (1977). Exploratory data analysis. New York: Addison-Wesley.

von Mises, R. (1954). La distribution de la plus grande de n valeurs. In Selected papers (Vol. II, pp. 271-294). Providence, RI: American Mathematical Society.


Walshaw, D. (1994). Getting the most from your extreme wind data: A step-by-step guide. Journal of Research of the National Institute of Standards and Technology, 99(4), 399-411.

Wuertz, D. (2009). Markowitz portfolio, R-port. Retrieved April 2009 from the R-Forge website: http://r-forge.r-project.org/plugins/scmsvn/viewcvs.php/*checkout*/pkg/fPortfolio/R/B2-MarkowitzPortfolio.R?rev=1&root=rmetrics


Sources Of Data

Source of Consumer Price Index Data

Consumer Price Index. 2008. U.S. Department of Labor, Bureau of Labor Statistics, Bureau of Labor Statistics Data. First retrieved January 2008 from http://data.bls.gov/PDQ/servlet/SurveyOutputServlet

Source of Economic Time Series

http://www.fxstreet.com/fundamental/economic-time-series. First retrieved January 2008.

Source of Equity Performance and Ancillary Data

FactSet. 2007. On Line Editor. FactSet Research Systems Inc. Norwalk, CT. Data retrieved from FactSet Data Content during fourth calendar quarter 2007.

Lipper. 2007. Lipper Analytical New Application (LANA). Lipper, A Thomson Reuters Co. Denver, CO. Data retrieved from LANA during fourth calendar quarter 2007.

Lipper. 2008. Lipper Security Master File. A Thomson Reuters Co. Denver, CO. Data retrieved from Security Master File during first calendar quarter 2008.

Reuters. 2007. Security Analysis and Validation Modeling (StockVal). A Thomson Reuters Co. Phoenix, AZ. Data retrieved from StockVal during fourth calendar quarter 2007.


Source of Interest Rate Data

Country: Source
Bank of Canada: http://www.bankofcanada.ca/en/rates/interest-look.html
Bank of England: http://www.bankofengland.co.uk/statistics/index.htm
Bank of Japan: http://www.boj.or.jp/en/type/stat/dlong/index.htm
European Central Bank: http://sdw.ecb.europa.eu/browse.do?node=bbn131&
Swiss National Bank: http://www.snb.ch/en/iabout/stat/id/statdata
The Reserve Bank of Australia: http://www.rba.gov.au/Statistics/interest_rates_yields.html
U.S. Federal Reserve: http://www.federalreserve.gov/RELEASES/

All series first retrieved December 2007.


Source of Exchange Index Data

All data sets first retrieved December 2007.

Index/Benchmark (Country): Source
Dow Jones Indus. Avg (USA): Lipper Analytical New Application (LANA), Lipper
S&P 500 Index (USA): Lipper Analytical New Application (LANA), Lipper
Nasdaq Composite Index (USA): http://www.bloomberg.com/apps/quote?ticker=CCMP:IND
S&P/TSX Composite Index (Canada): http://www.bloomberg.com/apps/quote?ticker=SPTSX:IND
Mexico Bolsa Index (Mexico): http://www.bloomberg.com/apps/quote?ticker=MEXBOL:IND
Brazil Bovespa Stock Index (Brazil): http://www.bloomberg.com/apps/quote?ticker=IBOV:IND
DJ Euro Stoxx 50 (EU): http://www.bloomberg.com/apps/quote?ticker=SX5E:IND
FTSE 100 Index (UK): http://www.bloomberg.com/apps/quote?ticker=UKX:IND
CAC 40 Index (France): http://www.bloomberg.com/apps/quote?ticker=CAC:IND
DAX Index (Germany): http://www.bloomberg.com/apps/quote?ticker=DAX:IND
IBEX 35 Index (Spain): http://www.bloomberg.com/apps/quote?ticker=IBEX:IND
S&P/MIB Index (Italy): http://www.bloomberg.com/apps/quote?ticker=SPMIB:IND
Amsterdam Exchanges Index (Netherlands): http://www.bloomberg.com/apps/quote?ticker=AEX:IND
OMX Stockholm 30 Index (Sweden): http://www.bloomberg.com/apps/quote?ticker=OMX:IND
Swiss Market Index (Switzerland): http://www.bloomberg.com/apps/quote?ticker=SMI:IND
Nikkei 225 (Japan): http://www.bloomberg.com/apps/quote?ticker=NKY:IND
Hang Seng Index (Hong Kong): http://www.bloomberg.com/apps/quote?ticker=HSI:IND
S&P/ASX 200 Index (Australia): http://www.bloomberg.com/apps/quote?ticker=AS51:IND


Source of Other Data Sets Called Out In Chapter 3, Section 3.3

All data sets first retrieved December 2007.

Data Set: Source
Chicago Board Options Exchange (CBOE): www.cboe.com
Lehman Brothers fixed income indices (various): Lipper Analytical New Application (LANA), Lipper
Dollar futures indices of the New York Board of Trade (NYBOT): www.nybot.com
U.S. mortgage rates (various maturities): http://mortgage-x.com/general/indexes/default.asp
Rates for Moody's AAA and BBB corporate paper: FactSet, FactSet Research Systems Inc
Russell equity indices (various): Lipper Analytical New Application (LANA), Lipper
Interest rate swap rates (various maturities): FactSet, FactSet Research Systems Inc