Citation

Material Information

Title:
Derivative-free optimization of noisy functions
Creator:
Larson, Jeffrey M.
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English

Thesis/Dissertation Information

Degree:
Doctorate (Doctor of Philosophy)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Mathematical and Statistical Sciences, CU Denver
Degree Disciplines:
Applied mathematics
Committee Chair:
Billups, Stephen
Committee Members:
Engau, Alexander
Simon, Burt
Jacobson, Michael
Glover, Fred

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
Copyright Jeffrey M. Larson. Permission granted to University of Colorado Denver to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.

Full Text
DERIVATIVE-FREE OPTIMIZATION OF NOISY FUNCTIONS
by
Jeffrey M. Larson B.A., Carroll College, 2005 M.S., University of Colorado Denver, 2008
A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy Applied Mathematics
2012

This thesis for the Doctor of Philosophy degree by Jeffrey M. Larson has been approved by
Stephen Billups, Advisor and Chair Alexander Engau Burt Simon Michael Jacobson Fred Glover
Date

Larson, Jeffrey M. (Ph.D., Applied Mathematics) Derivative-free Optimization of Noisy Functions Thesis directed by Associate Professor Stephen Billups
ABSTRACT
Derivative-free optimization (DFO) problems with noisy functions are increasingly prevalent. In this thesis, we propose two algorithms for noisy DFO, as well as termination criteria for general DFO algorithms. Both proposed algorithms are based on the methods of Conn, Scheinberg, and Vicente [9] which use regression models in a trust region framework. The first algorithm utilizes weighted regression to obtain more accurate model functions at each trust region iteration. A weighting scheme is proposed which simultaneously handles differing levels of uncertainty in function evaluations and errors induced by poor model fidelity. To prove convergence of this algorithm, we extend the theory of Λ-poisedness and strong Λ-poisedness to weighted regression. The second algorithm modifies the first for functions with stochastic noise. We prove our algorithm generates a subsequence of iterates which converge almost surely to a first-order stationary point, provided the number of successful steps is finite and the noise for each function evaluation is independent and identically normally distributed. Lastly, we address termination of DFO algorithms on functions with noise corrupted evaluations. If the function is computationally expensive, stopping well before traditional criteria (e.g., after a budget of function evaluations is exhausted) are satisfied can yield significant savings. Early termination is especially desirable when the function being optimized is noisy, and the solver proceeds for an extended period while only seeing changes which are insignificant relative to the noise in the function. We develop techniques for comparing the quality of termination tests, propose families of tests to be used on general DFO algorithms, and then compare

the tests in terms of both accuracy and efficiency.
The form and content of this abstract are approved. I recommend its publication.
Approved: Stephen Billups

ACKNOWLEDGMENT
I would like to thank my advisor, Steve Billups, for his years of research assistance and advice. His guidance was instrumental in obtaining the results in this thesis. I would also like to thank Peter Graf and Stefan Wild for their assistance in researching and writing parts of this thesis. The research in this thesis was partially supported by National Science Foundation Grant GK-12-0742434 and partially supported by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357. Lastly, I'd like to thank my wife, Jessica, for years of patience while the research for this thesis was performed.

Figures ................................................................... ix
Tables..................................................................... xi
Chapter
1. Introduction............................................................. 1
1.1 Review of Methods ................................................. 3
1.2 Outline............................................................ 5
1.3 Notation........................................................... 8
2. Background.............................................................. 10
2.1 Model-based Trust Region Methods.................................. 10
2.1.1 Model Construction Without Derivatives...................... 11
2.1.2 CSV2-framework.............................................. 11
2.1.3 Poisedness.................................................. 18
2.2 Performance Profiles.............................................. 23
2.3 Probabilistic Convergence......................................... 24
3. Derivative-free Optimization of Expensive Functions with Computational
Error Using Weighted Regression......................................... 26
3.1 Introduction...................................................... 26
3.2 Model Construction................................................ 27
3.3 Error Analysis and the Geometry of the Sample Set................. 29
3.3.1 Weighted Regression Lagrange Polynomials.................... 29
3.3.2 Error Analysis ............................................. 30
3.3.3 Λ-poisedness (in the Weighted Regression Sense)............. 35
3.4 Model Improvement Algorithm....................................... 39
3.5 Computational Results............................................. 50
3.5.1 Using Error Bounds to Choose Weights .................... 50
3.5.2 Benchmark Performance....................................... 52

3.6 Summary and Conclusions........................................... 57
4. Stochastic Derivative-free Optimization using a Trust Region Framework . 59
4.1 Preliminary Results and Definitions............................. 62
4.1.1 Models which are κ-fully Quadratic with Confidence 1 − α . . 65
4.1.2 Models which are κ-fully Linear with Confidence 1 − α . . . . 69
4.2 Stochastic Optimization Algorithm................................. 70
4.3 Convergence....................................................... 73
4.3.1 Convergence to a First-order Stationary Point............... 73
4.3.2 Infinitely Many Successful Steps ........................... 78
4.4 Computational Example ............................................ 80
4.4.1 Deterministic Trust Region Method........................... 80
4.4.2 Stochastic Trust Region Method.............................. 81
4.5 Conclusion........................................................ 83
5. Non-intrusive Termination of Noisy Optimization.......................... 84
5.1 Introduction and Motivation....................................... 84
5.2 Background ....................................................... 87
5.3 Stopping Tests.................................................... 90
5.3.1 f̂ Test..................................................... 92
5.3.2 Max-Difference-/ Test....................................... 92
5.3.3 Max-Distance-x Test......................................... 93
5.3.4 Max-Distance-x* Test........................................ 93
5.3.5 Max-Budget Test............................................. 94
5.3.6 Tests Based on Estimates of the Noise....................... 94
5.3.7 Relationship to Loss Functions.............................. 96
5.4 Numerical Experiments............................................. 98
5.4.1 Accuracy Profiles for the φ₁ Family........................ 100
5.4.2 Performance Profiles for the φ₁ Family..................... 102

5.4.3 Accuracy and Performance Plots for the φ₂ Family............ 104
5.4.4 Across-family Comparisons................................... 106
5.4.5 Deterministic Noise ........................................ 107
5.4.6 Validation for Individual Solvers .......................... 109
5.5 Discussion......................................................... 110
6. Concluding Remarks....................................................... 113
References.................................................................. 119

FIGURES
Figure
2.1 An example of a performance profile..................................... 24
3.1 Performance (left) and data (right) profiles: Interpolation vs. regression
vs. weighted regression................................................... 55
3.2 Performance (left) and data (right) profiles: weighted regression vs.
NEWUOA vs. DFO (Problems with Stochastic Noise).................... 56
4.1 Several iterations of a traditional trust region method assuming deterministic function evaluations. The trust region center is never moved........ 81
4.2 Several iterations of a traditional trust region method assuming stochastic
function evaluations...................................................... 82
5.1 Part of a noisy trajectory of function values for an expensive nuclear
physics problem. After more significant decreases in the first 70 evaluations, progress begins to stall...................................... 85
5.2 First terms in φ₁ (top, with κ = 100) and the noise.................... 96
5.3 Number of evaluations i* for a termination test based on (5.3) with fixed ω and κ, but using a μ parameterized by c. The plots show remarkably similar behavior to the number of evaluations that minimize L(·, c) in (5.8). 98
5.4 Accuracy profiles for members of the φ₁ family on problems (5.9) with
two different magnitudes of (known) stochastic relative noise σ. In the
top plots, κ is held fixed and the shown members have different μ values.
In the bottom plots, μ is held fixed and the shown members have different
κ values................................................................ 101

5.5 Performance profiles for the most accurate φ₁ tests on problems (5.9) with
two different magnitudes of (known) stochastic relative noise σ. Note that the α-axis has been truncated for each plot; φ₅ eventually terminates all of the problems and thus has a profile that will reach the value 1; all other tests change by less than .01....................................... 103
5.6 Accuracy (top) and performance (bottom) profiles for the φ₂ family on problems (5.9) with two different magnitudes of stochastic relative noise
σ as κ and μ are varied............................................ 105
5.7 Accuracy (top) and performance (bottom) profiles for the best tests on problems (5.9) with two different magnitudes of stochastic relative noise σ.
The horizontal axes on the performance profiles are truncated for clarity;
φ₅ eventually achieves a value of 1; all other tests change by less than .03. 106
5.8 Accuracy (top) and performance (bottom) profiles for the best tests on
problems (5.9) with two different magnitudes of deterministic noise. The horizontal axes on the performance profiles are truncated for clarity; φ₅ eventually achieves a value of 1; all other tests change by less than .03. . 108
5.9 Performance profiles for more conservative tests on problems (5.9) with two different magnitudes of deterministic noise. The horizontal axes on the performance profiles are truncated for clarity; φ₅ eventually achieves
a value of 1; all other tests change by less than .03............... 109
5.10 Accuracy profiles for the individual algorithms on the recommended tests. 110

TABLES
Table
5.1 Recommendations for termination tests for noisy optimization......... 111

1. Introduction
Traditional unconstrained optimization is inherently tied to derivatives; the necessary conditions for a first-order minimum are characterized by the derivative being equal to zero. Nevertheless, there exists a variety of functions which must be minimized when derivatives are unavailable. For example, consider an engineer in a lab who wants to maximize the strength of a metal bar by adjusting various factors of production. After the bar is constructed, it is broken to determine its strength. There is no closed-form expression for the bar's strength; each function evaluation comes from an expensive procedure. Also, the process of breaking the bar provides no information about how to change the factors of production to increase the bar's strength. In addition to the optimization of systems which must be physically evaluated, function evaluations by complex computer simulations often provide no (or unreliable) derivatives. Such simulations of complex phenomena (sometimes called black-box functions) are becoming increasingly common as computational modeling and computer hardware continue to advance. Whereas traditional techniques are concerned with efficiency of the algorithm, such concerns are secondary throughout this thesis. Explicitly, we assume that the cost of evaluating the function overwhelms the computational requirements of the algorithm.
In addition to unavailable derivatives, noise of various forms is often present in these functions. Throughout this thesis we will categorize this noise - or difference between the true value and computed value - into two categories: deterministic and stochastic. Deterministic noise (e.g., arising from finite-precision calculations or iterative methods) is often present if the function being optimized is a simulation of a physical system. For example, if evaluating the function involves solving a system of nonlinear partial differential equations or computing the eigenvalues of a large matrix, a small perturbation in the parameters can yield a large jump in the difference between the true and computed values. Though the computed value and true

value may differ, re-evaluating the function with the same choice of parameters will provide no further information. In contrast, re-evaluating a function with stochastic noise will provide additional information about the true value of the function. Two common sources of stochastic noise are found in functions whose evaluation requires a large-scale Monte-Carlo simulation of an actual system or if a function evaluation requires physically measuring a property in some system.
The thesis consists of three distinct but connected chapters addressing the problem:

min_x f(x),    f : ℝ^n → ℝ,

when the algorithm only has access to the noise-corrupted values

f̃(x) := f(x) + ε(x),

where ε(x) denotes the noise.
Each chapter makes different assumptions about the noise ε(x). Chapter 3 assumes that the accuracy with which f̃ approximates f can vary, and that this accuracy can be quantified. For example, if f̃(x) is calculated using a Monte-Carlo simulation, increasing the number of trials will decrease the magnitude of ε(x). Similarly, if f̃ is calculated by a finite element method, increased accuracy can be obtained by a finer grid. Of course, this greater accuracy comes at the cost of increased computational time, so it makes sense to vary the accuracy requirements over the course of the optimization. With this in mind, Chapter 3 asks: How can knowledge of the accuracy of different function evaluations be exploited to design a better algorithm?
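To make this trade-off concrete, the following sketch (a pure illustration; the integrand and noise model are assumptions, not taken from the thesis) estimates a function value by Monte-Carlo sampling, where the magnitude of ε(x) shrinks like 1/√N as the number of trials N grows:

```python
import random

def f_true(x):
    # Smooth underlying function f(x); here, the expected value of each sample.
    return x * x

def f_noisy(x, n_trials, rng):
    # Monte-Carlo estimate of f(x): the average of n_trials noisy samples.
    # The standard error of the estimate decays like 1 / sqrt(n_trials),
    # so greater accuracy costs proportionally more computation.
    samples = (x * x + rng.gauss(0.0, 1.0) for _ in range(n_trials))
    return sum(samples) / n_trials

estimate = f_noisy(2.0, 10_000, random.Random(1))
```

Quadrupling the number of trials only halves the expected error, which is why adapting the accuracy requirement to the algorithm's current needs, rather than fixing it in advance, can pay off.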
Chapter 4 assumes that the noise for each function evaluation is independent of x and can be modeled as a normally distributed random variable with mean zero and a fixed, finite standard deviation. Though many other algorithms have been designed to optimize such a function, they often resort to repeated sampling of points. This provides information about the noise at a point, but no information about the

function nearby. This motivates the question addressed in Chapter 4: How can greater accuracy be efficiently achieved by oversampling without necessarily repeating function evaluations?
Chapter 5 assumes that a reasonably accurate estimate of the magnitude of the noise can be obtained, and that this estimate remains relatively constant throughout the domain. Though there are many algorithms in the literature designed to optimize noisy functions, very few use estimates of the noise in their termination criteria. When function evaluations are cheap, termination can be determined by common tests (e.g., small step size or gradient approximation). But when function evaluations are expensive, determining when to stop becomes an important multi-objective optimization problem. The optimizer wants to find the best solution possible while minimizing computational effort. As this is a difficult problem to explicitly formulate, practitioners frequently terminate algorithms when (i) a predefined number of iterations has elapsed, (ii) no decrease in the optimal function value has been detected for a number of iterations, or (iii) the distance between a number of successive iterates is below some threshold. Chapter 5 attempts to answer the question: When should an algorithm optimizing an expensive, noisy function be terminated?
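One minimal way to make criterion (ii) noise-aware can be sketched as follows (the helper name, window length, and noise level are hypothetical choices for illustration): terminate once the best value seen has not improved by more than the noise level over a trailing window of evaluations.

```python
def should_stop(history, window=10, noise_level=1e-3):
    # Stop when the best function value has not improved by more than the
    # noise level over the last `window` evaluations; smaller changes are
    # indistinguishable from the noise, so they do not count as progress.
    if len(history) <= window:
        return False
    best_before_window = min(history[:-window])
    best_overall = min(history)
    return best_before_window - best_overall <= noise_level

# Early progress followed by changes far below the noise level.
stalled = [1.0, 0.5, 0.2, 0.1] + [0.1 - 1e-5 * i for i in range(20)]
```

Chapter 5 develops principled families of such tests and objective ways to compare them.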
1.1 Review of Methods
Before discussing our algorithms further, we first discuss previous DFO techniques.
Heuristics are perhaps the first recourse when attempting to optimize a function without derivatives. Simulated annealing [36, 63], genetic algorithms [28], random search and its variants [55, 66, 39, 52], tabu search [20], scatter search [21], particle swarm optimization [34], and Nelder-Mead [45] are just a few of the heuristics developed to solve problems where only function evaluations are available. Though most of these algorithms lack formal convergence results (aside from results dependent on the algorithm producing iterates which are dense in the domain), they remain popular

due to their (1) ease of implementation, (2) flexibility on a variety of problem classes, and (3) frequent success in practice.
Other techniques attempt to approximate the unavailable derivative. Classical finite-difference methods approximate the derivative by adjusting each variable and noting the change in the function value. Other techniques, such as the pattern search methods [62, 2] and implicit filtering [5], evaluate a changing pattern of points around the best known solution. Also of note is the DIRECT algorithm [30], a global optimization method based on dividing hyper-rectangles using only function values.
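The classical forward-difference scheme just mentioned can be sketched as follows (a generic illustration, not code from the thesis). Note that if each evaluation carries noise of magnitude ε, the derivative estimate carries an error of roughly 2ε/h, which is why such schemes degrade on noisy functions:

```python
def fd_gradient(f, x, h=1e-6):
    # Forward-difference gradient: adjust each variable by h and note
    # the change in the function value, as described in the text.
    fx = f(x)
    grad = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] += h
        grad.append((f(x_pert) - fx) / h)
    return grad

# On a smooth, noise-free function the estimate is accurate to O(h).
g = fd_gradient(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 2.0])
```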
An increasingly popular class of algorithms for derivative-free optimization (DFO) is model-based trust region methods [31, 11]. Local models approximating the function are constructed and minimized to generate successive iterates. These models are commonly low-order polynomials interpolating function values close to the best known value, for example Powell's UOBYQA algorithm [48]. Other examples include [49], where Powell introduces a minimum Frobenius norm condition on underdetermined quadratic models, and ORBIT by Wild et al. [64], which constructs models using interpolating radial basis functions. (These local models should not be confused with kriging [59] or response surface methodologies [44] which build global models of the function.) Though implementation of these techniques is not as simple as some other techniques, a well-developed convergence theory exists. As this thesis focuses on noisy DFO problems, we considered trust-region methods with regression models most appropriate (since, in many cases, regression models through enough points can approximate the true function).
There are also a variety of existing DFO algorithms for optimizing functions with noise. For functions with stochastic noise, replications of function evaluations can be a simple way to modify existing algorithms. For example, [14] modifies Powell's UOBYQA [48], [15] modifies DIRECT [30], and [61] modifies Nelder-Mead by repeated sampling of the function at points of interest. For deterministic noise, Kelley

[33] proposes a technique to detect and restart Nelder-Mead methods. Neumaier's SNOBFIT [29] algorithm accounts for noise by not requiring the surrogate functions to interpolate function values, but rather to fit a stochastic model. Similarly, [10] proposes using least-squares regression models instead of interpolating models when noise is present in the function evaluations.
Lastly, Stochastic Approximation algorithms are also designed to minimize functions with stochastic noise. These algorithms are developed by statisticians to solve
min_x f(x) = E[f̃(x)],

when only f̃ can be computed. Two of the more famous algorithms, the Kiefer-Wolfowitz and Simultaneous Perturbation methods, take predefined step lengths in a direction approximating −∇f. These techniques have strong theoretical convergence results, but can be difficult to implement in practice. For further discussion of these algorithms, see the beginning of Chapter 4.
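The Kiefer-Wolfowitz iteration can be sketched on a toy noisy quadratic as follows (the gain sequences a_k = a/k and c_k = c/k^(1/3), and the test function, are illustrative choices, not taken from the thesis): the user fixes the step-length sequences in advance, and each iterate moves along a central-difference estimate of −∇f built from noisy evaluations only.

```python
import random

def kiefer_wolfowitz(f_noisy, x0, iters=1500, a=0.5, c=0.5, rng=None):
    # Kiefer-Wolfowitz: predefined gains a_k and difference spacings c_k;
    # the step direction is a central finite-difference estimate of -grad f.
    rng = rng or random.Random(0)
    x = list(x0)
    for k in range(1, iters + 1):
        a_k = a / k
        c_k = c / k ** (1.0 / 3.0)
        grad = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += c_k
            xm[i] -= c_k
            grad.append((f_noisy(xp, rng) - f_noisy(xm, rng)) / (2.0 * c_k))
        x = [x_i - a_k * g_i for x_i, g_i in zip(x, grad)]
    return x

def noisy_quadratic(x, rng):
    # f(x) = (x1 - 1)^2 + (x2 + 2)^2 plus Gaussian evaluation noise.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + rng.gauss(0.0, 0.01)

sol = kiefer_wolfowitz(noisy_quadratic, [0.0, 0.0])
```

The fixed gain sequences are the practical weakness alluded to above: tuning a and c is problem dependent, whereas the trust region methods of Chapter 4 adapt the step size automatically as the algorithm progresses and stagnates.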
1.2 Outline
The work in this thesis focuses on modifications to model-based trust region methods in order to handle noise. Throughout the thesis we assume that only noisy, expensive function evaluations f̃ are available, but there is some smooth underlying function f which is twice continuously differentiable with a Lipschitz continuous Hessian. We also assume that the noise in the evaluation of f̃ is unbiased with bounded variance.
Chapter 3 (joint work with Stephen Billups and Peter Graf) proposes a DFO algorithm to optimize functions which are expensive to evaluate and contain computational noise. The algorithm is based on the trust region methods of [9, 10] which build interpolation or regression models around the best known solution. We propose using weighted regression models in a trust region framework, and prove convergence of such methods provided the weighting scheme satisfies some basic conditions.

The algorithm fits into a general framework for derivative-free trust region methods using quadratic models, which was described by Conn, Scheinberg, and Vicente in [12, 11]. We shall refer to this framework as the CSV2-framework. This framework constructs a quadratic model function m_k(·), which approximates the objective function f on a set of sample points Y_k ⊂ ℝ^n at each iteration k. The next iterate is then determined by minimizing m_k over a trust region. In order to guarantee global convergence, the CSV2-framework monitors the distribution of points in the sample set, and occasionally invokes a model-improvement algorithm that modifies the sample set to ensure m_k accurately approximates f. The CSV2-framework is the basis of the well-known DFO algorithm which is freely available on the COIN-OR website [38].
To fit our algorithm into the CSV2-framework we extend the theory of poisedness, as described in [12], to weighted regression. We show (Proposition 3.12) that a sample set that is strongly Λ-poised in the regression sense is also strongly cΛ-poised in the weighted regression sense for some constant c, provided that no weight is too small relative to the other weights. Thus, any model improvement scheme that ensures strong Λ-poisedness in the regression sense can be used in the weighted regression framework.
The convergence proof in Chapter 3 requires that the computational error decreases as the trust region decreases; such an assumption can be satisfied if the user has some control of the accuracy in the function evaluation. Since Chapter 3 is centered on exploiting differing levels of accuracy in different function evaluations, such an assumption is reasonable for that chapter. In Chapter 4, we remove this assumption, but add the assumption that f̃ has the form

f̃(x) = f(x) + ε    (1.1)

where ε ~ N(0, σ²) for some fixed σ. Chapter 4 modifies the algorithm from Chapter 3 to converge when the error does not decrease

with the trust region radius. With some light assumptions on the noise and underlying function, we prove the algorithm generates a subsequence of iterates which converge almost surely to a first-order stationary point in the case where the number of successful iterates is finite.
At a given point of interest x^0, the algorithm does not repeatedly sample f̃(x^0) in order to glean information about f(x^0). Rather, m_k(x^0), the value of the trust region model at x^0, is used to estimate f(x^0). We derive bounds on the error between f and m, provided the set of points used to construct m satisfies certain geometric conditions, called strong Λ-poisedness (see Definition 2.14), and contains a sufficient number of points. Also, the step size is controlled by the algorithm, increasing and decreasing as the algorithm progresses and stagnates. This contrasts with many of the methods in the Stochastic Approximation literature, where the user must provide a predefined set of steps to be taken by the algorithm.
The results in Section 4.3 prove the algorithm will generate a subsequence of iterates converging almost surely to a first-order stationary point when the number of successful iterates is finite, and makes progress in the infinite case. Such results are not common for most DFO algorithms on problems with stochastic noise. Both the simplicial direct search method [1] and the trust region method in [4] prove similar convergence results, but both reduce the variance at a point by repeated sampling. In addition to our strong convergence result, we are able to directly quantify the probability of the success of some iterates (see Lemma 4.15 for one such example). We are unaware of any other similar theoretical results for algorithms minimizing stochastic functions.
Chapter 5 (joint work with Stefan Wild) addresses termination criteria, the choice of which is a common problem when optimizing noisy functions. We propose objective measures to compare the quality of termination rules. Families of termination tests are then proposed and their performance is analyzed across a broad range of DFO

algorithms. Recommendations for tests which work for many algorithms are also provided. Lastly, Chapter 6 contains concluding remarks and directions for future research.
1.3 Notation
The following notation will be used: ℝ^n denotes the Euclidean space of real n-vectors. ‖·‖_p denotes the standard ℓ_p vector norm, and ‖·‖ (without the subscript) denotes the Euclidean norm. ‖·‖_F denotes the Frobenius norm of a matrix. C^k denotes the set of functions on ℝ^n with k continuous derivatives. D^j f denotes the jth derivative of a function f ∈ C^k, j ≤ k. Given an open set Ω ⊆ ℝ^n, LC^k(Ω) denotes the set of C^k functions with Lipschitz continuous kth derivatives. That is, for f ∈ LC^k(Ω), there exists a Lipschitz constant L such that

‖D^k f(y) − D^k f(x)‖ ≤ L ‖y − x‖ for all x, y ∈ Ω.
P^d_n denotes the space of polynomials of degree less than or equal to d in ℝ^n; q_1 denotes the dimension of P^2_n (specifically q_1 = (n + 1)(n + 2)/2). We use standard "big-Oh" notation (written O(·)) to state, for example, that for two functions on the same domain, f(x) = O(g(x)) if there exists a constant M such that |f(x)| ≤ M|g(x)| for all x with sufficiently small norm. Given a set Y, |Y| denotes the cardinality and conv(Y) denotes the convex hull. For a real number a, ⌊a⌋ denotes the greatest integer less than or equal to a. For a matrix A, A⁺ denotes the Moore-Penrose generalized inverse [22]. e^j denotes the jth column of the identity matrix. The ball of radius Δ centered at x ∈ ℝ^n is denoted B(x; Δ). Given a vector w, diag(w) denotes the diagonal matrix W with diagonal entries W_ii = w_i. For a square matrix A, cond(A) denotes the condition number, λ_min(A) denotes the smallest eigenvalue, and σ_min(A) denotes the smallest singular value. For a set Y := {y^0, ..., y^p} ⊂ ℝ^n, the quadratic design matrix X has rows

(1, y^i_1, ..., y^i_n, ½(y^i_1)², y^i_1 y^i_2, ..., ½(y^i_n)²),  i = 0, ..., p.    (1.2)
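Forming the quadratic design matrix X row by row can be sketched as follows (a pure illustration; the ordering of the mixed terms and the ½ scaling on the squares follow the common convention for the quadratic monomial basis and are assumptions here):

```python
def quadratic_design_matrix(Y):
    # One row per sample point y^i:
    #   (1, y_1, ..., y_n, (y_1)^2/2, y_1*y_2, ..., (y_n)^2/2),
    # so each row has (n + 1)(n + 2)/2 entries, matching q_1.
    rows = []
    for y in Y:
        n = len(y)
        row = [1.0] + [float(v) for v in y]
        for i in range(n):
            for j in range(i, n):
                row.append(0.5 * y[i] * y[j] if i == j else y[i] * y[j])
        rows.append(row)
    return rows

X = quadratic_design_matrix([[1.0, 2.0], [3.0, 0.0]])
```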

Let m_k denote the kth trust region model (as defined in Chapter 2). Let g_k = ∇m_k(x_k) and H_k = ∇²m_k(x_k). Define

s_k(x) = max{‖∇m_k(x)‖, −λ_min(∇²m_k(x))}
s(x) = max{‖∇f(x)‖, −λ_min(∇²f(x))}.

These variables measure how close x is to a first- and second-order stationary point of m_k and f, respectively (i.e., the gradient is zero and all eigenvalues of the Hessian are nonnegative). If X is a random variable, the notation X ≤_α γ denotes P(X ≤ γ) ≥ 1 − α. Note that the relation ≤_α is not transitive.

2. Background
Before continuing, we introduce the background material on which the thesis is constructed.
2.1 Model-based Trust Region Methods
A trust region algorithm is a numerical technique for minimizing a sufficiently smooth function f. At each iteration k, a model function m_k(x) is constructed to approximate f near the best point x_k. When derivatives are available, m_k is usually a truncated first- or second-order Taylor series approximation of f at x_k. For example, if ∇f and ∇²f are easy to calculate,

m_k(x) = f(x_k) + ∇f(x_k)ᵀ(x − x_k) + ½(x − x_k)ᵀ∇²f(x_k)(x − x_k).
m_k is minimized over the trust region B(x_k; Δ_k) by solving the problem

min_{s : ‖s‖ ≤ Δ_k} m_k(x_k + s)

to generate a potential next trust region center x_k + s_k. f(x_k + s_k) is evaluated and the ratio
ρ_k = (f(x_k) − f(x_k + s_k)) / (m_k(x_k) − m_k(x_k + s_k))
is calculated, which compares the actual decrease in f against the decrease predicted by the model m_k. This ratio quantifies the success of iteration k and also how well the model function approximates the true function f. If ρ_k is larger than a prescribed threshold η₁, it indicates that the iteration was successful and the model is a good approximation of the function. In this case, x_{k+1} is set to x_k + s_k and the trust region radius Δ_k is increased. If ρ_k is less than another parameter η₀, the model function is considered unreliable, so the trust region radius Δ_k is decreased and the iterate is not updated (i.e., x_{k+1} = x_k). Lastly, k is incremented and the process repeats.
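The iteration described above can be sketched in one dimension, where the trust region subproblem has a closed-form solution (the thresholds η₀ = 0.25, η₁ = 0.75 and the radius-update factors are illustrative choices, not prescribed by the framework):

```python
def trust_region_1d(f, df, d2f, x, delta=1.0, eta0=0.25, eta1=0.75,
                    tol=1e-8, max_iter=200):
    # Minimize f by the basic trust region iteration using the model
    # m_k(x + s) = f(x) + f'(x) s + (1/2) f''(x) s^2 over |s| <= delta.
    for _ in range(max_iter):
        g, h = df(x), d2f(x)
        if abs(g) < tol:
            break
        if h > 0 and abs(g / h) <= delta:
            s = -g / h                       # unconstrained model minimizer
        else:
            s = -delta if g > 0 else delta   # step to the trust region boundary
        predicted = -(g * s + 0.5 * h * s * s)
        rho = (f(x) - f(x + s)) / predicted  # actual vs. predicted decrease
        if rho >= eta1:
            x, delta = x + s, 2.0 * delta    # successful step: expand region
        elif rho >= eta0:
            x = x + s                        # accept the step, keep the radius
        else:
            delta = 0.5 * delta              # unreliable model: shrink radius
    return x

x_star = trust_region_1d(lambda x: (x - 3.0) ** 2,
                         lambda x: 2.0 * (x - 3.0),
                         lambda x: 2.0, 0.0)
```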
2.1.1 Model Construction Without Derivatives

When derivatives are unavailable, the models m_k are constructed using points where f has been evaluated. For example, the Conn, Scheinberg, and Vicente framework (which we refer to as the CSV2-framework) builds models m_k from a specified class of models M using a sample set of points Y_k = {y^0, ..., y^p} ⊂ B(x_k; Δ_k) on which the function has been evaluated.
Given Y_k and a vector of corresponding function values f = (f(y^0), ..., f(y^p)), an interpolating model is a model m(x) such that m(y^i) = f(y^i) for i = 0, ..., p. Given a basis φ = {φ₀(x), ..., φ_q(x)} of the class of models M, we can calculate the coefficients β_i in the basis representation of the interpolating model m(x) = Σ_{i=0}^{q} β_i φ_i(x) by solving the linear system

M(φ, Y)β = f    (2.1)
where

M(φ, Y) =
[ φ₀(y^0)  φ₁(y^0)  ⋯  φ_q(y^0) ]
[ φ₀(y^1)  φ₁(y^1)  ⋯  φ_q(y^1) ]
[    ⋮        ⋮            ⋮     ]
[ φ₀(y^p)  φ₁(y^p)  ⋯  φ_q(y^p) ].
Note that for this equation to have a unique solution, the number of sample points p + 1 must equal the size of the basis q + 1, and the matrix M(φ, Y) must be invertible.
Regression models have also been investigated [10], in which the number of sample points p + 1 is greater than the size of the basis. In this case, the matrix M(φ, Y) has more rows than columns, so the equation (2.1) is solved in a least-squares sense.
Lastly, if M(φ, Y) has more columns than rows, the system (2.1) is underdetermined. Nevertheless, bounds between the function and an underdetermined model can be obtained in certain cases; for example, see [13], which considers the minimum Frobenius norm method.
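Solving (2.1) can be sketched for the one-dimensional quadratic basis φ = {1, x, x²} (NumPy and this tiny basis are illustration choices): when the number of points equals the basis size the solve interpolates, and with extra rows the same call returns the least-squares regression model.

```python
import numpy as np

def fit_quadratic_1d(Y, fvals):
    # Build M(phi, Y) with rows (phi_0(y), phi_1(y), phi_2(y)) = (1, y, y^2)
    # and solve M beta = f. Least squares recovers the interpolating model
    # when the matrix is square and the regression model when it is tall.
    Y = np.asarray(Y, dtype=float)
    M = np.column_stack([np.ones_like(Y), Y, Y ** 2])
    beta, *_ = np.linalg.lstsq(M, np.asarray(fvals, dtype=float), rcond=None)
    return beta

# Interpolation: three samples of f(x) = (x - 1)^2 determine it exactly.
beta_interp = fit_quadratic_1d([0.0, 1.0, 2.0], [1.0, 0.0, 1.0])

# Regression: five samples, so M has more rows than columns.
pts = [0.0, 0.5, 1.0, 1.5, 2.0]
beta_reg = fit_quadratic_1d(pts, [(y - 1.0) ** 2 for y in pts])
```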
2.1.2 CSV2-framework

We next outline the CSV2-framework for derivative-free trust region methods described by Conn, Scheinberg, and Vicente [12, Algorithm 10.3]. Algorithms in the framework construct a model function $m_k(\cdot)$ at iteration $k$, which approximates the objective function $f$ on a set of sample points $Y_k = \{y^0, \ldots, y^{p_k}\} \subset \mathbb{R}^n$. The next iterate is then determined by minimizing $m_k$. Specifically, given the iterate $x^k$, a putative next iterate is given by $x^k + s^k$, where the step $s^k$ solves the trust region subproblem

$$\min_{s : \|s\| \le \Delta_k} m_k(x^k + s),$$

where the scalar $\Delta_k > 0$ denotes the trust region radius, which may vary from iteration to iteration. If $x^k + s^k$ produces sufficient descent in the model function, then $f(x^k + s^k)$ is evaluated, and the iterate is accepted if $f(x^k + s^k) < f(x^k)$; otherwise, the step is not accepted. In either case, the trust region radius may be adjusted, and a model-improvement algorithm may be called to obtain a more accurate model.
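In practice, "sufficient descent" is typically certified by a step achieving at least a fraction of the decrease of the Cauchy point: the minimizer of the model along the steepest descent direction within the trust region. The sketch below (an illustration, not the thesis implementation; the model data are arbitrary) computes such a step for a quadratic model:

```python
import numpy as np

def cauchy_step(g, H, delta):
    """Minimize the quadratic model m(s) = g^T s + 0.5 s^T H s along -g,
    subject to ||s|| <= delta (the classical Cauchy point)."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    d = -g / gnorm                      # unit steepest descent direction
    curv = d @ H @ d                    # model curvature along d
    if curv <= 0:                       # model decreases along d: go to boundary
        t = delta
    else:
        t = min(gnorm / curv, delta)    # 1-D minimizer, clipped to the ball
    return t * d

# Arbitrary model data for illustration.
g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, 4.0]])
s = cauchy_step(g, H, delta=0.5)
decrease = -(g @ s + 0.5 * s @ H @ s)   # model decrease m(0) - m(s)
```

The resulting decrease satisfies the standard bound $m(0) - m(s) \ge \tfrac{1}{2}\|g\|\min\{\Delta, \|g\|/\|H\|\}$, which is the kind of condition referenced in Step 2 of the algorithm below.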
To establish convergence properties, the following smoothness assumption is made on $f$:

Assumption 2.1 Suppose that a set of points $S \subset \mathbb{R}^n$ and a radius $\Delta_{\max}$ are given. Let $\Omega$ be an open domain containing the $\Delta_{\max}$ neighborhood

$$\bigcup_{x \in S} B(x; \Delta_{\max})$$

of the set $S$. Assume $f \in LC^2(\Omega)$ with Lipschitz constant $L$.
The CSV2-framework does not specify how the model functions are constructed. However, it does require that the model functions be selected from a fully quadratic class of model functions $\mathcal{M}$:

Definition 2.2 Let $f$ satisfy Assumption 2.1. Let $\kappa = (\kappa_{ef}, \kappa_{eg}, \kappa_{eh}, \nu_2^m)$ be a given vector of constants, and let $\Delta > 0$. A model function $m \in C^2$ is $\kappa$-fully quadratic on $B(x; \Delta)$ if $m$ has a Lipschitz continuous Hessian with corresponding Lipschitz constant bounded by $\nu_2^m$ and

• the error between the Hessian of the model and the Hessian of the function satisfies
$$\|\nabla^2 f(y) - \nabla^2 m(y)\| \le \kappa_{eh}\Delta \quad \text{for all } y \in B(x; \Delta),$$

• the error between the gradient of the model and the gradient of the function satisfies
$$\|\nabla f(y) - \nabla m(y)\| \le \kappa_{eg}\Delta^2 \quad \text{for all } y \in B(x; \Delta),$$

• the error between the model and the function satisfies
$$|f(y) - m(y)| \le \kappa_{ef}\Delta^3 \quad \text{for all } y \in B(x; \Delta).$$
Definition 2.3 Let $f$ satisfy Assumption 2.1. A set of model functions $\mathcal{M} = \{m : \mathbb{R}^n \to \mathbb{R},\ m \in C^2\}$ is called a fully quadratic class of models if there exist positive constants $\kappa = (\kappa_{ef}, \kappa_{eg}, \kappa_{eh}, \nu_2^m)$ such that the following hold:

1. For any $x \in S$ and $\Delta \in (0, \Delta_{\max}]$, there exists a model function $m$ in $\mathcal{M}$ which is $\kappa$-fully quadratic on $B(x; \Delta)$.

2. For this class $\mathcal{M}$, there exists an algorithm, called a "model-improvement" algorithm, that in a finite, uniformly bounded (with respect to $x$ and $\Delta$) number of steps can

• either certify that a given model $m \in \mathcal{M}$ is $\kappa$-fully quadratic on $B(x; \Delta)$,

• or find a model $\tilde{m} \in \mathcal{M}$ that is $\kappa$-fully quadratic on $B(x; \Delta)$.
Note that this definition of a fully quadratic class of models is equivalent to [12, Definition 6.2]; but we have given a separate definition of a $\kappa$-fully quadratic model (Definition 2.2) that includes the use of $\kappa$ to stress the fixed nature of the bounding constants. This change simplifies some analysis by allowing us to discuss $\kappa$-fully quadratic models independent of the class of models they belong to. It is important to note that $\kappa$ does not need to be known explicitly. Instead, it can be defined implicitly by the model-improvement algorithm. All that is required is for $\kappa$ to be fixed (that is, independent of $x$ and $\Delta$). We also note that the set $\mathcal{M}$ may include non-quadratic functions, but when the model functions are quadratic, the Hessian is fixed, so $\nu_2^m$ can be chosen to be zero. For the algorithms presented in Chapter 3 and Chapter 4, we focus on model functions that are quadratic. That is, $\mathcal{M} = \mathcal{P}_n^2$.
As a side note, $\kappa$-fully quadratic models may be too difficult to construct or may be undesired in some situations. If that is the case, $\kappa$-fully linear models might provide a useful alternative.
Definition 2.4 Let $f \in LC^2$, let $\kappa = (\kappa_{ef}, \kappa_{eg}, \nu_1^m)$ be a given vector of constants, and let $\Delta > 0$. A model function $m \in C^2$ is $\kappa$-fully linear on $B(x; \Delta)$ if $m$ has a Lipschitz continuous gradient with corresponding Lipschitz constant bounded by $\nu_1^m$ and

• the error between the gradient of the model and the gradient of the function satisfies
$$\|\nabla f(y) - \nabla m(y)\| \le \kappa_{eg}\Delta \quad \forall y \in B(x; \Delta),$$

• the error between the model and the function satisfies
$$|f(y) - m(y)| \le \kappa_{ef}\Delta^2 \quad \forall y \in B(x; \Delta).$$
If $m$ is $\kappa$-fully linear, it approximates $f$ in a fashion similar to the first-order Taylor model of $f$. If $m$ is $\kappa$-fully quadratic, then it approximates $f$ in a fashion similar to the second-order Taylor model of $f$. If $\kappa$-fully linear (or quadratic) models are used within the CSV2-framework, we can guarantee convergence of the algorithm to a first- (or second-) order stationary point of $f$.
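The cubic error decay of Definition 2.2 can be observed numerically. The sketch below (an illustration, not part of the thesis) interpolates $f = \exp$ by a quadratic on three points in $B(x_0; \Delta)$ and compares the worst-case error for two radii; halving $\Delta$ reduces the error by roughly $2^3$, consistent with the $O(\Delta^3)$ bound:

```python
import numpy as np

def max_interp_error(f, x0, delta):
    """Fit a quadratic through f at {x0 - delta, x0, x0 + delta} and return
    the largest model error on a grid over B(x0; delta)."""
    xs = np.array([x0 - delta, x0, x0 + delta])
    coeffs = np.polyfit(xs, f(xs), deg=2)
    grid = np.linspace(x0 - delta, x0 + delta, 201)
    return np.max(np.abs(np.polyval(coeffs, grid) - f(grid)))

e1 = max_interp_error(np.exp, 0.0, 0.4)
e2 = max_interp_error(np.exp, 0.0, 0.2)
print(e1 / e2)   # roughly 2^3 = 8
```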
A critical distinction between the CSV2-framework and classical trust region methods lies in the optimality criteria. In classical trust region methods, $m_k$ is the second-order Taylor approximation of $f$ at $x^k$; so if $x^k$ is optimal for $m_k$, it satisfies the first- and second-order necessary conditions for an optimum of $f$. In the CSV2-framework, $x^k$ must be optimal for $m_k$, but $m_k$ must also be an accurate approximation of $f$ near $x^k$. This requires that the trust region radius is small and that $m_k$ is $\kappa$-fully quadratic on the trust region for some fixed $\kappa$.
To explicitly outline the CSV2-framework, we provide pseudocode below. In the algorithm, $g_k^{icb}$ and $H_k^{icb}$ denote the gradient and Hessian of the incumbent model $m_k^{icb}$. (We use the superscript $icb$ to stress that incumbent parameters from the previous iterates may be changed before they are used in the trust region step.) The optimality of $x^k$ with respect to $m_k$ is tested by calculating $\sigma_k^{icb} = \max\{\|g_k^{icb}\|, -\lambda_{\min}(H_k^{icb})\}$. This quantity is zero if and only if the first- and second-order optimality conditions for $m_k$ are satisfied. The algorithm enters the criticality step when $\sigma_k^{icb}$ is close to zero. This routine builds a (possibly) new $\kappa$-fully quadratic model for the current $\Delta_k^{icb}$, and tests if $\sigma_k^{icb}$ for this model is sufficiently large. If so, a descent direction has been determined, and the algorithm can proceed. If not, the criticality step reduces $\Delta_k^{icb}$ and updates the sample set to improve the accuracy of the model function near $x^k$. The criticality step ends when $\sigma_k^{icb}$ is large enough (and the algorithm proceeds) or when both $\sigma_k^{icb}$ and $\Delta_k^{icb}$ are smaller than given threshold values $\epsilon_c$ and $\Delta_{\min}$ (in which case the algorithm has identified a second-order stationary point). We refer the reader to [12] for a more detailed discussion of the algorithm, including explanations of the parameters $\eta_0$, $\eta_1$, $\gamma$, $\gamma_{inc}$, $\beta$, $\mu$ and $\omega$.
Algorithm CSV2 ([12, Algorithm 10.3])
Step 0 (initialization): Choose a fully quadratic class of models $\mathcal{M}$ and a corresponding model-improvement algorithm, with associated $\kappa$ defined by Definition 2.3. Choose an initial point $x^0$ and maximum trust region radius $\Delta_{\max} > 0$. We assume that the following are given: an initial model $m_0^{icb}(x)$ (with gradient and Hessian at $x = x^0$ denoted by $g_0^{icb}$ and $H_0^{icb}$, respectively), $\sigma_0^{icb} = \max\{\|g_0^{icb}\|, -\lambda_{\min}(H_0^{icb})\}$, and a trust region radius $\Delta_0^{icb} \in (0, \Delta_{\max}]$.

The constants $\eta_0, \eta_1, \gamma, \gamma_{inc}, \epsilon_c, \beta, \mu, \omega$ are given and satisfy the conditions $0 \le \eta_0 \le \eta_1 < 1$ (with $\eta_1 \ne 0$), $0 < \gamma < 1 < \gamma_{inc}$, $\epsilon_c > 0$, $\mu > \beta > 0$, $\omega \in (0,1)$. Set $k = 0$.

Step 1 (criticality step): If $\sigma_k^{icb} > \epsilon_c$, then $m_k = m_k^{icb}$ and $\Delta_k = \Delta_k^{icb}$.
If $\sigma_k^{icb} \le \epsilon_c$, then proceed as follows. Call the model-improvement algorithm to attempt to certify if the model $m_k^{icb}$ is $\kappa$-fully quadratic on $B(x^k; \Delta_k^{icb})$. If at least one of the following conditions hold,

• the model $m_k^{icb}$ is not certifiably $\kappa$-fully quadratic on $B(x^k; \Delta_k^{icb})$, or

• $\Delta_k^{icb} > \mu\sigma_k^{icb}$,

then apply Algorithm CriticalityStep (described below) to construct a model $\tilde{m}_k(x)$ (with gradient and Hessian at $x = x^k$ denoted by $\tilde{g}_k$ and $\tilde{H}_k$, respectively), with $\tilde{\sigma}_k = \max\{\|\tilde{g}_k\|, -\lambda_{\min}(\tilde{H}_k)\}$, which is $\kappa$-fully quadratic on the ball $B(x^k; \tilde{\Delta}_k)$ for some $\tilde{\Delta}_k \in (0, \mu\tilde{\sigma}_k]$ given by [12, Algorithm 10.4]. In such a case set

$$m_k = \tilde{m}_k \quad \text{and} \quad \Delta_k = \min\{\max\{\tilde{\Delta}_k, \beta\tilde{\sigma}_k\}, \Delta_k^{icb}\}.$$

Otherwise, set $m_k = m_k^{icb}$ and $\Delta_k = \Delta_k^{icb}$. For a more complete discussion of trust region management, see [26].
Step 2 (step calculation): Compute a step $s^k$ that sufficiently reduces the model $m_k$ (in the sense of [12, (10.13)]) such that $x^k + s^k \in B(x^k; \Delta_k)$.
Step 3 (acceptance of the trial point): Compute $f(x^k + s^k)$ and define

$$\rho_k = \frac{f(x^k) - f(x^k + s^k)}{m_k(x^k) - m_k(x^k + s^k)}.$$

If $\rho_k \ge \eta_1$, or if both $\rho_k > \eta_0$ and the model is $\kappa$-fully quadratic on $B(x^k; \Delta_k)$, then $x^{k+1} = x^k + s^k$ and the model is updated to include the new iterate into the sample set, resulting in a new model $m_{k+1}^{icb}(x)$ (with gradient and Hessian at $x = x^{k+1}$ denoted by $g_{k+1}^{icb}$ and $H_{k+1}^{icb}$, respectively), with $\sigma_{k+1}^{icb} = \max\{\|g_{k+1}^{icb}\|, -\lambda_{\min}(H_{k+1}^{icb})\}$; otherwise, the model and the iterate remain unchanged ($m_{k+1}^{icb} = m_k$ and $x^{k+1} = x^k$).
Step 4 (model improvement): If $\rho_k < \eta_1$, use the model-improvement algorithm to

• attempt to certify that $m_k$ is $\kappa$-fully quadratic on $B(x^k; \Delta_k)$,

• if such a certificate is not obtained, we say that $m_k$ is not certifiably $\kappa$-fully quadratic and make one or more suitable improvement steps.

Define $m_{k+1}^{icb}$ to be the (possibly improved) model.

Step 5 (trust region update): Set

$$\Delta_{k+1}^{icb} \in \begin{cases} \{\min\{\gamma_{inc}\Delta_k, \Delta_{\max}\}\} & \text{if } \rho_k \ge \eta_1 \text{ and } \Delta_k < \beta\sigma_k, \\ [\Delta_k, \min\{\gamma_{inc}\Delta_k, \Delta_{\max}\}] & \text{if } \rho_k \ge \eta_1 \text{ and } \Delta_k \ge \beta\sigma_k, \\ \{\gamma\Delta_k\} & \text{if } \rho_k < \eta_1 \text{ and } m_k \text{ is } \kappa\text{-fully quadratic}, \\ \{\Delta_k\} & \text{if } \rho_k < \eta_1 \text{ and } m_k \text{ is not certifiably } \kappa\text{-fully quadratic}. \end{cases}$$

Increment $k$ by 1 and go to Step 1.
Algorithm CriticalityStep ([12, Algorithm 10.4]) This algorithm is applied only if $\sigma_k^{icb} \le \epsilon_c$ and at least one of the following holds: the model $m_k^{icb}$ is not certifiably $\kappa$-fully quadratic on $B(x^k; \Delta_k^{icb})$ or $\Delta_k^{icb} > \mu\sigma_k^{icb}$.

Initialization: Set $i = 0$. Set $m_k^{(0)} = m_k^{icb}$.

Repeat: Increment $i$ by one. Use the model-improvement algorithm to improve the previous model until it is $\kappa$-fully quadratic on $B(x^k; \omega^{i-1}\Delta_k^{icb})$. Denote the new model by $m_k^{(i)}$. Set $\Delta_k = \omega^{i-1}\Delta_k^{icb}$ and $m_k = m_k^{(i)}$.

Until $\Delta_k \le \mu\sigma_k^{(i)}$.
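To make the flow of the framework concrete, the following heavily simplified NumPy sketch mimics the main CSV2 loop for a deterministic objective. It is an illustration only: it omits the criticality step, the fully quadratic certification, and careful sample-set management, and all function and parameter names are our own:

```python
import numpy as np

def tr_dfo_minimize(f, x0, delta0=1.0, eta1=0.25, gamma=0.5,
                    gamma_inc=2.0, delta_max=10.0, max_iter=100):
    """Toy derivative-free trust region loop in the spirit of CSV2: build a
    quadratic regression model around x, take a Cauchy step, test rho."""
    n = len(x0)
    x, delta = np.array(x0, float), delta0
    rng = np.random.default_rng(1)
    for _ in range(max_iter):
        # Sample enough points around x for quadratic regression.
        q1 = (n + 1) * (n + 2) // 2
        Y = x + delta * rng.uniform(-1, 1, size=(2 * q1, n))
        M = np.array([_phi(y - x) for y in Y])
        alpha, *_ = np.linalg.lstsq(M, np.array([f(y) for y in Y]), rcond=None)
        g, H = _grad_hess(alpha, n)
        if np.linalg.norm(g) < 1e-8:
            break
        # Cauchy step for the model, then the standard rho acceptance test.
        d = -g / np.linalg.norm(g)
        curv = d @ H @ d
        t = delta if curv <= 0 else min(np.linalg.norm(g) / curv, delta)
        s = t * d
        pred = -(g @ s + 0.5 * s @ H @ s)      # model decrease (always > 0)
        rho = (f(x) - f(x + s)) / pred
        if rho >= eta1:
            x, delta = x + s, min(gamma_inc * delta, delta_max)
        else:
            delta *= gamma
    return x

def _phi(z):
    # Monomial quadratic basis for a point already shifted to z.
    n = len(z)
    quad = [0.5 * z[i] * z[j] if i == j else z[i] * z[j]
            for i in range(n) for j in range(i, n)]
    return np.array([1.0, *z, *quad])

def _grad_hess(alpha, n):
    # Recover the model's gradient and Hessian at the shift point.
    g = alpha[1:n + 1]
    H = np.zeros((n, n))
    k = n + 1
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = alpha[k]
            k += 1
    return g, H

x_star = tr_dfo_minimize(lambda x: np.sum((x - 1.0) ** 2), [3.0, -2.0])
```

On this exactly quadratic test function the regression model is exact, so every step is accepted and the iterates reach the minimizer $(1, 1)$ in a few iterations.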

Global Convergence If the following assumptions are satisfied, it has been shown that the CSV2-framework iterates will converge to a stationary point of $f$. Define the set $L(x^0) = \{x \in \mathbb{R}^n : f(x) \le f(x^0)\}$.

Assumption 2.5 Assume that $f$ is bounded from below on $L(x^0)$.

Assumption 2.6 There exists a constant $\kappa_{bhm} > 0$ such that, for all $x^k$ generated by the algorithm,
$$\|H_k\| \le \kappa_{bhm}.$$

Theorem 2.7 ([12, Theorem 10.23]) Let Assumptions 2.1, 2.5 and 2.6 hold with $S = L(x^0)$. Then, if the models used in the CSV2-framework are $\kappa$-fully quadratic, the iterates $x^k$ generated by the CSV2-framework satisfy
$$\lim_{k \to \infty} \max\{\|\nabla f(x^k)\|, -\lambda_{\min}(\nabla^2 f(x^k))\} = 0.$$

It follows from this theorem that any accumulation point of $\{x^k\}$ satisfies the first- and second-order necessary conditions for a minimum of $f$.
2.1.3 Poisedness
Having outlined the CSV2-framework, we can discuss conditions on the sample set used to build $m_k$ which guarantee the model sufficiently approximates $f$. Consider the set of polynomials in $\mathbb{R}^n$ of degree less than or equal to $d$, denoted $\mathcal{P}_n^d$. A basis $\phi = \{\phi_0(x), \ldots, \phi_q(x)\}$ of $\mathcal{P}_n^d$ is a set of polynomials from $\mathcal{P}_n^d$ which span $\mathcal{P}_n^d$. That is, for any $P(x) \in \mathcal{P}_n^d$ there exist coefficients $\beta_i$ such that $P(x) = \sum_{i=0}^{q} \beta_i\phi_i(x)$. We let $|\mathcal{P}_n^d|$ denote the number of elements in any basis $\phi$ of $\mathcal{P}_n^d$. For example, $|\mathcal{P}_n^1| = n+1$ and $|\mathcal{P}_n^2| = (n+1)(n+2)/2$.

Definition 2.8 A set of points $X = \{x^0, \ldots, x^p\} \subset \mathbb{R}^n$ with $|X| = |\mathcal{P}_n^d|$ is poised for interpolation if the matrix $M(\phi, X)$ is nonsingular for some basis $\phi$ in $\mathcal{P}_n^d$.

If $|X| > |\mathcal{P}_n^d|$, we can construct the least squares regression model by solving (2.1) as well. We extend the definition of poisedness to the regression case.

Definition 2.9 A set of points $X = \{x^0, \ldots, x^p\} \subset \mathbb{R}^n$ with $|X| > |\mathcal{P}_n^d|$ is poised for regression if the matrix $M(\phi, X)$ has full column rank for some basis $\phi$ in $\mathcal{P}_n^d$.

Since we have limited information about the function $f$, we want the interpolating or regression model $m(x)$ to be an accurate approximation in a region of interest. This requires that $X$ consist of points spread out within said region. Since $M(\phi, X)$ can be arbitrarily poorly conditioned while $X$ remains poised, simply being poised is not enough to measure the quality of a set $X$. Looking at the condition number of $M(\phi, X)$ alone is also inadequate, since shifting and scaling the sample set $X$ or choosing a different basis can arbitrarily adjust this quantity. Nevertheless, there are methods for measuring the quality of a sample set, one of the most common of which is based on Lagrange polynomials.
Definition 2.10 For a set $X = \{x^0, \ldots, x^p\} \subset \mathbb{R}^n$ with $|X| = |\mathcal{P}_n^d|$, the set of interpolating Lagrange polynomials $\{\ell_0, \ldots, \ell_p\} \subset \mathcal{P}_n^d$ are the polynomials satisfying

$$\ell_j(x^i) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$

If the set $X$ is poised, then this set of polynomials is guaranteed to exist and be uniquely defined.
We can extend the definition of Lagrange polynomials to the regression case in a natural fashion.

Definition 2.11 For a set $X = \{x^0, \ldots, x^p\} \subset \mathbb{R}^n$ with $|X| > |\mathcal{P}_n^d|$, the set of regression Lagrange polynomials $\{\ell_0, \ldots, \ell_p\} \subset \mathcal{P}_n^d$ are the polynomials satisfying

$$\ell_j(x^i) \overset{l.s.}{=} \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases}$$

where the equality holds in the least squares sense.

Though these polynomials are no longer linearly independent, if $X$ is poised, then the set of regression Lagrange polynomials exists and is uniquely defined.
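Regression Lagrange polynomials are easy to compute explicitly: their coefficient vectors are the columns of the pseudoinverse of the design matrix. A small NumPy sketch (an illustration with an arbitrary linear basis and random points, not part of the thesis):

```python
import numpy as np

# Linear basis in R^2: phi(x) = (1, x1, x2), so |P_n^1| = 3.
phi = lambda x: np.array([1.0, x[0], x[1]])

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(7, 2))          # p + 1 = 7 > 3: regression case
M = np.array([phi(x) for x in X])

# Column j of A holds the coefficients of the j-th regression Lagrange
# polynomial: the least squares solution of M a_j = e_j.
A = np.linalg.pinv(M)
ell = lambda x: phi(x) @ A                   # vector (l_0(x), ..., l_p(x))

# The regression model of data f on X is m(x) = sum_i f_i * l_i(x).
fvals = rng.normal(size=7)
beta, *_ = np.linalg.lstsq(M, fvals, rcond=None)
x = np.array([0.3, -0.4])
print(phi(x) @ beta, ell(x) @ fvals)         # the two values agree
```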
We can now use these Lagrange polynomials to extend the definition of poisedness to $\Lambda$-poisedness. This relates the magnitude of the Lagrange polynomials on a set $B \subset \mathbb{R}^n$, which will allow a method for measuring the quality of a sample set.
Definition 2.12 Let $\Lambda > 0$ and a set $B \subset \mathbb{R}^n$ be given. For a basis $\phi$ of $\mathcal{P}_n^d$, a poised set $X = \{x^0, \ldots, x^p\}$ is said to be $\Lambda$-poised in $B$ (in the interpolating sense) if and only if

1. for the basis of Lagrange polynomials associated with $X$,
$$\Lambda \ge \max_{0 \le i \le p}\ \max_{x \in B} |\ell_i(x)|,$$
or, equivalently,

2. for any $x \in B$ there exists $\lambda(x)$ such that
$$\sum_{i=0}^{p} \lambda_i(x)\phi(x^i) = \phi(x) \quad \text{with } \|\lambda(x)\|_\infty \le \Lambda.$$
We can again extend this definition to the regression case.
Definition 2.13 Let $\Lambda > 0$ and a set $B \subset \mathbb{R}^n$ be given. For a basis $\phi$ of $\mathcal{P}_n^d$, a poised set $X = \{x^0, \ldots, x^p\}$ with $|X| \ge |\mathcal{P}_n^d|$ is said to be $\Lambda$-poised in $B$ (in the regression sense) if and only if

1. for the basis of regression Lagrange polynomials associated with $X$,
$$\Lambda \ge \max_{0 \le i \le p}\ \max_{x \in B} |\ell_i(x)|,$$
or, equivalently,

2. for any $x \in B$ there exists $\lambda(x)$ such that
$$\sum_{i=0}^{p} \lambda_i(x)\phi(x^i) \overset{l.s.}{=} \phi(x) \quad \text{with } \|\lambda(x)\|_\infty \le \Lambda.$$

Note that if $|X| = |\mathcal{P}_n^d|$, the definitions of $\Lambda$-poised in the interpolation and regression senses coincide.
We can now examine the following bound (from [7]):

$$\|D^r f(x) - D^r m(x)\| \le \frac{1}{(d+1)!}\,\nu_d \sum_{i=0}^{p} \|x^i - x\|^{d+1}\,\|D^r \ell_i(x)\|,$$

where $D^r f(x)$ is the $r$th derivative of $f$ and $\nu_d$ is an upper bound on $D^{d+1} f(x)$. If $r = 0$, this bound reduces to

$$|f(x) - m(x)| \le \frac{1}{(d+1)!}\,(p+1)\,\nu_d\,\Lambda_b\,\Delta^{d+1}, \qquad (2.2)$$

where

$$\Lambda_b = \max_{0 \le i \le p}\ \max_{x \in B} |\ell_i(x)|,$$

and $\Delta$ is the diameter of the smallest ball containing $X$. Therefore, if the number of points in $X$ is bounded, then $\Lambda$-poisedness is sufficient to derive bounds on the error between the regression or interpolating model and the function. That is, decreasing the radius of the sample set will provide bounds similar to Taylor bounds when derivatives are available. If using regression models with arbitrarily many points, $\Lambda$-poisedness is not enough to construct similar bounds. Strong $\Lambda$-poisedness can help in this case.
Definition 2.14 Let $\ell(x) = (\ell_0(x), \ldots, \ell_p(x))^T$ be the regression Lagrange polynomials associated with the set $Y = \{y^0, \ldots, y^p\}$. Let $\Lambda > 0$ and let $B$ be a set in $\mathbb{R}^n$. The set $Y$ is said to be strongly $\Lambda$-poised in $B$ (in the regression sense) if and only if

$$\frac{q_1}{\sqrt{p_1}}\,\Lambda \ge \max_{x \in B} \|\ell(x)\|,$$

where $q_1 = |\mathcal{P}_n^d|$ and $p_1 = |Y|$.

Since we can rewrite (2.2) as

$$|f(x) - m(x)| \le \frac{1}{(d+1)!}\,\sqrt{p+1}\,\nu_d\,\Lambda_{b,2}\,\Delta^{d+1},$$

where

$$\Lambda_{b,2} = \max_{x \in B} \|\ell(x)\|,$$

strong $\Lambda$-poisedness provides Taylor-like error bounds between the regression model $m$ and the function $f$, even when the number of points in $X$ is unbounded.
As a final note, explicitly calculating the value of $\Lambda$ is computationally expensive, but not necessary. It is possible to use the condition number of the design matrix $M(\phi, X)$ to bound the constant $\Lambda$. Since it is possible to scale the condition number of $M(\phi, X)$ by shifting and scaling $X$, or by choosing a different basis, conditions must be placed on $M(\phi, X)$ before using its condition number. If we 1) use the standard basis (e.g., the monomial basis) and 2) shift and scale $X$ so that it lies within the unit ball, then $\Lambda$ can be bounded using the condition number of the resulting matrix $M(\phi, \hat{X})$. The next two theorems address the interpolation and regression cases, respectively.
Theorem 2.15 Let $\hat{X}$ denote the shifted and scaled version of $X$, so every point lies within the unit ball and at least one point has norm 1. Let $\hat{M} = M(\bar{\phi}, \hat{X})$, where $\bar{\phi}$ is the standard basis. If $\hat{M}$ is nonsingular and $\|\hat{M}^{-1}\| \le \Lambda$, then the set $\hat{X}$ is $\sqrt{p_1}\Lambda$-poised in the unit ball. Conversely, if the set $\hat{X}$ is $\Lambda$-poised in the unit ball, then

$$\|\hat{M}^{-1}\| \le \theta\sqrt{p_1}\,\Lambda,$$

where $\theta > 0$ depends on $n$ and $d$ but is independent of $\hat{X}$ and $\Lambda$.
Theorem 2.16 Let $\hat{X}$ denote the shifted and scaled version of $X$, so every point lies within the unit ball and at least one point has norm 1. Let $\hat{M} = M(\bar{\phi}, \hat{X})$, where $\bar{\phi}$ is the standard basis. If $\hat{M}$ has full column rank and $\|\hat{M}^+\| \le \sqrt{q_1/p_1}\,\Lambda$, then the set $\hat{X}$ is strongly $\Lambda$-poised in the unit ball. Conversely, if the set $\hat{X}$ is strongly $\Lambda$-poised in the unit ball, then

$$\|\hat{M}^+\| \le \frac{\theta\,q_1}{\sqrt{p_1}}\,\Lambda,$$

where $\theta$ depends on $n$ and $d$ but is independent of $\hat{X}$ and $\Lambda$.
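These theorems suggest a cheap practical check: shift and scale the sample set into the unit ball and examine the norm of the (pseudo)inverse of the resulting design matrix. A sketch with a linear basis (an illustration only, not part of the thesis; the point sets are arbitrary):

```python
import numpy as np

def scaled_design_matrix(X):
    """Shift/scale X into the unit ball (with at least one point of norm 1)
    and build the linear-basis design matrix M(phi, X_hat)."""
    X = np.asarray(X, float)
    shifted = X - X[0]
    R = max(np.linalg.norm(z) for z in shifted)
    Xhat = shifted / R
    return np.column_stack([np.ones(len(Xhat)), Xhat])

# Well spread points vs. nearly collinear points in R^2.
good = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
bad = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.001], [3.0, -0.001]]

# A small ||M_hat^+|| certifies (strong) poisedness with a small Lambda,
# while a huge one signals a nearly non-poised sample set.
norm_good = np.linalg.norm(np.linalg.pinv(scaled_design_matrix(good)), 2)
norm_bad = np.linalg.norm(np.linalg.pinv(scaled_design_matrix(bad)), 2)
print(norm_good, norm_bad)
```

For the well-spread set the norm is of moderate size; for the nearly collinear set it is orders of magnitude larger, so any certified $\Lambda$ would be correspondingly enormous.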
2.2 Performance Profiles
Next, we explain the content of performance profiles, which are a compact method for comparing the performance of various algorithms on a set of problems. We will use Figure 2.1 as an example. Algorithms A, B, and C have been run on an identical set of problems for the same number of function evaluations. The left axis shows the percentage of problems each algorithm solved first, where "solved" is user-defined. (Often, an algorithm is considered to solve a problem when it first finds a function value within a tolerance of the best known solution.) In the example, A solves 20% of the problems first, B solves 55% of the problems first, and C solves 30% of the problems first. As these percentages total over 100%, there is an overlap in the set of problems the algorithms solve first.

The right axis shows the percentage of problems solved by a given algorithm within the number of function evaluations given. All algorithms in Figure 2.1 solve over 90% of the problems in the testing set. Values between the left and right axes denote the percentage of problems solved as a multiple of the number of evaluations required by the fastest algorithm. For example, given 6 times as many evaluations as the fastest algorithm on a problem, A solves 80% of the problems in the testing set.
Formally, an algorithm is considered to solve a problem when it first finds a function value satisfying

$$f(x^0) - f(x) \ge (1 - \tau)(f(x^0) - f_L),$$

where $\tau > 0$ is a small tolerance, $f_L$ is the smallest function value found by any solver within a specified number of evaluations, and $x^0$ is the initial point given to each algorithm.

Figure 2.1: An example of a performance profile
If $t_{p,a}$ is the number of function evaluations required by solver $a$ to satisfy the above test on problem $p$, the performance ratio is defined as

$$r_{p,a} = \frac{t_{p,a}}{\min_a t_{p,a}}.$$

Then the performance profile of a solver $a$ is the fraction of problems where the performance ratio is at most $\alpha$. That is,

$$\rho_a(\alpha) = \frac{1}{|P|}\,\bigl|\{p \in P : r_{p,a} \le \alpha\}\bigr|,$$

where $P$ is the set of benchmark problems.
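The profile $\rho_a(\alpha)$ is straightforward to compute from a table of evaluation counts. A minimal NumPy sketch (an illustration with made-up data, not the plotting code used for Figure 2.1):

```python
import numpy as np

def performance_profile(T, alphas):
    """T[p, a] = evaluations solver a needed on problem p (np.inf if unsolved).
    Returns rho[a, i] = fraction of problems with ratio r_{p,a} <= alphas[i]."""
    T = np.asarray(T, float)
    best = T.min(axis=1, keepdims=True)       # fastest solver per problem
    ratios = T / best                         # r_{p,a}
    return np.array([[np.mean(ratios[:, a] <= alpha) for alpha in alphas]
                     for a in range(T.shape[1])])

# Three solvers on four problems (np.inf marks "never solved").
T = [[10, 20, 15],
     [40, 30, 90],
     [25, 50, np.inf],
     [12, 12, 24]]
rho = performance_profile(T, alphas=[1.0, 2.0, 4.0])
print(rho)
```

Note that $\rho_a(1)$ is the fraction of problems solver $a$ solves fastest (the left axis of Figure 2.1), and ties, as in the last problem above, let two solvers both count as "first."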
2.3 Probabilistic Convergence
Lastly, we define various forms of probabilistic convergence. A sequence $\{X_n\}$ of random variables is said to converge in distribution (or converge weakly, or converge in law) to a random variable $X$ if

$$\lim_{n \to \infty} F_n(x) = F(x)$$

for every number $x \in \mathbb{R}$ at which $F$ is continuous. Here $F_n$ and $F$ are the cumulative distribution functions of the random variables $X_n$ and $X$, respectively.

A sequence $\{X_n\}$ of random variables converges in probability to $X$ if for all $\epsilon > 0$,

$$\lim_{n \to \infty} P(|X_n - X| > \epsilon) = 0.$$

A sequence $\{X_n\}$ of random variables converges almost surely (or almost everywhere, or with probability 1, or strongly) towards $X$ if

$$P\left(\lim_{n \to \infty} X_n = X\right) = 1.$$
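As a simple empirical illustration of these notions (not part of the thesis), the strong law of large numbers says the running mean of i.i.d. Uniform(0,1) draws converges almost surely, and hence in probability and in distribution, to 1/2:

```python
import numpy as np

rng = np.random.default_rng(3)
draws = rng.uniform(0, 1, size=100_000)
# running_means[n-1] is the sample mean of the first n draws.
running_means = np.cumsum(draws) / np.arange(1, draws.size + 1)
print(abs(running_means[999] - 0.5), abs(running_means[-1] - 0.5))
```

The deviation from 1/2 shrinks as more draws are averaged, which is the empirical signature of almost sure convergence along this sample path.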

3. Derivative-free Optimization of Expensive Functions with Computational Error Using Weighted Regression
3.1 Introduction
In this chapter, we construct an algorithm designed to optimize functions evaluated by large computational codes, taking minutes, hours or even days for a single function call, for which derivative information is unavailable, and for which function evaluations are subject to computational error. Such error may be deterministic (arising, for example, from discretization error), or stochastic (for example, from Monte-Carlo simulation). Because function evaluations are extremely expensive, it is sensible to perform substantial work at each iteration to reduce the number of function evaluations required to obtain an optimum.
We assume that the accuracy of the function evaluation can vary from point to point, and this variation can be quantified. In this chapter, we will use knowledge of this varying error to improve the performance of the algorithm by using weighted regression models in a trust region framework. By giving more accurate points more weight when constructing the trust region model, we hope that the models will more closely approximate the function being optimized. This leads to a better performing algorithm.
Our algorithm fits within the CSV2-framework, which is outlined in Chapter 2. To specify an algorithm within this framework, three things are required:

1. Define the class of model functions $\mathcal{M}$. This is determined by the method for constructing models from the sample set. In [10], models were constructed using interpolation, least squares regression, and minimum Frobenius norm methods. We describe the general form of our weighted regression models in §3.2 and propose a specific weighting scheme in §3.5.

2. Define a model-improvement algorithm. §3.4 describes our model-improvement algorithm, which tests the geometry of the sample set and, if necessary, adds and/or deletes points to ensure that the model function constructed from the sample set satisfies the error bounds in Definition 2.2 (i.e., it is $\kappa$-fully quadratic).

3. Demonstrate that the model-improvement algorithm satisfies the requirements for the definition of a class of fully quadratic models. For our algorithm, this is discussed in §3.4.

The chapter is organized as follows. We place our algorithm in the CSV2-framework by describing 1) how model functions are constructed (§3.2), and 2) a model-improvement algorithm (§3.4). Before describing the model-improvement algorithm, we first extend the theory of $\Lambda$-poisedness to the weighted regression framework (§3.3). Computational results are presented in §3.5 using a heuristic weighting scheme, which is described in that section. §3.6 concludes the chapter.
3.2 Model Construction
This section describes how we construct the model function $m_k$ at the $k$th iteration. For simplicity, we drop the subscript $k$ for the remainder of this section. Let $f = (f_0, \ldots, f_p)^T$, where $f_i$ denotes the computed function value at $y^i$, and let $\epsilon_i$ denote the associated computational error. That is,

$$f_i = f(y^i) + \epsilon_i. \qquad (3.1)$$

Let $w = (w_0, \ldots, w_p)^T$ be a vector of positive weights for the set of points $Y = \{y^0, \ldots, y^p\}$. A quadratic polynomial $m$ is said to be a weighted least squares approximation of $f$ (with respect to $w$) if it minimizes

$$\sum_{i=0}^{p} w_i^2\,(m(y^i) - f_i)^2 = \|W(m(Y) - f)\|^2,$$

where $m(Y)$ denotes the vector $(m(y^0), m(y^1), \ldots, m(y^p))^T$ and $W = \operatorname{diag}(w)$. In this case, we write

$$W m(Y) \overset{l.s.}{=} W f. \qquad (3.2)$$

Let $\phi = \{\phi_0, \phi_1, \ldots, \phi_q\}$ be a basis for the quadratic polynomials in $\mathbb{R}^n$. For example, $\phi$ might be the monomial basis

$$\phi = \{1, x_1, x_2, \ldots, x_n, x_1^2/2, x_1 x_2, \ldots, x_{n-1}x_n, x_n^2/2\}. \qquad (3.3)$$

Define

$$M(\phi, Y) = \begin{bmatrix} \phi_0(y^0) & \phi_1(y^0) & \cdots & \phi_q(y^0) \\ \phi_0(y^1) & \phi_1(y^1) & \cdots & \phi_q(y^1) \\ \vdots & \vdots & & \vdots \\ \phi_0(y^p) & \phi_1(y^p) & \cdots & \phi_q(y^p) \end{bmatrix}.$$

Since $\phi$ is a basis for the quadratic polynomials, the model function can be written $m(x) = \sum_{i=0}^{q} \alpha_i\phi_i(x)$. The coefficients $\alpha = (\alpha_0, \ldots, \alpha_q)^T$ solve the weighted least squares regression problem

$$W M(\phi, Y)\,\alpha \overset{l.s.}{=} W f. \qquad (3.4)$$

If $M(\phi, Y)$ has full column rank, the sample set $Y$ is said to be poised for quadratic regression. The following lemma is a straightforward generalization of [12, Lemma 4.3]:

Lemma 3.1 If $Y$ is poised for quadratic regression, then the weighted least squares regression polynomial (with respect to positive weights $w = (w_0, \ldots, w_p)$) exists, is unique, and is given by $m(x) = \phi(x)^T\alpha$, where

$$\alpha = (WM)^+ W f = (M^T W^2 M)^{-1} M^T W^2 f, \qquad (3.5)$$

where $W = \operatorname{diag}(w)$ and $M = M(\phi, Y)$.

Proof: Since $W$ and $M$ both have full column rank, so does $WM$. Thus, the least squares problem (3.4) has a unique solution given by $(WM)^+ W f$. Moreover, since $WM$ has full column rank, $(WM)^+ = ((WM)^T(WM))^{-1}(WM)^T = (M^T W^2 M)^{-1}M^T W$. ∎
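The two expressions for $\alpha$ in (3.5) can be checked numerically; the sketch below (an illustration with random, arbitrary data, not part of the thesis) compares the pseudoinverse form against the normal-equations form:

```python
import numpy as np

# Check of (3.5): the weighted least squares coefficients
# alpha = (W M)^+ W f  equal  (M^T W^2 M)^{-1} M^T W^2 f.
rng = np.random.default_rng(4)
M = rng.normal(size=(10, 4))            # design matrix with full column rank
f = rng.normal(size=10)                 # computed function values
w = rng.uniform(0.5, 2.0, size=10)      # positive weights
W = np.diag(w)

alpha_pinv = np.linalg.pinv(W @ M) @ (W @ f)
alpha_normal = np.linalg.solve(M.T @ W**2 @ M, M.T @ W**2 @ f)
print(np.allclose(alpha_pinv, alpha_normal))
```

In practice the pseudoinverse (or a QR factorization of $WM$) is preferable to forming the normal equations, which squares the condition number of the problem.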
3.3 Error Analysis and the Geometry of the Sample Set

Throughout this section, $Y = \{y^0, \ldots, y^p\}$ denotes the sample set, $p_1 = p+1$, $w \in \mathbb{R}^{p_1}$ is a vector of positive weights, $W = \operatorname{diag}(w)$, and $M = M(\phi, Y)$. $f$ denotes the vector of computed function values at the points in $Y$, as defined by (3.1).

The accuracy of the model function $m_k$ relies critically on the geometry of the sample set. In this section, we generalize the theory of poisedness from [12] to the weighted regression framework. This section also includes error analysis which extends results from [12] to weighted regression, as well as considering the impact of computational error on the error bounds. We start by defining weighted regression Lagrange polynomials.
3.3.1 Weighted Regression Lagrange Polynomials
Definition 3.2 A set of polynomials $\ell_j(x)$, $j = 0, \ldots, p$, in $\mathcal{P}_n^2$ are called weighted regression Lagrange polynomials with respect to the weights $w$ and sample set $Y$ if for each $j$,

$$W \bar{\ell}_j \overset{l.s.}{=} W e_j,$$

where $\bar{\ell}_j = [\ell_j(y^0), \ldots, \ell_j(y^p)]^T$.
The following lemma is a direct application of Lemma 3.1.
Lemma 3.3 Let $\phi(x) = (\phi_0(x), \ldots, \phi_q(x))^T$. If $Y$ is poised, then the set of weighted regression Lagrange polynomials exists and is unique, and is given by $\ell_j(x) = \phi(x)^T a^j$, $j = 0, \ldots, p$, where $a^j$ is the $j$th column of the matrix

$$A = (WM)^+ W. \qquad (3.6)$$

Proof: Note that $m(\cdot) = \ell_j(\cdot)$ satisfies (3.2) with $f = e_j$. By Lemma 3.1, $\ell_j(x) = \phi(x)^T a^j$, where $a^j = (WM)^+ W e_j$, which is the $j$th column of $(WM)^+ W$. ∎
The following lemma is based on [12, Lemma 4.6].

Lemma 3.4 If $Y$ is poised, then the model function defined by (3.2) satisfies

$$m(x) = \sum_{i=0}^{p} f_i\,\ell_i(x),$$

where $\ell_j(x)$, $j = 0, \ldots, p$, denote the weighted regression Lagrange polynomials corresponding to $Y$ and $W$.

Proof: By Lemma 3.1, $m(x) = \phi(x)^T\alpha$, where

$$\alpha = (WM)^+ W f = A f$$

for $A$ defined by (3.6). Let $\ell(x) = [\ell_0(x), \ldots, \ell_p(x)]^T$. Then by Lemma 3.3,

$$m(x) = \phi(x)^T A f = \ell(x)^T f = \sum_{i=0}^{p} f_i\,\ell_i(x). \qquad \blacksquare$$
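Lemma 3.4 can likewise be verified numerically. The sketch below (an illustration with an arbitrary one-dimensional quadratic basis and random data, not part of the thesis) builds $A = (WM)^+W$ and checks that evaluating the model via its coefficients agrees with the Lagrange expansion:

```python
import numpy as np

# Quadratic basis in R^1: phi(x) = (1, x, x^2/2).
phi = lambda x: np.array([1.0, x, 0.5 * x**2])

rng = np.random.default_rng(5)
Y = rng.uniform(-1, 1, size=8)
M = np.array([phi(y) for y in Y])
fvals = rng.normal(size=8)
W = np.diag(rng.uniform(0.5, 2.0, size=8))

A = np.linalg.pinv(W @ M) @ W                 # columns are the a^j of (3.6)
alpha = A @ fvals                             # model coefficients (Lemma 3.1)
x = 0.37
lagrange_vals = phi(x) @ A                    # (l_0(x), ..., l_p(x))
print(phi(x) @ alpha, lagrange_vals @ fvals)  # identical values (Lemma 3.4)
```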
3.3.2 Error Analysis
For the remainder of this chapter, let $\hat{Y} = \{\hat{y}^0, \ldots, \hat{y}^p\}$ denote the shifted and scaled sample set, where $\hat{y}^i = (y^i - y^0)/R$ and $R = \max_i \|y^i - y^0\|$. Note that $\hat{y}^0 = 0$ and $\max_i \|\hat{y}^i\| = 1$. Any analysis of $\hat{Y}$ can be directly related to $Y$ by the following lemma:
Lemma 3.5 Define the basis

Full Text

PAGE 1

DERIVATIVE-FREEOPTIMIZATIONOFNOISYFUNCTIONS by JereyM.Larson B.A.,CarrollCollege,2005 M.S.,UniversityofColoradoDenver,2008 Athesissubmittedtothe FacultyoftheGraduateSchoolofthe UniversityofColoradoinpartialfulllment oftherequirementsforthedegreeof DoctorofPhilosophy AppliedMathematics 2012

PAGE 2

ThisthesisfortheDoctorofPhilosophydegreeby JereyM.Larson hasbeenapproved by StephenBillups,AdvisorandChair AlexanderEngau BurtSimon MichaelJacobson FredGlover Date ii

PAGE 3

Larson,JereyM.Ph.D.,AppliedMathematics Derivative-freeOptimizationofNoisyFunctions ThesisdirectedbyAssociateProfessorStephenBillups ABSTRACT Derivative-freeoptimizationDFOproblemswithnoisyfunctionsareincreasinglyprevalent.Inthisthesis,weproposetwoalgorithmsfornoisyDFO,aswellas terminationcriteriaforgeneralDFOalgorithms.Bothproposedalgorithmsarebased onthemethodsofConn,Scheinberg,andVicente[9]whichuseregressionmodelsin atrustregionframework.Therstalgorithmutilizesweightedregressiontoobtain moreaccuratemodelfunctionsateachtrustregioniteration.Aweightingscheme isproposedwhichsimultaneouslyhandlesdieringlevelsofuncertaintyinfunction evaluationsanderrorsinducedbypoormodeldelity.Toproveconvergenceofthis algorithm,weextendthetheoryof-poisednessandstrong-poisednesstoweighted regression.Thesecondalgorithmmodiestherstforfunctionswithstochasticnoise. Weproveouralgorithmgeneratesasubsequenceofiterateswhichconvergealmost surelytoarst-orderstationarypoint,providedthenumberofsuccessfulstepsis niteandthenoiseforeachfunctionevaluationisindependentandidenticallynormallydistributed.Lastly,weaddressterminationofDFOalgorithmsonfunctions withnoisecorruptedevaluations.Ifthefunctioniscomputationallyexpensive,stoppingwellbeforetraditionalcriteriae.g.,afterabudgetoffunctionevaluationsis exhaustedaresatisedcanyieldsignicantsavings.Earlyterminationisespecially desirablewhenthefunctionbeingoptimizedisnoisy,andthesolverproceedsforan extendedperiodwhileonlyseeingchangeswhichareinsignicantrelativetothenoise inthefunction.Wedeveloptechniquesforcomparingthequalityofterminationtests, proposefamiliesofteststobeusedongeneralDFOalgorithms,andthencompare iii

PAGE 4

thetestsintermsofbothaccuracyandeciency. Theformandcontentofthisabstractareapproved.Irecommenditspublication. Approved:StephenBillups iv

PAGE 5

ACKNOWLEDGMENT Iwouldliketothankmyadvisor,SteveBillups,forhisyearsofresearchassistance andadvice.Hisguidancewasinstrumentalinobtainingtheresultsinthisthesis.I wouldalsoliketothankPeterGrafandStefanWildfortheirassistanceinresearching andwritingpartsofthisthesis.Theresearchinthisthesiswaspartiallysupportedby NationalScienceFoundationGrantGK-12-0742434andpartiallysupportedbythe OceofAdvancedScienticComputingResearch,OceofScience,U.S.Department ofEnergy,underContractDE-AC02-06CH11357.Lastly,I'dliketothankmywife, Jessica,foryearsofpatiencewhiletheresearchforthisthesiswasperformed. v

PAGE 6

TABLEOFCONTENTS Figures.......................................ix Tables........................................xi Chapter 1.Introduction...................................1 1.1ReviewofMethods...........................3 1.2Outline..................................5 1.3Notation.................................8 2.Background...................................10 2.1Model-basedTrustRegionMethods..................10 2.1.1ModelConstructionWithoutDerivatives............11 2.1.2CSV2-framework.........................11 2.1.3Poisedness.............................18 2.2PerformanceProles...........................23 2.3ProbabilisticConvergence........................24 3.Derivative-freeOptimizationofExpensiveFunctionswithComputational ErrorUsingWeightedRegression.......................26 3.1Introduction...............................26 3.2ModelConstruction...........................27 3.3ErrorAnalysisandtheGeometryoftheSampleSet.........29 3.3.1WeightedRegressionLagrangePolynomials..........29 3.3.2ErrorAnalysis..........................30 3.3.3-poisednessintheWeightedRegressionSense.......35 3.4ModelImprovementAlgorithm.....................39 3.5ComputationalResults.........................50 3.5.1UsingErrorBoundstoChooseWeights............50 3.5.2BenchmarkPerformance.....................52 vi

PAGE 7

3.6SummaryandConclusions.......................57 4.StochasticDerivative-freeOptimizationusingaTrustRegionFramework.59 4.1PreliminaryResultsandDenitions..................62 4.1.1Modelswhichare -fullyQuadraticwithCondence1 )]TJ/F19 11.9552 Tf 11.955 0 Td [( k .65 4.1.2Modelswhichare^ -fullyLinearwithCondence1 )]TJ/F19 11.9552 Tf 11.955 0 Td [( k ...69 4.2StochasticOptimizationAlgorithm...................70 4.3Convergence...............................73 4.3.1ConvergencetoaFirst-orderStationaryPoint.........73 4.3.2InnitelyManySuccessfulSteps................78 4.4ComputationalExample........................80 4.4.1DeterministicTrustRegionMethod...............80 4.4.2StochasticTrustRegionMethod................81 4.5Conclusion................................83 5.Non-intrusiveTerminationofNoisyOptimization..............84 5.1IntroductionandMotivation......................84 5.2Background...............................87 5.3StoppingTests..............................90 5.3.1 f i 0 Test..............................92 5.3.2Max-Dierencef Test......................92 5.3.3Max-Distancex Test.......................93 5.3.4Max-Distancex i Test......................93 5.3.5Max-BudgetTest.........................94 5.3.6TestsBasedonEstimatesoftheNoise.............94 5.3.7RelationshiptoLossFunctions.................96 5.4NumericalExperiments.........................98 5.4.1AccuracyProlesforthe 1 Family...............100 5.4.2PerformanceProlesforthe 1 Family.............102 vii

PAGE 8

5.4.3AccuracyandPerformancePlotsforthe 2 Family......104 5.4.4Across-familyComparisons...................106 5.4.5DeterministicNoise.......................107 5.4.6ValidationforIndividualSolvers................109 5.5Discussion................................110 6.ConcludingRemarks..............................113 References ......................................119 viii


FIGURES

2.1  An example of a performance profile.
3.1  Performance (left) and data (right) profiles: interpolation vs. regression vs. weighted regression.
3.2  Performance (left) and data (right) profiles: weighted regression vs. NEWUOA vs. DFO (problems with stochastic noise).
4.1  Several iterations of a traditional trust region method assuming deterministic function evaluations. The trust region center is never moved.
4.2  Several iterations of a traditional trust region method assuming stochastic function evaluations.
5.1  Part of a noisy trajectory of function values for an expensive nuclear physics problem. After more significant decreases in the first 70 evaluations, progress begins to stall.
5.2  First terms in the 1 family (top, with parameter value 100) and the 2 family (bottom, with parameter value 10) on a log10 scale when minimizing a 10-dimensional convex quadratic with stochastic relative noise of different magnitudes. The asymptotes of the quantities shown tend to be separated by the differences in magnitudes of the noise.
5.3  Number of evaluations for a termination test based on (5.3) with fixed {F_i} and tolerance, but using a tolerance parameterized by c. The plots show remarkably similar behavior to the number of evaluations that minimize the loss L(·; c) in (5.8).
5.4  Accuracy profiles for members of the 1 family on problems (5.9) with two different magnitudes of known stochastic relative noise. In the top plots, one test parameter is held fixed and the shown members have different values of the other; in the bottom plots the roles are reversed.


5.5  Performance profiles for the most accurate 1-family tests on problems (5.9) with two different magnitudes of known stochastic relative noise. Note that the horizontal axis has been truncated for each plot; test 5 eventually terminates all of the problems and thus has a profile that will reach the value 1; all other tests change by less than .01.
5.6  Accuracy (top) and performance (bottom) profiles for the 2 family on problems (5.9) with two different magnitudes of stochastic relative noise as the test parameters are varied.
5.7  Accuracy (top) and performance (bottom) profiles for the best tests on problems (5.9) with two different magnitudes of stochastic relative noise. The horizontal axes on the performance profiles are truncated for clarity; test 5 eventually achieves a value of 1; all other tests change by less than .03.
5.8  Accuracy (top) and performance (bottom) profiles for the best tests on problems (5.9) with two different magnitudes of deterministic noise. The horizontal axes on the performance profiles are truncated for clarity; test 5 eventually achieves a value of 1; all other tests change by less than .03.
5.9  Performance profiles for more conservative tests on problems (5.9) with two different magnitudes of deterministic noise. The horizontal axes on the performance profiles are truncated for clarity; test 5 eventually achieves a value of 1; all other tests change by less than .03.
5.10 Accuracy profiles for the individual algorithms on the recommended tests.


TABLES

5.1  Recommendations for termination tests for noisy optimization.



value may differ, re-evaluating the function with the same choice of parameters will provide no further information. In contrast, re-evaluating a function with stochastic noise will provide additional information about the true value of the function. Two common sources of stochastic noise are found in functions whose evaluation requires a large-scale Monte-Carlo simulation of an actual system or if a function evaluation requires physically measuring a property in some system.

The thesis consists of three distinct but connected chapters addressing the problem

min_x f(x),   f : R^n → R,

when the algorithm only has access to noise corrupted values

f̃(x) := f(x) + ε(x),

where ε(x) denotes the noise.

Each chapter makes different assumptions about the noise ε(x). Chapter 3 assumes that the accuracy with which f̃ approximates f can vary, and that this accuracy can be quantified. For example, if f̃(x) is calculated using a Monte-Carlo simulation, increasing the number of trials will decrease the magnitude of ε(x). Similarly, if f̃ is calculated by a finite element method, increased accuracy can be obtained by a finer grid. Of course, this greater accuracy comes at the cost of increased computational time, so it makes sense to vary the accuracy requirements over the course of the optimization. With this in mind, Chapter 3 asks: How can knowledge of the accuracy of different function evaluations be exploited to design a better algorithm?

Chapter 4 assumes that the noise for each function evaluation is independent of x and can be modeled as a normally distributed random variable with mean zero and a fixed, finite standard deviation. Though many other algorithms have been designed to optimize such a function, they often resort to repeated sampling of points. This provides information about the noise at a point, but no information about the


function nearby. This motivates the question addressed in Chapter 4: How can greater accuracy be efficiently achieved by oversampling without necessarily repeating function evaluations?

Chapter 5 assumes that a reasonably accurate estimate of the magnitude of the noise can be obtained, and that this estimate remains relatively constant throughout the domain. Though there are many algorithms in the literature designed to optimize noisy functions, very few use estimates of the noise in their termination criteria. When function evaluations are cheap, termination can be determined by common tests (e.g., small step size or gradient approximation). But when function evaluations are expensive, determining when to stop becomes an important multi-objective optimization problem. The optimizer wants to find the best solution possible while minimizing computational effort. As this is a difficult problem to explicitly formulate, practitioners frequently terminate algorithms when (i) a predefined number of iterations has elapsed, (ii) no decrease in the optimal function value has been detected for a number of iterations, or (iii) the distance between a number of successive iterates is below some threshold. Chapter 5 attempts to answer the question: When should an algorithm optimizing an expensive, noisy function be terminated?

1.1 Review of Methods

Before discussing our algorithms further, we first discuss previous DFO techniques.

Heuristics are perhaps the first recourse when attempting to optimize a function without derivatives. Simulated annealing [36, 63], genetic algorithms [28], random search and its variants [55, 66, 39, 52], tabu search [20], scatter search [21], particle swarm optimization [34], and Nelder-Mead [45] are just a few of the heuristics developed to solve problems where only function evaluations are available. Though most of these algorithms lack formal convergence results (aside from results dependent on the algorithm producing iterates which are dense in the domain), they remain popular


due to their ease of implementation, flexibility on a variety of problem classes, and frequent success in practice.

Other techniques attempt to approximate the unavailable derivative. Classical finite-difference methods approximate the derivative by adjusting each variable and noting the change in the function value. Other techniques, such as the pattern search methods [62, 2] and implicit filtering [5], evaluate a changing pattern of points around the best known solution. Also of note is the DIRECT algorithm [30], a global optimization method based on dividing hyper-rectangles using only function values.

An increasingly popular class of algorithms for derivative-free optimization (DFO) are model-based trust region methods [31, 11]. Local models approximating the function are constructed and minimized to generate successive iterates. These models are commonly low-order polynomials interpolating function values close to the best known value, for example Powell's UOBYQA algorithm [48]. Other examples include [49], where Powell introduces a minimum Frobenius norm condition on underdetermined quadratic models, and ORBIT by Wild et al. [64], which constructs models using interpolating radial basis functions. These local models should not be confused with kriging [59] or response surface methodologies [44], which build global models of the function. Though implementation of these techniques is not as simple as some other techniques, a well-developed convergence theory exists. As this thesis focuses on noisy DFO problems, we considered trust-region methods with regression models most appropriate since, in many cases, regression models through enough points can approximate the true function.

There are also a variety of existing DFO algorithms for optimizing functions with noise. For functions with stochastic noise, replications of function evaluations can be a simple way to modify existing algorithms. For example, [14] modifies Powell's UOBYQA [48], [15] modifies DIRECT [30], and [61] modifies Nelder-Mead by repeated sampling of the function at points of interest. For deterministic noise, Kelley


[33] proposes a technique to detect and restart Nelder-Mead methods. Neumaier's SNOBFIT [29] algorithm accounts for noise by not requiring the surrogate functions to interpolate function values, but rather fit a stochastic model. Similarly, [10] proposes using least-squares regression models instead of interpolating models when noise is present in the function evaluations.

Lastly, Stochastic Approximation algorithms are also designed to minimize functions with stochastic noise. These algorithms are developed by statisticians to solve

min f(x) = E[f̃(x)],

when only f̃ can be computed. Two of the more famous algorithms, the Kiefer-Wolfowitz and Simultaneous Perturbation methods, take predefined step lengths in a direction approximating ∇f. These techniques have strong theoretical convergence results, but can be difficult to implement in practice. For further discussion of these algorithms, see the beginning of Chapter 4.

1.2 Outline

The work in this thesis focuses on modifications to model-based trust region methods in order to handle noise. Throughout the thesis we assume that only noisy, expensive function evaluations f̃ are available, but there is some smooth underlying function f which is twice continuously differentiable with a Lipschitz continuous Hessian. We also assume that the noise in the evaluation of f̃ is unbiased with bounded variance.

Chapter 3 (joint work with Stephen Billups and Peter Graf) proposes a DFO algorithm to optimize functions which are expensive to evaluate and contain computational noise. The algorithm is based on the trust region methods of [9, 10] which build interpolation (or regression) models around the best known solution. We propose using weighted regression models in a trust region framework, and prove convergence of such methods provided the weighting scheme satisfies some basic conditions.
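The idea behind weighted regression can be illustrated in a few lines. The sketch below is not the thesis's algorithm; it fits a weighted least-squares model with the hypothetical weight choice w_i = 1/σ_i, so that less accurate evaluations influence the model less:

```python
import numpy as np

# Sample points, per-point noise levels sigma_i, and noisy evaluations of
# a hypothetical underlying function f(y) = 1 + 2*y_1 - 3*y_2.
rng = np.random.default_rng(3)
Y = rng.uniform(-1, 1, size=(30, 2))
sigma = rng.uniform(0.01, 1.0, size=30)          # assumed-known accuracy levels
fvals = 1 + 2*Y[:, 0] - 3*Y[:, 1] + sigma * rng.standard_normal(30)

# Weighted regression: solve min_a || W (M a - fvals) || with W = diag(w).
M = np.column_stack([np.ones(30), Y])            # linear basis [1, y_1, y_2]
W = np.diag(1.0 / sigma)                         # down-weight noisy evaluations
alpha, *_ = np.linalg.lstsq(W @ M, W @ fvals, rcond=None)
```

With these weights the accurate evaluations dominate the fit, and alpha recovers the coefficients (1, 2, −3) far more reliably than an unweighted fit would at the noisiest points.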


The algorithm fits into a general framework for derivative-free trust region methods using quadratic models, which was described by Conn, Scheinberg, and Vicente in [12, 11]. We shall refer to this framework as the CSV2-framework. This framework constructs a quadratic model function m_k, which approximates the objective function f on a set of sample points Y_k ⊂ R^n at each iteration k. The next iterate is then determined by minimizing m_k over a trust region. In order to guarantee global convergence, the CSV2-framework monitors the distribution of points in the sample set, and occasionally invokes a model-improvement algorithm that modifies the sample set to ensure m_k accurately approximates f. The CSV2-framework is the basis of the well-known DFO algorithm which is freely available on the COIN-OR website [38].

To fit our algorithm into the CSV2-framework we extend the theory of poisedness, as described in [12], to weighted regression. We show (Proposition 3.12) that a sample set that is strongly Λ-poised in the regression sense is also strongly cΛ-poised in the weighted regression sense for some constant c, provided that no weight is too small relative to the other weights. Thus, any model improvement scheme that ensures strong Λ-poisedness in the regression sense can be used in the weighted regression framework.

The convergence proof in Chapter 3 requires that the computational error decreases as the trust region decreases; such an assumption can be satisfied if the user has some control of the accuracy in the function evaluation. Since Chapter 3 is centered on exploiting differing accuracy levels in different function evaluations, such an assumption is reasonable for that chapter. In Chapter 4, we remove this assumption, but add the assumption that f̃ has the form

f̃(x) = f(x) + ε,    (1.1)

where ε ~ N(0, σ²). The content of Chapter 4 (joint work with Stephen Billups) modifies the algorithm from Chapter 3 to converge when the error does not decrease


with the trust region radius. With some light assumptions on the noise and underlying function, we prove the algorithm generates a subsequence of iterates which converge almost surely to a first-order stationary point in the case where the number of successful iterates is finite.

At a given point of interest x_0, the algorithm does not repeatedly sample f̃(x_0) in order to glean information about f(x_0). Rather m_k(x_0), the value of the trust region model at x_0, is used to estimate f(x_0). We derive bounds on the error between f and m, provided the set of points used to construct m satisfies certain geometric conditions, called strong Λ-poisedness (see Definition 2.14), and contains a sufficient number of points. Also, the step size is controlled by the algorithm, increasing and decreasing as the algorithm progresses and stagnates. This contrasts many of the methods in the Stochastic Approximation literature, where the user must provide a predefined set of steps to be taken by the algorithm.

The results in Section 4.3 prove the algorithm will generate a subsequence of iterates converging almost surely to a first-order stationary point when the number of successful iterates is finite, and makes progress in the infinite case. Such results are not common for most DFO algorithms on problems with stochastic noise. Both the simplicial direct search method [1] and the trust region method in [4] prove similar convergence results, but both reduce the variance at a point by repeated sampling. In addition to our strong convergence result, we are able to directly quantify the probability of the success of some iterates (see Lemma 4.15 for one such example). We are unaware of any other similar theoretical results for algorithms minimizing stochastic functions.

Chapter 5 (joint work with Stefan Wild) addresses termination criteria, the choice of which is a common problem when optimizing noisy functions. We propose objective measures to compare the quality of termination rules. Families of termination tests are then proposed and their performance is analyzed across a broad range of DFO


algorithms. Recommendations for tests which work for many algorithms are also provided. Lastly, Chapter 6 contains concluding remarks and directions for future research.

1.3 Notation

The following notation will be used: R^n denotes the Euclidean space of real n-vectors. ‖·‖_p denotes the standard ℓ_p vector norm, and ‖·‖ without the subscript denotes the Euclidean norm. ‖·‖_F denotes the Frobenius norm of a matrix. C^k denotes the set of functions on R^n with k continuous derivatives. D^j f denotes the j-th derivative of a function f ∈ C^k, j ≤ k. Given an open set Ω ⊆ R^n, LC^k(Ω) denotes the set of C^k functions with Lipschitz continuous k-th derivatives. That is, for f ∈ LC^k(Ω), there exists a Lipschitz constant L such that

‖D^k f(y) − D^k f(x)‖ ≤ L ‖y − x‖  for all x, y ∈ Ω.

P^d_n denotes the space of polynomials of degree less than or equal to d in R^n; q_1 denotes the dimension of P^2_n (specifically, q_1 = (n + 1)(n + 2)/2). We use standard "big-Oh" notation (written O(·)) to state, for example, that for two functions on the same domain, f(x) = O(g(x)) if there exists a constant M such that |f(x)| ≤ M|g(x)| for all x with sufficiently small norm. Given a set Y, |Y| denotes the cardinality and conv(Y) denotes the convex hull. For a real number α, ⌊α⌋ denotes the greatest integer less than or equal to α. For a matrix A, A^+ denotes the Moore-Penrose generalized inverse [22]. e_j denotes the j-th column of the identity matrix. The ball of radius Δ centered at x ∈ R^n is denoted B(x; Δ). Given a vector w, diag(w) denotes the diagonal matrix W with diagonal entries W_ii = w_i. For a square matrix A, cond(A) denotes the condition number, λ_min(A) denotes the smallest eigenvalue, and σ_min(A) denotes the smallest singular value. For a set Y := {y^0, …, y^p} ⊂ R^n, the quadratic design matrix X has rows

[ 1, y^j_1, …, y^j_n, ½(y^j_1)², y^j_1 y^j_2, …, y^j_{n−1} y^j_n, ½(y^j_n)² ].    (1.2)


Let m_k denote the k-th trust region model as defined in Chapter 2. Let g_k = ∇m_k(x_k) and H_k = ∇²m_k(x_k). Define

σ^m_k(x) = max{ ‖∇m_k(x)‖, −λ_min(∇²m_k(x)) },
σ(x) = max{ ‖∇f(x)‖, −λ_min(∇²f(x)) }.

These variables measure how close x is to a first- and second-order stationary point of f and m_k (i.e., the gradient is zero and all eigenvalues of the Hessian are positive). If X is a random variable, the notation X ≥_δ c denotes P(X ≥ c) ≥ δ. Note that the relation ≥_{1−δ} is not transitive.


2. Background

Before continuing, we introduce the background material on which the thesis is constructed.

2.1 Model-based Trust Region Methods

A trust region algorithm is a numerical technique for minimizing a sufficiently smooth function f. At each iteration k, a model function m_k(x) is constructed to approximate f near the best point x_k. When derivatives are available, m_k is usually a truncated first- or second-order Taylor series approximation of f at x_k. For example, if ∇f and ∇²f are easy to calculate,

m_k(x) = f(x_k) + ∇f(x_k)ᵀ(x − x_k) + ½ (x − x_k)ᵀ ∇²f(x_k) (x − x_k).

m_k is minimized over the trust region B(x_k; Δ_k) by solving the problem

min_{s : ‖s‖ ≤ Δ_k} m_k(x_k + s)

to generate a potential next trust region center x_k + s_k. f(x_k + s_k) is evaluated and the ratio

ρ_k = (f(x_k) − f(x_k + s_k)) / (m_k(x_k) − m_k(x_k + s_k))

is calculated, which compares the actual decrease in f versus the decrease predicted by the model m_k. This ratio quantifies the success of iteration k and also how well the model function approximates the true function f. If ρ_k is larger than a prescribed threshold η_1, it indicates that the iteration was successful and the model is a good approximation of the function. In this case, x_{k+1} is set to x_k + s_k and the trust region radius Δ_k is increased. If ρ_k is less than another parameter η_0, the model function is considered unreliable, so the trust region radius Δ_k is decreased and the iterate is not updated (i.e., x_{k+1} = x_k). Lastly, k is incremented and the process repeats.

2.1.1 Model Construction Without Derivatives


When derivatives are unavailable, the models m_k are constructed using points where f has been evaluated. For example, the Conn, Scheinberg, and Vicente framework (which we refer to as the CSV2-framework) builds models m_k from a specified class of models M using a sample set of points Y_k = {y^0, …, y^p} ⊂ B(x_k; Δ_k) on which the function has been evaluated.

Given Y_k and a vector of corresponding function values f = (f(y^0), …, f(y^p)), an interpolating model is a model m(x) such that m(y^i) = f(y^i) for i = 0, …, p. Given a basis φ = {φ_0(x), …, φ_q(x)} of the class of models M, we can calculate the coefficients α_i in the basis representation of the interpolating model m(x) = Σ_{i=0}^q α_i φ_i(x) by solving the linear system

M(φ, Y) α = f,    (2.1)

where

M(φ, Y) =
[ φ_0(y^0)  φ_1(y^0)  …  φ_q(y^0) ]
[ φ_0(y^1)  φ_1(y^1)  …  φ_q(y^1) ]
[    ⋮          ⋮      ⋱      ⋮    ]
[ φ_0(y^p)  φ_1(y^p)  …  φ_q(y^p) ].

Note that for this equation to have a unique solution, the number of sample points p + 1 must equal the size of the basis q + 1, and the matrix M(φ, Y) must be invertible.

Regression models have also been investigated [10], in which the number of sample points p + 1 is greater than the size of the basis. In this case, the matrix M(φ, Y) has more rows than columns, so the equation (2.1) is solved in a least squares sense.

Lastly, if M(φ, Y) has more columns than rows, the system (2.1) is underdetermined. Nevertheless, bounds between the function and an underdetermined model can be obtained in certain cases. For example, see [13] considering the minimum Frobenius norm method.

2.1.2 CSV2-framework


We next outline the CSV2-framework for derivative-free trust region methods described by Conn, Scheinberg, and Vicente [12, Algorithm 10.3]. Algorithms in the framework construct a model function m_k at iteration k, which approximates the objective function f on a set of sample points Y_k = {y^0, …, y^{p_k}} ⊂ R^n. The next iterate is then determined by minimizing m_k. Specifically, given the iterate x_k, a putative next iterate is given by x_k + s_k, where the step s_k solves the trust region subproblem

min_{s : ‖s‖ ≤ Δ_k} m_k(x_k + s),

where the scalar Δ_k > 0 denotes the trust region radius, which may vary from iteration to iteration. If x_k + s_k produces sufficient descent in the model function, then f(x_k + s_k) is evaluated, and the iterate is accepted if f(x_k + s_k) < f(x_k).

Definition 2.2 Let f satisfy Assumption 2.1, let κ = (κ_ef, κ_eg, κ_eh, ν^m_2) be a given vector of constants, and let Δ > 0. A model function m ∈ C² is κ-fully quadratic on B(x; Δ) if m has a Lipschitz continuous Hessian with corresponding Lipschitz constant bounded by ν^m_2 and


- the error between the Hessian of the model and the Hessian of the function satisfies

  ‖∇²f(y) − ∇²m(y)‖ ≤ κ_eh Δ  for all y ∈ B(x; Δ);

- the error between the gradient of the model and the gradient of the function satisfies

  ‖∇f(y) − ∇m(y)‖ ≤ κ_eg Δ²  for all y ∈ B(x; Δ);

- the error between the model and the function satisfies

  |f(y) − m(y)| ≤ κ_ef Δ³  for all y ∈ B(x; Δ).

Definition 2.3 Let f satisfy Assumption 2.1. A set of model functions M = {m : R^n → R, m ∈ C²} is called a fully quadratic class of models if there exist positive constants κ = (κ_ef, κ_eg, κ_eh, ν^m_2) such that the following hold:

1. For any x ∈ S and Δ ∈ (0, Δ_max], there exists a model function m in M which is κ-fully quadratic on B(x; Δ).

2. For this class M, there exists an algorithm, called a "model-improvement" algorithm, that in a finite, uniformly bounded (with respect to x and Δ) number of steps can

   - either certify that a given model m ∈ M is κ-fully quadratic on B(x; Δ),
   - or find a model m̃ ∈ M that is κ-fully quadratic on B(x; Δ).

Note that this definition of a fully quadratic class of models is equivalent to [12, Definition 6.2]; but we have given a separate definition of a κ-fully quadratic model (Definition 2.2) that includes the use of κ to stress the fixed nature of the bounding constants. This change simplifies some analysis by allowing us to discuss κ-fully quadratic models independent of the class of models they belong to. It is important


to note that κ does not need to be known explicitly. Instead, it can be defined implicitly by the model improvement algorithm. All that is required is for κ to be fixed (that is, independent of x and Δ). We also note that the set M may include non-quadratic functions, but when the model functions are quadratic, the Hessian is fixed, so ν^m_2 can be chosen to be zero. For the algorithms presented in Chapter 3 and Chapter 4, we focus on model functions that are quadratic. That is, M = P²_n.

As a side note, κ-fully quadratic models may be too difficult to construct or may be undesired in some situations. If that is the case, κ̂-fully linear models might provide a useful alternative.

Definition 2.4 Let f ∈ LC², let κ̂ = (κ̂_ef, κ̂_eg, ν^m_1) be a given vector of constants, and let Δ > 0. A model function m ∈ C² is κ̂-fully linear on B(x; Δ) if m has a Lipschitz continuous gradient with corresponding Lipschitz constant bounded by ν^m_1 and

- the error between the gradient of the model and the gradient of the function satisfies

  ‖∇f(y) − ∇m(y)‖ ≤ κ̂_eg Δ  for all y ∈ B(x; Δ);

- the error between the model and the function satisfies

  |f(y) − m(y)| ≤ κ̂_ef Δ²  for all y ∈ B(x; Δ).

If m_k is κ̂-fully linear, it approximates f in a fashion similar to the first-order Taylor model of f. If m_k is κ-fully quadratic, then it approximates f in a fashion similar to the second-order Taylor model of f. If κ̂-fully linear or κ-fully quadratic models are used within the CSV2-framework, we can guarantee convergence of the algorithm to a first- or second-order stationary point of f.

A critical distinction between the CSV2-framework and classical trust region methods lies in the optimality criteria. In classical trust region methods, m_k is the


second-order Taylor approximation of f at x_k; so if x_k is optimal for m_k, it satisfies the first- and second-order necessary conditions for an optimum of f. In the CSV2-framework, x_k must be optimal for m_k, but m_k must also be an accurate approximation of f near x_k. This requires that the trust region radius is small and that m_k is κ-fully quadratic on the trust region for some fixed κ.

To explicitly outline the CSV2-framework, we provide pseudocode below. In the algorithm, g^icb_k and H^icb_k denote the gradient and Hessian of the incumbent model m^icb_k. We use the superscript icb to stress that incumbent parameters from the previous iterates may be changed before they are used in the trust region step. The optimality of x_k with respect to m_k is tested by calculating σ^icb_k = max{‖g^icb_k‖, −λ_min(H^icb_k)}. This quantity is zero if and only if the first- and second-order optimality conditions for m_k are satisfied. The algorithm enters the criticality step when σ^icb_k is close to zero. This routine builds a (possibly new) κ-fully quadratic model for the current Δ^icb_k, and tests if σ^icb_k for this model is sufficiently large. If so, a descent direction has been determined, and the algorithm can proceed. If not, the criticality step reduces Δ^icb_k and updates the sample set to improve the accuracy of the model function near x_k. The criticality step ends when σ^icb_k is large enough (and the algorithm proceeds) or when both σ^icb_k and Δ^icb_k are smaller than given threshold values ε_c and Δ_min (in which case the algorithm has identified a second-order stationary point). We refer the reader to [12] for a more detailed discussion of the algorithm, including explanations of the parameters η_0, η_1, γ, γ_inc, ε_c, β, μ, and ω.

Algorithm CSV2 [12, Algorithm 10.3]

Step 0 (initialization): Choose a fully quadratic class of models M and a corresponding model-improvement algorithm, with associated κ defined by Definition 2.3. Choose an initial point x_0 and maximum trust region radius Δ_max > 0. We assume that the following are given: an initial model m^icb_0(x) with gradient and Hessian at


x = x_0 denoted by g^icb_0 and H^icb_0, respectively, σ^icb_0 = max{‖g^icb_0‖, −λ_min(H^icb_0)}, and a trust region radius Δ^icb_0 ∈ (0, Δ_max].

The constants η_0, η_1, γ, γ_inc, ε_c, β, μ, ω are given and satisfy the conditions 0 ≤ η_0 ≤ η_1 < 1 (with η_1 ≠ 0), 0 < γ < 1 < γ_inc, ε_c > 0, μ > β > 0, ω ∈ (0, 1). Set k = 0.

Step 1 (criticality step): If σ^icb_k > ε_c, then m_k = m^icb_k and Δ_k = Δ^icb_k.

If σ^icb_k ≤ ε_c, then proceed as follows. Call the model-improvement algorithm to attempt to certify if the model m^icb_k is κ-fully quadratic on B(x_k; Δ^icb_k). If at least one of the following conditions holds,

- the model m^icb_k is not certifiably κ-fully quadratic on B(x_k; Δ^icb_k), or
- Δ^icb_k > μ σ^icb_k,

then apply Algorithm CriticalityStep (described below) to construct a model m̃_k(x), with gradient and Hessian at x = x_k denoted by g̃_k and H̃_k, respectively, and with σ̃^m_k = max{‖g̃_k‖, −λ_min(H̃_k)}, which is κ-fully quadratic on the ball B(x_k; Δ̃_k) for some Δ̃_k ∈ (0, μ σ̃^m_k] given by [12, Algorithm 10.4]. In such a case set

m_k = m̃_k  and  Δ_k = min{ max{Δ̃_k, β σ̃^m_k}, Δ^icb_k }.

Otherwise, set m_k = m^icb_k and Δ_k = Δ^icb_k. For a more complete discussion of trust region management, see [26].

Step 2 (step calculation): Compute a step s_k that sufficiently reduces the model m_k (in the sense of [12, (10.13)]) such that x_k + s_k ∈ B(x_k; Δ_k).

Step 3 (acceptance of the trial point): Compute f(x_k + s_k) and define

ρ_k = (f(x_k) − f(x_k + s_k)) / (m_k(x_k) − m_k(x_k + s_k)).

If ρ_k ≥ η_1, or if both ρ_k ≥ η_0 and the model is κ-fully quadratic on B(x_k; Δ_k), then x_{k+1} = x_k + s_k and the model is updated to include the new iterate into the sample set, resulting in a new model m^icb_{k+1}(x) with gradient and Hessian at x = x_{k+1} denoted


by g^icb_{k+1} and H^icb_{k+1}, respectively, with σ^icb_{k+1} = max{‖g^icb_{k+1}‖, −λ_min(H^icb_{k+1})}; otherwise, the model and the iterate remain unchanged (m^icb_{k+1} = m_k and x_{k+1} = x_k).

Step 4 (model improvement): If ρ_k < η_1, use the model-improvement algorithm to

- attempt to certify that m_k is κ-fully quadratic on B(x_k; Δ_k);
- if such a certificate is not obtained, we say that m_k is not certifiably κ-fully quadratic and make one or more suitable improvement steps.

Define m^icb_{k+1} to be the (possibly improved) model.

Step 5 (trust region update): Set

Δ^icb_{k+1} ∈ { min{γ_inc Δ_k, Δ_max} }       if ρ_k ≥ η_1 and Δ_k < β σ^m_k,
              [ Δ_k, min{γ_inc Δ_k, Δ_max} ]  if ρ_k ≥ η_1 and Δ_k ≥ β σ^m_k,
              { γ Δ_k }                       if ρ_k < η_1 and m_k is κ-fully quadratic,
              { Δ_k }                         if ρ_k < η_1 and m_k is not certifiably κ-fully quadratic.

Increment k by 1 and go to Step 1.

Algorithm CriticalityStep [12, Algorithm 10.4]: This algorithm is applied only if σ^icb_k ≤ ε_c and at least one of the following holds: the model m^icb_k is not certifiably κ-fully quadratic on B(x_k; Δ^icb_k), or Δ^icb_k > μ σ^icb_k.

Initialization: Set i = 0 and m^(0)_k = m^icb_k.

Repeat: Increment i by one. Use the model improvement algorithm to improve the previous model m^(i−1)_k until it is κ-fully quadratic on B(x_k; ω^{i−1} Δ^icb_k). Denote the new model by m^(i)_k. Set Δ̃_k = ω^{i−1} Δ^icb_k and m̃_k = m^(i)_k.

Until Δ̃_k ≤ μ σ^m_k(i), where σ^m_k(i) denotes the criticality measure of m^(i)_k at x_k.
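The accept/reject logic of Step 3 and the radius update of Step 5 can be sketched as follows. This is a minimal deterministic illustration, not the safeguarded CSV2 logic above: the constants are placeholder choices, and the fully-quadratic certification is ignored.

```python
import numpy as np

def trust_region_update(f, m, x, s, delta, eta0=0.1, eta1=0.75,
                        gamma=0.5, gamma_inc=2.0, delta_max=10.0):
    """Compare actual vs. predicted decrease and update (x, delta)."""
    rho = (f(x) - f(x + s)) / (m(x) - m(x + s))
    if rho >= eta1:                                   # successful step
        return x + s, min(gamma_inc * delta, delta_max)
    elif rho >= eta0:                                 # acceptable step
        return x + s, delta
    else:                                             # reject and shrink
        return x, gamma * delta

f = lambda z: z @ z
m = f                     # with exact derivatives the model matches f, so rho = 1
x = np.array([3.0, 4.0])
s = np.array([-0.6, -0.8])                            # a step of length delta = 1
x_new, delta_new = trust_region_update(f, m, x, s, delta=1.0)
```

Here the model decrease equals the actual decrease, so the step is accepted and the radius doubles; swapping in a function that increases along s would instead trigger the reject-and-shrink branch.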


Global Convergence. If the following assumptions are satisfied, it has been shown that the CSV2-framework iterates will converge to a stationary point of f. Define the set L(x_0) = {x ∈ R^n : f(x) ≤ f(x_0)}.

Assumption 2.5 Assume that f is bounded from below on L(x_0).

Assumption 2.6 There exists a constant κ_bhm > 0 such that, for all x_k generated by the algorithm,

‖H_k‖ ≤ κ_bhm.

Theorem 2.7 [12, Theorem 10.23] Let Assumptions 2.1, 2.5 and 2.6 hold with S = L(x_0). Then, if the models used in the CSV2-framework are κ-fully quadratic, the iterates x_k generated by the CSV2-framework satisfy

lim_{k→+∞} max{ ‖∇f(x_k)‖, −λ_min(∇²f(x_k)) } = 0.

It follows from this theorem that any accumulation point of {x_k} satisfies the first- and second-order necessary conditions for a minimum of f.

2.1.3 Poisedness

Having outlined the CSV2-framework, we can discuss conditions on the sample set used to build m_k which guarantee the model sufficiently approximates f. Consider the set of polynomials in R^n of degree less than or equal to d, denoted P^d_n. A basis φ = {φ_0(x), …, φ_q(x)} of P^d_n is a set of polynomials from P^d_n which span P^d_n. That is, for any P(x) ∈ P^d_n, there exist coefficients α_i such that P(x) = Σ_{i=0}^q α_i φ_i(x). We let |P^d_n| denote the number of elements in any basis φ of P^d_n. For example, |P^1_n| = n + 1 and |P^2_n| = (n + 1)(n + 2)/2.

Definition 2.8 A set of points X = {x^0, …, x^p} ⊂ R^n with |X| = |P^d_n| is poised for interpolation if the matrix M(φ, X) is nonsingular for some basis φ of P^d_n.
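A minimal sketch of this construction (the row layout follows the quadratic design matrix in (1.2); `numpy.linalg.lstsq` solves (2.1) exactly when |X| = |P²_n| and in the least-squares sense otherwise):

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(Y):
    """M(phi, Y): rows [1, y_1..y_n, y_1^2/2, y_1 y_2, ..., y_n^2/2]."""
    rows = []
    for y in np.asarray(Y):
        row = [1.0] + list(y)
        for i, j in combinations_with_replacement(range(len(y)), 2):
            row.append(0.5 * y[i]**2 if i == j else y[i] * y[j])
        rows.append(row)
    return np.array(rows)

rng = np.random.default_rng(1)
Y = rng.standard_normal((12, 2))        # p + 1 = 12 > q + 1 = 6: regression
fvals = Y[:, 0]**2 + 3*Y[:, 1]          # an exactly quadratic test function
M = quadratic_design_matrix(Y)
poised = np.linalg.matrix_rank(M) == M.shape[1]   # full column rank (Def. 2.9)
alpha, *_ = np.linalg.lstsq(M, fvals, rcond=None)
model = lambda y: quadratic_design_matrix([y])[0] @ alpha
```

Because the data come from an exactly quadratic function and the random sample is poised, the regression model reproduces f everywhere, not just at the sample points.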


If |X| > |P^d_n|, we can again construct the least squares regression model by solving (2.1). We extend the definition of poisedness to the regression case.

Definition 2.9 A set of points X = {x^0, …, x^p} ⊂ R^n with |X| ≥ |P^d_n| is poised for regression if the matrix M(φ, X) has full column rank for some basis φ of P^d_n.

Since we have limited information about the function f, we want the interpolating (or regressing) model m(x) to be an accurate approximation in a region of interest. This requires that X consist of points spread out within said region. Since M(φ, X) can be arbitrarily poorly conditioned while X is still poised, simply being poised is not enough to measure the quality of a set X. Looking at the condition number of M(φ, X) alone is also inadequate, since scaling the sample set X or choosing a different basis φ can arbitrarily adjust this quantity. Nevertheless, there are methods for measuring the quality of a sample set, one of the most common of which is based on Lagrange polynomials.

Definition 2.10 For a set X = {x^0, …, x^p} ⊂ R^n with |X| = |P^d_n|, the set of interpolating Lagrange polynomials ℓ = {ℓ_0, …, ℓ_p} ⊂ P^d_n are the polynomials satisfying

ℓ_i(y^j) = 1 if i = j, and 0 otherwise.

If the set X is poised, then this set of polynomials is guaranteed to exist and be uniquely defined.

We can extend the definition of Lagrange polynomials to the regression case in a natural fashion.

Definition 2.11 For a set X = {x^0, …, x^p} ⊂ R^n with |X| > |P^d_n|, the set of regression Lagrange polynomials ℓ = {ℓ_0, …, ℓ_p} ⊂ P^d_n are the polynomials satisfying, in the least-squares sense,

ℓ_i(y^j) =_{l.s.} 1 if i = j, and 0 otherwise.


Though these polynomials are no longer linearly independent, if X is poised, then the set of regression Lagrange polynomials exists and is uniquely defined.

We can now use these Lagrange polynomials to extend the definition of poisedness to Λ-poisedness. This relates the magnitude of the Lagrange polynomials on a set B ⊂ R^n, which will allow a method for measuring the quality of a sample set.

Definition 2.12 Let Λ > 0 and a set B ⊆ R^n be given. For a basis φ of P^d_n, a poised set X = {x^0, …, x^p} is said to be Λ-poised in B in the interpolating sense if and only if

1. for the basis of Lagrange polynomials associated with X,

   Λ ≥ max_{0≤i≤p} max_{x∈B} |ℓ_i(x)|;

or, equivalently,

2. for any x ∈ B there exists λ(x) such that

   Σ_{i=0}^p λ_i(x) φ(y^i) = φ(x)  with  ‖λ(x)‖_∞ ≤ Λ.

And we can again extend this definition to the regression case.

Definition 2.13 Let Λ > 0 and a set B ⊆ R^n be given. For a basis φ of P^d_n, a poised set X = {x^0, …, x^p} with |X| ≥ |P^d_n| is said to be Λ-poised in B in the regression sense if and only if

1. for the basis of Lagrange polynomials associated with X,

   Λ ≥ max_{0≤i≤p} max_{x∈B} |ℓ_i(x)|;

or, equivalently,

2. for any x ∈ B there exists λ(x) such that

   Σ_{i=0}^p λ_i(x) φ(y^i) = φ(x)  with  ‖λ(x)‖_∞ ≤ Λ.


Note that if |X| = |P^d_n|, the definitions of Λ-poised in the interpolation and regression senses coincide.

We can now examine the following bound from [7]:

‖D^r f(x) − D^r m(x)‖ ≤ (1/(d+1)!) ν_d Σ_{i=0}^p ‖x^i − x‖^{d+1} ‖D^r ℓ_i(x)‖,

where D^r f(x) is the r-th derivative of f,

D^r f(x)(z^1, …, z^r) = Σ_{i_1,…,i_r} (∂^r f(x) / (∂x_{i_1} ⋯ ∂x_{i_r})) z^1_{i_1} ⋯ z^r_{i_r},

and ν_d is an upper bound on ‖D^{d+1} f(x)‖. If r = 0, this bound reduces to

|f(x) − m(x)| ≤ (1/(d+1)!) (p + 1) ν_d Λ_b Δ^{d+1},    (2.2)

where

Λ_b = max_{0≤i≤p} max_{x∈B} |ℓ_i(x)|,

and Δ is the diameter of the smallest ball containing X. Therefore, if the number of points in X is bounded, then Λ-poisedness is sufficient to derive bounds on the error between the regression (or interpolating) model and the function. That is, decreasing the radius of the sample set will provide bounds similar to Taylor bounds when derivatives are available. If using regression models with arbitrarily many points, Λ-poisedness is not enough to construct similar bounds. Strong Λ-poisedness can help in this case.

Definition 2.14 Let ℓ(x) = (ℓ_0(x), …, ℓ_p(x))^T be the regression Lagrange polynomials associated with the set Y = {y^0, …, y^p}. Let Λ > 0 and let B be a set in R^n. The set Y is said to be strongly Λ-poised in B in the regression sense if and only if

(q_1 / √p_1) Λ ≥ max_{x∈B} ‖ℓ(x)‖,

where q_1 = |P²_n| and p_1 = |Y|.
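Definition 2.14 can be probed numerically. The sketch below estimates max_{x∈B(0;1)} ‖ℓ(x)‖ by random sampling (an estimate only, not the exact maximum), using the closed form ℓ(x) = M(MᵀM)⁻¹φ(x) for the regression Lagrange polynomials of Definition 2.11; the two example sets are hypothetical:

```python
import numpy as np
from itertools import combinations_with_replacement

def phi(y):
    """Standard quadratic basis [1, y, cross terms], with 1/2 on squares."""
    row = [1.0] + list(y)
    for i, j in combinations_with_replacement(range(len(y)), 2):
        row.append(0.5 * y[i]**2 if i == j else y[i] * y[j])
    return np.array(row)

def poisedness_estimate(Y, n_probe=2000, seed=4):
    """Estimate max ||l(x)|| over the unit ball; by Definition 2.14 any
    valid Lambda must satisfy Lambda >= (sqrt(p1)/q1) * this maximum."""
    rng = np.random.default_rng(seed)
    M = np.array([phi(y) for y in Y])
    p1, q1 = M.shape
    G = np.linalg.inv(M.T @ M)          # Y assumed poised for regression
    best = 0.0
    for _ in range(n_probe):
        x = rng.standard_normal(Y.shape[1])
        x *= rng.uniform() ** (1 / len(x)) / np.linalg.norm(x)  # in unit ball
        best = max(best, np.linalg.norm(M @ (G @ phi(x))))
    return best * np.sqrt(p1) / q1

rng = np.random.default_rng(5)
spread = rng.uniform(-1, 1, size=(12, 2))             # well spread points
t = np.linspace(-1, 1, 12)
near_line = np.column_stack([t, 0.5*t + 1e-4*rng.standard_normal(12)])
```

Nearly collinear points leave the quadratic basis badly determined, so the implied bound on Λ explodes: the estimate for `near_line` is many orders of magnitude larger than for `spread`.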


Since we can rewrite (2.2) as
\[ |f(x) - m(x)| \le \frac{1}{(d+1)!}\,\sqrt{p_1}\,\nu_d\,\Lambda_{b,2}\,\Delta^{d+1}, \]
where
\[ \Lambda_{b,2} = \max_{x \in B} \|\ell(x)\|, \]
strong $\Lambda$-poisedness provides Taylor-like error bounds between the regression model $m$ and the function $f$, even when the number of points in $X$ is unbounded.

As a final note, explicitly calculating the value of $\Lambda$ is computationally expensive, but not necessary. It is possible to use the condition number of the design matrix $M(\phi, X)$ to bound the constant $\Lambda$. Since it is possible to scale the condition number of $M(\phi, X)$ by shifting and scaling $X$, or by choosing a different basis, conditions must be placed on $M(\phi, X)$ before using its condition number. If we (1) use the standard basis, e.g., $\bar\phi = \{1, x_1, \ldots, x_n, \tfrac{1}{2}x_1^2, x_1 x_2, \ldots, x_n^2\}$ for $\mathcal{P}_n^d$, (2) shift and scale the sample set $X$ so that every point lies within the unit ball, and (3) require that at least one point has norm 1, then it is possible to bound $\Lambda$ using the condition number of $M(\bar\phi, X)$. The next two theorems are for the interpolation and regression cases, respectively.

Theorem 2.15. Let $\hat X$ denote the shifted and scaled version of $X$, so that every point lies within the unit ball and at least one point has norm 1. Let $\hat M = M(\bar\phi, \hat X)$, where $\bar\phi$ is the standard basis. If $\hat M$ is nonsingular and $\|\hat M^{-1}\| \le \Lambda$, then the set $\hat X$ is $\sqrt{p_1}\,\Lambda$-poised in the unit ball. Conversely, if the set $\hat X$ is $\Lambda$-poised in the unit ball, then $\|\hat M^{-1}\| \le \theta \sqrt{p_1}\,\Lambda$, where $\theta > 0$ depends on $n$ and $d$ but is independent of $\hat X$ and $\Lambda$.

Theorem 2.16. Let $\hat X$ denote the shifted and scaled version of $X$, so that every point lies within the unit ball and at least one point has norm 1. Let $\hat M = M(\bar\phi, \hat X)$, where $\bar\phi$ is the standard basis, and let $\hat U \hat\Sigma \hat V^T$ be the reduced singular value decomposition of $\hat M$. That is, $\hat\Sigma$ is the $q_1 \times q_1$ diagonal matrix of singular values. If $\hat\Sigma$ is nonsingular and


$\|\hat\Sigma^{-1}\| \le \sqrt{q_1/p_1}\,\Lambda$, then the set $\hat X$ is strongly $\Lambda$-poised in the unit ball. Conversely, if the set $\hat X$ is strongly $\Lambda$-poised in the unit ball, then $\|\hat\Sigma^{-1}\| \le \theta\,(q_1/\sqrt{p_1})\,\Lambda$, where $\theta$ depends on $n$ and $d$ but is independent of $\hat X$ and $\Lambda$.

2.2 Performance Profiles

Next, we explain the content of performance profiles, which are a compact method for comparing the performance of various algorithms on a set of problems. We will use Figure 2.1 as an example. Algorithms $A$, $B$, and $C$ have been run on an identical set of problems for the same number of function evaluations. The left axis shows the percentage of problems each algorithm solved first, where "solved" is user-defined. Often, an algorithm is considered to solve a problem when it first finds a function value within a tolerance of the best known solution. In the example, $A$ solves 20% of the problems first, $B$ solves 55% of the problems first, and $C$ solves 30% of the problems first. As these percentages total to over 100%, there is an overlap in the sets of problems the algorithms solve first.

The right axis shows the percentage of problems solved by a given algorithm in the number of function evaluations given. All algorithms in Figure 2.1 solve over 90% of the problems in the testing set. Values between the left and right axes denote the percentage of problems solved as a multiple of the number of evaluations required for the fastest algorithm. For example, given 6 times as many iterations as the fastest algorithm on a problem, $A$ solves 80% of the problems in the testing set.

Formally, an algorithm is considered to solve a problem when it first finds a function value satisfying
\[ f(x_0) - f(x) \ge (1 - \tau)\,(f(x_0) - f_L), \]
where $\tau > 0$ is a small tolerance, $f_L$ is the smallest function value found by any solver in a specified number of iterations, and $x_0$ is the initial point given to each algorithm.


Figure 2.1: An example of a performance profile.

If $t_{p,a}$ is the number of iterations needed for solver $a$ to solve problem $p$, then the performance ratio is defined by
\[ r_{p,a} = \frac{t_{p,a}}{\min_a \{ t_{p,a} \}}. \]
Then the performance profile of a solver $a$ is the fraction of problems where the performance ratio is at most $\alpha$. That is,
\[ \rho_a(\alpha) = \frac{\left| \{ p \in P : r_{p,a} \le \alpha \} \right|}{|P|}, \]
where $P$ is the set of benchmark problems.

2.3 Probabilistic Convergence

Lastly, we define various forms of probabilistic convergence. A sequence $\{X_n\}$ of random variables is said to converge in distribution (or converge weakly, or
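The profile $\rho_a(\alpha)$ can be computed directly from a table of solve counts. A minimal sketch of ours (the function name and toy data are illustrative, not from the thesis):

```python
import numpy as np

def performance_profile(T, alpha):
    """T[p, a] = evaluations solver a needed on problem p (np.inf if unsolved).
    Returns rho[a]: the fraction of problems with ratio r_{p,a} <= alpha."""
    best = T.min(axis=1, keepdims=True)   # fastest solver on each problem
    r = T / best                          # performance ratios r_{p,a}
    return (r <= alpha).mean(axis=0)

# toy data: 4 problems, 2 solvers; solver 0 never solves problem 3
T = np.array([[10.0, 20.0],
              [30.0, 15.0],
              [25.0, 25.0],
              [np.inf, 40.0]])
rho1 = performance_profile(T, 1.0)        # fraction solved fastest
rho2 = performance_profile(T, 2.0)        # fraction solved within 2x the fastest
```

Note that a tie (problem 2 above) counts toward $\rho_a(1)$ for both solvers, matching the definition, which is why the left-axis percentages in Figure 2.1 can total more than 100%.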


converge in law) to a random variable $X$ if
\[ \lim_{n \to \infty} F_n(x) = F(x) \]
for every number $x \in \mathbb{R}$ at which $F$ is continuous. Here $F_n$ and $F$ are the cumulative distribution functions of the random variables $X_n$ and $X$, respectively.

A sequence $\{X_n\}$ of random variables converges in probability to $X$ if for all $\varepsilon > 0$,
\[ \lim_{n \to \infty} P\left( |X_n - X| \ge \varepsilon \right) = 0. \]

A sequence $\{X_n\}$ of random variables converges almost surely (or almost everywhere, or with probability 1, or strongly) towards $X$ if
\[ P\left( \lim_{n \to \infty} X_n = X \right) = 1. \]
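Convergence in probability can be illustrated with a seeded Monte Carlo experiment. The sketch below is ours (not from the thesis): with $X_n = X + Z_n/n$ and $Z_n$ standard normal, the estimated probability $P(|X_n - X| \ge \varepsilon)$ shrinks rapidly as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.5
samples = 20000                       # independent sample paths
X = rng.normal(size=samples)          # the limiting random variable

def prob_far(n):
    """Monte Carlo estimate of P(|X_n - X| >= eps) for X_n = X + Z/n."""
    Xn = X + rng.normal(size=samples) / n
    return np.mean(np.abs(Xn - X) >= eps)

p_at_1, p_at_10 = prob_far(1), prob_far(10)
```

Here $P(|X_1 - X| \ge 0.5) = P(|Z| \ge 0.5) \approx 0.62$, while by $n = 10$ the event $|Z| \ge 5$ is already vanishingly rare.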


3. Derivative-free Optimization of Expensive Functions with Computational Error Using Weighted Regression

3.1 Introduction

In this chapter, we construct an algorithm designed to optimize functions evaluated by large computational codes, taking minutes, hours, or even days for a single function call, for which derivative information is unavailable, and for which function evaluations are subject to computational error. Such error may be deterministic (arising, for example, from discretization error) or stochastic (for example, from Monte Carlo simulation). Because function evaluations are extremely expensive, it is sensible to perform substantial work at each iteration to reduce the number of function evaluations required to obtain an optimum.

We assume that the accuracy of the function evaluation can vary from point to point, and that this variation can be quantified. In this chapter, we will use knowledge of this varying error to improve the performance of the algorithm by using weighted regression models in a trust region framework. By giving more accurate points more weight when constructing the trust region model, we hope that the models will more closely approximate the function being optimized. This leads to a better performing algorithm.

Our algorithm fits within the CSV2 framework, which is outlined in Chapter 2. To specify an algorithm within this framework, three things are required:

1. Define the class of model functions $\mathcal{M}$. This is determined by the method for constructing models from the sample set. In [10] models were constructed using interpolation, least squares regression, and minimum Frobenius norm methods. We describe the general form of our weighted regression models in §3.2 and propose a specific weighting scheme in §3.5.

2. Define a model-improvement algorithm. §3.4 describes our model improvement algorithm, which tests the geometry of the sample set and, if necessary, adds


and/or deletes points to ensure that the model function constructed from the sample set satisfies the error bounds in Definition 2.2 (i.e., it is $\kappa$-fully quadratic).

3. Demonstrate that the model-improvement algorithm satisfies the requirements for the definition of a class of fully quadratic models. For our algorithm, this is discussed in §3.4.

The chapter is organized as follows. We place our algorithm in the CSV2 framework by describing (1) how model functions are constructed (§3.2), and (2) a model improvement algorithm (§3.4). Before describing the model improvement algorithm, we first extend the theory of $\Lambda$-poisedness to the weighted regression framework (§3.3). Computational results are presented in §3.5 using a heuristic weighting scheme, which is described in that section. §3.6 concludes the chapter.

3.2 Model Construction

This section describes how we construct the model function $m_k$ at the $k$th iteration. For simplicity, we drop the subscript $k$ for the remainder of this section. Let $\bar f = (\bar f_0, \ldots, \bar f_p)^T$, where $\bar f_i$ denotes the computed function value at $y^i$, and let $\varepsilon_i$ denote the associated computational error. That is,
\[ \bar f_i = f(y^i) + \varepsilon_i. \tag{3.1} \]
Let $w = (w_0, \ldots, w_p)^T$ be a vector of positive weights for the set of points $Y = \{y^0, \ldots, y^p\}$. A quadratic polynomial $m$ is said to be a weighted least squares approximation of $f$ with respect to $w$ if it minimizes
\[ \sum_{i=0}^{p} w_i^2 \left( m(y^i) - \bar f_i \right)^2 = \left\| W \left( m(Y) - \bar f \right) \right\|^2, \]
where $m(Y)$ denotes the vector $(m(y^0), m(y^1), \ldots, m(y^p))^T$ and $W = \mathrm{diag}(w)$. In this case, we write
\[ W m(Y) \stackrel{\mathrm{l.s.}}{=} W \bar f. \tag{3.2} \]


Let $\phi = \{\phi_0, \phi_1, \ldots, \phi_q\}$ be a basis for the quadratic polynomials in $\mathbb{R}^n$. For example, $\phi$ might be the monomial basis
\[ \bar\phi = \left\{ 1, x_1, x_2, \ldots, x_n, \tfrac{1}{2}x_1^2, x_1 x_2, \ldots, x_{n-1} x_n, \tfrac{1}{2}x_n^2 \right\}. \tag{3.3} \]
Define
\[ M(\phi, Y) = \begin{bmatrix} \phi_0(y^0) & \phi_1(y^0) & \cdots & \phi_q(y^0) \\ \phi_0(y^1) & \phi_1(y^1) & \cdots & \phi_q(y^1) \\ \vdots & \vdots & & \vdots \\ \phi_0(y^p) & \phi_1(y^p) & \cdots & \phi_q(y^p) \end{bmatrix}. \]
Since $\phi$ is a basis for the quadratic polynomials, the model function can be written $m(x) = \sum_{i=0}^{q} \alpha_i \phi_i(x)$. The coefficients $\alpha = (\alpha_0, \ldots, \alpha_q)^T$ solve the weighted least squares regression problem
\[ W M(\phi, Y)\, \alpha \stackrel{\mathrm{l.s.}}{=} W \bar f. \tag{3.4} \]
If $M(\phi, Y)$ has full column rank, the sample set $Y$ is said to be poised for quadratic regression. The following lemma is a straightforward generalization of [12, Lemma 4.3]:

Lemma 3.1. If $Y$ is poised for quadratic regression, then the weighted least squares regression polynomial with respect to positive weights $w = (w_0, \ldots, w_p)$ exists, is unique, and is given by $m(x) = \phi(x)^T \alpha$, where
\[ \alpha = (W M)^{+} W \bar f = (M^T W^2 M)^{-1} M^T W^2 \bar f, \tag{3.5} \]
where $W = \mathrm{diag}(w)$ and $M = M(\phi, Y)$.

Proof: Since $W$ and $M$ both have full column rank, so does $W M$. Thus, the least squares problem (3.4) has a unique solution given by $(W M)^{+} W \bar f$. Moreover, since $W M$ has full column rank, $(W M)^{+} = \left( (W M)^T (W M) \right)^{-1} M^T W$. $\square$

3.3 Error Analysis and the Geometry of the Sample Set
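The two expressions for $\alpha$ in (3.5) can be checked numerically. The sketch below is an illustration of ours (assumes NumPy): it fits a weighted regression quadratic through noiseless evaluations of a quadratic in $\mathbb{R}^2$, so both forms of (3.5) recover the exact coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1 = 2, 10
Y = rng.uniform(-1, 1, (p1, n))               # sample points y^i

def phi(y):
    """Monomial basis (3.3) for quadratics in R^2."""
    y1, y2 = y
    return np.array([1.0, y1, y2, 0.5*y1**2, y1*y2, 0.5*y2**2])

M = np.array([phi(y) for y in Y])             # design matrix M(phi, Y)
fbar = np.array([1 + y[0] - y[1] + y[0]*y[1] for y in Y])  # exact quadratic
w = rng.uniform(0.5, 1.0, p1)                 # positive weights
W = np.diag(w)

# alpha = (W M)^+ W fbar          (pseudoinverse form)
alpha1 = np.linalg.pinv(W @ M) @ (W @ fbar)
# alpha = (M^T W^2 M)^{-1} M^T W^2 fbar   (normal-equations form)
alpha2 = np.linalg.solve(M.T @ W**2 @ M, M.T @ W**2 @ fbar)
```

Because the sampled function is itself quadratic, both coefficient vectors equal $(1, 1, -1, 0, 1, 0)$ in the ordering of (3.3), regardless of the weights.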


Throughout this section, $Y = \{y^0, \ldots, y^p\}$ denotes the sample set, $p_1 = p + 1$, $w \in \mathbb{R}^{p_1}_{+}$ is a vector of positive weights, $W = \mathrm{diag}(w)$, and $M = M(\bar\phi, Y)$. $\bar f$ denotes the vector of computed function values at the points in $Y$, as defined by (3.1).

The accuracy of the model function $m_k$ relies critically on the geometry of the sample set. In this section, we generalize the theory of poisedness from [12] to the weighted regression framework. This section also includes error analysis which extends results from [12] to weighted regression, as well as considering the impact of computational error on the error bounds. We start by defining weighted regression Lagrange polynomials.

3.3.1 Weighted Regression Lagrange Polynomials

Definition 3.2. A set of polynomials $\ell_j(x)$, $j = 0, \ldots, p$, in $\mathcal{P}_n^d$ are called weighted regression Lagrange polynomials with respect to the weights $w$ and sample set $Y$ if for each $j$,
\[ W \ell_j(Y) \stackrel{\mathrm{l.s.}}{=} W e_j, \]
where $\ell_j(Y) = [\ell_j(y^0), \ldots, \ell_j(y^p)]^T$.

The following lemma is a direct application of Lemma 3.1.

Lemma 3.3. Let $\phi(x) = (\phi_0(x), \ldots, \phi_q(x))^T$. If $Y$ is poised, then the set of weighted regression Lagrange polynomials exists and is unique, and is given by $\ell_j(x) = \phi(x)^T a^j$, $j = 0, \ldots, p$, where $a^j$ is the $j$th column of the matrix
\[ A = (W M)^{+} W. \tag{3.6} \]

Proof: Note that $m = \ell_j$ satisfies (3.2) with $\bar f = e_j$. By Lemma 3.1, $\ell_j(x) = \phi(x)^T a^j$, where $a^j = (W M)^{+} W e_j$, which is the $j$th column of $(W M)^{+} W$. $\square$

The following lemma is based on [12, Lemma 4.6].


Lemma 3.4. If $Y$ is poised, then the model function defined by (3.2) satisfies
\[ m(x) = \sum_{i=0}^{p} \bar f_i\, \ell_i(x), \]
where $\ell_j(x)$, $j = 0, \ldots, p$, denote the weighted regression Lagrange polynomials corresponding to $Y$ and $W$.

Proof: By Lemma 3.1, $m(x) = \phi(x)^T \alpha$, where $\alpha = (W M)^{+} W \bar f = A \bar f$ for $A$ defined by (3.6). Let $\ell(x) = [\ell_0(x), \ldots, \ell_p(x)]^T$. Then by Lemma 3.3,
\[ m(x) = \phi(x)^T A \bar f = \bar f^T \ell(x) = \sum_{i=0}^{p} \bar f_i\, \ell_i(x). \qquad \square \]

3.3.2 Error Analysis

For the remainder of this chapter, let $\hat Y = \{\hat y^0, \ldots, \hat y^p\}$ denote the shifted and scaled sample set, where $\hat y^i = (y^i - y^0)/R$ and $R = \max_i \|y^i - y^0\|$. Note that $\hat y^0 = 0$ and $\max_i \|\hat y^i\| = 1$. Any analysis of $Y$ can be directly related to $\hat Y$ by the following lemma:

Lemma 3.5. Define the basis $\hat\phi = \{\hat\phi_0(x), \ldots, \hat\phi_q(x)\}$, where $\hat\phi_i(x) = \bar\phi_i(R x + y^0)$, $i = 0, \ldots, q$, and $\bar\phi$ is the monomial basis. Let $\{\ell_0(x), \ldots, \ell_p(x)\}$ be weighted regression Lagrange polynomials for $Y$, and $\{\hat\ell_0(x), \ldots, \hat\ell_p(x)\}$ be weighted regression Lagrange polynomials for $\hat Y$. Then $M(\bar\phi, Y) = M(\hat\phi, \hat Y)$. If $Y$ is poised, then
\[ \ell(x) = \hat\ell\!\left( \frac{x - y^0}{R} \right). \]

Proof: Observe that
\[ M(\bar\phi, Y) = \begin{bmatrix} \bar\phi_0(y^0) & \cdots & \bar\phi_q(y^0) \\ \vdots & & \vdots \\ \bar\phi_0(y^p) & \cdots & \bar\phi_q(y^p) \end{bmatrix} = \begin{bmatrix} \hat\phi_0(\hat y^0) & \cdots & \hat\phi_q(\hat y^0) \\ \vdots & & \vdots \\ \hat\phi_0(\hat y^p) & \cdots & \hat\phi_q(\hat y^p) \end{bmatrix} = M(\hat\phi, \hat Y). \]


By the definition of poisedness, $Y$ is poised if and only if $\hat Y$ is poised. Let $\bar\phi(x) = (\bar\phi_0(x), \ldots, \bar\phi_q(x))^T$ and $\hat\phi(x) = (\hat\phi_0(x), \ldots, \hat\phi_q(x))^T$. Then
\[ \hat\phi\!\left( \frac{x - y^0}{R} \right) = \begin{bmatrix} \hat\phi_0\!\left( \frac{x - y^0}{R} \right) \\ \vdots \\ \hat\phi_q\!\left( \frac{x - y^0}{R} \right) \end{bmatrix} = \begin{bmatrix} \bar\phi_0(x) \\ \vdots \\ \bar\phi_q(x) \end{bmatrix} = \bar\phi(x). \]
By Lemma 3.3, if $Y$ is poised, then
\[ \ell(x)^T = \bar\phi(x)^T \left( W M(\bar\phi, Y) \right)^{+} W = \hat\phi(\hat x)^T \left( W M(\hat\phi, \hat Y) \right)^{+} W = \hat\ell(\hat x)^T, \]
where $\hat x = (x - y^0)/R$. $\square$

Let $\bar f_i$ be defined by (3.1), and let $\Omega$ be an open convex set containing $Y$. If $f$ is $C^2$ on $\Omega$, then by Taylor's theorem, for each sample point $y^i \in Y$ and a fixed $x \in \mathrm{conv}(Y)$, there exists a point $\eta_i(x)$ on the line segment connecting $x$ to $y^i$ such that
\[ \bar f_i = f(y^i) + \varepsilon_i = f(x) + \nabla f(x)^T (y^i - x) + \tfrac{1}{2} (y^i - x)^T \nabla^2 f(\eta_i(x)) (y^i - x) + \varepsilon_i \]
\[ = f(x) + \nabla f(x)^T (y^i - x) + \tfrac{1}{2} (y^i - x)^T \nabla^2 f(x) (y^i - x) + \tfrac{1}{2} (y^i - x)^T H_i(x) (y^i - x) + \varepsilon_i, \tag{3.7} \]
where $H_i(x) = \nabla^2 f(\eta_i(x)) - \nabla^2 f(x)$.

Let $\{\ell_i(x)\}$ denote the weighted regression Lagrange polynomials associated with $Y$. The following lemma and proof are inspired by [7, Theorem 1]:

Lemma 3.6. Let $f$ be twice continuously differentiable on $\Omega$, and let $m(x)$ denote the quadratic function determined by weighted regression. Then, for any $x \in \Omega$, the following identities hold:

1. $m(x) = f(x) + \frac{1}{2} \sum_{i=0}^{p} (y^i - x)^T H_i(x) (y^i - x)\, \ell_i(x) + \sum_{i=0}^{p} \varepsilon_i\, \ell_i(x)$,


2. $\nabla m(x) = \nabla f(x) + \frac{1}{2} \sum_{i=0}^{p} (y^i - x)^T H_i(x) (y^i - x)\, \nabla \ell_i(x) + \sum_{i=0}^{p} \varepsilon_i\, \nabla \ell_i(x)$,

3. $\nabla^2 m(x) = \nabla^2 f(x) + \frac{1}{2} \sum_{i=0}^{p} (y^i - x)^T H_i(x) (y^i - x)\, \nabla^2 \ell_i(x) + \sum_{i=0}^{p} \varepsilon_i\, \nabla^2 \ell_i(x)$,

where $H_i(x) = \nabla^2 f(\eta_i(x)) - \nabla^2 f(x)$ for some point $\eta_i(x) = \theta x + (1 - \theta) y^i$, $0 \le \theta \le 1$, on the line segment connecting $x$ to $y^i$.

Proof: Let $D$ denote the differential operator as defined in [7], where $D^j$ is the $j$th derivative of a function in $C^i$, with $i \ge j$. In particular, $D^0 f(x) = f(x)$, $D^1 f(x)(z) = \nabla f(x)^T z$, and $D^2 f(x)(z)^2 = z^T \nabla^2 f(x)\, z$. By Lemma 3.4, $m(x) = \sum_{i=0}^{p} \bar f_i\, \ell_i(x)$; so for $h = 0$, 1, or 2,
\[ D^h m(x) = \sum_{i=0}^{p} \bar f_i\, D^h \ell_i(x). \]
Substituting (3.7) for $\bar f_i$ in the above equation yields
\[ D^h m(x) = \sum_{j=0}^{2} \frac{1}{j!} \sum_{i=0}^{p} D^j f(x) (y^i - x)^j\, D^h \ell_i(x) + \frac{1}{2} \sum_{i=0}^{p} (y^i - x)^T H_i(x) (y^i - x)\, D^h \ell_i(x) + \sum_{i=0}^{p} \varepsilon_i\, D^h \ell_i(x), \tag{3.8} \]
where $H_i(x) = \nabla^2 f(\eta_i(x)) - \nabla^2 f(x)$ for some point $\eta_i(x)$ on the line segment connecting $x$ to $y^i$. Consider the first term on the right-hand side above. We shall show that
\[ \frac{1}{j!} \sum_{i=0}^{p} D^j f(x) (y^i - x)^j\, D^h \ell_i(x) = \begin{cases} D^h f(x) & \text{for } j = h, \\ 0 & \text{for } j \ne h, \end{cases} \tag{3.9} \]
for $j = 0, 1, 2$. Let $B_j = D^j f(x)$, and let $g_j : \mathbb{R}^n \to \mathbb{R}$ be the polynomial defined by $g_j(z) = \frac{1}{j!} B_j (z - x)^j$. Observe that $D^j g_j(x) = B_j$ and $D^h g_j(x) = 0$ for $h \ne j$. Since $g_j$ has degree $j \le 2$, the weighted least squares approximation of $g_j$ by a quadratic polynomial is $g_j$ itself. Thus, by Lemma 3.4 and the definition of $g_j$,
\[ g_j(z) = \sum_{i=0}^{p} g_j(y^i)\, \ell_i(z) = \frac{1}{j!} \sum_{i=0}^{p} B_j (y^i - x)^j\, \ell_i(z). \tag{3.10} \]


Applying the differential operator $D^h$ with respect to $z$ yields
\[ D^h g_j(z) = \frac{1}{j!} \sum_{i=0}^{p} B_j (y^i - x)^j\, D^h \ell_i(z) = \frac{1}{j!} \sum_{i=0}^{p} D^j f(x) (y^i - x)^j\, D^h \ell_i(z). \]
Letting $z = x$, the expression on the right is identical to the left side of (3.9). This proves (3.9), since $D^h g_j(x) = 0$ for $j \ne h$ and $D^j g_j(x) = B_j$ for $j = h$. By (3.9), (3.8) reduces to
\[ D^h m(x) = D^h f(x) + \frac{1}{2} \sum_{i=0}^{p} (y^i - x)^T H_i(x) (y^i - x)\, D^h \ell_i(x) + \sum_{i=0}^{p} \varepsilon_i\, D^h \ell_i(x). \]
Applying this with $h = 0, 1, 2$ proves the lemma. $\square$

Since $\|H_i(x)\| \le L \|y^i - x\|$ by the Lipschitz continuity of $\nabla^2 f$, the following is a direct consequence of Lemma 3.6.

Corollary 3.7. Let $f$ satisfy Assumption 2.1 for some convex set $\Omega$, and let $m(x)$ denote the quadratic function determined by weighted regression. Then, for any $x \in \Omega$, the following error bounds hold:

1. $|f(x) - m(x)| \le \sum_{i=0}^{p} \left( \frac{L}{2} \|y^i - x\|^3 + |\varepsilon_i| \right) |\ell_i(x)|$,

2. $\|\nabla f(x) - \nabla m(x)\| \le \sum_{i=0}^{p} \left( \frac{L}{2} \|y^i - x\|^3 + |\varepsilon_i| \right) \|\nabla \ell_i(x)\|$,

3. $\|\nabla^2 f(x) - \nabla^2 m(x)\| \le \sum_{i=0}^{p} \left( \frac{L}{2} \|y^i - x\|^3 + |\varepsilon_i| \right) \|\nabla^2 \ell_i(x)\|$.

Using this corollary, the following result provides error bounds between the function and the model in terms of the sample set radius.

Corollary 3.8. Let $Y$ be poised, and let $R = \max_i \|y^i - y^0\|$. Suppose $|\varepsilon_i| \le \bar\varepsilon$ for $i = 0, \ldots, p$. If $f$ satisfies Assumption 2.1 with Lipschitz constant $L$, then there exist constants $\kappa_1$, $\kappa_2$, and $\kappa_3$, independent of $R$, such that for all $x \in B(y^0; R)$,

1. $|f(x) - m(x)| \le \kappa_1 \sqrt{p_1} \left( L R^3 + \bar\varepsilon \right)$,


2. $\|\nabla f(x) - \nabla m(x)\| \le \kappa_2 \sqrt{p_1} \left( L R^2 + \bar\varepsilon / R \right)$,

3. $\|\nabla^2 f(x) - \nabla^2 m(x)\| \le \kappa_3 \sqrt{p_1} \left( L R + \bar\varepsilon / R^2 \right)$.

Proof: Let $\{\hat\ell_0(x), \ldots, \hat\ell_p(x)\}$ be the Lagrange polynomials generated by the shifted and scaled set $\hat Y$, and let $\{\ell_0(x), \ldots, \ell_p(x)\}$ be the Lagrange polynomials generated by the set $Y$. By Lemma 3.5, for each $x \in B(y^0; R)$, $\ell_i(x) = \hat\ell_i(\hat x)$ for all $i$, where $\hat x = (x - y^0)/R$. Thus, $\nabla \ell_i(x) = \nabla \hat\ell_i(\hat x)/R$ and $\nabla^2 \ell_i(x) = \nabla^2 \hat\ell_i(\hat x)/R^2$. Let $\hat\ell(x) = [\hat\ell_0(x), \ldots, \hat\ell_p(x)]^T$, $\hat g(x) = [\nabla \hat\ell_0(x), \ldots, \nabla \hat\ell_p(x)]^T$, and $\hat h(x) = [\nabla^2 \hat\ell_0(x), \ldots, \nabla^2 \hat\ell_p(x)]^T$. By Corollary 3.7,
\[ |f(x) - m(x)| \le \sum_{i=0}^{p} \left( \frac{L}{2} \|y^i - x\|^3 + |\varepsilon_i| \right) |\ell_i(x)| \le \sum_{i=0}^{p} \left( 4 L R^3 + \bar\varepsilon \right) |\ell_i(x)| \]
(since $\|y^i - x\| \le 2R$ and $|\varepsilon_i| \le \bar\varepsilon$)
\[ = \left( 4 L R^3 + \bar\varepsilon \right) \|\hat\ell(\hat x)\|_1 \le \sqrt{p_1} \left( 4 L R^3 + \bar\varepsilon \right) \|\hat\ell(\hat x)\|, \]
since for $x \in \mathbb{R}^n$, $\|x\|_1 \le \sqrt{n} \|x\|_2$. Similarly,
\[ \|\nabla f(x) - \nabla m(x)\| \le \sqrt{p_1} \left( 4 L R^2 + \bar\varepsilon / R \right) \|\hat g(\hat x)\| \quad \text{and} \quad \|\nabla^2 f(x) - \nabla^2 m(x)\| \le \sqrt{p_1} \left( 4 L R + \bar\varepsilon / R^2 \right) \|\hat h(\hat x)\|. \]
Setting $\kappa_1 = 4 \max_{x \in B(0;1)} \|\hat\ell(x)\|$, $\kappa_2 = 4 \max_{x \in B(0;1)} \|\hat g(x)\|$, and $\kappa_3 = 4 \max_{x \in B(0;1)} \|\hat h(x)\|$ (the factor 4 absorbing $4 L R^3 + \bar\varepsilon \le 4 (L R^3 + \bar\varepsilon)$, and similarly for the other two bounds) yields the desired result. $\square$

Note the similarity between these error bounds and those in the definition of fully quadratic models. If there is no computational error (or if the error is $O(\Delta^3)$), fully quadratic models for some fixed $\kappa$ can be obtained by controlling the geometry of the sample set so that $\kappa_i \sqrt{p_1}$, $i = 1, 2, 3$, are bounded by fixed constants, and by controlling the trust region radius so that $R/\Delta$ is bounded. This motivates the definitions of $\Lambda$-poised and strongly $\Lambda$-poised in the weighted regression sense in the next section.
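The $O(R^3)$ rate in the first bound of Corollary 3.8 can be observed numerically when $\bar\varepsilon = 0$. In the sketch below (ours, assuming NumPy and an unweighted one-dimensional fit), the relative geometry of the sample set is held fixed while $R$ shrinks, so the maximum model error scales exactly with $R^3$.

```python
import numpy as np

def quad_model_error(f, y0, R, u, grid_t):
    """Fit an (unweighted) quadratic regression model on the points y0 + R*u
    and return the max model error over the interval y0 + R*[-1, 1]."""
    Y = y0 + R * u
    phi = lambda x: np.array([np.ones_like(x), x, 0.5 * x**2])
    M = phi(Y).T                                  # design matrix, p1-by-3
    alpha, *_ = np.linalg.lstsq(M, f(Y), rcond=None)
    X = y0 + R * grid_t
    return np.abs(f(X) - phi(X).T @ alpha).max()

f = lambda x: x**3                                # smooth test function
u = np.array([-1.0, -0.6, -0.2, 0.2, 0.6, 1.0])   # fixed relative geometry
grid_t = np.linspace(-1, 1, 101)

errs = {R: quad_model_error(f, 1.0, R, u, grid_t) for R in (0.4, 0.2, 0.1)}
```

Each halving of $R$ cuts the error by a factor of 8, i.e., the cubic rate of Corollary 3.8 (exactly so here, since the residual of fitting $x^3$ scales homogeneously when the nodes scale with $R$).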


3.3.3 $\Lambda$-poisedness in the Weighted Regression Sense

In this section, we restrict our attention to the monomial basis $\bar\phi$ defined in (3.3). In order to produce accurate model functions, the points in the sample set need to be distributed in such a way that the matrix $M = M(\bar\phi, Y)$ is sufficiently well-conditioned. This is the motivation behind the following definitions of $\Lambda$-poised and strongly $\Lambda$-poised sets. These definitions are identical to [12, Definitions 4.7, 4.10], except that the Lagrange polynomials in the definitions are weighted regression Lagrange polynomials.

Definition 3.9. Let $\Lambda > 0$ and let $B$ be a set in $\mathbb{R}^n$. Let $w = (w_0, \ldots, w_p)$ be a vector of positive weights, $Y = \{y^0, \ldots, y^p\}$ be a poised set, and let $\{\ell_0, \ldots, \ell_p\}$ be the associated weighted regression Lagrange polynomials. Let $\ell(x) = (\ell_0(x), \ldots, \ell_p(x))^T$ and $q_1 = |\mathcal{P}_n^2|$.

$Y$ is said to be $\Lambda$-poised in $B$ in the weighted regression sense if and only if
\[ \Lambda \ge \max_{x \in B} \max_{0 \le i \le p} |\ell_i(x)|. \]

$Y$ is said to be strongly $\Lambda$-poised in $B$ in the weighted regression sense if and only if
\[ \frac{q_1}{\sqrt{p_1}}\,\Lambda \ge \max_{x \in B} \|\ell(x)\|. \]

Note that if the weights are all equal, the above definitions are equivalent to those for $\Lambda$-poised and strongly $\Lambda$-poised given in [12].

We are naturally interested in using these weighted regression Lagrange polynomials to form models that are guaranteed to sufficiently approximate $f$. Let $Y_k$, $\Delta_k$, and $R_k$ denote the sample set, trust region radius, and sample set radius at iteration $k$, as defined at the beginning of §3.3.2. Assume that $R_k/\Delta_k$ is bounded. If the number of sample points is bounded, it can be shown, using Corollary 3.8, that if $Y_k$ is $\Lambda$-poised for all $k$, then the corresponding model functions are $\kappa$-fully quadratic, assuming no


computational error, or that the computational error is $O(\Delta^3)$. When the number of sample points is not bounded, $\Lambda$-poisedness is not enough. In the following, we show that if $Y_k$ is strongly $\Lambda$-poised for all $k$, then the corresponding models are $\kappa$-fully quadratic, regardless of the number of points in $Y_k$.

Lemma 3.10. Let $\hat M = M(\bar\phi, \hat Y)$. If $\|(W \hat M)^{+} W\| \le \sqrt{q_1/p_1}\,\Lambda$, then $\hat Y$ is strongly $\Lambda$-poised in $B(0;1)$ in the weighted regression sense, with respect to the weights $w$. Conversely, if $\hat Y$ is strongly $\Lambda$-poised in $B(0;1)$ in the weighted regression sense, then
\[ \|(W \hat M)^{+} W\| \le \theta\,\frac{q_1}{\sqrt{p_1}}\,\Lambda, \]
where $\theta > 0$ is a fixed constant dependent only on $n$ but independent of $Y$ and $\Lambda$.

Proof: Let $A = (W \hat M)^{+} W$ and $\ell(x) = (\ell_0(x), \ldots, \ell_p(x))^T$. By Lemma 3.3, $\ell(x) = A^T \bar\phi(x)$. It follows that for any $x \in B(0;1)$,
\[ \|\ell(x)\| = \|A^T \bar\phi(x)\| \le \|A\| \|\bar\phi(x)\| \le \left( \sqrt{q_1/p_1}\,\Lambda \right) \left( \sqrt{q_1}\,\|\bar\phi(x)\|_\infty \right) \le \frac{q_1}{\sqrt{p_1}}\,\Lambda. \]
For the last inequality, we used the fact that $\max_{x \in B(0;1)} \|\bar\phi(x)\|_\infty \le 1$.

To prove the converse, let $U \Sigma V^T = A^T$ be the reduced singular value decomposition of $A^T$. Note that $U$ and $V$ are $p_1 \times q_1$ and $q_1 \times q_1$ matrices, respectively, with orthonormal columns; $\Sigma$ is a $q_1 \times q_1$ diagonal matrix whose diagonal entries are the singular values of $A^T$. Let $\sigma_1$ be the largest singular value, with $v^1$ the corresponding column of $V$. As shown in the proof of [10, Theorem 2.9], there exists a constant $\xi > 0$ such that for any unit vector $v$, there exists an $x \in B(0;1)$ such that $|v^T \bar\phi(x)| \ge \xi$. Therefore, since $\|v^1\| = 1$, there is an $x \in B(0;1)$ such that $|(v^1)^T \bar\phi(x)| \ge \xi$. Let $v^{\perp}$ be the orthogonal projection of $\bar\phi(x)$ onto the subspace orthogonal to $v^1$; so $\bar\phi(x) = \left( (v^1)^T \bar\phi(x) \right) v^1 + v^{\perp}$. Note that $V^T v^1$ and $V^T v^{\perp}$ are orthogonal vectors. Note also that for any vector $z$, $\|U \Sigma V^T z\| = \|\Sigma V^T z\|$, since $U$


has orthonormal columns. It follows that
\[ \|\ell(x)\| = \|A^T \bar\phi(x)\| = \|\Sigma V^T \bar\phi(x)\| = \left( \|\Sigma V^T v^{\perp}\|^2 + \left\| \Sigma V^T \left( (v^1)^T \bar\phi(x) \right) v^1 \right\|^2 \right)^{1/2} \ge \left| (v^1)^T \bar\phi(x) \right| \left\| \Sigma V^T v^1 \right\| \ge \xi\,\|\Sigma e_1\| = \xi \sigma_1 = \xi \|A\|. \]
Thus, $\xi \|A\| \le \max_{x \in B(0;1)} \|\ell(x)\| \le (q_1/\sqrt{p_1})\,\Lambda$, which proves the result with $\theta = 1/\xi$. $\square$

We can now prove that models generated by weighted regression Lagrange polynomials are $\kappa$-fully quadratic.

Proposition 3.11. Let $f$ satisfy Assumption 2.1 and let $\Lambda > 0$ be fixed. There exists a vector $\kappa = (\kappa_{ef}, \kappa_{eg}, \kappa_{eh}, 0)$ such that for any $y^0 \in S$ and $\Delta \le \Delta_{\max}$, if

1. $Y = \{y^0, \ldots, y^p\} \subset B(y^0; \Delta)$ is strongly $\Lambda$-poised in $B(y^0; \Delta)$ in the weighted regression sense with respect to positive weights $w = \{w_0, \ldots, w_p\}$, and

2. the computational error $|\varepsilon_i|$ is bounded by $C \Delta^3$, where $C$ is a fixed constant,

then the corresponding model function $m$ is $\kappa$-fully quadratic.

Proof: Let $\hat x$, $\hat\ell(x)$, $\hat g(x)$, $\hat h(x)$, $\kappa_1$, $\kappa_2$, and $\kappa_3$ be as defined in the proof of Corollary 3.8. Let $\hat M = M(\bar\phi, \hat Y)$ and $W = \mathrm{diag}(w)$. By Lemma 3.3, $\hat\ell(x) = A^T \bar\phi(x)$, where $A = (W \hat M)^{+} W$. By Lemma 3.10, $\|A\| \le \theta\,q_1 \Lambda/\sqrt{p_1}$, where $\theta$ is a fixed constant. It follows that
\[ \kappa_1 = 4 \max_{x \in B(0;1)} \|\hat\ell(x)\| \le 4 \max_{x \in B(0;1)} \|A\| \|\bar\phi(x)\| \le c_1 \theta\,\frac{q_1 \Lambda}{\sqrt{p_1}}, \]
where $c_1 = 4 \max_{x \in B(0;1)} \|\bar\phi(x)\|$ is a constant independent of $Y$. Similarly,
\[ \kappa_2 = 4 \max_{x \in B(0;1)} \|\hat g(x)\| = 4 \max_{x \in B(0;1)} \left\| A^T \nabla \bar\phi(x) \right\|_F \le 4 \sqrt{q_1} \max_{x \in B(0;1)} \left\| A^T \nabla \bar\phi(x) \right\| \le 4 \sqrt{q_1} \max_{x \in B(0;1)} \|A\| \|\nabla \bar\phi(x)\| \le c_2 \theta\,\frac{q_1^{3/2} \Lambda}{\sqrt{p_1}}, \]


where $c_2 = 4 \max_{x \in B(0;1)} \|\nabla \bar\phi(x)\|$ is independent of $Y$.

To bound $\kappa_3$, let $J(s,t)$ denote the unique index $j$ such that $x_s$ and $x_t$ both appear in the quadratic monomial $\bar\phi_j(x)$. For example, $J(1,1) = n+2$, $J(1,2) = J(2,1) = n+3$, etc. Observe that
\[ \left( \nabla^2 \bar\phi_j(x) \right)_{s,t} = \begin{cases} 1 & \text{if } j = J(s,t), \\ 0 & \text{otherwise}. \end{cases} \]
It follows that
\[ \nabla^2 \hat\ell_i(x) = \sum_{j=0}^{q} (A^T)_{i,j}\, \nabla^2 \bar\phi_j(x) = \begin{bmatrix} (A^T)_{i,J(1,1)} & (A^T)_{i,J(1,2)} & \cdots & (A^T)_{i,J(1,n)} \\ (A^T)_{i,J(2,1)} & (A^T)_{i,J(2,2)} & \cdots & (A^T)_{i,J(2,n)} \\ \vdots & & & \vdots \\ (A^T)_{i,J(n,1)} & (A^T)_{i,J(n,2)} & \cdots & (A^T)_{i,J(n,n)} \end{bmatrix}. \]
We conclude that $\|\nabla^2 \hat\ell_i(x)\| \le \|\nabla^2 \hat\ell_i(x)\|_F \le \sqrt{2}\,\|(A^T)_{i,\cdot}\|$. Thus,
\[ \kappa_3 = 4 \max_{x \in B(0;1)} \|\hat h(x)\| \le 4 \sqrt{2 \sum_{i=0}^{p} \|(A^T)_{i,\cdot}\|^2} = 4\sqrt{2}\,\|A\|_F \le 4\sqrt{2}\,\sqrt{q_1}\,\|A\| \le 4\sqrt{2}\,\theta\,\frac{q_1^{3/2} \Lambda}{\sqrt{p_1}}. \]
By assumption, the computational error $|\varepsilon_i|$ is bounded by $\bar\varepsilon = C \Delta^3$. So, by Corollary 3.8, for all $x \in B(y^0; \Delta)$:

1. $|f(x) - m(x)| \le \kappa_1 \sqrt{p_1} (L + C) \Delta^3 \le c_1 \theta\,q_1 \Lambda (L + C) \Delta^3 = \kappa_{ef} \Delta^3$,

2. $\|\nabla f(x) - \nabla m(x)\| \le \kappa_2 \sqrt{p_1} (L + C) \Delta^2 \le c_2 \theta\,q_1^{3/2} \Lambda (L + C) \Delta^2 = \kappa_{eg} \Delta^2$,

3. $\|\nabla^2 f(x) - \nabla^2 m(x)\| \le \kappa_3 \sqrt{p_1} (L + C) \Delta \le 4\sqrt{2}\,\theta\,q_1^{3/2} \Lambda (L + C) \Delta = \kappa_{eh} \Delta$,

where $\kappa_{ef} = c_1 \theta\,q_1 \Lambda (L + C)$, $\kappa_{eg} = c_2 \theta\,q_1^{3/2} \Lambda (L + C)$, and $\kappa_{eh} = 4\sqrt{2}\,\theta\,q_1^{3/2} \Lambda (L + C)$. Thus, $m(x)$ is $(\kappa_{ef}, \kappa_{eg}, \kappa_{eh}, 0)$-fully quadratic, and since these constants are independent of $y^0$ and $\Delta$, the result is proven. $\square$

The final step in establishing that we have a fully quadratic class of models is to define an algorithm that produces a strongly $\Lambda$-poised sample set in a finite number of steps.
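Lemma 3.10 also gives a computable certificate of strong $\Lambda$-poisedness: the smallest $\Lambda$ for which $\|(W \hat M)^{+} W\| \le \sqrt{q_1/p_1}\,\Lambda$ holds. The sketch below is ours (the sample set, weights, and function name are illustrative; assumes NumPy).

```python
import numpy as np

def strong_poisedness_bound(Yhat, w):
    """Lambda certified by Lemma 3.10: if ||(W Mhat)^+ W|| <= sqrt(q1/p1)*Lambda,
    the shifted/scaled set Yhat is strongly Lambda-poised in B(0;1)."""
    phi = lambda y: np.array([1.0, y[0], y[1],
                              0.5*y[0]**2, y[0]*y[1], 0.5*y[1]**2])
    Mhat = np.array([phi(y) for y in Yhat])     # p1-by-q1 design matrix
    W = np.diag(w)
    p1, q1 = Mhat.shape
    norm_AW = np.linalg.norm(np.linalg.pinv(W @ Mhat) @ W, 2)
    return norm_AW * np.sqrt(p1 / q1)           # smallest certified Lambda

theta = np.linspace(0, 2*np.pi, 7)[:-1]         # 6 points on the unit circle
Yhat = np.column_stack([np.cos(theta), np.sin(theta)])
Yhat = np.vstack([[0.0, 0.0], Yhat])            # add the center: p1 = 7, q1 = 6
lam_unit = strong_poisedness_bound(Yhat, np.ones(7))
lam_wt = strong_poisedness_bound(Yhat, np.array([1.0, .9, .8, .9, 1.0, .7, .8]))
```

Note the center point is needed here: six points on a circle all satisfy one quadratic equation, so they alone are not poised for quadratic regression. Badly clustered or nearly dependent sample sets drive the certified $\Lambda$ up sharply.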


Proposition 3.12. Let $\hat Y = \{y^0, \ldots, y^p\} \subset \mathbb{R}^n$ be a set of points in the unit ball $B(0;1)$ such that $\|y^j\| = 1$ for at least one $j$. Let $w = (w_0, \ldots, w_p)^T$ be a vector of positive weights. If $\hat Y$ is strongly $\Lambda$-poised in $B(0;1)$ in the sense of unweighted regression, then there exists a constant $\sigma > 0$, independent of $\hat Y$, $\Lambda$, and $w$, such that $\hat Y$ is strongly $\left( \sigma\,\mathrm{cond}(W)\,\Lambda \right)$-poised in the weighted regression sense.

Proof: Let $\hat M = M(\bar\phi, \hat Y)$, where $\bar\phi$ is the monomial basis. By Lemma 3.10 applied with unit weights, $\|\hat M^{+}\| \le \theta\,q_1 \Lambda/\sqrt{p_1}$, where $\theta$ is a constant independent of $\hat Y$ and $\Lambda$. Thus,
\[ \|(W \hat M)^{+} W\| \le \mathrm{cond}(W)\,\|\hat M^{+}\| \le \mathrm{cond}(W)\,\theta\,\frac{q_1 \Lambda}{\sqrt{p_1}}, \]
where the first inequality results from $\|(W \hat M)^{+} W\| \le \|(W \hat M)^{+}\| \|W\|$ and
\[ \|(W \hat M)^{+}\| = \frac{1}{\sigma_{\min}(W \hat M)} \le \frac{1}{\sigma_{\min}(W)\,\sigma_{\min}(\hat M)} = \|\hat M^{+}\| \|W^{-1}\|. \]
The result follows with $\sigma = \theta \sqrt{q_1}$. $\square$

The significance of this proposition is that any model improvement algorithm for unweighted regression can be used for weighted regression to ensure the same global convergence properties, provided $\mathrm{cond}(W)$ is bounded. For the model improvement algorithm described in the following section, this requirement is satisfied by bounding the weights away from zero while keeping the largest weight equal to 1.

In practice, we need not ensure $\Lambda$-poisedness of $Y_k$ at every iterate to guarantee that the algorithm converges to a second-order minimum. Rather, $\Lambda$-poisedness only needs to be enforced as the algorithm stagnates.

3.4 Model Improvement Algorithm

This section describes a model improvement algorithm (MIA) for regression which, by the preceding section, can also be used for weighted regression to ensure that the sample sets are strongly $\Lambda$-poised for some fixed $\Lambda$ (which is not necessarily


known). The algorithm is based on the following observation, which is a straightforward extension of [12, Theorem 4.11].

The MIA presented in [12] makes assumptions (such as requiring all points to lie within $B(y^0; \Delta)$) to simplify the theory. We resist such assumptions to account for practical concerns (points which lie outside of $B(y^0; \Delta)$) that arise in the algorithm.

Proposition 3.13. If the shifted and scaled sample set $\hat Y$ of $p_1$ points contains $l = \lfloor p_1/q_1 \rfloor$ disjoint subsets of $q_1$ points, each of which is $\Lambda$-poised in $B(0;1)$ in the interpolation sense, then $\hat Y$ is strongly $\sqrt{\frac{l+1}{l}}\,\Lambda$-poised in $B(0;1)$ in the regression sense.

Proof: Let $Y_j = \{y_j^0, y_j^1, \ldots, y_j^q\}$, $j = 1, \ldots, l$, be the disjoint $\Lambda$-poised subsets of $\hat Y$, and let $Y_r$ be the remaining points in $\hat Y$. Let $\lambda_i^j$, $i = 0, \ldots, q$, be the interpolation Lagrange polynomials for the set $Y_j$. As noted in [12], for any $x \in \mathbb{R}^n$,
\[ \sum_{i=0}^{q} \lambda_i^j(x)\,\bar\phi(y_j^i) = \bar\phi(x), \quad j = 1, \ldots, l. \]
Dividing each of these equations by $l$ and summing yields
\[ \sum_{j=1}^{l} \sum_{i=0}^{q} \frac{1}{l}\,\lambda_i^j(x)\,\bar\phi(y_j^i) = \bar\phi(x). \tag{3.11} \]
Let $\lambda^j(x) = \left( \lambda_0^j(x), \ldots, \lambda_q^j(x) \right)^T$, and let $\bar\lambda \in \mathbb{R}^{p_1}$ be formed by concatenating the $\lambda^j(x)$, $j = 1, \ldots, l$, and a zero vector of length $|Y_r|$, and then dividing every entry by $l$. By (3.11), $\bar\lambda$ is a solution to the equation
\[ \sum_{i=0}^{p} \bar\lambda_i\,\bar\phi(y^i) = \bar\phi(x). \tag{3.12} \]
Since $Y_j$ is $\Lambda$-poised in $B(0;1)$, for any $x \in B(0;1)$,
\[ \|\lambda^j(x)\| \le \sqrt{q_1}\,\|\lambda^j(x)\|_\infty \le \sqrt{q_1}\,\Lambda. \]
Thus,
\[ \|\bar\lambda\| \le \frac{\sqrt{l}\,\max_j \|\lambda^j(x)\|}{l} \le \sqrt{\frac{q_1}{l}}\,\Lambda \le \sqrt{\frac{l+1}{l}}\,\sqrt{\frac{q_1}{p_1/q_1}}\,\Lambda = \sqrt{\frac{l+1}{l}}\,\frac{q_1}{\sqrt{p_1}}\,\Lambda. \]


Let $\ell_i(x)$, $i = 0, \ldots, p$, be the regression Lagrange polynomials for the complete set $\hat Y$. As observed in [12], $\ell(x) = (\ell_0(x), \ldots, \ell_p(x))^T$ is the minimum norm solution to (3.12). Thus,
\[ \|\ell(x)\| \le \sqrt{\frac{l+1}{l}}\,\frac{q_1}{\sqrt{p_1}}\,\Lambda. \]
Since this holds for all $x \in B(0;1)$, $\hat Y$ is strongly $\sqrt{\frac{l+1}{l}}\,\Lambda$-poised in $B(0;1)$. $\square$

Based on this observation, and noting that $\frac{l+1}{l} \le 2$ for $l \ge 1$, we adopt the following strategy for improving a shifted and scaled regression sample set $\hat Y \subset B(0;1)$:

1. If $\hat Y$ contains $l \ge 1$ $\Lambda$-poised subsets with at most $q_1$ points left over, $\hat Y$ is strongly $\sqrt{2}\,\Lambda$-poised.

2. Otherwise, if $\hat Y$ contains at least one $\Lambda$-poised subset, save as many $\Lambda$-poised subsets as possible, plus at most $q_1$ additional points from $\hat Y$, discarding the rest.

3. Otherwise, add additional points to $\hat Y$ in order to create a $\Lambda$-poised subset. Keep this subset, plus at most $q_1$ additional points from $\hat Y$.

To implement this strategy, we first describe an algorithm that attempts to find a $\Lambda$-poised subset of $\hat Y$. To discuss the algorithm, we introduce the following definition:

Definition 3.14. A set $Y \subset B$ is said to be $\Lambda$-subpoised in a set $B$ if there exists a superset $Z \supseteq Y$ that is $\Lambda$-poised in $B$ with $|Z| = q_1$.

Given a sample set $Y \subset B(0;1)$ (not necessarily shifted and scaled) and a radius $\tilde\Delta$, the algorithm below selects a $\Lambda$-subpoised subset $Y_{\text{new}} \subseteq Y$ containing as many points as possible. If $|Y_{\text{new}}| = q_1$, then $Y_{\text{new}}$ is $\Lambda$-poised in $B(0; \tilde\Delta)$ for some fixed $\Lambda$. Otherwise, the algorithm determines a new point $y_{\text{new}} \in B(0; \tilde\Delta)$ such that $Y_{\text{new}} \cup \{y_{\text{new}}\}$ is $\Lambda$-subpoised in $B(0; \tilde\Delta)$.


Algorithm FindSet (Finds a $\Lambda$-subpoised set)

Input: A sample set $Y \subset B(0;1)$ and a trust region radius $\tilde\Delta \in [2\sqrt{\xi_{\text{acc}}}, 1]$, for fixed parameter $\xi_{\text{acc}} > 0$.

Output: A set $Y_{\text{new}} \subseteq Y$ that is $\Lambda$-poised in $B(0; \tilde\Delta)$; or a $\Lambda$-subpoised set $Y_{\text{new}} \subset B(0; \tilde\Delta)$ and a new point $y_{\text{new}} \in B(0; \tilde\Delta)$ such that $Y_{\text{new}} \cup \{y_{\text{new}}\}$ is $\Lambda$-subpoised in $B(0; \tilde\Delta)$.

Step 0 (Initialization): Initialize the pivot polynomial basis to the monomial basis: $u_i(x) = \bar\phi_i(x)$, $i = 0, \ldots, q$. Set $Y_{\text{new}} = \emptyset$. Set $i = 0$.

Step 1 (Point Selection): If possible, choose $j_i \in \{i, \ldots, |Y| - 1\}$ such that $|u_i(y^{j_i})| \ge \xi_{\text{acc}}$ (threshold test). If such an index is found, add the corresponding point to $Y_{\text{new}}$ and swap the positions of points $y^i$ and $y^{j_i}$ in $Y$. Otherwise, compute $y_{\text{new}} = \mathrm{argmax}_{x \in B(0; \tilde\Delta)} |u_i(x)|$, and exit, returning $Y_{\text{new}}$ and $y_{\text{new}}$.

Step 2 (Gaussian Elimination): For $j = i+1, \ldots, |Y| - 1$:
\[ u_j(x) = u_j(x) - \frac{u_j(y^i)}{u_i(y^i)}\,u_i(x). \]
If $i < q$, set $i = i + 1$ and go to Step 1; otherwise, exit, returning $Y_{\text{new}}$.
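The pivoting loop of Algorithm FindSet can be sketched as follows. This is an illustration of ours in one dimension, not the thesis implementation: the argmax over $B(0; \tilde\Delta)$ is replaced by a grid search, and the point selection uses simple partial pivoting rather than a distance-weighted rule.

```python
import numpy as np

def find_set(Y, delta, xi_acc=1e-3):
    """Sketch of Algorithm FindSet in 1-D with basis {1, x, x^2/2}.
    Returns (Y_new, y_new); y_new is None when a full poised set was found."""
    basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: 0.5 * x**2]
    u = list(basis)                         # pivot polynomials (as callables)
    Y, Y_new = list(Y), []
    grid = np.linspace(-delta, delta, 201)  # stand-in for argmax over B(0;delta)
    for i in range(len(u)):
        vals = [abs(u[i](y)) for y in Y[i:]]
        if vals and max(vals) >= xi_acc:    # threshold test (partial pivoting)
            j = i + int(np.argmax(vals))
            Y[i], Y[j] = Y[j], Y[i]         # point selection + swap
        else:                               # no acceptable point: propose one
            y_new = grid[np.argmax(np.abs(u[i](grid)))]
            return Y_new, y_new
        Y_new.append(Y[i])
        # Gaussian elimination on the remaining pivot polynomials
        for k in range(i + 1, len(u)):
            uk, ui, c = u[k], u[i], u[k](Y[i]) / u[i](Y[i])
            u[k] = (lambda uk, ui, c: lambda x: uk(x) - c * ui(x))(uk, ui, c)
    return Y_new, None
```

For instance, three well-spread points yield a full set, while two nearly identical points force the algorithm to exit early and return a new point far from the cluster.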

For the set $Y_{\text{new}}$ selected by the algorithm, the threshold test ensures that
\[ \|M^{-1}\| \le \frac{\sqrt{q_1}\,\gamma_{\text{growth}}}{\xi_{\text{acc}}}, \tag{3.13} \]
where $\gamma_{\text{growth}}$ is the growth factor for the factorization (see [27]).

Point Selection. The point selection rule allows flexibility in how an acceptable point is chosen. For example, to keep the growth factor down, one could choose the index $j_i$ that maximizes $|u_i(y^j)|$, which corresponds to Gaussian elimination with partial pivoting. But in practice, it is often better to select points according to their proximity to the current iterate. In our implementation, we balance these two criteria by choosing the index that maximizes $|u_i(y^j)|/d_j^3$ over $j \ge i$, where $d_j = \max\{1, \|y^j\|/\tilde\Delta\}$. If all sample points are contained in $B(0; \tilde\Delta)$, then $d_j = 1$ for all $j$. In this case, the point selection rule is identical to the one used in Algorithm 6.6 of [12], with the addition of the threshold test. When $Y$ contains points outside $B(0; \tilde\Delta)$, the corresponding values of $d_j$ are greater than 1, so the point selection rule gives preference to points that are within $B(0; \tilde\Delta)$.

The theoretical justification for our revised point selection rule comes from examining the error bounds in Corollary 3.7. For a given point $x$ in $B(0; \tilde\Delta)$, each sample point $y^i$ makes a contribution to the error bound that is proportional to $\|y^i - x\|^3$ (assuming the computational error is relatively small). Since $x$ can be anywhere in the trust region, this suggests modifying the point selection rule to maximize $|u_i(y^{j_i})|/\hat d_{j_i}^3$, where $\hat d_j = \max_{x \in B(0; \tilde\Delta)} \|y^j - x\|/\tilde\Delta = \|y^j\|/\tilde\Delta + 1$. To simplify the analysis, we modify this formula so that all points inside the trust region are treated equally, resulting in the formula $d_j = \max\{1, \|y^j\|/\tilde\Delta\}$.

Lemma 3.15. Suppose Algorithm FindSet returns a set $Y_{\text{new}}$ with $|Y_{\text{new}}| = q_1$. Then $Y_{\text{new}}$ is $\Lambda$-poised in $B(0; \tilde\Delta)$ for some $\Lambda$, which is proportional to $(\gamma_{\text{growth}}/\xi_{\text{acc}}) \max\{1, \tilde\Delta^2/2, \tilde\Delta\}$, where $\gamma_{\text{growth}}$ is the growth factor for the Gaussian elimination.


Proof: Let $\tilde M = M(\bar\phi, Y_{\text{new}})$. By (3.13), $\|\tilde M^{-1}\| \le \sqrt{q_1}\,\gamma_{\text{growth}}/\xi_{\text{acc}}$. Let $\ell(x) = (\ell_0(x), \ldots, \ell_q(x))^T$ be the vector of interpolation Lagrange polynomials for the sample set $Y_{\text{new}}$. For any $x \in B(0; \tilde\Delta)$,
\[ \|\ell(x)\|_\infty = \|\tilde M^{-T} \bar\phi(x)\|_\infty \le \|\tilde M^{-1}\|_1 \|\bar\phi(x)\|_\infty \le \sqrt{q_1}\,\|\tilde M^{-1}\| \|\bar\phi(x)\|_\infty \le \frac{q_1\,\gamma_{\text{growth}}}{\xi_{\text{acc}}}\,\|\bar\phi(x)\|_\infty \le \frac{q_1\,\gamma_{\text{growth}}}{\xi_{\text{acc}}}\,\max\{1, \tilde\Delta^2/2, \tilde\Delta\}. \]
Since this inequality holds for all $x \in B(0; \tilde\Delta)$, $Y_{\text{new}}$ is $\Lambda$-poised for $\Lambda = (q_1\,\gamma_{\text{growth}}/\xi_{\text{acc}}) \max\{1, \tilde\Delta^2/2, \tilde\Delta\}$, which establishes the result. $\square$

In general, the growth factor in the above lemma depends on the matrix $M$ and the threshold $\xi_{\text{acc}}$. Because of the threshold test, it is possible to establish a bound on the growth factor that is independent of $\tilde M$. So we can claim that the algorithm selects a $\Lambda$-poised set for a fixed $\Lambda$ that is independent of $Y$. However, the bound is extremely large, so it is not very useful. Nevertheless, in practice $\gamma_{\text{growth}}$ is quite reasonable, so $\Lambda$ tends to be proportional to $\max\{1, \tilde\Delta^2/2, \tilde\Delta\}/\xi_{\text{acc}}$.

In the case where the threshold test is not satisfied, Algorithm FindSet determines a new point $y_{\text{new}}$ by maximizing $|u_i(x)|$ over $B(0; \tilde\Delta)$. In this case, we need to show that the new point would satisfy the threshold test. The following lemma shows that this is possible, provided $\xi_{\text{acc}}$ is small enough. The proof is modeled after the proof of [12, Lemma 6.7].

Lemma 3.16. Let $v^T \bar\phi(x)$ be a polynomial of degree at most 2, where $\|v\|_\infty = 1$. Then
\[ \max_{x \in B(0; \tilde\Delta)} |v^T \bar\phi(x)| \ge \min\{1, \tilde\Delta^2/4\}. \]

Proof: Since $\|v\|_\infty = 1$, at least one of the coefficients of $q(x) = v^T \bar\phi(x)$ is 1, $-1$, $1/2$, or $-1/2$. Looking at the case where the largest coefficient is 1 or $1/2$ ($-1$ and $-1/2$ are similarly proven), either this coefficient corresponds to the constant term, a linear term $x_i$, or a quadratic term $x_i^2/2$ or $x_i x_j$. Restrict all variables not appearing in the term corresponding to the largest coefficient to zero.


If q(x) = 1, then the lemma trivially holds.

If q(x) = x_i²/2 + a x_i + b, let x̃ = Δ̃ e_i ∈ B(0; Δ̃). Then

q(x̃) = Δ̃²/2 + Δ̃ a + b,   q(−x̃) = Δ̃²/2 − Δ̃ a + b,   and   q(0) = b.

If |q(−x̃)| ≥ Δ̃²/4 or |q(x̃)| ≥ Δ̃²/4, the result is shown. Otherwise, b ≤ −Δ̃²/4, so |q(0)| ≥ Δ̃²/4.

If q(x) = a x_i²/2 + x_i + b, then let x̃ = Δ̃ e_i, yielding q(x̃) = Δ̃ + a Δ̃²/2 + b and q(−x̃) = −Δ̃ + a Δ̃²/2 + b. Then

max{|q(−x̃)|, |q(x̃)|} = max{|−Δ̃ + η|, |Δ̃ + η|} ≥ Δ̃ ≥ min{1, Δ̃²/4},

where η = a Δ̃²/2 + b.

If q(x) = a x_i²/2 + b x_j²/2 + x_i x_j + c x_i + d x_j + e, we consider 4 points on B(0; Δ̃), written in the coordinates (x_i, x_j):

y¹ = (Δ̃/√2, Δ̃/√2),  y² = (Δ̃/√2, −Δ̃/√2),  y³ = (−Δ̃/√2, Δ̃/√2),  y⁴ = (−Δ̃/√2, −Δ̃/√2).

Then

q(y¹) = a Δ̃²/4 + b Δ̃²/4 + Δ̃²/2 + c Δ̃/√2 + d Δ̃/√2 + e,
q(y²) = a Δ̃²/4 + b Δ̃²/4 − Δ̃²/2 + c Δ̃/√2 − d Δ̃/√2 + e,
q(y³) = a Δ̃²/4 + b Δ̃²/4 − Δ̃²/2 − c Δ̃/√2 + d Δ̃/√2 + e,
q(y⁴) = a Δ̃²/4 + b Δ̃²/4 + Δ̃²/2 − c Δ̃/√2 − d Δ̃/√2 + e.

Note that q(y¹) − q(y²) = Δ̃² + d√2 Δ̃ and q(y³) − q(y⁴) = −Δ̃² + d√2 Δ̃. There are two cases:


1. If d ≥ 0, then q(y¹) − q(y²) ≥ Δ̃², so either |q(y¹)| ≥ Δ̃²/2 ≥ min{1, Δ̃²/4} or |q(y²)| ≥ Δ̃²/2 ≥ min{1, Δ̃²/4}.
2. If d < 0, then a similar study of q(y³) − q(y⁴) proves the result. ∎

Lemma 3.17 Suppose in Algorithm FindSet ξ_acc ≤ min{1, Δ̃²/4}. If Algorithm FindSet exits during the point selection step, then Y_new ∪ {y_new} is Λ-subpoised in B(0; Δ̃) for some fixed Λ, which is proportional to (γ_growth/ξ_acc) max{1, Δ̃²/2, Δ̃}, where γ_growth is the growth parameter for the Gaussian elimination.

Proof: Consider a modified version of Algorithm FindSet that does not exit in the point selection step. Instead, y_i is replaced by y_new, and y_new is added to Y_new. This modified algorithm will always return a set consisting of q₁ points. Call this set Z. Let Y_new and y_new be the output of the unmodified algorithm, and observe that Y_new ∪ {y_new} ⊆ Z.

To show that Y_new ∪ {y_new} is Λ-subpoised, we show that Z is Λ-poised in B(0; Δ̃). From the Gaussian elimination, after k iterations of the algorithm, the (k+1)st pivot polynomial u_k(x) can be expressed as (v^k)ᵀφ̄(x) for some v^k = (v₀, …, v_{k−1}, 1, 0, …, 0); that is, the v_i are the coefficients for the basis expansion of the polynomial u_k. Observe that ‖v^k‖_∞ ≥ 1, and let ṽ = v^k/‖v^k‖_∞. By Lemma 3.16,

max_{x ∈ B(0;Δ̃)} |u_k(x)| = max_{x ∈ B(0;Δ̃)} |(v^k)ᵀφ̄(x)| = ‖v^k‖_∞ max_{x ∈ B(0;Δ̃)} |ṽᵀφ̄(x)| ≥ ‖v^k‖_∞ min{1, Δ̃²/4} ≥ min{1, Δ̃²/4} ≥ ξ_acc.

It follows that each time a new point is chosen in the point selection step, that point will satisfy the threshold test. Thus, the set Z returned by the modified algorithm will include q₁ points, all of which satisfy the threshold test. By


Lemma 3.15, Z is Λ-poised, with Λ proportional to (γ_growth/ξ_acc) max{1, Δ̃²/2, Δ̃}. It follows that Y_new ∪ {y_new} is Λ-subpoised. ∎

We are now ready to state our model improvement algorithm for regression. Prior to calling this algorithm, we discard all points in Y with distance greater than Δ/√ξ_acc from y⁰. We then form the shifted and scaled set Ŷ by the transformation ŷ_j = (y_j − y⁰)/d, where d = max_{y_j ∈ Y} ‖y_j − y⁰‖, and we scale the trust region radius accordingly (i.e., Δ̃ = Δ/d). This ensures that Δ̃ = Δ/d ≥ √ξ_acc. After calling the algorithm, we reverse the shift and scale transformation.

Algorithm MIA (Model Improvement for Regression)

Input: A shifted and scaled sample set Ŷ ⊂ B(0; 1) and a trust region radius Δ̃ ≥ √ξ_acc for fixed ξ_acc ∈ (0, 1/(4r²)), where r ≥ 1 is a fixed parameter.

Output: A modified set Y′ with improved poisedness on B(0; Δ̃).

Step 0 (Initialization): Remove the point in Ŷ farthest from y⁰ = 0 if it is outside B(0; rΔ̃). Set Y′ = ∅.

Step 1 (Find Poised Subset): Use Algorithm FindSet either to identify a Λ-poised subset Y_new ⊆ Ŷ, or to identify a Λ-subpoised subset Y_new ⊆ Ŷ and one additional point y_new ∈ B(0; Δ̃) such that Y_new ∪ {y_new} is Λ-subpoised on B(0; Δ̃).

Step 2 (Update Set):
If Y_new is Λ-poised, add it to Y′ and remove Y_new from Ŷ. Remove all points from Ŷ that are outside of B(0; rΔ̃).
Otherwise, if Y′ = ∅, set Y′ = Y_new ∪ {y_new} plus q₁ − |Y_new| − 1 additional points from Ŷ.
Otherwise, set Y′ = Y′ ∪ Y_new plus q₁ − |Y_new| additional points from Ŷ.


Set Ŷ = ∅.

Step 3: If |Ŷ| ≥ q₁, go to Step 1.

In Algorithm MIA, if every call to Algorithm FindSet yields a Λ-poised set Y_new, then eventually all points in Ŷ will be included in Y′. In this case, the algorithm has verified that Ŷ contains ℓ̄ = ⌊p₁/q₁⌋ Λ-poised sets. By Proposition 3.13, Ŷ is strongly ((ℓ̄+1)/ℓ̄)Λ-poised in B(0; 1).

If the first call to FindSet fails to identify a Λ-poised subset, the algorithm improves the sample set by adding a new point y_new and by removing points so that the output set Y′ contains at most q₁ points. In this case the output set contains the Λ-subpoised set Y_new ∪ {y_new}. Thus, if the algorithm is called repeatedly, with Ŷ replaced by Y′ after each call, eventually Y′ will contain a Λ-poised subset and will be strongly 2Λ-poised, by Proposition 3.13.

If Ŷ fails to be Λ-poised after the second or later call to FindSet, no new points are added. Instead, the sample set is improved by removing points from Ŷ so that the output set Y′ consists of all the Λ-poised subsets identified by FindSet, plus up to q₁ additional points. The resulting set is then strongly ((ℓ̂+1)/ℓ̂)Λ-poised, where ℓ̂ = ⌊|Y′|/q₁⌋.

Trust region scale factor. The trust region scale factor r was suggested in [12, Section 11.2], although implementation details were omitted. The scale factor determines which points are allowed to remain in the sample set. Each call to Algorithm MIA removes a point from outside B(0; rΔ̃) if such a point exists. Thus, if the algorithm is called repeatedly with Ŷ replaced by Y′ each time, eventually all points in the sample set will be in the region B(0; rΔ̃). Using a scale factor r > 1 can improve the efficiency of the algorithm. To see this, observe that if r = 1, a slight movement of the trust region center may result in previously "good" points lying just outside of B(y⁰; Δ). These points would then be unnecessarily removed from Ŷ.
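The shift-and-scale transformation applied before calling Algorithm MIA, and its reversal afterwards, can be sketched as follows (the helper names are hypothetical):

```python
import numpy as np

def shift_and_scale(Y, y0, delta):
    """Map the sample set so that y0 goes to the origin and the farthest
    point lands on the unit sphere; the trust region radius is scaled by
    the same factor d, giving delta_tilde = delta / d."""
    d = max(np.linalg.norm(y - y0) for y in Y)
    Y_hat = [(y - y0) / d for y in Y]
    return Y_hat, delta / d, d

def reverse_shift_and_scale(Y_hat, y0, d):
    """Undo the transformation after the model improvement call."""
    return [y0 + d * y for y in Y_hat]
```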


3.5 Computational Results

As shown in the previous section, the CSV2 framework using weighted quadratic regression converges to a second-order stationary point provided the ratio between the largest and smallest weight is bounded. This leaves much leeway in the derivation of the weights. We now describe a heuristic strategy based on the error bounds derived earlier in this chapter.

3.5.1 Using Error Bounds to Choose Weights

Intuitively, the models used throughout our algorithm will be most effective if the weights are chosen so that m(x) is as accurate as possible, in the sense that it agrees with the second-order Taylor approximation of f(x) around the current trust region center y⁰. That is, we want to estimate the quadratic function

q(x) = f(y⁰) + ∇f(y⁰)ᵀ(x − y⁰) + ½ (x − y⁰)ᵀ ∇²f(y⁰) (x − y⁰).

If f(x) happens to be a quadratic polynomial, then f̄_i = q(y_i) + ε_i. If the errors ε_i are uncorrelated random variables with zero mean and finite variances σ_i², i = 0, …, p, then the best linear unbiased estimator of the polynomial q(x) is given by m(x) = φ̄(x)ᵀα, where α solves (3.4) with the ith weight proportional to 1/σ_i [51, Theorem 4.4]. This is intuitively appealing, since each sample point will then have the same expected contribution to the weighted sum of squared errors.

When f(x) is not a quadratic function, the errors depend not just on the computational error but also on the distances from each point to y⁰. In the particular case when x = y⁰, the first three terms of (3.7) are the quadratic function q(y_i). Thus, the error between the computed function value and q(y_i) is given by

f̄_i − q(y_i) = ½ (y_i − y⁰)ᵀ H_i(y⁰) (y_i − y⁰) + ε_i,   (3.14)


where H_i(y⁰) = ∇²f(η_i(y⁰)) − ∇²f(y⁰) for some point η_i(y⁰) on the line segment connecting y⁰ and y_i.

We shall refer to the first term on the right as the Taylor error and the second term on the right as the computational error. By Assumption 2.1, ‖H_i(y⁰)‖ ≤ L‖y_i − y⁰‖.

This leads us to the following heuristic argument for choosing the weights. Suppose that H_i(y⁰) is a random symmetric matrix such that the standard deviation of ‖H_i(y⁰)‖ is proportional to L‖y_i − y⁰‖; in other words, the standard deviation of ‖H_i(y⁰)‖ equals γL‖y_i − y⁰‖ for some constant γ. Then the Taylor error has standard deviation proportional to L‖y_i − y⁰‖³. Assuming the computational error is independent of the Taylor error, the total error f̄_i − q(y_i) has standard deviation √((γL‖y_i − y⁰‖³)² + σ_i²), where σ_i is the standard deviation of ε_i. This leads to the following formula for the weights:

w_i ∝ 1 / √(γ²L²‖y_i − y⁰‖⁶ + σ_i²).

Of course, this formula depends on knowing γ, L, and σ_i. If L, σ_i, and/or γ are not known, this formula could still be used in conjunction with some strategy for estimating L, σ_i, and γ (for example, based upon the accuracy of the model functions at known points). Alternatively, γ and L can be combined into a single parameter C that could be chosen using some type of adaptive strategy:

w_i ∝ 1 / √(C‖y_i − y⁰‖⁶ + σ_i²).

If the computational errors have equal variances, the formula can be further simplified as

w_i ∝ 1 / √(C̄‖y_i − y⁰‖⁶ + 1),   (3.15)

where C̄ = C/σ_i².

An obvious flaw in the above development is that the errors |f̄_i − q(y_i)| are not uncorrelated. Additionally, the assumption that ‖H_i(y⁰)‖ is proportional to L‖y_i − y⁰‖ is valid only for limited classes of functions. Nevertheless, based on our


computational experiments, (3.15) appears to provide a sensible strategy for balancing differing levels of computational uncertainty with the Taylor error.

3.5.2 Benchmark Performance

To study the impact of weighted regression, we developed MATLAB implementations of three quadratic model-based trust region algorithms using interpolation, regression, and weighted regression, respectively, to construct the quadratic model functions. To the extent possible, the differences between these algorithms were minimized, with code shared wherever possible. Obviously, all three methods use different strategies for constructing the model from the sample set. Beyond that, the only difference is that the two regression methods use the model improvement algorithm described in Section 3.4, whereas the interpolation algorithm uses the model improvement algorithm described in [12, Algorithm 6.6].

We compared the three algorithms using the suite of test problems for benchmarking derivative-free optimization algorithms made available by Moré and Wild [41]. We ran our tests on the four types of problems from this test suite: smooth problems with no noise, piecewise smooth functions, functions with deterministic noise, and functions with stochastic noise. We do not consider the algorithm presented in this chapter to be ideal for handling stochastically noisy functions. For example, if the initial point happens to be evaluated with large negative noise, the algorithm will never re-evaluate this point and may never move the trust region center. We are actively attempting to construct a more robust algorithm; we consider such modifications non-trivial and outside the scope of the current work. The problems were run with the following parameter settings:

Δ_max = 100, Δ₀^icb = 1, ε₀ = 10⁻⁶, η₁ = 0.5, γ = 0.5, γ_inc = 2, ε_c = 0.01, μ = 2, β = 0.5, ω = 0.5, r = 3, ξ_acc = 10⁻⁴.

For the interpolation algorithm, we used ξ_imp = 1.01 for the calls to [12, Algorithm 6.6].
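The weighting scheme (3.15), with all computational-error variances assumed equal, can be sketched as:

```python
import numpy as np

def regression_weights(Y, y0, C_bar=100.0):
    """Weights from (3.15): w_i proportional to
    1 / sqrt(C_bar * ||y_i - y0||**6 + 1).  Distant points, whose Taylor
    error dominates, are down-weighted; C_bar = 100 matches the value
    used in the benchmark runs."""
    w = np.array([1.0 / np.sqrt(C_bar * np.linalg.norm(y - y0) ** 6 + 1.0)
                  for y in Y])
    return w / w.max()  # normalize so the largest weight is 1
```

The normalization is only a convenience, since the weighted least-squares solution depends only on the ratios of the weights.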


As described in [41], the smooth problems are derived from 22 nonlinear least squares functions defined in the CUTEr [23] collection. For each problem, the objective function f(x) is defined by

f(x) = Σ_{k=1}^{m} g_k(x)²,

where g : ℝⁿ → ℝᵐ represents one of the CUTEr test functions. The piecewise smooth problems are defined using the same CUTEr test functions by

f(x) = Σ_{k=1}^{m} |g_k(x)|.

The noisy problems are derived from the smooth problems by multiplying by a noise function as follows:

f(x) = (1 + ε_f Φ(x)) Σ_{k=1}^{m} g_k(x)²,

where ε_f defines the relative noise level. For stochastically noisy problems, Φ(x) is a random variable drawn from the uniform distribution U[−1, 1]. To simulate deterministic noise, Φ(x) is a function that oscillates between −1 and 1, with both high-frequency and low-frequency oscillations; see the corresponding equations in [41]. Using multiple starting points for some of the test functions, a total of 53 different problems are specified in the test suite for each of these 3 types of problems.

For the weighted regression algorithm, the weights were determined by the weighting scheme (3.15) with C̄ = 100.

The relative performances of the algorithms were compared using performance profiles and data profiles [17, 41]. If S is the set of solvers to be compared on a suite of problems P, let t_{p,s} be the number of iterates required for solver s ∈ S on a problem p ∈ P to find a function value satisfying

f(x) ≤ f_L + τ (f(x⁰) − f_L),   (3.16)


where f_L is the best function value achieved by any s ∈ S. Then the performance profile of a solver s ∈ S is the fraction

ρ_s(α) = (1/|P|) |{p ∈ P : t_{p,s} ≤ α min{t_{p,s̄} : s̄ ∈ S}}|.

The data profile of a solver s ∈ S is

d_s(α) = (1/|P|) |{p ∈ P : t_{p,s} ≤ α (n_p + 1)}|,

where n_p is the dimension of problem p ∈ P. For more information on these profiles, including their relative merits and faults, see [41].

Performance profiles comparing the three algorithms are shown in Figure 3.1 for an accuracy of τ = 10⁻⁵. We observe that on the smooth problems, the weighted and unweighted regression methods had similar performance, and both performed slightly better than interpolation. For the deterministically noisy problems, we see slightly better performance from the weighted regression method, and this improvement is even more pronounced for the benchmark problems with stochastic noise. For the piecewise differentiable functions, the performance of the weighted regression method is significantly better. This mirrors the findings in [13], where SID-PSM using regression models shows considerable improvement over interpolation models.

We also compared our weighted regression algorithm with the DFO algorithm [8] as well as NEWUOA [50], which had the best performance of the three solvers compared in [41]. We obtained the DFO code from the COIN-OR website [38]; this code calls IPOPT, which we also obtained from COIN-OR. We obtained NEWUOA from [40]. We ran both algorithms on the benchmark problems with a stopping criterion of Δ_min = 10⁻⁸, where Δ_min denotes the minimum trust region radius. For NEWUOA, the number of interpolation conditions was set to NPT = 2n + 1.

The performance profiles are shown in Figure 3.2, with an accuracy of τ = 10⁻⁵. NEWUOA outperforms both our algorithm and DFO on the smooth problems. This is not surprising, since NEWUOA is a mature code that has been refined over several


Figure 3.1: Performance (left) and data (right) profiles: interpolation vs. regression vs. weighted regression.

years, whereas our code is a relatively unsophisticated implementation. In contrast, on the noisy problems and the piecewise differentiable problems, our weighted regression


algorithm achieves superior performance.

Figure 3.2: Performance (left) and data (right) profiles: weighted regression vs. NEWUOA vs. DFO (problems with stochastic noise).

3.6 Summary and Conclusions


Our computational results indicate that using weighted regression to construct more accurate model functions can reduce the number of function evaluations required to reach a stationary point. Encouraged by these results, we believe that further study of weighted regression methods is warranted, and this chapter provides a theoretical foundation for such study. In particular, we have extended the concepts of Λ-poisedness and strong Λ-poisedness to the weighted regression framework, and we demonstrated that any scheme that maintains strongly Λ-poised sample sets for unweighted regression can also be used to maintain strongly Λ-poised sample sets for weighted regression, provided that no weight is too small relative to the other weights. Using these results, we showed that, when the computational error is sufficiently small relative to the trust region radius, the algorithm converges to a stationary point under standard assumptions.

This investigation began with a goal of more efficiently dealing with computational error in derivative-free optimization, particularly under varying levels of uncertainty. Surprisingly, we discovered that regression-based methods can be advantageous even in the absence of computational error. Regression methods produce quadratic approximations that better fit the objective function close to the trust region center. This is due partly to the fact that interpolation methods throw out points that are close together in order to maintain a well-poised sample set. In contrast, regression models keep these points in the sample set, thereby putting greater weight on points close to the trust region center.

The question of how to choose weights needs further study. In this chapter, we proposed a heuristic that balances uncertainties arising from computational error with uncertainties arising from poor model fidelity (i.e., Taylor error), as described in §3.5.1. This weighting scheme appears to provide a benefit for noisy or non-differentiable problems. We believe better schemes can be devised based on more rigorous analysis.


4. Stochastic Derivative-free Optimization using a Trust Region Framework

In this chapter, we propose and analyze the convergence of an algorithm which finds a local minimizer of the unconstrained function f : ℝⁿ → ℝ. The value of f at a given point x cannot be observed directly; rather, the optimization routine only has access to noise-corrupted function values f̄. Such noise may be deterministic, due to round-off error from finite-precision arithmetic or iterative methods, or stochastic, arising from variability or randomness in some observed process. We focus our attention in this chapter on minimizing f when f̄ has the form

f̄(x) = f(x) + ε,   (4.1)

where ε ~ N(0, σ²).

Minimizing noisy functions of this nature arises in a variety of settings; for example, in almost any problem where physical system measurements are being optimized. Consider a city planner wanting to maximize traffic flow on a major thoroughfare by adjusting the timing of traffic lights. For each timing pattern x, the traffic flow f̄(x) is physically measured to provide information about the expected traffic flow f(x).

Stochastic approximation algorithms, built to solve

min_x f(x) = E[f̄(x)],

have existed in the literature since Robbins and Monro's algorithm for finding roots of an expected value function [53]. The Kiefer-Wolfowitz (KW) algorithm [35] generalized this algorithm to minimize the expected value of a function. Its iterates have the form

x_{k+1} = x_k + a_k G(x_k),

where G is a finite difference estimate of the gradient of f̄. The ith component of G is found by

G_i = (f̄(x_k + c_k e_i) − f̄(x_k − c_k e_i)) / (2 c_k),


where e_i is the ith unit vector. While KW spawned many generalizations, most forms require predetermined decaying sequences for both the step size parameter a_k and the finite difference parameter c_k. As opposed to the 2n evaluations of f̄ required at each iterate of KW, Spall's simultaneous perturbation stochastic approximation (SPSA) [56] requires only 2 function evaluations per iterate, independent of n. SPSA estimates G_i by

G_i = (f̄(x_k + c_k Δ_k) − f̄(x_k − c_k Δ_k)) / (2 c_k Δ_k^(i)),

where Δ_k ∈ ℝⁿ is a random perturbation vector with entries Δ_k^(i) which are independent and identically distributed (i.i.d.) from a distribution with bounded inverse moments, symmetrically distributed around zero, and uniformly bounded in magnitude for all k and i. Though SPSA greatly reduces the number of evaluations of f̄, the choice of the sequences a_k and c_k is critical to algorithmic performance. Nevertheless, if f has a unique minimum x*, both KW and SPSA converge almost surely (which implies convergence in probability and convergence in distribution): x_k → x* as k → ∞. There is also a version of SPSA which uses four function evaluations to estimate the value, gradient, and Hessian of f [57].

The algorithm which follows differs from the work in Chapter 3 in a few ways. First, the analysis presented in Chapter 3 gives very conservative error bounds; to tighten these bounds, we must consider specific probability distributions for the error. Second, in Chapter 3, ρ_k was defined using f̄(x_k) as an estimate for f(x_k). The work that follows evaluates model functions m_k(x_k) and m̂_k(x_k) to provide better estimates of the true function value. It is possible to estimate f(x_k) by repeated evaluation of f̄(x_k), but we desire an algorithm which avoids repeatedly sampling points to reduce the variance at a point x. Such a technique gains information about the noise ε only in the stochastic case, and no information about ε if f̄ is deterministic. But if many points sufficiently close to x are sampled, information about both f and ε can be gleaned. As is often the case, the point x is the likely next iterate, and the information gathered


about f near x will be used immediately. Also, if the noise in f̄ is deterministic but the optimizer has imperfect control of x, it may be possible to consider the problem a stochastic optimization problem.

The analysis of the algorithm is complicated by the presence of noise. Since there is noise in each function evaluation, it is impossible to be certain that the model matches the function. For example, if f(x) = x², there is a nonzero (but tiny) probability that f̄(x) < −M for any M > 0 at every point evaluated. Therefore, we can only have confidence (which we denote 1 − δ_k, for δ_k small) that the model and function agree. The quantity δ_k can be adjusted as the algorithm stagnates to ensure increasingly accurate models at the expense of more function evaluations. A key requirement of the convergence analysis is that as Δ_k → 0, δ_k → 0 as well. For example, we can choose a simple rule such as δ_k = min{Δ_k, 0.05} to prove results about our algorithm; there are many other equally valid rules for updating δ_k that ensure increasingly accurate models as Δ_k → 0.

Our ultimate goal is to prove that the algorithm converges to a stationary point of f almost surely (with probability 1), but this is a daunting task. This is to be expected considering the following two quotes, both from [58]:

"There is a fundamental trade-off between algorithm efficiency and algorithm robustness (reliability and stability in a broad range of problems). In essence, algorithms that are designed to be very efficient on one type of problem tend to be 'brittle' in the sense that they do not reliably transfer to problems of a different type."

and

"Unfortunately, for general nonlinear problems, there is no known finite-sample (k < ∞) distribution for the SA [stochastic approximation] iterate. Further, the theory governing the asymptotic (k → ∞) distribution is rather difficult."
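For concreteness, the KW and SPSA gradient estimates discussed earlier can be sketched as follows (a sketch; the noisy oracle `f_bar` stands for any function returning f(x) + ε):

```python
import numpy as np

def kw_gradient(f_bar, x, c_k):
    """Kiefer-Wolfowitz estimate: central differences along each
    coordinate, costing 2n evaluations of the noisy oracle."""
    n = len(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        g[i] = (f_bar(x + c_k * e) - f_bar(x - c_k * e)) / (2.0 * c_k)
    return g

def spsa_gradient(f_bar, x, c_k, rng):
    """SPSA estimate: one random +/-1 perturbation vector, costing only
    2 evaluations regardless of the dimension n."""
    delta = rng.choice([-1.0, 1.0], size=len(x))
    diff = f_bar(x + c_k * delta) - f_bar(x - c_k * delta)
    return diff / (2.0 * c_k * delta)
```

On a noiseless quadratic, the KW estimate is exact; the SPSA estimate is only correct in expectation, which is why its per-iterate cheapness comes at the price of carefully chosen decaying sequences a_k and c_k.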


Despite these pessimistic views, we are able to make progress in the work that follows. Since we are attempting to construct a robust algorithm, with a measure of confidence in our solution after a finite number of iterations, our theoretical requirements may not all be implemented in the algorithm; relaxing requirements may yield a more suitable algorithm for a specific problem instance.

Our algorithm is a derivative-free trust region method using regression quadratic models for their perceived ability to handle noisy function evaluations. We outline the modifications required for convergence when minimizing a function with stochastic noise. For example, when there is no noise in the function being optimized, we can measure the accuracy of the model m_k with the ratio

ρ_k = (f(x_k) − f(x_k + s_k)) / (m_k(x_k) − m_k(x_k + s_k)).

This measures the actual decrease observed in f versus the decrease predicted by the model m_k. Since f cannot be evaluated directly, we propose a modified ratio ρ̂_k in Section 4.2 which we believe is more appropriate for noisy functions. We also propose a modified form of κ-fully quadratic models for stochastically noisy functions.

An overview of the chapter follows: in Section 4.1 we define κ-fully quadratic and linear models with confidence 1 − δ_k on B(x; Δ) and show that quadratic and linear regression models satisfy these new definitions, provided there are a sufficient number of poised points in B(x; Δ). We outline the algorithm in Section 4.2 and show that it converges to a first-order stationary point in Section 4.3. We provide suggestions for implementing our algorithm and compare one implementation against other algorithms for minimizing (4.1) in Section 4.4. Lastly, we discuss the results in Section 4.5 and outline some of the future avenues for research.

4.1 Preliminary Results and Definitions

We make the following assumptions:

Assumption 4.1 The noise ε ~ N(0, σ²).


Assumption 4.2 The function f ∈ LC² with Lipschitz constant L on Ω = ∪_k B(x_k; Δ_max) ⊂ ℝⁿ.

Assumption 4.3 The function f is bounded on L(f(x₀)), where L(c) = {x | f(x) ≤ c}.

In solving the trust region subproblem, we do not require an exact solution; instead, it is sufficient to find an approximate solution, but it must satisfy the following assumption.

Assumption 4.4 If m_k and Δ_k are the model and trust region radius at iterate k, x_k + s_k is chosen by the trust region solver to solve min_{x ∈ B(x_k;Δ_k)} m_k(x), and s_k^C = −(Δ_k/‖g_k‖) g_k is the Cauchy step, then for all k,

m_k(x_k) − m_k(x_k + s_k) ≥ κ_fcd [m_k(x_k) − m_k(x_k + s_k^C)]

for some constant κ_fcd ∈ (0, 1].

This assumption merely states that every trust region subproblem solution achieves a fraction of the decrease possible from taking the Cauchy step, and that this fraction is bounded positively away from zero. The assumption also allows us to not solve the trust region subproblem exactly.

Assumption 4.5 There exists a constant κ_bhf > 0 such that, for all x_k generated in the algorithm, ‖∇²f(x_k)‖ ≤ κ_bhf.

We prove the following three claims used in this chapter. Here X ≤_{1−α} Y denotes P(X ≤ Y) ≥ 1 − α.

Lemma 4.6 If X ≤_{1−α} Y and Y ≤_{1−α} Z, then X ≤_{1−2α} Z.


Proof:

P(X ≤ Z) ≥ P(X ≤ Y ∧ Y ≤ Z)
 = 1 − P(X > Y ∨ Y > Z)
 = 1 − P(X > Y) − P(Y > Z) + P(X > Y ∧ Y > Z)
 ≥ 1 − P(X > Y) − P(Y > Z)
 ≥ 1 − α − α = 1 − 2α.

So X ≤_{1−2α} Z. ∎

Lemma 4.7

1 − Σ_{i=1}^{n} P(a_i ≥ t/n) ≤ P(Σ_{i=1}^{n} a_i < t).

Proof:

P(Σ_{i=1}^{n} a_i < t) ≥ P(a_1 < t/n ∧ a_2 < t/n ∧ ⋯ ∧ a_n < t/n) = 1 − P(a_1 ≥ t/n ∨ ⋯ ∨ a_n ≥ t/n) ≥ 1 − Σ_{i=1}^{n} P(a_i ≥ t/n). ∎

Lemma 4.8 Let Y ⊂ B(0; 1) be a strongly Λ-poised (Definition 2.14) sample set with p₁ points. Let X be the corresponding quadratic design matrix; then

[(XᵀX)⁻¹]_{i,i} ≤ (Λ √(q₁/p₁))²,

where [A]_{i,i} is the ith diagonal entry of A.

Proof: Since (XᵀX)⁻¹ is symmetric and positive definite, its ith eigenvalue λ_i equals its ith singular value σ_i. By [12, Theorem 4.11], the inverse of the smallest singular value of X is bounded by Λ√(q₁/p₁). That is,

Λ√(q₁/p₁) ≥ 1/σ_min(X),


or

(Λ√(q₁/p₁))² ≥ 1/σ_min(X)² = 1/λ_min(XᵀX) = λ_max((XᵀX)⁻¹) = σ_max((XᵀX)⁻¹) = ‖(XᵀX)⁻¹‖ = max_{‖v‖=1} ‖(XᵀX)⁻¹ v‖ ≥ ‖(XᵀX)⁻¹ e_i‖ ≥ [(XᵀX)⁻¹]_{i,i},

where e_i is the ith unit vector. ∎

4.1.1 Models which are κ-fully Quadratic with Confidence 1 − δ_k

To prove convergence of the algorithm presented in Section 4.2, we first propose a modified version of κ-fully quadratic models.

Definition 4.9 Let f satisfy Assumption 4.2. Let κ = (κ_ef, κ_eg, κ_eh, ν₂^m) be a given vector of constants, and let Δ > 0. A model function m ∈ C² is κ-fully quadratic with confidence 1 − δ_k on B(x; Δ) for δ_k ∈ (0, 1) if m has a Lipschitz continuous Hessian with corresponding Lipschitz constant bounded by ν₂^m and

(i) the error between the Hessian of the model and the Hessian of the function satisfies
P(‖∇²f(y) − ∇²m(y)‖ ≤ κ_eh Δ for all y ∈ B(x; Δ)) ≥ 1 − δ_k;

(ii) the error between the gradient of the model and the gradient of the function satisfies
P(‖∇f(y) − ∇m(y)‖ ≤ κ_eg Δ² for all y ∈ B(x; Δ)) ≥ 1 − δ_k;

(iii) the error between the model and the function satisfies
P(|f(y) − m(y)| ≤ κ_ef Δ³ for all y ∈ B(x; Δ)) ≥ 1 − δ_k.

This is occasionally abbreviated κ-f.q.w.c. 1 − δ_k.
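A one-dimensional Monte Carlo sketch of the third condition in Definition 4.9: rebuild the regression model repeatedly under fresh noise and estimate the probability that the model error stays below κ_ef Δ³ throughout B(x₀; Δ). The helper name and all constants below are illustrative assumptions, not part of the thesis.

```python
import numpy as np

def estimated_confidence(f, x0, delta, sigma, kappa_ef,
                         n_samples, n_rep=200, n_grid=25, seed=0):
    """Fraction of n_rep independent rebuilds for which
    |f(y) - m(y)| <= kappa_ef * delta**3 holds on a grid in B(x0, delta),
    where m is a least-squares quadratic fit to noisy evaluations (4.1)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(x0 - delta, x0 + delta, n_grid)
    hits = 0
    for _ in range(n_rep):
        Y = rng.uniform(x0 - delta, x0 + delta, n_samples)
        f_bar = f(Y) + rng.normal(0.0, sigma, n_samples)  # noisy oracle
        coef = np.polyfit(Y, f_bar, 2)                    # regression model
        err = np.abs(f(grid) - np.polyval(coef, grid))
        hits += err.max() <= kappa_ef * delta ** 3
    return hits / n_rep
```

With f itself quadratic and tiny noise the estimate is near 1; raising σ or shrinking the sample size lowers it, mirroring the sample-size requirement in the theorem that follows.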


These definitions are only useful if model functions satisfying them can be easily constructed; the models must also be easy to minimize over a trust region. In the following theorem, we show that quadratic regression models satisfy the requirements of Definition 4.9, provided there are enough poised points within the trust region.

Theorem 4.10 If the function f satisfies Assumption 4.2 and the noise ε satisfies Assumption 4.1, then for a given δ_k ∈ (0, 1), there exists a κ = (κ_ef, κ_eg, κ_eh, ν₂^m) such that for any x₀ ∈ ℝⁿ and Δ > 0, if Y ⊂ B(x₀; Δ) is strongly Λ-poised and

|Y| ≥ z²_{1−δ_k/(2q₁)} σ² q₁³ Λ² / Δ⁶,

then the quadratic regression model is κ-fully quadratic with confidence 1 − δ_k, where z_α is the number of standard deviations away from zero on a standard normal distribution such that the area to the left of z_α is α [46].

Proof: By Taylor's theorem, for any point x ∈ B(x₀; Δ) there exists a point η(x) on the line segment connecting x to x₀ such that

f(x) = f(x₀) + ∇f(x₀)ᵀ(x − x₀) + ½ (x − x₀)ᵀ ∇²f(η(x)) (x − x₀)
 = f(x₀) + ∇f(x₀)ᵀ(x − x₀) + ½ (x − x₀)ᵀ ∇²f(x₀) (x − x₀) + ½ (x − x₀)ᵀ H(x) (x − x₀),   (4.2)

where H(x) = ∇²f(η(x)) − ∇²f(x₀).

Let m(x) be the quadratic least squares model regressing the values f̄ on Y. Since m is quadratic, Taylor's theorem says that for any x,

m(x) = m(x₀) + ∇m(x₀)ᵀ(x − x₀) + ½ (x − x₀)ᵀ ∇²m(x₀) (x − x₀).

Let β be the true parameters of the quadratic part of f defined by the first three terms of (4.2), and let β̂ be the least-squares estimate for β (i.e., if X is the design matrix for the set Y and f̄ is the vector with ith entry f̄(y_i), then β̂ =


(XᵀX)⁻¹Xᵀf̄). Define the mapping V(x) : ℝⁿ → ℝ^{q₁} by V(x) = V([x₁, …, x_n]ᵀ) = (1, x₁, …, x_n, ½x₁², x₁x₂, …, ½x_n²)ᵀ. Then the ith row of X is V(y_i)ᵀ. The parameters β̂ define the model m; that is, m(x) = β̂ᵀV(x).

Without loss of generality, assume Δ ≤ 1. Then for any x ∈ B(x₀; Δ),

|f(x) − m(x)| = |f(x₀) + ∇f(x₀)ᵀ(x − x₀) + ½(x − x₀)ᵀ∇²f(x₀)(x − x₀) + ½(x − x₀)ᵀH(x)(x − x₀) − m(x₀) − ∇m(x₀)ᵀ(x − x₀) − ½(x − x₀)ᵀ∇²m(x₀)(x − x₀)|
 = |βᵀV(x − x₀) − β̂ᵀV(x − x₀) + ½(x − x₀)ᵀH(x)(x − x₀)|
 ≤ Σ_{i=0}^{q} |β_i − β̂_i| |V(x − x₀)_i| + (L/2)‖x − x₀‖³
 ≤ Σ_{i=0}^{q} |β_i − β̂_i| |V(x − x₀)_i| + (L/2)Δ³
 ≤ |β₀ − β̂₀| + Σ_{i=1}^{n} |β_i − β̂_i| Δ + Σ_{i=n+1}^{q} |β_i − β̂_i| Δ² + (L/2)Δ³
 ≤ Σ_{i=0}^{q} |β_i − β̂_i| + (L/2)Δ³.   (4.3)

Since the noise is uncorrelated and normally distributed with mean zero and constant variance, it is known that β̂ ~ N(β, σ²(XᵀX)⁻¹) [46]. If [A]_{i,i} denotes the ith diagonal entry of a matrix A, then the 1 − δ_k/q₁ confidence interval for each of the β_i has the form [46]:


1 − δ_k/q₁ = P(β̂_i − z_{1−δ_k/(2q₁)} √(σ²[(XᵀX)⁻¹]_{i,i}) < β_i < β̂_i + z_{1−δ_k/(2q₁)} √(σ²[(XᵀX)⁻¹]_{i,i}))
 = P(|β_i − β̂_i| < z_{1−δ_k/(2q₁)} √(σ²[(XᵀX)⁻¹]_{i,i})).

‖∇f(x) − ∇m(x)‖ = ‖∇f(x₀) + ∇²f(x₀)(x − x₀) + H(x)(x − x₀) − (∇m(x₀) + ∇²m(x₀)(x − x₀))‖
 ≤ ‖∇f(x₀) − ∇m(x₀)‖ + ‖(∇²f(x₀) − ∇²m(x₀))(x − x₀)‖ + ‖H(x)(x − x₀)‖
 ≤ Σ_{i=1}^{n} |β_i − β̂_i| + Σ_{i=n+1}^{q} |β_i − β̂_i| Δ + L‖x − x₀‖²
 ≤ Σ_{i=1}^{q} |β_i − β̂_i| + LΔ²
 ≤_{1−δ_k} Δ³ + LΔ² ≤ Δ² + LΔ² = κ_eg Δ²,

where κ_eg = 1 + L. A similar argument for ‖∇²f(x) − ∇²m(x)‖ with κ_eh = 1 + L proves the theorem. ∎

To certify that a model satisfies Definition 4.9, or to improve a least squares regression model into one that is κ-fully quadratic with confidence 1 − δ_k, is straightforward: we must ensure there are enough poised points within B(x; Δ) to satisfy the bound given in Theorem 4.10; otherwise, add enough strongly Λ-poised points to Y. For a technique to generate strongly Λ-poised sets, see Chapter 3 of this thesis or [12].

4.1.2 Models which are κ̂-fully Linear with Confidence 1 − δ_k

While the models m_k used in the main algorithm are quadratic, linear models m̂(x) can approximate f near B(x_k + s_k; Δ̂_k) to sufficient accuracy. Therefore, if we have enough points within B(x_k + s_k; Δ̂), we can bound the error between f(x_k + s_k) and m̂_k(x_k + s_k). We quantify that accuracy in the following definition.


Definition 4.11. Let $f$ satisfy Assumption 4.2. Let $\hat\kappa = (\hat\kappa_{ef}, \hat\kappa_{eg}, \hat\nu_1^m)$ be a given vector of constants, and let $\Delta > 0$. A model function $\hat m \in \mathcal{C}^1$ is $\hat\kappa$-fully linear with confidence $1-\alpha_k$ on $B(x, \Delta)$ for $\alpha_k \in (0,1)$ if $\hat m$ has a Lipschitz continuous gradient with corresponding Lipschitz constant bounded by $\hat\nu_1^m$ and

- the error between the gradient of the model and the gradient of the function satisfies
\[
P\Big( \|\nabla f(y) - \nabla \hat m(y)\| \le \hat\kappa_{eg}\Delta \ \ \forall y \in B(x,\Delta) \Big) \ge 1 - \alpha_k;
\]
- the error between the model and the function satisfies
\[
P\Big( |f(y) - \hat m(y)| \le \hat\kappa_{ef}\Delta^2 \ \ \forall y \in B(x,\Delta) \Big) \ge 1 - \alpha_k.
\]

This is occasionally abbreviated $\hat\kappa$-f.l.w.c. $1-\alpha_k$.

Theorem 4.12. If the function $f$ satisfies Assumption 4.2 and the noise $\varepsilon$ satisfies Assumption 4.1, then for a given $\alpha_k \in (0,1)$, there exists a $\hat\kappa = (\hat\kappa_{ef}, \hat\kappa_{eg}, \hat\nu_1^m)$ such that for any $x_0 \in \mathbb{R}^n$, $\Delta > 0$, if $Y \subset B(x_0, \Delta)$ is strongly $\Lambda$-poised and
\[
|Y| \;\ge\; \frac{z_{1-\frac{\alpha_k}{2(n+2)^2}}^{2}\,\big(2(n+1)\big)^{3}\,\sigma^2}{\Delta^4},
\]
then the linear regression model is $\hat\kappa$-fully linear with confidence $1-\alpha_k$.

Proof: The proof is nearly identical to that of Theorem 4.10. $\Box$

4.2 Stochastic Optimization Algorithm

Below is an outline of our proposed stochastic algorithm. For $x_k + s_k$, the solution to the trust region subproblem, and a radius $\Delta_k > \hat\Delta_k > 0$, define $\hat Y_k = \big\{ y \in Y_{tot} : \|(x_k + s_k) - y\| \le \hat\Delta_k \big\}$. Let $Y_{tot} = \{y_1, \dots, y_m\}$ be the set of points where $f$ has been evaluated, $f_i := f(y_i)$. Define a null model $m_0$, an initial trust region radius $\Delta_0$, and an initial TR center


$x_0$. Choose constants satisfying $0 < \gamma < 1 < \gamma_{inc}$, $\varepsilon_c > 0$, $0 \le \eta_0 \le \eta_1 < 1$ with $\eta_1 \ne 0$, $\mu > \beta > 0$, and $\omega \in (0,1)$. Choose $r \in (0,1)$ and define $\hat\Delta_k = r\Delta_k$.

Algorithm 1: A trust-region algorithm to minimize a stochastic function.

    Let $k = 0$;
    Start:
    Set $\alpha_k = \min\{\Delta_k, 0.05\}$;
    if $\sigma_k = \max\{\|g_k\|, -\lambda_{\min}(H_k)\} < \varepsilon_c$ and either (i) $m_k$ is not certifiably $\kappa$-f.q.w.c. $1-\alpha_k$ on $B(x_k, \Delta_k)$ or (ii) $\Delta_k > \mu\sigma_k$ then
        Apply Algorithm 2 to update $Y_k$, $\Delta_k$, and $m_k$;
        Set $\alpha_k = \min\{\Delta_k, 0.05\}$;
    else
        Select or generate a strongly $\Lambda$-poised set of points $Y_k \subset B(x_k, \Delta_k)$ from $Y_{tot}$ such that $Y_k$ has enough points to ensure $m_k$ is $\kappa$-f.q.w.c. $1-\alpha_k$;
    Build a regression quadratic model $m_k(x)$ through $Y_k$. Solve $s_k \in \operatorname{arg\,min}_{s:\|s\| < \Delta_k} m_k(x_k + s)$. Build a $\hat\kappa$-f.l.w.c. $1-\alpha_k$ model $\hat m_k$ on $\hat Y_k$ (possibly adding points to $Y_{tot}$) and compute
    \[
    \hat\rho_k = \frac{m_k(x_k) - \hat m_k(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)};
    \]
    if $\hat\rho_k \ge \eta_1$, or $\hat\rho_k \ge \eta_0$ and $m_k$ is $\kappa$-f.q.w.c. $1-\alpha_k$ on $B(x_k, \Delta_k)$, then
        $x_{k+1} = x_k + s_k$;
    else
        $x_{k+1} = x_k$;
    if $\hat\rho_k \ge \eta_1$ then
        $\Delta_{k+1} = \min\{\gamma_{inc}\Delta_k, \Delta_{\max}\}$;
    else
        $\Delta_{k+1} = \gamma\Delta_k$;
    Let $m_{k+1}$ be the (possibly improved) model;
    Set $k = k + 1$ and go to Start;
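To make the flow of Algorithm 1 concrete, the following one-dimensional Python sketch mirrors the acceptance test and radius update. It is a simplification of ours, not the algorithm itself: quadratic interpolation on a three-point stencil stands in for the strongly $\Lambda$-poised regression model $m_k$, a single fresh evaluation stands in for the $\hat\kappa$-fully linear model $\hat m_k$, and all poisedness and confidence bookkeeping is omitted.

```python
def trust_region_sketch(f_eval, x0, delta0=1.0, eta1=0.2, gamma=0.5,
                        gamma_inc=2.0, delta_max=10.0, iters=30):
    """Schematic 1-D analogue of Algorithm 1 (safeguards omitted)."""
    x, delta = x0, delta0
    for _ in range(iters):
        h = delta
        f_m, f_0, f_p = f_eval(x - h), f_eval(x), f_eval(x + h)
        g = (f_p - f_m) / (2 * h)            # stand-in for g_k
        c = (f_p - 2 * f_0 + f_m) / (h * h)  # stand-in for H_k
        m = lambda s: f_0 + g * s + 0.5 * c * s * s
        cands = [-delta, delta]              # candidate minimizers of m
        if c > 0 and abs(-g / c) <= delta:
            cands.append(-g / c)
        s = min(cands, key=m)
        denom = m(0.0) - m(s)
        if denom <= 0:                       # no predicted decrease: shrink
            delta *= gamma
            continue
        # A fresh evaluation at x + s stands in for m-hat_k(x_k + s_k);
        # note that m(0), not a stored realization of f(x_k), is used.
        rho_hat = (m(0.0) - f_eval(x + s)) / denom
        if rho_hat >= eta1:                  # successful: accept and expand
            x, delta = x + s, min(gamma_inc * delta, delta_max)
        else:                                # unsuccessful: shrink
            delta *= gamma
    return x
```

On a noiseless quadratic such as $f(t) = (t-1)^2$ the sketch reaches the minimizer in a few iterations; with noisy evaluations, the point of Algorithm 1 is precisely that a model value rather than a stored realization appears in the numerator of $\hat\rho_k$.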


Note that we are approximating $f(x_k + s_k)$ using a second model $\hat m_k$ in a different trust region $\hat\Delta_k$ around $x_k + s_k$. Formal convergence of the algorithm, specifically Lemma 4.15, requires the ability to approximate $f(x_k + s_k)$ with increasing accuracy as the algorithm progresses. Such accuracy is not available from a single realization of the noisy function value, namely $\tilde f(x_k + s_k)$. While it is possible to obtain increasingly accurate approximations of $f(x_k + s_k)$ by repeated sampling, we hope the theory generated in this chapter can be easily transferred to the case where deterministic noise is present in $f$. With deterministic noise, $\mathrm{Var}(\varepsilon) = 0$, and therefore repeated sampling will provide no further information.

Also, if we eventually shrink our trust region around a given point, points generated in $B(x_k + s_k, \hat\Delta_k)$ to make an accurate model $\hat m_k(x_k + s_k)$ can be used in the construction of an accurate model $m_j(x)$ at some later iterate $j$.

Algorithm 2: Criticality Step

    Initialization: Set $i = 0$. Set $m_k^{(0)} = m_k$.
    Repeat: Increment $i$ by one. Improve the previous model by adding points to $Y_{tot}$ until it is $\kappa$-fully quadratic with confidence $1-\alpha_k$ on $B(x_k, \omega^{i-1}\Delta_k)$. (This can be done by Theorem 4.10 and the model improvement algorithm from [3], which builds a strongly $\Lambda$-poised set $Y$ in $O(\Delta^{-6})$ steps if the models satisfy Definition 4.9.) Denote the new model $m_k^{(i)}$. Set $\tilde\Delta_k = \omega^{i-1}\Delta_k$ and $\tilde m_k = m_k^{(i)}$.
    Until $\tilde\Delta_k \le \mu\,\sigma_k^{(i)}(x_k)$.
    Return $m_k = \tilde m_k$, $\Delta_k = \min\big\{\max\big\{\tilde\Delta_k, \beta\sigma_k^{(i)}(x_k)\big\}, \Delta_k\big\}$, and $Y_{tot}$.

We adopt the naming of iterates from Conn, Scheinberg, and Vicente:

1. $\hat\rho_k \ge \eta_1$: $x_k + s_k$ is accepted and the trust region is increased. We call these iterations successful.

2. $\eta_1 > \hat\rho_k \ge \eta_0$ and $m_k$ is $\kappa$-fully quadratic with confidence $1-\alpha_k$: $x_k + s_k$ is accepted but $\Delta_k$ is decreased. We call these iterations acceptable.


3. $\eta_1 > \hat\rho_k$ and $m_k$ is not $\kappa$-fully quadratic with confidence $1-\alpha_k$: $x_k + s_k$ is not accepted and the model is improved. We call these iterations model improving.

4. $\eta_0 > \hat\rho_k$ and $m_k$ is $\kappa$-fully quadratic with confidence $1-\alpha_k$: $x_k + s_k$ is not accepted and $\Delta_k$ is reduced. We call these iterations unsuccessful.

4.3 Convergence

4.3.1 Convergence to a First-order Stationary Point

The use of quadratic $m_k$ might suggest convergence to a second-order stationary point. Such a proof would require a quadratic $\hat m_k$ as well, and since $\hat\Delta \le \Delta$, this would require more points in $B(x_k + s_k, \hat\Delta_k)$ than in $B(x_k, \Delta_k)$. Since it is frequently the case that $f(x_k + s_k) > f(x_k)$ even when $m_k$ is $\kappa$-f.q.w.c. $1-\alpha_k$, we find it wasteful to build a quadratic $\hat m_k$ around $x_k + s_k$. This is one of the motivations for $\hat\kappa$-fully linear models for $\hat m_k$; with this, we can prove convergence to a first-order stationary point.

We first show that if $x_k$ is not a stationary point for $f$, then Algorithm 2 will exit with probability 1.

Theorem 4.13. Given $\alpha_k \in (0,1)$, if $f$ satisfies Assumption 4.2 and $\|\nabla f(x_k)\| > 2\mu\omega^{j-1}\Delta_k$, there is probability at least $1-\alpha_k$ that Algorithm 2 will correctly exit on each iterate $i$ after $j$ such that $\omega^{i-1} < \sqrt{\omega^{j-1}/\kappa_{eg}}$, where $\mu$ and $\omega$ are declared in Algorithm 1.

Proof: Assume $\omega^{i-1} < \sqrt{\omega^{j-1}/\kappa_{eg}}$, $\Delta_k \le 1$, and that Algorithm 2 cycles infinitely. After sufficiently many iterations of the criticality step, $m_k^{(i)}$ will be $\kappa$-fully quadratic with


confidence $1-\alpha_k$ on $B\big(x_k, \sqrt{\omega^{j-1}/\kappa_{eg}}\,\Delta_k\big)$. Therefore, with confidence $1-\alpha_k$,
\begin{align*}
\sigma_k^{(i)} \ \ge\ \big\|g_k^{(i)}\big\| &\ \ge\ \|\nabla f(x_k)\| - \big\|\nabla f(x_k) - g_k^{(i)}\big\| \\
&\ >\ 2\mu\omega^{j-1}\Delta_k - \big\|\nabla f(x_k) - g_k^{(i)}\big\| && \text{by assumption} \\
&\ \ge\ 2\mu\omega^{j-1}\Delta_k - \kappa_{eg}\Big(\sqrt{\omega^{j-1}/\kappa_{eg}}\Big)^{2}\Delta_k^2 && \text{by Definition 4.9} \\
&\ =\ 2\mu\omega^{j-1}\Delta_k - \omega^{j-1}\Delta_k^2 \\
&\ \ge\ \frac{1}{\mu}\,\omega^{i-1}\Delta_k && \text{since } \Delta_k \le 1 \text{ and } i \ge j.
\end{align*}
So for each $i$ after $j$ such that $\omega^{i-1} < \sqrt{\omega^{j-1}/\kappa_{eg}}$, we have $1-\alpha_k$ confidence that Algorithm 2 will exit. $\Box$

Since we require $\alpha_k \to 0$ as $\Delta_k \to 0$, for any $\alpha_k > 0$, eventually $\Delta_k$ will be small enough so that Algorithm 2 exits with probability at least $1-\alpha_k$. In other words, this theorem ensures that the algorithm exits with probability 1.

Lemma 4.14. If $f$ satisfies Assumption 4.5 and $m_k$ is $\kappa$-fully quadratic with confidence $1-\alpha$, there exists a constant $\kappa_{bhm} > 0$ such that, with confidence $1-\alpha$, $\|H_k\| \le \kappa_{bhm}$ for all $k$, where $H_k$ is the Hessian of $m_k$.

Proof:
\begin{align*}
\|H_k\| &\le \big\|H_k - \nabla^2 f(x_k)\big\| + \big\|\nabla^2 f(x_k)\big\| \\
&\le \kappa_{eh}\Delta_k + \big\|\nabla^2 f(x_k)\big\| && \text{with confidence } 1-\alpha \text{, by Definition 4.9} \\
&\le \kappa_{eh}\Delta_k + \kappa_{bhf} && \text{by Assumption 4.5} \\
&\le \kappa_{eh}\Delta_{\max} + \kappa_{bhf} =: \kappa_{bhm}. \ \Box
\end{align*}


The following lemma shows that, if $x_k$ is not a stationary point of $f$ and $\Delta_k$ is small enough, there is a high probability that a successful step will be taken.

Lemma 4.15. Let $f$ satisfy Assumption 4.5 and let the trust region subproblem solution satisfy Assumption 4.4. Let $\kappa = (\kappa_{ef}, \kappa_{eg}, \kappa_{eh}, \nu_2^m)$ and $\hat\kappa = (\hat\kappa_{ef}, \hat\kappa_{eg}, \hat\kappa_{eh}, \hat\nu_1^m)$. Let the constants $\kappa_{fcd}$, $\kappa_{bhm}$, $\kappa_{ef}$, $\hat\kappa_{ef}$, $\eta_1$ be as specified in Assumption 4.4, Lemma 4.14, Definition 4.9, Definition 4.11, and declared at the beginning of Algorithm 1, respectively. If $m_k$ is $\kappa$-f.q.w.c. $1-\alpha_k$ on $B(x_k, \Delta_k)$, $\hat m_k$ is $\hat\kappa$-f.l.w.c. $1-\alpha_k$ on $B(x_k + s_k, \tilde\Delta_k)$, and
\begin{equation}
\Delta_k \le \min\left\{ \frac{\|g_k\|}{\kappa_{bhm}},\ \frac{\kappa_{fcd}\|g_k\|(1-\eta_1)}{2\kappa_{ef}\Delta_{\max} + 2\hat\kappa_{ef}} \right\}, \tag{4.4}
\end{equation}
then we have confidence $1 - 3\alpha_k$ that $\hat\rho_k \ge \eta_1$ on the $k$th iteration.

Proof: Using Lemma 4.14, the fact that $x_k + s_k$ is no worse than the Cauchy step (Assumption 4.4), and $\Delta_k \le \|g_k\|/\kappa_{bhm}$ (4.4) yields, with confidence $1-\alpha_k$,
\begin{equation}
m_k(x_k) - m_k(x_k + s_k) \ \ge\ \frac{\kappa_{fcd}}{2}\|g_k\|\min\left\{ \frac{\|g_k\|}{\kappa_{bhm}},\ \Delta_k \right\} = \frac{\kappa_{fcd}}{2}\|g_k\|\Delta_k. \tag{4.5}
\end{equation}


Using this,
\begin{align*}
|\hat\rho_k - 1| &= \left| \frac{m_k(x_k) - \hat m_k(x_k+s_k)}{m_k(x_k) - m_k(x_k+s_k)} - \frac{m_k(x_k) - m_k(x_k+s_k)}{m_k(x_k) - m_k(x_k+s_k)} \right| \\
&= \left| \frac{m_k(x_k+s_k) - \hat m_k(x_k+s_k)}{m_k(x_k) - m_k(x_k+s_k)} \right| \\
&\le \left| \frac{m_k(x_k+s_k) - f(x_k+s_k)}{m_k(x_k) - m_k(x_k+s_k)} \right| + \left| \frac{f(x_k+s_k) - \hat m_k(x_k+s_k)}{m_k(x_k) - m_k(x_k+s_k)} \right| \\
&\le \frac{\kappa_{ef}\Delta_k^3}{|m_k(x_k) - m_k(x_k+s_k)|} + \left| \frac{f(x_k+s_k) - \hat m_k(x_k+s_k)}{m_k(x_k) - m_k(x_k+s_k)} \right| && \text{by Definition 4.9} \\
&\le \frac{\kappa_{ef}\Delta_k^3}{|m_k(x_k) - m_k(x_k+s_k)|} + \frac{\hat\kappa_{ef}\hat\Delta_k^2}{|m_k(x_k) - m_k(x_k+s_k)|} && \text{by Definition 4.11} \\
&\le \frac{2\kappa_{ef}\Delta_k^3 + 2\hat\kappa_{ef}\hat\Delta_k^2}{\kappa_{fcd}\|g_k\|\Delta_k} && \text{by (4.5)} \\
&\le \frac{2\kappa_{ef}\Delta_{\max} + 2\hat\kappa_{ef}}{\kappa_{fcd}\|g_k\|}\,\Delta_k && \text{since } \hat\Delta_k \le \Delta_k \\
&\le 1 - \eta_1 && \text{by (4.4)}.
\end{align*}
Since we have confidence $1-\alpha_k$ that each of the second, third, and fourth inequalities holds, we have confidence $1-3\alpha_k$ that all three hold simultaneously. $\Box$

Lemma 4.16. For all $k$, assume the trust region subproblem solution satisfies Assumption 4.4. Let $f$ satisfy Assumption 4.5. If there exists a constant $\kappa_1$ such that $\|g_k\| \ge \kappa_1$ for all $k$, then there exists another constant $\kappa_2$ such that, for every iteration $k$ where $\Delta_k \le \kappa_2$, we have confidence $1-3\alpha_k$ that iteration $k$ will be successful and $\Delta_k$ will increase if $m_k$ is $\kappa$-f.q.w.c. $1-\alpha_k$.
Proof: This proof is similar to [12, Lemma 10.7]. Whether Algorithm 2 has been called or not,
\[
\Delta_k \ \ge\ \min\{\beta\sigma_k(x_k), \Delta_{k-1}\} \ \ge\ \min\{\beta\|g_k\|, \Delta_{k-1}\} \ \ge\ \min\{\beta\kappa_1, \Delta_{k-1}\}.
\]

PAGE 88

Since $\|g_k\| \ge \kappa_1$ for all $k$, Lemma 4.15 implies that whenever $\Delta_k$ is less than
\[
\kappa_3 = \min\left\{ \frac{\kappa_1}{\kappa_{bhm}},\ \frac{\kappa_{fcd}\kappa_1(1-\eta_1)}{2\kappa_{ef}\Delta_{\max} + 2\hat\kappa_{ef}} \right\},
\]
we have confidence $1-3\alpha_k$ that iteration $k$ will be successful ($\Delta_{k+1} = \gamma_{inc}\Delta_k$) or model improving ($\Delta_{k+1} = \Delta_k$). In either case $\Delta_{k+1} \ge \Delta_k$, so we have confidence $1-3\alpha_k$ that $\Delta_{k+1} \ge \Delta_k$ will hold whenever $\Delta_k \le \min\{\Delta_0, \beta\kappa_1, \kappa_3\} =: \kappa_2$. $\Box$

Theorem 4.17. Let Assumptions 4.1–4.5 be satisfied. If the number of successful iterations is finite, then
\[
\liminf_{k\to\infty} \|\nabla f(x_k)\| = 0
\]
with probability 1.

Proof: Consider iterations after the last successful iteration, denoted $k_{last}$. For every $k > k_{last}$, the iteration is unsuccessful ($\hat\rho_k < \eta_1$) and the model improvement algorithm is called. It takes a finite number $O(\Delta_k^{-6})$ of model improvement steps for the model to become $\kappa$-fully quadratic with confidence $1-\alpha_k$ on a given $B(x_k, \Delta)$; there are an infinite number of iterations that are either acceptable or unsuccessful. Given $\Delta_k$, we can guarantee that the trust region radius must decrease by at least one multiple of $\gamma \in (0,1)$ after $C\Delta_k^{-6}$ iterations for a fixed constant $C$. For any $\bar\Delta > 0$, there exists an integer $N$ such that $\gamma^N\Delta_{k_{last}} < \bar\Delta$. After
\[
\frac{C}{\Delta_{k_{last}}^6} + \cdots + \frac{C}{(\gamma^{N-1}\Delta_{k_{last}})^6} \;=\; \sum_{i=1}^{N} \frac{C}{(\gamma^{i-1}\Delta_{k_{last}})^6} \;=\; \frac{C\,(\gamma^{-6N} - 1)}{(\gamma^{-6} - 1)\,\Delta_{k_{last}}^6}
\]
iterations, the trust region radius will be less than $\bar\Delta$. Therefore $\lim_{k\to\infty}\Delta_k = 0$, which implies $\alpha_k \to 0$. Therefore, there exists an infinite sequence of iterates $\{k_i\}$ where $m_{k_i}$ is $\kappa$-f.q.w.c. $1-\alpha_{k_i}$ and, with confidence $1-\alpha_{k_i}$,
\[
\|\nabla f(x_{k_i})\| \le \|\nabla f(x_{k_i}) - g_{k_i}\| + \|g_{k_i}\| \le \kappa_{eg}\Delta_{k_i}^2 + \|g_{k_i}\|.
\]
The second term converges to zero with probability 1. To see this, assume $\|g_{k_i}\|$ is bounded away from zero; we can derive a contradiction using Lemma 4.15 and

PAGE 89

the fact that $\lim_{k\to\infty}\Delta_k = 0$. Since $\Delta_{k_i} \to 0$ and $\alpha_{k_i} \to 0$, for $k_i$ sufficiently large, $\Delta_{k_i} \le \kappa_2$, so there is probability $1-3\alpha_{k_i}$ that iteration $k_i$ will be successful. Thus, for any $\alpha_k > 0$ and $K > 0$, there exists $k_i > K$ such that the probability of step $k_i$ being successful is greater than $1-\alpha_k$. Therefore, with probability 1, there are infinitely many successful iterates, contradicting the definition of $k_{last}$. $\Box$

4.3.2 Infinitely Many Successful Steps

The results that follow outline parts of a proof for the case when Algorithm 1 generates infinitely many successful iterates. While the previous theorem proves $\Delta_k \to 0$, its proof is not valid when there are infinitely many successful iterates. We have made considerable effort to prove $\Delta_k \to 0$ in this case, but have been unable to do so. To progress, we assume it for the time being.

Assumption 4.18. $\lim_{k\to\infty}\Delta_k = 0$.

It should be noted that it may be possible to ensure Algorithm 1 satisfies this assumption, perhaps by slowly decreasing $\Delta_{\max}$. The details would need to be worked out, but this assumption is not as strong as it might appear.

Conjecture 4.19. If Assumption 4.18 and Assumption 4.5 hold and the trust region subproblem solution satisfies Assumption 4.4 for all $k$, then
\[
\liminf_{k\to\infty}\|g_k\| = 0
\]
with probability 1.

Discussion: If $\|g_k\| \ge \kappa_1$ for some $\kappa_1 > 0$, by Lemma 4.16, there exists a $\kappa_2$ such that whenever $\Delta_k < \kappa_2$, we will have a $1-3\alpha_k$ confidence of increasing the trust region. Using Assumption 4.18 and the fact that $\alpha_k = \min\{\Delta_k, 0.05\}$, we will increase the trust region with probability approaching 1 as $k$ gets large. This would appear to

PAGE 90

contradict $\Delta_k \to 0$, but to prove almost sure convergence (assuming each iteration is independent) we must show the product of the $(1-\alpha_k)$ approaches 1. And even the assumption that each iteration is independent is difficult, as many of the points used to build $m_k$ will be used to build $m_{k+1}$. If the events are dependent, then we must consider conditional probabilities, such as the probability of one step being a success given that the last step was not.

Conjecture 4.20. If Assumptions 4.1–4.5 and Assumption 4.18 hold, then for any subsequence $\{k_i\}$ such that
\begin{equation}
\lim_{i\to\infty}\|g_{k_i}\| = 0, \tag{4.6}
\end{equation}
we have, with probability 1,
\[
\lim_{i\to\infty}\|\nabla f(x_{k_i})\| = 0.
\]

Discussion: By (4.6), for $i$ sufficiently large, $\|g_{k_i}\| \le \varepsilon_c$. Thus, by Theorem 4.13, Algorithm 2 ensures that the model $m_{k_i}$ is $\kappa$-f.q.w.c. $1-\alpha_k$ on the ball $B(x_{k_i}, \Delta_{k_i})$ with $\Delta_{k_i} \le \mu\|g_{k_i}\|$ for all $i$ sufficiently large, provided $\nabla f(x_{k_i}) \ne 0$. By Definition 4.9, with confidence $1-\alpha_{k_i}$,
\[
\|\nabla f(x_{k_i}) - g_{k_i}\| \le \kappa_{eg}\Delta_{k_i}^2 \le \kappa_{eg}\mu\|g_{k_i}\|\Delta_{k_i}.
\]
Therefore,
\[
\|\nabla f(x_{k_i})\| \le \|\nabla f(x_{k_i}) - g_{k_i}\| + \|g_{k_i}\| \le \big(\kappa_{eg}\mu\Delta_{k_i} + 1\big)\|g_{k_i}\|.
\]
Since $\|g_{k_i}\| \to 0$ with probability 1, so does $\|\nabla f(x_{k_i})\|$.

Conjecture 4.21. If Assumptions 4.1–4.5 and Assumption 4.18 hold, then
\[
\liminf_{k\to\infty}\|\nabla f(x_k)\| = 0
\]
with probability 1.

Discussion: By Conjecture 4.19, there must exist a sequence $\{k_i\}$ such that $\lim_{i\to\infty}\|g_{k_i}\| = 0$. By Conjecture 4.20, this same sequence has $\lim_{i\to\infty}\|\nabla f(x_{k_i})\| = 0$. This proves the result.
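Whether the product of the confidences $1-\alpha_k$ tends to 1 depends on how fast $\alpha_k$ decreases: since $\alpha_k = \min\{\Delta_k, 0.05\}$, any decay of $\Delta_k$ fast enough that $\sum_k \alpha_k < \infty$ makes the tail products $\prod_{k \ge K}(1-\alpha_k)$ converge to 1 as $K$ grows. The snippet below illustrates this numerically under the assumed geometric rate $\Delta_k = 0.9^k$ (an illustration of ours only; the thesis does not fix a rate).

```python
# Partial tail products prod_{k=K}^{K+n-1} (1 - alpha_k) with
# alpha_k = min(Delta_k, 0.05) and the assumed decay Delta_k = 0.9**k.
def tail_product(K, n=2000):
    p = 1.0
    for k in range(K, K + n):
        alpha_k = min(0.9 ** k, 0.05)
        p *= 1.0 - alpha_k
    return p

# The products increase toward 1 as K grows, since sum(alpha_k) is finite.
for K in (0, 50, 100):
    print(K, round(tail_product(K), 6))
```

This is only the easy half of the argument: the dependence between iterations discussed above is what the conjectures leave open.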

PAGE 91

4.4 Computational Example

In this section, we highlight some of the advantages of using Algorithm 1 over a traditional trust region method, which assumes deterministic function evaluations. While both algorithms have much in common, the slight differences become significant in the presence of stochastic noise. For example, the deterministic algorithms are susceptible to negative noise, as we see in Figure 4.1. In that figure, the solid line is the true function $f$ which we want to minimize, and the dashed black lines show the 95% confidence interval of the noise. The black squares mark the noisy function values which determine the quadratic trust region model $m_k$, and the trust region radius $\Delta_k$ is represented by the dashed lines. The trust region center $x_k$ has a red box around it.

4.4.1 Deterministic Trust Region Method

Figure 4.1 shows a traditional trust region method after moving to a new trust region center at $x_k = 2.5$. Each image shows the progress of the algorithm, and we describe what occurred in the previous iterate to yield the present situation:

Figure 4.1, top left: By chance, the realization $\tilde f(x_k + s_k)$ was much less than $f$ at any point near $x_k + s_k$. It is now the new trust region center.

Figure 4.1, top right: The minimum of the quadratic model was not accepted since
\[
\rho_k = \frac{\tilde f(x_k) - \tilde f(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)} < 0.
\]
The trust region radius has also been shrunk since the sample set is strongly $\Lambda$-poised.

Figure 4.1, bottom left: A point outside of the trust region radius has been removed. Since $\rho_k < 0$, the trust region radius will shrink again.

Figure 4.1, bottom right: Another point outside of the trust region has been removed, and a new model has been built.

PAGE 92

Figure 4.1: Several iterations of a traditional trust region method assuming deterministic function evaluations. The trust region center is never moved.

The deterministic algorithm will accept a new trust region center when $\rho_k$ is sufficiently positive, i.e., if $\tilde f(x_k + s_k)$ is also much less than $\tilde f(x_k)$. If this does not happen, the algorithm will not find a successful step and the trust region radius will be repeatedly decreased. Since $\tilde f(x_k)$ is never re-evaluated, it is likely that the algorithm will terminate without ever taking a further step.

4.4.2 Stochastic Trust Region Method

In contrast, using $\hat\rho_k$ introduced in Section 4.2,
\[
\hat\rho_k = \frac{m_k(x_k) - \hat m_k(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)},
\]
and increasing the number of points in the trust region as $\Delta_k$ decreases allow the algorithm to move off of a trust region center with negative noise, as seen in Figure 4.2.

PAGE 93

Figure 4.2: Several iterations of a trust region method designed for stochastic function evaluations.

Figure 4.2, top left: Again, the realization $\tilde f(x_k + s_k)$ was much less than $f$ at any point near $x_k + s_k$. It is now the new trust region center.

Figure 4.2, top right: The minimum of the quadratic model was not accepted since $\hat\rho_k < 0$, but the trust region radius is not decreased. Though the sample set is strongly $\Lambda$-poised, there are not enough points to ensure the model is $\kappa$-f.q.w.c. $1-\alpha_k$.

Figure 4.2, bottom left: More points have been added to the sample set and the model has been reconstructed.

Figure 4.2, bottom right: The minimum of the quadratic model is accepted since $\hat\rho_k > \eta_1$, even though $\tilde f(x_k + s_k) > \tilde f(x_k)$.
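The contrast between $\rho_k$ and $\hat\rho_k$ can also be seen numerically. The following sketch is ours, not the thesis's experiment: bounded uniform noise replaces the Gaussian noise of the figures, and sample means of fresh evaluations stand in for the regression models $m_k$ and $\hat m_k$. A step is judged with a stored "lucky" negative-noise realization at $x_k$ (the deterministic ratio) versus with model values (the stochastic ratio).

```python
import random

random.seed(0)

def f(x):                          # true function (illustrative)
    return x * x

def f_noisy(x, width=0.3):         # bounded uniform noise keeps the example clean
    return f(x) + random.uniform(-width, width)

xk, sk = 1.0, -0.9                 # step from x_k toward the minimizer at 0
f_xk_lucky = f(xk) - 1.5           # stored realization with large negative noise

pred_decrease = f(xk) - f(xk + sk)     # predicted decrease (exact model assumed)

# Deterministic-style ratio: the stale lucky value makes every step look bad.
rho = (f_xk_lucky - f_noisy(xk + sk)) / pred_decrease

# Stochastic-style ratio: sample means of fresh evaluations stand in for
# the model values m_k(x_k) and m-hat_k(x_k + s_k) of Algorithm 1.
m_center = sum(f_noisy(xk) for _ in range(30)) / 30
m_trial = sum(f_noisy(xk + sk) for _ in range(30)) / 30
rho_hat = (m_center - m_trial) / pred_decrease

print(rho < 0, rho_hat > 0)        # the good step is rejected by rho, accepted by rho-hat
```

Because the noise is bounded here, $\rho_k < 0$ and $\hat\rho_k > 0$ hold for every realization, mirroring the stagnation versus recovery shown in Figures 4.1 and 4.2.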

PAGE 94

Using the model value at $x_k$ instead of $\tilde f(x_k)$ in the calculation of $\hat\rho_k$ allows the estimate of $f(x_k)$ to adjust without wastefully reevaluating $f$ at $x_k$. In this fashion, Algorithm 1 can avoid stagnating at points with negative noise.

4.5 Conclusion

In this chapter we presented an algorithm using quadratic trust region models $m_k$ to minimize a function $f$ which cannot be evaluated exactly. Even though the algorithm only has access to noise-corrupted function evaluations $\tilde f$, we proved almost sure convergence of a subsequence of iterates to a first-order stationary point of $f$ when the number of successful steps is finite. We also outlined a proof for the case when the number of successful steps is infinite. These results were accomplished not by repeatedly sampling $\tilde f$ at points of interest $x_k$, but rather by constructing models $\hat m_k$ which are increasingly accurate approximations of $f$ near $x_k$. Since it is often the case that $x_k$ is the candidate for the new trust region center, this information is immediately useful in constructing $m_{k+1}$. We then highlighted how this algorithm remedies a common problem with using traditional trust region methods on functions with stochastic noise.

PAGE 95

5. Non-intrusive Termination of Noisy Optimization

5.1 Introduction and Motivation

The optimization of real-world, computationally expensive functions invariably leads to the difficult question of when an optimization procedure should be terminated. Algorithm developers and the mathematical optimization community at large typically assume that the optimization is terminated when either a measure of criticality (gradient norm, mesh size, etc.) is satisfied or a user's computational budget (number of evaluations, wall clock time, etc.) is exhausted.

For a large class of problems, however, the user may not have a well-defined computational budget and instead demand a termination test $t$ solving
\begin{equation}
\begin{aligned}
\min_t \quad & \text{Computational expense}(t) \\
\text{s.t.} \quad & \text{Acceptable accuracy of the solution}(t),
\end{aligned} \tag{5.1}
\end{equation}
with the criticality measure of the solver employed typically chosen with the accuracy constraint in mind. Examples of such accuracy-based criticality tests are discussed in detail by Gill, Murray, and Wright [19, Section 8.2.3].

The main difficulties arising from this approach are a result of (5.1) possibly being poorly formulated. The computational expense could be unbounded because an a priori user-defined accuracy is unrealistic for the problem/solver pair. Furthermore, a user may have difficulty translating the criticality measures provided by a solver, which are generally based on assumptions of smoothness and infinite-precision calculations, into practical metrics on the solution accuracy.

In Figure 5.1 we illustrate the challenges in this area with an example from nuclear physics, similar to the minimization problems considered in [37]. Each of the function values shown is obtained from running a deterministic simulation for one minute on a 640-core cluster. Stopping the optimization sooner than 200 function evaluations would not only return a solution faster but would also free the cluster for

PAGE 96

Figure 5.1: Part of a noisy trajectory of function values for an expensive nuclear physics problem. After more significant decreases in the first 70 evaluations, progress begins to stall.

other applications and/or result in a savings in energy, an increasingly crucial factor in high-performance computing.

If we assume that the optimization partially shown in Figure 5.1 has not been terminated by a solver's criticality measures or a user's computational budget, the question is then whether termination should occur for other reasons. For example, if only the first three digits of the simulation output were computed stably, one may want to terminate the optimization sooner than if computational noise corrupted only the eighth digit of the output. Alternatively, the behavior shown could mean the solver in question has stagnated (because of noise, errors in the simulation, a limitation of the solver, etc.), and hence examining the solution and/or restarting the optimization could be a more effective use of the remaining computational budget. Wright [65] refers to this stalled progress as perseveration and notes that there is "no fully general way to define 'insufficient progress.'" Even so, it may be advantageous to use knowledge of the uncertainty or accuracy of a given function evaluation when making such a decision.

PAGE 97

In the remainder of this chapter we explore these issues and propose termination criteria that can be easily incorporated on top of a user's solver of choice. In [18], Fletcher summarizes the challenges at hand in the case of round-off errors alone:

    Some consideration has to be given to the effects of round-off near the solution, and to terminate when it is judged that these effects are preventing further progress. It is difficult to be certain what strategy is best in this respect.

Moreover, Gill, Murray, and Wright [19] stress that

    no set of termination criteria is suitable for all optimization problems and all methods.

This sentiment is shared by Powell [47], who says

    it is believed that it is impossible to choose such a convergence criterion which is effective for the most general function ... so a compromise has to be made between stopping the iterative procedure too soon and calculating f an unnecessarily large number of times.

Consequently, we will consider tests that allow for the use of estimates of the noise particular to a problem. Furthermore, our criteria are not intended as substitutes for a computational budget or a solver's built-in criticality tests, which we consider to be important safeguards. Likewise, the termination problem can be viewed as a real-time control problem depending on complete knowledge of the solver's decisions, but we resist this urge for purposes of portability and applicability.

We provide background on previous work and introduce notation in Section 5.2. The families of stopping tests we propose in Section 5.3 do not provide guarantees on the quality of the solution, although doing so may be the role of a solver's built-in criteria. Instead, the proposed tests are parameterized in order to quantify a



iterations. To limit the number of stochastic replications, the authors of [14] and [61] adjust the maximum number of allowed replications at a particular point based on the variance of the noise.

Deterministic noise — that is, noise that results from a deterministic process, such as finite-precision arithmetic, iterative methods, and adaptive procedures — is far less understood than its stochastic counterpart [42]. Not surprisingly, even less knowledge of the magnitude of noise is used for problems with deterministic objectives. When low-amplitude noise is present, Kelley [33] proposes a restart technique for Nelder-Mead but terminates when sufficiently small differences exist in the simplicial function values, independent of the magnitude of the noise. Implicit filtering [32] has numerous termination possibilities (small function value differences on a stencil, a small change in the best function value from one iteration to the next, etc.) but none that are explicitly related by the author to the magnitude of the noise. A similar implicit relationship to noise can be seen in [24], where treed Gaussian process models for optimization are terminated when a maximum improvement statistic is sufficiently small. The authors of SNOBFIT [29] suggest stopping when the best point has not changed for a number of consecutive calls to the main SNOBFIT algorithm.

Our work more closely follows that of Gill et al. [19], where Section 8.2 is devoted to properties of the computed solution. The authors there recommend terminating Nelder-Mead–like algorithms when the maximum difference between function values on the simplex is less than a demanded accuracy weighted by the best function value on the simplex.

The only other direct relationships between stopping criteria and a measure of noise that we are aware of are in [42, Section 9] and [25]. In [42], a stochastic model of the noise is used to estimate the noise level of a function value $\tilde f(x)$ by difference-table-based approximations of the standard deviation $\mathrm{Var}(\tilde f(x))^{1/2}$. Results are validated for deterministic $\tilde f$. As an example application, the authors terminate a Nelder-Mead

PAGE 100

method on an ODE-based problem when consecutive decreases are less than a factor of the noise level. The authors of [25] perturb bound-constrained problems so the incumbent iterate is the exact solution to this new problem. An algorithm can then be terminated when the size of this perturbation first decreases below the error in the problem. Natural extensions to gradient/derivative-based tests are also enabled by the recent work in [43], where near-optimal finite difference estimates are provided as a function of the noise level.

Before proceeding, we define the notation employed throughout. We let $\mathbb{R}_+$ denote the nonnegative reals and $\mathbb{N}$ denote the natural numbers. We let $\{x_1, \dots, x_m\} \subset \mathbb{R}^n$ and $\{f_1, \dots, f_m\} \subset \mathbb{R}$ be a sequence of points and corresponding function values produced by a local minimization solver, and we collect the data from the first $i$ evaluations in $\mathcal{F}_i = \{(x_1, f_1), \dots, (x_i, f_i)\}$. The best function value in the first $i$ evaluations is given by $f_i^* = \min_{1 \le j \le i}\{f_j\}$, with $x_i^*$ denoting the point corresponding to $f_i^*$. Accordingly, the sequence $\{f_i^*\}$ is nonincreasing. Unless otherwise stated, $\|\cdot\|$ denotes the standard Euclidean distance.

We let $\hat\varepsilon_r^i$ be an estimate of the relative noise at $f_i$ (i.e., the noise at $x_i$ scaled by the magnitude of $f(x_i)$). This estimate may come from experience, numerical analysis of the underlying processes in computing $f_i$, or appropriate scaling by $1/|f_i|$ of the noise-level estimates from the method proposed in [42]. In the case of stochastic functions with nonzero mean at $x_i$, $\hat\varepsilon_r^i$ is the standard deviation of $\tilde f(x_i)$ relative to the expected value $\mathrm{E}[|\tilde f(x_i)|]$.

Favorable properties of a termination test include scale and shift invariance, so that the test would terminate after the same number of evaluations for any affine transformation of the objective function. Specifically, a test is scale invariant in $f$ if it terminates optimization runs defined by $\mathcal{F}_i$ and $\lambda\mathcal{F}_i \equiv \{(x_1, \lambda f_1), \dots, (x_i, \lambda f_i)\}$ at an identical evaluation number for any $\lambda > 0$. Similarly, a test is shift invariant in $f$ if it terminates $\mathcal{F}_i$ and $\mathcal{F}_i + \lambda \equiv \{(x_1, f_1 + \lambda), \dots, (x_i, f_i + \lambda)\}$ after an identical

PAGE 101

number of evaluations regardless of $\lambda$. We use the following proposition to aid in the subsequent analysis of scale and shift invariance.

Proposition 5.1. For stochastic $\tilde f_i$ of finite, nonzero variance, the estimate of the relative noise, $\hat\varepsilon_r^i$, is scale invariant in $f$, and the absolute noise, $\hat\varepsilon_r^i\,\mathrm{E}[|\tilde f(x_i)|] = \mathrm{Var}(\tilde f(x_i))^{1/2}$, is shift invariant in $f$.

Proof: Both results follow directly from properties of $\mathrm{Var}(\cdot)$. Since $\mathrm{Var}(\tilde f(x_i)) > 0$, it follows that $\mathrm{E}[|\tilde f(x_i)|] > 0$ and hence, for $\lambda > 0$,
\[
\frac{\sqrt{\mathrm{Var}(\lambda\tilde f(x_i))}}{\mathrm{E}[|\lambda\tilde f(x_i)|]} = \frac{\sqrt{\lambda^2\,\mathrm{Var}(\tilde f(x_i))}}{\lambda\,\mathrm{E}[|\tilde f(x_i)|]} = \frac{\sqrt{\mathrm{Var}(\tilde f(x_i))}}{\mathrm{E}[|\tilde f(x_i)|]} = \hat\varepsilon_r^i.
\]
When defined by the standard deviation, the absolute noise is shift invariant in $f$ because $\sqrt{\mathrm{Var}(\tilde f(x_i))} = \sqrt{\mathrm{Var}(\tilde f(x_i) + \lambda)}$. $\Box$ In the case of deterministic noise, invariance depends on the methods used to obtain the estimates $\hat\varepsilon_r^i$ and $\hat\varepsilon_r^i|f_i|$.

5.3 Stopping Tests

In this section we define families of termination tests and provide motivation for their use. Each family can be defined through an extended-value function $\phi$ mapping to $\mathbb{R} \cup \{+\infty\}$. Given $\mathcal{F}$, the problem data from an optimization run, the associated termination test stops after $i^*(\mathcal{F})$ evaluations, where $i^*(\mathcal{F})$ is the solution to a hitting problem,
\begin{equation}
i^*(\mathcal{F}) = \min_i \big\{ i : \phi\big(\mathcal{F}_i; \mu(\mathcal{F}_i), \theta\big) \le 0 \big\}, \tag{5.2}
\end{equation}
where $\mu(\mathcal{F}_i)$ and $\theta$ denote parameters that are possibly dependent on $\mathcal{F}_i$ and independent of $\mathcal{F}_i$, respectively. Hence, a test stops an optimization run, producing the history $\mathcal{F}_{i^*}$, at the first $i \in \mathbb{N}$ where $\phi(\mathcal{F}_i; \mu(\mathcal{F}_i), \theta) \le 0$. Members of a family of tests are determined by different values of the parameter vector $(\mu(\mathcal{F}_i), \theta)$. Since $\phi$ quantifies the progress of an algorithm through the history of function values and points, each family of tests is designed to determine when continuing with the present course is likely wasteful, as measured by the parameters in $(\mu(\mathcal{F}_i), \theta)$.
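The hitting problem (5.2) is straightforward to express in code. The sketch below is ours (the function names and the list-of-pairs history format are assumptions, not the thesis's software): `phi` is any callable implementing a termination test on a prefix of the history, and `first_stop` scans prefixes for the first nonpositive value. A budget-style test is shown as one simple instance.

```python
import math

def first_stop(phi, F):
    """Hitting problem (5.2): the first i with phi(F_i) <= 0, or None."""
    for i in range(1, len(F) + 1):
        if phi(F[:i]) <= 0:
            return i
    return None

# A budget-style test in this form: fire once 5 evaluations have occurred.
budget_test = lambda Fi: 0 if len(Fi) >= 5 else math.inf

history = [((float(j),), 1.0 / (j + 1)) for j in range(10)]  # (x_i, f_i) pairs
print(first_stop(budget_test, history))  # -> 5
```

Any of the families defined in the remainder of this section can be dropped in as `phi`; in practice the scan would of course run online as evaluations arrive, rather than over stored prefixes.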

PAGE 102

It is often useful to consider how a test will change if the underlying function undergoes an affine change. Following Section 5.2, we will say that a test is scale invariant if
\[
i^*(\mathcal{F}) = \min_i \big\{ i : \phi\big(\lambda\mathcal{F}_i; \mu(\lambda\mathcal{F}_i), \theta\big) \le 0 \big\} \quad \forall \lambda > 0,
\]
where $\lambda\mathcal{F}_i \equiv \{(x_1, \lambda f_1), \dots, (x_i, \lambda f_i)\}$, and shift invariant if
\[
i^*(\mathcal{F}) = \min_i \big\{ i : \phi\big(\mathcal{F}_i + \lambda; \mu(\mathcal{F}_i + \lambda), \theta\big) \le 0 \big\} \quad \forall \lambda,
\]
where $\mathcal{F}_i + \lambda \equiv \{(x_1, f_1 + \lambda), \dots, (x_i, f_i + \lambda)\}$. Similar affine changes to $\{x_1, \dots, x_i\}$ could be considered but are not central to the present discussion, and hence all notions of invariance here are relative to the function $f$. We drop the subscript from $i^*(\mathcal{F})$ if $\mathcal{F}$ is implied.

Similarly, it is useful to consider whether $\phi$ is monotone in some of its parameters. Monotonicity of $\phi$ is desirable because it results in the same form of monotonicity for the corresponding number of evaluations $i^*$. For example, if $\phi$ is monotonically increasing in a scalar parameter $\theta$, then increasing $\theta$ results in a more conservative test because the solution to (5.2) is at least as large. As a consequence, if $\phi$ is monotonically increasing in $\theta$ and the test $\phi(\cdot; \cdot, \theta_1) \le 0$ is never satisfied on a set of problems, it is not necessary to consider $\theta > \theta_1$ values on that set of problems because it will also never be satisfied.

We now define several families of termination tests and discuss their properties and underlying motivation. All of these tests assume no knowledge of the inner workings of the algorithm they are terminating, but such knowledge might lead to appropriate modifications. For example, if the method uses a simplex, rather than stopping when the last $\beta$ function evaluations are within a factor of the noise, one could stop when the last $\beta$ simplex vertices are within a factor of the noise (essentially a modification of the proposed rule in [19]).

5.3.1 $f_i^*$ Decrease Test

PAGE 103

\begin{equation}
\phi_1\big(\mathcal{F}_i; \mu(\mathcal{F}_i), \theta, \beta\big) =
\begin{cases}
f^*_{i-\beta+1} - f^*_i - \theta\,|f^*_i|\,\mu(\mathcal{F}_i) & \text{if } i \ge \beta, \\
+\infty & \text{else},
\end{cases}
\qquad \text{with } \mu(\mathcal{F}_i), \theta \in \mathbb{R}_+,\ \beta \in \mathbb{N}. \tag{5.3}
\end{equation}

This family of tests is designed to stop when the average relative change in $f^*$ over the last $\beta$ evaluations is less than $\theta\mu(\mathcal{F}_i)$. The integer $\beta$ can be thought of as a backward difference parameter for estimating the change in the best function value with respect to the number of evaluations.

We note that $\phi_1$ is monotonically decreasing in $\theta$ since, for fixed $\beta$, $\mathcal{F}_i$, and $\mu(\mathcal{F}_i)$,
\[
\theta_1 \le \theta_2 \ \Longrightarrow\ \phi_1\big(\mathcal{F}_i; \mu(\mathcal{F}_i), \theta_1, \beta\big) \ge \phi_1\big(\mathcal{F}_i; \mu(\mathcal{F}_i), \theta_2, \beta\big).
\]
$\phi_1$ is also monotonically decreasing in $\mu(\mathcal{F}_i)$ but is not monotone in $\beta$. Members of this family are scale invariant provided that $\mu(\mathcal{F}_i)$ is, and shift invariant provided that $|f^*_i|\mu(\mathcal{F}_i)$ is (proven in Theorem 5.2 in Section 5.3.6).

We consider two special cases. When $\mu(\mathcal{F}_i) = 1$ (or any constant), we obtain tests that are scale invariant but not shift invariant and stop if the average relative change in the best function value drops below $\theta$. If $\mu(\mathcal{F}_i) = \hat\varepsilon_r^i$, the tests are scale and shift invariant by Proposition 5.1 and stop an algorithm if the average relative change in $f^*$ becomes less than a factor $\theta$ times the relative noise.

5.3.2 Max-Difference f Test

With $\mu(\mathcal{F}_i), \theta \in \mathbb{R}_+$ and $\beta \in \mathbb{N}$, define
\begin{equation}
\phi_2\big(\mathcal{F}_i; \mu(\mathcal{F}_i), \theta, \beta\big) =
\begin{cases}
\max_{i-\beta+1 \le j \le i} |f_j - f_i| - \theta\,|f^*_i|\,\mu(\mathcal{F}_i) & \text{if } i \ge \beta, \\
+\infty & \text{else}.
\end{cases} \tag{5.4}
\end{equation}

This family of tests stops when $\beta$ consecutive function values are within $\theta|f^*_i|\mu(\mathcal{F}_i)$ of $f_i$.
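For concreteness, both families above can be sketched in code. This is an illustrative implementation of ours (the list-of-$(x, f)$-pairs history format and helper names are assumptions, not the thesis's notation); `mu` is assumed to be a precomputed scalar noise estimate such as $\hat\varepsilon_r^i$, and a solver would stop at the first history length for which the returned value is nonpositive.

```python
def best_values(F):
    """Running best values f_1^*, ..., f_i^* from a history of (x, f) pairs."""
    out, best = [], float("inf")
    for _, f in F:
        best = min(best, f)
        out.append(best)
    return out

def phi1(F, mu, theta, beta):
    """f*-decrease test (5.3): decrease of f^* over the last beta evaluations."""
    if len(F) < beta:
        return float("inf")
    fstar = best_values(F)
    return fstar[-beta] - fstar[-1] - theta * abs(fstar[-1]) * mu

def phi2(F, mu, theta, beta):
    """Max-difference f test (5.4): last beta raw values all close to f_i."""
    if len(F) < beta:
        return float("inf")
    f_i = F[-1][1]
    spread = max(abs(f_j - f_i) for _, f_j in F[-beta:])
    return spread - theta * abs(best_values(F)[-1]) * mu
```

On a history that plateaus — say best values settling near 4.0 with fluctuations of about $10^{-3}$ — both tests fire once the recent changes fall below $\theta|f_i^*|\mu$, while they stay positive during the early, rapidly decreasing evaluations.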


One can show that $\phi_2$ is monotonically decreasing in both $\epsilon$ and $\mu_{F_i}$ and monotonically increasing in $\gamma$ since

$\gamma_1 \le \gamma_2 \implies \max_{i-\gamma_1+1 \le j \le i} |f_j - f_i| \le \max_{i-\gamma_2+1 \le j \le i} |f_j - f_i|.$

We also note that if $\phi_2$ is modified so that each $f_j$ is replaced by the corresponding best-so-far value, we obtain a test equivalent to a member of the $\phi_1$ family. The invariance properties of this test are identical to those for $\phi_1$ and are formally proven in Theorem 5.2 in Section 5.3.6.

We examine two special cases. If $\mu_{F_i} = 1$ (or any constant), $\phi_2$ is scale invariant but not shift invariant; this family, $\phi_2(F_i; 1; \gamma; \epsilon)$, terminates when the last $\gamma$ function values differ by less than a factor $\epsilon$ relative to the best function value so far. If $\mu_{F_i} = \hat{\varepsilon}^i_r$, the resulting tests are scale and shift invariant by Proposition 5.1 and terminate when the absolute change in the last $\gamma$ function values is within a factor $\epsilon$ of the noise of $f$.

5.3.3 Max-Distance $x$ Test

$\phi_3(F_i;\, \gamma;\, \delta) = \begin{cases} \max_{i-\gamma+1 \le j,k \le i} \|x_j - x_k\| - \delta & \text{if } i \ge \gamma, \\ 1 & \text{else,} \end{cases} \quad \text{with } \delta \in \mathbb{R}_+,\ \gamma \in \mathbb{N}.$  (5.5)

This family stops when $\gamma$ consecutive $x$-values are within a distance $\delta$ of each other and is analyzed with $\phi_4$ below.

5.3.4 Max-Distance $x_i$ Test

$\phi_4(F_i;\, \gamma;\, \delta) = \begin{cases} \max_{i-\gamma+1 \le j \le i} \|x_j - x_i\| - \delta & \text{if } i \ge \gamma, \\ 1 & \text{else,} \end{cases} \quad \text{with } \delta \in \mathbb{R}_+,\ \gamma \in \mathbb{N}.$  (5.6)

This family stops when $\gamma$ consecutive $x$-values are within a distance $\delta$ of the most recent point $x_i$. In general, members of both of the families defined by $\phi_3$ and $\phi_4$ are not scale (shift) invariant unless the procedure generating $\{x_i\}_i$ is scale (shift) invariant in $f$. Both $\phi_3$ and $\phi_4$ are monotonically decreasing in $\delta$ and monotonically increasing in $\gamma$. We also examined a test using $\max_{i-\gamma+1 \le j \le i} \|x_j - \bar{x}_i\|$ (for an averaged reference point $\bar{x}_i$) but found the performance to be similar to that of $\phi_3$.

5.3.5 Max-Budget Test

$\phi_5(F_i;\, \kappa) = \begin{cases} 0 & \text{if } i \ge \kappa, \\ 1 & \text{else,} \end{cases} \quad \text{with } \kappa \in \mathbb{N}.$  (5.7)

As a point of reference, we include the family corresponding to stopping after a budget of $\kappa$ evaluations.

5.3.6 Tests Based on Estimates of the Noise

The families of tests introduced above have been broadly parameterized to capture a wide range of behaviors. The following theorem summarizes the relationship between some of these parameters and invariance properties of the tests.

Theorem 5.2
(a) If $\mu_{F_i}$ is scale invariant, then all members of the families $\phi_1$ and $\phi_2$ are scale invariant.
(b) If $|f_i| \mu_{F_i}$ is shift invariant, then all members of the families $\phi_1$ and $\phi_2$ are shift invariant.
(c) If the procedure generating the data $\{x_i\}$ is scale (shift) invariant, then all members of the families $\phi_3$ and $\phi_4$ are scale (shift) invariant.
(d) Members of the $\phi_5$ family are scale and shift invariant.

Proof: (a) Since $\epsilon$ and $\gamma$ are independent of $F$, the definitions in (5.3) and (5.4) show that, for any $\alpha > 0$,

$\phi_1(\alpha F_i;\, \mu_{\alpha F_i};\, \gamma;\, \epsilon) = \alpha(f_{i-\gamma+1} - f_i) - \gamma \epsilon\, \alpha |f_i|\, \mu_{\alpha F_i},$
$\phi_2(\alpha F_i;\, \mu_{\alpha F_i};\, \gamma;\, \epsilon) = \max_{i-\gamma+1 \le j \le i} \alpha |f_j - f_i| - \epsilon\, \alpha |f_i|\, \mu_{\alpha F_i},$

provided that $i \ge \gamma$. Scale invariance of $\mu_{F_i}$ implies that $\mu_{\alpha F_i} = \mu_{F_i}$ and hence implies that $\phi_j(\alpha F_i; \mu_{\alpha F_i}; \gamma; \epsilon) = \alpha\, \phi_j(F_i; \mu_{F_i}; \gamma; \epsilon)$ for $j = 1, 2$. Since $\alpha$ is positive, the values of $i$ for which $\phi_j$ and $\alpha \phi_j$ change sign are identical, showing that $i_{\alpha F} = i_F$.

(b) Similarly, for any $\beta$ and $i \ge \gamma$, we have that

$\phi_1(F_i + \beta;\, \mu_{F_i+\beta};\, \gamma;\, \epsilon) = (f_{i-\gamma+1} - f_i) - \gamma \epsilon\, |f_i + \beta|\, \mu_{F_i+\beta},$
$\phi_2(F_i + \beta;\, \mu_{F_i+\beta};\, \gamma;\, \epsilon) = \max_{i-\gamma+1 \le j \le i} |f_j - f_i| - \epsilon\, |f_i + \beta|\, \mu_{F_i+\beta},$

and hence if $|f_i + \beta|\, \mu_{F_i+\beta} = |f_i|\, \mu_{F_i}$, then $\phi_j(F_i + \beta; \mu_{F_i+\beta}; \gamma; \epsilon) = \phi_j(F_i; \mu_{F_i}; \gamma; \epsilon)$ for $j = 1, 2$. As a result, the functions in the hitting problem (5.2) remain unchanged and $i_{F+\beta} = i_F$.

(c) Both $\phi_3$ and $\phi_4$ depend on $F$ only through the locations of the evaluated points, $x_i$. Hence, if the procedure for generating $x_i$ produces the same points for $\alpha F$ (resp. $F + \beta$) as it would for $F$, the tests are scale (resp. shift) invariant.

(d) The function $\phi_5$ is independent of $F$, and hence the hitting problem (5.2) is unaffected by changes in $F$.

As a consequence of Theorem 5.2 and Proposition 5.1, using $\mu_{F_i} = \hat{\varepsilon}^i_r$ as an estimate of the noise in $\phi_1$ and $\phi_2$ results in tests that are both scale and shift invariant. Furthermore, we see that the first term in the definition of $\phi_1$ and $\phi_2$ has a strong association with the magnitude of the noise. This feature is illustrated in Figure 5.2, where each line represents one instance of a Nelder-Mead method minimizing a 10-dimensional convex quadratic for increasing levels of noise. Figure 5.2 (top) shows the first term of $\phi_1$, $f_{i-\gamma+1} - f_i$, as the algorithm progresses. Here we see that this quantity generally flattens out at increments separated by the same order of magnitude as the seven noise levels. This behavior is even more evident in Figure 5.2 (bottom), where the first term of $\phi_2$, $\max_{i-\gamma+1 \le j \le i} |f_j - f_i|$, is considered.
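The point-based and budget families (5.5)-(5.7) admit an equally direct sketch; as above, the naming and conventions here are our own, with `delta` the distance tolerance:

```python
import itertools
import math


def phi3(x_hist, gamma, delta):
    # phi_3 from (5.5): <= 0 once the last gamma points are pairwise
    # within a distance delta of each other.
    if len(x_hist) < gamma:
        return 1.0
    pts = x_hist[-gamma:]
    return max(math.dist(a, b) for a, b in itertools.combinations(pts, 2)) - delta


def phi4(x_hist, gamma, delta):
    # phi_4 from (5.6): <= 0 once the last gamma points are all within
    # delta of the most recent point x_i.
    if len(x_hist) < gamma:
        return 1.0
    return max(math.dist(x, x_hist[-1]) for x in x_hist[-gamma:]) - delta


def phi5(n_evals, budget):
    # phi_5 from (5.7): the reference max-budget test.
    return 0.0 if n_evals >= budget else 1.0
```

As in the text, these tests ignore function values entirely and so carry no noise-estimate factor.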


Figure 5.2: First terms in $\phi_1$ (top, with $\gamma = 100$) and $\phi_2$ (bottom, with $\gamma = 10$) on a $\log_{10}$ scale when minimizing a 10-dimensional convex quadratic with stochastic relative noise of different magnitudes. The asymptotes of the quantities shown tend to be separated by the differences in magnitudes of the noise.

Consequently, in the numerical tests in Section 5.4, we restrict our attention to tests based on $\phi_1$ and $\phi_2$ for which $\mu_{F_i} = \hat{\varepsilon}^i_r$. We note that a larger $\gamma$ is required in Figure 5.2 (top) to prevent the first term in $\phi_1$ from prematurely taking a zero value; dependence on parameters like $\gamma$ is discussed further in Section 5.4. We examined plots similar to those in Figure 5.2 for the first terms of $\phi_3$ and $\phi_4$ but found no such relationship with the noise level. As a result, we have chosen not to include constants of the form $\mu_{F_i}$ in the definitions of $\phi_3$ and $\phi_4$.

5.3.7 Relationship to Loss Functions


Ideally, an algorithm should stop when the cost of performing additional function evaluations outweighs additional improvements in the function value. When such a trade-off can be quantified, this problem becomes one of optimal stopping [54]. Results in the literature typically focus on cases when the distribution of the stochastic improvement is known. We briefly illustrate a connection between our tests and a simple loss function employed in optimal stopping.

We focus on the case when the cost of an additional evaluation is constant. This can be viewed as treating the computational expense per function evaluation as constant, but the cost and the tests proposed here could be suitably modified as an algorithm enters a subdomain where the cost of an evaluation changes. Given a sequence $\{f_j\}$, the loss function

$L(i, c) = \min_{1 \le j \le i} \{f_j\} + c\, i$  (5.8)

provides a measure of the success of stopping after $i$ evaluations when the cost per evaluation relative to $f$ is $c$. This loss function appears in the optimal stopping literature as the house-selling problem [6], where the $\{f_j\}$ are assumed to be independent and identically distributed random variables.

Figure 5.3 shows the minimizer of $L(\cdot, c)$ for a variety of $c$ values on a sequence $\{f_j\}_{j=1}^{3000}$ output by a direct search solver on a nonlinear function with deterministic (left) and stochastic (right) noise. We compare this minimizer with the number of evaluations $i$ defined by (5.2) for the family $\phi_1$ when $c$ is used as a linear multiplier for the parameter $\epsilon$. Figure 5.3 shows a strong connection between the behavior of $\mathrm{argmin}_i\, L(i, c)$ and the termination test defined by $\phi_1$ using an estimate of the noise and an appropriate choice of the parameters $\gamma, \epsilon$. This illustrates how varying the parameters in the proposed families can be closely related to the cost of performing an evaluation.

5.4 Numerical Experiments


Figure 5.3: Number of evaluations $i$ for a termination test based on (5.3) with fixed $\mu_{F_i}$ and $\gamma$, but using an $\epsilon$ parameterized by $c$. The plots show remarkably similar behavior to the number of evaluations that minimize $L(\cdot, c)$ in (5.8).

We now demonstrate the merits of the proposed tests and explore the effect of changing the associated parameter values by considering outputs generated by a set of derivative-free optimization solvers on a collection of noisy test problems. We present illustrative results here; a comprehensive set of plots may be found at http://www.mcs.anl.gov/~wild/tnoise/ .

We consider the collection of unconstrained least-squares problems used in [41], with each function taking the form

$f(x) = (1 + \varepsilon_f\, g(x)) \left( 1 + \sum_{i=1}^m F^s_i(x)^2 \right),$  (5.9)

where each $F^s_i$ is a smooth, deterministic function and $\varepsilon_f \ll 1$ is a positive scalar used to control the amplitude of the noise being added to $f^s(x) = \sum_{i=1}^m F^s_i(x)^2$. We begin our study by considering stochastic noise, so that $g(x)$ represents independent and identically distributed (iid) random variables with variance $\mathrm{Var}(g(x)) = 1$. As a result, the relative noise of these test functions is simply $\varepsilon_f$ and, hence, independent of $x$. The constant 1 was added in (5.9) so that the relative noise is consistently defined even if $F^s_i(x) = 0$ for all $i$; such shifts are commonly performed in accuracy measures (see, e.g., [16]).

To examine the tests on a diverse set of local methods, we consider sequences $\{f_j\}$ produced by different derivative-free optimization solvers. Since the relative merits of these solvers are not the focus of this study, we do not explicitly list them, but we note that they come from a variety of classes, including model-based methods, implementations of Nelder-Mead, pattern search methods, and methods that cross these classes. After examining the termination tests on the entire group of solvers, we analyze the success of the recommended tests on each solver individually as a means of validation.

To more accurately study the effect of our tests, we have made the built-in termination criteria of these solvers as ambitious as possible in an attempt to remove their influence. Hence, we ran each solver until either it crashed (e.g., for a numerical reason, such as the simplex sides dropping sufficiently below machine precision) or a maximum budget of 5,000 function evaluations was achieved. This budget of evaluations is significantly larger than the one considered in [41], and we consider it to be more than sufficient for the problems in this set, which range in dimension from $n = 2$ to $n = 12$. We denote the maximum number of function evaluations (either 5,000, or fewer if the solver crashed) by $i_{\max}$.

We then have a set of 318 nonnegative sequences $\{f_j\}_{j=1}^{i_{\max}}$, which constitute our set of problems $\mathcal{P}$. We use these problems to examine the performance of a set of tests $\mathcal{T}$, defined as members of the families proposed in Section 5.3. For a test $t \in \mathcal{T}$ and problem $p \in \mathcal{P}$, we denote by $i_{p,t}$ the number of function values after which test $t$ would stop on problem $p$. If the test is not satisfied before the maximum number of evaluations $i_{\max}$ of problem $p$, we let $i_{p,t} = i_{\max}$ to mirror what would be done in practice.

5.4.1 Accuracy Profiles for the $\phi_1$ Family


Termination criteria that are too easily satisfied have limited practicality since they could stop with a function value far from the minimum. We will measure this ability by considering the relative difference between $f_{i_{p,t}}$ and $f_{i_{\max}}$,

$e_{p,t} = \begin{cases} 1 & \text{if } i_{p,t} = i_{\max}, \\[4pt] \dfrac{f_{i_{p,t}} - f_{i_{\max}}}{f_{i_{p,t}}} & \text{if } i_{p,t} < i_{\max}, \end{cases}$  (5.10)

and the corresponding accuracy profile

$\omega_t(a) = \frac{1}{|\mathcal{P}|} \left| \{ p \in \mathcal{P} : e_{p,t} \le 10^{-a} \} \right|,$

the fraction of problems on which test $t$ terminates with a relative error of at most $10^{-a}$.
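A minimal sketch of (5.10) and of the accuracy profile $\omega_t$ follows; the function names are ours, and we treat the input sequence as best-so-far values:

```python
def accuracy_error(best_vals, i_stop, i_max):
    # e_{p,t} from (5.10): relative difference between the value held at
    # termination and the value after the full i_max evaluations.
    if i_stop == i_max:
        return 1.0
    return (best_vals[i_stop - 1] - best_vals[i_max - 1]) / best_vals[i_stop - 1]


def accuracy_profile(errors, a):
    # omega_t(a): fraction of problems with e_{p,t} <= 10^{-a}.
    return sum(e <= 10.0 ** (-a) for e in errors) / len(errors)
```

Sweeping `a` over a grid of accuracy levels produces one curve per test, as in Figure 5.4.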

Figure 5.4: Accuracy profiles for members of the $\phi_1$ family on problems (5.9) with two different magnitudes of known stochastic relative noise $\varepsilon_f$. In the top plots, $\gamma$ is held fixed and the shown members have different $\epsilon$ values. In the bottom plots, $\epsilon$ is held fixed and the shown members have different $\gamma$ values.

The left asymptote of $\omega_t$ shows the fraction of problems on which the test stopped after it reached the minimum value $f_{i_{\max}}$, while the right asymptote is the fraction of problems for which the test was satisfied with $i_{p,t} < i_{\max}$.

The top plots of Figure 5.4 show that, as the tests become less conservative with growing $\epsilon$, a number of problems are terminated well before the relative error is on the order of the noise. On the other hand, not much is gained by setting $\epsilon$ less than $10^{-1}$ or $10^{-2}$.

The bottom two plots in Figure 5.4 show $\phi_1$ family members for fixed $\epsilon = 10^{-2}$ and various values of $\gamma$, which can be thought of as a backward difference parameter. Little improvement is seen for $\gamma > 20n$, but a marked decrease in accuracy occurs when $\gamma < 10n$. Many problems are stopping with a large relative error, in part because $f_j$ can remain unchanged for many consecutive $j$. For example, the lower left plot shows that, for noise affecting the third digit, on all 318 problems $f_j$ remained unchanged for $3n$ consecutive evaluations before $j = i_{\max}$ was reached.

5.4.2 Performance Profiles for the $\phi_1$ Family

While accuracy profiles can quantify when a test stops too soon, they may not reveal which tests require excessive function evaluations to achieve high accuracies. For example, the maximum-budget test $\phi_5$ trivially achieves ideal accuracy but can make many more evaluations than are required to get sufficient accuracy.

We use performance profiles [17] to compare different stopping rules in terms of both accuracy and the number of function evaluations required. A performance profile requires a convergence test as well as a performance measure $r_{p,t}$ for each problem $p \in \mathcal{P}$ and test $t \in \mathcal{T}$. We use the number of evaluations $i_{p,t}$ as our performance measure and a convergence test requiring that the solution obtained is within the absolute noise level of the best function value,

$f_{i_{p,t}} - f_{i_{\max}} \le |f_{i_{p,t}}|\, \hat{\varepsilon}^i_r.$  (5.11)

The convergence test has the effect of setting the performance measure $i_{p,t} = \infty$ whenever the original $i_{p,t}$ does not satisfy (5.11). The performance ratio

$r_{p,t} = \begin{cases} \dfrac{i_{p,t}}{\min \left\{ i_{p,\tilde{t}} : \tilde{t} \in \mathcal{T},\ (5.11) \text{ satisfied for } (p, \tilde{t}) \right\}} & \text{if (5.11) is satisfied for } (p,t), \\[6pt] \infty & \text{else} \end{cases}$


Figure 5.5: Performance profiles for the most accurate $\phi_1$ tests on problems (5.9) with two different magnitudes of known stochastic relative noise $\varepsilon_f$. Note that the $\tau$-axis has been truncated for each plot; $\phi_5$ eventually terminates all of the problems and thus has a profile that will reach the value 1; all other tests change by less than .01.

measures the relative performance on problem $p$ of test $t$ when compared with the other tests in $\mathcal{T}$. The performance profile

$\rho_t(\tau) = \frac{ \left| \{ p \in \mathcal{P} : r_{p,t} \le \tau \} \right| }{ |\mathcal{P}| }$

then represents the fraction of problems where test $t$ satisfied the accuracy requirement (5.11) with a number of evaluations within a factor $\tau$ of the best-performing, sufficiently accurate test. Larger values of $\rho_t(\tau)$ are hence better, with $\rho_t(1)$ being the fraction of problems where $t$ has successfully terminated first among all tests in $\mathcal{T}$ and $\lim_{\tau \to \infty} \rho_t(\tau)$ being the fraction of problems for which $t$ satisfied (5.11).

Figure 5.5 shows the performance profiles for the most accurate $\phi_1$ family members for two levels of noise $\varepsilon_f$. We include $\phi_5(\cdot;\, i_{\max})$ in $\mathcal{T}$ as a point of reference to indicate an upper bound on the fraction of problems on which the other tests may not have terminated with $i_{p,t} < i_{\max}$ but still

satisfy (5.11). These performance profiles illustrate that some members of the $\phi_1$ family of tests require only a fraction of the full $i_{\max}$ evaluations. This is the case especially for larger magnitudes of noise, where less accurate solutions are demanded. Likewise, as the noise decreases, (5.11) demands more accurate solutions, and it becomes necessary to perform $i_{\max}$ evaluations on a larger share of the problems.

These performance profiles also demonstrate how more liberal stopping rules can be more successful than the accuracy profiles reveal. For example, in Figure 5.4, $\phi_1(\cdot;\, \cdot;\, 20n;\, 0)$ and $\phi_1(\cdot;\, \cdot;\, 20n;\, 10^{-2})$ appear nearly identical in terms of their accuracy, but in Figure 5.5 we see a marked difference in the performance measures. The right asymptotes of their performance profiles are nearly identical, a reflection of their accuracy profiles at the accuracy level of the noise, but the rest of the profiles show that $\phi_1(\cdot;\, \cdot;\, 20n;\, 10^{-2})$ uses considerably fewer function evaluations to satisfy this accuracy requirement. Because of this high accuracy and performance, we consider $\phi_1(\cdot;\, \cdot;\, 20n;\, 10^{-2})$ to be the best of the stopping rules considered in its family.
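The bookkeeping behind these comparisons can be sketched as follows; the function below is our illustration (not the thesis code), with `evals[t][p]` holding $i_{p,t}$ and `converged[t][p]` recording whether (5.11) held for the pair $(p, t)$:

```python
def performance_profile(evals, converged, tau):
    # rho_t(tau): for each test t, the fraction of problems solved (per the
    # convergence test (5.11)) within a factor tau of the fastest
    # sufficiently accurate test on that problem.
    tests = list(evals)
    nprob = len(next(iter(evals.values())))
    rho = {}
    for t in tests:
        count = 0
        for p in range(nprob):
            if not converged[t][p]:
                continue  # r_{p,t} = infinity: never counted
            best = min(evals[s][p] for s in tests if converged[s][p])
            if evals[t][p] <= tau * best:
                count += 1
        rho[t] = count / nprob
    return rho
```

Evaluating `performance_profile` over a grid of `tau` values yields the curves plotted in Figure 5.5.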


Figure 5.6: Accuracy (top) and performance (bottom) profiles for the $\phi_2$ family on problems (5.9) with two different magnitudes of stochastic relative noise $\varepsilon_f$ as $\gamma$ and $\epsilon$ are varied.

We again use performance profiles to measure whether tests are overly conservative. As indicated by the larger range for $\tau$ in the bottom two plots of Figure 5.6, the $\phi_2$ family of tests is more difficult to satisfy overall, and the number of function evaluations required compares slightly less favorably with $i_{\max}$ than for the $\phi_1$ family. We also see that $\phi_2(\cdot;\, \cdot;\, 3n;\, 10)$ tends to be the most liberal test, in part because it requires fewer consecutive evaluations than the other members shown (as indicated by the value of $\gamma$), but that $\phi_2(\cdot;\, \cdot;\, 10n;\, 10)$ requires just a small increase in the number of function evaluations while solving a greater fraction of problems overall. Based on our computational experience, we consider $\phi_2(\cdot;\, \cdot;\, 10n;\, 10)$ to be the best test of those


Figure 5.7: Accuracy (top) and performance (bottom) profiles for the best tests on problems (5.9) with two different magnitudes of stochastic relative noise $\varepsilon_f$. The horizontal axes on the performance profiles are truncated for clarity; $\phi_5$ eventually achieves a value of 1; all other tests change by less than .03.

considered in this family for these problems.

5.4.4 Across-family Comparisons

We performed similar comparisons for the members of the $\phi_3$ and $\phi_4$ families, but the analysis is identical to what has been presented above. For the benchmark problems $\mathcal{P}$, we found that the tests $\phi_3(\cdot;\, n;\, 10^{-7})$ and $\phi_4(\cdot;\, 20n;\, 10^{-7})$ performed best among those considered in their respective families. See http://mcs.anl.gov/~wild/tnoise for a complete study of the tests in these families.

Having identified the best members of each family of tests, we compare them head-to-head in Figure 5.7. The top two plots of Figure 5.7 demonstrate that when


the four tests considered stop with fewer than $i_{\max}$ evaluations, they all tend to have obtained a solution within the level of the noise, though $\phi_1$ and $\phi_3$ are slightly worse due to a 1% jump past the accuracy level corresponding to the noise. The $x$-value-based tests $\phi_3$ and $\phi_4$ have nearly identical performance, are highly accurate, and are satisfied for a large fraction of the problems. For the function-value-based tests, $\phi_1$ applies to more of the problems we examined than $\phi_2$.

On the other hand, the lower two plots of Figure 5.7 show that the test based on $\phi_2$ generally requires fewer evaluations to be satisfied on a significant number of problems with larger noise. The tests based on $\phi_3$ and $\phi_4$ are more favorable when the noise level is low. Since the performance and accuracy of the $\phi_3$ and $\phi_4$ tests are nearly identical, we remove $\phi_4$ from further consideration.

5.4.5 Deterministic Noise

We now consider how these tests perform in the presence of deterministic noise by using functions of the form (5.9) with a deterministic $g$. To model deterministic noise, we use the same $g$ combining high-frequency and lower-frequency nonsmooth oscillations as used in [41], with $g : \mathbb{R}^n \to [-1, 1]$ defined by the cubic Chebyshev polynomial $g(x) = \phi(x)\,(4 \phi(x)^2 - 3)$, where

$\phi(x) = 0.9 \sin(100 \|x\|_1) \cos(100 \|x\|_\infty) + 0.1 \cos(\|x\|_2).$

Using the technique in [42], we consistently estimated the relative noise in the 318 resulting problems to be of the order $0.6\, \varepsilon_f$, provided that the sampling distance is appropriately chosen.

To study the various families of termination tests in the deterministic case, an analysis identical to the stochastic case was performed. While there were slight differences in the performance of tests between these cases, for the most part the conclusions were similar. For ease of presentation, we proceed by discussing the best stochastic tests in the deterministic case.
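Before continuing, the deterministic noise just described can be sketched in code. This is a sketch under stated assumptions: the exact norms and the factor 100 follow our reading of the construction in [41] and should not be taken as definitive.

```python
import math


def det_noise(x):
    # Deterministic noise g(x) = T_3(phi(x)), where T_3(t) = t*(4t^2 - 3) is
    # the cubic Chebyshev polynomial and phi mixes high-frequency nonsmooth
    # oscillations (norms/frequencies here are our assumption, after [41]).
    one = sum(abs(v) for v in x)            # ||x||_1
    inf = max(abs(v) for v in x)            # ||x||_inf
    two = math.sqrt(sum(v * v for v in x))  # ||x||_2
    phi = 0.9 * math.sin(100.0 * one) * math.cos(100.0 * inf) \
        + 0.1 * math.cos(two)
    return phi * (4.0 * phi * phi - 3.0)
```

Since $|\phi(x)| \le 1$, the Chebyshev identity $T_3(t) = \cos(3 \arccos t)$ keeps the noise bounded in $[-1, 1]$, and the same input always yields the same value, i.e., the noise is deterministic.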


Figure5.8: Accuracytopandperformancebottomprolesforthebesttestson problems.9withtwodierentmagnitudesofdeterministicnoise.Thehorizontal axesontheperformanceprolesaretruncatedforclarity; 5 eventuallyachievesa valueof1;allothertestschangebylessthan.03. conservativetestsareneeded.Foracompletestudyofthedeterministicnoisesee http://www.mcs.anl.gov/ ~ wild/tnoise/ . TheaccuracyprolesinFigure5.8showamilddecreaseinaccuracyforthebest testscomparedwithstochasticnoiseinFigure5.7.Asaresult,weseethatonjust over10%%oftheproblems,thetestbasedon 1 2 nowterminateswhilenot satisfyingtheconvergencetest.11when =10 )]TJ/F17 7.9701 Tf 6.586 0 Td [(3 .Inpractice,onewouldalso needanestimateoftherelativenoise^ " i r ,butourresultsof 1 and 2 varyingthe linearmultiplierofthenoise, ,showthatthetestremainrelativelystableif^ " i r is estimatedwithinanorderofmagnitude.Also,adjustingtheseteststobeslightly 108


Figure 5.9: Performance profiles for more conservative tests on problems (5.9) with two different magnitudes of deterministic noise. The horizontal axes on the performance profiles are truncated for clarity; $\phi_5$ eventually achieves a value of 1; all other tests change by less than .03.

more conservative improves their efficacy. Results for this change are presented in Figure 5.9.

5.4.6 Validation for Individual Solvers

The results above show favorable accuracy and performance for our recommended tests for a collection of solvers on a suite of problems. For these termination tests to be of practical use for each individual solver in the collection, one must ensure that the tests do not prematurely terminate one solver more often than another solver. We present accuracy profiles in Figure 5.10 broken down by the six solvers examined and note that no single solver accounts for a disproportionate number of incorrect terminations. It is noteworthy that neither recommended test is particularly relevant to solver D, as each is rarely satisfied, and would therefore have no effect (positive or negative) on this solver on these problems. We remark that the behavior seen is more an indication of solver D's performance on these noisy problems than of the tests' performance. This is verified by the $\phi_3$ plot in Figure 5.10, showing that solver D never has $n$ consecutive points within a Euclidean distance of $10^{-7}$ of each other.

5.5 Discussion


Figure 5.10: Accuracy profiles for the individual algorithms on the recommended tests.

In this chapter we considered parameterized families of termination tests that require solely a history of evaluation points and function values from an optimization solver. Our analysis and experiments show how values for these parameters can be changed to reflect a user's view of the expense of an additional function evaluation and the accuracy demanded, the two characteristics that form the basis for (5.1). When used in conjunction with performance profiles, the accuracy profiles introduced here are valuable for guiding termination decisions based on this trade-off between accuracy and function evaluations.

In our study of stochastic noise we encountered a number of problems where a solver produced no change in $f$ for 300+ evaluations but then found a change in the


Table 5.1: Recommendations for termination tests for noisy optimization.

Test | Interpretation of Stopping Rule
$\phi_1$, $\gamma = 20n$, $\epsilon = 0.01$ | Stop when the average relative decrease in the best function value over the last $20n$ function evaluations is less than one-hundredth of the relative noise level.
$\phi_2$, $\gamma = 10n$, $\epsilon = 10$ | Stop when the last $10n$ function evaluations are within 10 times the absolute noise level of the best function value.
$\phi_3$, $\gamma = n$, $\delta = 10^{-7}$ | Stop when the last $n$ points evaluated are within a distance of $10^{-7}$ of each other.

first digit of $f$. This was sufficiently remediated in the tests based on function values ($\phi_1$ and $\phi_2$) by $\gamma$, a parameter determining how far into the past function values are remembered. We recommend a baseline value of $\gamma = 20n$, with a corresponding $\epsilon$ less than or equal to 0.1 for $\phi_1$. Good performance for $\phi_2$ can still be seen for $\gamma$ less than $10n$, but this test is more sensitive to $\epsilon$ values. For $\epsilon$ much less than 10, $\phi_2$ tests are rarely satisfied, whereas for $\epsilon$ much greater than 10, termination can occur with an inaccurate solution. This result is important as previous work focused on successive decreases [41] or values on a simplex or stencil [19]. Our recommendations are summarized in Table 5.1.

We have also seen that fewer problems are terminated before the budget constraint as the noise (measured by $\varepsilon_f$) becomes small. In these cases, however, a solver's built-in termination criteria should be satisfied more easily. Tests based on $x$ values, rather than function values, perform surprisingly well. When the noise level is more significant, tests utilizing knowledge of this noise can lead to improved performance. In any case, termination tests using knowledge of the points evaluated and their function values outperform (using the metrics presented in this chapter) simply exhausting a budget of function evaluations.
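The recommendations in Table 5.1 can be bundled into a single check over the evaluation history. The packaging below (names, return convention) is ours; `noise_est` stands in for the estimated relative noise $\hat{\varepsilon}^i_r$:

```python
import itertools
import math


def recommended_stop(x_hist, f_hist, n, noise_est):
    # Table 5.1 defaults: phi1 with (gamma, eps) = (20n, 0.01), phi2 with
    # (10n, 10), phi3 with (n, 1e-7).  Returns the satisfied tests.
    satisfied = set()
    f_i = f_hist[-1]
    g1, g2, g3 = 20 * n, 10 * n, n
    if len(f_hist) >= g1:
        best = []  # incumbent (best-so-far) values
        for f in f_hist:
            best.append(min(best[-1], f) if best else f)
        if best[-g1] - best[-1] <= g1 * 0.01 * abs(best[-1]) * noise_est:
            satisfied.add("phi1")
    if len(f_hist) >= g2 and \
            max(abs(fj - f_i) for fj in f_hist[-g2:]) <= 10.0 * abs(f_i) * noise_est:
        satisfied.add("phi2")
    if len(x_hist) >= g3 and max(
            (math.dist(a, b) for a, b in itertools.combinations(x_hist[-g3:], 2)),
            default=0.0) <= 1e-7:
        satisfied.add("phi3")
    return satisfied
```

A stalled run (identical points and values) trips all three rules, while a run still making progress trips none.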


For deterministic noise, we found that the recommended tests used considerably fewer function evaluations than the maximum budget. This result was obtained at the cost of lower accuracy. We recommend slightly modifying these tests to be more conservative for deterministic noise and again allowing the solver's built-in tests to stop a run when necessary. This effect is shown in Figure 5.9, where we see that tests based on $\phi_1$ and $\phi_2$ perform better when $\gamma$ is increased by $10n$.

Although our numerical experiments focused on local derivative-free solvers, the proposed tests can also be used in conjunction with the termination tests of derivative-based algorithms or the refinement stage of global algorithms. Incorporating additional information, such as noisy derivative values or the number of local minima, is left as future work.


6. Concluding Remarks

The results in this thesis naturally lead to further research. While Chapter 3 proved convergence of a trust region algorithm using weighted regression models (assuming the condition number of the weighting matrix is bounded), that leaves considerable freedom in deciding on a weighting scheme. The weighting proposed in Section 3.5 has some heuristic intuition, but nothing more. We have experimented with various other weighting schemes, some of which we thought would speed convergence, though that has not happened yet. While it seems intuitive to pick the weights which minimize the error between the model and the function, instituting such a plan on test problems has not improved the convergence of the algorithm.

Each time a model is built, we considered the calculation of the weights as a separate problem to be solved. We attempted to choose weights which solve

$\min_w \ \| \nabla m_k(x_k) - \nabla f(x_k) \| \quad \text{s.t.} \quad \mathrm{cond}(W) \le c_1, \quad w_i \ge c_2,$

where $c_1$ and $c_2$ are small positive constants and $W = \mathrm{diag}(w)$. In words, we are attempting to minimize the difference between the gradients of the model and the function at the trust region center. While a practical implementation would only be able to minimize bounds on the objective function as a proxy for minimizing the desired objective, initial results suggest against such avenues of research. For a proof of concept, we constructed DFO problems with known derivatives to be used by the weighting scheme but not by the algorithm. On these test problems, the algorithm using the heuristic weighting scheme presented in Section 3.5.1 outperformed the optimized weights. We also selected weights solving


$\min_w \ \max_{x \in B(x_k, \Delta_k)} \| \nabla m_k(x) - \nabla f(x) \| \quad \text{s.t.} \quad \mathrm{cond}(W) \le c_1, \quad w_i \ge c_2,$

at considerable computational effort. Again, these weights performed no better than heuristic schemes. We are still searching for a weighting scheme which outperforms our heuristic.

We believe the weights introduced in Chapter 3 should allow for more accurate models when the accuracy of function evaluations differs between points (i.e., by weighting more accurate function values higher). This could extend the work of [60], where increasingly accurate function evaluations can be achieved at increasing cost. The algorithm in Chapter 3 could be modified to call the function with various degrees of accuracy as it converges to a minimum.

In addition to proving that a subsequence of iterates from our stochastic algorithm converges almost surely to a first-order stationary point when there are infinitely many successful iterates, we are also generalizing the results from Chapter 4 to apply to all noise with mean zero and bounded variance rather than normally distributed errors.

Both of the algorithms in Chapter 3 and Chapter 4 are based on quadratic models, which require $O(n^2)$ function evaluations for a function in $n$-dimensional space. If the function evaluations are even moderately expensive, then merely constructing an initial quadratic model can exhaust a computational budget if $n = 100$, for example. We are exploring ways to start the algorithm with under-determined models and slowly build up towards linear and quadratic models. We have asked whether it is better, when there are more than $n + 1$ points in the trust region, to build an under-determined quadratic or an overdetermined linear model; such an answer might not exist.

One of the largest contributions of Chapter 5 is the framework to objectively compare the quality of stopping criteria. The tests presented there were directed at stopping noisy function optimization, and since DFO algorithms have purported deftness in handling noise, only DFO algorithms were compared. Nonetheless, the tools developed to compare stopping criteria translate directly to algorithms where reliable derivatives are available but evaluating them is expensive.

Lastly, during the course of research, we studied the convergence rate of DFO algorithms. What follows is the only proof of quadratic convergence for a class of DFO methods that we are aware of, assuming $f$ is smooth and provided $m_k$ is a sufficient approximation of $f$.

Assumption 6.1 $f \in LC^2$, and $x^*$ satisfies second-order sufficient conditions for $f$. Specifically, $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*)$ is positive definite.

Assumption 6.2 $m_k \in LC^2$ is a model which approximates $f$ with errors $\|\nabla f(x) - \nabla m_k(x)\| = O(\|x - x_k\|^2)$ and $\|\nabla^2 f(x) - \nabla^2 m_k(x)\| = O(\|x - x_k\|)$ for $x$ sufficiently close to $x^*$.

For ease of notation, define the following:
1. $m_k^k := m_k(x_k)$
2. $m_k^* := m_k(x^*)$
3. $\nabla m_k^k := \nabla m_k(x_k)$
4. $\nabla^2 m_k^k := \nabla^2 m_k(x_k)$
5. $f^k := f(x_k)$
6. $f^* := f(x^*)$

Theorem 6.3 If $f$, $x^*$, and $m_k$ satisfy the conditions in Assumptions 6.1 and 6.2, then for $x_0$ sufficiently close to $x^*$, iterates generated by Newton steps, $x_{k+1} = x_k - (\nabla^2 m_k^k)^{-1} \nabla m_k^k$, converge quadratically to $x^*$.


Proof: Using the integral form of Taylor's theorem,

$\nabla h(x + p) = \nabla h(x) + \int_0^1 \nabla^2 h(x + tp)\, p \, dt,$  (6.1)

with $h = m_k$, $x = x_k$, and $p = x^* - x_k$ yields

$\nabla m_k(x_k + (x^* - x_k)) = \nabla m_k(x_k) + \int_0^1 \nabla^2 m_k(x_k + t(x^* - x_k))\,(x^* - x_k)\, dt,$

so that

$\nabla m_k(x_k) - \nabla m_k(x^*) = -\int_0^1 \nabla^2 m_k(x_k + t(x^* - x_k))\,(x^* - x_k)\, dt,$

that is,

$\nabla m_k^k - \nabla m_k^* = \int_0^1 \nabla^2 m_k(x_k + t(x^* - x_k))\,(x_k - x^*)\, dt.$  (6.2)

Therefore,

$\|x_{k+1} - x^*\| = \|x_k - (\nabla^2 m_k^k)^{-1} \nabla m_k^k - x^*\|.$

Factoring,

$= \left\| (\nabla^2 m_k^k)^{-1} \left[ \nabla^2 m_k^k (x_k - x^*) - \nabla m_k^k \right] \right\|.$

Adding and subtracting the same terms,

$= \left\| (\nabla^2 m_k^k)^{-1} \left[ \nabla^2 m_k^k (x_k - x^*) - (\nabla m_k^k - \nabla m_k^*) - \nabla m_k^* \right] \right\|.$

Adding $\nabla f^* = 0$ and grouping,

$= \left\| (\nabla^2 m_k^k)^{-1} \left[ \nabla^2 m_k^k (x_k - x^*) - (\nabla m_k^k - \nabla m_k^*) - (\nabla m_k^* - \nabla f^*) \right] \right\|.$

Using the integral form of Taylor's theorem (6.2),

$= \left\| (\nabla^2 m_k^k)^{-1} \left[ \nabla^2 m_k^k (x_k - x^*) - \int_0^1 \nabla^2 m_k(x_k + t(x^* - x_k))\,(x_k - x^*)\, dt - (\nabla m_k^* - \nabla f^*) \right] \right\|.$

Since $c = \int_0^1 c\, dt$ for any constant $c$,

$= \left\| (\nabla^2 m_k^k)^{-1} \left[ \int_0^1 \nabla^2 m_k^k (x_k - x^*)\, dt - \int_0^1 \nabla^2 m_k(x_k + t(x^* - x_k))\,(x_k - x^*)\, dt - (\nabla m_k^* - \nabla f^*) \right] \right\|.$

Combining the integrals, which have identical limits, and factoring out the constant vector,

$= \left\| (\nabla^2 m_k^k)^{-1} \left[ \int_0^1 \left[ \nabla^2 m_k^k - \nabla^2 m_k(x_k + t(x^* - x_k)) \right] dt\,(x_k - x^*) - (\nabla m_k^* - \nabla f^*) \right] \right\|.$

Because $\nabla^2 m_k$ is Lipschitz continuous with constant $L$, so that $\|\nabla^2 m_k^k - \nabla^2 m_k(x_k + t(x^* - x_k))\| \le L t \|x_k - x^*\|$, and using the triangle inequality,

$\|x_{k+1} - x^*\| \le \left\| (\nabla^2 m_k^k)^{-1} \right\| \|x_k - x^*\| \int_0^1 L t\, \|x_k - x^*\|\, dt + \left\| (\nabla^2 m_k^k)^{-1} (\nabla m_k^* - \nabla f^*) \right\|.$

Evaluating the integral and applying the Cauchy-Schwarz (submultiplicative) inequality to the last term,

$\|x_{k+1} - x^*\| \le \left\| (\nabla^2 m_k^k)^{-1} \right\| \frac{L}{2} \|x_k - x^*\|^2 + \left\| (\nabla^2 m_k^k)^{-1} \right\| \left\| \nabla m_k^* - \nabla f^* \right\|.$

Since $\nabla^2 f(x^*)$ is positive definite, we can pick $x$ close enough to $x^*$ so that $\nabla^2 m_k(x)$ is also positive definite. We can therefore bound $\|(\nabla^2 m_k(x))^{-1}\| \le c_0 \|(\nabla^2 f(x^*))^{-1}\|$ for some constant $c_0$. This results in

$\|x_{k+1} - x^*\| \le \frac{L}{2} c_0 \left\| (\nabla^2 f^*)^{-1} \right\| \|x_k - x^*\|^2 + c_0 \left\| (\nabla^2 f^*)^{-1} \right\| c_1 \|x_k - x^*\|^2 \le C \|x_k - x^*\|^2$

for some constants $c_0, c_1, C$, where $c_1$ comes from Assumption 6.2. Therefore, the sequence $\{x_k\}$ converges quadratically to $x^*$.

As mentioned in the introduction, problems with expensive and noisy function evaluations are prevalent in a variety of fields. As computational resources and modeling mature, these problems arise with increasing frequency. Therefore, the work in this thesis should lay the groundwork for a variety of pertinent contributions to the optimization community.
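As a numerical illustration of the rate in Theorem 6.3, consider the idealized case $m_k = f$ for the test function $f(x) = x^2 + x^4$ (our choice), whose minimizer $x^* = 0$ satisfies Assumption 6.1:

```python
def newton_errors(x0, iters=6):
    # Newton's method on f(x) = x^2 + x^4 (minimizer x* = 0, f''(0) = 2 > 0),
    # recording |x_k - x*| = |x_k| at each iteration.
    grad = lambda x: 2.0 * x + 4.0 * x ** 3
    hess = lambda x: 2.0 + 12.0 * x ** 2
    errs, x = [abs(x0)], x0
    for _ in range(iters):
        x -= grad(x) / hess(x)
        errs.append(abs(x))
    return errs


errs = newton_errors(0.5)
```

The recorded errors fall from 0.5 to roughly $10^{-4}$ within three steps, with each error approximately the square of the previous one, as the theorem predicts.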


REFERENCES

[1] E. J. Anderson and M. C. Ferris. A direct search algorithm for optimization with noisy function evaluations. SIAM Journal on Optimization, 11:837-857, 2001.

[2] Charles Audet and J. E. Dennis Jr. A pattern search filter method for nonlinear programming without derivatives. Technical report, SIAM Journal on Optimization, 2000.

[3] S. C. Billups, J. Larson, and P. Graf. Derivative-free optimization of expensive functions with computational noise using weighted regression. Technical report, University of Colorado Denver, 2010. SIAM Journal on Optimization, submitted.

[4] K. H. Chang, L. Jeff Hong, and H. Wan. Stochastic Trust-Region Response-Surface Method (STRONG) - A New Response-Surface Framework for Simulation Optimization. INFORMS Journal on Computing, pages 1-14, April 2012.

[5] T. D. Choi and C. T. Kelley. Superlinear convergence and implicit filtering. SIAM Journal on Optimization, 10:1149-1162, 2000.

[6] Y. S. Chow and H. Robbins. On optimal stopping rules. Probability Theory and Related Fields, 2:33-49, 1963.

[7] P. G. Ciarlet and P. A. Raviart. General Lagrange and Hermite interpolation in R^n with applications to finite element methods. Archive for Rational Mechanics and Analysis, 46:177-199, 1972.

[8] A. R. Conn, K. Scheinberg, and P. L. Toint. Recent progress in unconstrained nonlinear optimization without derivatives. Mathematical Programming, 79:397-414, 1997.

[9] A. R. Conn, K. Scheinberg, and L. N. Vicente. Geometry of interpolation sets in derivative-free optimization. Mathematical Programming, 111:141-172, 2008.

[10] A. R. Conn, K. Scheinberg, and L. N. Vicente. Geometry of sample sets in derivative free optimization: Polynomial regression and underdetermined interpolation. IMA Journal on Numerical Analysis, 28:721-748, 2008.

[11] A. R. Conn, K. Scheinberg, and L. N. Vicente. Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points. SIAM Journal on Optimization, 20:387-415, 2009.

[12] A. R. Conn, K. Scheinberg, and L. N. Vicente. Introduction to Derivative-Free Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia, 2009.

[13] A. L. Custodio, H. Rocha, and L. N. Vicente. Incorporating minimum Frobenius norm models in direct search. Computational Optimization and Applications, 46:265-278, 2010.

[14] G. Deng and M. C. Ferris. Adaptation of the UOBYQA algorithm for noisy functions. In Proceedings of the Winter Simulation Conference, pages 312-319, 2006.

[15] G. Deng and M. C. Ferris. Extension of the DIRECT optimization algorithm for noisy functions. In Proceedings of the Winter Simulation Conference, pages 497-504, 2007.

[16] J. E. Dennis and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia, 1996.

[17] E. D. Dolan and J. J. More. Benchmarking optimization software with performance profiles. Mathematical Programming, 91:201-213, 2002.

[18] R. Fletcher. Practical Methods of Optimization. John Wiley & Sons, New York, 2nd edition, 1987.

[19] P. E. Gill, W. Murray, and M. H. Wright. Practical Optimization. Academic Press, London, 1981.

[20] F. Glover. Tabu Search - Part I. INFORMS Journal on Computing, 1:190-206, January 1989.

[21] F. Glover, M. Laguna, and R. Marti. Fundamentals of scatter search and path relinking. Control and Cybernetics, 39, 2000.

[22] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 3rd edition, 1996.

[23] N. I. M. Gould, D. Orban, and P. L. Toint. CUTEr and SifDec: A constrained and unconstrained testing environment, revisited. ACM Transactions on Mathematical Software, 29:373-394, 2003.

[24] R. B. Gramacy and M. A. Taddy. Categorical inputs, sensitivity analysis, optimization and importance tempering with tgp version 2, an R package for treed Gaussian process models. J. Statistical Software, 33:1-48, 2010.

[25] S. Gratton, M. Mouffe, and P. Toint. Stopping rules and backward error analysis for bound-constrained optimization. Numerische Mathematik, 119:163-187, 2011.

[26] S. Gratton and L. N. Vicente. A surrogate management framework using rigorous trust-region steps. Preprint 11-11, Univ. of Coimbra, March 2011.

[27] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, 2nd edition, 2002.

[28] J. H. Holland. Adaptation in Natural and Artificial Systems. MIT Press, Cambridge, MA, USA, 1992.

[29] W. Huyer and A. Neumaier. SNOBFIT - stable noisy optimization by branch and fit. ACM Transactions on Mathematical Software, 35:1-25, 2008.

[30] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79:157-181, 1993.

[31] B. Karasozen. Survey of trust-region derivative-free optimization methods. Journal of Industrial and Management Optimization, 3:321-334, 2007.

[32] C. T. Kelley. Users Guide for imfil version 1. Available at www4.ncsu.edu/~ctk/imfil.html.

[33] C. T. Kelley. Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM Journal on Optimization, 10:43-55, 1999.

[34] J. Kennedy and R. Eberhart. Particle swarm optimization. Proceedings of ICNN'95 - International Conference on Neural Networks, 4:1942-1948, 1995.

[35] J. Kiefer and J. Wolfowitz. Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, 23:462-466, 1952.

[36] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.

[37] M. Kortelainen, T. Lesinski, J. More, W. Nazarewicz, J. Sarich, N. Schunck, M. V. Stoitsov, and S. Wild. Nuclear energy density optimization. Phys. Rev. C, 82:024313, August 2010.

[38] R. Lougee-Heimer. The Common Optimization INterface for Operations Research: Promoting open-source software in the operations research community. IBM Journal of Research and Development, 47:57-66, 2003.

[39] J. Matyas. Random optimization. Automation and Remote Control, 26:246-253, 1965.

[40] H. D. Mittelmann. Decision tree for optimization software. http://plato.asu.edu/guide.html, 2010.

[41] J. J. More and S. M. Wild. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20:172-191, 2009.

[42] J. J. More and S. M. Wild. Estimating computational noise. SIAM J. Scientific Computing, 33:1292-1314, 2011.

[43] J. J. More and S. M. Wild. Estimating derivatives of noisy simulations. ACM Trans. Math. Softw., 38, 2011. To appear.

[44] R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. John Wiley and Sons, 2008.

[45] J. A. Nelder and R. Mead. A simplex method for function minimization. Computer Journal, 7:308-313, 1965.

[46] J. Neter, M. H. Kutner, C. J. Nachtsheim, and W. Wasserman. Applied Linear Statistical Models. McGraw Hill, 4th edition, 1996.

[47] M. J. D. Powell. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7:155-162, 1964.

[48] M. J. D. Powell. UOBYQA: Unconstrained optimization by quadratic approximation. Mathematical Programming, 92:555-582, 2002.

[49] M. J. D. Powell. Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. Mathematical Programming, 100:183-215, 2004.

[50] M. J. D. Powell. Developments of NEWUOA for minimization without derivatives. IMA Journal on Numerical Analysis, 28:649-664, 2008.

[51] C. R. Rao and H. Toutenburg. Linear Models: Least Squares and Alternatives. Springer Series in Statistics. Springer-Verlag, 2nd edition, 1999.

[52] L. A. Rastrigin. The convergence of the random search method in the extremal control of a many parameter system. Automation and Remote Control, 24:1337-1342, 1963.

[53] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 1951.

[54] A. N. Shiryayev. Optimal Stopping Rules. Springer-Verlag, New York, 1978.

[55] F. J. Solis and R. Wets. Minimization by random search techniques. Mathematics of Operations Research, 6:19-30, 1981.

[56] J. C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. Automatic Control, IEEE Transactions on, 37:332-341, 1992.

[57] J. C. Spall. Accelerated second-order stochastic optimization using only function measurements. In Decision and Control, 1997, Proceedings of the 36th IEEE Conference on, volume 2, pages 1417-1424. IEEE, 1997.

[58] J. C. Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation and Control. John Wiley and Sons, 2003.

[59] M. L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York, 1999.

[60] J. Takaki and N. Yamashita. A derivative-free trust-region algorithm for unconstrained optimization with controlled error. Numerical Algebra, Control and Optimization, 1:117-145, February 2011.

[61] J. J. Tomick, S. F. Arnold, and R. R. Barton. Sample size selection for improved Nelder-Mead performance. In Proceedings of the Winter Simulation Conference, pages 341-345, 1995.

[62] V. Torczon. On the convergence of pattern search algorithms. SIAM Journal on Optimization, 7:1-25, 1997.

[63] V. Cerny. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. Journal of Optimization Theory and Applications, 45:41-51, 1985.

[64] S. M. Wild, R. G. Regis, and C. A. Shoemaker. ORBIT: Optimization by radial basis function interpolation in trust-regions. SIAM Journal on Scientific Computing, 30:3197-3219, 2008.

[65] M. H. Wright. Using randomness to avoid perseveration in direct search methods. Presentation at The International Symposium on Mathematical Programming, 2009.

[66] A. A. Zhigljavsky. Theory of Global Random Search, volume 65 of Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, 1991. Translated and revised from the 1985 Russian original by the author, with a foreword by Serge M. Ermakov.