Citation
Polynomial root solving techniques

Material Information

Title:
Polynomial root solving techniques
Creator:
Berg, Jeffrey V
Publication Date:
1989
Language:
English
Physical Description:
vii, 48 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Polynomials ( lcsh )
Polynomials ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaf 39).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Mathematical and Statistical Sciences.
Statement of Responsibility:
by Jeffrey V. Berg.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
21104450 ( OCLC )
ocm21104450
Classification:
LD1190.L62 1989m .B47 ( lcc )

Full Text
POLYNOMIAL ROOT SOLVING TECHNIQUES
by
Jeffrey V. Berg
B.S., Arizona State University, 1983
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Mathematics
1989


This thesis for the Master of Science degree by
Jeffrey V. Berg
has been approved for the
Department of
Mathematics
by
William L. Briggs
Tom Russell
Jan Mandel
Date


Berg, Jeffrey V.(M.S., Applied Mathematics)
Polynomial Root Solving Techniques
Thesis directed by Professor William L. Briggs
This thesis will examine two general classes of polynomial
root finding methods. Several well-known iterative techniques
which solve for one root of an arbitrary complex polynomial are
first presented. Since these techniques solve for only one root
at a time, there is justification to coin them one-dimensional
techniques. On the other hand, if an iterative technique
converges to all of the roots of the polynomial, then that
technique is referred to as a simultaneous root solver. Several
simultaneous polynomial root solvers are reviewed and numerical
examples are given to compare the relative efficiency of these
methods.
The form and content of this abstract are approved. I recommend
its publication.
Signed
William L. Briggs


Dedication
To my grandfather, Ernest F. Orrick


Acknowledgements
The author wishes to express his gratitude to Dr. W.L.
Briggs of the University of Colorado at Denver for his
motivation, assistance and advice in the completion of this work.
Furthermore, the author expresses his appreciation to the
Department of Mathematics at the University of Colorado at Denver
for the education that made this work possible.


CONTENTS
CHAPTER
I. INTRODUCTION............................................ 1
II. ONE-DIMENSIONAL ROOT SOLVERS............................ 5
Newton's Method...................................... 6
Hansen-Patrick Family................................ 7
III. SIMULTANEOUS ROOT SOLVERS.............................. 12
Pseudo-simultaneous Root Solvers.................... 12
Symmetric Equations of a Polynomial................. 14
Symmetric Function of a Polynomial.................. 16
Durand-Kerner Simultaneous Root Solver.............. 16
Aberth Simultaneous Root Solver..................... 20
Hansen-Patrick-Rusnak Simultaneous Root Solver... 22
IV. NUMERICAL EXAMPLES..................................... 27
Initial Approximation of the Roots.................. 28
Computational Labor of Simultaneous Root Solvers. 29
Polynomials with Simple Roots....................... 31
Test Polynomials with Random Coefficients...... 32
Test Polynomials with Integer Roots.............. 33
Test Polynomial with Root Symmetry............... 34
Test Polynomial with Roots on the
Unit Circle................................... 35
Polynomials with Multiple Roots..................... 35
Test Polynomial with a Single Multiple Root...... 35
Test Polynomials with Multiple Roots of
Different Degrees........................... 36
V. CONCLUSION.............................................. 37
REFERENCES.................................................... 39
APPENDIX
MATLAB Computer Code for Simultaneous Root Solvers.. 40


CHAPTER I
INTRODUCTION
The values in the domain of a function which are mapped
onto the origin by that function are defined to be the roots of
the function. Roots of a function are of extreme interest in many
applications. The work in this thesis will be restricted to
polynomial functions. The purpose of this thesis is to provide a
reference for iterative polynomial root solving techniques.
Several well-known one-dimensional iterative techniques which
solve for one root of the polynomial are first reviewed. Further,
several simultaneous iterative root solving techniques, which
solve for all of the roots of a polynomial at once, are
investigated. The main assumption in the development of this
thesis is that all roots of an arbitrary complex polynomial are
desired.
An arbitrary complex nth degree polynomial may be
described by the following equation:
p(z) = a_n z^n + a_{n-1} z^{n-1} + ... + a_1 z + a_0
where a_i, i = 0, 1, ..., n, are constant complex coefficients. For
the purposes of this thesis, if the coefficient of the highest
power of z is nonzero, this equation will be written in monic
form:


p(z) = z^n + b_n z^{n-1} + b_{n-1} z^{n-2} + ... + b_2 z + b_1
where b_i, i = 1, ..., n, are constant complex coefficients.
Arbitrary complex polynomials are entire functions which
means the polynomial is infinitely differentiable (in the complex
sense) at every finite complex point [1]. This assures that the mth
derivative of p is defined for all m greater than or equal to zero.
An nth degree polynomial has exactly n roots counting
multiplicity [2]. Letting the roots of a polynomial p be r_i,
i = 1, ..., n, p may be expressed as

p(z) = (z - r_n)(z - r_{n-1}) ... (z - r_2)(z - r_1).

A root has multiplicity m ≤ n when there are m roots that are
equal. A simple root has multiplicity 1.
Many iterative techniques which solve for one of the n
roots of p exist. Chapter II provides a review of the more well-
known one-dimensional techniques. It begins by presenting
Newton's one-dimensional method. Then a generalization of several
one-dimensional techniques presented by Hansen and Patrick[3] is
derived. Euler's method, Laguerre's method, Halley's method,
Ostrowski's method and Newton's method are all specific cases of
the family of methods presented by Hansen and Patrick[3].
Each one-dimensional technique involves picking an
initial guess to a root of the polynomial and iteratively
modifying that choice to produce an improved approximation of a
root of the polynomial. The desired result of the iteration is a
sequence which converges, within a chosen accuracy, to a root of


the polynomial. While these techniques are sound and much
literature is available describing the techniques and their
performance, these techniques solve for only one root of the
polynomial for each initial guess. Furthermore, different initial
guesses can converge to the same root. Consequently, these
methods might not produce all of the roots of the polynomial even
if many initial guesses are tested.
Often, all of the roots of a polynomial are desired.
Chapter III begins with a suggestion for using one-dimensional
techniques in a pseudo-simultaneous root solving technique. Such
techniques are amenable to parallel processing. However, the
pseudo-simultaneous techniques are often doomed to failure for
reasons which are discussed.
Chapter III then presents a nonlinear system of
equations, the symmetric equations of a polynomial[2]. The
symmetric equations have a unique solution which is the set of all
of the roots of the polynomial. Chapter III continues with three
proposed methods that solve for all of the roots of a polynomial
simultaneously. These three methods are identified by the
author(s) of the paper that describes the method: 1) the
Durand-Kerner method [4], 2) the Aberth method [4], and 3) the
Hansen-Patrick-Rusnak method [5]. The Durand-Kerner method is discussed
in detail because of its relative simplicity and its relationship
with the symmetric equations. The Aberth method is presented and
the physical derivation presented by Aberth is described. Chapter


III concludes with a derivation of the Hansen-Patrick-Rusnak
method.
Chapter IV presents some numerical examples of the
Durand-Kerner method, the Aberth method and the Hansen-Patrick-
Rusnak method. Test polynomials with simple and multiple roots
are used to compare the efficiency of each method.
Finally, chapter V provides conclusions, conjectures and
avenues for further research. The Appendix contains a copy of the
MATLAB computer code for the simultaneous techniques which were
included in the numerical examples.


CHAPTER II
ONE-DIMENSIONAL ROOT SOLVERS
Let p be an arbitrary complex polynomial of degree n.
This chapter presents one-dimensional iterative polynomial root
solving techniques. As stated previously, one-dimensional root
solving methods solve for one root of the n roots of p at a time.
The first method reviewed is Newton's method which will be derived
using the Taylor series expansion. Hansen and Patrick derive a
family of one-dimensional techniques which is a general case for
Euler's method, Halley's method, Ostrowski's method and Laguerre's
method. Also, the Hansen-Patrick algorithm is Newton's method in
the limiting case.
In general, each of the iterative root solvers can be
expressed by z_{k+1} = f(z_k), where z_k is a current approximation to a
root of p. The difference between the methods lies in the
specification of the function, f. Newton's method utilizes p and
its first derivative, p', in its definition of f. The family of
methods derived by Hansen and Patrick utilizes p'' in addition to
p and p'.
The idea of the above algorithms is that after a finite
number of iterations the initial guess will converge, within a
chosen accuracy, to a root of the polynomial, p.


Newton's Method
Newton's method is one of the first iterative root
solving methods that is encountered in the study of numerical
analysis. Geometric considerations are usually presented as
motivation for Newton's method. Figure 2-1 depicts a geometric
representation of Newton's method for a real polynomial.
Figure 2-1. Geometric Representation of Newton's Method.
Newton's method is usually derived using real analysis, as is done
here; however, the resulting iteration method is applicable to the
complex polynomials in the complex domain.
The derivation begins by assuming that r is a simple real
root of p. In this case p' (r) is nonzero and there is a
neighborhood about r such that p' is nonzero in that neighborhood.
Assume that the current approximation of r, z_k, is in the
neighborhood about r where p' is nonzero and expand p in a Taylor
series about z_k. Note that p^(n+1)(z) equals zero identically.


p(z) = p(z_k) + (z - z_k) p'(z_k) + ((z - z_k)^2/2!) p''(z_k) + ... + ((z - z_k)^n/n!) p^(n)(z_k).

Also p^(n)(z) equals n!, so

p(z) = p(z_k) + (z - z_k) p'(z_k) + ((z - z_k)^2/2!) p''(z_k) + ... + (z - z_k)^n.

Terms in powers of (z - z_k) larger than one are neglected and z is
taken to be the next approximation, z_{k+1}. Assuming p(z_{k+1}) = 0,
we obtain Newton's iteration method,

z_{k+1} = z_k - p(z_k)/p'(z_k).
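To make the iteration concrete, a minimal MATLAB sketch of Newton's method for a polynomial given by its coefficient vector might look as follows; the function name newton_poly, the tolerance tol and the iteration cap maxit are illustrative choices and not part of the original text.

function z = newton_poly(c, z, tol, maxit)
% Newton's method for one root of p(z) = c(1)*z^n + ... + c(n)*z + c(n+1),
% starting from the initial guess z.
dc = polyder(c);                         % coefficients of p'
for k = 1:maxit
  step = polyval(c, z)/polyval(dc, z);
  z = z - step;                          % z_{k+1} = z_k - p(z_k)/p'(z_k)
  if abs(step) <= tol, return, end       % stop when the correction is small
end
end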
Hansen-Patrick Family
Hansen and Patrick[3] present a family of one-dimensional
root solving techniques which generalizes several one-dimensional
root solving methods. The derivation of the family begins by
assuming r is a simple root of p. Then p(z) can be rewritten as
p(z) = (z - r) q(z)

where q(r) is nonzero. Define

s_1 = p'(z)/p(z) = 1/(z - r) + q'(z)/q(z).

Following the notation employed by Hansen and Patrick, let ε = z - r and
σ_1 = q'(z)/q(z), so that

s_1 = 1/ε + σ_1.

Define

s_2 = -s_1' = [ (p'(z))^2 - p(z) p''(z) ] / (p(z))^2 = 1/(z - r)^2 + [ (q'(z))^2 - q(z) q''(z) ] / (q(z))^2.

Let σ_2 = -σ_1' = [ (q'(z))^2 - q(z) q''(z) ] / (q(z))^2, then

s_2 = 1/ε^2 + σ_2.
If ε is known, then r is established. The expressions for s_1 and
s_2 give two relations for ε. These relations can be solved for ε if
an arbitrary relation for ε is introduced which is quadratic. Hansen
and Patrick propose such a quadratic equation, with coefficients built
from three arbitrary functions of z, denoted A, B and C. Rewriting this
equation as a quadratic in 1/ε by means of the relations
s_1 = 1/ε + σ_1 and s_2 = 1/ε^2 + σ_2, and choosing A and B in terms of
C, σ_1 and σ_2 so that the remaining terms cancel, the quadratic in 1/ε
reduces to

(1 + σ_2/σ_1^2)(1/ε)^2 - 2(σ_2/σ_1^2) s_1 (1/ε) + (σ_2/σ_1^2) s_1^2 - s_2 = 0.

Now σ_1 and σ_2 are replaced by fixed parameters, or equivalently
σ_2/(σ_1)^2 in the quadratic in 1/ε is replaced by a fixed parameter,
say α. When this is done, and ε = z - r is replaced by z - z' to
account for the error in using α, we have

(1 + α)/(z - z')^2 - 2 α s_1/(z - z') + α s_1^2 - s_2 = 0.
Solving for the approximation of r, z', yields

z' = z - (1 + α) / ( α s_1 ± [ (1 + α) s_2 - α s_1^2 ]^{1/2} ).

Substituting for s_1 and s_2,

z' = z - (1 + α) p(z) / ( α p'(z) ± [ (p'(z))^2 - (α + 1) p(z) p''(z) ]^{1/2} ).

Replacing z' by z_{k+1} and z by z_k gives the Hansen and Patrick
family of root solving methods:

z_{k+1} = z_k - (1 + α) p(z_k) / ( α p'(z_k) ± [ (p'(z_k))^2 - (α + 1) p(z_k) p''(z_k) ]^{1/2} ).


In practice, the sign of the square root in the family of methods
is chosen so that the change in z_k is the smallest. The following
one-dimensional root solving methods are special cases of this
family.
Euler's Method (α = 1)

z_{k+1} = z_k - 2 p(z_k) / ( p'(z_k) ± [ (p'(z_k))^2 - 2 p(z_k) p''(z_k) ]^{1/2} )

Halley's Method (α = -1)

z_{k+1} = z_k - p(z_k) / ( p'(z_k) - p(z_k) p''(z_k)/(2 p'(z_k)) )

Laguerre's Method (α = 1/(n-1))

z_{k+1} = z_k - n p(z_k) / ( p'(z_k) ± [ (n-1)^2 (p'(z_k))^2 - n(n-1) p(z_k) p''(z_k) ]^{1/2} )

Ostrowski's Method (α = 0)

z_{k+1} = z_k - p(z_k) / [ (p'(z_k))^2 - p(z_k) p''(z_k) ]^{1/2}

Newton's Method (α → ∞)

z_{k+1} = z_k - p(z_k)/p'(z_k)
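As an illustration, one step of the family for a finite value of the parameter α (called a in the code) can be sketched in MATLAB as follows; hp_step and its arguments are illustrative names, and the sign of the square root is chosen, as described above, to make the change in z_k smallest.

function znew = hp_step(c, z, a)
% One Hansen-Patrick step for p given by the coefficient vector c
% (c(1)*z^n + ... + c(n+1)) with parameter a.
p   = polyval(c, z);
dp  = polyval(polyder(c), z);
ddp = polyval(polyder(polyder(c)), z);
rad = sqrt(dp^2 - (a + 1)*p*ddp);                     % square root in the denominator
d1  = a*dp + rad;  d2 = a*dp - rad;
if abs(d1) >= abs(d2), den = d1; else den = d2; end   % smallest change in z
znew = z - (1 + a)*p/den;
end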


CHAPTER III
SIMULTANEOUS ROOT SOLVERS
Pseudo-simultaneous Root Solvers
Figure 3-1 represents a possible use of the methods
presented in chapter II for a pseudo-simultaneous root solver. In
this pseudo-simultaneous root solver, methods discussed in chapter
II will be used n times in parallel as shown.
Figure 3-1. Pseudo-simultaneous Root Solver.


The initial guess is chosen so that z = [z_1 z_2 ... z_n]^T is an n-
dimensional complex vector, where z_i, i = 1, ..., n, are components of
z. Each component of z becomes the initial guess for the methods
discussed in chapter II.
A slight notational inconsistency will be highlighted
here in an effort to reduce confusion later. In chapter II,
subscripts were used to denote the iteration step (e.g., z_k and z_{k+1}).
In chapter III, subscripts will be used to denote components of
the n-dimensional vector, z, at a particular iteration step.
When it is important to denote the iteration step, the iteration
step will be denoted with a superscript (e.g., z^k or z^{k+1}).
As Figure 3-1 suggests, parallel processing
complements the pseudo-simultaneous root solvers. Each of the n
one-dimensional techniques could use a separate processor. If
each component of the initial guess converges to a different root,
then the procedure will be successful. However, a choice of an
initial guess which exhibits this behavior is more likely the
exception than the rule. Since no information is shared between
the one-dimensional techniques in parallel, there is nothing to
prevent different components of the initial guess from going to
the same root. This limitation of pseudo-simultaneous root
solvers leads to the search for iteration techniques which share
information between the estimates at an iteration and eventually
produce all of the roots.
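A sketch of this idea in serial MATLAB, using an independent Newton iteration for each component of an assumed initial vector z0 and an assumed coefficient vector c, could read as follows; the iteration budget and tolerance are illustrative.

% Pseudo-simultaneous root solving: n independent one-dimensional Newton
% iterations, one per component of the initial guess z0.  Nothing here
% prevents two components from converging to the same root.
dc = polyder(c);
r  = z0;
for i = 1:length(z0)
  for k = 1:100
    step = polyval(c, r(i))/polyval(dc, r(i));
    r(i) = r(i) - step;
    if abs(step) <= 1e-12, break, end
  end
end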


Symmetric Equations of a Polynomial
A review of the theory of polynomials yields a
relationship between the set of all roots of a polynomial and the
coefficients of that polynomial. This relationship is commonly
called the symmetric equations of a polynomial. The derivation of
the symmetric equations follows directly from two equivalent
representations of a polynomial given earlier. These forms are
repeated here for convenience.
p(z) = z^n + b_n z^{n-1} + b_{n-1} z^{n-2} + ... + b_2 z + b_1

p(z) = (z - r_n)(z - r_{n-1}) ... (z - r_2)(z - r_1)

where b_i, i = 1, ..., n, are constant complex coefficients and r_i,
i = 1, ..., n, are the roots of p. Expanding the second representation
of p and matching coefficients yields the symmetric equations.
The symmetric equations for n = 2 and n = 3 are derived to provide
insight into the pattern of the symmetric equations for general n.
n = 2

p(z) = z^2 + b_2 z + b_1
p(z) = (z - r_1)(z - r_2)
     = z^2 - (r_1 + r_2) z + r_1 r_2

symmetric equations:
r_1 + r_2 = -b_2
r_1 r_2 = b_1


n = 3

p(z) = z^3 + b_3 z^2 + b_2 z + b_1
p(z) = (z - r_1)(z - r_2)(z - r_3)
     = z^3 - (r_1 + r_2 + r_3) z^2 + (r_1 r_2 + r_1 r_3 + r_2 r_3) z - r_1 r_2 r_3

symmetric equations:
r_1 + r_2 + r_3 = -b_3
r_1 r_2 + r_1 r_3 + r_2 r_3 = b_2
r_1 r_2 r_3 = -b_1

Symmetric equations for general n:
r_1 + r_2 + ... + r_{n-1} + r_n = -b_n
r_1 r_2 + r_1 r_3 + ... + r_1 r_n + r_2 r_3 + ... + r_2 r_n + ... + r_{n-1} r_n = b_{n-1}
(all combinations of r_i taken three at a time) = -b_{n-2}
...
(all combinations of r_i taken n-1 at a time) = (-1)^{n-1} b_2
r_1 r_2 ... r_n = (-1)^n b_1
Given the coefficients, b_i, of a polynomial, the set of all of the
roots of that polynomial is the only set which satisfies the above
equations.
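The relationship can be checked numerically; a small MATLAB sketch for n = 3 with the arbitrarily chosen roots 1, 2 and 3 (using the built-in function poly) is given below.

% (z-1)(z-2)(z-3) = z^3 - 6z^2 + 11z - 6, so in the monic form
% z^3 + b3*z^2 + b2*z + b1 we have b3 = -6, b2 = 11, b1 = -6.
r  = [1 2 3];
cc = poly(r);                 % cc = [1 -6 11 -6]
b3 = cc(2); b2 = cc(3); b1 = cc(4);
disp(r(1)+r(2)+r(3) + b3)                       % r1+r2+r3 = -b3    ->  0
disp(r(1)*r(2)+r(1)*r(3)+r(2)*r(3) - b2)        % sum of pairs = b2 ->  0
disp(r(1)*r(2)*r(3) + b1)                       % r1*r2*r3 = -b1    ->  0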


Symmetric Function of a Polynomial
Let b_i, i = 1, ..., n, be the coefficients of p as before, let
z = (z_1, z_2, ..., z_{n-1}, z_n), and consider the vector-valued
function, f(z), such that

f_1(z) = z_1 + z_2 + ... + z_{n-1} + z_n + b_n
f_2(z) = z_1 z_2 + z_1 z_3 + ... + z_1 z_n + z_2 z_3 + ... + z_2 z_n + ... + z_{n-1} z_n - b_{n-1}
f_3(z) = (all combinations of z_i taken three at a time) + b_{n-2}
...
f_{n-1}(z) = (all combinations of z_i taken n-1 at a time) - (-1)^{n-1} b_2
f_n(z) = z_1 z_2 ... z_n - (-1)^n b_1
f(z) is the symmetric function of p(z). We seek a solution of
f(z) = 0 because the set of all roots is the solution to this
equation. The symmetric function provides an avenue to share
information between the estimates at an iteration in a
simultaneous root solver. Newton's method in n-dimensions will be
applied to the symmetric function in the following simultaneous
root solver.
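Numerically, a function with the same zero set as f can be formed for any n by comparing the coefficients of the product (x - z_1)...(x - z_n) with the coefficients of p; the following MATLAB sketch does this with the built-in poly (each component differs from the corresponding f_i above by at most a sign, which does not change the solution set; fsym is an illustrative name).

function f = fsym(z, cmonic)
% cmonic = [1, b_n, b_(n-1), ..., b_1] are the monic coefficients of p,
% highest power first; f is zero exactly when z holds the roots of p.
cmonic = reshape(cmonic, 1, []);    % ensure a row, like the output of poly
e = poly(z);                        % coefficients of prod(x - z_i)
f = e(2:end) - cmonic(2:end);
f = f(:);
end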
Durand-Kerner Simultaneous Root Solver
The first simultaneous root solver is presented under
different titles in different publications. Aberth[4] attributes


the method to Durand and Kerner, and Pasquini and Trigiante[6]
give many references for the method and refer to the method as the
W-method. Pasquini and Trigiante also document that "the W-method
is equivalent to Newton's method in N-dimensional space" by
considering Newton's method applied to the symmetric function of
the polynomial. Aberth's derivation gives insight into the method
and will be given first, followed by a demonstration for n = 2 and
n = 3 that Newton's n-dimensional method applied to the symmetric
function is the W-method.
Suppose that z_i, i = 1, ..., n, are estimates of the n roots
of p, r_i, i = 1, ..., n, at a given iteration step. Let δz_i be a
correction to z_i such that r_i = z_i + δz_i for i = 1, ..., n. Then

[z - (z_n + δz_n)][z - (z_{n-1} + δz_{n-1})] ... [z - (z_2 + δz_2)][z - (z_1 + δz_1)] = p(z).

Expanding in powers of δz_i gives

p(z) = ∏_{j=1}^{n} (z - z_j) - Σ_{i=1}^{n} δz_i ∏_{j=1, j≠i}^{n} (z - z_j) + O(δz_i^2).

Now, as with Newton's one-dimensional method, the Durand-Kerner
simultaneous root solver is obtained by dropping terms with powers
of δz_i larger than one and setting z = z_i in the above equation.
The resulting equation can be solved for δz_i, which results in

δz_i = -p(z_i) / ∏_{j=1, j≠i}^{n} (z_i - z_j).


δz_i is computed and added to z_i for i = 1, ..., n, and the procedure is
iterated until all of the roots of the equation are found to a
chosen accuracy. Note that for δz_i to be defined, the estimates for
the n roots of p must be distinct. Furthermore, if δz_i = 0, then
p(z_i) = 0 and z_i is a root of p.
Similar to the pseudo-simultaneous root solver, this
technique is complemented by parallel processing. Each δz_i could
be found by a separate processor on a parallel system.
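In serial MATLAB, one Durand-Kerner sweep over all n estimates can be sketched as below; dk_sweep is an illustrative name, and the Appendix function dkr is the version actually used for the numerical examples.

function z = dk_sweep(c, z)
% One Durand-Kerner sweep: z holds the current root estimates and c the
% coefficients of p in the form c(1)*z^n + ... + c(n)*z + c(n+1).
n = length(z);
znew = z;
for i = 1:n
  q = prod(z(i) - z([1:i-1, i+1:n]));    % product of (z_i - z_j), j ~= i
  znew(i) = z(i) - polyval(c, z(i))/q;   % z_i + delta z_i
end
z = znew;
end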
To complete this subsection, Newton's method in n dimensions
is applied to the symmetric function, f(z), of a
polynomial. Newton's method applied to f(z) = 0 is

z^{k+1} = z^k - J^{-1}(z^k) f(z^k)

where J^{-1} is the inverse of the Jacobian matrix of f(z). This
equation is equivalent to the W-method (Durand-Kerner method) [6].
This method is now applied for n = 2 and n = 3 to establish the
pattern.
n = 2

p(z) = z^2 + b_2 z + b_1

f(z) = ( z_1 + z_2 + b_2 ,  z_1 z_2 - b_1 )^T

The Jacobian of f is

J = [ 1    1
      z_2  z_1 ].

Inverting J gives

J^{-1} = 1/(z_1 - z_2) [ z_1   -1
                         -z_2   1 ]

and post-multiplying by f evaluated at z^k gives

J^{-1} f(z^k) = 1/(z_1^k - z_2^k) ( (z_1^k)^2 + b_2 z_1^k + b_1 ,  -[(z_2^k)^2 + b_2 z_2^k + b_1] )^T
              = ( p(z_1^k)/(z_1^k - z_2^k) ,  -p(z_2^k)/(z_1^k - z_2^k) )^T.

Then

δz_i = -p(z_i) / ∏_{j=1, j≠i}^{2} (z_i - z_j)   for i = 1, 2.

n = 3

p(z) = z^3 + b_3 z^2 + b_2 z + b_1

f(z) = ( z_1 + z_2 + z_3 + b_3 ,  z_1 z_2 + z_1 z_3 + z_2 z_3 - b_2 ,  z_1 z_2 z_3 + b_1 )^T

The Jacobian of f is

J = [ 1          1          1
      z_2 + z_3  z_1 + z_3  z_1 + z_2
      z_2 z_3    z_1 z_3    z_1 z_2 ].

Inverting J requires the following expressions:

det(J) = z_1^2 z_2 + z_2^2 z_3 + z_3^2 z_1 - z_1^2 z_3 - z_2^2 z_1 - z_3^2 z_2
       = (z_1 - z_2)(z_1 - z_3)(z_2 - z_3)

[Adj(J)]^T = [ z_1^2 (z_2 - z_3)   z_1 (z_3 - z_2)   z_2 - z_3
               z_2^2 (z_3 - z_1)   z_2 (z_1 - z_3)   z_3 - z_1
               z_3^2 (z_1 - z_2)   z_3 (z_2 - z_1)   z_1 - z_2 ]

so that

J^{-1} = [  z_1^2/((z_1 - z_2)(z_1 - z_3))   -z_1/((z_1 - z_2)(z_1 - z_3))    1/((z_1 - z_2)(z_1 - z_3))
           -z_2^2/((z_1 - z_2)(z_2 - z_3))    z_2/((z_1 - z_2)(z_2 - z_3))   -1/((z_1 - z_2)(z_2 - z_3))
            z_3^2/((z_1 - z_3)(z_2 - z_3))   -z_3/((z_1 - z_3)(z_2 - z_3))    1/((z_1 - z_3)(z_2 - z_3)) ].

Post-multiplying by f evaluated at z^k gives

J^{-1} f(z^k) = (  p(z_1^k)/((z_1^k - z_2^k)(z_1^k - z_3^k)) ,
                  -p(z_2^k)/((z_1^k - z_2^k)(z_2^k - z_3^k)) ,
                   p(z_3^k)/((z_1^k - z_3^k)(z_2^k - z_3^k)) )^T

and

δz_i = -p(z_i) / ∏_{j=1, j≠i}^{3} (z_i - z_j)   for i = 1, 2, 3.
We see that Newton's method in n-dimensions applied to the
symmetric function is the W-method for n = 2 and n = 3.
Pasquini and Trigiante[6] expand this concept by
presenting an alternate definition of f and applying Newton's
method to f(z) = 0 to obtain another simultaneous root solver.
Interested readers may obtain information in the noted reference.
Aberth Simultaneous Root Solver
Aberth derived a simultaneous root solving technique
using an analogy with electrostatics[4]. Aberth assigns unit plus


charges to r_i, i = 1, ..., n, in the complex plane and uses physical
and vector space arguments to establish

δz_i = -p(z_i)/p'(z_i).

Assigning unit plus charges to the zeros of p will cause the zeros
to attract the z_i's which are nearby. Notice, Aberth established the
pseudo-simultaneous root solver which employs Newton's one-
dimensional root solver.
To obtain a simultaneous root solver, Aberth assigned a
unit minus charge to each z_i. Presumably, if z_i is close to a
simple zero, then the minus charge will cancel the plus charge and
prevent other components of the initial guess from converging to
that simple zero. The unit minus charge assigned to each estimate is
Aberth's way to allow information sharing between the estimates
at an iteration. Introduction of the unit minus charges in the
derivation of the above equation results in

1/δz_i = -[ p'(z_i)/p(z_i) - Σ_{j=1, j≠i}^{n} 1/(z_i - z_j) ].

Reciprocating to obtain Aberth's simultaneous root solver,

δz_i = - [ p(z_i)/p'(z_i) ] / [ 1 - (p(z_i)/p'(z_i)) Σ_{j=1, j≠i}^{n} 1/(z_i - z_j) ].
As with the Durand-Kerner method, the Aberth method
requires that the estimates of the n roots of p be distinct for
δz_i to be defined. Furthermore, the Aberth method can also be
implemented on a parallel system.
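A corresponding serial sketch of one Aberth sweep is shown below; aberth_sweep is an illustrative name, and the Appendix function art is the version used in the numerical examples.

function z = aberth_sweep(c, z)
% One Aberth sweep over the estimate vector z for p given by c.
dc = polyder(c);
n = length(z);
znew = z;
for i = 1:n
  N = polyval(c, z(i))/polyval(dc, z(i));     % Newton correction p/p'
  S = sum(1./(z(i) - z([1:i-1, i+1:n])));     % sum of 1/(z_i - z_j)
  znew(i) = z(i) - N/(1 - N*S);               % Aberth's delta z_i
end
z = znew;
end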
Aberth's iterative method is referred to as a
simultaneous root solver because of the numerical examples[4] and
those presented in this work. However, this point is left
otherwise unproven. Proof that the Durand-Kerner method is a
simultaneous root solver follows because the set of all roots of a
polynomial is the only possible solution. As with Aberth's
method, the following iterative method is called a simultaneous
root solver without proving that it actually is.
Hansen-Patrick-Rusnak Simultaneous Root Solver
The derivation of Laguerre's one-dimensional method
presented by Ralston and Rabinowitz [7] gives insight into the
logic employed by Hansen, Patrick and Rusnak to obtain their
simultaneous root solver. The highlights of this derivation are
given here as background for the method established by Hansen,
Patrick and Rusnak. As with the derivation of Newton's method
given in chapter II, the derivation assumes the roots of p(z) are
simple and real. Since the roots are simple, they can be ordered
from smallest to largest. Using the ordered roots as end points
and including -∞ as one left end point and +∞ as one right end
point allows the real line to be divided into n+1 intervals. Each
initial approximation is a member of one of the n+1 intervals. A
family of parabolas in a parameter λ is established, each parabola
having two roots in the same interval as the initial approximation,
one root less than the approximation and one root greater than the
approximation. λ is optimized such that one of the two roots of
the parabola is as close as possible to a root of p(z). The root
of the parabola which is closest to a root of p(z) becomes the new
approximation in an iterative function. The algorithm for the
iterative function is exactly that given in chapter II for
Laguerre's one-dimensional root solver. Theory from real analysis
shows that Laguerre's one-dimensional root solver produces either
a monotonically increasing or monotonically decreasing sequence
which converges to a root of p(z).
Hansen, Patrick and Rusnak begin by considering the pseudo-
simultaneous root solver which uses Laguerre's one-dimensional
method as a route to a simultaneous root solver. The formula for
the change in each component, z_i, of the approximation of the
roots of p at a particular iteration step for the Laguerre pseudo-
simultaneous root solver is

δz_i = -n p(z_i) / ( p'(z_i) ± [ (n-1)^2 (p'(z_i))^2 - n(n-1) p(z_i) p''(z_i) ]^{1/2} ).
Since each of the individual one-dimensional root solvers produces a
monotonically increasing or monotonically decreasing convergent
sequence, Hansen, Patrick and Rusnak realized that each approximation
undershoots the root of p(z). Consequently, they incorporate an
extra term in Laguerre's pseudo-simultaneous root solver which
attempts to account for the undershoot; hence, they coin their method
the modified Laguerre method. This scheme is Hansen, Patrick and Rusnak's
method for information sharing between the estimates at an
iteration. The numerical examples in [5] and those given in this work
indicate that the modified Laguerre method is a simultaneous root
solver.
Consider the Laguerre pseudo-simultaneous root solver,
slightly rearranged, and with a term which will account for the
undershoot of the individual one-dimensional Laguerre root solvers
included in the radical of the denominator:

δz_i = -n p(z_i) / ( p'(z_i) ± [ n(n-1)((p'(z_i))^2 - p(z_i) p''(z_i)) - (n-1)(p'(z_i))^2 - δ_i ]^{1/2} ).
Suppose r_i is the root to which z_i is converging. If

δ_i = n(n-1)((p'(z_i))^2 - p(z_i) p''(z_i)) - n(p'(z_i))^2 - n^2 ( p(z_i)/(z_i - r_i) )^2 + 2n p(z_i) p'(z_i)/(z_i - r_i),

then the equation for δz_i reduces to

δz_i = -n p(z_i) / ( p'(z_i) ± [ (p'(z_i))^2 + n^2 ( p(z_i)/(z_i - r_i) )^2 - 2n p(z_i) p'(z_i)/(z_i - r_i) ]^{1/2} )

or

δz_i = -n p(z_i) / ( p'(z_i) ± [ ( p'(z_i) - n p(z_i)/(z_i - r_i) )^2 ]^{1/2} ).

Choosing the appropriate sign of the square root, we obtain the
desired result

δz_i = r_i - z_i.
The next task is to approximate δ_i because r_i is unknown.
This task is accomplished by assuming r_j, j = 1, ..., n, are the roots
of p and using notation similar to that used in the derivation of the
Hansen-Patrick one-dimensional root solver. Let

s_1(z) = p'(z)/p(z) = Σ_{j=1}^{n} 1/(z - r_j)

and

s_2(z) = [ (p'(z))^2 - p(z) p''(z) ] / (p(z))^2 = Σ_{j=1}^{n} 1/(z - r_j)^2.

Rewrite δ_i as

δ_i = n(n-1)((p'(z_i))^2 - p(z_i) p''(z_i)) - n(n-1)( p(z_i)/(z_i - r_i) )^2 - n( p'(z_i) - p(z_i)/(z_i - r_i) )^2

or

δ_i = n(n-1) s_2(z_i) (p(z_i))^2 - n(n-1)( p(z_i)/(z_i - r_i) )^2 - n( s_1(z_i) p(z_i) - p(z_i)/(z_i - r_i) )^2.

This can be reduced to

δ_i = n(n-1) (p(z_i))^2 Σ_{j=1, j≠i}^{n} 1/(z_i - r_j)^2 - n (p(z_i))^2 ( Σ_{j=1, j≠i}^{n} 1/(z_i - r_j) )^2.

An approximation for δ_i is obtained by replacing the r_j with the
current approximations z_j, which yields

δ_i = n(n-1) (p(z_i))^2 Σ_{j=1, j≠i}^{n} 1/(z_i - z_j)^2 - n (p(z_i))^2 ( Σ_{j=1, j≠i}^{n} 1/(z_i - z_j) )^2.
As with the previous two simultaneous root solvers, this
method requires distinct estimates for the n roots of p in
its definition of δz_i. Also, the modified Laguerre method can be
implemented on a parallel system.
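Written directly from the formulas above, one sweep of the modified Laguerre method can be sketched in MATLAB as follows; hpr_sweep is an illustrative name, Si corresponds to the approximation of δ_i above, and the Appendix function hprr is the (differently arranged) version used in the numerical examples.

function z = hpr_sweep(c, z)
% One modified Laguerre sweep over the estimate vector z for p given by c.
n  = length(z);
dc = polyder(c); ddc = polyder(dc);
znew = z;
for i = 1:n
  p  = polyval(c, z(i)); dp = polyval(dc, z(i)); ddp = polyval(ddc, z(i));
  d  = z(i) - z([1:i-1, i+1:n]);                       % z_i - z_j, j ~= i
  Si = n*(n-1)*p^2*sum(1./d.^2) - n*p^2*sum(1./d)^2;   % approximation of delta_i
  rad = sqrt(n*(n-1)*(dp^2 - p*ddp) - (n-1)*dp^2 - Si);
  den1 = dp + rad;  den2 = dp - rad;
  if abs(den1) >= abs(den2), den = den1; else den = den2; end
  znew(i) = z(i) - n*p/den;
end
z = znew;
end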
Hansen, Patrick and Rusnak[5] give an alternative
parallel implementation of the modified Laguerre simultaneous root
solver which exhibits better behavior than the straightforward
implementation of the above method. This implementation departs
from the main-stream of this work and is not presented here.
Instead we turn to numerical examples of the three simultaneous
root solvers which have been derived.


CHAPTER IV
NUMERICAL EXAMPLES
This chapter gives numerical examples which compare the
three simultaneous root solvers derived in chapter III. The
simultaneous root solvers were implemented with the application
package, MATLAB, on a VAX computer. MATLAB functions developed to
implement the simultaneous root solvers are included in the
Appendix. MATLAB was chosen because of its interactive matrix
manipulation capabilities. Furthermore, MATLAB has an intrinsic
function called ROOTS which solves for the roots of a polynomial.
Consequently, results of the simultaneous root solvers were
checked against this function. Function ROOTS solves for the
roots of
p(z) = c(1) z^n + c(2) z^{n-1} + ... + c(n) z + c(n+1),
so the simultaneous root solvers were coded to solve this same
problem. The MATLAB functions which implemented the Durand-
Kerner, Aberth and Hansen-Patrick-Rusnak simultaneous root solvers
were named DKRTS, ARTS and HPKRTS respectively. One drawback of
MATLAB was that implementation of the simultaneous root solvers in
parallel (which is possible with simultaneous root solvers) was
not directly possible. Nevertheless, performance of the three
methods could still be compared with a serial implementation.
Initially, the method for choosing the initial
approximation for the roots of p which applied to all the methods
will be given. Since the different methods require a different
amount of computation at each iteration step a weighting factor is
then established for fair comparison. The methods were tested on
a variety of polynomials in interactive sessions. Reflections on
the interactive sessions and cost comparisons are presented. The
different test polynomials can be categorized into two sets, those
with simple roots and those with multiple roots. Initially, test
polynomials with simple roots show that simultaneous root solvers
can be employed on many different polynomials. Then, test
polynomials with multiple roots show that the simultaneous root
solvers can be useful even in situations where one might expect
them to fail.
Initial Approximation of the Roots
This section presents a proposal by Aberth[4] for the
initial approximation of the roots of p. This proposal was used
to obtain the initial approximation of the roots of p in the
numerical examples of this chapter. To obtain Aberth's proposal,
convert the polynomial

p(z) = z^n + b_n z^{n-1} + b_{n-1} z^{n-2} + ... + b_2 z + b_1

to q(x), where z = x - b_n/n. The change of variables results in

p(z) = q(x) = x^n + c_{n-1} x^{n-2} + ... + c_2 x + c_1

where the c_i's are related to the b_i's. Aberth gives references which
show that if

s(r) = r^n - |c_{n-1}| r^{n-2} - ... - |c_2| r - |c_1| > 0
then all of the roots of p are inside the circle with radius r
centered at -b_n/n. Aberth also explains problems that
simultaneous root solvers have when the initial guess is
symmetrically placed about a line of symmetry of the roots of p.
Since the real line is often a line of symmetry of the roots of a
polynomial, Aberth suggests picking the initial guesses on the
circle with radius r centered at -b_n/n so that the initial guess
is not symmetric about the real line. This results in the
following formula for the initial approximation used in the
numerical examples:

z_j = -b_n/n + r exp( i( 2π(j-1)/n + π/(2n) ) ),   j = 1, ..., n.
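A sketch of this starting configuration in MATLAB, assuming the monic coefficient b_n is available as bn and a radius r satisfying the condition above has already been found, is:

% n initial guesses on the circle of radius r centered at -bn/n, rotated
% by pi/(2*n) so that they are not symmetric about the real line.
j  = 1:n;
z0 = -bn/n + r*exp(1i*(2*pi*(j-1)/n + pi/(2*n)));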
Computational Labor of Simultaneous Root Solvers
This section is devoted to assigning a cost to each of
the simultaneous root solvers in chapter III. To this end, p will
be a nth degree polynomial, and n will be assumed large. This
assumption means that the cost of computing p, p', p' and
relevent summations in the simultaneous root solvers dominate and


30
the cost of combining these terms to form 8zi is negligible. The
number of floating point operations (flops), which is one addition
and one multiplication, for each of the simultaneous root solvers
will be estimated. The Durand-Kerner simultaneous root solver
will be chosen as the baseline because it requires the fewest
flops.
The Durand-Kerner simultaneous root solver requires the
evaluation of p(z) and

∏_{j=1, j≠i}^{n} (z_i - z_j)

to compute δz_i. If Horner's rule is employed, computation of p(z)
requires n flops. The remaining product requires n-1 additions
and n-1 multiplications, which is n-1 flops. This implies that
approximately 2n flops are required to compute each δz_i. Since
δz_i must be calculated for i = 1, ..., n, the Durand-Kerner
simultaneous root solver executes approximately 2n^2 flops per
iteration. For comparison purposes, let 2n^2 flops cost one
dollar, in which case the Durand-Kerner simultaneous root solver
costs one dollar per iteration. Notice that the value of a dollar
depends on n. Consequently, cost comparisons in the following
text compare the simultaneous root solvers on a polynomial of
given degree. Comparisons across degree are not valid.
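For reference, Horner's rule for the monic form used in the Appendix routines, with the constant term stored in c(1) and the point z and degree n assumed given, can be written as:

% Evaluate p(z) = z^n + c(n)*z^(n-1) + ... + c(2)*z + c(1) with Horner's
% rule: n multiplications and n additions, i.e. n flops in the sense above.
p = 1;                        % leading (monic) coefficient
for k = n:-1:1
  p = p*z + c(k);
end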
The Aberth simultaneous root solver requires the
evaluation of p(z), p'(z) and

Σ_{j=1, j≠i}^{n} 1/(z_i - z_j)

to compute δz_i. Again, Horner's rule to calculate p'(z) requires
n-1 flops, so about 2n flops are required for p(z) and p'(z)
combined. The summation requires work of an additional n-1 flops,
which implies approximately 3n flops are required to compute each
δz_i. Since δz_i must be calculated for i = 1, ..., n, the Aberth
simultaneous root solver executes approximately 3n^2 flops per
iteration, or costs 1.5 dollars per iteration.
Finally, the Hansen-Patrick-Rusnak simultaneous root
solver requires the evaluation of p(z), p'(z), p''(z) and δ_i to
compute δz_i. As before, p(z) and p'(z) combined require about 2n
flops and p''(z) requires approximately n flops. Examination of δ_i
reveals that work of about 3n flops is required to calculate δ_i.
This implies that about 6n flops are required to compute each δz_i.
Since δz_i must be calculated for i = 1, ..., n, the Hansen-Patrick-
Rusnak simultaneous root solver executes approximately 6n^2 flops
per iteration, or costs 3.0 dollars per iteration.
Polynomials with Simple Roots
Four interactive sessions were performed to compare the
simultaneous root solvers on test polynomials with simple roots.
The sequence of each session was: 1) generate the polynomial, 2)
solve for the roots using ROOTS, 3) solve for the roots using the
Durand-Kerner simultaneous root solver (DKRTS), 4) solve for the
roots using the Aberth simultaneous root solver (ARTS) and 5)


solve for the roots using the Hansen-Patrick-Rusnak simultaneous
root solver (HPKRTS). The variable, il, which was output by the
function for each of the simultaneous root solvers was the number
of iterations required to approximate all the roots. The stopping
criterion was that the magnitude of the polynomial evaluated at
all the roots individually was less than the smallest number
representable by the computer. il was multiplied by the cost
weighting factor determined in the previous subsection to
establish the cost comparisons presented below. Again the
comparisons are valid between simultaneous root solvers on a
polynomial of given degree, and comparison across degree are not
valid.
The first interactive session used test polynomials of
different degrees with either all real random coefficients or all
complex random coefficients. The second session used test
polynomials of different degree with integer coefficients. The
third session used a test polynomial of degree 6 whose roots
display symmetry about the imaginary axis, as well as the real
axis. The final session of this section used a polynomial of
degree 20 with roots on the unit circle.
Test Polynomials with Random Coefficients
This interactive session incorporated test polynomials of
degree 5, 10 and 20 with either all real or all complex
coefficients. Two real random vectors were generated and combined
to obtain a polynomial with random complex coefficients. For each


degree the polynomial with real coefficients was solved by each
method, then the polynomial with complex coefficients was solved
by each method. The results for real coefficients are summarized
by Table 4-1.
Table 4-1. Polynomials with Real Coefficients and Simple Roots.
Degree of Polynomial    Durand-Kerner    Aberth    Hansen-Patrick-Rusnak
5 $11.00 $12.00 $18.00
10 $18.00 $15.00 $27.00
20 $26.00 $19.50 $39.00
The results for complex coefficients are summarized by Table 4-2
Table 4-2. Polynomials with Complex Coefficients and Simple Roots.
Degree of Polynomial    Durand-Kerner    Aberth    Hansen-Patrick-Rusnak
5 $10.00 $9.00 $15.00
10 $20.00 $16.50 $24.00
20 $29.00 $21.00 $39.00
Test Polynomials with Integer Roots
The test polynomial for this session was p(z) = (z - 1)(z - 2) ... (z - i)
for i = 2, ..., 8. Comparison of the different simultaneous
root solvers is given in the cost summary shown in Table 4-3.


Table 4-3. Polynomials with Integer Coefficients.
Degree of Polynomial    Durand-Kerner    Aberth    Hansen-Patrick-Rusnak
2 $8.00 $9.00 $9.00
3 $8.00 $9.00 $18.00
4 $9.00 $9.00 $15.00
5 $11.00 $11.50 $18.00
6 $13.00 $12.00 $21.00
7 $14.00 $13.50 $24.00
8 $18.00 $16.50 $27.00
Test Polynomial with Root Symmetry
The test polynomial for this session was p(z) = (z^4 + 16)(z^2 + 4).
The positioning of the roots of p is depicted in the following figure.
Figure 4-1. Roots of p(z) = (z^4 + 16)(z^2 + 4).


To solve this test polynomial, Durand-Kerner costs $10.00, Aberth
costs $11.50 and Hansen-Patrick-Rusnak costs $18.00.
Test Polynomial with Roots on the Unit Circle
The test polynomial for this session was p(z) = z^20 - 1.
Complex analysis reveals that this test polynomial has 20 roots on
the unit circle centered at the origin of the complex plane. The
costs for the Durand-Kerner, Aberth and Hansen-Patrick-Rusnak
simultaneous root solvers were $10.00, $9.00 and $15.00
respectively.
Polynomials with Multiple Roots
Two interactive sessions were performed on test
polynomials with multiple roots. The sequence of the solution was
the same as that used for the test polynomials with simple roots.
The first session investigated the behavior of the simultaneous
root solvers on a test polynomial with a single multiple root at
the origin and the second session looked at test polynomials with
several multiple roots.
Test Polynomial with a Single Multiple Root
This session used the test polynomial, p(z) = z^(z^ 1) .
Even with a test polynomial having only a single multiple root, we see
that the cost increases substantially over the costs of solving for
simple roots. The costs of the Durand-Kerner, Aberth and Hansen-


Patrick-Rusnak simultaneous root solvers were $28.00, $27.00 and
$42.00 respectively.
Test Polynomial with Multiple Roots of Different Degrees
The test polynomial for this session was p(z) = (z + 1)^i
for i = 1, ..., 11. As the order increased, the number of iterations
of the simultaneous root solvers increased accordingly. It was
also interesting to note that for i > 3 both the MATLAB intrinsic
function and the simultaneous root solvers had error in their
results. The roots produced had real parts close to but not equal
to -1 and small imaginary parts. The error increased as the order
increased. Comparison of the different simultaneous root solvers
is given in the cost summary in Table 4-4.
Table 4-4. Polynomial with Multiple Roots at -1.
Degree of Polynomial    Durand-Kerner    Aberth     Hansen-Patrick-Rusnak
2                       $27.00           $27.00     $12.00
3                       $31.00           $28.50     $33.00
4                       $33.00           $28.50     $33.00
5                       $34.00           $30.00     $39.00
6                       $43.00           $34.50     $45.00
7                       $53.00           $34.50     $48.00
8                       $84.00           $67.50     $105.00
9                       $92.00           $142.50    $213.00
10                      $339.00          $201.00    $630.00
11                      $529.00          $666.00    $711.00


CHAPTER V
CONCLUSION
This thesis reviewed polynomial root solving techniques.
Initially, techniques which solve for one of the roots of a
polynomial were presented. These techniques were coined one-
dimensional root solvers because they solve for only one of the
roots. The one-dimensional root solvers were then expanded to
pseudo-simultaneous root solvers which could solve for all of the
roots of the polynomial. However, since information is not shared
between the individual one-dimensional root solvers in the pseudo-
simultaneous root solvers, they do not necessarily solve for all
of the roots. Consequently, schemes for sharing information
between the individual one-dimensional root solvers were presented
to obtain simultaneous root solvers. Simultaneous root solvers
solve for all of the roots of a polynomial. The symmetric
function of a polynomial was presented as a scheme for information
sharing. Newton's method in n-dimensions was applied to the
symmetric function to obtain the first simultaneous root solver.
This simultaneous root solver was called the Durand-Kerner
simultaneous root solver. Next, the information-sharing scheme
proposed by Aberth was presented to generate Aberth's
simultaneous root solver. Finally, Hansen, Patrick and Rusnak


supplied information sharing to the pseudo-simultaneous root
solver which employs Laguerre's one-dimensional root solver to
obtain the Hansen-Patrick-Rusnak simultaneous root solver.
Numerical examples using the simultaneous root solvers
were given to compare relative efficiency. Test polynomials with
both simple and multiple roots were used. The results showed that
the Durand-Kerner simultaneous root solver and the Aberth
simultaneous root solver usually required similar computational
labor. The Hansen-Patrick-Rusnak simultaneous root solver
generally required fewer iterations to approximate all of the
roots; however, the computational labor required at each iteration
made the Hansen-Patrick-Rusnak simultaneous root solver less
efficient than the other two simultaneous root solvers.
The methods used to generate the simultaneous root solvers
in this work suggest that further work to establish information
sharing between the individual one-dimensional root solvers in
pseudo-simultaneous root solvers would produce further
simultaneous root solvers.


REFERENCES
1. R. A. Silverman, Complex Analysis with Application, Dover
Publications, New York: 1974.
2. L. E. Dickson, New First Course in the Theory of Equations,
John Wiley and Sons, New York: 1967.
3. E. Hansen & M. Patrick, "A Family of Root Finding Methods,"
Numer. Math., v. 27, 1976, pp. 257-269.
4. O. Aberth, "Iteration Methods for Finding All Zeros of a
Polynomial Simultaneously," Mathematics of Computation, v.
27, no. 122, April, 1973, pp. 339-344.
5. E. Hansen, M. Patrick, & J. Rusnak, "Some Modifications of
Laguerre's Method," BIT, v. 17, 1977, pp. 409-417.
6. L. Pasquini & D. Trigiante, "A Globally Convergent Method
for Simultaneously Finding Polynomial Roots," Mathematics of
Computation, v. 44, no. 169, January, 1985, pp. 135-149.
7. A. Ralston & P. Rabinowitz, A First Course in Numerical
Analysis, McGraw-Hill, New York: 1978.
8. M. Marden, Geometry of Polynomials, American Mathematical
Society, Providence, R.I.: 1966.
9. M. S. Petkovic & L. V. Stefanovic, "On Some Iteration
Functions for the Simultaneous Computation of Multiple
Complex Polynomial Zeros," BIT, v. 27, 1987, pp. 111-122.
10. G. Kjellberg, "Two Observations on Durand-Kerner's Root-
Finding Method," BIT, v. 24, 1984, pp. 556-559.
11. E. Hansen & M. Patrick, "Estimating the Multiplicity of a
Root," Numer. Math., v. 27, 1976, pp. 121-131.
12. G. Dahlquist & A. Bjorck, translated by N. Andersen,
Numerical Methods, Prentice-Hall, Englewood Cliffs, N.J.:
1974.
13. I. Gargantini, "Parallel Laguerre Iterations: The Complex
Case," Numer. Math., v. 26, 1976, pp. 317-323.


APPENDIX
MATLAB COMPUTER CODE FOR SIMULTANEOUS ROOT SOLVERS


function a=dkrts(c)
%
% This MATLAB function finds the roots of the polynomial
% c(1)*x^n + c(2)*x^(n-1) + ... + c(n)*x + c(n+1).  It
% accomplishes the same task as the intrinsic function ROOTS;
% however, dkrts employs the iteration method of Durand and Kerner
% explained in "Iteration Methods for Finding all Zeros of a Polynomial
% Simultaneously," written by Oliver Aberth in Mathematics of
% Computation, Volume 27, Number 122, April 1973.  This function calls
% function dkr which follows the notation of the above stated paper
% more closely.
% The function requires that c(1) is not equal to 0.
%
% programmed by Jeffrey V. Berg
% 12/4/88
%
[m,n]=size(c);
ml=max(m,n);
if (c(1)==0),error('c(1) equals zero'),end
% convert to the monic form b(1) + b(2)*z + ... + b(ml-1)*z^(ml-2) + z^(ml-1)
for i=2:ml,b(ml-i+1,1)=c(i)/c(1);end
a=dkr(b);
end



function a=arts(c)
%
% This MATLAB function finds the roots of the polynomial
% c(1)*x^n + c(2)*x^(n-1) + ... + c(n)*x + c(n+1).  It
% accomplishes the same task as the intrinsic function ROOTS;
% however, arts employs the iteration method of Aberth
% explained in "Iteration Methods for Finding all Zeros of a Polynomial
% Simultaneously," written by Oliver Aberth in Mathematics of
% Computation, Volume 27, Number 122, April 1973.  This function calls
% function art which follows the notation of the above stated paper
% more closely.
% The function requires that c(1) is not equal to 0.
%
% programmed by Jeffrey V. Berg
% 12/4/88
%
[m,n]=size(c);
ml=max(m,n);
if (c(1)==0),error('c(1) equals zero'),end
for i=2:ml,b(ml-i+1,1)=c(i)/c(1);end
a=art(b);
end


function a=hprrts(c)
%
% This MATLAB function finds the roots of the polynomial
% c(1)*x^n + c(2)*x^(n-1) + ... + c(n)*x + c(n+1).  It
% accomplishes the same task as the intrinsic function ROOTS;
% however, hprrts employs the iteration method of Hansen, Patrick and
% Rusnak explained in "Some Modifications of Laguerre's Method"
% written by E. Hansen, M. Patrick and J. Rusnak in BIT, Volume 17,
% 1977, pp. 409-417.  This function calls function hprr.
% The function requires that c(1) is not equal to 0.
%
% programmed by Jeffrey V. Berg
% 12/4/88
%
[m,n]=size(c);
ml=max(m,n);
if (c(1)==0),error('c(1) equals zero'),end
for i=2:ml,b(ml-i+1,1)=c(i)/c(1);end
a=hprr(b);
end


function r=dkr(c)
%
% This MATLAB function solves for the roots of the polynomial
% p(z) = z^n + c(n)*z^(n-1) + ... + c(2)*z + c(1).  The function
% employs the iteration method of Durand and Kerner explained in
% "Iteration Methods for Finding all Zeros of a Polynomial
% Simultaneously" written by Oliver Aberth in Mathematics of
% Computation, Volume 27, Number 122, April 1973.  r is a column
% vector of the roots of p(z) whose dimension equals the row
% dimension of the column vector c.
%
% column dimension (r) = column dimension (c)
%
% programmed by Jeffrey V Berg
% 11/8/88
%
[m,n]=size(c);
if (n~=1), error('c is not a column vector'),end
zchek=1;c0=c;
c0(m+1)=1;                     % append the leading (monic) coefficient
zi=init(c0,m);                 % initial approximation of the roots
check=ones(size(c));il=1;      % check(j)=0 marks a converged component
while (zchek~=0),
 for j=1:m,
  if (check(j)~=0),
   qzj=1;pzj=1;
   for k=1:m,
    if k~=j,qzj=qzj*(zi(j)-zi(k));end,
%
% Horner's rule
%
    pzj=c(m-k+1)+pzj*zi(j);
   end,
   if (abs(pzj/qzj)<=eps),check(j)=0;zn(j)=zi(j);else zn(j)=zi(j)-pzj/qzj;end,
  end,
 end,
 zchek=0;
 for j=1:m,if(check(j)==1),zchek=1;end,end
 zi=zn;
 il=il+1;
end
il,r=zn;
end


function r=art(c)
%
% This MATLAB function solves for the roots of the polynomial
% p(z) = z^n + c(n)*z^(n-1) + ... + c(2)*z + c(1).  The function
% employs the iteration method of Aberth explained in
% "Iteration Methods for Finding all Zeros of a Polynomial
% Simultaneously" written by Oliver Aberth in Mathematics of
% Computation, Volume 27, Number 122, April 1973.  r is a column
% vector of the roots of p(z) whose dimension equals the column
% dimension of the column vector c.
%
% column dimension (r) = column dimension (c)
%
% programmed by Jeffrey V Berg
% 11/8/88
%
[m,n]=size(c);
if (n~=1), error('c is not a column vector'),end
zchek=1;c0=c;
c0(m+1)=1;
zi=init(c0,m);
check=ones(size(c));il=1;
while (zchek~=0),
 for j=1:m,
  if (check(j)~=0),
   qzj=0;pzj=1;ppzj=m;         % qzj accumulates the sum of 1/(zi(j)-zi(k))
   for k=1:m,
    if k~=j,qzj=qzj+1/(zi(j)-zi(k));end,
%
% Horner's rule for p and p'
%
    pzj=c(m-k+1)+pzj*zi(j);
    if (k~=1),ppzj=c(m-k+2)*(m-k+1)+ppzj*zi(j);end,
   end,
   dzj=pzj/(pzj*qzj-ppzj);     % Aberth correction
   if (abs(dzj)<=eps),check(j)=0;zn(j)=zi(j);else zn(j)=zi(j)+dzj;end,
  end,
 end,
 zchek=0;
 for j=1:m,if(check(j)==1),zchek=1;end,end,
 zi=zn;
 il=il+1;
end
il,r=zn;
end


function r=hprr(c)
%
% This MATLAB function solves for the roots of the polynomial
% p(z) = z^n + c(n)*z^(n-1) + ... + c(2)*z + c(1).  The function
% employs the iteration method of Hansen, Patrick and Rusnak
% explained in "Some Modifications of Laguerre's Method" written by
% E. Hansen, M. Patrick and J. Rusnak in BIT, volume 17, 1977,
% pp. 409-417.  r is a column vector whose dimension equals the
% column dimension of the column vector c.
%
% column dimension (r) = column dimension (c)
%
% programmed by Jeffrey V Berg
% 11/8/88
%
[m,n]=size(c);
if (n~=1), error('c is not a column vector'),end
zchek=1;c0=c;
c0(m+1)=1;
zi=init(c0,m);
check=ones(size(c));il=1;
while (zchek~=0),
 for j=1:m,
  if (check(j)~=0),
   qzj=0;pzj=1;ppzj=m;pppzj=m*(m-1);
   for k=1:m,
    if k~=j,qzj=qzj+1/(zi(j)-zi(k));end,
%
% Horner's rule for p, p' and p''
%
    pzj=c(m-k+1)+pzj*zi(j);
    if (k~=1),ppzj=c(m-k+2)*(m-k+1)+ppzj*zi(j);end,
    if (k~=1),if(k~=2),pppzj=c(m-k+3)*(m-k+1)*(m-k+2)+pppzj*zi(j);end,end,
   end,
   if (abs(pzj)>eps),
    dj2=0;bj=qzj/(m-1);        % mean of the terms 1/(zi(j)-zi(k))
    for k=1:m,
     if (k~=j),dj2=dj2+(1/(zi(j)-zi(k))-bj)^2;end,
    end,
    s1j=ppzj/pzj;
    s2j=((ppzj)^2-pppzj*pzj)/(pzj)^2;
    s1pj=s1j+((m-1)*(m*s2j-s1j^2-m*dj2))^.5;
    s1nj=s1j-((m-1)*(m*s2j-s1j^2-m*dj2))^.5;
    if (abs(s1pj)>=abs(s1nj)),s1j=s1pj;else s1j=s1nj;end,
    dzj=-m/s1j;                % modified Laguerre correction
   else dzj=0;end,
   if (abs(dzj)<=eps),check(j)=0;zn(j)=zi(j);else zn(j)=zi(j)+dzj;end,
  end,
 end,
 zchek=0;
 for j=1:m,if check(j)==1,zchek=1;end,end,
 zi=zn;
 il=il+1;
end
il,r=zn;
end

function zi=init(co,n)
%
% This MATLAB function generates an initial condition vector for the
% iteration scheme described in the Oliver Aberth paper.
%
% programmed by Jeffrey V. Berg
% 11/8/88
%
% Build the binomial coefficients needed to shift the polynomial by
% -co(n)/n, producing the coefficients cop of q(x) = p(x - co(n)/n).
a=eye(n+1);
a(:,1)=ones(n+1,1);
for i=3:n+1,for j=2:i-1,a(i,j)=a(i-1,j-1)+a(i-1,j);end,end
for i=2:n+1,d=(-co(n)/n)^(i-1);for j=i:n+1,a(j,i)=a(j,i)*d;end,end
for i=1:n+1,for j=i:n+1,a(j,i)=co(j)*a(j,i);end,end
cop=co;
for i=2:n+1,for j=i:n+1,cop(j-(i-1))=cop(j-(i-1))+a(j,i);end,end
%cop,
k=spos(cop,n,10);
% place the initial guesses on a circle of radius k centered at -co(n)/n,
% rotated by pi/(2*n) so they are not symmetric about the real line
for i=1:n,zi(i)=-co(n)/n+k*exp(sqrt(-1)*((2*pi/n)*(i-1)+pi/(2*n)));end

function k=spos(cop,n,check)
%
% This MATLAB function determines a number w = i*10^(il-1) for which the
% polynomial abs(cop(n+1))*w^n - abs(cop(n))*w^(n-1) - ... -
% abs(cop(2))*w - abs(cop(1)) becomes positive.  It checks that
% cop(n) < eps*(cop(n+1)), which is close to zero.
% If the sign of the polynomial remains negative through
% check, an appropriate error message is returned.
%
% programmed by Jeffrey V. Berg
% 11/8/88
%
check1=-1;il=1;
%
if abs(cop(n))>10*eps*(cop(n+1)),
 error('the poly for spos is not right form'),end
%
cop1=abs(cop);
while (check1<0),
 for i=1:10,
  if check1<0,
   check1=cop1(n+1);
   w=10^(il-1)*i;
%
% evaluate the test polynomial at w by Horner's rule
%
   for i2=1:n,check1=check1*w-cop1(n+1-i2);end,
   if w>check,error('poly in spos stays negative thru check'),end
  else,
   break,
  end
 end
 il=il+1;
end
k=w;