Citation
A PN semiconductor device simulator using MATLAB

Material Information

Title:
A PN semiconductor device simulator using MATLAB
Creator:
Pace, Shawn Randall
Publication Date:
Language:
English
Physical Description:
ix, 114 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
MATLAB ( lcsh )
MATLAB ( fast )
Semiconductors -- Junctions ( lcsh )
Numerical analysis -- Computer programs ( lcsh )
Numerical analysis -- Computer programs ( fast )
Semiconductors -- Junctions ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaf 114).
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Shawn Randall Pace.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
781635746 ( OCLC )
ocn781635746
Classification:
LD1193.E54 2011M R34 ( lcc )

Full Text
A PN SEMICONDUCTOR DEVICE SIMULATOR USING MATLAB
by
Shawn Randall Pace
B.S.E.E., University of North Florida, 2007
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
for the degree of
Master of Science
Electrical Engineering
2011


This thesis for the Master of Science
Electrical Engineering
degree by
Shawn Randall Pace
has been approved
by
Hamid Fardi
Robert Grabbe
Date


Pace, Shawn Randall (M.S., Electrical Engineering)
A PN Semiconductor Device Simulator Using MATLAB
Thesis directed by Professor Hamid Fardi
ABSTRACT
The purpose of this project is to develop a functional PN semiconductor device
simulator that is modular in nature in order to allow for flexibility during
programming and to allow for future modification with relative ease. The device
modeling program is developed using MATLAB. In addition, the program's main
goal is to provide a simulator tool that can supplement device modeling,
semiconductor device physics, and numerical analysis to construct basic
semiconductor devices which can be studied using standard numerical analysis
techniques. MATLAB's capability and inherent nature of handling matrices and
matrix operations makes this approach an excellent technique to develop numerical
analysis algorithms. The program solution will be used to examine device parameters
such as carrier statistics, device potential, and internal electric fields. The device
solution is compared to analytical approximations in order to further strengthen the
understanding between theory and exact numerical solutions and how those solutions
are obtained.
This abstract accurately represents the content of the candidate's thesis. I recommend
its publication.
Signed
Hamid Fardi


ACKNOWLEDGEMENT
The main credit and acknowledgement for the development of this project goes to Dr.
Hamid Fardi, without whose knowledge and support the creation of this project would
have been nearly impossible. His knowledge of numerical analysis related to
semiconductor physics and device physics operation as a whole was immensely
appreciated and invaluable. Furthermore, appreciation should be given to the
University of Colorado Denver, whose Electrical Engineering department and faculty
provided the necessary training and knowledge over the course of my work there to
make this project possible. It is my sincere hope that this tool can be used as a
valuable training tool and supplemental learning aid.


TABLE OF CONTENTS
Figures.....................................................................vii
Tables.......................................................................ix
Chapter
1. Introduction...........................................................1
2. Semiconductor Device Physics...........................................4
2.1 Semiconductor Material Parameters......................................4
2.2 Device Equations.......................................................5
2.3 Carrier Densities.....................................................12
2.4 Mobility Models.......................................................13
2.5 Recombination and Generation Models...................................15
3. Numerical Analysis....................................................18
3.1 Taylor Series Expansion...............................................18
3.2 Numerical Differentiation.............................................21
3.3 Numerical Integration.................................................23
3.4 Tri-diagonal Matrices.................................................25
3.5 PN Junction DC Analysis...............................................28
3.6 Numerical Solution of PN Junction.....................................51
4. SDS Results Investigated..............................................56
4.1 SDS Output Compared to Analytical Approximations......................56


5. Semiconductor Device Simulator....................................67
6. Summary............................................................73
Appendix
A. Integral Form of the Current Density Equation......................74
B. Derivation Techniques of the Matrix-vector Coefficients............81
C. Semiconductor Device Simulator Implemented........................85
References...............................................................114


LIST OF FIGURES
Figure
2.1 INFINITESIMAL SLICE FOR DETERMINATION OF
INCREASE IN CARRIER DENSITY IN X..................6
2.2 ELECTRON CONCENTRATION DIFFUSION EFFECT...........9
3.1 INTEGRATION USING THE TRAPEZOIDAL RULE............23
3.2 TRI-DIAGONAL MATRIX...............................25
3.3 BASIC P-N JUNCTION DIODE..........................29
3.4 DIVISION POINTS FOR DC ANALYSIS...................32
3.5 MATRIX FORM OF THE MATRIX-VECTOR EQUATION.........40
3.6 COEFFICIENT A OF MATRIX-VECTOR EQUATION...........41
3.7 COEFFICIENT B OF MATRIX-VECTOR EQUATION...........41
3.8 COEFFICIENT C OF MATRIX-VECTOR EQUATION...........42
3.9 COEFFICIENT F OF MATRIX-VECTOR EQUATION...........42
3.10 UPDATED BIAS POTENTIAL BASED ON PREVIOUS
POTENTIAL.........................................54
4.1 ABRUPT STEP JUNCTION PROPERTIES...................59
4.2 EQUILIBRIUM DEVICE POTENTIAL......................60
4.3 EQUILIBRIUM ELECTRIC FIELD........................61
4.4 DEVICE POTENTIAL WITH UNEQUAL DOPING DENSITIES....62


4.5 SIMWINDOWS COMPARISON TO FIGURE 4.4............62
4.6 SDS OUTPUT SHIFTED TO ZERO.....................63
4.7 SIMWINDOWS ELECTRIC FIELD......................63
4.8 SDS ELECTRIC FIELD.............................64
4.9 SDS DEVICE POTENTIAL WITH EXTERNAL
APPLIED BIAS...................................65
4.10 SIMWINDOWS POTENTIAL WITH EXTERNAL
APPLIED BIAS...................................66
5.1 SDS BASIC FLOWCHART............................72
A. 1 INTEGRAL FORM OF CURRENT DENSITY EQUATIONS.....79


LIST OF TABLES
Table
2.1 Material parameters for silicon...........................................4
2.2 Mobility parameters......................................................14
2.3 Ionization generation parameters.........................................17
3.1 Partial derivatives required for matrix coefficients.....................43
3.2 Numerical constants......................................................54
3.3 Physical constants.......................................................55


1.
Introduction
Although there are several powerful industry and student level device physics
simulators already available, the hope is that with this new tool and supplemental
report, the theory of numerical analysis applied to device physics can be studied at a
more fundamental level. Traditional simulators allow for device simulation but do not
offer any insight into how that solution was obtained. The addition of the report that
supplements the Semiconductor Device Simulator will develop the basic tools
necessary to understand the operation of the program and allow future modifications
as necessary.
This project was developed in the programming environment provided by
MATLAB, which is developed and supported by MathWorks. Almost all academic
institutions use MATLAB heavily in the learning process and it is readily available to
most students, hence the choice of this programming environment. Additionally,
MATLAB is an extremely powerful numerical analysis tool that makes the solution
of a program like this one rather straightforward. MATLAB's capability and inherent
nature of handling matrices and matrix operations makes this an excellent tool to
develop numerical analysis algorithms.
The current semiconductor simulation tools available do not lend themselves to updates or
modifications in a simple or straightforward manner. In fact, most tools cannot be changed
in any manner unless the source code is available, which is rare. With
the open nature of MATLAB the solution process can be studied throughout and all
variables and parameters are available for examination. The program could have the
capability to be standalone which allows for the use of the tool without the
requirement of the MATLAB environment.*
The * symbol used throughout this manual indicates a future development not
yet built into the current version of the program. Almost all sections in this manual
include a future development section to indicate where the code can be easily updated
to include new functions or models as necessary.


The remaining discussion in this section gives a brief overview of the contents
included in the following sections.
Chapter 2 gives a brief introduction to the device physics required to build a
numerical analysis program to simulate semiconductor devices in general. Device
physics equations (continuity, Poisson's, current, etc.) are discussed, as are the
models, such as mobility, SRH (Shockley-Read-Hall) recombination, and generation.
Boundary conditions are also discussed. Lastly, silicon material parameters are given
and briefly discussed. Currently no other semiconductor material is used in this
program, but the addition of other materials is rather straightforward*.
Chapter 3 is broken into several sections. The first section will give a brief
introduction to numerical analysis techniques such as differentiation using the
difference method, integration using the trapezoidal technique, Taylor series
expansion, and basic tri-diagonal matrix solutions.
The next section will discuss the numerical parameters required to support the
program, such as mesh spacing, iteration limits, convergence tolerances, etc. The final
section will give an in-depth view of setting up and solving the numerical equations
for a basic PN junction device. The section will cover converting the basic device
physics equations in Chapter 2 into numerical equations, setting up the tri-diagonal
block matrix, performing the recursive solution algorithm and determining
convergence criteria.
Chapter 4 will provide simulation results and compare them to basic theory (depletion
width, built-in potential, etc.) as well as with other simulation tools, such as
SimWindows.
Chapter 5 will discuss the program as written and the various functions built to
support it. Each function will be discussed and explained. It should be noted however,
that the MATLAB code itself is heavily commented as a supplement to this section.
Chapter 6 gives a brief summary of the report.


Appendix A Integral Form of the Current Density Equation
Appendix B Derivation Techniques of the Matrix-vector Coefficients
Appendix C Semiconductor Device Simulator Implemented
A final note on the Introduction is in order. Currently this program supports basic 1-D
PN junction devices with uniform mesh spacing. Although the program is hardcoded
for a Step Junction, the junction profile is easily changed just as the numerical
parameters (convergence tolerance, relaxation factor, device length and doping) are
without any further modification to the program. Modifications for 2-D solutions,
refined mesh spacing, Schottky diodes, and BJTs could be further implemented*. The
same can be said for different numerical solution techniques if desired.


2.
Semiconductor Device Physics
In this chapter, a brief discussion of simple device physics will be covered along with
the differential equations governing them as well as any necessary models required to
establish a numerical solution of the differential equations. Semiconductor physical
parameters will be given in Section 2.1, device equations discussed in Section 2.2,
carrier densities in Section 2.3, mobility models in Section 2.4 and recombination-
generation models in Section 2.5.
2.1. Semiconductor Material Parameters
A tabulated list of the semiconductor physical parameters necessary to reach a
numerical solution and some extra parameters used for future expansions are given
below. Currently only parameters for Silicon material are used. The corresponding
program variable is also shown.
Table 2.1 Material parameters for silicon
Nomenclature Program Variable Default Value Units
Relative Permittivity er 11.7 -
Silicon Permittivity es 1.036 x 10^-12 F/cm
Energy Gap (300K) EG300 1.08 eV
Alpha EG_alpha 4.73 x 10^-4 -
Beta EG_beta 636 -
Conduction Band Density (300K) NC300 2.8 x 10^19 cm^-3
Valence Band Density (300K) NV300 1.04 x 10^19 cm^-3
Electron Mobility Function* 1350 cm^2/V-sec
Hole Mobility Function* 495 cm^2/V-sec
Intrinsic Carrier Concentration ni (Function)* 1.45 x 10^10 cm^-3
SRH Electron Lifetime tau_n0 0.2 x 10^-7 sec
SRH Hole Lifetime tau_p0 0.2 x 10^-7 sec
SRH Electron Concentration Nsrh_n 5 x 10^15 cm^-3


Table 2.1 (Cont.)
Nomenclature Program Variable Default Value Units
SRH Hole Concentration Nsrh_p 5 x 10^15 cm^-3
Electron Affinity** - 4.17 eV
Donor Energy Level** - 0.044 eV
Acceptor Energy Level** - 0.045 eV
Saturation Velocity** Function** - cm/sec
Electron Auger Coef.** - 2.8 x 10^-31 cm^6/sec
Hole Auger Coef.** - 9.9 x 10^-32 cm^6/sec
* Designates parameters whose values are outputs of a function but are currently set
to a constant. This is also in keeping with the theme of future expansion
mentioned in the introduction chapter. These terms can be further modeled with
little effort.
** Parameters are not currently used.
2.2. Device Equations
Semiconductor device behavior is described and governed by Poisson's equation (2.1),

$$\frac{\partial^2 \psi}{\partial x^2} = -\frac{q}{\varepsilon}\left(\Gamma + p - n\right) \qquad (2.1)$$

where Γ = N(x) = N_D − N_A, and by the continuity equations for electrons and holes (2.2a and 2.2b).

$$\frac{\partial n}{\partial t} = \frac{1}{q}\frac{\partial J_n}{\partial x} + G - U \qquad (2.2a)$$

$$\frac{\partial p}{\partial t} = -\frac{1}{q}\frac{\partial J_p}{\partial x} + G - U \qquad (2.2b)$$


Γ is the effective doping concentration defined for the semiconductor, whether it be a
PN junction, bipolar junction transistor, MOSFET, and so on. Equation (2.1) is the
differential equation for the potential distribution in an arbitrarily doped
semiconductor. This equation, depending upon the particulars, is difficult to solve
analytically and in some cases can be impossible. Approximations must be made in
most analytical solutions; alternatively, numerical methods can be used, which is the
focus of this report [1]. Because much of semiconductor analysis is concerned with
the spatial variation of carriers and potentials from one region of a device to another,
Poisson's equation is almost always encountered in the solution procedure. It is one
of the most useful principles in semiconductor device analysis.
Equations (2.2a) and (2.2b) are known as the continuity equations for electrons and
holes respectively. These equations are used to describe the free carrier densities and
their properties within the device by examining their interactions within a small
infinitesimal volume that we can extend across the device as a whole. To derive a
one-dimensional continuity equation for electrons, we consider an infinitesimal slice
of thickness dx located at x as shown in Figure 2.1.
[Figure: an infinitesimal slice of thickness dx located at x, with current density Jn(x) entering, Jn(x+dx) leaving, and generation G and recombination R inside.]
Figure 2.1 INFINITESIMAL SLICE FOR DETERMINATION OF INCREASE IN
CARRIER DENSITY IN X


The number of electrons in the slice can increase because of the net flow into the
volume and from net carrier generation in the device. The overall rate of electron
increase equals the algebraic sum of
(1) The number of electrons flowing into the slice, minus
(2) The number flowing out, plus
(3) The rate at which electrons are generated, minus
(4) The rate at which they recombine. [ 1 ]
The first two components (1) and (2) are found by dividing the currents at each side
of the slice by the charge on an electron, and for now, the last two components (3)
and (4) are simply denoted by G and R, respectively. The rate of change in the
number of electrons in the slice is then given by equation (2.3).

$$\frac{\partial n}{\partial t}A\,dx = \left[\frac{J_n(x)}{-q} - \frac{J_n(x+dx)}{-q}\right]A + (G - R)A\,dx \qquad (2.3)$$

A is the cross-sectional area of the slice, q is the electron charge, and G and R are the
generation and recombination densities, respectively. If we use a Taylor series
expansion on the second term,

$$J_n(x+dx) = J_n(x) + \frac{\partial J_n}{\partial x}dx + \dots$$

and substitute into equation (2.3), we obtain the continuity equation for electrons as
shown in equation (2.2a). A similar procedure will result in equation (2.2b), the
continuity equation for holes. To yield equations that can be used analytically or
numerically we need to introduce the relationship between the current density J and
the carrier densities n and p.
The current transport equations or current densities are defined by a combination of
the drift and diffusion effects on the free carriers within the device. The drift portion
of the current expression is simply due to the presence of an electric field, if there is
one. This electric field creates an overall drift velocity v_d, which is proportional to the
electric field and the mobility of the carrier as shown in equation (2.4).
$$v_d = -\mu_n E \qquad (2.4)$$


The mobility of the free carrier is a parameter that describes the ease of motion of an
electron in response to an electric field and is affected by several things. The mobility
model and how it is used in the application of this program is discussed shortly.
The current density flowing in the direction of the electric field can be found by
summing the product of the charge on each electron times its velocity over all
electrons n per unit volume, that is,

$$J_{n,\text{drift}} = -qnv_d = q\mu_n nE$$

The same procedure can be extended to the hole free carriers, and the two are combined
to give the total current density due to the applied electric field in equation (2.5).

$$J_{\text{drift}} = J_{n,\text{drift}} + J_{p,\text{drift}} = q\left(\mu_n n + \mu_p p\right)E \qquad (2.5)$$
The other effect on the current density through a device is due to the diffusion of free
carriers in the device if a spatial variation of carrier energies or densities exists. To
investigate this effect we start by examining a portion of a semiconductor device that
is n-type and with an electron density that varies only in one dimension. The
temperature across the region is assumed uniform so that only the density varies with x, and we
then consider the number of electrons passing the plane at x=0 per unit time per unit
area (Figure 2.2).
[Figure: electron concentration n as a function of position x, with the plane of interest at x = 0.]
Figure 2.2 ELECTRON CONCENTRATION DIFFUSION EFFECT
Because the electrons are at finite temperature, they have random thermal
motion along the one-dimensional line even with no electric field applied. On average, the
electrons crossing the plane at x=0 from the left side start at approximately x=−λ after
a collision. The term λ is defined as the mean-free path for an electron, that
is, the average distance an electron travels before encountering another collision,
and is given by λ = v_th·τ_cn. Therefore the average rate per unit area of electrons
crossing the plane at x=0 from the left depends upon the density of electrons that started at x=−λ
and is ½·n(−λ)·v_th. The factor of ½ is due to the fact that half of the electrons move
further to the left of x=−λ after a collision and half to the right. Similarly, the rate at
which electrons pass the plane x=0 from the right is ½·n(λ)·v_th, which gives a net rate
of particle flow from the left of


$$F = \frac{1}{2}v_{th}\left[n(-\lambda) - n(\lambda)\right]$$

which can be approximated at x=0 by the Taylor series expansion to give

$$F = \frac{1}{2}v_{th}\left[n(0) - \lambda\frac{dn}{dx} - n(0) - \lambda\frac{dn}{dx}\right] = -v_{th}\lambda\frac{dn}{dx}$$

and finally, because each electron carries charge −q, the particle flow corresponds to a
current per unit area in equation (2.6).

$$J_{n,\text{diff}} = -qF = qv_{th}\lambda\frac{dn}{dx} \qquad (2.6)$$
A more useful expression of equation (2.6) can be developed by using the theorem for
equipartition of energy in this one-dimensional case. [1] This allows us to write
½m_n*v_th² = ½kT and λ = v_th·τ_cn, and rearrange equation (2.6) into equation (2.7).

$$J_{n,\text{diff}} = qD_n\frac{dn}{dx}, \quad \text{where} \quad D_n = \mu_n\frac{kT}{q} \qquad (2.7)$$
The quantity D_n is known as the diffusion constant, and the relation D_n = μ_n kT/q is the
Einstein relationship. It relates the two important coefficients that characterize free
carrier transport by drift and by diffusion in a solid. Now with the drift and diffusion
terms derived, the total electron current density can be expressed by equation (2.8).

$$J_n = J_{n,\text{drift}} + J_{n,\text{diff}} = q\mu_n nE + qD_n\frac{dn}{dx} \qquad (2.8)$$

Following the same procedure for holes leads to equation (2.9), and combining the two
expresses the current density for both carriers as in equation (2.10).
$$J_p = J_{p,\text{drift}} + J_{p,\text{diff}} = q\mu_p pE - qD_p\frac{dp}{dx} \qquad (2.9)$$

$$J = J_n + J_p \qquad (2.10)$$
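To make the drift-diffusion relationship concrete, the short MATLAB sketch below evaluates equation (2.8) on a simple one-dimensional mesh. It is only an illustration; the variable names and the example carrier and potential profiles are assumptions, not part of the SDS code.

% Minimal sketch: electron current density J_n = q*mu_n*n*E + q*D_n*dn/dx
% evaluated midway between mesh points. Names and profiles are illustrative.
q   = 1.602e-19;            % electron charge [C]
kT  = 0.0259*q;             % thermal energy at 300 K [J]
mun = 1350;                 % electron mobility [cm^2/V-sec]
Dn  = mun*kT/q;             % Einstein relation, eq. (2.7)

x   = linspace(0, 2e-4, 201);          % position [cm]
n   = 1e15 + 1e17*exp(-x/2e-5);        % example electron profile [cm^-3]
psi = 0.5*exp(-x/2e-5);                % example potential [V]

dx  = diff(x);
E   = -diff(psi)./dx;                  % E = -dpsi/dx at midpoints
nm  = 0.5*(n(1:end-1) + n(2:end));     % n averaged to midpoints
Jn  = q*mun*nm.*E + q*Dn*diff(n)./dx;  % drift + diffusion, eq. (2.8)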
Before we continue to the next section, the equations discussed so far are summarized
in equations (2.11a-e) below, which are often referred to as the equations of state [2],
and a brief discussion on non-thermal equilibrium conditions is given. The current
SDS program does not employ these techniques as solution mechanisms.
Equations of State:

$$J_p = -qD_p\frac{\partial p}{\partial x} - q\mu_p p\frac{\partial \psi}{\partial x} \qquad (2.11a)$$

$$J_n = qD_n\frac{\partial n}{\partial x} - q\mu_n n\frac{\partial \psi}{\partial x} \qquad (2.11b)$$

$$\frac{\partial p}{\partial t} = -\frac{1}{q}\frac{\partial J_p}{\partial x} + G - U \qquad (2.11c)$$

$$\frac{\partial n}{\partial t} = \frac{1}{q}\frac{\partial J_n}{\partial x} + G - U \qquad (2.11d)$$

$$\frac{\partial^2 \psi}{\partial x^2} = -\frac{q}{\varepsilon}\left(\Gamma + p - n\right) \qquad (2.11e)$$
Not all devices operate under thermal equilibrium conditions, and so techniques are
developed which allow the analysis of semiconductors that are not at thermal
equilibrium. We call this quasi-Fermi analysis. Through this analysis, quasi-Fermi
potential levels are established which relate the current to the gradient of the carrier
density at the given quasi-Fermi level. The expressions for the quasi-Fermi
carrier densities and current densities are shown below in equations (2.12-2.15).
$$n = n_i \exp\!\left(\frac{E_{fn} - E_i}{kT}\right) = n_i \exp\!\left(\frac{q(\psi - \varphi_n)}{kT}\right) \qquad (2.12)$$

$$p = n_i \exp\!\left(\frac{E_i - E_{fp}}{kT}\right) = n_i \exp\!\left(\frac{q(\varphi_p - \psi)}{kT}\right) \qquad (2.13)$$

$$J_n = \mu_n n\frac{dE_{fn}}{dx} = -q\mu_n n\frac{d\varphi_n}{dx} \qquad (2.14)$$

$$J_p = \mu_p p\frac{dE_{fp}}{dx} = -q\mu_p p\frac{d\varphi_p}{dx} \qquad (2.15)$$
2.3. Carrier Densities
The carrier models used in this program follow the Boltzmann statistics
approximation of the Fermi-Dirac distribution model. The carrier densities are
approximated in equations (2.16) and (2.17).
$$n \approx N_c \exp\!\left(\frac{E_{fn} - E_c}{kT}\right) = n_{ie} \exp\!\left(\frac{q(\psi - \varphi_n)}{kT}\right) \qquad (2.16)$$

$$p \approx N_v \exp\!\left(\frac{E_v - E_{fp}}{kT}\right) = n_{ie} \exp\!\left(\frac{q(\varphi_p - \psi)}{kT}\right) \qquad (2.17)$$
nie is the effective intrinsic carrier concentration and has dependencies on
temperature and total doping concentration. This project neglects band gap narrowing
due to heavy doping levels but could be implemented as discussed in [3]*. The
intrinsic carrier concentration then is only modeled as a function of temperature as
shown below in equation (2.18).
$$n_{ie}(T) = \sqrt{N_c(T)\,N_v(T)}\,\exp\!\left(-\frac{E_g(T)}{2kT}\right) \qquad (2.18)$$

N_c and N_v are known as the densities of states and, along with the band-gap energy E_g,
also have dependencies on temperature as shown in equations (2.19-2.21).

$$E_g(T) = E_g(300) + \alpha\left[\frac{300^2}{300 + \beta} - \frac{T^2}{T + \beta}\right] \qquad (2.19)$$

$$N_c(T) = \left(\frac{T}{300}\right)^{3/2} N_c(300) \qquad (2.20)$$

$$N_v(T) = \left(\frac{T}{300}\right)^{3/2} N_v(300) \qquad (2.21)$$

N_c(300) and N_v(300), along with α and β, are predefined quantities given in
Section 2.1.
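As an illustration of how equations (2.18)-(2.21) are evaluated, here is a minimal MATLAB sketch using the silicon constants of Table 2.1; the function name and interface are assumptions for illustration and do not correspond to the SDS source.

function nie = intrinsic_conc(T)
% Sketch of eqs. (2.18)-(2.21): temperature-dependent band gap, densities of
% states, and effective intrinsic concentration (illustrative only).
k     = 8.617e-5;                    % Boltzmann constant [eV/K]
EG300 = 1.08;  EG_alpha = 4.73e-4;  EG_beta = 636;
NC300 = 2.8e19;  NV300 = 1.04e19;    % [cm^-3]

Eg  = EG300 + EG_alpha*(300^2/(300 + EG_beta) - T^2/(T + EG_beta)); % eq. (2.19)
Nc  = (T/300)^(3/2)*NC300;           % eq. (2.20)
Nv  = (T/300)^(3/2)*NV300;           % eq. (2.21)
nie = sqrt(Nc*Nv)*exp(-Eg/(2*k*T));  % eq. (2.18); ~1.45e10 cm^-3 at 300 K
end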
2.4. Mobility Models
The mobility of an electron or hole is a representation of the responsiveness of a free
carrier moving through the conduction or valence band of the silicon crystal lattice in
the presence of an electric field. This phenomenon is governed by several complex
interactions at the quantum mechanical level. [4]
Temperature plays a major role because at higher temperatures the lattice vibrates
more, causing the overall free carrier mobility to decrease. Dopant levels also have a
significant role since the dopant sites create local distortions in the lattice, allowing
the free carriers to scatter more.
13


The models built into SDS allow for effective doping dependency and electric field
dependency as shown in equation (2.22). There is also a factor included to account
for high-injection levels in high power devices. These models can be turned
on or off to account for any dependency that is desired. The constants and fitting
parameters are tabulated below.
$$\mu = \mu_{\min} + \frac{\mu_{\max} - \mu_{\min}}{1 + f\left(\dfrac{|N(x)|}{N_{\text{ref}}}\right)^{\alpha}}\cdot\frac{1}{1 + \left(\dfrac{E}{E_c}\right)^{\beta}}, \quad \text{where} \quad f = \left(1 + \frac{p + n}{2.04\,N_{\text{ref}}}\right)^{-1} \qquad (2.22)$$
Table 2.2 Mobility parameters
Nomenclature Program Variable Default Value Units
Electron Mobility Concentration Reference* Nrefn 8.5 x 10^16 cm^-3
Electron Mobility Alpha* alphan 0.72 -
Electron Mobility Beta* betan 2 -
Electron Critical Electric Field* Ecn 8.0 x 10^3 V/cm
Electron Maximum Mobility* mu_maxn 1330 cm^2/V-sec
Electron Minimum Mobility* mu_minn 65 cm^2/V-sec
Hole Mobility Concentration Reference* Nrefp 6.3 x 10^16 cm^-3
Hole Mobility Alpha* alphap 0.72 -
Hole Mobility Beta* betap 1 -
Hole Critical Electric Field* Ecp 1.95 x 10^4 V/cm


Table 2.2 (Cont.)
Nomenclature Program Variable Default Value Units
Hole Maximum Mobility* mu_maxp 495 cm^2/V-sec
Hole Minimum Mobility* mu_minp 47.7 cm^2/V-sec
* Constants to calculate field-dependent mobility terms, when used.
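For illustration, the MATLAB sketch below evaluates only the doping-dependent part of the mobility model with the electron constants of Table 2.2 (the field-dependence and high-injection factors of equation (2.22) are omitted here); it is a sketch under those assumptions, not the SDS mobility function.

% Sketch: doping-dependent electron mobility only (field and high-injection
% factors of eq. (2.22) omitted). Constants from Table 2.2; names illustrative.
mu_maxn = 1330;  mu_minn = 65;        % [cm^2/V-sec]
Nrefn   = 8.5e16;  alphan = 0.72;     % [cm^-3], [-]

Ntotal  = logspace(14, 19, 6);        % example total doping values [cm^-3]
mun     = mu_minn + (mu_maxn - mu_minn)./(1 + (Ntotal/Nrefn).^alphan);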
2.5. Recombination and Generation Models
The SDS program models both generation and recombination for use in determining
current flow in the specified device when solving both the continuity equations and
Poisson's equation. The most fundamental type of recombination is Shockley-Read-Hall
(SRH) recombination. The recombination rates are defined in equation
(2.23).
$$U = U_n = U_p = \frac{np - n_i^2}{\tau_p\left(n + n_0\right) + \tau_n\left(p + p_0\right)} \qquad (2.23)$$

where

$$n_0 = n_i\exp\!\left(\frac{E_t - E_i}{kT}\right) \quad \text{and} \quad p_0 = n_i\exp\!\left(\frac{E_i - E_t}{kT}\right)$$
The electron and hole lifetimes are defined by τ_n and τ_p, respectively, and are given in
equation (2.24), where σ is the capture cross-section and N_t the density of
trapping centers.

$$\tau_p = \frac{1}{\sigma_p v_{th} N_t} \quad \text{and} \quad \tau_n = \frac{1}{\sigma_n v_{th} N_t} \qquad (2.24)$$


Another method to calculate the lifetime, as used in this program, is from [3] and is
shown in equation (2.25) below.

$$\tau_n = \frac{\tau_{n0}}{1 + \dfrac{N_{\text{total}}}{N_{SRH,n}}} \quad \text{and} \quad \tau_p = \frac{\tau_{p0}}{1 + \dfrac{N_{\text{total}}}{N_{SRH,p}}} \qquad (2.25)$$
The constants are tabulated below. Another type of recombination is Auger
recombination, characterized by electron-hole pairs disappearing without the aid of
trapping centers as in SRH recombination, and is mainly used when there are large
high-injection rates. Auger recombination is defined by equation (2.27). This model
and the SRH model are added together when used in conjunction to provide the
overall recombination rate as shown in equation (2.26). To use the Auger model, the
partial derivative terms used in Chapter 3 (Numerical Analysis) will need significant
modification since Auger recombination is a function of both carrier densities.
$$U = U_{SRH} + U_{AUG} \qquad (2.26)$$

where

$$U_{AUG} = \left(C_n\,n + C_p\,p\right)\left(np - n_i^2\right) \qquad (2.27)$$
The generation model used in this program models the basic generation effect due to
impact ionization as defined in equation (2.28). It is useful in characterizing reverse
applied bias breakdown. In low electric field applied bias cases the generation rates
are typically negligible. [4] The constants and fitting parameters are tabulated below.

$$G = G_n = G_p = \frac{1}{q}\left(\alpha_n|J_n| + \alpha_p|J_p|\right) \qquad (2.28)$$

where

$$\alpha_n = \alpha_{n0}\exp\!\left[-\left(\frac{E_{n0}}{E}\right)^m\right] \quad \text{and} \quad \alpha_p = \alpha_{p0}\exp\!\left[-\left(\frac{E_{p0}}{E}\right)^m\right]$$


Table 2.3 Ionization generation parameters
Nomenclature Program Variable Default Value Units
Electron Alpha* alpha_n0 2.25 x 10^7
Hole Alpha* alpha_p0 3.08 x 10^6
Electron Energy* E_n0 3.26 x 10^6 V/cm
Hole Energy* E_p0 1.75 x 10^6 V/cm
Fitting Parameter* m 1 -
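The MATLAB sketch below illustrates how the SRH recombination rate of equations (2.23) and (2.25) and the impact-ionization generation rate of equation (2.28) can be evaluated at a single point, using the constants of Tables 2.1 and 2.3. The operating point, field, and current values are arbitrary examples, and the variable names are assumptions rather than the SDS implementation.

% Sketch: SRH recombination (eqs. 2.23 and 2.25, trap level at midgap so that
% n0 = p0 = ni) and impact-ionization generation (eq. 2.28). Illustration only.
q      = 1.602e-19;  ni = 1.45e10;
tau_n0 = 0.2e-7;   tau_p0 = 0.2e-7;       % [s]
Nsrh_n = 5e15;     Nsrh_p = 5e15;         % [cm^-3]
alpha_n0 = 2.25e7;  alpha_p0 = 3.08e6;    % Table 2.3
E_n0 = 3.26e6;      E_p0 = 1.75e6;  m = 1;

Ntotal = 1e16;  n = 1e16;  p = 1e4;       % example operating point [cm^-3]
tau_n = tau_n0/(1 + Ntotal/Nsrh_n);       % eq. (2.25)
tau_p = tau_p0/(1 + Ntotal/Nsrh_p);
U = (n*p - ni^2)/(tau_p*(n + ni) + tau_n*(p + ni));   % eq. (2.23)

E  = 1e5;  Jn = 1e-3;  Jp = 1e-3;         % example field [V/cm] and currents [A/cm^2]
an = alpha_n0*exp(-(E_n0/E)^m);           % ionization coefficients
ap = alpha_p0*exp(-(E_p0/E)^m);
G  = (an*abs(Jn) + ap*abs(Jp))/q;         % eq. (2.28); negligible at low fields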


3.
Numerical Analysis
In this chapter, we will proceed in setting up the numerical equations in order to solve
the physical device equations established in the previous chapter. A procedure for
setting up the equations in numerical form, linearizing those equations, and solving
the subsequent block tri-diagonal matrix equation will be given in a step by step
manner.
However, first a brief introduction into some of the basic numerical analysis
techniques required to solve the device equations will be given below. All of these
techniques are used in transforming the differential device equations into linearized
numerical equations suitable for a numerical analysis solution.
3.1. Taylor Series Expansion
The Taylor series is named after the English mathematician Brook Taylor (1685-
1731) and the Maclaurin series is named in honor of the Scottish mathematician Colin
Maclaurin (1698-1746) despite the fact that the Maclaurin series is just a special type
of Taylor series. But the idea of representing particular functions as sums of power
series goes back to Newton, and the general Taylor series was known to Scottish
mathematician James Gregory in 1668 and to the Swiss mathematician John
Bernoulli in the 1690s. Taylor was apparently unaware of the work of Gregory and
Bernoulli when he published his discoveries on series in 1715 in his book Methodus
incrementorum directa et inversa. Maclaurin series are named after Colin Maclaurin
because he popularized them in his calculus textbook Treatise of Fluxions published
in 1742. [5]


As we have seen in Chapter 2 (Semiconductor Device Physics), the basic equations
describing the physics of semiconductors are differential in nature and nonlinear. In
order to perform numerical analysis on these equations, they must be linearized, and in
order to do this, the Taylor series expansion is performed about a point of interest.
Additionally, for our equations of interest, the linearization must be done in each
variable of interest (n, p, and ψ), as will be seen later in Section 3.5.
The general form of the Taylor series is shown in equation (3.1) below and is referred
to as the Taylor series expansion of the function f at a (or about a, or centered at a).
[5]
$$f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \dots \qquad (3.1)$$

For the special case of a = 0, the Taylor series becomes equation (3.2) and is given the
special name Maclaurin series. In our analysis, we will be using the standard Taylor
series expansion as the expansion point is not always zero.

$$f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \dots \qquad (3.2)$$
Since we are dealing with a system of nonlinear equations in which we are interested
in multiple variables (n, p, and ψ), each equation must be linearized around each
variable, and so the partial derivative with respect to each variable is introduced.
Extending the above discussion on series expansion to multiple variables is rather
straightforward. We will also introduce the concept of the perturbation state (or delta state).
Let us re-examine equation (3.1) and transform it into the form shown in equation (3.3)
below. We assume that a is a point such that f(a) = 0, and this point is defined as the
equilibrium point of the system since f(x) = 0 when x = a. Since f(a) = 0,
the nonlinear equation can be approximated near the equilibrium point by
equation (3.4).


$$f(x) = f(a) + \frac{df}{dx}\bigg|_{x=a}(x-a) + \text{higher order terms} \qquad (3.3)$$

$$f(x) \approx \frac{df}{dx}\bigg|_{x=a}(x-a) \qquad (3.4)$$
When the expansion point, in our case a, is sufficiently close to x, the higher order
terms in equation (3.3) can be neglected as they are very close to zero. To complete
the linearization, we define the perturbation state, or delta state (δx = x − a), and using
the fact that δf(x) = f(x) we obtain the linearized model shown in equation (3.5).

$$\delta f(x) = \frac{df}{dx}\bigg|_{x=a}\delta x \qquad (3.5)$$

This can be easily applied to multi-variable systems as shown in equation (3.6).
This is the form employed in Section 3.5 to linearize the system.

$$\delta f(x, y, z) = \frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y + \frac{\partial f}{\partial z}\delta z \qquad (3.6)$$

In summary, this technique allows us to linearize a system of nonlinear equations
about a point of expansion; in this analysis the concept is applied in a more general way.
As will be seen later, we linearize the current equations and the generation and
recombination equations as they involve nonlinear terms of an exponential nature.
Since the three equations of interest are the two current continuity equations and
Poisson's equation, and since the current continuity equations are differential
expressions of the current equations, these must also be linearized. This allows the
linearized current and recombination/generation equations to be substituted in
directly. This will be more closely examined in Section 3.5.


3.2. Numerical Differentiation
It is often necessary, especially in the procedures following, to estimate the derivative
of a function and is one of the fundamental requirements in performing finite
difference method techniques. The procedure used to perform this is called the
difference method and is applicable to first or second order derivatives. The
procedure can be used to extend to higher order derivatives but is not discussed here.
Additionally, more elegant difference techniques can be used in order to lower the
inherent numerical error associated with calculating the derivative of a function but is
also not discussed here.
Numerical differentiation is performed using a series expansion and in this instance a
Taylors series expansion is also used. If we assume that fix) has as many continuous
derivatives as necessary, then from Taylors formula, equation (3.7) is found, which
is the same form as equation (3.1) but with the expansion about x, instead of a,
which was arbitrary for the above discussion. [6]
$$f(x+h) = f(x) + \frac{h}{1!}f'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f'''(x) + \dots \qquad (3.7)$$

One way to think of equation (3.7) is that we are expanding the function f(x) about x,
within small increments of h. So if we replace the arbitrary a in equation (3.1) with
x+h, equation (3.7) is a straightforward application. If we solve equation (3.7) for
f'(x), the equation can be written as

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2!}f''(x) - \frac{h^2}{3!}f'''(x) - \dots$$

and neglecting higher order terms gives equation (3.8),

$$f'(x) \approx \frac{f(x+h) - f(x)}{h}, \text{ for small values of } h. \qquad (3.8)$$


Equation (3.8) is called the forward divided difference approximation and graphically
is the representation of the slope of the line passing through the points (x, f(x)) and (x+h,
f(x+h)). There are two other main approximations for a first order derivative and they
are directly obtained from Taylor's series expansion as well.
If we replace h with -h in equation (3.7) and repeat the same process, we
obtain equation (3.9) below. Equation (3.9) is referred to as the backward divided
difference approximation.

$$f'(x) \approx \frac{f(x) - f(x-h)}{h}, \text{ for small values of } h. \qquad (3.9)$$
Finally, if we replace h with -h in equation (3.7) and then subtract the resulting
equation from the old one, we obtain

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{3!}f'''(x) - \frac{h^4}{5!}f^{(5)}(x) - \dots$$

and neglecting higher order terms gives

$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}, \text{ for small values of } h. \qquad (3.10)$$
Equation (3.10) is known as the central difference approximation. Each numerical
technique to estimate the derivative of a function leads to round-off or truncation
errors since we are approximating the value. Keeping higher order terms of the
expansion will lower this error but is not necessary for this application.
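A short MATLAB illustration (not taken from the SDS code) of equations (3.8)-(3.10), applied to f(x) = e^x at x = 1, where the exact derivative is e:

% Sketch: forward, backward, and central difference approximations
% (eqs. 3.8-3.10) compared against the exact derivative of exp(x) at x = 1.
f  = @(x) exp(x);
x0 = 1;  h = 1e-3;
d_fwd = (f(x0 + h) - f(x0))/h;          % eq. (3.8), error O(h)
d_bwd = (f(x0) - f(x0 - h))/h;          % eq. (3.9), error O(h)
d_ctr = (f(x0 + h) - f(x0 - h))/(2*h);  % eq. (3.10), error O(h^2)
err   = abs([d_fwd d_bwd d_ctr] - exp(1));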


3.3. Numerical Integration
Just as with derivatives, calculating the integrals of functions numerically is also of
interest for this application. The main use for this technique, recalling the device
continuity equations from Chapter 2, is to calculate the current density across the
device using the recombination and generation terms that have previously been
calculated. Since mesh spacing is also defined, use of a simple trapezoidal integration
rule is rather straightforward.
Let us suppose we would like to evaluate a definite integral of the form shown in
equation (3.11), which is graphically represented in Figure 3.1.

$$I = \int_a^b f(x)\,dx \qquad (3.11)$$

[Figure: the area under f(x) divided into trapezoids on the subintervals x_0 = a, x_1, x_2, ..., x_{n-1}, x_n = b.]
Figure 3.1 INTEGRATION USING THE TRAPEZOIDAL RULE


To calculate the integral (area under the curve), we begin by splitting the interval
[a, b] into n subintervals, each with uniform mesh spacing as shown in Figure 3.1.
This gives equation (3.12).

$$h = \frac{b-a}{n} \quad \text{and} \quad x_i = a + ih \quad \text{for } i = 0, 1, 2, \dots, n \qquad (3.12)$$

Now by examining each subinterval, the area under the curve can be approximated by
the sum of the areas of the trapezoids. The area of the i-th trapezoid is therefore

$$A_i = \frac{h}{2}\left[f(x_{i-1}) + f(x_i)\right]$$

with the sum or total area T_n being obtained by extending over the entire area.

$$T_n = A_1 + A_2 + \dots + A_{n-1} + A_n = \frac{h}{2}\left[f(x_0) + f(x_1)\right] + \frac{h}{2}\left[f(x_1) + f(x_2)\right] + \dots + \frac{h}{2}\left[f(x_{n-2}) + f(x_{n-1})\right] + \frac{h}{2}\left[f(x_{n-1}) + f(x_n)\right]$$

Finally, by collecting like terms we arrive at equation (3.13).

$$T_n = \frac{h}{2}\left[f(x_0) + f(x_n)\right] + h\sum_{i=1}^{n-1} f(x_i) \qquad (3.13)$$
Equation (3.13) is referred to as the composite trapezoid rule as the integral is
estimated by a composite of the areas of n trapezoids. [6]
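As a quick illustration of equations (3.12) and (3.13), the MATLAB sketch below integrates sin(x) over [0, π], where the exact answer is 2; it is an example only, not part of the SDS code.

% Sketch: composite trapezoid rule, eq. (3.13), on a uniform mesh.
f = @(x) sin(x);
a = 0;  b = pi;  n = 100;
h = (b - a)/n;  x = a + (0:n)*h;                        % eq. (3.12)
Tn = h/2*(f(x(1)) + f(x(end))) + h*sum(f(x(2:end-1)));  % eq. (3.13), ~2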


3.4. Tri-diagonal Matrices
This section will briefly discuss matrices with respect to diagonal dominate matrices
and especially tri-diagonal matrices. A tri-diagonal matrix, as shown in Figure 3.2, is
simply a matrix which has non-zero entities on the main diagonal, super-diagonal
(upper), and sub-diagonal (lower) and frequently arises in the solution of numerical
differential equations, such as in this project.
$$\begin{bmatrix} a_{11} & a_{12} & 0 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 0 \\ 0 & a_{32} & a_{33} & a_{34} & 0 \\ 0 & 0 & a_{43} & a_{44} & a_{45} \\ 0 & 0 & 0 & a_{54} & a_{55} \end{bmatrix}$$
Figure 3.2 TRI-DIAGONAL MATRIX
The main content of this section will discuss a recursive method for solving a block
tri-diagonal matrix equation as this is what is employed in the solution of this project.
The algorithm is borrowed from [4] directly but is a standard algorithm used
frequently. There are many methods to solve these types of matrices, from the very
basic naive Gaussian elimination, to more complex solutions, like LU decomposition,
Jacobi method, etc.
As we will see in Section 3.5 below, for a 1-D solution, a block tri-diagonal matrix
solution must be obtained. The recursive algorithm for this procedure is as follows.
Assume the matrix equation that needs to be solved is of the form in equation (3.14).
$$A(N)\delta y(N-1) + B(N)\delta y(N) + C(N)\delta y(N+1) = F(N), \quad 2 \le N \le L-1 \qquad (3.14)$$

for δy(N) = [δp(N), δn(N), δψ(N)]^T,


where matrices A, B, and C are 3x3 dimension and vector F is of 3x1 dimension.
Since A, B, and C are 3x3 matrices we have a block tri-diagonal matrix as opposed to
a simple tri-diagonal matrix. A direct solution would be to construct this matrix per
equation (3.14) and use any of the aforementioned solution algorithms but this is
computationally expensive. It would be much more efficient to only have to calculate
the matrix blocks and solve for the delta state (or error value) and not have to
manipulate every element in the matrix, including the majority of empty or zero
elements. The use of the recursive algorithm allows for this computational savings. To
continue, equation (3.14) is transformed into the following equation (3.15) involving
unknown vectors at two points, N and N+1, instead of three.
$$B'(N)\delta y(N) + C'(N)\delta y(N+1) = F'(N) \qquad (3.15)$$

Equation (3.15) can be solved for δy(N) explicitly using basic matrix math, which
yields equation (3.16) below.

$$\delta y(N) = B'^{-1}(N)F'(N) - B'^{-1}(N)C'(N)\delta y(N+1) \qquad (3.16)$$
If we then take equation (3.16) and reduce the division point N by one, we can
substitute this result back into equation (3.14), rearranging the terms to arrive at
equation (3.17). The following steps illustrate this procedure.

$$\delta y(N-1) = B'^{-1}(N-1)F'(N-1) - B'^{-1}(N-1)C'(N-1)\delta y(N) \quad \text{(division point N reduced by one)}$$

Substituting back into equation (3.14) yields

$$A(N)\left[B'^{-1}(N-1)F'(N-1) - B'^{-1}(N-1)C'(N-1)\delta y(N)\right] + B(N)\delta y(N) + C(N)\delta y(N+1) = F(N)$$

Rearranging terms yields

$$\left[B(N) - A(N)B'^{-1}(N-1)C'(N-1)\right]\delta y(N) + C(N)\delta y(N+1) = F(N) - A(N)B'^{-1}(N-1)F'(N-1) \qquad (3.17)$$


If we now compare equation (3.17) with equation (3.15) we obtain the recursive
relations given below in equation (3.18), but for one less division point.

$$B'(N) = B(N) - A(N)B'^{-1}(N-1)C'(N-1), \quad C'(N) = C(N), \quad F'(N) = F(N) - A(N)B'^{-1}(N-1)F'(N-1), \quad 3 \le N \le L-1 \qquad (3.18)$$

To finalize the algorithm, since we would like to solve for δy(N), the error values, we
compare equation (3.18) with equation (3.14) for N=2 specifically and obtain
equation (3.19).

$$B'(2) = B(2), \quad C'(2) = C(2), \quad F'(2) = F(2) \qquad (3.19)$$
We know that δy(N−1) for N=2 is δy(1), which is equal to zero. δy(1) corresponds to
the boundary condition error, but the boundary condition is fixed and so there is no
error associated with that term, and it therefore reduces to zero. Finally, starting with
equation (3.19) at N=2 and then using equation (3.18) for the rest of the division points
(N=3, 4, ..., L−1), we can use equation (3.16) to calculate the error values δy(N)
across the whole device (N=L−1, L−2, ..., 2), starting with δy(L)=0, since this is also a
fixed boundary condition.
At this point, the blocks A, B, C, and F are arbitrary with the dimensions defined
above. As we will see in the next section, these blocks will be defined explicitly and so
the calculation of the above algorithm becomes very straightforward. Once A, B, C, and
F are calculated across the device using the recursive relations given, the error
values are solved for directly. The definition of these blocks comes from
applying the finite difference method to the device equations
and then linearizing those equations.


3.5. P-N Junction DC Analysis
In this section, a procedure for performing DC analysis of a 1-D, P-N junction device
will be developed. The basic P-N junction diode is examined as further development
techniques for other devices follow the same basic procedures and solutions used in
this section. For a 1-D Bipolar Junction Transistor, for example, only slight
modification of this procedure is required and the resulting simulator program can be
modified to use either solution depending upon the device specified. Another example
of modification is time domain analysis, which requires extra division spacing but
ultimately requires little modification to the base program with the conceptual idea
that these further implementations can be readily incorporated at a later date.*
Equations (3.20a-e) and Figure 3.3 show the basic setup for a 1-D P-N junction diode.
The overall doping profile is as yet undefined but could be a step junction, linear
junction, and so on.

$$J_p = -qD_p\frac{\partial p}{\partial x} - q\mu_p p\frac{\partial \psi}{\partial x} \qquad (3.20a)$$

$$J_n = qD_n\frac{\partial n}{\partial x} - q\mu_n n\frac{\partial \psi}{\partial x} \qquad (3.20b)$$

$$\frac{\partial p}{\partial t} = -\frac{1}{q}\frac{\partial J_p}{\partial x} + G_p - U_p \qquad (3.20c)$$

$$\frac{\partial n}{\partial t} = \frac{1}{q}\frac{\partial J_n}{\partial x} + G_n - U_n \qquad (3.20d)$$

$$\frac{\partial^2 \psi}{\partial x^2} = -\frac{q}{\varepsilon}\left(\Gamma + p - n\right), \quad \text{where } \Gamma = N_D - N_A \qquad (3.20e)$$

For DC analysis the time derivatives in equations (3.20c and 3.20d) become zero
as the device is considered to be in steady state.
[Figure: 1-D P-N junction with the p-type contact at x=0 (division point N=1) and the n-type contact at x=w (division point N=L).]
Figure 3.3 BASIC P-N JUNCTION DIODE
Before we can proceed with numerical analysis of the device shown in Figure 3.3, a few
points of interest should be noted. Many of the models used in the solution procedure
are of the basic type; however, they can be expanded easily to incorporate future models.
For example, the intrinsic carrier concentration model is currently only temperature
dependent and the mobility model is fixed across the device. Both could be modified
to incorporate band gap narrowing for heavily doped devices or electric field
dependency, respectively.* More discussion on each model used in this section, along
with its capabilities, is given in Chapter 5 (Semiconductor Device
Simulator) and also in Chapter 2.
The recombination model employed in this solution is of the basic Shockley-Read-Hall
type. It is the most fundamental type of recombination and so
is most applicable to basic PN junction devices. Equations (3.20c) and (3.20d) imply that the
recombination and generation terms are net terms and can constitute different
models, such as SRH and Auger recombination. In this case, the net recombination
rate from both models is used. Furthermore, the recombination centers are assumed to occupy a
single energy level directly in the middle of the forbidden energy band. This implies
that the equilibrium hole concentration is equal to the equilibrium electron
concentration, which is in turn equal to the intrinsic carrier concentration, n_0 = p_0 = n_i.
For higher power devices, Auger recombination, as discussed in Chapter 2, can also
be considered but is not explicitly used in this solution procedure. This allows us to
use the basic SRH recombination equation from Chapter 2, repeated here in
equation (3.21).


$$U = U_p = U_n = \frac{pn - n_i^2}{\tau_n\left(p + n_i\right) + \tau_p\left(n + n_i\right)} \qquad (3.21)$$
If we only want to evaluate the device in low electric field applications, the generation
terms are technically not necessary, and they are not supported in some programs, such as
in [3]. However, this program includes the generation terms, and as with
recombination we assume that the net hole generation terms are equal to the net
electron generation terms. These terms are useful in supporting devices with
extremely high or reverse applied biases. We can examine the process of avalanche
breakdown by incorporating these terms; however, when low applied biases are used,
these terms are essentially zero. See more discussion in Chapter 5 on the generation
function. Repeating the equations from Chapter 2 for the one-dimensional case gives
us equation (3.22) below.

$$G = G_n = G_p = \frac{1}{q}\left(\alpha_n|J_n| + \alpha_p|J_p|\right), \quad \text{where} \quad \alpha_n = \alpha_{n0}\exp\!\left[-\left(\frac{E_{n0}}{E}\right)^m\right], \quad \alpha_p = \alpha_{p0}\exp\!\left[-\left(\frac{E_{p0}}{E}\right)^m\right] \qquad (3.22)$$
Before we proceed to develop the numerical equations from equations (3.20)
necessary to solve the device, let us discuss the boundary conditions that are also
necessary. Since we are performing a finite difference analysis, an initial value or
boundary value must be given. We will develop a two-point (Dirichlet type)
boundary-value problem. Finite difference techniques can then be employed on the
device physics equations together with Taylor's series expansion to linearize the equations.
This leaves a tri-diagonal block matrix that can be solved iteratively and finalized
with Newton's iteration principle to arrive at a converged solution.
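In outline, each iteration solves the linearized system for the error values and then corrects the previous solution, typically damped by the relaxation factor mentioned in Chapter 1. A common form of this update (the specific update used by SDS is illustrated later with Figure 3.10) is

$$\psi^{k+1}(N) = \psi^{k}(N) + \omega\,\delta\psi(N), \qquad p^{k+1}(N) = p^{k}(N) + \omega\,\delta p(N), \qquad n^{k+1}(N) = n^{k}(N) + \omega\,\delta n(N), \qquad 0 < \omega \le 1,$$

iterated until the largest correction falls below the convergence tolerance.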
We need to define the appropriate boundary conditions. With respect to Figure 3.3, p
and n regions are located at x=0 and x=w, respectively, where w is the device length.
For our analysis, we assume that the n-region is fixed at zero potential (that is, the
external contact is grounded) and the p-region is at the applied voltage. The thermal
equilibrium analysis that follows also starts with the p-region fixed at ground.


The boundary conditions need to satisfy a zero space charge region at each contact.
Usually the semiconductor forming an ohmic contact with the metal has a sufficiently
high doping concentration. If there were a nonzero space charge region in such a
highly doped semiconductor then there would be a correspondingly high electric field,
which would lead to breakdown conditions at the metal-semiconductor contact;
therefore the zero space charge condition must be satisfied. Future work can include
the Schottky boundary conditions necessary to model Schottky devices.*
$$\Gamma(0) + p(0) - n(0) = 0, \qquad \Gamma(w) + p(w) - n(w) = 0 \qquad (3.23a)$$

$$p(0)\,n(0) = n_i^2, \qquad p(w)\,n(w) = n_i^2 \qquad (3.23b)$$

$$\psi(0) = V - \frac{1}{\theta}\ln\!\left(\frac{p(0)}{n_i}\right), \qquad \psi(w) = \frac{1}{\theta}\ln\!\left(\frac{n(w)}{n_i}\right), \qquad \text{where } \theta = \frac{q}{kT} \qquad (3.23c)$$
Equation (3.23a) satisfies the zero space charge condition specified above, while
equation (3.23b) satisfies the thermal equilibrium condition necessary at each contact.
Equations (3.23a and 3.23b) are used to establish the two-point boundary conditions
that are self-consistent among the two equations and the result is shown in equation
(3.24).
$$p(0) = -\frac{\Gamma(0)}{2}\left\{1 + \left[1 + \left(\frac{2n_i}{\Gamma(0)}\right)^2\right]^{1/2}\right\}, \qquad n(0) = \frac{n_i^2}{p(0)}$$

$$n(w) = \frac{\Gamma(w)}{2}\left\{1 + \left[1 + \left(\frac{2n_i}{\Gamma(w)}\right)^2\right]^{1/2}\right\}, \qquad p(w) = \frac{n_i^2}{n(w)} \qquad (3.24)$$
Conditions for the potential boundary conditions given in equation (3.23c) are
deduced by the use of the quasi-Fermi potentials briefly discussed in Chapter 2. These
are re-written here in equation (3.25) for reference.

$$\psi = \varphi_p - \frac{1}{\theta}\ln\!\left(\frac{p}{n_i}\right) = \varphi_n + \frac{1}{\theta}\ln\!\left(\frac{n}{n_i}\right), \qquad \text{where } \theta = \frac{q}{kT} \qquad (3.25)$$


If the definition of the zero potential point on the n-region and applied bias on the p-region
is extended to equation (3.25), it follows that φ_p(w) = φ_n(w) = 0 and
φ_p(0) = φ_n(0) = V. Substituting these into equation (3.25) yields equation (3.23c)
directly. With equations (3.23c) and (3.24) we now have a two-point
boundary condition with respect to the three unknown variables, n, p, and ψ, for use in
the solution of the device.
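The MATLAB sketch below evaluates these boundary values from equations (3.23c) and (3.24) for an example step junction; the doping values, the applied bias, and the variable names are assumptions for illustration, not the SDS code.

% Sketch: two-point boundary values, eqs. (3.23c) and (3.24), for a step
% junction with acceptor doping Na on the p-side and donor doping Nd on the
% n-side. Values and names are illustrative only.
ni = 1.45e10;  Vt = 0.0259;  theta = 1/Vt;
Na = 1e16;  Nd = 1e16;  V = 0.0;      % doping [cm^-3], applied bias [V]
G0 = -Na;   Gw = Nd;                  % effective doping at x=0 and x=w

p0 = -G0/2*(1 + sqrt(1 + (2*ni/G0)^2));   n0 = ni^2/p0;   % eq. (3.24)
nw =  Gw/2*(1 + sqrt(1 + (2*ni/Gw)^2));   pw = ni^2/nw;
psi0 = V - log(p0/ni)/theta;                              % eq. (3.23c)
psiw =     log(nw/ni)/theta;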
We can now proceed with the numerical analysis of the device equations, but in order
to do that, we need to transform the differential equations into difference equations
with the variables defined at a finite number of division points (mesh points) as
shown in Figure 3.4. Main division points are defined by N, with N=1 and N=L
corresponding to the device endpoints x=0 and x=w respectively. Auxiliary points are
defined such that they are exactly halfway between main division points.
This solution uses uniform mesh spacing, which is sufficient for most classes of devices;
however, when the extent of the depletion region becomes much smaller than the
neutral bulk regions it is sometimes more beneficial to have coarse mesh spacing in
the neutral regions and finer mesh spacing in the depletion region. The SDS program
can be easily modified to incorporate non-uniform mesh spacing*.
[Figure: division points for DC analysis, showing main points N−1, N, N+1, ... from x=0 (N=1) to x=w (N=L), auxiliary points M−1, M, M+1 located midway between main points, and the associated spacings h(M−1), h(M), h(M+1) and h(N), h(N+1).]
Figure 3.4 DIVISION POINTS FOR DC ANALYSIS
In the following analysis, variables n, p, and ψ are estimated at the main points,
while the derivatives of these quantities are estimated at the auxiliary points. The device
equations in (3.20a-e), which need to be transformed into difference equations, are
repeated here in equations (3.26a-e), neglecting the time derivative. [4]
$$J_p = -qD_p\frac{\partial p}{\partial x} - q\mu_p p\frac{\partial \psi}{\partial x} \qquad (3.26a)$$

$$J_n = qD_n\frac{\partial n}{\partial x} - q\mu_n n\frac{\partial \psi}{\partial x} \qquad (3.26b)$$

$$0 = -\frac{1}{q}\frac{\partial J_p}{\partial x} + G_p - U_p \qquad (3.26c)$$

$$0 = \frac{1}{q}\frac{\partial J_n}{\partial x} + G_n - U_n \qquad (3.26d)$$

$$\frac{\partial^2 \psi}{\partial x^2} = -\frac{q}{\varepsilon}\left(\Gamma + p - n\right), \quad \text{where } \Gamma = N_D - N_A \qquad (3.26e)$$
The equations are now transformed into the difference equations. A brief discussion
on equations (3.27a and 3.27b) will follow shortly (including the coefficients for the
lambda terms and how they are obtained), but equations (3.27c-e) are obtained directly
by using the difference methods employing the techniques discussed in Section 3.2
(Numerical Differentiation).
$$J_p(M) = \frac{q}{h(M)}\left[\lambda_{p1}(M)\,p(N) + \lambda_{p2}(M)\,p(N+1)\right] \qquad (3.27a)$$

$$J_n(M) = \frac{q}{h(M)}\left[\lambda_{n1}(M)\,n(N) + \lambda_{n2}(M)\,n(N+1)\right] \qquad (3.27b)$$

where

$$\lambda_{p1}(M) = \mu_p(M)\,\frac{\psi(N) - \psi(N+1)}{1 - e^{-\beta(M)}}, \qquad \lambda_{n1}(M) = \mu_n(M)\,\frac{\psi(N) - \psi(N+1)}{1 - e^{\beta(M)}},$$

$$\lambda_{p2}(M) = \mu_p(M)\,\frac{\psi(N) - \psi(N+1)}{1 - e^{\beta(M)}}, \qquad \lambda_{n2}(M) = \mu_n(M)\,\frac{\psi(N) - \psi(N+1)}{1 - e^{-\beta(M)}},$$

$$\beta(M) = \theta E(M)h(M) = \theta\left[\psi(N) - \psi(N+1)\right]$$

$$\frac{1}{q}\,\frac{J_p(M) - J_p(M-1)}{h'(N)} - G(N) + U(N) = 0 \qquad (3.27c)$$

$$\frac{1}{q}\,\frac{J_n(M) - J_n(M-1)}{h'(N)} + G(N) - U(N) = 0 \qquad (3.27d)$$

$$\gamma_1(N)\psi(N-1) + \gamma_2(N)\psi(N) + \gamma_3(N)\psi(N+1) = -\frac{q}{\varepsilon}\left[\Gamma(N) + p(N) - n(N)\right] \qquad (3.27e)$$
As you can see in the continuity equations (3.26c and 3.26d), the derivative of the
current density with respect to the spatial variable x has been transformed into the
difference approximations in equations (3.27c and 3.27d). Poisson's equation in
(3.26e) was likewise transformed directly to equation (3.27e) by using the difference
method. This time, the second order central difference method was used since
Poisson's equation is a second order differential equation. The transformation of
equation (3.26e) into difference form is shown below in equation (3.28).
$$\frac{1}{h'(N)}\left[\frac{\psi(N+1) - \psi(N)}{h(M)} - \frac{\psi(N) - \psi(N-1)}{h(M-1)}\right] = -\frac{q}{\varepsilon}\left[\Gamma(N) + p(N) - n(N)\right] \qquad (3.28)$$
Throughout the solution process, the mesh spacing is continually defined at each
division point, whether it is a main division point or auxiliary point. Since we are
using uniform mesh spacing and the auxiliary spacing is equal to the main spacing,
equation (3.28) could be further transformed into the standard looking central
difference equation for a second order derivative. Mesh spacing is left as a function of
the division point so that future expansion of non-uniform mesh spacing is rather
straightforward*. Lastly, if we examine equation (3.28) with equation (3.27e) the
gamma coefficients are obtained directly.
$$\gamma_1(N) = \frac{1}{h(M-1)\,h'(N)}, \qquad \gamma_2(N) = -\left[\frac{1}{h(M-1)} + \frac{1}{h(M)}\right]\frac{1}{h'(N)}, \qquad \gamma_3(N) = \frac{1}{h(M)\,h'(N)}$$
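To illustrate how these coefficients produce a tri-diagonal system, the MATLAB sketch below assembles and solves the discretized Poisson equation (3.27e) on a uniform mesh with the carrier densities held fixed; the mesh size, doping profile, and placeholder carrier arrays are assumptions, and this is not the coupled SDS solver.

% Sketch: eq. (3.27e) assembled as a tridiagonal system for psi on a uniform
% mesh (h'(N) = h(M) = h) with p and n held fixed. Illustration only.
q = 1.602e-19;  eps_s = 1.036e-12;          % charge [C], silicon permittivity [F/cm]
L = 101;  w = 2e-4;  h = w/(L-1);           % mesh points and spacing [cm]
Gamma = [-1e16*ones(1,50), 1e16*ones(1,51)];% step-junction effective doping [cm^-3]
p = zeros(1,L);  n = zeros(1,L);            % placeholder carrier arrays [cm^-3]

g1 = 1/h^2;  g3 = 1/h^2;  g2 = -(g1 + g3);  % gamma coefficients for a uniform mesh
M = spdiags(repmat([g1 g2 g3], L, 1), -1:1, L, L);
rhs = -q/eps_s*(Gamma + p - n).';
M(1,:) = 0;  M(1,1) = 1;  rhs(1) = 0;       % Dirichlet boundaries (zero potentials
M(L,:) = 0;  M(L,L) = 1;  rhs(L) = 0;       % used here purely for illustration)
psi = M\rhs;                                % potential on the mesh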
At this point in the solution process we have all of the equations transformed into
numerical difference equations. The current density equations can be plugged into the
current continuity equations and combined with Poisson's equation; the three
equations can then be solved simultaneously. However, the current density equations,
as well as the generation and recombination terms, are not linear, and so
Newton's iteration principle cannot yet be applied. The next step in the
solution process is to linearize these equations about the fundamental variables of
interest, n, p, and ψ.


However, before we proceed to linearize the equations let us return to equations
(3.27a and 3.27b), where if we compare these to equations (3.26a and 3.26b) it is
obvious they were not directly obtained from the difference technique. Before
continuing, please proceed to Appendix A, where a complete derivation and
explanation of equations (3.27a and 3.27b) is given.
Because the continuity equations contain expressions for the current densities, which
themselves contain derivatives of the fundamental variables n, p, and ψ, the
derivatives of the current densities are defined at the main points as well. Therefore, it is
necessary to define the spacing between the auxiliary points as shown in equation (3.29).
Equation (3.30) shows the hole current density equation linearized through the use of
Taylor's expansion theorem, since it involves nonlinear functions. Since each equation
is a function of one or more fundamental variables, the use of partial derivatives is
applicable.

$$h'(N) = \frac{1}{2}\left[h(M-1) + h(M)\right] \qquad (3.29)$$
$$J_p(M) = J_{p0}(M) + \delta J_p(M) \qquad (3.30)$$

Solving for δJ_p(M) gives the partial-derivative expansion shown below, together with
the corresponding expansions for the other nonlinear terms.


The hole current density is used as an example; the linearization process for the
electron current density and the recombination and generation equations is exactly the
same. Once the continuity equations are linearized, since they contain
derivatives of the current densities, the delta states can be substituted in, as shown below.
$$\delta J_p(M) \approx \frac{\partial J_p(M)}{\partial p(N)}\delta p(N) + \frac{\partial J_p(M)}{\partial p(N+1)}\delta p(N+1) + \frac{\partial J_p(M)}{\partial \psi(N)}\delta\psi(N) + \frac{\partial J_p(M)}{\partial \psi(N+1)}\delta\psi(N+1)$$

$$\delta J_p(M-1) \approx \frac{\partial J_p(M-1)}{\partial p(N-1)}\delta p(N-1) + \frac{\partial J_p(M-1)}{\partial p(N)}\delta p(N) + \frac{\partial J_p(M-1)}{\partial \psi(N-1)}\delta\psi(N-1) + \frac{\partial J_p(M-1)}{\partial \psi(N)}\delta\psi(N)$$

$$\delta J_n(M) \approx \frac{\partial J_n(M)}{\partial n(N)}\delta n(N) + \frac{\partial J_n(M)}{\partial n(N+1)}\delta n(N+1) + \frac{\partial J_n(M)}{\partial \psi(N)}\delta\psi(N) + \frac{\partial J_n(M)}{\partial \psi(N+1)}\delta\psi(N+1)$$

$$\delta J_n(M-1) \approx \frac{\partial J_n(M-1)}{\partial n(N-1)}\delta n(N-1) + \frac{\partial J_n(M-1)}{\partial n(N)}\delta n(N) + \frac{\partial J_n(M-1)}{\partial \psi(N-1)}\delta\psi(N-1) + \frac{\partial J_n(M-1)}{\partial \psi(N)}\delta\psi(N)$$

$$\delta G(N) \approx \sum_{k=N-1}^{N+1}\left[\frac{\partial G(N)}{\partial p(k)}\delta p(k) + \frac{\partial G(N)}{\partial n(k)}\delta n(k) + \frac{\partial G(N)}{\partial \psi(k)}\delta\psi(k)\right]$$

$$\delta U(N) \approx \frac{\partial U(N)}{\partial p(N)}\delta p(N) + \frac{\partial U(N)}{\partial n(N)}\delta n(N)$$


Replacement of J_p(M) by J_{p0}(M) + δJ_p(M), of J_p(M−1) by J_{p0}(M−1) + δJ_p(M−1),
and so on in equations (3.27c and 3.27d), and rearranging the terms, yields
equation (3.31).
$$\frac{1}{q}\,\frac{\delta J_p(M) - \delta J_p(M-1)}{h'(N)} - \delta G(N) + \delta U(N) = -\frac{1}{q}\,\frac{J_{p0}(M) - J_{p0}(M-1)}{h'(N)} + G_0(N) - U_0(N)$$

$$\frac{1}{q}\,\frac{\delta J_n(M) - \delta J_n(M-1)}{h'(N)} + \delta G(N) - \delta U(N) = -\frac{1}{q}\,\frac{J_{n0}(M) - J_{n0}(M-1)}{h'(N)} - G_0(N) + U_0(N) \qquad (3.31)$$
We now have the continuity equations (3.31) linearized in terms of incremental
variables. The incremental variables (delta state) are given above and these can now
be substituted into equation (3.31) which yields the two continuity equations
involving only fundamental variable increments as unknowns. Due to the amount of
physical space required to show the substitution and rearrangement process, the two
equations are given directly in equation (3.32) and (3.33). It is a simple algebraic
process but care must be taken that all signs are kept in their proper order and place.
Continuity equation for holes:

$$-\left[\frac{1}{qh'(N)}\frac{\partial J_p(M-1)}{\partial p(N-1)} + \frac{\partial G(N)}{\partial p(N-1)}\right]\delta p(N-1) + \left[\frac{1}{qh'(N)}\left(\frac{\partial J_p(M)}{\partial p(N)} - \frac{\partial J_p(M-1)}{\partial p(N)}\right) - \frac{\partial G(N)}{\partial p(N)} + \frac{\partial U(N)}{\partial p(N)}\right]\delta p(N)$$

$$+\left[\frac{1}{qh'(N)}\frac{\partial J_p(M)}{\partial p(N+1)} - \frac{\partial G(N)}{\partial p(N+1)}\right]\delta p(N+1) - \frac{\partial G(N)}{\partial n(N-1)}\delta n(N-1) + \left[\frac{\partial U(N)}{\partial n(N)} - \frac{\partial G(N)}{\partial n(N)}\right]\delta n(N) - \frac{\partial G(N)}{\partial n(N+1)}\delta n(N+1)$$

$$-\left[\frac{1}{qh'(N)}\frac{\partial J_p(M-1)}{\partial \psi(N-1)} + \frac{\partial G(N)}{\partial \psi(N-1)}\right]\delta\psi(N-1) + \left[\frac{1}{qh'(N)}\left(\frac{\partial J_p(M)}{\partial \psi(N)} - \frac{\partial J_p(M-1)}{\partial \psi(N)}\right) - \frac{\partial G(N)}{\partial \psi(N)}\right]\delta\psi(N)$$

$$+\left[\frac{1}{qh'(N)}\frac{\partial J_p(M)}{\partial \psi(N+1)} - \frac{\partial G(N)}{\partial \psi(N+1)}\right]\delta\psi(N+1) = -\frac{1}{qh'(N)}\left[J_{p0}(M) - J_{p0}(M-1)\right] + G_0(N) - U_0(N) \qquad (3.32)$$
Continuity equation for electrons:

$$\frac{\partial G(N)}{\partial p(N-1)}\delta p(N-1) + \left[\frac{\partial G(N)}{\partial p(N)} - \frac{\partial U(N)}{\partial p(N)}\right]\delta p(N) + \frac{\partial G(N)}{\partial p(N+1)}\delta p(N+1)$$

$$+\left[-\frac{1}{qh'(N)}\frac{\partial J_n(M-1)}{\partial n(N-1)} + \frac{\partial G(N)}{\partial n(N-1)}\right]\delta n(N-1) + \left[\frac{1}{qh'(N)}\left(\frac{\partial J_n(M)}{\partial n(N)} - \frac{\partial J_n(M-1)}{\partial n(N)}\right) + \frac{\partial G(N)}{\partial n(N)} - \frac{\partial U(N)}{\partial n(N)}\right]\delta n(N)$$

$$+\left[\frac{1}{qh'(N)}\frac{\partial J_n(M)}{\partial n(N+1)} + \frac{\partial G(N)}{\partial n(N+1)}\right]\delta n(N+1) + \left[-\frac{1}{qh'(N)}\frac{\partial J_n(M-1)}{\partial \psi(N-1)} + \frac{\partial G(N)}{\partial \psi(N-1)}\right]\delta\psi(N-1)$$

$$+\left[\frac{1}{qh'(N)}\left(\frac{\partial J_n(M)}{\partial \psi(N)} - \frac{\partial J_n(M-1)}{\partial \psi(N)}\right) + \frac{\partial G(N)}{\partial \psi(N)}\right]\delta\psi(N) + \left[\frac{1}{qh'(N)}\frac{\partial J_n(M)}{\partial \psi(N+1)} + \frac{\partial G(N)}{\partial \psi(N+1)}\right]\delta\psi(N+1)$$

$$= -\frac{1}{qh'(N)}\left[J_{n0}(M) - J_{n0}(M-1)\right] - G_0(N) + U_0(N) \qquad (3.33)$$


Poisson's equation does not involve nonlinear functions and does not technically need
to be linearized in incremental variables, but for ease of calculation and to match the
continuity equations we have done so in equation (3.34). This allows us to set up a
standard block tri-diagonal matrix. The linearization process is not repeated but is the
same as for the continuity equations. For example, p(N) is replaced by p_0(N) + δp(N)
and so on, and then the terms are rearranged and grouped.

$$\frac{q}{\varepsilon}\delta p(N) - \frac{q}{\varepsilon}\delta n(N) + \gamma_1(N)\delta\psi(N-1) + \gamma_2(N)\delta\psi(N) + \gamma_3(N)\delta\psi(N+1)$$
$$= -\frac{q}{\varepsilon}\left[\Gamma(N) + p_0(N) - n_0(N)\right] - \gamma_1(N)\psi_0(N-1) - \gamma_2(N)\psi_0(N) - \gamma_3(N)\psi_0(N+1) \qquad (3.34)$$
These three equations (3.32-3.34) are meant to be solved simultaneously, and in order
to do so we can transform them into a matrix-vector representation that
allows us to employ the recursive algorithm discussed earlier. It will also allow us to
develop the coefficients of each matrix. Equation (3.35) is the matrix-vector
representation of the two continuity equations and Poisson's equation combined.

$$A(N)\delta y(N-1) + B(N)\delta y(N) + C(N)\delta y(N+1) = F(N), \quad \text{for } 2 \le N \le L-1 \qquad (3.35)$$

To evaluate the matrix coefficients A, B, C, and F, let us examine equation (3.35) in
matrix form for three division points, N−1, N, and N+1, as shown in Figure 3.5, and
then expand each equation in the matrix to form equations (3.36a-c).
39


8p(N-1)
8n(N-l)
8y/(N -1)
~au(N) al2(N) axi(N) bu(N) bn(N) bxi(N) cxx(N) cx2(N) cX3(N) 8p(N) 7i,W
a2i(N) a22(N) a22(N) b2x(N) b22(N) b2i(N) c2X(N) c22(N) c2i(N) 8n(N) = fnW
aix(N) ai2(N) a(N) bM(N) bn(N) b}i(N) c3l(V) cn(N) cn(N) 8y/(N)
8p(N + \)
8n(N + \)
8y/(N +1)
Figure 3.5 MATRIX FORM OF THE MATRIX-VECTOR EQUATION
an (N)5p{N -1) + an (N)Sn(N -1) + al3(N -1 )8yr(N -1)
+buSp(N) + bn(N)Sn(N) + bu(N)Si//(N) (3.36a)
+cuSp(N + l) + cn8n(N + l) + cl38tp(N + \) = fu(N)
a2x (N)Sp(N -1) + a22 (N)Sn(N -1) + a23 (N -1 )Sy/{N -1)
+b2XSp(N) + b22(N)Sn(N) + b23(N)Si//(N) (3.36b)
+c21 Sp( N +1) + c22Sn{ N +1) + c238y/{ N +1) = /l2 (N)
a3x (N)Sp(N -1) + an(N)Sn(N -1) + a33(N -1 )Sy/{N -1)
+b3xSp(N) + b32(N)Sn(N) + b(N)Sy/(N) (3.36c)
+c3XSp( N +1) + cn8n{ N +1) + ci3Sy/(N +1) = fx3 (N)
Now if we compare equations (3.36a-c) with equations (3.32-3.34) respectively, the
matrix coefficients for A, B, C, and F are obtained directly. The matrix coefficients
are tabulated below for easy reference in Figures 3.6-3.9
40


1 dJp(M -1) dG(N)
qh (N) dp(N- -1) WN-\)'
1 dJap(M -1) dG(N)
qh (V) dy/(N- -1) dy/{N-1)
1 -1) , dG(N)
qh (N) dn(N - -1) dn(N -1)
1 -1) , 5G(A)
qh(N) dy/(N- -1) a^(v-i)
0G(AQ
Bn(N -1)
dG(N)
Bp(N-1)
0, ~ 0, ^33 Y\ (AO
h(M -\)h(N)
Figure 3.6 COEFFICIENT A OF MATRIX-VECTOR EQUATION
bu =
qh(N)
dJp(M) dJp(M-\)
dp(N) dp(N)
bn
bn =
dG"{N) dU(N)
dn(N) dti(N)
1 \dJp(M) dJp(M -1)
dl//(N) dt//(N)
qh (N)
3G(AQ dU(N)
dp(N) dp(N)
dG(N)
dy/(N)
9G(AQ dU(N)
dp(N) dp(N)
qh (AO [ dl//(N) dl//(N)
bM=-i b?,2=bn = r2(N)
£ £
dG(N) dn(N) dG(N) dlf/{N) dU(N) dn(N)
+ -T 1
h(M -1) h(N)
Figure 3.7 COEFFICIENT B OF MATRIX-VECTOR EQUATION
41


CI1
CI3 =
c22
1 dJp(M) dG(N) dG(N)
qh (V) + dp(N +1) C'2 dn(N +1)
1 dG(N) dG(A0
qh (AO dp(N + \) dy/{N +1) Cl' ~ dp(N +1)
1 dJn(M) dG(N)
qh '(AO dn(N +1) dn(N -1-1)
1 a/(M) , dG(N)
qh '(AO a^(v + o a^(A/ + u
c21 o, o, Cj2 y^hi)
1
h(M)h (N)
Figure 3.8 COEFFICIENT C OF MATRIX-VECTOR EQUATION
/ =-{J0p(M)-Jp(M-\)] + Gq(N)-U0(N)
qh (N) L J
fi2=-^-\jn(M)-Jn(M-\)]-G()(N) + U0(N)
qh (N) L J
f]J=-^[T(N)+ p0(N)-n(N)]...
- y (N)y(N -1) y2{N)y/\N) y3(N)ys(N +1)
Figure 3.9 COEFFICIENT F OF MATRIX-VECTOR EQUATION
Now that we have the coefficients for the matrix-vector equation determined they can
be calculated across the whole device in order to solve equation (3.35). However, as
we have seen the recursive algorithm is more efficient and requires less computations
then solving equation (3.35) directly. Refer to section 3.4 for the recursive algorithm
and appropriate relationships.
Before the coefficients for equation (3.35) can be evaluated the partial derivatives
embedded in the coefficients must be determined first. The partial derivatives are
determined directly from the current density, recombination, and generation equations
42


(3.27), (3.21), and (3.22) respectively. The derivation techniques for several of the
partial derivatives are given in Appendix B, you may refer to that now for more
information. The required partial derivatives are given in Table 3.1 below for
reference. Their calculations are given below in summarized form. No equations
numbers are associated with the calculations; they must be referenced to the table
numbers below. It may seem excessive but the number of partial derivatives is not as
extensive as the list appears. Most of the derivatives can be coded into one function as
we will see in Chapter 5. The first table below simply lists all the partial derivatives
required to support the matrix coefficients A, B, C, and F.
Table 3.1 Partial derivatives required for matrix coefficients
1) (lfl) dJp(M-1) (1 b) dJp(M) (1 c) cUp(M) (\d)
dp(N-1) MN) dp(N) dp(N +1)
dJp(M-1) (2a) cUp(M- 1) (2b) dJp(M) (2c) dJp(M) (2d)
dyr(N-l) dlf/(N) dy/(N) dl//(N + 1)
1) Bn(N-1) (3 a) dJn(M-1) dn( N) (3 b) Mn(M) dn(N) (3c) dJ(M) dn(N + 1) (3 d)
K(M-X) (4 a) 37(M-1) (4b) cUn(M) (4c) dJ(M) (4 d)
dy/(N -1) dl//(N) dys(N) di//(N + 1)
dU(N) dn(N) (5b) dG(N) dp(N- 1) (6a) dG(N) MK) (6b) dG(N) dp(N + 1) (6c)
dU(N) M N) (5a) dG(N) dn(N- 1) (la) dG(N) dn(N) (lb) dG(N) dn(N + 1) (7c)
dG(N) dy/(N -1) m dG(N) dl//(N) m dG(N) dt//(N + 1) (8c)
Referring to the current density equation from Appendix A at division point M-1 and
M repeated below in equation (3.37), the partial derivatives for (la-d) are obtained
directly.
43


(3.37)
M" - = " 'W* +-l)p(Af)]
1)
0/7(A^ 1)
dJp(M-1)
a^(^v)
topm =
dp(N)
dJp (M) _
3p(W + l) ~
7 77^pi (M I)
/?(M -1)
MM -1)
Xp2(M-l),
^i ,(M),
h(M) p'
h(M)
Xp2(M),
(1)
(lc)
(W)
The partial derivative of the current density with respect to the potential, vp is more
complicated and requires use of the product rule and chain rule for derivatives since
the lambda terms are functions of the potential themselves. The results are tabulated
below. The lambda terms calculated as a part of the current density equations are
repeated here for reference in equation (3.38). The partial derivatives for (2a-b) are
tabulated below.
/I, (M) = jUp(M)
+ \)


y/{N)-y/{N +1)
(M-\) = jUp(M-\)
(M-\) = jup(M-\)
1 -em)
W(N \)-y/(N)
y/(N -\)-if/(N)
l e
0(M)
fi(M) = 9\y/{N)-y/{N +1)]
P(M -1) = e[y/{N -1) ^(AO]
where
(3.38)
44


dyKN-1) h(M-\){ dyKN-1) dyKN-1)
(2a)
where,
d\\(M ~ 1) //P(M-1)
tep2(M-\)= Vp(M-1)
a^(yv-i) _(i-Vw-'>)
P(M-
0(M-
(l-e*"-")
dJp(M-l)_ dJp(M-l)
dyKN) dyKN-\)
where,
9^,(M-1)= 8^,(M-1)
a^(yv) ~ a^(yv-i)
a^cM-i)^ a^2(M-i)
a^(yv) a^(yv-i)
a/;(M)
dyKN)
KM)
P(N)
dApi(M)
dyKN)
+p(N+l)
ayM)
a^Ao
(2c)
where,
Mplm_ MP(M) j P{M)e~PiM)
a^(^v) _ (\-e'P(M)) (l-e~PiM))
a^2(M)_ JUP(M) , /KM***
dy/(N)
45


afp(M) afp(M)
dyKN+l) ~ di/KN)
where,
dApl(M) a/t,(M)
a^(yV + l)_ dy/{N)
dXpl{M) = dJp(M)
dy/(N + l) dy/(N)
The procedure for evaluating the electron related partials derivatives in 3a-d and 4a-d
are exactly the same. The partial derivatives for these terms are tabulated below.
1)
dn(N-l)
dJn(M-1)
dn(N)
=
dn(N)
h(M -1)
(M-l),
q
h(M -1)

q
h{M)

ayn(M)
dn(N +1)
q
h(M)
A2(M),
(3a)
(3 b)
(3c)
(3a)
dyKN-1) h{M-1) l a^yv-l) di/XN-1) J
where,
dAn](M-\) = jun(M-\) dXp2(M-1)
a^(yv-i) mp(m-1) a^-i)
aAn2(M-i)^//(M-i) a/ip,(M-i)
a^(A^-l) jup(M-\)' dys(N-l)
dJn(M-1)
dy(N)
q
h(M -1)
n(N -1)
ain,(M-1)
dy/(N)
+ n(N)
dXn2(M-\)
m
46


where,
dAn[{M-\) = dAni(M -1) jUn(M -1) dAp2{M-1)
dy/(N) dy/{N-1) jUp(M-\) di//(N-\)
dAn2(M-\) = dAn2(M-1)^ jUn(M-1) a^CM-l)
diy(N) dy/(N-\) Mn(M -1) a^(/V-l)
tym= q
dyKN) h(M)
(A0
a^AO
+n(A+l)
a^2(M)
a^Ao
(4c)
where,
dAjM) ^yn(M)dAp2(M)
diy(N) jUp(M) dy/{N)
dAn2(M) ^jUn(M)dApl(M)
di//(N) jUp(M) a^(AO
ay(M)
a^(/v + i)
4
h(M)
n(N)
a/in,(M)
a^ + D
+ n(A + l)
dAn2(M)
dy/{N +1)
(4J)
where,
a/l,(M) ^ BAni(M)= yn(M)dApl{M)
dy/{N + \) dys(N) UP{M) dy/{N)
dAn2(M) dA2(M)_ jun(M) dApi(M)
dy/{N +1) dy/(N) jUp(M) di//(N)
The amount of partial derivatives for the current density equations may seem
excessive, but as will be explained in Chapter 5, only one function for the lambda
coefficients and partial derivative lambda expressions are required to calculate all of
the above which allows the calculation for each of the matrix coefficients to be more
efficient. The partial derivatives for the recombination and generation terms are also
given below. Their derivations are similar and briefly discussed in Appendix B as
well.
47


Recombination terms:
dU(N) _ n(N)-fnU(N)
dp(N) Tn(p(N) + ni) + Tp(n(N) + ni)
(5a)
p(N) + rpU(N)
dU(N) ____
dn(N) Tn(p(N) + ni) + T (n(N) + ni)
(5b)
Generation terms:
x,M.X).a,N) (6a)
dp(N-\) h(M -1) 7,1 p
dG(N)
MH)
= +
m,.
mu

ar(N) (6b)
dG(N) m.
dp(N + \) h(M)
= ^77-Ap2(M)-an(N) (6c)
where,
h(M) h(M -1)
mn = ;---, mh =----;----
fl 2 h(N) 2h(N)
ap(N) = aPo exP
and E(N) = maE(M -1) + mhE(M)
The choice of the sign for the partial derivative terms depends upon the sign of the
current density at the given mesh point, that is J (N) >0 or Jp (N) < 0
determines + or respectively when,
Jp(N) maJp(M -1) + mbJp(M)
48


m
An[{M-l)-an{N) {la)
ZG{N) _ .
dn{N-\) h(M -1)
dG(N)
dn(N)
m
h(M -1)
K2{m-\) +
mu
h{M)

a{N)
{lb)
dG{N)
dn{N +1)
m.
-+- b
h{M)
Xnl{M)-an{N)
(7c)
h{M) h{M -1)
------- ffl ~ ---------
2ti{N) h 2h{N)
an(N) = anO eXP
En0
|£(/V)|
and E{N) = maE{M -l) + mbE{M)
The choice of the sign for the partial derivative terms again depends upon the sign of
the current density at the given mesh point, that is Jn {N) >0 or Jn {N) < 0
determines + or respectively when,
Jn{N) = maJn{M -\) + mhJn{M)
The choice of the sign or + for the remaining partial derivatives below depends
upon whether Jn{N),Jp{N),E{N)>0 or Jn{N),J p{N), E{N) < 0 respectively.
For example, G^n is positive is £(A0>0 but negative if £(iV) < 0, however G 3I is
negative is E{N) > 0 but positive if E{N) < 0. The term that governs the sign
follows each GÂ¥ij below.
49


dG(N)
dy/(N -1)
~ GÂ¥\\ + GÂ¥\2 + GÂ¥U + GÂ¥\A
(8a)
<*G(N)
- (8fc)
3G(7V) r r r r
3^ + 1)Gi+G+C'm+G'')4
(8c)
where,
_ _L fy ( AT\ P
Gy/\\ ~ ~ n aP\N) 2

h(M-1) E(Nf q
ma(N) dJ (M -1)
G,=-2-2----^-------, Jp{N)
E(N)
>12
q a^(N-l)
m.
vn 0

G'":' +A(M-l)a" JmIN)
E(N)
Â¥14
q dy/{N-\) '
Gr3i=+T^7T a,W)-£j

h(M)
E(N)
Gvn ~ +
_mhap(N) dJp(M)
q dy/(N)
G^=*Trt:a.m^
h(M)
K(AO
E(NY q
JAN)
J{ff 34
dlf/( N)
E(N)
JAN)
E(N)
50


^21 ~ ^Vll ^V_31
GÂ¥22=ap{N)
m,
3./_£m
dl//(N)
dJp(M -1)
q di//(N-1)
^V23 ^Vl3
(//33
GvU=an(N)
q
Mn(M)
dl//(N)
ma dJn(M -1)
q 0^(A-1) J
M*)

With all of the coefficients for the matrix-vector form of the continuity equations and
Poissons equation tabulated and the partial derivatives for those coefficients
calculated, we can use them to develop an algorithm which calculates the thermal
equilibrium or applied DC bias potential and carrier densities throughout the device.
3.6. Numerical Solution
This section will discuss the numerical solution process used to solve the matrix-
vector equation (3.35), including development of the trial values, or initial guess, and
finally Newtons iteration principle to update the trial values and check for
convergence of the solution. A basic solution process is also discussed which was
used to develop the program given in Appendix C.
This solution algorithm involves solving the two continuity equations and Poissons
equation as previously discussed. The process of transforming these equations into
finite difference equations allows the use of the implicit method as a solution
procedure which is simply a multidimensional Newton iteration problem. Equations
(3.32-3.34) have been developed such that full coupling exists between the three
fundamental variables, n, p, and v|/ or that is each fundamental variable is allowed to
change during each iteration. In this manner the equations can be solved
simultaneously in a straight forward fashion. Other methods exist to solve the
numerical equation developed including decoupled methods (Gummel method [3])
but are not discussed here. They have their various differences and benefits for
specialized situations but for a generic solution algorithm for most possible device
conditions (low injection vs. high injection for example) the couple Newton iteration
51


procedure is preferred. It will increase the overall number of computations per
iteration but decrease the overall computation time for convergence.
The solution to equation (3.35), repeated below for reference and show in matrix
form, can involve a rather significant amount of matrix elements. An efficient
solution procedure is required or the computation time, even for simple devices, can
become excessive and limits the use of such simulator tools to any practical sense.
Use of the coupled Newton iteration principle with the recursive solution procedure
will suit the needs of this program adequately.
A(N)Sy(N-\) + B(N)Sy(N) + C(N)Sy(N + l) = F(N), for 2 where,
~p(N)~ Sp(N)
y(N) = n(N) , Sy(N) = Sn(N)
y(N) Sy/{N)
B( 2) C(2) 0 0 Sy( 2) F{2)
A( 3) B( 3) C(3) 0 Sy( 3) F( 3)
0 A(4) B( 4) c(4) : 0 A{L 2) B(L- 2) C(L 2) Sy(4) F(4)
0 0 Sy(L- 2) F(L-2)
0 0 A(L-l) B(L-l) Sy(L-l) F(L-l)
Newtons iteration principle requires that we have an initial guess or starting value to
calculate the error state for the first iteration. The calculated error is then added to the
initial value and the process repeated until convergence is reached. All devices,
whether simple 1 -D P-N junctions or elaborate 2-D bipolar devices, should have the
thermal equilibrium condition solved for first, or that is no external applied bias. Not
only does it allow for faster computation speeds on the applied bias conditions but
allows the user to obtain a good feel for whether the devices equilibrium conditions
are being met as intended.
52


For the thermal equilibrium condition, trial potentials are simply provided based on
the charge neutral assumption that the boundary conditions were established under.
n2
Carrier densities are given by pT = -T, nT = p, where T is the effective doping
concentration throughout the device. The zero-bias trial values for the potential are
given by equation (3.39) below.
-In
e
____
r IX*)
p region
n region
(3.39)
Now that the trial potentials are defined we can conceptualize a solution procedure,
which is given below. A flow chart representation is provided in [4] as well as the
authors version in Chapter 5, which represents this programs solution.
(1) Define physical and numerical parameters
(2) Define iteration limit and set current iteration number to zero
(3) Use the matrix-vector coefficients, A, B, C, and F to calculate the error for
each iteration using the recursive algorithm discussed in Section 3.4. This is
equivalent to solving equation (33) using direct Gaussian elimination for
example.
(4) Improve the initial guess or trial potentials by adding the calculated error.
(5) Check for convergence against defined convergence tolerance
(6) End program if solution has converged otherwise update current iteration limit
by 1 and return to step (3) repeating the process until solution has converged
or iteration limit is reached.
Assuming that a converged solution for the equilibrium condition has been reached,
extending the solution to an external biased condition only requires updating the
equilibrium condition potential with a simple proportional relationship and then
repeating the process defined above. Technically updating the potential is not
necessary but will lower the total iteration number for the DC condition. This
relationship is given in equation (3.40) and illustrated in Figure 3.10 below.
53


i//t(N) =
(3.40)
v2-v,
y/,(L)-y/x{L)
V -V
^(N)+- 2 1
y/,{L)-y/^{L)
V\{L)
Figure 3.10- UPDATED POTENTIAL BASED ON PREVIOUS POTENTIAL [4]
A tabulated list of physical constants and user defined constants as well as a tabulated
list of numerical constants required to develop the program in Appendix C is given
below.
Table 3.2 Numerical constants
Nomenclature Program Variable Default Value Units
Device Length length 10 um
Convergence Tolerance etol 1 x 10-6 -
# Grid Points L 1001 -
Iteration Number it 0 -
Iteration Limit itl 15 -
54


Table 3.3 Physical constants
Nomenclature Program Variable Default Value Units
Dielectric Permittivity Constant eo 8.854 x 10-14 F/cm
Electronic Charge q 1.602 x 10-19 C (A-sec)
Boltzmanns Constant k 8.62 x 10-5 eV/K
Thermal Voltage Vt 0.02586 V
Boltzmann Factor (Thermal Voltage) l/Vt 38.6 1/V
Temperature T 300 K
Donor Concentration Nd 1 x 1015 cm-3
Acceptor Concentration Na 1 x 1015 cm-3
Left Contact VI 0 V
Right Contact Vr 0 V
55


4.
SDS Results Investigated
In this chapter, we will review the output of the SDS program and compare it to
analytical results as well as to other simulators, notable SimWindows. [7]
SimWindows is also a 1-D semiconductor device simulator capable of simulating the
same devices as the SDS program. The first section will compare the SDS results
with respect to standard analytical models, for example the built-in potential with
respect to using the depletion approximation as opposed to solving the numerical
solution which makes no such assumptions. The purpose of this is to verify the
working operation of the program versus general semiconductor operation that is
learned at the classroom level.
4.1. SDS Output Compared to Analytical Assumptions
Most basic PN Junction analysis is performed using the analytical device expressions
as defined in Chapter 2. To make these differential equations mathematically
tractable, assumptions must be made in order to arrive at a solution. One of the main
assumptions used in this approximation is the Depletion Approximation. [ 1 ]
The Depletion Approximation is a way to simplify Poissons equation such that the
potential throughout the device can be calculated without the need of the carrier
concentrations, n and p. The approximation is performed by assuming that the
transition region between the n and p bulk regions, or that is depletion region, is
totally depleted of mobile carriers and that the mobile majority carrier densities
abruptly become equal to corresponding dopant concentrations at the edges of the
depletion region. The charge density is therefore zero everywhere except in the
depletion region, where it equals the ionized dopant concentration. Additionally, we
generally assume that the dopant atoms are complete (100%) ionized and free to
contribute to current conduction. This reduces Poissons equation to what is shown in
equation (4.1) below.
56


(4.1)
d2(p
dx2

If we further only evaluate Poissons equation with respect to the n-region of the PN
junction, equation (4.1) reduces to equation (4.2) below. This reflects no acceptor
atoms in the n-region and can easily be integrated from an arbitrary point in the n-
region depletion region to the edge of the region resulting in equation (4.3). Similar
results are obtained in the p-region in equation (4.4).
d2(/> dE dx2 dx £'
(4.2)
qN,
E(x) =----(*-*) 0 (4.3)
QNa
E(x) =-------(jc + jc ) x £,
(4.4)
If we further integrate equations (4.3) and (4.4) one more time, we arrive at
expressions for the potential in each respective n and p-region shown in equations
(4.5) and (4.6).
n
qNd
2e.
(* ~XY
0 < x < xn
(4.5)
-xp (4.6)
Where 0n and 0p is the potential at the edges of the depletion regions and is equal to
kT N kT N
q n, q n,
57


The total potential change from one region to the other is referred to as the built-in
potential and is the potential whereby at equilibrium it balances the drift and diffusion
components across the junction. The expression for the built-in potential is shown
below in equation (4.8).
N,Na
a a
(4.8)
With these expressions developed from an analytical approach using the Depletion
Approximation we can evaluate the SDS program solution with respect to the
analytical solution. The equations derived above generally result in the graphs shown
in Figure 4.1. These graphs apply to an abrupt step junction which is the default
device loaded into SDS.
Plot (a) shows the step junction profile, with one dopant type per region. This creates
a step junction trial potential for use in the numerical solution. Plot (b) is simply
showing the depletion approximation, or that is, the depletion region carrier density is
negligible. Plot (c) is the charge density associated with the dopant atoms. Plot (d) is
the electric field in the device, while plot (d) is the potential in the device.
58


(e)
Figure 4.1 ABRUPT STEP JUNCTION PROPERTIES
From plot (d) in Figure 4.1 the potential is only zero at the transition region when the
doping profiles on each side are equal. When they are not, the depletion region will
push further into the region that has less doping.
59


Referencing plot (d) and equations (7) and (8), we can calculate the built in potential
and potential in the n and p-regions only to see if they compare to our numerical
solution. Let us assume that the dopant profile in each region is equal which gives us
. kT, Nd 1.38*1023-300 ,
1*1015
n;
1.602*10
-19
(l .45*10 )2
1*1015
= 0.2878 V and
. kT, N 1.38*1023 -300 ,
0n--------In - =---------------tz In
p q nt 1.602*1 O'19 (l.45*10,0)
-0.2878 V
. . . kT. NdNa
q
1.38*1023 -300 1*1015 -1*1015
In-;--------- = 0.5756 V
1.602x10- (1.45X10'0)2
Figure 4.2 shows the output for the potential of the SDS program when run with an
abrupt step junction of equal doping in both regions. The cursor in the graph is
displaying the pertinent information.
potential equilibrium
Figure 4.2 EQUILIBRIUM DEVICE POTENTIAL
60


We can also observe in Figure 4.2 that the potential is zero at the junction transition
as we expect since the doping densities are equal in the bulk n and p-regions. Figure
4.3 shows the equilibrium electric field throughout the device. It is also symmetrical
about the junction as we expect.
Energy euilibrium
Figure 4.3 EQUILIBRIUM ELECTRIC FIELD
Figures 4.4 and 4.5 show the potential for a PN junction where the acceptor dopant
density is greater than the donor density. This shows the depletion region extending
further into the n-region as it should, as well as a nonzero value at the transition
region. Figure 4.5 is taken from SimWindows as verification the correct solution was
obtained. The only difference in SimWindows is that all plots are scaled to zero,
meaning there are no negative potential values. SDS has an option to do this as well
and the result is shown in Figure 4.6. The electric field is always max at the junction
transition; however when the dopant densities are unequal across the junction, the
field will be push further into the region with less doping atoms just as the depletion
region does.
61


0.3
potential equilibrium
02 -
01 -
I
I
0 -
S
z
-01 -
03
-0 4-
-05
0
_i______________i_____________I_____________i______________I_____________I_____________I_____________i_____________i_____________
01 02 03 04 05 06 07 0.6 09 1
Device Length (x) in micron* x
Figure 4.4 DEVICE POTENTIAL WITH UNEQUAL DOPING DENSITIES
Pol (V)
1.00-
0.90-
0.80-
0.70-
0.60-
0.50-
0.40-
0.30-
0.20-
0.10-
-2.0e-16-
'!^
1.00
4.00 5.00 6.00
Grid Position (microns)
Figure 4.5 SIMWINDOWS COMPARISON TO FIGURE 4.4
62


potential equilibrium
%
s 05 -
Device Length (x) in microns
Figure 4.6 SDS OUTPUT SHIFTED TO ZERO
Figure 4.7 below is the SimWindows equilibrium electric field with unequal doping
densities. This compares to the SDS output in Figure 4.8.
Field (V/cm)
0.00 ]------------------------------------------------------------------- -----------------------------------------------------------------------
-5.0e+03- j
-1.0e+04-i |
-I.Be+04- j
-2.0e+04-
-2.5e+04-
-3.0e+04-
-3.5e+04-
-4.0e + 04--1 T 1 rr, 1 ri J 1 1 1 1 1 I 1 1 1 1 1 1 r~,_l
0.00 1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.00
Grid Position |microns)
Figure 4.7 SIMWINDOWS ELECTRIC FIELD
63


Energy euilibrium
04 05 06
Device Length (x) in microns
Figure 4.8 SDS ELECTRIC FIELD
Finally the abrupt step junction is evaluated with an external applied bias of 0.5 volts
at the left contact. This forward biases the PN junction as we would expect. A 0.5 volt
signal is also applied at the right contact to reverse bias the junction. Alternatively, a
negative voltage could be applied to the left contact to achieve the same affect. The
doping densities are kept unequal as in the previous examples.
Figure4.9 shows the device potential when the device is forwarded biased. From
classic analytical theory we expect the built-in potential to be lowered as is shown in
the graph. Figure 4.10 is a SimWindows comparison of the same output.
While the device is forwarded biased we can also calculate the current density
through the device and compare it to SimWindows output as well. One final
experiment not shown here is to reverse bias the device and slowly increase the bias
64


Device Potential M
until a breakdown condition is achieved. This better shows the effect of the
generation dependent models as a high bias tends to achieve impact ionization more
readily then lower level electric fields.
potential applied voltage
Figure 4.9 SDS DEVICE POTENTIAL WITH EXTERNAL APPLIED BIAS
65


Pot (V)
1.00 n
0.90 ^
0.80
0.70-
0.60-^
Q.50-j-
0.40-
0.30-
0.20-j
0.10-
0.00 1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.00
Grid Position (microns)
Figure 4.10 SIMWINDOWS POTENTIAL WITH EXTERNAL APPLIED BIAS
66


5.
Semiconductor Device Simulator
In this chapter, a brief overview of the SDS program is given with explanations for
key aspects of the program that are not explicitly covered in any other section. We
will begin by looking at the main program and then investigate the functions that
are implemented in support of the main program.
As we have seen throughout the project, the main purpose is to write a program that
solves the matrix-vector equation which was established from the classic differential
equations in Chapter 2. In most aspects the program can be developed however the
developer likes, but in certain instances poor coding techniques will lead to longer
computation times and ultimately slower convergence speeds for a given device
setup. The main important goal is to manipulate the matrix coefficients efficiently and
to solve the matrix-vector equation with as little iteration as possible. On key element
to this is establishing the proper boundary conditions and trial values.
The first few blocks simply establish any constants, or numerical parameters required
to solve the device. The second block establishes any user defined inputs which also
include device information for the device to be simulated. A note should be made that
the code provided in Appendix III, and referenced in sections here, is heavily
commented and the reader should reference the comments for any confusion that may
arise.
clear all;
information
close all;
clc ;
TSTART = tic;
format long eng;
^Clears all previously stored variable
%Closes all previously open plots
%Clears the command window to blank
%Starts a timer to record solution time
%Formats the data to engineering mode
This block is a simple setup that is used before any new simulation is run. All stored
variable information in the Matlab workspace is cleared, all open plots are closed, and
the command window is cleared. This section also starts a timer, which is used to
investigate the speed of the computation algorithm for various device setups.
67


Every calculated value that is stored in a variable is accessible after the program is
run in the main Matlab workspace. For example, the last iterated value for the error
values is stored in an error vector. The program can be easily modified to run through
one iteration at a time allowing the user to investigate the change in error over each
iteration. Every opportunity for the user to access the calculated data has been
provided so that any insight into those variables can be gained if necessary.
One of the main reasons the author chose to use Matlab was due to the environments
natural capability to work with matrices. Everything in Matlab, even a one-
dimensional variable or constant is treated as a matrix. One of the key features in SDS
is the use of Cell arrays in Matlab. This allows the user to store various types of data
in each cell. In fact, the first cell could have a standard 3x3 matrix, while the second
cell has a string text in it for example.
The cell arrays are treated like matrices as well and indexed similarly. All standard
matrix operations work, provided that matrix dimensions agree, etc. This works
elegantly with the tri-diagonal block matrices. Each matrix coefficient is a 3x3 matrix
but the calculated values are stored in a cell array where each element is a 3x3 matrix.
The cell array is used in conjunction with the recursive algorithm for the solution
process. The delta_y array, yo, and y arrays are cell arrays as well and the
instantiation process is shown below.
delta_y = cell(l,L);
delta_y(:) = { [0;0;0] };
matrix
yo = cell(1,L);
yo (:) = { [0;0;0] } ;
matrix
y = cell(1,L);
y( : ) = { [o;0;0] };
matrix
Apr = cell(1,L);
Bpr = cell(1,L);
Cpr = cell(1,L);
Fpr = cell(1,L);
delta_y{N} = inverse_func(Bpr{N))*Fpr{N}...
- inverse_func(Bpr{n})*Cpr{N}*delta_y{N+l};
%Defin.es an empty cell from 1 to L
%Intializes each i cell to a : 3x3 empty
%Defines an empty cell from 1 to L
%Initializes each cell to a 1x3 empty
%Defines an empty cell from 1 to L
%Initializes each cell to a 1x3 empty
68


This calculates the error during each iteration. Following the matrix dimensions gives
delta_y{N} = 3x3 X 3x1 3x3 X 3x3 X 3x1 =3x1, which agrees.
This method of using cell arrays is efficient and compact and is highly recommended.
Even the update process is compact as shown below. In one line, the original trial
potential, yo{N} is updated with the calculated error in delta_y{N} at each division
point N. Remember the boundary conditions have no error associated with them as
they are fixed.
y{N} = yo{N} + delta_y{N};
After each iteration, the solution needs to be checked against the error tolerance.
There is effectively two main ways to accomplish this. The first method requires that
the absolute value of each calculated error value for the three fundamental variables,
n, p, and sai, are checked against the convergence tolerance at each division point. If
every single point has converged then the solution has been reached. This is
somewhat inefficient, however is accurate for the whole device. A second method
checks the normalized value of the entire error vector (normalized value of the error
at each division point) against the convergence tolerance. If the normalized value is
small enough the solution has converged. The process for one error vector is shown
below.
tempi = abs(norm(p_error)/norm(po));
if (tempi > etol)
flagl = 1;
else
flagl = 0;
end
Once all three flags have been cleared to zero (they are initially set to one), then the
device has converged. The solution algorithm loops until this occurs or the iteration
limit set by the user has been reached. The default is 15 iterations.
The final part of the main code or function simply plots the results, whether the
device solution converged or not. The electric field, carrier densities, and device
potential are all plotted against the spatial variable, x.
69


The rest of the SDS program is a series of functions that are called by the main
function or sub-functions. This is where the matrix coefficients are calculated and the
partial derivatives as well. The code for one element of the B_func is shown below. It
references several functions itself.
b(l,l)= ( l/hpr(N) )*((mu_func(M,h,po,no,Ntot,saio,1)/h(M))...
*lambda_func(M,saio,1)-(mu_func(M-l,h,po,no,Ntot,saio,1)/h(M-
1) ) . .
*lambda_func(M-l,saio,-1))+ diffsrh_func(N,po,no,Ntot,1);
Referencing back to Chapter 3 Section 3.5, you can see that there are several
functions to support the calculation of the matrix coefficient. Mobility is calculated
through a function so that models of temperature and effective doping can be used,
and the lambda terms and recombination terms are supported by function calls as
well. This allows for efficient storage of the calculated data. We dont need the
lambda value stored across the whole device because unless you are troubleshooting,
there is no interest in these values.
The lambda function code is shown below. Recalling the various lambda parameters
from Chapter 3, the beta term is also referenced as a function. This way beta(M) and
beta(M-l) can be calculated in one function by passing in the current division point.
The other important note for the lambda function is the use of the variable, I. This
variable is passed in so that the proper sign is used for the exponential terms. In this
manner, one function can support all of the lambda terms from Chapter 3 in a
compact fashion.
Vt = 0.02586;
theta = 1/Vt;
temp = beta_func(M,saio);
if(I == 1)
if(abs(temp) <
lambda_out
else
lambda_out
end
elseif(I == -1)
if(abs(temp) <
lambda_out
else
IE- 5)
= (1/theta)*(1/(1-(1/2)*temp+(1/6)*tempA2));
= (1/theta)*(temp / (1 exp(-temp)));
IE- 5)
= (1/theta) (1/ (1+ (1/2) *temp+ (1/6) *temp^2) ) ,-
70


lambda_out = (1/theta)*(temp / (1 exp(temp)));
end
end
end
One final note on this function, which also applies to the difflmabda function as well,
is that for special cases, the actual value of lambda needs to be approximated or some
of the resulting matrix coefficient calculations will become undefined due to a
division by zero. For the default device in this program, a step junction profile is
used. This results in equal values of the device potential in each bulk region which
will cause the output of the lambda function to go to zero or undefined. Therefore,
depending upon the value of beta (which is the difference of the device potential
between two division points), instead of using the direction equation and
approximated equation is used.
In essence a series expansion of the lambda terms are performed so that the
exponentials are approximated. The technique used here was known as Bernoullis
function. It is also applied to the difflambda function which is used to calculate the
partial derivative terms of the lambda coefficient.
Several other functions exist but wont be discussed here. As mentioned, each block
of code is commented in Appendix C. The remaining functions are similar in nature
in that they support the overall solution process, many being to support the
calculation of the matrix-vector coefficients, like generation and recombination. The
current function will calculate the current density equations and so on.
A basic flowchart of the program is provided in Figure 5.1. This should be used in
conjunction with Appendix C to better understand the flow of the program and where
various calculations are made in reference to the discussions throughout the report.
71


Figure 5.1 SDS BASIC FLOWCHART
72


6. Summary
Throughout this report an attempt has been made to thoroughly document the
procedure used to develop a numerical solution to the simple PN junction diode. The
basic device equations were transformed in to numerical equations that could then be
solved in any desired manner. This program implemented the implicit Newton
method as the chosen solution procedure to the system of numerical equations. A
recursive algorithm was used to solve the system of numerical equations which had
been placed in matrix form resulting in a tri-diagonal block matrix.
The program is intended to allow for different PN junction diode characteristics to be
studied by giving the user the capability to alter the default device already built into
the program. Parameters like doping profile, device length, and device materials can
all be changed to suit the simulation desired. Numerical parameters like convergence
tolerance, mesh points, and mesh spacing can also be modified to suit the users need
or even to help solidify the understanding of the numerical parameters role in
defining the obtained solution. The solution to the default PN junction was used to
compare with the theoretical solutions obtained from making certain assumptions,
like the depletion approximation, when solving the standard differential equations.
The work done to develop this program lays the groundwork necessary to develop a
more in-depth simulator capable of solving multiple semiconductor devices over a
wide range of applications. The base program can easily be modified and updated to
include further enhancements and capabilities without drastically changing the nature
of the program. The program was written modularly and only portions need be
modified at a time.
Applications like zener breakdown, reverser recovery time, and more can be studied
with the use of this simulator, with some suitable setup to the program.
73


APPENDIX A
Integral Form of the Current Density Equation
As was seen in Chapter 3, Section 3.5, the current density equations were not
transformed directly from the difference method as was the continuity and Poisson
equations. The main reason behind this is to generate stability within the tri-diagonal
block matrix to ensure proper convergence when using the implicit method. It is
possible however to employ the standard difference technique on the current density
equations and substitute them directly into the continuity equations, but this can lead
to numerical instability if the potential changes drastically between mesh points. To
examine why this occurs let us re-define the current density equation for holes,
repeated from Chapters 2 and 3 in equation (A.l) and then transform the equation into
standard difference form as shown in equation (A.2a and A.2b). [4]
dp

dip
dx
(A.l)
qfl M
y (M) = ----
p 0h(M)
[p(N + l)-p(N)]
'WPW)
K h(M)
\
[^(A + 1)-^(A)]
f p(N) + p(N +1)^
J
P
qjUp(M-\)

0h(M -1)
[/?(/V)-/7(/v-d]
1)
^ h(M -1)
\
[ip(N)-ip(N-\)]
f p(N-1) + p(N)}
(A.2a)
(A.2b)
The continuity equation for holes is again given in equation (A.3) in difference form.
Equations (A.2a and A.2b) are then substituted into equation (A.3) and rearranged
with like terms and simplified to obtain equation (A.4).
74


1 J AM) J (M -1)
---7--------G(N) + U(N) = 0
q h (N)
(A.3)
Expanding JAM) and JAM -1)
WSM)
qMAM)
qMn(M)
j(m)=-j:,p... p(A+i)+ pw-^;>(Af+i)#)
+
9h{M)
2h(M)
qMp{M)
2 h(M)
9h(M)
\j/{N + \)p(N +1) +
y/(N)p(N +1)
qMp(M)
2 h(M)
2 h(M)
nu)p(N)
and...
qjUAM-\)
qjU(M-\)
WP(M-1)
Jn(M-l) = -"J... + PW-D- tW(N)p(N-1)
+
6h{M 1)
2/j(A/ -1)
2h(M -1)
6h{M 1)
qflAM -1)
y/(N)p{N) + -A----
2h(M -1)
p(N-\)p(N)
2h(M -1)
ip(N-\)p(N-\)
Subtracting both current densities...
VP(M)
Mp(M)
- J (M) - Jn (M -1) = p(N +1) + - p(N)
9h(M)
9h(M)
MPW)
MAM)
MAM)
p -y/(N + l)p(W) -y/{N +1 )p(N +y/(N)p(N)...
2 h(M)
Mp(M)
+
2 h(M)
MP(M-1)
2h(M -1)
Mp(M~ 1)
2h(M -1)
i//(N)p(N+ l) +
2h(M)
MAM-1)
2 h(M)
P(N)~
MAM-1)
M(M -1)
y/(N)p(N-\) + p
9h{M -1) 9h(M -1)
y/{N)p{N)...
p(N-\)...
2h(M 1)
ju(M -1)
A 1)/>(W 1) -T77T-ys(N I)P(N)
2h(M -1)
75


Combining like terms...
+
Up(M-\) pp{M-1)
0h(M -1) 2h(M -1)
^(N-l) +
2h(M -1)
y/(N)
MP(M -1) M (M-1) _ jUp(M-1)
-------+ --------u/(N)----------
0h(M -1) 2h(M-\)
2h(M -1)
y/(N-\) +
p(N-l)
Mp(M)
6h(M)
uAM) /v (M)
y/{ N + 1) + ^77777 y/( N)
+
2 h(M)
MPm
2 h(M)
M(M)
0h(M) 2h(M)
u(M)
y/(N + \) + ^-y/{N)
p{N + \)
p(N)
Finally, simplifying...
Mp{M-\)
2h{M -1)
Up(M-\)
0
+ y/{N -X)-y/{N)
2h(M -1) 10
MPm
----y/(N -l) + i//(N)
+
2 h(M)
MP(M)
2h(M)
+ y/(N) y/{N +1)
U
p(N-\) +
P(N)
y/(N)-y/{N + \)~
0
p(N + iy
= h\N)[G(N)-U(N)]
(A.4)
If we assume that the mobility and mesh spacing is constant and that the potential
variation between the three points, N-l, N, N+l is linear, that is
y/(N -\) + y/{N) = y/(N)-y/{N +1) = VD, then we obtain equation (A.5)
76


A
2h
~ + VD
e D
p(N-\) + -p(N) +
u
f 2)
V \ V) p(N + \)
= h\N)[G(N)-U(N)]
(A.5)
Examining equation (A.5) shows that unless the generation or recombination terms
grow to proportions that deform the matrix coefficients on the left, the diagonal
2
dominance of the equation is only lost if |VD| > which would allow the sub- and
0
super-diagonal elements to dominate the equation. The middle element p(N) is not
affected by the change in potential from mesh points and so the only way to combat
this issue when using the standard difference form is to decrease the mesh spacing to
sufficient size to lower the potential change between mesh points. However, this can
lead to an extensive amount of device grid points leading to excessive computation
times. [1]
This issue can be resolved by transforming the current density equation into integral
form and then applying the difference technique. This allows for deviation from
diagonal dominance to be negligible to the overall solution algorithm, according to
Sharfetter and Gummel. [8]
To do this, we assume that in the range of x, between two division points N and N+l,
the electric field, mobility, and current density are constant. Then equation (A.l) can
be rearranged to integrate the hole density with respect to x to obtain equation (A.6).
^ dp = 9E H pdx Jp | dx, where
d e=-^. e=3-
p q p dx kT
JP(M) = -
qJU (M )E(M) e
6E(M )h{M)
i e
8E(M)h(M)
qu(M)E(M)
(A.6)
l e
77


To arrive at a more simplified solution, as shown in Chapter 3, we must do some
more algebraic manipulation of equation (A.6). First, multiply the first term in
equation (A.6) by
e~0E(M)h(M)
SE (M)h(M) .
e , which gives,
JP(M) =
qH (M)E(M)-e
6E(M)h(M) -SEiM)h(M)

6E(M)h(M)

-0E(M)h(M) '
p(N) +
qVp(M)E(M)
\_e6E(M)h{M)
p{N + \)
Combing the exponential terms gives,
J P(M)
qjUp(M)E(M) qjup(M)E(M)
^1 J _e0E(M)h(M) P(N + \)
then,
JP{M) = -f-r (M)p(N) + i2 {M)p(N + 1)1
n(M)L J
Ap\(M) = jUp(M)-
Ap2(M) = jup(M)-
y/{N)-\f/(N +1)
i//{N)-y/(N +1)
1_^'M)
^(M) = 6[^(A)-^(A + 1)]
where,
(A.7)
Equation (A.7) is exactly as shown in Chapter 3 and the same process can be used to
arrive at a stable equation for the electron current density. This process is omitted
here but is also the same and for both JP(M -1) and Jn (M -1) as well. In Chapter
5, we will see that only one lambda equation function must be coded, but depending
upon the variables passed into the function we can calculate the desired lambda terms.
The only difference between Api(M) and Ap2(M) is the sign of the beta term.
Anl(M) and An2(M) beta terms are simply swapped. So with one equation we can
evaluate any of the four lambda terms. Also when calculations for M-l division point
are required, we simply calculate the function at the current division point less 1.
78


J (M .) = [\t (M )p(N) + Ap2 (M)p(N +1)] and
JP W-1) = /;(AJ_i}[/lp, (M -1 )p(N -1) + Ap2 (M 1)/>(A0]
where,
Xpl(M)=jUp(M)
l//(N)-i//(N +1)
\-e
P(M)
j (A4\ (A* + \)
Ap2(M) = pp(M)-----;---------, and
ApX{M-\)=pp{M-\)
y/(N-l)-ys(N)
-p(M-1)

- e
P(M) = e[\f/{N)-y/{N + 1)]
P(M -\) = G[y/(N-\)-y/{N)\
Jn (M) = 77777 [4, W )n(N) + An2(M)n(N + 1)] and
h(M)
Jn(M-\) =
where,
q
h(M -1)
[4(A/ l)n(N -\) + An2(M- 1)(/V)]
An](M) = jun(M)
y/(N)-y/(N + 1)
1 e
P(M)
4,2 (M) = pn(M)-;--------, and
An](M-\) = /Jn(M-\)
_e-P(M)
yr(N-\)-yf{N)
\-e
P(M-1)
An2(M-\) = Jun(M-\)y/{N and
-e
-P(M-1)
p(M) = G[y/(N)-y/{N + 1)]
p(M -1) = 6[y/{N -1) y/(N)]
Figure A.l INTEGRAL FORM OF CURRENT DENSITY EQUATIONS
79


Finally, if we take the new integral forms of the current densities in equation (A.7)
just derived and plug them back into the continuity equation (A.3) and again assume
that the mobility and mesh spacing are constant and that the potential variation
between the three points, N-l, N, N+l is linear we obtain equation (A.8).
= h[G(N)-U(N)]
With this form, even if |V01 VD, deviation from the diagonal dominant
0
condition is nonessential. Using the integral form for the current densities greatly
improves the matrix property, in that the nonsingularity condition is at least
approximately satisfied even for high potential difference between two neighboring
division points.
p(N-\) +
WPVD (
h [\
(A.8)
+ ~-j-P(N + 1)
l-ep
80


APPENDIX B
Derivation Techniques for the Partial Derivative Coefficients
The following appendix will discuss the techniques used to derive the partial
derivatives of the matrix-vector coefficients, A, B, and C. The derivation requires use
of the product and chain rule from basic calculus. One example derivation is given for
the hole current density and one example for the recombination terms. The rest of the
derivations are left to the reader, however the partial derivatives for the coefficients
A, B, and C are tabulated below so that they can be compared to the relationships
given in Chapter 3 Section 3.5. Doing so will help develop the understanding required
to program one function that supports the various differential lambda terms that
greatly reduces the number of computations. Repeating the current density equation
from Appendix A,
jp(M) = [Ap] (M)p(N) + Ap2(M)p(N +1)] where,
p{M) = 6[y/(N)-\i/{N +1)]
In partial derivative form:
81


To take the partial derivative of the lambda term with respect to the potential at mesh
point N we can start by first applying the product rule since the lambda terms is not a
straightforward function of the potential.
MnAM) dr
= here,
f(y/{N)) = y/{N)-y/{N + \)
g(Â¥{N)) = (\-e-p{M))~'
Using the product rule:
3/L(M) [ ,
3 [*(^(/V))] + (K(A'))^y[/(^/V))]j
dy/{N)
However, before going further the partial derivative of the newly defined g term
requires use of the chain rule, so that is evaluated first.
to take-----r(^))l,letg(yr(N)) be a function of u then,
dy/(N)L J
, J -2
g(Â¥{N)) = u and -----------= -u
du
where,
u = 1 -e
-P(M)
and
du
dy/(N)
= 0e~fiiU)
Using the chain rule:
[n
du
dg{Â¥(N)) ^dg{Â¥(N)) du / -p(M) \~2 / s\ \
dy/(N) du dy/(N) 1 M '
finally,
dXpl(M)
dy/(N)
dAp](M) pp(M)
= ]Up(M){-{Â¥(N)-Â¥(N + l))(l-e-/?(M,)"2(^-/?,M,) + (l-e-/?,A^,)',}
p{M)e-p(M)
dy/{N)

82


The same procedure is followed for the lambda term 2.
where
f(i//(N)) = y/(N)-y/(N + \)
g{iy(N)) = (\-efiiM)y'
Using the product rule:
dAp2(M) f
% the product rule:
However, before going further the partial derivative of the newly defined g term
requires use of the chain rule, so that is evaluated first.
to take --[g(^(A0)J, let g(y/{N)) be a function of u then,
di//(N)
f j dg{V(N)) _2
g[y/(N)) = u and -------=-----= u
du
where,
u = 1 e
PiM)
and
du
dy/(N)
= -0e*M)
Using the chain rule:
dg(iy(N)) dg(t//(N)) du
dif(N) ~ du dy/(N)
{\-eP{M))~2 (-0ePiM))
finally,
dAp2(M)
dys(N)
MP(M) j ( p(M)ep(M)
(l-ePiM)) + (\-eP{M))
83


Essentially the same procedure is used for the recombination and generation partial
derivatives. An example of the recombination derivation is given below.
U(N) =
p(N)n(N)-nf
Tn (P(N) + ni) + TP in(N) + ni)
dU(N)
MN)
dU(N)
dn(N)
3 ~
dp(N)_
d '
dn(N) .
(p(N)n(N) -nf)-(rn{ p(N) + /!,.) + Tp
(p(N)n(N)-nf)-(rn (p(N) + ni) + Tp
(n(A0 + n,))
(n(N) + ,.))
Using the product and chain rule:
d
dp(N)l
{p(N)n(N)-nf)-[rn (p(N) + ni) + Tp (n(N) + ni))
= (p(N)n(N) -nf)- |-r (r (p(N) + n,) + tp (n(N) + nt)) 2 )...
+ n{N)(tn (p(N) + /!,-) + Tp (n(N) + nt))
so,
dU(N) = n(N)-rnU(N)
dp(N) Tn (p(N) + n,) + Tp (n(N) + nt)
84


APPENDIX C
Semiconductor Device Simulator Implemented
%--------------------------------------------------------------------
%Title: ID PN Step Junction Numerical Device Simulator
%Description: Using pre-defined material parameters for Silicon, a
%device can be described numerically along a grid at which the
%physical differential equations are solved for the three basic
%device parameters, potential, and carrier concentrations, herein
%referred to as sai, n, and p.
%Written By: Shawn Pace
%Date: 2011 UCD Thesis Project
%Contributions: Many thanks to Dr. Hamid Fardi whose knowledge and
%previous solution program fardev, without which, would have made
%this nearly impossible.
%--------------------------------------------------------------------
clear all;
information
close all;
clc ;
TSTART = tic;
format long eng;
%Input Parameters
Nd = 1E15;
Na = 1E15;
T = 300;
ni = nieff_func(T)
Vt = 0.02586;
eo = 8.854E-14;
er = 11.7;
es = eo er;
length = 10E-6;
q = 1.602E-19;
theta = 1/Vt;
%Clears all previously stored variable
%Closes all previously open plots
%Clears the command window to blank
%Starts a timer to record solution time
%Formats the data to engineering mode
%Donor Concentration
%Acceptor Concentration
%Temperature in Kelvin
%Intrinsic Carrier Concentration
%Thermal Voltage
%Relative Permittivity Constant
%Relative Permittivity of Silicon
%Permittivity of Silicon
%Device C
%Electron Charge
%Boltzmann Factor
85


alpha_pO = 2.25E7;
E_pO = 3.26E6;
alpha_nO = 3.80E6;
E_nO = 1.75E6;
m=l ;
%Generation defined
%Generation defined
%Generation defined
%Generation defined
%Generation defined
constants;
constants;
constants;
constants;
constants;
see Chapter 5
see Chapter 5
see Chapter 5
see Chapter 5
see Chapter 5
%Initial applied voltage:
Vl = 0; %Applied voltage to left contact
Vr = 0; %Applied voltage to right contact
%Numerical Parameters:
%Iteration number initialized to zero
%Max iteration limit to find solution within
%a given tolerance
%Numerical Device Solution Points
%Convergence Tolerance; calculated and
%normalized error must be less than etol to
%converge
%Distance between each device point in cm
%10*2 is a conversion factor for um->cm
delta = (length/(L-l))*10*2;
%Define uniform mesh points:
x(1:L)= 0; %Memory space defined for device points
for(N=2:1:L)
x(N) = x(N-l)+delta; %Mesh points are defined from 1 to L in
end %increments of delta
it = 0 ;
itl = 15;
L = 1001;
etol = IE-6;
%Memory space defined for mesh spacing; h is uniform mesh spacing
%and is equal to delta for a uniformly spaced device, hpr (h-prime)
%is the auxiliary defined mesh spacing but is also equal to h for a
%uniform device
h (1:L-1)=0;
hpr(1:L-1)=0;
for(M=l:1:L-1)
h(M) = x(M+l)-x(M);
end
for(N=2:1:L-1)
M=N;
hpr(N) = (1/2)*(h(M-l) + h(M));
end
%Define Memory Space:
xl(1:L)=0; %Not currently used
gamma(1:L)=0; %Device doping profile
86


NtOt(1:L)=0;
saio(1:L)=0;
sai(1:L)=0;
po(l:L)=0;
no(1:L)=0;
E(1:L)=0;
delta_y = cell(l,L);
delta_y(:) = { [0;0;0] } ;
yo = cell(1,L);
yo(:) = { [0;0;0] } ;
y = cell(1, L);
y ( : ) = { [ 0 ; 0 ; 0 ] } ;
%Device doping total concentration (Nd + Na)
%Device potential previous solution
%Device potential final solution
%Device hole concentration
%Device electron concentration
%Device energy field
%Defines an empty cell from 1 to L
%Initializes each cell to a 3x3 empty matrix
%Defines an empty cell from 1 to L
%Initializes each cell to a 1x3 empty matrix
%Defines an empty cell from 1 to L
%Initializes each cell to a 1x3 empty matrix
%y and yo above are used to store the updated values after adding
%the error for each iteration. They store sai, n, and p.
%Boundary Conditions:
%The solution of the device equations require a two-point boundary
%condition and below are the boundary conditions for the device
%length, doping profile, carrier concentrations, and built-in
%potential
x(l) = 0;
x(L) = length*lCd2;
gamma(1)= -Na;
gamma(L)= Nd;
Ntot(1)= Na;
Ntot(L)= Nd;
po(1)=(-gamma(1)/2)*(1+sqrt(1+((2*ni)/gamma(1))*2));
no(1)=niA2/po(1);
no (L) = (gamma (L) /2) (1 + sqrt (1+ ( (2*ni) /gamma (L) ) A2) ) ;
po(L)=niA2/no(L);
phi_p = -Vt*log(po(1)/ (ni)) ;
saio(L) = Vr + Vt*log(no(L)/(ni)) phi_p;
saio(l) = VI;
yo{ 1} = [po(l) ,-no(l) ,-saio(l) ] ;
yo{L} = [po(L);no(L);saio(L)];
%Numerical Solution
%Trial Values:
%Before the device equations can be calculated, trial values for the
%carrier concentrations and device potential must be provided across
%the whole device,
for(N=2:1:L-1) %#ok<*N04LP>
if(x(N) <= (length*10A2)/2)
gamma(N) = -Na;
Ntot(N)= Na;
else
gamma(N) = Nd;
Ntot(N)= Nd;
87


-1)
end
if(sign(gamma(N)) ==
po(N)=-gamma(N);
no(N)=-(niA2/gamma(N));
else
po(N)=niA2/gamma(N);
no(N)=gamma(N);
end
saio(N) = Vt*log(abs(gamma(N))/ni)*sign(gamma(N)) phi_p;
if(saio(N) < IE-9)
saio(N) = 0;
end
yo{N} = [po (N) ,-no (N) ; saio (N) ] ;
end
%Figure 1 plots the trial potential calculated above. This should be
%a step profile for a step junction,
figure(1)
plot(x.*10^-2,saio)
xlim([0 length])
ylim([0.0 1.0])
title('saio initial trial potential 1, 1FontSize16)
%This pre-defined memory space is to store the error calculated
%after each iteration for device potential and carrier
%concentrations. This was mainly used for troubleshooting purposes
%during development of the program, however these values can be
%plotted to give an understanding of how the error changes during
%each iteration.
p_error(1:L)=0;
n_error(1:L)=0;
v_error(1:L)=0;
%This pre-defined memory space is defining an empty cell from 1 to
%L; it is not necessary to intialize each cell to 3x3 empty matrices
%as above, because each cell can hold whatever the programmer
%chooses. These cells hold the 3x3 matrices used to calculate the
%error for each device point in the recursive equation that is
%employed.
Apr = cell(1,L);
Bpr = cell(1,L);
Cpr = cell(1,L);
Fpr = cell(1,L);
%Since the boundary conditions are pre-defined, there is no error
%associated with grid points 1 and L, so only 2 through L-l must be
%solved; please refer to the appendices in the Kurata book
N=2 ;
Bpr{2} = b_func(N,h,hpr,po,no,Ntot,saio);
Cpr{2} = c_func(N,h,hpr,po,no,Ntot,saio);
88


Fpr{2} = f_func (N, h, hpr, po, no, Ntot, gamma, saio) ,-
for(N=3:1:L-1)
Apr{N} = a_func (N, h, hpr,po, no, Ntot, saio) ,
Bpr{N} = b_func(N,h,hpr,po,no,Ntot,saio) Apr{N}*...
inverse_func(Bpr{N-l})*Cpr{N-l};
Cpr{N} = c_func(N,h,hpr,po,no,Ntot,saio);
Fpr{N} = f_func(N,h,hpr,po,no,Ntot,gamma,saio) Apr{N}*...
inverse_func(Bpr{N-l})*Fpr{N-l};
end
%Once Apr (A prime), Bpr, Cpr, and Fpr are calculated across the
%whole device, the recursive equation can be used to calculate the
%error at each device point. This error is then added to the trial
%potential and if the convergence criteria are satisfied, the
%solution has been found.
for(N=L-1:-1:2)
delta_y{N} = inverse_func(Bpr{N})*Fpr{N}...
- inverse_func(Bpr{N})*Cpr{N}*delta_y{N+l};
end
%Error for each variable is extracted for review if necessary
for(N=l:1:L)
p_error(N) = delta_y{N}(1);
n_error(N) = delta_y{N}(2);
v_error(N) = delta_y{N}(3);
end
%This section adds the calculated error to the trial values; recall
%that y is a cell matrix with each cell containing a 1x3 matrix and
%since delta_y is of the same type they can easily be added
%together. Then each updated value is extracted into its own vector
%for convergence determination and also replaced in the trial value
%cell matrix yo.
for(N=l:1:L)
y{N} = yo{N} + delta_y{N};
po(N) = y{N}(1);
no(N) = y{N}(2);
saio (N) = y{N} (3) ,-
if(saio(N) < 0)
saio(N) = 0;
end
yo{N} = [po (N) ,-no (N) ; saio (N) ] ;
end
89


%Variable tempi, 2, and 3 contain the normalized calculation for the
%updated (trial + error) value which is then compared to the
^convergence tolerance, etol and a flag is set to signify
%convergence or not.
tempi = abs(norm(p_error)/norm(po));
if (tempi > etol)
flagl = 1;
else
flagl = 0;
end
temp2 = abs(norm(n_error)/norm(no));
if (temp2 > etol)
flag2 = 1;
else
flag2 = 0;
end
temp3 = abs(norm(v_error)/norm(saio));
if (temp3 > etol)
flag3 = 1;
else
flag3 = 0;
end
%Whether the solution was found, converged, or not, the iteration
%number is still increased by 1 to signify one more iteration was
%completed.
it = it + 1;
%This while loop repeats the solution algorithm above until the
%iteration limit is reached or the solution found,
while(((flagl || flag2 || flag3) == 1) && (it < itl))
N=2 ;
Bpr{2} = b_func(N,h,hpr,po,no,Ntot,saio);
Cpr{2} = c_func(N,h,hpr,po,no,Ntot,saio);
Fpr{2} = f_func(N,h,hpr,po,no,Ntot,gamma,saio);
for(N=3:1:L-1)
Bpr{N} = b_func(N,h,hpr,po,no,Ntot,saio)...
- a_func(N,h,hpr,po,no,Ntot,saio)*...
inverse_func(Bpr{N-l})*Cpr{N-l};
Cpr{N} = c_func(N,h,hpr,po,no,Ntot,saio);
90


Fpr{N} = f_func(N,h,hpr,po,no,Ntot,gamma,saio)
- a_func(N,h,hpr,po,no,Ntot,saio)*...
inverse_func(Bpr{N-l})*Fpr{N-l};
end
for(N=L-1:-1:2)
delta_y{N} = inverse_func(Bpr{N})*Fpr{N}...
- inverse_func(Bpr{n})*Cpr{N}*delta_y{N+l}
end
for(N=l:1:L)
p_error(N) = delta_y{N}(1);
n_error(N) = delta_y{N}(2);
v_error(N) = delta_y{N}(3);
end
for(N=l:1:L)
y{N} = yo{N} + delta_y{N};
po(N) = y{N} (1) ;
no (N) = y{N} (2) ;
saio(N) = y{n}(3);
if(saio(N) < 0)
saio(N) = 0;
end
yo{N} = [po (N) ,-no (N) ; saio (N) ] ;
end
tempi = abs(norm(p_error)/norm(po));
if (tempi > etol)
f lagl = 1;
else
f lagl = 0;
end
temp2 = abs(norm(n_error)/norm(no));
if (temp2 > etol)
flag2 = 1;
else
flag2 = 0;
end
temp3 = abs(norm(v_error)/norm(saio));
if (temp3 > etol)
flag3 = 1;
91