Citation
Identification of ictal and pre-ictal states using neural networks with wavelet decomposed data

Material Information

Title:
Identification of ictal and pre-ictal states using neural networks with wavelet decomposed data
Creator:
Schuyler, Ronald Paul
Publication Date:
Language:
English
Physical Description:
vii, 54 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Spasms -- Detection ( lcsh )
Radial basis functions ( lcsh )
Neural networks (Neurobiology) ( lcsh )
Electroencephalography ( lcsh )
Wavelets (Mathematics) ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves -).
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Ronald Paul Schuyler.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
60403341 ( OCLC )
ocm60403341
Classification:
LD1190.E52 2004m S33 ( lcc )

Full Text
IDENTIFICATION OF ICTAL AND PRE-ICTAL STATES USING NEURAL
NETWORKS WITH WAVELET DECOMPOSED EEG DATA
by
Ronald Paul Schuyler
B.S., University of Colorado, Boulder, 1998
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
2004


This thesis for the Master of Science
degree by
Ronald Paul Schuyler
has been approved
by


Schuyler, Ronald Paul (M.S., Computer Science)
Identification of Ictal and Pre-Ictal States Using Neural Networks with Wavelet
Decomposed EEG Data
Thesis directed by Professor Krzysztof Cios
This work presents a reliable seizure detection method based on radial basis function
(RBF) neural networks, and extends that method to confirm the existence of an
identifiable pre-ictal state. The efficacies of several preprocessing methods are
evaluated for their abilities to extract relevant information from the
electroencephalographic (EEG) data. RBF network topology is investigated, and a
heuristic is proposed for narrowing the search for optimal values of neuron radius.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed


ACKNOWLEDGMENT
I would like to thank my advisor, Krys Cios, for his direction and for his confidence
in my abilities.
Thanks also to Andrew White at the Children's Hospital in Denver for supplying the
data used here and for answering many of my questions about neurology.


CONTENTS
Figures......................................................................vi
Tables......................................................................vii
Chapter
1. Introduction...............................................................1
2. Literature Review.........................................................4
3. Data......................................................................9
3.1 Raw Data..................................................................9
3.2 Meta Data................................................................9
4. Methods..................................................................10
4.1 Artificial Neural Networks...............................................10
4.1.1 Radial Basis Function Neural Networks..................................11
4.2 Preprocessing...........................................................18
4.2.1 Windowing..............................................................20
4.2.2 Fourier Transform......................................................21
4.2.3 Wavelet Transform......................................................22
4.2.4 Input Vector Construction..............................................24
5. Results..................................................................30
5.1 Neuron Locations.........................................................30
5.2 Seizure Identification..................................................31
5.2.1 Seizure-At-Once Method.................................................31
5.2.2 Short Slices...........................................................33
5.3 Seizure Prediction.......................................................38
6. Discussion...............................................................44
7. Conclusions and Future Work..............................................49
References...................................................................51
v


FIGURES
Figure
4.1 RBF Network Architecture...................................................11
4.2 Radial basis transfer function.............................................12
4.3 Accuracies for mushroom data at a range of spread factors..................16
4.4 Accuracies for FFT feature set over a range of spread values...............16
4.5 Transformation examples....................................................25
5.1 Comparison of neuron location methods......................................30
5.2 Per-slice seizure identification accuracies using short slices.............34
5.3 Seizure identification for rat 4...........................................36
5.4 Three heuristics for seizure identification over 24 hours..................37
5.5 Rat 4 seizure identification for two days..................................38
5.6 Seizure prediction on different channels...................................39
5.7 Seizure prediction for rat 6...............................................40
5.8 Seizure prediction for rat 4...............................................40
5.9 Seizure prediction for rat 5...............................................40
5.10 Prediction of one seizure.................................................41
5.11 Prediction refinement with heuristic......................................42
5.12 Seizure prediction for 24 hours...........................................43
vi


TABLES
Table
4.1 Indexed seizures per rat.................................................21
4.2 Seizure counts per rat...................................................26
4.3 Normal segment counts per rat............................................26
5.1 Classification of seizures using seizure-at-once method..................32
5.2 Results for seizure identification using seizure-at-once method..........33


1. Introduction
In order to facilitate the development of drugs to control epileptic seizures, animal
models are needed. A necessary component in the development of these animal
models is the ability to keep track of when seizures happen. Currently, this is done by
a human expert trained to identify seizures within electroencephalographic (EEG)
data recorded from intracranial electrodes. This process requires a researcher to
review thousands of hours of EEG data plots. The development of a system capable
of automating this task would relieve the researcher from this tedious, time
consuming and error prone task.
Artificial neural networks are known to be useful in pattern recognition applications,
and have been applied to EEG analysis in areas such as disease diagnosis [12,36],
sleep stage classification [26], mental state classification [2], artifact recognition [3]
and the detection of epileptiform discharges [13,38].
The use of radial basis function neural networks in this study demonstrates that with
the proper data preprocessing, seizure identification can be very accurate. The
Fourier transform or wavelet decomposition is used to preprocess the data before
using it to train the neural network. The results are compared to feeding the
untransformed data directly to the network. The research of [12] suggests that
1


training a neural network on raw EEG data is unlikely to be successful. This study
also shows that using preprocessing methods outperforms the use of raw data.
However, a properly configured neural network trained only on raw data shows better
than expected results.
In addition to demonstrating a reliable seizure identification system, the possibility of
predicting an impending seizure before clinical onset is investigated. The period
during a seizure is known as the ictal state, while the periods of normal brain activity
between seizures are called interictal. A third state, referred to as pre-ictal, has been
proposed [19,20,30] as the period just before seizure onset. If this state can be
identified in the EEG [16,19,20,21,23,29,30,33], seizures can effectively be predicted.
Implantable devices for humans already exist that can abort a seizure using electrical
stimulation or localized on-demand drug delivery [8,19,20,30,33,37]. Combined with
reliable pre-ictal state identification techniques, these devices could eliminate the
need for constant drug treatment of a condition with intermittent symptoms [20]. At
the very least, seizure prediction could give an early warning to the 25% to 30% of
epileptics who do not respond to drug therapy [16,19,33].
As with seizure identification, appropriate data preprocessing methods improve the
accuracy of seizure prediction. Wavelet decomposition provides an effective means
to transform a window of raw data long enough to contain relevant information for
seizure prediction into a vector short enough to be generalized by a neural network.
2


Windows of different lengths are used in combination with different levels of wavelet
decomposition. Although seizures could not be reliably predicted in all cases, the
limited success in identifying a pre-ictal period demonstrates that the possibility of
more accurate seizure prediction exists.
3


2. Literature Review
Neural networks have been used for different EEG classification tasks with varying
degrees of success. Ultimately, the potential success of a particular classification task
is dependent on the existence of the appropriate information within the raw EEG data
for that task. Assuming the necessary information is contained within the raw data,
the success of a neural network classification method is largely based on
preprocessing. The raw data must be converted into a vector of manageable size,
while retaining as much of the relevant information as possible. Transformations,
windowing, sampling or some combination are typically used with time-series data
from the EEG. This section provides an overview of some methods that have been
investigated.
The non-stationary characteristic of EEG data is an issue that must be addressed.
Typically this is done by limiting the data to a small window so that the data analyzed
can be assumed to be stationary. Anderson et al. [2] found that a window of one
quarter second was as good as a two second window for distinguishing between a
subject performing mental arithmetic and a baseline mental state. Using a fully
connected feed-forward neural network with autoregressive parameters for spectral
density resulted in an accuracy of 74% for mental state classification, and
4


outperformed the use of the raw EEG data directly. They suggest the averaging of
results over several successive windows to improve accuracy.
In 1997, Hazarika et al. applied a Lemarie wavelet transform to one second segments
of EEG data as a preprocessing step to train a neural network to classify patients as
normal, having schizophrenia or having obsessive compulsive disorder (OCD) [12].
Only the two largest coefficients of the wavelet transform of each segment were used
from each level of decomposition, resulting in a substantial loss of information. Their
network correctly classified only 66% of normal cases and 71% of schizophrenia
cases. Classification results for OCD were described as poor. Still, these results
were better than those obtained for the same task using an autoregressive
transformation of the raw EEG data. This may indicate that classification of these
conditions is not a task that can be effectively performed from EEG data alone (the
necessary information is not present in the EEG signal), or that other factors, such as
not controlling for different levels of drug treatment, had a more substantial impact on
effectiveness than the authors believed.
Visual inspection of wavelet transformed EEG from an epileptic patient is used in [1].
They deem the Daubechies wavelet decomposition superior to the short time Fourier
transform for its ability to localize and identify the transient signals associated with
epileptic discharges. Daubechies wavelets are also used in [10], along with several
other raw data transformations, including fractal dimension estimation. Three
5


algorithms are compared for the estimation of fractal dimension. One is found to be
more reproducible than the others, but no quantitative results are provided. It is
pointed out in [11] that estimates of fractal dimension of EEG data are almost certain
to be incorrect; however, relative differences between estimates using the same
method may be useful in distinguishing between states. In [10], EEG segments of
tens of seconds are considered to be stationary, and windows between one quarter and
45 seconds are used. The rate of epileptic discharges in the form of spikes is
investigated using the wavelet transform and found to be uncorrelated with seizure
onset. Another important observation given here is that a seizure detector based on
any of these methods will likely require patient-specific tuning, as with speech- and
handwriting-recognition systems. This observation is echoed in [20].
Seizure prediction is closely related to seizure identification, and many of the same
techniques have been applied. Estimates of the fractal dimension of the EEG have
been investigated as a possible identifier of the pre-ictal state [6,16,19,21,23], based
on the observation that brain activity involved in epileptogenesis near the seizure
focus becomes more correlated, while electrical activity not associated with the
developing seizure decreases [11]. The authors of [20] observed a decrease in
dimensionality in human EEG data up to four hours before seizure onset. The
nonlinear pattern recognition capabilities of neural networks [28] make them a natural
6


choice to investigate the possibility of the existence of a pre-ictal state without
requiring an explicit estimation of fractal dimension.
A review of seizure prediction research was published in 2002 that qualitatively
compared many methods [19]. Research in the areas of time-domain analysis,
frequency-domain analysis, nonlinear dynamics, and intelligent systems was
described, but no quantitative results were given.
neural networks, to distinguish between pre-ictal and normal states without
articulation of specific rules was acknowledged. These methods were dismissed for
their inability to provide insight into the nature of the pre-ictal state. If the goal is
investigation of the causes of seizures, then the black-box nature of neural network
methods is a disadvantage. If the goal is to know when a seizure is about to happen
in order to prepare for it or stop it altogether, then it is necessary only to know that it
is coming, and the disadvantage of neural network methods becomes a non-issue.
However, because neural networks are characteristically non-linear, the output is not
bounded in some cases. If an input state is encountered that is outside the states
represented in training, the output is not predictable, and care must be taken in
dealing with these results.
The first study of pre-ictal EEG to use wavelet decomposed data with a neural
network was published in [30]. Their method used recurrent neural networks with
one or two inputs, ten or 15 recurrent hidden neurons and one output neuron.
7


Daubechies wavelets were used to decompose the raw data, and only data from the
most relevant intracranial probe was used. Separate networks were trained with raw
data, wavelet approximation coefficients and detail coefficients. Four seizures from
one patient were analyzed, with 95 seconds of data immediately preceding seizure
onset used as pre-ictal, and the 95 seconds immediately preceding that used as normal
baseline data. Segments of ten to 20 consecutive training pairs were chosen randomly
from the pre-ictal or normal periods and used to train the network. The best
prediction results were obtained using wavelet detail coefficients. As there were only
four seizures used in the study, the criterion used to evaluate accuracy was visual
inspection of a plot of network output when presented with 170 seconds of data
immediately preceding a previously unseen seizure. The same method was used with
extra-cranial EEG data, with less accurate results. This difference is attributed to
attenuation of high frequency components of the signal by the skull and scalp.
The windowing and wavelet method used in this study is similar to that described in
[30]. Several additional wavelet bases and levels of decomposition are used here.
Those results are then extended from pre-ictal state identification for a few isolated
seizures to seizure prediction for several full days of data with frequent seizure
activity.
8


3. Data
Data used here were furnished by Dr. Andrew White of the University of Colorado
Health Sciences Center as part of an unpublished study.
3.1 Raw Data
The data analyzed consists of approximately 50 billion data points taken from raw
EEG readings acquired from three channel radiotelemetry units of nine rats. Five rats
(rats 4-8) were treated with kainate to induce seizures. Rats 1-3 and 9 are controls.
No data were processed for rats 2 and 3. More than 100 days of data were recorded at
a sampling rate of 250 Hz. Separate files were used to store data for each day and
each rat.
3.2 Meta Data
In addition to the raw EEG data, an Excel spreadsheet containing the time of day and
duration of 2462 seizures was provided. Seizure times are recorded as the nearest
minute before the onset of seizure activity, so the actual start of the seizure could be
up to 59 seconds, or 14750 data points, away from the nominal start time.
9


4. Methods
Two methods are used for distinguishing between segments of EEG recordings
containing seizures and those containing only normal data. The first method attempts
to identify the entire seizure at once using 230 second segments of data. The second
method examines several consecutive data slices, where each slice is a few seconds in
duration. The second method is then modified to identify pre-ictal data.
4.1 Artificial Neural Networks
An artificial neural network is a mathematical model inspired by the biological neural
networks of a living brain, capable of learning from examples and generalization
beyond the examples used in training. The network is composed of individual
interconnected units known as neurons. The neurons are usually arranged in layers,
with an input layer, one or more hidden layers which are not directly connected to the
outside world, and an output layer. Each neuron takes input from other neurons in the
network and calculates its output based on a transfer function. The outputs from the
individual neurons are then combined at the output layer to produce the total network
response to a given input vector.
10


Supervised learning is achieved by presenting training patterns to the input layer and
adjusting the connection weights between neurons to minimize the difference
between the total network response at the output layer and the target response for
each training pattern.
4.1.1 Radial Basis Function Neural Networks
The radial basis function (RBF) neural networks used in this study were chosen for
their pattern recognition capabilities and training speed. An RBF network consists of
an input layer, a single hidden layer of radial basis neurons, and a linear output layer.
The RBF network architecture is shown in Figure 4.1.
Input Layer Hidden Layer Output Layer
Figure 4.1. RBF Network Architecture. The input vector X produces the hidden layer output vector a
and the network response L
11


Weight vectors of the neurons in the hidden layer are set to a representative subset of
the input vectors used for training. The processing done by a single RBF neuron
consists of calculating the Euclidean distance from its weight vector to the input
vector and passing the bias adjusted result through the transfer function.
The radial basis transfer function used here is a Gaussian, given by equation 4.1.
f(x) = e^{-x^2}    (4.1)
A plot of this function is given in Figure 4.2.
Figure 4.2. Radial basis transfer function.
12


The weight vector of an RBF neuron is also referred to as its location or center point.
Input vectors close to the center point of an RBF neuron will cause that neuron to
generate an output value near one. The neuron's output decreases toward zero
as the input vector gets farther from the neuron center. The output a_n of hidden-layer
neuron n is calculated in terms of the P-dimensional input vector x in equation 4.2,
where W_n is the weight vector of neuron n, b is the bias and tf_n is the Gaussian
transfer function.
a_n = tf_n\left( b \sqrt{\sum_{p=1}^{P} (x_p - W_{np})^2} \right)    (4.2)
The bias of the hidden layer is calculated as:
b = \sqrt{-\log(0.5)} / \text{radius}    (4.3)
Supervised learning is achieved by presenting the network with a set of training
vectors and adjusting the weights and bias of the linear output layer to minimize the
mean absolute error between target and actual network output. This is done by
solving equation 4.4 in terms of the output from the hidden layer, where k is the
number of training vector/target pairs.
t_k = b + a_{k1} w_1 + a_{k2} w_2 + \dots + a_{kn} w_n    (4.4)
13


The RBF networks used in this study were built using functions from the Matlab
Neural Network toolbox.
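A minimal Matlab sketch of the computations in equations 4.1 through 4.4, written out directly rather than with the toolbox functions, is given below; the variable names W, radius, Xtrain and T are assumptions introduced for this example.

    % Minimal RBF sketch following equations 4.1-4.4 (illustrative only).
    % W      : n-by-P matrix of neuron centers (one center per row)
    % radius : scalar neuron radius (spread)
    % Xtrain : k-by-P matrix of training vectors, T : k-by-1 target vector
    b = sqrt(-log(0.5)) / radius;                 % hidden-layer bias, equation 4.3
    k = size(Xtrain, 1);
    n = size(W, 1);
    A = zeros(k, n);                              % hidden-layer responses
    for i = 1:k
        d = sqrt(sum((W - repmat(Xtrain(i,:), n, 1)).^2, 2));  % distances to centers
        A(i,:) = exp(-(b * d).^2)';               % Gaussian transfer function, eqs. 4.1-4.2
    end
    w = [ones(k,1) A] \ T;                        % linear output layer, equation 4.4; w(1) is the output bias
    y = [ones(k,1) A] * w;                        % network response to the training set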
4.1.1.1 RBF Network Parameters
Several parameters must be specified when building an RBF neural network. Useful
heuristics exist for choosing reasonable values for some parameters, but much of the
design process involves trial and error. One new heuristic is proposed for choosing a
good starting value for neuron radius, greatly speeding up network design times.
4.1.1.1.1 Neuron Radius
One heuristic for selecting a value for the neurons' radii, also referred to as spread, is
given in the Matlab Neural Network Toolbox documentation [9]:
...choose a spread constant larger than the distance between adjacent
input vectors, so as to get good generalization, but smaller than the
distance across the whole input space.
Taking the minimum and maximum values of the distance matrix of input vectors for
the FFT feature set described in section 4.2.4.1 gives a range of [851, 1211900],
narrowing the search down to a still huge search space of 1.2 × 10^6.
A better heuristic is based on the amount of variation in the set of input training
vectors. The base spread for a given set of training vectors is calculated as the mean
14


of the distance matrix of those vectors. The formula used to calculate the base spread
is given by equation 4.5, where x_i and x_j are training vectors, k is the number of
training vectors and P is the dimension of the data.
\text{Base Spread} = \frac{2 \sum_{i=1}^{k} \sum_{j=i+1}^{k} \sqrt{\sum_{p=1}^{P} (x_{ip} - x_{jp})^2}}{k^2 - k}    (4.5)
The base spread is multiplied by a spread factor to determine the neuron radius. The
best values for neuron radius were determined experimentally to be between one third
and two thirds of the value of base spread. This observation held across multiple
diverse data sets. Figure 4.3 shows a plot of the accuracies of a neural network
trained to distinguish between edible and inedible mushrooms from 8416 samples of
22 dimensional data [18]. Figure 4.4 shows a similar plot of accuracies for the FFT
feature set.
15


Figure 4.3. Accuracies for mushroom data at a range of spread factors. Plots of accuracies using 4, 8,
14, 20 and 30 neurons are shown.
Figure 4.4. Accuracies for FFT feature set over a range of spread values.
The best accuracies in Figures 4.3 and 4.4 occur in the relatively narrow range of one
third to two thirds of the value of base spread for the respective data sets. Using this
heuristic with the same data set as above results in a search space of 3.9 × 10^4, a
16


reduction of two orders of magnitude. The width of the space necessary to search
using the Matlab heuristic is approximately equal to the width of the entire plot of
Figure 4.4. The base spread heuristic proposed here is more useful than any other
neuron radius heuristic we have found.
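Since equation 4.5 is just the mean of the pairwise distance matrix, the base spread and the reduced radius search range can be computed in a few lines of Matlab. The sketch below assumes Xtrain is the k-by-P matrix of training vectors and that the Statistics Toolbox function pdist is available.

    % Base spread (equation 4.5): mean pairwise distance between training vectors.
    baseSpread = mean(pdist(Xtrain));
    % Restrict the radius search to roughly one third to two thirds of base spread.
    radiusCandidates = baseSpread * (1/3 : 1/30 : 2/3);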
4.1.1.1.2 Neuron Location
Each neuron in an RBF neural network has an associated n-dimensional location in
the n-dimensional input space, where n is the dimension of the input vectors. In this
study, these location vectors are determined using an unsupervised learning technique.
All training vectors are clustered using the k-means clustering algorithm implemented
in the Matlab Statistics toolbox. The number of neurons used in the network is the
same as the number of clusters. Cluster centers are determined as the means of the
vectors in each cluster, and these values are used as RBF neuron centers.
Another method commonly used to choose neuron centers is the greedy strategy
implemented in the Matlab RBF design function newrb. Using this function, the
network is built incrementally, adding neurons until some error boundary is reached.
In each iteration the algorithm chooses the location of the new neuron as the input
vector which will minimize the total error of the network when used as a neuron
center.
The effectiveness of each of these two methods is compared in the results section.
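The two alternatives can be sketched in Matlab as follows; kmeans and newrb are the Statistics and Neural Network Toolbox functions named above, while Xtrain, T and radius are assumed names carried over from the earlier examples, and the error goal and display frequency passed to newrb are placeholder values.

    % Method 1: k-means cluster centroids as RBF neuron centers.
    numNeurons = 13;                              % hidden-layer size used in this study
    [idx, centers] = kmeans(Xtrain, numNeurons);  % centers is numNeurons-by-P
    W = centers;                                  % use the centroids as neuron locations

    % Method 2: greedy incremental design with newrb (inputs as columns).
    net = newrb(Xtrain', T', 0.0, radius, numNeurons, 1);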
17


4.1.1.1.3 Number of Hidden Layer Neurons
Ideally, we would like to cover the entire input space with overlapping neurons, so
any input vector would generate a response from several neurons, but this strategy is
rarely feasible. With more neurons, the radius of each one could be decreased, and
the network would be more specific. If taken too far, this could result in the network
memorizing each input vector and not generalizing well. Trials using neuron counts
from three to 200 were conducted. When using a value for the neuron radius obtained
using the heuristic outlined above, 13 hidden layer neurons were found to be
sufficient for good network performance on this data set.
4.2 Preprocessing
The first step in preparing the raw EEG data for presentation to the neural network
was to locate the relevant segments. A C++ program was developed to extract
relevant segments from the raw data files based on the seizure times given in the
Excel file. Extracted segments are 230 seconds long, slightly longer than the longest
seizure duration in the study. The total number of data points per extracted segment
is: 230 seconds × 250 points per second × 3 channels = 172500 points per
segment.
18


Seizure times were checked against the recording ranges of the raw data files and
extracted segments were checked for inconsistencies and seizures that were truncated
by the extraction process, leading to the rejection of 101 segments. This left 2361
seizure segments for training and testing.
In this study, there are two normal (non-epileptic) rats and five abnormal (epileptic)
rats. Obviously, all seizure samples come from the epileptic rats. The easiest way to
generate samples of normal EEG signals would be to take segments from random
times from only the normal rats. However, if this method were used it is possible that
the neural network could learn to identify features specific to each rat, distinguishing
between individuals rather than between normal and abnormal EEG segments. This
could result in highly accurate, but completely meaningless results. Therefore it was
necessary to include segments of normal EEG from epileptic rats as well as from
normal rats. Normal segments were chosen randomly from the raw data with the
restriction that each segment start and end time must not be within five minutes of the
start or end of any previously extracted segment. Due to the frequency of seizure
occurrences on some days, five minutes was the maximum amount of time possible
between extracted segments. Longer intervals between segments were used when
possible. One normal segment was extracted from the same day for each abnormal
segment, so the total number of normal segments was equal to the number of seizure
19


segments for each day and for each rat. An additional 336 normal segments were
extracted from each of the non-epileptic rats.
4.2.1 Windowing
In order for a neural network to be able to generalize from a set of training vectors,
the number of training samples available must be much greater than the length of
each sample. If the length of training vectors used is greater than the number of
samples available for training, the network will not generalize well. Given the
extracted segment length of 172500 points, 2361 seizures and 3106 normal samples,
the length of the vector presented to the network must be significantly reduced. The
most straight-forward technique is to chop the vector into smaller segments, taking
only a few seconds, or fractions of a second, worth of raw data, rather than the whole
segment. This technique is called windowing.
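As a simple illustration, the sketch below cuts one channel of raw data into fixed-length, non-overlapping slices; the variable channel and the 4.8 second slice length are assumptions chosen for the example.

    % Cut one channel of 250 Hz EEG into non-overlapping windows;
    % each column of 'slices' holds one windowed slice.
    fs        = 250;                              % sampling rate (Hz)
    sliceLen  = round(4.8 * fs);                  % 4.8-second slices (1200 points)
    numSlices = floor(length(channel) / sliceLen);
    slices    = reshape(channel(1:numSlices*sliceLen), sliceLen, numSlices);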
4.2.1.1 Indexing
Given the fact that the known seizure times are rounded to the nearest minute before
seizure onset, combined with the variable duration across seizures, determining where
to take a windowed slice from within the 230 second extracted segment is somewhat
problematic in practice. In order to determine seizure start and end times, and where
it is appropriate to take data slices from, a Matlab based segment viewer was
20


developed to display all three channels of a given seizure segment using variable
scales and allowing navigation through the full 230 seconds of extracted data. In this
way, 555 seizures were manually annotated with start and end times at a resolution of
one second. The number of seizures indexed for each rat is given in Table 4.1.
Rat Indexed Seizures
4 57
5 38
6 107
7 204
8 149
Total 555
Table 4.1. Indexed seizures per rat.
Indexing was not necessary for normal segments, as any slice should be as good as
any other within the same segment. Accordingly, normal slices were chosen
randomly from within each normal segment.
4.2.2 Fourier Transform
The Fourier transform is used to analyze the frequency spectrum of a signal by
decomposing the signal into different sinusoids. This yields a view of the frequency
components of the signal, but results in a loss of information in the time domain. In
order to preserve some time information, the transform is often applied to a moving
window of the data. This is known as the short-time Fourier transform. In this study,
21


the short-time fast Fourier transform was used, as implemented by the Matlab
function fft. For a vector x of length P, the transformed vector X is given by equation
4.6, where j is the square root of -1.
X_k = \sum_{p=1}^{P} x_p \, e^{-j 2\pi (k-1)(p-1)/P}, \quad 1 \le k \le P    (4.6)
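In Matlab this amounts to applying fft to each windowed slice. Keeping the magnitudes of the first half of the spectrum, as in the sketch below, is one common way to form a feature vector; that detail is an assumption of the example rather than something specified in the text.

    % Frequency-domain feature for one windowed slice (equation 4.6).
    X          = fft(slice);                        % 'slice' is one column from the windowing step
    featureFFT = abs(X(1:floor(length(X)/2)));      % magnitudes up to the Nyquist frequency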
4.2.3 Wavelet Transform
The wavelet transform provides another view of a signal's frequency content. Rather
than using the fixed width window of the short-time Fourier transform, wavelet
decomposition uses a scaled window. A narrow window is used to capture high
frequency data, while wider windows are used for the lower frequencies. Instead of
the sinusoidal bases of the Fourier transform, a wide range of basis functions are
available for use with wavelet decomposition. This study concentrated on the
Daubechies base wavelets developed by I. Daubechies [7], and used in [30] for
seizure prediction with recurrent neural networks. Matlab implements 43 Daubechies
wavelet bases, referred to as db1 through db45. Decomposition is achieved by
comparing the original signal to scaled and shifted versions of the base wavelet and
generating coefficients indicating how well each version of the base wavelet
represents the original signal.
22


Wavelet decomposition can be used to separate a signal into a low frequency
approximation of the original signal and high frequency details. Applying a second
decomposition to the approximation coefficients obtained from the first
decomposition results in a level two decomposition of the original signal. This may
be carried out multiple times, depending on the length of the original signal and the
wavelet base used.
The discrete wavelet transform of the original signal f(t) is given by equation 4.7.
f(t) = \sum_{k} c_{Jk} \varphi_{Jk}(t) + \sum_{j=J}^{\infty} \sum_{k} d_{jk} \psi_{jk}(t)    (4.7)

where c_{jk} are the scaling coefficients, d_{jk} are the wavelet coefficients, \varphi is the scaling
function and \psi is the basis function, in this case the Daubechies base. In the right
hand side of this equation, the first term represents the approximation of the original
signal, while the second term contains the details [1].
Approximation coefficients and detail coefficients were used separately to test their
abilities to isolate useful features for seizure identification and prediction. Neural
networks were trained using either approximation or detail coefficients and their
accuracies were compared.
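A sketch of the decomposition of a single slice, using the Wavelet Toolbox functions wavedec, appcoef and detcoef with the Daubechies 2 base at level three, is shown below; the slice variable and the particular level are carried over from the earlier examples.

    % Level-3 Daubechies 2 decomposition of one raw data slice.
    level   = 3;
    [C, L]  = wavedec(slice, level, 'db2');   % full decomposition structure
    approx  = appcoef(C, L, 'db2', level);    % low-frequency approximation coefficients
    details = detcoef(C, L, level);           % level-3 detail coefficients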
23


4.2.4 Input Vector Construction
As discussed in sections 3.2 and 4.2.1.1, it was not known precisely where within a
segment a seizure began or ended. Two overall strategies were used to overcome this
problem. The first method used a moving window and transformations covering the
entire 230 second segment, while the second method relied on being able to apply
transformations to individual short slices of data guaranteed to be within a seizure.
4.2.4.1 Seizure-At-Once Method
The first method made no attempt to explicitly localize the seizure within each
segment. Each segment was divided into 30 equal slices across the three channels of
data. Some transformation was then applied to each slice. The values were averaged
within each slice, resulting in 30 values per channel for each data segment, or a 90
dimensional vector.
The transformations applied to each slice of a full segment were the Fourier transform
or a level three wavelet decomposition using the Daubechies 2 base wavelet. Wavelet
approximation and detail coefficients were used separately. In an attempt to capture
the general shape of a seizure a simple method taking the mean of the absolute values
for each slice with no other transformation was also used for comparison. These
24


feature sets are referred to as FFT, Wavelet Approximation, Wavelet Details and
Mean Raw respectively. Figure 4.5 shows the result of applying these
transformations to a normal segment and a seizure segment.
Figure 4.5. Transformation examples. Row 1: raw data for one normal and one seizure segment.
Rows 2-5: FFT, Wavelet Approximation, Wavelet Details and Mean Raw transformations.
Transformations in column 1 are applied to a normal segment; transformations in column 2 are applied
to a seizure segment.
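For the Mean Raw feature set, the construction of the 90-point vector can be sketched as follows; segment is assumed to be a 57500-by-3 matrix holding one 230 second, three channel extracted segment, and the FFT and wavelet variants would apply the corresponding transform to each slice before averaging.

    % Mean Raw feature vector: 30 slices per channel, mean absolute value
    % per slice, three channels -> 90 values per segment.
    numSlices = 30;
    sliceLen  = floor(size(segment, 1) / numSlices);
    feature   = zeros(1, 3 * numSlices);
    for ch = 1:3
        for s = 1:numSlices
            idx = (s-1)*sliceLen + (1:sliceLen);
            feature((ch-1)*numSlices + s) = mean(abs(segment(idx, ch)));
        end
    end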
A neural network could easily be trained to distinguish between the 90 point vectors
in the left column of Figure 4.5 from those in the right column. These examples were
25


chosen to illustrate how these preprocessing methods looked in the best case, and
unfortunately it was not always this clear.
Feature vectors from all rats were pooled so that all training and testing vector sets
included examples from every rat. Tables 4.2 and 4.3 give the total numbers of
seizures and normal segments used from each rat.
Rat Seizures
4 86
5 38
6 414
7 998
8 820
Total 2361
Table 4.2. Seizure counts per rat
Rat Normal Segments
1 336
4 112
5 56
6 420
7 1006
8 840
9 336
Total 3106
Table 4.3. Normal segment counts per rat
4.2.4.2 Short Slices
The second method took advantage of the added information obtained by indexing
seizure start and end times within each of 555 segments, as described in section
4.2.1.1. Slices of raw EEG data from less than one second to approximately 22
seconds in duration were extracted from random locations between the indexed
seizure start and end times. The use of shorter slices allowed several vectors to be
obtained from each indexed seizure. This also eliminated the need to average over
26


multiple values and consequent loss of information, as was necessary to reduce the
length of the vectors representing the full 230 second segments.
This method resulted in the availability of several thousand seizure examples for the
neural network, depending on the length of slice used, from only 555 indexed
seizures. Each slice was handled as a separate example to be transformed and
presented to the neural network, although slices from the same seizure were grouped
for neural network training or testing purposes. All slices from a given seizure were
allocated together to either the training pool or the testing pool.
In order to have a fair comparison of neural networks, it is necessary to use vectors of
the same length. Applying wavelet decompositions with different bases and at
different levels to a constant width signal results in transformed vectors of different
lengths. A minimum vector length is necessary in order to perform a valid wavelet
decomposition at a given level with a given base wavelet. Using the shortest vector
necessary for a valid decomposition at levels one through four with Daubechies base
wavelets results in a maximum transformed vector length of 152 points. Level five
decomposition requires 5441 points (21.8 seconds), and yields a 171 point
transformed vector. Raw data slices of different widths were used to maintain a
constant transformed vector length of 152 or 171 points. In this way, the use of slices
over the range of one to 22 seconds can be compared.
27


The RBF neural network was trained using slices of wavelet transformed or raw data
from normal and seizure segments and validated using a testing set of vectors not
used in training. Network responses above the threshold of 0.5 indicate that the
network believes the input vector causing this response came from within a seizure.
This trained network was then tested further by feeding it a full day of data, slice by
slice. False positives were minimized by applying a simple heuristic, such as
requiring that eight of ten consecutive network responses be above the threshold
before declaring a seizure. The choice of this heuristic varied based on the duration
of the slice used. This is similar to the method of averaging network responses over
several consecutive slices proposed in [2].
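The refinement heuristic reduces to a short post-processing pass over the per-slice network responses; the sketch below assumes responses is a vector of network outputs, one per consecutive slice.

    % Declare a seizure only when at least 8 of the last 10 consecutive
    % per-slice responses exceed the 0.5 threshold.
    aboveThresh = responses > 0.5;
    seizureFlag = false(size(responses));
    for i = 10:length(responses)
        seizureFlag(i) = sum(aboveThresh(i-9:i)) >= 8;
    end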
The networks trained using these short slices were specific to each rat. Each network
was trained to recognize seizures from one rat only. This improved accuracy, but
decreased generalization to other rats. For example, the networks trained to recognize
or predict seizures for rat four did not perform well when presented with data from rat
seven. Part of the strength of this method comes from the ability to tune it to each
individual rat.
4.2.4.3 Pre-Ictal Slices
The problem of seizure prediction can be approached in the same way as the problem
of seizure identification. The only difference is the location of a data slice of interest
28


relative to seizure onset. Short slices for pre-ictal state identification were obtained
similarly to those for ictal slices, but were extracted from between two minutes and
one second before seizure onset.
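Extraction of a pre-ictal slice is then just index arithmetic around the annotated onset; onsetIdx (the sample index of seizure onset) and channel are assumed names, and the 4.8 second slice length is carried over from the earlier examples.

    % Draw one pre-ictal slice from between two minutes and one second
    % before the annotated seizure onset.
    fs       = 250;
    sliceLen = round(4.8 * fs);
    earliest = onsetIdx - 120*fs;                    % two minutes before onset
    latest   = onsetIdx - 1*fs - sliceLen;           % latest allowed slice start
    startIdx = earliest + floor(rand * (latest - earliest));
    preictal = channel(startIdx : startIdx + sliceLen - 1);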
29


5. Results
5.1 Neuron Locations
Several tests of the two neuron location determination methods described in section
4.1.1.1.2 showed that using k-means cluster centers as RBF neuron centers
outperforms the Matlab newrb method. Figure 5.1 shows a representative example
using 20 neurons and the Fourier transformation applied to 230 second segments.
Figure 5.1. Comparison of neuron location methods.
The neuron-at-cluster-center method was used throughout the rest of this study.
30


5.2 Seizure Identification
Initial results were obtained using the full 230 second segments described in section
4.2.4.1. Better results were obtained later using the short slices method of section
4.2.4.2.
5.2.1 Seizure-At-Once Method
Several trials were run using different combinations of neuron count and radius. The
most effective neuron radius value was determined for each feature set based on the
accuracy of the resulting trained network when tested using another data set. All
results in this section were obtained using 5-fold cross-validation. Each feature set
was randomly divided into fifths, with an equal proportion of feature vectors from
each rat in each fifth. The network was trained with four of the five fifths and tested
on the remaining fifth. This process was repeated for each combination of four fifths,
retraining the network and using the remaining fifth as the testing set. The results of
the five trials were then averaged.
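A simplified sketch of the cross-validation loop is given below; trainRBF and testRBF are hypothetical helpers standing in for the network construction and evaluation steps, and the per-rat stratification used in the study is omitted for brevity.

    % 5-fold cross-validation: train on four fifths, test on the held-out
    % fifth, repeat for every fold and average the accuracies.
    k    = 5;
    nVec = size(features, 1);               % features: nVec-by-P, labels: nVec-by-1 (0/1)
    fold = mod(randperm(nVec), k) + 1;      % random fold assignment, values 1..5
    acc  = zeros(k, 1);
    for f = 1:k
        trainIdx = (fold ~= f);
        testIdx  = (fold == f);
        net      = trainRBF(features(trainIdx, :), labels(trainIdx));  % hypothetical trainer
        pred     = testRBF(net, features(testIdx, :));                 % hypothetical tester
        acc(f)   = mean((pred(:) > 0.5) == labels(testIdx));
    end
    meanAccuracy = mean(acc);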
Tables 5.1 and 5.2 give the results of using the k-means cluster centers as neuron
centers method for 20 and 200 neurons. Table 5.1 presents the results in terms of the
numbers of true positives, true negatives, false positives and false negatives. Table
31


5.2 presents the same information in terms of sensitivity, specificity and accuracy,
which are defined as follows:
Sensitivity % = 100 × True Positives / (True Positives + False Negatives)    (5.1)
Specificity % = 100 × True Negatives / (True Negatives + False Positives)    (5.2)
Accuracy % = 100 × (True Positives + True Negatives) /
(True Positives + True Negatives + False Positives + False Negatives)    (5.3)
Feature Set Neuron Count True Positives True Negatives False Positives False Negatives
FFT 20 805 1209 140 207
FFT 200 840 1225 131 165
Wavelet Details 20 789 1250 90 232
Wavelet Details 200 847 1264 79 171
Wavelet Approx. 20 802 1133 224 202
Wavelet Approx. 200 855 1097 221 188
Mean Raw 20 746 1090 264 261
Mean Raw 200 760 1179 162 260
Table 5.1. Classification of seizures using seizure-at-once method.
32


Feature Set Neuron Count Sensitivity Specificity Accuracy
FFT 20 79.5 89.6 85.27
FFT 200 83.6 90.3 87.45
Wavelet Details 20 77.3 93.3 86.38
Wavelet Details 200 83.2 94.1 89.40
Wavelet Approx. 20 79.9 83.5 81.97
Wavelet Approx. 200 82.0 83.2 82.67
Mean Raw 20 74.1 80.5 77.77
Mean Raw 200 74.5 87.9 82.11
Table 5.2. Results for seizure identification using seizure-at-once method.
The results in Table 5.2 show that applying the wavelet or FFT transforms to the raw
data before using it to train the network improves accuracy. Comparing networks
using only 20 neurons, the wavelet approximation coefficients give much better
results than using raw data. When neuron count is increased to 200, results using raw
data are comparable to using the wavelet approximation. The use of wavelet detail
coefficients with a relatively high neuron count produces the best results here with
89.5% accuracy. While these results are promising, several seizures would still go
undetected and the false positive rate would be significant.
5.2.2 Short Slices
RBF neural networks trained using short slices for one rat at a time outperformed
those trained using whole seizures for all rats. In this section, half of the available
33


vectors were used for training the neural network, with the other half used for testing.
Data slices from all three channels were used for seizure identification. For each trial,
raw data was extracted from random starting positions within the ictal or interictal
data, decomposed into wavelet coefficients, and used to train the neural network.
Another data set was used for testing, and then the entire process was repeated.
Results using wavelet transformed data given in this section are averaged over two
trials. Figure 5.2 compares the results of using wavelet detail and approximation
coefficients at decomposition levels one through four. A histogram of 146 trials
using 0.6 second slices of raw data is also shown for comparison.
Figure 5.2. Per-slice seizure identification accuracies using short slices. Using a) wavelet detail
coefficients (blue) and approximation coefficients (red) at levels 1-4, b) raw data with 152 data points
or 0.6 seconds per slice.
34


The histogram of raw data accuracies in Figure 5.2 shows that an average per-slice
accuracy of 87% is achievable using untransformed narrow windows of raw data.
Results using wavelet decompositions clearly show that detail coefficients do a better
job of extracting relevant information from longer slices of raw data than
approximation coefficients. The best results for rat four come from using the details
of a level three decomposition, with an average per-slice accuracy of 95%. This
corresponds to a window width of 1000 to 1300 points. Increasing the width of the
window beyond 1300 points shows a decreased accuracy of the resulting neural
network. The optimal window width was different for each rat, but the relative
performance of approximation versus detail coefficients was consistent for seizure
identification.
Once the most accurate preprocessing parameters are identified for a given rat, a
network can be trained and applied to longer stretches of data, slice by slice. Figure
5.3 shows three channels of raw data from rat four for approximately four minutes
with network responses to 4.8 second slices. The neural network was trained with
detail coefficients at level three using wavelet base db1. The seizure is clearly
identified by the network responses above the threshold of 0.5.
35


Figure 5.3. Seizure identification for rat 4. Three channels of raw data (blue), with network responses
to 4.8 second slices (red circles).
The use of shorter time slices allows for the use of a refinement phase, which was not
possible when the network was trained to identify the whole seizure at once. Several
consecutive slices can be compared, as with the heuristic given in section 4.2.4.2.
Figure 5.4 shows the use of three heuristics for seizure identification for 24 hours of
data, using the same neural network as above. The heuristics used required eight,
36


nine or ten of ten consecutive network responses above the threshold before declaring
a seizure.
Rat 4, 11/09/03: panels for the 8 of 10, 9 of 10 and 10 of 10 heuristics.
Figure 5.4. Three heuristics for seizure identification over 24 hours. Circles indicate actual observed
seizures. Lines are seizures identified by the neural network. X-axis is time in seconds from midnight.
As can be seen from Figure 5.4, combining the 95% per-slice accuracy with an
appropriate refinement heuristic results in a highly accurate seizure detector. The
heuristic requiring all ten consecutive network responses to be above the threshold
was too strict in this case, and the first seizure was missed for the day shown in
37


Figure 5.4. No false positives or false negatives were found when testing on five full
days of data using the eight of ten heuristic at the top of Figure 5.4. Plots using the 8
of 10 heuristic for the days 11/7/03 and 11/8/03 are shown in figure 5.5. No seizure
occurred on 11/5/03 and 11/6/03 for rat 4, and none were detected by the network.
Figure 5.5. Rat 4 seizure identification for two days. Circles indicate actual observed seizures. Lines
are seizures identified by the neural network. X-axis is time in seconds from midnight.
5.3 Seizure Prediction
Because seizures often originate from a single epileptic focus at a different location
within the brain for different individuals, it is reasonable to suspect that the probe
closest to the focus would pick up seizure related signals first. If a pre-ictal state can
be identified, it is likely that data from one probe will be more useful than the others.
For rat six, channel two shows much better predictive capabilities than the other
38


channels. For rats seven and eight, the most relevant channel is less clear. This is
shown in Figure 5.6.
Figure 5.6. Seizure prediction on different channels. Accuracy vs. wavelet base at level 1
decomposition. Some data channels perform better than others.
For seizure prediction, wavelet approximation coefficients perform better than detail
coefficients, indicating that relevant pre-ictal information is contained in the lower
frequencies. Figure 5.7 illustrates this with a representative example from rat six.
39


Figure 5.7. Seizure prediction for rat 6 (channel 2, decomposition levels 1-5; x-axis: window width in seconds).
Per slice prediction results are shown for rats 4 and 5 in figures 5.8 and 5.9.
Figure 5.8. Seizure prediction for rat 4 (channel 1, decomposition levels 1-4).
Figure 5.9. Seizure prediction for rat 5.
40


Given the 74% per-slice accuracy for channel two of rat six, fairly accurate seizure
prediction is possible. Figure 5.10 shows three channels of raw data for rat six with
slice classification responses from a network trained to identify the pre-ictal state
from channel two only. This seizure could have been predicted approximately four
minutes in advance.
Figure 5.10. Prediction of one seizure. Three channels, six minutes of raw data (blue) for rat 6 and
responses (circles) from a network trained for prediction on channel two.
41


Application of the same heuristic used above makes this result more clear. A
different seizure from rat six is used to illustrate this point in Figure 5.11, with two
plots showing raw data and network responses for channel two before and after
applying the heuristic for identification. The impending seizure is identified
approximately two minutes in advance in this case.
Figure 5.11. Prediction refinement with heuristic. Rat 6 channel two, 6.6 minutes of data. Circles
represent a) raw predictions, b) predictions after application of 8 of 10 heuristic.
42


When applied to a full day of data, seizure prediction is less successful. All seizures
were predicted for the three days shown in Figure 5.12, but the rate of false positives
is significant.
Figure 5.12. Seizure prediction for 24 hours, a) training, b) and c) testing.
43


6. Discussion
A new heuristic for determining radius values for an RBF neural network has been
proposed and shown to be effective. The use of this heuristic significantly reduces
network design times.
Two types of RBF neural network based seizure detectors have been implemented
and tested. One attempts to identify an entire seizure at once, while the other uses a
two stage approach by looking at several consecutive network responses to short time
slices of the data.
An average of 89% accuracy was achieved across all rats using the seizure-at-once
method combined with detail coefficients of wavelet decomposition preprocessing.
This method has the advantage of not needing to explicitly localize a seizure within
the 230 second data segment before training the network. The disadvantage of
needing to average over slices in order to reduce vector length results in a loss of
potentially relevant information, and therefore reduced accuracy.
By limiting the focus to just a few seconds of data, the short slice method
demonstrated consistent per-slice accuracies of 95% using wavelet decomposed data.
Training the networks to identify seizures for only one rat improved performance.
44


This indicates that individual rats exhibit different identifiable characteristics in their
seizure EEG. That this high level of accuracy is achievable using only half of the
available vectors for training the neural network and half for testing demonstrates the
generalization capabilities of this method.
The 87% average per-slice accuracy using a neural network trained on 0.6 seconds of
raw data is nearly as good as the best per-seizure accuracy achieved using the seizure-
at-once method. Comparing the neural network response to several consecutive slices
of raw data before declaring a seizure would likely result in a per-seizure accuracy
better than any of the seizure-at-once methods. This suggests that a neural network
using narrow windows of raw data with no other transformation could identify seizures
fairly accurately.
When a network trained to identify seizures in one rat was tested using data from a
previously unseen rat, performance was poor. This was expected, as the network
learned to identify seizure-related characteristics in the EEG that were specific to one
rat. The specific waveform characteristics of the electrical activity measured during
epileptic seizures vary from seizure to seizure within the same rat, but all have
common patterns that are learnable and recognizable for a neural network, given a
representative set of training examples. It seems feasible that, given a representative
training set from a large enough pool of individuals, a neural network could also learn
to recognize seizures in previously unseen individuals.
45


The necessity for annotating seizure start times in order to localize the relevant
sections within the raw data files is a minor drawback to this method. Only 57
seizures were used to train the neural network for seizure identification in rat four,
and it is likely that the method could work with even fewer. Visual identification of a
small subset of seizures is a much less demanding task than attempting to identify
them all this way, as is commonly done.
The results obtained for seizure prediction suggest that an identifiable pre-ictal state
exists, at least in some circumstances. Because seizure origins are localized at a
stationary point within the brain in at least some cases, it was expected that if a pre-
ictal state were identifiable, it would be more evident in data from one probe than
from the others. This was the case for three of the five rats, and was particularly
evident for rats four and six. It seems likely that the probe providing the best data for
seizure prediction was very near the seizure origin in these cases. All three channels
performed equally poorly for rat eight, possibly because no probe was near enough to
the seizure origin to detect the pre-ictal state. When intra-cranial probes are used
with humans, extra-cranial EEG data is first examined to determine appropriate probe
placement. This should ensure the availability of information from the relevant area
of the brain.
The results of [30] suggest that it is the signal's high frequency components that
contain the information relevant for seizure prediction. The results obtained here
46


contradict those findings. While the wavelet details were most useful for seizure
identification, it was the approximation coefficients that provided the most accurate
results for prediction. This may indicate differences in the electrical activity of
seizures between humans and rats.
The high rate of seizures in the rats used here complicates prediction. With seizures
occurring every few minutes for long periods, brain activity may not have time to
return fully to a normal interictal state. Further work is necessary to refine prediction
techniques to the point where they become useful in real world applications.
Another issue of interest to researchers studying epileptic seizures is the spike. A
spike is a sudden short burst in the EEG that is easily identifiable by a human expert.
There is usually a spike just before seizure onset, but there can also be many spikes
during the interictal period that do not immediately precede a seizure. Given the
ability to localize short time transient signals, wavelet decomposition seems a natural
choice for preprocessing data for use with a neural network classifier to track
interictal spikes. This was found to be effective in [10].
The methods investigated here do not give a formula for building a seizure detector or
predictor directly, but rather provide a framework that can be applied to any EEG
data set. Appropriate parameters for application within the framework can then be
determined in a fairly mechanical way for a given data set. The parameters that must
47


be specified consist of the level of wavelet decomposition and width of the window
applied to the raw data. This could be used with other animal models or with EEG
data from human subjects.
48


7. Conclusions and Future Work
Identification of an ictal state from EEG data is possible using an RBF neural network.
The use of half second windows of raw data as network input demonstrates the ability
of the neural network to learn differences in the patterns of ictal and interictal EEG
data without explicit feature extraction. Wavelet decomposition of the narrow
window of raw data improves performance. Transformation of a wider window, up
to about five seconds, improves performance further. The ability of wavelet
decomposition to transform five seconds of raw data into a vector of manageable
length without substantial loss of relevant information makes it an effective tool for
preprocessing EEG data.
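The degree of compression involved can be seen in the following minimal sketch, which assumes the PyWavelets library, a db4 wavelet, and a hypothetical sampling rate of 250 samples per second (the actual rate is not restated here); a five-second window of raw samples collapses to a short vector of approximation coefficients.

import numpy as np
import pywt

fs = 250                          # assumed sampling rate, samples per second
window = np.random.randn(5 * fs)  # stand-in for five seconds of raw EEG
approx = pywt.wavedec(window, "db4", level=5)[0]  # approximation coefficients only
print(len(window), "raw samples ->", len(approx), "approximation coefficients")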
Improvements in seizure identification and prediction accuracy could be realized by
adding hidden-layer neurons. With only 13 hidden-layer neurons in the most accurate
neural networks used here, many more could be added with little risk of losing
generalization. The use of cluster validity measures to determine an optimal
number of neurons could be investigated.
The heuristic for determining the RBF neuron radius could be refined further. It is
likely that it could be made more precise by accounting for the standard deviation in
the distance matrix of input training vectors and the number of neurons used to cover
the input space.
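One form such a refinement might take is sketched below. The particular combination of statistics (mean pairwise distance less one standard deviation, scaled by the square root of the neuron count) is purely illustrative and is not the heuristic developed in this work; the SciPy and NumPy libraries are assumed.

import numpy as np
from scipy.spatial.distance import pdist

def candidate_radius(train_vectors, n_neurons):
    # train_vectors: one row per preprocessed input vector.
    d = pdist(train_vectors)       # all pairwise Euclidean distances
    return (np.mean(d) - np.std(d)) / np.sqrt(n_neurons)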
Variation in results across rats may be partially attributable to probe placement. In
order to achieve accurate results, it is necessary to have at least one probe near the
seizure origin. With some types of seizure, there is no clear single epileptic focus. In
these cases, prediction based on data from a single probe is unlikely to be successful.
The same prediction method used here could be modified to take transformed data
from all three probes as input to the neural network in an attempt to identify the pre-
ictal state without a single epileptic focus.
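A minimal sketch of that modification, reusing the hypothetical windowed_wavelet_features() helper from the earlier sketch, would simply concatenate the per-channel feature vectors window by window before presenting them to the network.

import numpy as np

def multi_probe_features(channels, fs, window_sec=5.0, level=5):
    # channels: list of simultaneously recorded 1-D EEG traces (e.g., three probes).
    per_channel = [windowed_wavelet_features(c, fs, window_sec, level)
                   for c in channels]
    return np.hstack(per_channel)  # one row per window, channel features side by side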
Building a truly generic seizure detector capable of identifying seizures in data from
previously unseen individuals could be pursued with the methods presented here.
Generalization of seizure characteristics between rats would require a larger pool of
training subjects. It would require a network with significantly more neurons than are
necessary for seizure identification in one individual, but building a generic seizure
detector should be possible.


REFERENCES
[1] H. Adeli, Z. Zhou and N. Dadmehr, Analysis of EEG Records in an Epileptic
Patient Using Wavelet Transform, J. Neuroscience Methods, no. 123, 2003, pp. 69-
87.
[2] C. Anderson, S. Devulapalli and E. Stolz, EEG Signal Classification with
Different Signal Representations, Neural Networks for Signal Processing V, IEEE,
1995, pp. 475-483.
[3] R. Bogacz, U. Markowska-Kaczmar and A. Kozik, Blinking Artefact
Recognition in EEG Signal Using Artificial Neural Network, Proc. 4th Conf. Neural
Networks & Their Applications, Polish Neural Networks Soc., 1999, pp. 6-12.
[4] P. Brémaud, Mathematical Principles of Signal Processing, Springer-Verlag,
2002.
[5] K. Cios, W. Pedrycz and R. Swiniarski, Data Mining Methods for Knowledge
Discovery, Kluwer Academic Publishers, 1998.
[6] M. D'Alessandro, R. Esteller, G. Vachtsevanos, A. Hinson, J. Echauz and B. Litt,
Epileptic Seizure Prediction Using Hybrid Feature Selection Over Multiple
Intracranial EEG Electrode Contacts: A Report of Four Patients, IEEE Trans.
Biomedical Eng., vol. 50, no. 5, May 2003, pp. 603-615.
[7] I. Daubechies, The Wavelet Transform, Time-Frequency Localization and
Signal Analysis, IEEE Trans. Information Theory, vol. 36, no. 5, Sept. 1990, pp.
961-1004.
[8] C. DeGiorgio, S. Schachter, A. Handforth, M. Salinsky, J. Thompson, B. Uthman,
R. Reed, S. Collins, E. Tecoma, G. Morris, B. Vaughn, D. Naritoku, T. Henry, D.
Labar, R. Gilmartin, D. Labiner, I. Osorio, R. Ristanovic, J. Jones, J. Murphy, G. Ney,
J. Wheless, P. Lewis and C. Heck, Prospective Long-Term Study of Vagus Nerve
Stimulation for the Treatment of Refractory Seizures, Epilepsia, vol. 41, no. 9, 2000,
pp. 1195-1200.
[9] H. Demuth and M. Beale, Neural Network Toolbox for Matlab, The Mathworks
Inc., 1998.
[10] R. Esteller, Detection of Seizure Onset in Epileptic Patients from Intracranial
EEG Signals, doctoral dissertation proposal, Dept. Electrical Eng., Georgia Inst. of
Technology, 1999.
[11] A. Galka, Topics in Nonlinear Time Series Analysis, World Scientific, 2000.
[12] N. Hazarika, J. Chen, A. Tsoi and A. Sergejew, Classification of EEG Signals
Using the Wavelet Transform, Signal Processing, vol. 59, 1997, pp. 61-72.
[13] C. James, R. Jones, P. Bones and G. Carroll, Detection of Epileptiform
Discharges in the EEG by a Hybrid System Comprising Mimetic, Self-organized
Artificial Neural Network, and Fuzzy Logic Stages, Clinical Neurophysiology, vol.
110, Dec. 1999, pp. 2049-2063.
[14] G. Kaiser, A Friendly Guide to Wavelets, Birkhauser, 1994.
[15] C. Ko and H. Chung, Automatic Spike Detection via an Artificial Neural
Network Using Raw EEG Data: Effects of Data Preparation and Implications in the
Limitations of Online Recognition, Clinical Neurophysiology, vol. 111, Mar. 2000,
pp. 488-481.
[16] K. Lehnertz, F. Mormann, T. Kreuz, R. Andrzejak, C. Rieke, P. David and C.
Elger, Seizure Prediction by Nonlinear EEG Analysis, IEEE Eng. Medicine and
Biology, vol. 22, Jan./Feb. 2003, pp. 57-63.
[17] K. Levin, H. Luders, T. Swanson, Comprehensive Clinical Neurophysiology,
chapter 33, Basic Cellular and Synaptic Mechanisms Underlying the
Electroencephalograph, WB Saunders, 2000.
[18] G. H. Lincoff, The Audubon Society Field Guide to North American Mushrooms,
Alfred A. Knopf, 1981.
[19] B. Litt and J. Echauz, Prediction of Epileptic Seizures, Lancet Neurology, vol.
1, May 2002, pp. 22-30.
[20] B. Litt, R. Esteller, J. Echauz, M. D'Alessandro, R. Shor, T. Henry, P. Pennell,
C. Epstein, R. Bakay, M. Dichter and G. Vachtsevanos, Epileptic Seizures May
Begin Hours in Advance of Clinical Onset: A Report of Five Patients, Neuron, vol.
30, Apr. 2001, pp. 51-64.
[21] T. Maiwald, M. Winterhalder, R. Aschenbrenner-Scheibe, H. Voss, A. Schulze-
Bonhage and J. Timmer, Comparison of Three Nonlinear Seizure Prediction
Methods by Means of the Seizure Prediction Characteristic, Physica D, vol. 194,
Sept. 2003, pp. 357-368.
[22] T. Masters, Practical Neural Network Recipes in C++, Academic Press, 1993.
[23] P. McSharry, L. Smith and L. Tarassenko, Comparison of Predictability of
Epileptic Seizures by a Linear and a Nonlinear Method, IEEE Trans. Biomedical
Engineering, vol. 50, no. 5, May 2003, pp. 628-633.
[24] S. Mitra, Digital Signal Processing Laboratory Using MATLAB, McGraw-Hill,
1999.
[25] D. Mix and K. Olejniczak, Elements of Wavelets for Engineers and Scientists,
Wiley and Sons, 2003.
[26] A. Mutapcic, T. Shimayama and A. Flores, Automatic Sleep Stage
Classification Using Frequency Analysis of EEG Signals, Proc. XIX Int'l Symp.
Information and Communication Technologies, University of Sarajevo, 2003.
[27] M. Nicolelis, Actions From Thoughts, Nature, vol. 409, no. 18, Jan. 2001, pp.
403-407.
[28] O. Omidvar and J. Dayhoff, Neural Networks and Pattern Recognition,
Academic Press, 1998.
[29] J. Paul, C. Patel, H. Al-Nashash, N. Zhang, W. Ziai, M. Mirski and D. Sherman,
Prediction of PTZ-Induced Seizures Using Wavelet-Based Residual Entropy of
Cortical and Subcortical Field Potentials, IEEE Trans. Biomedical Engineering, vol.
50, no. 5, May 2003, pp. 640-648.
[30] A. Petrosian, D. Prokhorov, R. Homan, R. Dasheiff and D. Wunsch II,
Recurrent Neural Network Based Prediction of Epileptic Seizures in Intra- and
Extracranial EEG, Neurocomputing, vol. 30, 2000, pp. 201-218.
[31] S. Qian, Introduction to Time-Frequency and Wavelet Transforms, Prentice-Hall,
2002.
[32] J. Rogers, Object-Oriented Neural Networks in C++, Academic Press, 1997.
[33] S. Schiff, Forecasting Brain Storms, Nature Medicine, vol. 4, no. 10, Oct.
1998, pp. 1117-1118.
[34] K. Swingler, Applying Neural Networks, Academic Press, 1996.
[35] W. Tompkins, ed., Biomedical Digital Signal Processing, Prentice-Hall, 1993.
[36] S. Walczak and W. Nowack, An Artificial Neural Network Approach to
Diagnosing Epilepsy Using Lateralized Bursts of Theta EEGs, J. Medical Systems,
vol. 25, no. 1, Feb. 2001, pp. 9-20.
[37] E. Waterhouse, New Horizons in Ambulatory Electroencephalography, IEEE
Trans. Biomedical Engineering, vol. 50, no. 5, May/June 2003, pp. 74-80.
[38] W. Webber, R. Richardson and R. Lesser, A Seizure Detector Based on a
Neural Network for EEG Recordings from Scalp, Society Proc.
Electroencephalography and Clinical Neurophysiology, vol. 95, 1995, p. 28P.