Citation
Learning fuzzy logic control system

Material Information

Title:
Learning fuzzy logic control system
Creator:
Lung, Leung Kam
Publication Date:
Language:
English
Physical Description:
viii, 168 leaves : illustrations ; 29 cm

Thesis/Dissertation Information

Degree:
Master's ( Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical engineering

Subjects

Subjects / Keywords:
Control theory -- Computer programs ( lcsh )
Control theory -- Computer programs ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 166-168).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Leung Kam Lung.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
31508903 ( OCLC )
ocm31508903
Classification:
LD1190.E54 1994m .L86 ( lcc )

Full Text
LEARNING FUZZY LOGIC
CONTROL SYSTEM
by
Leung Kam Lung
B. S., Metropolitan State College at Denver, 1991
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1994


Leung Kam Lung (M.S., Electrical Engineering)
Learning Fuzzy Logic Control System
Thesis directed by Associate Professor Jan T. Bialasiewicz
ABSTRACT
The performance of the Learning Fuzzy Logic Control System (LFLCS), devel-
oped in this thesis, has been evaluated. The Learning Fuzzy Logic Controller (LFLC)
learns to control the motor by learning the set of teaching values that are generated by
a classical PI controller. It is assumed that the classical PI controller is tuned to mini-
mize the error of a position control system of the D.C. motor. The Learning Fuzzy
Logic Controller developed in this thesis is a multi-input single-output network.
Training of the Learning Fuzzy Logic Controller is implemented off-line. Upon com-
pletion of the training process (using Supervised Learning, and Unsupervised Learn-
ing), the LFLC replaces the classical PI controller. In this thesis, a closed loop
position control system of a D.C. motor using the LFLC is implemented. The primary
focus is on the learning capabilities of the Learning Fuzzy Logic Controller. The
learning includes symbolic representation of the Input Linguistic Nodes set and Output Linguistic Nodes set. In addition, we investigate the knowledge-based representation for the network. As part of the design process, we implement a digital computer simulation of the LFLCS. The computer simulation program is written in the C computer language, and it is implemented on the DOS platform. The LFLCS, designed in this


This thesis for the Master of Science
degree by
Leung Kam Lung
has been approved for the
Graduate School
Miloje Radenkovic

Edward T. Wall


thesis, has been developed on an IBM compatible 486-DX2 66 computer. First, the performance of the Learning Fuzzy Logic Controller is evaluated by comparing the angular shaft position of the D.C. motor controlled by a conventional PI controller and that controlled by the LFLC. Second, the symbolic representation of the LFLC and the knowledge-based representation for the network are investigated by observing the parameters of the Fuzzy Logic membership functions and the links at each layer of the LFLC. While there are some limitations of application with this approach, the result of the simulation shows that the LFLC is able to control the angular shaft position of the D.C. motor. Furthermore, the LFLC has better performance in rise time, settling time and steady state error than the conventional PI controller.
This abstract accurately represents the content of the candidate's thesis. I recommend its publication.


ACKNOWLEDGEMENTS
This study was supported by a NASA grant #NAG-1-1444. I have also received
much support and encouragement from many people.
Especially, I would like to express my sincere gratitude to my advisor, Professor
Jan T. Bialasiewicz, and Professor William J. Wolfe for their support and guidance
during the preparation of this thesis. I also would like to thank Professor Miloje
Radenkovic and Professor Edward T. Wall for their encouragement and also for serving on my committee.
Finally, this thesis is dedicated to my parents for their love and caring. Heavenly
father, thank you for helping me to finish this thesis.


CONTENTS
Chapter 1
1.1 Introduction...................................................1
1.2 Previous Research..............................................2
1.3 Structure of the LFLC: An Overview.............................5
Chapter 2
2.1 Normalization..................................................8
Chapter 3
3.1 Network Structure.............................................11
3.2 Input Linguistic Nodes Layer..................................11
3.3 Input Term Nodes Layer........................................11
3.4 Rule Nodes Layer............................................ 13
3.5 Output Term Nodes Layer.......................................15
3.5.1 Down-Up Mode...........................................15
3.5.2 Up-Down Mode...........................................18
3.6 Output Linguistic Nodes Layer.................................18
Chapter 4
4.1 Learning Phase................................................20
4.2 Unsupervised Learning.........................................20
4.3 Supervised Learning.......................................... 22
Chapter 5
5.1 Symbolic Representation and Performance Evaluation of the LFLC..27


5.2 Low Level Symbolic Representation of the LFLC.............27
5.3 High Level Representation of the LFLC.....................35
5.4 Performance evaluation....................................38
Chapter 6
6.1 Summary...................................................41
Appendix A............................................................42
Appendix B............................................................45
REFERENCES...........................................................166


FIGURES
Figure
1. Learning Fuzzy Logic Control System...........................................5
2. Learning Fuzzy Logic Controller..............................................10
3. Initial Input Term Node membership for x_1...................................28
4. Initial Input Term Node membership for x_2...................................29
5. Final Input Term Node membership for x_1.....................................30
6. Final Input Term Node membership for x_2.....................................31
7. Convergence of the means of the Input Term Node memberships for input x_1....32
8. Convergence of the standard deviations of the Input Term Node memberships for x_1....32
9. Convergence of the means of the Input Term Node memberships for input x_2....33
10. Convergence of the standard deviations of the Input Term Node memberships for input x_2....33
11. Initial Output Term Node membership.........................................37
12. Final Output Term Node membership...........................................37
13. Training Data...............................................................38
14. LFLCS with PI Fuzzy Logic Controller........................................39
15. Step response of the system with classical and fuzzy logic PI controllers...40


Chapter 1
1.1 Introduction
In the design of Analog and Digital Control systems, a dynamic representation of
the system is required. In design engineering, this information is usually not known
a priori. Furthermore, in order to minimize the errors between the output and the
input, modern control theory requires that an accurate model of the system be available. These requirements limit the application of modern control theory in many areas.
It is important that controllers be developed that do not have such stringent require-
ments. The goal of this research is then to explore an alternative controller and to eval-
uate its performance. In this thesis, the concepts of Fuzzy Logic Control and Neural
Network Learning are combined to design a controller for an unknown plant. The
developed system is referred to as a Learning Fuzzy Logic Control System (LFLCS).
Whereas Unsupervised Learning Neural Network is used to set up the initial struc-
ture for the network controller, Supervised Learning Neural Network is used to adjust
parameters of the network controller to minimize the output error. This research
addresses the learning capability of the Learning Fuzzy Logic Controller (LFLC), and
its knowledge representation in symbolic terms. A complete system diagram is shown
in Fig.1. The LFLCS requires that the input signals and the teaching signal be available for training, and that a good logical structure be set up before the training takes
place. The LFLC is rather different from a conventional controller; this difference is
explained and illustrated.


1.2 Previous Research
Modern control theory has proved to be very useful in areas where systems are
well defined either deterministically or stochastically. However, many control systems
involve human-judgement interaction as part of the control system. Human involve-
ment often provides an adequate controller because the mind of an operator usually
provides a model of the process which is just accurate enough to carry out the task at
hand. On the contrary, an automatic controller has no way of observing the essential
features of a particular process. A human is usually capable of learning through expe-
rience, which decreases the need for a precise model of the system. Thus, modelling
the human decision making process is essential in control system design. The knowl-
edge of the control process in the human mind is captured in the fuzzy system design
approach.
In 1965, Zadeh [31] introduced and developed the concept of fuzzy set theory.
Since its introduction, fuzzy logic has been successfully applied in many control sys-
tem applications; for example, see references [1], [29], [6], [17], [16], [20] for some
good illustrations. An excellent overview of fuzzy logic applications in control engi-
neering is given by Langari and Berenji [15]. In reference [4], Chuen-Tsai Sun imple-
mented the fuzzy IF-THEN rule base to identify the structure and the parameters of a
network such that a desired input-output mapping is achieved. However, a fuzzy logic
controller requires that the control strategy be obtained as a fuzzy set term. This limits
somewhat the usefulness of the concept of a fuzzy controller.
The Self-Organizing (unsupervised learning) approach was presented by Zhang
and Edmunds [32], and Linkens and Hasnain [20]. They proposed a fuzzy logic controller that is able to cluster (self-organize) the input data without any prior knowledge
of the data, and the network automatically sets up the parameters for each membership
of the network. Furthermore, this network has a learning algorithm and is capable of
generating and modifying control rules based on an evaluation of the system perfor-
mance. The generation and modification of the control rules is achieved by assigning
a credit or reward value (weight) to the individual control action that makes a major
contribution to the current performance. This is an excellent control strategy for a sys-
tem when the operator control strategy is not available.
The concept of machine learning was introduced many years ago in an effort to
achieve human-like performance in the fields of speech and image recognition for handicapped people. An extensive research effort to simulate the process of intelligent human learning using Artificial Neural Networks is under way in the field of computer science [2], [9], [11], [12], [24], [30], [26]. Also, Self-Organization (unsupervised learning) is one of the learning methods used in speech recognition. Supervised learning is used in many fields where teaching data are obtainable from applicable sources [21], [22], [10], [27], [8], [25].
Regardless of the name, these models attempt to achieve good performance via a dense interconnection of simple computational elements. In this respect, an artificial neural net structure is based on present understanding of biological systems. Neural net models have the greatest potential in applications such as speech recognition, image recognition, and control systems, where many hypotheses must be pursued in parallel, high computation rates are required, and the current best systems are far from equaling human performance. It is hoped that the potential benefits of neural nets will extend beyond the high computation rates provided by present massively parallel systems. Neural nets
typically provide a greater degree of robustness or fault tolerance than the von Neu-
mann sequential computers of today because there are many more processing nodes
possible, each with primarily local connections. The presence of a few imperfect
nodes or links thus need not impair the overall performance significantly. Most neural
net algorithms also adapt connection weights in order to improve the performance
based on current results. Work on artificial neural net models has a long history.
Development of detailed mathematical models began more than 45 years ago with the
work of McCulloch and Pitts [23]. Lin and Lee presented in [18] a two-phase learning
fuzzy logic network which consisted of both unsupervised learning and supervised
learning, and in [19] they developed a reinforcement neural network-based fuzzy logic
control system.
In this thesis, following the approach of Lin and Lee [18], the Learning Fuzzy
Logic Control System (LFLCS) is proposed in which the learning capabilities of neu-
ral networks are utilized. The system learns by adjusting the parameters of the neural
network using training data. The learning schemes of a Learning Fuzzy Logic Con-
troller (LFLC) combine both the unsupervised (self-organized) and the supervised
learning.
The neural network structure, implementing the LFLC, is given in Fig.2. This net-
work has one output and two inputs. The input signals x_1 and x_2 to the LFLC are the feedback signal and the error signal of the control system, respectively, as shown in Fig.1. The teaching pattern used by the LFLC is the control signal generated by a conventional controller.


Fig.1. Learning Fuzzy Logic Control System
1.3 Structure of the LFLC: An Overview
A Neural Network (or connectionist network) is a highly parallel connected net-
work that attempts to model the learning ability of the human brain. The intelligence
of the network is represented by the weights that connect nodes at one layer to the
nodes of the next layer of the network. Fuzzy Logic Control is a knowledge-based
control strategy. This strategy can be employed given a sufficiently accurate control
law which is not unreasonably complex. The Neural Network learns by the tuning of
system parameters using training data. The learning schemes of the LFLC combine
both the unsupervised (self-organized) learning and the supervised learning. A layout
of a simple network is found in Fig. 2. This particular network has two inputs and one
output, i.e., it belongs to the Multiple Input Single Output (MISO) class of networks. As shown in Fig.1, the input signal (x_1) to the LFLC is the feedback signal of the control system, and the input signal (x_2) to the LFLC is the error signal of the control system.
The LFLC learns to recognize a set of data. This set of data is called the teaching pat-
tern which is generated by the digital Proportional-Integral (PI) controller in Fig. 1.
The Neural Network is used as a learning mechanism and the Fuzzy Logic Control
algorithm is actually controlling the plant. The LFLC is a two-phase learning network.
The first phase is an unsupervised learning phase, and the second is a supervised learning phase. The LFLC has a total of five layers. The first layer is the Input Linguistic
Nodes layer which in this particular implementation contains two nodes. These nodes
represent the input data sets as symbolic terms. A set of five nodes is set up in the sec-
ond layer for each Input Linguistic Node. This is the Input Term Nodes layer. The
purpose of the Input Term Nodes is to categorize the input data into linguistic terms.
The third layer is the Rule Nodes layer. Each rule node in the third layer represents a
rule of controlling the plant. The number of rule nodes in this layer is equal to the
product of the numbers of nodes in each set of the Input Term Nodes. Therefore, there
are 25 rule nodes in our implementation. The fourth layer is the Output Term Node
layer that contains seven nodes. The purpose of the Output Term Node is to categorize
in linguistic terms the consequences of the fired rules. The fifth layer is the Output
Linguistic Nodes layer that in the case of the application considered contains one out-
put node. However, a second node is used to train the Output Linguistic Node in the
unsupervised learning phase. The Output Linguistic Node constitutes the output of the
LFLC. The data are randomly presented to the LFLC in the learning phases. However, before the data are presented to the network, they go through a normalization
process, which is explained in detail in the next section.


Chapter 2
2.1 Normalization
In the normalization process, the data of the error signal (x_2) and the feedback signal (x_1) are mapped to the range [-1,1]. The data of the teaching pattern (y_t) are mapped to the range [0,1]. These mappings are accomplished by the data normalization using the following:
x̄_i[j] = x_i[j] / max(|max_j x_i[j]|, |min_j x_i[j]|),   i = 1, 2,   j = 1, 2, ..., n   (1)

ȳ_t[j] = 0.5 + y_t[j] / (2 max(|max_j y_t[j]|, |min_j y_t[j]|))   (2)
where n denotes the number of data points.
The set {x_i[j]} contains all data of the linguistic node x_i that are going to be normalized by Eq. (1), and the set {y_t[j]} contains all data of the linguistic node y_t that are going to be normalized by Eq. (2). The max x_i is the maximum number of the set {x_i[j]}, and min x_i the minimum number of the set {x_i[j]}. In the same manner, the max y_t is the maximum number of the set {y_t[j]}, and min y_t the minimum number of the set {y_t[j]}. The x̄_i is the result of the normalization of the data of the linguistic node x_i, and ȳ_t is the result of the normalization of the data of the teaching patterns y_t. This
normalization ensures that the input data and the teaching data are mapped to the
ranges [-1,1] and [0,1], respectively. All negative values of the original teaching data are mapped to [0,0.5), and all positive values of the original teaching data are mapped
to (0.5,1]. Similarly, all negative values of the original input data are mapped to [-1,
0), and all positive values of the original input data are mapped to (0,1].
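The normalization of equations (1) and (2) can be sketched in C, the language of the thesis's simulation program; the function names, the array convention, and the guard against an all-zero data set are illustrative assumptions, not taken from the actual program.

```c
#include <math.h>
#include <stddef.h>

/* Largest absolute value in a data set: max(|max x|, |min x|). */
static double max_abs(const double *x, size_t n) {
    double m = 0.0;
    for (size_t j = 0; j < n; j++)
        if (fabs(x[j]) > m) m = fabs(x[j]);
    return m;
}

/* Eq. (1): map input data x_i[j] into [-1, 1]. */
void normalize_input(const double *x, double *xbar, size_t n) {
    double d = max_abs(x, n);
    for (size_t j = 0; j < n; j++)
        xbar[j] = (d > 0.0) ? x[j] / d : 0.0;
}

/* Eq. (2): map teaching data y_t[j] into [0, 1];
 * negatives land in [0, 0.5), positives in (0.5, 1]. */
void normalize_teaching(const double *y, double *ybar, size_t n) {
    double d = max_abs(y, n);
    for (size_t j = 0; j < n; j++)
        ybar[j] = 0.5 + ((d > 0.0) ? y[j] / (2.0 * d) : 0.0);
}
```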


Fig.2. Learning Fuzzy Logic Controller (Layer 1: Input Linguistic Nodes; Layer 2: Input Term Nodes; Layer 3: Rule Nodes; Layer 4: Output Term Nodes; Layer 5: Output Linguistic Nodes)


Chapter 3
3.1 Network Structure
The LFLC has a total of five layers. These are the following:
First layer or Input Linguistic Nodes Layer
Second layer or Input Term Nodes Layer
Third layer or Rule Nodes Layer
Fourth layer or Output Term Nodes layer
Fifth layer or Output Linguistic Nodes Layer.
Consecutive layers are connected by links between them. The purpose and detail of
each layer will be explained in the following sections.
3.2 Input Linguistic Nodes Layer
The purpose of this layer is to propagate the input data to the Input Term Nodes
layer; therefore, each input linguistic node is connected to a set of Input Term Nodes in
the next layer. The output value of the input linguistic node is the same as the input
value, and is propagated to its own Input Term Nodes set in the next layer. The link
(w) from the first layer to the second layer is unity.
3.3 Input Term Nodes Layer
The purpose of the Input Term Node is to represent the value of the Input Linguis-
tic Node in linguistic terms. For example, the linguistic term for an Input Term Node
can be Negative Large (NL), Negative Medium (NM), Negative Small (NS), Zero
(ZN), Positive Small (PS), Positive Medium (PM), or Positive Large (PL).


Two sets of Input Term Nodes are set up for the LFLC as shown in Fig. 2. Five
nodes are set up for each Input Term Nodes set; thus, for this particular network, a
total of 10 nodes are in the Input Term Nodes layer. The input value to each node in
the Input Term Nodes set number 0 is equal to the product of the output value of the
Input Linguistic Node number 0 and the link (w) that connects to the Input Term node.
Each node of the Input Term Nodes set has a membership function. This membership
function can be based on any activation function (a); however, a Gaussian activation
function is used in the network considered. This Gaussian activation function is defined by (3) and (4) below. It is used for all memberships in layer 2:

a = e^f   (3)

f = −(u_i^2 − m_ij)^2 / δ_ij^2   (4)

where f is the membership function. The parameters m_ij and δ_ij are the center (the mean) and the width (the variance) of the Gaussian function, respectively, and u_i^2 is the input data to the Gaussian function, with the superscript 2 used as a reference to the second layer. The subscript i is the index of the Input Term Nodes set, and the subscript j is the index of the nodes within each Input Term Nodes set. The links (w) of this layer are fully connected and they are equal to unity (see Fig. 2). For our particular structure shown in Fig. 2, i = 1, 2 and j = 1, 2, 3, 4, 5. We have u_1^2 = x_1 and u_2^2 = x_2.


3.4 Rule Nodes Layer
This layer contains a set of fuzzy logic rules. For this thesis, a MISO system is used. A linguistic variable x in a universe of discourse U is characterized by two sets: T(x) = {T_x^1, T_x^2, ..., T_x^k} and M(x) = {M_x^1, M_x^2, ..., M_x^k}. The T(x) is the term set of x, which is the set of names in linguistic terms of the values of x, with each value T_x^i being a fuzzy number with membership function M_x^i defined on U. Thus M(x) is a semantic rule for associating each value with its meaning.
For example, if x indicates voltage, then T(x) may be the set {NL, NS, ZN, PS, PL}. In this thesis, five Input Term Nodes are set up for each Input Linguistic Node, and seven Output Term Nodes are set up for each output in this network, i.e., T(y) = {T_y^1, T_y^2, ..., T_y^7} = {NL, NM, NS, ZN, PS, PM, PL}. The fuzzy logic rules for the LFLC are stated as follows:

R_i = IF x_1 is NL and x_2 is ZN, THEN the consequence is PM

where i = 1, 2, ..., 25. Thus, a total of 25 rule nodes are in this layer. The input to each rule node u_j^3 comes from one possible combination of the outputs o_{i,k}^2 of the Input Term Nodes sets, where j = 1, 2, ..., 25 and i = 1, 2, k = 1, 2, ..., 5. Let o_{1,k}^2 and o_{2,k}^2 be the possible inputs to the rule node R_j; however, only the smallest value of o_{i,k}^2 becomes the input to the rule node R_j. Furthermore, the links (w) of this layer are unity. If there are two rules:
R_1 = IF x_1 is NL and x_2 is ZN, THEN the consequence is PM   (5)

R_2 = IF x_1 is NL and x_2 is ZN, THEN the consequence is PS   (6)
then the firing strengths of R_1 and R_2 are denoted as α_1 and α_2, respectively. For example, α_1 is given by the following equality:

α_1 = M_x1^NL(x_1) ∧ M_x2^ZN(x_2)   (7)
where ∧ is the fuzzy logic AND operation. The intersection is used as the fuzzy logic AND operator. Thus, the AND operator is realized by the following equation:

α_i = M_x1^q(x_1) ∧ M_x2^q(x_2) = min(M_x1^q(x_1), M_x2^q(x_2))  or  M_x1^q(x_1) M_x2^q(x_2)   (8)

where q is one of {NL, NS, ZN, PS, PL} and i = 1, 2, ..., 25. The nodes in this
layer form a fuzzy rule base. The connectionist inference engine is constructed by
combining the functions of this layer and the functions of layer 4 [28], [7], [5]. Hence,
the rule matching process is avoided [18]. The precondition matching of fuzzy logic
rules is accomplished by the method of linking layers. Each rule node in this layer per-
forms the fuzzy AND operation; thus the activation function (a) for the rule node is the
minimum of all its inputs.
a = f   (9)

f = min(u_1^3, u_2^3, ..., u_p^3)   (10)

and for the network analyzed, p = 2.


The connections of this layer are given as follows: a node in each of the Input Term Nodes sets is connected to a rule node, with the constraint that no two nodes from the same Input Term Nodes set are connected to the same rule node.
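The fuzzy AND of equations (9) and (10) reduces to taking the minimum over a rule node's inputs; a C sketch with illustrative names:

```c
/* Eqs. (9)-(10): a rule node fires at the minimum of its p inputs
 * (the fuzzy AND of its preconditions). */
double rule_node_firing(const double *u, int p) {
    double a = u[0];
    for (int i = 1; i < p; i++)
        if (u[i] < a) a = u[i];
    return a;
}
```

For the network analyzed, p = 2, so each rule node simply takes the smaller of its two term-node inputs.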
3.5 Output Term Nodes Layer
This layer has two operation modes. In the first phase of the training, the nodes of this layer operate in an up-down transmission mode, whereas in the second phase of the training, these nodes operate in a down-up transmission mode. Upon completion of the learning process, the set M_y = {M_y^1, M_y^2, ..., M_y^7} of the membership functions of the Output Term Nodes set is found. An Output Term Node number j
may be excited (as a result of the learning process in which the structure of connec-
tions between the Rule Nodes layer and the Output Term Nodes layer are established)
by a few or none of output signals of the Rule Nodes.
3.5.1 Down-Up Mode
In this mode, the node performs the fuzzy logic OR operation. To illustrate this
concept, let us assume that (as defined by the structure of connections) an Output Term
Node is excited by the Rule Nodes number i_1, i_2, ..., i_p, as described in Section 3.4. In our particular structure, p can be any number in the set {1, 2, ..., 25} at the Rule Node
layer. Then, the membership function associated with the Output Term Node number
j can be defined as follows:
M_y^j = α_i ∧ M_y^j(s_i) = min(α_i, M_y^j(s_i))  or  α_i M_y^j(s_i),   i = 1, 2, ..., p   (11)


M_y^j(s_i) = exp(−(s_i − m_j)^2 / δ_j^2)   (12)

where s_i is the output of the rule node number i, and m_j and δ_j are the mean and the variance of the membership function M_y^j, respectively. In the case considered, we have a set of membership functions M_y = {M_y^1, M_y^2, ..., M_y^7}. Combining (8) and (11), we obtain
the output decision:
M_y^j = M_y^j(s_i1) ∨ M_y^j(s_i2) ∨ ... ∨ M_y^j(s_ip)   (13)
where ∨ is the fuzzy logic OR operation which performs the UNION of a given set
of memberships. Thus, the output decision for an Output Term Node number j can be
written as follows:
M_y^j = max(M_y^j(s_i1), M_y^j(s_i2), ..., M_y^j(s_ip))  or  min(1, Σ_{k=1}^{p} M_y^j(s_ik))   (14)
As a result, the nodes in this mode perform the fuzzy logic OR operation to inte-
grate the fired rule nodes, which have the same consequence; that is, they are con-
nected to the same Output Term Node j. This can also be expressed as
o_j = min(1, f)   (15)

where

f = Σ_{k=1}^{p} u_ik^4   (16)
Each of the 25 Rule Nodes can be potentially connected to each of the seven Output Term Nodes, as can be seen from Table 1 showing the connection weights. Those weights, either 0 or 1, are established in the learning phase and tell us which s_ik's are involved in equation (13) for a given node number j.
TABLE 1. The links (weights) of the Output Term Node Layer

Rule number:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
M1            0  1  0  0  0  0  0  0  0  0  0  1  1  1  1  1  0  1  1  1  1  1  1  1  1
M2            1  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  1  0  0  0  0  0  0  0  0
M3            0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
M4            0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
M5            0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
M6            0  0  0  0  1  0  0  0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0
M7            0  0  1  1  0  0  1  1  1  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
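The down-up aggregation of equations (15) and (16) over one row of Table 1 can be sketched in C; the fixed sizes and function name are illustrative assumptions, not taken from the thesis's simulation program.

```c
/* Eqs. (15)-(16), down-up mode: an Output Term Node integrates the
 * firing strengths of the rule nodes linked to it (weights 0 or 1,
 * one row of Table 1) with a bounded sum. */
#define NRULES 25

double output_term_activation(const double w[NRULES],
                              const double alpha[NRULES]) {
    double f = 0.0;                 /* Eq. (16): sum of linked strengths */
    for (int i = 0; i < NRULES; i++)
        f += w[i] * alpha[i];
    return (f < 1.0) ? f : 1.0;     /* Eq. (15): bounded by one */
}
```

The bound keeps the aggregated activation a valid membership degree even when several linked rules fire strongly at once.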


3.5.2 Up-Down Mode
The purpose of this mode is to find the initial means and variances of the Output Term Nodes. This mode is used in the first (unsupervised) learning phase. In this mode, the nodes in this layer function the same as those in the second layer, except
that only one node is used to perform a membership function for the Output Linguistic
Node.
3.6 Output Linguistic Nodes Layer
This layer contains the nodes performing the up-down transmission and the nodes performing the down-up transmission. The up-down transmission nodes are used to feed the training data into the LFLC network [18]. Their activation is defined as follows:

a = f = y_t   (17)
The down-up transmission nodes, together with the links that are attached to the Output Linguistic Node, perform the fuzzy OR operation, or in other words implement the so-called defuzzifier [18]. In this research, the fuzzy OR operation is based on the center of area method which, as described in [3], gives the best result. Let s_j be the support value, i.e., a value at which the membership function M_y^j(s) reaches its maximum value. Then, from (13) the defuzzification output is

y = Σ_j M_y^j(s_j) s_j / Σ_j M_y^j(s_j)   (18)


The following equations are used to simulate the center of area defuzzification method [32]:

f = u_i^5 = Σ_j (w_j^5 u_j^5) = Σ_j (m_j δ_j) u_j^5   (19)

a = f / Σ_j (δ_j u_j^5)   (20)

where w_j^5 is the link between the Output Term Node number j and the Output Linguistic Node number i; it is equal to the product m_j δ_j, where m and δ are the mean and the variance of the Output Term Node, respectively. In this particular network we only have one Output Linguistic Node, thus i is equal to 1.
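Equations (19) and (20) can be sketched as a single C function; the names and the zero-denominator guard are illustrative assumptions, not taken from the thesis code.

```c
/* Eqs. (19)-(20): center-of-area defuzzification.  Each Output Term
 * Node j contributes its activation u[j], weighted by the product of
 * its mean m[j] and variance delta[j]; n is the number of term nodes
 * (7 in this network). */
double defuzzify(const double *m, const double *delta,
                 const double *u, int n) {
    double num = 0.0, den = 0.0;
    for (int j = 0; j < n; j++) {
        num += m[j] * delta[j] * u[j];  /* Eq. (19): f = sum(m*delta*u) */
        den += delta[j] * u[j];
    }
    return (den != 0.0) ? num / den : 0.0;  /* Eq. (20): a = f / sum(delta*u) */
}
```

When only one term node is active, the output collapses to that node's mean, which is the expected behavior of a center-of-area defuzzifier.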


Chapter 4
4.1 Learning Phase
In the LFLC there are two learning phases: the first one implements the Unsupervised
or self-organized learning, and the second one implements the Supervised Learning
using Backpropagation. As a result of Unsupervised Learning, the structure of the
LFLC is established after the links (w) and the firing strengths of the fuzzy logic rule
nodes in the network are found. Furthermore, the means (m) of each membership
function in the network are found by using the Self-Organized learning method which
is described by Kohonen's feature-map algorithm [14], [13].
4.2 Unsupervised Learning
The Kohonen feature-map algorithm is a self-organizing method that gathers the input data into clusters. In this algorithm, a set of weights is initially generated for each node in the network. When a data point is presented to the network, the distances from the data to all nodes are calculated. Let w_ij denote the weights giving the shortest distance from the data i to the node j; then the node number j is selected to be the output node of the network. Furthermore, the weights w_ij and their neighbors are updated according to the following equation:

w_ij(t + 1) = w_ij(t) + η(t) (x_i(t) − w_ij(t))   (21)

where j belongs to the nearest neighborhood, and η(t) is the learning rate (0 < η(t) < 1).


After the mean of a Fuzzy Logic membership function is found, the corresponding variance (δ) can be found as follows:

δ = |m_closest − m| / τ   (22)

where τ is the overlap parameter. This parameter is used to control the level of overlapping between memberships in the same cluster. The range of this parameter depends on the range of the input data; however, in our network, this parameter is chosen to be 1.5. The m_closest is the mean value nearest to the current mean. Once the centers
and the variances are found, the input signal and the teaching signal reach the output
points at the output term nodes and the input term nodes. Next, the outputs of the input term nodes at layer two are transmitted to layer three through the initial architecture of the layer-three links. Based on the firing strengths of the rule nodes (output of the
rule nodes) and the output of the output term nodes, the correct consequences of the
links of each fuzzy logic rule node are found. Initially, the links (w) are fully con-
nected. However, the competitive learning algorithm is used to update the links
(weights) for each training data set. This algorithm is described by the following equation:

ẇ_ij = o_j^4 (o_i^3 − w_ij)   (23)

where o_j^4 and o_i^3 are the outputs of the output term node and the rule node, respectively, and w_ij denotes the weight of the link between the i-th rule node and the j-th output term node. A dot over w_ij denotes the updated value of w_ij.
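The Kohonen update (21), the width rule (22), and the competitive-learning update of the links can be sketched in C; treating the dotted update as a discrete step with a hypothetical step size dt is an assumption for illustration, not the thesis's exact discretization.

```c
#include <math.h>

/* Eq. (21): Kohonen feature-map update, moving a mean m toward a
 * data sample x with learning rate eta, 0 < eta < 1. */
double kohonen_update(double m, double x, double eta) {
    return m + eta * (x - m);
}

/* Eq. (22): width of a membership from the distance between a mean m
 * and its closest neighboring mean, with overlap parameter tau
 * (chosen as 1.5 in this thesis). */
double membership_width(double m, double m_closest, double tau) {
    return fabs(m_closest - m) / tau;
}

/* Competitive-learning step on the link w between a rule node
 * (output o3) and an output term node (output o4); dt is a
 * hypothetical step size for the discretized update. */
double competitive_update(double w, double o3, double o4, double dt) {
    return w + dt * o4 * (o3 - w);
}
```

Repeated Kohonen updates pull each mean toward the center of its data cluster, after which the widths and rule links follow directly from the clustered means.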


After competitive learning, the weights of the links at layer four represent the fir-
ing strength of the rule node which is transmitting the consequence of the rule node to
the term node of an output linguistic node. Furthermore, the links are chosen such that
at most one link is selected and the others are eliminated. As a result, only one term
node in the output term node set becomes one of the consequences of the fuzzy logic
rule. The supervised learning takes place after the fuzzy logic rule is established in the
unsupervised learning phase.
4.3 Supervised Learning
In the supervised learning phase, the backpropagation method is used to fine tune
the parameters of the LFLC which were described above. The objective of the second
phase of learning is to minimize the error function:

E = (1/2) (y^d(t) − y(t))^2    (24)

The error signals for layer five, layer four, and layer three are given as:

δ^5(t) = y^d(t) − y(t)    (25)

δ_i^4(t) = δ^5(t) · [m_i σ_i (Σ_j σ_j u_j) − (Σ_j m_j σ_j u_j) σ_i] / (Σ_j σ_j u_j)^2    (26)

δ_i^3 = Σ_k δ_k^4    (27)

where y^d(t) and y(t) are the desired output and the current output of the LFLC, respectively.
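Equation (26) is the derivative of the center-of-average defuzzifier with respect to a single output term node activation, scaled by δ^5. A numerical sketch with illustrative names, where u[] holds the output term node activations:

```c
#include <math.h>

/* Defuzzified output y = (sum_j m_j s_j u_j) / (sum_j s_j u_j). */
double defuzzify(const double m[], const double s[], const double u[], int n)
{
    double num = 0.0, den = 0.0;
    for (int j = 0; j < n; j++) {
        num += m[j] * s[j] * u[j];
        den += s[j] * u[j];
    }
    return num / den;
}

/* Equation (26): delta4_i = delta5 * (m_i s_i den - num s_i) / den^2,
 * i.e. delta5 times dy/du_i. */
double delta4(const double m[], const double s[], const double u[],
              int n, int i, double delta5)
{
    double num = 0.0, den = 0.0;
    for (int j = 0; j < n; j++) {
        num += m[j] * s[j] * u[j];
        den += s[j] * u[j];
    }
    return delta5 * (m[i] * s[i] * den - num * s[i]) / (den * den);
}
```

The value of `delta4` can be checked against a finite difference of `defuzzify`, which confirms the algebra of (26).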
Backpropagation has a forward phase and a backward phase. The forward phase
of the backpropagation requires that the data be presented to the network at the first


layer for training. Next, the outputs of the input term nodes are calculated, and then,
transmitted to one of the output term nodes, via the firing strengths of the rule nodes.
Each fuzzy logic rule was structured to be excited by the smallest output value of the
initially defined input term nodes. Finally, the output of the LFLC is calculated using
equation (18). After the output is found for a pair of the training data, the error signal
is calculated using equation (24) and is then propagated to all of the previous layers of
the LFLC. Concurrently, the error signal is used to update the set of means (m) and the set of variances (σ) of each layer in the LFLC. The means and the variances at layer five are updated (fine-tuned) by equations (28) and (29), respectively:

m_i(t+1) = m_i(t) + η · δ^5 · (σ_i u_i) / (Σ_j σ_j u_j)    (28)

σ_i(t+1) = σ_i(t) + η · δ^5 · [m_i u_i (Σ_j σ_j u_j) − (Σ_j m_j σ_j u_j) u_i] / (Σ_j σ_j u_j)^2    (29)

where the learning rate (η) is a function decreasing in value as time progresses; u_i is the input value to the current node i, and σ_i and m_i are the variance and the mean of the current membership i, respectively. Layer four does not contain any parameters. Thus, only the error signal (δ^4) of this layer needs to be calculated, and it is then propagated to layer three. The equation of the error signal (δ^4) is obtained from equation (26). As shown in Fig. 1, layer three also does not contain any parameters that have to be updated; hence only the error signal (δ^3) is calculated and propagated to layer two. In layer two, the mean (m) and the variance (σ) are updated according to the adaptive rules derived in the following.
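Equations (28) and (29) can be collected into one update routine; this is a sketch with illustrative names, where u[] again holds the output term node activations feeding layer five:

```c
#include <math.h>

/* Layer-five fine tuning:
 *   m_i += eta * delta5 * (s_i u_i) / den                        (28)
 *   s_i += eta * delta5 * (m_i u_i den - num u_i) / den^2        (29)
 * with den = sum_j s_j u_j and num = sum_j m_j s_j u_j. */
void update_layer5(double m[], double s[], const double u[],
                   int n, double delta5, double eta)
{
    double num = 0.0, den = 0.0;
    for (int j = 0; j < n; j++) {
        num += m[j] * s[j] * u[j];
        den += s[j] * u[j];
    }
    for (int i = 0; i < n; i++) {
        double dm = (s[i] * u[i]) / den;
        double ds = (m[i] * u[i] * den - num * u[i]) / (den * den);
        m[i] += eta * delta5 * dm;
        s[i] += eta * delta5 * ds;
    }
}
```

With δ^5 > 0 (desired output above current output), the step raises the defuzzified output, which is the direction that reduces E.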


The general learning rule in a Neural Network is

Δw = −η (∂E/∂w)    (30)

which can be written as

w(t+1) = w(t) + η (−∂E/∂w)    (31)

where η is the learning rate, E is the error defined by eq. (24), and

∂E/∂w = [∂E/∂(net-input)] · [∂(net-input)/∂w] = (∂E/∂f)(∂f/∂w)    (32)

where

net-input = f(w_1^k, ..., w_p^k; u_1^k, ..., u_p^k)

Here, the superscript k indicates the layer number and the subscript p indicates the index of the input nodes. By the chain rule,

∂E/∂w = (∂E/∂a)(∂a/∂f)(∂f/∂w)    (33)

To show the learning rule, we will derive the computation of ∂E/∂a for this layer.
Using (32) and (4), the adaptive rule for m_ij is derived as follows:

∂E/∂m_ij = (∂E/∂a_i)(∂a_i/∂f)(∂f/∂m_ij) = (∂E/∂a_i) · e^f · 2(u_i − m_ij)/σ_ij^2    (34)


where i is the number of the node in the Input Term Node layer, and j is the number of the node in the Rule Node layer.
We have

∂E/∂a_i = Σ_k [∂E/∂(net-input_k)] · [∂(net-input_k)/∂a_i]    (35)

For this particular layer,

∂E/∂(net-input_k) = −δ_k^3    (36)

where δ_k^3 is the error signal for rule node k at layer three, and k = 1, 2, ..., 25. Also, from (9) and (10),

∂(net-input_k)/∂a_i = 1 if a_i = min(inputs of rule node k), otherwise 0.

Hence,

∂E/∂a_i = −Σ_k δ_k^3 q_k    (37)

where the summation is performed over the rule nodes that a_i feeds into. Here a_i is the output of Input Term Node number i and

q_k = 1 if a_i is the minimum among the k-th rule node's inputs, 0 otherwise.    (38)

Thus, the adaptive rule of m_ij for this layer is

m_ij(t+1) = m_ij(t) − η (∂E/∂a_i) e^f 2(u_i − m_ij)/σ_ij^2

In the same manner, using (32), (4), (37), and (35), the adaptive rule of σ_ij is

σ_ij(t+1) = σ_ij(t) − η (∂E/∂a_i) e^f 2(u_i − m_ij)^2/σ_ij^3    (39)


Chapter 5
5.1 Symbolic Representation and Performance Evaluation of the LFLC
In the LFLC, the intelligent control decision of the controller is determined by
both the links (or weights) of the Learning Fuzzy Logic Controller and the variables of
the Fuzzy Logic membership functions. The symbolic representation of the network
is described in two levels: the Low Level, and the High Level.
5.2 Low Level Symbolic Representation of the LFLC
The low level symbolic representation of the network consists of the variables of
the fuzzy membership function and the links in each layer of the network. We can
interpret the symbolic representation of this level as the IF part of the LFLC. For
example, the mean is the center of a set of data that belongs to a cluster. The cluster
method that is used in this network is Self-Organization and the variances of the fuzzy
logic membership function serve as the radius of the cluster. Data that is out of the
range of the radius will have no effect on the node in either the Input Term Nodes layer
or the Output Term Node layer. Thus, the combination of the mean and the variance of
the Fuzzy Logic membership function as an activation function of a node in the neural
network enables the network to perform decision making. In an extreme case, each
node in the network represents Off or On in linguistic terms for the LFLC. The Fuzzy
Logic membership functions following an unsupervised learning process are shown in
Fig. 3 and Fig. 4. Each Gaussian membership function has a mean (center) and a vari-
ance (width). If data are presented to these Fuzzy Logic membership functions, only
one membership function will have the highest activation value.


Please note that membership function 1-2 is not shown in Fig. 3 and Fig. 5. This is due to the fact that the mean and variance values of this membership function did not contribute to the control decision making (the mean value and the variance did not change their values) for this particular node. Hence, it is not shown in Fig. 3 and Fig. 5.

Fig. 3. Initial Input Term Node membership functions for x1 (memberships 1-1, 1-3, 1-4, and 1-5)


Fig. 4. Initial Input Term Node membership functions for x2 (memberships 2-1 through 2-5)
After supervised learning, most of the mean and variance values of the Fuzzy
Logic membership functions are changed from their previous values (before the super-
vised learning). This is shown in Fig. 5 and Fig. 6. In Fig. 5, the mean and variance
values of the Fuzzy Logic membership functions are different from those of the membership functions shown in Fig. 3. In the same manner, the means and variances of membership functions 2-1, 2-2, 2-3, 2-4, and 2-5 in Fig. 6 are different from those of
the membership functions shown in Fig. 4. This is due to the fact that in the super-
vised learning phase, the error between the calculated value and the desired value is
calculated and propagated to all layers in the network. As a consequence, the mean


and the variance are updated according to the learning rule (30). After supervised
learning, the mean and the variance represent the best control decision for a specific
node.
Fig. 5. Final Input Term Node membership functions for x1 (memberships 1-1, 1-3, 1-4, and 1-5)
Also, the variances of the Fuzzy Logic membership functions 1-3 and 1-4 in Fig. 3 changed from large values to small ones, as shown in Fig. 5. This can be seen better when the figure is enlarged. It indicates that only a limited range of data will be in the neighborhood of the means of these two membership functions. Otherwise, the variances of these two membership functions would increase in value, like those of membership functions 1-1 and 1-5, to allow more data to excite the membership function that gives the highest activation value. Also, the means of these two Fuzzy Logic membership functions moved closer to each other. This change of direction indicates that the input data belonging to these two clusters is somewhat separated from the other input data. Furthermore, the variances of membership functions 1-1 and 1-5 increased in value. This increase indicates that the majority of the data belongs to these two clusters, and that the centers of these two clusters are the mean values of the Fuzzy Logic membership functions 1-1 and 1-5.
Fig. 6. Final Input Term Node membership functions for x2 (memberships 2-1 through 2-5)
In general, a change in the mean indicates that most of the data is close to the new value. A large variance indicates that a large quantity of data is located around that mean; conversely, a smaller variance indicates that a smaller quantity of data is near that mean value. The convergence of the means and the variances is shown in Figs. 7, 8, 9, and 10.


Fig. 7. Convergence of the means of the Input Term Node memberships for input x1

Fig. 8. Convergence of the standard deviations of the Input Term Node memberships for x1


Fig. 9. Convergence of the means of the Input Term Node memberships for input x2

Fig. 10. Convergence of the standard deviations of the Input Term Node memberships for input x2


In a Neural Network, the links represent the firing strengths which are used to con-
nect the nodes of one layer to the other layers. However, three types of links are used
in LFLCS. The first type of link is used to transmit information from one node to
another node. For example, the links of the Input Linguistic Node layer are used as a
connection to transmit information from this layer to a node at the Input Term Node
layer. The connection of the network is shown in Fig. 2. In the same manner, links at the Output Term Node layer are used to transmit signals from the nodes in this layer to the nodes of the Output Linguistic Node layer, and the links of these layers are fully connected with weights equal to 1.
In this particular network, the second type of link is constructed in such a way that
for each node in the Rule Nodes layer there are only two connections from the Input
Term Nodes layer. These links can then be interpreted as the IF part of the Fuzzy
Logic rule. The predefined structure of these links is explained in detail in section 3.4.
The third type of link used is at the Output Term Node layer. These links are ini-
tially fully connected, but the weights are modified by the learning rule (eq. 23) in the
first (unsupervised) phase of the learning process. These links can be interpreted as
the THEN (consequence) part of the Fuzzy Logic rule. As can be seen in Table 1, a
rule node can be connected to only one node at the Output Term Nodes layer. These
links represent the firing strength for a node in that network. Furthermore, Table 1
shows that nodes number 3, 4 and 5 of the Output Term Node layer are eliminated.
This is because none of the rule nodes connects to these Output Term nodes. Thus,
any rule node that contains one of the three nodes from the Output Term Node, men-
tioned above as its consequence node, is automatically eliminated.
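The link elimination described above can be sketched as a winner-take-all pass over one rule node's row of consequent weights (compare `find_max_w4` in Appendix B); the function below is illustrative, not the thesis routine.

```c
/* Keep only the strongest of a rule node's links into the Output Term
 * Node layer: the maximum weight becomes the rule's consequence and all
 * other links are set to zero.  Returns the index of the surviving link. */
int keep_strongest_link(double w[], int n)
{
    int best = 0;
    for (int j = 1; j < n; j++)
        if (w[j] > w[best])
            best = j;
    for (int j = 0; j < n; j++)
        if (j != best)
            w[j] = 0.0;
    return best;
}
```

Applying this to every rule node leaves each rule with exactly one consequence; output term nodes that no surviving link points to (nodes 3, 4, and 5 in Table 1) drop out of the rule base.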


5.3 High Level Representation of the LFLC
In the Fuzzy Logic symbolic representation of a control strategy, the use of linguistic terms offers advantages over the conventional approach of specifying the control algorithm as an equation, especially in ill-structured situations. The concept involves using linguistic rules to describe the operation of the process from a human point of view and to capture the essential knowledge of the operation of that process, which the operator has presumably acquired through direct experience with an actual operating process. It follows that this knowledge can be used as the best rule set which the operator can obtain for the control action in linguistic terms.
In this thesis, the linguistic terms of the control action are learned through the training of the network, and the teaching pattern is used in place of the operator's knowledge. This knowledge is stored in the network by first using the Self-Organizing learning method and then using the Backpropagation method of learning in order to determine the Fuzzy Logic rule nodes.
For example, the linguistic term for the first node in the Input Term Nodes layer is interpreted as Negative Large (NL). The second node is interpreted as Negative Small (NS), the third node is interpreted as Zero (ZN), the fourth node is interpreted as Positive Small (PS), and the fifth node as Positive Large (PL).
The linguistic term for the first node of the Output Term Nodes is interpreted as the
Negative Large (NL) node. The second node is interpreted as Negative Medium (NM),
the third node is interpreted as Negative Small (NS), the fourth node is interpreted as Zero (ZN), the fifth node is interpreted as Positive Small (PS), the sixth node


is interpreted as Positive Medium (PM), and finally the seventh node as Positive Large
(PL).
After these linguistic terms are defined, the Fuzzy Logic rule can be applied to obtain the control action for each input. For instance, let x1 = −0.8 and x2 = 0.0; then the inputs belong to the linguistic terms Negative Large and Zero, respectively. If Fuzzy Logic rule number one is

R1: IF x1 is NL and x2 is ZN, THEN the consequence is PM

then node six of the Output Term Nodes layer is executed. In simulation, every rule is executed; however, only rule R1 responds with the strongest excitation value. This means that node number six of the Output Term Nodes will have a large excitation value compared to the other nodes in this layer. The Output Term Node membership functions are shown in Fig. 11 and Fig. 12. After every output of the Output Term Nodes is calculated, the output of the LFLC can be found using equations (20) and (21). For this particular input pair, the output of the LFLC is weighted mostly by the output of membership function six of the Output Term Nodes layer.


Fig. 11. Initial Output Term Node membership functions


5.4 Performance evaluation
In this section, performance of the LFLC is evaluated by an example using the
LFLC in a closed loop control system. First, a conventional PI controller is designed
to control the shaft position of a D.C. motor. The transfer function of the D.C. plant
Gp(z) is
g,w- mA5^iir1) <
The transfer function of the position sensor H(z), which is used in the feedback path of the closed loop system, is

H(z) = (99.3 z − 95.3) / (z + 1)    (41)

A unit step signal is used as the input signal to the system.
Fig. 13. Training data: control signal, error signal, and feedback signal


The control system is implemented by simulation to obtain training data set as
shown in Fig. 13. This data set is then used in the off-line training of the LFLC to
obtain the fuzzy logic controller, which will give approximately the same performance
as a PI controller.
A closed loop control system is also simulated to illustrate the capability of the
LFLC. This closed loop control system is shown in Fig. 14. In the diagram, the error
signal and the feedback signal are normalized before being fed into the LFLC. This
normalization process is a necessary step before any data is presented to the LFLC.
This follows since during the learning process the training data is also normalized by
equation (1). However, it should be observed that the control signal generated by the LFLC in Fig. 14 goes through an inverse normalization process in order to retrieve the actual control signal. This is required since the teaching pattern is normalized by equation (2) during the training process. Equation (43) is used in the inverse normalization process.

Fig. 14. LFLCS with PI Fuzzy Logic Controller


y_actual = (y_LFLC − 0.5) · (2 × max(|max y_t|, |min y_t|))    (43)

where y_LFLC is the output signal from the LFLC, and y_actual is the actual control signal.
The output of the computer simulation of the LFLC with a PI Fuzzy Logic controller is shown in Fig. 15. The result of the simulation shows that the LFLC is able to control the D.C. motor at least as well as a conventional PI controller, or even better. The LFLC shows better rise time, better settling time, and a smaller steady state error.

Fig. 15. Step response of the system with the classical (Matlab simulation) and fuzzy logic PI controllers


Chapter 6
6.1 Summary
In this thesis, the Learning Fuzzy Logic Controller has been developed to replace the conventional PI controller used in a closed loop position control system. The results of the two simulated control systems are given in Fig. 15. We find that the rise time, settling time, and steady state error of the system with the LFLC are superior to those of the system with a conventional PI controller.
The Learning Fuzzy Logic Controller developed in this thesis shows that, by combining the Neural Network learning concept and the Fuzzy Logic rule base, it is possible to eliminate the need for an accurate model of the system.
However, during the research on the LFLCS we discovered that this approach has some limitations. These stem from the teaching pattern needed to train the network. In most cases, this teaching pattern is the control signal to the plant. This signal can be either generated by computer simulation, as we did in this thesis, or taken from the actual control signal of a real system. However, in control applications, this signal is not always obtainable. In order to overcome this problem in control design, a reinforcement learning approach should be considered. Reinforcement learning utilizes both the knowledge of the system at hand and an error prediction scheme to modify the parameters of the controller.


Appendix A
Code Functions Index to Appendix B
main() .... 45
open_files(void) .... 47
unsupervised_learning(void) .... 49
supervise_learning(void) .... 53
get_downup_act4(unsigned int input_index) .... 56
get_downup_act4(void) .... 58
backpro_forward(unsigned int index) .... 60
change_w4(void) .... 64
close_loop_control(void) .... 66
close_loop_backpro_forward(void) .... 70
close_loop_get_act2(unsigned int input_lingui_no, float inputx) .... 73
close_loop_activation_12(float inputx, unsigned int input_lingui_no) .... 74
close_loop_12_membership(float mean, float deviation, float u) .... 75
close_loop_get_act3(unsigned int input_lingui_no1, unsigned int input_lingui_no2) .... 76
close_loop_get_downup_act4(void) .... 78
find_input_deviation(unsigned int input_lingui_no) .... 80
find_output_deviation(unsigned int output_lingui_no) .... 82
error_backpro_layer2(float lrate, int input_index) .... 84
rule_connection(unsigned int input_term_membership, unsigned int input_lingui_no) .... 86
membership_connection() .... 88
find_next_rule_node() .... 90
error_backpro_layer3(void) .... 91
error_backpro_layer4(float *Sum_width_act, float *Sum_mean_width_act) .... 93
error_backpro_layer5(float lrate, unsigned int index) .... 95
rule_connection(unsigned int input_term_membership, unsigned int input_lingui_no) .... 99
membership_connection() .... 100
find_next_rule_node() .... 102
update_input_12_w(float inputx, float alpha, unsigned int input_ling_node) .... 104


fi_12_mini_index(float inputx, unsigned int input_ling_node) .... 105
update_output_w(float inputy, float alpha, unsigned int output_ling_node) .... 107
fi_output_mini_index(float inputy, unsigned int output_ling_node) .... 108
get_means(void) .... 110
find_mean(float max_range, float min_range, unsigned int num_membership, unsigned int input_no) .... 112
initials_all(void) .... 119
init_globle(void) .... 122
init_weight(void) .... 122
init_maxmin_range(void) .... 123
init_weight2(void) .... 125
init_weight3(void) .... 126
init_weight4(void) .... 126
get_act2(void) .... 128
activation_12(float inputx, unsigned int input_lingui_no) .... 128
l2_membership(float mean, float deviation, float u) .... 130
get_act2(unsigned int input_lingui_no, unsigned int input_index) .... 131
activation_12(float inputx, unsigned int input_lingui_no) .... 132
l2_membership(float mean, float deviation, float u) .... 133
get_act3(unsigned int input_lingui_no1, unsigned int input_lingui_no2) .... 134
get_act4(unsigned int output_lingui_no, unsigned int index) .... 136
activation_14(float output, unsigned int output_lingui_no) .... 137
l4_membership(float mean, float deviation, float u) .... 138
find_max_w4(void) .... 139
find_output(void) .... 142
init_phase1(void) .... 146
save_w4(void) .... 150
save_w4(void) .... 151
update_w(int prt_index) .... 152


main(void) .... 156
read_input(void) .... 164


APPENDIX B
Simulation Code
#include <stdio.h>
#include <stdlib.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"

int
main()
{
    unsigned int input_ling_no,
                 output_ling_no,
                 i,
                 j;

    if (!open_files())
    {
        printf("CAN NOT OPEN FILES\n");
    }
    if (!read_input())
    {
        printf("CAN NOT READ INPUT FILE\n");
    }
    if (!unsupervised_learning())
    {
        printf("INITIAL PHASE1 FALSE\n");
        return (FALSE);
    }
    if (!supervise_learning())
    {
        printf("BACKPROP PHASE1 FALSE\n");
        return (FALSE);
    }
    printf("end of supervise learning\n");
    getchar();

    if (!close_loop_control())
    {
        printf("CLOSE CONTROL LOOP FALSE\n");
        return (FALSE);
    }
    return (TRUE);
}


#include <stdio.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"

int
open_files(void)
{
    if ((Error_fptr = fopen("error.txt", "r")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN ERROR FILE\n");
        return (FALSE);
    }
    if ((Fback_fptr = fopen("fback.txt", "r")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN FBACK FILE\n");
        return (FALSE);
    }
    if ((Consigna_fptr = fopen("consigna.txt", "r")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN CONSIGNA FILE\n");
        return (FALSE);
    }
    if ((Output_fptr = fopen("output.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN OUTPUT FILE\n");
        return (FALSE);
    }
    if ((W_14_fptr = fopen("weigth4.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN WEIGTH4 FILE\n");
        return (FALSE);
    }
    if ((Input_term_Deviation_fptr = fopen("input_D.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN INPUT DEVIATION FILE\n");
        return (FALSE);
    }
    if ((Final_input_term_deviation_fptr = fopen("fi_inD.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN FINAL INPUT DEVIATION FILE\n");
        return (FALSE);
    }
    if ((Output_term_Deviation_fptr = fopen("output_D.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN OUTPUT DEVIATION FILE\n");
        return (FALSE);
    }
    if ((Final_output_term_deviation_fptr = fopen("fi_outD.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN FINAL OUTPUT DEVIATION FILE\n");
        return (FALSE);
    }
    if ((Mean_output_fptr = fopen("mean_out.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN MEAN OUTPUT FILE\n");
        return (FALSE);
    }
    if ((Final_mean_output_fptr = fopen("fin_mout.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN FINAL MEAN OUTPUT FILE\n");
        return (FALSE);
    }
    if ((Mean_input_fptr = fopen("mean_in.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN MEAN_IN FILE\n");
        return (FALSE);
    }
    if ((Final_mean_input_fptr = fopen("fin_min.txt", "w")) == NULL)
    {
        fprintf(stderr, "CAN NOT OPEN FINAL MEAN INPUT FILE\n");
        return (FALSE);
    }
    printf("OPEN SUCCESSFUL\n");
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"

/*******************************************************************************
 * function unsupervised_learning()
 *
 * This function executes all initial procedures for getting the centers of
 * the memberships and the deviations of the input and output term nodes.
 * After this function, the input signal can be passed through the
 * memberships on layer 3 and layer 4.
 *
 * Input parameters: inputx1,
 *                   inputx2.
 *
 * Return value: TRUE if all the centers of the memberships and the
 *               deviations are calculated,
 *               FALSE otherwise.
 *
 * Programmer:
 *     Leung Kam Lung    9/30/93    MS. Thesis.
 ******************************************************************************/
int
unsupervised_learning(void)
{
    int i;
    float time,
          time2;
    unsigned int input_lingui_no,
                 output_lingui_no;

    time = 0.001;
    time2 = 0.001;
    initials_all();
    input_lingui_no = 0;
    output_lingui_no = 0;
    while ((time > 0.000001) || (time2 > 0.000001))
    {
        for (i = 0; i < MAX_INPUT_INDEX; i++)
        {
            update_input_12_w(Inputx[input_lingui_no][i], time, input_lingui_no);
            update_input_12_w(Inputx[input_lingui_no + 1][i], time, input_lingui_no + 1);
            update_output_w(Output[output_lingui_no][i], time2, output_lingui_no);
        }
        time = time * 0.995;
        time2 = time2 * 0.995;
    }
    if (!save_mean_input())
    {
        printf("Mean_input FALSE\n");
    }
    if (!save_mean_output())
    {
        printf("Mean_output FALSE\n");
    }
    /*
     * up to now only the centers (means) of the input and output
     * memberships are calculated
     */
    for (input_lingui_no = 0; input_lingui_no < INPUT_LINGUI_NO; input_lingui_no++)
    {
        if (find_input_deviation(input_lingui_no))
        {
            printf(" Deviation of each Membership is found for INPUT_TERM_NODE %d\n",
                   input_lingui_no);
        }
        else
        {
            printf(" Deviation of each Membership is NOT found for INPUT_TERM_NODE %d\n",
                   input_lingui_no);
            return (FALSE);
        }
    } /* End for input_lingui_no */
    if (!save_input_deviation())
    {
        printf("CAN NOT SAVE INPUT DEVIATION\n");
    }
    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
        if (find_output_deviation(output_lingui_no))
        {
            printf(" Deviation of each Membership is found for OUTPUT_TERM_NODE %d\n",
                   output_lingui_no);
        }
        else
        {
            printf(" Deviation of each Membership is NOT found for OUTPUT_TERM_NODE %d\n",
                   output_lingui_no);
            return (FALSE);
        }
    } /* End for output_lingui_no */
    if (!save_output_deviation())
    {
        printf("CAN NOT SAVE OUTPUT DEVIATION\n");
    }
    /*
     * now all deviations of each membership for the input term nodes and
     * output term nodes are calculated
     */
    if (!update_w_14())
    {
        printf("update_w_14 FALSE\n");
        return (FALSE);
    }
    if (!find_max_w4())
    {
        printf("can not find max weight 4 FALSE\n");
        return (FALSE);
    }
    if (!save_w4())
    {
        printf("can not save weight 4\n");
        return (FALSE);
    }
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"

int
supervise_learning(void)
{
    float lrate;
    int index,
        retv5,
        retv4;

    lrate = 0.045;
    /*
     * Forward phase of the learning for all inputs.  At this point all
     * errors will be calculated and then the update procedures will
     * execute.  The error is summed for a smooth result.
     */
    Cal_error = 1.0;
    while ((lrate > MIN_LRATE) && (Cal_error > 0.025))
    {
        Cal_error = MIN_ERROR;
        Max_indexs = 0;
        for (index = 0; index < MAX_INPUT_INDEX; index++)
        {
            if (!backpro_forward(index))
            {
                printf(" Can not find the output from backpro forward\n");
                return (FALSE);
            }
            if (!error_backpro_layer5(lrate, index))
            {
                printf(" Can not find the error_backpro_layer5_ptr\n");
                return (FALSE);
            }
            if (!error_backpro_layer4(Sum_width_act, Sum_mean_width_act))
            {
                printf(" Can not find the error_backpro_layer4_ptr\n");
                return (FALSE);
            }
            if (!error_backpro_layer3())
            {
                printf(" Can not find the error_backpro_layer3_ptr\n");
                return (FALSE);
            }
            if (!error_backpro_layer2(lrate, index))
            {
                printf(" Can not find the error_backpro_layer2_ptr\n");
                return (FALSE);
            }
            if (!update_w(index))
            {
                printf("Error update the weight\n");
                return (FALSE);
            }
        } /* End of for input_index */
        lrate *= 0.998;
        printf(" actual output %f", Output[0][Max_indexs]);
        printf(" calculated output %f\n", Output_down_up[0][Max_indexs]);
        printf("learn rate %1.10f", lrate);
        printf("Cal_error %1.10f\n", Cal_error);
    } /* End of the while lrate loop */
    save_final_mean_input();
    save_final_mean_output();
    save_final_input_deviation();
    save_final_output_deviation();
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"

int
get_downup_act4(unsigned int input_index)
{
    unsigned int output_lingui_no;
    unsigned int output_term_membership;
    unsigned int rule_index;
    float act4;

    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
        for (output_term_membership = 0; output_term_membership <
             Output_membership_array[output_lingui_no];
             output_term_membership++)
        {
            act4 = 0.0;
            for (rule_index = 0; rule_index < RULE_NODE; rule_index++)
            {
                act4 += Act_13[rule_index] *
                    W_14[output_lingui_no][output_term_membership][rule_index];
            } /* End rule_index */
            if (act4 >= 1)
            {
                Act_downup_14[output_lingui_no][output_term_membership] = 1.0;
            }
            else
            {
                Act_downup_14[output_lingui_no][output_term_membership] = act4;
            }
            if (act4 > MAXFLOAT)
            {
                printf("activation value of level 4 is bigger than max float\n");
                return (FALSE);
            }
        } /* End for output_term_membership */
    } /* End for output_lingui_no */
    return (TRUE);
}
57


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
int
get_downup_act4(void)
{
    unsigned int output_lingui_no;
    unsigned int output_term_membership;
    unsigned int rule_index;
    float act4;
    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
	for (output_term_membership = 0; output_term_membership <
	     Output_membership_array[output_lingui_no];
	     output_term_membership++)
	{
	    act4 = 0.0;
	    for (rule_index = 0; rule_index < RULE_NODE; rule_index++)
	    {
		act4 += Act_13[rule_index] *
		    W_14[output_lingui_no][output_term_membership][rule_index];
	    } /* End rule_index */
	    if (act4 >= 1)
	    {
		Act_downup_14[output_lingui_no][output_term_membership] = 1.0;
	    }
	    else
	    {
		Act_downup_14[output_lingui_no][output_term_membership] = act4;
	    }
	    if (act4 > MAXFLOAT)
	    {
		printf("activation value of level 4 is bigger than max float\n");
		return (FALSE);
	    }
	} /* End for output_term_membership */
    } /* End for output_lingui_no */
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
/*******************************************************************************
 * function backpro_forward()
 *
 * This function calculates the network output in the down-up (forward) pass of
 * the network. It calls the following functions:
 *
 *	get_downup_act4;
 *	get_act3;
 *	get_act2;
 *
 * Input parameter: index
 *
 * Return value: int type.
 *
 * programmer: Leung Kam Lung, M.S. Thesis, 9/28/93
 *
 */
int
backpro_forward(unsigned int index)
{
    int retval2,
        retval3,
        retval4;
    unsigned int input_lingui_no,
        input_lingui_no1,
        input_lingui_no2,
        output_lingui_no;
    float out1,
        f,
        f2,
        temp_error,
        abs_cal_error;
    unsigned int output_term_membership,
        membership;
    input_lingui_no1 = 0;
    input_lingui_no2 = 1;
    output_lingui_no = 0;
    /* printf("index at backpro %d\n", index); */
    for (input_lingui_no = 0; input_lingui_no < INPUT_LINGUI_NO; input_lingui_no++)
    {
	retval2 = get_act2(input_lingui_no, index);
	if (retval2 == FALSE)
	{
	    printf("return value of act_function_ptr2 is > or < max,min double\n");
	    return (FALSE);
	} /* End of if retval2 */
    } /* End of for input_lingui_no */
    retval3 = get_act3(input_lingui_no1, input_lingui_no2);
    if (retval3 == FALSE)
    {
	printf("return value of act_function_ptr3 is > or < max,min double\n");
	return (FALSE);
    } /* End of if retval3 */
    retval4 = get_downup_act4();
    if (retval4 == FALSE)
    {
	printf("return value of act_function_downup_ptr4 is > or < max,min double\n");
	return (FALSE);
    } /* End of if retval4 */
    /*
     * The next for loop calculates the network's output of the supervised
     * learning
     */
    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
	Output_down_up[output_lingui_no][index] = 0.0;
	f = f2 = 0.0;
	for (output_term_membership = 0; output_term_membership <
	     Output_membership_array[output_lingui_no];
	     output_term_membership++)
	{
	    f += (Mean_output[output_lingui_no][output_term_membership] *
		  Output_term_Deviation[output_lingui_no][output_term_membership] *
		  Act_downup_14[output_lingui_no][output_term_membership]);
	    f2 += (Output_term_Deviation[output_lingui_no][output_term_membership] *
		   Act_downup_14[output_lingui_no][output_term_membership]);
	} /* End of output_term_membership */
	out1 = Output_down_up[output_lingui_no][index] = f / f2;
	Error_array[output_lingui_no] = temp_error = (float) (Output[output_lingui_no][index] -
				Output_down_up[output_lingui_no][index]);
	/* printf("actual output %f", Output[0][index]); */
	/*
	 * printf(" calculated output %f, Error %f\n", out1,
	 *	  Error_array[output_lingui_no]);
	 */
	/* getchar(); */
	/* abs_cal_error = (float) pow(fabs(temp_error), 2); */
	abs_cal_error = (float) fabs(temp_error / Output[output_lingui_no][index]);
	if (abs_cal_error > Cal_error)
	{
	    Cal_error = abs_cal_error;
	    Max_indexs = index;
	}
    } /* End of output_lingui_no */
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
int
change_w4(void)
{
    float weight4;
    unsigned int output_lingui_no;
    unsigned int output_term_membership;
    unsigned int rule_index;
    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
	for (output_term_membership = 0; output_term_membership <
	     Output_membership_array[output_lingui_no];
	     output_term_membership++)
	{
	    for (rule_index = 0; rule_index < RULE_NODE; rule_index++)
	    {
		/*
		 * printf("output_term_membership %d, rule
		 * %d", output_term_membership, rule_index);
		 */
		/*
		 * printf(" weight4 %f",
		 * W_14[output_lingui_no][output_term_membership][rule_index]);
		 */
		weight4 = W_14[output_lingui_no][output_term_membership][rule_index] +
		    Act_14[output_lingui_no][output_term_membership] *
		    (Act_13[rule_index] -
		     W_14[output_lingui_no][output_term_membership][rule_index]);
		/* printf("weight4 %f\n", weight4); */
		if (weight4 > MAXFLOAT)
		{
		    printf("weight4 value of output_lingui_no %d output_term_membership %d rule_index %d is > or < expected value\n",
			   output_lingui_no, output_term_membership, rule_index);
		    return (FALSE);
		}
	    } /* End rule_index */
	} /* End for output_term_membership */
    } /* End for output_lingui_no */
    return (TRUE);
} /* End of function change_w4 */


#include <stdio.h>
#include <math.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
int
close_loop_control(void)
{
    int index,
        time;
    float inputx1,
        inputx2,
        Fback;
    float y,
        y1,
        y2;
    float u1,
        u2;
    float fb,
        fb1;
    float close_input1,
        close_input2;
    float inv_control_signal;
    float plant_output,
        reference;
    y1 = 0.0;
    y2 = 0.0;
    u1 = 0.0;
    u2 = 0.0;
    fb = 0.0;
    fb1 = 0.0;
    reference = 1.0;
    plant_output = 0.0;
    close_input1 = reference - fb;
    close_input2 = fb;
    Close_loop_input[0] = (close_input1 * Input_slope1);
    Close_loop_input[1] = (close_input2 * Input_slope2);
    printf("IN CONTROL LOOP\n");
    for (time = 2; time < MAX_TIME; time++)
    {
	printf(" Error %8f\n", close_input1);
	printf(" Fback %8f\n", close_input2);
	printf(" Input_x0 %8f\n", Close_loop_input[0]);
	printf(" Input_x1 %8f\n", Close_loop_input[1]);
	if (!close_loop_backpro_forward())
	{
	    printf("Can not find the output from close loop backpro forward\n");
	    return (FALSE);
	}
	inv_control_signal = ((Control_signal - 0.5) / Output_slope1);
	printf("Teach %1.6f\n", inv_control_signal);
	plant_output = 2 * y1 - y2 + 179.45e-6 * (u1 + u2);
	printf("output of the plant %f\n", plant_output);
	fb = 99.3 * plant_output - 95.3 * y1 - fb1;
	printf(" fback %f\n", fb);
	close_input1 = reference - fb;
	close_input2 = fb;
	/*
	 * printf("Close input1 bigger %f\n", close_input1 * Input_slope1);
	 * printf("Close input2 bigger %f\n", close_input2 * Input_slope2);
	 */
	if ((close_input1 * Input_slope1) > Max_input1)
	{
	    Close_loop_input[0] = Max_input1;
	}
	else
	{
	    if ((close_input1 * Input_slope1) < Min_input1)
	    {
		Close_loop_input[0] = Min_input1;
	    }
	    else
	    {
		Close_loop_input[0] = (close_input1 * Input_slope1);
	    }
	}
	if ((close_input2 * Input_slope2) > Max_input2)
	{
	    printf("Max_input2 %f\n", Max_input2);
	    Close_loop_input[1] = Max_input2;
	}
	else
	{
	    if ((close_input2 * Input_slope2) < Min_input2)
	    {
		printf("Min_input2 %f\n", Min_input2);
		Close_loop_input[1] = Min_input2;
	    }
	    else
	    {
		Close_loop_input[1] = (close_input2 * Input_slope2);
	    }
	}
	u2 = u1;
	u1 = inv_control_signal;
	y2 = y1;
	y1 = plant_output;
	fb1 = fb;
	/*
	 * printf(" u2 %8f\n", u2); printf(" u1 %8f\n", u1); printf(" y2 %8f
	 * \n", y2); printf("y1 %8f\n", y1); printf("fb1 %8f\n", fb1);
	 */
	getchar();
    }
    fclose(Output_fptr);
    fclose(Max_data_fptr);
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
/*******************************************************************************
 * function close_loop_backpro_forward()
 *
 * This function calculates the network output in the down-up (forward) pass of
 * the network. It calls the following functions:
 *
 *	close_loop_get_downup_act4;
 *	close_loop_get_act3;
 *	close_loop_get_act2;
 *
 * Input parameter: none.
 *
 * Return value: int type.
 *
 * programmer: Leung Kam Lung, M.S. Thesis, 9/28/93
 *
 */
int
close_loop_backpro_forward(void)
{
    int retval2,
        retval3,
        retval4;
    unsigned int input_lingui_no,
        input_lingui_no1,
        input_lingui_no2,
        output_lingui_no;
    float out1,
        f,
        f2,
        temp_error,
        abs_cal_error;
    unsigned int output_term_membership,
        membership;
    input_lingui_no1 = 0;
    input_lingui_no2 = 1;
    output_lingui_no = 0;
    for (input_lingui_no = 0; input_lingui_no < INPUT_LINGUI_NO; input_lingui_no++)
    {
	retval2 = close_loop_get_act2(input_lingui_no, Close_loop_input[input_lingui_no]);
	if (retval2 == FALSE)
	{
	    printf("return value of close_loop_get_act2 is > or < max,min double\n");
	    return (FALSE);
	} /* End of if retval2 */
    } /* End of for input_lingui_no */
    retval3 = close_loop_get_act3(input_lingui_no1, input_lingui_no2);
    if (retval3 == FALSE)
    {
	printf("return value of close_loop_get_act3 is > or < max,min double\n");
	return (FALSE);
    } /* End of if retval3 */
    retval4 = close_loop_get_downup_act4();
    if (retval4 == FALSE)
    {
	printf("return value of close_loop_get_downup_act4 is > or < max,min double\n");
	return (FALSE);
    } /* End of if retval4 */
    /*
     * The next for loop calculates the network's output of the supervised
     * learning
     */
    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
	f = f2 = 0.0;
	for (output_term_membership = 0; output_term_membership <
	     Output_membership_array[output_lingui_no];
	     output_term_membership++)
	{
	    f += (Mean_output[output_lingui_no][output_term_membership] *
		  Output_term_Deviation[output_lingui_no][output_term_membership] *
		  Act_downup_14[output_lingui_no][output_term_membership]);
	    f2 += (Output_term_Deviation[output_lingui_no][output_term_membership] *
		   Act_downup_14[output_lingui_no][output_term_membership]);
	} /* End of output_term_membership */
	Control_signal = f / f2;
	if ((Control_signal > MAXFLOAT) || (Control_signal < MINFLOAT))
	{
	    printf("Control_signal OUT OF FLOATING POINT RANGE\n");
	    return (FALSE);
	}
	else
	{
	    printf("Output1 %1.6f\n", Control_signal);
	}
    } /* End of output_lingui_no */
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
/*******************************************************************************
 * function close_loop_get_act2()
 *
 * This function calculates the output of layer two of the network.
 *
 * Input parameters: input_lingui_no, inputx.
 *
 * Return value: int type.
 *
 * programmer: Leung Kam Lung, M.S. Thesis, 9/28/93
 *
 */
int
close_loop_get_act2(unsigned int input_lingui_no, float inputx)
{
    int retval;
    retval = close_loop_activation_12(inputx, input_lingui_no);
    if (retval == FALSE)
    {
	printf("return value of act_function_ptr is > or < max,min double\n");
	return (FALSE);
    } /* End of if retval */
    return (TRUE);
}
/*******************************************************************************
 * This function is called by get_act2 to set up the parameters for calculating
 * the output value for layer 2 of the network.
 *
 * Input parameters: inputx,
 *		     input_lingui_no.
 *
 * Return value: TRUE if it successfully calculates the output values,
 *		 FALSE otherwise.
 *
 * programmer: Leung Kam Lung, M.S. Thesis, 9/28/93
 */
int
close_loop_activation_12(float inputx, unsigned int input_lingui_no)
{
    int membership;
    float temp;
    for (membership = 0; membership < Input_membership_array[input_lingui_no]; membership++)
    {
	temp = Act_12[input_lingui_no][membership] = (float)
	    close_loop_12_membership(Mean_input[input_lingui_no][membership],
				     Input_term_Deviation[input_lingui_no][membership],
				     inputx);
	printf("a_2 %f ", temp);
	if (temp >= MAXFLOAT)
	{
	    printf("close_loop Act_12 value of input_lingui_no %d membership %d is > or < expected value\n",
		   input_lingui_no, membership);
	    return (FALSE);
	}
    } /* End of for membership */
    printf("\n");
    getchar();
    return (TRUE);
} /* End of activation_12 */
/*******************************************************************************
 * The function calculates the activation value for the membership function in
 * layer 2 of the MNFLC.
 * input parameters:
 *	mean
 *	deviation
 *	u (input value)
 * output:
 *	Return type double
 * programmer: Leung Kam Lung, M.S. Thesis, 9/28/93
 *
 */
double
close_loop_12_membership(float mean, float deviation, float u)
{
    /*
     * printf("12 deviation %f mean %f float %f", deviation, mean, u);
     * getchar();
     */
    return (exp(-pow(((double) u - (double) mean), 2) / (pow((double) deviation, 2))));
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
/*******************************************************************************
 * function close_loop_get_act3()
 *
 * This function finds the activation value of layer three by taking the
 * minimum of the outputs of the input term nodes, which are the outputs of the
 * membership functions in layer 2. The activation values of this layer (the
 * rule nodes) are the minimum values from the output of layer 2.
 *
 * Input parameters: input_lingui_no1,
 *		     input_lingui_no2.
 *
 * Return value: TRUE if it successfully calculates the output values,
 *		 FALSE otherwise.
 *
 * programmer: Leung Kam Lung, M.S. Thesis, 9/28/93
 *
 */
int
close_loop_get_act3(unsigned int input_lingui_no1, unsigned int input_lingui_no2)
{
    unsigned int rule_node;
    unsigned int lingno1_membership;
    unsigned int lingno2_membership;
    float smallest;
    float smallest2;
    rule_node = 0;
    for (lingno1_membership = 0; lingno1_membership < MEMBERSHIP_NO1;
	 lingno1_membership++)
    {
	smallest = Act_12[input_lingui_no1][lingno1_membership];
	for (lingno2_membership = 0; lingno2_membership < MEMBERSHIP_NO2;
	     lingno2_membership++)
	{
	    /* printf("rule_node =%d\t", rule_node); */
	    smallest2 = Act_12[input_lingui_no2][lingno2_membership];
	    /* printf("smallest =%f\t", smallest); */
	    /* printf("smallest2 =%f\t", smallest2); */
	    if (smallest < smallest2)
	    {
		Act_13[rule_node] = smallest;
		/* printf("Act_13 =%f\n", Act_13[rule_node]); */
	    }
	    else
	    {
		Act_13[rule_node] = smallest2;
		/* printf("Act_13 =%f\n", Act_13[rule_node]); */
	    }
	    rule_node++;
	    if (rule_node > RULE_NODE)
	    {
		printf("ERROR number of rule nodes are greater than the expected value\n");
		return (FALSE);
	    }
	} /* End of lingno2_membership */
    } /* End of lingno1_membership */
    return (TRUE);
} /* End of the program */


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
int
close_loop_get_downup_act4(void)
{
    unsigned int output_lingui_no;
    unsigned int output_term_membership;
    unsigned int rule_index;
    float act4;
    for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
    {
	for (output_term_membership = 0; output_term_membership <
	     Output_membership_array[output_lingui_no];
	     output_term_membership++)
	{
	    act4 = 0.0;
	    for (rule_index = 0; rule_index < RULE_NODE; rule_index++)
	    {
		act4 += Act_13[rule_index] *
		    W_14[output_lingui_no][output_term_membership][rule_index];
	    } /* End rule_index */
	    /*
	     * printf(" act 4 %8f\n", act4); getchar();
	     */
	    if (act4 >= 1)
	    {
		Act_downup_14[output_lingui_no][output_term_membership] = 1.0;
	    }
	    else
	    {
		Act_downup_14[output_lingui_no][output_term_membership] = act4;
	    }
	    if (act4 > MAXFLOAT)
	    {
		printf("activation value of level 4 is bigger than max float\n");
		return (FALSE);
	    }
	} /* End for output_term_membership */
    } /* End for output_lingui_no */
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
/*******************************************************************************
 * This function finds the deviation of each membership function on layer 2.
 * The INPUT_OVERLAP parameter is 2.
 *
 * Input parameter: input_lingui_no
 *
 * Return value: TRUE if all deviation values are calculated,
 *		 FALSE otherwise.
 *
 * Programmer: Leung Kam Lung, 9/30/93, M.S. Thesis.
 */
int
find_input_deviation(unsigned int input_lingui_no)
{
    float closest_mean;
    float current_mean;
    float temp,
        temp_smallest;
    int membership1;
    int membership;
    /* printf("input ling no %d\n", input_lingui_no); */
    for (membership = 0; membership < Input_membership_array[input_lingui_no]; membership++)
    {
	current_mean = Mean_input[input_lingui_no][membership];
	/* printf(" current_mean %f\n", current_mean); */
	closest_mean = MAXFLOAT;
	for (membership1 = 0; membership1 < Input_membership_array[input_lingui_no];
	     membership1++)
	{
	    if (membership != membership1)
	    {
		temp = Mean_input[input_lingui_no][membership1];
		temp_smallest = fabs(temp - current_mean);
		/* printf("temp smallest %f\n", temp_smallest); */
		if (temp_smallest < closest_mean)
		{
		    closest_mean = temp_smallest;
		}
	    }
	} /* End of the for membership1 */
	if ((closest_mean < MINFLOAT) || (closest_mean > MAXFLOAT))
	{
	    return (FALSE);
	}
	/* printf("closest mean %f\n", closest_mean); */
	Input_term_Deviation[input_lingui_no][membership] = (float) (fabs((double) (current_mean -
			closest_mean)) / INPUT_OVERLAP);
	/*
	 * printf("Input term D
	 * %8f\n", Input_term_Deviation[input_lingui_no][membership]);
	 */
    } /* End of the for membership */
    return (TRUE);
} /* End of the function find_deviation */
/*******************************************************************************
 * This function finds the deviation of each membership function on layer 2.
 * The OUTPUT_OVERLAP parameter is 2.
 *
 * Input parameter: output_lingui_no
 *
 * Return value: TRUE if all deviation values are calculated,
 *		 FALSE otherwise.
 *
 * Programmer: Leung Kam Lung, 9/30/93, M.S. Thesis.
 */
int
find_output_deviation(unsigned int output_lingui_no)
{
    float closest_mean;
    float current_mean;
    float temp,
        temp_smallest;
    int membership1;
    int membership;
    for (membership = 0; membership < Output_membership_array[output_lingui_no]; membership++)
    {
	current_mean = Mean_output[output_lingui_no][membership];
	/* printf(" current_mean %f\n", current_mean); */
	closest_mean = MAXFLOAT;
	for (membership1 = 0; membership1 < Output_membership_array[output_lingui_no];
	     membership1++)
	{
	    if (membership != membership1)
	    {
		temp = Mean_output[output_lingui_no][membership1];
		temp_smallest = fabs(temp - current_mean);
		/* printf("temp smallest %f\n", temp_smallest); */
		if (temp_smallest < closest_mean)
		{
		    closest_mean = temp_smallest;
		}
	    }
	} /* End of the for membership1 */
	if ((closest_mean < MINFLOAT) || (closest_mean > MAXFLOAT))
	{
	    return (FALSE);
	}
	/* printf("closest mean %f\n", closest_mean); */
	Output_term_Deviation[output_lingui_no][membership] = (float) (fabs((double) (current_mean -
			closest_mean)) / OUTPUT_OVERLAP);
	/*
	 * printf("Output term D
	 * %8f\n", Output_term_Deviation[output_lingui_no][membership]);
	 */
    } /* End of the for membership */
    return (TRUE);
} /* End of the function find_deviation */


#include <stdio.h>
#include <math.h>
#include <values.h>
#include <conio.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"

/*******************************************************************************
 * This function updates the Mean_input of layer 2 and the width of the
 * membership function.
 *
 * Input parameters: lrate,
 *		     input_index.
 *
 * Output: TRUE if no error, otherwise FALSE
 *
 */
int
error_backpro_layer2(float lrate, int input_index)
{
    unsigned int input_lingui_no,
        membership;
    float temp;
    for (input_lingui_no = 0; input_lingui_no < INPUT_LINGUI_NO; input_lingui_no++)
    {
	for (membership = 0; membership < Input_membership_array[input_lingui_no];
	     membership++)
	{
	    /* clrscr(); */
	    /*
	     * printf("\nMEMBERSHIP %d, INPUT_LINGUI_NO %d\n", membership,
	     * input_lingui_no);
	     */
	    Change_E_respect_a = 0.0;
	    rule_connection(membership, input_lingui_no);
	    /*
	     * printf("lrate %2.5f, index %d, inputx %2.5f\n", lrate,
	     * input_index, Inputx[input_lingui_no][input_index]);
	     */
	    /*
	     * printf("Mean_input %2.5f Deviation %2.5f C_E_respect_a %2.5f
	     * act %2.5f\n", Mean_input[input_lingui_no][membership],
	     * Input_term_Deviation[input_lingui_no][membership],
	     * Change_E_respect_a, (float)
	     * l2_membership(Mean_input[input_lingui_no][membership],
	     * Input_term_Deviation[input_lingui_no][membership],
	     * Inputx[input_lingui_no][input_index]));
	     */
	    /* Mean_input[input_lingui_no][membership] */
	    Mean_input_curr[input_lingui_no][membership] += temp =
		(lrate * Change_E_respect_a *
		 (float) l2_membership(Mean_input[input_lingui_no][membership],
				       Input_term_Deviation[input_lingui_no][membership],
				       Inputx[input_lingui_no][input_index]) *
		 (2 * ((Inputx[input_lingui_no][input_index] - Mean_input[input_lingui_no][membership]) /
		       (float) pow((double) Input_term_Deviation[input_lingui_no][membership], 2)
		      )
		 )
		);
	    /* printf("mean curr %2.5f", temp); */
	    /* Input_term_Deviation[input_lingui_no][membership] */
	    Input_term_Deviation_curr[input_lingui_no][membership] += temp =
		(lrate * Change_E_respect_a *
		 (float) l2_membership(Mean_input[input_lingui_no][membership],
				       Input_term_Deviation[input_lingui_no][membership],
				       Inputx[input_lingui_no][input_index]) *
		 (2 * ((float) pow((double) (Inputx[input_lingui_no][input_index] -
					     Mean_input[input_lingui_no][membership]), 2) /
		       (float) pow((double) Input_term_Deviation[input_lingui_no][membership], 3)
		      )
		 )
		);
	    /* printf("delta deviation %2.5f\n", temp); */
	    /* getchar(); */
	}
    }
    return (TRUE);
}
/*******************************************************************************
 * This function finds all connections of rule nodes that are connected to a
 * given input term membership.
 *
 * Input parameters: input_term_membership,
 *		     input_lingui_no.
 *
 * Output: TRUE if no error, otherwise FALSE
 */
int
rule_connection(unsigned int input_term_membership, unsigned int input_lingui_no)
{
    unsigned int N,
        n,
        first_rule_node,
        next_rule_node;
    /*
     * printf("input_term_membership %d, input_lingui_no %d\n",
     * input_term_membership, input_lingui_no);
     */
    N = (Max_rule_no / (unsigned int) pow((double) MEMBERSHIP_DEMEMSION,
	    (double) (INPUT_LINGUI_NO - input_lingui_no))) - 1;
    first_rule_node = (input_term_membership * (unsigned int) pow((double)
	    MEMBERSHIP_DEMEMSION,
	    (double) (INPUT_LINGUI_NO - input_lingui_no - 1)));
    /* printf("\nFIRST_NODE %d\n", first_rule_node); */
    membership_connection(first_rule_node, input_lingui_no, input_term_membership);
    find_next_rule_node(first_rule_node, input_lingui_no, input_term_membership);
    for (n = 1; n <= N; n++)
    {
	next_rule_node = first_rule_node + (n * (unsigned int) pow((double)
		MEMBERSHIP_DEMEMSION,
		(double) (INPUT_LINGUI_NO - input_lingui_no)));
	/* printf("\nNEXT_NODE %d\n", next_rule_node); */
	membership_connection(next_rule_node, input_lingui_no, input_term_membership);
	find_next_rule_node(next_rule_node, input_lingui_no, input_term_membership);
    }
    /* getchar(); */
    return (TRUE);
}
/*******************************************************************************
 * This function finds all connections of input term memberships that are
 * connected to a given rule node.
 *
 * Input parameters: input_term_membership,
 *		     input_node,
 *		     rule_node.
 *
 * Output: TRUE if no error, otherwise FALSE
 */
int
membership_connection(unsigned int rule_node, unsigned int input_node,
		      unsigned int input_term_membership)
{
    unsigned int i,
        is_minimum,
        input_lingui_no,
        Rule_no,
        membership;
    float minimum,
        temp_minimum;
    is_minimum = TRUE;
    Rule_no = rule_node;
    if (rule_node >= Max_rule_no || rule_node < 0)
    {
	printf("node greater expected value %d", rule_node);
	return (FALSE);
    }
    minimum = Act_12[input_node][input_term_membership];
    /*
     * printf("input_node %d, input_term_membership %d\n", input_node,
     * input_term_membership);
     */
    for (i = INPUT_LINGUI_NO; i >= 1; i--)
    {
	input_lingui_no = INPUT_LINGUI_NO - i;
	membership = (rule_node / Rule_connect[i - 1]);
	/*
	 * printf("input_lingui_no %d, membership %d", input_lingui_no,
	 * membership);
	 */
	temp_minimum = Act_12[input_lingui_no][membership];
	/*
	 * printf("minimum %2.5f, temp_minimum %2.5f\n", minimum,
	 * temp_minimum);
	 */
	if (minimum > temp_minimum)
	{
	    is_minimum = FALSE;
	    /* printf("is_minimum is FALSE\n"); */
	    break;
	}
	rule_node %= Rule_connect[i - 1];
	/* printf("rule_node now %d\n", rule_node); */
    }
    if (is_minimum)
    {
	/*
	 * printf("R_no %d, E_respect_a %2.5f", Rule_no,
	 * Change_E_respect_a);
	 */
	/* printf("E_sig_13 %2.5f", Error_signal_layer3[Rule_no]); */
	Change_E_respect_a += Error_signal_layer3[Rule_no];
	/* printf("Change_E_respect_a %2.5f\n", Change_E_respect_a); */
    }
    return (TRUE);
}
/*******************************************************************************
 * This function finds the next number of rule nodes that are connected to a
 * given input term membership, and the current rule node that is calculated.
 *
 * Input parameters: input_term_membership,
 *		     input_lingui_no,
 *		     current_rule_node.
 *
 * Output: TRUE if no error, otherwise FALSE
 */
int
find_next_rule_node(unsigned int next_node, unsigned int input_lingui_no,
		    unsigned int input_term_membership)
{
    unsigned int loop_node;
    loop_node = (((unsigned int) pow((double) MEMBERSHIP_DEMEMSION,
	    (double) (INPUT_LINGUI_NO - input_lingui_no - 1))) - 1);
    /* printf(" loop node %d\n", loop_node); */
    while (loop_node > 0)
    {
	/* printf("\nNEXT_NODE %d\n", next_node + 1); */
	membership_connection(++next_node, input_lingui_no, input_term_membership);
	loop_node--;
    }
    return (TRUE);
}


#include <stdio.h>
#include <math.h>
#include <values.h>
#include "c:\borlandc\file\thesis.h"
#include "c:\borlandc\file\globalex.h"
/*******************************************************************************
 * This function calculates the error signal for layer 3.
 *
 * Input parameter: void
 *
 * Output: TRUE if no error, otherwise FALSE
 *
 */
int
error_backpro_layer3(void)
{
    unsigned int output_term_membership;
    int output_lingui_no,
        rule_no;
    float temp;
    /* printf("\nERROR SIGNAL LAYER 3\n"); */
    for (rule_no = 0; rule_no < RULE_NODE; rule_no++)
    {
	/* printf("RULE_NO %d\n", rule_no); */
	Error_signal_layer3[rule_no] = 0.0;
	for (output_lingui_no = 0; output_lingui_no < OUTPUT_LINGUI_NO; output_lingui_no++)
	{
	    for (output_term_membership = 0; output_term_membership <
		 Output_membership_array[output_lingui_no]; output_term_membership++)
	    {
		/* printf("op_t_mship %d", output_term_membership); */
		/*
		 * printf("W_14 %2.5f, E_s_14 %2.5f",
		 * W_14[output_lingui_no][output_term_membership][rule_no],
		 * Error_signal_layer4[output_lingui_no][output_term_membership]);
		 */
		if (W_14[output_lingui_no][output_term_membership][rule_no] > 0.0)
		{
		    Error_signal_layer3[rule_no] += temp =
			Error_signal_layer4[output_lingui_no][output_term_membership];
		}
		/*
		 * printf("E_s_13 %2.5f, S_E %2.5f\n", temp,
		 * Error_signal_layer3[rule_no]);
		 */
	    } /* End of output_term_membership */
	} /* End of output_lingui_no */
	/* getchar(); */
    } /* End of rule_no */
    /* getchar(); */
    return (TRUE);
} /* End of error_backpro_layer3 */