Citation
Protocol simulation

Material Information

Title:
Protocol simulation
Creator:
Elliott, John
Publication Date:
Language:
English
Physical Description:
159 leaves ; 29 cm

Subjects

Subjects / Keywords:
Computer network protocols -- Simulation methods ( lcsh )
Computer network protocols -- Simulation methods ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references.
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Electrical Engineering, Department of Computer Science and Engineering.
Statement of Responsibility:
by John Elliott.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
20868337 ( OCLC )
ocm20868337
Classification:
LD1190.E54 1989m .E54 ( lcc )

Full Text

PAGE 1

PROTOCOL SIMULATION
by
John Elliott
B.S., University of Colorado, 1984

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science
Electrical Engineering and Computer Science
1989

PAGE 2

This thesis for the Master of Science degree by John Endecott Elliott has been approved for the Department of Electrical Engineering and Computer Science by Douglas A. Ross. Date

PAGE 3

Elliott, John Endecott (M.S., Electrical Engineering)
Protocol Simulator
Thesis directed by Associate Professor William Wolfe

Communications protocol is the overhead associated with users or processes trying to communicate with other users or processes. Most often protocols are used when the communication is between two remote machines, or nodes, as they are commonly referred to. A node can spuriously send another node a message with no forewarning. Or nodes can go through a handshaking procedure that establishes a relationship before exchanging messages. High and low "levels" refer to the hierarchy of protocol functions. In general, the more a protocol deals with media problems the lower it is, and the more it deals with the user the higher it is. The International Standards Organization (ISO) has a detailed description of how protocol tasks should be divided up into

PAGE 4

associated groups called layers. The ISO defines seven layers of protocol called the Open Systems Interconnect Model (OSI Model). The OSI layers, or "stack", are going to become the protocol standard in the future. Today there are many interpretations and implementations of protocol, but it appears that all manufacturers, including IBM, are migrating toward the OSI model. The objective of this thesis is to create a protocol stack simulator (PSIM) that is flexible enough to handle many different protocols. Network performance analysis can be divided into three areas. The first area is media performance, which has been extensively analyzed by researchers. The second is the interlayer services, which transmit data up and down the protocol stack. The third is the creation of frames that must go onto the media. This Protocol Simulator's (PSIM's) emphasis has been on the last category, but since it simulates

PAGE 5

a network it can be used to study any of these areas. In order to demonstrate PSIM's ability to simulate protocols there are results from simulating Datagrams, Virtual Circuits, Internet Protocol, SNA, Virtual Terminal, TCP, and different backoff algorithms for Ethernet.

PAGE 6

CONTENTS

CHAPTER
I. INTRODUCTION
Problem Statement
The Open Systems Interconnect Model (OSI)
II. PROTOCOL INVESTIGATIONS
MAP
ALOHA
X.21, RS-449, RS-232
X.25
HDLC
802
802.3
802.4
802.5
802.2
LLC of 802.2
DOD TCP/IP
SNA

PAGE 7

III. PAST WORK DONE IN NETWORK SIMULATION
IV. SIMULATOR DESCRIPTION
PSIM Code Guide
Statistical Analysis Considerations
V. SIMULATOR RESULTS
Comparison 1 (datagrams, virtual circuits, and virtual circuits with an IP header)
Comparison 2 (TCP, SNA, and SNA with VT)
Comparison 3 (datagrams and SNA with VT running at a very heavy loading)
Ethernet Backoff Algorithm Test
VI. CONCLUSION
REFERENCES
APPENDIX
A. Comparison 1 Histogram and Output
B. Comparison 2 Histogram and Output
C. Comparison 3 Histogram and Output
D. Ethernet Backoff Algorithm Test
E. Program Listings

PAGE 8

FIGURES

Figure A.1 . . . 110
Figure B.1 . . . 115
Figure C.1 . . . 116

PAGE 9

CHAPTER I
INTRODUCTION

Problem Statement

At first it would seem that protocols must be simple to understand. But due to the wide variety of protocols used today, and their ever expanding features, understanding them can be a lifelong undertaking. There are people who have made careers of being protocol experts. At the "lowest levels" protocols deal with how a message packet gets onto the transmission media. When a packet arrives a node must be able to determine whether it is error free and how to resolve errors. Due to transmission delays packets may arrive out of order. A node often must relay a packet onto another network. At "higher levels" protocols establish sessions, do data conversion, allow users to issue commands to other processes, etc.

PAGE 10

In determining the performance of a network three issues must be considered. The first is the bandwidth of the media. The second is the proliferation of data due to protocol overhead. And the third is the delay associated with the protocol layers. A lot of simulation work has been done on analyzing the first issue. The second issue of comparing the performance of protocol layers has never been simulated to my knowledge. The delays of inter-layer communication have been analyzed. Simulation of all of these performance issues in one simulator has probably never been done. Thus PSIM's goal is to allow a user to define new media and higher level protocols and simulate their net effect.

PAGE 11

The Open Systems Interconnect Model

ISO (International Standards Organization) is the prominent protocol organization today (17). They are dictating what protocols the world will be using in the future. For example, DEC uses Decnet's DDCMP (7) today for their networks, but they are making the transition to ISO's 802 specifications. IBM is the only company that has openly resisted the OSI, but even they are subscribing to the OSI model now. In this explanation I am going to be introducing protocol jargon in context, as an alternative to defining each term as I progress. The terms will be repeated, with some being synonyms, and hopefully their meanings will be clear by the end of this section.

PAGE 12

The basic concept of OSI is to break protocol down into specified "layers" that are defined in a hierarchical fashion. The idea of breaking protocol up into functional groups is not unique to the OSI model. What ISO is trying to do is to standardize protocols, by defining what each protocol layer is, and what protocols are allowed to be used for that layer. For example at layer 1, the "physical" ISO layer, IEEE for LANs allows three protocols: CSMA/CD, the carrier sense, collision detect method that Ethernet uses; Token Ring, which IBM uses; and Token Bus, which MAP, the GM originated protocol, uses. All three of these level 1 (layer 1) protocols deal with how a PDU (protocol data unit, i.e., a completely framed message) is able to get onto the media (the cable for these specifications).

PAGE 13

Although the OSI definition outlines what jobs each layer does, different interpretations of the model are noticeable. The physical layer is the most agreed upon. In studying higher level protocols you will notice disagreements, even as soon as level 2. Layer 2, alternately called level 2, is the "data link layer". It is responsible for error control, synchronizing the incoming frames, and defining their type, such as whether they are control frames or information frames. If a received frame has an error it is responsible for at least noting the error. It can also deal with the overhead of establishing what is called a "virtual circuit", meaning that two processes decide they are going to talk to each other by first checking with each other that both are free. Usually layer 2 only deals with framing the PDU with synchronizing bits, control fields, CRC's, etc. Layer 2 can be thought of as what

PAGE 14

you would do first if you wanted to send or transmit on a free media. Layer 3 is called the "network layer". It deals with network routing, which entails issues such as breaking messages up into smaller units so that the chance of errors is reduced. A protocol named X.25 specifies the establishing of a "virtual circuit" as being in this level. This is where I believe "virtual circuit" protocol belongs, but IEEE 802.2, a level 2 protocol, defines "virtual circuits". Even at levels 2 and 3 the interpretations get more vague. But it would be safe to say that most agree that level 2 is where the framing of a PDU is read and done, and level 3 is where the human-like features start, such as "hi, I want to talk to you", etc. Layer 4, the "transport layer", deals with internetwork communication problems such as routing. However one can find that session

PAGE 15

control can also be found at this layer. Layer 5, the "session layer", is where ideally all session establishment, control, and termination are taken care of. Layer 6, the "presentation layer", has to do with formatting of data, such as translating and interpreting it. For example a presentation layer function could be to note that a certain control sequence means do a line feed. Layer 7, the "application layer", contains protocols such as VT, FTP, electronic mail, and FTAM, or in other words user to user communication.


PAGE 17

CHAPTER II
PROTOCOL INVESTIGATIONS

MAP

General Motors, after conducting a study, concluded that the incompatibility of factory floor protocols resulted in a great increase in their cost for automation. As in many plants today, they found there was a tendency to create "islands" of automation since integration of ad hoc machines was too time consuming and expensive. GM decided to avoid this problem in the future by driving the creation of a new OSI based protocol named Manufacturing Automation Protocol (MAP). The objective of MAP is to define a broadband cable based protocol that covers all seven layers of the OSI model (1), thus allowing a company to install the cable in their factory and then be able to buy any vendor's MAP

PAGE 18

hardware or software and be assured of compatibility. Boeing has come up with their own version of MAP named TOP (Technical Office Protocol). It is intended to be the same as MAP in the upper layers, but is based, in the lower levels, on an Ethernet network (802.3 is virtually the same) instead of MAP's broadband scheme. An Ethernet network entails just the OSI layers one, two and three. Mostly the upper layers (commonly TCP/IP) from one vendor's network are not compatible with the other's. TOP would use the Ethernet media but have the more precisely defined upper layers of MAP. This approach has the advantage of allowing Boeing to install Ethernet for now, which has a large support base today, and then switch from Ethernet to TOP when and where they wish, instead of committing to MAP's broadband cabling and then having to wait for the protocol to mature enough to be practical to implement. The disadvantage of using TOP instead of MAP is that it has much less noise immunity, as

PAGE 19

discussed later. The question in many people's minds is when MAP is going to become a viable reality. Conceptually the idea of having an industry standard protocol sounds great. Every machine could then become compatible with every other machine. The problem with this idea is that it requires cooperation, and a great deal of planning and effort to make it happen. The amount of detail that must be defined is enormous, and then coding these layers is a large task. As of today, MAP has a long way to go before this job is complete. MAP's objective is to specify exact protocols for each of the seven layers of the OSI model. The result would then enable a multi-vendor network to exist where every layer of one vendor's product could communicate with another's same layer. Physically MAP is an IEEE

PAGE 20

802.4, 10 Mbps, 3 full duplex data channel broadband network which uses a token bus access method. (The CATV cable used can support 60 channels total.) The rest of the channels could be used for video and voice. In contrast to 802.3, or Ethernet, MAP's broadband cable network is designed to extend up to a maximum of ten miles, whereas Ethernet's maximum span is 4900 ft. The cable used in MAP is the same as CATV, which is a well established technology. Some of the advantages of MAP's cabling are that it was designed to run under extreme environments (cable TV runs outdoors, through tree branches, etc., all seasons of the year). Connectors, splitters, and amplifiers are designed to run under almost any condition. Broadband runs at high frequencies (up to 350 MHz) which are more immune to noise, because frequency modulation (20) can be used and most noise in factories is due to

PAGE 21

the low frequencies associated with electric motors, or it is 60 Hz. Another advantage of broadband CATV cable is that grounding may be done at multiple points anywhere along the cable path, due to the frequency demodulation technique of reading the data versus the level sensing of pulse code modulation used by 802.3 (0 V = logic high, -2 V = logic low, Manchester encoded) (12). MAP is based on token passing, which many feel is an advantage over the CSMA/CD technique used by Ethernet. Token passing is a scheme where the right to talk is passed from one node to the next sequentially so that every node has a chance to talk. The rate that the right (the token) is passed is designed into the network controlling software. CSMA/CD is a technique where a node that wants to talk senses whether the line is being used. If it is not being used, the node begins to talk on the line. If at the same time a second node starts to talk the line

PAGE 22

becomes jammed by two messages trying to get through at the same time. Both nodes can sense this problem because the voltage on the line doubles over normal. The action the nodes take is to "back off" a random amount of time and then attempt to retransmit their messages. Due to the random backoff time, one node will probably start retransmitting earlier than the other. Thus if the first node to retransmit has not finished sending its message before the second's backoff time has expired, the second node will be able to sense that the line is being used and therefore wait for the first node to finish. If the two nodes try to retransmit at the same time the backoff process occurs again; and due to it being a random time, sooner or later one of the nodes will get the line first. In actuality, since the random backoff time is very long compared to the propagation delay of the network, the chance that two nodes that have backed off will again have

PAGE 23

their messages collide is small. The reason for believing that token bus is an advantage over CSMA/CD is that if a message is urgent one will be guaranteed of being able to transmit it within a known time. Ethernet cannot guarantee when the urgent process will be able to get the line.

PAGE 24

Some MAP jargon is defined as follows:

GATEWAY: A level 1 through 7 conversion of some other protocol to MAP's, i.e., a conversion from another protocol to a full implementation of MAP.
RELAY: Level 1-3 conversion so that another protocol's data link and physical layers can communicate with MAP's corresponding layers.
BRIDGE: Level 1 and 2 conversion. For example a conversion from MAP to RS-232 would be a bridge since the media and framing would change, but nothing else would.
MAC: The lower half of OSI layer 2, which does the final framing and algorithms that interface the node to the media.
HEAD END REMODULATOR: Basically a repeater. It cleans up the signal received and transmits it again.
MODEM: This modulates the data onto the CATV cable.

PAGE 25

CASE: The application which interfaces to the user.
FTAM: File transfer access and management is an application which allows the transfer of files to other networks, programs, and virtual terminal services.
ROUTER: Connects networks together. This enables transmission across multiple networks. As you may guess a router must have a global address versus just a network address.
CATANET: A group of connected networks.
Mini-MAP, MAP enhanced performance architecture (EPA), PROWAY: A carrier band subset of MAP.
EPA: Enhanced Performance Architecture is a bypass of layers 3 through 6. It is designed to be real-time and eliminate the problems of delay associated with token passing.
CLNS: The protocol used, by MAP, in the network layer.

PAGE 26

A Full MAP Node Conformance Specification

1 - Physical: IEEE 802.4, 10 Mbps, 2 channel
2 - Data Link: IEEE 802.1 sec 5 draft D; 802.2 draft E
3 - Network: ISO/IS/8473; ISO/DIS/8348
4 - Transport: ISO/IS/8072 class 4; 8073
5 - Session: ISO/IS/8326; 8327
6 - Presentation: ASN.1
7 - Application: ISO/TC97/SC21/N1662 CASE; ISO/DP/8561 SC16/N1669, N1670, N1671, N1672 FILE

PAGE 27

MAP will not be implemented in PSIM for several reasons. Firstly, MAP is extremely complex. It has taken many years for its definition to come as far as it has, and it is still far from complete. It would be an enormous task to try to implement a full MAP simulation on PSIM. Secondly, MAP documentation is hard to find and the specification is constantly changing. Thirdly, there are other protocols that are in popular use today that would be of more practical and educational benefit to simulate. However investigating MAP was of interest since it is the first protocol that is attempting to define all seven layers. Since PSIM can simulate all seven layers MAP was a natural choice to investigate as a potential protocol to simulate. But as stated above I found that it was a very large specification.

PAGE 28

ALOHA

Aloha was a pioneering contention protocol developed, as you can guess, in Hawaii. Alohanet was the resulting network. It uses radio as the media, with all stations on the same frequency. "Pure" Aloha can be thought of as "anybody can talk at any time, with ACKs". After a station sends a message it listens for an acknowledgement. If it does not receive one it retransmits the message. The receiving station determines whether to positively acknowledge (ACK) or negatively acknowledge (NACK) by checking the frame check sequence (FCS), or as it is alternately called, the CRC. The FCS is a modulo result of the frame sum (13). "Slotted" Aloha improves the throughput over "pure" Aloha. The large problem with "pure" Aloha is that as traffic becomes heavier there are more and more collisions. Ultimately, as

PAGE 29

traffic increases, everyone is talking just like a human party, except that since all of the stations are tuned to the same frequency no one can understand anything. "Slotted" Aloha tries to aid the throughput of medium traffic conditions, i.e., those where the collisions are starting to degrade the performance of the network. It works by having all stations synchronized to a clock. Stations can only begin transmission at the start of a time block (slot). Thus contending stations will destroy the slot but not overlap into the next slot. The probability of the next slot being free is then better since two contenders are no longer contending. In other words, instead of having every station talk whenever it wants, and possibly colliding with maybe the very tail end of another's message, the stations at least start colliding at the start of their messages, therefore maximizing the overlap of the interfering transmissions and leaving the network free again sooner.
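The standard throughput results for these two schemes, quoted here for context rather than derived in this thesis, make the improvement concrete. With offered load G (attempted frames per frame time), the expected throughputs are

    S_{\text{pure}} = G\,e^{-2G}, \qquad S_{\text{slotted}} = G\,e^{-G},

so slotted Aloha peaks at about 0.37 of channel capacity (at G = 1), roughly double pure Aloha's peak of about 0.18 (at G = 1/2).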

PAGE 30

X.21, RS-449, RS-232

RS-232 is the most common physical layer protocol in the world. It has transmit, receive, and DCE control lines, and is specified for less than fifteen meters. It can run at up to 20 kbps. In practice longer lengths of cabling can be run with slower data rates. RS-449 has two types of physical interfaces: balanced RS-422, and unbalanced RS-423. The drivers for RS-422 have two leads coming out, allowing a differential signal to be transmitted. When noise enters the media over which the signals are moving it equally affects both lines. The receiver looks for the difference between the two lines in order to determine the digital level being transmitted at the moment. Consequently RS-422 is very noise immune. It is specified at 100 kbps at 1200

PAGE 31

meters, and 10 Mbps at 12 meters. Obviously this is a huge improvement over RS-232. However it also costs more. RS-449 has more DCE control signal lines for RS-422 to carry. RS-423 is less expensive than RS-422 because its drivers are not differentially driven. They rely on a common ground. How this ground is routed is up to the vendor. In any case noise on the media can much more easily affect the transmission integrity than on the RS-422 media. RS-232 can run at up to 3 kbps at 1000 meters, or 300 kbps at 10 meters. RS-449 has not really caught on because there are other high performance competing choices, and it is not as standard as RS-232. People thought RS-232 would die. But if one wants an inexpensive, low performance link it is one of the best choices. RS-449 will probably die out because of its cost and lower performance compared to even newer standards, such as X.21.

PAGE 32

X.21 uses an inexpensive 15 pin connector, which is an improvement over RS-449's 37 pin connector. The transmit and receive lines can multiplex information and control, thus allowing more control flexibility, like RS-449, without all the extra lines. X.21 also has balanced and unbalanced modes. X.25 specifies the use of X.21 (16). The whole world has accepted X.21. Therefore it will probably increase in popularity, and start replacing RS-232 installations.

PAGE 33

X.25

X.25 is rapidly increasing in popularity in Great Britain, France, Canada, Japan, and the US. It was developed to solve international networking incompatibilities. It concerns the protocol layers 1, 2, 3 and 4 between DTE's and DCE's. It does not actually define level one or two. It states that one can use either X.21 or X.21 bis (RS-232) for level one, and a subset of HDLC for level 2, named LAP-B. LAP-B is the asynchronous balanced mode of HDLC. LAP-B is also used by the LLC of 802.2. Protocol can get very confusing as to who is using what portion of what, and what of their own design. There are three basic types of packets in X.25. The first, Call Request, sets up a virtual circuit. The initiating DTE sends the Call Request to its own DCE, which then sends it over the network to the destination DCE, which then

PAGE 34

passes it to the destination DTE. The Call Request header has two fields, Group and Channel, which define the virtual circuit that will be used. It also has the calling and called addresses. A short amount of data can follow, which is broken up into two fields: 1) the Facilities field, which dictates special features of the connection; 2) User Data, which is customized data understood by both of the DTE's as defined by the user applications (2). The second type of packet, the Control Packets, do the rest of the control features of X.25. They are distinct from the Call Request packet in that they have a common format that is different from the Call Request format. They contain the virtual circuit group and channel numbers and then their Type field, which identifies them as one of the following: Call Accepted, Clear Request, Clear Confirmation,

PAGE 35

Interrupt, Interrupt Confirmation, Receive Ready, Receive Not Ready, Reset Confirmation, Restart Request, Restart Confirmation, and Diagnostic. The third type of packet is the Data Packet, which has the Group and Channel numbers as do the others. The D bit is used as an ack by the other DTE so that it can ack the last data packet and send data back at the same time. The More bit tells the receiver that more data packets are coming. The Sequence Fields indicate the number of data packets that this unit has received or sent. X.25 also has the ability to send Datagrams, or Fast Selects as they are alternately called. An X.25 datagram/Fast Select is almost the same as a Call Request with a User Data field that has been extended to 128 bytes.
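As a compact way to picture the common format shared by these packets, the C sketch below lays out the group, channel, and type fields. The field names, the enum, and the use of whole bytes (the real format packs the group into four bits) are illustrative assumptions, not a checked X.25 implementation.

    #include <stdint.h>

    enum x25_packet_type {            /* a few of the packet types named above */
        X25_CALL_REQUEST,
        X25_CALL_ACCEPTED,
        X25_CLEAR_REQUEST,
        X25_CLEAR_CONFIRMATION,
        X25_INTERRUPT,
        X25_RECEIVE_READY,
        X25_RECEIVE_NOT_READY,
        X25_RESET_CONFIRMATION,
        X25_RESTART_REQUEST,
        X25_RESTART_CONFIRMATION,
        X25_DIAGNOSTIC,
        X25_DATA
    };

    struct x25_header {
        uint8_t group;                /* logical channel group                 */
        uint8_t channel;              /* logical channel number                */
        enum x25_packet_type type;    /* identifies the packet                 */
    };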

PAGE 36

HDLC

HDLC is an acronym for High Level Data Link Control. A portion of HDLC is used in both X.25 and LLC. LAP-D is a modification of the LAP-B part of HDLC that is used by ISDN (14). IBM's SNA uses SDLC, which is the predecessor to HDLC, and is almost identical. Thus understanding HDLC helps one understand many common protocols. HDLC has three modes: unbalanced normal response (NRM), asynchronous balanced (ABM) or (LAP-B), and asynchronous response mode (ARM). NRM is a master slave (primary, secondary) relationship. The primary polls the secondary to see if it wants to talk, but the secondary cannot initiate conversation on its own. ABM allows either end to initiate transmission. ARM is balanced but there is a primary station that is designated as the controller.

PAGE 37

HDLC has a specific frame structure of the following fields: 8 bit flag, address, 1 or 2 byte control, data, FCS, and a trailing 8 bit flag (21). The flags are used to synchronize the receiver. Their bit pattern is 01111110. When the receiver sees a flag he knows that a frame address is coming next. If it is his address, he must retrieve the control and information data, otherwise he can ignore the frame. There are three types of control fields, which indicate the frame type: information, supervisory, and unnumbered. Supervisory frames have the following functions: reject the frame indicated (REJ), receiver ready (RR), receiver not ready (RNR), and a few more. The information frames carry the user data. Unnumbered frames set up call establishment and disconnect, using commands such as SABME and SNRM for the balanced and normal response modes, and DISC for disconnect. Sequence numbers are used

PAGE 38

in information and supervisory frames to indicate what frame each station has received and sent (15). These numbers are modulo 8 or 128, meaning that a station counts the number of frames it has sent and received in modulo 8 or 128. Thus the maximum that a station can lag the other in processing incoming frames is the modulo number. Otherwise the receiving station could still be receiving message number one while the sender has already sent message number seven, using modulo 8. When the sender sends the second number one message the receiver will not know which number one was the first. Due to the delay in the packet switching they could be out of order.
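A small C sketch of the frame layout and the modulo sequence-number rule just described is given below. The field names, the fixed-size data buffer, and the window check are illustrative assumptions rather than a real HDLC implementation.

    /* Simplified HDLC frame as described above. */
    struct hdlc_frame {
        unsigned char flag_open;     /* 01111110 synchronizing flag          */
        unsigned char address;       /* station address                      */
        unsigned char control[2];    /* 1 or 2 byte control field            */
        unsigned char data[256];     /* information field (size assumed)     */
        unsigned short fcs;          /* frame check sequence (CRC)           */
        unsigned char flag_close;    /* trailing 01111110 flag               */
    };

    /* Sequence numbers are counted modulo 8 (or 128); a sender may not run
       further ahead of the frames acknowledged by the receiver than the
       modulo allows, or two frames with the same number become ambiguous. */
    int may_send(int next_to_send, int last_acked, int modulo)
    {
        int outstanding = (next_to_send - last_acked + modulo) % modulo;
        return outstanding < modulo - 1;    /* window of modulo - 1 frames */
    }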

PAGE 39

802

The following is a summary of the 802 LAN protocols, with emphasis on how they relate to each other. The 802 protocols describe layers 1 and 2, and maybe layer 3 depending on whom you consult. Layer 2 is divided into two sub-layers: the Medium Access Control (MAC) and the Logical Link Control (LLC). The IEEE diagrammatically layers the 802 protocols in their 802.4 publication as follows:

    802.2
    802.3   802.4   802.5

The MAC has three access control schemes: CSMA/CD, token bus, and token ring. Examples of their usage: Ethernet uses CSMA/CD; IBM uses token ring; MAP, the manufacturing protocol started by GM, uses token bus.

PAGE 40

802.3

Before transmitting on the media, the MAU determines whether some other device is already using the media. If the MAU detects a carrier is already present, the station then remains silent for a random amount of time before trying to transmit again. The amount of time that the station chooses to remain silent depends upon the "backoff" algorithm that it uses. The algorithm that 802.3 (Ethernet also) uses is encoded in a routine named "backoff()", which is as follows:
1) Let K = the smaller of 10 or the number of backoffs this pdu has been through so far.
2) Generate a random number R in the range 0 to (2 to the K) - 1.
3) Wait R times the slot time of the system, the slot time being defined as 512 bit times.
PSIM uses this algorithm and lets one change the slot time by changing the defined constant "SLOT".
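A minimal C sketch of this truncated binary exponential backoff is shown below. The routine name backoff(), the SLOT constant, and the use of rand() are assumptions for illustration; PSIM's actual routine may differ in detail.

    #include <stdlib.h>

    #define SLOT 51.2e-6   /* one slot time in seconds: 512 bit times at 10 Mbps */

    /* attempts = number of backoffs this pdu has been through so far */
    double backoff(int attempts)
    {
        int k = attempts < 10 ? attempts : 10;   /* cap the exponent at 10   */
        long r = rand() % (1L << k);             /* random R in 0 .. 2^k - 1 */
        return r * SLOT;                         /* wait R slot times        */
    }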

PAGE 41

Sometimes two stations start transmission at almost identical times and the propagation delay of the media is greater than the time difference between the start transmission times. Therefore neither station senses a carrier. The result is a "collision". Both protocol data units are destroyed. When a station detects that it is colliding, it transmits a "jamming" signal, which is an arbitrary bit pattern, preferably ones so that both jams add, for a sufficient amount of time. Ethernet jams for 9 microseconds. The stations then back off as they did when they sensed a carrier.

Specifications for 802.3 are as follows:
50 ohms
Up to 100 transceivers
Up to 1024 stations
15 pin D-Series connectors
One ground only, if needed
Vav = -1.025 V

PAGE 42

Waveform is 50% 10 MHz square, 0.7 Vp-p
Rise/fall time is 25 ns +/- 5 ns
Maximum packet size is 1526 bytes
Minimum packet size is 72 bytes (8 preamble + 14 header + 46 data + 4 CRC)
Packet format: preamble - dest addr - source addr - type - data - CRC
Preamble is 64 bits of 101010...1011
Destination address = 48 bits
Source address = 48 bits
Type field identifies the higher level protocol
Backoff algorithm: the backoff time is 0 to ((2 exp n) - 1) time units, where 0 < n <= 10, "n" is the attempt number, and a time unit is 51.2 us
Channel encoding is Manchester: for the first half of the bit time the line is the complement of the bit, then in the middle of the bit time the

PAGE 43

line is the positive logic of the bit (positive (1) is 0 V, negative is -2 V).
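The Manchester rule described above can be stated in a couple of lines of C; the two-element half-bit array is simply an illustrative convention.

    /* Each bit occupies two half-bit times: the complement of the bit first,
       then the positive logic of the bit itself.                            */
    void manchester_encode_bit(int bit, int half[2])
    {
        half[0] = !bit;   /* first half of the bit time                      */
        half[1] = bit;    /* second half                                     */
    }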

PAGE 44

802.4

The bus forms a logical ring. The stations know the addresses of the stations before and after them on this logical ring. A token is passed from station to station just as it would be on a ring. Non-token possessing stations may also talk on the bus, but only if they are polled or requested to do so by a token possessing station. Error recovery is interesting. If a station has the token and senses some other unexpected station transmitting, it immediately deletes its token and relinquishes control. If every station does the same thing then there is no token. After a timeout, stations start a "claim-token" process where they try to say "I am going to create the token and use it." If there are "claim-token" contentions, backoff algorithms finally allow some station to get the token.

PAGE 45

If a station wants to pass the token to its successor and the successor does not exist or is dead, recovery is done by having the token passing station wait for bus activity until a timeout occurs, at which time it tries again to pass the token. If that fails the station sends out a "who-follows" packet, which asks "will the successor of my successor please take the token".

PAGE 46

802.5

One token circulates the ring. If a station wants to seize the token, it sets a busy bit in the token and lets the token continue to circulate. There is a priority field in the token which allows other stations to reserve the token for themselves next time. If a station wants to reserve a future token, it checks the priority field of the busy token as it passes by (11). If the field is set to a priority less than its own priority, it can increase the priority field to its own level. Thus when the token is released by the present possessor, it can only be taken by a station with a priority at the level of the priority field. This eliminates priority based on which station happens to be next in line logically after the station that has the token, and allows stations to have different access rights.
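The reservation rule is simple enough to sketch in C; the variable names are assumptions made for illustration only.

    /* A station raises the reservation (priority) field of a passing busy
       token only if its own priority is higher; the next free token can
       then be taken only at that priority level.                          */
    void maybe_reserve(int *priority_field, int my_priority)
    {
        if (my_priority > *priority_field)
            *priority_field = my_priority;
    }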

PAGE 47

802.2

There are two levels within the 802 data link layer: the MAC and the LLC. The MAC can be thought of as a layer itself, interfacing to the physical and LLC layers. It communicates with these layers via "primitives". To the LLC it can send: 1) "TransmitStatus", which returns the result of sending the last frame that the LLC requested to have transmitted; 2) "ReceiveFrame", which is the frame that the MAC just received. The MAC can receive from the LLC: 1) "ReceiveStatus", which indicates whether the LLC got the frame correctly; 2) "TransmitFrame", which is a frame to transmit on down to the physical layer (8). The LLC, which resides above the MAC, has two classes and two types of services, which are a source of confusion since their definitions are almost redundant. Class 1 refers to a service

PAGE 48

that has only type 1 operation. Class 2 refers to a service that has both type 1 and type 2 operations. Type 1 operation is connectionless, with no acknowledgements, and no error recovery. Type 2 operation establishes a connection (data link), which has acknowledgements, and error recovery. An l_sdu (link service data unit) has four fields: DSAP, SSAP, Control, Information. The SAP addresses are seven bits long with one preceding bit, which indicates whether the address is a group or individual address in the case of a DSAP, or whether the l_sdu is a command or response for the SSAP. The control field indicates what type of transfer this l_sdu is: information, supervisory, or unnumbered. These different types of transfers have many details that could be explained, but in summary: the information type of transfer mostly transfers

PAGE 49

information; the supervisory type is used for acknowledgements, and control functions such as RR (receiver ready), etc.; and the unnumbered is used for connectionless data transfer, and connection set up commands.

PAGE 50

The LLC of 802.2

The LLC is the upper half of the data link layer as defined by 802. It resides above the MAC layer, and provides connectionless, connection-oriented, and multiplexed functions. An LLC PDU frame has four fields (10):
1) a byte destination addr
2) a byte source addr
3) a one or two byte control field
4) and a data field

Address format: 7 bits are the actual address; 1 bit determines whether individual (0) or group (1).

There are three categories of LLC services:
*unacknowledged connectionless (class 1)
*acknowledged connectionless (class 1)
*connection-oriented (class 2)

The IEEE defines this a little differently: there are two types, 1 and 2.

PAGE 51

type 1: no ack, connectionless
type 2: connection-oriented, logical point to point, acked, numbered 0-128 (IEEE p. 41)
class 1: uses type 1
class 2: uses type 1 OR type 2 (can be intermixed)

For each category there are service primitives. Unacknowledged connectionless only has two services:
L_DATA.request(loc_addr, rem_addr, l_sdu, service_class)
L_DATA.indication(same)
The l_sdu parameter specifies the service data unit, i.e., the block number of this piece of data. The service class specifies the priority, which can be implemented using token ring or bus but not with CSMA/CD. Connection-oriented has many services:
L_DATA_CONNECT.request or .indication (loc_addr, rem_addr, l_sdu)

PAGE 52

L_DATA_CONNECT.confirm(loc_addr, rem_addr, status), which acks the above
L_CONNECT.request or .indication or .confirm
L_DISCONNECT. same
L_RESET. same
L_CONNECTION_FLOWCONTROL.request or .indication

Acknowledged connectionless service has:
L_DATA_ACK.request or .indication
L_DATA_ACK_STATUS.indication
L_REPLY.request or .indication
L_REPLY_ACK_STATUS.indication (for the user)
L_REPLY_UPDATE.request
L_REPLY_UPDATE_STATUS.indication

PAGE 53

DOD TCP/IP

Due to the fact that the OSI layers were not yet fully defined, the DOD designed their own protocol, often referred to as DPA (Department of Defense Protocol Architecture). There are four layers:
1) Network Access: protocols for transmission over a network.
2) Internet: which allows internetwork communication using IP. As one can guess, internetwork communication is very important to the DOD.
3) Host-to-Host: which allows processes on different hosts to communicate with the standards TCP and UDP.
4) Process-Application: which is for resource sharing. The standards are FTP, SMTP, and TELNET.

PAGE 54

TCP is a connection-oriented protocol. The common DOD alternative is UDP (datagrams), which are connectionless. A connection implies that both ports have agreed to talk to each other; that they have established an agreed upon protocol. In contrast, connectionless service implies that one port just sends a message without warning to the other port. To get an idea of the type of services the host-host layer does, listed below are some TCP primitives:
Open: unspecified-passive, full-passive, active
Close:
Send:
Receive:
Abort: close the connection abruptly
Status: of the connection
Error checking primitives

PAGE 55

A TCP header has the following fields (6):
source + destination port
sequence + ack numbers
TCP header length
flags
window: how many more bytes the sender is willing to receive
checksum
urgent pointer: implies this unit is urgent
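A simplified C sketch of these fields is shown below. The integer widths follow the standard TCP header layout, but the flat types (the real header packs the length and flags into shared bits) and the omission of options are assumptions made for readability.

    #include <stdint.h>

    struct tcp_header {
        uint16_t source_port;
        uint16_t dest_port;
        uint32_t sequence_number;
        uint32_t ack_number;
        uint8_t  header_length;    /* length of the TCP header               */
        uint8_t  flags;            /* SYN, ACK, FIN, ...                     */
        uint16_t window;           /* bytes the sender is willing to receive */
        uint16_t checksum;
        uint16_t urgent_pointer;   /* marks this unit as urgent              */
    };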

PAGE 56

UDP is a datagram protocol for efficient but unreliable transfers. It is efficient in that no handshaking-like transfers occur. The user's message is framed and sent to the receiver without prior call setup. It is unreliable in that if there are errors, such as the receiver was too busy to receive the message, the sender has no way of knowing that the message was not properly received. UDP datagram primitives are simple:
Send: As in any datagram service, neither a virtual circuit nor error control is accommodated. If there are any control features they are added on ad hoc by the vendor.
A UDP header has the following fields:
version
header length
type of service
total length
datagram number

PAGE 57

fragment offset
time to live: a counter for when to destroy the datagram
protocol (layer 4)
source + destination port
length + checksum
Both of these headers are followed by the data. The IP header comes before the host-host header and information. Whatever network access protocol is being used encapsulates the whole frame. ICMP is used for internet error control. Its header is the same as UDP's. It is followed by ICMP data. Below the above protocols there are address resolution protocols named ARP and RARP. Combined they convert between IP addresses and the physical node addresses on the LAN.

PAGE 58

SNA

SNA stands for Systems Network Architecture. It defines a set of protocols. IBM wanted to define a full set of protocols in 1974, before ISO had resolved much of their OSI model. Thus SNA was defined, and it has since become the most popular network protocol in the world. End users access the network through a logical unit (LU). Along with logical units, on the network, there are physical units (PU's), which access hardware such as storage, and there are system service control points (SSCP's), which are local controllers for the subareas. The subareas are groups of network nodes which are connected to the rest of the network via groups of cabling, called transmission groups (4). The lowest level of SNA protocol is the data link. SNA uses SDLC, which is almost the

PAGE 59

same as HDLC, which was explained earlier in this thesis. There are three types of SDLC-HDLC frames: information, supervisory, and unnumbered. SNA sessions are usually connection oriented, therefore all of these types of frames are almost always used in an SNA session. IBM's media access for wide area networks (WANs) is SDLC loop. Here a primary station polls its subarea nodes to see if they want to talk. IBM's media access for LANs is token ring as defined by 802.5. An important field of an 802.5 frame is the token. A station that wants the token sets the token bit to busy so that other stations realize that the token is in use. Frames circulate around the ring. When they get back to the sender, the sender makes the token bit indicate that the token is free again. Thus the token is passed on to the next node, allowing it a chance to gain access to the ring (3).

PAGE 60

In the sublayer above the MAC, IBM uses LLC (802.2). The LLC framing is part of the MAC information field. It contains the destination and source SAP addresses, and control. The SAP addresses are used to define which port of a node is the address we are referring to. Thus the MAC address gets a frame to the node and the LLC address gets it to the correct process (9). The next layer up is path control, which takes care of routing. Every SNA network address has two parts, the subarea and element addresses, which are carried in the transmission header, which is added to the frame by path control (5). The transmission header can have one of five formats, ranging in length from two to twenty-six bytes. They are named FID one through four and FIDF, and vary depending upon the distance the communication is over, and the types of nodes that are talking. Sublayers for path control are: virtual route control, explicit

PAGE 61

route control, and transmission group control. Virtual route control takes care of the pacing, multiplexing, and prioritizing of basic transmission units (BTU's) over the virtual route from one network access unit (NAU) to another. The explicit route control function takes care of resolving the actual physical path that a BTU will take over the network. Transmission group control takes care of multiplexing the BTU's onto the physical transmission cables, which were referred to earlier as transmission groups. Above path control is transmission control, which has two major sublayers: the connection point manager (CPMGR) and the session control (SC). The CPMGR creates the request/response header, a 3 byte field, which determines some of the flow control, acknowledging, and chaining of BIU's (10). The session control does the call setup, maintenance, and disconnect functions.

PAGE 62

Above the transmission control layer is data flow control, which takes care of flow control from a user's point of view, such as what type of acknowledgements are desired and whether the communication will be half or full duplex, and it brackets groups of chained messages together. The top layer of SNA is function management, which takes care of services such as format translation, data compression and compaction, and network management functions.

PAGE 63

CHAPTER III
PAST WORK DONE IN NETWORK SIMULATION

There has been a lot of work done on mathematical network analysis. Much less has been done in network simulation. The mathematical analyses mostly deal with media access, studying delays in protocol data units as they try to get onto the media. The results generally compare a few media. The Aloha protocol is the most popular area of study. Aloha is a "free for all" protocol where stations know that their data was successfully transmitted only after the receiving station has successfully acknowledged it. A typical study compares Aloha with some other similar protocols such as slotted

PAGE 64

Aloha. Another popular theme is to compare CSMA, CSMA/CD, and Token Ring protocols. I found very little written about network simulation. My best sources were work done by MacDougall, of Apple Computers, and by Digital Equipment Corp. MacDougall (MIT Press 1987) explains his simulator "smpl". At the end of the book he gives an example of an Ethernet simulation. Events are created by a Poisson process. His media access routines are detailed right down to inter frame gap time and propagation time considerations. The output of the simulator is statistics such as utilization, delay, etc. I closely followed his output in my simulator, PSIM. MacDougall gave me the idea to not use a user interface, since he was very effective in presenting his simulator without one. Most other simulators that I heard of or was able to use, such as CACI, GPSS, Simscript and Micro Saint, are all cumbersome due to their

PAGE 65

user interfaces. At first the user interface makes learning how to use the simulator easier. But ultimately one wants to know how the simulator works, and how to modify it, which requires learning the code, which is often not available anyway. There is usually an element of doubt in the simulator that creeps in eventually too, which makes one want to see the code. When starting on the architecture of the thesis code I thought that I had an original idea of making the simulator act like an operating system with each task being a process. I quickly learned that Digital Equipment Corporation had been doing this internally for years. I was able to set up an appointment with one of their Ph.D.'s who had been involved in the writing of several network simulators. He was of great help in getting me resolved on what all the issues would be in writing the simulator. In the end I

PAGE 66

did not use the operating system concept of each service access point (SAP) acting as a task, because my simulator is event driven and this did not seem necessary any more. Another interesting study was done by Stuck at Bell Labs. The paper was originally printed in Data Communications, January 1983, titled "Which local net access is most sensitive to traffic congestion?". Stuck emphasized that in order to test a simulator one must generate "controlled loads". What he meant by this is that one must test by creating discrete events for the simulator to process. All of the simulators were written to compare media access. One mathematical analysis dealt with the delay due to the service access points. What is meant by this is that as the data travels up and down the protocol stack each interlayer group of communications has a delay associated with it. The net effect is

PAGE 67

significant delay. I asked several protocol experts whether they had ever seen an analysis or simulation of protocol stacking. To date I have not found such a study, although it is hard to believe that it has not been done. There are basically two approaches to simulation. Some people name them "process" and "event" driven. In process driven simulation the simulation clock (or time) runs continuously in as fine a granularity as is needed. Events are created dynamically. The advantage of this approach is that it is closer to the way the real world operates. You could say it is "object oriented". A multi-tasking operating system would be good for this. Level one service access points could be tasks. Each one of them could have either a queue or state machine to simulate the ordered transfer of data. The media could be another non-reentrant task. Or in the case of

PAGE 68

Ethernet it could be reentrant but cause a collision condition upon two SAP's entering it at the same time. The large disadvantage of process driven simulation is that the system must often remain idle while waiting for the clock. Event driven simulation is fast. If nothing is happening in real time the clock skips forward over it. The disadvantage is that there is more to keep track of, such as when to advance the clock. When frames collide they must be retroactively removed from the "done" queue and be put back on the "to be done" queue. It can be confusing to have to constantly refer to past time as to what will happen next. Nonetheless I chose this approach for its speed. The result is that PSIM is more complex than I would have liked, but it is fast.
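To make the distinction concrete, the C sketch below shows the core of an event-driven simulator: events are kept in a time-ordered queue and the clock jumps straight to the next event instead of ticking through idle time. The structure, the names, and the toy main() are illustrative assumptions, not PSIM's actual code.

    #include <stdio.h>
    #include <stdlib.h>

    struct event {
        double time;                /* simulated time at which the event occurs */
        int    sap;                 /* service access point that generated it   */
        struct event *next;
    };

    static struct event *pending = NULL;    /* "to be done" queue, sorted by time */

    void schedule(double time, int sap)
    {
        struct event *e = malloc(sizeof *e), **p = &pending;
        e->time = time;
        e->sap  = sap;
        while (*p && (*p)->time <= time)    /* keep the queue in time order */
            p = &(*p)->next;
        e->next = *p;
        *p = e;
    }

    void run(void)
    {
        double clock = 0.0;
        while (pending) {
            struct event *e = pending;
            pending = e->next;
            clock = e->time;                /* the clock skips forward to the event */
            printf("t=%g: PDU from SAP %d reaches the media\n", clock, e->sap);
            free(e);
        }
    }

    int main(void)
    {
        schedule(12.5, 1);
        schedule(3.0, 2);
        run();                              /* SAP 2's event is processed first */
        return 0;
    }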

PAGE 69

CHAPTER IV
SIMULATOR DESCRIPTION

PSIM Code Guide

PSIM is designed to simulate and compare the interaction of different protocols. In the past, studies have emphasized level one protocols only. PSIM allows a user to do level one protocols and upper level protocols. PSIM is organized to simulate the seven OSI layers. The user may define what protocol is to be used at each layer. Different service access points (SAP's) may use different protocols, except that level one and parts of level two must be the same if any network used today is to be simulated. The main objective of PSIM is to allow the user to compare the performance of different protocols. The user can run the same SAP traffic

PAGE 70

while trying mixes of different media and protocols. The performance results are quantitative statistics of the run. In other words there is no AI shell interpreting the results. Thus the user must temper them with his own weighting for reliability, compatibility, cost, complexity, etc. PSIM can also be used to develop new protocols. After writing a new protocol for PSIM to simulate, one finds that he has a much better understanding of the protocol. In the meantime the user may find mistakes in the original design. Thus PSIM can be used as a design tool. It can be confusing to determine what aspects of a protocol affect its performance on the media and which aspects are internal station overhead. PSIM helps one understand these differences. For example, 802.2 has a service named L_DATA_CONNECT.confirm. This is a message that is passed between the LLC and the network

PAGE 71

layer to indicate the success or failure of one or more data unit transfer requests. This has nothing to do with performance of the protocol on the media, except that it could delay the transmission of the next PDU because of the time it takes to do this piece of overhead. When reading about a protocol the differences between the services and what actually reaches the media can become confusing, at least I found it so. PSIM allows the user to simulate both (1) and thus starts making the user more aware of the total protocol performance. PSIM is written in 'C'. It could have a user interface developed for it that would allow one to specify new protocols and then simulate their execution. At the moment the user must write 'C' routines to change simulations, which is not "user friendly"; however the architecture of PSIM has been made as simple and straightforward as possible so as not to make using PSIM a

PAGE 72

frustrating experience. I have tried to maintain a repetitious style over doing the most efficient coding possible, and have occasionally broken statements up that could have been more tersely written. Some of the files could have been merged to shorten the overall code length, but were instead left somewhat redundant for clarity and easy modification (2). In this thesis I will refer to service access points (SAP's) a great deal. A service access point is a port, at a station, that is accessible to other SAP's to talk to. Generally one process talks to another via two SAP's, one on the source station and the other on the destination station. However multiple remote SAP's may talk to one SAP. This is most often done for emergency messages. In these cases there is a designated SAP (port) number for a certain type of emergency that all stations know. Thus an emergency monitoring process can

PAGE 73

establish communication with the known SAP any time the need arises. PSIM treats SAP's a little differently than networks in reality do. In PSIM a SAP is actually a pair of real world SAP's. PSIM does not force each PSIM SAP to simulate two SAP's, but usually it will be used this way. The reason for this is that a pair of real SAP's have ordered conversation. In order for PSIM to simulate this order there were two choices. One was to have a state machine for each connection (in this context datagrams are connections too) and have each SAP monitor the state machine, since PSIM does not really send its messages. PSIM only advances time to simulate that the media has been used for a certain amount of time. The monitoring of the many state machines and keeping the state machines up to date would have added complexity to PSIM. The

PAGE 74

other simulators I have read about have ignored the SAP issue, have had one common propagation delay for all messages, and have had a random generator create the messages to send. PSIM, in implementing a protocol stack, would lose a lot of meaning if it produced random messages in a certain protocol, because the basis of most protocols is establishing some sort of ordering of events. The other choice for remedying this problem was to have SAP modules act as pairs of real world SAP's. In this way the state machine for the proper ordering of the messages can be in the same state machine that the overhead PDU's are in. Thus in this thesis a SAP may take on two slightly different meanings. When referring to PSIM's code I may mean a pair of SAP's; most of the time I am referring to one real SAP. Hopefully this will be easy for you to resolve.

PAGE 75

PSIM starts by calling the scheduling modules. The main scheduler polls the SAP's for PDU's that they want to send. The SAP's tell the protocol stack routine what messages they want to send, at what time, and what protocol they want to use. The protocol stack then implements the protocol on the information PDU and sends it to a routine which puts it in a queue. By "implements the protocol" I mean that each layer of the desired protocol appends the appropriate framing to the packet. When the packet reaches the bottom of the stack (level one), the packet is queued for entry onto the media. When the scheduler is finished polling the SAP's it returns to the main routine. Originally I had intended that the scheduling process could go on infinitely, by allowing the scheduler to queue up to a certain number of PDU's, then have the rest of the

PAGE 76

simulator run, then have the scheduling process run again. PSIM can be modified to do this; however I chose not to do this for two reasons: 1) It adds complexity to the code, which is the user interface. 2) If PSIM is close to what really happens on a LAN or WAN it is successful. Simulating a longer traffic time could only add to the accuracy of PSIM. But PSIM can only be roughly accurate because it is a simulator. Therefore running many iterations shows a lack of appreciation for the limitations of any simulator. As an aside, during my investigations of what has been done before on level one simulation, I never noticed that any paper acknowledged the limitations of simulation. After the scheduling is complete the level two and one simulator is called upon to dequeue the PDU's. The PDU's are queued according to the time that they desire to run. The queue can be thought of as a clock running

PAGE 77

through time. As time advances more PDU's want to run. The queuing aspect is not something that happens in reality in this form. The simulation of real world queuing occurs in the level two and one simulator. This is where the collisions and backoffs occur, which then cause the PDU's to be delayed, i.e., queued. The level one and two simulators are selectable by the user for each run. The level one part of these routines simulates the access to the media. The level two portion is harder to understand in the code. Ideally the stacking routine would implement all of level two. The protocol packet building for level two is in the same file as the rest of the stacking. However I ran into a problem with scheduling any extra level two overhead into the queue, which is that the overhead PDU's, such as an RR (receiver ready), must not only be queued but also must run before other PDU's from the same SAP. Therefore

PAGE 78

a state machine must be running for each SAP. If I queued the overhead PDU's with the original information frame, their order could be switched. For example, if SAP number 1 wants to send a TEST and then a message, 100 microseconds apart from each other, the TEST packet might back off. The message might run without any contention. Then the TEST might run after the message. This would be unrealistic since the SAP would first wait for the TEST PDU to be sent before asking for the message to be sent. The state machine was most easily implemented in the routine which accesses the media. Thus part of the level two protocol is in the media access routine while the other half is in the protocol stack. After the simulation has been run the statistics calculator is called. This can be easily modified to collect different statistics.

The ones that PSIM gathers at present are:

sap number and label
pdu transmission time
propagation delay
desired run time originally requested
actual run time
end of run time
information throughput
mean pdu delay normalized to pdu_delay/pdu_trans_time
percent utilization of the media
arrival rate of new pdu's
mean prop_time/mean trans_time
total number of collisions
fraction of pdu's that collided
total number of pdu's
mean service time (ie mean transmission time for a pdu)

The modules of PSIM are as follows:

GLOB.H is where the general global definitions are. The hub of PSIM is a structure named "queue", which has fields for all of the pertinent information associated with a pdu, such as the time that the SAP desired to send it, and when it actually got onto the media; the amount of information content it had, and the length of the pdu as it was on the media; the number of collisions and backoffs it had to undergo; etc. A sketch of this structure follows the MAIN.C description below.

PROTO.H is where the level two and one media access routines are chosen. The choice is made by a #define. The other alternative would have been to include a user accessible variable; a user interface would probably want to include that. I chose the #define approach so as to make writing and debugging PSIM faster.

MAIN.C calls the main scheduler, then the media access routine, then the statistics gatherer.
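
The following is only a sketch of the "queue" record, reconstructed from the field names used in the listings of Appendix E; the exact declaration in GLOB.H may differ in field order, field types, and array sizes, so treat it as an illustration of what the structure holds rather than the actual definition.

struct queue {
    char *label;          /*label given by the scheduling SAP*/
    int  port_num;        /*originating SAP/port number*/
    long od_r_time;       /*originally desired run time, micro sec*/
    long d_r_time;        /*current desired run time after backoffs*/
    long a_r_time;        /*actual time the PDU got onto the media*/
    long end_r_time;      /*time the PDU finished running*/
    int  info_len;        /*user information content, in bits*/
    int  mess_len;        /*total length of the PDU on the media*/
    int  mess_cnt;        /*count of extra (overhead) messages queued*/
    int  b_off_num;       /*number of backoffs suffered*/
    int  coll_num;        /*number of collisions suffered*/
    int  x_mess[15];      /*lengths of queued overhead PDU's (size assumed)*/
    char *x_label[15];    /*labels of queued overhead PDU's (size assumed)*/
};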

SAP_SCH.C is the main scheduler routine. It only calls the SAP schedulers at the moment. It would appear to be a case of code fragmentation, except that I want to leave a structure behind for quickly calling different SAP schedulers, and possibly for having longer simulation runs. As stated earlier, having longer simulation runs does not increase accuracy. However, PSIM could be modified to actually run on a network, making longer run times desirable. Thus SAP_SCH.C is really a "hook" now.

SAP1.C, SAP2.C, SAP3.C, etc. are the actual level seven PDU simulators. I have simulated the arrival process both as discretely coded, i.e. fixed points in time, and as a Poisson distribution. The name SAP#.C implies that the code schedules SAP's. This naming convention could be confused for indicating that the code simulates one SAP. It does not. Each of these files is to be treated as a scheduler of SAP's.
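
As an illustration of the Poisson arrival option, here is a minimal sketch of an exponential inter-arrival generator in the style of the SAP2.C listing in Appendix E. The ARR_CONST value and the rand()/32767.0 scaling follow that listing; the function name next_arrival and the guard against a zero random number are additions made only for this sketch.

#include <math.h>
#include <stdlib.h>

#define ARR_CONST 100                         /*mean inter-arrival time, micro sec*/

long next_arrival(long time)
{
    double r= ((double)rand())/32767.0;       /*uniform number between 0 and 1*/

    if(r <= 0.0)                              /*added guard against log(0)*/
        r= 1.0/32767.0;
    return(time + (long)(-ARR_CONST*log(r))); /*exponentially distributed step*/
}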

The user can define the file differences by the scheduling algorithm used in each one, or he can think of each one as a station, or he can use different protocols in each one, or any other convenience. In developing PSIM I wanted to keep the algorithms for generating PDU's in different files. The Poisson arrival process simulator has a #define constant for how many arrivals to generate, making it easier to test PSIM. The discrete arrival simulator was able to better test boundary conditions such as marginal backoff and collision cases.

STACK.C is the protocol stack that is called by the SAP's. At the moment this file is not too large, but if a lot of different protocols are added to PSIM, it should be broken up into different protocol stacks. STACK.C is called with a label, a message length that the SAP wants to send, a SAP port number to be able to track the port that the frame originally came from, the desired time for the message to be sent (i.e. the simulated time that the SAP would actually have finished its protocol stack and tried to send the message), and a list of pointers to the upper six layers of protocol desired. The list of protocol pointers allows the user to easily change the protocol used and compare the results. As stated before, the protocol stack is contained in this file, STACK.C. The calling SAP#.C routine declares the protocol stack routines as extern. The directions for change available to the user are infinite. An existing protocol can be changed, or one level of the stack can be exchanged for another, or more high level protocols may be added. If the user does not want to use or write a protocol for each level, the routine dummy() may be called. In almost all cases dummy() will be used. Very few protocols today even define all seven layers of the OSI model.
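
For reference, here is an example of a call to stack() in the form used by the SAP1.C listing in Appendix E; the six trailing arguments are the level two through level seven protocol builders, with dummy() filling the layers the chosen protocol does not define. The wrapper function sched_example() exists only to give the call a context.

extern int dummy(), dot3_12c(), sna34(), vt();

void sched_example()
{
    stack("sap1#1a",      /*label for tracing this transaction*/
          800,            /*message length in bits*/
          1,              /*originating SAP port number*/
          1L,             /*desired run time, micro sec*/
          dot3_12c,       /*level 2: 802.2 connection oriented service*/
          dummy,          /*level 3: none*/
          sna34,          /*level 4: SNA path and transmission control*/
          dummy,          /*level 5: none*/
          vt,             /*level 6: virtual terminal*/
          dummy);         /*level 7: none*/
}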

Q_IT.C is a utility which puts the PDU's in a queue based upon their desired run time. It also has a routine for reordering the queue based upon changing desired run times due to collisions and backoffs, and another for putting a PDU at the end of the queue. At this point it is probably best to explain why there are three almost redundant queues that contain PDU's, named PDU_Q, MEDIA_Q, and FINAL_Q. They really could be combined, and statically allocating large amounts of memory is not very elegant, but having these three queues makes PSIM simpler and more flexible. PDU_Q is the original queue of information PDU's after they have been framed by the protocol stack. MEDIA_Q is at first a copy of PDU_Q but later changes as PDU's cannot get onto the media at the times that they desire. The times that they are going to run get advanced, which causes the order of the queue to change. Thus PDU_Q is left behind as a copy of how things originally were. MEDIA_Q is a record of how the PDU's are being requeued during simulation time. FINAL_Q is a record of how all of the PDU's were when they ran on the media. MEDIA_Q also has the final results of the order of the PDU's as they successfully went onto the media, but it does not contain the overhead PDU's. The overhead PDU's are such PDU's as XID, RR (exchange ID's, Receiver Ready), etc. In order to put these into the MEDIA_Q, PSIM would have had to constantly be reshifting the PDU's so as to make room for the overhead PDU's. I chose to just have a queue named FINAL_Q. The other alternative would have been to dynamically allocate space for each PDU record as PSIM went along. I almost did this, but having started with arrays, and arrays being easier for others to follow, I stayed with arrays all of the way through.

MS3.C is the CSMA/CD datagram simulator. It simulates each frame trying to get onto the media. Collisions can occur due to propagation delay, which is user definable, and backoffs occur due to carrier sensing. Since there is no call setup for datagrams, each PDU contains an information section that came from the originating SAP.

MS3C.C is very similar to MS3.C except that Type 2 connection oriented service is used with CSMA/CD. The call establishment and overhead associated with each SAP requires that a state machine be implemented for each SAP. Since Type 2 overhead is variable, the user is encouraged to change the state machine to simulate different types and amounts. The state machine is what allows PSIM to simulate any protocol. Without it, each PDU that went onto the media would be a direct result of a SAP requesting a transaction, which would only be realistic in the case of all SAP's sending short datagrams. The state machine consists of an array named "ports" of integers, NUM_PORTS (defined in glob.h) long. Port numbers in PSIM can be from zero to NUM_PORTS, unlike real life where one can use a much larger range of numbers. Each port number is an index into the array "ports". As overhead actions change the state of the connection, the value of the associated integer in "ports" is changed. Upon entry into MS3C.C the PDU_Q is copied into MEDIA_Q. MEDIA_Q is thus made into a queue of information PDU's that were originated by a SAP. The state machine processes the MEDIA_Q. When an overhead PDU is needed before the information PDU in MEDIA_Q, the state machine realizes this by using the value in "ports" in a "switch" statement, ie. switch(ports[port_num]). The "case" statements then do the overhead action, reschedule the information PDU, and change the value of ports[port_num]. In this way the information PDU in the MEDIA_Q queue is really being used to schedule the station to do whatever is needed next. In the same manner the information PDU can be used to schedule post-information actions. In other words, the sending of the information PDU can be simulated while the state machine again reschedules it in MEDIA_Q and uses it again as a flag to go ahead and finish up. Packet fragmentation can also be implemented in the state machine. The code in the information sending portion of the state machine can look for excessively long information fields in the PDU and cause its fragmentation into multiple PDU's. In effect the original PDU that was in MEDIA_Q can become many PDU's. It is easiest to visualize the result as a tree diagram, with the PDU's in MEDIA_Q as the roots and the leaves being the overhead plus the information PDU's.
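
As a concrete illustration, the following condensed sketch follows the shape of the dot3_12c() state machine in the STACK.C listing of Appendix E (where the state array is actually named port_sm rather than "ports"): case 0 schedules the call establishment overhead PDU's and advances the connection state, while case 1 simply lets the information PDU go.

switch (port_sm[port_num]) {
case 0:                             /*no connection yet: queue the overhead first*/
    mess_cnt+= 3;
    x_mess[mess_cnt]=   xid(port_num);   x_label[mess_cnt]=   "XID";
    x_mess[mess_cnt-1]= sabme(port_num); x_label[mess_cnt-1]= "SABME";
    x_mess[mess_cnt-2]= rr(port_num);    x_label[mess_cnt-2]= "RR";
    port_sm[port_num]= 1;           /*connection will be up for the next PDU*/
    break;
case 1:                             /*connection established: information only*/
    mess_cnt= 1;
    break;
}

The media access routine in MS3C.C then drains these extra entries before the information PDU itself is allowed to run.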
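
Fragmentation itself is not implemented in the current listings, but a hypothetical version of the check described above might look like the following fragment; MAX_INFO, the "FRAG" label, and the 208 bits of added framing (borrowed from mac3()) are all assumptions made only for illustration.

#define MAX_INFO 12000                      /*assumed largest information field, bits*/

while (MEDIA_Q[qii].info_len > MAX_INFO) {  /*split an oversized information PDU*/
    MEDIA_Q[qii].mess_cnt++;
    MEDIA_Q[qii].x_mess[MEDIA_Q[qii].mess_cnt]=  MAX_INFO + 208;   /*fragment + framing*/
    MEDIA_Q[qii].x_label[MEDIA_Q[qii].mess_cnt]= "FRAG";
    MEDIA_Q[qii].info_len-= MAX_INFO;       /*remainder still to be sent*/
}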

This proliferation of PDU's is what made me choose to put the final results of simulation into the array FINAL_Q. FINAL_Q is a record of every PDU that went onto the media, which STATS.C can use. If the results had been retained in the MEDIA_Q there would have been a lot of shuffling to accommodate them. Thus FINAL_Q was created. As was stated before, using "malloc" or "calloc" might have been better, but it would have meant a change in PSIM's coding style which would have made PSIM harder to follow.

STATS.C does the statistics gathering and calculating after the simulation run has finished. It writes the results, outlined a few pages earlier, to a file stat.dat. Stat.dat can then be printed or viewed with an editor. The results that STATS.C gives can be changed as long as the parameters needed can be gotten from the queue structure fields. Otherwise a PSIM user may modify the queue structure and gather other PDU statistics for STATS.C to use.

Statistical Analysis Considerations

The objective of this thesis is to demonstrate a protocol simulator. To my knowledge all simulation research done so far, by other people, has been on level one protocols. Although my simulator can be used to compare level one protocols, I will concentrate my efforts on simulating what is called a "protocol stack": the concept of passing an information packet to a series of routines which then develop the whole protocol necessary to transmit the packet to another service access point. The protocol simulator (PSIM) has a routine named "stack()" which receives, among other parameters, a list of pointers to protocols for each OSI layer. Thus each SAP could run a different protocol, but the real objective is to have SAP's transmit the same information packets, from run to run, while changing the protocol that is used from run to run. The timing of the PDU's can then be compared to analyze the performance of the different protocols.

There is more to total performance analysis than just timing, such as compatibility, maintenance, and reliability of the data transmission. PSIM will concentrate on the timing analysis. An AI type of shell that analyzes all performance parameters could be written to surround PSIM.

Information throughput is the most important performance statistic for a network(19). Other simulators have measured throughput of different medias. PSIM's version of throughput, "information throughput", is trying to simulate the performance of information transfer, not just the number of error-free bits transmitted in a unit of time. For example, a complex protocol that not only adds a lot of framing to an information packet but also requires a lot of ACK's, and fragments the packets, might have a very good throughput as conventionally measured but could have a very poor information throughput due to the large amount of overhead. The area of information throughput has been worked on. Communications Machinery Corp. in California recently released a 68020-based VME board which increased the information throughput, or "data throughput" as they call it, for TCP/IP systems using 802.3.

The following statistical parameters are in quotes as acronyms, which is how they appear in the 'C' code for PSIM. A short numerical example follows this list.

"i_thru" = information throughput = I/T (information per time). The other statistics that PSIM creates help calculate and better understand information throughput. They are as follows, with some of them having accompanying discussions:

"pdu_t_t" = PDU transmission time = the time a given PDU took to transmit.

"prop_d" = propagation delay = the time it takes to transmit from one SAP to another. Other simulators have either used a fixed or a random time for propagation delay. PSIM has a variable delay which depends upon which SAP's are talking to each other. The user can change the defined propagation delays for each pair of SAP's. This can help simulate different medias, protocols, and physical layouts more realistically.

"util" = "b_time"/"t_time" = utilization = busy time / total simulation time.

"arr_rate" = arrival rate of PDU's to send (often referred to as lambda).

"prop_to_trans" = mean propagation time / mean transmission time.

"mean_s_time" = mean service time = mean transmission time for a PDU.

"num_coll" = total number of collisions. This would apply only to random access protocols such as CSMA, CSMA/CD, ALOHA, etc.

"sap_num" = the service access number, ie. the port or point access number, as they are alternately called.

"label" = the label associated with the PDU that this SAP is sending.

"od_r_time" = the time the PDU went, or would have gone, onto a free media; "od_r_time" implies original desired run time.

"a_r_time" = actual run time that the PDU went onto the media.

"end_r_time" = the time when the PDU finished running on the media.

CHAPTER V

SIMULATOR RESULTS

The goal of this thesis was to devise an algorithm, and write code, for doing protocol stack simulation. The goal was not to optimize protocols. In order to test the code and see how different protocols compared, some test cases were run. To my knowledge no simulator has been created before that simulates the protocol layering. Thus I could not make comparisons with previous work.

Comparison1 (datagrams, virtual circuits, and virtual circuits with an IP header)

The first three test runs compared datagrams, virtual circuits, and adding an IP header to the virtual circuit pdu's. The datagram protocol used the LLC and MAC layers of 802.2 and 802.3. The virtual circuit used some HDLC (LAP-B) call establishment calls. The run with IP added an IP header with the purpose of demonstrating how the added overhead of a header changes network performance. PSIM output and a histogram of these results are in Appendix A. Datagrams generated many fewer pdu's, and if the traffic had been heavier the information utilization performance would have been much greater also. As it was, the information utilization performance of the datagrams was only twenty percent better. Actually, if you measured utilization as being the total number of bits transmitted divided by the total possible number that could have been transmitted, the virtual circuits would have looked better. This is what I see as the pitfall that many analyses fall into. They do not concern themselves with the information transfer rate, but instead with the bit rate, and the two are not necessarily related.

Adding an IP header had no apparent effect. This is due to the size of the pdu's being transmitted. But the point is that headers usually do not degrade performance that much. One could send the worst case amount of information and thus cause the extra header bits to force two pdu's to be sent for every information pdu that was one, but that would not be a real world circumstance.

Comparison2 (TCP, SNA, and SNA with VT)

These three runs compared TCP with SNA, and with SNA with VT on top. In this context SNA is actually referring to using SNA call establishment or, as IBM calls it, the path and transmission control layers of SNA. The simulation output and associated histogram are in Appendix A. As would be expected, TCP has less overhead for starting a session than SNA and thus is more efficient. You can see that the information throughput for SNA and SNA with VT was the same. What happened here was that even though there were more pdu's due to the VT overhead, they got onto the media almost as fast as the run with just SNA did. The utilization with VT was better but the information utilization was the same. I could have doctored up the arrival rate so that this did not occur, but it shows how arrival rates and luck can affect performance. As was stated earlier, running larger samples is not the answer either, because all they do is give one a false sense of security that the results are more accurate than they really are. Network simulation can only give one a general feeling as to what to expect. The arrival rates, data lengths, and control frames that one chooses to use are all subjective choices which affect accuracy much more than the difference between running thousands of pdu's and a hundred.

Comparison3 (datagrams and SNA with VT running at a very heavy loading)

The user information that was transmitted was at the bandwidth of the network, ie. 10 Mbits of user information per second was the arrival rate. It demonstrated how the choice of protocol is very important at heavy loading. The datagram information throughput was 4.65 Mbits per second, whereas the information throughput when using SNA was .28 Mbits per second. The addition of SNA overhead basically brought the network to its knees.

Ethernet Backoff Algorithm Test

Ethernet uses a truncated binary exponential backoff routine in order to back off pdu's that have collided(15). The code for the algorithm is included in Appendix D. What I wanted to see was how efficient the algorithm was and whether a better one could be found by simulation. All of the pdu arrivals were generated randomly by the same algorithm found in file "sap2.c". The first run was only 25 pdu's, the second was 100 pdu's, both using the Ethernet backoff algorithm. You can see from the results how the Ethernet algorithm adapts to the traffic level, because the run of 25 pdu's was much less efficient than the run of 100 pdu's. By efficient I mean the network utilization was much better for the longer run. On the PSIM display, during the 100 pdu run, you could see that the number of collisions was great at the beginning of the run and then the algorithm started to work and reduce the number of collisions, thus allowing the traffic to proceed.

The runs that attempted to find an optimum backoff time used a static random backoff, meaning that the backoff random time window did not change with the number of collisions. The code for each is at the top of the results page in Appendix D. Certain windows were found to be better for the level of traffic being generated. Notice that the network utilization for the third algorithm is much better than Ethernet's. Of course this is only true for the traffic that "sap2.c" generates, so in reality the truncated binary exponential algorithm is better. However, the results indicate that if a node were to adapt to traffic conditions, a better backoff algorithm could be found. The Ethernet algorithm adapts only for each pdu. As soon as that pdu is sent, what has been learned is forgotten. That is why the static algorithm I found works better.
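
For reference, the static backoff used in the best of those runs (file back6 in Appendix D) simply draws an exponentially distributed delay from a fixed window, regardless of how many collisions the pdu has suffered. The sketch below restates that generator; the function name static_backoff and the guard against a zero random number are additions for this sketch only, while the -3000 scaling is taken from the appendix.

#include <math.h>
#include <stdlib.h>

long static_backoff()
{
    double r= ((double)rand())/32767.0;   /*uniform number between 0 and 1*/

    if(r <= 0.0)                          /*added guard against log(0)*/
        r= 1.0/32767.0;
    return((long)(-3000*log(r)));         /*fixed window: mean 3000 micro sec*/
}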

CHAPTER VI

CONCLUSION

PSIM is a flexible tool that can aid in any network design or investigation. The code is modular, allowing one to adapt it to specific network characteristics. In not having a user interface it is not friendly, but it is very flexible and powerful. The original goal of PSIM, of creating a protocol simulator, has been achieved.

There were three basic architectural approaches that could have been taken to designing PSIM. The one that PSIM uses is to have service access point simulators create a group of pdus which are then queued in a general queue waiting to get onto the media. A second approach would be to have the service access point simulators each have their own queue of pdus. The media simulator would then be more complicated in that it would have to poll all of the queue managers as to when their next pdu desired to be sent. However, the advantage of this approach would have been that a service access point would not have had to keep a state machine so as not to mix up the order of the pdus in the general media queue that PSIM has. A third approach would have been to have the media simulator constantly poll the service access point simulators as to whether they had a pdu to send. Or one could even write an interrupt service routine for each service access point simulator that would wake up the media simulator. This last approach would run slower than the above two if polling was used. The interrupt approach would be a good architecture to try in the future. It would be much more object oriented, and thus easier to understand in the long run. The disadvantage would be that the code would be less portable, in that the interrupt handlers would be operating system dependent.

The simulation comparison of datagrams, virtual circuits, and virtual circuits with an IP header demonstrated how adding a header can have no detrimental effect on network performance, but that adding call overhead has a greater chance of doing so. Even though the IP header added bits to the frame, the network information utilization did not decrease.

The simulation comparison of virtual circuits with TCP, virtual circuits with SNA, and virtual circuits with SNA and virtual terminal overhead demonstrated how TCP is a more efficient protocol than SNA, and how, compared to the overhead of SNA session establishment, virtual terminal does not significantly degrade a network. One can arrive at this conclusion without a simulator tool by reading about the overhead associated with each protocol, but this comparison showed that PSIM works as expected and could be further applied to a network design or layout problem where the impact of choosing one of these protocols over the other was not obvious.

The simulation comparison of datagrams versus SNA path and transmission control with virtual terminal running on top demonstrated how the choice of protocol can make the difference between success and complete failure of a network. The choice of protocol can be essential. This demonstrated PSIM's usefulness. Where past work has been done studying media performance, PSIM was able to show how protocol stacking can be more important than the media bandwidth.

The Ethernet backoff algorithm test demonstrated how the bandwidth of Ethernet can be increased by improving the backoff algorithm. Today much research is being done on improving media data rates. For example, there are a large number of vendors that sell fiber optic to Ethernet interfaces that enable level 1 to run at speeds of around 100 Mbits/sec. To my knowledge no company sells software that makes the backoff algorithm learn. The PSIM simulation indicated that there are great gains to be had in this area. One potential way of making the algorithm learn would be to have a node keep track of its past performance in accessing the media. Different access delays could be weighted with decreasing weights as their age increased. Thus the latest conditions would be predominant in determining the backoff window, but the older events would smooth the transition of the window size. Another method of making the backoff algorithm learn would be to occasionally have a designated node broadcast what it had learned. For example, a protocol analyzer could sit on the network and run a network statistics application that came up with a backoff window that it determined was ideal for the moment, and broadcast the window size to every node. This way only one node would need to run the learning software.

Another comparison, related to the Ethernet backoff algorithm comparison, that would be interesting to use PSIM for would be to test different carrier sense persistence algorithms. Ethernet uses 1-persistence, which is known to degrade performance. By having the persistence algorithm adapt to network traffic in the same manner as the backoff algorithm can, as explained earlier, network performance could surely be improved. PSIM would be a good tool for learning how to adjust the persistence window.
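
A minimal sketch of the first "learning" idea described above, assuming a simple exponentially weighted average of recently observed access delays; the weighting factor, the initial estimate, and the names adaptive_backoff and window are all assumptions made for illustration and are not part of PSIM.

#include <stdlib.h>

static double window= 1500.0;            /*current backoff window estimate, micro sec*/

long adaptive_backoff(long access_delay) /*access_delay: how long the last pdu waited*/
{
    double r;

    /*newest observation dominates; older events decay geometrically*/
    window= 0.75*window + 0.25*(double)access_delay;

    r= ((double)rand())/32767.0;         /*uniform number between 0 and 1*/
    return((long)(window*r));            /*random backoff drawn from the learned window*/
}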
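
Similarly, a p-persistent carrier sense decision of the kind PSIM could be used to study might be sketched as follows. Ethernet itself is 1-persistent (p = 1.0); the 51 microsecond slot time is taken from the SLOT constant in the MS3C.C listing, and everything else here is an assumption for illustration only.

#include <stdlib.h>

#define SLOT 51                          /*slot time, micro sec (from MS3C.C)*/

long p_persist_delay(double p)           /*returns how long to defer before transmitting*/
{
    long delay= 0;

    /*with probability p transmit in this slot, otherwise wait one slot and retry*/
    while((((double)rand())/32767.0) > p)
        delay+= SLOT;
    return(delay);
}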

REFERENCES:

1. William Stallings, Data and Computer Communications, New York, Macmillan, 1985, pp 432-437

2. Ibid., p 426

3. Ibid., p 355

4. Anton Meijer, Systems Network Architecture: A Tutorial, New York, Wiley, 1988, pp 8-14

5. Ibid., p 55

6. Tanenbaum, Computer Networks, Englewood Cliffs, New Jersey, 1981, p 374

7. Ibid., p 172

8. Carrier Sense Multiple Access with Collision Detection, IEEE Std 802.3, Lib. of Congress # 84-43096, pp 21-22

9. Logical Link Control, IEEE Std 802.2, Lib. of Congress # 84-43095, pp 38-41

10. Ibid., p 38

11. Token-Passing Bus Access Method, IEEE Std 802.4, Lib. of Congress # 84-43094, pp 29-30

12. Ibid., pp 152-160

13. Hammond and O'Reilly, Performance Analysis of Local Computer Networks, Reading, Mass., Addison-Wesley, 1985, pp 175-180

14. William Stallings, Handbook of Computer Communication Standards, New York, Macmillan, 1987, p 97

15. Ibid., pp 89-91

16. Ibid., p 113

17. Ibid., pp 14-40

18. MacDougal, Simulating Computer Systems, Cambridge, Mass., MIT Press, 1987, p 181

19. Ibid., p 165

20. Data Communications Testing, Colorado Springs, Colorado, Hewlett-Packard, 1980, p 3-9

21. Ibid., pp 4-11 - 4-16

22. Anton Meijer and Paul Peters, Computer Network Architectures, Rockville, Maryland, Computer Science Press, 1983, p 105

APPENDIX A

Appendix A contains the simulation output of Comparison1 (datagrams versus virtual circuits versus virtual circuits with internet protocol headers) and the associated histogram of information utilization.

This first output file, which uses datagrams, can be compared with vc1 and vcip. Note how few pdu's there were, and the efficiency. The arrival rate was determined by a discretely defined file named "sap1.c". Notice how few pdu's there were; this is because every pdu contained user information. There were no overhead pdu's.

File Name DG1:

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 17524 us.
total number of pdu's= 9
busy time= 4716
utilization= 4716/17524= 0.269  arr_rate= 513.581 pdu's per sec.
# of collisions= 1  coll_per_pdu= 0.222
mean ser time= 524.00us.  i tot= 45000.00 bits
delay_t (total)= 20145 us.  mean_delay= 2238.33 us.
norm_dpert(delay per trans time)= 4.272
i_thru(total info/total time)= 2.57 Mbits per sec.

This file can be compared with dg1 and vcip. Note that connection service greatly increases the number of pdu's. The datagram service sent 9 pdu's; establishing the connections caused another 48 pdu's to be generated. Also notice, however, that the data throughput did not decrease greatly. This was due to the light loading of the simulated network. The information throughput only decreased by 20 percent even though the number of pdu's increased by over 600 percent.

File Name VC1:

STATISTICS FROM SIMULATION OF DOT THREE CONNECTION SERVICES
total simulation time= 33277 us.
total number of pdu's= 57
busy time= 9980
utilization= 9980/33277= 0.300  arr_rate= 1712.895 pdu's per sec.
# of collisions= 12  coll_per_pdu= 0.421
mean ser time= 175.09us.  i tot= 65000.00 bits
delay_t (total)= 134181 us.  mean_delay= 2354.05 us.
norm_dpert(delay per trans time)= 13.445
i_thru(total info/total time)= 1.95 Mbits per sec.

This file can be compared with dg1 and vc1. Notice how the IP header does not affect the performance noticeably.

File Name vcip1.sum:

STATISTICS FROM SIMULATION OF DOT THREE CONNECTION SERVICES
total simulation time= 33296 us.
total number of pdu's= 57
busy time= 10227
utilization= 10227/33296= 0.307  arr_rate= 1711.917 pdu's per sec.
# of collisions= 12  coll_per_pdu= 0.421
mean ser time= 179.42us.  i tot= 65000.00 bits
delay_t (total)= 134181 us.  mean_delay= 2354.05 us.
norm_dpert(delay per trans time)= 13.120
i_thru(total info/total time)= 1.95 Mbits per sec.

APPENDIX B

This file is to be compared with vcsna and vcsnavt. Note how TCP is much more efficient than the other two protocols due to the simple call connections.

File Name VCTCP:

STATISTICS FROM SIMULATION OF DOT THREE CONNECTION SERVICES
total simulation time= 13390 us.
total number of pdu's= 35
busy time= 3570
utilization= 3570/13390= 0.267  arr_rate= 2613.891 pdu's per sec.
# of collisions= 8  coll_per_pdu= 0.486
mean ser time= 102.00us.  i tot= 12000.00 bits
delay_t (total)= 30192 us.  mean_delay= 862.63 us.
norm_dpert(delay per trans time)= 8.457
i_thru(total info/total time)= 0.90 Mbits per sec.

This file (vcsna) can be compared with vctcp and vcsnavt. It is the middle performer, as expected. Call establishment is lengthy, but it does not have the extra overhead of establishing a VT connection.

File Name vcsna.sum:

STATISTICS FROM SIMULATION OF DOT THREE CONNECTION SERVICES
total simulation time= 17377 us.
total number of pdu's= 65
busy time= 5632
utilization= 5632/17377= 0.324  arr_rate= 3740.577 pdu's per sec.
# of collisions= 14  coll_per_pdu= 0.431
mean ser time= 86.65us.  i tot= 13600.00 bits
delay_t (total)= 64719 us.  mean_delay= 995.68 us.
norm_dpert(delay per trans time)= 11.491
i_thru(total info/total time)= 0.78 Mbits per sec.

This file (vcsnavt) can be compared with vcsna and vctcp. It has all of the call establishment overhead of SNA plus that of VT, making it the worst performer. As can be expected, the more overhead calls a protocol has, the more it bogs the network down.

File Name vcsnavt.sum:

STATISTICS FROM SIMULATION OF DOT THREE CONNECTION SERVICES
total simulation time= 17381 us.
total number of pdu's= 71
busy time= 6132
utilization= 6132/17381= 0.353  arr_rate= 4084.920 pdu's per sec.
# of collisions= 13  coll_per_pdu= 0.366
mean ser time= 86.37us.  i tot= 13600.00 bits
delay_t (total)= 68340 us.  mean_delay= 962.54 us.
norm_dpert(delay per trans time)= 11.145
i_thru(total info/total time)= 0.78 Mbits per sec.

APPENDIX C

Here is a comparison between datagrams and virtual circuits with SNA with virtual terminal. The loading was the bandwidth of the Ethernet.

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 1891 us.
total number of pdu's= 11
busy time= 1144
utilization= 1144/1891= 0.605  arr_rate= 5817.028 pdu's per sec.
# of collisions= 2  coll_per_pdu= 0.364
mean ser time= 104.00us.  i tot= 8800.00 bits
delay_t (total)= 6807 us.  mean_delay= 618.82 us.
norm_dpert(delay per trans time)= 5.950
i_thru(total info/total time)= 4.65 Mbits per sec.

VIRTUAL CIRCUITS with SNA with VT
total simulation time= 31201 us.
total number of pdu's= 71
busy time= 6132
utilization= 6132/31201= 0.197  arr_rate= 2275.568 pdu's per sec.
# of collisions= 22  coll_per_pdu= 0.634
mean ser time= 86.37us.  i tot= 8800.00 bits
delay_t (total)= 169152 us.  mean_delay= 2382.42 us.
norm_dpert(delay per trans time)= 27.585
i_thru(total info/total time)= 0.28 Mbits per sec.

Notice how the proliferation of PDUs caused the information throughput of the virtual circuit run to be very low. This is an extreme case of loading, but it does demonstrate the huge difference that can exist in performance. Token Ring or Bus would have been a much better media at this level of loading. The utilization for token architectures can be 100%. As you can see, Ethernet degrades with loading.

APPENDIX D

Using the truncated binary exponential backoff algorithm. You could see on the PSIM display that this algorithm adapted to the network load. At first there were a lot of collisions; then, as the backoff times increased, the network ran with few collisions. The first output includes what the display showed. In that case there were 25 pdus. Thereafter the display data is not included, since the runs were longer, and the results are summarized in the output files that PSIM creates.

The standard truncated binary exponential backoff algorithm in 'C':

intk= (b_off_num < 10) ? b_off_num : 10;
k= (double)intk;
range= pow(2.0, k);              /*comes back a double*/
rnd= (((float)rand())/32767.0);
rnd= (float)range * rnd * 50.0;
time= (long)rnd;
return(time);

This next simulation ran 100 pdus. Compare these results with the same algorithm run on only 25 pdu's. Notice how the throughput is better in the longer run. Again this is due to the adapting nature of the algorithm.

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 165985 us.
total number of pdu's= 100
busy time= 82400
utilization= 82400/165985= 0.496  arr_rate= 602.464 pdu's per sec.
# of collisions= 13  coll_per_pdu= 0.260
mean ser time= 824.00us.  i tot= 800000.00 bits
delay_t (total)= 6254554 us.  mean_delay= 62545.54 us.
norm_dpert(delay per trans time)= 75.905
i_thru(total info/total time)= 4.82 Mbits per sec.

File Name Back1:

Using a random backoff:

rd= (((float)rand())/32767.0);
r= (double)rd;            /*get num between 0 - 1*/
time+= -1500*log(r);      /*natural log*/

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 25529 us.
total number of pdu's= 25
busy time= 20600
utilization= 20600/25529= 0.807  arr_rate= 979.278 pdu's per sec.
# of collisions= 1  coll_per_pdu= 0.080
mean ser time= 824.00us.  i tot= 200000.00 bits
delay_t (total)= 262105 us.  mean_delay= 10484.20 us.
norm_dpert(delay per trans time)= 12.724
i_thru(total info/total time)= 7.83 Mbits per sec.

File Name Back2:

Random backoff algorithm:

rd= (((float)rand())/32767.0);
r= (double)rd;            /*get num between 0 - 1*/
time+= -500*log(r);       /*natural log*/

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 28678 us.
total number of pdu's= 25
busy time= 20600
utilization= 20600/28678= 0.718  arr_rate= 871.748 pdu's per sec.
# of collisions= 7  coll_per_pdu= 0.560
mean ser time= 824.00us.  i tot= 200000.00 bits
delay_t (total)= 323133 us.  mean_delay= 12925.32 us.
norm_dpert(delay per trans time)= 15.686
i_thru(total info/total time)= 6.97 Mbits per sec.

File Name Back3:

Using random backoff:

rd= (((float)rand())/32767.0);
r= (double)rd;            /*get num between 0 - 1*/
time+= -3000*log(r);      /*natural log*/

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 32824 us.
total number of pdu's= 25
busy time= 20600
utilization= 20600/32824= 0.628  arr_rate= 761.638 pdu's per sec.
# of collisions= 4  coll_per_pdu= 0.320
mean ser time= 824.00us.  i tot= 200000.00 bits
delay_t (total)= 339138 us.  mean_delay= 13565.52 us.
norm_dpert(delay per trans time)= 16.463
i_thru(total info/total time)= 6.09 Mbits per sec.

File Name Back4:

Using random number backoff:

rd= (((float)rand())/32767.0);
r= (double)rd;            /*get num between 0 - 1*/
time+= -6000*log(r);      /*natural log*/

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 41076 us.
total number of pdu's= 25
busy time= 20600
utilization= 20600/41076= 0.502  arr_rate= 608.628 pdu's per sec.
# of collisions= 1  coll_per_pdu= 0.080
mean ser time= 824.00us.  i tot= 200000.00 bits
delay_t (total)= 350895 us.  mean_delay= 14035.80 us.
norm_dpert(delay per trans time)= 17.034
i_thru(total info/total time)= 4.87 Mbits per sec.

File Name Back5:

Using backoff algorithm:

rd= (((float)rand())/32767.0);
r= (double)rd;                  /*get num between 0 - 1*/
time= (long)(-1000*log(r));     /*natural log*/

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 230054 us.
total number of pdu's= 100
busy time= 82400
utilization= 82400/230054= 0.358  arr_rate= 434.681 pdu's per sec.
# of collisions= 167  coll_per_pdu= 3.340
mean ser time= 824.00us.  i tot= 800000.00 bits
delay_t (total)= 12928085 us.  mean_delay= 129280.85 us.
norm_dpert(delay per trans time)= 156.894
i_thru(total info/total time)= 3.48 Mbits per sec.

File Name Back6:

Using:

rd= (((float)rand())/32767.0);
r= (double)rd;                  /*get num between 0 - 1*/
time= (long)(-3000*log(r));     /*natural log*/

Notice how this algorithm better fitted the traffic conditions than the truncated binary exponential algorithm of Ethernet. The Ethernet algorithm adapts to the network in the sense that as a pdu continues to collide it backs off longer. However, in order to reach the optimum backoff time it first clogs the network by trying to gain access too aggressively. Thus the learning process is repeated over and over. This leads one to strongly believe that an adaptive backoff algorithm that determines all of the pdu's backoff time frames might be better. In other words, the algorithm would continually monitor traffic conditions and set the random backoff time frame accordingly. This could be done by having a node analyze its own results of trying to gain access to the media, or even by having nodes send messages to each other about the activity they have been seeing. But these messages would in turn load the network some more, so their use would have to be limited.

STATISTICS FROM SIMULATION OF DOT THREE DATAGRAMS
total simulation time= 130634 us.
total number of pdu's= 100
busy time= 82400
utilization= 82400/130634= 0.631  arr_rate= 765.497 pdu's per sec.
# of collisions= 43  coll_per_pdu= 0.860
mean ser time= 824.00us.  i tot= 800000.00 bits
delay_t (total)= 6364213 us.  mean_delay= 63642.13 us.
norm_dpert(delay per trans time)= 77.236
i_thru(total info/total time)= 6.12 Mbits per sec.

APPENDIX E

Program Listings

Following are the 'C' listings for PSIM. The large model of Microsoft 'C' was used.

/*Source listings for Protocol SIMulator (PSIM)*/
/*John Elliott University of Colorado 1988*/

/*FILE NAME MAIN.C *****************************
The main routine for the protocol simulator */

#include "glob.h"
#include "proto.h"

long P_TIME;          /*present sim time in micro s*/
int PDU_Q_END;        /*index into Q of pdu's*/
int FINAL_Q_END;      /*end of final copy of all structs of messages sent*/
int port_sm[100];     /*port states*/

struct queue PDU_Q[PDU_Q_LEN];        /*Q of info pdu's*/
struct queue MEDIA_Q[PDU_Q_LEN];      /*Q of info pdu's ready to go onto media*/
struct queue FINAL_Q[FINAL_Q_LEN];    /*all pdu's for TYPE II*/

main()
{
    int ii;

    /*initialization*/


} PDU Q END= O; p TIME= 0; for(ii=O; ii

132 /*This file presently does dot3, SNA, with VT*/ /*FILE NAME SAP1.C*******************************/ #include "glob.h" #include "proto.h" /*STA1() ****************************************** on entry: on exit: Calls the stack() with a protocol and a message to send. Note that the minimum 802.3 message length is 576 bits long. Time is in micro seconds from sim time zero *I extern int dummy () ; extern int dot3 12(); extern int dot3-12c(); extern int sna34(); extern int ip(); extern int tcp () ; extern int vt () ; sta1 () { int mess_len=O, port_num= O; long sap_d_r_time=O; char *label; long ii; label= "sap1#1a"; mess len= 800; /*bits*/ port num= 1; sap d r time= 1; /*micro sec*/ if (OOT3DATAGRAM) stack(label, mess_len, port_num, sap d r time, dot3 12, dummy, duroiiiy-; dummy, dummy, dummy) ;


else if(DOT3CONNECT) stack(label, mess_len, port_num, sap_d_r_time, dot3_12c, dummy, sna3 4, dummy, vt, dummy) ; else if(DUMMY) stack(label, mess len, port num, dummy, dumitiy, dummy, dummy, dummy, dummy); label= "sap1#10b"; mess len= 800; port-num= 1; sap d r time= 10; /*micro sec (10 bits on the cable)*/ if(DOT3DATAGRAM) stack{label, mess_len, port_num, sap_d_r_time, dot3_12, dummy, dummy, dummy, dummy, dummy); else if(DOT3CONNECT) stack(label, mess_len, port_num, sap d r time, dot3 12c, dummy, sna34-; dummy, vt, dummy); else if (DUMMY) stack(label, mess_len, port_num, sap_ d r _time, dummy, dummy, dummy, dummy, dummy, dummy); label= "sap1#90c"; mess len= 800; port= nwn= 1; sap_d_r_time= 90; /*micro sec (10 bits on the cable)*/ if(DOT3DATAGRAM) stack (label, mess _len, port_ num, sap_d_r_time, dot3_12, dummy, dummy, dummy, dummy, dummy); else if(DOT3CONNECT) stack(label, mess_len, port_num, sap d r time, dot3 12c, dummy, sna34-; dummy, vt, dummy) ; 133


else if(DUMMY) stack(label, mess_len, port_num, sap_d_r_time, dummy, dummy, dummy, dummy, dummy, dummy); label= "sap2#100d"; mess len= 800; /*bits*/ port-num= 2 ; sap_d_r_time= 100; /*micro sec*/ if(DOT3DATAGRAM) stack(label, mess len, port nurn, sap_d_r_time, dot3_12, dUmmy, dummy, dummy, dummy, dummy); else if(DOT3CONNECT) stack(label, mess_len, port_nurn, sap_d_r_time, dot3_12c, dummy, sna34, dummy, vt, dummy); else if(DUMMY) stack(label, mess_len, port_num, sap d r time, dummy, dummy, dummy,-dummy, dummy, dummy); label= 11sap3#150e11; mess_len= 800; /*bits*/ port num= 3; sap d r time= 150; /*micro sec*/ if (OOT3DATAGRAM) stack(label, mess len, port_num, sap_d_r_time, dot3 _12, dummy, -dummy, dummy, dummy, dummy); else if (DOT3CONNECT) stack(label, mess len, port num, sap_d_r_time, dot3_12c, dummy, sna34, dummy, vt, dummy); else if(DUMMY) stack(label, mess len, port nurn, sap d r time, dummy, dummy, dumiliy-; dummy, dummy, dummy); 134


label= "sap3#200f"; mess len= 800; sap_d_r_time= 200; if (DOT3DATAGRAM) /*micro sec (10 bits on the cable)*/ stack(label, mess len, port num, sap_d_r_time, dot3_12, dummy, dummy, dummy, dummy, dummy); else if(DOT3CONNECT) stack(label, mess len, port num, sap_d_r_time, dot3_12c, dummy, sna34, dummy, vt, dummy) ; else if(DUMMY) stack(label, mess len, port num, sap_d_r_time, dummy, dummy, dummy, dummy, dummy, dummy); label= "sap3#300g"; mess len= 800; sap d r time= 300; /*micro sec (10 bits on the cable)*/ if(DOT3DATAGRAM) stack(label, mess_len, port_num, sap_d_r_time, dot3_12, dummy, dummy, dummy, dummy, dummy); else if(DOT3CONNECT) stack(label, mess len, port num, sap_d_r_time, dot3_12c, dummy, -sna34, dummy, vt, dummy) ; else if(DUMMY) stack(label, mess_len, port_num, sap_d_r_tirne, dummy, dummy, dummy, dummy, dummy, dummy); label= "sap3#500h"; mess len= 800; sap d r time= 500; /*micro sec (10 bits on the cable)*/ if(DOT3DATAGRAM) stack(label, mess len, port num, sap_d_r_time, dot3_12, dummy, dummy, dummy, dummy, dummy) ; 135


else if(OOT3CONNECT) stack(label, mess_len, port_num, sap d r time, dot3 12c, dummy, sna34, dummy, vt, dummy) ; else if(DUMMY) stack(label, mess len, port num, sap_d_r_time, dummy, dunimy, dummy, dummy, dummy, dummy); label= "sap7#600i"; mess_len= 800; /*bits*/ port_ num= 7; sap d r time= 600; /*micro sec*/ if (r:xYfi3DATAGRAM) stack(label, mess_len, port_num, sap d r time, dot3 12, dummy, duminy-; dummy, dummy, dummy) ; else if(OOT3CONNECT) 136 stack(label, mess len, port num, sap_d_r_time, dot3_12c, dummy, -sna34, dummy, vt, dummy) ; else if(DUMMY) stack(label, mess_len, port_num, sap_d_r_time, dummy, dummy, dummy, dummy, dummy, dummy); label= "sap2#750j"; mess_len= 800; /*bits*/ port_ num= 2 ; sap d r time= 750; /*micro sec*/ if (OOT3DATAGRAM) stack(label, mess_len, port_num, sap_d_r_tirne, dot3_12, dummy, dummy, dummy, dummy, dummy); else if(DOT3CONNECT) stack(label, mess len, port num, sap_d_r_time, dot3_12c, dummy, -sna34, vt, dummy);


138 /*FILE NAME SAP2.C*********************************/ #include "glob.h" #include "proto.h" #include #include #define NUM PDUS 0 #define ARR=CONST 100 /*STA2 () *I on entry: on exit: Calls the stack() with a protocol and a message to send. Creates an exponentially distributed arrival time. After each arrival a new arrival is calculated by taking the natural log of a random number between zero and one, and multiplying that result by a constant. extern int dummy () ; extern int dotJ 12(); extern int dot3-12c(); extern int sna34(); sta2 () { int mess_len=O, port_num=50; long sap_d_r_time=O; char *label; double time= O; double r; float rd; int ii;


} 139 for(ii=O; ii

/* FILE NAME MS3C.C**************************/ #include 11glob.h11 #include #include #define DEBUG 0 #define MAX PROP 23 #define PROPlll 20 #define SlOT 51 int qii= o; int fqii= O; ms802_3c () { /*index into MEDIA Q*/ /*index into FINAL-Q*/ long backoffc(); long d_r_time; /*desired run time*/ int mess len, port num, info_len; int trans time; -int retry; long last_t= OxOfffOOOO; int coll=O; /*collision flag*/ int mess cnt; int ii; -char *x_label; copy_qc(); 140 for(ii=O; 11 < PDU Q END; ii++) { printf ( 11 %d ", MEDIA _Q [ ii] info _len) ; } printf(11\n11); /*main loop for simulating pdu's and media*/ tryagain: /*try the same media_q slot again*/ while(qii < (PDU Q END) ) { mess len= MEDIA Q[qii].mess len; port-num= MEDIA-Q[qii].port-nurn; while(MEDIA_Q[qii].d_r_time-<= P_TIME). {


} 141 if((MEDIA_Q[qii].d_r_time >= last_t) && {MEDIA_Q[qii] .d_r_time < (last_t + PROP111)) ) coll= 1; /*in collision window ie found collision!*/ MEDIA_Q[qii].b_off_num++; /*backoff this pdu*/ MEDIA Q[qii].d r time+= backoffc(MEDIA_Q[qii].b_off_num); /*above line is no limit alternate*/ if(MEDIA_Q[qii].d_r_time > P_TIME) { resort(qii) ; if( !coll) goto tryagain; } if(coll) { printf{"COLLISION AT PDU%d!!!", qii); coll= O; /*reset coll flag*/ if(qii > 0) { qii--; fqii--; /*errase last trans.*/ FINAL Q[fqii].info len= 0; FINAL=Q[fqii+1].info_len= O; } MEDIA Q[qii].coll num++; MEDIA-Q[qii+1].coil num++; /*takes 2 to collide*/ while(MEDIA_Q[qii].d_r_time <= P_TIME){ MEDIA Q[qii].b off num++; if((MEDIA_Q[qii].d:r_time+= backoffc(MEDIA Q[qii].b off num)) = -1) --printf(" >16 COLL PDU ABORTED "); if(MEDIA_Q[qii].d_r_time > P_TIME){


} } } resort(qii); goto tryagain; 142 } /*end if colision both pdu's backed off*/ if((mess_cnt= MEDIA_Q[qii].mess_cnt) > 1) { x_label= MEDIA_Q[qii].x_label[mess_cnt]; mess len= MEDIA Q[qii].x mess[mess cnt]; MEDIA_Q[qii].mess_cnt--;--} info len= O; P TIME= MEDIA Q[qii].d r time+(long)72; last t= fill:fq(info_Ien, mess:ten, port_num, last_t, x_label); MEDIA Q[qii].b off nurn= O; MEDIA:Q[qii].coll_num= o; MEDIA_Q[qii].d_r_time+= 100; MEDIA_Q[qii].od_r_time= MEDIA Q[qii].d r time; resort(qii); --else ( } printf(" MESS%s ", MEDIA Q[qii] .label); trans time= (mess len/10)7 MEDIA-Q[qii].a r time= MEDIA Q[qii].d r_time; P_TIME= MEDIA_Q[qii].d_r_time+ (long)trans_time; /*update P_TIME*/ MEDIA Q[qii].end r time= P TIME; last t= MEDIA Q[qii].d r time; info:len= fill_fq(info_len, mess_len, port_num, last t, MEDIA Q[qii].label); qii++; -} /*end while deqing*/ FINAL_Q_END= fqii;


143 fill fq(info len, mess len, port num, last_t, label) int port_num7 long last_t; char *label; { } FINAL_Q[fqii].label= label;_ FINAL Q[fqii].port num= port_num; FINAL=Q[fqii] .. od_r=time= MEDIA_Q[qii] .od_r_time; FINAL Q[fqii].a r time= last t; FINAL-Q[fqii].d-r-time= last-t; FINAL-Q[fqii].b-off num= MEDIA Q[qii].b off num; FINAL=Q[fqii].coll num= MEDIA_Q[qii].coll_nlim; FINAL Q[fqii].info-len= MEDIA Q[fqii].info len; FINAL-Q[fqii].mess-len= mess Ien; FINAL=Q[fqii++].end_r_time= P_TIME; /*copy qc()********* copies the PDU _Q into the MEDIA _Q *I copy_qc() { long d r time; /*desired run time*/ int mess=len, port_num; int ii= o; while(ii < PDU_Q_END ) { /*copy all of Q*/ MEDIA Q[ii]= PDU Q[ii]; MEDIA-Q[ii] .od r-tirne= MEDIA Q[ii]7d-r time; ---/*save orig*/ #if DEBUG printf(" %s ", PDU_Q[ii].label); #endif ii++; } /*end while copying*/ }

/***backoff***trunc binary exp backoff algorithm***/
long backoffc(b_off_num)
int b_off_num;
{
    double range, k;
    int intk;
    float rnd;
    long time;

    intk= (b_off_num < 10) ? b_off_num : 10;
    k= (double)intk;
    range= pow(2.0, k);                  /*comes back a double*/
    rnd= (((float)rand())/32767.0);
    rnd= (float)range * rnd * 50.0;
    time= (long)rnd;
    return(time);
}

#if 0
/*a different backoff generator than 802.3*/
/*see files back1,2,3,4,eth.dat for the results*/
/*this uses a random exponential algorithm to generate the backoff time*/
long backoffc(b_off_num)
int b_off_num;
{
    long time= 0;
    double r;
    float rd;
    int ii;

    rd= (((float)rand())/32767.0);
    r= (double)rd;                       /*get num between 0 - 1*/
    time= (long)(-3000*log(r));          /*natural log*/
    return(time);
}   /*end of rand backoff generator*/
#endif


145 /*FILE NAME STACK.C *****************************/ /*This is a file that will need to be divided up into other files as the protocols in it grow. The entry points for a level are labeled with a comment as to what level they represent. An entry point for a level must have the parameters port num, mess len, d r time. This module can get hard to follow because unless a simple datagram protocol is used levels can gererate multiple calls to levels below which can in turn generate multiple calls. Thus some of the protocols use a state machine which is in the form of a switch on the present state of a SAP. In order to modify the stack, and run a new protocol 1) The scheduler must pass stack pointers to the new stack routines. 2) Proto.h must #define the appropriate dummy, datagram or virtual circuits. *I '...1 #include "glob.h" #define DEBUG 0 #define MIN_DATA 512 /*debug stack.c*/ extern struct queue PDU Q[PDU Q LEN]; int mess cnt= 1; ---int X mess[15]= {0101010101 010101010, int x-sap d[15]={0,0,010,0, 010,0,0,0, char *x_label[15]={11 ", ", 111 ", II II II II II II II II I I I I II 111 II 111 II 111 II 111 II"}; 0,0,0,0,0}; 010,o,o,o}; II II II II I I /*storage for labels to be passed to Q IT*/ /*In order to keep things as clear as possible, indexing into *I x label will be done based on the number of the pdus preceeding the last pdu associated with the transaction.


/*STACK() calls the protocol levels it is given in order to build the PDU's to be sent. on entry: gets a list of pointers to protocol level functions alters: adds one (one!) pdu to PDU Q changes mess cnt field adds extra message labels on exit: returns nothing ***/ stack(label, mess_len, port_num, sap_d_r_time, lvl2, lvl3, lvl4, lvl5, lvl6, lvl7) char *label; int mess len, port num; long sap-d r time;-int (*lvi2)(f, (*lvl3) (), (*lvl4) (), (*lvl5) (), (*lvl6) (), (*lvl7) () ; { long d r time; int info-len; info mess len; d_r_time= sap_d_r_time; /*Call all the seven OSI layers in order to build the pdu's*/ mess len+= (*lvl7) (port num, mess len, d=r_time); /*call level7 protocol*/ 146 mess len+= (*lvl6) (port num, mess len, d-r time); /* level6 */ --mess-len+= (*lvl5) (port num, mess_len, d=r_time); /* level5 */ mess len+= (*lvl4) (port num, mess len, d-r time, lvl2, lvl3); -/* levei4 */ mess-len+= (*lvl3) (port num, mess len, 'd_r_time, lvl2) ; /* level3 */ mess len+= (*lvl2) (port num, mess_len, -d_r_time);


#if DEBUG #end if } printf ( 11 stacking %s 11, label); q_it(label, mess_len, mess_cnt, x mess, x label, info len, port num,-d r time) ; ----/**LEVEL 2 DATAGRAMS SECTION**/ /*DOT3 L2 () the 802.3 level 2 LLC simulation -for datagram mode*/ dot3 12(port num, mess len, d_r_time) int mess_len7 long d_r_time; { } int frame= 32; switch (port_sm[port_num]) { case o: { /*starting*/ } mess cnt= 1; port:sm[port_num]= O; frame+= mac3(port_num, mess_len); return(frame); break; case 1: { mess cnt= 1; frame+= mac3(port_num, mess len); return(frame); break; } /*end case 1*/ default: { break; } } /*end switch*/ 147


/*LEVEL 2 VIRTUAL CONNECTION SECTION*/ /*OOT3_L2C() CONNECTED: the 802.2 simulation for balanced connected mode*/ dot3 12c(port num, mess len, d r time) int port num,-mess len;---long d r time; -{ """" int frame= 32; /*added framing required*/ switch (port sm[port num]) { case 0: { --/*starting*/ mess cnt+= 3 ; X mess(mess cnt]= Xid(port num) ; x=label[mess_cnt]= "XID"; 148 x mess[mess cnt-1]= sabme(port num); x-label[mess_cnt-1]= "SABME"; -} x mess[mess cnt-2]= rr(port nurn); x=label[mess_cnt-2]= "RR"; -port_sm[port_num]= 1; frame+= mac3(port nurn, mess len); return(frame); -break; case 1: { mess cnt= 1; frame+= mac3(port_nurn, mess_len); return (frame) ; break; } /*end case 1*/ default: { break; } } /*end switch*/ } /*end dot3_12c*/


/*MAC3() ie the 802.3 MAC layer This routine will add the pad, if any, plus 208 to the frame size. Note that the min. frame len. is 720. Ethernet 72 bytes = 576 bits. page references are to IEEE 802.3 spec*/ mac3 (port_num, mess_len) int port_num, mess_len; { /*pg 26, 56*/ 149 int pad; int frame; /*added framing required*/ } pad= (mess_len > MIN DATA) ? o : (MIN_DATA-mess_len); frame= 208 + pad; return(frame); /*MIN some message of minimum length*/ min (port num) int port-num; { -} int frame= 24; frame+= mac3(port num, frame); return(frame); /*XID() exchange ID's*/ xid (port num) int port= num; { } int frame= 24; frame+= mac3(port_num, frame); return(frame);


/*SABME () */ sabme (port num) int port_ nU:m; { int frame= 16; /*pg 51*/ frame+= mac3(port_num, frame); return(frame); } /*RR() exchange ID's*/ rr (port num) int port num; { -int frame= 16; frame+= mac3 (port_num, frame); return(frame}; } /*UI(} exchange ID's*/ ui (port num) int port num; { -int frame= 8; /*pg 53*/ frame+= mac3(port_num, frame}; return(frame); } /*LEVEL 3 IP(} Internet Protocol header*/ ip(port num, mess len, d r time, lvl2) int port num, mess len; --int (*lvl2) (); -long d r time; { --return(l92}; /*pg 374 Ten.*/ } 150


151 /*LEVEL 4 TCP() Transport Control Protocol header*/ tcp(port num, mess len, d r time, lvl2, lvl3) int port-num, mess-len; --int (*lvl2) o, (*lvlJ) (); long d r time; { --} switch (port sm[port num]) { case 0: { --/*starting*/ mess cnt++; x_mess[mess_cnt]= (*lvl3) (port_num, mess len, d r time, lvl2); x mess[mess cnt]+= min(port num); x=label[mess_cnt]= TCP_OPEN "; return(192); /*pg 374 Ten.*/ } /*end case starting*/ case 1: { /*already conn*/ return(192); } } /*end switch*/ sna34(port num, mess len, d_r_time) int port num, mess len; long d r time; -{ --int frame= 32 ; switch (port_sm[port_num]) { case o: { /*starting*/ mess cnt+= 6; X mess[mess cnt]= 720; x=label[mess_cnt]= "RS"; x mess[mess cnt-1]= 720; x label[mess_cnt-1]= "ACK"; x mess[mess cnt-2]= 720; x=label[mess_cnt-2]= "IBind"; x mess[mess cnt-3]= 720; x=label[mess_cnt-3]= "ACK";


} } x_mess[mess_cnt-4]= 720; x_label[mess_cnt-4]= "SStart"; x_mess[mess_cnt-5]= 720; x_label[mess_cnt-5]= "ACK"; frame+= mac3(port num, mess len); return (frame) ; --break; case 1: { mess cnt= 1; frame+= mac3(port num, mess_len); return(frame); -break; } /*end case 1*/ default: { } } break; /*end switch*/ /*end SNA34*/ /*Level 6 VIRTUAL TERMINAL PROTOCOL*/ vt(port_num, mess_len, d_r_time) int port_num, mess_len; long d_r_time; { int frame= 32; switch (port sm(port num]) { case O: { --/*starting*/ } mess cnt++; X mess(mess cnt]= 720; x:label(mess_cnt]= "OPEN VT"; frame+= 16; return (frame) ; break; 152


case 1: { mess cnt= 1; frame+= 16; return (frame) ; break; } /*end case 1*/ default: { break; } } /*end switch*/ } /*end VT*/ /*LEVEL 6 FTP() Internet Protocol header*/ ftp(port_num, mess_len, d_r_ti.me) int port_num, mess_len; long d_r_ti.me; { } int frame; frame= mess len/10; return (frame) ; int dummy(port_num, mess_len, d_r_time) int port_num, mess_len; long d_r_ti.me; { return(O); } 153


154 /*FILE NAME Q IT.C*******************************/ #include "glob.h" #define DEBUG 0 /*RESORT(where we are in the media Q) resorts the media q based on the new d r time for the pdu we are working --on. The index is used so as to not waste time resorting the times that we know are earlier. *I resort(index) int index; { } int ii= index; struct queue temp__pdu; /*a temp pdu structure*/ int jj; temp_pdu= MEDIA_Q[index]; while(((PDU Q END-1) > ii)&& (temp_pdu.d r time> MEDIA Q[ii+1].d r time)) MEDIA Q[Ii)= MEDIA Q[ii+1]; --ii++;--} MEDIA_Q[ii]= temp__pdu; /*Q_IT()****************/ /*Fill the fields of a queue structure. Order the Q by desired run times.*/ q_it(label, mess_len, x_mess, x_label, info len, port num, d r char *labe'l; ---int mess len, mess cnt, x_mess[]; char *x label [ ] ; -int info len, port nurn; long d r-time;


{ } int ii= O; int jj; 155 struct queue temp_pdu; /*a temp pdu structure*/ char c; while(((PDU Q END) > ii)&& (d r time>= PDU Q[ii].d r time)) { } ii++7---/*now d r time is < to ii.d r time*/ for(jj=PDU_Q_END; jj>ii; jj=-) { PDU_Q[jj]= PDU_Q[jj-1]; } PDU_Q[ii].label= label; PDU_Q[ii].d_r_time= d_r_time; PDU Q[ii].a r time= d r time; PDU=Q[ii].info_len= info_len; PDU Q[ii].mess len= mess len; PDU-Q[ii].mess-cnt= mess=cnt; for(jj=O; jj

#if 0 /*PUTEND()****************/ putend(mess len, port num, d r time) int mess_len, port_nurn; --long d r time; { --. if (PDU_Q_END >= PDU_Q_LEN) { 156 printf ( "Q_ IT() says PDU ....:.0 is full ret ( -1) \n") ; return(-1); ) PDU_Q[PDU_Q_END].d_r_time= d_r_time; PDU_Q(PDU_Q_END].mess_len= mess_len; PDU_Q [PDU _Q_END] port _num= port _num; PDU Q(PDU Q END].b off num= O; PDU_Q(PDU=Q=END].coll_num= 0; PDU Q END++; return(O); } #endif /*FILE NAME STATS. C******************ic:*********** does the statistics gathering and c Statistics to be collected: *I sap number and label pdu transmission time propagation delay desired run time originally requested actual run time end of run time information throughput mean pdu delay normalized to pdu_delayjpdu_trans_time percent utilization of the media arrival rate of new pdu's mean prop timejmean trans_time total number of collisions fraction of pdu's that collided total number of pdu's mean service time (ie mean transmission time for a pdu)


#include "glob.h" #include "proto.h" #include #include #include stats() ( FILE *stream; int ii; int tot_pdus; long b_time= 0, t_time= O; float util= 0.0, arr rate= 0.0; int num coll= O; -float coll_per_pdu; float mn ser tm= 0.0; long delay, a_r_time; float i tot= O; /*total info flow*/ 157 float i-thru= O; float bTt tot; float delay t= O; float mean delay; float norm dpert; /*total # of bits thru*/ /*total delay*/ float prop_per_trans; printf("stat.c is writing to stat.dat \n"); if( (stream= fopen("stat.dat", "w+")) = NULL) { exit(l); } fprintf(stream, "STATISTICS FROM SIMULATION OF") ; if(DOT3DATAGRAM) fprintf(stream, DOT THREE DATAGRAMS\n"); else if(DOT3CONNECT) fprintf(stream, "DOT THREE CONN-SER.\n"); else if(DUMMY) fprintf(stream, "NO STACKING"); for(ii=O; ii < FINAL_Q_END; ii++) ( if(! (ii%22))


fprintf(stream, "LABEL od r time r_time port_num end r time mess_len\n"); i tot+= (float)FINAL Q[ii].info len; bit tot+= (float)FINAL_Q[ii].mess_Ien; b time+= FINAL Q[ii].end r time--FINAL-Q[ii].d r_tTme;j*inc busy*/ num cell+= FINAL num; a r-tirne= FINAL Q[ii].a r time; delay_t+= (float)FINAL_Q[Ii].d_r_time-(float)FINAL Q[ii].od r time; #if 0 ---fprintf(stream, "%-12s %-05ld %-05ld %-05d %-05ld ", FINAL_Q[ii].label, FINAL_Q[ii].od_r_time, FINAL Q[ii].d r time, FINAL=Q[ii].port_num, FINAL_Q[ii].end_r_time); 158 fprintf(stream, %05d", FINAL Q[ii].mess len); fprintf (stream," \n") ; #end if } FINAL_Q_END; t t1.me= P TIME;. -i thru=-i totj((float)t time); /**/ util= (float)b timej(float)t time; arr_rate= 1000000.0; mn_ser_tm= (float)b_tl.mej(float)tot_pdus; mean_delay= norm_dpert= delay_t/(float)b_tl.me; coll_per_pdu= (float)num_coll/(float)tot_pdus; num_coll/=2; fprintf(stream,"total simulation time= %ld us.\n", t time); fprintf(stream,"total number of pdu's= %d \n", tot_pdus); fprintf(stream, "busy time= %ld \n11, b_tirne) ;


159 fprintf(stream, "utilization= %ld/%ld= %1.3f arr rate= %1.3f pdu's per sec.\n", b time, t time, util, arr rate); fprintf(stream, "#-of collisions= %d %2.3f\n", num_coll, coll_per_pdu); fprintf(stream, "mean ser time= %5.2fus. i tot= %5.2f bits\n", mn ser tm, i tot); fprintf(stream,"deiay_t (total)= %7.0f us. mean delay= %5.2f us. \n", delay t, mean delay); fprintf(stream," nonn dpert(delay per trans time)= %3. 3f\n", nonn dpert) ; fprintf(stream,"i thZU(total info/total time= %S.2f Mbits per sec.\n", i thru); fclose(stream);