----- Original Message -----
Sent: Tuesday, April 17, 2001 5:03 AM
Subject: [RPRWG] Definition of terms - First set

This is the first set of term definitions. Please comment or contribute additional term definitions to promote a common understanding in our future discussions. Some of them may be voted on at the Orlando meeting, May 14-18, 2001. The final list will be included in the standard.
   
  Best regards, Harmen
   
   
DELAY DEFINITIONS FOR MAC STUDIES
   
Propagation delay: Time required for a packet to travel over the medium (for fiber this is approximately 5 µs/km).
   
Ring latency: Time required for a packet to propagate once around the ring.
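A rough worked example (a minimal sketch; the 100 km circumference is an assumed value, not from the text above):

  # Ring latency from propagation alone, assuming a 100 km fiber ring
  PROP_DELAY_PER_KM = 5e-6             # seconds per km on fiber (about 5 µs/km)
  ring_circumference_km = 100          # assumed example value
  ring_latency = ring_circumference_km * PROP_DELAY_PER_KM
  print(ring_latency)                  # 0.0005 s = 500 µs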
   
  
Queueing delay: Time between the arrival of the end of a packet at the MAC transmit buffer and the instant that this packet becomes the head-of-the-line packet in the transmit buffer. This delay is caused only by the node's own traffic.
   
 
Medium access delay: Time required for a head-of-the-line packet in the MAC transmit buffer to gain access to the medium. This delay is caused only by competition for the medium and the fairness mechanism, not by the node's own traffic. It does not include the packet transmission time.
   
Packet transmission time: Time required to clock a packet onto the medium. This time is calculated as t-packet [s] = packet-length [bit] / bit-rate [bit/s].
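A minimal sketch of this calculation (packet size and bit rate are assumed example values):

  # Packet transmission time = packet length [bit] / bit rate [bit/s]
  packet_length_bits = 1500 * 8        # assumed 1500-byte packet
  bit_rate = 1e9                       # assumed 1 Gbit/s medium
  t_packet = packet_length_bits / bit_rate
  print(t_packet)                      # 1.2e-05 s = 12 µs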
   
Transit node delay: Time required for a packet to pass through an intermediate node of the ring between source and destination. It consists of a constant packet handling time and a variable insertion or transit buffer delay.
   
  Insertion buffer delay: 
  Time required for a packet to pass 
  through the insertion buffer operating in cut-through mode.
   
  Transit buffer delay: 
  Time required for a packet to pass 
  through the transit buffer operating in store-and-forward 
mode.
   
Receive buffer delay: Time between the arrival of the beginning of a packet at the MAC receive buffer and the instant that this packet is completely delivered to the next protocol layer.
   
  
  
  Ring end-to-end delay: 
  Time required for a packet to travel 
  from a source to a destination node on the same ring. It consists of 
   the packet transmission delay, all transit node delays, and the 
  propagation delay from source to destination.
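A minimal sketch of how these components add up (all values are assumed for illustration):

  # Ring end-to-end delay = transmission delay + all transit node delays
  #                         + propagation delay from source to destination
  t_transmission = 12e-6                     # assumed packet transmission time
  transit_node_delays = [2e-6, 2e-6, 3e-6]   # assumed delays of three intermediate nodes
  t_propagation = 40 * 5e-6                  # assumed 40 km path on fiber (5 µs/km)
  ring_end_to_end = t_transmission + sum(transit_node_delays) + t_propagation
  print(ring_end_to_end)                     # about 0.000219 s = 219 µs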
 
   
  
MAC end-to-end delay: Time between the arrival of the end of a packet at the MAC transmit buffer of the source node and the instant that this packet is completely delivered to the next protocol layer of the destination node on the same ring.
  
   
   
   
DELAY DEFINITIONS FOR INTERACTIVE REAL-TIME COMMUNICATIONS
  
   
Compression delay: Time required to compress the original information from an interactive real-time (synchronous) source such as live video.
   
Packetization delay: Time required to fill a packet with information from an interactive real-time (synchronous) source. For a 64 kbit/s voice source this is one byte per 125 µs.
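A minimal sketch with an assumed payload size (the 64 kbit/s rate is from the definition; the 160-byte payload is an assumption):

  # Packetization delay = payload size [byte] * time to produce one byte
  source_rate = 64e3                    # 64 kbit/s voice source
  time_per_byte = 8 / source_rate       # 125 µs per byte
  payload_bytes = 160                   # assumed payload (about 20 ms of voice)
  packetization_delay = payload_bytes * time_per_byte
  print(packetization_delay)            # 0.02 s = 20 ms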
   
  Protocol stack delay: 
  Time required to handle packets in the 
  protocol layers above the MAC.
   
 
Decompression delay: Time required to recover the original information format from the received packet(s) before relaying it to the audio and/or video equipment.
   
  
Delay jitter: Delay variation of the packet transfer caused by the queueing and access delays in the source node, all transit node delays, and the receive buffer delay in the destination node.
 
   
Playout buffer delay: Enforced delay at the receive side for interactive real-time communication to achieve a constant end-to-end delay. The appropriate delay value is calculated from the delay jitter; the calculation depends on the application.
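One simple (assumed) rule, shown only to illustrate how the value could follow from the jitter:

  # Choose a playout buffer depth that absorbs the observed delay variation
  # (assumed rule: buffer for the full max-min range of packet delays).
  observed_delays = [4.1e-3, 4.6e-3, 4.3e-3, 5.0e-3]   # assumed per-packet delays [s]
  playout_delay = max(observed_delays) - min(observed_delays)
  print(playout_delay)                                  # about 0.0009 s of buffering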
   
  User end-to-end 
  delay:
  Total time delay between two users or 
  applications. It is the sum of all time components above the 
  MAC,  those time components outside the considered ring, and the MAC 
  end-to-end delay between source and destination on the considered 
  ring.
   
   
   
  DEFINITIONS ON MAC BUFFERS AND THEIR 
  OPERATION MODES
   
Transmit buffer: MAC buffer that contains the packets waiting to be transmitted over the medium.

Receive buffer: MAC buffer that receives the packets addressed to the node.
   
  Insertion buffer: MAC 
  buffer operating in cut-through mode and being part of the 
  transmission path of the ring.
   
  Transit buffer: MAC buffer operating in store-and-forward mode and being 
  part of the transmission path of the ring.
   
Cut-through mode: Operation mode of the MAC buffer in the transmission path of the ring, whose purpose is to hold up an upstream packet while the node is transmitting a packet from its transmit buffer. Thus, the insertion buffer does not necessarily hold a complete packet. Assuming that the insertion buffer has priority over the transmit buffer, the possibly partly buffered packet is immediately clocked out again onto the medium. The additional insertion-buffer delay, given by the amount of data that had to be held up, is then experienced by all passing packets until the insertion buffer can be emptied during an absence of data on that part of the ring.
  
   
Store-and-forward mode: Operation mode of the MAC buffer in the transmission path of the ring, whose purpose is to buffer each transit packet completely before relaying it to the next node.
  
 
   
MAC buffer scheduling: Scheduling strategy within the MAC to decide whether to transmit a packet from the node's transmit buffer or a packet from the insertion/transit buffer. In the case of ring QoS classes, there are a number of priority buffers, both in the transmit and receive parts as well as in the insertion/transit buffer part.
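A minimal sketch of such a decision, assuming (as in the cut-through description above) that the insertion/transit buffer has priority over the node's own transmit buffer; the buffer names and the priority rule are illustrative assumptions only:

  def select_next_packet(insertion_buffer, transmit_buffer):
      # Serve transit traffic first so packets already on the ring are not blocked
      # (assumed priority order), then the node's own head-of-the-line packet.
      if insertion_buffer:
          return insertion_buffer.pop(0)
      if transmit_buffer:
          return transmit_buffer.pop(0)
      return None    # nothing to send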
   
Packet preemption: Operation to preempt a lower priority packet being clocked out from the transmit or insertion/transit buffer in order to expedite a higher priority packet. Preempting the lower priority packet is not destructive, so that the preempted and preempting packets are both received at the next ring node.
   
Ring QoS classes: Number of service classes that are supported on the ring by the MAC. Each ring class has its own receive buffer, its own insertion/transit buffer, and its own transmit buffer within the MAC.
   
   
   
  FAIRNESS PROTOCOL 
  DEFINITIONS
   
  
  Simultaneous access: Nodes 
  geographically distributed around the ring are able to access the ring 
  simultaneously. The fundamental mechanisms are destination removal (stripping) 
  and the use of a buffer in each node on the ring transmission path 
  operated as cut-through or store-and-forward.
   
Destination removal: Method whereby destination nodes remove the received packet from the ring.
   
Destination stripping: Synonym for destination removal.
   
  Spatial reuse: Simultaneous use 
  of different geographical parts of the ring. This is possible because of 
  destination removal (stripping).
   
Fairness protocol: Medium access control protocol to ensure that all competing nodes have fair access to the medium. Each ring is controlled independently.
   
Global fairness: Fairness based on a mechanism that allows nodes to share the same amount of the transmission capacity of the ring, independently of whether their traffic flows interfere or not.
   
Local fairness: Fairness based on a mechanism that coordinates the ring access of only those nodes that interact during their packet transfers. Therefore, nodes that do not interfere are not throttled in their performance, as is the case with global fairness mechanisms.
   
  Bottleneck-link fairness: 
  Fairness based on a mechanism that coordinates the ring access 
  of only those nodes that use the same links for their packet 
  transfers.
   
Flow fairness: Fairness based on a mechanism that coordinates the ring access of individual traffic flows instead of nodes.
 
   
Fairness cycle: Constant or dynamic control period of the fairness mechanism.
   
  Rate control: Access control 
  method in which sources or flows periodically obtain transmission credits 
  (e.g. in number of bytes).
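A minimal sketch of such a credit scheme (the cycle credit, packet sizes, and the no-carry-over rule are assumptions for illustration):

  # Each fairness cycle a source obtains a fresh transmission credit in bytes.
  CREDIT_PER_CYCLE = 12000                  # assumed credit per fairness cycle

  credit = CREDIT_PER_CYCLE
  for packet_bytes in (1500, 9000, 4000):   # assumed packet sizes waiting to be sent
      if credit >= packet_bytes:
          credit -= packet_bytes            # transmit now and consume credit
      # else: hold the packet until the next fairness cycle replenishes the credit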
   
  Backpressure control: Control 
  method to stop or throttle the data flow from the upstream node. On 
  a dual ring the control packet is sent on the counter-rotating 
  ring.
   
  
Round-trip delay: Time between the sending of a control packet towards its destination and the instant that the control becomes effective.
 
   
   
   
  GENERAL DEFINITIONS
   
   
Medium Access Control (MAC): Function of a node, per ring, for the purpose of coordinating medium access among the distributed nodes that compete for transmission on that ring. For a dual ring, each node has two MACs.
   
  Unicast: Packets 
  are delivered to a single destination node.
   
  Multicast: Packets 
  are delivered to a number of destination nodes.
   
  Broadcast: Packets 
  are delivered to all destination nodes.
   
  Quality-of-Service (QoS): 
  Service quality that has to be guaranteed in terms 
  of throughput, end-to-end delay, delay jitter, packet loss, and service 
  availability.
   
Connection-oriented: Form of packet communication with a prior connection set-up that establishes a virtual (logical) connection between source and destination.
   
  Connectionless: Form of packet 
  communication without connection set-up.
   
Packet: Unit of transmission on the medium (I assume that for RPR the term frame would fit better).
   
  Dual ring: Ring network 
  consisting of two counter-rotating rings comprising a number of nodes 
  interconnected by point-to-point transmission links. Nodes normally 
  select the clockwise or the counter-clockwise ring according to the shortest 
  path, i.e. the minimum number of transmission hops to their 
  destination.
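A minimal sketch of this shortest-path ring selection (node numbering and ring naming are assumptions for illustration):

  def choose_ring(src, dst, n_nodes):
      # Hop counts on a dual ring when going clockwise vs. counter-clockwise
      cw_hops = (dst - src) % n_nodes
      ccw_hops = (src - dst) % n_nodes
      return "clockwise" if cw_hops <= ccw_hops else "counter-clockwise"

  print(choose_ring(1, 4, 8))    # clockwise (3 hops vs. 5)
  print(choose_ring(1, 7, 8))    # counter-clockwise (2 hops vs. 6)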
   
  Multiple rings: Ring network 
  consisting of more than two rings.
   
Source node: Node to which the origin of the communication is attached.

Destination node: Node to which the destination of the communication is attached.
   
  Upstream node: Node 
  located before the considered node on the ring in data flow 
  direction.
   
  Downstream 
  node:  Node 
  located after the considered node on the ring in data flow 
  direction.
   
  Port: Ingress/Egress attachment 
  of a node.
   
   
------------------------------------------------------------------
Prof. Dr. Harmen R. van As        Institute of Communication Networks
Head of Institute                 Vienna University of Technology
Tel +43-1-58801-38800             Favoritenstrasse 9/388
Fax +43-1-58801-38898             A-1040 Vienna, Austria
http://www.ikn.tuwien.ac.at       email: Harmen.R.van-As@xxxxxxxxxxxx
------------------------------------------------------------------