INSERTION-BUFFER RINGS OPERATE PER DEFINITION AS CUT-THROUGH
Unfortunately, all emails on the cut-through issue discuss scheduling. These discussions have nothing to do with the difference between cut-through and store-and-forward! Buffer-insertion rings are always cut-through. Cisco also uses cut-through, even though they wrongly call it store-and-forward; this is the origin of the confusion. If they really applied store-and-forward, the performance would be disastrous. Thus, in the May meeting the vote on this issue can only be for cut-through! How to schedule between buffers is an orthogonal issue.

Buffer-insertion rings were first used in the late 70s. The only purpose of an insertion buffer in the transmission path is to hold up the data stream already on the ring while the node itself pulses out its own packet from its transmit buffer. Otherwise, both transmissions would overlap and both packets would be destroyed, as in CSMA/CD. Assuming an empty insertion buffer at the beginning of the node's transmission, an upstream packet on the ring is buffered only for the duration of the collision. The filling of the insertion buffer is therefore not necessarily a complete packet. Assuming the node has no further packets to transmit, the possibly only partly buffered packet from the ring is immediately pulsed out on the transmission link. The additional insertion-buffer delay, given by the amount of data that had to be held up, is then experienced by all passing packets until the insertion buffer can be emptied during an absence of data on that part of the ring. This is cut-through!
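As a toy illustration of this behavior, here is a minimal byte-per-tick sketch (function and parameter names are my own illustrative assumptions, not part of any standard): the insertion buffer fills only during the overlap with the node's own transmission, and drains again as soon as the ring falls silent.

```python
# Sketch of one node's insertion buffer, simulated byte per tick.
# While the node sends its own frame, upstream bytes are held in the
# insertion buffer; afterwards the buffer forwards at line rate, so
# downstream sees extra delay equal to the current backlog -- never a
# full store-and-forward of each passing packet.

def simulate(upstream, own_tx_ticks):
    """upstream: per-tick upstream arrivals (0 or 1 byte).
    own_tx_ticks: set of ticks during which the node sends its own frame.
    Returns the per-tick insertion-buffer backlog (bytes)."""
    backlog = 0
    trace = []
    for t, arriving in enumerate(upstream):
        backlog += arriving          # upstream byte enters the buffer
        if t in own_tx_ticks:
            pass                     # link busy with the node's own frame
        elif backlog > 0:
            backlog -= 1             # pulse one buffered byte out (cut-through)
        trace.append(backlog)
    return trace

# 20 ticks of continuous upstream traffic, then a 10-tick idle gap;
# the node sends a 5-tick frame of its own at the start.
trace = simulate([1] * 20 + [0] * 10, own_tx_ticks=set(range(5)))
print(max(trace))    # backlog peaks at the 5-byte overlap, not a full packet
print(trace[-1])     # 0: the buffer drained during the idle gap
```

Note that the peak backlog equals the collision overlap (5 bytes here), exactly the partial-packet buffering described above.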
In the case of store-and-forward, the complete packet on the ring must first be buffered in the insertion buffer, and only then is the decision made to relay it to the next node. This would mean that even when none of the k nodes between source and destination has a packet to transmit, there is an inherent and unnecessary delay of k times the packet transmission time. This would degrade the RPR to a ring of store-and-forward Ethernet switches.
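The size of that penalty is easy to put in numbers; the following back-of-the-envelope sketch uses parameters I have chosen purely for illustration (propagation delay is ignored in both cases):

```python
# Relay delay through k intermediate nodes on an otherwise idle ring:
# cut-through adds nothing (empty insertion buffers), store-and-forward
# adds one full packet serialization time per hop.

packet_bits = 12_000      # a 1500-byte packet
line_rate_bps = 1e9       # 1 Gbit/s ring (illustrative)
k = 16                    # intermediate nodes between source and destination

tx_time = packet_bits / line_rate_bps      # 12 us to serialize one packet

cut_through_delay = 0.0                    # no hold-up on an idle ring
store_and_forward_delay = k * tx_time      # one full packet time per hop

print(f"{store_and_forward_delay * 1e6:.0f} us")   # 192 us of pure penalty
```

Even on an idle ring, store-and-forward would impose 192 microseconds of delay in this example where cut-through imposes none.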
Scheduling between insertion buffers and transmit buffers only comes into play when the insertion buffer (or one of the insertion buffers, in the case of priorities) has buffered at least one data unit of a ring packet. The scheduling sequence could, for instance, be: first the insertion buffer of priority 1, then the transmit buffer of priority 1, then the insertion buffer of priority 2, then the transmit buffer of priority 2. Note, however, that when the insertion buffer is selected, it may contain only part of an upstream packet, the rest of which is still arriving while the packet is pulsed out. This is clearly cut-through. Insertion buffers are used to avoid loss during the time when packet transmissions on the ring and from the node itself interfere, not to buffer complete packets before forwarding as the basic operation.
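The strict-priority service order described above could be sketched as follows (names and data structures are my own assumptions; no standard prescribes this shape):

```python
# Strict-priority selection between insertion and transmit buffers:
# insertion buffer of priority 1, then transmit buffer of priority 1,
# then insertion buffer of priority 2, then transmit buffer of
# priority 2. The insertion buffer may hold only the head of an
# upstream packet whose tail is still arriving -- serving it
# immediately is what makes the operation cut-through.

from collections import deque

def next_source(insertion, transmit):
    """insertion/transmit: lists of deques, index 0 = highest priority.
    Returns ('insertion'|'transmit', priority) or None if all are empty."""
    for prio, (ins, tx) in enumerate(zip(insertion, transmit)):
        if ins:
            return ("insertion", prio)
        if tx:
            return ("transmit", prio)
    return None

# Example: only a priority-2 upstream fragment and a priority-1 own packet.
insertion = [deque(), deque(["partial upstream packet"])]
transmit = [deque(["own packet"]), deque()]
print(next_source(insertion, transmit))   # ('transmit', 0)
```

The transmit buffer of priority 1 is served before the insertion buffer of priority 2, matching the sequence in the text; within a priority, ring traffic in the insertion buffer goes first.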
Another point to note is that for lossless operation on the ring, a node may only transmit a packet of its own when enough buffer space is left in the insertion buffer to hold up an upstream packet of the same size.

With respect to node structure, the transmission path in a node consists of a part for removing packets destined to that node, a part for classifying packets into priority streams, a part containing the insertion buffers, and a part for scheduling the next packet from the insertion and transmit buffers.

I hope this resolves the confusion.

Best regards,
Harmen R.
van As

------------------------------------------------------------------
Prof. Dr. Harmen R. van As        Institute of Communication Networks
Head of Institute                 Vienna University of Technology
Tel +43-1-58801-38800             Favoritenstrasse 9/388
Fax +43-1-58801-38898             A-1040 Vienna, Austria
http://www.ikn.tuwien.ac.at       email: Harmen.R.van-As@xxxxxxxxxxxx
------------------------------------------------------------------