
Re: [RPRWG] Cut through definition?




I agree with Leon that 'pre-emption' will increase the complexity. ATM solves
this by segmenting the message into smaller fixed-size cells and reassembling
them, and that approach adds a lot of complexity.

If we do not have preemption, and assume a 1522-byte frame just starts
transmission before synchronous traffic can be sent, that 1522-byte frame at
OC-3 rate will take about 82.6 micro-seconds. If we assume that the same
situation arises at 32 nodes around the ring, then we have about 2.6 msec of
delay from not doing preemption. So I doubt preemption will give us much
advantage.
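[Editorial aside, not part of the original mail: the arithmetic above can be checked with a short sketch. The usable OC-3 payload rate of 149.76 Mbit/s is an assumption; the exact per-frame time, and hence the small gap from the 82.6 us quoted, depends on how much SONET overhead you account for.]

```python
# Sketch of the delay arithmetic above (editorial addition, not from the mail).
# The OC-3 payload rate is an assumption; exact overhead accounting varies.

FRAME_BYTES = 1522          # maximum-size Ethernet frame (with VLAN tag)
OC3_PAYLOAD_BPS = 149.76e6  # assumed usable OC-3 payload rate
NODES = 32                  # ring size used in the example

frame_time = FRAME_BYTES * 8 / OC3_PAYLOAD_BPS  # seconds to send one frame
worst_case = NODES * frame_time                 # one blocking frame per node

print(f"per-frame delay: {frame_time * 1e6:.1f} us")      # ~81 us
print(f"32-node worst case: {worst_case * 1e3:.2f} ms")   # ~2.6 ms
```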

-Sushil


Leon Bruckman wrote:

> DVJ
> My personal view is that preempting lower priority traffic in the middle of
> a packet adds complexity that is not really needed. At 1G, the transmission
> time for a 1500-byte packet is 12 usec, so the worst case for a 256-node
> ring will be 3.1 msec of added delay because of low priority packets being
> transmitted and not preempted. Furthermore, the probability of the worst
> case is very small. We did some simulations with the following assumptions:
> - There is always a low priority packet being transmitted by the node
> - A high priority packet may arrive at any time during the low priority
> packet transmission (equal probability)
> Some of the results were presented during the January interim (by Gal Mor).
>
> For a 128-node ring operating at 1G the preemption gain will still be in
> the msec range with very high probability, and this can easily be absorbed
> by the jitter buffers at the receiver.
> Leon
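[Editorial aside, not part of the original mail: Leon's worst-case figure follows directly from one maximum low-priority packet in flight at every node of a 256-node ring; a short sketch of that estimate:]

```python
# Sketch of Leon's worst-case estimate (editorial addition, not from the mail).

PACKET_BYTES = 1500  # low priority packet size
LINK_BPS = 1e9       # 1 Gbit/s link
NODES = 256          # ring size in Leon's example

per_hop = PACKET_BYTES * 8 / LINK_BPS  # 12 us per blocking packet
worst_case = NODES * per_hop           # quoted as 3.1 msec

print(f"per-hop blocking: {per_hop * 1e6:.0f} us")        # 12 us
print(f"256-node worst case: {worst_case * 1e3:.2f} ms")  # ~3.07 ms
```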
>
> -----Original Message-----
> From: David V. James [mailto:davidvja@xxxxxxxxxxx]
> Sent: Friday, April 06, 2001 4:27 AM
> To: Carey Kloss; Devendra Tripathi
> Cc: stds-802-17@xxxxxxxx
> Subject: RE: [RPRWG] Cut through definition?
>
> All,
>
> Relative to the discussion of cut through, et al.:
> My perception is that a cut-through node has two
> insertion buffers, for classA (provisioned synchronous)
> and classB (provisioned asynchronous).
>
> The preferred transmit order is as follows:
>   a) classA insertion buffer (always)
>   b) classA transmit traffic (subject to provisioned rate)
>   c) asynchronous traffic.
> The classA insertion buffer only needs to be the size of
> the maximum packet sent by this node, plus (perhaps) some
> extra symbols to deal with hardware decoding latencies.
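[Editorial aside, not part of the original mail: the transmit order (a)-(c) above can be sketched as a simple selection function. All names are illustrative assumptions, not from any RPR draft.]

```python
# Hypothetical sketch of DVJ's preferred transmit order (editorial addition).
# classA insertion traffic always wins; classA transmit traffic is gated by
# its provisioned rate credit; asynchronous traffic takes whatever is left.

def next_source(class_a_insertion_empty: bool,
                class_a_transmit_empty: bool,
                class_a_rate_credit: float,
                async_empty: bool) -> str:
    if not class_a_insertion_empty:
        return "classA-insertion"   # (a) always first
    if not class_a_transmit_empty and class_a_rate_credit > 0:
        return "classA-transmit"    # (b) subject to provisioned rate
    if not async_empty:
        return "asynchronous"       # (c) remaining bandwidth
    return "idle"

print(next_source(True, False, 1.0, False))  # classA-transmit
```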
>
> The classB insertion buffer deals with the accumulation of
> asynchronous packets that occurs when (worst case) full asynchronous
> traffic is coming in/out and rate-limited synchronous traffic is being
> transmitted. The size of the classB buffer is on the order of several
> upstream-link delays times the rateOfSynchronous/rateOfLink ratio.
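[Editorial aside, not part of the original mail: the classB sizing rule above, worked through with assumed example numbers; the link rate, synchronous rate, delay, and the count standing in for "several" are all illustrative assumptions.]

```python
# Rough sketch of the classB buffer sizing rule (editorial addition; all
# numbers below are assumptions, not from the mail).

LINK_BPS = 1e9            # assumed 1 Gbit/s link rate
SYNC_BPS = 0.3e9          # assumed provisioned synchronous rate
UPSTREAM_DELAY_S = 50e-6  # assumed one upstream-link delay
N_DELAYS = 4              # "several" upstream-link delays

# Bits of asynchronous traffic that can accumulate while rate-limited
# synchronous traffic holds the outgoing link:
class_b_bits = N_DELAYS * UPSTREAM_DELAY_S * LINK_BPS * (SYNC_BPS / LINK_BPS)

print(f"classB buffer: ~{class_b_bits / 8 / 1024:.1f} KiB")
```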
>
> The order of the asynchronous traffic (c) depends on the classB
> buffer-fill status, prenegotiated vs. consumed rates, and
> the size of the asynchronous backlog in the client.
>
> The asynchronous transmit buffer is a bit schizophrenic in its
> behavior. It should be in the client (not the MAC) because that
> allows packets to be reordered/inserted/deleted until just
> before transmission time. However, the amount of traffic in the
> asynchronous transmit queue may influence the MAC's queue-selection
> and throttle-signal assertion properties.
>
> I personally favor allowing cut-through synchronous traffic to
> preempt asynchronous traffic, even in the middle of a packet. That
> yields the lowest possible jitter, but at some encoding complexity
> cost.
>
> DVJ