
RE: [RPRWG] Cut through definition?



I agree with Harry on the complexity issues.  It is a resilient PACKET ring.  If the user wants granularity smaller than packets, they can readily buy SONET equipment.
And how would we signal that a packet is being pre-empted while remaining PHY agnostic?  It seems that some
sort of control character would need to be agreed upon for the pre-emption...but I digress.
 
There has been a lot of debate on the cut-through issue, but one thing that has not really come out is the ability to implement RPR with existing technology.  I have read the Cisco RFC on SRP and have found nothing in that RFC that would prevent me from going out and buying an off-the-shelf Network Processor and implementing that protocol.  Once IEEE 802.17 dictates that store and forward is not acceptable, the ability to do that is removed.  If cut-through is required, then we have just added 18 months to the availability date of working 802.17 products, since new silicon will be required.  Of course, there may be companies out there that already have silicon in the works...and would like to require cut-through operation as a way to maintain a barrier to entry in this market.  (I am not trying to say anyone is doing this with .17, but I have seen that kind of activity in other standards bodies.)
 
As far as the number of classes of service to support:  I initially thought that it would be nice to have many (e.g., 8) classes of service, with their behavior dictated by the .17 standard.  After following the reflector the last few weeks, it seems that would be totally the wrong thing to do.  I am now wondering if anyone would support not standardizing multiple classes of service at all, and instead simply requiring nodes to implement a fairness algorithm that ensures a configurable percentage of the data stream from the ring is propagated through the node.  For example, on a 1 G ring with 10 nodes, could we not configure each node to give priority to the transit buffer at a 9:1 ratio?  I realize that there is a lot of work going into fairness algorithms, but maybe those should be vendor differentiators and not standardized.  That goes for the pre-emption feature too...if all the nodes could communicate and agree that they understand pre-emption, then it could be used, as an option.
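To spell out the per-node arithmetic behind the 9:1 example above, here is a minimal sketch.  The ring rate, node count, and ratio are just the example numbers from this message; nothing here comes from a draft standard:

```python
# Hypothetical fairness arithmetic: a 1 Gb/s ring where each node gives
# its transit buffer priority over locally added traffic at a
# configurable ratio (9:1 in the example above).

RING_RATE_MBPS = 1000   # 1 G ring (example figure)
TRANSIT_RATIO = 9       # transit : local-add ratio of 9:1

# Integer-first arithmetic keeps the shares exact.
transit_mbps = RING_RATE_MBPS * TRANSIT_RATIO / (TRANSIT_RATIO + 1)
local_mbps = RING_RATE_MBPS / (TRANSIT_RATIO + 1)

print(f"transit: {transit_mbps:.0f} Mb/s, local add: {local_mbps:.0f} Mb/s")
# With 10 nodes each adding ~100 Mb/s, the ring link stays fully used
# while every node's transit traffic is protected at 900 Mb/s.
```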
 
I do not work for a silicon vendor, I am looking at these problems from a systems provider point of view.  I would really like to see some more "real life" requirements and input on these subjects from the carriers and service providers that will be buying the equipment.  If they are out there, please speak up.
 
Regards, Ray
 

Ray Zeisz
Technology Advisor
LVL7 Systems
http://www.LVL7.com
(919) 865-2735


-----Original Message-----
From: Harry Peng [mailto:hpeng@xxxxxxxxxxxxxxxxxx]
Sent: Wednesday, April 11, 2001 4:42 PM
To: Sushil Pandhi; Leon Bruckman
Cc: 'davidvja@xxxxxxxxxxx'; stds-802-17@xxxxxxxx
Subject: RE: [RPRWG] Cut through definition?

Complexity, what complexity?

In the tandem path, if a high priority packet can preempt a low priority packet at an
arbitrary boundary, then the preemption logic will have to deal with a tandem packet that
is already pre-empted.
This means the fastest pre-emption response time is one internal word time, and the pre-empted packet
will have to pad to word boundaries to make life easier.
Furthermore, the tandem receiver will have to respond within one clock cycle, as that is the
atomic size. What is the word size for 10G: 64 bits? 128 bits? What about 40G or higher?
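To put rough numbers on the word-size question, a back-of-the-envelope sketch (the line rates and datapath widths are just the ones mentioned above; the one-clock-per-word assumption is mine):

```python
# Time budget per internal word: at a given line rate, a wider datapath
# word means a slower clock, but preemption on word boundaries then
# happens at that coarser word granularity.

def word_clock_ns(line_rate_gbps, word_bits):
    """Time to move one internal word at the given line rate, in ns."""
    return word_bits / line_rate_gbps  # bits / (Gbit/s) = ns

for rate in (10, 40):
    for width in (64, 128):
        print(f"{rate} Gb/s, {width}-bit word: "
              f"{word_clock_ns(rate, width):.1f} ns per word")
```

So responding "within one clock cycle" means reacting in about 6.4 ns at 10G with a 64-bit word, and under 2 ns at 40G, which is the timing pressure Harry is pointing at.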

Unless you are willing to have cells. But then why not just use ATM?

I agree that we should keep it simple and not segment packets.


Regards,

Harry



-----Original Message-----
From: Sushil Pandhi [mailto:Sushil.Pandhi@xxxxxxxxxxxxxxx]
Sent: Wednesday, April 11, 2001 10:33 AM
To: Leon Bruckman
Cc: 'davidvja@xxxxxxxxxxx'; stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] Cut through definition?



I agree with Leon that 'pre-emption' will increase the complexity.  ATM solves
this by segmenting the message into smaller fixed-size cells and
reassembling them, and that approach adds a lot of complexity.

If we do not have preemption, and we assume a 1522 byte frame just starts
transmission before synchronous traffic can be sent, the 1522 byte frame
at OC-3 rate will take about 82.6 micro-seconds.  If we assume the same
situation arises at each of 32 nodes around the ring, then we have about
2.6 msec of delay from not doing preemption.  So I doubt preemption
will give us much advantage.
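Sushil's arithmetic, spelled out as a sketch (using his 82.6 microsecond per-frame figure and 32-node count; the worst case assumes every node happens to block):

```python
# Worst-case head-of-line blocking without preemption, per the example:
# a 1522-byte frame occupying an OC-3 link, repeated at each of 32 nodes.

FRAME_DELAY_US = 82.6   # ~time to send a 1522-byte frame at OC-3 (from the message)
NODES = 32

# Worst case: the high priority traffic waits out one full frame at
# every node it traverses.
total_ms = FRAME_DELAY_US * NODES / 1000
print(f"worst-case added delay: {total_ms:.1f} ms")
```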

-Sushil


Leon Bruckman wrote:

> DVJ
> My personal view is that preempting lower priority traffic in the middle of a packet
> adds complexity that is not really needed. At 1G, the transmission time for
> a 1500 byte packet is 12 usec, so the worst case for a 256 node ring will be 3.1
> msec of added delay because of low priority packets being transmitted and not
> preempted. Furthermore, the probability of the worst case is very small. We
> did some simulations with the following assumptions:
> - There is always a low priority packet being transmitted by the node
> - A high priority packet may arrive at any time during the low priority packet
> transmission (equal probability)
> Some of the results were presented during the January interim (by Gal Mor).
>
> For a 128 node ring operating at 1G the preemption gain will still be in
> the msec range with very high probability, and this can easily be absorbed
> by the jitter buffers at the receiver.
> Leon
>
> -----Original Message-----
> From: David V. James [mailto:davidvja@xxxxxxxxxxx]
> Sent: Friday, April 06, 2001 4:27 AM
> To: Carey Kloss; Devendra Tripathi
> Cc: stds-802-17@xxxxxxxx
> Subject: RE: [RPRWG] Cut through definition?
>
> All,
>
> Relative to the discussion of cut-through, et al.:
> My perception is that a cut-through node has two
> insertion buffers, for classA (provisioned synchronous)
> and classB (provisioned asynchronous).
>
> The preferred transmit order is as follows:
>   a) classA insertion buffer (always)
>   b) classA transmit traffic (subject to provisioned rate)
>   c) asynchronous traffic.
> The classA insertion buffer only needs to be the size of
> the maximum packets sent by this node, plus (perhaps) some
> extra symbols to deal with hardware decoding latencies.
>
> The classB insertion buffer is there to deal with the accumulation of
> asynchronous packets that occurs when (worst case) full asynchronous traffic
> is coming in/out and rate-limited synchronous is being transmitted.
> The size of the classB buffer is on the order of several upstream-link
> delays times the rateOfSynchronous/rateOfLink ratio.
>
> Order of the asynchronous traffic (c) depends on the classB
> buffer-filled status, prenegotiated vs. consumed rates, and
> the size of the asynchronous backlog in the client.
>
> The asynchronous transmit buffer is a bit schizophrenic in its
> behavior. It should be in the client (not the MAC), because that
> allows packets to be reordered/inserted/deleted until just
> before transmission time. However, the amount of traffic in the
> asynchronous transmit queue may influence the MAC queue-selection
> and throttle-signal assertion properties.
>
> I personally favor allowing cut-through synchronous traffic to
> preempt asynchronous, even in the middle of a packet. That
> yields the lowest possible jitter, but at some encoding complexity
> cost.
>
> DVJ
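The transmit ordering DVJ describes above (classA insertion buffer always first, then classA add traffic subject to its provisioned rate, then asynchronous traffic) could be sketched roughly as follows.  The class names come from his message; the selector structure and the crude credit counter standing in for the provisioned-rate check are my own illustration, not anything from a draft:

```python
# Hypothetical MAC transmit selection implementing DVJ's preferred order:
#   a) classA insertion buffer (always drained first)
#   b) classA add traffic, subject to its provisioned rate
#   c) asynchronous traffic (classB transit + local async)
from collections import deque

class TxSelector:
    def __init__(self, class_a_credits):
        self.class_a_insert = deque()   # transit classA packets
        self.class_a_add = deque()      # locally added classA packets
        self.async_queue = deque()      # classB transit + local async
        self.credits = class_a_credits  # crude stand-in for a rate limiter

    def next_packet(self):
        if self.class_a_insert:                 # (a) always first
            return self.class_a_insert.popleft()
        if self.class_a_add and self.credits > 0:
            self.credits -= 1                   # (b) within provisioned rate
            return self.class_a_add.popleft()
        if self.async_queue:
            return self.async_queue.popleft()   # (c) everything else
        return None

sel = TxSelector(class_a_credits=1)
sel.class_a_insert.append("A-transit")
sel.class_a_add.extend(["A-add-1", "A-add-2"])
sel.async_queue.append("async-1")

order = [sel.next_packet() for _ in range(3)]
print(order)            # ['A-transit', 'A-add-1', 'async-1']
sel.credits = 1         # rate limiter replenishes
last = sel.next_packet()
print(last)             # 'A-add-2'
```

Note how the second classA add packet is held back once the rate credit is spent, letting asynchronous traffic through, which is exactly the "subject to provisioned rate" behavior in step (b).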