My understanding of the cut-through definition in Sanjay's example is:
1. A pass-through packet is allowed to be transmitted before it is completely received.
[Sanjay Agarwal]
Not necessarily. You get the same result whether you forward the packet after you have completely received it or you start transmitting before reception is complete. In the former case you have one packet of delay; in the latter you don't. A 1500-byte packet at 10 Gb/s gives you 1.2 microseconds.
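As a quick sanity check of that number, here is a back-of-the-envelope calculation (the script and its names are illustrative, not part of any proposal):

# Serialization (store-and-forward) delay of one maximum-size packet at 10 Gb/s.
packet_bytes = 1500
link_bps = 10e9                       # 10 Gb/s line rate
delay_s = packet_bytes * 8 / link_bps
print(delay_s * 1e6, "microseconds")  # -> 1.2 microseconds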
2. There is only one transit buffer (regardless of class).
[Sanjay Agarwal]
Yes, that is what the proposed cut-through schemes have. If you have multiple classes of service and you allow priority, then you have to arbitrate between add and pass classes of traffic, and at that moment it becomes store and forward.
3. The scheduling algorithm always gives pass-through traffic (regardless of class) preference over add-on traffic.
[Sanjay Agarwal]
Yes, that is what the proposed cut-through schemes have. If you don't give pass traffic higher priority, then you don't have a cut-through scheme (see the sketch below).
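To make that arbitration rule concrete, here is a minimal sketch of a node with a single transit buffer where pass traffic, of any class, always beats local add traffic; the function and names are illustrative only:

# Minimal sketch of the cut-through arbitration rule: one transit path,
# and pass-through traffic always wins over locally added traffic.
def select_next_packet(transit_buffer, add_queue):
    """Return the next packet to put on the outgoing link."""
    if transit_buffer:        # any pass-through traffic, regardless of class
        return transit_buffer.pop(0)
    if add_queue:             # local add traffic goes only when nothing is passing
        return add_queue.pop(0)
    return None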
which somewhat contradicts his first statement. Thus the interesting results.
No, it doesn't.
The debate should be based on a solid definition of cut-through transmission; otherwise there will be no convergence at the end of the discussion.
I agree.
I fully agree with Sanjay's first statement, but want to add that each class should have its own transit buffer (personally I prefer having three classes supported as RPR MAC services). Whether each transit buffer should reside in the MAC layer or the system layer is open to further discussion. In this context, the Circuit Emulation (or, as some may prefer to call it, Synchronous) class will benefit from cut-through transit.
[Sanjay Agarwal]
I don't agree in the case of the present cut-through proposals, unless you want to define cut-through differently.
Ideally it could further benefit from preemptive transmission (yet another term that needs a solid definition).
William Dai
----- Original Message -----
Sent: Thursday, March 22, 2001 11:15 AM
Subject: RE: [RPRWG] MAC Question
Hi Ajay,
Latency and jitter requirements depend on the class of traffic. For some types (classes) of service they are critical; for others they are not.
Counterintuitive as it is, store and forward actually gives lower end-to-end latency than cut-through.
In the cut-through approach, high-priority add traffic waits while low-priority upstream pass traffic passes through. It takes two RTTs to shut off the low-priority traffic through BCN. Thus high-priority traffic waits 2*RTT because of the low-priority stream. In this case, low-priority pass streams impose 2*RTT of latency or jitter on the added high-priority stream. For a 200 km ring that is 2 ms; for a 2000 km ring it is 20 ms.
Total end-to-end latency = add latency + N * pass latency.
In cut-through: end-to-end latency = 2*RTT + N * (packet delay at link speed).
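As a rough illustration of that formula (the ring lengths, fiber delay, hop count, and packet size below are my assumptions, chosen to match the 2 ms / 20 ms figures above):

# Cut-through worst case: end-to-end latency = 2*RTT + N * per-hop packet delay.
FIBER_DELAY_S_PER_KM = 5e-6           # ~5 us/km propagation in fiber
PACKET_DELAY_S = 1500 * 8 / 10e9      # one 1500-byte packet at 10 Gb/s

def cut_through_latency(ring_km, hops):
    rtt = ring_km * FIBER_DELAY_S_PER_KM   # one ring traversal taken as the RTT
    return 2 * rtt + hops * PACKET_DELAY_S

print(cut_through_latency(200, 16) * 1e3, "ms")    # ~2 ms, dominated by 2*RTT
print(cut_through_latency(2000, 16) * 1e3, "ms")   # ~20 ms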
In the store-and-forward approach, if pass traffic is low priority it waits in the buffer while pass high-priority and local high-priority traffic get to go, in that order. Thus, the maximum jitter or latency imposed on high-priority traffic is at worst imposed by another high-priority stream. Since high-priority traffic streams are committed services, they never oversubscribe the link; only low-priority streams do.
In store and forward: end-to-end latency = pass high-priority burst + N * (packet delay at link speed).
The pass high-priority burst at 10 Gb/s depends on the high-priority provisioning levels, but is typically on the order of microseconds.
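A matching sketch for the store-and-forward case (the burst size and hop count are again illustrative assumptions):

# Store-and-forward: end-to-end latency = pass high-priority burst
#                    + N * per-hop packet delay; no RTT term at all.
PACKET_DELAY_S = 1500 * 8 / 10e9      # one 1500-byte packet at 10 Gb/s

def store_and_forward_latency(hi_burst_s, hops):
    return hi_burst_s + hops * PACKET_DELAY_S

# A few microseconds of provisioned high-priority pass burst, 16 hops:
print(store_and_forward_latency(5e-6, 16) * 1e6, "us")   # ~24 us, independent of ring size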
Store and forward gives clear class-based separation. It imposes no latency penalties on committed high-priority streams (typically voice and video) due to overcommitted low-priority streams (typically data). There is no dependence here on the RTT, which can range from 0.1 ms at 20 km to 10 ms at 2000 km.
-Sanjay K. Agrawal
Luminous Networks
> -----Original Message-----
> From: owner-stds-802-17@xxxxxxxx [mailto:owner-stds-802-17@xxxxxxxx] On
> Behalf Of Ajay Sahai
> Sent: Thursday, March 22, 2001 6:34 AM
> To: Ray Zeisz
> Cc: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] MAC Question
>
> Ray:
>
> I guess the answer is that the group is still debating this issue. Some
> vendors prefer to have a largish transit buffer where transit frames
> are stored. Others are proposing "cut through" transit functionality.
>
> I personally feel that latency will be larger in the first approach.
>
> On another note I do not believe that the similarity with 802.5 is
> on the lines of claiming a token etc. etc. The MAC mechanism
> is going to be different.
>
> Hope this helps.
>
> Ajay Sahai
>
> Ray Zeisz wrote:
>
> > I am following the .17 group from afar, but I have a question:
> >
> > Is it acceptable for each node in the ring to buffer up an entire packet
> > before forwarding it to its neighbor? Would the latency be too great if this
> > were done? Or is the .17 direction more along the lines of 802.5 where only
> > a few bits in each ring node are buffered...just enough to detect a token
> > and set a bit to claim it.
> >
> > Ray
> >
> > Ray Zeisz
> > Technology Advisor
> > LVL7 Systems
> > http://www.LVL7.com
> > (919) 865-2735
>