This reflector has become very active, and some very interesting points have been made on cut-through versus store-and-forward. I take the privilege of jumping in late, after learning about all the wisdom out there. I am probably repeating what has already been said, but I would like to focus the discussion on possibilities that help us converge on a standard quickly, and one that is pragmatic rather than optimized for the pathological case.

Here is an idea for us RPRians to chew on:
What if RPR supported both cut-through (CT) and store-and-forward (SF) modes, with the class-of-service field in the header permitting a transit node to make that decision?

The first problem is that there are two modes and perhaps three or more criteria (based on the CoS bits). Say, to stay similar to other technologies, RPR supports 8 classes (3 bits). We could pre-program an RPR MAC to associate either CT or SF with each of these classes. Obviously this would be a ring-wide decision and would not change from node to node.

Incoming packets could be examined for their CoS bits, and the MAC would decide whether this is a CT- or SF-type packet. A CT-type packet makes it through the transit node if the header is error free. An SF-type packet gets handed to the system. The system could then impose further scheduling rules between "add" and "pass SF" traffic, based on the QoS requirements of the services it supports.
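A minimal sketch of what that ring-wide mapping and transit decision might look like (the particular class split, the mode names and the function are illustrative assumptions, not anything from a draft):

```python
from enum import Enum

class ForwardMode(Enum):
    CT = "cut-through"
    SF = "store-and-forward"

# Ring-wide, pre-programmed mapping of the 3-bit CoS field to a forwarding
# mode. The split chosen here (classes 4-7 = CT) is only an example; the
# point is that every node on the ring uses the same table.
COS_TO_MODE = {
    0: ForwardMode.SF, 1: ForwardMode.SF, 2: ForwardMode.SF, 3: ForwardMode.SF,
    4: ForwardMode.CT, 5: ForwardMode.CT, 6: ForwardMode.CT, 7: ForwardMode.CT,
}

def transit_decision(cos_bits: int, header_ok: bool) -> str:
    """What a transit node's MAC would do with an incoming ring packet."""
    mode = COS_TO_MODE[cos_bits & 0x7]
    if mode is ForwardMode.CT:
        # CT traffic passes straight through the transit node as long as
        # the header is error free; it never touches the system-side queues.
        return "forward" if header_ok else "drop"
    # SF traffic is handed up to the system, which schedules it against
    # locally added traffic according to its own QoS rules.
    return "hand-to-system"
```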
For example, suppose an RPR system implements 8 classes of traffic to accommodate all its QoS requirements. It could segment these classes into two groups. One set (of classes) is used for reserved (guaranteed) bandwidth, meaning the call-admission provisioning software has removed this bandwidth from the ring segments to indicate how much more is available for this set on the ring. The other set (of remaining classes) could be categorized as over-subscribed; typically, such an over-subscribed class goes through policing followed by shaping. One can then choose to use some sort of signaling (like SRP or iPT) to dynamically throttle the schedulers for the SF type of "add" traffic at each node, or leave it to the system to use dynamic packet-discard algorithms combined with a scheduler that has bandwidth awareness based on spatial reuse of the bandwidth left over after CT traffic is guaranteed.
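One possible shape of such an add/pass scheduler, purely as an illustration (the queue structure, token accounting and ordering are assumptions, not a proposal for the standard):

```python
def schedule_next(pass_ct_queue, reserved_add_queues, oversub_add_queues,
                  reserved_tokens):
    """Pick the next packet to put on the ring: pass-CT first, then
    reserved (admission-controlled) 'add' classes that still have tokens,
    then over-subscribed classes, which have already been policed and
    shaped and only get the bandwidth left over on this span."""
    if pass_ct_queue:
        return pass_ct_queue.pop(0)      # CT transit traffic is never held up
    for cls, queue in reserved_add_queues.items():
        if queue and reserved_tokens.get(cls, 0) > 0:
            reserved_tokens[cls] -= 1    # spend provisioned (guaranteed) bandwidth
            return queue.pop(0)
    for queue in oversub_add_queues.values():
        if queue:
            return queue.pop(0)          # best-effort use of leftover bandwidth
    return None
```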
This choice really depends on the levels of over-subscription users can tolerate, and I feel that mandating one of the two approaches would be premature. Hence, if we come up with a message format for signaling congestion, we can have the standard define what a node should do when it receives such a congestion notification, but leave it open to the system implementor when to generate such a signal (or even whether to generate one at all). We could learn a lot from Ethernet's PAUSE mechanism philosophy here.
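In the spirit of that PAUSE analogy, the standard could pin down the message and the receiver's reaction while leaving the sender's trigger policy open. A hypothetical sketch (the fields and the throttle hook are invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class CongestionNotification:
    # Hypothetical on-ring control message: which span is congested,
    # which CoS classes it applies to, and a requested throttle level.
    span_id: int
    class_mask: int         # bitmap over the 8 CoS classes
    throttle_permille: int  # 0 = stop adding, 1000 = no restriction

def on_congestion_notification(msg: CongestionNotification, add_schedulers):
    """Receiver behavior the standard could mandate: slow the SF 'add'
    schedulers for the indicated classes. When, or whether, a node ever
    *sends* such a message would stay implementation-defined."""
    for cls in range(8):
        if msg.class_mask & (1 << cls) and cls in add_schedulers:
            # set_rate_fraction() is an assumed scheduler knob, named here
            # only to show where the throttle would be applied.
            add_schedulers[cls].set_rate_fraction(msg.throttle_permille / 1000)
```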
This solution uses some of the "loop-holes" we have created in the objectives. For example, if someone says that the RPR MAC shall not lose packets in transit, we take care of it, because transit traffic is always in cut-through mode; SF traffic can be considered a non-transit (or pseudo-transit) packet.
Now, the question is how much buffer is needed in the transit path. Life has taught us well: those who do not heed history will perish, and the meek who accept conventional wisdom will inherit the earth. If the transit buffer size has to scale with ring speed, we have a MAC definition that will change from 2.5 Gbit/s to 10 Gbit/s and at each and every step along the way. However, if we use the above scheme, we could dictate that pass-CT (i.e., transit) traffic always has the highest priority. With this, the transit path needs to be at most one MTU long.

If a node happens to receive a CT-type packet from the ring while it is "adding", it will have to hold the CT packet until it finishes "adding" to the ring. Obviously, a large MTU on the ring has a significant impact on jitter and delay, and hence must be chosen on a ring-wide basis to accommodate the type of QoS the boxes require. Further, which packet a node wants to add "first" (of all the packets it has), and which packets it chooses to classify as CT, is really a system issue.
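For a rough feel of that worst-case holding time: under strict pass-CT priority, a CT packet arriving while the node is mid-way through adding a maximum-size frame waits at most one MTU transmission time per node. A small back-of-the-envelope calculation (the MTU values and ring speeds are just example points):

```python
def worst_case_hold_us(mtu_bytes: int, line_rate_gbps: float) -> float:
    """Time to clock one MTU onto the ring, i.e. the most a pass-CT packet
    can be held at a single transit node under strict CT priority."""
    return mtu_bytes * 8 / (line_rate_gbps * 1e3)  # result in microseconds

for mtu in (1500, 9000):
    for rate in (1.0, 2.5, 10.0):
        print(f"MTU {mtu:5d} B at {rate:4.1f} Gbit/s: "
              f"{worst_case_hold_us(mtu, rate):6.1f} us held per node")
```

At 1 Gbit/s that is 12 us per node for a 1500-byte MTU and 72 us for a 9000-byte MTU, which is why the MTU choice has to be made ring-wide with the QoS targets in mind.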
I would recommend that we explore this further to further the standard, because I feel that further debate on this will only push further out the point in time when we can have a standard.
Dear Harmen,

I really cannot agree with your "freedom" speech in the context of the MAC definition. Let's not forget we are defining a "shared media" MAC. If my understanding is correct, you do not even want to standardize the per-class behavior of the MAC, right? If you believe there should be no class-behavior differences at the MAC layer at all, I rest my case; otherwise, if you allow different nodes to have different interpretations of the class behavior, then where does the "interoperability" come from?

Best Regards,
William Dai
----- Original Message -----
Sent: Wednesday, April 18, 2001 1:22 AM
Subject: RE: [RPRWG] Cut through definition?
Dear Ray,

1) Nothing prevents nodes with cut-through and nodes with store-and-forward from working together.
2) Different buffer scheduling mechanisms can also be used.
3) Even a different number of buffer classes in the nodes works, as long as a class-mapping scheme is agreed on.

There will certainly be differences in the performance of the nodes, but the standard should leave as much freedom as possible. No special configuration should be mandatory. Equipment manufacturers and network operators should decide.

Preemption is less complex than is claimed in the email responses, and it too should in fact remain a degree of freedom. Nodes with and without preemption can perfectly well coexist in the same ring if the ring-input part of each node can handle preempted packets, so that preempted and preempting packets are both received correctly. If we do not agree that all ring-input parts must understand preemption, then at least the standard should allow nodes designed for preemption to conform to the standard. In that case, a preemptive node will simply not use the feature when the next node is not preemptive. Again, equipment manufacturers and network operators should be able to decide whether packet preemption is necessary. There are a lot of countries, a lot of applications, and not least cost considerations where the features of resilience and QoS are very important but where lower bit rates like 155/622 Mbit/s fit perfectly, and will for a long time to come. Here, voice-like applications combined with large data frames call for preemption.
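That coexistence rule boils down to a per-neighbor capability check; a tiny sketch (the advertisement mechanism and names are assumed, since no draft defines one):

```python
def may_preempt(local_supports_preemption: bool,
                downstream_advertises_preemption: bool) -> bool:
    """Use preemption on the outgoing span only if the next node on the
    ring has advertised (e.g. through topology-discovery messages) that
    its ring-input part can reassemble preempted frames."""
    return local_supports_preemption and downstream_advertises_preemption
```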
With respect to jumbo packets, I would like to ask who really knows what the right MTU for future high-performance applications will be. If the use of RPRs is to be transparent in a variety of network environments, then again equipment manufacturers and network operators should decide on the MTU in their products and their applications, respectively. The standard should again allow for freedom, while of course requiring a minimum MTU.

What I advocate is that the standard should guarantee interworking, but manufacturers should have some design freedom to differentiate in the performance and cost features of their products.

Best regards, Harmen
Original message
RE: [RPRWG] Cut through definition?
I agree with Harry on the complexity issues. It is a resilient PACKET ring. If users want granularity smaller than packets, they can readily buy SONET equipment. And how would we signal that a packet is being pre-empted and still remain PHY-agnostic? It seems that some sort of control character would need to be agreed upon for the pre-emption... but I digress.

There has been a lot of debate on the cut-through issue, but one thing that has not really come out is the ability to implement RPR with existing technology. I have read the Cisco RFC on SRP and have found nothing in that RFC that would prevent me from going out, buying an off-the-shelf network processor, and implementing that protocol. Once IEEE 802.17 dictates that store-and-forward is not acceptable, that ability is removed. If cut-through is required, then we have just added 18 months to the availability date of working 802.17 products, since new silicon will be required. Of course, there may be companies out there that already have silicon in the works... and would like to require cut-through operation as a way to maintain a barrier to entry in this market. (I am not trying to say anyone is doing this with .17, but I have seen that kind of activity in other standards bodies.)

As far as the number of classes of service to support: I initially thought that it would be nice to have many (e.g., 8) classes of service, with their behavior dictated by the .17 standard. After following the reflector for the last few weeks, it seems that would be totally the wrong thing to do. I am now wondering whether anyone would be in support of not standardizing multiple classes of service at all, and instead simply requiring nodes to implement a fairness algorithm that ensures a configurable percentage of the data stream from the ring is propagated through the node. For example, on a 1 G ring with 10 nodes, could we not configure each node to ensure that it gives priority to the transit buffer with a 9:1 ratio?
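A toy sketch of that kind of configurable transit:add weighting (the 9:1 split is just the example from the question above; the structure and names are illustrative assumptions):

```python
from typing import Optional

class RatioScheduler:
    """Serve the transit buffer and the local add queue in a configurable
    ratio, e.g. 9 transit opportunities for every 1 add opportunity."""

    def __init__(self, transit_weight: int = 9, add_weight: int = 1):
        self.pattern = ["transit"] * transit_weight + ["add"] * add_weight
        self.i = 0

    def next_source(self, transit_empty: bool, add_empty: bool) -> Optional[str]:
        # Walk the weighted pattern, skipping sources that have nothing
        # to send so the link is never left idle unnecessarily.
        for _ in range(len(self.pattern)):
            choice = self.pattern[self.i]
            self.i = (self.i + 1) % len(self.pattern)
            if choice == "transit" and not transit_empty:
                return "transit"
            if choice == "add" and not add_empty:
                return "add"
        return None
```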
I realize that there is a lot of work going into fairness algorithms, but maybe those should be vendor differentiators and not standardized. That goes for the pre-emption feature too... if all the nodes could communicate and agree that they understand pre-emption, then it could be used as an option.

I do not work for a silicon vendor; I am looking at these problems from a systems provider's point of view. I would really like to see some more "real life" requirements and input on these subjects from the carriers and service providers that will be buying the equipment. If they are out there, please speak up.

Regards,
Ray
------------------------------------------------------------------
Prof. Dr. Harmen R. van As
Institute of Communication Networks, Vienna University of Technology
Head of Institute
Favoritenstrasse 9/388, A-1040 Vienna, Austria
Tel +43-1-58801-38800    Fax +43-1-58801-38898
http://www.ikn.tuwien.ac.at    email: Harmen.R.van-As@xxxxxxxxxxxx
------------------------------------------------------------------