Hugh,
Thanks! I can understand the desire to see more
information. Hopefully some of that will be shown at the study group
meeting. I agree with you about the throttling. In my humble
opinion, throttling is what most of the study group is considering, but the
concepts for doing it are varied. If the study group moves to a task
force focused on developing a throttling method for congestion
management, the task force could then determine the best (and simplest)
method to allow the device ahead of the choke point to control the
congestion.
Thanks,
Brad
Brad,
Definitely, yes. I won't say that I
am categorically against any of the other proposals; it's simply that my
calculations (for both prioritized pause and preemption) indicate that they
are not worth doing. I will look forward to seeing some proof that I am wrong
before I change my position on those.
Throttling has some demonstrably
good behavior in the case where a device cannot service its input queue at
line rate; in that case it should ask the upstream device to throttle. This
reduces the effective link b/w and allows the device ahead of the choke point
to control the congestion (as it should be).
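The queue behavior being argued here can be sketched with a toy simulation (all rates, names, and numbers below are invented for illustration, not from any proposal):

```python
# Hypothetical sketch: a downstream device that cannot drain its input
# queue at line rate asks the upstream device to throttle down to its
# drain rate, which bounds queue growth at the choke point.

def simulate(line_rate, drain_rate, throttled, steps=100):
    """Return the peak input-queue depth (in packets) over the run."""
    queue = 0
    peak = 0
    for _ in range(steps):
        # Upstream sends at the throttled rate if asked, else at line rate.
        arrival = drain_rate if throttled else line_rate
        queue += arrival                 # packets arriving this tick
        queue -= min(queue, drain_rate)  # packets the device can service
        peak = max(peak, queue)
    return peak

# Without throttling the queue grows without bound; with throttling the
# upstream device never sends faster than the choke point can drain.
print(simulate(line_rate=10, drain_rate=6, throttled=False),
      simulate(line_rate=10, drain_rate=6, throttled=True))
```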
Hugh.
Booth, Bradley wrote:
Hugh,
Can we sign you up as a supporter of link
throttling and then let the Task Force propose methods of performing that
throttling? I know a number of us have considered pause as a form of
throttling, but there are other methods, as you've alluded to. Would it
be more palatable to you if the study group's PAR requested congestion
management via link throttling?
Thanks,
Brad
David,
I would suggest that any form
of feedback/pause mechanism is inherently flawed, no matter which group
does it (with the exception of the link throttling that I have already
mentioned).
If you think deeply about the problem, you will see
that the only way for this to work is to extend the pause so that it goes
from the choke point all the way back to the originating source of the
traffic that's causing the overload. This type of problem has been around
since the beginning of networks and has, basically, two approaches to a
solution:
1. Drop packets at the choke point (NOT anywhere else).
Build upper layer protocols that interpret dropped packets as an
indication that congestion exists and reduce the system load accordingly.
This works!
2. Build a network of channels (or virtual circuits, if you
will). As a connection is defined from one end point to another, bandwidth
is reserved across all the links required to make the connection. This
also works, but it is deprecated for (normally connectionless) packet
networks, where traffic flows from each end point to many other end points.
Perhaps there will be a resurgence of channelized solutions for
self-contained networks, provided the provisioning and management
problem is solved - but this will not be Ethernet!
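Approach 1 is essentially what TCP-style congestion control does. A toy AIMD-style sender (inspired by that idea, not taken from any specific implementation; all numbers invented) shows how drops at the choke point alone keep the send rate near capacity:

```python
# Hypothetical sketch of approach 1: the choke point drops packets sent
# above its capacity, and the upper layer treats each observed drop as a
# congestion signal, halving its rate; otherwise it increases additively.

def run(capacity=10, initial_rate=1, steps=50):
    rate = initial_rate
    rates = []
    for _ in range(steps):
        if rate > capacity:
            rate = max(1, rate // 2)  # drop observed: multiplicative decrease
        else:
            rate += 1                 # no drop: additive increase
        rates.append(rate)
    return rates

rates = run()
# The send rate probes upward, overshoots capacity by one step, backs
# off, and then oscillates around the choke point's capacity.
print(max(rates), min(rates[10:]))
```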
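Approach 2 amounts to per-link admission control. A minimal sketch (topology, link names, and capacities all invented for illustration) of reserving bandwidth across every link on a path:

```python
# Hypothetical sketch of approach 2: a connection is admitted only if
# every link along its path can reserve the requested bandwidth; once
# admitted, that bandwidth is subtracted from each link's free capacity.

link_capacity = {"A-B": 10, "B-C": 10, "C-D": 5}

def admit(path, bandwidth, capacity):
    """Reserve `bandwidth` on every link of `path`, or refuse the connection."""
    if all(capacity[link] >= bandwidth for link in path):
        for link in path:
            capacity[link] -= bandwidth
        return True
    return False

print(admit(["A-B", "B-C", "C-D"], 4, link_capacity))  # admitted
print(admit(["A-B", "B-C", "C-D"], 4, link_capacity))  # refused: C-D exhausted
```

The refusal on the second request is the provisioning problem in miniature: the narrowest link on the path gates every connection that crosses it.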
A congestion-based
pause mechanism has the effect of moving the congestion point. This means
that traffic passing through the false congestion point is affected,
whereas it normally wouldn't have been. Depending on the network
architecture and components, this false congestion may cause increased
latency or packet loss for the innocent traffic. This will cause upper
layer protocols to mistakenly react to perceived congestion. It will also
cause network-level traffic management to misinterpret the position of the
choke point, leading to bad changes to routing
configurations.
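The innocent-traffic effect is head-of-line blocking, which a toy model makes concrete (topology, flows, and counts all invented for illustration): flow X overloads one output port of a downstream switch, the shared upstream link is paused, and flow Y - bound for an idle port - stalls too.

```python
# Hypothetical sketch: a link-level pause stops the whole shared link,
# not just the offending flow, so innocent traffic queues behind the
# congested flow even though its own output port is idle. The pause has
# effectively moved the choke point upstream.

def innocent_delivered(pause_shared_link, steps=10):
    """Packets of innocent flow Y (bound for an uncongested port) delivered."""
    y = 0
    for _ in range(steps):
        if pause_shared_link:
            # Pause is per-link, not per-flow: Y's packets sit behind
            # X's in the upstream device while Y's port sits idle.
            continue
        y += 1
    return y

print(innocent_delivered(False), innocent_delivered(True))
```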
Hugh.
David Martin wrote:
Matt,
Thanks for the
reply.
I'm trying to
stay away from the organizational "where to do it" angle of .3 vs .1 and
consider whether there is some backpressure that could be done at a
finer granularity like per-VLAN ID or per-CoS.
WG locale
aside, are you saying you're not a fan of either a per-VLAN ID or
per-CoS backpressure
mechanism?
...