Re: [8023-CMSG] Proposed Upper Layer Compatibility Objective
Brad,
I think you misunderstood some of what I wrote. In essence, I said that
what is needed is not feasible; what is feasible is not needed.
Imagine a situation with device 1 sending traffic to device 2.
Device 2 has an input port (connected to device 1, of course) and 4
output ports: C, M, Y & K. The input port and ports C, M & Y all have
bandwidth N; port K has bandwidth N/10. Device 2 has a wire-speed
switch fabric.
A burst of traffic from device 1 to device 2 fills the queue for port K.
At that point, device 2 would like to send a PAUSE to device 1. Device 1
will now stop its traffic, causing its output queue to fill & packets to
be dropped. However, the packets dropped are destined for ports C, M &
Y. This causes the higher layer to throttle back traffic destined for
ports C, M & Y even though these ports are not congested.
With no PAUSE, device 2 would have dropped only frames destined for port
K - frames for the other ports would have passed through without
difficulty. The same is true even if there is priority-based information
in the PAUSE.
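To make the effect concrete, here is a toy discrete-time simulation of the scenario. All the numbers - queue depths, rates, PAUSE watermarks - are made up for illustration; device 1 has one output queue, device 2 has four, and port K drains at 1/10 the speed of the others:

```python
import random
from collections import Counter, deque

random.seed(0)

PORTS = ["C", "M", "Y", "K"]
QUEUE_LIMIT = 10                      # frames per queue (illustrative)
DRAIN = {"C": 10, "M": 10, "Y": 10, "K": 1}   # port K runs at 1/10 speed
GEN = LINK = 10                       # frames generated / carried per step
HIGH, LOW = 8, 4                      # PAUSE assert / release watermarks
STEPS = 200

def simulate(use_pause):
    """Return a Counter of dropped frames per destination port."""
    dev1_queue = deque()              # device 1's single output queue
    dev2_queues = {p: 0 for p in PORTS}
    drops = Counter()
    paused = False
    for _ in range(STEPS):
        # Device 1 generates traffic spread evenly over the four ports.
        for _ in range(GEN):
            dev1_queue.append(random.choice(PORTS))
            if len(dev1_queue) > QUEUE_LIMIT:
                drops[dev1_queue.popleft()] += 1   # drop at device 1
        if use_pause:                  # link-level PAUSE with hysteresis
            worst = max(dev2_queues.values())
            if worst >= HIGH:
                paused = True
            elif worst <= LOW:
                paused = False
        if not paused:                 # link carries frames into device 2
            for _ in range(min(LINK, len(dev1_queue))):
                dst = dev1_queue.popleft()
                if dev2_queues[dst] >= QUEUE_LIMIT:
                    drops[dst] += 1    # drop at the choke (port K)
                else:
                    dev2_queues[dst] += 1
        for p in PORTS:                # output ports drain at their rates
            dev2_queues[p] = max(0, dev2_queues[p] - DRAIN[p])
    return drops

print("without PAUSE:", dict(simulate(False)))
print("with PAUSE:   ", dict(simulate(True)))
```

Without PAUSE, the only drops are frames destined for the congested port K. With PAUSE, device 1's single output queue backs up while paused and its head drops hit frames for C, M & Y as well - exactly the collateral damage described above.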
The only use of PAUSE that would (might) work would be if device 2 could
signal to device 1 that only frames destined for port K should be
paused. This would require that device 1 understand how device 2 is
going to classify and direct the traffic, and that device 1 maintain
separate queues on its output port corresponding to the output ports of
device 2. This means that device 1 will wind up with a total number of
queues equal to the sum of all the queues on all of the devices
connected to it. Scaling for more than 2 devices is left as an exercise
for the reader.
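The arithmetic is easy to sketch. The topology and port counts below are invented for illustration:

```python
# Hypothetical fan-out: per-destination-queue PAUSE forces device 1 to
# mirror every output queue of every device attached to it.
attached = {
    "device_2": 4,       # the C/M/Y/K device above
    "device_3": 24,      # a 24-port switch (invented)
    "device_4": 48,      # a 48-port switch (invented)
}
queues_on_device_1 = sum(attached.values())
print(queues_on_device_1)        # 76 queues, on a single output port
```

Seventy-six queues behind one port - before any multiplication for priorities, and before each of those switches tries the same trick with its own neighbours.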
A rule of thumb here is that congestion is best handled at the place
where the choke is occurring. A choke can only occur where streams
converge or where speed changes. Therefore the choke will always be in
some form of bridge or router device. PAUSE will simply move the choke
from the place where it should be to a place where it shouldn't.
Finally, I can see a use for defining a mechanism to set the rate of a
link - e.g. you may want a gigabit physical layer to be limited to
700 Mb/s for traffic management purposes. I would expect that such
mechanisms would be (mostly) static and would define the net rate for
the link as a whole. Deciding how that link bandwidth will be used (in
terms of priority or service etc.) should be the province of the device
sending into that pipe.
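As a sketch of what such a (mostly static) rate-setting mechanism amounts to - names and numbers are mine, not from any draft - a token bucket shaping a gigabit link to a 700 Mb/s net rate:

```python
# Minimal token-bucket sketch: the long-term rate is fixed for the link
# as a whole; how the 700 Mb/s is divided among priorities is left to
# the sending device, as argued above. Parameters are illustrative.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # long-term rate limit (bits/s)
        self.capacity = burst_bits    # bucket depth bounds burstiness
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, frame_bits: int, now: float) -> bool:
        """Refill tokens for elapsed time; send only if enough remain."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True
        return False

bucket = TokenBucket(rate_bps=700e6, burst_bits=1_000_000)
frame = 12_000                        # 1500-byte frame, in bits
# Offer a 1500-byte frame every 10 microseconds for one second:
# 1.2 Gb/s offered, of which roughly 700 Mb/s should get through.
sent = sum(bucket.allow(frame, t / 1e6) for t in range(0, 1_000_000, 10))
print(sent * frame / 1e9, "Gb of the offered 1.2 Gb sent in 1 s")
```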
Hugh.
Booth, Bradley wrote:
>Hugh,
>
>Your second paragraph describes what I think a large number of us are
>talking about. The edge devices (i.e. adapters) and switch fabrics are
>where some of these issues are presenting themselves. The only
>assistance 802.3 provides is PAUSE or nothing. As you mentioned, PAUSE
>creates false congestion and can move the congestion point. Stopping
>traffic for output queue X would reduce the congestion in that queue,
>while avoiding drops of packets destined for non-congested queues.
>The term that some have been using for this is traffic prioritization.
>Maybe the term is close to something used by people in their companies
>or something used in 802.1, I don't know. Would the term
>"priority-based PAUSE" or "queue PAUSE" be better?