So, the following is wrong? (Many more similar articles can be found searching Google.)

If EFM had adopted preamble-based OAM, would this have required new PCS codes and a significant number of "why doesn't it work" support calls? If so, why was it proposed?

Again, this isn't that hard.

jonathan
Jonathan,

Inline:

Jonathan Thatcher wrote:
Hugh,

Re: cut-through market penetration: Bring this back up in 2-3 years and we will compare notes.

If the market for cut-through is zero now, why should we expect that the market for a new (and more complex) preemption standard will be significant in 2.5 years? How about this: build & sell cut-through switches now; in 3 years, if cut-through is a significant market then we should develop a standard for preemption.

Re: preemption backward compatibility: Of course 802.3 would create mechanisms to ensure that a switch implementing pre-emption would plug and play with one that didn't. You simply make the default mode equal to the existing mode. I know that you know this. 802.3 did it for Link Ag. It did it for OAM. It did it for.... Come on Hugh, this isn't that hard.

Both OAM & Link Ag. use frame-based protocols. Preemption will require new (previously reserved) PCS codes. I'm sure it will work in the vast majority of cases. If you sell tens of millions of ports per year, that equates to a very significant number of "why doesn't it work" support calls*. If it could be proved that preemption is worthwhile, then we should make a definition that minimizes the complexity and interoperability problems. Personally, I think that the net gain is too small (even in the most extreme cases) to justify the pain.
Hugh.
* note - it's the equipment at the other end that's wrong :-)
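To make the "default mode equal to the existing mode" point concrete, here is a minimal sketch of the kind of capability exchange that would be needed. All names and the shape of the exchange are illustrative assumptions on my part; no actual 802.3 mechanism is implied.

# Hypothetical capability negotiation: preemption defaults to OFF and is
# enabled only after both link partners advertise support. Class and
# field names are illustrative, not from any standard.

class PortConfig:
    def __init__(self, supports_preemption):
        self.supports_preemption = supports_preemption
        self.preemption_enabled = False  # default mode == existing mode

    def on_peer_advertisement(self, peer_supports_preemption):
        # Enable the new mode only when both ends have advertised it;
        # otherwise keep behaving exactly like a legacy port.
        self.preemption_enabled = (self.supports_preemption
                                   and peer_supports_preemption)

legacy = PortConfig(supports_preemption=False)
new = PortConfig(supports_preemption=True)
new.on_peer_advertisement(legacy.supports_preemption)
assert new.preemption_enabled is False  # plugs and plays with legacy gear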
jonathan
Jonathan,

Inline:

Jonathan Thatcher wrote:
Hugh,

Well, I certainly can't get on board with the idea of 40 or 100 Gb/s being cheap or simple. At least not in the next couple of years.

I never thought of myself as small-frame-phobic. I always thought of myself as a lover of improved cost-performance.

You are correct that geometry matters if you want low latency.
Regarding your comment about this being a niche of a niche: to some of us, being part of a $1B-a-year and rapidly growing niche within a $50B-a-year niche is worthy of consideration. It doesn't especially bother me that this might be embarrassingly small and not worthy of consideration for the largest vendors.

You (and others) would be wise not to make assumptions about what large vendors consider worthy of attention (or not). The reason why I classify it as a niche of a niche is that I expect most of this large and interesting market will be satisfied by products based on standard technologies (adapted from LAN and WAN applications). I also expect that there will be a significant niche demanding higher performance (in the range discussed below) that will require more exotic architecture. To satisfy this niche, end-station vendors will need hardware acceleration; switch vendors may use cut-through and, as a result, hardware will be significantly more expensive. Then there is a niche-of-a-niche that will need faster layer-2 operation than Ethernet can provide. I expect that such a market could use existing supercomputer-defined interfaces or may be small enough to tolerate custom or proprietary solutions. I do not see that the niche-of-a-niche warrants the making of a new standard for the whole of Ethernet.
If there is any demand for preemption, then I would expect cut-through switching to hold a significant segment of the current market, as it is a tried-and-true technology that is fully compliant with current standards. What is the current penetration of cut-through switches in new switch sales?
Of course a switch implementing pre-emption would interoperate with a switch that didn't. Really Hugh, that kind of FUD is beneath you.

Of course it won't! You would have to define some mechanism for backward compatibility that involves discovery and negotiation before pre-emption is used. If, for any reason, a switch were to use preemption on an interface connected to a switch which doesn't understand preemption, then the receiving switch would see jumbled frames. At best this would lead to packet loss; at worst it would cause a very high false packet acceptance rate. I would expect that such a mix of PCS capabilities introduced into the market would generate a far worse number of user issues than simply adding (or changing) a protocol frame.
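To put a rough number on "very high false packet acceptance rate", here is a back-of-envelope sketch. It assumes every mangled fragment looks like a random frame to the legacy receiver and escapes the 32-bit FCS with probability 2^-32; the real rate depends entirely on the fragment format, so treat these figures as assumptions, not measurements.

# Back-of-envelope estimate of undetected-error events when preemption
# fragments are misparsed by a legacy receiver. Assumes random corruption
# and the nominal 2**-32 FCS escape probability.

LINK_RATE_BPS   = 10e9        # 10 Gb/s link
FRAME_BITS      = 500 * 8     # 500-byte frames (per the MTU discussion)
CRC_ESCAPE_PROB = 2.0 ** -32  # chance a corrupted frame passes the FCS

frames_per_sec = LINK_RATE_BPS / FRAME_BITS          # ~2.5 million/s
false_accepts_per_sec = frames_per_sec * CRC_ESCAPE_PROB

print(f"{false_accepts_per_sec:.2e} false accepts/s per saturated port")
print(f"about one every {1 / false_accepts_per_sec / 60:.0f} minutes")

Roughly one falsely accepted frame every half hour per saturated port may sound small, but multiplied across tens of millions of ports it is exactly the class of silent corruption that generates support calls.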
Hugh.
jonathan
Jonathan,
I don't know why you're so scared of smaller frames - anyone who wants smaller latency should prefer smaller frames. If you reduce the MTU to 500 bytes (not to 48) the equation swings in favor of existing standards:

6 x 500 x 8 / 10k + 0.5 = 2.9 µs vs 0.8 µs - still a 3x improvement using preemption, but getting closer. Bear in mind that this is an extreme worst-case comparison. Averages will be almost identical because the preempting packet can arrive at any time during the preempted frame; the preempted frame might not be of maximum size; the link may be idle when the preempting packet arrives; plus of course the packet in progress may be a high-priority packet also.
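For reference, the arithmetic behind the 2.9 vs 0.8 figures, as I read it. The 0.5 µs fixed term and the 64-byte worst-case preemption wait are carried over from the earlier example in this thread, so treat them as assumptions.

# Worst-case one-way latency across 6 store-and-forward hops vs. the same
# path with preemption, in microseconds.

HOPS           = 6
LINK_RATE      = 10_000   # bits per microsecond (10 Gb/s)
MTU_BYTES      = 500      # reduced local MTU under discussion
MIN_WAIT_BYTES = 64       # assumed worst-case residue with preemption
FIXED_US       = 0.5      # propagation + pipeline overhead (assumed)

store_forward = HOPS * MTU_BYTES * 8 / LINK_RATE + FIXED_US       # 2.9 µs
preemption    = HOPS * MIN_WAIT_BYTES * 8 / LINK_RATE + FIXED_US  # ~0.8 µs

print(f"store-and-forward: {store_forward:.1f} us")
print(f"with preemption:   {preemption:.1f} us")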
Of course, if we start adding in more delays the difference gets yet smaller (both delays increase similarly). For example, your example allows only 15m per link; you will start to run into geometrical problems if you want to aggregate very large numbers of nodes with only 15m per link. If there are fewer nodes then you need to re-architect your interconnect matrix, because 6 hops should be able to accommodate many thousands of end stations.
Your example must be assuming a very aggressive cut-through switch architecture (cut-through has lost popularity in recent years, shame). If you want to conform to the requirements of bridging then you should wait for both the source & dest MAC addresses to be received before you transmit (unless you are a repeater!). Since you are advocating preemption, I would also assume that you must wait for the COS/TOS tag. That extra 10 bytes will be difficult to avoid. Of course, if you decide that the error propagation of cut-through makes the technique unfavorable then you have a full 64 bytes of latency while you wait for the CRC of the incoming frame.
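The byte counts above translate into per-hop serialization delay as follows. This is a sketch assuming 10 Gb/s links and the header sizes mentioned; exact preamble and tag accounting may differ.

# Per-hop delay before a 10 Gb/s switch can start forwarding, for points
# on the cut-through spectrum discussed above: DA+SA for bridging, plus
# ~10 bytes for the priority tag, or a full 64 bytes to check the FCS.

LINK_RATE = 10_000  # bits per microsecond (10 Gb/s)

def wait_us(num_bytes):
    """Time to receive num_bytes at the link rate, in microseconds."""
    return num_bytes * 8 / LINK_RATE

print(f"DA + SA only (12 B):      {wait_us(12):.4f} us")
print(f"+ priority tag (~22 B):   {wait_us(22):.4f} us")
print(f"store 64 B to check FCS:  {wait_us(64):.4f} us")
print(f"full 500 B store-and-fwd: {wait_us(500):.4f} us")

Even the most conservative of these (waiting 64 bytes for the CRC) is only ~51 ns per hop, an order of magnitude below the 500-byte serialization time, which is why the argument above barely moves the totals.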
Regarding jumbo frames and the complexity of end-station devices, I would expect that any device capable of filling a 10 Gb/s pipe will require some hardware acceleration. For hardware implementations there really is no significant difference between encapsulating 1500-byte frames vs 500-byte (or even smaller) frames. Hardware which performs this high-speed operation has the advantage that it is seamlessly compatible with any other equipment that might be connected to it. On the other hand, if a switch started using a preemption mechanism when connected to any existing hardware then it could be anybody's guess what would result. My assertion is that a small reduction in MTU for the local network will yield results which are close enough to your extreme examples to make the applicable space where a new standard is demanded very small indeed. As I said, it's a niche of a niche.
Better to spend our effort on cheap and simple 40Gig (or even 100Gig) and make this whole argument moot (yes, at 100G the max-length frame can be stored in 25m of wire).
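Checking that last parenthetical: with a 1518-byte max frame and a signal velocity of roughly 2x10^8 m/s (about 2/3 of c, typical for copper or fiber; both figures are my assumptions), the 25m claim holds up.

# Sanity check for "at 100G the max-length frame can be stored in 25m of
# wire".

FRAME_BITS = 1518 * 8  # max-length Ethernet frame
RATE_BPS   = 100e9     # 100 Gb/s
VELOCITY   = 2.0e8     # metres per second in the medium (assumed)

time_on_wire = FRAME_BITS / RATE_BPS  # ~121 ns
length = time_on_wire * VELOCITY      # ~24 m

print(f"frame occupies {length:.1f} m of wire at 100 Gb/s")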
Hugh.