Re: Data rate standards vs internal switching standards
Gents,
It seems to me that we have three options before us:
1) Use a single clock rate at the MAC and the PHY. Given the desire to
support WAN, this would probably have to be 9.58464 GHz.
2) Use one clock rate at the MAC/PHY boundary, allowing the PHY to have
another clock rate whilst relying on the MAC to insert variable length
IPGs to maintain a maximum data rate below the MAC/PHY clock rate.
3) Use one clock rate at the MAC/PHY boundary, allowing the PHY to have
another clock rate whilst flow controlling the MAC using something with
fine granularity compared to flow control frames.
Are there other options that I am missing?
I believe that there is value in having a "10 Gig" standard. Whether or
not 10 Gig is used in the WAN space, I personally don't understand the
need to start matching Ethernet data rates to whatever WAN transport
happens to be in favor. I also feel that this is largely a marketing
issue -- perhaps we should solicit some input from that camp. If we
don't truly support "10 Gig", what will we call it and will we need some
"not truly 10Gig" fine print like the 56K modems do?
Given these concerns, I don't like options 1 & 2 above. The former
because it isn't 10Gig. The latter because the MAC will either have to
be wired to provide 9.58464G (and I don't see why we should single that
rate out) or it would have to be programmable (adding complexity).
Either case smells like a point solution to me, not a general solution
we can carry forward.
In comparison, option 3 seems to be an elegant and general solution
which doesn't cause much headache to the MAC designers. Perhaps some
vendors will use a lower-than-9.58464G rate and insert other information
into a WAN link. Perhaps some will want to carry 10G over a 9.3514G
line (I made that rate up). Option 3 seems to allow us to move forward
with a "10 Gig" standard, provide easy support for existing WAN rates,
and for any other application which may require the PHY to lower the MAC
data rate.
One question left to resolve, within option 3, would be whether a
byte-by-byte HOLD signal would be easier, or an "increase-IPG" signal
which would cause the MAC to extend IPG. The former seems like it would
require less buffering in the PHY, and it would more easily support
matching new line rates. The latter seems like it may cause less of a
change to current design assumptions (such as a constant rate of sinking
data within a packet). The assumption that IPG is constant has already
been broken by flow control frames.
My 2 cents: add an "EXT_IPG" signal to the MAC. For every clock that
this signal is asserted, the next IPG is extended by 12 bytes. Adding
ten bits to the IPG counter will allow for 1024*12 = 12288 byte IPGs,
which would allow the data rate to be throttled down to 1.1G. This
should support fiber WAN and LAN, and perhaps high speed copper as
well. I believe that it has been shown that the amount of buffering
required will scale with the difference between 10G and the line rate --
about 64 bytes for 9.58464G. The buffer would need to increase to a
fairly large ~1140 bytes to support down to 2.5G. The desire to support
rates this low may require us to look back at the byte-by-byte flow
control signal.
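As a rough sanity check on the numbers above, here is a short sketch (assumptions are mine, not established in this thread: maximum-size 1518-byte frames, preamble ignored, effective rate modeled as 10G scaled by the frame/(frame+IPG) ratio, and buffering taken as the rate deficit accumulated over one maximum-size frame):

```python
# Sanity-check the EXT_IPG arithmetic above.
# Assumptions (mine): 1518-byte maximum frames, preamble ignored.

MAC_RATE = 10.0e9     # nominal MAC data rate, bits/s
FRAME = 1518          # maximum Ethernet frame size, bytes
MAX_IPG = 1024 * 12   # 10 extra IPG counter bits -> 12288-byte IPG

def throttled_rate(ipg_bytes):
    """Effective data rate when each frame is followed by ipg_bytes of IPG."""
    return MAC_RATE * FRAME / (FRAME + ipg_bytes)

def buffer_bytes(line_rate):
    """PHY buffering: the 10G-to-line-rate deficit accumulated
    over one maximum-size frame."""
    return FRAME * (MAC_RATE - line_rate) / MAC_RATE

print(f"min rate with 12288-byte IPG: {throttled_rate(MAX_IPG) / 1e9:.2f} Gb/s")
print(f"buffer for 9.58464G line:     {buffer_bytes(9.58464e9):.1f} bytes")
print(f"buffer for 2.5G line:         {buffer_bytes(2.5e9):.1f} bytes")
```

Under these assumptions the model reproduces the figures quoted: roughly 1.1 Gb/s at the maximum IPG, about 63 bytes of buffering for a 9.58464G line, and about 1139 bytes for 2.5G.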
-Simon L. Sabato
-Level One Communications
> "Booth, Brad" wrote:
>
> Paul,
>
> Currently, we have an objective to select between 10.0 Gb/s and
> 9.58464 Gb/s for the MAC/PLS interface. If you're proposing that the
> 9.58464 Gb/s should be 9.953280 Gb/s for the MAC/PLS interface, then I
> believe that the HSSG will have to re-evaluate the previous objective.
>
> Brad
>
> -----Original Message-----
> From: pbottorf@xxxxxxxxxxxxxxxxxx
> [SMTP:pbottorf@xxxxxxxxxxxxxxxxxx]
> Sent: Wednesday, August 04, 1999 5:28 PM
> To: Grow, Bob; 'Roy Bynum'; Ariel Hendel; stds-802-3-hssg
> Subject: RE: Data rate standards vs internal switching
> standards
>
> Bob:
>
> Though not all switches use shared memory, all packet data
> switches do use
> managed buffer memories to control congestion. These buffers may
> be input,
> output, or shared but are always present and are required for TCP
> and other
> data protocols.
>
> As to clocking for packet data switches, the backplane, memory,
> or fabric
> used in the core of a switch is always asynchronous to receive
> data. A
> major function of the MAC chips (not the architectural MAC) is to
> make a
> clock domain conversion between the switch core clock and the
> line clock.
> MAC chips also use external transmit oscillators which are
> asynchronous to
> the switch core.
>
> Since MAC chips perform the clock domain conversion between the
> line clocks
> and the switch core clock there is no advantage in using an even
> 10.000
> clock rate for the line clock. The clock will go through a clock
> domain
> conversion anyway. On the other hand there is a great
> disadvantage in
> having a PHY clock which is asynchronous with the MAC receive and
> transmit
> clocks. Such a design forces an asynchronous clock boundary into
> the PHY
> which otherwise would not be necessary.
>
> The clock rate of the MAC should be synchronized with the PHY so
> the only
> clock domain conversion which needs to be done is the one already
> required
> in the MAC. The MAC transmit and receive clock rates therefore
> should
> depend on the encoding system. For the scrambler system which
> I've proposed
> the ideal MAC clock rate would be 9.953280 Gb/s with byte-by-byte HOLD
> limiting the delivered data rate at the 10GMII to 9.584640 Gb/s.
>
> I am opposed to having the rate set as a management parameter or
> negotiated
> since these only add complexity to systems with no benefit to
> customers.
>
> Paul
>
> At 10:33 AM 8/4/99 -0700, Grow, Bob wrote:
> >
> >Roy:
> >
> >In your second paragraph below, you make some invalid
> assumptions about
> >switches, and I find your arguments without substance. Many
> switches are
> >not shared memory. Shared memory has basically reached its
> limits for high
> >performance fabrics. (Memory speeds are not tracking the
> increase in
> >throughput requirements and you don't benchmark well if you go
> wider than a
> >minimum ethernet frame). Alternate architectures do not share
> the data
> >buffering for the "overall switch". A small FIFO or even a PHY
> frame buffer
> >is insignificant in monitoring the "overall 'health'" of the
> switch (just
> >look at the per port buffering in gigabit switch
> specifications). One of
> >the things you try to avoid in a switch fabric design is
> allowing one port
> >to consume too much of the buffer because if you don't it will
> eventually
> >affect uncongested ports.
> >
> >I also think you are wrong in assuming that because a factor of
> 10 is less
> >optimal than 2^n, it doesn't matter if the data rate is not an
> integer
> >multiple. I was using the same techniques you advocate for
> managing the
> >different clock domains more than 20 years ago, and have also
> designed
> >switches with both integer multiple clock and non-integer
> multiple clock
> >domains. It is because of this experience, including designs with
> >clock domains similar to what you describe, that I believe a
> >10.000 Gb/s MAC is the better system solution, and why I
> >appreciate the simplification that an integer multiple gives in
> >building a switch.
> >
> >--Bob Grow
> >
> >
> >-----Original Message-----
> >From: Roy Bynum [mailto:RBYNUM/0004245935@xxxxxxxxxxx]
> >Sent: Wednesday, August 04, 1999 7:19 AM
> >To: Ariel Hendel; stds-802-3-hssg
> >Subject: Re: Data rate standards vs internal switching standards
>
> >
> >
> >
> >Ariel,
> >
> >Actually, I am not in favor of a programmable IPG. I think that
> the IPG
> >should be set to minimum for all frames in full duplex 10GbE.
> With 400
> >bytes as the current average size of Internet 802.3 frames, I
> don't
> >think that there will be enough "slop" to make up the difference
> >between a 10.0 Gb/s MAC and a 9.584 Gb/s PHY. In the future, with
> more
> >and more video based applications, the average size of the data
> frame
> >will be increasing. This will only cause the MAC buffer discard
> rate
> >to increase if the MAC and PHY are not data rate matched. I
> would much
> >rather see the data rate be defined at the MAC, not the PHY.
> >
> >I would much rather see the data buffered internal to the switch
> >matrix than at the MAC. This will allow the overall switch to
> act as a
> >buffer instead of perhaps only one output link that is
> operationally
> >overloaded. It will also help network management to be able to
> monitor
> >the overall "health" of a switch or network architecture without
> >detrimental effect to the users. Since there are no standards,
> >requirements, or optimal internal data interchange clock rates
> other
> >than modulo 2, it should not matter that the higher data
> rate
> >interfaces are operating at a slightly different output clock.
> The
> >technology to move data at inconsistent transfer rates through a
> >system and between interfaces was invented almost 20 years ago.
> This
> >is not something new.
> >
> >I am more concerned with how this technology will be implemented
> by
> >the customers. I am concerned with how relatively "lower tech"
> >implementation and support people will be able to use 10GbE. I am
> concerned
> >about designers and implementers that do not, can not, or will
> not
> >understand how users and their applications will make use of the
> >extended LAN/MAN/WAN data networking environment. People will no
> >longer be building extended data networks using routed meshed
> and
> >semi-meshed virtual circuits, but will be using switched virtual
> >segments over meshed and semi-meshed 10GbE links. This is a very
> different
> >implementation environment from the simple switched LAN that
> 100BT
> >exists in. GbE has started to function in this realm. 10GbE will
> >definitely be used like this in a major way.
> >
> >Thank you,
> >Roy Bynum
> >MCI WorldCom
> >
> >
> >
> >
> >
> >
> >Date: Tue, 03 Aug 1999 15:28:37 -0700 (PDT)
> >From: Ariel Hendel <Ariel.Hendel@xxxxxxxxxxx>
> >To: stds-802-3-hssg@xxxxxxxx; Roy Bynum
> >Subject: Re: Data rate standards vs internal switching standards
> >
> >Roy,
> >
> >
> >Just a quick reappearance to address your follow up questions.
> >
> >
> >>
> >> I would like an additional clarification. In recognizing that
> the data
> >> clocking standards are at the exposed interface of the data
> link, does
> >> this mean that the standard applies to the MII between the MAC
> layer
> >> and the PHY or does it apply more to the PHY?
> >>
> >
> >You have examples of both. The MII/GMII case is obvious.
> >
> >Other exposed interfaces were defined within the PHY to reflect
> real
> >life partitioning into:
> >
> >- Clock recovery being the realm of exotic circuits designed by
> long
> >haired gurus that seldom show up for work before 11 A.M.
> >
> >- The coding layer can go into CMOS-based MAC ASICs, and can be
> >designed at any time of the day by many mortals skilled in the
> art of
> >digital design.
> >
> >...
> >
> >> As for the OC rate standard, there are several standards for
> mapping
> >> data into SONET/SDH transports. From an 802.3 view point, the
> >> SONET/SDH standards can be treated as layer 1 functional
> processes.
> >> From the other side of that argument, the current packet over
> SONET
> >> (POS) standard for mapping required an additional standard for
> >> inserting a layer 2 functionality between the layer 3 IP
> protocol and
> >> the layer 1 SONET protocol. 802.3 does not have that
> requirement for
> >> an additional functionality. In many ways it is more of a
> question of
> >> how much of the SONET/SDH standards would not be used for 10GbE,
> >> depending on the implementation of the interfaces.
> >
> >My emphasis was that the optimization of the data rate to 9.xyz
> is
> >specific for a particular mapping and cannot be adopted in
> isolation
> >unless such mapping is also embraced.
> >
> >Finally, I hope that your lack of objection to a 10Gbps rate
> with a
> >statically programmable IPG is somehow a sign of agreement.
> >
> >
> >>
> >> Thank you,
> >> Roy Bynum
> >> MCI WorldCom
> >>
> >>
> >
> >Regards,
> >
> >
> >Ariel Hendel
> >Sun Microsystems
> >
> >
> Paul A. Bottorff, Director Switching Architecture
> Bay Architecture Laboratory
> Nortel Networks, Inc.
> 4401 Great America Parkway
> Santa Clara, CA 95052-8185
> Tel: 408 495 3365 Fax: 408 495 1299 ESN: 265 3365
> email: pbottorf@xxxxxxxxxxxxxxxxxx