RE: Leveraging OC-192c
Further Comments:
It seems the main objective of the speed issue was largely settled by the
vote in Montreal (over 60% for 10.00 Gbps, and over 30% for 9.58464 Gbps).
It implies the members (and the industry) want to leave both the successful
Datacom and Telecom industries alone, with neither interfering with the
other. It was the right decision by the majority of us, technically and
business-wise. Any attempt to impose one side's rate on the other will
never reach the required 75% vote, and is counterproductive.
Therefore, our real issue is how to convert 10.00 Gbps to 9.58464 Gbps in
the most cost-effective way, not to keep debating which speed we want,
10.00 or 9.58464 Gbps. To further clarify the objective, we have to select
the best rate-conversion method, and the optimum rate-conversion location,
to convert 10.00 to 9.58464 Gbps and vice versa.
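For reference, 9.58464 Gbps is the payload capacity of an OC-192c signal.
A short Python sketch of the arithmetic, using the standard STS-192c frame
structure (the byte counts follow the SONET frame layout):

    # OC-192c payload-rate arithmetic (illustrative sketch).
    FRAMES_PER_SEC = 8000                # one SONET frame every 125 us
    FRAME_BYTES    = 192 * 9 * 90        # STS-192 frame: 155,520 bytes
    TOH_BYTES      = 192 * 9 * 3         # section + line overhead: 5,184
    POH_BYTES      = 9                   # one path-overhead column (9 rows)
    FIXED_STUFF    = (192 // 3 - 1) * 9  # 63 fixed-stuff columns: 567 bytes

    payload = FRAME_BYTES - TOH_BYTES - POH_BYTES - FIXED_STUFF  # 149,760
    print(payload * 8 * FRAMES_PER_SEC)      # 9,584,640,000 = 9.58464 Gbps
    print(FRAME_BYTES * 8 * FRAMES_PER_SEC)  # 9,953,280,000 = OC-192 line rate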
1. Location of Rate Conversion
We have the choice of locating the rate-converter at ONE place only, namely
the SWITCH, or at a HUNDRED places, namely at each TERMINAL in a cluster of
100 terminals, as an example.
The choice is obvious --- ONE place only, at a SWITCH. From an overall
system-cost point of view, the cost differential is 1:100. From an
implementation point of view, we preserve all the advantages Ethernet has -
simple, cheap, functional. We also preserve backward compatibility,
multiple rates (10/100/1000/10000), orderly migration to 10xGbE, existing
software, and existing hardware.
In addition, a switch can justify a larger FIFO for the rate conversion.
This logical choice is reflected on the HSSG reflector. Some of our friends
advocating 10.00 Gbps have indicated that the switch is the right place for
the rate conversion; meanwhile, none of our friends advocating 9.58464 Gbps
cares where the rate-converter is located - there is no objection to
locating it at a switch.
2. Rate-Conversion Method
There are two approaches: flow control and framing (encapsulation), or a
mix of both. This is the issue on which we should concentrate our energy,
to establish the most cost-effective rate-conversion method for 10xGbE
products.
From the discussions on the HSSG reflector, it seems the flow-control
approach is still being debated without a consensus, so I will set it aside
for the moment. Also from those discussions, we can try to summarize and
conceptualize the framing approach, to see what we are getting into.
At a Terminal:
All we need is the MAC/PLS for the LINK layer, and a SERDES for the PHY
layer. Also, a PMD device common to all.
At a Switch:
What we need is the same MAC/PLS for the LINK layer, and a Framer for the
PHY layer. Also, a PMD device common to all.
Finally, our task is reduced to one word: "Framer".
For a Framer:
Transmitter = Buffer + Add frame overhead + Parallel-to-serial + Tx clock
+ Encoding.
Receiver = Buffer + Remove frame overhead + Serial-to-parallel + Clock
recovery + Decoding.
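As a rough illustration only (the function names and the 4-byte overhead
below are placeholders of mine, not from any draft), the transmit path can
be modeled in Python as a pipeline over the payload buffer:

    # Illustrative-only model of the framer transmit path listed above.
    def add_frame_overhead(payload: bytes) -> bytes:
        """Wrap the buffered payload with placeholder frame overhead."""
        OVERHEAD = b"\x00" * 4            # hypothetical 4-byte overhead
        return OVERHEAD + payload

    def parallel_to_serial(frame: bytes):
        """Serialize the byte-wide (parallel) stream into bits, MSB first."""
        for byte in frame:
            for i in range(7, -1, -1):
                yield (byte >> i) & 1

    def encode(bits):
        """Stand-in for line encoding/scrambling; identity here."""
        yield from bits

    def transmit(buffer: bytes):
        # Buffer -> add overhead -> parallel-to-serial -> (Tx clock) -> encode
        return list(encode(parallel_to_serial(add_frame_overhead(buffer))))

    print(len(transmit(b"payload")))      # (4 + 7) bytes * 8 = 88 bits

The receiver is the mirror image: decode, recover the clock, regroup
serial to parallel, and strip the overhead back off.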
All of those blocks are familiar to well-experienced chip designers, except
for the high-speed headache. It implies the framer can be designed
cost-effectively by those very creative people. Nevertheless, the buffer
size remains a cost-sensitive design issue. However, as has been suggested,
if the framer is needed only at a switch, the buffer size is not an issue
at all; a switch usually needs a large buffer anyway. Furthermore, with the
help of flow control, the buffer size can be optimized. However, do we
really want to trade a slightly smaller buffer for flow control? Probably
not. Flow control means slowing down the traffic, which ultimately affects
system throughput to a certain extent.
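To put a rough number on that trade-off (my arithmetic, assuming 1518-byte
maximum-length frames sent back to back), a FIFO written at 10.00 Gbps and
drained at 9.58464 Gbps grows at the rate difference:

    # FIFO growth when filling at 10.00 Gbps and draining at 9.58464 Gbps.
    write_bps  = 10.000e9
    read_bps   = 9.58464e9
    excess_bps = write_bps - read_bps       # ~415.36 Mb/s of excess fill

    frame_bits = 1518 * 8                   # one maximum-length frame
    frame_time = frame_bits / write_bps     # ~1.21 us on the MAC side
    print(excess_bps * frame_time / 8)      # ~63 bytes of FIFO per frame

So a burst of N back-to-back maximum-length frames accumulates roughly
63*N bytes, which is why a long burst must be absorbed by either a generous
switch FIFO or flow control.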
We still have many other issues to be resolved. We have to move on.
Edward S. Chang
Unisys Corporation
Edward.Chang@xxxxxxxxxx
-----Original Message-----
From: Grow, Bob [mailto:bob.grow@xxxxxxxxx]
Sent: Tuesday, July 27, 1999 2:01 PM
To: 'Hon Wah Chin'; 'stds-802-3-hssg@xxxxxxxx'
Subject: RE: Leveraging OC-192c
I disagree with your assumption that flow control is an implementation
issue because we are not dealing with an exposed interface. The direct link
of the MAC/PLS data rate to the PHY data rate is pervasive in the standard.
If we choose to allow these two rates to differ, we must have a clear
specification of how that works within the standard. If the rate
compensation is not on an exposed interface, an implementer does have the
flexibility you write of, but 802.3 cannot push the problem off as
implementer's choice.
In the standard, the PHY doesn't clock information from the upper protocol
layers. The MAC transmits a serial data stream through the MAC/PLS service
interface primitives (it can use PHY signals to determine when to start
transmitting), which (in later generations) is converted to a parallel
nibble or byte stream by the reconciliation sublayer, which in turn is
signalled to a data-rate-locked PHY across a media independent interface.
While most of the interfaces between these architectural components are not
exposed, being rigorous about their definition has, in my opinion, been one
of the important ingredients of Ethernet's success.
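As a minimal sketch of that regrouping step (the function name is mine,
not from 802.3; only the 4-bit nibble width matches the MII):

    # Group the MAC's serial bit stream into nibble-wide parallel units,
    # as the reconciliation sublayer does; illustrative only.
    def to_nibbles(bits, width=4):
        unit = []
        for b in bits:
            unit.append(b)
            if len(unit) == width:
                yield unit
                unit = []

    serial = [1, 0, 1, 1, 0, 0, 1, 0]
    print(list(to_nibbles(serial)))   # [[1, 0, 1, 1], [0, 0, 1, 0]]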
Bob Grow
-----Original Message-----
From: Hon Wah Chin [mailto:HWChin@xxxxxxxxxxxxxxxxxxx]
Sent: Monday, July 26, 1999 10:51 AM
To: 'stds-802-3-hssg@xxxxxxxx'
Subject: Leveraging OC-192c
The discussion about a MAC/PLS pacing mechanism would be more
important if the interface were typically exposed. If it is
not exposed (and that's my guess), how it works is an internal
implementation issue. The PHY would clock information from the upper
protocol layers however it needs to.
This would simplify the discussion and eliminate the question of
how 802.3x interacts. Use of the 802.3x mechanism in this pacing
AND allowing flow control from the other end of the link probably
conflicts with the assumption in 802.3x that flow control packets
are not forwarded. As observed previously (Mick Seaman?), generating such
packets at the MAC/PLS interface AND forwarding such packets received from
the link (without extra internal state) can cause confusion.
The "bit time" used in the various calculations (IFG, 802.3x pause..)
can be defined at 10G or some other standard rate, probably with less
controversy.
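For concreteness (my arithmetic, assuming the existing 802.3 constants
carry over unchanged), a bit time defined at 10.000 Gbps gives round
numbers:

    # Bit time and derived constants at a 10.000 Gbps reference rate.
    bit_time = 1 / 10.0e9          # 100 ps per bit
    ifg      = 96 * bit_time       # 96-bit-time interframe gap: 9.6 ns
    pause_q  = 512 * bit_time      # one 802.3x pause quantum: 51.2 ns
    print(bit_time, ifg, pause_q)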
Still, the question of a THROUGHPUT RATE TARGET interacts with the
signaling rate and the packet formatting to be used on the link. Leveraging
the OC-192 equipment still requires defining the packet formatting, and
without flexibility in the signaling rate we have to work on the tradeoffs
between framing/formatting and throughput.
It seems to me that the leverage from the OC-192 deployment is mostly in
the regenerators. Current SONET ADMs do not transport OC-192c trib traffic;
the current LINE side is about 10 Gb/s, so there's no multiplexing or
add/drop.
The OC-192 components are ahead in volume right now, but the prospects for
10 Gb/s Ethernet would provide enough volume on its own to justify tweaks
to the component specifications, assuming no attempt is made to push the
performance by 25%. The other advantage of the defined OC-192 structure is
the overhead already reserved for link performance monitoring. This saves
definition work if this style of OAM&P is desired for extending the reach
of 10 Gb/s Ethernet using the deployed SONET OC-192 regenerators.
Overall, if you believe in significant 10 Gb/s Ethernet volumes, leveraging
the deployed OC-192 regenerators is useful but not compelling. In a
buildout of new links, new regenerators that run at a slightly different
clock rate can be installed, as well as old-design regenerators that do
exactly what SONET OC-192 defines. Within the 40 km target, regeneration
would not typically be used at all. Maybe making the physical layer work
with EDFAs will be more important than working with OC-192 regenerators.