RE: Long distance links
Dan:
Another set of comments below.
Paul
At 01:19 PM 9/1/99 -0600, DOVE,DANIEL J (HP-Roseville,ex1) wrote:
>
>Hi Paul,
>
>My comments are in context below.
>
>> Dan:
>>
>> You are looking at the problem from the point of view of a MAC talking
>> to a PHY attached to an Optical network. I agree that about 1 packet
>> time for transmit and 1-2 for receive is required for this configuration.
>> The issue is what happens when a MAC delivers a 10 Gigabit data stream
>> to a 10 Gigabit PHY which is attached to a link which eventually reaches
>> an Optical network.
>
>So if I understand this model, we have a 10Gig link (campus backbone)
>that is connected to a campus switch. That switch wants to connect to
>a WAN and thus will have a WAN port that operates at 9.58464 Gbps by using
>its XGMII "hold" signal.
Provided people build networks to this configuration, it works just fine.
The IEEE has not yet decided to build 2 PHYs. I believe that the WAN PHY
being talked about does not have a distinct identity from the LAN PHY.
Because I don't have a good criterion for distinct identity, I've found no
reason to believe the committee should build 2 PHYs. My assumption is that
any PHY developed may run on SMF and may be deployed in the wide area. This
is what is currently happening with 1 GigE.
>
>I agree that THAT switch will require buffering to handle the rate
>mismatch, but that would be required in the event that it has more
>than ten Gigabit links feeding it anyway. This is OK.
In the configuration I described, it is the transponder/repeater located at
the junction between the IEEE segment and the DWDM segment which requires
buffering to rate match. At this juncture there are only two ports. One
side is the IEEE 10.000 Gbps link and the other side is the 9.584640 Gbps
DWDM cloud. The buffer size covers only the rate mismatch, not the normal
overload seen in packet switches. The photonic network appears as a new
segment in the link between switches, not as a separate link.
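As a rough illustration (my numbers, rounded), the rate difference that
buffer has to absorb is simply the gap between the two port rates; it is
the ~400 Mbps figure used in the arithmetic quoted further down:

    # Illustrative sketch only, using the rates discussed in this thread.
    lan_rate_gbps = 10.000    # IEEE 10 Gigabit Ethernet rate
    wan_rate_gbps = 9.58464   # SONET payload rate under discussion
    mismatch_mbps = (lan_rate_gbps - wan_rate_gbps) * 1000
    print(f"rate mismatch ~= {mismatch_mbps:.2f} Mbps")  # ~415 Mbps, ~400 Mbps rounded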
>
>The WAN PHY will still only need its 1 packet worth of buffering to
>deal with the rate conversion at the XGMII though.
True at the switch, but not at the repeater/transponder which enters the
DWDM network.
>
>> At the optical network a transform is performed to continue the link.
>> The device at the juncture must flow-control the link to slow the data
>> rate to 9.584640 Gbps. To do so it must have enough buffer to cover
>> 2*delay*bandwidth. Since this buffer depends on delay it has a direct
>> dependence on the link length (making the solution scale poorly). A
>> reasonable design point for wide area equipment is about 25 msec
>> (typical routers use 200 msec, which is where the design point needs
>> to be for general applications).
>
>Now that the data from that switch has been restricted to the 9.58464
>rate in the campus switch (via the hold signal) and has been placed onto
>the SONET network, aren't we done?
I agree if this is what is done. If the effective data rate coming out of
the campus is 9.584640 Gbps, then there is no problem. If, on the other
hand, the data from the campus is sourced at 10.000 Gbps and is carried on
dark fiber for, say, 1000 km (using a few repeaters) before entering a DWDM
network, then the device at the edge of the DWDM network must slow the data
rate down with a flow-control mechanism.
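To put a number on the length dependence: assuming roughly 5 usec of
propagation delay per km of fiber (light travels at about 2e8 m/s in glass),
a 1000 km run adds on the order of 10 msec of round-trip delay before any
flow control from the DWDM edge device can take effect:

    # Illustrative sketch; ~5 usec/km is a typical figure for silica fiber.
    delay_per_km = 5e-6       # seconds of one-way propagation delay per km
    link_km = 1000
    rtt = 2 * link_km * delay_per_km
    print(f"round-trip delay ~= {rtt * 1e3:.0f} msec")  # ~10 msec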
>
>> Doing the math on 25 msec gives us 2*25msec*400Mbps/8bpB = 2.5 Mbytes.
>> If we recalculate the buffer requirement based on the standard router
>> design point of 200 msec we get 2*200msec*400Mbps/8bpB = 20 Mbytes.
>>
>> Paul
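For what it is worth, the arithmetic quoted above can be checked with a
short sketch (using the rounded 400 Mbps mismatch):

    # Buffer to cover 2 * delay * (rate mismatch), per the quoted calculation.
    mismatch_bps = 400e6                    # rounded 10.000 - 9.58464 Gbps
    for delay_s in (25e-3, 200e-3):         # wide-area and router design points
        buffer_mbytes = 2 * delay_s * mismatch_bps / 8 / 1e6
        print(f"{delay_s * 1e3:.0f} msec -> {buffer_mbytes:.1f} Mbytes")
    # 25 msec -> 2.5 Mbytes; 200 msec -> 20.0 Mbytes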
>
>If the campus switch doesn't want to provide 20 Mbytes of buffering
>for its 10Gig WAN port, it can flow-control its downstream links
>just as a 10Gig campus port would do if it had 24 Gigabit links
>feeding it too much data.
>
>If your point is that by limiting the MAC/PLS rate to 9.58464 Gbps we
>would be able to send data from a building backbone to the campus
>switch and then on to the WAN without a rate mismatch, I accept
>your point. However, those of us who expect to be aggregating many
>Gigabit links see the inherent buffering and flow-control issues
>as a more immediate concern, and by dealing with those issues the
>10.000 -> 9.58464 rate mismatch is resolved anyway.
>
>Thanks for continuing to illuminate your concerns. I hope my response
>helps us to converge our understanding.
>
>Regards,
>
>Dan
>
>
Paul A. Bottorff, Director Switching Architecture
Enterprise Solutions Technology Center
Nortel Networks, Inc.
4401 Great America Parkway
Santa Clara, CA 95052-8185
Tel: 408 495 3365 Fax: 408 495 1299 ESN: 265 3365
email: pbottorf@xxxxxxxxxxxxxxxxxx