
Re: Long distance links




Rich:

I think we're still far from mutual understanding. Here are some comments.

Cheers,

Paul

At 02:26 PM 9/2/99 -0700, Rich Taborek wrote:
>
>Paul,
>
>In your Montreal presentation
>(http://grouper.ieee.org/groups/802/3/10G_study/public/july99/bottorff_1_0799.pdf),
>page 5, you show a transponder performing a bridge function between a DWDM
>photonic network and a L2/L3 WAN access switch. I have a couple of comments
>about this implementation:

The functions at the demarcation point between the DWDM network and the
IEEE link segment depend on how the IEEE link is defined. At minimum, the
boundary requires a transponder which maps the IEEE segment onto the DWDM
grid. Other functions which may be required include signal regeneration,
signal retiming, format conversion, encoding conversion, clock domain
conversion, store-and-forward bridging, etc.

My current thinking is that the demarcation point will require a
transponder for mapping onto the DWDM grid, along with clock domain
conversion, signal regeneration, and signal retiming. This view makes the
transponder something like a 2-port repeater. The way I look at the problem
is that I'm looking for a solution which minimizes the complexity of the
demarcation point without adding cost or complexity to the IEEE link. I
believe a solution which achieves this provides the best trade-off for
customers.
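To put numbers on the 2-port repeater view: with the egress held to the
9.584640 Gbps payload rate and the ingress bursting at 10.0 Gbps, the
rate-matching buffer only has to absorb the rate difference over one
maximum-length frame. A rough sketch of the arithmetic (the frame size and
rates here are illustrative assumptions, not proposed values):

```python
# Back-of-the-envelope sizing of the rate-matching buffer in a
# 2-port transponder/repeater (illustrative numbers only).
RATE_IN = 10.0e9        # ingress line rate, bits/s (LAN side)
RATE_OUT = 9.58464e9    # egress payload rate, bits/s (WAN side)
FRAME_BITS = 1518 * 8   # one maximum-length Ethernet frame

frame_time = FRAME_BITS / RATE_IN                 # time to receive one frame
backlog_bits = frame_time * (RATE_IN - RATE_OUT)  # bits queued during that frame
backlog_bytes = backlog_bits / 8

print(f"backlog per max frame: {backlog_bytes:.0f} bytes")  # roughly 63 bytes
```

So long as the MAC is throttled to the lower average rate (for example via
the XGMII hold mechanism), this per-frame backlog, not switch-style
congestion buffering, is what sets the buffer size.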

>
>1) It seems that you require a very specific SONET WAN PHY on both the Ethernet
>side of the Transponder and the WAN side of the L2/L3 access switch. Specifically,
>you require that the Ethernet frames be scrambled NRZ, that the MAC/PLS rate of
>at least the switch is 9.58464 Gbps, and that no special symbols be used.
>Essentially, your requirement is that the "Ethernet" link PHY is the same as
>SONET OC-192. Is this not exactly true? If not, please point out my
>misconceptions.

No, I don't require a specific SONET WAN PHY for the two link segments
attaching to the DWDM link segment. In fact, I believe adhering to SONET
timing and management requirements would make a conventional Ethernet link
too expensive. I do require a very specific encoding running over the SONET
or DWDM PHY. The requirements for the DWDM network encoding include NRZ
efficiency, no special symbols, and a 9.584640 Gbps data rate. Because I
have a specific requirement for the DWDM link segment, I prefer IEEE
solutions which are close enough to the DWDM PHY to minimize the transform
at the demarcation point without adding cost to the IEEE segment. The one
requirement, the 9.584640 Gbps data rate, makes a huge difference in the
complexity of the demarcation device while not costing anything on the
Ethernet side.
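For reference, the 9.584640 Gbps figure is the payload capacity of a SONET
STS-192c (OC-192) envelope. The arithmetic can be checked from the framing
column counts (a sketch; the overhead breakdown follows the usual SONET
concatenation rules):

```python
# SONET STS-192c framing arithmetic (sketch).
ROWS = 9                  # rows per frame
COLS_TOTAL = 90 * 192     # 17280 byte columns per STS-192c frame
TOH_COLS = 3 * 192        # 576 section + line (transport) overhead columns
POH_FIXED_COLS = 1 + 63   # 1 path overhead column + 63 fixed stuff columns
FRAME_PERIOD = 125e-6     # seconds per frame (8000 frames/s)

line_rate = COLS_TOTAL * ROWS * 8 / FRAME_PERIOD  # bits/s on the wire
payload_rate = (COLS_TOTAL - TOH_COLS - POH_FIXED_COLS) * ROWS * 8 / FRAME_PERIOD

print(f"line rate:    {line_rate / 1e9:.5f} Gbps")     # 9.95328
print(f"payload rate: {payload_rate / 1e9:.5f} Gbps")  # 9.58464
```

This is why matching the 9.584640 Gbps rate lets the demarcation device map
the IEEE segment into the envelope without a store-and-forward rate
conversion.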

>
>2) I'll even go along with you if your answer above is "Yes, IEEE 9.58464 GigE
>is really OC-192". As a matter of fact, I'll support it in committee as an
>alternative 10 GbE PHY option, call it the "WX" family of PHY's (could be
>different wavelengths). However, this PHY is ill suited for the LAN environment
>and forces every demarcation point between "old Ethernet" and "new SONET" to
>do protocol conversion.
>Besides the OC-192 PHY not being cost effective in the LAN for reasons I've
>belabored in previous notes to this reflector, an additional cost which cannot
>be ignored is the cost/performance penalty assessed by forced protocol
>conversion in ALL LAN environments including LAN environments with no WAN
>access requirements. Am I wrong about this protocol conversion requirement?

Why would you need code conversions? Certainly a link attached to a switch
will have its coding decoupled from the line, just as with 100 Mbit/s and 1
Gbit/s today. If you think some conversion is required, please tell me
specifically what would be converted, why, and where.

The proposed scrambled PHYs are no more expensive than any block coded PHY.
In fact, they may cost less and be easier to get to market because of
available components. Of course we have not proposed OC-192 as the PHY,
because it has many aspects which make it more expensive. Instead, we have
proposed a PHY which uses some of the OC-192 technology to achieve a
simple, WAN-compatible, and low-cost solution.

>
>3) It seems to me that your presentation portrays one of many possible
>implementations of 10 GbE use in the WAN. The complete set includes WANs which
>do and do not utilize DWDM technology. By your own words you indicate that DWDM
>equipment may be code dependent (i.e. proprietary). It seems very reasonable
>then to shield 10 GbE from the special purpose, high cost and proprietary
>interfaces such as DWDM. Please help enlighten me as to how standardizing a
>non-standard WAN PHY as 10 GbE helps Ethernet customers in general?

I agree with this, which is why we are describing a demarcation point.

>
>4) As far as implementations go, one possible implementation would be to feed
>10 GbE at 10.0 Gbps directly into the DWDM photonic network. 

Photonic networks (not using SONET) are still code and frequency sensitive.
This is a product of their historic development from SONET infrastructure
components. As I have mentioned many times, we are talking about 3 WAN link
segment types: dark fiber, dark lambda, and SONET. Dark lambdas are built
on DWDM photonic networks which today are frequency and code sensitive. Our
market research on this topic indicates that changing the DWDM photonic
networks to allow frequency and code agility would require overcoming a
tremendous installed base momentum. The market reality is that DWDM
photonic networks are both code and frequency sensitive. Ignoring this
reality will just make Ethernet not viable in the photonic networks.

>still relatively in its infancy, I foresee more direct WAN implementation which
>may benefit from some of the more cost effective PHYs already proposed for 10
>GbE, especially if common interfaces are developed for these PHYs.

I agree there is potential in the dark fiber market. I also believe a key
to exploiting this market is tackling the installed base.

>The use of these PHYs would enable perhaps the most cost effective
>implementations of metro and wide area DWDM networks which can STILL tie
>into the existing WAN infrastructure via routing or bridging.

This builds a world of islands where Ethernet is a very small player
competing with other technologies that have higher volume production due to
their ability to interoperate with the installed base. A much more
compelling story is a universal link technology which operates on dark
fiber and interfaces to existing DWDM and SONET systems. This technology
would have far faster market uptake, providing a simple and universal
WAN/LAN solution.

>It's not much of a stretch to envision WAN access routers with DWDM
>interfaces on the WAN side and 10 GbE interfaces on the LAN side. Is this
>latter implementation impossible? I think not. What is the signaling
>protocol on the DWDM side in this case? The point here is that I view your
>DWDM photonic network as only one possible implementation of a DWDM
>photonic network. Please don't encumber the rest of the Ethernet community
>with implementation specific and special requirements.
>
>I have to apologize for being so harsh in trying to get to the bottom of
>this issue. But I believe that my strategy in doing so sooner rather than
>later will prove to be beneficial. Please also don't take the issues
>personally. I'm trying very hard to stick to the issues.

No offense taken.

>
>Best Regards,
>Rich
>
>--
>
>Paul Bottorff wrote:
>
>> Dan:
>>
>> I also think we are getting closer to understanding. A few comments.
>>
>> Cheers,
>>
>> Paul
>>
>> At 05:49 PM 9/1/99 -0600, DOVE,DANIEL J (HP-Roseville,ex1) wrote:
>> >
>> >Paul,
>> >
>> >While we may not be coming closer to agreement (or maybe we are?) I
>> >believe we are at least coming closer to understanding.
>> >
>> >More in context below...
>> >
>> >> >So if I understand this model, we have a 10Gig link (campus backbone)
>> >> >that is connected to a campus switch. That switch wants to connect to
>> >> >a WAN and thus will have a WAN port that operates at 9.58464 by using
>> >> >its XGMII "hold" signal.
>> >>
>> >> Provided people built networks to this configuration, then it
>> >> works just
>> >> fine.
>> >> The IEEE has not yet decided to build 2 PHYs. I believe that
>> >> the WAN PHY
>> >> being talked about does not have a distinct identity from the LAN PHY.
>> >
>> >This is one point at which we clearly have different perspectives. I
>> >believe that there will be sufficient distinction in cost between a
>> >DWDM laser for the WAN, and a (WWDM or serial) solution that is
>> >limited to a few Km for the campus. Otherwise, why do we need an XGMII?
>>
>> I agree that a PHY which included a DWDM laser would have a distinct
>> identity. However, I don't believe this interface is the current topic of
>> standardization. How I see the system being built is that the DWDM network
>> will be terminated in a shelf which provides 10 GigE access ports. On one
>> side of the shelf will be IEEE standard 10 GigE on the other side of the
>> shelf will be a DWDM photonic network. The device in the middle at the
>> demarcation point will be a transponder/repeater. For a router to access
>> the photonic network it will attach a 10 GigE interface to the photonic
>> network access port.
>>
>> A typical 10 GigE WAN link which attaches to a photonic network would be
>> built using 3 or more link segments. If you refer to my slides from
>> Montreal the 5th slide provides a picture of such a network. The link
>> segments which attach from the router to the photonic network need to
>> provide the 9.584640 data rate since this is all the data the photonic
>> network can carry due to historic reasons. The PHYs in the router do not
>> have DWDM photonics.
>> >
>> >> Because I don't have a good criteria for distinct identity
>> >> I've found no
>> >> reason to believe the committee should build 2 PHYs. My
>> >> assumption is that
>> >> any PHY developed may run on SMF and may be deployed in the
>> >> wide area. This
>> >> is what is currently happening with 1 GigE.
>> >
>> >Actually, there is LX, SX, CX and 1000BASE-T not to mention a few
>> >proprietary links for long-haul 1550nm. There is no reason not to
>> >believe that 10G will follow the paradigm that allows multiple
>> >PHYs for multiple cost/performance domains.
>>
>> Access to the photonic network described above can (and will in some cases)
>> be less than 100 meters. It may use 850, 900, 1300, or 1550 nm lasers. It
>> may be serial or CWDM. Finally it may have a different encoding than the DWDM
>> network (though I dislike this).
>> >
>> >> >
>> >> >I agree that THAT switch will require buffering to handle the rate
>> >> >mismatch, but that would be required in the event that it has more
>> >> >than 10 Gigabit links feeding it anyway. This is OK.
>> >>
>> >> In the configuration I described it is the buffer at a
>> >> transponder/repeater
>> >> located at the junction between the IEEE segment and the DWDM
>> >> segment which
>> >> requires buffering to rate match. At this juncture there are only two
>> >> ports. One side is the IEEE 10.00 Gbps and the other side is
>> >> the 9.584640
>> >> Gbps DWDM cloud. The buffer size covers only the rate mismatch not the
>> >> normal overload seen in packet switches. The photonic network
>> >> appears as a
>> >> new segment in the link between switches, not as a separate link.
>> >
>> >This looks like a specific implementation restriction. I doubt that
>> >I would implement it that way.
>> >
>> >Regards,
>> >
>> >Dan Dove
>> >
>> >
>> Paul A. Bottorff, Director Switching Architecture
>> Enterprise Solutions Technology Center
>> Nortel Networks, Inc.
>> 4401 Great America Parkway
>> Santa Clara, CA 95052-8185
>> Tel: 408 495 3365 Fax: 408 495 1299 ESN: 265 3365
>> email: pbottorf@xxxxxxxxxxxxxxxxxx
>
>-------------------------------------------------------------
>Richard Taborek Sr.    Tel: 650 210 8800 x101 or 408 370 9233
>Principal Architect         Fax: 650 940 1898 or 408 374 3645
>Transcendata, Inc.           Email: rtaborek@xxxxxxxxxxxxxxxx
>1029 Corporation Way              http://www.transcendata.com
>Palo Alto, CA 94303-4305    Alt email: rtaborek@xxxxxxxxxxxxx
>
Paul A. Bottorff, Director Switching Architecture
Enterprise Solutions Technology Center
Nortel Networks, Inc.
4401 Great America Parkway
Santa Clara, CA 95052-8185
Tel: 408 495 3365 Fax: 408 495 1299 ESN: 265 3365
email: pbottorf@xxxxxxxxxxxxxxxxxx