
RE: 10xGbE on DWDM




Hi,

 Martin Nuss asked me to forward you a link to some SDL specs, etc.  They
can be found at:

http://www.lucent.com/micro/tic/ds3e3.html

  Some additional comments:

1) SDL was designed to correct some flaws in POS, and can be used to map
data into any bit-oriented link (e.g. into SONET, or directly over fiber,
etc.)

2) It uses x^48 set/reset scrambling as opposed to x^43 self-synchronous
scrambling, because the x^43 scrambler is open to DC-balance/transition-density
attacks on clear-channel mappings (see the sketch after this list).  In
non-clear-channel mappings (such as an STS-12 inside an OC-48) there are no
issues and x^43 self-synchronous scrambling works fine.

3) It suffers none of the byte expansion seen with codes such as 8B10B or
byte-oriented HDLC.

4) It has two messaging channels that allow 6-byte messages to be passed
between the ends of the link (e.g. GbE out-of-band signalling can be mapped
into these).
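
  To make point 2 concrete, here is a minimal Python sketch (my own
illustration, not taken from the SDL or POS specs) of why the x^43 + 1
self-synchronous scrambler is exposed on clear-channel mappings: its state is
just the last 43 bits sent on the line, so a sender who knows or controls
those bits can craft a payload that forces a long run of zeros and destroys
transition density.  The sketch only shows the weakness being corrected; the
x^48 set/reset details are in the spec at the link above.  (For comparison
with point 3: 8B10B expands every 8 bits to 10, a 25% line-rate penalty,
while scrambling adds no extra bits at all.)

def selfsync_scramble(bits, state):
    """x^43 + 1 self-synchronous scrambler: out[n] = in[n] XOR out[n-43]."""
    out = []
    for b in bits:
        o = b ^ state[0]            # state[0] holds out[n-43]
        out.append(o)
        state = state[1:] + [o]     # shift the 43-bit history of line bits
    return out, state

# Suppose the sender controlled the previous 43 payload bits, so the
# scrambler state (the last 43 line bits) is known to him.
known_state = [1, 0] * 21 + [1]     # any known 43-bit history

# Malicious payload: always feed the scrambler its own delayed output.
payload, s = [], list(known_state)
for _ in range(200):
    payload.append(s[0])            # d[n] = out[n-43]  =>  out[n] = 0
    s = s[1:] + [0]

line_bits, _ = selfsync_scramble(payload, list(known_state))
print(sum(line_bits))               # 0 -- 200 line bits with no transitions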

  Any questions on this, let me know.

P

	----------
	From:  Bill St. Arnaud [SMTP:bill.st.arnaud@xxxxxxxxxx]
	Sent:  Sunday, June 06, 1999 3:44 PM
	To:  Paul Gunning
	Cc:  bin.guo@xxxxxxx; rtaborek@xxxxxxxxxxxxxxxx;
dwmartin@xxxxxxxxxxxxxxxxxx; stds-802-3-hssg@xxxxxxxx; sachs@xxxxxxxxxxxxxx;
widmer@xxxxxxxxxx; Iain_Verigin@xxxxxxxxxxxxxx
	Subject:  RE: 10xGbE on DWDM



	Paul:
	> I fully concur. I would welcome a pointer as to where I could
	> find out some more information about the SDL protocol(s). It(they)
	> sounds very, very interesting.

	Lucent has been doing a lot of work in this area.  I can't point you to a
	specific paper, but I have seen numerous presentations from Lucent on SDL.
	Their web site might be a good place to start.

	>
	>
	> I realise that. But is there a requirement for additional
	> frame overhead in that you need a contiguous sequence
	> of preamble "training" 'bits' at the front of the frame
	> to phase sync. the Rx PLL?

	If you have loss of sync that may be required, but in general I don't
	believe there is a requirement for "training" bits.


	>
	> Agreed. But there ARE clock management issues between
	> adjacent nodes on a DPT ring in that you have to
	> explicitly nominate a master and slave relationship.

	Hmmm.  I will have to check into this.  DPT and Nortel's IPT, which is very
	similar, allow for auto-discovery etc., which implies that there is no
	master/slave relationship.

	> Unfortunately I can't attend the meeting in Idaho
	> but I would be very interested to learn of any proposals
	> for 100xGBe.

	I remain skeptical that we will see serial speeds beyond 10 Gbps or "all"
	optical switches in the near future.  The problem with serial speeds in
	excess of 10 Gbps is the processing required at switching or routing nodes.
	Every router manufacturer I have talked to has found it exceedingly
	difficult to build router interfaces that can work at 10 Gbps line speeds.
	To do this a router must process an incoming packet, perform a
	256,000-entry address table lookup and then forward the packet across a
	backplane within a few tens of nanoseconds.  At these speeds we are
	approaching the physical limits of light propagation across the physical
	dimensions of the box.
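
	As a rough check on that time budget (my own arithmetic, assuming
	minimum-size Ethernet frames rather than any particular router design):

# Per-packet time budget at 10 Gbps with minimum-size Ethernet frames
# (64-byte frame + 8-byte preamble + 12-byte inter-frame gap).
line_rate_bps = 10e9
bits_per_min_frame = (64 + 8 + 12) * 8
budget_ns = bits_per_min_frame / line_rate_bps * 1e9
print(round(budget_ns, 1))   # ~67.2 ns to parse, look up and forward each packet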

	Right now I believe the propagation time of an electrical signal for
	"copper on copper" semi-conductors is comparable to the speed of light
	through glass.  But more important is the small component size of
	semi-conductors versus existing opto-switching devices.  At these speeds
	the size of components and the length of their interconnection paths
	become very critical for high speed processing.  As I understand it, when
	you try to reduce optical components to the same size and density as
	semi-conductors you are approaching the wavelength of the light itself,
	which causes a whole set of new refractive and light-bending problems.
	There was an excellent article in the last issue of Scientific American
	on this topic.
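
	To put a number on that distance scale (again my own arithmetic, assuming
	a signal velocity of roughly two-thirds the speed of light in vacuum,
	which is typical for glass or a board trace):

# How far a signal travels during one bit period at 10 Gbps.
c = 3.0e8                        # speed of light in vacuum, m/s
v = (2.0 / 3.0) * c              # assumed signal velocity in glass / on a trace
bit_period_s = 1.0 / 10e9        # 100 ps per bit at 10 Gbps
print(round(v * bit_period_s * 100, 2))   # ~2.0 cm of travel per bit time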

	Finally, when you have serial speeds in excess of 10 Gbps you run into a
	whole set of new non-linearities in the fiber itself.  Polarization mode
	dispersion, for example, becomes very significant and can no longer be
	ignored.  Jitter and BER also become increasingly tough to manage.

	I think someday we will reach these speeds but maybe not serially, perhaps
	through Silkroad's or Transact's technology solutions.  But I don't see it
	happening for several years.

	>
	> Yes. But deployment economics would tend to prefer DWDM
	> so as to maximise bandwidth per physical fibre. DWDM allows
	> you to have many additional "virtual" fibres per physical fibre.


	The same economics apply to supercomputers.  The cost per memory bit of a
	supercomputer is significantly less than that of a standard PC.  It would
	be more economical to move all our applications to supercomputers.  But
	you don't see many people doing that.

	I think the same rationale will apply to DWDM.  Yes, the cost per bit is
	significantly less than CWDM and yes, you can pack in many virtual paths.
	But the upfront capital is very high.  More importantly, you are tied to
	the fortunes of your favourite carrier.  Whereas if I can get access to
	dark fiber I can install my own CWDM system at significantly less upfront
	cost and I am in control of my own destiny.

	We have actually been going through a detailed economic exercise on this
	very topic for a 1700 km regional network here in Canada.  Even a very big
	manufacturer of DWDM equipment had to admit that CWDM, particularly with
	10GbE, is "significantly" cheaper than DWDM.  Because the regional network
	is in a small province, the local carrier could not justify the upfront
	capital cost of a DWDM system.  But even if the carrier had the business
	case to justify the upfront capital cost, the prorated DWDM cost was not
	much better than the total CWDM 10GbE cost!!


	Bill



	>