I tend to agree with Joel. I don't think it's fair to
demand that the current configurations of passive hardware determine
the target MAC data rate. The most viable solution for 100G seems to
be 4 lambda, with a smaller possibility of 5 lambda, and 30G serial
links will be a lot harder to achieve electrically than
25G.
Gourgen
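(A minimal sketch of the lane arithmetic behind these options; the lane
counts and per-lane rates are only the ones named in this thread, not
additional proposals:)

```python
# Lane-count x per-lane-rate options discussed for the next MAC rate.
options = [
    ("4 lambda for 100G", 4, 25),  # most viable per the note above
    ("5 lambda for 100G", 5, 20),  # the smaller possibility
    ("4 lanes for 120G", 4, 30),   # needs the harder 30G serial links
]
for name, lanes, gbps in options:
    print(f"{name}: {lanes} x {gbps}G = {lanes * gbps}G")
```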
Jim,
Please look at past data regarding optics
feasibility for the 10km solution space. The optics vendors are suggesting
4by25+FEC as a good fit. I'm pretty uncomfortable pushing that to
4by30+FEC.
-joel
McGrath, Jim wrote:
Joel, maybe the 100Gbs target should
be 120Gbs. Ribbon fiber cables and the connectors are x12. Assuming a 10Gbs
per channel implementation, this works out very nicely. Many
copper cables and connectors are also already established at x12. Then
the 120Gbs interface could be broken out into 3 40Gbs interfaces or 12 10Gbs
interfaces without losing bandwidth.
Jim McGrath
Molex 2222 Wellington Ct Lisle, IL 60532 Phone: 630-527-4037 Mobile: 630-244-3872
Fax: 630-969-1352
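(A worked version of the x12 arithmetic above, illustrative only; the
channel count and per-channel rate are the ones from Jim's note:)

```python
# x12 ribbon fiber or copper at 10Gbs per channel, per the note above.
channels, per_channel_gbps = 12, 10
total = channels * per_channel_gbps  # 120Gbs aggregate
print(f"{channels} x {per_channel_gbps}G = {total}G")

# Clean breakouts with no stranded channels:
for iface in (40, 10):
    n, leftover = divmod(total, iface)
    print(f"{total}G -> {n} x {iface}G, {leftover}G left over")

# A 100G target on the same x12 medium would strand two channels:
used = 100 // per_channel_gbps
print(f"100G uses {used} of {channels} channels; {channels - used} stranded")
```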
Ali,
If the front end is defined as 100Gbps, expecting the
back end to be 40Gbps makes no sense from a system implementation
standpoint. Maybe if it were 50Gbps, the TM might be easier to implement.
But either way, you would throw away a lot of bandwidth going from 100
into 40.
Also,
if the front end moves towards 4by25+FEC, which it appears to be doing based
on the work so far, then from a system perspective you would use the same
data rate on the back end, perhaps with different signaling. Further,
spending another three years on a 40Gbps backplane standard for such a small
gain doesn't seem right. It was pretty painful the last time around.
You would end up defining 1by40Gbps, 4by10Gbps, and 16by3.125Gbps. I just
don't see the ROI.
No one has yet proven that 4by10Gbps LAG doesn't
fit the server market described by Shimon. And actually, I still don't
see the market he is talking about. Whether you use LAG on the front
end or in an ATCA chassis with multiple LAG connections, a solution exists
today that works well.
Last, someone still has to design an
aggregation box to connect all the 40G links together and pipe them out as
100G. I "know the art", and it is very costly to do this. But
that isn't the problem for me ... we can all burn the money to supply a market
we've seen no data for, or even a description of ... the real problem for the
systems vendor is that we finish the box in 2010 and we have the exact same
performance problem we have today: jamming 1G and 10G links into a 10G
core.
I propose that rather than do 40G, we put that effort into
working with 802.1 to resolve the perceived problems with LAG. Then, when
100Gbps is complete, we will have an N-LAG ... or New LAG ... that allows the
end user to create ANY size pipe required for 1G, 10G, and 100G core or
aggregation implementations.
-joel
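(For context on the "perceived problems" with LAG: members are typically
selected per flow by hashing packet headers, so the bundle's aggregate
capacity grows with member count but no single flow can exceed one member's
rate. A minimal sketch, assuming a generic 5-tuple hash; the fields and
hash function are illustrative, not from 802.1:)

```python
import hashlib

def lag_member(flow, num_members):
    # Pick a member link by hashing the flow's 5-tuple. Illustrative only:
    # real gear hashes in hardware with vendor-chosen fields, but the
    # consequence is the same -- every packet of a flow uses one member.
    key = "|".join(map(str, flow)).encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % num_members

# Four 10G members give 40G in aggregate across many flows...
for sport in range(49152, 49160):
    flow = ("10.0.0.1", "10.0.0.2", 6, sport, 80)
    print(f"flow sport={sport} -> member {lag_member(flow, 4)}")
# ...but any single flow is still capped at 10G, one member's rate --
# the kind of limit an improved N-LAG effort would need to address.
```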
Ali Ghiasi wrote:
Marcus and others,
I'd like to present another point of view in support of a 40 Gig MAC.
We currently have the following options on the backplane side:
- KX-4 (XAUI): 10Gig
- KR (1 lane): 10Gig
The natural next step for backplane Ethernet will be to operate the KX-4
lanes at 10.3125 Gbaud. Regardless of what decision we make in the HSSG,
a 40Gig MAC will exist for the backplane.
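(The per-lane arithmetic behind these options, as a sketch: 8b/10b for
KX-4 and 64b/66b for KR match the published clauses, but applying 64b/66b
to four 10.3125 Gbaud lanes for 40Gig is one reading of the step described,
not something this thread specifies:)

```python
# MAC rate = signaling rate (Gbaud) x coding efficiency x lane count.
CODING_EFF = {"8b/10b": 8 / 10, "64b/66b": 64 / 66}

def mac_rate_gbps(gbaud, coding, lanes):
    return gbaud * CODING_EFF[coding] * lanes

print(mac_rate_gbps(3.125, "8b/10b", 4))     # KX-4 (XAUI): 10.0
print(mac_rate_gbps(10.3125, "64b/66b", 1))  # KR, 1 lane: 10.0
print(mac_rate_gbps(10.3125, "64b/66b", 4))  # KX-4 lanes at 10.3125: 40.0
print(mac_rate_gbps(3.125, "8b/10b", 16))    # Joel's 16by3.125Gbps: 40.0
```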
Assuming we will define the 40Gig MAC
sooner or later, allowing a 40Gig MAC for the front panel becomes even
more compelling, especially when 100Gig is overkill for
these applications in the near term. If we define a 40Gig MAC in the HSSG,
then defining the 40Gig backplane becomes trivial.
Thanks, Ali
Marcus Duelk wrote:
> Hi,
>
> I think it was common sense at the last meeting that the
> rate that service providers and IXPs are looking for is 100 GbE.
> The discussion about 40 GbE is for the *server market*, the
> classical LAN application of Ethernet. In the network space you
> already have OTU3 and OC-768c PoS, so there is not much
> need for another 40G Ethernet interface.
>
> Also, my personal opinion regarding "broad market potential" is that
> there will be more networks or network types that require 100 GbE;
> however, in terms of volume I could imagine that a 40GbE interface for
> servers would actually produce more volume, even though it is only
> one type of network.
>
> Marcus
>
> Toshinori Ishii wrote:
>> Hello,
>>
>> I'm another IXP network engineer.
>>
>> 2007/4/5, Henk Steenman <henk.steenman@ams-ix.net>:
>>> Back to 40GE: scaling link aggregation using 10GE for another 3 years
>>> will be very hard. The use of 40GE might be of help here if it would
>>> allow for standardized products to become available, say, in the
>>> second half of 2008.
>>> QUESTION: Is there a way to expedite the standardization process (and
>>> subsequent product development) of a 40GE standard? Within or outside
>>> of the IEEE?
>>>
>>> If the answer to the above is "no" then I would say let's not spend
>>> any time on anything other than 100GE, so that no delay is introduced
>>> in the development of this standard, and get it finished as soon as
>>> possible.
>>
>> Agree.
>> I need 100GE ASAP.