Joel, maybe the 100Gbps target should be 120Gbps. Ribbon fiber cables and their connectors are x12, and assuming a 10Gbps-per-channel implementation, this works out very nicely. Many copper cables and connectors are also already established at x12. The 120Gbps interface could then be broken out into three 40Gbps interfaces or twelve 10Gbps interfaces without losing bandwidth.

Jim McGrath
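A minimal sketch of the x12 arithmetic above, in Python. The lane count and per-channel rate are the assumptions stated in the message; everything else is illustrative:

# Sketch of the x12 lane arithmetic (illustrative names only):
# 12 lanes at an assumed 10Gbps per channel give 120Gbps, which
# breaks out evenly into 3 x 40Gbps or 12 x 10Gbps, while a
# 100Gbps target would strand two lanes of the x12 cable.

LANES = 12           # ribbon fiber / copper cable width (x12)
LANE_RATE_GBPS = 10  # assumed per-channel rate

total = LANES * LANE_RATE_GBPS
print(f"aggregate: {total}Gbps")  # 120Gbps

# Even breakouts of the 120Gbps interface:
for port_rate in (40, 10):
    lanes_per_port = port_rate // LANE_RATE_GBPS
    ports, spare = divmod(LANES, lanes_per_port)
    print(f"{ports} x {port_rate}Gbps ports, {spare} spare lanes")

# By contrast, a 100Gbps target on the same media uses 10 of 12 lanes:
used = 100 // LANE_RATE_GBPS
print(f"100Gbps target: {used}/{LANES} lanes used, "
      f"{(LANES - used) * LANE_RATE_GBPS}Gbps of cable stranded")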
From: Joel Goergen [mailto:joel@force10networks.com]
Sent: Thursday, April 05, 2007 2:07 PM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [HSSG] 40G MAC Rate Discussion

If the front end is defined as 100Gbps, expecting the back end to be 40Gbps makes no sense from a system-implementation standpoint. Maybe if it were 50Gbps, the TM might be easier to implement. But either way, you would throw away a lot of bandwidth feeding a 100 into a 40.

Also, if the front end moves toward 4by25+FEC, which it appears to be doing based on the work so far, then from a system perspective you would use the same data rate on the back end, with perhaps different signaling.

Further, spending another three years on a 40Gbps backplane standard for such a small gain doesn't seem right. It was pretty painful the last time around. You would end up defining 1by40Gbps, 4by10Gbps, and 16by3.125Gbps. I just don't see the ROI.

No one has yet proven that 4by10Gbps LAG doesn't fit the server market described by Shimon. And actually, I still don't see the market he is talking about. Whether using LAG on the front end or in an ATCA chassis with multiple LAG connections ... a solution exists today that works well.

Last, someone still has to design an aggregation box to connect all the 40Gs together and pipe them out as 100Gs. I "know the art", and it is very costly to do this. But that isn't the problem for me ... we can all burn the money to supply a market we've seen no data for, or even a description of ... the real problem for the systems vendor is that we finish the box in 2010 and we have the exact same data-performance problem we have today, jamming 1G and 10G links into a 10G core.

I propose that rather than do 40G, we put that effort into working with 802.1 to resolve the perceived problems with LAG. Then, when 100Gbps is complete, we will have an N-LAG ... or New LAG ... that allows the end user to create ANY size pipe required for 1G, 10G, and 100G core or aggregation implementations.

-joel

Ali Ghiasi wrote:
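A rough sketch of the rate mismatch Joel describes, under the hypothetical assumption that a 100Gbps front end must be carried on whole back-end links; the numbers are illustrative, not from any standard:

# Carrying a 100Gbps front end on whole back-end links strands
# capacity at 40Gbps per link but not at 50Gbps per link.
import math

FRONT_END_GBPS = 100

for back_end_gbps in (40, 50):
    links = math.ceil(FRONT_END_GBPS / back_end_gbps)
    provisioned = links * back_end_gbps
    print(f"{links} x {back_end_gbps}Gbps back-end links: "
          f"{provisioned}Gbps provisioned, "
          f"{provisioned - FRONT_END_GBPS}Gbps thrown away")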