Scott,

You lost me when you wrote below that 400G Ethernet does not support breakout. I do not understand. The uplink module does not also have to be a module for connection to servers. A 400G form factor can be used to support multiple ports of 50G Ethernet, as we shall be defining in our upcoming task force, now in the study group phase. It is the appropriate speed of Ethernet for servers that we need defined, along with the appropriate PMDs.
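The breakout arithmetic behind this point can be made concrete with a short sketch (illustrative only; the lane rate and port counts come from this thread's discussion, and the function name is mine, not from any standard):

```python
# Illustrative sketch of breaking a multi-lane module out into lower-speed
# ports, as debated in this thread. Lane rates are in Gb/s.

def breakout(module_lanes, lane_gbps, port_lanes):
    """Return (port_count, port_speed_gbps) when a module with module_lanes
    lanes is broken out into ports of port_lanes lanes each."""
    if module_lanes % port_lanes:
        raise ValueError("port lane count must divide module lane count")
    return module_lanes // port_lanes, port_lanes * lane_gbps

# An 8-lane, 50G-per-lane (400G-class) module broken out to 50GE servers:
print(breakout(8, 50, 1))   # -> (8, 50): eight 50GE ports
# The same module broken out to 2-lane (2 x 50G) server connects:
print(breakout(8, 50, 2))   # -> (4, 100): four 2-lane ports
```

The same 8-lane module therefore serves either 1-lane or 2-lane server connects, which is the flexibility Jeff is pointing at.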
You were not here to hear John D. remind the room that we began our efforts debating terabit-class Ethernet versus 400G Ethernet.

Jeff

From: Scott Kipp [mailto:skipp@xxxxxxxxxxx]

Jeff,

You do raise a good point about 2-lane interfaces to servers. When I interviewed Alan Weckel of Dell'Oro for the Analyst Hour on "50GE and the Path to 200GE and 400GE" (available for free at
https://www.brighttalk.com/channel/6205/ethernet-alliance), he pointed out that they expect about a million ports of 25G servers to ship and another million ports of 50G (2x25G) servers to ship this year (slide 24). By 2019, they expect 50G server ports to outship 25G server ports by about a million ports (5M to 4M), but the 50G servers will be a mixture that includes serial 50G lanes by then. So I have considered 2-lane servers, and 2 divides evenly into 4 as well. While some 8-lane modules are being proposed, like QSFP-DD, CFP8, and On Board Optics, 400GbE is not defining a copper interface for servers or an 8-lane MMF interface. We can't break a DR4, FR8, or LR8 out to 50G servers. 400GbE is not for breakout to servers. As Gary showed in an excellent presentation during the Study Group phase for 400GbE (http://www.ieee802.org/3/400GSG/public/13_11/nicholl_400_01_1113.pdf),
400GbE is for routers, not servers. The conclusion of that slide says, “Early market applications for 400 Gb/s Ethernet will be similar to those seen in early market 100 Gb/s Ethernet”.
I agree. 400GbE will not see high volumes until it has 100G lanes and fits in a QSFP. That will be in the next decade. What I don't understand is why people think that 8X interfaces are suddenly going to be low cost when nothing has changed and the speeds are getting more challenging. Even if we defined an 8X50G MMF interface for 400GbE, the 8X module will not see the volumes that drive cost down like the ubiquitous and beloved QSFP. Volume drives cost down and investment up. Can anyone explain how a low-volume 400GbE product based on 8 lanes of 50G is going to get to low cost?
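The lane arithmetic underlying this question can be spelled out in a small sketch (illustrative only; the lane counts and rates are the figures argued over in this thread, not values from a specification):

```python
# Aggregate module rate as lanes x per-lane rate (Gb/s), comparing the
# 4-lane and 8-lane options discussed in the thread.

def aggregate_gbps(lanes, lane_gbps):
    return lanes * lane_gbps

print(aggregate_gbps(4, 50))   # 200: 200GbE fits a 4-lane (QSFP-class) module
print(aggregate_gbps(8, 50))   # 400: 400GbE with 50G lanes needs 8 lanes
print(aggregate_gbps(4, 100))  # 400: with 100G lanes, 400GbE fits 4 lanes
```

In other words, 400GbE only reaches the 4-lane form factors that have historically shipped in volume once 100G lanes exist, which is the point both sides of this exchange circle around.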
Another way to look at this is: what will see volume, a low-cost QSFP or a CFP8? This has played out a couple of times now in favor of the QSFP. These topics have been on my mind quite a bit because I'll be speaking about this next week at OFC on the Thursday panel called "A Rational Assessment of 400GbE". I will argue that FlexE, 100GbE, and 200GbE will be more cost effective than 400GbE. I hope you can make it.

Thanks,
Scott

From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Scott,

For applications, we see this 4-to-1 ratio, such as 200GE and 4 x 50GE, OR 400GE and 4 x 100GE. What you do not consider is that the same base technology can do both, where that base technology is 50G lanes. The base technology factors are 4 & 1 OR 8 & 2. You are not accounting for server connects that use 2-lane interconnects. We might argue that 400GE will not be at the sweet spot until it is made using 4 x 100G lanes and server connects also use a 100G lane. In this way, you do not have to say that 400GE is only needed for routers today. There are data center applications.

Jeff

From: Scott Kipp [mailto:skipp@xxxxxxxxxxx]
All,

I wanted to comment on some of the presentations that are coming out against 200GbE in the possibly final days of the NGOATH Study Group.

As I showed in kipp_50GE_NGOATH_01_0116.pdf, 200GbE can be designed for low-cost QSFP implementations from the start. This makes 200GbE applicable to the low-cost/high-volume switch market, while 400GbE will be suited for the high-cost/low-volume router market. 200GbE will be a continuation of the successful, high-volume progression from 40GbE QSFP+ to 100GbE QSFP28 to 200GbE QSFP56. Highly parallel Ethernet interfaces more than 4 lanes wide, like 400GbE, have never reached high volume (more than 1M ports/year), and there is no reason to think this will change.

According to Dell'Oro, more 100GbE switch ports shipped than 100GbE router ports last year for the first time. This year, over 10X as many 100GbE switch ports are expected to ship as 100GbE router ports. Routers ship low volumes of high-speed ports initially, but switches overwhelm routers in port shipments when the technology is ready and the cost is low enough for switching. This pattern of switching dominance is set to repeat at 400GbE. 400GbE will ship for routers, while 200GbE can come out of the gates in high volume like 40GbE. Here's a quick comparison of the differences between 200GbE and 400GbE:
In booth_50GE_NGOATH_01a_0316.pdf, the presentation claims that 200G MAC-to-MAC provides no value. I heard similar arguments against 40GbE in 802.3ba, and 40GbE took off in switching in 2011 while 100GbE stayed high cost in routers until a 4-lane solution became available. 200GbE will add value by being lower cost/bit. No Ethernet interface has proven to be low cost when it is highly parallel (>4 lanes). Ethernet interfaces can sell millions of ports per year when they are 4 lanes wide or less. In
nicholl_50GE_NGOATH_01_0316.pdf, the presentation claims that from "both a Standards and product viewpoint, 200GbE and 400GbE are likely to come out at the same time". I agree that the standards could come out at roughly the same time, but the products will differ mainly in character, not in time. High-cost 400GbE will only be available in routers, while low-cost 200GbE will be in switches. Go to any systems vendor's website and there will be completely separate product lines for switches and routers. Switches are for high volume and low cost, while routing has very different and challenging requirements that are suited to high-bandwidth deployments.
I'm glad the objectives for 200GbE are already accepted by the Study Group. I hope enough of you agree to keep the 200GbE objectives in the project and prevent last-minute changes with little justification beyond FUD. End users will flock to 200GbE when they see lower cost/bit performance, just like they did for 40GbE.

Regards,
Scott Kipp
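The QSFP progression Scott cites can be summarized numerically. This sketch is illustrative, with per-lane rates as commonly associated with each form factor in the thread:

```python
# The 4-lane QSFP progression cited in the thread: the form factor keeps
# four electrical lanes while the per-lane rate rises each generation.

generations = [
    ("QSFP+", 10),   # 4 x 10G -> 40GbE
    ("QSFP28", 25),  # 4 x 25G -> 100GbE
    ("QSFP56", 50),  # 4 x 50G -> 200GbE
]

for name, lane_gbps in generations:
    print(f"{name}: 4 x {lane_gbps}G = {4 * lane_gbps}GbE")
```

Each generation doubles or more the port speed without widening the module past four lanes, which is the volume pattern the argument for 200GbE rests on.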