Duwenhua,

Thanks for your input. Are you requesting to make a presentation at the logic ad hoc?

Thanks,
Mark

From: Duwenhua [mailto:duwenhua@xxxxxxxxxxxxx]
Hi, Mark
In your presentation
http://www.ieee802.org/3/bs/public/15_05/gustlin_3bs_03_0515.pdf page 4:
If you care mostly about 4x100GbE breakout, then the 4x100G architecture would have an advantage
- With the caveat that the 100G FEC is KR4 and the 400G FEC is KP4
But do people only care about 4x100GbE breakout, or do they also care about 16x25GbE, 8x50GbE and possibly 2x200GbE breakout in the future?
- With the caveat that we don't know the architecture of some of these speeds

My comments:
In the data center we need 4x100GbE breakout; we do not need 16x25GbE or 8x50GbE breakout. This is because future data centers will have three levels of switch devices: TOR switch, spine switch, and core switch (a small sketch follows the list below):
1. TOR switch: downlink is 25GbE connected to many servers (e.g. 48x25GbE); uplink is 100GbE (e.g. 8x100GbE). No 400GbE here.
2. Spine switch: downlink is 100GbE connected to many TORs; uplink is 400GbE (or 100GbE). We need 4x100GbE breakout; no 25GbE/50GbE here.
3. Core switch: downlink is 400GbE (or 100GbE) connected to many spine switches. We need 4x100GbE breakout; no 25GbE/50GbE here.
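To make the argument concrete, here is a minimal Python sketch based only on the port speeds listed above (the tier table and the breakouts_needed helper are my own illustration, not from the presentation). It shows that in this topology a 400GbE port only ever faces 100GbE peers, so 4x100GbE is the only breakout it needs:

# Minimal sketch of the three-tier fabric described above; port speeds per
# tier come from the list, everything else is an illustrative assumption.
TIERS = {
    # tier: (downlink speeds in GbE, uplink speeds in GbE)
    "TOR":   ({25}, {100}),        # 25GbE to servers, 100GbE uplinks, no 400GbE
    "Spine": ({100}, {400, 100}),  # 100GbE to TORs, 400GbE (or 100GbE) uplinks
    "Core":  ({400, 100}, set()),  # 400GbE (or 100GbE) down to spine switches
}

def breakouts_needed(tiers):
    """Breakout modes a 400GbE port must support at each tier."""
    needed = {}
    for tier, (down, up) in tiers.items():
        speeds = down | up
        if 400 in speeds:
            # In this topology a 400GbE port only faces 100GbE peers, so the
            # only breakout it needs is 4x100GbE; 16x25GbE and 8x50GbE never
            # appear on a 400GbE-facing link.
            needed[tier] = {f"{400 // s}x{s}GbE" for s in speeds - {400}}
    return needed

print(breakouts_needed(TIERS))  # {'Spine': {'4x100GbE'}, 'Core': {'4x100GbE'}}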
Why do we need 4x100GbE breakout in the data center (at the spine switch and core switch positions)? There are two reasons:
1. Double port density: on a slot of 1U height and 19-inch width, today we get 36 ports of QSFP28; in the future we get 18 ports of CDFP2 (18x400GbE), which break out to 72x100GbE (rough arithmetic below).
2. Flexibility: dynamic configuration of 400GbE and 100GbE, plug and play.
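As a quick check of the density claim, here is a back-of-the-envelope calculation in Python (the per-slot counts of 36 QSFP28 and 18 CDFP2 come from the text above; the arithmetic itself is just illustration):

# Rough arithmetic behind "double port density"; per-slot port counts are
# taken from the text above, nothing here is normative.
qsfp28_ports = 36   # today's 1U, 19-inch slot: 36 x QSFP28 (1x100GbE each)
cdfp2_ports = 18    # future slot: 18 x CDFP2 (1x400GbE each)

today_100g = qsfp28_ports * 1   # 36 x 100GbE ports
future_100g = cdfp2_ports * 4   # 18 x 400GbE, each broken out as 4x100GbE = 72 x 100GbE

print(today_100g, "x 100GbE today vs", future_100g, "x 100GbE after 4x100GbE breakout")
print("density ratio:", future_100g / today_100g)  # 2.0, i.e. double the 100GbE port density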
So I think that 4x100GbE breakout is very important in the data center, and the FEC architecture (400G or 4x100G) should care mostly about 4x100GbE breakout.