Hi John,

I am suggesting that moving forward we count the number of modules with a specified total bandwidth, and we also count the number of Ethernet-rate optical ports. Again, it's the latter that matters to the network, not the number of boxes.

Below is a format I suggested to LightCounting, using a fictitious illustrative total module count of 100 units of each type. In this example, the 400G module count is 500x. The xGbE port count is 640x 50GbE, 440x 100GbE, 200x 200GbE and 210x 400GbE.
[Footnotes to the example table: * Caveat 1, ** Caveat 2, *** Caveat 3, **** Caveat 4]
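To make the arithmetic of this format concrete, here is a minimal sketch in Python. The module mix and deployment splits below are hypothetical assumptions chosen only so that the output reproduces the illustrative totals quoted above; they are not the actual table sent to LightCounting or its caveats.

# Minimal sketch of the proposed reporting format (hypothetical numbers).
# Counts 400G-class modules by total bandwidth and, separately, Ethernet-rate
# optical ports. The module mix and deployment splits are assumptions chosen
# only to reproduce the illustrative totals above (500x modules; 640x 50GbE,
# 440x 100GbE, 200x 200GbE, 210x 400GbE); they are not reported data.
SHIPMENTS = [
    # (module type, units shipped, assumed deployment split {optical config: fraction})
    ("400G DR4",    100, {"4x100GbE": 1.0}),   # assumed all break-out
    ("400G FR4",    100, {"1x400GbE": 1.0}),
    ("400G LR4",    100, {"1x400GbE": 1.0}),
    ("2x200G FR4+", 100, {"2x200GbE": 1.0}),
    ("400G SR8",    100, {"8x50GbE": 0.8, "4x100GbE": 0.1, "1x400GbE": 0.1}),
]

def count(shipments):
    """Return (total module count, {Ethernet rate: optical port count})."""
    modules, ports = 0, {}
    for _module, units, split in shipments:
        modules += units
        for config, fraction in split.items():
            n, rate = config.split("x")        # e.g. "4x100GbE" -> ("4", "100GbE")
            ports[rate] = ports.get(rate, 0) + units * fraction * int(n)
    return modules, ports

modules, ports = count(SHIPMENTS)
print(f"400G module count: {modules}x")
for rate, n in ports.items():
    print(f"{rate} optical ports: {n:.0f}x")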
From: John DAmbrosia <jdambrosia@xxxxxxxxx>

Chris,

You seem to be agreeing and disagreeing with me at the same time, so I am not sure how to respond. And you have only talked about the module – you didn't address my comment about a port. A port's ability to support these multiple items doesn't actually indicate how it is deployed. So I am stuck here again too.

John

From: Chris Cole <chris.cole@xxxxxxxxxxx>
John,

You are identifying the key challenge.

The first step in solving the problem is to change break-out counting from informal conversations to formal reporting. This gets everyone to recognize the importance of making these numbers accurate. The problem started with 40GbE SR4. There is an informal understanding that the point-to-point vs. break-out-and-shuffle application split is approximately 50/50. So every ten 40G SR4 modules should separately also be counted as 5x 40GbE and 20x 10GbE ports, with appropriate caveats. The problem is actually going to get easier because of the transition of 4x break-out from MPO to high density LC, like SN from Senko or MDC from Corning, currently in standardization in the QSFP-DD MSA.
400G DR4 will primarily be break-out, so these should be counted as 400G switch ports and modules, and 4x 100GbE optical ports. 2x200G FR4+ modules are 400G switch ports and modules, and 2x 200GbE optical
ports. 400G SR8 is probably the toughest. It can be 8x 50GbE, 4x 100GbE, 2x 200GbE and 1x 400GbE. Initially it’s likely mostly 8x 50GbE.
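As a rough sketch of how these per-module-type rules could be recorded (a hypothetical encoding, not part of the proposal), the mapping below lists the optical port configurations named above; the SR8 entry is why an assumed application split, with caveats, is unavoidable.

# Hypothetical encoding of the counting rules above: each 400G module is one
# switch port and one module, but its optical port count depends on how it is
# deployed. Configurations are taken from the text; names are illustrative.
BREAKOUT_OPTIONS = {
    "400G DR4":    ["4x100GbE"],                                      # primarily break-out
    "2x200G FR4+": ["2x200GbE"],
    "400G SR8":    ["8x50GbE", "4x100GbE", "2x200GbE", "1x400GbE"],   # ambiguous
}

def optical_ports(module_type, deployed_config):
    """Optical port count and rate for one module in a given deployment."""
    if deployed_config not in BREAKOUT_OPTIONS[module_type]:
        raise ValueError(f"{module_type} is not deployed as {deployed_config}")
    n, rate = deployed_config.split("x")
    return int(n), rate

# One SR8 deployed as 8x 50GbE: 1x 400G switch port and module, 8x 50GbE optical ports.
print(optical_ports("400G SR8", "8x50GbE"))    # -> (8, '50GbE')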
Network guys, who are our customers, care about the network topology, which is defined by the GbE link rate. I have yet to see a network topology diagram where multiple GbE links are enclosed in a box to indicate that they are packaged on one physical module. The topology diagrams are nearly solid black with 64x or 128x radix crisscross; they don't need any more boxes at the ends. As much as this looms large to us, it is irrelevant to the user. It's just plumbing.
Chris

From: John DAmbrosia <jdambrosia@xxxxxxxxx>
Sent: Thursday, March 14, 2019 10:37 AM

Chris,

While I am very sympathetic to your request, I am also aware of how difficult it is to get. This problem has been going on for some time. As you know, analysts base their numbers on figures provided by vendors, possibly supported by backup information from end-users – not necessarily on the application of the devices sold. For example, for modules or ports that do break-out, how will an analyst know how a device, or for that matter a switch port, will be used?

The answer is they don't know. Fortunately, the analysts I have spoken to usually make some sort of disclaimer related to this fact. I wish there were some way to get to this, but I have not seen a solution in my many conversations with analysts.

John

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Dear 100 Gb/s per Lane Optical PHYs Study Group Participants,

After the conclusion of yesterday's presentations, I had a conversation with Mark Nowell about some of the differences in perspective that come into play with respect to rate designations and rate counting.

Mark pointed out that to a system vendor, what matters is the total port bandwidth, and how it is partitioned is simply a configuration issue, because the switch ASICs behind the port support the full range of MAC rates (with some limitations for the lower MAC rates). For a host card that has 32x QSFP-DD or OSFP slots, with a 12.8T switch ASIC behind it, the only count that matters is 32x 400G ports.

To a module vendor, what matters is the optical port rate, because that is what defines how the module is built. A 400GbE FR4 module is different from a 2x200GbE FR4+ module, and from a 4x100GbE module. With the use of high density LCs, this becomes more important.

To a network architect, all that matters is the MAC rate. Everything that is near and dear to our hearts is just one line on a network graph, and what matters about that line is the rate and the supported radix.

As a result, the presentation discussing rates has been updated with a comment about perspective:
http://www.ieee802.org/3/100G_OPTX/public/Mar19/cole_optx_01c_0319.pdf#page=5

Going forward, when citing volume, we should be careful to describe what is meant by the cited numbers: 1) total system port or total module bandwidth, i.e. xAUI bandwidth, or 2) optical port rate or MAC rate.
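As a hypothetical illustration of the difference between the two definitions (the numbers here are invented for this example):

# Ten 400G DR4 modules, assumed all deployed as break-out (see discussion above).
dr4_modules = 10
definition_1 = dr4_modules * 400   # 1) total module bandwidth: 4000 Gb/s, i.e. 10x 400G ports
definition_2 = dr4_modules * 4     # 2) optical ports at the MAC rate: 40x 100GbE
print(f"{definition_1} Gb/s total module bandwidth; {definition_2}x 100GbE optical ports")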
This way anyone looking at the numbers will know how they relate to what they are working on.

Thank you,
Chris

From: Mark Nowell (mnowell) <00000b59be7040a9-dmarc-request@xxxxxxxxxxxxxxxxx>
Reminder. See you tomorrow.

Mark

On 3/11/19, 1:42 PM, "Mark Nowell (mnowell)" <mnowell@xxxxxxxxx> wrote:

Dear Colleagues,

We'll be starting @ 8:30am on Wednesday morning. We're in the Waddington Conf room, which is the same one that .3cm (400G Multimode) is using on Monday and Tuesday.

Mark