Re: [HSSG] 40G MAC Rate Discussion
Dan,
> "What has been puzzling to me in this debate ever since it started is:
> how can 40Gb server connectivity in the datacenter hurt those of you
> who believe that 100Gb is the right speed for aggregation links in
> service provider networks? I am certainly at a point where I
> understand and respect the needs of your market. All I am asking in
> return is the same."
>
> I believe I may have stated this during the meetings, but will be
> happy to reiterate. First off, I should mention that aggregation is
> not just for service providers, but high performance data centers are
> in need of 100G aggregation *asap* as well.
>
Of course.
> If we have both 40G and 100G MACs defined, it creates an /additional set/
> of IC/product solutions that must be developed.
>
> Imagine the following:
>
> Nx10G + Mx100G + (opt) Fabric I/O (Required for a 10G/100G Server/Uplink solution)
> Nx100G + Fabric I/O (Switch Aggregator Functionality)
>
> So, to solve the customer demand for higher density 10G with uplink
> solutions, we are likely to need _two chips_ and the products they spawn.
>
> Now, if we add 40G into the mix:
>
> Nx10G + Mx40G + (opt) Fabric I/O (Required for a 10G/40G Server/Uplink solution)
> Nx40G + Mx100G + (opt) Fabric I/O (Required for a 40G/100G Server/Uplink solution)
> Nx40G + Fabric I/O (Switch Aggregator Functionality)
> Nx100G + Fabric I/O (Switch Aggregator Functionality)
>
> So, we have essentially doubled the product development requirements
> to meet a wider, and thus more shallow, customer demand for server
> interconnect and aggregation products.
>
I am not sure I follow your reasoning here.
I can see several ways to accomplish all of the above in a single piece
of silicon, or in the same two chips that you had originally. It is just a
matter of having flexibility in the interfaces and overall switching
capacity in the fabric. I don't think we want to architect this over the
reflector, but I believe reasonable people familiar with the art will
be able to figure this out. And by the way, this same piece of silicon
will now have a broader application space and higher volumes, which
must be a good thing, don't you think?
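To make this concrete, here is a toy sketch of what I mean by flexibility
in the interfaces. The lane budget, the port mixes and the code itself are
made-up illustration material from me, not any real or proposed device:

# One hypothetical switch die with a pool of 10Gb SerDes lanes; every port
# is just a bundle of lanes: 1 lane = 10G, 4 lanes = 40G, 10 lanes = 100G
# (100G is shown as 10x10Gb here; a 4x25Gb flavor would be modeled the same way).
LANE_RATE_GBPS = 10
TOTAL_LANES = 64          # made-up lane budget for the example

PORT_LANES = {"10G": 1, "40G": 4, "100G": 10}

def configure(port_counts):
    """Check a port mix against the lane budget and report what it yields."""
    lanes = sum(PORT_LANES[speed] * n for speed, n in port_counts.items())
    if lanes > TOTAL_LANES:
        raise ValueError(f"needs {lanes} lanes, only {TOTAL_LANES} available")
    return f"{port_counts} -> {lanes} lanes, {lanes * LANE_RATE_GBPS} Gb/s"

# The four "different chips" become four configurations of the same die:
print(configure({"10G": 40, "100G": 2}))   # 10G servers / 100G uplinks
print(configure({"10G": 40, "40G": 4}))    # 10G servers / 40G uplinks
print(configure({"40G": 12, "100G": 1}))   # 40G servers / 100G uplink
print(configure({"100G": 6}))              # 100G aggregation

Obviously real silicon has many more constraints than a lane count, but the
point stands: the port mix becomes a configuration choice, not a separate
development.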
Now, if you believe that having a 100Gb-only standard will make it go
to every server on day one, then your argument about "more shallow"
customer demand may have merit. However, some of us beg to differ.
Quite a few of us believe that a 40Gb solution will be a better fit for the
server space as far as the speeds are concerned, and will provide a better
ROI by virtue of re-using a lot of the existing 10Gb PHY technology.
> Now, I am open to the concept that a multi-speed MAC could be defined
> which is scaled upon the PHY lane capacity, but this is a substantial
> change from our current paradigm and the development for that approach
> will likely slow down the standard, add risk on the spec, and delay
> the availability of 100G which I already said is needed in some of our
> high-performance data centers today.
>
What is so different or unusual about this new paradigm that it will
"slow us down", "add risk on the spec" and "delay the availability of 100G"?
Last time I checked, we had proposals for 10Gb-per-lane rates (as in 10x10Gb)
and for x4 lane configurations (as in 4x25Gb). How hard is it to combine the
two and add another one, as in 4x10Gb?
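The arithmetic I have in mind is no more complicated than this (a toy
illustration of the lanes-times-rate view, nothing more):

# If the MAC/PCS rate is simply lanes x lane rate, the three lane
# configurations mentioned above are instances of one parameterization.
def mac_rate_gbps(lanes, lane_rate_gbps):
    return lanes * lane_rate_gbps

print(mac_rate_gbps(10, 10))   # 10 x 10Gb lanes -> 100
print(mac_rate_gbps(4, 25))    #  4 x 25Gb lanes -> 100
print(mac_rate_gbps(4, 10))    #  4 x 10Gb lanes -> 40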
Details, please.
Shimon.