
[802.3EEESG] FW: [HSSG] 40G MAC Rate Discussion



 FYI, some of you may not be on the HSSG reflector.

We'll see what discussion ensues.

Bill


 
Bill Woodruff, (c) 408 582-2311
Aquantia - VP Marketing, 408 228-8300 x202
 

-----Original Message-----
From: Bill Woodruff 
Sent: Wednesday, April 04, 2007 3:01 PM
To: 'Dove, Dan'; STDS-802-3-HSSG@listserv.ieee.org
Subject: RE: [HSSG] 40G MAC Rate Discussion

As a spectator to this thread, I'd like to pose an observation....

The EEE SG meets in parallel with HSSG, and we're discussing ways to
adapt link speed without losing link, with the goal of reducing system
power.
PHY power is only one element of the equation.  

I believe the faster and more agile (and transparent) the shift between
speeds is, the greater the opportunity to enjoy power savings.
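
To illustrate why transition speed matters (a rough sketch with made-up
numbers, not measurements from any system): if a link drops to a
lower-power rate during idle gaps, any time spent shifting speeds is
time not spent saving power, so slow transitions erase the benefit for
short idle periods. In Python, purely for illustration:

# Illustrative only: how speed-shift time eats into the power-saving
# window. All numbers below are assumptions, not measured values.

IDLE_GAP_S   = 0.010   # assumed idle period between traffic bursts (10 ms)
HIGH_POWER_W = 10.0    # assumed system power at the full link rate
LOW_POWER_W  = 3.0     # assumed system power at the reduced rate

for transition_s in (0.005, 0.001, 0.0001):   # time for one speed shift
    usable_s = max(IDLE_GAP_S - 2 * transition_s, 0)  # down-shift + up-shift
    saved_j = (HIGH_POWER_W - LOW_POWER_W) * usable_s
    print(f"transition {transition_s * 1e3:.1f} ms: "
          f"{saved_j * 1e3:.1f} mJ saved per idle gap")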

Given the granularity that is part of most of the discussions underway
in HSSG, should EEE-type discussions be considered?

Perhaps this issue is not so important for transport-type applications.
But as the fruits of this effort extend deeper into the enterprise, this
capability may be of interest.

Comments?

Thanks, Bill


 
Bill Woodruff, (c) 408 582-2311
Aquantia - VP Marketing, 408 228-8300 x202
 

-----Original Message-----
From: Dove, Dan [mailto:dan.dove@HP.COM]
Sent: Wednesday, April 04, 2007 2:47 PM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [HSSG] 40G MAC Rate Discussion

Shimon,

::Details, please.

I agree that we cannot architect a solution via the reflector; however,
I can talk a bit about the tradeoffs of a multi-speed link solution
versus a single-speed link solution.

If we build two multi-rate-link chips, one nx10G/mx40G/px100G and one
nx40G/100G, we can then build the four product classes I listed below.
However, these chips are going to be more complicated than the
originally proposed devices by definition. It's much more difficult to
obtain the perfect resource balance for a switch chip that has different
link speed requirements on individual ports. How wide a bus is defined,
or what speed it runs at, is often based on the highest speed used, so
it would be overprovisioned for the 10G/40G and 40G/100G links.
Sometimes this is OK, but usually only when overprovisioning is
inexpensive (as in 10/100/1000), not in a system that will be pushing
the boundaries as it is.
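
To put rough numbers on the overprovisioning point (a back-of-the-envelope
sketch; the bus width, clock, and rates are illustrative assumptions, not
any particular chip design), in Python:

# Hypothetical example: datapath utilization when a per-port bus is
# provisioned for the highest link speed the port might run.

BUS_WIDTH_BITS = 64          # assumed internal bus width per port
BUS_CLOCK_HZ   = 1.5625e9    # assumed clock chosen to carry 100 Gb/s

bus_capacity_bps = BUS_WIDTH_BITS * BUS_CLOCK_HZ   # = 100 Gb/s

for link_speed_gbps in (10, 40, 100):
    utilization = (link_speed_gbps * 1e9) / bus_capacity_bps
    print(f"{link_speed_gbps:>3}G link on a 100G-provisioned port: "
          f"{utilization:.0%} of the datapath used")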

The product mix is greater, and therefore we have to determine which
product to do first, or whether some classes of switch products will
have such low volume that we would not do them at all. Short of a very
high-end server that might need 40G, I see the real volume being at 10G
with 100G uplinks, and the next highest volume being nx10G aggregators.
Eventually we would migrate to 100G server connects, but by then we will
be talking about 1T uplinks. :)

From the perspective of added delay/risk to the standard, I believe that
specifying multiple options is inherently more complex and more likely
to encounter mistakes, not to mention simply adding pages to the spec
that need to be written, reviewed and finalized.

Concurrent with the delay added to the schedule is the likelihood (IMO)
that the 40G opportunity will be waning and 100G will be gearing up, but
delayed for those applications that can use it.

I see some interesting benefit to the concept of a multi-rate MAC that
can adapt to its PHY channel width, but I see a better use for such a
beast in expanding beyond 100G rather than falling short of it.
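
To make the "MAC scaled on PHY lane capacity" idea concrete (a minimal
sketch only; it simply assumes the MAC rate tracks lane count times
per-lane rate, which is not something the group has defined), the lane
mixes Shimon mentions below would map out like this in Python:

# Hypothetical sketch: a multi-rate MAC whose operating rate is derived
# from the PHY lane configuration (lanes x per-lane rate). The lane
# mixes are the ones discussed on the reflector; the model itself is an
# assumption, not anything specified by the study group.

def mac_rate_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate MAC rate if it simply follows the PHY channel width."""
    return lanes * lane_rate_gbps

for lanes, lane_rate in ((10, 10), (4, 25), (4, 10)):
    print(f"{lanes} x {lane_rate}G lanes -> "
          f"{mac_rate_gbps(lanes, lane_rate)}G MAC")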

Dan

------------ Previous Message Below ------------
 

::-----Original Message-----
::From: Shimon Muller [mailto:Shimon.Muller@Sun.COM]
::Sent: Tuesday, April 03, 2007 5:32 PM
::To: STDS-802-3-HSSG@listserv.ieee.org
::Subject: Re: [HSSG] 40G MAC Rate Discussion
::
::Dan,
::
::> "What has been puzzling to me in this debate ever since it ::started
is:
::> how can 40Gb server connectivity in the datacenter hurt ::those of
you ::> who believe that 100Gb is the right speed for aggregation links
in ::> service provider networks? I am certainly at a point where I ::>
understand and respect the needs of your market. All I am asking in ::>
return is the same."
::>
::> I believe I may have stated this during the meetings, but will be
::> happy to reiterate. First off, I should mention that aggregation is
::> not just for service providers, but high performance data centers are
::> in need of 100G aggregation *asap* as well.
::>
::
::Of course.
::
::> If we have 40G and 100G MACs defined, it creates an /additional set/
::> of IC/product solutions that must be developed.
::>
::> Imagine the following:
::>
::> Nx10G + Mx100G + (opt) Fabric I/O (Required for a 10G/100G
::>   Server/Uplink solution)
::> Nx100G + Fabric I/O (Switch Aggregator Functionality)
::>
::> So, to solve the customer demand for higher density 10G with uplink
::> solutions, we are likely to need _two chips_ and the products they
::> spawn.
::>
::> Now, if we add 40G into the mix:
::>
::> Nx10G + Mx40G + (opt) Fabric I/O (Required for a 10G/40G
::>   Server/Uplink solution)
::> Nx40G + Mx100G + (opt) Fabric I/O (Required for a 40G/100G
::>   Server/Uplink solution)
::> Nx40G + Fabric I/O (Switch Aggregator Functionality)
::> Nx100G + Fabric I/O (Switch Aggregator Functionality)
::>
::> So, we have essentially doubled the product development requirements
::> to meet a wider, and thus more shallow, customer demand for server
::> interconnect and aggregation products.
::>
::
::I am not sure I follow your reasoning here.
::
::I can see several ways for accomplishing all of the above in a single
::piece of silicon, or the same two that you had originally. It is just
::a matter of having flexibility in the interfaces and overall switching
::capacity in the fabric. I don't think we want to architect this over
::the reflector, but I believe reasonable people familiar with the art
::will be able to figure this out. And by the way, this same piece of
::silicon will now have a broader application space and higher volumes,
::which must be a good thing, don't you think?
::
::Now, if you believe that having a 100Gb-only standard will make it go
::to every server on day one, then your argument of a "more shallow"
::customer demand may have merit. However, some of us beg to differ.
::Quite a few of us believe that a 40Gb solution will be a better fit
::for the server space as far as the speeds are concerned, and provide a
::better ROI by virtue of re-using a lot of the existing 10Gb PHY
::technology.
::
::> Now, I am open to the concept that a multi-speed MAC could be defined
::> which is scaled upon the PHY lane capacity, but this is a substantial
::> change from our current paradigm and the development for that
::> approach will likely slow down the standard, add risk on the spec,
::> and delay the availability of 100G which I already said is needed in
::> some of our high-performance data centers today.
::>
::
::What is so different or unusual in this new paradigm that will "slow
::us down", "add risk on the spec" and "delay the availability of 100G"?
::Last time I checked, we had proposals for 10Gb rates per lane (as in
::10x10Gb) and x4 lane configurations (as in 4x25Gb). How hard is it to
::combine the two and add another one, as in 4x10Gb?
::
::Details, please.
:: 
::Shimon.
::