FW: 32bit interface is not Slow!
- To: "'Simon L. Sabato'" <simons@xxxxxxxxxx>
- Subject: FW: 32bit interface is not Slow!
- From: "Rogers, Shawn" <s-rogers@xxxxxx>
- Date: Wed, 9 Jun 1999 08:45:21 -0500
- Cc: stds-802-3-hssg@xxxxxxxx
- Sender: owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx
Simon, as one of those responsible for inflicting RMII on the 100BT world,
I'm concerned that a configurable interface (or multiple interfaces) will be
interpreted as a requirement to support both or all of them on a single
device. While that was possible in 100BT thanks to the lower speeds, the
speeds we're talking about here cross into different technologies. I'm
not a circuit designer, so I'll ask the question:
Can small signal swing and TTL levels be supported with a single buffer
design?
- Shawn
Shawn,
You didn't misunderstand... I should have been clearer when I was
throwing MHz numbers around that I meant clock rate, and that data rate
would be half that (in terms of switching speed). As you say, meeting
an 8ns bit time isn't hard, but it isn't trivial either. Some of the
messages sound like the 3.2ns option is just taking the easy way out,
whereas I feel that given available CMOS technology it's still going to
be tough.
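For reference, the 3.2ns figure and the clock-versus-data-rate distinction
come straight from the arithmetic. Here's a minimal back-of-the-envelope
sketch, assuming a flat 10 Gb/s data rate and no coding overhead (the
variable names are illustrative only, not anything from a spec):

    # Back-of-the-envelope numbers for a 32-bit-wide 10 Gb/s interface.
    line_rate_bps = 10e9          # assumed payload rate, coding overhead ignored
    width_bits    = 32

    transfer_rate_hz = line_rate_bps / width_bits  # 312.5e6 transfers per second
    bit_time_ns      = 1e9 / transfer_rate_hz      # 3.2 ns to launch and latch each word
    sdr_clock_mhz    = transfer_rate_hz / 1e6      # 312.5 MHz if data is latched on rising edges only
    ddr_clock_mhz    = sdr_clock_mhz / 2           # 156.25 MHz if both clock edges are used ("Double Clock")
    data_toggle_mhz  = transfer_rate_hz / 2 / 1e6  # a data pin changing every word looks like 156.25 MHz

    print(bit_time_ns, sdr_clock_mhz, ddr_clock_mhz, data_toggle_mhz)
    # -> 3.2 312.5 156.25 156.25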
My reason for bringing up Rambus was to point out that people are doing
10+Gb over 16-bit channels in high volume processes today. It's hard,
it takes custom layout, small signal levels, termination, expensive
testers, etc, but it can be done. By the time there's volume behind
10Gig it could probably migrate to a 16-bit interface using more
conventional techniques (both for design and *test* which you rightly
brought up).
But my basic feeling is that with anything narrower than 32 bits, given
an ASIC-type design flow in CMOS, this interface may be as hard to use as
the physical layer, or harder. So, to me the real question is
whether to specify just the 32-bit interface, more than one interface,
or a configurable interface.
The idea of a configurable interface is interesting, though I wonder
whether we really know what is going to be the most cost effective
driver strategy (signal levels, output versus input delays, etc) in the
future. We would need to think through all of that to truly define a
"future" interface today.
I think we should define the basic protocol to be used at various byte
widths, plus the signal levels and timing budgets for 32-bit
implementations. This shouldn't be too tough... migrating forward is simply
a matter of clocking it faster and using fewer data bits (and fewer byte-valid's).
Unless we try to serialize other signals (like SMII does) it should be
pretty obvious how to define a "compressed" pinout.
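One way to picture that "compressed" pinout is to hold the protocol constant
and let only the width, the clock rate, and the number of byte-valid pins
change. A rough sketch of the scaling, again assuming a flat 10 Gb/s rate,
double-clocked (DDR) data, and one byte-valid per byte lane (all assumptions
of this sketch, not anything specified so far):

    # Hypothetical scaling of one MII-style 10 Gb/s interface across byte widths.
    # Assumes 10 Gb/s payload, data sampled on both clock edges (DDR),
    # and one byte-valid signal per byte lane.
    LINE_RATE_BPS = 10e9

    for width_bits in (32, 16, 8):
        transfers_per_s = LINE_RATE_BPS / width_bits
        bit_time_ns     = 1e9 / transfers_per_s
        ddr_clock_mhz   = transfers_per_s / 2 / 1e6
        byte_valids     = width_bits // 8
        print(f"{width_bits:2d} bits: {bit_time_ns:.1f} ns bit time, "
              f"{ddr_clock_mhz:6.2f} MHz DDR clock, {byte_valids} byte-valid pin(s)")

    # 32 bits: 3.2 ns bit time, 156.25 MHz DDR clock, 4 byte-valid pin(s)
    # 16 bits: 1.6 ns bit time, 312.50 MHz DDR clock, 2 byte-valid pin(s)
    #  8 bits: 0.8 ns bit time, 625.00 MHz DDR clock, 1 byte-valid pin(s)

These are the same 3.2/1.6/0.8 ns bit times Shawn walks through below.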
-Simon
"Rogers, Shawn" wrote:
>
> Simon, I agree with your point that a 32-bit interface is not slow. However,
> a 32-bit interface would require buffer frequencies of 156.25 MHz, which is
> within today's ASIC capability. The one exception is the clocks (Tx and Rx)
> which, if Tsu/Th were referenced to the rising edge only, would be required
> to run at 312.5 MHz. The alternative is to have Tsu/Th of data done around
> both rising and falling edges (aka Double Clock) to keep the clock frequency
> within CMOS limits.
> The non-trivial part about this is the bit time. By this I mean you have to
> get the data across the interface (driven from one device and latched on the
> other) in around 3.2 ns. Currently 802.3z has an 8 ns bit time, and there
> have been some challenges with even that. Still, I believe a 32-bit
> interface is within today's technical feasibility and, other than the high
> number of pins, is cost effective.
>
> Looking forward:
>
> A 16-bit interface would require a 1.6 ns bit time. These 312.5 MHz buffer
> frequencies require small-signal-swing technology (RAMBUS is one example,
> though I do not endorse this), severely limiting trace lengths and I'm sure
> a lot of other things. The clocks would likely need to be differential. It
> is challenging, but is do-able. I question whether the cost benefit
> outweighs the complexity and limitations in the near term.
>
> An 8-bit interface would require a 0.8 ns bit time. I don't know of any
> technology capable of doing this single-ended today - a great IP play if
> someone has this. Within the next few years this is likely to require
> differential signaling, so you lose the pin-reduction benefit. Also
> necessary to consider are the Automated Test Equipment (ATE) limitations.
> Above 800 MHz data rates, ATE is very limited and VERY expensive!
>
> My gut feeling is that a 32-bit interface with an option for 16 is within
> the scope of the standard. However, to consider an 8-bit interface, I would
> expect someone to have to prove technical feasibility first.
>
> Regards,
> Shawn
>
> -----Original Message-----
> From: Simon L. Sabato [mailto:simons@xxxxxxxxxx]
> Sent: Tuesday, June 08, 1999 12:08 PM
> To: Jaime Kardontchik
> Cc: 'stds-802-3-hssg@xxxxxxxx'
> Subject: Re: 10G-BASE-T question
>
> 10 Gig'ers,
>
> Even before coding, a 32-bit interface already requires I/O speeds of
> 300+ MHz. Is it even possible (or will it be in the required timeframe)
> to run a non-differential synchronous bus at 1.25GHz across a PCB at
> reasonable cost/EMI? Also, will one fourth the pins running four times
> as fast be any quieter for the receiver? (This isn't a rhetorical
> question; I'm out of my turf.)
>
> In 100Mb the MII interface evolved to lower pincount versions as
> "standard" IC processes improved. This same model could be followed. I
> don't think that we'll build extremely pad-limited chips out of a desire
> to stick with a standard interface.
>
> An alternative would be to define a lower-pincount interface from the
> start. I think that we'd end up seeing GaAs or SiGe "bridge" chips
> which then take the narrow/fast bus and convert it to a wide/slow (300+
> MHz? slow?) bus. This "bridge" could then be sucked into the chip
> holding the MAC as 1GHz+ I/O busses become available. This way we could
> avoid the current situation in 100Mb where many are moving away from the
> IEEE standard MII in search of more cost effective alternatives. This
> begs the question, is it more important for the standard to include an
> interface which is cost effective today, or more viable in the future?
> It is my opinion that the former helps a successful introduction, whereas
> the latter will tend to take care of itself.
>
> Perhaps a 600+MHz 16-bit interface would be a good compromise. RDRAM
> interfaces are (barely?) manufacturable in volume today... if I'm not
> mistaken they offer 800MHz across 16-bits. They built a nice patent
> portfolio on the technology required to do this, although they are
> building a multidrop bus rather than a point-to-point connection. They
> also require hard IP cores, custom to the foundry, to get these speeds
> in CMOS. By the time 10Gig chips go into development, 600+MHz may be
> quite reasonable in a more standard design.
>
> I'm concerned, though, that above 300MHz too much time may be spent
> specifying, simulating, and trying to build the 10GMII interface rather
> than the other side of the PHY. And, at least at first, the chips for
> 10G systems are going to be plenty big enough to support the extra
> pins. Thoughts?
>
> -Simon L. Sabato
> -Level One Communications
>
> Jaime Kardontchik wrote:
> >
> > Rogers,
> >
> > The figure on page 4 mainly emphasizes the maximum clock used in the
> > 10G-BASE-T architecture, 1.25 GHz, and the maximum baud rate
> > in the optical fiber, 1.25 Gbaud/sec.
> >
> > The actual width of the MII interface is a question open to discussion.
> >
> > Shimon Muller (Sun) suggested using a 32-bit wide interface (64-bit
> > wide if we include both the Tx and Rx). Dan Dove (HP), in the audience,
> > suggested that if we use a 32-bit wide interface we might end up with
> > a chip that is all I/Os surrounding a tiny design, and he suggested
> > taking an aggressive approach here and sticking to an 8-bit wide interface.
> >
> > I tend to agree with Dan for the same reason and for another one:
> > 32 TTL-type output drivers at the Rx would introduce a lot of
> > switching noise that could affect the analog blocks in the chip,
> > including the jitter of the transmitter.
> >
> > Jaime
> >
> > Jaime E. Kardontchik
> > Micro Linear
> > San Jose, CA 95131
> > email: kardontchik.jaime@xxxxxxxxxxx
> >
> > "Rogers, Shawn" wrote:
> >
> > > Jaime, I have a question concerning your presentation in Idaho. On
> > > page 4 of your presentation you state the following when comparing your
> > > 10G-Base-T proposal to 802.3ab (1000Base-T):
> > >
> > >   1000Base-T            10G-Base-T
> > >   GMII - 8 bits wide    10GMII - same
> > >
> > > Are you advocating a byte-wide chip-to-chip interface between the PCS
> > > and the Reconciliation sublayer in the MAC, running at 1.25 GHz?
> > >
> > > Regards,
> > > Shawn
> > >
> > > -----Original Message-----
> > > From: Jaime Kardontchik [mailto:kardontchik.jaime@xxxxxxxxxxx]
> > > Sent: Monday, June 07, 1999 5:57 PM
> > > To: stds-802-3-hssg@xxxxxxxx
> > > Subject: 10G-BASE-T presentation
> > >
> > > Hello 10G'ers,
> > >
> > > For those that were not able to attend the Idaho meeting:
> > >
> > > The presentation on the 10G-BASE-T architecture given
> > > in Idaho included more material than the original posted
> > > two weeks ago.
> > >
> > > The updated presentation as given in Idaho is now on the
> > > web site, replacing the old one:
> > >
> > > http://grouper.ieee.org/groups/802/3/10G_study/public/june99
> > >
> > > Jaime
> > >
> > > Jaime E. Kardontchik
> > > Micro Linear
> > > San Jose, CA 95131
> > > email: kardontchik.jaime@xxxxxxxxxxx