Re: [802.3EEESG] 10BASE-T question
Joseph-
At 07:54 PM 3/28/2007, Joseph Chou wrote:
Hi Pat,
Here are my two cents.
I am guessing that the reason to lower the output voltage of 10BaseT
is to reduce its power consumption so that a higher-speed PHY can be
downgraded to 10BaseT.
That would be the reason.
However, even if we modify the standard to allow a lower output
voltage for 10BaseT, we will probably end up with a 10BaseT PHY whose
power consumption is comparable to that of 100BaseT.
No, the idea would be that with revised specs AND new designs based on
contemporary supply voltages, the power consumption would be (a) lower
than 100BASE-TX and (b) significantly lower when the 10 Meg PHY was in
IDL than when it was transmitting data.
It would lose the advantage of the speed change. The benefit of
changing the spec could turn out to be a new lower-power 10BaseT for
the case where it drives the longest CAT 3 cable and thus only 10 Mbps
can be negotiated successfully.
No. Pat's proposal was to drop Cat3 compatibility and design the new
one around Cat5 cable. Cable that was worse than Cat3 (AT&T DIW) was
the design point for 10BASE-T. Cat5 is significantly better than Cat3
or DIW in every way, and there is very little true Cat3 left these
days; nobody has installed it in new installations for years. In
particular, 10BASE-T has enough drive for about 180 meters of Cat5
cable. If we made no other change than to cut the drive level back to
that required for 100 meters, we should be able to save quite a bit of
power. There are other tricks we could do for additional power saving
once we have the design open. The rule would be that it has to be
backward compatible with existing 10BASE-T over Cat5 at up to 100
meters.
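To put rough numbers on that (a back-of-the-envelope sketch only, not
a link budget, using the ISO/IEC 11801 Cat5 attenuation figure of
6.6 dB/100 m at 10 MHz):

    # Rough sketch of the headroom freed by designing for 100 m of Cat5
    # instead of the ~180 m that 10BASE-T can drive today. The
    # 6.6 dB/100 m figure is the ISO/IEC 11801-1995 Cat5 attenuation at
    # 10 MHz; this is an illustration, not a link budget.
    attn_per_100m_db = 6.6             # Cat5 attenuation at 10 MHz
    excess_reach_m = 180 - 100         # cable we no longer need to drive
    headroom_db = attn_per_100m_db * excess_reach_m / 100
    voltage_ratio = 10 ** (-headroom_db / 20)   # dB -> amplitude ratio
    power_ratio = voltage_ratio ** 2
    print(f"headroom:      {headroom_db:.1f} dB")  # ~5.3 dB
    print(f"drive voltage: x{voltage_ratio:.2f}")  # ~0.54 of today's
    print(f"drive power:   x{power_ratio:.2f}")    # ~0.30 of today's

That is, the line driver alone might need only about a third of its
present power, before any of the other tricks.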
By the same token, so far no one has considered adding a Power Back
Off mode to 1000BaseT and 100BaseT for shorter cable lengths, because
the power savings may be very marginal.
Again, this would be a two-way win:
- Lower drive power because of the significantly lower attenuation
- The very low duty cycle and power of the 10 Meg IDL when there is no
traffic
And by signalling between packets at a lower frequency we will be
operating in a lower attenuation region of the spectrum, which should
produce lower power requirements.
I am afraid that the incentive for changing the 10BaseT spec is not as
great as for devising an electrical idle mode so that all PHY modes
(10, 100, 1000, 10G) can be switched to it.
This has sufficient promise to be worth investigation.
It is, however, non-trivial.
The requirements, as I see them, would be:
- Maintain link integrity state information
- Not interfere with the PoE probe pulse
- Support the code transfer requirements of Auto-Negotiation
- Meet the bandwidth requirements for keeping the DSP parameters
current in the higher speed PHYs
- Probably needs to have some sort of baud width compatibility with
current PHYs
- Be enough lower power to be worth doing
Additionally, there would be a strong push to add some other
"features":
- Rapid link integrity response time, sufficiently improved to support
fast switch-over to a redundant link
- Reliable and speed-consistent mechanism for far-end fault detection
- Combining the two items above
By the way, in order to meet the template of Fig 14-9, the transmitter
normally needs to pre-emphasize the waveform for the fat bit (20 ns,
or a 2.5 MHz carrier). I don't have simulation results at hand, so I
am not sure whether, with the same amount of pre-emphasis (preset in
the IC design) used for the CAT 3 model test, the waveform still fits
the template when we use a CAT 5 cable model. We may need to consider
the attenuation difference between 2.5 MHz and 5 MHz for both cable
models.
We certainly would.
ISO/IEC 11801-1995 gives the following values for cable attenuation
(dB/100 m):

Freq     Cat3   Cat5
1 MHz    2.6    2.1
4 MHz    5.6    4.3
10 MHz   9.8    6.6
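For your 2.5 MHz vs. 5 MHz question, here is a rough interpolation
from that table, assuming attenuation scales roughly as the square
root of frequency (a common approximation for twisted pair in this
range; the real cable models should be checked):

    # Estimate Cat3/Cat5 attenuation at 2.5 and 5 MHz by scaling the
    # tabulated 4 MHz values, assuming attenuation ~ sqrt(f). Rough
    # only; the tabulated 10 MHz points show this model reads slightly
    # low at the high end.
    import math

    attn_4mhz = {"Cat3": 5.6, "Cat5": 4.3}  # dB/100 m at 4 MHz (above)

    def attn(cat, f_mhz):
        return attn_4mhz[cat] * math.sqrt(f_mhz / 4.0)

    for f in (2.5, 5.0):
        diff = attn("Cat3", f) - attn("Cat5", f)
        print(f"{f} MHz: Cat3 {attn('Cat3', f):.1f}  "
              f"Cat5 {attn('Cat5', f):.1f}  diff {diff:.1f} dB/100 m")

So the Cat3/Cat5 difference is smaller at 2.5 MHz than at 5 MHz, which
is the asymmetry the pre-emphasis would have to absorb.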
Best regards,
Geoff Thompson
Best Regards,
-Joseph Chou
-----Original Message-----
From: Pat Thaler
[mailto:pthaler@BROADCOM.COM]
Sent: Wednesday, March 28, 2007 6:37 PM
To: STDS-802-3-EEE@listserv.ieee.org
Subject: Re: [802.3EEESG] 10BASE-T question
Mike,
I think that some adjustment to the 10BASE-T transmit voltage would be
entirely appropriate.
The 10BASE-T output voltage spec (IEEE 802.3-2005 14.3.1.2.1) currently
requires that the driver produce a peak differential voltage of 2.2 to
2.8 V into a 100 Ohm resistive load - a very normal output voltage when
the standard was written in the late 80's, but pretty high nearly 20
years later. This voltage allowed 10BASE-T to coexist in bundled Cat 3
cable with analog phone ringers. The transient when an analog phone
ringer goes off-line in that situation could produce over 250 mV.
That high output voltage is not necessary over Cat 5 or better
cable.
The simple change would be to add a differential output voltage spec for
operation over Cat 5 or better cable. In that case, remove the minimum
voltage spec for peak differential voltage into a 100 Ohm resistive load.
One would still keep the maximum voltage spec of 2.8 V, or perhaps
substitute a lower maximum. Change the requirement for the Figure 14-9
output voltage template to be the signal produced at the end of a
worst-case Cat 5 cable instead of at the end of the (Cat 3) twisted-pair
model.
This should be fully backwards compatible with existing 10BASE-T
compliant PHYs over Cat 5 cable. The newly specified transmitters will
produce a signal over Cat 5 cable that is within the range of signal that
the original 10BASE-T produces over the Cat 3 cable channel it specified.
That template provides a minimum eye opening of 550 mV. If I plugged
the numbers into my calculator correctly, the attenuation difference
between Cat 5 and Cat 3 cable at 10 MHz is more than 4 dB, so this
should allow the transmit voltage to drop by that much. Making this
change should be very little work.
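As a sketch of the arithmetic (the 2.2 V and 2.8 V limits are the
14.3.1.2.1 values; the 4 dB is the attenuation difference above):

    # What ~4 dB of spare channel attenuation buys at the transmitter.
    # Worst-case Cat 5 attenuates roughly 4 dB less than the Cat 3
    # channel the Figure 14-9 template was built around, so the
    # transmit level can come down by about 4 dB and the receiver
    # still sees the 550 mV eye.
    drop_db = 4.0
    scale = 10 ** (-drop_db / 20)      # dB -> amplitude ratio, ~0.63
    print(f"amplitude scale: {scale:.2f}")
    print(f"2.2 V minimum -> {2.2 * scale:.2f} V")  # ~1.39 V peak
    print(f"2.8 V maximum -> {2.8 * scale:.2f} V")  # ~1.77 V peak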
A more aggressive change, one that would require real work, would be
to determine what receive voltage today's receivers could tolerate;
they can probably handle a smaller eye opening, especially a
1000BASE-T receiver operating in a slowed-down mode. But in that case,
one would either need to use the lower eye opening only when stepped
down by EEE, or add negotiation for low-voltage 10BASE-T to
Auto-Negotiation, because it wouldn't ensure backwards compatibility
with classic 10BASE-T receivers.
I think the fully-backwards compatible change would be pretty easy to
justify. To summarize, for operation over the channels specified by
100BASE-TX, 1000BASE-T and 10GBASE-T, delete the spec for minimum voltage
into a 100 Ohm load and change the test condition for the Figure 14-9
voltage template to be over a worst-case 100BASE-TX channel.
Regards,
Pat
At 01:46 PM 3/28/2007, Mike Bennett wrote:
>Folks,
>
>For those of you who were able to attend the March meeting, you may
>recall we had a discussion on 10BASE-T (in the context of having a low
>energy state mode) and what we might change to specify this, which
>included possibly changing the output voltage. Concern was raised that
>the work required to specify a new output voltage for 10BASE-T would
>far outweigh the benefit. Additionally, there was a question regarding
>the use of 100BASE-TX instead of doing anything with 10BASE-T. Would
>someone please explain just how much work it would be to change
>10BASE-T and what the benefit would be compared to using 10BASE-T with
>the originally specified voltage or 100BASE-TX for a low energy (aka
>"0BASE-T" or "sleep") state?
>
>Thanks,
>
>Mike