Hi Yair,
The picture is taken right at the output of the MDI. It
would be even worse at the end of a 100m section of cable because the high
frequencies would be taken out, but the low frequency distortion would
remain.
The period is readable on the scope plots. It's in the tens of microseconds, if I recall correctly.
The BLW is not due to high-frequency loss; it's due to low-frequency loss and an alignment of the data pattern with the 4B5B encoding and the 11-bit scrambler. This is why introducing a high-pass filter is problematic.
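A minimal way to see the mechanism, separate from Dan's actual setup (the pattern, pole frequency, and levels below are assumed purely for illustration), is to push a deliberately DC-imbalanced symbol stream through a first-order high-pass standing in for the AC coupling. The baseline sags during the imbalanced run and then overshoots well past the nominal level when balanced data resumes. A sketch in Python:

    # Rough baseline-wander illustration: a DC-imbalanced symbol stream through
    # a first-order high-pass standing in for the AC-coupling magnetics.
    # Every number here is an assumption for the example, not a measured value.
    import math

    F_SYMBOL = 125e6            # 100BASE-TX symbol rate (125 Mbaud after 4B5B)
    F_POLE = 45e3               # assumed coupling-network pole (illustrative)
    DT = 1.0 / F_SYMBOL

    # A real killer packet gets its imbalance from the payload lining up with
    # the 4B5B code groups and the 11-bit scrambler; here the imbalance is just
    # faked as a long +1-biased run followed by a balanced +1/-1 alternation.
    pattern = [1.0] * 4000 + [1.0, -1.0] * 4000

    # First-order high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])
    a = 1.0 / (1.0 + 2.0 * math.pi * F_POLE * DT)
    y, y_prev, x_prev = [], 0.0, 0.0
    for x in pattern:
        y_prev = a * (y_prev + x - x_prev)
        x_prev = x
        y.append(y_prev)

    droop = 1.0 - min(y[:4000])        # baseline sag during the biased run
    peak = max(abs(v) for v in y)      # worst excursion vs. nominal +/-1
    print(f"baseline droop during the biased run: {droop:.2f} x nominal amplitude")
    print(f"worst excursion: {peak:.2f} x nominal, above a trigger set just over the normal data amplitude")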
To reproduce, set your scope trigger above the normal data
amplitude and it will trigger when a BLW event occurs.
Dan

From: Darshan, Yair [mailto:YDarshan@microsemi.com]
Sent: Thursday, January 10, 2008 8:29 AM
To: Dove, Dan; STDS-802-3-POEP@LISTSERV.IEEE.ORG
Subject: RE: [8023-POEP] Baseline Bucket ad hoc

Hi Dan,

Thanks, it helps.

In the picture of the BLW that you sent me, what is the period of the phenomenon? I guess that when you load the Smartbit with the BLW packets, it runs periodically, but the BLW is generated at a different rate due to the cable constant (am I understanding it correctly?)

Does the BLW depend on cable length? What is the minimum cable length at which I should expect to see BLW?

Thanks,
Yair
From: Dove, Dan [mailto:dan.dove@hp.com]

Hi Yair,

Unfortunately, 100BASE-TX has strong signal content below 1 MHz. When you run it through a high-pass filter, it will become distorted. The amount of distortion is a direct function of the pole location. If the 350uH ad hoc were to come up with a spec for the MDI that eliminated the need to spec inductance, the *implementation* of a product would still be constrained to define that pole frequency.

In other words, suppose we came up with a template measurement. You could implement a compliant product by using a 350uH inductance with no silicon compensation circuits, or perhaps you could use 100uH with some silicon compensation, but that compensation would require the design to have a specific pole frequency or it would either over-compensate or under-compensate.

So, if we had a silicon compensation circuit designed for 100uH inductance, and then you insert another pole at some frequency down there, the result would be that your compensation was no longer tuned properly.

This problem is very specific to 100BASE-T, as 10BASE-T has no baseline wander, and 1000BASE-T uses a scrambler polynomial that is so large it is essentially impossible to get baseline wander content sufficient to cause a measurable bit error rate.

I hope this helps.
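As a rough back-of-the-envelope on the pole location Dan describes (assuming the coupling inductance against the nominal 100 ohm line impedance forms a simple first-order high-pass, which is a simplification rather than the spec's measurement model):

    # Back-of-the-envelope pole location for an AC-coupling high-pass,
    # assuming f_pole = R / (2*pi*L) with R = 100 ohm line impedance.
    # Illustrative simplification only.
    import math

    R = 100.0                          # ohms, nominal 100BASE-TX line impedance
    for L in (350e-6, 100e-6):         # 350 uH vs. a hypothetical 100 uH design
        print(f"L = {L * 1e6:.0f} uH -> pole near {R / (2 * math.pi * L) / 1e3:.0f} kHz")

    # ~45 kHz for 350 uH vs. ~159 kHz for 100 uH: a compensation circuit tuned
    # for one of these will be mis-tuned if another pole appears at a different,
    # unknown frequency further down the channel.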
From:
Hi Dan and all,

My team is working on the following action items:

We started with item 2, and I hope to present data by the next meeting.

Regarding item 1, we are now designing the test setup so that we have enough data to recommend the best course of action. I would like to have your opinion on the following:

The reason we are checking the behavior of a channel containing an ALT A Midspan is that we assume it will affect the channel behavior at frequencies lower than 1 MHz, where we expect to find the BLW frequencies.

My question is: if our ultimate goal per the 350uH Ad Hoc is to ensure meeting the minimum required BER so that implementation is transparent (i.e., a 350uH minimum inductance is only one way to achieve it, and other ways are possible), THEN meeting the BER requirement also covers the effect of BLW on BER.

If this is true, then why do we care what the Midspan transfer function is in this low-frequency band? If inserting the Midspan into a compliant Switch + channel + PD (i.e., a channel designed according to 802.3 guidelines without compensation techniques, etc., such as using a 100BT SmartBit generator and a standard channel and load) passes the BER test, then the Midspan is compliant; if it fails, it is not compliant. Hence we don't care how the Midspan did it, and we don't have to specify anything new.

Please let me know your opinion.

Thanks,
Yair
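One way to see why the sub-1 MHz behavior still matters even with a pass/fail BER test is to look at how a second low-frequency pole stacks on the MDI's own high-pass. The sketch below is illustrative only; both pole frequencies are assumed values, not measured Midspan data.

    # Illustrative only: combined low-frequency response of the MDI high-pass
    # plus an extra pole from a (hypothetical) midspan coupling network.
    # Both pole frequencies are assumptions, not measurements.
    import math

    def hp_mag_db(f, f_pole):
        """Magnitude in dB of a first-order high-pass at frequency f."""
        x = f / f_pole
        return 20 * math.log10(x / math.sqrt(1 + x * x))

    F_MDI, F_MIDSPAN = 45e3, 60e3      # hypothetical pole locations (Hz)
    for f in (10e3, 50e3, 100e3, 500e3, 1e6):
        mdi = hp_mag_db(f, F_MDI)
        both = mdi + hp_mag_db(f, F_MIDSPAN)
        print(f"{f / 1e3:6.0f} kHz: MDI alone {mdi:6.1f} dB, MDI + midspan {both:6.1f} dB")

All of the extra attenuation and phase shift lands in the band where the BLW energy lives, so a BER run that never happens to hit a worst-case pattern such as the Killer Packet could pass even though the margin has shrunk.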
From: Dove, Dan [mailto:dan.dove@hp.com]

Yair,

At the last meeting you mentioned that you had tested 100BASE-T with inline power on the data pairs and found no problems. During an offline discussion, we discussed performing testing with maximum-length cables and the "Killer Packet" to ensure that your testing did not miss the impact of baseline wander. To assist, I sent you a copy of the Killer Packet (zipped binary) to enhance your testing.

Have you had an opportunity to evaluate the impact of inserting power into the data pairs using the Killer Packet since our discussion, and do you plan to present on this?

I think it would be important to ensure that we fully characterize how inserting any additional low-frequency poles into the channel affects 100BASE-T signal impairments.
From: owner-stds-802-3-poep@IEEE.ORG [mailto:owner-stds-802-3-poep@IEEE.ORG] On Behalf Of

Hi Matt,

You probably didn't see my email in which I asked not to have meetings on Friday. Friday and Saturday are non-working days here, like Saturday and Sunday in

Anyway, please find below my comments/opinion regarding the baseline bucket:
Comment #141:

Comment #141 deals with "". The commenter states that this column has no value, and I disagree with him.

This column has value in that it specifies the range of maximum power, for better power management. It is true that when you are using L2 you may not need this information; however, when using only L1, this information has value.

Regarding the argument that it confuses the average reader with the minimum power required to keep the port ON: I don't see how it is confusing, since the text is clear, and if it is still confusing we can add a clarification, but that does not justify changing the level of information contained in this column.

I suggest rejecting this comment, or adding a clarification on its use with respect to 33.3.6.

Comment #124:

The commenter is basing his argument on the following assumptions:

To change or modify it, we need to do feasibility and economic tests/simulations. None of this has been shown or demonstrated.

Rationale: The current specification requires meeting Rpd_d together with Cpd_d < 0.15 uF. This is possible and proven feasible. Requiring Rpd_d to be met with Cpd_d >> 0.15 uF is technically problematic due to long time constants and the risk of false-positive detection. There is a way to overcome this problem by using AC signals with source and sink capabilities; however, it requires technical and economic feasibility tests, and if somebody presents such tests it will be easier to consider and assess the technical and economic aspects.
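For a rough sense of the time constants involved (taking a nominal 25 kOhm signature resistance as an assumption for the example, against the 0.15 uF limit mentioned above):

    # Rough detection time-constant arithmetic.  The 25 kOhm value is an
    # assumed nominal signature resistance; the capacitances contrast the
    # existing 0.15 uF limit with a much larger hypothetical value.
    R_SIGNATURE = 25e3                      # ohms (assumed nominal Rpd_d)
    for C in (0.15e-6, 10e-6):              # 0.15 uF limit vs. hypothetical 10 uF
        tau_ms = R_SIGNATURE * C * 1e3
        print(f"Cpd_d = {C * 1e6:.2f} uF -> RC time constant ~ {tau_ms:.1f} ms")

    # ~3.8 ms at 0.15 uF vs. ~250 ms at 10 uF: with a large Cpd_d the detection
    # voltage settles so slowly that the PSE either waits much longer per probe
    # or risks reading an unsettled (false) signature.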
I suggest rejecting this comment unless serious technical work is presented to back up the suggested changes.

Comment #13:

This comment is similar in principle to comment #124.

Figures 33-8 and 33-9 do not mandate implementations. They guarantee interoperability.

Figure 33-8:

Figure 33-9:

The implementer can use any Thevenin equivalent of Figures 33-8 or 33-9, which allows the flexibility we are looking for.
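As a small illustration of what "any Thevenin equivalent" means in practice (the numbers below are made up; the actual values come from Figures 33-8 and 33-9, which are not reproduced here):

    # What "any Thevenin equivalent" buys an implementer, with made-up numbers.
    # Two different internal networks are interchangeable at the terminals if
    # they present the same open-circuit voltage and source resistance.
    V_OC = 50.0      # volts, hypothetical open-circuit voltage
    I_SC = 0.1       # amps, hypothetical short-circuit current
    R_TH = V_OC / I_SC
    print(f"Thevenin equivalent: Vth = {V_OC:.1f} V behind Rth = {R_TH:.0f} ohm")

    # Any implementation that presents the same Vth and Rth at the terminals
    # behaves identically to the figure's circuit as far as interoperability
    # is concerned.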
In order to allow such big changes, it is required to prove feasibility with different PSEs using different methods complying with the suggested remedy! Unless such proof is made, I suggest rejecting this comment.

Yair
From: owner-stds-802-3-poep@IEEE.ORG [mailto:owner-stds-802-3-poep@IEEE.ORG] On Behalf Of

Please find attached material for discussion during this morning's baseline bucket ad hoc.

On Dec 11, 2007 11:53 AM,

Colleagues -