Rick,
Your questions are all good, but I believe this overlaps somewhat with the classification process. (Or maybe I am just sensitive because of a real-world situation that I have seen.)
Recently I have been made aware of a situation at an end-user site. The PD equipment (and I use this term loosely, to put it in terms of the group's discussion) was specified as a load of X watts by Manufacturer A. The power engineering was done for Manufacturer B's equipment. It was determined that a system of Y watts was required to handle X*Z loads. (The system actually had 2*Y capacity.) Yet in reality, during the start-up phase of the system, the real requirement for power delivery was 4*Y.
Unofficial investigation showed that Manufacturer A's load specification was indeed as stated, except during startup. During start-up the system exhibited a large capacitive characteristic. And we all know that caps take time to charge and draw decreasing current as they do, so initially they look like short circuits. In all other ways the PD works very well.
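
To make that concrete, here is a minimal sketch of the inrush behavior, assuming a simple RC model with illustrative numbers (48 V supply, 10 ohms of total loop resistance, 100 uF of PD input capacitance; none of these values come from the actual system):

import math

# Assumed illustrative values -- not numbers from the situation described above.
V_SUPPLY = 48.0      # volts at the PSE
R_LOOP   = 10.0      # ohms: source + cable + any inrush resistance
C_PD     = 100e-6    # farads: PD input capacitance

tau = R_LOOP * C_PD  # RC time constant

def inrush_current(t):
    """Charging current i(t) = (V/R) * exp(-t/RC) for a simple RC model."""
    return (V_SUPPLY / R_LOOP) * math.exp(-t / tau)

# At t = 0 the cap looks like a short: current is limited only by R_LOOP.
for t_ms in (0.0, 1.0, 2.0, 5.0):
    print(f"t = {t_ms:4.1f} ms   i = {inrush_current(t_ms / 1000.0):5.2f} A")

The current starts at V/R (4.8 A with these assumed numbers) and decays with the RC time constant, which is why a load that is modest in steady state can demand several times its rated power during start-up.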
Unspoken here is the reaction of
the end user.
This puts me firmly on the side of specifying the behavior of both the PSE and the PD during start-up mode. In addition (now that I am thinking about this; I was not in St. Louis), should there be some specification for startup during the classification process? (Now I am getting away from the issue.) The point being that system engineering of the whole Ethernet link could possibly create a system which would run perfectly once it is up, but because of the lack of a power-up specification would not start. Modeling both the PSE and the PD during start-up would prevent this from occurring.
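
As a rough illustration of what such modeling could catch, here is a minimal sketch that checks whether a current-limited PSE can bring a PD input capacitor up to its turn-on voltage before an assumed start-up window expires. Every number in it (400 mA inrush limit, 180 uF cap, 36 V turn-on threshold, 50 ms window) is hypothetical and used only for illustration:

# Assumed, illustrative values -- not proposed specification numbers.
PSE_CURRENT_LIMIT = 0.400   # amps: PSE inrush current limit
C_PD              = 180e-6  # farads: PD input capacitance
V_TURN_ON         = 36.0    # volts: PD under-voltage lockout / turn-on point
T_STARTUP_LIMIT   = 0.050   # seconds: PSE start-up window

# With a constant-current limit, dV/dt = I / C, so the charge time is:
t_charge = C_PD * V_TURN_ON / PSE_CURRENT_LIMIT

print(f"time to reach {V_TURN_ON} V: {t_charge * 1000:.1f} ms")
print("start-up OK" if t_charge < T_STARTUP_LIMIT else
      "start-up fails: PSE folds back before the PD comes up")

If the two sides are modeled together like this before the link is built, a PD that would run perfectly once up, but never gets up, shows itself on paper instead of at the end-user site.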
Another point: I fully understand the engineering reasons for increasing the cap at the PD, but in the above example this clearly did not help during the start-up phase.
So I have a couple of questions.
1) Do we need perfect stability, or is it possible that some instability can be acceptable? (I am defining stability as the change in voltage or current over time.) I would guess that perfect stability is not achievable, but some low-bandwidth instability should be possible without impacting system performance. Where would this instability begin to impact the signaling?
2) Is there a trade-off between instability and cap size that would allow a minimum-size cap to be used?
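
To show the kind of trade-off I mean (with numbers that are entirely assumed): the droop from a load step of I that the supply takes T to correct is roughly dV = I*T/C, so the smallest usable cap follows directly from how much instability one is willing to accept.

# Assumed illustrative values only.
I_STEP     = 0.200   # amps: load transient at the PD
T_RESPONSE = 1e-3    # seconds: supply/loop response time

def min_cap(allowed_droop_v):
    """Smallest capacitance that keeps the droop within the allowed value."""
    return I_STEP * T_RESPONSE / allowed_droop_v

for droop in (0.5, 1.0, 2.0):
    print(f"allowed droop {droop:3.1f} V -> C >= {min_cap(droop) * 1e6:5.0f} uF")

The looser the allowed instability, the smaller the cap can be, which is exactly the trade-off I am asking about.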
Thank you for your input.
David Kohl