My email was intended to address systems as you describe in your third (last) paragraph, where Tx and Rx run on independent clocks. All I was trying to point out is that since clock frequency (due to wander and jitter) is a function of time, "100 ppm" needs a time-based definition. Also, that definition should be coordinated with the jitter definitions such that the entire spectrum is specified, without gaps, so that both FIFO overruns and CDR eye closure are managed.
Thanks for your reply, and sorry it took me so long to respond.
Tom Lindsay
Vixel
"Geoffrey M. Garner" wrote:
Tom,

Could you clarify the application? Are you trying to characterize the amount of input jitter a PLL must tolerate (and therefore the jitter on the input to the PLL), the amount of jitter generated in the PLL, or the total jitter on the PLL output (input plus amount generated)? You mention the +/-100 ppm frequency accuracy in several places, but then also the need to prevent the buffer from overflowing. The frequency tolerance would pertain to the input or output signals, i.e., each would be expected to be within 100 ppm of nominal. It would also pertain to the pull-in range of the PLL, which would constrain how far off the VCO center frequency could be from nominal. However, the 100 ppm would not directly impact the buffer fill while the PLL was locked to the input, because in the long term the input and output frequencies are the same. There are short-term variations in the difference between the input and output phases (this is jitter), and these are impacted by the PLL bandwidth.
Jitter is high-frequency phase variation, i.e., phase variation filtered by a high-pass measurement filter. The phase error of a PLL, i.e., the difference between the output and input phases, is equal to the input phase filtered by a high-pass filter (with breakpoint equal to the PLL bandwidth). If the high-pass jitter measurement filter has a breakpoint equal to the lowest PLL bandwidth you expect to have, then limiting the jitter will limit the maximum PLL phase error. For a clock recovery circuit, one would limit the maximum phase error to a fraction of a UI so that the sampling point is sufficiently close to the center of the eye. If the PLL is a narrower-bandwidth PLL that follows the clock recovery circuit (such a PLL could be in a regenerator; I don't know if this would be relevant for your application), one would limit the maximum phase error to prevent buffer overflow or underflow. Note that, for regenerators, one typically specifies both high-band jitter and wide-band jitter; the latter is measured with a high-pass measurement filter whose breakpoint frequency is much lower than the former's. The former controls alignment jitter (the difference between input and output jitter) in the clock recovery circuit; the latter prevents buffer overflow in a narrower-bandwidth PLL that may follow.
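As a rough numerical illustration of that high-pass relationship (a sketch only; the first-order PLL model and the 100 kHz bandwidth below are assumed for illustration, not taken from any spec):

    # First-order PLL model: the jitter transfer H(f) is a low-pass with
    # breakpoint at the PLL bandwidth f_c; the phase-error transfer
    # E(f) = 1 - H(f) is the complementary high-pass with the same breakpoint.
    f_c = 1.0e5  # assumed PLL bandwidth, Hz (placeholder value)

    def jitter_transfer(f):
        """Output phase / input phase for a first-order PLL."""
        return 1.0 / (1.0 + 1j * f / f_c)

    def error_transfer(f):
        """Phase error (output - input) / input phase: a high-pass."""
        return 1.0 - jitter_transfer(f)

    for f in (1e3, 1e4, 1e5, 1e6, 1e7):
        print(f"f = {f:>10.0f} Hz  |H| = {abs(jitter_transfer(f)):.3f}"
              f"  |1-H| = {abs(error_transfer(f)):.3f}")

Jitter well below f_c appears at the output but contributes little phase error; jitter well above f_c passes almost entirely into the phase error, which is why limiting jitter above the lowest expected PLL bandwidth bounds the error.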
Wander is low-frequency phase variation, i.e., phase variation filtered by a low-pass measurement filter. One reason to limit wander would be if the input and output of a buffer were controlled by independent clocks; limiting the wander would control the rate of buffer slips (overflow or underflow). While PLLs have to tolerate input wander as well as input jitter, the key issue is still the phase variation in the input signal at frequencies above the PLL bandwidth. If you have both wide-bandwidth and narrow-bandwidth PLLs, lower-frequency jitter would be relevant for the narrow-bandwidth PLLs.
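To make the slip concern concrete, here is a back-of-the-envelope sketch (my own numbers; the line rate and buffer depth are assumed for illustration): with read and write clocks each allowed +/-100 ppm, the worst-case relative offset is 200 ppm, and the buffer fill drifts at line_rate * 200e-6 bits per second.

    line_rate = 1.0e9      # bits/s, assumed for illustration
    tol = 100e-6           # +/-100 ppm clock tolerance per clock
    buffer_depth = 32      # elasticity buffer margin in bits (assumed)

    # Worst case: one clock at +100 ppm, the other at -100 ppm.
    relative_offset = 2 * tol                 # 200 ppm
    drift = line_rate * relative_offset       # bits of fill drift per second
    time_to_slip = buffer_depth / drift       # seconds until overrun/underrun

    print(f"fill drift: {drift:.0f} bits/s")                       # 200000 bits/s
    print(f"time to slip {buffer_depth} bits: {time_to_slip*1e6:.0f} us")  # 160 us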
Regards,
Geoff Garner
Lucent Technologies
101 Crawfords Corner Rd.
Room 3C-511
Holmdel, NJ 07733
USA
+1 732 949 0374 (voice)
+1 732 949 3210 (fax)
gmgarner@xxxxxxxxxx

Tom Lindsay wrote:
This email is in response to:
1.2 Very low frequency jitter definition (frequency and amplitude)
A - Comment to be entered. Tom to send email concerning explanation of the difference between low-frequency jitter and the 100 ppm clock tolerance.
Agreed.
___________

A clock will have a frequency, averaged over a period of time. It may fluctuate within that average. A receiver should track that average, such that the CDR (VCO) runs at the same average frequency as the incoming data rate. Generally, this average frequency drives requirements for elasticity buffering, and it must be controlled through a clock tolerance specification (+/-100 ppm).
The fluctuations within the average can be thought of as jitter, and jitter also must be controlled.
The time period (or some number of bits, etc.) for measuring clock tolerance should be specified. Generally, if the measurement period is increased, more low-frequency jitter is averaged out (filtered from the measurement); if the period is decreased, more low-frequency jitter passes unfiltered and appears as frequency error, which would then be controlled by the 100 ppm spec. Note that using the 100 ppm frequency spec as jitter control in this manner is more stringent than the present DJ specifications.
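To illustrate that filtering effect numerically (my own sketch; the 1 Gbit/s rate, 200 usec window, and 0.5 UI amplitude are assumed for illustration): sinusoidal phase jitter A*sin(2*pi*f*t) UI has a peak instantaneous frequency deviation of 2*pi*f*A/R, but when frequency is averaged over a window T, the worst-case measured offset is 2*A*|sin(pi*f*T)|/(R*T).

    import math

    R = 1.0e9    # nominal bit rate, bits/s (assumed)
    T = 200e-6   # averaging window for the tolerance measurement (assumed)
    A = 0.5      # sinusoidal jitter amplitude, UI peak (assumed)

    def freq_error_ppm(f):
        """Peak instantaneous frequency deviation vs. what survives
        averaging over T, for phase jitter A*sin(2*pi*f*t) UI."""
        inst = 2 * math.pi * f * A / R                      # fractional deviation
        # Worst case of (j(t+T) - j(t)) / (R*T) over t:
        avg = 2 * A * abs(math.sin(math.pi * f * T)) / (R * T)
        return inst * 1e6, avg * 1e6

    for f in (10.0, 100.0, 1e3, 1e4, 1e5):
        inst, avg = freq_error_ppm(f)
        print(f"f = {f:>8.0f} Hz: instantaneous {inst:8.4f} ppm, "
              f"averaged over T {avg:8.4f} ppm")

Jitter well below 1/T passes into the averaged measurement essentially unfiltered (the two columns match), while jitter well above 1/T is averaged away; an integer number of jitter cycles in the window nulls out completely.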
Complete control of the frequency spectrum is recommended, to prevent both elasticity buffer overruns and excessive jitter beyond what can be tracked. To provide complete control, frequencies above the one corresponding to this measurement duration should be controlled by jitter specifications; lower frequencies would be controlled by the 100 ppm spec.
Define T = the measurement duration for the clock tolerance measurement. Then, require that a clock frequency be within 100 ppm of ideal when averaged over any interval >= T. Next (this is without rigor, a gut feel, someone should confirm), I would suggest that jitter be controlled down to frequencies < 1/(2*T). This implies testing jitter output for a period > 2*T, and testing jitter tolerance with sine frequencies down to < 1/(2*T).
What should the value of T be? For practical lab measurements, it should probably be at least on the order of 1 second. This implies measuring jitter output for > 2 seconds, which is still reasonably practical. However, for tolerance, it implies sweeping down to < 0.5 Hz, which is not practical. Fibre Channel defined T as 200,000 bits, which equates to approximately 200 usec at 1 Gbit/s. That implies sweeping sine jitter down to approximately 2.5 kHz at that data rate.
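A quick sketch of that arithmetic, using only the numbers above (the 1 Gbit/s rate is the example rate from the paragraph):

    # Back-of-the-envelope numbers for the two candidate T values discussed above.
    rate = 1.0e9                 # bits/s, example rate

    # Fibre Channel style: T defined as 200,000 bit times
    T_fc = 200_000 / rate        # -> 200 us
    print(f"T = {T_fc*1e6:.0f} us, jitter output test > {2*T_fc*1e6:.0f} us, "
          f"sine sweep down to < {1/(2*T_fc)/1e3:.1f} kHz")   # 2.5 kHz

    # Lab-friendly frequency measurement: T on the order of 1 second
    T_lab = 1.0
    print(f"T = {T_lab:.0f} s, jitter output test > {2*T_lab:.0f} s, "
          f"sine sweep down to < {1/(2*T_lab):.1f} Hz")       # 0.5 Hz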
I don't have a good feel on what to do from here.
Comments, reactions, suggestions?
- Is this worth pursuing, or is this unnecessary specification/control of a non-problem?
- If we set T, should its value be short so that sine sweeps can stop at higher frequencies? If so, at what frequency? If T is too short, it becomes impossible to verify frequency in the lab; if T is too long, we run into practical limitations of sine jitter sweeps.
Thanks, Tom Lindsay
Vixel
425/806-4074