
RE: Minutes of serial PMD specs telecon 17 Oct 00



Geoffrey,
 
You have raised a good point, and I want to approach it deliberately. The question of whether jitter above 80 MHz will really be small makes me want to pause and understand what it means.
 
(I am speaking with the Serial PMD and its 64B/66B transmission code in mind, so I will assume that even DJ can be modeled as Gaussian, and that total jitter can be lumped together simply as "jitter" - at least for the point I want to make here. Gaussian DJ is a whole different discussion...)
 
Jitter can be measured in two domains. In the frequency domain, it is a plot of phase noise power spectral density (units of dBc/Hz) as a function of frequency; the greater the area under this curve, the more jitter we will measure in the time domain. In the time domain, jitter is measured as the amplitude and relative frequency of deviation from ideal timing. The relative frequency can be equated to a probability density, and from the amplitude vs. relative frequency plot we pick measures like rms (one standard deviation) and peak-to-peak values (14*rms, for example) to quantify jitter, in time units like picoseconds.
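 
As an aside, if the jitter is treated as Gaussian, the "14*rms" multiplier is simply the peak-to-peak window that bounds the timing error at a target BER of about 1e-12. A minimal sketch in Python (the 3 ps rms figure is just an assumed example value, not from any spec):
 
import math
from scipy.special import erfcinv
 
def pk_pk_multiplier(ber):
    # For Gaussian jitter, find Q such that the tail probability beyond
    # +/- Q*sigma on each side equals the target BER; the peak-to-peak
    # window is then 2*Q standard deviations wide.
    q = math.sqrt(2.0) * erfcinv(2.0 * ber)
    return 2.0 * q
 
rj_rms_ps = 3.0                     # assumed rms jitter, picoseconds (illustrative)
mult = pk_pk_multiplier(1e-12)      # about 14.07, i.e. the "14*rms" rule of thumb
print("multiplier = %.2f, pk-pk = %.1f ps" % (mult, mult * rj_rms_ps))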
 
For the newbies who dare walk the jitter minefield casually, I will caution that the terms jitter output, jitter generation, jitter transfer and jitter tolerance mean different things. I interpret the 802.3z jitter budget as being in terms of jitter output. It describes four test points - TP1, TP2, TP3 and TP4.
 
If I measure jitter output at TP1 (input to the optical transmitter), I agree that its frequency domain picture is likely to be dominated by frequencies below 80 MHz. I can explain it like this: TP1 jitter is the result of phase noise at the output of the Serializer PLL clock synthesizer circuit. Up to the loop bandwidth, which is likely to be under 10 MHz, its phase noise is dominated by the reference oscillator. Beyond the loop bandwidth, its phase noise is dominated by the VCO, with a 1/f asymptote and a 1/f^2 asymptote, eventually reaching a constant value, the so-called noise floor. Most of the area under this curve comes from frequencies well below 80 MHz.
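 
To make the "most of the area is below 80 MHz" statement concrete, here is a minimal sketch that integrates a piecewise asymptotic phase noise model of the kind described above (flat inside the loop bandwidth, 1/f^2 VCO region, constant noise floor) and converts the result to rms jitter. The dBc/Hz levels and the 10 MHz loop bandwidth are assumptions chosen purely for illustration, not numbers from 802.3 or any other spec:
 
import numpy as np
 
# Illustrative single-sideband phase noise model, dBc/Hz: flat
# (reference-dominated) inside the PLL loop bandwidth, a 1/f^2
# (-20 dB/decade) VCO region above it, clamped at a constant noise floor.
F_CARRIER = 10.3125e9     # 64B/66B serial line rate, Hz
LOOP_BW = 10e6            # assumed PLL loop bandwidth, Hz
IN_BAND_DBC = -100.0      # assumed in-band phase noise, dBc/Hz
FLOOR_DBC = -150.0        # assumed wideband noise floor, dBc/Hz
 
def l_dbc(f):
    vco = IN_BAND_DBC - 20.0 * np.log10(f / LOOP_BW)
    curve = np.where(f < LOOP_BW, IN_BAND_DBC, vco)
    return np.maximum(curve, FLOOR_DBC)
 
def rj_rms_ps(f_lo, f_hi, points=4000):
    # Integrate the (double-sideband) phase noise PSD over [f_lo, f_hi],
    # then convert the resulting phase variance to rms jitter in ps.
    f = np.logspace(np.log10(f_lo), np.log10(f_hi), points)
    s_phi = 2.0 * 10.0 ** (l_dbc(f) / 10.0)                 # rad^2/Hz
    phase_var = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(f))
    return np.sqrt(phase_var) / (2.0 * np.pi * F_CARRIER) * 1e12
 
print("rms jitter, 1 kHz - 80 MHz :", rj_rms_ps(1e3, 80e6), "ps")
print("rms jitter, 1 kHz - 5 GHz  :", rj_rms_ps(1e3, 5e9), "ps")
 
With assumed numbers like these, the integral taken out to 80 MHz and the integral taken out to 5 GHz come out nearly the same, which is the sense in which the area under the TP1 curve is dominated by frequencies well below 80 MHz.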
 
But if I measure Random jitter output at TP4 (the electrical output of the optical receiver, the last point in a link before the signal enters the CDR), I believe the frequency domain picture is likely to contain significant high frequency content, up to several GHz. I can explain it like this: a transimpedance amplifier for a serial transceiver will have a bandwidth of several GHz, so wideband thermal noise will be present at its output in addition to the signal. This noisy output will be converted to jitter (through the AM-to-PM conversion process) at the output of the limiting amplifier.
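 
A back-of-the-envelope way to see the AM-to-PM mechanism is to divide the rms noise voltage by the signal slew rate at the decision threshold; because the thermal noise occupies the full amplifier bandwidth, the jitter it produces does too. The sketch below uses made-up values (bandwidth, swing, rise time, noise resistance) purely for illustration:
 
import math
 
k_b = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # temperature, K
bw = 8e9                  # assumed receiver noise bandwidth, Hz
r_eq = 500.0              # assumed equivalent noise resistance, ohms
v_swing = 0.4             # assumed signal swing at the limiter input, V
t_rise = 35e-12           # assumed 20-80% rise time, s
 
v_noise = math.sqrt(4.0 * k_b * T * r_eq * bw)   # rms thermal noise voltage
slew = 0.6 * v_swing / t_rise                    # approximate slope at threshold
rj_rms = v_noise / slew                          # rms jitter, seconds
print("noise = %.2f mV rms, RJ = %.3f ps rms" % (v_noise * 1e3, rj_rms * 1e12))
 
The magnitude here is only illustrative; the point is that jitter produced this way is spread over a band comparable to the amplifier bandwidth - several GHz - not confined below 80 MHz.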
 
Given the two scenarios above, I can understand and accept the 80 MHz bandwidth rule for specifying jitter output at TP1. But I am uncomfortable with applying that rule at TP4.
 
I believe SONET is trying to say the same thing, but I am not sure. It describes this 80 MHz rule with a sentence that begins with "Timing jitter at network interfaces shall not exceed..." [GR-253-CORE, Issue 2, Dec 95, Rev 2, Jan 99, Section 5.6.1, Network Interface Jitter Criteria]. I interpret that to mean jitter output at the transmitter output. Considerations for receiver performance are meant to be covered by jitter tolerance and jitter transfer (Section 5.6.2). But I admit that I am trying to read the minds of the GR-253 authors here. I hope one of the authors is reading this message and helps us understand better.
 
Regards,
Vipul
 
vipul.bhatt@xxxxxxxxxxx
(408)542-4113
 
================================
 
<snip>
 
-----Original Message-----
From: Geoffrey Garner [mailto:gmgarner@xxxxxxxxxx]
Sent: Friday, October 20, 2000 1:52 PM
<snip> 
 

It is true that in a real network, the jitter present above 80 MHz for STM-64 (or, in general, above f4 for each rate) is expected to be small.  However, it seems it is still necessary to specify this bandwidth for test purposes to guarantee that results are reproducible.  In addition, if broadband jitter is being applied to a piece of equipment as in MJS-2, the results will be impacted by how high in frequency the broadband jitter extends because, for the same power spectral density amplitude, increasing the highest frequency means more jitter is being applied.

Related to this, I had mentioned in the conference call that I recalled a 50 MHz high frequency cutoff, but needed to track down the document (I first came across the number in an offline discussion).  I believe I have found the source of this, and also some more complete information.  In MJS-2, Tables 3 (page 20) and 5 (page 23), and also in Tables 10, 14, and 30 of FC-PI (Rev 9), sinusoidal jitter applied in the tolerance test is swept to a maximum frequency of 5 MHz.  Since for 10 GbE the line rate is about a factor of 10 higher, the corresponding high frequency cutoff would be 50 MHz.  However, on looking more closely at these tables, I do see that for deterministic jitter and random jitter the high frequency cutoff is line rate (fc)/2.
For 10 Gbit/s, this would be 5 GHz, which is larger than 80 MHz by a factor of 62.5.  This could certainly affect the test results (i.e., applying jitter up to 5 GHz versus 80 MHz).  Have I correctly interpreted these high frequency cutoffs?

You indicated you have more information on this; I would be interested in it.

Thanks.

Regards,

Geoff Garner