
Re: Minutes of serial PMD specs telecon 17 Oct 00



Vipul,

Thanks for your comments.

In SONET and SDH, the jitter tolerance would be applied at the optical input, which I believe corresponds to TP3.
GR-253 refers to the OC-N interface (which is an optical interface; it does refer separately to STS-N electrical interfaces,
but these are for lower rates, not 10 Gbit/s).  In the ITU specs, G.783 also indicates that the jitter tolerance applies to the
STM-N optical interface (the specific terminology used in G.783 is the "STM-N Optical Section to Regenerator Section
Adaptation Sink"; note that G.783 also separately covers STM-N electrical interfaces, but again these are for lower rates).
Similarly, in SONET and SDH the jitter output would be measured at the optical interface, which I believe corresponds to
TP2.  The notion that the jitter above 80 MHz is small applies to the optical interfaces.

Consistent with the above, when SONET/SDH equipment is tested for jitter tolerance, the sinusoidal jitter is applied to the
optical interface.  When jitter generation is measured, it is measured at the optical interface.  These are typically the test points that are available.

For deterministic jitter, I thought some more about Rohit's note from yesterday, which had a short description of this.  Is it
correct to say that DJ in MJS-2 is really pattern-dependent jitter (also called systematic jitter)?  In SONET or SDH, this jitter
arises because the typical output of the SONET scrambler will contain runs with no transitions of various lengths
(the longer the run without transitions, the lower its probability).  The overall effect is jitter in the recovered clock signal,
because the clock recovery circuit goes longer without receiving a transition at its input.  The effect tends to limit the clock
recovery circuit bandwidth.  It does seem that, if this is what is meant by DJ, it is different for 64B66B and for scrambled NRZ.
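
As a rough illustration of that last point (this little Python sketch is my own, not from MJS-2 or any standard; it just shows
the run-length statistics involved, modeling the scrambled payload as i.i.d. random bits):

def p_no_transition(n: int) -> float:
    """Probability that n consecutive bits out of an ideal scrambler are
    all equal, i.e. the clock recovery circuit sees no transition for n
    bit periods (scrambled payload modeled as i.i.d. random bits)."""
    return 2.0 ** -(n - 1)

for n in (10, 20, 30, 40):
    print(f"run of {n:2d} bits with no transition: p = {p_no_transition(n):.3e}")

# With 64B66B, the 2-bit sync header (01 or 10) guarantees at least one
# transition every 66 bits, so the tail of this run-length distribution is
# truncated; with scrambled NRZ there is no such hard bound, which is one
# way the DJ could differ between the two codes.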

I agree with your comment on the measure of jitter in the frequency domain, with one minor addition.  Jitter would
correspond to the power spectral density passed through the appropriate jitter measurement filter.  Also, one talks of both
"high-band" jitter and "wide-band" jitter; our discussion has really been focusing on the high-band jitter.  For 10 Gbit/s
(STM-64) high-band jitter, this filter is a 4 MHz single-pole high-pass filter concatenated with an 80 MHz (this is the
80 MHz we have been talking about) 3rd-order Butterworth low-pass filter.

In the time domain, I didn't fully understand your definition, but let me give you mine (tell me if it is at least clear enough
that you can determine whether it is the same as yours).  In the time domain, we would first look at the phase deviation
from ideal phase as a function of time.  This is the phase history and, to be very precise, it is technically a discrete-time
random process: the discrete index refers to the respective bit in the stream, and the value of the process is the time or
phase difference (depending on whether the units are units of time, rad, degrees, UI, etc.) between the actual time of that
bit and its ideal time.  To get jitter, we filter this phase history with the above measurement filter.  This gives the jitter
history (or jitter process).  The rms jitter would be the standard deviation of this random process.  The peak-to-peak jitter
would be the peak-to-peak of this process measured over a specified time interval (often 60 s is used).
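
To make that operational, here is a rough Python sketch (my own illustration; the synthetic phase history is invented, and I
am assuming the STM-64 bit rate of 9.95328 Gbit/s with one phase sample per bit):

import numpy as np
from scipy import signal

fs = 9.95328e9                # STM-64 bit rate: one phase sample per bit
hp = signal.butter(1, 4e6, btype="highpass", fs=fs)   # 4 MHz single-pole high-pass
lp = signal.butter(3, 80e6, btype="lowpass", fs=fs)   # 80 MHz 3rd-order Butterworth

# Synthetic phase history in UI: low-frequency wander plus broadband noise.
n = 2_000_000
t = np.arange(n) / fs
phase = 0.5 * np.sin(2 * np.pi * 100e3 * t) + 0.01 * np.random.randn(n)

# Filter the phase history to get the high-band jitter history.
jitter = signal.lfilter(*lp, signal.lfilter(*hp, phase))

print("rms jitter (UI):         ", jitter.std())          # std dev of the process
print("peak-to-peak jitter (UI):", jitter.max() - jitter.min())

A real measurement would of course take the peak-to-peak over a much longer record (e.g. the 60 s above); the sketch is
only meant to show the filter-then-measure order of operations.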

Regards,

Geoff
gmgarner@xxxxxxxxxx
+1 732 949 0374

Vipul Bhatt wrote:

 Geoffrey,

You have raised a good point. I want to approach it more deliberately. The issue of whether jitter above 80 MHz will indeed
be small makes me want to pause and understand what it means. (I am speaking with the serial PMD and 64B66B
transmission code in mind, so I will assume that even DJ can be modeled as Gaussian, and that total jitter can be lumped
together simply as "jitter" - at least for the point I want to make here. Gaussian DJ is a whole different discussion...)

The measure of jitter can be seen in two domains. In the frequency domain, it is a plot of phase noise power spectral
density (units dBc/Hz) as a function of frequency. The greater the area under this curve, the more jitter we will measure in
the time domain. In the time domain, jitter is measured as the amplitude and relative frequency of deviation from ideal
timing. The relative frequency can be equated to a probability density, and from the amplitude vs. relative frequency plot
we pick measures like the rms (one standard deviation) and peak-to-peak values (14*rms, for example) to quantify jitter,
using time units like picoseconds.

For the newbies who dare walk the jitter minefield casually, I will caution that the terms jitter output, jitter generation, jitter
transfer and jitter tolerance mean different things. I interpret the 802.3z jitter budget as being in terms of jitter output. It
describes four test points - TP1, TP2, TP3 and TP4.

If I measure jitter output at TP1 (input to the optical transmitter), I agree that its frequency domain picture is likely to be
dominated by frequencies below 80 MHz. I can explain it like this: TP1 jitter is the result of phase noise at the output of the
serializer PLL clock synthesizer circuit. Up to the loop bandwidth, which is likely to be under 10 MHz, its phase noise is
dominated by the reference oscillator. Beyond the loop bandwidth, its phase noise is dominated by the VCO, with a 1/f
asymptote and a 1/f^2 asymptote, eventually reaching a constant phase noise value, the so-called noise floor. Most of the
area under this curve lies at frequencies well below 80 MHz.

But if I measure random jitter output at TP4 (the electrical output of the optical receiver, the last point in a link before the
signal enters the CDR), I believe the frequency domain picture is likely to contain significant high-frequency content, up to
several GHz. I can explain it like this: a transimpedance amplifier for a serial transceiver will have a bandwidth of several
GHz. Wideband thermal noise will be present, in addition to the signal, at its output. This noisy output will be converted to
jitter (through the AM-to-PM conversion process) at the output of the limiting amplifier.

Given the two scenarios above, I can understand and accept the 80 MHz bandwidth rule for specifying jitter output at TP1.
But I am uncomfortable with applying that rule to TP4.

I believe SONET is trying to say the same thing, but I am not sure. It describes this 80 MHz rule with a sentence that begins
with "Timing jitter at network interfaces shall not exceed..." [GR-253-CORE, Issue 2, Dec 95, Rev 2, Jan 99, Section 5.6.1,
Network Interface Jitter Criteria]. I interpret that to mean jitter output at the transmitter output. Considerations for receiver
performance are meant to be covered by jitter tolerance and jitter transfer (Section 5.6.2). But I admit that I am trying to
read the minds of the GR-253 authors here. I hope one of the authors is reading this message and helps us understand
better.

Regards,
Vipul

vipul.bhatt@xxxxxxxxxxx
(408)542-4113
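
As a rough numerical illustration of the "area under the phase-noise curve" point above (the PSD shape and numbers
below are invented for illustration only, and the 10.3125 Gbit/s serial 10GbE line rate is my assumption):

import numpy as np

f = np.logspace(4, 9, 2000)            # 10 kHz .. 1 GHz offset frequencies
# Toy single-sideband phase noise in dBc/Hz: a flat region inside the PLL
# loop bandwidth (taken as 10 MHz), then a 1/f^2 VCO region that flattens
# onto a -140 dBc/Hz noise floor.
L_dBc = np.where(f < 1e7, -90.0,
                 np.maximum(-90.0 - 20 * np.log10(f / 1e7), -140.0))

S_phi = 2 * 10.0 ** (L_dBc / 10)       # double-sideband PSD, rad^2/Hz
# Trapezoidal integration of the PSD gives the rms phase in radians.
rms_rad = np.sqrt(np.sum(0.5 * (S_phi[1:] + S_phi[:-1]) * np.diff(f)))

bit_rate = 10.3125e9                   # assumed serial 10GbE line rate
rms_ui = rms_rad / (2 * np.pi)         # 1 UI corresponds to 2*pi radians
print(f"rms jitter: {rms_ui:.4f} UI = {rms_ui / bit_rate * 1e12:.3f} ps")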
 
================================<snip>
-----Original Message-----
From: Geoffrey Garner [mailto:gmgarner@xxxxxxxxxx]
Sent: Friday, October 20, 2000 1:52 PM
<snip> 
It is true that in a real network, the jitter present above 80 MHz for STM-64 (or, in general, above f4 for each rate) is expected to be small.  However, it seems it is still necessary to specify this bandwidth for test purposes to guarantee that results are reproducible.  In addition, if broadband jitter is being applied to a piece of equipment as in MJS-2, the results will be impacted by how high in frequency the broadband jitter extends because, for the same power spectral density amplitude, increasing the highest frequency means more jitter is being applied.

Related to this, I had mentioned in the conference call that I recalled a 50 MHz high-frequency cutoff, but needed to track down the document (I first came across the number in an offline discussion).  I believe I have found the source of this, and also some more complete information.  In MJS-2, Tables 3 (page 20) and 5 (page 23), and also in Tables 10, 14, and 30 of FC-PI (Rev 9), sinusoidal jitter applied in the tolerance test is swept to a maximum frequency of 5 MHz.  Since for 10GbE the line rate is about a factor of 10 higher, the corresponding high-frequency cutoff would be 50 MHz.  However, on looking more closely at these tables, I do see that for deterministic jitter and random jitter the high-frequency cutoff is line rate (fc)/2.
For 10 Gbit/s, this would be 5 GHz, which is larger than the 80 MHz by a factor of 62.5.  This could certainly affect the test results (i.e., applying jitter up to 5 GHz versus up to 80 MHz).  Have I correctly interpreted these high-frequency cutoffs?
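
As a quick back-of-envelope check of how much this matters (the flat PSD value below is invented; only the
square-root-of-bandwidth scaling is the point):

import math

S0 = 1e-12                       # illustrative flat jitter PSD, UI^2/Hz

for f_max in (80e6, 5e9):        # 80 MHz vs. fc/2 ~ 5 GHz upper cutoff
    rms = math.sqrt(S0 * f_max)  # rms jitter = sqrt of the integrated PSD
    print(f"f_max = {f_max / 1e6:6.0f} MHz -> rms jitter = {rms * 1e3:.1f} mUI")

# For the same PSD amplitude, raising the cutoff from 80 MHz to 5 GHz
# increases the applied rms jitter by sqrt(62.5), about a factor of 7.9.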

You indicated you have more information on this; I would be interested in it.

Thanks.

Regards,

Geoff Garner