I agree that, at TP2, we would specify jitter generation, and that jitter
tolerance would be at TP3. You are right that the
filter bandwidths would pertain to TP2.
On the filter bandwidths for the network interface jitter, SDH used, for
the highpass, the line rate divided by 2500, which is approximately 4 MHz.
The network interface limit was 0.15 UIpp. This would represent
jitter accumulation through possibly many regenerators;
jitter generation for a single regenerator is 0.1 UIpp or 0.01 UIrms.
For jitter tolerance, the sinusoidal mask is used, with
the breakpoint of this mask the same as the highpass measurement filter
for the network interface spec, or 4 MHz.
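As an illustrative sketch (not normative text), the sinusoidal tolerance mask shape described above can be written as a small function. The 0.15 UIpp floor and 4 MHz breakpoint are the values quoted above; real masks add further segments at low frequencies, which are omitted here:

```python
def sj_tolerance_mask(f_hz, f_break=4e6, floor_uipp=0.15):
    """Sinusoidal-jitter tolerance mask amplitude in UI peak-to-peak.

    Flat at floor_uipp above the breakpoint; below the breakpoint the
    amplitude rises at 20 dB/decade (i.e., proportional to 1/f).
    Low-frequency mask segments are intentionally omitted.
    """
    if f_hz >= f_break:
        return floor_uipp
    return floor_uipp * (f_break / f_hz)

print(sj_tolerance_mask(8e6))   # above the breakpoint: 0.15 UIpp
print(sj_tolerance_mask(4e5))   # one decade below: 1.5 UIpp
```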
I seem to recall in the MJS-2 document, they started with the tolerance
mask (in one of the annexes towards the end), but
only wanted a tolerance of 0.1 UIpp rather than 0.15 UIpp. They
extended the 20 dB/decade part of the mask down to
the 0.1 UIpp level, and the breakpoint became (line rate)/1667 rather
than (line rate)/2500. For 10 GbE, this works out to
6 MHz (rounded from 5.999 MHz).
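The two breakpoint conventions can be checked with a couple of lines. This uses the nominal 10 Gbit/s rate, consistent with the 5.999 MHz rounding above; the exact STM-64 rate of 9.95328 Gbit/s would give slightly lower values:

```python
# Breakpoint frequencies for the jitter masks discussed above:
#   SDH network-interface highpass: line rate / 2500
#   MJS-2 tolerance-mask breakpoint: line rate / 1667
line_rate = 10e9  # nominal 10 Gbit/s, as used in the rounding above

f_sdh = line_rate / 2500  # 4.0 MHz
f_mjs = line_rate / 1667  # ~5.999 MHz, rounded to 6 MHz

print(f"SDH highpass breakpoint: {f_sdh / 1e6:.3f} MHz")
print(f"MJS-2 mask breakpoint:   {f_mjs / 1e6:.3f} MHz")
```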
The above is clear concerning the tolerance mask; I need to think a
bit on how this applies to the measurement filters (I have to
leave for the meeting shortly). Part of the answer depends on
what the 10 GbE signal traverses, i.e., what is the reference model (this
has not been discussed in the conference calls, though I have only been
participating in them for a month or so, so I don't know what discussion
of this has occurred before that). That is because, while the network
interface spec we start out with is 0.15 UIpp, the jitter generation is
0.1 UIpp.
I have sent my VGs on SONET/SDH jitter (for the meeting) to the exploder
we were using for the conference calls, and also to
David Law; I have not heard back from him yet but will check again
when I arrive in Austin.
Thanks.
Regards,
Geoff
Rohit Mittal wrote:
There is something which I think needs to be corrected. SONET specifies network interface jitter (which we have been talking about) and network element jitter. When we, as an equipment vendor, do our tests, we are concerned only with the latter, not the former. NIJ (network interface jitter) applies to interfaces between users and carriers, or between carriers and carriers. NEJ refers to the jitter of one SONET board, SONET component, etc. The jitter generation criterion for NEJ is substantially different from that for NIJ: for NEJ, you apply a bandpass filter from 12 kHz to 80 MHz and make sure the peak-to-peak jitter is less than 0.1 UI. Speaking as a person who has done many jitter measurements, I have always found that decreasing the highpass cutoff from 12 kHz to, say, 5 kHz increases the measured jitter substantially; increasing it beyond 80 MHz has negligible impact.

That's SONET. How do we apply the same concept to 10 GigE? Again, the point of interest is TP2 only. (Note, this is only if we decide to do away with the previous method of specifying DJ and total jitter; I believe that's what people are leaning towards for 64/66.) Simple: we choose a suitable highpass cutoff. 1 GigE chose 637 kHz, so we choose 6.37 MHz? (Correct me if I'm wrong.) As Geoff mentioned, we might have to put something in for a lowpass cutoff; let's still put 80 MHz, or 5 GHz (Vipul, I don't think it'll matter). And we specify the peak-to-peak as 0.1 UI.

Vipul, I think jitter generation at TP3 is immaterial; it should be jitter tolerance. So we do not need to specify a filter at TP3, only at TP2.

Thanks,
Rohit

-----Original Message-----

Geoff,

We have gained a better understanding of the SONET and MJS approaches to jitter through discussions such as this. Of course, we can go on forever and still not fully agree on what the elephant looks like... so allow me to review the main issue from a high altitude. As needed, we will seek clarifications and sink into details again.

For 64B66B Serial, do we want to adopt the MJS methodology, the SONET methodology, or some combination of the two? If a combination, do we start with MJS and tweak it, or start with SONET and tweak it?

The MJS methodology can't be adopted "as is" because our transmission code isn't 8B10B; the DJ will behave differently. I am also uncomfortable with adopting the SONET methodology "as is" because it is substantially different from the TP1-2-3-4 jitter output description used by 802.3z. It views TP3 in terms of jitter tolerance, not jitter output. For jitter output measurement, it applies the 80 MHz filter at TP2, a method not suitable for 802.3ae, in my opinion. (I am okay with applying that filter at TP1, but that's irrelevant because TP1 probably won't be a compliance point in 802.3ae. I think we will see at TP2 a significant amount of jitter contributed by frequencies higher than 80 MHz, especially when lasers are directly driven, and also when duty cycle distortion is taken into account.) The TP-based jitter output approach has served us well in 802.3z, and we have become used to it. Many of us have time-domain test instruments that are adequate for estimating jitter output. So we have to tweak something, somewhere.

(Option 1) If I start with MJS, what would I tweak? Clearly, the DJ part. DJ has four components: data dependent, sinusoidal, duty cycle distortion, and uncorrelated-bounded. My interpretation of the MJS document is that the data-dependent portion of DJ is a function of run length. For the 64B66B code, we can assume that run length will be binomially distributed (truncated to 66 bits). This discrete distribution can be approximated as a continuous Gaussian. Therefore, we can "transfer" the data-dependent portion of DJ out of the DJ column; in other words, for a given TJ, the DJ values will get smaller. The remaining three quantities contributing to DJ can't be approximated as Gaussian, in my opinion. For example, a limiting amplifier can cause DCD because it has different rise and fall times due to device and process variations. This type of DCD may not be strongly related to run length.

(Option 2) If I start with SONET, what would I tweak? I would increase the 80 MHz filter limit to 5 GHz for measuring jitter output at TP2. This would force me to raise the maximum acceptable value to 0.3 or even 0.4 UI, for example. I may also have to change the tolerance mask for TP3 to account for it.

There is a political joke that says, "When traveling on a road, if you come across a fork, take it." Sounds like good advice to me... Let's combine the best of Option 1 and Option 2. Let's keep the TP table, specifying a high-upper-limit filter for jitter output measurements at TP2 and TP3. In the TP table, let's shrink the values of DJ to reflect the nature of the 64B66B code. In addition, let's specify a tolerance mask for TP3. How does that sound?

Regards,
Vipul
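As a rough numerical illustration of the run-length argument in Option 1 above, here is a sketch under the simplifying assumption that scrambled 64B66B payload bits can be modeled as i.i.d. random bits grouped into 66-bit blocks (so no run can exceed 66 bits, the truncation mentioned above). This is a modeling assumption for illustration, not a statement about the actual code:

```python
import random
from statistics import mean, stdev

random.seed(1)

def run_lengths(bits):
    """Lengths of maximal runs of identical symbols in a bit sequence."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

# Model each scrambled payload as a 66-bit block of i.i.d. random bits,
# so the longest possible run is bounded at 66 bits.
all_runs = []
for _ in range(10_000):
    block = [random.getrandbits(1) for _ in range(66)]
    all_runs.extend(run_lengths(block))

print(f"mean run length ~ {mean(all_runs):.2f}")  # close to 2 for random bits
print(f"std dev         ~ {stdev(all_runs):.2f}")
print(f"longest run seen = {max(all_runs)}")      # bounded by 66
```

Long runs are exponentially rarer than short ones in this model, which is the intuition behind treating the data-dependent jitter contribution as approximately Gaussian.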
From: Vipul Bhatt [mailto:vipul.bhatt@xxxxxxxxxxx]
Sent: Sunday, October 22, 2000 12:26 PM
To: Serial PMD Ad Hoc Reflector
Subject: RE: Minutes of serial PMD specs telecon 17 Oct 00

vipul.bhatt@xxxxxxxxxxx
(408) 542-4113

====================

-----Original Message-----

Vipul,
From: owner-stds-802-3-hssg-serialpmd@xxxxxxxx [mailto:owner-stds-802-3-hssg-serialpmd@xxxxxxxx]On Behalf Of Geoffrey Garner
Sent: Saturday, October 21, 2000 2:36 PM
To: Vipul Bhatt
Cc: Serial PMD Ad Hoc Reflector
Subject: Re: Minutes of serial PMD specs telecon 17 Oct 00

Thanks for your comments.
In SONET and SDH, the jitter tolerance would be applied at the optical input, which I believe corresponds to TP3.
GR-253 refers to the OC-N interface (which is an optical interface; it does refer separately to STS-N electrical interfaces, but
these are for lower rates, and not 10 Gbit/s). In the ITU specs, G.783 also indicates that the jitter tolerance applies to the STM-N optical interface (the specific terminology used in G.783 is the "STM-N Optical Section to Regenerator Section Adaptation Sink"; note that G.783 also covers separately STM-N electrical interfaces, but these are for lower rates). Similarly,
in SONET and SDH the jitter output would be at the optical interface, which I believe corresponds to TP2. The notion that
the jitter above 80 MHz is small applies to the optical interfaces.

Consistent with the above, when SONET/SDH equipment is tested for jitter tolerance, the sinusoidal jitter is applied to the optical interface. When jitter generation is measured, it is measured at the optical interface. These are typically the test points that are available.

For deterministic jitter, I thought some more about Rohit's message yesterday that had a short description of this. Is it correct to say
that DJ in MJS-2 is really pattern-dependent jitter (also called systematic jitter)? In SONET or SDH, this jitter arises due to
the fact that the typical output of the SONET scrambler will have runs of symbols with no transitions for various numbers of bits
(the longer the run of no transitions, the lower the probability). The overall effect is to have jitter in the recovered clock signal
due to the fact that the clock recovery circuit is going longer without getting a transition input. The effect tends to limit the clock recovery circuit bandwidth. It does seem that, if this is what is meant by DJ, it is different for 64B66B and for scrambled NRZ.

I agree with your comment on the measure of jitter in the frequency domain, with one minor addition. Jitter would correspond to the power spectral density passed through the appropriate jitter measurement filter. Also, one talks of both "high-band" jitter and "wide-band" jitter; our discussion has really been focusing on the high-band jitter. For 10 Gbit/s (STM-64) high-band jitter, this filter is a 4 MHz, single-pole highpass concatenated with an 80 MHz (this is the 80 MHz we have been talking about), 3rd-order Butterworth filter.

In the time domain, I didn't fully understand your definition, but let me give you mine (tell me if it is at least clear enough that you can determine whether it is the same as yours). In the time domain, we would first look at the phase deviation from the ideal phase as a function of time. This is the phase history; if we wanted to be very precise, it is technically a discrete-time random process, with the discrete index referring to the respective bit in the stream and the value of the process referring to the time or phase difference (depending on whether the units are units of time, rad, degrees, UI, etc.) between the actual time of that bit and the ideal time of that bit. To get jitter, we filter this phase history with the above measurement filter. This gives the jitter history (or jitter process). The rms jitter would be the standard deviation of this random process. The peak-to-peak jitter would be the peak-to-peak of this process measured over a specified time interval (often 60 s is used).

Regards,
Geoff
gmgarner@xxxxxxxxxx
+1 732 949 0374

<snip>
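The measurement filter and time-domain definitions in Geoff's last message can be sketched numerically. This is only an illustrative magnitude-response and statistics check, using the corner frequencies and filter orders stated in the message; the sinusoidal jitter history is a made-up example, not measured data, and this is not a compliant measurement implementation:

```python
import math
from statistics import pstdev

def hiband_filter_gain(f_hz, f_hp=4e6, f_lp=80e6):
    """Magnitude response of the STM-64 high-band jitter measurement
    filter: a single-pole highpass at f_hp cascaded with a 3rd-order
    Butterworth lowpass at f_lp (the 80 MHz filter discussed above)."""
    hp = (f_hz / f_hp) / math.sqrt(1.0 + (f_hz / f_hp) ** 2)
    lp = 1.0 / math.sqrt(1.0 + (f_hz / f_lp) ** 6)
    return hp * lp

# Both corners sit at roughly -3 dB; the passband in between is nearly flat.
for f in (0.4e6, 4e6, 20e6, 80e6, 800e6):
    print(f"{f/1e6:8.1f} MHz: {20*math.log10(hiband_filter_gain(f)):8.2f} dB")

# Time-domain definitions: given a (filtered) jitter history in UI, the rms
# jitter is its standard deviation, and the pk-pk jitter is max - min over
# the observation interval. Toy history: a 0.05 UI amplitude sinusoid.
jitter_history = [0.05 * math.sin(2 * math.pi * k / 100) for k in range(10_000)]
rms_jitter = pstdev(jitter_history)                      # ~0.035 UI (0.05/sqrt(2))
pkpk_jitter = max(jitter_history) - min(jitter_history)  # 0.10 UI
print(f"rms = {rms_jitter:.4f} UI, pk-pk = {pkpk_jitter:.4f} UI")
```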