
Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement




Frank,

 

While not wishing to prolong this discussion much more, there is one more wrinkle to this.

 

By taking the penalty at a BER of 1E-3 we are assuming that the RS(255,223) FEC will transform a 1E-3 BER at its input into a 1E-12 BER at its output, so that a penalty measured at 1E-3 carries over unchanged to 1E-12.  However, the ability of the FEC to do this depends on the errors being randomly distributed.  When there is a penalty due to AC coupling, the errors tend to become less random (more bursty).
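Under the random-error assumption, this 1E-3 to 1E-12 mapping can be checked directly: RS(255,223) corrects up to t = 16 errored bytes in each 255-byte block, so the post-FEC error rate is governed by the binomial probability of 17 or more errored bytes per block.  A minimal Python sketch of that check (my illustration, not part of the original analysis):

    from math import comb

    def block_failure_prob(ber_in, n=255, t=16):
        # Probability of an uncorrectable block, assuming independent
        # (random) bit errors; 8 bits per RS symbol (byte).
        p_byte = 1.0 - (1.0 - ber_in) ** 8
        return sum(comb(n, k) * p_byte ** k * (1.0 - p_byte) ** (n - k)
                   for k in range(t + 1, n + 1))

    # About 7e-11 per block at a 1E-3 input BER; with roughly 17 errored
    # bits in a 2040-bit block, the output BER is of order 1E-12.
    print(block_failure_prob(1e-3))

This is exactly what breaks down when the errors are bursty: wander-induced error runs that span many bytes concentrate into fewer blocks and push the block failure probability up.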

 

To investigate this effect, I did some Monte-Carlo simulations.

 

The data was 64B/66B encoded random binary.

The FEC was simulated by counting errored bytes within each FEC block, outputting no errors for any block with 16 or fewer errored bytes, and passing all errors through for any block with 17 or more errored bytes.
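As a sketch (my pseudocode, not the actual simulation), that decision rule has this form:

    def rs255_223_errors_out(errored_byte_flags):
        # errored_byte_flags: one boolean per received byte.
        # Returns the number of errored bytes surviving the FEC.
        survived = 0
        for i in range(0, len(errored_byte_flags), 255):
            block = errored_byte_flags[i:i + 255]
            n_err = sum(block)
            if n_err >= 17:          # beyond t = 16: decoder failure,
                survived += n_err    # all errors pass through
        return survived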

 

The output from the simulation with a high AC coupling ratio was in very good agreement with the theoretical BER vs power curve for RS(255,223) FEC.

 

The results for penalty vs AC coupling ratio are plotted in the attached file.  The blue line is the result from the analysis in the spreadsheet you attached in your last e-mail.  The pink curve is the result from the Monte-Carlo simulation described above at a BER of 1E-6.  (This was the practical limit for this simulation before the run-time became unacceptable).

 

As can be seen from these plots, the AC coupling required for a particular penalty with a 1E-6 BER at the FEC output is significantly different (better AC coupling is required) from that required with a 1E-3 BER at the FEC input.

 

Regards,

Pete Anslow

 

Nortel Networks UK Limited, London Rd, Harlow, Essex CM17 9NA, UK

External +44 1279 402540 ESN 742 2540

Fax +44 1279 402543

 


From: Frank Effenberger [mailto:feffenberger@HUAWEI.COM]
Sent: 24 April 2008 15:09
To: STDS-802-3-10GEPON@LISTSERV.IEEE.ORG
Subject: Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement

 

Dear All,

 

Pete and I had a good off-line discussion and analysis of the AC coupling problem.  

As it happens, while the Q-value math below was a little off, the basic analysis was correct.  

The AC coupling can be modeled as a signal-related noise (similar in effect to RIN).

This simple calculation was verified by doing a discrete time analysis.  So, the theoretical models seem consistent.

 

The attached spreadsheet plots the theory along with the experimental data that was given by Dr. Nagahori.  

In general, the data fits.  There are some divergences at low bit error rates, but not knowing the details of the experiment, it is hard to explain these for sure. 

 

I think the important conclusion we can reach is this: the penalty at high bit error rates (~1e-3) is very low (~0.1 dB), even for AC coupling time constants of ~100 bit times.  This is indicated by the experiment and the theory, and I have very high confidence in the finding. 

 

This is just for your information – I don’t intend to open the topic again. 

 

Sincerely,

Frank Effenberger

 

 


From: Pete Anslow [mailto:pja@NORTEL.COM]
Sent: Thursday, April 10, 2008 8:31 AM
To: STDS-802-3-10GEPON@LISTSERV.IEEE.ORG
Subject: Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement

 

Frank,

I don’t want to enter the debate on what the time constant of the PON should be, but your e-mail (below) on what penalty you get for AC coupling caught my eye.

I agree with you that you need to take the probability of occurrence into account for this, but I don’t think that looking at CID is the right way to approach the calculation.

When you AC couple a binary signal you are filtering out some of the signal energy.  This is equivalent to adding this noise-like signal (with the opposite polarity) to a perfect transmitted eye.  To quantify the magnitude of this effect, I simulated a large amount of 64B/66B coded random data with an AC coupling of bit rate / 200 (a time constant of 20 ns).  The signal was represented as +1 for a one and -1 for a zero, and the baseline (the mid-point between the ones and zeros of the AC coupled signal) was captured for each successive bit.  Then, for a range of baseline wander values, I calculated the probability of any bit having a baseline wander higher than that value.  This plot is attached.
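The core of this simulation can be sketched in Python as follows (my simplification: plain random +/-1 data in place of the full 64B/66B stream, tau = 200 bit periods):

    import math, random

    def wander_exceedance_prob(n_bits=1_000_000, tau_bits=200.0,
                               thresholds=(0.05, 0.10, 0.15, 0.20)):
        # Single-pole AC coupling: the baseline is the low-pass (DC)
        # component removed from the signal; the coupled signal is
        # x - baseline.
        a = math.exp(-1.0 / tau_bits)
        baseline = 0.0
        counts = dict.fromkeys(thresholds, 0)
        for _ in range(n_bits):
            x = 1.0 if random.random() < 0.5 else -1.0
            baseline = a * baseline + (1.0 - a) * x
            for th in thresholds:
                if abs(baseline) > th:
                    counts[th] += 1
        return {th: c / n_bits for th, c in counts.items()}

    print(wander_exceedance_prob())

For random data and tau = 200 bit periods, the RMS baseline works out to sqrt((1-a)/(1+a)), roughly 1/20 of the eye amplitude, which is consistent with the fitted linear Q of 20.5 below.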

<<baseline wander.pdf>>

By fitting to the shape of this curve we get an estimate of the magnitude of this noise-like signal.  In this case it is equivalent to a linear Q value of 20.5.

A BER of 1E-3 is equivalent to a Q value of 3.29.  I believe that if we have a source with an effective Q of 20.5, then the Q we would have had if the baseline wander had not been present is 1/(1/3.29 - 1/20.5) = 3.92 (i.e. a receiver with a Q of 3.92 using a perfect signal is degraded to a Q of 3.29 by AC coupling at bit rate / 200).  If I have done my sums right, this is equivalent to a power penalty of 0.7 dB.
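In code, that arithmetic is (my figures from above; the noise contributions add linearly at the decision point because the wander is signal-proportional):

    import math

    q_wander = 20.5   # fitted linear Q of the baseline wander
    q_total  = 3.29   # Q at the 1E-3 FEC-input BER

    # Receiver Q needed without wander so that the linear combination
    # 1/q_total = 1/q_rx + 1/q_wander still meets q_total:
    q_rx = 1.0 / (1.0 / q_total - 1.0 / q_wander)   # ~3.92

    penalty_db = 10.0 * math.log10(q_rx / q_total)  # ~0.76 dB, i.e. ~0.7 dB
    print(q_rx, penalty_db)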

Regards,

Pete Anslow

Nortel Networks UK Limited, London Rd, Harlow, Essex CM17 9NA, UK

External +44 1279 402540 ESN 742 2540

Fax +44 1279 402543

_____________________________________________
From: EffenbergerFrank 73695 [mailto:feffenberger@HUAWEI.COM]
Sent: 08 April 2008 07:04
To: STDS-802-3-10GEPON@LISTSERV.IEEE.ORG
Subject: Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement

Dear Dr. Nagahori,

Thank you for your document.  I can now see how you have figured your numbers.  However, I wonder about your calculation of penalty.  We must keep in mind that the penalty is a function of the BER at which you are measuring the penalty. 

Your calculation basically says that the penalty is equal to the amount of eye closure that happens during a CID event.  And that is true *during that event*.  But, we need to consider the likelihood of such events, because Bit Error Rate is a probability game.  A CID of 60 bits happens with a frequency of 10^-18.  That is very infrequent.  When it does happen, you will likely see a short run of errors near the tail end of the CID event.  (So much the better for us, since we are using a code that is good at correcting burst errors.)  We can also see the direct evidence of this in your BER curves, because the CID events are the source of the error floors.  
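As an order-of-magnitude check on that frequency (my arithmetic, treating the scrambled data as effectively random):

    # 2 patterns (all ones or all zeros) x (1/2)^60 per starting bit
    p_cid60 = 2 * 0.5 ** 60
    print(p_cid60)   # ~1.7e-18, i.e. of order 10^-18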

So, I would claim that you could build a system that has a tau of 20ns (the first row on your chart).  But, the optical penalty you would see is not 1.5 dB, as you have there.  The penalty is zero.  The reason is that we are operating at a BER way above the error floor that the CID events cause. 
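For reference, a sketch of where a figure like 1.5 dB can come from under the worst-case view (assumed values: 10.3125 Gb/s line rate, a 66-bit CID, tau = 20 ns, and worst-case eye closure mapped to -10*log10(1 - droop)):

    import math

    bit_rate = 10.3125e9                       # 10G line rate
    t_cid    = 66 / bit_rate                   # ~6.4 ns CID duration
    tau      = 20e-9                           # AC coupling time constant

    droop   = 1.0 - math.exp(-t_cid / tau)     # ~0.27 baseline droop
    penalty = -10.0 * math.log10(1.0 - droop)  # ~1.4 dB
    print(penalty)

The point stands that this closure exists only during the ~10^-18-probability event itself, so at an operating BER of 1E-3 it does not appear as a penalty.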

Don't you think?

Sincerely,

Frank E

 

----- Original Message -----
From: Takeshi Nagahori <t-nagahori@AH.JP.NEC.COM>
Date: Tuesday, April 8, 2008 1:04 pm
Subject: Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement

> Dear Dr. Effenberger, Hamano-san, All,
>
> In the US 10G burst-mode timing discussions at the January and March
> meetings, there was not enough discussion of the trade-off between
> acquisition time and power penalty as a function of the lower cut-off
> of the AC coupling.
>
> On the DC balance limit with 64B/66B coding, I presented at the March
> meeting that there is little impact on sensitivity at a 1E-3 BER.
>
> Attached is a calculated result for the CID limit.  The calculation is
> based on a simplified AC coupling model (a 1-pole high-pass filter),
> where the droop during a CID is modelled as 1-exp(-t/tau).  The result
> shows that if we want a 100 ns acquisition time per pole with a 15 dB
> dynamic range, we have to allow a 1.5 dB power penalty per pole.  If we
> allow only a 0.5 dB power penalty, the acquisition time becomes 350 ns
> per pole.
>
> I agree qualitatively with Dr. Effenberger's comment that 350 ns +
> 350 ns << 700 ns for a 2-pole system.
>
> Best Regards,
> Takeshi Nagahori
>
> ------------ Original Message ------------
>

> >Dear Hiroshi,
> >
> >Maybe I am being dumb, but I did not see any evidence that 500 ns is
> >needed.  What I saw was some good data that suggested that short times
> >are possible, followed by some hand waving that suggested the opposite.
> >
> >The only way that I can arrive at such a number is to take the
> >theoretically understandable value of 100 ns, and then assume a limited
> >implementation results in a 5x multiplication.  But I have a hard time
> >accepting such a poor implementation for our standard.
> >
> >But let's judge this debate on the same standard as that used for the
> >slow start concept.  Various folks have objected to that based on the
> >fact that their implementation doesn't have that problem.  Well, I'm
> >saying that my implementation doesn't have this 500 ns response time
> >problem.  I'm saying that, even though my implementation uses something
> >essentially similar to the average power AGC.
> >
> >I would like to see a detailed explanation of why 500 ns is so
> >necessary.  Show me the delay budget or similar equations that come out
> >to the result.  And, if it can be shown that the worst-case design
> >results in a 500 ns time, well, then we should set the value at 500,
> >and not 800!  The number 800 was based on a faulty premise, and should
> >not be used.
> >
> >Sincerely,
> >Frank

[snip]

AC_coupling_FEC.pdf