Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement
Dear All,
On issue 2: I think that we agree that a (single pole) settling time of
100ns is sufficient for the 20dB dynamic range that we are interested in.
(That's a time constant of 20ns, and considering 5 time constants to be
'settled'.)
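To spell out that arithmetic, here is a quick sketch in Python (the 20dB target and the 5-time-constant "settled" criterion are simply the assumptions stated above):
    import math
    dynamic_range_db = 20.0                    # burst-to-burst dynamic range we want to tolerate
    ratio = 10 ** (dynamic_range_db / 10.0)    # 20dB optical = a factor of 100 in power
    n_tau = math.log(ratio)                    # time constants for the residual to decay below 1/ratio
    print(round(n_tau, 1))                     # ~4.6, so '5 time constants' is a safe round-up
    tau = 20e-9                                # single-pole time constant, 20ns
    print(5 * tau)                             # 100ns of settling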
I agree with Mr. Nagahori that in a practical receiver today, we will
actually need to have a "2 pole" filter: One is the AC coupling, and the
other is some form of AGC feedback. But, I would suggest that the AGC
feedback based on average power will have the same time constant as the
AC-coupling. That is because it faces the same dilemma of being fast enough
to see the bursts but slow enough not to see data patterns.
The simple math would suggest 100 + 100 = 200ns. But we all know that
settling times don't add that way; they combine roughly as the
root-sum-of-squares. So, if things are working linearly, then we need about
141ns. We seem to have lots of margin: we could tolerate even doubling the
response time of each circuit, and it would still work, just as long as
things remain linear and don't go into pathological modes.
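A minimal numerical check of that combination, assuming both stages really do behave as independent single poles (the linearity assumption above):
    import math
    t_ac  = 100e-9   # settling allowance for the AC coupling (5 x 20ns)
    t_agc = 100e-9   # assumed equal allowance for the average-power AGC loop
    print(math.sqrt(t_ac**2 + t_agc**2))               # ~141ns, well inside the 400ns budget
    print(math.sqrt((2 * t_ac)**2 + (2 * t_agc)**2))   # ~283ns even with both responses doubled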
So, I don't think we need to relax this any further than the established
400ns value.
On issue 3: The problem with overload is not necessarily limited to the LA;
it also affects the output stage of the TIA and the AGC control loop.
Control loops work best when the signals that they are acting on are in
their linear range. If a strong burst suddenly comes in and the TIA
saturates, then the AGC loop will not behave optimally. Of course, this can
be allowed for by waiting longer, but isn't that the very complaint in
issue 2?
The whole point of controlling the transmitter rate-of-attack is that it
helps the receiver settle faster. Given that people are concerned with a
technology gap for the 10G burst Rx, it seems an obvious cross optimization
to make.
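To illustrate the idea, here is a toy simulation (purely a sketch: it models the APD-bias/TIA-gain control as an ideal single-pole loop with a 20ns time constant, and the burst envelope as a unit step or a linear ramp; none of these values come from a real design):
    # peak tracking error of a first-order control loop (tau = 20ns) for an
    # abrupt burst turn-on versus a 20ns ramped turn-on
    tau = 20e-9
    dt = 0.1e-9
    def peak_error(t_rise):
        y, peak, t = 0.0, 0.0, 0.0
        while t < 10 * tau:
            x = 1.0 if t_rise == 0 else min(t / t_rise, 1.0)   # burst envelope
            peak = max(peak, x - y)                            # residual the later stages must absorb
            y += dt * (x - y) / tau                            # loop output slews toward the input
            t += dt
        return peak
    print(round(peak_error(0.0), 2))     # ~1.0: abrupt turn-on, the full excursion hits the circuit
    print(round(peak_error(20e-9), 2))   # ~0.63: a 20ns ramp trims the peak excursion by about a third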
Now, as to the cost of such a rise-time control - I think it is a pretty
simple circuit to control the modulation current supply on a 10ns time
scale. In fact, existing circuits could likely be adapted simply by the
addition of a single capacitor. Is it really much harder than that? We
don't need precision, keep in mind.
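As a rough feasibility check (the component values here are purely illustrative, not taken from any particular driver design): a single-pole soft-start on the modulation current enable gives a 10-90% rise of about 2.2RC, so a ~20ns optical turn-on needs only a small capacitor.
    t_rise = 20e-9        # target 10-90% rise time of the burst envelope
    tau = t_rise / 2.2    # single-pole time constant that yields that rise time
    r = 1e3               # assumed resistance at the enable/control node, in ohms
    print(tau, tau / r)   # ~9ns time constant, i.e. roughly a 9pF capacitor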
Sincerely,
Frank E.
-----Original Message-----
From: Takeshi Nagahori [mailto:t-nagahori@AH.JP.NEC.COM]
Sent: Wednesday, April 02, 2008 10:40 AM
To: STDS-802-3-10GEPON@LISTSERV.IEEE.ORG
Subject: Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement
Dear Dr. Effenberger,
I greatly appreciate your effort in taking leadership of both the damage
threshold and burst-mode timing ad hocs.
I would like to comment on topics 2 and 3 in-line.
>2. What dynamic performance can be expected from strong-to-weak burst
>reception (the Treceiver_settling question)?
>
>The Nagahori presentation gives us very useful data. Let me illustrate it
>in the following way: From Nagahori page 7, we can see that a tau/T of 210
>results in an error curve that has zero penalty at the higher bit error
>rates that we are working at. (There are signs of an error floor, but it
>happens at 1E-10, so we don't care). T, in our case, is 97 ps. So, the
>data says that setting tau to be 20ns is OK.
>
>Suppose we want to tolerate 20 dB of dynamic range burst to burst. This
>means that we need to set the time constant of the AC-coupling to be at
>least 5 times shorter than the burst-to-burst time. (e^5=148 > 20dB). That
>means that the burst-to-burst time needs to be 100ns. So far, we are not
>seeing any problems. (By the way, the value of 100ns is what I put forward
>in 3av_0801_effenberger_3-page4.)
>
>I also think that real circuits will need to allocate time for control of
>the pre-amplifier stage (setting of the APD bias and/or the TIA impedance).
>This should take no longer than an additional 100ns of time.
>
>So, this leaves us with a requirement of 200ns, which has a safety margin of
>2x below the 400ns that is the proposed value for Treceiver_settling.
>
>Thus, I don't see any reason why we should change the value from 400ns, just
>like in 1G EPON. While it is true that Treceiver_settling will likely need
>to be longer than T_cdr, setting the maximum values of both at 400ns will
>not preclude any implementations. I fully expect that real systems will
>actually do much better than both of these limits.
At first, I would like to emphasize that the limiting factor is not
the AC coupling between the TIA and LIM, but the burst-mode AGC in the TIA
that controls the transimpedance gain.
The required TIA input dynamic range is estimated to be 23dB for PR-30/PRX-30
dual rate. But the state-of-the-art dynamic range of a 10G burst-mode TIA
with internal AGC is only 15dB, according to papers published at ECOC2007
and ISSCC2008. We have to recognize this technology gap at this moment.
In this situation, it is preferable to allow the use of a simple
average-detection type TIA AGC, instead of the peak-detection type AGC that
appeared in your and Dr. Ben-Amram's presentations at the January meeting,
in order to reduce the technology gap. Peak-detection type AGC is superior
to average-detection type AGC in response speed, but the peak detector's
response at >1Gbps (not only at 10Gbps) remains a challenging issue, in
addition to the dynamic range issue.
Considering the margin needed for an average-detection type TIA AGC, plus
some margin on the 200ns for the TIA-LIM AC coupling, 400ns is not large
enough for Treceiver_settling. An appropriate value would be less than
800ns, even considering the technology gap between the required spec and
the ECOC2007/ISSCC2008 state-of-the-art data.
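As a rough budget check (the individual numbers below are illustrative assumptions, not measured values), if the average-detection AGC needs on the order of a 100ns loop time constant and we again allow five time constants to settle, then together with the AC coupling the worst-case sequential settling is:
    t_ac_coupling = 200e-9   # allowance for TIA-LIM AC coupling, as discussed above
    t_avg_agc = 5 * 100e-9   # assumed average-detection AGC: ~100ns loop time constant x 5
    print(t_ac_coupling + t_avg_agc)   # 700ns -> 400ns is too tight, while <800ns leaves some headroom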
>3. What about limiting the rate-of-attack of the burst Tx (Ton/Toff)?
>I went to talk with my optical front-end expert, and he explained the latest
>results that we've been seeing. The whole motivation of our concern is the
>large 20dB dynamic range that we are targeting in PON systems. The problem
>is that the receiver is normally in the maximum gain condition, and then a
>strong burst comes in that threatens to overload the circuit.
>
>Initially, we were concerned that the APD and the TIA would be most
>sensitive to high burst transients. However, this seems to be not the case.
>The APD gain may be self-limiting (saturating), and this helps to limit the
>signal to some extent. So, damage to that part of the circuit seems
>unlikely.
>
>However, there still is a problem, and that is that the second stage
>amplifier (the one that is driven by the TIA) tends to get overloaded by the
>strong bursts. (This is understandable, since the signal has received more
>gain by this point.) This prevents the output signal from being useful (for
>control as well as for the actual signal), and the recovery from overload is
>not well behaved. So, we'd like to avoid that.
>
>The simplest way to prevent transient overload is to reduce either the APD
>gain (by reducing its bias), or the TIA impedance. Either of these
>methods is essentially a control loop, and it will have a characteristic
>speed. The setting of the speed is bounded in both directions just like the
>AC coupling speed, and a value of 20ns is good. Given that we have a
>control speed of 20ns, the loop will respond only that fast to input
>transients. We can thereby reduce the excursion of the control system
>output by limiting the "time constant" of the input signal to be similar to
>that of the control loop. This is why we suggest a 'rise time' on the order
>of 20ns.
>
>I was wrong in extending this to also specifying a 'fall time' - there is no
>need for controlling the trailing edge, at least, not strictly. The reason
>is that the receiver will 'know' when the burst is over, so it should be
>able to manage its withdrawal symptoms. (Note that this implies that the Rx
>has certain feedback paths, such as when the CDR declares loss of lock.)
>
>So, that's the reason why we should consider having a controlled turn-on for
>the transmitter.
At the March meeting, the impacts of rise-time control on transmission
efficiency and on PON chip complexity were discussed, and it was concluded
that the impact on those was very small. But precise rise-time control makes
the implementation of the laser driver circuitry in the ONU more
complicated, which affects the ONU's cost.
I understood from your explanation that the only reason rise-time control
is needed is to prevent saturation in the LIM. But if we consider an actual
receiver circuit implementation, the TIA does not generate a signal
exceeding the power supply voltage, typically 3.3V, even if the AGC in the
TIA has not yet finished reducing the transimpedance gain. This means that
a signal large enough to saturate the LIM would not be generated by the TIA,
so we need not pay attention to saturation in the LIM. Considering the
above, I cannot see any need for rise-time control.
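In rough numbers (every value below is an illustrative assumption, not from a specific device), even an unadjusted high-gain TIA simply clips at the rail rather than producing a huge swing:
    p_avg_w = 1e-3 * 10 ** (-6.0 / 10.0)   # assumed strong burst, -6dBm average, ~0.25mW
    i_pp = 2 * p_avg_w * 0.8 * 10.0        # assumed responsivity 0.8 A/W and APD gain 10 -> ~4mA pk-pk
    v_ideal = i_pp * 3e3                   # assumed 3kohm high-gain transimpedance -> ~12V if linear
    print(v_ideal, min(v_ideal, 3.3))      # but the output is limited to the ~3.3V supply rail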
Best Regards,
Takeshi Nagahori
NEC