Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement
Dear All,
If I can summarize my understanding:
1. The fact is that the practical damage threshold of 10G APD receivers is
indeed 0 dBm (with some risk, because a burst-mode Rx cannot use the usual
'load resistor' sort of limiting circuit). So, we should certainly not
increase the damage threshold above 0 dBm!
2. The PON optics should never be connected back to back, not even in
testing. I don't think this is a practical problem, actually. So, to be
complete, the standard should include some language warning the reader that
back-to-back connection is potentially damaging.
I would also observe that because PON is a single-fiber system, the ONU Rx
will be saturated at the same time that the OLT Rx is in a dangerous
situation. Since the ONU won't transmit without a valid downstream signal,
the OLT will likely be 'saved' by this fact. Of course, we still need to
worry about saving the ONU Rx from permanent damage, but since that is CW
operation, a suitable load resistor / current-limited source should help.
What is less clear is whether it is correct to define a more lenient damage
threshold for the classes that have lower power levels (PR20 and PR30).
The point was made that this is not a necessary specification for the
operation of the system. And that is true - after all, a damage
specification never is. Even the 0 dBm level is not a 'necessary' spec.
But I think that perhaps we should try to put down the level that we think
is practically useful and easy to obtain.
More discussion is needed on that.
Sincerely,
Frank E.
-----Original Message-----
From: Hiroshi Hamano [mailto:hamano.hiroshi@JP.FUJITSU.COM]
Sent: Tuesday, April 01, 2008 10:15 AM
To: STDS-802-3-10GEPON@LISTSERV.IEEE.ORG
Subject: Re: [8023-10GEPON] Optical Overload Ad-Hoc announcement
Dear Dr. Effenberger,
I greatly appreciate your effort in taking on this ad hoc leadership.
I have already made my comments on the 'Damage threshold' values,
suggesting 'Overload + 1dB' for all classes. However, an opposing view has
since been raised, suggesting that the thresholds should be at least
'Overload + 3dB'. I am not sure whether this 3dB has any technical
meaning, nor whether it is technically achievable.
I would like to share my thoughts and understanding by commenting on your
suggestions regarding the 'Damage threshold' values.
> For some of our optics, the math would put the Overload+1 level at 0dBm,
> which is probably where it should stay.
I have asked 10G transceiver experts, and their feeling is that it is not
easy to guarantee a 10G APD-RX damage level above 0 dBm, especially for a
burst-mode OLT receiver, where simple self-limiting circuitry cannot be
implemented while still keeping up with 10GE-PON high-speed burst signals.
Reliability experiments may also be needed even to confirm that damage
level.
The continuous-mode RX in the ONU may have a little more margin, but in
any case it seems difficult in 10GE-PON to guarantee a damage-free direct
TX-RX connection, because of the higher launch power and the sensitive 10G
components.
> However, for other optics, the formula puts the damage level
> considerably lower. I'd expect that damage would not be a problem
> for levels lower than -3dBm (just a seat-of-the-pants sort of judgment).
I have suggested -5dBm and -9dBm as damage threshold values for several
classes, and these values may have ample margin compared to actual 10G-RX
performance, achievable without difficulty.
But I think the specification value should not necessarily reflect the
possible RX performance. It should specify only the required value, just
enough to achieve the system function. I believe that possible margins and
flexibility should not be given up without reason, especially in an IEEE
standard.
> If we look at clause 52, this sets the precedent for the damage level
> being 1dB over the Maximum Receive level...
> (just like the 10G point-to-point clause did.)
My understanding of 802.3ae may be a little different.
Most of the PMD categories in Clause 52 (10GBASE-S, L, LX4) seem to have
no 'Damage threshold' rows in their specification tables, but that is
because they are intended to serve very short reach applications,
including direct TX-RX back-to-back connection, where the 'TX average
launch power (max)' equals the 'RX average receive power (max)'. So a
'Damage threshold' need not be specified redundantly there.
The only exception is 10GBASE-E, and there, the row does exist.
> But, maybe the best approach is to take the damage level
> out of the main table, and just put it as a footnote.
I understand this feeling. If a direct TX-RX back-to-back connection is
not feasible in 10GE-PON, there may be little point in specifying a
'Damage threshold'. It can be moved into a footnote if it is not suitable
for the main specification table.
It could be worded as follows (following 802.3ae):
'The receiver shall be able to tolerate, without damage, continuous
exposure to an optical input signal having a power level equal to the
Average Receive Power (max) plus at least 1 dB.'
But I think this would be something of a departure from 802.3 precedent,
and all readers/users should also be warned that a direct TX-RX connection
between 10GE-PON equipment may cause damage, and that if a back-to-back
connection is necessary for some evaluation or test, optical attenuators
and/or equivalent loss elements should be inserted to guarantee a
damage-free RX input power level.
I think this kind of warning should also appear in the main body text,
before the RX specification tables, to draw attention to the issue.
Best regards,
Hiroshi Hamano
%% Frank Effenberger <feffenberger@HUAWEI.COM>
%% [8023-10GEPON] Optical Overload Ad-Hoc announcement
%% Thu, 27 Mar 2008 18:22:29 -0400
> Dear All,
>
> I was tasked with leading the discussion regarding optical damage /
> overload issues.
>
> I think there are three sub-items that all relate to this issue
> 1. What values should be used for the optical damage levels for the optics
>
> 2. What dynamic performance can be expected from strong-to-weak burst
> reception (the Treceiver_settling question)?
>
> 3. What about limiting the rate-of-attack of the burst Tx (Ton/Toff)?
>
> We don't have much time, since formal comments must be submitted by
> April 4th. So, below, I have put down my own initial thoughts on these
> topics. I invite all to reply with their comments as soon as possible.
>
> 1. What values should be used for optical damage levels?
> If we look at clause 52, this sets the precedent for the damage level
> being 1dB over the Maximum Receive level. If we look at the absolute
> level for the 10G LX optics, that is 0dBm, which admittedly is getting
> pretty strong.
>
> For some of our optics, the math would put the Overload+1 level at 0dBm,
> which is probably where it should stay. However, for other optics, the
> formula puts the damage level considerably lower. I'd expect that damage
> would not be a problem for levels lower than -3dBm (just a
> seat-of-the-pants sort of judgment).
>
> But, maybe the best approach is to take the damage level out of the main
> table, and just put it as a footnote (just like the 10G point-to-point
> clause did.)
>
> 2. What dynamic performance can be expected from strong-to-weak burst
> reception (the Treceiver_settling question)?
>
> The Nagahori presentation gives us very useful data. Let me illustrate
> it in the following way: From Nagahori page 7, we can see that a tau/T
> of 210 results in an error curve that has zero penalty at the higher bit
> error rates that we are working at. (There are signs of an error floor,
> but it happens at 1E-10, so we don't care.) T, in our case, is 97 ps.
> So, the data says that setting tau to be 20ns is OK.
>
> Suppose we want to tolerate 20 dB of dynamic range burst to burst. This
> means that we need to set the time constant of the AC-coupling to be at
> least 5 times shorter than the burst-to-burst time (e^5 = 148 > 100,
> i.e. 20 dB). That means that the burst-to-burst time needs to be 100ns.
> So far, we are not seeing any problems. (By the way, the value of 100ns
> is what I put forward in 3av_0801_effenberger_3-page4.)
>
> I also think that real circuits will need to allocate time for control
> of the pre-amplifier stage (setting of the APD bias and/or the TIA
> impedance). This should take no longer than an additional 100ns of time.
>
> So, this leaves us with a requirement of 200ns, which has a safety
> margin of 2x below the 400ns that is the proposed value for
> Treceiver_settling.
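As a quick sanity check, the settling arithmetic quoted above can be reproduced in a few lines (illustrative only; all numbers are taken from the thread, and Python is used here merely as a calculator):

```python
import math

# AC-coupling time constant: Nagahori's tau/T = 210 with T = 97 ps
tau = 210 * 97e-12            # ~20.4 ns, i.e. "setting tau to be 20ns is OK"

# 20 dB of burst-to-burst dynamic range is a factor of 100 in power;
# e^5 ~= 148 > 100, so 5 time constants of decay are sufficient.
assert math.exp(5) > 100
gap = 5 * 20e-9               # ~100 ns burst-to-burst settling time

# Add ~100 ns for pre-amplifier control (APD bias / TIA impedance),
# then compare with the proposed Treceiver_settling of 400 ns.
budget = gap + 100e-9         # 200 ns total requirement
margin = 400e-9 / budget      # 2x safety margin
print(gap, budget, margin)
```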
>
> Thus, I don't see any reason why we should change the value from 400ns,
> just like in 1G EPON. While it is true that Treceiver_settling will
> likely need to be longer than T_cdr, setting the maximum values of both
> at 400ns will not preclude any implementations. I fully expect that real
> systems will actually do much better than both of these limits.
>
> 3. What about limiting the rate-of-attack of the burst Tx (Ton/Toff)?
> I went to talk with my optical front-end expert, and he explained the
> latest results that we've been seeing. The whole motivation of our
> concern is the large 20dB dynamic range that we are targeting in PON
> systems. The problem is that the receiver is normally in the maximum
> gain condition, and then a strong burst comes in that threatens to
> overload the circuit.
>
> Initially, we were concerned that the APD and the TIA would be most
> sensitive to high burst transients. However, this seems not to be the
> case. The APD gain may be self-limiting (saturating), and this helps to
> limit the signal to some extent. So, damage to that part of the circuit
> seems unlikely.
>
> However, there still is a problem: the second-stage amplifier (the one
> that is driven by the TIA) tends to get overloaded by the strong bursts.
> (This is understandable, since the signal has received more gain by this
> point.) This prevents the output signal from being useful (for control
> as well as for the actual signal), and the recovery from overload is not
> well behaved. So, we'd like to avoid that.
>
> The simplest way to prevent transient overload is to reduce either the
> APD gain (by reducing its bias) or the TIA impedance. Either of these
> methods is essentially a control loop, and it will have a characteristic
> speed. The setting of the speed is bounded in both directions, just like
> the AC-coupling speed, and a value of 20ns is good. Given that we have a
> control speed of 20ns, the loop will respond only that fast to input
> transients. We can thereby reduce the excursion of the control system
> output by limiting the "time constant" of the input signal to be similar
> to that of the control loop. This is why we suggest a 'rise time' on the
> order of 20ns.
>
> I was wrong in extending this to also specifying a 'fall time' - there
> is no need for controlling the trailing edge, at least not strictly. The
> reason is that the receiver will 'know' when the burst is over, so it
> should be able to manage its withdrawal symptoms. (Note that this
> implies that the Rx has certain feedback paths, such as when the CDR
> declares loss of lock.)
>
> So, that's the reason why we should consider having a controlled turn-on
> for the transmitter.
>
> As for specifying it, the currently suggested text (a Minimum Ton) is
> not good. We should rather specify a maximum rate of power increase.
> Since we are ramping from essentially zero to Pmax in about 20ns, I
> would suggest setting the maximum rate of power increase to be
> Pmax(mW)/10ns. This allows for some non-linear power curve (e.g.,
> exponential decay), since it provides a margin factor of 2 over the
> straight-line value.
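The factor-of-2 margin claimed above can be illustrated with a short sketch (Pmax is normalized to 1 mW here purely for illustration; that is not a value from the draft):

```python
# Burst-Tx turn-on ramp limit, per the suggestion in the quoted text.
P_MAX = 1.0                    # mW (normalized, illustrative assumption)
T_RAMP = 20e-9                 # target turn-on time, ~20 ns

linear_rate = P_MAX / T_RAMP   # straight-line ramp: Pmax / 20ns
cap = P_MAX / 10e-9            # proposed limit: Pmax(mW) / 10ns
assert abs(cap / linear_rate - 2.0) < 1e-9   # margin factor of 2

# A saturating ramp P(t) = Pmax*(1 - exp(-t/tau)), whose slope decays
# exponentially, is steepest at t = 0, where the slope is Pmax/tau;
# tau = 10 ns just meets the proposed cap, so such non-linear curves fit.
tau = 10e-9
peak_rate = P_MAX / tau
assert peak_rate <= cap
```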
>
> Regards,
> Frank E.
---
-----------------------------------------
Hiroshi Hamano
Network Systems Labs., Fujitsu Labs. Ltd.
Phone:+81-44-754-2641 Fax.+81-44-754-2640
E-mail:hamano.hiroshi@jp.fujitsu.com
-----------------------------------------