RE: [EFM] [EFM-P2MP] 10G EPONs
Carlos,
As Vincent mentions, the most important thing here is cost.
* The main ONU cost today lies in the optics: upstream rates above
622 Mbps require expensive lasers in order to maintain the required
distance and split ratio. Moving to 2.5 Gbps upstream would result in a
very expensive transceiver.
* The MPCP protocol efficiency for 1.25 Gbps upstream is about 50%
(with a 1 ms cycle): 8b/10b coding takes 20%, the lack of fragmentation
takes a further 15%, and the burst overhead takes an additional 15%.
* Fragmentation is not a complex process, and even if it were, it
would not add cost in silicon; it can even save cost, since it allows a
smaller buffer at the ONU. Nor does fragmentation add complexity to the
system: the PON MAC component can be integrated into a regular Ethernet
switch, with no need to design a special system for PON.
* Fragmentation combined with a different line coding (not 8b/10b)
can provide 97% efficiency, so with a low-cost FP laser at 622 Mbps you
can get almost 600 Mbps. With the addition of FEC you can use the same
low-cost FP laser and get almost 1.2 Gbps of upstream bandwidth at 97%
efficiency.
We did not decide that fragmentation is wrong; it is simply that 802.3
is the host of the EPON standardization activity, and 802.3 does not
allow packets to be fragmented. In addition, 802.3 requires 8b/10b
coding, which results in 80% efficiency.
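For what it's worth, the overhead arithmetic above can be checked in a
few lines. Note the quoted figures only sum to 50% if each overhead is
taken as a fraction of the raw line rate, which is the assumption made
here (a sketch, not part of any MPCP specification):

```python
# Assumed model: each overhead is a fraction of the raw line rate,
# so the losses add, matching the email's 20% + 15% + 15% = 50%.
LINE_RATE_GBPS = 1.25  # 1.25 Gbps upstream line rate

overheads = {
    "8b/10b coding": 0.20,     # 2 of every 10 bits are line coding
    "no fragmentation": 0.15,  # grant remainders too small for a frame
    "burst overhead": 0.15,    # laser on/off, guard and sync times
}

efficiency = 1.0 - sum(overheads.values())   # -> 0.50
usable_gbps = LINE_RATE_GBPS * efficiency    # -> 0.625 Gbps

print(f"efficiency: {efficiency:.0%}, usable: {usable_gbps:.3f} Gbps")
```

So a 1.25 Gbps line delivers roughly 0.625 Gbps of usable upstream
under these figures, which is the 50% efficiency cited above.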
Regards
David
www.broadlight.com
-----Original Message-----
From: carlosal@xxxxxxxxxxxxxxxxxx
Sent: Wed 2/13/2002 9:41 PM
To: gerry.pesavento@xxxxxxxxxxxx
Cc: bob.barrett@xxxxxxxxxxxxxxxxxx; David Levi;
stds-802-3-efm@ieee.org; stds-802-3-efm-p2mp@ieee.org;
vincent.bemmel@xxxxxxxxxxxx
Subject: RE: [EFM] [EFM-P2MP] 10G EPONs
Hi Gerry,
Sorry to bring such a discussion so late to the table. First of all, a
disclaimer:
- I do not want to disturb the current discussions.
- I do not want to make us go back to the beginning.
- I do not want to waste energy in a discussion that is not going to
lead us anywhere.
That said, let me explain the context of my inopportune suggestion.
As we start to analyze the MPCP protocol in more detail, it becomes
clear that it will be very difficult to achieve high efficiency on the
upstream. There are many variables to account for: laser turn-on/off
times, guard times, report message overhead, framing overhead, and so
on. Small grant sizes are important to minimize latency and make the
system more responsive, but OTOH they make the system even less
efficient. Part of this is caused by the decision not to fragment
frames, which was a very wise decision, given the complexity of
fragmentation.
Of course, the system can be made more efficient: tighter timing,
better components, etc. But this may add a lot of complexity to the
system design. So I was wondering if we could not take a different
approach: instead of pursuing higher efficiency, throw more bandwidth
at the problem. It's a somewhat dumb strategy, but it worked for
several technologies, including Ethernet, for a long time.
In my mind, the tradeoff is just like this (just an example, the
figures are arbitrary): which one is more cost-effective, a 90%
efficient 1 Gbps system or a 50% efficient 2.5 Gbps system?
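To make the comparison concrete, here is a quick sketch using those
same arbitrary example figures (not measurements of any real system):

```python
# Usable throughput = line rate x protocol efficiency, for the two
# arbitrary example options from the email.
options = {
    "1 Gbps at 90% efficiency":   1.0 * 0.90,  # 0.90 Gbps usable
    "2.5 Gbps at 50% efficiency": 2.5 * 0.50,  # 1.25 Gbps usable
}

for name, usable in options.items():
    print(f"{name}: {usable:.2f} Gbps usable")

# Cost per bit then decides: the 2.5 Gbps option carries 1.25/0.90,
# i.e. about 1.39x, the usable bits, so it wins only if its hardware
# costs less than about 1.39x that of the 1 Gbps option.
```

Under these figures the less efficient system still moves more usable
bits, so the question really does reduce to cost per bit.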
Of course, this is not to say that we need to start from 2.5 Gbps. We
can start from 1 Gbps, and then define 2.5 Gbps, or even 10 Gbps, as
the next step. This would allow us to select a less efficient but
inexpensive MPCP implementation. It's also one of the places where good
wavelength allocation planning can come in to save the day, allowing
for easier expansion.
Sincerely, I'm not concerned with efficiency by itself. There are
other, more important criteria: reliability, scalability,
predictability, responsiveness, etc. All (other) things being equal,
what really matters is the bottom line, and that's the cost per bit.
Whether it's 1 Gbps or 2.5 Gbps is really not important; we have no
emotional attachment to powers of ten anyway :-)
Best regards,
Carlos Ribeiro
CTBC Telecom