Re: [EFM] RE: EPON TDMA
[resending because this exploder seems to have the habit of sending
messages to the 802.black-hole]
I think a key question for the group is what level of efficiency we need
to design for with respect to offered load (%), and what latency/jitter
guarantees are required.
One school of thought is that EPON is just a simple transition technology
on the way toward PTP, so (upstream) efficiency is no consideration.
Heck, 10-20% of available link capacity would be great. Why not just
divide up the bandwidth statically and give each ONU 1/16th of the raw
link bandwidth (e.g., 62.5 Mbit/s per ONU on a 1 Gbit/s upstream)? It's
a lot simpler that way. Plus the EFM operator will just switch over to
PTP anyway once the traffic reaches a significant level.
Another school of thought is that EPON is an enduring technology which
(in order to compete with other enduring shared technologies, e.g. copper
and vaporized-copper) must dynamically maximize the utilization of the
link bandwidth, up to 100%. Furthermore, latency-sensitive traffic must
be able to see minimal latency (e.g., ~0 ms for a constant-bit-rate
stream) on the uplink.
So the question is which extreme (or intermediate) model is required.
The model chosen will determine the functionality requirements of the
MAC/PHY.
One last comment. Here is a possible set of requirements for real-time
traffic (a short worked example follows the list):
- guaranteed traffic rate for all individual and aggregate real-time
traffic streams
- guaranteed minimal latency and jitter for all individual and aggregate
real-time traffic streams
.. goal = ~0 ms from the time a real-time packet is queued until it is
transferred upstream by the MAC
- real-time traffic is a combination of
.. 1 or more sources at packet size s1, packet rate r1
.. 1 or more sources at packet size s2, packet rate r2
... etc., up to, say, 4-8-16 different types of real-time traffic
... plus traffic of the emerging SIP-client variety - monitor the packet
loss, latency, and throughput end to end, and dynamically adapt the
traffic to lower/higher bitrate encodings, with/without FEC to cover
packet loss.
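
To make the arithmetic concrete, here is a minimal sketch of how the
aggregate upstream reservation for such a mix could be computed,
including Ethernet per-packet overhead. The sizes and rates below are
illustrative placeholders, not figures from any presentation:

    # Sketch: aggregate upstream reservation for a mix of CBR real-time
    # sources. The traffic mix below is a made-up placeholder.
    PER_PKT_OVERHEAD = 8 + 12  # preamble + IPG, in bytes

    # (number of sources, frame size in bytes, packets per second)
    traffic_mix = [
        (32, 200, 50),    # voice-like: 32 streams, 200 B every 20 ms
        (4, 1400, 3000),  # video-like: 4 streams, 1400 B at 3000 pkt/s
    ]

    total_bps = 0
    for n, size, rate in traffic_mix:
        total_bps += n * (size + PER_PKT_OVERHEAD) * 8 * rate

    print("aggregate real-time reservation: %.1f Mbit/s" % (total_bps / 1e6))

For this made-up mix the answer is about 138 Mbit/s, i.e. the real-time
portion alone can claim a significant share of a 1 Gbit/s upstream.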
Note: "Real time traffic" I am thinking of includes voice, audio (CD
quality and higher), video (with audio lip sync), and 3-D motion-holigraphy
(with tactile sensor synch).
Note: of course, an alternative view of "real time" is that "adaptive
buffering ameliorates all introduced jitter and latency".
J
At 03:38 PM 7/14/01 -0400, Dolors Sala wrote:
>Glen,
>
>I think your efficiency computation it is a little simplistic. This
>calculation
>only applies to overload conditions when there is always enough data in the
>queue to fill the assigned burst. However, under normal conditions it is more
>difficult to predict how much of the assigned bandwidth a CPE may be able
>to use
>if it is assigned in a static basis. On the other hand, if the bandwidth is
>assigned based on request then this is not an issue. In our presentation we
>tried to highlight these issues and give an initial estimate of performance
>(both efficiency and delay) based on simulation results.
>
>Below there is a little more detailed answer to your computation and
>explanation
>of the general case.
>
>glen.kramer@xxxxxxxxxxxx wrote:
>
> > Dear Xu,
> >
> > I think I know what confused you in the presentation as I got several
> > similar questions.
> >
> > Timeslot is not an analog to a cell. While, from the slide 4 in the
> > presentation you may conclude that one timeslot is only large enough to
> hold
> > one maximum size packet, that is not the case. Timeslot in our example was
> > 125 us, which equals to 15625 byte times. Then you can see that in the
> > worst case it will have 1518 + 4(VLAN) + 8(preamble)+12(IPG) - 1 = 1541
> > bytes of unused space at the end of timeslot (assuming there is data to be
> > sent and no fragmentation). With realistic packet size distribution (like
> > the one presented by Broadcom), the average unused portion of the timeslot
> > is only about 570 bytes. That gives channel efficiency of 96%, or
> > accounting for 8 us guard bands - 90%
>
>This efficiency calculation considers only the efficiency impact of the last
>packet in the burst transmission. If the system doesn't allow fragmenting
>packets, there will be wasted bandwidth at the end of the burst because the
>packet in the head of the queue does not fit in the remaining space of the
>burst. As you say, the size of this gap is on average equal to the average
>packet size. This means that the lack of fragmentation effects 5-10%
>efficiency.
>So I agree with you that this is a small penalty for the complexity
>involved in
>implementing fragmentation (i.e. buffering and SAR). This is why we did not
>mention fragmentation in our presentation.
>
>However the key factor in the efficiency is how often the CPE has enough
>information to fill the entire burst. This depends more on the burstiness and
>arrival process of packets and less on packet size distribution. If not enough
>bandwidth is assigned to a CPE, the queues build up and hence there is always
>enough packets to fill any bandwidth assigned. This is the case where your
>computation applies. However this overflow condition is not a good operation
>point for the system. Latency is too high to be able to have any interactivity
>in the system. It could work only for best effort data. Hence in general
>we have
>to be able to adapt the bandwidth assignment so that the queues are used
>only to
>mitigate the burstiness of the traffic. In this case it is difficult to
>predict
>in a simple calculation the efficiency of the system. To give an initial
>estimate, we showed too examples in our presentation, one for video
>traffic and
>another with a model corresponding to the current mixed of traffic in
>residential cable systems. As you can see in there, the saturation point
>of the
>system (i.e. maximum efficiency) is reached earlier than this simple
>calculation
>indicates. And the reason is that there is much more left over in the
>burst than
>just the last packet. Note that the two lines in the plots assume no
>fragmentation, hence the actual difference is a measure of this left over
>in the
>burst.
>
>These two plots were taken just as example. There are many more situations to
>consider. An interesting one is when close-loop (TCP-like) traffic models are
>considered. In this case the traffic generation is driven by the response
>of the
>system. If acks cannot be transmitted, no more data is generated. In this
>case,
>the queues never build up, instead the service just degrades due to the
>lack of
>bandwidth at the appropriate time. We can provide additional simulation
>results
>for these cases. Hence the assumption that the packets will eventually
>build up
>in the queue does not always apply.
>
>Our presentation and this description tries to motivate the need for DBA
>in the
>system design. While the actual algorithms don't need to be standardized, the
>interface between HE and CPE for the adaptation must be included.
>
>Dolors
>
> > DBA is a separate question. While it may be important for an ISP to have
> > DBA capabilities in their system, I believe it will not be part of the
> 802.3
> > standard. But a good solution would provide mechanisms for equipment
> > vendors to implement DBA. These mechanisms may include, for example, an
> > ability to assign multiple timeslots to one ONU or to have timeslot of
> > variable size. Grant/Request approach is trying to achieve the same by
> > having variable grant size.
>
> > Having small timeslots will not solve QOS either. Breaking packet into
> > fixed small segments allows efficient memory access and a cut-through
> > operation of a switch where small packets are not blocked behind the long
> > ones (and it assumes that short packets have higher QOS requirements). In
> > such a distributed system as EFM is trying to address (distances in excess
> > of 10 km) the gain of cutting through is negligible comparing to
> propagation
> > delay or even the time interval before ONU can transmit in a time-sharing
> > access mode (be that TDMA or grant/request method).
> >
> > Thank you,
> >
> > Glen
> >
> > -----Original Message-----
> > From: xu zhang [mailto:zhangxu72@xxxxxxxxx]
> > Sent: Thursday, July 12, 2001 7:01 PM
> > To: glen.kramer@xxxxxxxxxxxx
> > Cc: stds-802-3-efm@ieee.org
> > Subject: EPON TDMA
> >
> > hi, glen:
> > I had seen your presentation file about EPON TDMA in
> > PHY, it help me a lot to understand your EPON system.
> > We had developed the first APON system in china, when
> > I think of the TDMA of EPON, I think though the uplink
> > data rate is 1Gbits/s when shared by 16 or 32 users is
> > still not enough, so the dynamic bandwidth
> > allocate(DBA) protocal must be a requiremant
> > especially when take care of the QoS performance. In
> > DBA protocal, in order to achieve high performance the
> > time slot need be to small, I think why not we divide
> > the ethernet packet to 64 byte per solt, it is often
> > used in ethernet switch when store packet in SRAM.
> >
> > best regards
> > xu zhang
> >
> > __________________________________________________
> > Do You Yahoo!?
> > Get personalized email addresses from Yahoo! Mail
> > http://personal.mail.yahoo.com/