RE: stds-802-16-tg4: Call for input - Data Encoding
Dear Octavian,
We agreed in Santa Clara on the basic criteria, and I think it is a good
starting point.
All the best
Zion Hadad
-----Original Message-----
From: owner-stds-802-16-tg4@ieee.org
[mailto:owner-stds-802-16-tg4@ieee.org] On Behalf Of Octavian Sarca
Sent: Monday, April 16, 2001 6:31 PM
To: Minfei Leng; Liebetreu, John; Stds-802-16-Tg4 (E-mail)
Subject: RE: stds-802-16-tg4: Call for input - Data Encoding
Dear All,
I think the discussions about the encoding methods should be part of the
section coordinated by Tal, which is designated to establish evaluation
criteria. I would like to encourage everybody to submit their opinions to
that section. I strongly believe that the data encoding section should
accept ALL encoding schemes that fall within the scope outlined by the
group during the previous meetings. This will let the group make an
informed decision.
I also have a few comments about interference in the UNII bands. I see
three types of devices potentially operating in these bands:
1. 802.11a/HyperLAN devices
2. 802.16.4/802.16b devices
3. PTP and PTMP devices using non-standard, proprietary protocols
In the first case, the minimum length of a burst is 16us (preamble) +
4us (SIGNAL) + 4us (1 data symbol) = 24us. However, the maximum duration
can be 16us (preamble) + 4us (SIGNAL) + 783*4us (max data frame @ 6Mb/s)
= 3152us.
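As a quick sanity check, here is a minimal Python sketch of that
arithmetic. Note that the mail does not say where the 783-symbol figure
comes from; I am assuming the 6Mb/s mode carries 24 data bits per 4us
OFDM symbol and that the longest frame is a 2346-byte MPDU plus the
16-bit SERVICE field and 6 tail bits, which is consistent with the
numbers above:

    import math

    # Assumed parameters (not stated explicitly in this mail).
    PREAMBLE_US = 16   # PLCP preamble
    SIGNAL_US = 4      # SIGNAL field, one OFDM symbol
    SYMBOL_US = 4      # data OFDM symbol duration
    DBPS_6MBPS = 24    # data bits per symbol at 6 Mb/s

    def burst_us(n_data_symbols):
        return PREAMBLE_US + SIGNAL_US + n_data_symbols * SYMBOL_US

    min_burst = burst_us(1)                                 # 24 us
    max_syms = math.ceil((2346 * 8 + 16 + 6) / DBPS_6MBPS)  # 783 symbols
    max_burst = burst_us(max_syms)                          # 3152 us
    print(min_burst, max_syms, max_burst)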
In the second case, on downlink we expect bursts of 0.5ms to 8ms. On
uplink we may expect something in the range of tens to hundreds of us.
In the third case, we have PTP devices operating in FDD (like those
developed by Western Multiplex and CRC/Redline), which use continuous
transmission, and TDD devices (like those developed at Malibu Networks),
whose characteristics will be similar or close to those of 802.16b.
Interference from FDD devices with continuous transmission will be
perceived as a raised noise floor, so burst-noise immunity does not help
against it. Coding cannot do anything significant there either, but we
can use deployment strategies, directional antennas, etc.
Concerning the other devices, we expect noise bursts with a length
between 24us and 8ms, with the higher probability concentrated between
100us and 1ms. The SNR degradation during these bursts can be anywhere
from a few dB to enough to kill any reception.
Now, let's get real. No coding can be immune to bursts unless the
interleaving time is much larger than the burst. In our case, what shall
we choose? 1ms? 10ms? 100ms? What about delay? Remember that the long
downlink burst is actually made of small packets/bursts with different
destinations, coding rates and modulations. What about the upstream? Can
we do 1ms interleaving on a 100us burst?
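To make that last question concrete, a short Python sketch (assuming
the same 4us OFDM symbol duration used in the 802.11a numbers above):

    SYMBOL_US = 4   # assumed OFDM symbol duration, as above

    def n_symbols(duration_us):
        return duration_us // SYMBOL_US

    for span_us in (1000, 10000, 100000):  # candidate interleaving times
        print(span_us, "us interleaver spans",
              n_symbols(span_us), "symbols")

    # A 100 us uplink burst carries only 25 symbols, so a 1 ms
    # (250-symbol) interleaver cannot even fit inside it.
    print("100 us burst holds", n_symbols(100), "symbols")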
I think that the only chance we have to improve immunity to interference
is to use a smaller granularity of the data stream. In other words, if
we use symbol-size interleaving, place a CRC on each separate data
fragment/packet/burst, and use ARQ, we may obtain the best results.
Why? Because an interfering noise burst will then affect the smallest
number of data fragments.
Suppose a very strong interfering burst destroys, say, 14 OFDM symbols
(this is very likely to happen in the UNII band). With symbol-size
interleaving, this would kill the data in exactly those 14 OFDM symbols.
Any longer interleaving scheme will kill more data, i.e., more bursts on
the uplink or more packets/fragments on the downlink.
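Here is a small Python sketch of that comparison. All parameters are
invented for illustration: 10 OFDM symbols per data fragment and a
10x100 block interleaver (written row-wise in data order, read
column-wise on air):

    FRAG_SYMS = 10            # symbols per data fragment (invented)
    ROWS, COLS = 10, 100      # 1000-symbol block interleaver (invented)
    BURST = range(500, 514)   # the 14 destroyed OFDM symbols on air

    def fragments_hit(data_symbols):
        return {s // FRAG_SYMS for s in data_symbols}

    # Symbol-size interleaving: each air symbol carries its own data.
    direct = fragments_hit(BURST)

    def deinterleave(p):
        # Air position p was read column-wise from a block written
        # row-wise, so it came from data-symbol index r*COLS + c.
        r, c = p % ROWS, p // ROWS
        return r * COLS + c

    spread = fragments_hit(deinterleave(p) for p in BURST)

    print(len(direct), "fragments lost, symbol-size interleaving")  # 2
    print(len(spread), "fragments lost, deep block interleaver")    # 10

The same 14 destroyed symbols corrupt 2 fragments without deep
interleaving but 10 fragments with it, which is the point made above.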
I agree with you that we could have saved this data if we had been able
to make the interleaving block much larger than the length of the
interfering bursts. But this is not possible. Even if we ignore
interference from other systems and consider only 802.16b devices, we
arrive at a vicious circle: a larger interleaving block implies larger
bursts, which in turn require an even larger interleaver.
Best Regards,
Octavian