
stds-802-16: 802.16a clarifications





Hello,

I have a few questions about the 802.16a standard that I was not able to
resolve from the standard itself or from other documents on the 802.16 web
site. If anyone could offer clarifications, or point me to relevant
references, I would appreciate it. My main interest is in finding out what
kind of channel state information is available to the BS, and what hooks
the standard provides for the BS to optimize its resource allocation.

1. Channel feedback: Is there any means for the SS to feed back detailed
(e.g. per-sub-channel or per-sub-carrier) information to the BS about the
state of the downlink channel? The RSSI and CINR measurements (defined in
section 8.5.11 of the 802.16a-2003 standard) seem coarse, in the sense
that they define only a single measurement averaged across all
sub-channels/sub-carriers. Or is the BS supposed to infer an SS's downlink
channel conditions from the transmissions it receives on the uplink? (This
may be acceptable for TDD, where the channel is reciprocal, but would it
work for FDD?)
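To make concrete what I mean by "coarse", here is a small numerical sketch
(my own illustration with made-up parameters; nothing here is defined by the
standard) of how a single averaged CINR hides the per-sub-carrier variation
on a frequency-selective channel:

```python
import numpy as np

# Hypothetical illustration: per-sub-carrier CINR varies widely under
# frequency-selective fading, but a single averaged figure (as in 8.5.11)
# collapses this into one number.
rng = np.random.default_rng(0)
n_subcarriers = 256
# Rayleigh-faded per-sub-carrier power gains (arbitrary scale)
gains = rng.rayleigh(scale=1.0, size=n_subcarriers) ** 2
noise_power = 0.1
cinr_per_sc = gains / noise_power       # detailed, per-sub-carrier CINR
cinr_avg = gains.mean() / noise_power   # the single averaged figure

print(f"averaged CINR: {10*np.log10(cinr_avg):.1f} dB")
print(f"per-SC spread: {10*np.log10(cinr_per_sc.min()):.1f} .. "
      f"{10*np.log10(cinr_per_sc.max()):.1f} dB")
```

A scheduler at the BS that sees only the averaged figure cannot tell which
sub-channels are in a deep fade, which is exactly the information I am
asking about.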

2. Is the ranging procedure (section 8.5.7 of 802.16a-2003) intended to
provide such detailed channel measurements? It seems it could be used for
this purpose. However, the BS needs to designate a (presumably small) set
of sub-channels as a ranging channel, so this procedure would appear to
give information only about a subset of the available sub-channels. Also,
can ranging be performed frequently enough to track channel variations?

3. Transmit power allocation: Does the BS have the flexibility to decide
how much power to use on a sub-channel (or on individual sub-carriers
within a sub-channel)? Assuming sufficiently detailed channel feedback is
available, the BS might want to optimize the power allocation to increase
system efficiency. Related to this, I also have some questions about the
"Transmitter Requirements" in section 8.5.12 of 802.16a-2003:

    (a) Is the transmit power level control exercised at a per sub-channel
    granularity, or per sub-carrier, or per SS?

    (b) The transmitter spectral flatness requirement seems to indicate that
    the power in each sub-channel (and sub-carrier) should be essentially
    the same, i.e. the BS should not allocate different power to
    sub-channels/sub-carriers with different channel conditions.
    Is this interpretation correct?
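For reference, the kind of optimization I have in mind in question 3 is
classic water-filling over sub-channel gains. This is entirely my own
sketch (the function name and the example numbers are hypothetical), not
anything specified in 802.16a:

```python
import numpy as np

# Hypothetical sketch: water-filling power allocation across sub-channels
# with noise power normalized to 1, maximizing sum log(1 + g*p).
def waterfill(gains, total_power):
    gains = np.asarray(gains, dtype=float)
    inv = np.sort(1.0 / gains)
    # Find the water level mu by trying k = n, n-1, ... active channels:
    # mu = (P + sum of k smallest 1/g) / k must exceed the k-th smallest 1/g.
    for k in range(len(inv), 0, -1):
        mu = (total_power + inv[:k].sum()) / k
        if mu > inv[k - 1]:      # all k channels get positive power
            break
    return np.maximum(mu - 1.0 / gains, 0.0)

gains = np.array([4.0, 1.0, 0.25])   # good, medium, poor sub-channels
p = waterfill(gains, total_power=3.0)
print(p, p.sum())
```

Note that the resulting allocation is deliberately non-flat (more power on
the better sub-channels, none on the worst), which is why I am asking
whether the spectral flatness requirement in 8.5.12 rules this out.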

4. The allocation of sub-carriers to sub-channels is done in such a way
that the sub-carriers of a sub-channel are spread throughout the frequency
band. Is the intent of this to make the fading on the sub-carriers
essentially uncorrelated? Or is it meant to provide, for instance, some
form of interference averaging?
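Related to question 4, here is a quick simulation (again my own sketch with
arbitrary parameters, not anything from the standard) showing that under a
simple multipath channel, widely spaced sub-carriers fade nearly
independently while adjacent ones are strongly correlated:

```python
import numpy as np

# Hypothetical sketch: correlate the fading of two adjacent sub-carriers
# vs. two widely separated ones, under an 8-tap multipath channel.
rng = np.random.default_rng(1)
n_fft, n_taps, n_trials = 256, 8, 2000
h_adj, h_far = [], []
for _ in range(n_trials):
    taps = rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)
    H = np.fft.fft(taps, n_fft)           # frequency response
    h_adj.append((H[0], H[1]))            # adjacent sub-carriers
    h_far.append((H[0], H[n_fft // 2]))   # maximally separated sub-carriers

def corr(pairs):
    a, b = np.array(pairs).T
    return (abs(np.mean(a * b.conj()))
            / np.sqrt(np.mean(abs(a)**2) * np.mean(abs(b)**2)))

print(f"adjacent spacing: |corr| = {corr(h_adj):.2f}")  # close to 1
print(f"wide spacing:     |corr| = {corr(h_far):.2f}")  # near 0
```

If the permutation is indeed intended to decorrelate the fading within a
sub-channel, that would explain why the sub-carriers are spread across the
band rather than taken contiguously.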

Thank you.

-Anand Bedekar