
Re: [STDS-802-16] Problems with incremental bandwidth request header?



Hi Lei,

Thanks for the detailed reply.

I still do not understand why a core component like the behaviour of
the request/grant state machine is left to implementations. There are
too many vendor-dependent items, and they can result in both
interoperability problems and bandwidth inefficiency.

I understand that the whole purpose of not including feedback about
which requests are under consideration by the BS (DOCSIS calls these
grant pending IEs) is to save precious airtime. However, the different
aspects that have been left out of the specification - like which
timers can be cancelled, how much bandwidth the connections can
request, or how the SS should handle the corner cases - will cause
bandwidth inefficiencies. And as Kaushik pointed out, they can result
in very interesting (mildly put) interoperability issues.

As for your question about why a connection would not make a piggyback
request instead of a contention request:
A piggyback request can only be attached if the connection is sending
out an MPDU - I understand you cannot piggyback a request for a
different connection. The ideal approach would be for the connection
to insert a concatenated bandwidth request header - if it fits. The
question is: assuming that an MPDU for the connection cannot be
scheduled and a bandwidth request header cannot be concatenated,
should the contention resolution be cancelled when the grant to the
SS comes in?
From your answer, I understand that you are advocating that the SS
cancel the timers only after allocations.
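
To make the corner case concrete, here is a rough sketch of the
decision the SS scheduler faces when a grant arrives while a
contention-based request is still unresolved. This is purely my own
pseudo-implementation with made-up names, and I am assuming a 6-byte
stand-alone bandwidth request header; it is not meant as the
standard's behaviour:

from dataclasses import dataclass

BW_REQUEST_HEADER_SIZE = 6   # bytes, assumed size of a stand-alone BR header

@dataclass
class Conn:
    cid: int
    queued_bytes: int
    contention_pending: bool = False   # still waiting on contention resolution?

def on_grant_to_ss(grant_bytes, connections):
    """Distribute an SS-wide grant locally, then decide what to do with any
    connection whose contention-based request is still unresolved."""
    remaining = grant_bytes
    for conn in connections:                  # SS-local scheduling order
        served = min(remaining, conn.queued_bytes)
        conn.queued_bytes -= served
        remaining -= served

    actions = []
    for conn in connections:
        if not conn.contention_pending:
            continue
        if conn.queued_bytes == 0:
            actions.append((conn.cid, "cancel contention resolution"))
        elif remaining >= BW_REQUEST_HEADER_SIZE:
            remaining -= BW_REQUEST_HEADER_SIZE
            actions.append((conn.cid, "cancel contention, concatenate BR header"))
        else:
            # Neither served nor able to re-request: the ambiguous case -
            # cancelling risks starving the connection, keeping the timer
            # risks a duplicate request at the BS.
            actions.append((conn.cid, "ambiguous - cancel or keep the timer?"))
    return actions

# A 500-byte grant fully consumed by connection 1 while connection 2 is still
# in contention resolution lands squarely in the ambiguous branch:
print(on_grant_to_ss(500, [Conn(1, 500), Conn(2, 200, contention_pending=True)]))

It is exactly that last branch that the specification leaves open.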

The problem with GPSS is that there are two entities making scheduling
decisions - the BS and the SS - and both of them are essentially
blind. The BS initially knows how much bandwidth the different
connections of the SS require, but once it starts allocating
bandwidth, there is no feedback to the SS about the BS's intentions.
Similarly, there are no directives in the specification on how the SS
should interpret the allocations from the BS. Sure, there are
aggregate requests to re-sync the SS's view with the BS's - but
remember that at small frame durations with maximum MAP relevance, the
bandwidth requests and grants may overlap - meaning the SS may be
making an aggregate request while the BS is still granting the
remaining portion of the previous request. The only time the
self-correction would end this ambiguity is when "none" of the
connections of the SS have any more data to send - until then, both
sides have a foggy idea of the allocations/requests from the other
end.
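
Just to convince myself of the self-correction argument, here is a toy
model of my own (nothing normative, names made up) of why an aggregate
request re-syncs the two views while incremental requests let the
drift accumulate:

class BsView:
    """Toy model of the BS's per-connection backlog bookkeeping."""
    def __init__(self):
        self.backlog = {}  # CID -> bytes the BS believes are still pending

    def on_incremental_request(self, cid, nbytes):
        # Incremental requests ADD to the estimate, so any duplicate or lost
        # request leaves a permanent error behind.
        self.backlog[cid] = self.backlog.get(cid, 0) + nbytes

    def on_aggregate_request(self, cid, nbytes):
        # Aggregate requests REPLACE the estimate, wiping out accumulated drift.
        self.backlog[cid] = nbytes

    def on_grant_issued(self, cid, nbytes):
        self.backlog[cid] = max(0, self.backlog.get(cid, 0) - nbytes)

bs = BsView()
bs.on_incremental_request(1, 1000)
bs.on_incremental_request(1, 1000)   # duplicate, e.g. re-sent after a time-out
print(bs.backlog[1])                 # 2000 -> BS over-estimates by 1000 bytes
bs.on_aggregate_request(1, 800)      # SS reports its true total backlog
print(bs.backlog[1])                 # 800 -> views are back in sync

Of course, as noted above, while grants and requests are overlapping
the aggregate itself is a snapshot that may already be stale - the two
views only provably converge once the queues drain.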

I did not understand your answer about setting the PM bit in multiple
UGS MPDUs: when the BS receives a UGS MPDU with the PM bit set, it has
no idea how many connections of the SS are requesting a bandwidth
poll. It knows that at least one connection is - it may be more.
Hence, if multiple UGS MPDUs have the PM bit set, does it mean that
the SS is requesting that many bandwidth polls, or is it just trying
to ensure that the BS gets the indication? The question I really
should ask is: is the PM bit a boolean flag requesting a bandwidth
poll directed at the SS (multiple PM-bit MPDUs result in one bandwidth
poll), or is it an accumulation (multiple PM-bit MPDUs result in
multiple bandwidth polls)?
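
In other words (a trivial sketch of my own, just to pin down the two
readings I can see a BS implementing):

def polls_to_schedule(pm_bits_seen, interpretation):
    """pm_bits_seen: number of UGS MPDUs from one SS seen with PM = 1."""
    if interpretation == "boolean":
        # PM is a flag aimed at the SS: any number of set bits -> one poll.
        return 1 if pm_bits_seen > 0 else 0
    if interpretation == "accumulating":
        # Each set bit asks for a bandwidth poll of its own.
        return pm_bits_seen
    raise ValueError(interpretation)

print(polls_to_schedule(3, "boolean"))       # 1 poll
print(polls_to_schedule(3, "accumulating"))  # 3 polls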

I think that one of the biggest advantages of the GPSS approach is
that it provides the SS with the ability to re-schedule the
allocations from the BS. I would like to believe that this was the
reason for choosing GPSS over GPC (and not just saving airtime on GPC
grants). Considering this feature, the rest of the request/grant
mechanism should specify the behavior of the SS/BS when the SS
reschedules the bandwidth itself.

I have worked extensively (and not just theoretically) on bandwidth
management and large-scale fragmentation scenarios for DOCSIS systems.
The fragmentation problem is one of the most debilitating if the BS
cannot control the process. It is not such a major problem in the
downlink, because each SS reassembles fragments only for its own
connections and hence the memory requirements are small. But for a BS
supporting hundreds of SSs with multiple connections each, the number
of outstanding fragments quickly becomes problematic. And worse yet,
the BS has no way to control the fragmentation.
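
A quick back-of-the-envelope makes the asymmetry obvious (every number
below is my own assumption, chosen only for illustration):

# All of these numbers are assumptions for illustration only.
num_ss            = 300     # SSs served by one BS sector
conns_per_ss      = 4       # active uplink connections per SS
frac_mid_fragment = 0.5     # fraction of connections with a partial SDU held
max_sdu_bytes     = 1522    # worst-case partial SDU buffered per connection

bs_buffers = int(num_ss * conns_per_ss * frac_mid_fragment)
print(f"BS: {bs_buffers} reassembly buffers, "
      f"~{bs_buffers * max_sdu_bytes / 1e6:.1f} MB worst case")

ss_buffers = int(conns_per_ss * frac_mid_fragment)   # one SS, downlink side
print(f"SS: {ss_buffers} reassembly buffers, "
      f"~{ss_buffers * max_sdu_bytes / 1e3:.1f} KB worst case")

Even with these modest numbers the BS side is hundreds of times larger
than the SS side, and it grows with every SS added to the sector -
which is why I believe the BS needs some handle on when fragmentation
starts.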

I think I understand what has been bugging me: why is the piggyback
request only incremental? Why can it not be aggregate? An aggregate
request provides the same (if not a far better) view of the
connection's bandwidth requirements to the BS. I would like to suggest
that the piggyback request be changed from a 16-bit BR field to a
1-bit aggregate/incremental indicator and a 15-bit BR. This would
allow the SS to make an aggregate bandwidth request to the BS.

Would there be any reason why the piggyback request cannot be
aggregate? The specification does not explain why it has to be
incremental only. Is the concern that the 16 bits in the piggyback
request are not enough for the bytes pending for the connection? If
that is the case, the SS can always fall back to using a concatenated
(or a contention) bandwidth request header, which allows up to 19 bits
(or up to 512 KBytes).
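
To make the suggestion concrete, here is a sketch of the encoding I
have in mind (this is my proposal, not anything in the current spec):

AGG_FLAG      = 1 << 15            # proposed aggregate/incremental indicator
PIGGYBACK_MAX = (1 << 15) - 1      # 32,767 bytes with the proposed split
BR_HEADER_MAX = (1 << 19) - 1      # 524,287 bytes (~512 KB) in the BR header

def encode_piggyback(pending_bytes, aggregate=True):
    """Return the proposed 16-bit piggyback field, or None if the backlog is
    too large and a concatenated/contention BR header must be used instead."""
    if pending_bytes > PIGGYBACK_MAX:
        return None
    return (AGG_FLAG if aggregate else 0) | pending_bytes

def decode_piggyback(field):
    kind = "aggregate" if field & AGG_FLAG else "incremental"
    return kind, field & PIGGYBACK_MAX

print(decode_piggyback(encode_piggyback(1300)))   # ('aggregate', 1300)
print(encode_piggyback(40_000))                   # None -> fall back to BR header
print(40_000 <= BR_HEADER_MAX)                    # True: 19 bits can carry it

The fall-back keeps the rare large-backlog case covered while letting
the common case carry an unambiguous aggregate figure.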

Thanks again for the details - I really hope that the self-correcting
nature of the GPSS mechanism works out in the end.

Kalash

On 4/13/05, Kaushik V Shah <kaushik@ncoretech.com> wrote:
> Thank you Lei for some insights on these issues.
> 
> The question I have is: how is "system" efficiency to be maximized
> with such openness?
> 
> Is there an inherent assumption here that any vendor will provide the
> whole system, i.e. both SS and BS together (and also that any service
> provider will buy the system)? Or that there is a set of compatible,
> intuitive, common-sense ways of doing these things that will work?
> 
> Another question - maybe this is for the WiMAX Forum - but how will
> performance be certified? I.e., can an SS and/or BS fail certification
> due to performance inefficiency?
> 
> Regards,
> Kaushik
> 
> Lei Wang wrote:
> 
> > Kalash and others,
> >
> > First, welcome to 802.16, Kalash and all other DOCSIS friends. Please
> > see my response to your questions embedded in your email below.
> >
> > Once again, this topic had been discussed intensively inside 802.16
> > for quite a long time, and finally about two years ago, the group
> > reached consensus.
> >
> > Regards,
> >
> > Lei
> >
> > -----Original Message-----
> > From: Kalash [mailto:kalash.kuhasa@gmail.com]
> > Sent: Tuesday, April 12, 2005 12:45 AM
> > To: STDS-802-16@LISTSERV.IEEE.ORG
> > Subject: [STDS-802-16] Problems with incremental bandwidth request header?
> >
> > Hi all,
> >
> > I come from the DOCSIS side of the world and am still trying to figure
> > out GPSS (grant per SS) in 802.16 vs GPC (grant per connection) in
> > DOCSIS.
> >
> > I do understand the advantages of GPSS, but I feel that the lack of
> > feedback from the BS regarding the bandwidth requests received and/or
> > being processed by it can cause a lot of interesting scenarios. In GPC
> > (DOCSIS), the resolution to most (if not all of) these scenarios is a
> > clear-cut answer - no interpretations.
> >
> > I have the following questions - I would appreciate it if someone would
> > attempt to answer them:
> >
> > 1. If two connections of an SS make a bandwidth request on the uplink
> > and the BS sends a grant to the SS, should the request timers of both
> > requests be cancelled?
> >
> > <Lei Wang> This is one of the fundamental issues you have to solve to
> > make GPSS work. There are many solutions; it depends on the
> > implementation. For example, one solution can be that the SS cancels
> > the timer for the connection for which the current grant is used, and
> > then resets the timer for the other connection.
> >
> > 2. If a connection was waiting for the contention resolution process to
> > complete when a grant to the SS comes in, should it also cancel the
> > contention resolution?
> >
> > <Lei Wang> I have a question first about this one: when an SS has
> > multiple active connections and gets UL grants, why not use the
> > piggyback bandwidth request mechanism - why still use contention-based
> > bandwidth requests? Ok, even if the described scenario is realistic,
> > then my answer to your question is yes, it should cancel the contention
> > resolution and instead use a piggyback bandwidth request on the UL
> > grant to finish the bandwidth request that was in contention
> > resolution. Also, use aggregate bandwidth requests to do the
> > self-correction on any possible out-of-sync errors in this process.
> >
> > 3. How would a connection calculate the number of bytes to request in
> > an incremental bandwidth request? The grant is to the SS, and hence the
> > SS has no idea whether the grant was made to the connection that
> > actually uses it or not. Hence, there is no way that the SS can
> > estimate how large the incremental request should be.
> >
> > <Lei Wang> The grant is for the SS, but the SS has full knowledge of
> > the distribution of the grant to its multiple connections. The SS
> > definitely knows how many bytes are allocated to those connections.
> > Consider the following example where an incremental request makes
> > perfect sense:
> >
> > Last aggregate request from connection 1: L1 = 1000 bytes
> > Connection 1 has another 300 bytes of SDUs arriving, so total pending
> > bytes = 1300 bytes
> > Grant to SS: G = 500 bytes
> > Allocated to connection 1: A1 = 500 bytes
> > Pending bytes for connection 1: P1 = 1300 - 500 = 800 bytes
> > Now the incremental request would be:
> > I1 = P1 - (L1 - A1) = 800 - (1000 - 500) = 300 bytes
> >
> > But consider the following example where it does not:
> >
> > Assume 2 connections for the SS, with connection 1 at a higher priority
> > than connection 2.
> > Last aggregate request from connection 1: L1 = 100 bytes
> > Last aggregate request from connection 2: L2 = 600 bytes
> > Connection 1 has another 800 bytes of SDUs arriving, so total pending
> > bytes = 900 bytes
> > Grant to SS: G = 500 bytes
> > Allocated to connection 1: A1 = 500 bytes
> > Pending bytes for connection 1: P1 = 900 - 500 = 400 bytes
> > Now the incremental request can be either:
> > I1 = P1 - (L1 - A1) = 400 - (100 - 500) = 800 bytes
> > or
> > I1 = P1 = 400 bytes
> >
> > What should be the incremental piggyback request for connection 1?
> > Should it be 800 bytes? Or should it be 400 bytes?
> > Why is this not clarified in the 802.16 specification?
> >
> > <Lei Wang> This is another important issue you have to solve to make
> > GPSS work. First of all, you have to make your design decision about
> > whether or not you allow such a bandwidth re-distribution at the SS,
> > i.e., allow connection-1 to use more than its currently requested
> > bandwidth. If not, the situation you described won't exist. If yes,
> > then you have to solve the described problem. Again, there are many
> > possible solutions. One simple solution is to send aggregate bandwidth
> > requests for both connection-1 and connection-2.
> >
> > 4. If the BS receives multiple UGS MPDUs (say 3 MPDUs) with the PM bit
> > set, should it grant 3 bandwidth request polls or just one?
> >
> > <Lei Wang> This is an implementation-detail issue. Remember that
> > bandwidth requests are still on a per-connection basis. If three
> > connections need to be polled, i.e., need to send bandwidth requests,
> > the BS can either allocate one UL transmission big enough to
> > accommodate three bandwidth requests, or three UL transmissions, one
> > for each bandwidth request. I would say the former is smarter. Well,
> > again, it is your implementation choice.
> >
> > 5. Assume that the BS cannot allocate bandwidth for the bandwidth
> > request sent by an SS's connection because the link is congested. Must
> > the BS drop the request after the T16 timer expires? There is no
> > indication the BS can provide to the SS that its request has been
> > received and is being processed.
> >
> > <Lei Wang> I don't think GPSS causes this problem; it applies to GPC
> > too. This is the question of a hand-shaking-style bandwidth
> > request/grant mechanism vs. a self-correcting bandwidth request/grant
> > mechanism. I think this had also been discussed very intensively in
> > the 802.16 group. The group consensus is the self-correcting
> > mechanism. The indication mechanism you asked about is basically
> > hand-shaking-like, and there are multiple cons to it: e.g., sending
> > the indication needs air-link bandwidth; also, it could be lost too,
> > so you still need a timer to deal with the possible error conditions,
> > … Well, with the self-correcting mechanism, in case the BS and the SS
> > are out of sync on the bandwidth request/grant - e.g., the SS sends a
> > duplicate request because the BS did not drop the previous one due to
> > time-out - those errors can be easily corrected by periodically using
> > aggregate bandwidth requests. Very low cost, and very efficient.
> >
> > 6. The SS can start fragmentation on its own while dividing the
> > allocated SS grants among its connections, even though the BS may not
> > have intended to start fragmentation for the connection. This can
> > result in unwanted and uncontrolled fragmentation.
> >
> > I have been involved in fragmentation reassembly on DOCSIS CMTSes, and
> > this particular point, more than any before, is a major concern for
> > the BS. The BS must have a way to control the fragmentation of the
> > SDUs coming in from the SS - if not, the BS will either have to load
> > up a lot of memory for fragment reassembly or expect to drop fragments
> > as its buffers start filling up.
> >
> > <Lei Wang> Yes, there may be unexpected fragmentation caused by
> > allowing the SS to decide how to use a bandwidth grant. Why are those
> > uncontrolled? Do you mean the BS does not know how to process them?
> > Also, the BS does not actually depend on the UL allocation
> > algorithm/knowledge to handle the reassembly on the receiving side. If
> > fragmentation/reassembly is supported, the BS has to be able to do the
> > reassembly job. Plus, when the SS is allocating a grant to its
> > connections, the same optimization idea can be applied to minimize
> > fragmentation.
> >
> > Is there a working group document that delved into the effects of GPSS
> > on request grant state machine scenarios when multiple connections are
> > active on an SS?
> >
> > <Lei Wang> As far as I know, the answer is no. Also, it is not
> > necessary. If you would like to contribute, I believe there will be
> > people interested in it. Finally, we know that GPSS and the
> > self-correcting bandwidth request/grant scheme are working solutions,
> > which have been proven by multiple implementations.
> >
> > Thank you
> >
> > Kalash
> >
>