
Re: [STDS-802-16] Problems with incremental bandwidth request header?



Kalash and others,

 

First, welcome to 802.16, Kalash and all other DOCSIS friends. Please see my response to your questions embedded in your email below.

 

Once again, this topic was discussed intensively inside 802.16 for quite a long time, and about two years ago the group finally reached consensus.

 

Regards,

 

Lei  

 

-----Original Message-----
From: Kalash [mailto:kalash.kuhasa@gmail.com]
Sent: Tuesday, April 12, 2005 12:45 AM
To: STDS-802-16@LISTSERV.IEEE.ORG
Subject: [STDS-802-16] Problems with incremental bandwidth request header?

 

Hi all,

 

I come from the DOCSIS side of the world and am still trying to figure out GPSS (grant per SS) in 802.16 vs. GPC (grant per connection) in DOCSIS.

I do understand the advantages of GPSS, but I feel that the lack of feedback from the BS about which bandwidth requests it has received and/or is processing can lead to a lot of interesting scenarios. In GPC (DOCSIS), the resolution to most (if not all) of these scenarios is a clear-cut answer - no interpretations.

 

I have the following questions - I would appreciate it if someone would attempt to answer them:

 

1. If two connections of an SS each make an uplink bandwidth request and the BS sends a grant to the SS, should the request timers for both requests be cancelled?

<Lei Wang> This is one of the fundamental issues you have to solve to make GPSS work. There are many possible solutions; it depends on the implementation. For example, one solution is for the SS to cancel the timer of the connection for which the current grant is used, and reset the timer for the other connection.
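One way to sketch this timer handling (names and structure are illustrative; the standard does not mandate any particular scheme):

```python
class ConnectionState:
    """Per-connection bookkeeping at the SS (hypothetical structure)."""
    def __init__(self, cid):
        self.cid = cid
        self.timer_running = False

    def start_request_timer(self):
        self.timer_running = True

    def cancel_request_timer(self):
        self.timer_running = False

def on_grant(connections, grant_used_by_cid):
    """On a GPSS grant: cancel the timer of the connection whose request
    the grant serves, and reset the timers of the other connections so
    their outstanding requests can still be retried."""
    for conn in connections:
        if conn.cid == grant_used_by_cid:
            conn.cancel_request_timer()
        else:
            conn.start_request_timer()
```

The key design point is that the SS, not the BS, decides which connection's timer the grant satisfies.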

 

2. If a connection was waiting for the contention resolution process to complete when a grant to the SS comes in, should it also cancel the contention resolution?

<Lei Wang> First, a question about this one: when an SS has multiple active connections and gets UL grants, why not use the piggyback bandwidth-request mechanism rather than contention-based requesting? That said, even if the described scenario is realistic, my answer is yes: the SS should cancel the contention resolution and instead piggyback the bandwidth request on the UL grant to finish the request that was in contention resolution. Also, use aggregate bandwidth requests for self-correction of any possible out-of-sync errors in this process.

 

3. How would a connection calculate the number of bytes to request in an incremental bandwidth request? The grant is to the SS, and hence the SS has no idea whether the grant was made to the connection that actually uses it or not. Hence, there is no way the SS can estimate how large an incremental request to make.

<Lei Wang> The grant is for the SS, but the SS has full knowledge of how the grant is distributed among its multiple connections. The SS definitely knows how many bytes were allocated to those connections. Consider the following example, where an incremental request makes perfect sense:

Last aggregate request from connection 1: L1 = 1000 bytes
Connection 1 has another 300 bytes of SDUs arriving, so total pending bytes = 1300 bytes
Grant to SS: G = 500 bytes
Allocated to connection 1: A1 = 500 bytes
Pending bytes for connection 1: P1 = 1300 - 500 = 800 bytes
Now the incremental request would be: I1 = P1 - (L1 - A1) = 800 - (1000 - 500) = 300 bytes
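The first example reduces to a one-line formula; a minimal sketch (variable names follow the example, not the standard):

```python
def incremental_request(last_request, allocated, pending):
    """I = P - (L - A): request only the pending bytes not already
    covered by the unserved remainder of the last aggregate request."""
    outstanding = last_request - allocated
    return pending - outstanding

# Numbers from the example above: L1 = 1000, A1 = 500, P1 = 800
assert incremental_request(1000, 500, 800) == 300
```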

 

But consider the following example, where it does not:
Assume 2 connections for the SS, with connection 1 higher priority than connection 2.
Last aggregate request from connection 1: L1 = 100 bytes
Last aggregate request from connection 2: L2 = 600 bytes
Connection 1 has another 800 bytes of SDUs arriving, so total pending bytes = 900 bytes
Grant to SS: G = 500 bytes
Allocated to connection 1: A1 = 500 bytes
Pending bytes for connection 1: P1 = 900 - 500 = 400 bytes
Now the incremental request can be either:
      I1 = P1 - (L1 - A1) = 400 - (100 - 500) = 800 bytes
or
      I1 = P1 = 400 bytes

 

What should be the incremental piggyback request for connection 1?

Should it be 800 bytes? Or should it be 400 bytes?

Why is this not clarified in the 802.16 specification?

<Lei Wang> This is another important issue you have to solve to make GPSS work. First, you have to make a design decision about whether to allow such bandwidth redistribution at the SS, i.e., whether connection 1 may use more than its currently requested bandwidth. If not, the problem you describe does not arise. If yes, you have to solve it, and again there are many possible solutions. One simple solution is to send aggregate bandwidth requests for both connection 1 and connection 2.
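The ambiguity in the second example, and the aggregate-request fallback just described, can be sketched as follows (illustrative code, not from the standard):

```python
def incremental_candidates(last_req, allocated, pending):
    """Two plausible readings of 'incremental' once the SS has given a
    connection more bytes than that connection had requested."""
    credit_overallocation = pending - (last_req - allocated)
    ignore_history = pending
    return credit_overallocation, ignore_history

# Second example: L1 = 100, A1 = 500, P1 = 400 -> 800 or 400 bytes
assert incremental_candidates(100, 500, 400) == (800, 400)

# The unambiguous fallback: an aggregate request states the absolute
# pending byte count per connection, replacing any BS-side estimate.
aggregate = {1: 400, 2: 600}
```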

 

4. If the BS received multiple UGS MPDUs (say 3 MPDUs) with the PM bit set, should it grant 3 bandwidth request polls or just one?

<Lei Wang> This is an implementation detail. Remember that bandwidth requests are still on a per-connection basis. If three connections need to be polled, i.e., need to send bandwidth requests, the BS can either allocate one UL transmission big enough to accommodate three bandwidth requests, or three UL transmissions, one for each bandwidth request. I would say the former is smarter, but again, it is your implementation choice.
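The BS's choice can be sketched like this (a 6-byte bandwidth-request header is assumed, since the 802.16 bandwidth request is a header-only MAC PDU; the rest is illustrative):

```python
def poll_allocation(num_requests, br_header_bytes=6, combine=True):
    """Return the UL allocation sizes for polling: either one region
    big enough for all bandwidth-request headers, or one per request."""
    if combine:
        return [num_requests * br_header_bytes]
    return [br_header_bytes] * num_requests

assert poll_allocation(3) == [18]                      # one combined region
assert poll_allocation(3, combine=False) == [6, 6, 6]  # one per request
```

The combined region saves per-allocation overhead in the UL-MAP, which is why Lei calls it the smarter choice.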

 

5. Assume that the BS cannot allocate bandwidth for a bandwidth request sent by an SS's connection because the link is congested. Must the BS drop the request after the T16 timer expires? There is no indication the BS can provide the SS that its request has been received and is being processed.

<Lei Wang> I don't think GPSS causes this problem; it applies to GPC too. This is really a question of a handshaking bandwidth request/grant mechanism versus a self-correcting one, which was also discussed very intensively in the 802.16 group. The group consensus is the self-correcting mechanism. The indication mechanism you ask about is basically handshaking, and it has multiple cons: sending the indication consumes air-link bandwidth, and the indication itself could be lost, so you would still need a timer to deal with the possible error conditions. With the self-correcting mechanism, if the BS and the SS go out of sync on a bandwidth request/grant - e.g., the SS sends a duplicate request because the BS did not drop the previous one on time-out - the errors can easily be corrected by sending aggregate bandwidth requests periodically. Very low cost, and very efficient.
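A minimal sketch of the self-correcting idea (hypothetical data structure; the point is that an aggregate request is absolute and overwrites stale state, while an incremental request only adds to it):

```python
class BsRequestTable:
    """What the BS believes each connection has queued (illustrative)."""
    def __init__(self):
        self.pending = {}  # cid -> bytes

    def on_incremental(self, cid, nbytes):
        self.pending[cid] = self.pending.get(cid, 0) + nbytes

    def on_aggregate(self, cid, nbytes):
        self.pending[cid] = nbytes  # absolute value resynchronizes state

bs = BsRequestTable()
bs.on_incremental(7, 300)
bs.on_incremental(7, 300)  # duplicate request, e.g. a retransmission
# The BS now over-counts (600 bytes) until the next periodic
# aggregate request corrects it:
bs.on_aggregate(7, 300)
assert bs.pending[7] == 300
```

No per-request acknowledgment is needed; the periodic aggregate request bounds how long any error can persist.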

 

6. The SS can start fragmentation on its own while dividing the allocated SS grant among its connections, even though the BS may not have intended to start fragmentation for the connection. This can result in unwanted and uncontrolled fragmentation.

I have been involved in fragmentation reassembly on DOCSIS CMTSes, and this particular point, more than any before it, is a major concern for the BS. The BS must have a way to control the fragmentation of the SDUs coming in from the SS - if not, the BS will either have to set aside a lot of memory for fragment reassembly or expect to drop fragments as its buffers start filling up.

<Lei Wang> Yes, there may be unexpected fragmentation caused by allowing the SS to decide how to use a bandwidth grant. But why uncontrolled? Do you mean the BS does not know how to process the fragments? The BS does not actually depend on knowledge of the UL allocation algorithm to handle reassembly on the receiving side: if fragmentation/reassembly is supported, the BS has to be able to do the reassembly job regardless. Plus, when the SS is allocating a grant among its connections, the same optimization idea can be applied to minimize fragmentation.

 

Is there a working group document that delves into the effects of GPSS on request/grant state machine scenarios when multiple connections are active on an SS?

<Lei Wang> As far as I know, the answer is no; nor is one necessary. If you would like to contribute one, I believe there would be people interested. Finally, we know GPSS and the self-correcting bandwidth request/grant scheme are working solutions, proven by multiple implementations.

 

Thank you

Kalash