
RE: [RPRWG] Scheduling



How about an overcommitted class?
That makes it 4. Should we have provision for other classes?
If Diffserv's 3 bits of CoS (the next 3 bits are for drop precedence), the MPLS CoS field (3 bits), and 802.1Q (3 bits) are
in conformance, why should RPR be any different?
 
We should even leverage the code points which Diffserv has defined.
 
101 for EF
100 for AF4
011 for AF3
010 for AF2
001 for AF1
000 for Best Effort
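For illustration, the 3-bit code points listed above could be looked up as in the following sketch. The dictionary and function names are my own, and note that real DSCPs are 6 bits per RFC 2474; the 3-bit values here correspond only to the top (class selector) bits.

```python
# Illustrative lookup of the 3-bit Diffserv-style code points listed above.
# COS_TO_PHB and phb_for_cos are hypothetical names, not anything defined
# by Diffserv or RPR.
COS_TO_PHB = {
    0b101: "EF",   # Expedited Forwarding
    0b100: "AF4",  # Assured Forwarding class 4
    0b011: "AF3",
    0b010: "AF2",
    0b001: "AF1",
    0b000: "BE",   # Best Effort
}

def phb_for_cos(cos_bits: int) -> str:
    """Return the PHB name for a 3-bit CoS value, defaulting to Best Effort."""
    return COS_TO_PHB.get(cos_bits & 0b111, "BE")
```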
 
 
 
-Sanjay K. Agrawal
-----Original Message-----
From: William Dai [mailto:wdai@xxxxxxxxxxxx]
Sent: Wednesday, April 04, 2001 1:47 PM
To: stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] Scheduling

Certainly you can have a mapping between Diffserv (or 802.1p) and the
RPR MAC classes I mentioned. The question is how to define those
behaviors in the context of the MAC. At the very least, I cannot imagine 8 different
behaviors at the MAC layer.
 
William Dai
----- Original Message -----
Sent: Wednesday, April 04, 2001 12:49 PM
Subject: RE: [RPRWG] Scheduling

Why not ADAPT Diffserv's well-defined scheme?
It has already defined the behaviours for what you are suggesting.
We need not re-invent and re-define the traffic characteristics and behaviours all over again.
Just a thought!

Ashwin

-----Original Message-----
From: William Dai [mailto:wdai@xxxxxxxxxxxx]
Sent: Wednesday, April 04, 2001 2:49 PM
To: Ray Zeisz
Cc: stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] Scheduling



Actually, I'm a proponent of 3 priorities, i.e.

Class A: Guaranteed provisioned BW with low transit delay and jitter.
Class B: Guaranteed provisioned BW.
Class C: Best Effort.

I'm not sure yet about the relationship between 802.1p and the above
classifications.
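To make that open question concrete, here is one purely illustrative way the eight 802.1p priority levels could collapse into the three classes above. The thresholds are my own assumptions, not anything the group has agreed on.

```python
# Hypothetical collapse of 802.1p's 8 priority levels into 3 RPR MAC
# classes. The cut points (6 and 3) are assumptions for illustration only.
def rpr_class(p8021p: int) -> str:
    if p8021p >= 6:   # e.g. voice / network control
        return "A"    # guaranteed BW, low transit delay and jitter
    if p8021p >= 3:   # e.g. assured data
        return "B"    # guaranteed BW
    return "C"        # best effort
```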

William Dai



----- Original Message -----
From: "Ray Zeisz" <Zeisz@xxxxxxxx>
To: "'William Dai'" <wdai@xxxxxxxxxxxx>
Sent: Wednesday, April 04, 2001 11:09 AM
Subject: RE: [RPRWG] Scheduling


> Thanks for setting me straight.  After thinking about it some more, I think
> I agree with you.  Do you think 2 priorities is enough?  802.5 and 802.1p
> have 8.
>
>
> Ray Zeisz
> Technology Advisor
> LVL7 Systems
> http://www.LVL7.com
> (919) 865-2735
>
>
>
>
> -----Original Message-----
> From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> Sent: Wednesday, April 04, 2001 2:16 PM
> To: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] Scheduling
>
>
>
> Ray,
>
> Without going into details, "statistical scheduling", in my mind,
> should belong to the layer above the RPR MAC layer; the MAC layer
> should provide a "fast lane / slow lane" type of service which is
> provisionable from upper layers. In this sense, there is nothing wrong
> with strict priority mechanisms in the MAC layer, and schemes like WRR
> and RED should definitely belong to the upper layer.
>
> By the way, packet reordering between different priorities, also in my mind,
> SHOULD BE allowed, and jitter should not be of concern for low priority
> traffic.
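The "fast lane / slow lane" strict-priority service could be sketched roughly as follows (queue and class names are illustrative, not a proposal; WRR/RED-style policy would sit above this):

```python
from collections import deque

# Minimal sketch of a strict-priority MAC service: the high-priority
# ("fast lane") queue is always drained before the low-priority one.
class StrictPriorityMac:
    def __init__(self):
        self.fast = deque()  # high-priority queue
        self.slow = deque()  # low-priority queue

    def enqueue(self, pkt, fast_lane: bool):
        (self.fast if fast_lane else self.slow).append(pkt)

    def dequeue(self):
        """Return the next packet to put on the ring, or None if idle."""
        if self.fast:
            return self.fast.popleft()
        if self.slow:
            return self.slow.popleft()
        return None
```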
>
> William Dai
>
>
> ----- Original Message -----
> From: "Ray Zeisz" <Zeisz@xxxxxxxx>
> To: "Carey Kloss" <ckloss@xxxxxxxxxxxxxxxx>; "Devendra Tripathi"
> <tripathi@xxxxxxxxxxxx>
> Cc: <stds-802-17@xxxxxxxx>
> Sent: Wednesday, April 04, 2001 6:53 AM
> Subject: RE: [RPRWG] Scheduling
>
>
> >
> > I am not sure, but it appears that all the proposals are using a strict
> > priority mechanism.  It seems like everyone wants to give an order in which
> > the queues are emptied.  Maybe I am missing something, but wouldn't it be
> > better if we could statistically schedule between the transit buffers and
> > the transmit buffers?
> >
> > Has anyone proposed a statistical allocation of bandwidth at each ring node?
> > For example, I can envision a protocol whereby each of the ring participants
> > advertises 1) their required minimum guaranteed bandwidth and 2) their
> > desired maximum bandwidth.  From these numbers each ring member could create
> > statistical queue weights for the following:
> > 1. This node's HP Transmit Buffer
> > 2. This node's LP Transmit Buffer
> > 3. This node's Transit Buffer
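The advertisement-to-weights step might look like the sketch below. The normalization is my own assumed interpretation of the idea, and the parameter names are made up for illustration.

```python
# Hypothetical derivation of statistical queue weights from advertised
# bandwidths: hp_min is this node's minimum guaranteed BW, lp_max its
# desired maximum, transit_demand the BW of pass-through traffic.
def queue_weights(hp_min, lp_max, transit_demand):
    total = hp_min + lp_max + transit_demand
    if total == 0:
        return {"hp_tx": 0.0, "lp_tx": 0.0, "transit": 0.0}
    return {
        "hp_tx": hp_min / total,     # weight for HP Transmit Buffer
        "lp_tx": lp_max / total,     # weight for LP Transmit Buffer
        "transit": transit_demand / total,  # weight for Transit Buffer
    }
```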
> >
> > Of course, MAC frames (for lack of a better term) would always be
> > transmitted immediately in front of all user data frames.
> >
> > In my mind, having multiple transit buffers is a bad thing.  Am I missing
> > something?  Each node should NOT be able to re-order packets based on
> > priority while the packets are "passing through" the node.  Doing so seems
> > to create a lot of jitter for the low priority packets; at each node, the
> > higher priority packets would be advancing in front of the lower priority
> > packets, at least when there is also a packet insertion taking place.
> >
> > So back to the protocol... by assigning a weight to each of the queues, you
> > assign a probability of transmitting the next packet from either a) the
> > ingress from the ring (i.e. transit buffer) or b) this node's locally
> > generated traffic.
> > Always allowing the transit buffer to have priority prevents the ability to
> > deliver QoS in light of the fact that there could be a misbehaving node
> > somewhere on the ring.  However, if all of the nodes can agree on their
> > respective probabilities to transmit, and if this probability can be applied
> > to the queues, we should be able to support many priorities as well as
> > provide the ability to utilize 100% of the bandwidth regardless of the
> > priority profile of the traffic, something that a strict reservation of
> > bandwidth would not support.
> >
> > One other thing to note... in my mind, and in this note, all nodes are in
> > store-and-forward mode, whereby each frame is completely and totally
> > received and verified before being placed on the next hop of the ring.
> >
> > Please feel free to point out any mistakes in my logic; I am new to 802.17,
> > but I look forward to meeting you at the next meeting.
> >
> > Ray Zeisz
> > Technology Advisor
> > LVL7 Systems
> > http://www.LVL7.com
> > (919) 865-2735
> >
> >
> >
> >
> > -----Original Message-----
> > From: Carey Kloss [mailto:ckloss@xxxxxxxxxxxxxxxx]
> > Sent: Wednesday, March 28, 2001 11:54 PM
> > To: Devendra Tripathi
> > Cc: stds-802-17@xxxxxxxx
> > Subject: Re: [RPRWG] Cut through definition?
> >
> >
> >
> > I hadn't originally included store and forward congestion control schemes,
> > but someone has asked that I add them, to make the discussion more complete.
> > So here are the 2 store and forward schemes that I know:
> >
> > 1. SRP: Incoming transit traffic is stored in 2 queues: a large low priority
> > queue (512 KB) and a smaller high priority queue (3 MTU). If there is no
> > congestion, traffic leaves a node in this order: usage pkt, HP transit,
> > other control pkts, HP transmit, LP transmit, LP transit. If the node is
> > congested (LP transit buffer crosses a threshold), traffic leaves a node in
> > this order: usage pkt, HP transit, other control pkts, HP transmit, LP
> > transit, LP transmit. Also, periodically, a node will pass a "usage" message
> > to its upstream neighbor. The usage value is basically the bandwidth of
> > transmit LP traffic that it has been able to send. If the upstream neighbor
> > sees that its current usage is greater than the downstream node's, it
> > throttles its LP transmit traffic. If that upstream node is also congested,
> > it forwards the minimum of the two usages (received usage value and its own
> > usage value) further upstream.
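The usage-message rule above can be condensed into a small sketch. The function shape and names are illustrative only, not SRP's actual implementation.

```python
# Sketch of the SRP usage rule described above: a node that sees its own
# usage exceed the downstream value throttles LP transmit; a congested
# node propagates the minimum of the two usages further upstream, so the
# most constrained rate wins.
def handle_usage_msg(own_usage, received_usage, congested):
    """Return (throttle_lp, usage_to_forward) for one received usage message."""
    throttle_lp = own_usage > received_usage
    forward = min(own_usage, received_usage) if congested else own_usage
    return throttle_lp, forward
```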
> >
> > 2. Luminous: HP transit traffic is stored in an internal transit buffer and
> > all LP traffic is forwarded to the Host at each node. The MAC serves the HP
> > transit traffic first; whenever the link is idle, the host is left to decide
> > what to send downstream next.
> >
> > In the cut-through case, I believe that control packets have the highest
> > priority, above any data packet.
> >
> > Thanks,
> > --Carey Kloss
> >
> > Devendra Tripathi wrote:
> >
> > > What about Ring control packets? Are they given the same priority as
> > > transit packets, or is there one more priority level?
> > >
> > > Regards,
> > >
> > > Devendra Tripathi
> > > VidyaWeb, Inc
> > > 90 Great Oaks Blvd #206
> > > San Jose, Ca 95119
> > > Tel: (408)226-6800,
> > > Direct: (408)363-2375
> > > Fax: (408)226-6862
> > >
> > > > -----Original Message-----
> > > > From: owner-stds-802-17@xxxxxxxx
[mailto:owner-stds-802-17@xxxxxxxx]On
> > > > Behalf Of Carey Kloss
> > > > Sent: Wednesday, March 28, 2001 6:07 PM
> > > > To: stds-802-17@xxxxxxxx
> > > > Subject: [RPRWG] Cut through definition?
> > > >
> > > >
> > > >
> > > > I would like to revisit the cut-through vs. store and forward
> > > > discussion, if nobody objects.
> > > >
> > > > The last discussion ended with a wish to get a more concrete definition
> > > > of cut-through. Towards that end, I would like to put out my own
> > > > understanding, and generate feedback on what's specifically different in
> > > > current schemes:
> > > >
> > > > From what I understand, cut-through exists as Sanjay has explained it:
> > > > 1. Transit (pass-thru) traffic always has priority over transmit
> > > > (add-on) traffic, regardless of class.
> > > > 2. There is a small (1-2 MTU) transit buffer to hold incoming transit
> > > > traffic when sending transmit traffic.
> > > > 3. All prioritization happens at a higher layer, when deciding what to
> > > > transmit.
> > > >
> > > > I was also wondering if there is any agreement on cut-through congestion
> > > > control mechanisms? Looking through the presentations on the RPR
> > > > website, I've seen a number of schemes, and this is my understanding
> > > > from the slides. Please correct me if I've misunderstood:
> > > >
> > > > 1. The simplest, local fairness, which I'm not sure that anyone is
> > > > implementing: when the HOL timer times out for high-pri traffic, send a
> > > > congestion packet upstream. This will stall the upstream neighbor from
> > > > sending low-pri traffic (after some delay).
> > > >
> > > > 2. Fujitsu: Keep a cache of the most active source nodes. If a node has
> > > > an HOL timer time out, it sends a unicast "pause" message to throttle
> > > > the most active source for a time. After another timeout, it will send
> > > > more "pause" messages to other sources. This can be extended to cover
> > > > multiple priorities, although I didn't see it explicitly stated in the
> > > > slides.
> > > >
> > > > 3. Nortel, iPT-CAP: When an HOL timer expires, the node calculates the
> > > > number of sources sending through the congested link, and apportions the
> > > > link fairly (if the link is 150M and there are 3 sources, it decides
> > > > that each source can use 50M). To do this, it sets its B/W cap to 50M,
> > > > and then sends a message upstream to tell other nodes to start sending
> > > > at only 50M. Once the affected link becomes uncongested, new messages
> > > > are sent upstream, advising that more B/W is now available. This will
> > > > converge to a fair B/W allocation.
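The apportioning arithmetic in item 3 is just capacity divided by source count, as in the sketch below (function name is illustrative):

```python
# Sketch of the iPT-CAP fair-share calculation described above: on HOL
# timeout, divide the congested link's capacity by the number of sources
# using it, and advertise that as the per-source B/W cap.
def fair_cap(link_mbps: float, n_sources: int) -> float:
    return link_mbps / n_sources if n_sources else link_mbps

cap = fair_cap(150, 3)  # a 150M link shared by 3 sources -> 50.0M each
```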
> > > >
> > > > 4. Dynarc: Token passing and credits. No detailed description. What is
> > > > the "goodput"?
> > > >
> > > > 5. Lantern: Per-SLA weighted fairness, with remaining bandwidth
> > > > apportioned fairly to SLAs. There wasn't a good explanation of
> > > > congestion handling, though. If the per-SLA rate limits are strictly
> > > > enforced to stop congestion, and traffic is bursty, what happens to the
> > > > "goodput"?
> > > >
> > > > Thanks a lot,
> > > > --Carey Kloss
> > > >
> > > >
> > > > Sanjay Agrawal wrote:
> > > >
> > > >
> > > >      Please see comments inline.
> > > >
> > > >      -Sanjay
> > > >
> > > >         -----Original Message-----
> > > >         From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > > >         Sent: Thursday, March 22, 2001 2:37 PM
> > > >         To: Sanjay Agrawal; 'Devendra Tripathi'; Ajay Sahai; Ray Zeisz
> > > >         Cc: stds-802-17@xxxxxxxx
> > > >         Subject: Re: [RPRWG] MAC Question
> > > >
> > > >         My understanding of the cut-through definition in Sanjay's
> > > >         example is:
> > > >             1. A pass-through packet is allowed to transmit before it is
> > > >             completely received.
> > > >
> > > >            [Sanjay Agarwal]
> > > >            Not necessarily. You have the same result whether you forward
> > > >            a packet after you completely receive it or you start
> > > >            transmitting before you receive it. In the former case you
> > > >            have one packet delay; in the latter you don't. 1500 bytes at
> > > >            10 gig gives you 1.2 microseconds.
> > > >
> > > >             2. There is only one transit buffer (regardless of class).
> > > >            [Sanjay Agarwal]
> > > >            Yes, that is what the proposed cut-through schemes have. If
> > > >            you have multiple classes of service and you allow priority,
> > > >            then you have to arbitrate between add and pass classes of
> > > >            traffic; at that moment it becomes store and forward.
> > > >
> > > >             3. The scheduling algorithm always gives pass-through
> > > >             traffic (regardless of class) preference over add-on
> > > >             traffic,
> > > >            [Sanjay Agarwal]
> > > >
> > > >            Yes, that is what the proposed cut-through schemes have. If
> > > >            you don't give pass-through higher priority, then you don't
> > > >            have a cut-through scheme.
> > > >         which somewhat contradicts his first statement. Thus the
> > > >         interesting results.
> > > >            No, it doesn't.
> > > >
> > > >         The debate should be based on a solid definition of cut-through
> > > >         transmission, otherwise
> > > >         there will be no convergence at the end of the discussion.
> > > >
> > > >            I agree.
> > > >
> > > >         I fully agree with Sanjay's first statement, but want to add
> > > >         that each class should have its own transit buffer (personally
> > > >         I prefer having 3 classes supported as RPR MAC services).
> > > >         Whether each transit buffer should reside in the MAC layer or
> > > >         the system layer is up for further discussion. In this context,
> > > >         the Circuit Emulation (or, some may prefer to call it,
> > > >         Synchronous) class will benefit from the cut-through transit.
> > > >            [Sanjay Agarwal]
> > > >            I don't agree in the present cut-through proposals' case,
> > > >            unless you want to define cut-through differently.
> > > >               Ideally it could further benefit from preemptive
> > > >               transmission (yet another definition to be solidly
> > > >               defined).
> > > >
> > > >         William Dai
> > > >
> > > >
> > > >
> >
>
>