Re: [RPRWG] More comments on preemption
Nader,
Please see comments below.
----- Original Message -----
From: "Nader Vijeh" <nader@xxxxxxxxxxxxxx>
To: <stds-802-17@xxxxxxxx>
Sent: Friday, May 04, 2001 10:18 AM
Subject: RE: [RPRWG] More comments on preemption
>
> This is an interesting technical response, but there are a number of
> issues:
>
> 1. A MAC in 802 language controls access to the media but does not perform
> packet switching. In other words, the MAC makes a decision on adding or
> removing a packet to/from a shared medium. Having multiple queues in the
> transit path is outside the MAC framework.
>
Yes, it is a MAC, but the "media" we're talking about here is anything but a
physical entity; you can call it "logical", or you can call it "virtual". The
physical medium starts at one node and ends at the next node. I'm not aware of
any physical medium that allows simultaneous (same time, same wavelength)
packet transfer, but the RPR ring does. Some may prefer to call it a "RAC",
Ring Access Control, but I still prefer to call it a MAC, since we already
have more than enough terms to deal with.
> 2. Losing packets in transit (unless due to a fault) is also unacceptable.
> Any mechanism that loses a good packet in transit would therefore violate
> the packet loss requirements associated with the media.
>
I agree, no packet loss (unless due to faults) in the "media".
> 3. Reordering packets is also not good. The MAC layer should not re-order
> packets from a source to a destination (no matter what priority they
> belong to). This is the work of 802.1 and within the scope of a bridge.
>
Most Ethernet switches on the market already do this kind of reordering to
some degree; does that mean they all violate the 802.1D spec? The MAC-level
class separation on the RPR ring is just like providing a carpool lane on the
highway.
> 4. Suggesting segmentation and re-assembly of large packets raises the
> question of why we don't do that for all packets, in which case no
> pre-emption is required. Not to mention the complexity, and why not just
> use ATM?
>
Please refer to #5 and #6 in my proposal; I hope that will relieve your worry
about ATM-style complexity. Of course, the packet encapsulation issue needs to
be resolved before the preemption idea can fly.
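
For what it's worth, here is a rough sketch (in Python, purely for
illustration) of how I picture points 3 through 5 of the recap below fitting
together on the transmit side. The Frame and Link classes, the function names,
and the example at the bottom are inventions of this sketch; only the 256-byte
insertion interval and the preemption rules themselves come from the proposal.

# Illustrative sketch only: Frame, Link and the names below are assumptions,
# not proposed 802.17 text. Only the 256-byte interval and the preemption
# rules follow the recap (points 3-5, plus point 7 for the H frame size).

PREEMPT_INTERVAL = 256  # bytes between IDLE/Escape insertion points (point 3)


class Frame:
    def __init__(self, klass, payload):
        self.klass = klass      # 'H', 'M' or 'L'
        self.payload = payload  # bytes to transmit


class Link:
    """Stand-in for the ring transmit path; records what goes on the wire."""
    def __init__(self):
        self.wire = []

    def send(self, data):
        self.wire.append(data)

    def send_escape(self):
        self.wire.append("IDLE/ESC")


def transmit(link, frame, high_queue):
    """Send one M or L frame, yielding to H traffic at each insertion point."""
    sent = 0
    while sent < len(frame.payload):
        chunk = frame.payload[sent:sent + PREEMPT_INTERVAL]
        link.send(chunk)
        sent += len(chunk)
        link.send_escape()  # preemptive insertion point (point 3)
        # Point 4: H traffic may cut in only at an insertion point.
        while high_queue:
            link.send(high_queue.pop(0).payload)  # H is never a jumbo (point 7)
        # Point 5: the leftover resumes right after the H traffic and stays
        # subject to further preemption on the next pass through the loop.


# Example: a 1000-byte L frame preempted once by a waiting H frame.
link = Link()
transmit(link, Frame("L", b"l" * 1000), [Frame("H", b"h" * 64)])

The point of the loop structure is simply that a preempted leftover goes out
right after the H traffic and can itself be preempted again at its next
insertion point, which is exactly the behavior points 4 and 5 describe.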
Regards,
William Dai
> -----Original Message-----
> From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> Sent: Thursday, May 03, 2001 8:28 PM
> To: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] More comments on preemption
>
>
>
> Kanaiya,
>
> There has been a lot of discussion regarding preemption in the
> "Cut through Definition ?" email thread a while ago.
>
> Let me just recap what I proposed in that thread for your
> reference (with minor corrections and additions). It may contradict
> the proposed GFP packet encapsulation requirement, but I hope
> it is enough to correct your misconception about preemption.
>
> 1. There are 3 MAC-level classes of traffic (H, M, L). H and M traffic
>    insertion is subject to self-policing according to their respective
>    provisioned rates, while L traffic insertion is subject to the "Fairness"
>    ring access algorithm only.
> 2. Preemption is allowed only for H traffic to preempt M or L traffic.
> 3. Each M and L packet transfer will have an "IDLE/Escape" word inserted
>    every 256 bytes (for the sake of alignment/padding concerns) as the
>    preemptive insertion point.
> 4. Preemptive insertion is allowed only at a preemptive insertion
>    point of ongoing M or L traffic.
> 5. Preempted "leftover" traffic will be scheduled to transfer right
>    after the H traffic is transferred, regardless of class, and it could
>    be subject to further preemption when new H traffic arrives.
> 6. M and L traffic are allowed to do store-and-forward (packet-wise)
>    transit on the ring (to reduce the complexity of the reassembly task
>    at the final receiver), while H traffic is allowed to do both
>    cut-through and store-and-forward transit on the ring.
> 7. Jumbo frames are not supported for the H class.
>
> All conditions need to apply at the same time.
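
As a side note on point 1: a token-bucket style check is one simple way to
picture the per-class self-policing on the insertion path. The class name, its
parameters, and the example rates below are illustrative assumptions, not
anything the group has agreed on.

# Illustrative only: one possible shape for the H/M self-policer of point 1.
# The rate, burst size and time source are assumptions of this sketch.
import time


class InsertionPolicer:
    def __init__(self, provisioned_rate_bps, burst_bytes):
        self.rate = provisioned_rate_bps / 8.0  # bytes per second
        self.burst = burst_bytes                # bucket depth in bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def may_insert(self, frame_len):
        """True if a frame of frame_len bytes fits within the provisioned rate."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_len <= self.tokens:
            self.tokens -= frame_len
            return True
        return False  # hold the frame; L traffic instead obeys the fairness algorithm


# Example: an H class provisioned at 100 Mbit/s with a 9 KB burst allowance.
h_policer = InsertionPolicer(provisioned_rate_bps=100_000_000, burst_bytes=9_000)
print(h_policer.may_insert(1_500))  # True until the bucket drains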
>
>
> Regards
>
> William Dai
>
>
> ----- Original Message -----
> From: "Kanaiya Vasani" <kanaiya@xxxxxxxxxxxxxx>
> To: <stds-802-17@xxxxxxxx>
> Sent: Thursday, May 03, 2001 11:07 AM
> Subject: RE: [RPRWG] More comments on preemption
>
>
> >
> > William,
> >
> > Maybe you can further elaborate on this. What happens to the pre-empted
> > packet (frame)? How do we deal with the portion of the packet (frame) that
> > is already transmitted?
> >
> > Thanks,
> >
> > - Kanaiya
> >
> > -----Original Message-----
> > From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > Sent: Tuesday, May 01, 2001 2:34 PM
> > To: stds-802-17@xxxxxxxx
> > Subject: Re: [RPRWG] More comments on preemption
> >
> >
> >
> > Preemption does NOT drop packets.
> >
> > William Dai (minority member)
> >
> >
> > ----- Original Message -----
> > From: "Kanaiya Vasani" <kanaiya@xxxxxxxxxxxxxx>
> > To: <stds-802-17@xxxxxxxx>
> > Sent: Tuesday, May 01, 2001 1:40 PM
> > Subject: RE: [RPRWG] More comments on preemption
> >
> >
> > >
> > > There has been some good discussion around the subject of preemption.
> > > Looks like a majority of the active members on the reflector would
> > > prefer to leave it out.
> > > I too believe that there shouldn't be any preemption within the MAC
> > > for the following reasons:
> > >
> > > 1. The RPR MAC shall be defined with a set of transmission performance
> > > specifications - worst case packet delay, packet jitter tolerance,
> > > packet loss, etc. - similar to other transmission and transport
> > > technologies. In this context, the MAC packet loss shall be zero under
> > > normal conditions. Pre-emption results in dropping of frames, and
> > > therefore should not be a function of the MAC.
> > >
> > > 2. Packet loss is also an important component of a service level
> > > agreement. Service providers obviously want packet loss to be as close
> > > to zero as possible, and the MAC must do its part to help the overall
> > > system achieve this objective. Dropping packets or causing CRC errors
> > > to support pre-emption is not desirable.
> > >
> > > Regards,
> > >
> > > - Kanaiya
> > >
> > > -----Original Message-----
> > > From: Leon Bruckman [mailto:leonb@xxxxxxxxxxxxx]
> > > Sent: Sunday, April 22, 2001 10:15 AM
> > > To: 'William Dai'; stds-802-17@xxxxxxxx
> > > Subject: RE: [RPRWG] More comments on preemption
> > >
> > >
> > >
> > > William,
> > > You are right that the additional delay variation added by each
> > > additional node becomes lower as the number of nodes already taken into
> > > consideration increases. But the maximum delay variation will not
> > > decrease as the number of nodes increases.
> > > So the simulation shows the limit to the delay variation, under the
> > > noted assumptions.
> > > You already corrected your second observation, so I understand it is OK.
> > > Leon
> > >
> > > -----Original Message-----
> > > From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > > Sent: Thursday, April 19, 2001 9:18 PM
> > > To: stds-802-17@xxxxxxxx
> > > Subject: Re: [RPRWG] More comments on preemption
> > >
> > >
> > >
> > > Leon,
> > >
> > > Simulation may or may not catch the worst case situation. There are 128
> > > nodes in your simulation model, and the sheer number of nodes makes it
> > > look like the "toughest" case you can get. While I believe it is good to
> > > evaluate the delay, it makes the jitter evaluation more difficult. Why?
> > > Because the probability of getting the minimum delay (a packet passing
> > > through 127 nodes without being blocked by a Jumbo frame insertion) and
> > > the probability of getting the maximum delay (a packet passing through
> > > 127 nodes and being blocked by a Jumbo frame insertion at every node)
> > > both diminish quickly as the number of nodes increases.
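
To put a rough number on the argument above: if a packet independently finds
each downstream node in the middle of a jumbo insertion with some probability
p (both the independence assumption and the p = 0.1 figure below are purely
illustrative), the two extremes vanish very quickly as the hop count grows.

# Illustrative arithmetic only; p and the independence assumption are mine.
p = 0.1   # assumed chance a given node is mid-jumbo when the packet arrives
N = 127   # nodes traversed in the 128-node ring
print((1 - p) ** N)  # chance of the all-clear minimum delay, roughly 1.5e-6
print(p ** N)        # chance of being blocked at every hop, vanishingly small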
> > >
> > > Secondly, assume we're comparing a 100 Mbps traffic flow going through
> > > a 1G ring vs. a 10G ring with the same number of nodes and the same
> > > traffic generation models, AND on the other end of the anti-jitter
> > > buffer, traffic will be extracted at 100 Mbps for the same flow. In
> > > theory, the size of the anti-jitter buffer and the delay caused by the
> > > anti-jitter buffer should be the SAME. It should not be a surprise,
> > > because a 10G ring is only 10 times wider than a 1G ring, not 10 times
> > > faster for the 100 Mbps traffic flow.
> > >
> > > I'm not a simulation believer (although I used to be in that field),
> > > but I do respect the people who do it. It is just a tool used by PEOPLE.
> > >
> > >
> > > Best regards
> > >
> > > William Dai
> > >
> >
> >
>
>