----- Original Message -----
Sent: Friday, May 04, 2001 11:43 AM
Subject: RE: [RPRWG] More comments on preemption
Hi William,
If we use 2 CoS bits then we have 4 classes. We might as well
introduce a best effort class that provides no guarantees. Best effort traffic
will fill the unused link bandwidth.
We could have 3 CoS bits, of which only 4 classes are
defined and the rest are reserved for future extensions.
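Just to make that suggestion concrete, here is a minimal sketch of one possible encoding; the class names and code point values are my own illustration, not from any draft.

    # One possible reading of the 2-CoS-bit suggestion above (illustrative
    # code points only; nothing here comes from a draft).
    from enum import IntEnum

    class Cos(IntEnum):
        HIGH = 0b00         # provisioned rate, lowest delay
        MEDIUM = 0b01       # provisioned rate
        LOW = 0b10          # fairness-controlled
        BEST_EFFORT = 0b11  # no guarantees, fills unused link bandwidth

    # With 3 CoS bits there would be 8 code points, of which only 4 are
    # defined and the remaining 4 reserved for future extensions.
    print(list(Cos))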
Also, wouldn't reassembly at the destination RPR MAC be a
big task? You have no idea how many outstanding preempted MAC fragments there
are. Multiple nodes could be sending preempted packets from both directions of
the ring to a destination node. I think the job of fragmentation and
reassembly in the RPR MAC chip will be difficult.
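To illustrate the amount of state being described here, below is a rough sketch of destination-side reassembly, assuming each fragment carries a source address, a ringlet identifier, and a last-fragment flag; all names are hypothetical placeholders, not from any draft.

    # Hypothetical destination-side reassembly state: one partial-frame buffer
    # per (source node, ringlet), since any upstream node on either ringlet may
    # have a preempted frame in flight toward this station.
    partial = {}   # (src_addr, ringlet) -> bytearray of fragments so far

    def deliver(frame: bytes) -> None:
        print(f"reassembled {len(frame)} bytes")

    def receive(src_addr: str, ringlet: int, payload: bytes, last: bool) -> None:
        """Accumulate fragments of a preempted frame; deliver when complete."""
        key = (src_addr, ringlet)
        partial.setdefault(key, bytearray()).extend(payload)
        if last:
            deliver(bytes(partial.pop(key)))

    # With N stations and two ringlets, up to 2*(N-1) contexts can be open at
    # once, which is the bookkeeping burden the paragraph above points at.
    receive("node7", 0, b"x" * 256, last=False)
    receive("node3", 1, b"y" * 256, last=False)   # a second open context
    receive("node7", 0, b"x" * 100, last=True)    # first frame completes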
-Sanjay K. Agrawal
Luminous Networks
-----Original Message-----
From: William Dai [mailto:wdai@xxxxxxxxxxxx]
Sent: Thursday, May 03, 2001 8:28 PM
To: stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] More comments on preemption
Kanaiya,
There has been a lot of discussion regarding preemption in
the "Cut through Definition ?" email thread a while ago.
Let me just recap what I proposed in that thread for your
reference (with minor corrections and additions). It may contradict
the proposed GFP packet encapsulation requirement, but I hope
it is enough to correct your misconception about preemption.
1. There are 3 MAC-level classes of traffic (H, M, L). H and M traffic
   insertion is subject to self-policing according to their respective
   provisioned rates, while L traffic insertion is subject to the
   "Fairness" ring access algorithm only.
2. Preemption is allowed only for H traffic to preempt M or L traffic.
3. Each M and L packet transfer has an "IDLE/Escape" word inserted every
   256 bytes (for the sake of alignment/padding concerns) as the
   preemptive insertion point.
4. Preemptive insertion is allowed only at a preemptive insertion point
   of ongoing M or L traffic.
5. Preempted "leftover" traffic will be scheduled for transfer right
   after the H traffic is transferred, regardless of class, and it could
   be subject to further preemption when new H traffic arrives.
6. M and L traffic are allowed to do store&forward (packet-wise)
   transit on the ring (to reduce the complexity of the reassembly task
   at the final receiver), while H traffic is allowed to do both
   cut-through and store&forward transit on the ring.
7. Jumbo frames are not supported for the H class.
All conditions need to apply at the same time.
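For what it's worth, here is a minimal transmit-side sketch of points 3-5; the escape word value, the 256-byte spacing constant, and the function name are all placeholders of mine, not part of the proposal.

    # Illustrative sketch of points 3-5: an M/L frame is carved into 256-byte
    # chunks separated by an IDLE/Escape marker, and any pending H frame is
    # inserted whole at the next such marker; the leftover then resumes.
    ESC = b"\x1b\x1b"   # stand-in for the IDLE/Escape word
    CHUNK = 256         # spacing of preemptive insertion points

    def transmit(ml_frame: bytes, h_pending: list) -> list:
        """Return the sequence of units put on the wire for one M/L frame."""
        wire, offset = [], 0
        while offset < len(ml_frame):
            wire.append(("ML", ml_frame[offset:offset + CHUNK]))
            offset += CHUNK
            if offset < len(ml_frame):
                wire.append(("ESC", ESC))        # insertion point (point 3)
                while h_pending:                 # preempt only here (point 4)
                    wire.append(("H", h_pending.pop(0)))
                # the loop then resumes the preempted leftover (point 5)
        return wire

    # A 600-byte M frame preempted once by a 100-byte H frame:
    units = transmit(b"M" * 600, [b"H" * 100])
    print([(kind, len(data)) for kind, data in units])
    # [('ML', 256), ('ESC', 2), ('H', 100), ('ML', 256), ('ESC', 2), ('ML', 88)]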
Regards
William Dai
----- Original Message -----
From: "Kanaiya Vasani" <kanaiya@xxxxxxxxxxxxxx>
To: <stds-802-17@xxxxxxxx>
Sent: Thursday, May 03, 2001 11:07 AM
Subject: RE: [RPRWG] More comments on preemption
>
> William,
>
> Maybe you can further elaborate on this. What happens to the pre-empted
> packet (frame)? How do we deal with the portion of the packet (frame) that
> is already transmitted?
>
> Thanks,
>
> - Kanaiya
>
> -----Original Message-----
> From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> Sent: Tuesday, May 01, 2001 2:34 PM
> To: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] More comments on preemption
>
>
>
> Preemption does NOT drop packets.
>
> William Dai (minority member)
>
>
> ----- Original Message -----
> From: "Kanaiya Vasani" <kanaiya@xxxxxxxxxxxxxx>
> To: <stds-802-17@xxxxxxxx>
> Sent: Tuesday, May 01, 2001 1:40 PM
> Subject: RE: [RPRWG] More comments on preemption
>
>
> >
> > There has been some good discussion around the subject of preemption.
> > Looks like a majority of the active members on the reflector would prefer
> > to leave it out.
> >
> > I too believe that there shouldn't be any preemption within the MAC for
> > the following reasons:
> >
> > 1. The RPR MAC shall be defined with a set of transmission performance
> > specifications - worst case packet delay, packet jitter tolerance, packet
> > loss, etc. - similar to other transmission and transport technologies. In
> > this context, the MAC packet loss shall be zero under normal conditions.
> > Pre-emption results in dropping of frames, and therefore should not be a
> > function of the MAC.
> >
> > 2. Packet loss is also an important component of a service level
> > agreement. Service providers obviously want packet loss to be as close
> > to zero as possible, and the MAC must do its part to help the overall
> > system achieve this objective. Dropping packets or causing CRC errors to
> > support pre-emption is not desirable.
> >
> > Regards,
> >
> > - Kanaiya
> >
> > -----Original Message-----
> > From: Leon Bruckman [mailto:leonb@xxxxxxxxxxxxx]
> > Sent: Sunday, April 22, 2001 10:15 AM
> > To: 'William Dai'; stds-802-17@xxxxxxxx
> > Subject: RE: [RPRWG] More comments on preemption
> >
> >
> >
> > William,
> > You are right that the additional delay variation added by each
> > additional node becomes lower as the number of nodes already taken into
> > consideration increases. But the maximum delay variation will not decrease
> > as the number of nodes increases.
> > So the simulation shows the limit to the delay variation, under the noted
> > assumptions.
> > You already corrected your second observation, so I understand it is OK.
> > Leon
> >
> > -----Original Message-----
> > From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > Sent: Thursday, April 19, 2001 9:18 PM
> > To: stds-802-17@xxxxxxxx
> > Subject: Re: [RPRWG] More comments on preemption
> >
> >
> > Leon,
> >
> > Simulation may or may not catch the worst case situation. There are 128
> > nodes in your simulation model; the sheer number of nodes makes it look
> > like the "toughest" case you can get. While I believe it is good to
> > evaluate the delay, it makes the jitter evaluation more difficult. Why?
> > Because the probability of getting the minimum delay (a packet passing
> > through 127 nodes without being blocked by jumbo frame insertion) and the
> > probability of getting the maximum delay (a packet passing through 127
> > nodes and being blocked by jumbo frame insertion at every node) diminish
> > quickly as the number of nodes increases.
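As a back-of-the-envelope check of that argument (the per-hop blocking probability below is my own assumption, not a number taken from the simulation):

    # If each of the 127 transit hops independently blocks the packet behind a
    # jumbo insertion with probability p, the two extremes are vanishingly rare.
    p, hops = 0.5, 127                 # p is an illustrative assumption
    p_min_delay = (1 - p) ** hops      # never blocked at any hop
    p_max_delay = p ** hops            # blocked at every hop
    print(p_min_delay, p_max_delay)    # both about 5.9e-39, so no simulation
                                       # of realistic length will observe them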
> >
> > Secondly, assume we're comparing a 100 Mbps traffic flow going through a
> > 1G ring vs. a 10G ring with the same number of nodes and the same traffic
> > generation models, AND on the other end of the anti-jitter buffer, traffic
> > is extracted at 100 Mbps for the same flow. In theory, the size of the
> > anti-jitter buffer and the delay caused by it should be the SAME. It
> > should not be a surprise, because the 10G ring is only 10 times wider
> > than the 1G ring, not 10 times faster for the 100 Mbps traffic flow.
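Reading that claim mechanically (illustrative numbers of mine, not from the email): the anti-jitter buffer is drained at the flow rate, so for a given delay variation its size is set by the 100 Mbps flow rate rather than by the ring rate.

    # Buffer needed to absorb a given delay variation when the reader drains
    # the buffer at the flow rate (100 Mb/s), independent of ring speed.
    FLOW_RATE_BPS = 100e6

    def buffer_bytes(delay_variation_s: float) -> float:
        return delay_variation_s * FLOW_RATE_BPS / 8

    # Under the stated premise that both rings expose the flow to the same
    # delay variation (here 1 ms, an arbitrary example), the buffer is the
    # same on either ring:
    print(buffer_bytes(1e-3))   # 12500.0 bytes, 1G or 10G ring alike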
> >
> > I'm not a simulation believer (although I used to be in that field), but
> > I do respect those people who are doing it. It is just a tool used by
> > PEOPLE.
> >
> >
> > Best regards
> >
> > William Dai
> >