RE: [RPRWG] More comments on preemption
There has been some good discussion around the subject of preemption. Looks
like a majority of the active members on the reflector would prefer to leave
it out.
I too believe that there shouldn't be any preemption within the MAC for the
following reasons:
1. The RPR MAC shall be defined with a set of transmission performance
specifications - worst-case packet delay, packet jitter tolerance, packet
loss, etc. - similar to other transmission and transport technologies. In
this context, the MAC packet loss shall be zero under normal conditions.
Preemption results in the dropping of frames, and therefore should not be
a function of the MAC.
2. Packet loss is also an important component of a service level agreement.
Service providers obviously want packet loss to be as close to zero as
possible, and the MAC must do its part to help the overall system achieve
this objective. Dropping packets or causing CRC errors to support
preemption is not desirable.
Regards,
- Kanaiya
-----Original Message-----
From: Leon Bruckman [mailto:leonb@xxxxxxxxxxxxx]
Sent: Sunday, April 22, 2001 10:15 AM
To: 'William Dai'; stds-802-17@xxxxxxxx
Subject: RE: [RPRWG] More comments on preemption
William,
You are right that the delay variation added by each additional node
becomes smaller as the number of nodes already taken into account
increases. But the maximum delay variation will not decrease as the
number of nodes increases.
So the simulation shows the limit to the delay variation, under the noted
assumptions.
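To make that concrete, here is a small back-of-the-envelope Monte Carlo
(not the simulation I referred to; the per-node blocking probability and
jumbo insertion time below are numbers made up purely for illustration).
The observed peak-to-peak delay variation keeps growing with the node
count, but the amount each extra node adds keeps shrinking, while the
theoretical maximum grows linearly:

    import random

    # Assumed figures, for illustration only:
    T_JUMBO = 7.2e-6   # seconds to insert one ~9000-byte jumbo frame at 10 Gbps
    P_BLOCK = 0.3      # assumed chance of hitting a jumbo insertion at each node
    TRIALS  = 20000

    def observed_spread(hops):
        # Peak-to-peak blocking-delay variation seen over TRIALS random packets.
        samples = [sum(T_JUMBO for _ in range(hops) if random.random() < P_BLOCK)
                   for _ in range(TRIALS)]
        return max(samples) - min(samples)

    prev_hops, prev_spread = 0, 0.0
    for hops in (16, 32, 64, 127):
        spread = observed_spread(hops)
        per_node = (spread - prev_spread) / (hops - prev_hops)
        print("hops=%3d  theoretical max=%6.1f us  observed spread=%6.1f us  "
              "added per extra node=%4.2f us"
              % (hops, hops * T_JUMBO * 1e6, spread * 1e6, per_node * 1e6))
        prev_hops, prev_spread = hops, spread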
You already corrected your second observation, so I understand it is OK.
Leon
-----Original Message-----
From: William Dai [mailto:wdai@xxxxxxxxxxxx]
Sent: Thursday, April 19, 2001 9:18 PM
To: stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] More comments on preemption
Leon,
Simulation may or may not catch the worst-case situation. There are 128
nodes in your simulation model, and it is the sheer number of nodes that
makes it look like the "toughest" case you can get. While I believe it is
good for evaluating the delay, it makes the jitter evaluation more
difficult. Why? Because the probability of getting the minimum delay (a
packet passing through 127 nodes without being blocked by a jumbo frame
insertion at any node) and the probability of getting the maximum delay
(a packet passing through 127 nodes and being blocked by a jumbo frame
insertion at every node) both diminish quickly as the number of nodes
increases.
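To put rough numbers on it (assuming, purely for illustration, that each
node independently blocks a passing packet behind a jumbo frame insertion
with probability p - the 0.3 below is an arbitrary value, not a measured
one):

    # Assumed per-node blocking probability; 0.3 is an illustrative value only.
    p, hops = 0.3, 127

    p_min_delay = (1 - p) ** hops   # never blocked on any of the 127 hops
    p_max_delay = p ** hops         # blocked by a jumbo insertion at every hop

    print("P(minimum delay) = %.1e" % p_min_delay)   # ~2e-20 for p = 0.3
    print("P(maximum delay) = %.1e" % p_max_delay)   # ~4e-67 for p = 0.3

A simulation with any realistic number of trials will essentially never
observe either extreme, which is why the sheer number of nodes does not
by itself make this the toughest case for jitter.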
Secondly, assume we are comparing a 100Mbps traffic flow going through a
1G ring vs. a 10G ring with the same number of nodes and the same traffic
generation models, AND that on the other end of the anti-jitter buffer,
traffic is extracted at 100Mbps for the same flow. In theory, the size of
the anti-jitter buffer and the delay caused by the anti-jitter buffer
should be the SAME. This should not be a surprise, because the 10G ring
is only 10 times wider than the 1G ring, not 10 times faster for the
100Mbps traffic flow.
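Just to spell the arithmetic out (the 1 ms delay variation below is an
invented figure, and whether the variation really is the same on both
rings is exactly the premise above): once the worst-case delay variation
across the ring is fixed, the anti-jitter buffer is sized against the
100Mbps extraction rate, and the delay it adds is that size divided by
the same 100Mbps, so the ring rate itself never enters the calculation.

    FLOW_RATE = 100e6   # bits/s: rate at which the flow drains from the buffer
    DELAY_VAR = 1e-3    # s: assumed worst-case delay variation across the ring
                        #    (an invented figure, taken as the same for both
                        #     rings per the premise above)

    buffer_bits  = DELAY_VAR * FLOW_RATE    # data the buffer must hold to ride out DELAY_VAR
    buffer_delay = buffer_bits / FLOW_RATE  # delay the full buffer adds = DELAY_VAR again

    print("buffer size  = %.0f kbits" % (buffer_bits / 1e3))   # 100 kbits
    print("buffer delay = %.1f ms" % (buffer_delay * 1e3))     # 1.0 ms
    # Note: the ring rate (1G or 10G) never appears above; it matters only
    # insofar as it changes DELAY_VAR itself.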
I'm not a simulation believer (although I used to be in that field), but
I do respect the people who do that work. It is just a tool used by
PEOPLE.
Best regards
William Dai