
Re: [RPRWG] More comments on preemption




I can't help but jump into the "should vs. should not" debate....

I agree that the delay/jitter may be negligible across a single RPR
ring. But what about a ring bridged into another ring, bridged into
yet another, and so on? If we can do a better job with the basic
element, why not?

You can argue that there are other elements that will affect the
overall end-to-end performance. Well, at least the RPR ring should not
be the weak point that makes it worse.
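
For a rough sense of how the concern compounds across bridged rings,
here is a back-of-the-envelope sketch in Python. The 12 usec per-node
blocking figure reuses the 1500-bytes-at-1G number from later in this
thread; the node and ring counts are purely illustrative assumptions.

    # Back-of-the-envelope: if each transit node can add up to one
    # maximum-length frame of blocking jitter, the worst-case budget grows
    # with every node and every bridged ring crossed. The 16-node count and
    # the ring counts below are illustrative assumptions, not thread data.

    PER_NODE_JITTER_US = 12.0  # one 1500-byte frame at 1G, per the thread

    def worst_case_jitter_us(nodes_per_ring, rings):
        return PER_NODE_JITTER_US * nodes_per_ring * rings

    print(worst_case_jitter_us(16, 1))  # single ring: 192.0 us
    print(worst_case_jitter_us(16, 3))  # three bridged rings: 576.0 us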

Regards,

William Dai


----- Original Message -----
From: "Karighattam, Vasan" <vasan.karighattam@xxxxxxxxx>
To: "'Pankaj K Jha'" <pkj@xxxxxxxxxxx>; "William Dai" <wdai@xxxxxxxxxxxx>
Cc: <stds-802-17@xxxxxxxx>
Sent: Friday, May 04, 2001 6:27 PM
Subject: RE: [RPRWG] More comments on preemption


> William:
>
> I have to agree with Pankaj here.  It takes very little time to send an
> entire packet and it is a clean and simple design rather than juggling
> with fragments.  Let us take the slower 1G link.  The high priority
> packet has to wait for 12 usec for a 1500 byte packet to be transmitted.
> What this does is add some jitter to the high priority packet.  The
> jitter has to be simply compensated by a resynchronization process at
> the end station.
>
> Consider the various other problems that these 'high priority' packets
> (real time voice or video that you may be concerned with) are already
> facing:
>   - packets are encountering various queueing delays at each link and
>     therefore do not arrive at regular intervals. (For instance, if
>     there were 10 high priority packets waiting on this link, the last
>     packet is already waiting for 120 usec (assuming 1500 bytes each) -
>     and this is an overly simplistic example, the queueing times are
>     actually higher)
>   - some packets may be lost due to transmission errors
>
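
A quick arithmetic check of the 12 usec and 120 usec figures quoted
above (a sketch in Python; the 1500-byte frame size and the line rates
come from the thread, the helper name is just illustrative):

    # Serialization delay: time to clock one frame onto the link.
    def serialization_delay_us(frame_bytes, line_rate_bps):
        return frame_bytes * 8 / line_rate_bps * 1e6

    print(serialization_delay_us(1500, 1e9))       # ~12 us for one 1500-byte frame at 1G
    print(10 * serialization_delay_us(1500, 1e9))  # ~120 us behind 10 such frames
    print(serialization_delay_us(1500, 10e9))      # ~1.2 us at 10G, for comparison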
> Packet preemption is not going to help much to reduce the end-to-end
> jitter.
>
> The better solution (than packet preemption), and what is definitely
> happening, is for real-time applications to become "network conscious".
> An upper layer protocol (similar to RTP) could enable them to discover
> the jitter, compensate for it, and even adapt their behavior.
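
To make the "network conscious" suggestion concrete, below is a minimal
sketch of an RTP-style interarrival jitter estimator, the kind of
estimate a receiver can use to size its playout (de-jitter) buffer. The
sample values and variable names are illustrative only.

    # RTP-style running jitter estimate: smooth the change in each packet's
    # relative transit time with a gain of 1/16.
    def update_jitter(jitter, prev_transit, transit):
        d = abs(transit - prev_transit)      # change in relative transit time
        return jitter + (d - jitter) / 16.0  # exponential smoothing

    jitter, prev_transit = 0.0, None
    samples = [(0.0, 0.0), (20.1, 20.0), (40.5, 40.0), (60.2, 60.0)]  # (arrival, timestamp)
    for arrival, timestamp in samples:
        transit = arrival - timestamp        # relative transit time of this packet
        if prev_transit is not None:
            jitter = update_jitter(jitter, prev_transit, transit)
        prev_transit = transit
    print(jitter)  # the estimate the receiver's playout buffer can track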
>
> -Vasan
>
> -----Original Message-----
> From: Pankaj K Jha [mailto:pkj@xxxxxxxxxxx]
> Sent: Friday, May 04, 2001 4:24 PM
> To: William Dai
> Cc: stds-802-17@xxxxxxxx
> Subject: Re: [RPRWG] More comments on preemption
>
>
>
> William:
>
> Maybe I'm missing something here, please clarify:
>
> - At 1G/2.5G/10G, it takes just a few microseconds to send an entire
>   packet (even at the max length).
>
> - If a high-priority packet needs to go, it needs to go; and the
>   low-priority packet will simply be sent to the system for further
>   scheduling. Even if a low-priority packet is currently being
>   transmitted, we can let the low-priority transmission complete, and
>   send the high-priority packet following completion of this
>   transmission.
>
> - We must keep in mind that it is the *application* that needs strict
>   timing. Unless we are providing transport for SONET/SDH frames or
>   high-precision TDM services over RPR, I can't think of applications
>   that require this precision. Certainly it cannot be G.729 VoIP stuff,
>   because RPR delay is way within its limits. Maybe I don't know of
>   applications that need this. My contention is that at the link speeds
>   we are talking about, packet transmission times come well within
>   limits to satisfy any application (other than SONET/SDH and pure TDM -
>   and I think these are not part of the RPR charter anyway; someone
>   needing this service will not be counting on RPR to deliver it).
>
> - You rightly said that proper framing needs to be defined before
>   pre-emption can be meaningful. Multiple fragments from multiple
>   packets may be sitting at a node, needing reassembly. Maybe we need
>   sequence #s and require a system to have extensive buffers and
>   reassembly logic (I can't imagine these being part of the MAC,
>   because we are looking at many fragments of large packets). These
>   fragments can't be delivered to the next node without a proper MAC
>   header (copied from the first fragment).
>
> - It is much simpler for a node to negotiate MTU size using its L3
>   utilities (on networks where timings are that critical) than go
>   through this trouble, in my view.
>
> What do you think?
>
> -Pankaj
>
> William Dai wrote:
>
> > Nader,
> >
> > Please see comments below.
> >
> > ----- Original Message -----
> > From: "Nader Vijeh" <nader@xxxxxxxxxxxxxx>
> > To: <stds-802-17@xxxxxxxx>
> > Sent: Friday, May 04, 2001 10:18 AM
> > Subject: RE: [RPRWG] More comments on preemption
> >
> > >
> > > This is an interesting technical response, but there are a number of
> > > issues:
> > >
> > > 1. A MAC in 802 language controls access to the media but does not
> > > perform packet switching. In other words, MAC makes a decision on
> > > adding a packet or removing a packet to/from a shared medium. Having
> > > multiple queues in the transit path is outside the MAC framework.
> > >
> >
> > Yes it is a MAC, but the "media" we're talking about here is anything
> > but a physical entity; you can call it "logical", or you can call it
> > "virtual". The physical media starts from one node and ends at the
> > next node. I'm not aware of any physical media that allows
> > simultaneous (same time, same wavelength) packet transfer, but the RPR
> > ring does. Somebody may prefer to call it "RAC", Ring Access Control,
> > but I still prefer to call it MAC since we have more than enough terms
> > to deal with already.
> >
> > > 2. Losing packets in transit (unless due to a fault) is also
> > > unacceptable. Any mechanism that loses a good packet in transit would
> > > therefore violate the packet loss requirements associated with the
> > > media.
> > >
> >
> > I agree, no packet loss (unless due to faults) in the "media".
> >
> > > 3. Reordering packets is also not good. The MAC layer should not
> > > re-order packets from a source to a destination (no matter what
> > > priority they belong to). This is the work of 802.1 and within the
> > > scope of a bridge.
> > >
> >
> > Most Ethernet switches on the market more or less do this kind of
> > reordering already; does that mean they all violate the 802.1D spec?
> > The MAC-level class separation on the RPR ring is just like providing
> > a carpool lane on the highway.
> >
> > > 4. Suggesting segmentation and re-assembly of large packets raises
> > > the question of why we don't do that for all packets, in which case
> > > no pre-emption is required. Not to mention the complexity, and why
> > > not just use ATM?
> > >
> >
> > Please refer to #5 and #6 in my proposal; I hope that will relieve
> > your worry about ATM complexity. Of course, the packet encapsulation
> > issue needs to be resolved before the preemption idea can fly.
> >
> > Regards,
> >
> > William Dai
> >
> > > -----Original Message-----
> > > From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > > Sent: Thursday, May 03, 2001 8:28 PM
> > > To: stds-802-17@xxxxxxxx
> > > Subject: Re: [RPRWG] More comments on preemption
> > >
> > >
> > >
> > > Kanaiya,
> > >
> > > There has been a lot of discussion regarding preemption in the
> > > "Cut through Definition ?" email thread a while ago.
> > >
> > > Let me just recap what I proposed in that thread for your
> > > reference (with minor corrections and additions). It may contradict
> > > the proposed GFP packet encapsulation requirement, but I hope it is
> > > enough to correct your misconception about preemption.
> > >
> > > 1. There are 3 MAC level classes of traffic (H, M, L). H and M
> > >     traffic insertion is subject to self policing according to their
> > >     respective provisioned rates, while L traffic insertion is
> > >     subject to the "Fairness" ring access algorithm only.
> > > 2. Preemption is allowed only for H traffic to preempt M or L traffic.
> > > 3. Each M and L packet transfer will have an "IDLE/Escape" word
> > >     inserted every 256 bytes (for the sake of alignment/padding
> > >     concerns) as the preemptive insertion point.
> > > 4. Preemptive insertion is allowed only at a preemptive insertion
> > >     point of the ongoing M or L traffic.
> > > 5. Preempted "leftover" traffic will be scheduled to transfer right
> > >     after the H traffic is transferred, regardless of class, and it
> > >     could be subject to further preemption when new H traffic arrives.
> > > 6. M and L traffic are allowed to do store & forward (packet-wise)
> > >     transit on the ring (to reduce the complexity of the reassembly
> > >     task at the final receiver), while H traffic is allowed to do both
> > >     cut-through and store & forward transit on the ring.
> > > 7. Jumbo frames are not supported for the H class.
> > >
> > > All conditions need to apply at the same time.
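
To illustrate how points 2-5 of the proposal above would play out on
the wire, here is a small Python sketch. Only the 256-byte insertion
interval and the ordering rules come from the proposal; the function
name, the chunking model, and the example numbers are illustrative.

    # Sketch of the proposed preemption rules: an H frame may cut in only at
    # an "IDLE/Escape" insertion point placed every 256 bytes of an ongoing
    # M/L transfer, and the preempted leftover resumes right after the H
    # frame. Names and the chunk model are illustrative.

    INSERTION_INTERVAL = 256  # bytes between preemptive insertion points (point 3)

    def schedule(ml_frame_bytes, h_arrival_offset, h_frame_bytes):
        """On-wire sequence when an H frame arrives after h_arrival_offset
        bytes of an M/L frame have already been sent."""
        # Preemption happens at the next insertion point at or after the H
        # frame's arrival (points 3 and 4), or not at all if the M/L frame
        # finishes first.
        cut = min(ml_frame_bytes,
                  -(-h_arrival_offset // INSERTION_INTERVAL) * INSERTION_INTERVAL)
        wire = [("ML", cut), ("H", h_frame_bytes)]              # H preempts (point 2)
        if ml_frame_bytes - cut:
            wire.append(("ML-leftover", ml_frame_bytes - cut))  # resumes after H (point 5)
        return wire

    # H frame arrives after 300 bytes of a 1500-byte M/L frame have gone out:
    print(schedule(1500, 300, 200))
    # -> [('ML', 512), ('H', 200), ('ML-leftover', 988)]

Under these rules the extra wait an H frame can see at a node is
bounded by one insertion interval, roughly 2 usec of serialization for
256 bytes at 1G, rather than a full maximum-length or jumbo frame.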
> > >
> > >
> > > Regards
> > >
> > > William Dai
> > >
> > >
> > > ----- Original Message -----
> > > From: "Kanaiya Vasani" <kanaiya@xxxxxxxxxxxxxx>
> > > To: <stds-802-17@xxxxxxxx>
> > > Sent: Thursday, May 03, 2001 11:07 AM
> > > Subject: RE: [RPRWG] More comments on preemption
> > >
> > >
> > > >
> > > > William,
> > > >
> > > > Maybe you can further elaborate on this. What happens to the
> > > > pre-empted packet (frame)? How do we deal with the portion of the
> > > > packet (frame) that is already transmitted?
> > > >
> > > > Thanks,
> > > >
> > > > - Kanaiya
> > > >
> > > > -----Original Message-----
> > > > From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > > > Sent: Tuesday, May 01, 2001 2:34 PM
> > > > To: stds-802-17@xxxxxxxx
> > > > Subject: Re: [RPRWG] More comments on preemption
> > > >
> > > >
> > > >
> > > > Preemption does NOT drop packets.
> > > >
> > > > William Dai (minority member)
> > > >
> > > >
> > > > ----- Original Message -----
> > > > From: "Kanaiya Vasani" <kanaiya@xxxxxxxxxxxxxx>
> > > > To: <stds-802-17@xxxxxxxx>
> > > > Sent: Tuesday, May 01, 2001 1:40 PM
> > > > Subject: RE: [RPRWG] More comments on preemption
> > > >
> > > >
> > > > >
> > > > > There has been some good discussion around the subject of
> > > > > preemption. Looks like a majority of the active members on the
> > > > > reflector would prefer to leave it out.
> > > > >
> > > > > I too believe that there shouldn't be any preemption within the
> > > > > MAC for the following reasons:
> > > > >
> > > > > 1. The RPR MAC shall be defined with a set of transmission
> > > > > performance specifications - worst case packet delay, packet
> > > > > jitter tolerance, packet loss, etc. - similar to other
> > > > > transmission and transport technologies. In this context, the MAC
> > > > > packet loss shall be zero under normal conditions. Pre-emption
> > > > > results in dropping of frames, and therefore should not be a
> > > > > function of the MAC.
> > > > >
> > > > > 2. Packet loss is also an important component of a service level
> > > > > agreement. Service providers obviously want packet loss to be as
> > > > > close to zero as possible, and the MAC must do its part to help
> > > > > the overall system achieve this objective. Dropping packets or
> > > > > causing CRC errors to support pre-emption is not desirable.
> > > > >
> > > > > Regards,
> > > > >
> > > > > - Kanaiya
> > > > >
> > > > > -----Original Message-----
> > > > > From: Leon Bruckman [mailto:leonb@xxxxxxxxxxxxx]
> > > > > Sent: Sunday, April 22, 2001 10:15 AM
> > > > > To: 'William Dai'; stds-802-17@xxxxxxxx
> > > > > Subject: RE: [RPRWG] More comments on preemption
> > > > >
> > > > >
> > > > >
> > > > > William,
> > > > > You are right that the additional delay variation added by each
> > > > > additional node becomes lower as the number of nodes already
> > > > > taken into consideration increases. But the maximum delay
> > > > > variation will not decrease as the number of nodes increases. So
> > > > > the simulation shows the limit to the delay variation, under the
> > > > > noted assumptions.
> > > > > You already corrected your second observation, so I understand it
> > > > > is OK.
> > > > > Leon
> > > > >
> > > > > -----Original Message-----
> > > > > From: William Dai [mailto:wdai@xxxxxxxxxxxx]
> > > > > Sent: Thursday, April 19, 2001 9:18 PM
> > > > > To: stds-802-17@xxxxxxxx
> > > > > Subject: Re: [RPRWG] More comments on preemption
> > > > >
> > > > >
> > > > >
> > > > > Leon,
> > > > >
> > > > > Simulation may or may not catch the worst case situation. There
> > > > > are 128 nodes in your simulation model, and that sheer number of
> > > > > nodes makes it look like the "toughest" case you can get. While I
> > > > > believe it is good to evaluate the delay, it makes the jitter
> > > > > evaluation more difficult. Why? Because the probability of getting
> > > > > the minimum delay (a packet passing through 127 nodes without
> > > > > being blocked by a Jumbo frame insertion) and the probability of
> > > > > getting the maximum delay (a packet passing through 127 nodes and
> > > > > being blocked by a Jumbo frame insertion at every node) both
> > > > > diminish quickly as the number of nodes increases.
> > > > >
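
A rough illustration of the point about the extreme cases becoming
vanishingly rare (a sketch; the per-hop blocking probability and the
independence assumption are mine, purely for illustration):

    # If each of N transit hops independently blocks a packet behind a jumbo
    # frame with probability p, the chance of the extreme cases (blocked at
    # every hop, or at none) collapses as N grows.
    def extreme_case_probs(p, hops):
        return p ** hops, (1 - p) ** hops   # (always blocked, never blocked)

    for hops in (8, 32, 127):
        print(hops, extreme_case_probs(0.5, hops))
    # Over 127 hops at p = 0.5, each extreme has probability 0.5**127, so a
    # finite simulation run will essentially never observe the true minimum
    # or maximum delay.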
> > > > > Secondly, assume we're comparing a 100Mbps traffic flow going
> > > > > through a 1G ring vs. a 10G ring with the same number of nodes
> > > > > and the same traffic generation models, AND on the other end of
> > > > > the anti-jitter buffer, traffic will be extracted out at 100Mbps
> > > > > for the same flow. In theory, the size of the anti-jitter buffer
> > > > > and the delay caused by the anti-jitter buffer should be the SAME.
> > > > > It should not be a surprise because the 10G ring is only 10 times
> > > > > wider than the 1G ring, not 10 times faster for the 100Mbps
> > > > > traffic flow.
> > > > >
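
For reference, the raw numbers behind this comparison (a sketch; the
100Mbps flow rate and the 1500-byte size come from the thread, while
the 9216-byte jumbo size is an assumption of mine):

    def tx_time_us(frame_bytes, rate_bps):
        return frame_bytes * 8 / rate_bps * 1e6

    # Spacing of the flow's own 1500-byte packets at 100Mbps does not depend
    # on the ring speed:
    print(tx_time_us(1500, 100e6))   # ~120 us between the flow's packets
    # Worst-case blocking by one jumbo frame already on the wire, per node:
    print(tx_time_us(9216, 1e9))     # ~73.7 us on a 1G ring
    print(tx_time_us(9216, 10e9))    # ~7.4 us on a 10G ring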
> > > > > I'm not a simulation believer (although I used to be in that
> > > > > field), but I do respect those people who are doing that. It is
> > > > > just a tool used by PEOPLE.
> > > > >
> > > > >
> > > > > Best regards
> > > > >
> > > > > William Dai
> > > > >
> > > >
> > > >
> > >
> > >
>
>
>