Although I'm not a big fan of preemptive transfer, I think this topic deserves detailed
discussion before we rush to any conclusion. What changed my mind is the discussion of
Jumbo Frame support on RPR: not long ago it was 2KB, now it is 9KB; what about the
ultimate 64KB in the future?
That said, I'm proposing neither an ATM cell-like structure nor a slotted ring structure,
and since the RPR MAC is L1 agnostic, physical signalling tricks cannot be used either.
Let me give one example of a preemptive transfer definition here, and let's discuss what
is so complicated (or simple) about it. A rough sketch of the transmit logic follows the list.
1. There are 3 MAC classes of traffic (H, M, L).
2. Preemption is allowed only for "Transit" H traffic to preempt "Transmit" M or L traffic.
3. A preempted segment is not allowed to be preempted again.
4. Preempted "Transmit" traffic will be scheduled for transfer right after the "Transit" H traffic, independent of class.
5. Each packet transfer will have an "IDLE/Escape" word inserted every 256 or 512 bytes (for the sake of alignment/padding concerns) as the preemption insertion point.
6. Jumbo frames are not supported for the H class.
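To make rules 2-5 concrete, here is a rough sketch of how the transmit side might look. The escape marker, the chunking helper, and all names are illustrative assumptions on my part, not a proposed frame format or MAC interface.

# Rough sketch of rules 2-5 above (Python). The ESC marker, the 256-byte
# interval, and all names here are illustrative, not a proposed frame format.
ESCAPE_INTERVAL = 256   # bytes between IDLE/Escape insertion points (rule 5)

def escape_segments(frame):
    """Split an M/L frame at the preemption insertion points."""
    return [frame[i:i + ESCAPE_INTERVAL]
            for i in range(0, len(frame), ESCAPE_INTERVAL)]

def transmit_ml_frame(frame, transit_h_queue, send):
    """Send an M or L "Transmit" frame, letting "Transit" H traffic preempt
    it at an escape point (rule 2). The preempted remainder resumes right
    after the H traffic (rule 4) and cannot be preempted again (rule 3)."""
    segments = escape_segments(frame)
    preempted = False
    for i, segment in enumerate(segments):
        send(segment)
        at_escape_point = i < len(segments) - 1
        if at_escape_point and transit_h_queue and not preempted:
            send(b"IDLE/ESC")                    # mark the preemption point
            while transit_h_queue:
                send(transit_h_queue.pop(0))     # drain Transit H frames
            preempted = True                     # rule 3: at most one preemption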
By the way, SONET clock distribution is not needed. After all, RPR is a packet-based network.
Best Regards
William Dai
----- Original Message -----
Sent: Thursday, April 12, 2001 7:23 AM
Subject: RE: [RPRWG] Cut through definition?
Exactly my point.
"we should keep it simple and not Segment packets.
" i.e. Do not preempt.
Regards,
Harry
I am not clear how the proposed preemption method works.
Does a high priority transit packet preempt a low priority add packet?
Can a high priority add packet also preempt a low priority transit packet?
What happens if a previously preempted add packet contends with a same priority packet that was also preempted in an upstream node?
What happens if a previously preempted add packet contends with a same priority previously preempted transit packet that follows a high priority preempting transit packet, with a clock cycle gap in between due to clock mismatch?
Do we require a SONET clock to be distributed on the ring?
Is RPR MAC layer one agnostic?
Thanks.
Necdet
Harry Peng wrote:
Complexity? What complexity?
In the tandem path, if a high priority packet can preempt a low priority packet at an arbitrary boundary, then the preemption logic will have to deal with a tandem packet that is already preempted.
This means the fastest preemption response time is the internal word size, and the preempted packet will have to pad to word boundaries to make life easier.
Furthermore, the tandem receiver will have to respond within one clock cycle, as that is the atomic size. What is the word size for 10G: 64 bits, 128 bits? What about for 40G or higher?
Unless you are willing to have cells. Then why not use ATM?
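Just to put rough numbers on the granularity point above (illustrative arithmetic only; the 10 Gb/s line rate and the 64- and 128-bit word sizes are the cases in question):

# Illustrative only: preemption granularity vs. internal word size at 10 Gb/s.
LINE_RATE_BPS = 10e9

for word_bits in (64, 128):
    word_time_ns = word_bits / LINE_RATE_BPS * 1e9   # fastest preemption response
    max_pad_bytes = word_bits // 8 - 1               # padding up to the word boundary
    print(f"{word_bits}-bit word: {word_time_ns:.1f} ns response, "
          f"up to {max_pad_bytes} pad bytes per preempted segment")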
I agree that we should keep it simple and not segment packets.
Regards,
Harry
-----Original Message-----
From: Sushil Pandhi [mailto:Sushil.Pandhi@xxxxxxxxxxxxxxx]
Sent: Wednesday, April 11, 2001 10:33 AM
To: Leon Bruckman
Cc: 'davidvja@xxxxxxxxxxx'; stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] Cut through definition?
I agree with Leon that pre-emption will increase the complexity. ATM solves this by segmenting the message into smaller cells and reassembling them, and that approach adds a lot of complexity.
If we do not have preemption, and assuming a 1522 byte frame just starts transmission before synchronous traffic can be sent, the 1522 byte frame at OC-3 rate will take about 82.6 microseconds. If we assume that the same situation arises at 32 nodes in the ring, then we have about 2.6 msec of delay because of not doing preemption.
So I doubt preemption will give us much advantage.
-Sushil
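As a quick check of the arithmetic above (illustrative only; the effective OC-3c payload rate of roughly 147.5 Mb/s is my assumption to reproduce the 82.6 us figure, since the exact SONET overhead accounting is not stated):

# Back-of-the-envelope check of the no-preemption delay figures above.
OC3_PAYLOAD_BPS = 147.5e6   # assumed effective payload rate (not stated above)
FRAME_BYTES = 1522
NODES = 32

frame_time = FRAME_BYTES * 8 / OC3_PAYLOAD_BPS   # ~82.6 us per blocking frame
worst_case = NODES * frame_time                  # ~2.6 ms across the ring

print(f"per-node blocking: {frame_time * 1e6:.1f} us")
print(f"{NODES}-node worst case: {worst_case * 1e3:.2f} ms")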
Leon Bruckman wrote:
> DVJ
> My personal view is that preempting lower priority traffic in the middle of a
> packet adds complexity that is not really needed. At 1G, the transmission time
> for a 1500 byte packet is 12 usec, so the worst case for a 256 node ring will
> be 3.1 msec of added delay because of low priority packets being transmitted
> and not preempted.
> Furthermore, the probability of the worst case is very small. We did some
> simulations with the following assumptions:
> - There is always a low priority packet being transmitted by the node
> - A high priority packet may arrive at any time during the low priority packet
>   transmission (equal probability)
> Some of the results were presented during the January interim (by Gal Mor).
>
> For a 128 node ring operating at 1G the preemption gain will still be in the
> msec range with very high probability, and this can easily be absorbed by the
> jitter buffers at the receiver.
> Leon
>
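The 1G worst case quoted above works out as follows (illustrative arithmetic only):

# Worst-case added delay without preemption, per the 1G figures above.
LINK_BPS = 1e9
FRAME_BYTES = 1500

per_node = FRAME_BYTES * 8 / LINK_BPS            # 12 us per blocking packet
for nodes in (256, 128):
    print(f"{nodes}-node ring: {nodes * per_node * 1e3:.2f} ms worst case")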
> -----Original Message-----
> From: David V. James [mailto:davidvja@xxxxxxxxxxx]
> Sent: Friday, April 06, 2001 4:27 AM
> To: Carey Kloss; Devendra Tripathi
> Cc: stds-802-17@xxxxxxxx
> Subject: RE: [RPRWG] Cut through definition?
>
> All,
>
> Relative to the discussion of cut through, et al., my perception is that a
> cut-through node has two insertion buffers, for classA (provisioned
> synchronous) and classB (provisioned asynchronous).
>
> The preferred transmit order is as follows:
> a) classA insertion buffer (always)
> b) classA transmit traffic (subject to provisioned rate)
> c) asynchronous traffic.
> The classA insertion buffer only needs to be the size of the maximum packet
> sent by this node, plus (perhaps) some extra symbols to deal with hardware
> decoding latencies.
>
> The classB insertion buffer is to deal with the accumulation of asynchronous
> packets that occurs when (worst case) full asynchronous traffic is coming
> in/out and rate-limited synchronous traffic is being transmitted.
> The size of the classB buffer is on the order of several upstream-link
> delays times the rateOfSynchronous/rateOfLink ratio.
>
> Order of the asynchronous traffic (c) depends on the classB
> buffer-filled status, prenegotiated vs. consumed rates, and
> the size of the asynchronous backlog in the client.
>
> The asynchronous transmit buffer is a bit schizophrenic in its
> behavior. It should be in the client (not the MAC) because that
> allows packets to be reordered/inserted/deleted until just
> before transmission time. However, the amount of traffic in the
> asynchronous transmit queue may influence the MAC queue-selection
> and throttle-signal assertion properties.
>
> I personally favor allowing cut-through synchronous traffic to
> preempt asynchronous traffic, even in the middle of a packet. That
> yields the lowest possible jitter, but at some encoding complexity
> cost.
>
>
> DVJ
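To make the preferred transmit order above concrete, here is a minimal sketch of the selection logic it describes. The buffer names, the deque-based queues, and the simple credit-based rate check are illustrative assumptions, not a proposed MAC interface.

from collections import deque

# Minimal sketch of the transmit order described above:
#   a) classA insertion buffer (always first)
#   b) classA transmit traffic (subject to its provisioned rate)
#   c) asynchronous traffic (classB insertion buffer and async add traffic)
# All names and the credit-based rate check are illustrative assumptions.

class CutThroughScheduler:
    def __init__(self, class_a_credits):
        self.class_a_insertion = deque()  # buffered classA transit packets
        self.class_a_transmit = deque()   # this node's provisioned classA traffic
        self.asynchronous = deque()       # classB insertion buffer + async add traffic
        self.class_a_credits = class_a_credits

    def next_packet(self):
        if self.class_a_insertion:                            # (a) always drained first
            return self.class_a_insertion.popleft()
        if self.class_a_transmit and self.class_a_credits > 0:
            self.class_a_credits -= 1                         # (b) within provisioned rate
            return self.class_a_transmit.popleft()
        if self.asynchronous:                                 # (c) everything else
            return self.asynchronous.popleft()
        return None                                           # idle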