Additional comments on preemption, 
    cut-through and store-and-forward. 
     
    Cut-through means that the packet on the ring keeps moving unless the 
    insertion-buffer filling increases because the node is clocking a packet 
    out of its transmit buffer. Moving means here that in each word clock one 
    internal word unit is written into the insertion buffer and at the same 
    time one word unit is read out. Upon the arrival of empty words, the 
    filling of the insertion buffer decreases until it reaches zero. In front 
    of the insertion-buffer stage, a pipelined header-recognition stage also 
    permits the packet to move through, so that all header information is 
    available at the end of the header-recognition pipeline. If the packet is 
    addressed to the node, it is clocked into the receive buffer, otherwise 
    into the insertion buffer. Scheduling between the insertion and transmit 
    buffers is done at the transmit side. The implementation complexity of 
    cut-through and store-and-forward is similar, with cut-through being 
    slightly more complex.
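
    The buffer dynamics described above can be sketched as a word-clock model
    (a hypothetical illustration; the 'busy'/'empty'/'tx' word types and the
    function name are my own, not from any draft):

```python
# Minimal sketch of cut-through insertion-buffer filling, per word clock.
# While the node clocks a word of its own packet out of the transmit buffer
# ('tx'), an arriving ring word accumulates in the insertion buffer; an
# arriving empty word later drains one buffered word out again.

def simulate_insertion_buffer(ring_words):
    """ring_words: sequence of 'busy' (transit traffic), 'empty', or 'tx'
    ('tx' marks a word time spent on the node's own transmit buffer).
    Returns the insertion-buffer filling after each word clock."""
    fill = 0
    history = []
    for w in ring_words:
        if w == "tx":
            fill += 1                 # arriving ring word must wait
        elif w == "empty" and fill > 0:
            fill -= 1                 # empty word lets one buffered word out
        # 'busy' while not transmitting: one word in, one word out, unchanged
        history.append(fill)
    return history

print(simulate_insertion_buffer(["busy", "tx", "tx", "busy", "empty", "empty"]))
# → [0, 1, 2, 2, 1, 0]
```

    The filling grows only while the node inserts its own traffic and shrinks
    back to zero on empty words, exactly the behaviour described above.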
     
    Since some companies prefer store-and-forward and others cut-through, we 
    propose to allow both types because they easily interwork. The difference 
    lies in the end-to-end delay performance and in the required size of the 
    transit or insertion buffer, respectively.
     
     
    Packet preemption is not yet an established technique. Therefore, the 
    immediate reaction to my mail on the exploder was: that is too complex! 
    Or: why not use ATM in the first place? And my answer to one of the 
    comments is: of course I am talking about preemption during transmission, 
    nobody missed anything. It is not complex, and error handling has been 
    included!
     
    Without packet preemption by IP-telephony or IP-conferencing packets, an 
    all-IP world will never be able to achieve the voice conversation quality 
    that circuit switching and ATM can provide. Natural voice communication 
    requires a maximum end-to-end delay of 80-100 ms. Not more! Above that it 
    becomes more and more cumbersome. For free or low-cost private calls, 
    higher delays might be acceptable, but not in the business world. For 
    conversations over larger distances, the propagation delay already takes 
    a big part of the permitted total delay (10,000 km gives 50 ms). 
    Packetization of a payload of only 40 bytes of 64 kbit/s voice gives an 
    additional delay of 10 ms. This means that for this distance 20-40 ms is 
    left for delays in the network, including the playout buffer for delay 
    jitter. Delays in the end systems have not been taken into account, and 
    for larger communication distances the remaining margin becomes 
    proportionally smaller. IP-telephony and IP-conferencing are not yet a 
    commodity. The circuit-switched and ATM networks still do their excellent 
    job for voice, and they carry the bulk of that service. Currently, 
    network operators do not yet have to worry much about the end-to-end 
    delay issue. Everything is rather new, current customers accept the 
    inferior quality, billing is not really established, and calls are 
    currently much cheaper or free, so who cares at the moment. IP-telephony 
    and IP-conferencing are so sexy and hyped that this alone might justify 
    their usage, even when they do not work so well all the time.
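
    The delay-budget arithmetic above can be checked in a few lines (a
    sketch; reading the 10 ms packetization delay as 5 ms of packet filling
    at each end is my interpretation, not stated in the text):

```python
# Back-of-the-envelope check of the voice delay budget, using the figures
# from the text. Assumption: the 10 ms packetization delay is 5 ms to fill
# a 40-byte payload at 64 kbit/s at the sender, plus the same packet time
# again at the receiving end.

FIBER_SPEED_KM_PER_MS = 200  # ~2/3 of the speed of light in fiber

propagation_ms = 10_000 / FIBER_SPEED_KM_PER_MS   # 10,000 km -> 50 ms
fill_ms = 40 * 8 / 64                             # 40 bytes at 64 kbit/s = 5 ms
packetization_ms = 2 * fill_ms                    # both ends -> 10 ms

for budget_ms in (80, 100):
    left = budget_ms - propagation_ms - packetization_ms
    print(f"{budget_ms} ms budget -> {left:.0f} ms left for the network")
# 80 ms budget -> 20 ms left for the network
# 100 ms budget -> 40 ms left for the network
```

    This reproduces the 20-40 ms network margin mentioned above, which the
    RPR jitter must fit into together with all other nodes of the connection.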
     
    The ATM cell size was chosen so small because of voice conversations. It 
    is not at all adequate for massive data communications. Natural 
    communication between humans is low volume compared with data, but it is 
    certainly the most important form of global human interactivity. Handling 
    interactive voice adequately in packetized networks is the most difficult 
    and most sensitive form of communications. Why should one return to 
    walkie-talkie communications with commands like 'over' just because we 
    want to move towards all-IP networks? MPLS will not solve this issue 
    either.
     
    Since there is furthermore increasing pressure to use very large data 
    packets so that users can exploit the TCP protocol with much larger 
    throughputs than today, packet preemption will become unavoidable. It is 
    just a matter of time. In fact, packet preemption is already being 
    applied inside some routers. I am sure preemption will also be seen on 
    lower-speed router links soon. The first company with such a feature on 
    the market will immediately outperform all other routers in that respect. 
    IETF standardization will certainly follow up on that. It has to be said 
    that the larger the distances and the higher the link speeds, the more 
    poorly the window mechanism of TCP works with respect to throughput. 
    Therefore, larger packets will be required to keep the data explosion 
    going.
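
    The TCP remark follows from the standard window/RTT throughput bound
    (bandwidth-delay-product reasoning; the concrete numbers are illustrative
    and assume the classic 64-Kbyte maximum window without window scaling):

```python
# A single TCP connection can send at most one window per round-trip time,
# so throughput <= window / RTT regardless of the link speed. With the
# classic 64-Kbyte maximum window, long distances starve fast links.

WINDOW_BYTES = 64 * 1024  # maximum TCP window without window scaling

def max_tcp_throughput_mbit(rtt_ms):
    """Upper bound on single-connection TCP throughput in Mbit/s."""
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6

for rtt in (1, 10, 100):
    print(f"RTT {rtt} ms -> at most {max_tcp_throughput_mbit(rtt):.2f} Mbit/s")
# RTT 1 ms -> at most 524.29 Mbit/s
# RTT 10 ms -> at most 52.43 Mbit/s
# RTT 100 ms -> at most 5.24 Mbit/s
```

    At 100 ms RTT a 2.5 or 10 Gbit/s link is almost entirely idle for one
    connection, which is why larger windows, and with them larger packets,
    are being pushed.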
     
    Considering the maximum size of an IP packet of 64 Kbytes, one obtains 
    without preemption a jitter per node of

    SONET/SDH

    155 Mbit/s    - 3.495 ms per node
    622 Mbit/s    - 0.873 ms
    2.5 Gbit/s    - 0.218 ms
    10 Gbit/s     - 0.054 ms
     
    For SONET/SDH 155 Mbit/s, the per-node jitter delay is

    1 Kbytes     - 0.054 ms per node
    5 Kbytes     - 0.273 ms
    10 Kbytes    - 0.546 ms
    20 Kbytes    - 1.092 ms
    40 Kbytes    - 2.184 ms
    60 Kbytes    - 3.277 ms
     
    Multiplied by the number of passed ring nodes, these figures in fact 
    determine the playout buffer, and that is caused by the RPR alone. Not 
    included are the additional delays in the other network nodes of the 
    connection.

    For high-bit-rate rings, the figures are perhaps not so impressive. 
    However, for lower-bit-rate rings in manufacturing plants, public access, 
    campus, or in-building areas, which provide a huge market, it really adds 
    up. The more areas where the RPR can be used, the more successful the 
    IEEE 802.17 standard will be.
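
    The jitter figures can be reproduced as the serialization time of one
    maximum-size blocking packet. They match if one assumes the usable
    SONET/SDH payload rate (roughly 150 Mbit/s for an OC-3/STM-1, scaling
    accordingly) rather than the 155.52 Mbit/s line rate; that rate choice is
    my assumption, as the text does not state it.

```python
# Per-node jitter without preemption = time to clock out the blocking packet.
# Payload rates of 150/600/2400/9600 Mbit/s are assumed (usable SONET/SDH
# capacity, not the nominal line rates).

def serialization_ms(packet_bytes, rate_mbit):
    """Time to serialize one packet at the given payload rate, in ms."""
    return packet_bytes * 8 / (rate_mbit * 1e6) * 1e3

MAX_IP_BYTES = 64 * 1024  # maximum IP packet size

for rate in (150, 600, 2400, 9600):
    print(f"{rate} Mbit/s: {serialization_ms(MAX_IP_BYTES, rate):.3f} ms per node")

for kb in (1, 5, 10, 20, 40, 60):
    print(f"{kb} Kbytes at 150 Mbit/s: {serialization_ms(kb * 1024, 150):.3f} ms")
```

    Apart from rounding of the last digit, this matches the listed values;
    for 10 Kbytes the formula gives 0.546 ms.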
     
     
    Since the preemptive mechanism raised some questions, here are some 
    details.

    - Three ring classes are considered
      Class 1: Premium class (circuit emulation): guaranteed throughput, 
    tight delay jitter
      Class 2: High-priority packet switching: guaranteed throughput, 
    bounded delay jitter
      Class 3: Low-priority packet switching: best-effort

    - Class 1 may preempt classes 2 and 3
    - Class 2 may preempt class 3

    - Class 1 uses cut-through
    - Classes 2 and 3 must be store-and-forward to keep it simple
    - The preemption mechanism holds both for packets clocked out of the 
    insertion buffer and for those leaving the transmit buffer
 
     
    - At the ring receive side, all packet embeddings are resolved by 
    forwarding the packets of the different priority classes into their 
    corresponding receive or insertion buffers
    - Resolving means that within the considered received packet, a new 
    packet start of a higher class may appear, indicating an embedded packet 
    that lasts until its end-of-packet shows up. This may happen more than 
    once within a packet.

    - Packets of the highest class are immediately forwarded, thereby 
    possibly preempting a packet of a lower class, either from the insertion 
    or the transmit buffer
    - Due to the store-and-forward operation of the insertion buffers for 
    the two lower classes, all holes left behind by the embedded packets 
    shrink together before the packets are forwarded onto the next 
    transmission hop.
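
    The resolving step can be illustrated with a small sketch (my own
    illustration, not the proposed MAC logic; the token format is invented):
    a stack of currently open packets demultiplexes each word into the buffer
    of the innermost, i.e. highest-preempting, packet.

```python
# Sketch of resolving packet embeddings at the ring receive side. A
# start-of-packet marker of a higher class inside a lower-class packet
# suspends the outer packet until the embedded end-of-packet shows up;
# this may happen more than once per packet.

def resolve_embeddings(stream):
    """stream: tokens ('sop', class_id), 'eop', or payload words (str).
    Returns {class_id: [words]} -- payload demultiplexed per class."""
    buffers = {}
    open_packets = []  # stack of classes of packets currently being received
    for tok in stream:
        if isinstance(tok, tuple) and tok[0] == "sop":
            open_packets.append(tok[1])   # a (possibly embedded) packet begins
        elif tok == "eop":
            open_packets.pop()            # innermost packet ends
        else:
            # payload word belongs to the innermost open packet
            buffers.setdefault(open_packets[-1], []).append(tok)
    return buffers

# A class-2 packet preempted twice by class-1 packets:
stream = [("sop", 2), "a1", ("sop", 1), "v1", "eop", "a2",
          ("sop", 1), "v2", "eop", "a3", "eop"]
print(resolve_embeddings(stream))
# → {2: ['a1', 'a2', 'a3'], 1: ['v1', 'v2']}
```

    The holes in the class-2 packet have shrunk together in its buffer, as
    described above, before it is forwarded onto the next hop.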
     
    By this operation it is assured that the time-sensitive packets for 
    IP-telephony and IP-conferencing practically shoot through the network 
    with minimal delay. Note that the ring might only be a small part of the 
    global connection, and as previously explained, every delay saving may 
    count towards achieving the required end-to-end delay bound of 80-100 ms.
     
    A mail on implementation complexity will follow.

    It can also already be said that detecting the start and end of packets 
    occurs in the same way as in operation without preemption. The MAC is 
    thus agnostic to preemption.
     
    best regards
    Harmen