I have a feeling that there is some miscommunication here. Here is how I have understood it.

Cut-through simply says: do not store the packet; as soon as it is known where it has to go and with what priority, let it go.

Preemption during transmission says: chop off the packet, create a CRC (or other) error, and send another, higher-priority packet.

Thus preemption and cut-through have no direct relation. Are we talking about other meanings?

Regards,

Devendra Tripathi
VidyaWeb, Inc
90 Great Oaks Blvd #206
San Jose, CA 95119
Tel: (408) 226-6800, Direct: (408) 363-2375
Fax: (408) 226-6862
  
Harry,

If cut-through is for H traffic only, I don't see any conflict between cut-through and preemption. Preemption helps cut-through in a very effective way. The original goal of preemption is to reduce packet transfer delay and jitter on the ring as much as possible. Isn't this exactly what cut-through is fighting for?

Whether GFP belongs to L1 or L2, I still have doubts. But for a purely technical discussion, I don't think that using GFP to eliminate the possibility of preemption is appropriate here.

By the way, I totally understand and agree with your definition of cut-through.

By your last statement, do you mean SONET clock distribution is needed in RPR by someone?

Regards,

William Dai
  
----- Original Message -----
Sent: Thursday, April 12, 2001 6:13 PM
Subject: RE: [RPRWG] Cut through definition?
    
  
If this scheme is adopted, then I don't see how it would work with cut-through mode and GFP. Hence it is no longer L1 agnostic. Should we not support cut-through then? Is this a bad thing for RPR? I think I will let someone else answer the question!

I would like to make it clear that we, Nortel, are like everybody else: we will support GFP on our equipment when the third-party silicon vendors sell GFP-compliant chips.

What is the original goal of preemption? Are you providing a solution that is more complicated than it's worth? What about test equipment complexity?

Lastly, there are those who believe that RPR is more than just a packet-based network.

Harry
    
      
Although I'm not a big preemptive transfer fan, I think this topic deserves detailed discussion before we rush to any conclusion. What changed my mind is the discussion of Jumbo Frame support on RPR: not long ago it was 2KB, now it is 9KB; what about the ultimate 64KB in the future?

That said, I'm proposing neither an ATM-cell-like structure nor a slotted-ring structure, and since the RPR MAC is L1 agnostic, physical signalling tricks cannot be used either.

Let me give one example of a preemptive transfer definition here, and let's discuss what is so complicated (or simple) about it:

    1. There are 3 MAC classes of traffic (H, M, L).
    2. Preemption is allowed only for "Transit" H traffic to preempt "Transmit" M or L traffic.
    3. A preempted segment is not allowed to be preempted again.
    4. Preempted "Transmit" traffic will be scheduled to transfer right after the "Transit" H traffic, independent of class.
    5. Each packet transfer will have an "IDLE/Escape" word inserted every 256 or 512 bytes (for the sake of alignment/padding concerns) as the preemptive insertion point.
    6. Jumbo frames are not supported for the H class.
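Rules 2, 3, and 5 above can be sketched as a small decision function. The names, structure, and the fixed 256-byte interval below are my own illustration, not from any RPR draft:

```python
# A minimal sketch of William Dai's preemption rules: only Transit H traffic
# may preempt Transmit M/L traffic (rule 2), a preempted segment may not be
# preempted again (rule 3), and preemption may only occur at an IDLE/Escape
# insertion point every 256 bytes (rule 5).

INSERTION_INTERVAL = 256  # bytes between IDLE/Escape words (rule 5)

class Frame:
    def __init__(self, mac_class, path, bytes_sent=0, was_preempted=False):
        self.mac_class = mac_class          # 'H', 'M', or 'L' (rule 1)
        self.path = path                    # 'transit' or 'transmit'
        self.bytes_sent = bytes_sent        # progress of current transmission
        self.was_preempted = was_preempted  # already chopped once? (rule 3)

def may_preempt(incoming, current):
    """Return True if `incoming` may preempt `current` mid-transmission."""
    if incoming.mac_class != 'H' or incoming.path != 'transit':
        return False                        # rule 2: only Transit H preempts
    if current.mac_class not in ('M', 'L') or current.path != 'transmit':
        return False                        # rule 2: only Transmit M/L is preemptable
    if current.was_preempted:
        return False                        # rule 3: no double preemption
    # rule 5: wait for the next IDLE/Escape insertion point
    return current.bytes_sent % INSERTION_INTERVAL == 0

# Example: a transit H frame arrives while a transmit L frame is at byte 512
print(may_preempt(Frame('H', 'transit'), Frame('L', 'transmit', bytes_sent=512)))  # True
```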
        
By the way, SONET clock distribution is not needed. After all, RPR is a packet-based network.

Best Regards

William Dai
                
       
        
        
      
----- Original Message -----
Sent: Thursday, April 12, 2001 7:23 AM
Subject: RE: [RPRWG] Cut through definition?
        
  
Exactly my point.

"We should keep it simple and not segment packets." I.e., do not preempt.

Regards,

Harry
          
        
I am not clear how the proposed preemption method works. Does a high priority transit packet preempt a low priority add packet? Can a high priority add packet also preempt a low priority transit packet? What happens if a previously preempted add packet contends with a same-priority packet that was also preempted in an upstream node? What happens if a previously preempted add packet contends with a same-priority previously preempted transit packet that follows a high priority preempting transit packet with a clock cycle gap in between due to clock mismatch? Do we require a SONET clock to be distributed on the ring? Is the RPR MAC layer-one agnostic?

Thanks.

Necdet

Harry Peng wrote:
             
Complexity? What complexity?

In the tandem path, if a high priority packet can preempt a low priority packet at an arbitrary boundary, then the preemption logic will have to deal with a tandem packet that is already preempted.

This means the fastest preemption response time is one internal word, and the preempted packet will have to pad to word boundaries to make life easier. Furthermore, the tandem receiver will have to respond within one clock cycle, as that is the atomic size. What is the word size for 10G: 64 bits or 128 bits? What about for 40G or higher?

Unless you are willing to have cells. Then why not use ATM?

I agree that we should keep it simple and not segment packets.

Regards,

Harry
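The word-boundary padding Harry mentions is easy to illustrate. The helper below is my own hypothetical sketch, not from any draft: it rounds a preempted segment up to the next boundary of the internal datapath word:

```python
# Illustration of padding a preempted segment to the internal word boundary,
# for the 64-bit and 128-bit datapath widths Harry asks about.

def pad_to_word(length_bytes, word_bits):
    """Round a segment length up to the next word boundary."""
    word_bytes = word_bits // 8
    return ((length_bytes + word_bytes - 1) // word_bytes) * word_bytes

# A 997-byte segment on a 64-bit (8-byte) internal datapath:
print(pad_to_word(997, 64))   # 1000
# The same segment on a 128-bit (16-byte) datapath:
print(pad_to_word(997, 128))  # 1008
```

The wider the internal word, the more padding is wasted per preemption point, which is part of the complexity cost being argued here.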
-----Original Message-----
From: Sushil Pandhi [mailto:Sushil.Pandhi@xxxxxxxxxxxxxxx]
Sent: Wednesday, April 11, 2001 10:33 AM
To: Leon Bruckman
Cc: 'davidvja@xxxxxxxxxxx'; stds-802-17@xxxxxxxx
Subject: Re: [RPRWG] Cut through definition?
               
I agree with Leon that preemption will increase the complexity. ATM solves this by segmenting the message into smaller fixed-size cells and reassembling them, and this approach adds a lot of complexity.

If we do not have preemption, and assuming a 1522-byte frame just starts transmission before synchronous traffic can be sent, the 1522-byte frame at OC-3 rate will take about 82.6 microseconds. If we assume the same situation arises at 32 nodes around the ring, then we have about 2.6 msec of delay from not doing preemption. So I doubt preemption will give us much advantage.

-Sushil
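These figures can be reproduced with a short calculation. Note the effective payload rate below is my assumption, back-derived from the 82.6 µs figure; the OC-3 line rate itself is 155.52 Mb/s:

```python
# Reproducing Sushil's worst-case delay estimate: one maximum-size frame
# blocking at every node of a 32-node ring at OC-3 rate.

FRAME_BITS = 1522 * 8        # maximum Ethernet frame (with VLAN tag), in bits
OC3_PAYLOAD_RATE = 147.4e6   # b/s; assumed effective rate implied by 82.6 us
NODES = 32

per_hop_us = FRAME_BITS / OC3_PAYLOAD_RATE * 1e6  # serialization delay per node
total_ms = NODES * per_hop_us / 1e3               # worst case around the ring

print(round(per_hop_us, 1))  # 82.6 us per blocking frame
print(round(total_ms, 2))    # 2.64 ms total, i.e. "about 2.6 msec"
```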
Leon Bruckman wrote:

> DVJ
> My personal view is that preempting lower priority traffic in the middle of a packet
> adds complexity that is not really needed. At 1G, the transmission time for
> a 1500-byte packet is 12 usec, so the worst case for a 256-node ring will be 3.1
> msec of added delay because of low priority packets being transmitted and not
> preempted. Furthermore, the probability of the worst case is very small. We
> did some simulations with the following assumptions:
> - There is always a low priority packet being transmitted by the node
> - A high priority packet may arrive at any time during the low priority packet
>   transmission (equal probability)
> Some of the results were presented during the January interim (by Gal Mor).
>
> For a 128-node ring operating at 1G, the preemption gain will still be in
> the msec range with very high probability, and this can easily be absorbed
> by the jitter buffers at the receiver.
> Leon
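Leon's worst-case arithmetic checks out; a quick sketch using his stated packet size and link rate (the 128-node case is the one he mentions for the preemption gain):

```python
# Checking Leon's figures: a 1500-byte packet at 1 Gb/s, worst case summed
# over every node of a 256-node ring, plus the 128-node case.

PACKET_BITS = 1500 * 8
LINK_RATE = 1e9  # b/s

tx_time_us = PACKET_BITS / LINK_RATE * 1e6  # ~12 us per packet
worst_256_ms = 256 * tx_time_us / 1e3       # ~3.07 ms (Leon rounds to 3.1)
worst_128_ms = 128 * tx_time_us / 1e3       # ~1.54 ms, still in the msec range

print(tx_time_us, worst_256_ms, worst_128_ms)
```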
> -----Original Message-----
> From: David V. James [mailto:davidvja@xxxxxxxxxxx]
> Sent: Friday, April 06, 2001 4:27 AM
> To: Carey Kloss; Devendra Tripathi
> Cc: stds-802-17@xxxxxxxx
> Subject: RE: [RPRWG] Cut through definition?
>
> All,
>
> Relative to the discussion of cut-through, et al.:
> my perception is that a cut-through node has two
> insertion buffers, one for classA (provisioned synchronous)
> and one for classB (provisioned asynchronous).
>
> The preferred transmit order is as follows:
>   a) classA insertion buffer (always)
>   b) classA transmit traffic (subject to provisioned rate)
>   c) asynchronous traffic.
> The classA insertion buffer only needs to be the size of
> the maximum packet sent by this node, plus (perhaps) some
> extra symbols to deal with hardware decoding latencies.
>
> The classB insertion buffer is to deal with the accumulation of
> asynchronous packets that occurs when (worst case) full asynchronous
> traffic is coming in/out and rate-limited synchronous traffic is being
> transmitted. The size of the classB buffer is on the order of several
> upstream-link delays times the rateOfSynchronous/rateOfLink ratio.
>
> The order of the asynchronous traffic (c) depends on the classB
> buffer-fill status, prenegotiated vs. consumed rates, and
> the size of the asynchronous backlog in the client.
>
> The asynchronous transmit buffer is a bit schizophrenic in its
> behavior. It should be in the client (not the MAC) because that
> allows packets to be reordered/inserted/deleted until just
> before transmission time. However, the amount of traffic in the
> asynchronous transmit queue may influence the MAC queue-selection
> and throttle-signal assertion properties.
>
> I personally favor allowing cut-through synchronous traffic to
> preempt asynchronous traffic, even in the middle of a packet. That
> yields the lowest possible jitter, but at some encoding
> complexity cost.
>
> DVJ
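DVJ's preferred transmit order can be sketched as a selection function. The names and boolean inputs below are my own simplification of his description, not an actual MAC interface:

```python
# Sketch of DVJ's preferred transmit order: (a) the classA insertion buffer
# always wins, (b) then classA transmit traffic subject to its provisioned
# rate, (c) then asynchronous (classB/client) traffic.

def select_next(classA_insertion, classA_transmit, async_traffic,
                classA_within_rate):
    """Pick the next transmit source, per DVJ's preferred order."""
    if classA_insertion:                        # (a) always first
        return 'classA-insertion'
    if classA_transmit and classA_within_rate:  # (b) subject to provisioned rate
        return 'classA-transmit'
    if async_traffic:                           # (c) asynchronous traffic last
        return 'async'
    return 'idle'

print(select_next(True, True, True, True))    # classA-insertion
print(select_next(False, True, True, False))  # async (classA is over its rate)
```

Ordering of the asynchronous traffic within (c), per the text, would further depend on classB buffer fill and the client backlog, which this sketch leaves out.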