Hugh,

Okay, let's use near-worst-case numbers (worst case here means the implementer chooses to suboptimize performance). All times in microseconds.
Number of hops: 6
Delay on 100 m of fiber (total, not per hop; speed of light in glass): 100 m / (300,000,000 m/s * 2/3) = 0.5
Wait (number of bytes): 1500
Rate: 10 Gb/s
Preemption slot (delay due to preemption): 64 bytes (4 to 8 is more realistic)
Latency without preemption: 6 hops * 1500 bytes/hop * 8 bits/byte / 10 Gb/s + 0.5 = 7.2 + 0.5 = 7.7
Latency with preemption: 6 hops * 64 bytes/hop * 8 bits/byte / 10 Gb/s + 0.5 = 0.3 + 0.5 = 0.8

That looks like an order of magnitude to me.
This doesn't include the serialization/deserialization, but the 64 bytes in the preemption slot more than covers that.
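
If anyone wants to check the arithmetic, here is a quick Python sketch (the function and variable names are just mine for illustration, nothing normative):

def path_latency_us(hops, wait_bytes, rate_bps, prop_delay_us):
    # Per-hop serialization of the blocking frame (or preemption slot),
    # plus the end-to-end propagation delay, all in microseconds.
    return hops * wait_bytes * 8 / rate_bps * 1e6 + prop_delay_us

prop_us = 100 / (300_000_000 * 2 / 3) * 1e6      # 100 m of fiber at ~2/3 c -> 0.5 us

print(path_latency_us(6, 1500, 10e9, prop_us))   # no preemption: ~7.7 us
print(path_latency_us(6,   64, 10e9, prop_us))   # 64-byte preemption slots: ~0.8 us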
On the other hand, you could reduce the frame size to 48-byte cells with 5-byte headers and do better than this with only the added expense of a nearly free SAR (all silicon is free, right? :-). Now, there's a solution one could really fall in love with.
jonathan 
p.s. Doing this with 4 to 8 byte preemption slots on a backplane (1 m) is even more interesting:

Without: 1 hop * 1500 bytes/hop * 8 bits/byte / 10 Gb/s + 0.005 = 1.2
With: 1 hop * 6 bytes/hop * 8 bits/byte / 10 Gb/s + 0.005 = 0.01

Could that be closing in on 2 orders of magnitude?
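
Plugging the backplane numbers into the same sketch (again, purely illustrative):

print(path_latency_us(1, 1500, 10e9, 0.005))     # no preemption: ~1.2 us
print(path_latency_us(1,    6, 10e9, 0.005))     # 6-byte slots: ~0.01 us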
Jumbo frames make it even more fun. But we don't talk about those.... No, smaller frames are much better than bigger ones :-)