Jonathan, I don't know why you're so scared of smaller frames - anyone who wants lower latency should prefer smaller frames. If you reduce the MTU to 500 bytes (not to 48) the equation swings in favor of the existing standards: 6 x 500 x 8 / 10k + 0.5 = 2.9 us vs 0.8 us - still roughly a 3x improvement from preemption, but getting closer.

Bear in mind that this is an extreme worst-case comparison. Averages will be almost identical, because the preempting packet can arrive at any time during the preempted frame; the preempted frame might not be of maximum size; the link may be idle when the preempting packet arrives; and of course the packet already in progress may itself be high priority. If we start adding in more delays, the difference gets smaller still (both delays increase similarly). For example, your scenario allows only 15m per link; you will run into geometrical problems if you want to aggregate very large numbers of nodes with only 15m per link. If there are fewer nodes, then you need to re-architect your interconnect matrix, because 6 hops should be able to accommodate many thousands of end stations.

Your example must also be assuming a very aggressive cut-through switch architecture (cut-through has lost popularity in recent years - a shame). If you want to conform to the requirements of bridging, you should wait for both the source and destination MAC addresses to be received before you transmit (unless you are a repeater!). Since you are advocating preemption, I would also assume that you must wait for the COS/TOS tag; that extra 10 bytes will be difficult to avoid. And if you decide that the error propagation of cut-through makes the technique unfavorable, then you have a full 64 bytes of latency to wait for the CRC of the incoming frame.

Regarding jumbo frames and the complexity of end-station devices: I would expect any device capable of filling a 10Gbps pipe to require some hardware acceleration, and for hardware implementations there really is no significant difference between encapsulating 1500-byte frames and 500-byte (or even smaller) frames. Hardware that performs this high-speed operation has the advantage of being seamlessly compatible with any other equipment it might be connected to. On the other hand, if a switch started using a preemption mechanism while connected to existing hardware, the result would be anybody's guess.

My assertion is that a small reduction in MTU for the local network will yield results close enough to your extreme examples to make the space where a new standard is demanded very small indeed. As I said, it's a niche of a niche. Better to spend our effort on cheap and simple 40Gig (or even 100Gig) and make this whole argument moot (yes, at 100G the maximum-length frame can be stored in 25m of wire).

Hugh.

Jonathan Thatcher wrote:
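
For anyone who wants to check the arithmetic, here is a quick Python sketch of the worst-case comparison. The 0.5 us fixed term and the 0.8 us preemption figure are simply the numbers quoted above (not derived here), and the ~2e8 m/s propagation speed in the cable is my assumption:

    # Worst case: at every hop a maximum-size frame has just started
    # transmitting when the high-priority frame arrives, so it waits one
    # full MTU serialization time per hop, plus a fixed per-path term.

    HOPS = 6          # switch hops in the example topology
    LINK_GBPS = 10.0  # 10 Gb/s links
    FIXED_US = 0.5    # fixed per-path overhead taken from the example (us)

    def worst_case_wait_us(mtu_bytes):
        per_hop_us = mtu_bytes * 8 / (LINK_GBPS * 1000)  # serialization per hop
        return HOPS * per_hop_us + FIXED_US

    print(worst_case_wait_us(1500))  # ~7.7 us with a standard 1500-byte MTU
    print(worst_case_wait_us(500))   # ~2.9 us with a 500-byte MTU (the 2.9 above)
    # preemption figure quoted for comparison: ~0.8 us

    # "Length in flight" of a max-size frame at 100 Gb/s, assuming ~2e8 m/s
    # propagation in the cable:
    frame_ns = 1500 * 8 / 100   # 120 ns to serialize 1500 bytes at 100 Gb/s
    print(frame_ns * 0.2)       # ~24 m of wire, i.e. roughly the 25m above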