Hi Stefan,

Thank you for starting this thread. It has made me realize that the latency question has to be answered at the top layer of the network, taking network topologies and MTUs into account. All known latencies must be accounted for very carefully because of the multiplicative "N" hop factor; hence, small latencies at the PHY become quite large. We are still evaluating the PHY and FEC solution, with a big variable being the interleaving factor of the FEC solution. This factor cannot be taken lightly because of the multiplicative factor you have elaborated on. Let us make some more progress in the PHY group; then we will identify the latencies and roll this analysis back up to the top to verify that all applications are satisfied. I also understand your concern about the power consumption issues, and we will track that as we explore the solutions.

Regards,
Tom

Tom Brown
System Engineering
tkbrown@xxxxxxxxxxx
office: 512-609-7530
cell: 512-965-8738
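As a rough illustration of the multiplicative hop factor Tom describes, a minimal Python sketch follows; the per-link latencies, hop count, and helper name are assumptions taken from the figures discussed later in this thread, not agreed numbers:

    def network_latency_us(per_link_us, hops, higher_layer_us=0.0):
        """Per-link PHY/FEC latency summed over all hops, plus end-point stack delay."""
        return hops * per_link_us + higher_layer_us

    # Example: 7 hops at 5...6 us per link, end-point stacks not yet included.
    print(network_latency_us(5.0, 7))  # 35.0 us
    print(network_latency_us(6.0, 7))  # 42.0 us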
From: Stefan Buntz [mailto:stefan.buntz@xxxxxxxxxxx]

Hello Stephane,

I added the IEEE reflector again, as I think it is important to have the same understanding of the requirements between OEMs and PHY vendors, and I am also not sure if we have really detailed this.

So, from my understanding, the 5...6 µs we discuss would mean the overall link latency of the PHY and data link layers for a single link of an RTPGE* network (which also means these values would add up over multiple links). For a 7-hop network this leads to maybe 7 * 5...6 µs = 35...42 µs. However, above the data link layer at both end points (not in the individual hops), the message has to go through all the higher ISO/OSI layers, which also adds significant delay, so this is not yet the final end-to-end latency at application level.

From my point of view, the most critical application is a control loop between different ECUs, e.g. for autonomous driving or other driver assistance applications, and here we need to maintain low latencies not only on the PHY and data link layers, but also on the higher layers (to avoid e.g. an oscillation of the control loop or too slow a reaction).

During the discussions in Indian Wells I understood that FEC per frame (is this meant to be an Ethernet frame (MTU = 1500 bytes), or are there specific "FEC frames" defined inside the PHY layer?) would lead to ~1.5 µs, plus minor overhead for the rest of the processing, so that in sum we are around 2 µs, maybe 3 µs. If the FEC is extended to 2 frames, this would be in the range of 2 * 1.5 µs = 3 µs plus some overhead, maybe 4...5 µs. The delay of the FEC itself would be constant and independent of any disturbance (as long as the transmission does not drop the complete frame due to too many errors). This is because we would have to wait until the complete data (e.g. 2 frames) for error correction is received.
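To illustrate the constant store-and-decode delay described above, here is a small Python sketch; the per-frame latency, overhead value, and function name are assumptions based on the estimates in this mail, not a defined 1000Base-T1 parameterisation:

    def fec_delay_us(frames_per_block, per_frame_us=1.5, overhead_us=1.0):
        """Constant per-link FEC latency: buffer the whole block, then decode.

        The delay depends only on the block size, not on how many errors occurred.
        """
        return frames_per_block * per_frame_us + overhead_us

    print(fec_delay_us(1))  # 2.5 us -- thread estimate "around 2 us, maybe 3 us"
    print(fec_delay_us(2))  # 4.0 us -- thread estimate "maybe 4...5 us"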
The question in Indian Wells was whether this amount of delay for an individual link is acceptable or not, and based on the above assumptions I did not see any reason why we need to avoid a 2-frame FEC. However, I did not know about the DMLT/IET discussion, and I do not know why 1 µs was not an acceptable latency in that case. So if there are, from the OEMs' point of view, any reasons (which I maybe missed) why this latency would cause a problem, or if there are any applications which need a dedicated latency, please share.

Regards,
Stefan

PS: As I am writing to the reflector: another point which is important for OEMs is certainly the power consumption. If we could see the differences between different modulations, FECs, etc. in terms of relative power consumption, this would also help us to come to a consensus. This should, however, always be seen in relation to the overall port power: e.g. if a 2-frame FEC needs twice the power of a 1-frame FEC but does not have a significant influence on the overall PHY power consumption (e.g. 1-frame FEC = 100% vs. 2-frame FEC = 101%), this is not a strong criterion to use for a decision.

* from now on most probably called "1000Base-T1".
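As a small illustration of the relative-power argument in the PS, the following Python sketch (the FEC power share and function name are assumed, purely for illustration) shows how doubling the FEC power barely moves the overall port power when the FEC is only a small fraction of it:

    def port_power_ratio(fec_share, fec_power_factor):
        """Total port power relative to baseline when only the FEC block scales."""
        return (1.0 - fec_share) + fec_share * fec_power_factor

    # Example: FEC assumed to be 1% of the port power; doubling the FEC power
    # (2-frame vs. 1-frame) moves the whole port from 100% to about 101%.
    print(port_power_ratio(fec_share=0.01, fec_power_factor=2.0))  # 1.01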
From: KORZIN Stephane [mailto:stephane.korzin@xxxxxxxxxxx]

Mehmet, Stefan,

Could you please explain what is included in those 5 to 6 µs? From the Vitesse presentation at Indian Wells, I understood that FEC could add some time to the frame transmission. But a 1500-byte frame duration is ~1.5 µs, so 5 to 6 µs seems quite long to me. Are those µs the time spent in the receiver PHY to correct errors? How does this propagate through switches, let's say with 5 to 7 hops? Do we have a "chance" that the cause of the error burst compromises not only one link but multiple links along the path from the source node to the destination node? Is this latency constant, whatever the number of errors?

I remember presentations on the Internet backbone, where they requested that reception of a high-priority frame force a switch to stop its transmission of a low-priority frame and start transmission of the high-priority frame immediately. It seemed like waiting for 1 µs was not acceptable, so how about 5 to 6 µs? I don't know how these requests were finally handled. Do you know?

Best regards,
Stephane.
From: Mehmet Tazebay [mailto:mtazebay@xxxxxxxxxxxx]

Thank you, Stefan. This is very useful. We look forward to hearing other OEM inputs as well.

Best regards,
Mehmet
From: Stefan Buntz [mailto:stefan.buntz@xxxxxxxxxxx]

Dear all,

As discussed during this week's ad hoc, I have internally asked for feedback regarding the latency requirement for the error correction decision. As a first feedback from the Daimler side, I can say that a latency of 5 µs...6 µs does not seem to be an issue for our systems. I guess this opens up the possibilities for different FEC solutions. Hopefully other car manufacturers can add their opinions by replying to this e-mail, so that we get a wider view of the requirements.

Regards,
Stefan