Hi Stefan and Stephane,

even though I'm not working for an OEM ;-) I will try to give a small hint. As you mentioned, in the case of a 7-hop network we would accumulate significant latency, up to 42 µs. If we take a look at the requirements from IEEE 802.1 TSN, with 100 µs over 5 hops including Layer 2 protocols, I guess this is not going to work. Compared to other Ethernet PHYs, I would say the latency should be below 1 µs. But I'm not an expert, and I would like us to include the 802.1 people in the discussion as well. Maybe Franz or Yong can give us their requirements from their Layer 2 perspective? I would not derive our requirements directly from application requirements, but rather from the Layer 2 protocols: we need to ensure that these protocols can work on the new PHY.

Best regards

From: Stefan Buntz [mailto:stefan.buntz@xxxxxxxxxxx]

Hello Stephane,

I added the IEEE reflector again, as I think it's important that OEMs and PHY vendors have the same understanding of the requirements, and I am also not sure we have really detailed this...

From my understanding, the 5...6 µs we discuss would mean the overall link latency of the PHY and data link layer for a single link of an RTPGE* network (which also means these values would add up over multiple links). For a 7-hop network this leads to maybe 7 × 5...6 µs = 35...42 µs. However, above the data link layer at both end points (not in the individual hops), the message has to go through all the higher ISO/OSI layers, which adds significant further delay, so this is not the final end-to-end latency at application level.

From my point of view, the most critical application is a control loop between different ECUs, e.g. for autonomous driving or other driver-assistance applications. Here we need to maintain low latencies, not only on the PHY and data link layers but also on the higher layers (to avoid e.g. oscillation of the control loop or too slow a reaction).
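The accumulation argument above can be checked with a short sketch. This is a hedged example: the 5...6 µs per-link range and the 7-hop case are taken from the e-mail, and the 100 µs over 5 hops figure from the cited IEEE 802.1 TSN discussion; the function name is mine.

```python
def accumulated_latency_us(hops, per_link_us):
    """Per-link PHY/data-link latency adds up once per traversed link."""
    return hops * per_link_us

# The 7-hop case from the thread, for both ends of the 5...6 us range.
for per_link in (5.0, 6.0):
    total = accumulated_latency_us(7, per_link)
    print(f"7 hops @ {per_link} us/link -> {total} us")  # 35 us and 42 us

# The TSN figure mentioned above: 100 us over 5 hops implies a budget
# of 20 us per hop, including the Layer 2 protocol overhead.
print(f"Implied per-hop budget: {100 / 5} us")
```

Note that this counts only PHY/data-link latency; as the e-mail says, the higher ISO/OSI layers at the two end points add further delay on top.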
During the discussions in Indian Wells, I understood that the FEC per frame (is this meant to be an Ethernet frame (MTU = 1500 bytes), or are there specific "FEC frames" defined inside the PHY layer?) would lead to ~1.5 µs, plus minor overhead for the rest of the processing, so that in sum we are around 2 µs, maybe 3 µs. If the FEC is extended to 2 frames, this would be in the range of 2 × 1.5 µs = 3 µs plus some overhead, maybe 4...5 µs. The delay of the FEC itself would be constant and independent of any disturbance (as long as the transmission does not drop the complete frame due to too many errors). This is because we would have to wait until the complete data (e.g. 2 frames) needed for error correction has been received.

The question in Indian Wells was whether this amount of delay for an individual link is acceptable or not, and based on the above assumptions I didn't see any reason why we would need to avoid a 2-frame FEC. However, I didn't know about the DMLT/IET discussion, and I don't know why 1 µs was not an acceptable latency in that case... So if, from the OEMs' point of view, there are any reasons (which I may have missed) why this latency would cause a problem, or if there are any applications which need a dedicated latency, please share.

Regards,
Stefan

PS: As I am writing to the reflector: another point which is important for OEMs is, for sure, power consumption. If we could see the differences between the modulations, FECs, etc. in terms of relative power consumption, this would also help us come to a consensus. This should always be in relation to the overall port power: e.g., if a 1-frame FEC versus a 2-frame FEC means twice the power for the FEC itself, but this has no significant influence on the overall PHY power consumption (e.g. 1-frame FEC = 100% vs. 2-frame FEC = 101%), then it is not a strong criterion to base a decision on.

* from now on most probably called "1000BASE-T1".
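Stefan's store-and-forward reasoning (the receiver must buffer a complete FEC block before it can correct it, so the delay is constant and scales with the block size) can be sketched as follows. The per-frame delay and processing overhead are the rough figures from the e-mail, not specified values, and the function is a hypothetical illustration.

```python
def fec_latency_us(frames_per_block, per_frame_us=1.5, processing_us=0.5):
    # The decoder must wait for the complete block before running
    # error correction, so buffering delay scales with block size;
    # the delay is constant regardless of how many errors occur.
    return frames_per_block * per_frame_us + processing_us

print(fec_latency_us(1))  # 2.0 us  (e-mail's rough estimate: ~2, maybe 3 us)
print(fec_latency_us(2))  # 3.5 us  (e-mail's rough estimate: 4...5 us)
```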
From: KORZIN Stephane [mailto:stephane.korzin@xxxxxxxxxxx]

Mehmet, Stefan,

Could you please explain what's included in those 5 to 6 µs? From the Vitesse presentation at Indian Wells, I understood that the FEC could add some time to the frame transmission. But a 1500-byte frame duration is ~1.5 µs, so 5 to 6 µs seems quite long to me. Are those µs spent in the receiver PHY to correct errors? How does this propagate through switches, let's say with 5 to 7 hops? Is there a "chance" that the cause of the error burst compromises not only one link but multiple links along the path from the source node to the destination node? Is this latency constant, whatever the number of errors?

I remember presentations on the Internet backbone, where it was requested that reception of a high-priority frame force a switch to stop its transmission of a low-priority frame and start transmitting the high-priority frame immediately. It seemed like waiting 1 µs was not acceptable. So how about 5 to 6 µs? I don't know how these requests were finally resolved. Do you know?

Best regards,
Stephane.

From: Mehmet Tazebay [mailto:mtazebay@xxxxxxxxxxxx]

Thank you, Stefan. This is very useful. We look forward to hearing other OEM inputs as well.

Best regards,
Mehmet

From: Stefan Buntz [mailto:stefan.buntz@xxxxxxxxxxx]

Dear all,

as discussed during this week's ad hoc, I have internally asked for feedback regarding the latency requirement for the error-correction decision. As a first piece of feedback from the Daimler side, I can say that a latency of 5 µs...6 µs does not seem to be an issue for our systems. I guess this opens up the possibilities for different FEC solutions. Hopefully other car manufacturers can add their opinions by replying to this e-mail, to get a wider view of the requirements.

Regards,
Stefan
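Stephane's preemption question can be made concrete with a small calculation: without preemption, a high-priority frame arriving just after a low-priority frame has started transmitting must wait for that frame's full remaining serialization time. This is a hedged sketch; the 1 Gb/s line rate and the 1518-byte maximum frame size are my assumptions for illustration, not figures from the thread.

```python
def serialization_time_us(frame_bytes, rate_gbps):
    # Time to put a frame on the wire at the given line rate.
    return frame_bytes * 8 / (rate_gbps * 1000)

# Worst case without preemption: a maximum-size low-priority frame
# has just started, so the high-priority frame waits its full duration.
print(serialization_time_us(1518, 1.0))  # ~12.1 us at 1 Gb/s
```

This is why the backbone presentations Stephane recalls pushed for preemption: even a single in-flight frame can block an express frame for longer than the per-link latencies being debated here.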