
Re: [802.3_ISAAC] Need for more Use-Case ad hoc meetings



Hi TJ,

Thanks for the responses; this is the kind of patience I was referring to, and it is good that you split this out from the other response.

 

In response to your question: Setting this issue aside, if you were talking about two completely different system designs, why would you have similar requirements for a system that does not have these requirements?

I would like to have a common solution with the best properties.  Given two completely different system designs, they may still be described by a common set of requirements, especially if one system is a superset of the other.  If the scenarios are A: point-to-point over 802.3dm, and B: over 0-2 hops in a network where at least the last link to the sensor is 802.3dm, then ideally a solution for scenario B would also be a sufficient solution for scenario A.  If, on the other hand, the superset view is incorrect and scenario A must meet performance criteria that scenario B can never reasonably achieve, then I would not expect a common set of requirements.

 

I haven’t seen public information on the ways around I2C round-trip ACK that you refer to, so that would be interesting.  However, it is good to confirm the point that there are other acceptable solutions to the I2C bottleneck besides just lowering latency. 

 

The higher level requirement for I2C you have outlined is to be able to quickly reconfigure the sensor within a certain time frame (on the fly, not just initial configuration).  If the deadline can be met, then the latency of individual I2C transactions does not seem to be a limiting factor.  If the system design insists on a round-trip ACK for every portion of each I2C transaction, then it really creates a bottleneck, just as if TCP required an ACK for each packet before it would send the next one.  The round-trip ACK might work in some network topologies and fall apart in others, so the group needs to decide how far to go in supporting topology-sensitive use cases.  Alternately, the group could consider if there is a good reason to have 2 sets of requirements for different network topologies:  one for round-trip ACK and another for bandwidth-efficient, latency-tolerant methods. 
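 

To make the bottleneck concrete, here is a minimal back-of-envelope sketch in Python (my own illustrative numbers, not taken from either presentation) comparing the time to reconfigure a sensor when every remote register write waits for a round-trip ACK against the case where the writes are pipelined and only a final status/readback pays the round trip:

# Hypothetical illustration: time to push n_writes register writes to a remote
# sensor over a network with one-way latency net_lat_us.  i2c_write_us is the
# assumed local time for one I2C register write (order of 30 us at 1 MHz
# Fast-mode Plus); all values are placeholders, not measured requirements.
def reconfig_time_us(n_writes, net_lat_us, i2c_write_us=30.0, per_write_ack=True):
    if per_write_ack:
        # each write: command downstream, local I2C transaction, ACK upstream
        return n_writes * (2 * net_lat_us + i2c_write_us)
    # pipelined: commands stream back-to-back, one final round trip for status
    return n_writes * i2c_write_us + 2 * net_lat_us

for lat in (1, 10, 100):
    acked = reconfig_time_us(50, lat, per_write_ack=True)
    piped = reconfig_time_us(50, lat, per_write_ack=False)
    print(f"one-way latency {lat:>3} us: per-write ACK {acked:7.0f} us, pipelined {piped:6.0f} us")

With per-write ACKs the reconfiguration time scales directly with the network latency, while the pipelined case pays the round trip only once; the real write counts and I2C timing would of course have to come from actual sensor configurations.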

 

Thanks again for sharing your perspective on the other points. 

 

Best Regards,

Scott

 

From: TJ Houck <thouck@xxxxxxxxxxx>
Sent: Monday, August 12, 2024 4:01 PM
To: Scott Muma - C33246 <Scott.Muma@xxxxxxxxxxxxx>; STDS-802-3-ISAAC@xxxxxxxxxxxxxxxxx
Subject: RE: Need for more Use-Case ad hoc meetings

 


Hi Scott,

Thanks again for the questions; see my responses in GREEN.

Sorry for splitting these responses; they were getting lengthy, and I did not want to write a book report in one email and have people lose interest.

 

Questions:

  1. Slides 6 and 7 seem to be GPIO input to output delay when referring to latency.  Is the latency limit proposed on slide 9 also GPIO input-output delay or something else? This is about the diagram on slide 7 – some customers want tighter latencies, such as the 1-2us for GPIO. I believe Max was going to go with this because these are not achievable or would require significant changes to this ecosystem. This is why I have already questioned customers about this, and they agreed it would be acceptable if 802.3dm leveraged the benefit of the PTP/TSN functionality, as Kirsten pointed out in the previous email exchange. However, the highest limit some have agreed to is the 10us hard limit.
  2. For I2C commands is the 10us hard limit based on the clock stretching for the entire round-trip of the transaction or how would it be detected as a functional safety violation?  If a system works a certain way then I can understand why this processor<>sensor round-trip time is critical and performance-limiting, but systems can work other ways to decouple the overall round trip time from the clock stretching to get better performance and compensate for the round-trip-time/network latency.  Yes, there are ways around this that customers use today. I could go into more detail on this if people come back to me with concerns; I have already reviewed this, and there is a way around it that is deployed in the other technologies today. I would encourage you to review the user guides of these technologies, as this information is publicly available.  In general, the clock stretching should add only a slight additional delay. As you pointed out, some customers want the entire round-trip ACK/NACK for safety, and others don't require it because they need the quickest transfer of information possible. (Hint Hint)
  3. Given synchronization methods such as PTP and ability to schedule events like frame synchronization with low skew and low jitter at multiple sensors, what latency could be tolerated?  10us is mentioned, but I am interested in what really drives this limit… if it’s 50us or 100us, but  with very low jitter/skew on the frame synch across sensors when does the processor start to be impacted?  I highlighted a few use cases in the email to Kirsten; let me know if you still don’t follow this and I can lay out some more details. One important aspect is the rapid change in environments and the need to make adjustments quickly.
  4. Given a proposed latency limit of 10us in the topology shown on slide 7, is it possible to add an Ethernet switch between the Switch and Bridge without pushing the latency beyond the 10us hard limit?  Or should the 10us limit apply up to a certain number of hops, or be understood differently? These are numbers the group should run (a rough back-of-envelope sketch follows below). There may be compromises, and I brought this to the committee's attention because these latencies and priorities should be treated with great importance. Otherwise, this will go down a road similar to Ethernet audio for ANC/RNC (real-time operating applications): latencies were not treated with great importance, and now proprietary links reap the benefits of this flaw in the requirements, which the Ethernet committee did not identify as critical.
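 

As a rough illustration of the "numbers the group should run" (all values below are hypothetical assumptions, not from any contribution), this Python sketch estimates the worst-case delay one extra store-and-forward switch hop could add to a small control frame such as a GPIO trigger message, assuming it can be blocked by one max-size frame already in transmission and ignoring PHY and propagation delay:

# Hypothetical per-hop worst-case delay for a small control frame.
def per_hop_worst_case_us(link_gbps, ctrl_bytes=128, max_frame_bytes=1522,
                          switch_processing_us=0.5):
    bits_per_us = link_gbps * 1000.0               # bits per microsecond
    store_and_forward = ctrl_bytes * 8 / bits_per_us
    blocking = max_frame_bytes * 8 / bits_per_us   # one max-size frame ahead of us
    return store_and_forward + blocking + switch_processing_us

for gbps in (1, 2.5, 10):
    print(f"{gbps:>4} Gb/s link: ~{per_hop_worst_case_us(gbps):.2f} us worst case per extra hop")

Under these assumptions an extra hop at 10 Gb/s costs on the order of 2us of a 10us budget, while a single extra 1 Gb/s hop already exceeds it; frame preemption or time-aware shaping would change the blocking term, which is exactly the kind of trade-off that would need to be run with real numbers.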

 

Questions:

  1. Slide 8 says latency, but describes delay on a very limited part of the link, how does this relate to the <1us latency limit in slide 6? This is the <1.0us requirement for the video channel from sensor to switch, and I outlined an example of how this could be measured. There are many ways we could go here. The critical aspect is putting hard limits on transferring video information and prioritizing it correctly.
  2. How could an Ethernet switch be inserted between the Bridge and Switch without exceeding the 1us latency limit in slide 6?  Or is higher latency acceptable in this case? Now this reaches outside of the requirements I listed, and this is where customers have different sets of requirements. The customers are well aware of the additional latency this would cause, and I would love to discuss this further. I was primarily trying to show the group the risk for the applications I outlined: if we don't take latencies seriously, this can cause chaotic behavior that customers would deem unacceptable compared to their current solutions.
  3. If higher latency from sensor to processor is acceptable, then the information on Slide 6 seems out of context.  Is it possible to provide this context or to give a clearer requirement on the sensor to processor latency? This could be discussed in further detail with other measurement ideas that are related to the SoP -> SoP I laid out. I don’t see why this would be an issue as long as we try to achieve the lowest latency to get the data to the sensor and make adjustments as quickly as possible.

 

Similarly, if we state that the requirements are precisely the observed behavior of a point-to-point connection, then connecting the camera to processor over a network may not be possible/economical. –- This sounds like you’re describing two different requirements. The requirements I addressed directly reflect latency when communicating to sensors. The latency you’re describing is when this information wants to be passed into the network, which is a different requirement than I described. I would ask why we could not simply add other latency requirements for other network applications, add what you’re concerned about, and encourage you to share information about these.

Perhaps this is getting to the root of our difference in understanding.  I suspect that some of the group (at least me) understands the proposed requirements to be requirements derived from the processor<>camera interaction and overall application requirements, independent of the specific network topology/implementation.  For example, if we have one requirement that says the GPIO input-output delay must be <10us when point-to-point and another requirement that says it must be <100us over a network, then it means we require a PHY that supports the <10us case.  However, it seems unlikely and undesirable that the system would have different requirements based solely on network topology, and so this approach is likely to result in an overconstrained PHY.  On the other hand, if we could say that the GPIO input-output delay can be up to 200us, but the skew/jitter at the output across multiple sensors is <1us for all topologies, then we can derive a looser PHY delay requirement and have much greater flexibility in making tradeoffs that can reduce the PHY complexity/cost/power, etc., which I understand to be some of the reasons for 802.3dm and what differentiates it from existing Ethernet PHYs.  Agreed, these are two different requirements: I outlined some scenarios, and you outlined some that are entirely different from those requirements. However, I disagree; these can be two separate requirements. They can easily be two different requirements, in my opinion, without affecting the PHY structure. Setting this issue aside, if you were talking about two completely different system designs, why would you have similar requirements for a system that does not have these requirements?

 

Best Regards,

TJ

 

 

From: Scott Muma <00003414ca8b162c-dmarc-request@xxxxxxxxxxxxxxxxx>
Sent: Friday, August 9, 2024 4:04 PM
To: STDS-802-3-ISAAC@xxxxxxxxxxxxxxxxx
Subject: [EXTERNAL] Re: [802.3_ISAAC] Need for more Use-Case ad hoc meetings

 


Hi TJ,

Thanks for the response, it’s good to hear that I am likely misunderstanding the message you intended to convey.  However, I likely remain confused, so will explain my interpretation of some of the points and some questions it raised for me.  Your continued patience is greatly appreciated.

 

First in the processor to camera/sensor direction:

Slide 6 “Latency Requirements”

  • 10us hard limit for a GPIO trigger or I2C command for functional safety
  • 1-2us limit on GPIO trigger events
  • SERDES already achieves these latency requirements

Slide 7 “Latency and Jitter Application Diagram”

  • Trigger latency of <1-2us ideal
  • Schedule events if link cannot achieve <1-2us
  • Diagram illustrates rising edge of GPIO at processor to rising edge of GPIO at sensor having delay of 1-2us to achieve GPIO jitter of 1-2us
  • Diagram illustrates rising edge of GPIO at processor to rising edge of GPIO at sensor having delay of <10us with PTP to achieve GPIO jitter of 1-2us

Slide 9 “Summary”

  • It is proposed to limit the latency to 10us worst case in the switch to camera direction

 

Questions:

  1. Slides 6 and 7 seem to be GPIO input to output delay when referring to latency.  Is the latency limit proposed on slide 9 also GPIO input-output delay or something else?
  2. For I2C commands is the 10us hard limit based on the clock stretching for the entire round-trip of the transaction or how would it be detected as a functional safety violation?  If a system works a certain way then I can understand why this processor<>sensor round-trip time is critical and performance-limiting, but systems can work other ways to decouple the overall round trip time from the clock stretching to get better performance and compensate for the round-trip-time/network latency. 
  3. Given synchronization methods such as PTP and ability to schedule events like frame synchronization with low skew and low jitter at multiple sensors, what latency could be tolerated?  10us is mentioned, but I am interested in what really drives this limit… if it’s 50us or 100us, but  with very low jitter/skew on the frame synch across sensors when does the processor start to be impacted? 
  4. Given a proposed latency limit of 10us in the topology shown on slide 7, is it possible to add an Ethernet switch between the Switch and Bridge without pushing the latency beyond the 10us hard limit?  Or should the 10us limit apply up to a certain number of hops, or be understood differently?

 

Second in the camera/sensor to processor direction:

Slide 6 “Latency Requirements”

  • <1.0us latency limit from sensor to switch
  • <1.0us latency limit on video channel from sensor to switch

Slide 8 “Latency Requirements”

  • PCS to PCS block should not exceed 1us for 10Gbps
  • Diagram shows the SOP to SOP delay at the xMII of the bridge/switch should be <1us

Slide 9 “Summary”

  • It is proposed to limit the latency to 1us worst case in the camera to switch direction.

 

Questions:

  1. Slide 8 says latency, but describes delay on a very limited part of the link, how does this relate to the <1us latency limit in slide 6?
  2. How could an Ethernet switch be inserted between the Bridge and Switch without exceeding the 1us latency limit in slide 6?  Or is higher latency acceptable in this case?
  3. If higher latency from sensor to processor is acceptable, then the information on Slide 6 seems out of context.  Is it possible to provide this context or to give a clearer requirement on the sensor to processor latency?

 

I would not yet claim that 10us switch->camera and 1us camera->switch are not the true requirements, but these requirements will severely constrain the valid solutions and I am concerned it will take a networked topology off the table.   -- May I ask how you concluded that this is not a true requirement and how this would directly impact network solutions, as this is a different requirement than I have described?

Apologies if the wording was unclear.  I was saying that I am not yet convinced one way or the other if these are the requirements needed to meet the overall application requirements.  Hopefully my questions above clarify why I’m concerned about the impact on a networked solution.  I would be interested to understand how this is different from the requirement that you have described given that Ragnar referred below to these as your “proposed requirements”. 

 

Similarly, if we state that the requirements are precisely the observed behavior of a point-to-point connection, then connecting the camera to processor over a network may not be possible/economical. –- This sounds like you’re describing two different requirements. The requirements I addressed directly reflect latency when communicating to sensors. The latency you’re describing is when this information wants to be passed into the network, which is a different requirement than I described. I would ask why we could not simply add other latency requirements for other network applications, add what you’re concerned about, and encourage you to share information about these.

Perhaps this is getting to the root of our difference in understanding.  I suspect that some of the group (at least me) understands the proposed requirements to be requirements derived from the processor<>camera interaction and overall application requirements, independent of the specific network topology/implementation.  For example, if we have one requirement that says the GPIO input-output delay must be <10us when point-to-point and another requirement that says it must be <100us over a network, then it means we require a PHY that supports the <10us case.  However, it seems unlikely and undesirable that the system would have different requirements based solely on network topology, and so this approach is likely to result in an overconstrained PHY.  On the other hand, if we could say that the GPIO input-output delay can be up to 200us, but the skew/jitter at the output across multiple sensors is <1us for all topologies, then we can derive a looser PHY delay requirement and have much greater flexibility in making tradeoffs that can reduce the PHY complexity/cost/power, etc., which I understand to be some of the reasons for 802.3dm and what differentiates it from existing Ethernet PHYs.
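 

For what it is worth, here is a small Python sketch of the scheduled-event idea (all values are assumptions for illustration, not from the presentations): the processor stamps a fire time far enough in the future to cover the worst-case delivery latency, and each sensor raises its GPIO when its PTP-synchronized clock reaches that stamp, so the cross-sensor skew is bounded by the clock-sync error rather than by the per-hop delivery latency.

# Hypothetical scheduled-trigger illustration; placeholder values only.
MAX_NET_LATENCY_US = 100.0   # assumed worst-case delivery latency to any sensor
SYNC_ERROR_US      = 0.5     # assumed per-node PTP clock error

def schedule_trigger(now_us, margin_us=2 * MAX_NET_LATENCY_US):
    """Processor side: pick a fire time far enough in the future."""
    return now_us + margin_us

def fire_time_at_sensor(scheduled_us, clock_error_us):
    """Sensor side: fires when its local (imperfect) clock reaches the stamp."""
    return scheduled_us + clock_error_us

t_fire = schedule_trigger(now_us=0.0)
edges = [fire_time_at_sensor(t_fire, e) for e in (+SYNC_ERROR_US, -SYNC_ERROR_US, 0.1)]
print(f"skew across sensors: {max(edges) - min(edges):.2f} us "
      f"(bounded by clock sync, not by the {MAX_NET_LATENCY_US:.0f} us delivery latency)")

In this model the only PHY requirement is that its worst-case delay stays inside the scheduling margin, while the tight number is the skew/jitter target delivered by the time synchronization.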

 

Best Regards,

Scott

 

From: TJ Houck <thouck@xxxxxxxxxxx>
Sent: Thursday, August 8, 2024 7:34 PM
To: STDS-802-3-ISAAC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_ISAAC] Need for more Use-Case ad hoc meetings

 



Hi Scott,

 

Thanks for the follow-up. However, I don't follow how this limits Ethernet functionality, nor did my presentation say this was the only requirement. The applications I brought up were to propose limits based on what the SERDES solutions address for their customers today. The presentation aimed to share how automotive ADAS systems are connected, sensor to bridge and switch to processor. The GPIOs are used as critical trigger events for various applications, and latency is a crucial reason why SERDES solutions are used today, since they address these needs of customers.

 

I would not yet claim that 10us switch->camera and 1us camera->switch are not the true requirements, but these requirements will severely constrain the valid solutions and I am concerned it will take a networked topology off the table.   -- May I ask how you concluded that this is not a true requirement and how this would directly impact network solutions, as this is a different requirement than I have described?

 

Similarly, if we state that the requirements are precisely the observed behavior of a point-to-point connection, then connecting the camera to processor over a network may not be possible/economical. –- This sounds like you’re describing two different requirements. The requirements I addressed directly reflect latency when communicating to sensors. The latency you’re describing is when this information wants to be passed into the network, which is a different requirement than I described. I would ask why we could not simply add other latency requirements for other network applications, add what you’re concerned about, and encourage you to share information about these.

 

I believe Kirsten tried to make this point in the call, -- I must’ve missed when this was brought up.

 

Best Regards,

TJ

 

From: Scott Muma <00003414ca8b162c-dmarc-request@xxxxxxxxxxxxxxxxx>
Sent: Thursday, August 8, 2024 2:35 PM
To: STDS-802-3-ISAAC@xxxxxxxxxxxxxxxxx
Subject: [EXTERNAL] Re: [802.3_ISAAC] Need for more Use-Case ad hoc meetings

 


 

Hi Ragnar, Max,

I would also like to have more use case discussions, and appreciate your contributions so far.  However, it would be useful to separate the behavior of specific implementations from the system/application requirements. 

 

TJ's presentation made latency/delay understandable through diagrams; however, I understood the presentation to be describing the behavior of a specific implementation.

 

To take this to an extreme, if we hypothetically connect a processor directly to an imager we could observe the behavior of that implementation and it might “require” even lower latency because of decisions made by the implementer even if the overall application has no direct requirement for such low latency.  If we accept such requirements then there is no possible alternative but direct connection between camera and ECU. 

 

Similarly, if we state that the requirements are precisely the observed behavior of a point-to-point connection, then connecting the camera to processor over a network may not be possible/economical.  I believe Kirsten tried to make this point in the call, and if there is no network possible then Ethernet may be burdening the solution to the point that it can’t even achieve the point-to-point case at similar cost/power/latency.  So to Max’s point on the call, I don’t expect anyone is against a solution that supports a networked topology (since that is the point of Ethernet), but overconstraining the valid solutions will prevent a networked topology.

 

I would not yet claim that 10us switch->camera and 1us camera->switch are not the true requirements, but these requirements will severely constrain the valid solutions and I am concerned it will take a networked topology off the table. 

 

Best Regards,

Scott

 

From: Ragnar Jonsson <rjonsson@xxxxxxxxxxx>
Sent: Thursday, August 8, 2024 8:57 AM
To: STDS-802-3-ISAAC@xxxxxxxxxxxxxxxxx
Subject: [802.3_ISAAC] Need for more Use-Case ad hoc meetings

 


Hi Max and all,

 

At the end of yesterday’s meeting Max asked if we should have more Use-Case ad hoc meetings before the September meeting. There was a problem with my microphone, so you probably did not hear my comment. I think that we obviously need to have more Use-Case ad hoc meetings before the September meeting.

 

While yesterday’s ad hoc was a good start, we did not even have time to finish going over your proposed definitions of delay vs latency. Kirsten has already sent a follow-up email, highlighting the need for finishing that discussion.

 

I think that we need a deeper dive on the latency/delay requirements. There was a factor of 10 difference in the two proposed latency requirements presented in Montreal:

 

Kirsten presented

https://www.ieee802.org/3/dm/public/0724/matheus_dm_02b_latency_07152024.pdf

On slide 3 it states “It provides concrete examples of latency and latency requirements in a camera system.”

On slide 9 it states “Ethernet latencies of 10us in the DS and of 100us in the US are sufficiently small …”

 

TJ presented

https://www.ieee802.org/3/dm/public/0724/houck_fuller_3dm_01_0724.pdf

On slide 9 it states “It is proposed to limit the latency to 10us worst case in the switch to camera direction and 1us worst case in the camera to switch direction.”

TJ told us that these requirements are based on our conversations with multiple OEMs and with the ADAS SoC vendors.

 

There are also other issues that were brought up in Montreal related to Use-Cases that need further discussion.

 

In summary, we clearly need more Use-Case ad hoc meetings.

 

Ragnar


To unsubscribe from the STDS-802-3-ISAAC list, click the following link: https://listserv.ieee.org/cgi-bin/wa?SUBED1=STDS-802-3-ISAAC&A=1

