
RE: [10GBASE-T] latency





Jonathan,

The TX latency of 1000BASE-T is 84 bit times.
The RX latency is 244 bit times. Total latency = 328 bit times = 328 ns.
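
As a sanity check, assuming one bit time = 1 ns at the 1000BASE-T rate of
1 Gb/s, a minimal sketch:

    BIT_TIME_NS = 1.0        # 1 Gb/s -> 1 ns per bit time
    tx, rx = 84, 244         # Table 40-14 latencies, in bit times
    print((tx + rx) * BIT_TIME_NS)   # 328.0 ns
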
Is this considered low latency?

Brett

-----Original Message-----
From: owner-stds-802-3-10gbt@majordomo.ieee.org
[mailto:owner-stds-802-3-10gbt@majordomo.ieee.org]On Behalf Of Jonathan
Thatcher
Sent: Tuesday, February 24, 2004 10:29 AM
To: stds-802-3-10gbt@ieee.org
Subject: RE: [10GBASE-T] latency



Sailesh,

Thank you. Excellent note.

Based on this information, I would say that the good news, and the bad news,
is that the discussion remains relevant. It is clear that the lower bound on
latency will provide more market opportunity than the upper bound at the
same cost. It seems similarly clear that the lower bound on cost will provide
more market opportunity than the upper bound at the same latency. I have
no idea how to de-convolve these.

Interestingly, I received a telephone call from a friend on this issue this
morning. It was brought to my attention that there may be an expectation that
the latency is no worse than 1000BASE-T. Based on my read of Table 40-14,
this would be 84 ns.

As with complicated discussions, other dimensions are usually added to see
if clarity can be found. In this case, there is a point to be made about the
expected solution cost when compared to the work getting started for 10GMMF.
As both solutions probably require "new cabling" and the requisite human
cost for installation, the key question will be one of relative cost of
equipment and infrastructure (sans installation cost). For 10GBASE-T to be
interesting, it must cost significantly less than 10GMMF. If complexity and
cost are added to achieve low latency and the end result is a cost structure
similar to 10GMMF, then the wrong decision was made. If complexity and cost
are minimized and the end result is a cost structure similar to 10GMMF, then
why bother with 10GBASE-T at all?

jonathan

p.s. I use the word "significantly" precisely because of its ambiguity :-)

> -----Original Message-----
> From: owner-stds-802-3-10gbt@majordomo.ieee.org
> [mailto:owner-stds-802-3-10gbt@majordomo.ieee.org]On Behalf Of sailesh
> rao
> Sent: Tuesday, February 24, 2004 8:21 AM
> To: jonathan.thatcher@ieee.org; stds-802-3-10gbt@ieee.org
> Subject: RE: [10GBASE-T] latency
>
>
>
> Jonathan,
>
> Firstly, the complexity of the echo/NEXT cancellation in the
> 10GBASE-T
> receiver is affected by the latency requirement.
>
> If the Tx+Rx latency requirement is <100ns, the echo/NEXT
> cancellers would
> need to be implemented in direct form, just as in a
> 1000BASE-T receiver. In
> this case, using the 4DPAM8 proposal as an example, the span of the
> cancellers would increase by 8X over 1000BASE-T, AND the
> clock rate of the
> cancellers would also increase by 8X, which means that the
> complexity of the
> cancellers would increase by 64X over a 1000BASE-T implementation.
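>
> (A back-of-the-envelope check on that 64X figure; the per-tap cost model
> below is an assumed simplification, not part of the proposal itself:)
>
>     # direct-form canceller cost ~ (tap count) x (tap update rate)
>     span_ratio  = 8    # canceller span in symbols, vs 1000BASE-T
>     clock_ratio = 8    # symbol clock rate, vs 1000BASE-T
>     print(span_ratio * clock_ratio)   # 64 -> 64X the MACs per second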
>
> If the Tx+Rx latency requirement is in the 1us range, then we
> can use batch
> processing techniques to reduce the complexity of the
> cancellers by an order
> of magnitude, to say, 4-6X that of a 1000BASE-T receiver.
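>
> (To see where an order of magnitude can come from, here is a rough
> per-sample multiply count for a direct-form canceller versus an FFT-based
> overlap-save block canceller; the overlap-save choice and the tap count
> are illustrative assumptions, not a committed design:)
>
>     import math
>
>     def direct_mults_per_sample(n_taps):
>         return n_taps                  # one MAC per tap per sample
>
>     def overlap_save_mults_per_sample(n_taps):
>         # process n_taps new samples per block with a 2*n_taps FFT:
>         # two FFTs plus one pointwise product per block (rough count)
>         fft_cost = 2 * n_taps * math.log2(2 * n_taps)
>         return (2 * fft_cost + 2 * n_taps) / n_taps
>
>     n = 512                            # hypothetical canceller span, taps
>     print(direct_mults_per_sample(n))               # 512
>     print(round(overlap_save_mults_per_sample(n)))  # ~42
>
> (The price of block processing is roughly a block's worth of extra
> latency, which is why it fits a 1us budget but not a <100ns one.)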
>
> Secondly, the coding gain achievable from an error correcting
> code is a
> function of the latency requirement. If the latency
> requirement is <100ns,
> then we would have to restrict ourselves to weak 1000BASE-T
> type codes,
> which means that the complexity of the analog/cancellers
> would need to
> increase to compensate for the lack of coding gain.
>
> If the Tx+Rx latency requirement is in the 1us range, then we
> can use more
> powerful codes and achieve at least 3dB higher coding gain than the
> 1000BASE-T trellis code.
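>
> (For scale: treating coding gain simply as an SNR offset, 3 dB of extra
> gain halves the signal-to-noise ratio the analog front end and the
> cancellers must deliver:)
>
>     gain_db = 3.0
>     print(10 ** (gain_db / 10))   # ~2.0X relaxation in required SNR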
>
> I hope that answers your questions.
>
> Regards,
> Sailesh.
>
> >From: "Jonathan Thatcher" <jonathan@ccser.com>
> >Reply-To: <jonathan.thatcher@ieee.org>
> >To: <stds-802-3-10gbt@ieee.org>
> >Subject: RE: [10GBASE-T] latency
> >Date: Tue, 24 Feb 2004 02:06:46 -0800
> >
> >So, is there any gutsy person ready to throw out the latency
> estimates
> >(and,
> >ideally, the breakouts) for the complex / high power vs the
> simple / low
> >power versions of 10GBASE-T?
> >
> >At this point, even "order of magnitude" estimates would be
> helpful. I
> >can't
> >tell if we are talking 10's of ns or 10's of ms. Depending
> on the outer
> >limits of the range, this entire discussion may be moot (which is the
> >essence, I think, of Pat's append below).
> >
> >jonathan
> >
> >p.s. There do exist Ethernet switches that transparently support
> >cut-through
> >switching. RDMA over TOE over Ethernet is one option. There
> are others.
> >
> > > -----Original Message-----
> > > From: owner-stds-802-3-10gbt@majordomo.ieee.org
> > > [mailto:owner-stds-802-3-10gbt@majordomo.ieee.org]On Behalf Of
> > > pat_thaler@agilent.com
> > > Sent: Monday, February 23, 2004 6:05 PM
> > > To: stephen.bates@ece.ualberta.ca
> > > Cc: stds-802-3-10gbt@ieee.org
> > > Subject: RE: [10GBASE-T] latency
> > >
> > >
> > >
> > > Stephen,
> > >
> > > This is a tough question because latency is important for
> > > some applications that might use RDMA NICs but there are also
> > > constraints on the power available. An RDMA NIC is an
> > > interface card that includes the RDMA protocol plus a TCP/IP
> > > offload engine and MAC/PHY. (The MAC/PHY would usually be
> > > Ethernet.)
> > >
> > > This is kind of long so here is the executive summary:
> > > Practically all of the 10GBASE-T market will require
> > > reasonable power. There is additional market
> > > available if the PHY is very low latency. However, BMP is very
> > > dependent on reasonable power, so if a trade-off of power for
> > > latency pushes the power too high, one will lose more market
> > > than one gains. Note that low latency in Infiniband and Fibre
> > > Channel can be around 100 ns port to port through a switch.
> > >
> > > --- The details ---
> > >
> > > The upper layers on these cards use plenty of power
> > > themselves so I doubt there is more than 5 W available for
> > > the 10GBASE-T PHY given the power that a card slot can
> > > provide. That number would probably be workable though
> > > painful. Less would be better. Much more and it will probably
> > > be hard to find slots that can provide the power and remove
> > > the heat. There may be early bleeding edge products made with
> > > higher power, but for broader use the technology should be
> > > able to get down to this power level.
> > >
> > > Hopefully other NIC vendors will chime in if they disagree
> > > about the power.
> > >
> > > Like Ethernet NICs, RDMA NICs are intended to support a wide
> > > variety of applications. Some of these applications are
> > > pretty traditional networking applications and aren't
> > > especially latency sensitive. Other potential applications
> > > such as storage and clustering are currently served by more
> > > specialized networks (e.g. Fibre Channel and the proprietary
> > > predecessors to Infiniband) and are latency sensitive.
> > >
> > > What do clustering (Infiniband) and storage (Fibre Channel)
> > > customers consider low latency?
> > > In Infiniband, the systems vendors generally wanted less than
> > > 100 ns port to port through the switch. Fibre Channel
> > > switches are about the same. In both technologies they
> > > typically are using cut through switching to get this speed.
> > > Ethernet switches moved away from their early cut-through
> > > operation and generally have much higher latency. If Ethernet
> > > wants to serve the very latency sensitive applications, then
> > > more than PHYs has to be low latency.
> > >
> > > Neither of these technologies is planning on a 10GBASE-T
> > > type PHY. They have PHYs similar to CX4 and the optical PHYs.
> > > Infiniband is working on a quad speed version of their
> > > existing 2.5 Gig signaling (as 802.3 may end up doing if the
> > > backplane study group is chartered). It could be argued that
> > > for these very latency-sensitive applications, Ethernet can
> > > also use the CX4 and optical PHYs.
> > >
> > > I'm not sure what the latency range of the proposals under
> > > consideration currently is. It seems likely that even the
> > > fastest of them doesn't achieve the ultra low latency that
> > > the systems vendors want for this class of application.
> > >
> > > Given this, it makes sense to accept some extra delay in
> > > return for lower power.
> > >
> > > Regards,
> > > Pat
> > >
> > > -----Original Message-----
> > > From: Stephen Bates [mailto:stephen.bates@ece.ualberta.ca]
> > > Sent: Friday, February 20, 2004 12:34 PM
> > > To: THALER,PAT (A-Roseville,ex1)
> > > Cc: stds-802-3-10gbt@ieee.org
> > > Subject: RE: [10GBASE-T] latency
> > >
> > >
> > > Hi All
> > >
> > > My thanks to everyone who has responded to my email.
> > >
> > > The responses I've been getting tend to suggest that PAUSE
> > > should not be
> > > (and is not) enabled in most Ethernet systems. If flow control is
> > > required, it should be handled higher up the stack. This obviously
> > > increases the latency, and if TCP/IP is implemented in
> > > software no exact
> > > bound on latency can be given since it will be
> > > architecture-specific.
> > >
> > > However, although latency is not an issue for PAUSE, it is a
> > > major issue
> > > for certain applications 10G may be targeting. I believe
> > > this brings us
> > > back to Brad's original request for some figures on end-to-end latency
> > > for applications such as cluster computing and RDMA.
> > >
> > > Serag mentioned that we should stick to the low latency
> > > solutions if the
> > > power remains comparable with that of a 10G optical transponder. My
> > > concern is that the balance between digital and analog power will be
> > > totally biased to the analog. This implies power consumption will not
> > > drop as much with technology scaling. In this case we will not see the
> > > same kind of power consumption drop over time that we saw for
> > > 1000BASE-T.
> > >
> > > Regards
> > >
> > > Stephen
> > >
> > > ----------------------------------------------------------------------
> > >
> > > Dr. Stephen Bates
> > >
> > > Dept. of Electrical and Computer Engineering   Phone: +1 780 492 2691
> > > The University of Alberta                      Fax:   +1 780 492 1811
> > > Edmonton                                       www.ece.ualberta.ca/~sbates
> > > Canada, T6G 2V4                                stephen.bates@ece.ualberta.ca
> > > ----------------------------------------------------------------------
> >
> >
>