Re: [802.3EEESG] On the topic of transition time (was Re: [802.3EEESG] Comments on our work from Vern Paxson)
Ken-
I believe you misunderstood my suggestion and thesis.
My points were:
1) There is a new class of consumer equipment and protocols being
developed in 802.1AVB. This "stuff" will have much more stringent
requirements for both latency and jitter, in both absolute value and
variability. I believe the requirements will be such that the applications
will not tolerate speed shifts during their streaming operations.
2) The market volume and duty cycle of these systems will be such
that it will absolutely NOT be OK to disable (or ignore) power saving modes.
3) These systems are not "legacy", they are current standards
projects. As such, hooks can be put into the protocols to support power
saving modes. That is, when a streaming application is firing up, the
device could send a packet requesting that the network switch to the
higher speed necessary to support it.
I was by no means suggesting a soft state of "link power management = off".
I was suggesting that link power management could be explicitly commanded
to change speeds by active packets, in addition to passive monitoring of
(legacy) traffic levels.
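To make point 3 concrete, an explicit speed request might look something like the sketch below. This is purely illustrative: the opcode, field layout, and speed codes are invented for this example and are not part of any 802.3 or 802.1 frame format.

```python
import struct

# Hypothetical "link speed request" message an AVB-class device might send
# before starting a stream. All constants below are assumptions for this
# sketch, not values from any standard.
SPEED_CODES = {100: 0x01, 1000: 0x02, 10000: 0x03}  # Mb/s -> code (invented)
OPCODE_SPEED_REQ = 0x10                              # invented opcode

def build_speed_request(speed_mbps: int, stream_id: int) -> bytes:
    """Pack a 6-byte request: 1-byte opcode, 1-byte speed code,
    32-bit stream identifier, all in network byte order."""
    return struct.pack("!BBI", OPCODE_SPEED_REQ, SPEED_CODES[speed_mbps], stream_id)

# A device about to start stream 42 asks the link to run at 1 Gb/s:
msg = build_speed_request(1000, 42)
```

The point of the sketch is only that the request carries enough state (desired speed plus which stream needs it) for the link partner to hold the higher speed for the duration of that stream, rather than inferring demand from traffic levels.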
I hope this clears up any misunderstanding.
Regards,
Geoff Thompson
At 08:21 AM 6/19/2007, Ken Christensen wrote:
>Hello all, if I may weigh in... There is no doubt that there will be
>applications for which EEE is not suitable (and should be disabled). Such
>applications may include latency-sensitive cluster computing and some
>realtime streaming applications. But will these be a majority of
>applications? If not, Vern's comments may still hold for the majority
>of cases. Over time it seems that applications have become more robust,
>not less so, to loss and delay. As bandwidth, processing, and memory
>capacity have all increased, the need for fine-grained control (for
>classic "QoS") has diminished.
>
>In a previous email to this list, Geoff suggested a control packet to
>maintain a soft state of "link power management = off". Did I understand
>this correctly? I think that remote management of power state of links
>and devices will become an interesting problem area for many protocol
>standards. DMTF looked at this in 2004. Should such power management
>control occur at the 802.3 level?
>
>For speed of packet transmission, there are two factors: transmission time
>and end-to-end latency. For small packets, the latter may dominate. For
>a 64 byte packet, the transmission time at 100 Mb/s is about 5
>microseconds and at 1 Gb/s it is about 0.5 microseconds... is there any
>protocol (or operating system or device) that is sufficiently sensitive to
>tell the difference between 0.5 and 5 microseconds?
>
>Thanks for letting me say a few words.
>
>Regards,
>
>Ken Christensen
>Department of Computer Science and Engineering
>University of South Florida
>Phone: (813) 974-4761
>Web: http://www.csee.usf.edu/~christen
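Ken's transmission-time figures can be checked with a quick calculation (a minimal sketch in Python; the function name is mine, not from the thread):

```python
# Serialization (transmission) time for a 64-byte frame at two link speeds.
# This models only time on the wire; end-to-end latency adds propagation,
# queuing, and processing delays, which can dominate for small packets.

FRAME_BYTES = 64

def tx_time_us(frame_bytes: int, link_bps: float) -> float:
    """Time to serialize a frame onto the link, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

t_100m = tx_time_us(FRAME_BYTES, 100e6)  # 100 Mb/s -> ~5.12 us
t_1g = tx_time_us(FRAME_BYTES, 1e9)      # 1 Gb/s   -> ~0.512 us
```

A 64-byte frame is 512 bits, giving about 5 microseconds at 100 Mb/s and about 0.5 microseconds at 1 Gb/s, consistent with the numbers in the message above: the 10x speed change shifts serialization time by only a few microseconds for minimum-size frames.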