I've had a chance to read a bit further into the working paper. I have a few comments and questions:
The paragraph following figure 5.8 doesn’t make much sense to me.
Figure 5.20 shows FIFOs on the MII side of the PHY. The main working FIFOs are typically on the MII side of the MAC. Accurate receive-packet timestamping can be obtained by monitoring the MII receive carrier sense (CRS) signal.
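To illustrate the point about CRS-based timestamping, here is a minimal sketch (names and signal model are my own, not from the working paper): the local clock is latched on each rising edge of CRS, so the stamp reflects when the carrier actually asserted rather than when the frame later emerges from the MAC-side FIFOs.

```python
# Hypothetical sketch: latch a timestamp at each rising edge of the MII
# carrier sense (CRS) signal, which marks packet arrival at the PHY.
# crs_samples is a per-clock-tick view of CRS; clock_ticks is the clock.

def timestamp_on_crs(crs_samples, clock_ticks):
    """Return the clock tick at each rising edge of CRS (packet arrival)."""
    stamps = []
    prev = 0
    for crs, tick in zip(crs_samples, clock_ticks):
        if crs and not prev:      # rising edge: carrier just asserted
            stamps.append(tick)
        prev = crs
    return stamps

# Example: CRS asserts at ticks 3 and 9
print(timestamp_on_crs([0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1], range(12)))  # [3, 9]
```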
In order to enforce bandwidth subscriptions, there needs to be an association between sourceID:plugID and some sort of identifier in the stream data packets. The network needs to identify these packets, associate them with a stream, measure the bandwidth consumed by the stream, and potentially drop packets if the subscription terms are violated. Figure 5.22 and bullet (a) in section 5.6.3 seem to indicate that a multicast destination address for each stream is the preferred association. We should be aware that this solution creates the following side effects:
1/ All media data is always multicast.
2/ ClassA "routing tables" must store a multicast destination MAC (I'll call it destinationID) along with sourceID and plugID.
3/ destinationIDs must be unique on the network. We'll need some network-wide means of allocating destinationIDs, and we'll need to deal with destinationID collisions when two separate working networks are connected together.
The pacing feature described in section 5.7 appears to insert an isochronous cycle's worth of gating delay at each switch hop. I assume the motivation here is to keep traffic flowing smoothly. Many applications would prefer a fixed, longer latency to a latency that depends on network location. Also, holding gigabit data for 125us implies roughly an additional 125 Kbits of buffer per port. There will be a cost associated with this.
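The per-port buffering claim is straightforward arithmetic (buffer = line rate x hold time):

```python
# Worked arithmetic for the pacing buffer: one isochronous cycle (125 us)
# of gigabit-rate data must be held per port.
link_rate_bps = 1e9           # gigabit Ethernet line rate
cycle_s = 125e-6              # one isochronous cycle
buffer_bits = link_rate_bps * cycle_s
print(buffer_bits)            # 125000.0 bits, i.e. ~125 Kbits per port
```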
Transmission gating described in section 5.7.4 will only work properly if ClassA packets in the transmission queue are sorted according to isochronous cycle number. Are the multiple arrows into the queue in figures 5.26b and 5.27b trying to indicate the presence of sorting hardware?
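The sorting the question above asks about amounts to draining the ClassA transmit queue in isochronous-cycle order. A minimal software sketch (not the working paper's design; a priority queue keyed on cycle number, with FIFO order preserved within a cycle):

```python
# Hypothetical gated transmit queue: ClassA frames are released in
# isochronous-cycle order, regardless of arrival order. Illustrative only.
import heapq

class GatedTxQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                    # tie-breaker: FIFO within a cycle

    def enqueue(self, cycle_number, frame):
        heapq.heappush(self._heap, (cycle_number, self._seq, frame))
        self._seq += 1

    def drain_cycle(self, current_cycle):
        """Release every queued frame whose cycle number has arrived."""
        out = []
        while self._heap and self._heap[0][0] <= current_cycle:
            out.append(heapq.heappop(self._heap)[2])
        return out

# Frames arriving out of cycle order still transmit in cycle order
q = GatedTxQueue()
q.enqueue(2, "frame-b")
q.enqueue(1, "frame-a")
q.enqueue(3, "frame-c")
print(q.drain_cycle(2))   # ['frame-a', 'frame-b']
print(q.drain_cycle(3))   # ['frame-c']
```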
I've made a first reading of Annex F. I would like to understand how the graphs of latency vs. switch hops were generated. These graphs show an unexpected exponential relationship between switch hop count and latency.
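For reference, if each hop simply added one fixed gating delay as section 5.7 suggests, accumulated latency would grow linearly with hop count, which is why an exponential curve is surprising (the 125 us per-hop figure is taken from the pacing discussion; the calculation is my own):

```python
# Expected latency under a fixed per-hop gating delay: linear in hops.
cycle_us = 125
latencies = [hops * cycle_us for hops in range(1, 6)]
print(latencies)   # [125, 250, 375, 500, 625] -- linear, not exponential
```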
From: owner-stds-802-3-re@ieee.org [mailto:owner-stds-802-3-re@ieee.org] On Behalf Of David V James
All,
Based on last meeting's consensus, the white paper now includes multicast-address-selected classes, pacing for classA and shaping for classB.
Things seemed to work out during the writing, yielding a good-compatibility classB as well as achievable low-latency bridge forwarding possibilities: 1 cycle (125 us) for Gb-to-Gb/100Mb, 2 cycles (250 us) for 100Mb-to-Gb/100Mb.
I have placed these on my DVJ web site, but assume that Michael will transfer them to the SG web site soon. You may find them at:
If anyone has a chance to read the pacing stuff, we can discuss this briefly at tomorrow's adhoc meeting. In case anyone missed the announcement, tomorrow's interest group meeting is as follows:
REsE interest group conference call, code 802373
I normally wouldn't want to discuss things on such short notice, but I'm aware that Michael will be there tomorrow and not a week later.
Cheers, DVJ