I think that we can get away without local (MAC-PHY) flow control if we use variable IPG. I'm not sure what size buffering is available in an OC-192 SONET framer, but I would think that whatever is there to handle the overhead insertion may be enough. I haven't had time to evaluate that yet.
Thanks,
Brad
Brad Booth
bbooth@xxxxxxxxxx
Level One Communications, Austin Design Center
(512) 407-2135 office
(512) 589-4438 cellular
-----Original Message-----
From: Chang, Edward S [SMTP:Edward.Chang@xxxxxxxxxx]
Sent: Wednesday, August 04, 1999 1:30 PM
To: Booth, Brad; stds-802-3-hssg@xxxxxxxx
Subject: RE: Proposal for accommodating 10.0000 and 9.58464 line rates
Brad:
Thanks very much for keeping me on the right track. I tried to round off for
easier discussion. However, do you think we can get away without complex flow
control if we make the IPG and buffer size reasonably acceptable?
Regards,
Edward S. Chang
NetWorth Technologies, Inc.
NetworthTK@xxxxxxx
-----Original Message-----
From: Booth, Brad [mailto:bbooth@xxxxxxxxxx]
Sent: Wednesday, August 04, 1999 11:27 AM
To: stds-802-3-hssg@xxxxxxxx
Subject: RE: Proposal for accommodating 10.0000 and 9.58464 line rates
Ed,
You may wish to re-work your numbers. If you assume that there is no
stripping of the preamble and only the IPG is stripped, then the maximum-size
packet is 1530 bytes. This would require a minimum 67-byte IPG. Based on
the OC-192 payload transfer rate of 9.58464 Gb/s, the equation for
calculating the required IPG is:
0.043336 * x = y
where:
x = the packet size (in bytes)
y = the IPG size (in bytes)
To help offset the loss of throughput due to selecting an IPG size (i.e., a
67-byte IPG with a 64-byte frame), the "standard" could state the equation used
for the WAN (OC-192) PHY but leave it up to the implementor as to the
granularity of the implementation. For example, a step function based on
the standardized minimum IPG (12 bytes) could be implemented, or another
implementor could use a smaller granularity step function.
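As a rough illustration only (the function name, the granularity parameter, and
the 12-byte floor below are just shorthand, not proposed standard text), the
calculation with an implementor-chosen step size could look something like this:

    # Illustrative sketch: bytes of IPG needed per byte of packet so the MAC does
    # not outrun the OC-192 payload rate.  0.043336 = 10.0000 / 9.58464 - 1.
    import math

    IPG_PER_PACKET_BYTE = 10.0000 / 9.58464 - 1.0   # ~0.043336

    def required_ipg(packet_bytes, granularity=1):
        """Smallest IPG (in bytes) for one packet, rounded up to the chosen
        granularity and never below the standard 12-byte minimum IPG."""
        exact = packet_bytes * IPG_PER_PACKET_BYTE
        stepped = math.ceil(exact / granularity) * granularity
        return max(stepped, 12)

    print(required_ipg(1530))                  # 67 -- the worst-case minimum above
    print(required_ipg(1530, granularity=12))  # 72 -- with a 12-byte step function
    print(required_ipg(72))                    # 12 -- the minimum IPG dominates for small frames

The coarser the step, the simpler the rate-adaptation logic, but the more
throughput is given up whenever a frame does not land exactly on a step boundary.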
Cheers,
Brad
Brad Booth
bbooth@xxxxxxxxxx
Level One Communications, Austin Design Center
(512) 407-2135 office
(512) 589-4438 cellular
-----Original Message-----
From: NetWorthTK@xxxxxxx [SMTP:NetWorthTK@xxxxxxx]
Sent: Tuesday, August 03, 1999 11:14 PM
To: dan_dove@xxxxxx; stds-802-3-hssg@xxxxxxxx
Subject: Re: Proposal for accommodating 10.0000 and 9.58464 line rates
Dan:
Thanks very much for your patience in replying to my questions. I agree with
your comments. However, there are questions to which you may like to respond:
>
> >
> We will need buffers in the PHY to accommodate the difference in MAC
> and link speeds. I assume this will be necessary anyway to allow for some
> latency in the framing process.
>
I agree we need buffers. Using the numbers we are discussing, 10.000 Gbps
+/- 100 ppm and 9.58464 Gbps +/- 100 ppm, at the maximum frame size of 1500
bytes the worst-case timing skew due to the data rate difference is 6.53 ns.
In other words, if the MAC/PLS and PHY interface is a byte wide, the buffer we
need is a 66-deep (byte-wide) FIFO. I believe Steve has mentioned before a
short buffer, 64 deep.

The buffer size is small; therefore, the buffer is not an issue at all. On the
other hand, if we make every IPG 64 bytes (or 66 bytes in the worst case) in
conjunction with the 66-deep byte buffer, then theoretically we are in good
shape without any additional flow control, although small (64-byte) packets
will slow down to 1/2 of the maximum throughput. Does this suggestion cause
other deficiencies?
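As a back-of-the-envelope sketch of that worst case (just rough Python with
illustrative names; it assumes the MAC clock runs +100 ppm fast and the PHY
clock -100 ppm slow):

    # Extra bytes the 10.0000 Gbps MAC can clock in during the time the
    # 9.58464 Gbps PHY needs to drain one frame.
    MAC_RATE_BPS = 10.0000e9 * (1 + 100e-6)   # fast MAC clock
    PHY_RATE_BPS = 9.58464e9 * (1 - 100e-6)   # slow PHY clock

    def fifo_buildup_bytes(frame_bytes):
        """Worst-case FIFO build-up for one frame, in bytes."""
        drain_time_s = frame_bytes * 8 / PHY_RATE_BPS    # time for the PHY to send the frame
        filled_bytes = drain_time_s * MAC_RATE_BPS / 8   # bytes the MAC can deliver meanwhile
        return filled_bytes - frame_bytes

    print(fifo_buildup_bytes(1500))   # roughly 65 bytes, so a FIFO on the order of 66 deep

The build-up scales linearly with frame size, which is where the 66-deep
figure comes from.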
>
> I am in support of a MAC/PLS interface that is un-coded data (as Shimon
> suggested) and will therefore allow PHYs that use different coding for
> their different environments. It is entirely reasonable to believe that
> the coding requirements for a 40Km link will be different than those of
> a 20m copper link.
>
I agree. I do not have any qualm with this.
Regards,
Edward S. Chang
NetWorth Technologies, Inc.
NetworthTK@xxxxxxx
>