RE: XAUI, SFF connectors
Maybe someone can help me understand this...
Was XAUI initially driven by:
a. IC vendors, as an effort to reduce the number of leads on the chips?
b. PC board vendors, to reduce the number and complexity of traces on a board?
c. OEMs, for reasons a. and/or b.?
d. Other?
Thanks,
Chris
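
For context on option (a), a back-of-the-envelope comparison of the signal
counts involved, assuming the commonly cited figures of roughly 74
single-ended XGMII signals versus 16 differential XAUI signals (these counts
are assumptions stated here, not numbers taken from this thread); the small
Python sketch below just spells out the arithmetic.

    # Sketch: approximate ASIC pin savings of XAUI vs. parallel XGMII.
    # Signal counts are the commonly cited totals; treat them as assumptions.

    def xgmii_signals():
        # 32 data + 4 control + 1 clock, in each direction, single-ended
        return 2 * (32 + 4 + 1)        # 74 signals

    def xaui_signals():
        # 4 serial lanes per direction, each lane a differential pair
        return 4 * 2 * 2               # 16 signals

    print("XGMII:", xgmii_signals(), "signal pins")
    print("XAUI: ", xaui_signals(), "signal pins")
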
-----Original Message-----
From: Joel Goergen [mailto:joel@xxxxxxxxxxxxxxxxxxx]
Sent: Friday, July 28, 2000 9:26 AM
To: stds-802-3-hssg@xxxxxxxx
Subject: Re: XAUI, SFF connectors
Sabato,
I agree with your comments stated below. Though I still don't believe 8b10b
was the best choice for XAUI, I strongly believe XAUI is necessary and I
support it completely. Thanks for taking the time to get the points across.
Take care
Joel
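
On the 8b10b point above: a minimal sketch of the coding-overhead arithmetic
behind the 3.125 Gb/s per-lane figure quoted later in this thread, assuming
the standard 8b/10b code rate and four XAUI lanes (both assumptions stated
here, not figures taken from Joel's message).

    # Sketch: why 8b/10b coding over 4 lanes gives 3.125 Gbaud per lane.
    # Assumes a 10 Gb/s payload rate and the standard 8b/10b code rate.

    PAYLOAD_GBPS = 10.0        # MAC data rate to be carried
    CODE_RATE = 8 / 10         # 8b/10b: 8 data bits per 10 line bits
    LANES = 4                  # XAUI lane count

    aggregate_line_rate = PAYLOAD_GBPS / CODE_RATE   # 12.5 Gbaud total
    per_lane_rate = aggregate_line_rate / LANES      # 3.125 Gbaud per lane

    print("Aggregate line rate:", aggregate_line_rate, "Gbaud")
    print("Per-lane line rate: ", per_lane_rate, "Gbaud")
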
-----------------
"Simon L. Sabato" wrote:
> Roy,
>
> I'm quite happy to have to go through the "growing pains". If this means
> beating FC and IB to 10G rates, that means that some of their applications
> will migrate to Ethernet. As they climb on the XAUI bandwagon, it will only
> serve to increase the market for many of the components (retimers, etc.)
> that go into 10G Ethernet boxes, as well as increasing the amount of
> expertise in the core technologies.
>
> Imagine a world where there's a limited number of experts who can put
> together a high speed interface such as XAUI (this shouldn't be very hard...
> just look around you). Now, imagine two possible scenarios. The first in
> which 10GE, FC, and IB all have different physical layers, and the limited
> number of experts is split up into three groups. The second in which all
> have very similar physical layers and the limited number of experts work on
> parts that could be used in each application. Now, you tell me, which one
> of the above scenarios gives the end customer the lowest cost, highest
> quality product?
>
> You seem to assume that if FC and IB reuse our technology then we are
> "paying for this and the other technologies are the actual beneficaries".
> You're partially right, we are paying for the development -- that's the
> price of being on the cutting edge. Trailing technologies will always
> leverage the latest technology.
>
> But your continuous assertions imply that we are paying for development of
> XAUI features that *are not useful in 10GE*. These assertions need to stop
> or be backed up with some information that makes sense. I have yet to see
> this information. I've seen a lot of explanations from varied people at
> various companies explaining the benefits of XAUI. I find it odd that you
> can be so sure that *no-one* needs something. It's easy to know what *you*
> need, but how can you speak for everyone else?
>
> In summary, I see XAUI as a way to use a common technology across various
> applications due to the shared market and shared expertise. Using TTL
> voltage levels across various applications is a *good thing* even if, say,
> it wasn't particularly ideal for each one. The only argument against this
> is that we are saddling XAUI with burdens from other standards, which you
> claim but cannot substantiate.
>
> -Simon Sabato
> -Manager, Product Architecture
> -InterNetworking Operation, Intel Corp.
>
> P.S. Speaking of what is "costing" the 10GE group, constant discussion on
> subjects that are closed costs us all. It just cost me twenty minutes to
> write this message, and a whole lot to read all the others.
>
> > -----Original Message-----
> > From: owner-stds-802-3-hssg@xxxxxxxx
> > [mailto:owner-stds-802-3-hssg@xxxxxxxx]On Behalf Of Roy Bynum
> > Sent: Thursday, July 27, 2000 2:55 PM
> > To: stds-802-3-hssg@xxxxxxxx
> > Subject: Re: XAUI, SFF connectors
> >
> >
> >
> > Ali,
> >
> > You may have it backwards. XAUI is not presented as a backplane
> > technology. XAUI is presented as a copper etch extension contained only
> > on the PCB.
> >
> > The people that would be benefiting from the technology sharing are Fibre
> > Channel and Infiniband. At present, those groups do not have any mature
> > technology at 10Gb. Have you not noticed the reflector traffic discussing
> > how this technology should be developed? If Fibre Channel and Infiniband
> > had demonstrated 10Gb parallel interfaces already, then it would be a
> > different situation. The original presentations on "Hari" and then "XAUI"
> > would have been very different; they would have referenced previous
> > implementations. If it were mature technology from the other environments,
> > then the questions of "striping" and "jitter" would have been answered
> > already and not be topics of discussion here. The fact that these are
> > topics of discussion is further proof that it is P802.3ae that is paying
> > for this and the other technologies are the actual beneficiaries.
> >
> > It will be P802.3ae that will be paying for the technology development
> > and going through the "growing pains" to mature the technology. As a
> > customer, I find paying for the development of a technology specific to
> > other markets difficult to accept in order to get the technology that I do
> > want and am willing to pay for. I also find it uncomfortable and risky to
> > depend on a technology that does not have a development and maturation
> > history.
> >
> > Thank you,
> > Roy Bynum
> >
> > At 09:03 AM 7/26/00 -0700, ghiasi wrote:
> >
> > >Hi Roy
> > >
> > > > X-Sender: rabynum@xxxxxxxxxxxxxxxxxx
> > > > Date: Mon, 24 Jul 2000 14:26:27 -0500
> > > > To: ghiasi <Ali.Ghiasi@xxxxxxxxxxx>
> > > > From: Roy Bynum <rabynum@xxxxxxxxxxxxxx>
> > > > Subject: Re: XAUI, SFF connectors
> > > >
> > > > Ali,
> > > >
> > > > I agree, it should be possible to put more than one 10GbE port on a
> > > > PCI form factor. I agree, XAUI is a good technology to get from the
> > > > backplane to the ASIC.
> > >
> > >How do you expect 10Gig Ethernet data to get from the ASIC through the
> > >backplane and to the I/O?
> > >
> > > > What I object to is hijacking the Ethernet standard to develop
> > > > technology that is not for Ethernet, but for generic system vendors
> > > > using Infeneband and Fibre Channel.
> > >
> > >The Gigabit Ethernet physical layer was based on Fibre Channel; it is
> > >called leveraging, or cost amortization. Also, "Infeneband" is written
> > >as "Infiniband".
> > >
> > > > If possible, I am going to make the XAUI people pay for pushing the
> > > > cost of that technology development into the P802.3ae standard.
> > >
> > >Since we are going to save you some bucks, I hope you don't mind paying
> > >us instead.
> > >
> > >Thanks,
> > >
> > >Ali Ghiasi
> > >Sun Microsystems
> > >
> > > >
> > > > Thank you,
> > > > Roy Bynum
> > > >
> > > > At 10:33 AM 7/24/00 -0700, you wrote:
> > > > >Hi Roy
> > > > >
> > > > > > X-Sender: rabynum@xxxxxxxxxxxxxxxxxx
> > > > > > Date: Mon, 24 Jul 2000 11:23:15 -0500
> > > > > > To: rtaborek@xxxxxxxxxxxxx, HSSG <stds-802-3-hssg@xxxxxxxx>
> > > > > > From: Roy Bynum <rabynum@xxxxxxxxxxxxxx>
> > > > > > Subject: Re: XAUI, SFF connectors
> > > > > >
> > > > > >
> > > > > > Rich,
> > > > > >
> > > > > > What need does an interface card have for SFF connectors that can
> > > > > > only put one optical port within a 13 inch copper etch radius?
> > > > >
> > > > >It should be very reasonable to put two to four 10 Gig ports on a
> > > > >PCI form factor card.
> > > > >
> > > > > > From what you and others are making us believe, the form factor
> > > > > > requirements for 10GbE are so large that SFF connectors are a
> > > > > > non-issue. If 10GbE interfaces are going to be so dense that we
> > > > > > will need SFF connectors, why did we need XAUI? I can't see how
> > > > > > you would need both.
> > > > >
> > > > >XAUI provides a high throughput of 3.125 Gb/s from two ASIC pins
> > > > >(+ a few extra) with very flexible interconnect, while keeping the
> > > > >package pin count reasonable. XAUI is the high bandwidth pipe to get
> > > > >data to and from your big ASIC.
> > > > >
> > > > >The other reason for XAUI was to define an interface for the backplane
> > > > >and ASIC so they can be developed, while everyone is arguing about the
> > > > >right PMD for 10 gig.
> > > > >
> > > > >Thanks,
> > > > >
> > > > >Ali Ghiasi
> > > > >Sun Microsystems
> > > > >
> > > > > >
> > > > > > Thank you,
> > > > > > Roy Bynum
> > > > > >
> > > > > > At 10:13 PM 7/23/00 -0700, Rich Taborek wrote:
> > > > > >
> > > > > > >Roy,
> > > > > > >
> > > > > > >As is usually the case, you always bring up interesting
> > > > > > >tangential issues in your email. This time it's:
> > > > > > >
> > > > > > >"Given the form factor that would use XAUI, SFF connectors would
> > > > > > >not be a requirement."
> > > > > > >
> > > > > > >What in the world does the XAUI interface, specified for use as an
> > > > > > >XGMII extender, have to do with SFF connectors???
> > > > > > >
> > > > > > >Please enlighten me.
> > > > > > >
> > > > > > >Best Regards,
> > > > > > >Rich
> > > > > > >
> > > > > > >--
> > > > > > >
> > > > > > >Roy Bynum wrote:
> > > > > > > >
> > > > > > > > Chris,
> > > > > > > >
> > > > > > > > I am not sure of your comment about LC having a proven track
> > > > > > > > record for single mode implementations. At present, WorldCom
> > > > > > > > has not deployed any LC. All of the connectors currently
> > > > > > > > specified for SM installations are SC. A particular vendor is
> > > > > > > > attempting to get WorldCom to make use of their connectors.
> > > > > > > > (I will not say how successful or not they are.) Several system
> > > > > > > > vendors are attempting to make use of LC, but at present, none
> > > > > > > > have been certified. Given the form factor that would use XAUI,
> > > > > > > > SFF connectors would not be a requirement.
> > > > > > > >
> > > > > > > > Thank you,
> > > > > > > > Roy Bynum
> > > > > > > >
> > > > > > > > At 04:28 PM 7/21/00 -0600, Chris Simoneaux wrote:
> > > > > > > >
> > > > > > > > >Our opinion is that LC is a better connector than MTRJ. The
> > > > > > > > >LC does not seem to suffer the possible damage that MTRJ can
> > > > > > > > >see with high mate/demate cycles... due to the guide pin
> > > > > > > > >action. Also, the LC has a proven track record for singlemode
> > > > > > > > >whereas the MTRJ does not.
> > > > > > > > >
> > > > > > > > >PS: My feeling is the standards body's charter should be to
> > > > > > > > >specify a connector. However, there's too much rhetoric in the
> > > > > > > > >procedure. Therefore, it's difficult to choose the best
> > > > > > > > >solution. Inevitably the real winner(s) will come forward.
> > > > > > > > >Conclusion: Choose a connector at the standards level as it
> > > > > > > > >can expose good points of each solution.
> > > > > > > > >
> > > > > > > > >Chris Simoneaux
> > > > > > > > >Picolight
> > > > > > >
> > > > > > >-------------------------------------------------------
> > > > > > >Richard Taborek Sr. Phone: 408-845-6102
> > > > > > >Chief Technology Officer Cell: 408-832-3957
> > > > > > >nSerial Corporation Fax: 408-845-6114
> > > > > > >2500-5 Augustine Dr. mailto:rtaborek@xxxxxxxxxxx
> > > > > > >Santa Clara, CA 95054 http://www.nSerial.com
> > > > > >
> > > >
> >
> >
--
Joel Goergen
Force10 Networks
1440 McCarthy Blvd
Milpitas, CA 95035
Email: joel@xxxxxxxxxxxxxxxxxxx
Direct: (408) 571-3694
Cell: (612) 670-5930
Fax: (408) 571-3550