Hongming – thank you for your answers. It’s clear you’ve given some thought to how to build this.
The insertion loss of the mixing segment is specified in 147.8.1 (which references 147.7.1). The loss at midband is specified to be less than 2.6 dB. In voltage terms that says
that the amplitude of the signal from a far-end node (at 10 MHz) will be at least about 75% of the amplitude transmitted by the near-end node. I’d think that kind of variation would be detectable. Remember, we have a scrambler, which ensures that transmitted bit sequences
are unique with high probability. That separates this from early Ethernet’s challenges.
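For reference, here is the arithmetic behind that figure (my own back-of-the-envelope check in a couple of lines of Python, not text from the draft):

    # Convert the 2.6 dB maximum midband insertion loss to a voltage ratio.
    # Informal check only; the normative numbers live in 147.8.1 / 147.7.1.
    max_loss_db = 2.6
    amplitude_ratio = 10 ** (-max_loss_db / 20)
    print(round(amplitude_ratio, 3))  # 0.741, i.e. roughly the 75% cited above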
If you don’t agree, please provide analysis showing otherwise. And not just that a detection COULD be missed – we can all make broken designs. Instead, I’d want to see analysis
showing that a device cannot be made to provide reliable detection (because others believe it can).
Also, remember, the standard is not a tutorial; it is an interoperability specification. The detection method is local to a node and is not an interoperability matter, so it does not need to be specified.
On process, I’d like to point out that questions, while welcome and a good use of the reflector, are a different thing from either a problem report or a proper ballot comment. Proper
ballot comments come when a commenter thinks there is a problem. To convince others, the commenter needs to provide analysis showing the problem, along with a potential remedy. Having questions isn’t enough – you need to convince people there is a problem, AND the
commenter has the responsibility to propose a remedy. If you think something CANNOT be accomplished, please show that – you must convince others why the specification is not implementable, not just say “show me how”.
Additionally, having read Philip’s email, I honestly take it as some questions, not a comment.
I refer Philip to Clause 22, where CRS and COL are specified as signals and their behavior is defined. Requirements for asserting and holding CRS are found there, and not in the 802.3cg draft.
I wouldn’t take the 256 bit-time assertion of COL as the allowed detection time. That specification is simply about how long the COL signal is held, not the detection window. Collisions
need to be detected in less time than that – I would think a 20-bit sequence of corrupted transmission symbols (2 µs) would be enough to detect a collision.
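For anyone checking the numbers, here is the quick arithmetic (mine, not the draft’s; assuming the 10 Mb/s signaling rate, so a 100 ns bit time):

    # Informal arithmetic only: compare the 20-bit detection window suggested
    # above with the 256 bit-time COL hold specification.
    bit_time_us = 0.1                    # 1 bit at 10 Mb/s = 100 ns = 0.1 µs
    print(20 * bit_time_us)              # 2.0 µs  -> suggested detection window
    print(256 * bit_time_us)             # 25.6 µs -> COL hold time, much longer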
I’d add that when all nodes have PLCA implemented and enabled, these signals do not interfere with data frames on the medium.
The way I understand it, if a node on the mixing segment that either does not have PLCA enabled or does not have PLCA implemented attempts to transmit a frame simultaneously with one
of these control signals, that (non-PLCA) PHY would detect the collision and signal its MAC appropriately to control the transmission of its data frame, as already specified in Clause 147. The behavior of the PHY transmitting the PLCA signal is specified by
Figure 148-3 (the PLCA Control state diagram): basically, the beacon transmission stops as the PHY goes to the RECOVER state due to CRS being asserted.
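To make that last sentence concrete, here is a deliberately over-simplified sketch in Python of just that one transition; the state name other than RECOVER is my own shorthand, and the real diagram in Figure 148-3 has many more states and conditions:

    # Hypothetical, heavily simplified illustration of the single behavior
    # described above: a PHY sending the PLCA beacon abandons it and moves to
    # recovery when carrier sense (CRS) is asserted. Not a restatement of the
    # actual Figure 148-3 state diagram.
    def plca_beacon_step(state, crs_asserted):
        if state == "SENDING_BEACON" and crs_asserted:
            return "RECOVER"   # beacon transmission stops; PHY resynchronizes later
        return state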
Piergiorgio may have more explanation to follow.
-george