Dear Marek, Thank you for the clarification. Your comments and suggestions are very helpful. I appreciate your suggestion (invitation) to provide calculations, which I performed in preparing my presentation, for review by the entire group. I accept the invitation and will provide these calculations,
in suitable form, within a few days. One thing which I really cannot do is provide calculations which derive (precisely) what BER is required “to guarantee that kind of error rate at MAC service interface.” This is because (as mentioned in one
bullet in my presentation!) you cannot allocate a Codeword Error Rate requirement sufficient to meet a Packet Error Rate (frame error rate) requirement BEFORE knowing the Forward Error Correction (FEC) specifics. In exactly the same vein, speaking rigorously,
it is not possible to back into a BER requirement from a bottom line frame error rate requirement without knowing the FEC details, which we do not have for EPoC at this stage. In fact, that is one of the points I was trying to make with my submission ---
we really would be best-served by knowing a frame error rate requirement as our Objective, and then designing the details of the PHY from that perspective or starting point. When this is concluded we can calculate (estimate) the BER (in accordance with channel
models which the Task Group will have developed), with the PHY and FEC which we will have developed. As a note hopefully adding some clarification, the frame error rate for the 1518 byte packet is calculated in the architecture document (which you thankfully brought to our attention below), as 1.21x10^-4; when
this is related to the 8x10^-8 error rate per octet, it can be shown that there is an assumption that the octet errors are INDEPENDENT events. Of course, with almost any FEC we will develop, the octet errors will NOT be independent events (especially true
for upstream). Thus, I infer from the note at the bottom of your provided passage from the architecture document that the INTENT of the architecture document is for systems to meet the given frame error rate (for 1518 byte packets), even though the octet
error rate is the “shall.” In other words, we have a “shall” for octet error rate, and an “is to be less than” for the frame error rate. Is it the case that the frame error rate “is to be less than” is in REALITY what IEEE 802.3 wants to see, IF the correlation of octet error events is such that the ratio of frame error rate to octet error rate deviates from 1518?
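For concreteness, here is the independence arithmetic in a few lines of Python (a sketch only; the frame length and per-octet rate are the ones quoted from the architecture document, nothing new is assumed):

# Independence arithmetic behind the 1.21x10^-4 figure
p_octet = 8e-8             # "shall": error rate per octet of MAC frame length
frame_octets = 1518        # maximum-length MAC frame used in the note

fer_simple = frame_octets * p_octet                # 1.2144e-4
fer_exact = 1 - (1 - p_octet) ** frame_octets      # ~1.214e-4, same to three digits
print(fer_simple, fer_exact, fer_exact / p_octet)  # ratio is approximately 1518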
I have to conclude that if there were a frame error rate of 1x10^-3 and an octet error rate of 10^-9, IEEE 802.3 would reject this as unacceptable (I do not believe this combination is actually possible, so this is a rhetorical example). On the other hand, would IEEE 802.3 consider it unacceptable if the octet error rate were 9x10^-8 and the frame error rate were 10^-5? I suspect that this would actually be acceptable, even if not meeting the letter of the “shall” law; the situation IS meeting the letter of the “is to be less than” guidance.
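To put rough numbers on the two hypotheticals above (illustrative Python only; the derived per-frame octet counts are mine, not anything taken from the architecture document):

frame_octets = 1518

# Hypothetical 1: FER 1e-3 with an octet error rate of 1e-9.
# Every errored frame contains at least one errored octet, so
# FER <= frame_octets * octet_error_rate; this combination cannot occur.
print(frame_octets * 1e-9)         # 1.518e-6, far below 1e-3

# Hypothetical 2: octet error rate 9e-8 with FER 1e-5.
# This requires correlated (bursty) errors: on average each errored frame
# carries octet_rate * frame_octets / FER errored octets.
print(9e-8 * frame_octets / 1e-5)  # ~13.7 errored octets per errored frame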
I hope we all agree that it is the frame error rate that really matters in this situation, not the underlying octet error rate. Note that after the passage highlighted below was written (approx. 2004?), IEEE 802.3 did in fact adopt a frame error rate “shall,” in IEEE 802.3an-2006, as mentioned in my submission. I think there is precedent for, and logic supports, making the frame error rate the guiding figure of merit. BER and octet error rates are not relatable to frame error rate prior to developing the details of the PHY; I believe they properly should be secondary figures of merit and not the bottom-line Objective for guiding our developments to efficiently meet system needs, and I interpret the provided passages from the architecture document for IEEE 802.3 as being aligned with this approach.
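As a small illustration of why a codeword (or bit) error rate target cannot be pinned down before the FEC is chosen, here is a hypothetical Python sketch (the codewords-per-frame counts are made up for illustration, not EPoC proposals):

target_fer = 1.21e-4   # frame error rate target for a 1518-byte frame

# A frame fails if any codeword carrying it fails, so for small codeword
# error rates the required CWER scales inversely with the number of
# codewords per frame; in other words, it depends entirely on the FEC design.
for codewords_per_frame in (1, 4, 8):
    print(codewords_per_frame, target_fer / codewords_per_frame)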
Again, sincerely, thank you Marek for your dialogue on this topic and raising my level of awareness on IEEE 802.3 architecture requirements. I look forward to working with you and all others going forward. Later, tjk From: Marek Hajduczenia [mailto:marek.hajduczenia@xxxxxx]
Bill, I am not going to defend BER as the ultimate solution. This topic comes up in almost every SG that I have ever participated in and people complain about BER. Proposals are made, lots of cycles are burnt on how
to do that right and then … no changes are made because there is never consensus on how to do that right in an interoperable and reproducible manner.
If we do something new, we will be setting a precedent and there will be plenty of scrutiny. Also, there is always resistance from the industry that knows how to test BER, but would have to change procedures to adopt new testing procedures. With that said, I’d really love us not to spend a lot of time on this topic and get lost in the forest. We have a PHY to develop, not methods for testing errors in a transmission channel to specify.
Marek From: Bill Wall (wallb) [mailto:wallb@xxxxxxxxx]
Sorry for jumping in here, but this can be a good example of why BER is not relevant. Consider a frame of 2000 bytes (approximately the length of the DVB-C2 LDPC code). According to this requirement, a frame error rate of 2000 x 8x10^-8 = 1.6x10^-4 would meet the requirement. Now, depending on how the FEC fails, many bits in the failed frame could be wrong. If the frame were totally wrong and the bits were random, 50% might be wrong, which would give a BER of 8x10^-5 that would still give an acceptable frame error rate.
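Spelled out in Python (a rough sketch; the 50% figure assumes a failed FEC block leaves its bits essentially random, as in the example above):

octet_rate = 8e-8
frame_octets = 2000
fer = frame_octets * octet_rate  # 1.6e-4, the frame error rate that still meets the requirement
ber = fer * 0.5                  # roughly half the bits wrong in each failed frame
print(fer, ber)                  # 1.6e-4 and 8e-5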
-Bill From: Marek Hajduczenia
[mailto:marek.hajduczenia@xxxxxx]
Dear Tom, Just one clarification – the number you quote as a requirement is not a requirement. It is an informative note, which has no weight when it comes to the interpretation of the standard. The requirement speaks of
“8x10^-8 error rate per octet of MAC frame length” (this is where the shall statement is placed) and this is what we have to be able to demonstrate. Everything else does not matter (or almost does not matter).
If you could kindly provide calculations showing what BER we’d have to be working at to guarantee that kind of error rate at MAC service interface, I believe it would be a very valuable contribution for this
group. And please, do provide all calculations (Excel would be nice) so that people can go and run their own calculations as well. We are mostly engineers in here and like number crunching
:-) Regards Marek From: Thomas Kolze [mailto:tkolze@xxxxxxxxxxxx]
Duane, Marek, Thank you very much for your feedback, comments, and substantive responses in your emails (in this string). The bottom line of the architecture document passages you have provided is that while secondary targets are created, such as “8x10^-8 error rate per octet,” the PRIMARY requirement in the architecture document
is in fact Packet Error Rate (referred to as “frame error rate” in the text you provided). The Packet Error Rate requirement for a 1518 byte packet is 1.21x10^-4 according to the provided and highlighted text from the architecture document.
I am thrilled at your quick and expert feedback; your feedback shows that the objectives I suggest for EPoC error rate requirements --- both the TYPE of error rate requirement AND the values of the error rate
requirement --- ARE NOT IN VIOLATION of, and in fact are aligned WITH, the over-arching architecture document of IEEE 802.3. Thank you sincerely. I greatly appreciate your raising my awareness of the architecture document and its contents in this regard. I would have loved to have had that information PRIOR to making the presentation
of my submission for error rate recommendations for EPoC. Later, tjk [Tom Kolze Broadcom] From: Marek Hajduczenia
[mailto:marek.hajduczenia@xxxxxx]
Thank you Duane, One of the 5 Criteria clearly states that we have to maintain compliance with the 802 architecture. Here is a quote: IEEE 802 defines a family of standards.
All standards should be in conformance with the IEEE 802.1 Architecture, Management, and Interworking documents as follows: IEEE 802. Overview and Architecture, IEEE 802.1D, IEEE 802.1Q, and parts of IEEE 802.1F.
If any variances in conformance emerge, they shall be thoroughly disclosed and reviewed with IEEE 802.1.
Each standard in the IEEE 802 family of standards shall include a definition of managed objects that are compatible with systems management standards.
Compatibility with IEEE Std 802.3
Conformance with the IEEE Std 802.3 MAC
Managed object definitions compatible with SNMP
The part of interest is highlighted in red to make sure it is not missed. As you can see, we may have some leeway in opening an exception to compliance, but it is a rough way to go if we choose it. It would be a time-consuming effort, with lots of questions asked at each turn of the way. In short, I suggest we maintain compatibility with the necessary documents and look at how it could be achieved at the physical layer.
Marek From: Duane Remein
[mailto:Duane.Remein@xxxxxxxxxx]
All, In reviewing the EFM web site it appears that the task force was operating under a different set of objectives between approval of the TF in July 2001 and March 2002. The initial objectives approved by the WG did not appear to include any error rate requirement. The subsequent objectives, including the error rate for the optical interfaces, were not approved by the WG until March 2002. My conclusion is that there is precedent for objectives without error rates.
That said I must agree with Marek in his observation that we must comply with the architecture doc. Best Regards, Duane FutureWei Technologies Inc. Director, Access R&D 919 418 4741 Raleigh, NC From: Marek Hajduczenia
[mailto:marek.hajduczenia@xxxxxx]
Here is the excerpt from the 802-2004 architecture document that puts bounds on how high in BER/PER we can go.
Regards Marek