Re: [RPRWG] Why Data CRC Stomping is a BAD Idea?
>
> Nirmal,
>
> Would be nice to receive feedback on items earlier,
> such as the comment phase. From what I understand,
> some companies are interested in finishing this
> standard(:>), so prompt review would be helpful!
David, I do recognize the importance of getting
comments in a timely fashion. However, I hope
this is not used as a deterrent to discourage
valid concerns while the Standard is still
in progress.
I will try to be prompt next time. :-)
Here are my comments:
a) CRC stomping has very little to do with protocol
compliance. For example, it is not on par with
requirements such as frame formats.
I am at a loss to understand why this is being
made a requirement. At most, it should be a
suggested hint, with full disclosure of its caveats
(such as those I raised in my comments).
b) Location of failed links is best left to layer 1
interfaces. For example, loss-of-link or BER in SONET.
A distinguished architect such as yourself would agree
that it is never a good idea to overload gratuitous
functions onto established methods when we are fully
aware that doing so reduces the probability of error
logging and causes implementation problems.
Even if we agree that layer 1 methods may not be
sufficient to detect failed links, we have other
established layer 2 mechanisms, such as keep-alive
messages, to detect failed links and nodes.
c) The probability of a correct CRC in the presence of
an error (an undetected error) is irrelevant to the CRC
stomping discussion, because the probability of an
undetectable error is the same with or without stomping.
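This follows from CRC linearity: the syndrome (the XOR of the
original CRC with the CRC recomputed over the corrupted bits)
depends only on the error pattern and frame length, never on the
frame contents, so stomping cannot change which error patterns go
undetected. A quick sketch in Python (zlib.crc32 uses the same
standard CRC-32 polynomial; the helper name "syndrome" is mine,
not anything from the draft):

```python
import os
import zlib

def syndrome(frame: bytes, error: bytes) -> int:
    """XOR of the original frame's CRC with the CRC of the
    corrupted frame. By CRC linearity this value depends only
    on the error pattern (and length), not the frame contents."""
    corrupted = bytes(f ^ e for f, e in zip(frame, error))
    return zlib.crc32(frame) ^ zlib.crc32(corrupted)

# The same 2-bit error pattern applied to two random frames
# yields the same syndrome. A zero syndrome would mean the
# error is undetectable, and that set is fixed by the
# polynomial alone -- stomping cannot alter it.
error = bytes([0x00, 0x04, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00])
s1 = syndrome(os.urandom(8), error)
s2 = syndrome(os.urandom(8), error)
assert s1 == s2 and s1 != 0
```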
d) The real issue is the probability of not logging, given an
error in a data frame (i.e., the probability of the computed
CRC equaling the checkStomp value).
Your claim that this probability is 1/2**32 is incorrect on two
counts:
1) It assumes an equally likely error model with probability
of bit error = 1/2. The actual error rate on the links
is much lower, and estimating this probability with a
known bit error rate is NP-Hard.
2) Your expression assumes unconditional probability. What we
are interested in is conditional probability. That is,
given a data frame error, what is the probability that
the computed CRC is equal to the checkStomp value? My claim
is that it could be very HIGH. For example, consider a
single bit error in a data frame of length L bits.
Depending on the position of this single bit error and the
chosen STOMP value, the conditional probability could be
ONE. I can generate a long list of STOMP values that are
catastrophically BAD for the standard CRC-32 polynomial
for single bit errors. The list for double-bit errors is
quadratically longer. By the way, these catastrophic
STOMP values are also functions of packet length. This
compounds the problem of finding good STOMP values.
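That list can be generated mechanically. A hedged sketch
(Python, zlib.crc32 for CRC-32; "single_bit_syndromes" is my own
illustrative helper, not anything in the draft): every value in
the set below, if chosen as the STOMP value, makes a single-bit
error at the corresponding position in a 64-byte frame look
exactly like an already-stomped frame, so it is never logged.

```python
import zlib

def single_bit_syndromes(length_bytes: int) -> set[int]:
    """CRC-32 syndromes of every single-bit error in a frame of
    the given length. By linearity the frame contents do not
    matter, so an all-zero frame is used as the baseline."""
    base = zlib.crc32(bytes(length_bytes))
    syndromes = set()
    for byte in range(length_bytes):
        for bit in range(8):
            err = bytearray(length_bytes)
            err[byte] = 1 << bit
            syndromes.add(base ^ zlib.crc32(bytes(err)))
    return syndromes

bad = single_bit_syndromes(64)
# CRC-32 detects all double-bit errors at this length, so every
# bit position yields a distinct nonzero syndrome: 512 bad STOMP
# candidates from single-bit errors alone, and the set grows
# with frame length as new bit positions are added.
assert len(bad) == 64 * 8 and 0 not in bad
```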