Thread Links Date Links
Thread Prev Thread Next Thread Index Date Prev Date Next Date Index

Re: [STDS-802-16-MOBILE] Question regarding CMAC key derivation (7.5.4.6.1)



https://www.deadhat.com/wmancrypto/dot16kdf_0.1.c

https://www.deadhat.com/wmancrypto/cmac_0.1.c

The two links above point to C implementations of dot16KDF-CMAC and
CMAC. They compile and run on a little-endian Linux PC with gcc.

The proposed format for (i|astring|keylength) in 7.5.4.6.1 is that 'i'
and keylength are represented as unsigned 32-bit integers with the MSB
first (at the lowest byte index), to match the endianness of the NIST
CMAC specification. I chose 32 bits as a convenient size for most
embedded 32-bit micros.
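To make the proposed byte layout concrete, here is a minimal sketch of
serializing (i | astring | keylength) under that convention. The
function name and signature are my own for illustration; only the
MSB-first 32-bit encoding is from the proposal above.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Pack (i | astring | keylength): 'i' and 'keylength' as unsigned
 * 32-bit integers, most significant byte first (big-endian), with the
 * raw astring bytes in between. Returns the number of bytes written.
 * Illustrative only; not taken from the draft text. */
static size_t pack_kdf_input(uint8_t *out,
                             uint32_t i,
                             const uint8_t *astring, size_t alen,
                             uint32_t keylength)
{
    size_t n = 0;
    out[n++] = (uint8_t)(i >> 24);        /* MSB first */
    out[n++] = (uint8_t)(i >> 16);
    out[n++] = (uint8_t)(i >> 8);
    out[n++] = (uint8_t)(i);
    memcpy(out + n, astring, alen);       /* astring copied verbatim */
    n += alen;
    out[n++] = (uint8_t)(keylength >> 24);
    out[n++] = (uint8_t)(keylength >> 16);
    out[n++] = (uint8_t)(keylength >> 8);
    out[n++] = (uint8_t)(keylength);
    return n;
}
```

For example, i=1, astring="KEK", keylength=128 would produce the 11
bytes 00 00 00 01 | 4B 45 4B | 00 00 00 80.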

The CMAC code matches the NIST test vectors. The dot16KDF code is
essentially unchecked because I have nothing to check it against,
although it produces random-looking output. We really need two
independent implementations to confirm that the vectors are correct
with respect to the intended algorithm, the bit-level representation of
i and keylength, and their endianness.

I would like to encourage people to produce independent implementations
of dot16KDF so that the vectors can be compared for correctness. The
likelihood of an error in the dot16KDF code is very high, since I just
finished it and these things tend to be wrong the first time around.

DJ 

-----Original Message-----
From: Giesberts Pieter-Paul-apg035 [mailto:giesberts@MOTOROLA.COM] 
Sent: Monday, October 10, 2005 3:14 AM
To: STDS-802-16-MOBILE@listserv.ieee.org
Subject: [STDS-802-16-MOBILE] Question regarding CMAC key derivation
(7.5.4.6.1)

Section 7.5.4.6.1 describes the Dot16KDF key derivation algorithms for
CMAC and HMAC.

The target string parameter of the CMAC calculation has the form "i |
astring | keylength", which asks for a concatenation of an integer (i),
a bitstring (astring) and another integer (keylength). This can only be
done unambiguously if there is agreement on the bit representation of
the integers, which is currently lacking from the algorithm
specification. Given the key lengths used in the instances of this
algorithm in the standard, it should be at least 16 bits, presumably
unsigned. And then there is endianness. Can someone clarify how these
numbers should be represented?
 
I also have two additional comments concerning this section: 
* The pseudo-code specifying the algorithm uses the same symbol <= both
to mean "smaller than or equal to" in the line with the "for statement"
and to mean "is changed into" or "becomes" (i.e. assignment) on the
next line. Though most readers are likely to understand what is
intended, it may be somewhat confusing.
* The operation Truncate(CMAC(xxx),128) seems meaningless if a CMAC
output is always a 128-bit string. If it is not necessary, it might as
well be removed.
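On the first comment: if the "for statement" bound is read as an
inclusive comparison (i running from 0 through int((keylength-1)/128),
with integer division), the loop performs ceil(keylength/128) CMAC
invocations. A small sketch of that bound, with an illustrative
function name of my own:

```c
#include <assert.h>

/* Number of CMAC invocations implied by the for statement in
 * 7.5.4.6.1, assuming "i <= (keylength-1)/128" is an inclusive bound
 * with integer division. keylength is in bits. The function name is
 * illustrative, not from the standard. */
static unsigned cmac_iterations(unsigned keylength_bits)
{
    return (keylength_bits - 1u) / 128u + 1u;
}
```

So, for instance, a 160-bit key requires two 128-bit CMAC outputs, the
second of which is truncated when assembling the final result.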