Rene Struik writes:
Thanks for the feedback: If the modulus of the big group is a "safe
prime" of the form p=2*q+1 (where q is prime as well), as you suggest,
then the subgroup's size q is indeed known. I could not find this in
the RFC though.
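(A toy Python illustration of why q is known by construction; the
specific numbers p=23, q=11, g=4 are mine for illustration, not from
the RFC:)

    # Toy safe-prime group: p = 2*q + 1 with p = 23, q = 11 both prime.
    p = 23
    q = (p - 1) // 2             # subgroup order, known by construction
    g = 4                        # 4 = 2^2 is a quadratic residue mod 23
    assert pow(g, q, p) == 1     # so g lies in the order-q subgroup
    print("subgroup order q =", q)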
When RFC 3526 was written, it was clear that it was generating groups
similar to those originally generated in RFC 2412. RFC 3526 does have
RFC 2412 as a non-normative reference, but the reference is missing
from the abstract, which says:
----------------------------------------------------------------------
The selection of the primes for these groups follows the criteria
established by Richard Schroeppel.
----------------------------------------------------------------------
That should have said "Richard Schroeppel [RFC2412]"...
Some comments:
1) Security (a).
Indeed, one could use small t-bit exponents in a subgroup G of size q
of Zp, where p=1+2*q. However, this may lead to quite biased
distributions that would not have been there if the private exponents
had been random integers in the range [1,q-1], as is nowadays (2013)
"the norm". As an example, with a 3072-bit discrete log group Zp and
256-bit exponents, the key K:=g^d, with combined exponent d=x*y, has
as pre-images all pairs of 256-bit integers (x,y) for which x*y=d.
Hence, the number of pre-images depends on the factorization of d
(e.g., if d is a prime number in the range {512-bit}\{256-bit}, this
set would be empty; if d is a 128-bit prime, it has two pre-images
{(1,d), (d,1)}, etc.). With private keys drawn from the entire set
[1,q-1], one gets a flat distribution of the integer d in the range
[1,q-1]. Whether this is really exploitable is another question, of
course. However, this seems to be a red flag to me and might suggest
that this 2003 RFC is somewhat dated. Are there any papers that have
studied the non-standard DLP problem that seems to underlie the
suitability of the groups suggested in this RFC document? (I know of
analyses for random exponents from [1,q-1], but none where one simply
truncates and "hopes for the best".)
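(To make the bias concrete, here is a minimal Python simulation; the
toy 8-bit exponent size and sample count are my choices, standing in
for the 256-bit case:)

    # Sample pairs of short random exponents (x, y) and tally the combined
    # exponent d = x*y: its distribution is far from flat, unlike drawing
    # d uniformly from [1, q-1].
    import random
    from collections import Counter

    T = 8                             # toy exponent size (stands in for 256 bits)
    N = 200000                        # number of sampled (x, y) pairs
    counts = Counter()
    for _ in range(N):
        x = random.randrange(1, 1 << T)
        y = random.randrange(1, 1 << T)
        counts[x * y] += 1
    print("most frequent d values:", counts.most_common(5))
    print("distinct d values seen:", len(counts), "out of", (1 << 2 * T) - 1)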
I have not heard that there is a problem with those groups when using
them for Diffie-Hellman calculations. The output of the Diffie-Hellman
calculation should have entropy based on the input parameters, and
yes, it is not uniformly distributed over the resulting output, which
is one of the reasons you cannot simply take truncated Diffie-Hellman
output and use that directly. Instead, you need to use a proper Key
Derivation Function that extracts the entropy from the output.
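(For concreteness, a minimal HKDF-SHA256 (RFC 5869) sketch in Python,
using only the standard library; the function name and parameter
choices are illustrative, not from this thread:)

    # Extract-then-expand KDF over the raw Diffie-Hellman shared secret.
    import hmac, hashlib

    def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int) -> bytes:
        prk = hmac.new(salt, secret, hashlib.sha256).digest()       # Extract
        okm, block, counter = b"", b"", 1
        while len(okm) < length:                                    # Expand
            block = hmac.new(prk, block + info + bytes([counter]),
                             hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # shared = pow(peer_public, x, p)            # raw DH output, not uniform
    # key = hkdf_sha256(shared.to_bytes(256, "big"), b"\x00" * 32, b"", 32)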
Or are you referring to something else?
2) Security (b).
The discrete log groups Zp in this RFC, where p=2*q+1 and q prime, all
use 2 as generator. This may easily leak info on the lower-order
~dozen bits of the computation of the ephemeral key contribution via
side channel analysis (after all, 2^t (mod p) does not require modular
reduction if t < len(p)).
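(A toy Python illustration of that parenthetical remark; the counting
function is mine, not a claim about any particular implementation:)

    # With g = 2, the early steps of a right-to-left square-and-multiply work
    # on values far below p, so no modular reduction fires for them; how many
    # such "free" multiplies occur depends on the low bits of the exponent x.
    def reduction_free_multiplies(g, x, p):
        acc, base, free = 1, g % p, 0
        while x:
            if x & 1:
                if acc * base < p:     # product fits below p: no reduction
                    free += 1
                acc = (acc * base) % p
            base = (base * base) % p
            x >>= 1
        return free

    # For a ~2048-bit p and g = 2, base = 2^(2^i) stays below p for i <= 10,
    # so the reduction-free count reflects roughly the low dozen bits of x.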
If you are referring to timing attacks, see Section 5 of RFC 2412,
which says:
----------------------------------------------------------------------
5. Security Implementation Notes

Timing attacks that are capable of recovering the exponent value used
in Diffie-Hellman calculations have been described by Paul Kocher
[Kocher]. In order to nullify the attack, implementors must take
pains to obscure the sequence of operations involved in carrying out
modular exponentiations.

A "blinding factor" can accomplish this goal. A group element, r, is
chosen at random. When an exponent x is chosen, the value r^(-x) is
also calculated. Then, when calculating (g^y)^x, the implementation
will calculate this sequence:

A = (rg^y)
B = A^x = (rg^y)^x = (r^x)(g^(xy))
C = B*r^(-x) = (r^x)(r^-(x))(g^(xy)) = g^(xy)

The blinding factor is only necessary if the exponent x is used more
than 100 times (estimate by Richard Schroeppel).
----------------------------------------------------------------------
In most cases the private exponent x is not reused at all, in which
case there is no need to do this kind of blinding.
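(A minimal Python sketch of the algebra of the quoted blinding trick;
this illustrates the identity only and is not a constant-time
implementation:)

    # Blinded computation of (g^y)^x mod p, following the quoted sequence.
    import secrets

    def blinded_dh(g_y, x, p):
        r = secrets.randbelow(p - 2) + 2      # random group element r
        r_inv_x = pow(pow(r, x, p), -1, p)    # r^(-x) mod p, precomputable
        A = (r * g_y) % p                     # A = r * g^y
        B = pow(A, x, p)                      # B = (r^x)(g^(xy))
        return (B * r_inv_x) % p              # C = g^(xy)

    # Toy check on p = 23, g = 5, y = 7, x = 3:
    # assert blinded_dh(pow(5, 7, 23), 3, 23) == pow(pow(5, 7, 23), 3, 23)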
3) Performance (in context of 802.11ai's charter).
Use of one of the "enormous size" IANA groups leads to signatures
(r,s) of size ~ len(p) + t, public keys of size len(p), and signature
verification requiring scalars of full size len(p). For 802.11ai
FILS, use of 2048-bit primes with DH key agreement would probably
cost roughly 1/8th of an RSA-2048 no-CRT decryption, or 1/2 of an
RSA-CRT-2048 decryption (exponentiation cost scales with exponent
length, so a 256-bit exponent costs ~256/2048 = 1/8 of a full-length
one, and CRT speeds RSA up by roughly 4x). If one uses this with
signing (the "no TTP case"), the (at least) two signature
verifications cost roughly at least 2x a full 2048-bit RSA no-CRT
decryption, or 8 times an RSA-CRT-2048 decryption. This raises the
question whether supporting these huge discrete log groups in
802.11ai would be in the interest of fulfilling its charter (the "F"
in FILS was supposed to stand for "Fast").
We did also generate enormous-size groups while we created the
medium-sized groups defined in RFC 3526. We generated 12288- and
16384-bit groups too, and produced primality proofs for those as
well, but they were not published in the RFC, as they were considered
too big for practical use (i.e. enormous size :-)
That also depends on the hardware. A normal 5-year-old PC can do
about 100 2048-bit Diffie-Hellman operations per second (both setup
and agreement), i.e. about 10 ms per operation. Half of that can be
done offline, i.e. before the client connects; the other half needs
to be done online. If that is not fast enough, then adding modp
hardware will make it faster...
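(A rough way to check such throughput numbers on one's own hardware,
in Python; the modulus below is a random 2048-bit stand-in of my own,
so paste the actual RFC 3526 group 14 prime for exact figures:)

    # Time full-length 2048-bit modular exponentiations with base 2.
    import secrets, time

    p = secrets.randbits(2048) | (1 << 2047) | 1   # stand-in 2048-bit odd modulus
    N = 100
    t0 = time.perf_counter()
    for _ in range(N):
        x = secrets.randbits(2048)                 # full-size private exponent
        _ = pow(2, x, p)                           # one DH half: g^x mod p
    dt = time.perf_counter() - t0
    print(f"{N / dt:.0f} exponentiations/s, {1000 * dt / N:.2f} ms each")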
Given the above, it seems to serve 802.11ai well to
a) ditch all discrete log groups based on Sophie Germain primes,
since they possibly have security problems (presumably #1, #2, #5,
#14-#18 of the IANA list);
I would like to know more about these security problems.
b) ditch the elliptic curve groups over composite extension fields,
which have long ago been ditched elsewhere (ditch #3, #4).
On that I do agree...