Sunday, April 17, 2016

Inherent leakage-resiliency of LWE

Why we care about post-quantum cryptography should be clear by now (if it’s not, you’re in the right place anyway: you just need to take a step back and read this splendidly packaged blog post). A mathematical notion that is having much success in achieving it is the Learning With Errors (LWE) problem (guess what, we've also talked about that here). Thanks to some mathematical evidence, we are quite confident of its hardness, and many cryptographic primitives whose security is based on it have arisen. That said, let’s take a step further and imagine a future (not that far away) in which all those primitives are standardised and deployed at many levels, including hardware security. Our experience with classical algorithms teaches us that we shouldn't be too confident of the security of an implementation given the security of the scheme on paper, and post-quantum cryptography is no exception. In fact, side-channel attacks are known to be a great threat to cryptographic implementations. The question this blog post wishes to partially answer is: how can we secure our (future) post-quantum implementations?
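
As a quick refresher (this is just the standard formulation, nothing specific to this post), the decision version of LWE asks to distinguish noisy random linear equations in a secret vector from uniformly random values. The toy Python sketch below, with parameters far too small to be meaningful, is only meant to make the objects concrete.

    import numpy as np

    # Toy decision-LWE instance (parameters chosen for readability only,
    # nowhere near secure sizes).
    rng = np.random.default_rng()
    n, m, q = 8, 16, 97                      # secret dimension, samples, modulus

    s = rng.integers(0, q, size=n)           # secret vector
    A = rng.integers(0, q, size=(m, n))      # public uniform matrix
    e = rng.integers(-2, 3, size=m)          # small error terms (toy stand-in
                                             # for a discrete Gaussian)
    b = (A @ s + e) % q                      # LWE samples
    u = rng.integers(0, q, size=m)           # uniform samples

    # Decision-LWE: given A, tell b apart from u. For proper parameter sizes
    # this is believed to be hard, even for quantum adversaries.
    print("LWE sample:", b)
    print("uniform   :", u)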

Playing cops and robbers is a way, of course: common countermeasures (e.g. masking, randomisation, …; see the toy sketch after this list) against known side-channel attacks can be applied by adapting the design of specific schemes. However, this approach has at least a couple of drawbacks:

  1. a countermeasure is not guaranteed to thwart future attacks (sometimes new attacks are specifically designed to circumvent a countermeasure). More generally, a countermeasure doesn't protect against anything outside what it was designed to protect against;
  2. usual countermeasures introduce overhead (in terms of whatever metric you prefer, e.g. area, time, ...). This results in the protected scheme being less efficient than its unprotected version.
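
To give an idea of what such a countermeasure looks like in practice, here is a toy sketch of first-order Boolean masking (a generic illustration, not taken from any specific scheme): the secret is split into two random shares, and every computation has to be carried out on both shares, which is precisely where the overhead of point 2 comes from.

    import secrets

    def mask(secret_byte: int):
        """Split a secret byte into two shares; each share alone looks uniform."""
        r = secrets.randbits(8)              # fresh random mask
        return (r, secret_byte ^ r)          # shares: (mask, masked value)

    def unmask(share0: int, share1: int) -> int:
        """Recombine the shares when the secret value is actually needed."""
        return share0 ^ share1

    shares = mask(0x3A)
    assert unmask(*shares) == 0x3A
    # Every operation on the secret must now be performed share-wise,
    # roughly doubling the work: the overhead mentioned in point 2.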

Leakage-resilient cryptography is a possible alternative: a cryptographic scheme is, by design, provably secure even in the presence of some anticipated amount of leakage. Usually such proofs rely on a bound on the amount of leakage the scheme can tolerate. The main idea sounds like the following statement.

We build a cryptographic scheme which retains security even if a bounded amount of arbitrary information is leaked.

Full stop. There aren't many other assumptions: this gives leakage-resilient cryptography enough generality to capture many, if not all, side-channel attacks. In essence, the scheme is guaranteed to remain secure if leakage occurs, no matter how it is exploited by an attacker. This seems to address the first drawback of ad-hoc countermeasures, provided everything is implemented properly. Is there anything we can do about the second drawback?
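
To make the bounded-leakage idea a bit more concrete, one common way to model it (a generic sketch from the leakage-resilience literature, not the exact definition of any particular paper) is to let the adversary choose an arbitrary efficiently computable function $f$ of the secret key $sk$ and learn its output, as long as that output is short:

$$ \text{view of the adversary} \;=\; \big(\text{public values},\; f(sk)\big), \qquad f \text{ arbitrary,}\quad |f(sk)| \le \lambda \text{ bits.} $$

A scheme is then called $\lambda$-leakage-resilient if the usual security notion still holds in the presence of this extra information.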

If we keep thinking of securing a scheme, then answering that question is really difficult. Indeed a countermeasure, even the most general and provable one, introduces changes in the original design which necessarily result in an overhead, for the simple reason that it takes into account a more powerful adversary, namely one who is also able to measure and exploit leakage. However, think of the case in which none of the anticipated leakage actually occurs. Then we end up with a less efficient implementation, the protected one, which doesn't provide any additional security because, without leakage, the original scheme was secure anyway. So instead, stop thinking of securing a scheme and start thinking of securing an assumption.

In their paper, Goldwasser, Kalai, Peikert and Vaikuntanathan show that LWE has the nice feature of being hard even in the presence of leakage. Does it sound like what we've just said? Essentially yes, but with a great difference: they're not building a leakage-resilient scheme, hence introducing some sort of overhead, but proving the leakage-resiliency of an assumption. This makes a big difference, because the former implies that efficiency degrades as leakage-resiliency grows, while the latter implies that security, not efficiency, degrades. In particular, their proof establishes a relation between the security level and the amount of leakage in such a way that we don't need to anticipate a maximum tolerable amount of it. The straightforward consequence is that a scheme based on LWE is secure independently of whether leakage occurs or not, with the advantage that, when it doesn't, we have not introduced any protection-related overhead into the design. As a result, they propose a symmetric-key encryption scheme that is secure even in the presence of leakage (with some caveats): the novelty lies in the fact that you won't find any parameter describing an amount of leakage (usually called $\lambda$ in the literature of leakage-resilient cryptography).
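
Very informally, and glossing over the exact parameters and distributions in the paper, the flavour of their result is that LWE samples remain pseudorandom even when the adversary additionally sees a function of the secret, as long as the secret keeps enough min-entropy given that extra information:

$$ \big(A,\; A s + e,\; f(s)\big) \;\approx_c\; \big(A,\; u,\; f(s)\big) \quad \text{whenever } s \text{ retains enough min-entropy given } f(s). $$

Roughly speaking, the hardness degrades gracefully: the effective security level corresponds to the min-entropy the secret retains after the leakage, which is exactly the "security, not efficiency, degrades" trade-off mentioned above.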

We omit any technicality here as this is meant to be a generic and accessible read. For further details on how the authors achieved this result, we refer to their paper. Instead, if you prefer to have just a mathematical flavour without going into too many details, you may like this post from the Bristol cryptography group's blog. Stay tuned for more insights on post-quantum cryptography!

P.S.: the word "leakage" appears 19 times in this text. If you're tired of reading it, watch this.

Marco (Thanks to Simon and Matthias for some corrections!)
