In the upcoming era of quantum computation, the successful number-theoretic public-key cryptosystems that are widely deployed today, such as RSA and ElGamal, will become insecure against adversaries with access to quantum computers. More urgently, long-lived secrets on today's systems already face threats of the same severity due to "harvest now, decrypt later" attacks. The field of post-quantum cryptography aims to build quantum-safe systems in preparation for this future. Among the candidate post-quantum proposals, lattice-based cryptography, which builds cryptosystems on the hardness of lattice problems, so far offers the best combination of performance, simplicity, and versatility. It is therefore important to better understand the complexity of lattice problems.
In this thesis, we establish new lower and upper bounds for various lattice problems under several complexity notions. These new results include:
1. (Fine-grained hardness/lower bounds.) Fine-grained (exponential-time) complexity studies the precise constant c in the 2^{n^c} or 2^{c·n} running time needed to solve a computational problem, usually based on (variants of) the exponential time hypothesis (ETH) or the strong exponential time hypothesis (SETH); formal statements of both hypotheses are given after this list.
Under this complexity notion, for the bounded distance decoding problem (BDD), we give quantitatively improved bounds on the approximation factor α for which BDD_α cannot be solved in 2^{o(n)} or 2^{c·n} time, assuming variants of ETH or SETH, respectively; part of our results additionally assume a geometric conjecture known as the exponential lattice kissing number conjecture. For the shortest vector problem (SVP), we show a qualitatively improved result: for some constant approximation factor γ > 1, SVP_γ cannot be solved in 2^{c·n} time, assuming a variant of SETH; previously, only the analogous result for exact (non-approximate) SVP was known. (A toy illustration of SVP and BDD appears after this list.)
2. ("Standard" NP-hardness.) NP-hardness is the classical benchmark for identifying a computational problem as hard. For SVP, only NP-hardness results under randomized reductions were previously known (except in corner cases), and whether SVP is "standardly" NP-hard under deterministic reductions is a notoriously longstanding open problem. We propose conjectures about how to derandomize existing reductions, which would imply the "standard" NP-hardness of approximate SVP, and we present empirical evidence supporting these conjectures.
3. (Parallel algorithms/upper bounds.) The parallel complexity (or sequentiality) of a computational problem is crucial for its applications in time-release/timed cryptography. We give logarithmic-depth parallel algorithms for a sequential version of the short integer solution problem (SIS), which underlies a certain proof-of-sequential-work (PoSW) timed-cryptographic protocol. This shows that sequential SIS has low parallel complexity, contrary to the previous conjecture. Moreover, we give a polylogarithmic-depth parallel attack that (almost) breaks that PoSW protocol. (The standard SIS problem underlying the sequential variant is sketched after this list.)
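For orientation, here are standard textbook formulations of the two hypotheses referenced in item 1 (n denotes the number of Boolean variables; these statements are background, not contributions of the thesis):

    % Exponential Time Hypothesis (ETH): for some constant c > 0,
    % 3-SAT on n variables cannot be solved in O(2^{c·n}) time.
    \exists\, c > 0:\quad \text{3-SAT} \notin \mathrm{TIME}\left(2^{c \cdot n}\right)

    % Strong Exponential Time Hypothesis (SETH): for every ε > 0,
    % some clause width k admits no 2^{(1-ε)·n}-time k-SAT algorithm.
    \forall\, \varepsilon > 0\;\, \exists\, k:\quad k\text{-SAT} \notin \mathrm{TIME}\left(2^{(1-\varepsilon) \cdot n}\right)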
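To make the problems in item 1 concrete, the following Python sketch solves toy instances of exact SVP and BDD_α by brute-force enumeration. It is illustrative only: the basis, the coefficient bound, and all function names are chosen for exposition, and it is not one of the algorithms studied in the thesis.

    import itertools
    import numpy as np

    def shortest_vector(B, coeff_bound=3):
        """Brute-force exact SVP: enumerate small integer coefficient
        vectors x and return a shortest nonzero lattice vector B @ x.
        Only correct on toy instances where a shortest vector has
        coefficients bounded by coeff_bound (an illustrative assumption)."""
        n = B.shape[1]
        best = None
        for x in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
            if all(c == 0 for c in x):
                continue  # SVP asks for a *nonzero* lattice vector
            v = B @ np.array(x)
            if best is None or np.linalg.norm(v) < np.linalg.norm(best):
                best = v
        return best

    def bdd_decode(B, t, alpha, coeff_bound=3):
        """Brute-force BDD_alpha: given a target t promised to lie within
        alpha * lambda_1(B) of the lattice, return the closest lattice
        vector found by the same toy enumeration."""
        lam1 = np.linalg.norm(shortest_vector(B, coeff_bound))
        n = B.shape[1]
        best = None
        for x in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
            v = B @ np.array(x)
            if best is None or np.linalg.norm(t - v) < np.linalg.norm(t - best):
                best = v
        assert np.linalg.norm(t - best) <= alpha * lam1, "BDD promise violated"
        return best

    # Toy usage: a 2-dimensional lattice and a target near the lattice point (1, 2).
    B = np.array([[1, 0], [0, 2]])                            # basis columns (1,0), (0,2)
    print(shortest_vector(B))                                 # a shortest nonzero vector, e.g. (1, 0)
    print(bdd_decode(B, np.array([0.9, 2.1]), alpha=0.5))     # -> [1. 2.]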
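For item 3, recall the standard (non-sequential) SIS problem: given a uniformly random matrix A over Z_q, find a short nonzero integer vector z with A z = 0 (mod q). The Python sketch below searches for such a solution by brute force; it illustrates only the problem statement, with toy parameters chosen for exposition, and does not reproduce the sequential variant or the PoSW protocol analyzed in the thesis.

    import itertools
    import numpy as np

    def sis_brute_force(A, q, beta):
        """Brute-force SIS search: a nonzero z in {-1, 0, 1}^m with
        A @ z = 0 (mod q) and ||z|| <= beta. Exponential time in m;
        for illustration only, not a practical solver."""
        n, m = A.shape
        for z in itertools.product((-1, 0, 1), repeat=m):
            z = np.array(z)
            if not z.any():
                continue  # SIS asks for a *nonzero* solution
            if np.linalg.norm(z) <= beta and not ((A @ z) % q).any():
                return z
        return None

    # Toy usage with small, illustrative parameters (real instances use much
    # larger q, n, m, chosen so that solutions exist but are hard to find).
    rng = np.random.default_rng(0)
    q, n, m = 7, 2, 6
    A = rng.integers(0, q, size=(n, m))
    print(sis_brute_force(A, q, beta=2.0))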