Content area

Abstract

Linear codes and point lattices are two mathematical objects that play a fundamental role in many areas of computer science. In cryptography, the computational hardness of problems on codes and lattices underpins the security of most proposed post-quantum cryptographic schemes. Understanding the complexity of these problems, and how techniques for codes and lattices are related, is crucial for understanding the security of the corresponding cryptosystems. With this motivation, we explore both the computational complexity and the algorithmic sides of coding problems and lattice problems.

We study the computational complexity of two key problems relevant to cryptography: Learning With Errors (LWE) and Code Equivalence (CE). For problems like LWE, we introduce an alternative measure of computational hardness: the maximum success probability achievable by any probabilistic polynomial-time algorithm. This measure more accurately models the security goals of cryptosystems based on these problems. Under this perspective, we study the worst-case to average-case hardness of LWE and prove a tight Turing reduction from the Bounded Distance Decoding (BDD) problem to both the search and decision variants of LWE. Our reduction improves on previous reductions by using only a few oracle calls and by explicitly quantifying the loss in success probability.

The CE problem has several variants, including Permutation (PCE), Signed Permutation (SPCE), and Linear (LCE) Code Equivalence. We prove polynomial-time Karp reductions from PCE to both LCE and SPCE. Combined with a known Karp reduction from SPCE to the Lattice Isomorphism Problem (LIP), our second result implies a reduction from PCE to LIP.
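To make the LWE problem concrete, the following minimal sketch generates search-LWE instances of the form (a, ⟨a, s⟩ + e mod q). The parameters (n, q, m) and the error distribution here are purely illustrative placeholders, not the concrete parameters or reductions studied in the dissertation.

```python
import random

def lwe_samples(n=8, q=97, m=16, noise=(-1, 0, 1)):
    """Generate m LWE samples (a_i, b_i = <a_i, s> + e_i mod q).

    The parameters n, q, m and the small-error distribution `noise`
    are illustrative assumptions, chosen only to show the shape of
    an LWE instance; real schemes use much larger, carefully
    chosen parameters.
    """
    s = [random.randrange(q) for _ in range(n)]       # secret vector in Z_q^n
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]   # uniform vector in Z_q^n
        e = random.choice(noise)                      # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```

Search-LWE asks to recover s from the samples alone; decision-LWE asks to distinguish such samples from uniformly random pairs (a, b).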

On the algorithmic side, we use lattices and Fourier-analytic techniques to construct an algorithm that list-decodes Generalized Reed-Solomon (GRS) codes from worst-case or average-case errors over any ℓp (quasi)norm with 0 < p ≤ 2. The algorithm builds on the Guruswami-Sudan soft-decision decoding algorithm. It generalizes previous algorithms for the ℓ2 and ℓ1 norms and achieves a better rate versus decoding-distance trade-off than those algorithms. We also discuss lattice list-decoding capacity bounds for general norms and construct lattices of high density (achieving the Minkowski bound) using Construction D applied to random linear codes.
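As background for the decoding setting above, the sketch below shows GRS encoding (evaluate a low-degree polynomial at fixed points and scale by column multipliers) and an ℓp error weight over Z_q with centered coordinates. The specific field size, evaluation points, and multipliers are illustrative assumptions; the dissertation's list-decoding algorithm itself is not reproduced here.

```python
def grs_encode(msg, alphas, mults, q):
    """Encode msg (coefficients of a polynomial f of degree < k,
    ascending order) as the GRS codeword c_i = v_i * f(alpha_i) mod q.

    alphas (distinct evaluation points) and mults (nonzero column
    multipliers v_i) are illustrative code parameters.
    """
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):   # Horner's rule mod q
            acc = (acc * x + c) % q
        return acc
    return [(v * poly_eval(msg, a)) % q for a, v in zip(alphas, mults)]

def lp_weight(err, p, q):
    """ℓp (quasi)norm of an error vector over Z_q, folding each
    coordinate into its centered representative in (-q/2, q/2]."""
    centered = [((e + q // 2) % q) - q // 2 for e in err]
    return sum(abs(c) ** p for c in centered) ** (1.0 / p)
```

List decoding from ℓp errors then means: given a received word r, output all codewords c with lp_weight(r - c, p, q) below the decoding radius.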

Details

Title
Codes & Lattices: Computational Complexity and Constructions
Number of pages
121
Publication year
2025
Degree date
2025
School code
0127
Source
DAI-A 87/2(E), Dissertation Abstracts International
ISBN
9798291567876
Committee member
Nagarajan, Viswanath; Bansal, Nikhil; Peikert, Christopher J.; Ribeiro, João
University/institution
University of Michigan
Department
Computer Science & Engineering
University location
United States -- Michigan
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32271947
ProQuest document ID
3244862088
Document URL
https://www.proquest.com/dissertations-theses/codes-amp-lattices-computational-complexity/docview/3244862088/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic