Friday, March 8 at MIT

When: Friday, Mar 8, 2019.

Where: MIT Stata Center G-882 (Hewlett Room)

Organizers: Ran Canetti, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.

Program:

9:30 – 10:00. Coffee/Breakfast
10:00 – 11:00. Zvika Brakerski, Weizmann
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
11:15 – 12:15. Gilad Asharov, JP Morgan AI Research
OptORAMa: Optimal Oblivious RAM
12:15 – 1:30. Lunch
1:30 – 2:30. Rachel Lin, University of Washington
Pseudo Flawed-smudging Generators and their Application to Indistinguishability Obfuscation
2:45 – 3:45. Mohammad Mahmoody, University of Virginia
Coin-tossing, Concentration of Products, and Limits of Robust Learning
4:00 – 5:00. abhi shelat, Northeastern
Threshold Factoring from Factoring Assumptions

Abstracts:

Speaker: Zvika Brakerski, Weizmann
Title: Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing

We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2-1/\poly(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst case problem. We note that LPN with noise $1/2-1/\poly(n)$ already implies symmetric cryptography.
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP), which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx {\log^2 n}/{n}$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\poly(n)$.
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $SearchBPP^{SZK}$ (i.e., reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $NP$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $BPP^{SZK}$).
Joint work with Vadim Lyubashevsky, Vinod Vaikuntanathan and Daniel Wichs.
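To make the objects above concrete, here is a minimal Python sketch (ours, purely illustrative and not from the paper) that samples high-noise LPN instances and $(n,m,w)$-NCP instances; all function names and parameter choices are our own.

```python
import numpy as np

def lpn_samples(s, m, noise_rate, rng):
    """Sample m noisy parity equations b = As + e (mod 2), where each noise
    bit e_i is Bernoulli(noise_rate). The regime discussed above is the
    extremely high noise rate 1/2 - 1/poly(n)."""
    n = len(s)
    A = rng.integers(0, 2, size=(m, n))           # random binary matrix
    e = (rng.random(m) < noise_rate).astype(int)  # Bernoulli noise bits
    return A, (A @ s + e) % 2

def ncp_instance(n, m, w, rng):
    """Sample an (n, m, w)-NCP input: a generating matrix G of a binary
    linear code of length m and (with high probability) rank n, plus a
    target vector at Hamming distance w from the code."""
    G = rng.integers(0, 2, size=(n, m))
    codeword = (rng.integers(0, 2, size=n) @ G) % 2
    target = codeword.copy()
    flips = rng.choice(m, size=w, replace=False)
    target[flips] ^= 1                            # move w coordinates off the code
    return G, target

rng = np.random.default_rng(0)
n = 64
s = rng.integers(0, 2, size=n)
A, b = lpn_samples(s, m=4 * n, noise_rate=0.5 - 1.0 / n**2, rng=rng)
G, t = ncp_instance(n=n, m=8 * n, w=16, rng=rng)
```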

Speaker: Gilad Asharov, JP Morgan AI Research
Title: OptORAMa: Optimal Oblivious RAM

Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC ’87 and J. ACM ’96), is a technique for provably obfuscating programs’ access patterns, such that the access patterns leak no information about the programs’ secret inputs. To compile a general program to an oblivious counterpart, it is well known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky’s original ORAM work for statistical security, in a somewhat restricted model (the so-called balls-and-bins model), and recently by Larsen and Nielsen (CRYPTO ’18) for computational security.
A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large-memory-word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by, and non-trivially improves on, the recent beautiful work of Patel et al. (FOCS ’18), who gave a construction with $O(\log N \cdot \log\log N)$ amortized blowup, assuming one-way functions.
One of our building blocks, of independent interest, is a linear-time deterministic oblivious algorithm for tight compaction: given an array of $n$ elements where some elements are marked, we permute the elements so that all marked elements end up at the front of the array. Our $O(n)$ algorithm improves on the previously best known deterministic and randomized algorithms, whose running times are $O(n \cdot \log n)$ and $O(n \cdot \log \log n)$, respectively.
With Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, Enoch Peserico and Elaine Shi.
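For intuition about the tight-compaction building block, the following toy Python sketch (ours, not the paper’s algorithm) performs tight compaction with a sorting network: because the schedule of compared positions is fixed in advance, the memory access pattern is independent of the data, which is the defining property of an oblivious algorithm. This naive network costs Θ(n²) compare-exchanges, versus the O(n · log n) baselines mentioned above and the O(n) result of the talk.

```python
def oblivious_tight_compaction(items, marked):
    """Move all marked items to the front by sorting on the key
    (0 = marked, 1 = unmarked) with an odd-even transposition network.
    The pairs (i, i+1) compared in each round are fixed in advance, so
    the access pattern reveals nothing about which items are marked.
    (A real implementation would also make each compare-exchange
    branch-free; this toy only fixes the access pattern.)"""
    n = len(items)
    keys = [0 if m else 1 for m in marked]
    for rnd in range(n):                       # n rounds suffice to sort
        for i in range(rnd % 2, n - 1, 2):     # data-independent schedule
            if keys[i] > keys[i + 1]:          # compare-exchange step
                keys[i], keys[i + 1] = keys[i + 1], keys[i]
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

print(oblivious_tight_compaction(list("abcdef"),
                                 [False, True, False, True, True, False]))
# -> ['b', 'd', 'e', 'a', 'c', 'f']: marked items compacted to the front
```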

Speaker: Rachel Lin, University of Washington
Title: Pseudo Flawed-smudging Generators and their Application to Indistinguishability Obfuscation

We introduce Pseudo Flawed-smudging Generators (PFGs). They are polynomially expanding functions over Z_p with polynomially small outputs that satisfy a weak pseudo-randomness property. More specifically, the output distribution of a PFG is computationally indistinguishable from a so-called flawed-smudging distribution y ← Y, satisfying that for every B = poly(n)-bounded noise distribution E, the distributions of (e, e+y) and (e’, e+y) are statistically close, where e and e’ are independent samples from E conditioned on agreeing at a few, o(n), coordinates. Moreover, the statistical closeness only holds with 1/poly(n) probability over the choice of y ← Y. In essence, the output of a PFG computationally hides a small noise vector at all but a few coordinates, with noticeable probability.
We use PFGs to construct Indistinguishability Obfuscation (IO) schemes for polynomial-sized circuits. Assuming LWE and the existence of constant-locality pseudorandom generators, we construct IO using PFGs and a Functional Encryption (FE) scheme able to compute them. Instantiating the PFGs with new candidates from [Ananth, Jain, Sahai, Eprint 2018] and instantiating the FE with a new partially hiding FE scheme constructed from bilinear maps, we obtain IO based on (i) the security of the new PFG candidates, (ii) the SXDH assumption over bilinear maps, (iii) LWE, and (iv) the security of constant-locality pseudorandom generators.
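To see the smudging property in isolation, here is a small Python sketch (ours, illustrative only) that computes the exact statistical distance between e + Y and e’ + Y for Y uniform on {-R, …, R}: once the smudging range R is much larger than the gap between noise values, the two sums are statistically close, hiding which noise value was used. The flawed-smudging distributions above weaken exactly this guarantee, to hold at all but a few coordinates and only with noticeable probability.

```python
from fractions import Fraction

def statistical_distance(e1, e2, R):
    """Exact statistical distance between e1 + Y and e2 + Y,
    where Y is uniform on the integers {-R, ..., R}."""
    p = Fraction(1, 2 * R + 1)
    sd = Fraction(0)
    for v in range(min(e1, e2) - R, max(e1, e2) + R + 1):
        p1 = p if abs(v - e1) <= R else Fraction(0)
        p2 = p if abs(v - e2) <= R else Fraction(0)
        sd += abs(p1 - p2)
    return sd / 2

# With R comparable to the noise, the sum leaks which value was added;
# with R >> |e1 - e2|, the distance |e1 - e2| / (2R + 1) is tiny.
print(statistical_distance(3, -7, R=10))       # 10/21
print(statistical_distance(3, -7, R=10_000))   # 10/20001
```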

Speaker: Mohammad Mahmoody, University of Virginia
Title: Coin-tossing, Concentration of Products, and Limits of Robust Learning

A recent active line of work in robust learning studies attacks on learning algorithms through adversarial perturbations that happen during the training phase (i.e., poisoning attacks) or the testing phase (i.e., evasion attacks, a.k.a. adversarial examples). In this talk, I first show the existence of some generic information-theoretic poisoning attacks, as well as evasion attacks, for certain theoretically natural input distributions (e.g., the uniform distribution), based on classical results about concentration of measure in certain metric probability spaces (known as normal Lévy families). I will then show how to make some of these attacks polynomial time by proving computational variants of measure concentration for any product space under Hamming distance, using new (polynomial-time) attacks on cryptographic coin-tossing protocols.
Based on joint works with Saeed Mahloujifar and Dimitrios Diochnos from NeurIPS’18, AAAI’19 and ALT’19.
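For a feel of the concentration phenomenon driving these attacks, here is a small self-contained experiment (ours, not from the papers): on the uniform product space {0,1}^n under Hamming distance, any set of measure at least 1/2, such as the half-space of strings of weight at most n/2, has a d-neighborhood that swallows almost the whole cube once d is a small multiple of sqrt(n).

```python
import numpy as np

# S = {x in {0,1}^n : weight(x) <= n//2} has measure >= 1/2 under the
# uniform distribution, and dist(x, S) = max(0, weight(x) - n//2)
# exactly (flip the excess ones). Estimate how fast the d-neighborhood
# of S covers the cube.
rng = np.random.default_rng(0)
n, trials = 1000, 100_000
weights = rng.binomial(n, 0.5, size=trials)    # weight of a uniform x
dist_to_S = np.maximum(0, weights - n // 2)

for d in (0, 10, 25, 50):                      # d on the order of sqrt(n) ~ 32
    print(f"Pr[dist(x, S) <= {d:2d}] ~ {np.mean(dist_to_S <= d):.4f}")
# The uncovered mass decays like exp(-2 d^2 / n) (Hoeffding), so an
# adversary perturbing O(sqrt(n)) coordinates reaches S from almost
# every starting point.
```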

Speaker: abhi shelat, Northeastern
Title: Threshold Factoring from Factoring Assumptions

The problem of jointly generating an RSA modulus N has been well studied since Boneh and Franklin first proposed a solution in 1996. In this talk, we discuss our latest multi-party protocol for this problem, as well as interesting open directions that could improve our solution. Our solution requires only oblivious transfer (OT) and can thus be instantiated from the factoring assumption alone; we discuss what this restriction costs by comparing against protocols that rely on homomorphic primitives. We also discuss the new motivation behind this problem arising from the use of an RSA modulus N in the construction of verifiable delay functions (VDFs), and the use of VDFs in consensus protocols.


This is joint work with Megan Chen, Ran Cohen, Jack Doerner, Yash Kondi, Eysa Lee, and Schuyler Rosefield, as well as Carmit Hazay and Muthu Venkitasubramaniam from Ligero.
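As background for the functionality being computed, here is a toy Python sketch (ours, not the protocol from the talk) in which a trusted dealer plays the role of the MPC: each party ends up with additive shares of the prime factors, and only N = pq becomes public. The actual protocol removes the dealer, replacing the in-the-clear primality check with a distributed biprimality test in the style of Boneh and Franklin, built from OT.

```python
import random

def is_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def sample_prime_shares(bits):
    """Rejection-sample additive shares p0, p1 with p0 + p1 prime.
    Here the dealer sees everything; the protocol in the talk tests
    primality without ever reconstructing p0 + p1."""
    while True:
        p0 = random.getrandbits(bits - 1)
        p1 = random.getrandbits(bits - 1)
        if is_prime(p0 + p1):
            return p0, p1

# Toy parameters -- far too small for real security.
p0, p1 = sample_prime_shares(128)
q0, q1 = sample_prime_shares(128)
N = (p0 + p1) * (q0 + q1)       # the only value the parties should learn
print(N.bit_length())
```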
