Archive

When: Friday, May 3, 2019.

Where: Northeastern University, ISEC Building (805 Columbus Ave) Room 655.

Organizers: Ran Canetti, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.

Program:

9:30 – 10:00. Coffee/Breakfast
10:00 – 11:00. Benedikt Bünz, Stanford University
Verifiable Delay Functions and Succinct Proofs in Groups of Unknown Order
11:15 – 12:15. Dakshita Khurana, Microsoft Research
Quantum Advantage and Classical Cryptography
12:15 – 1:30. Lunch
1:30 – 2:30. Prabhanjan Ananth, MIT
Secure MPC with Honest Majority: Minimal Rounds and Efficiency Optimizations
3:00 – 4:00. Aloni Cohen, MIT
Towards Formalizing the GDPR Notion of Singling Out

Abstracts:

Speaker: Benedikt Bünz, Stanford University
Title: Verifiable Delay Functions and Succinct Proofs in Groups of Unknown Order

We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.

Further, we survey new VDF constructions by Pietrzak and Wesolowski in groups of unknown order. Building on Wesolowski’s techniques, we build succinct proofs in groups of unknown order. We build Sigma protocols that have good soundness in the challenge space and communication complexity independent of the witness. This circumvents a previous impossibility result. These proofs have applications to accumulators and vector commitments.
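The repeated-squaring constructions surveyed above can be sketched concretely. The toy Python sketch below follows the Wesolowski-style proof shape (evaluate by T sequential squarings; prove with a single group element π = g^⌊2^T/l⌋ for a Fiat-Shamir prime l, so the verifier only checks π^l·g^(2^T mod l) = y). All helper names and parameters are illustrative, and the modulus here has a known factorization, which a real VDF must avoid:

```python
import hashlib
import secrets

def is_prime(n, rounds=20):
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def hash_prime(data):
    """Derive a ~128-bit prime from a hash (the Fiat-Shamir challenge l)."""
    ctr = 0
    while True:
        h = int.from_bytes(
            hashlib.sha256(data + ctr.to_bytes(4, "big")).digest()[:16], "big") | 1
        if is_prime(h):
            return h
        ctr += 1

def vdf_eval(g, T, N):
    """T sequential squarings: y = g^(2^T) mod N."""
    y = g
    for _ in range(T):
        y = pow(y, 2, N)
    return y

def vdf_prove(g, y, T, N):
    """Single-element proof: pi = g^q where 2^T = q*l + r."""
    l = hash_prime(f"{g}|{y}|{T}".encode())
    q = (1 << T) // l
    return pow(g, q, N), l

def vdf_verify(g, y, T, N, pi, l):
    """Fast verification: pi^l * g^(2^T mod l) == y."""
    if l != hash_prime(f"{g}|{y}|{T}".encode()):
        return False
    r = pow(2, T, l)
    return (pow(pi, l, N) * pow(g, r, N)) % N == y

# Toy demo: N is a product of two *known* primes here; in a real VDF the
# factorization must be unknown (e.g., an RSA modulus from a ceremony).
N = ((1 << 31) - 1) * ((1 << 61) - 1)
y = vdf_eval(3, 1000, N)
pi, l = vdf_prove(3, y, 1000, N)
assert vdf_verify(3, y, 1000, N, pi, l)
```

Note that verification costs two small modular exponentiations regardless of T, while evaluation requires T inherently sequential squarings; this is the exponential gap between evaluation and verification discussed above.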

Papers: https://eprint.iacr.org/2018/601.pdf and https://eprint.iacr.org/2018/1188

Speaker: Dakshita Khurana, Microsoft Research
Title: Quantum Advantage and Classical Cryptography

We demonstrate applications of quantum advantage to building new protocols for classical cryptography. In particular, we observe that in many applications, the oft-used technique of leveraging (time) complexity can be replaced with leveraging quantum advantage instead. In this talk, I will describe how this observation can be combined with additional techniques to obtain:
1. Non-interactive non-malleable commitments without setup in the plain model, under well-studied assumptions.
2. Two-message (privately verifiable) WI arguments based on the polynomial hardness of factoring and quantum polynomial hardness of LWE.

In the first case, I will begin by discussing how to build simple non-interactive non-malleable commitments for O(log log n) tags assuming the sub-exponential hardness of factoring or discrete log, and *quantum* sub-exponential hardness of LWE. Next, I will describe new combinatorial techniques to amplify the number of tags, assuming only the existence of NIWI (non-interactive witness indistinguishable) proofs. These techniques yield the first construction of non-interactive non-malleable commitments w.r.t. replacement for exponentially many tags in the plain model, based on well-studied assumptions.

I will also discuss how the second application (and some others) follow almost immediately from our observation on quantum advantage. This is based on joint work with Yael Tauman Kalai. The paper is available online at https://eccc.weizmann.ac.il/report/2018/203/.

Speaker: Prabhanjan Ananth, MIT
Title: Secure MPC with Honest Majority: Minimal Rounds and Efficiency Optimizations

We present two constructions of two-round secure multiparty computation protocols with honest majority:

– Our first construction satisfies statistical security with abort and can handle any functionality implementable in NC1. All the previous two-round protocols either required computational assumptions or had a weaker corruption threshold. Our construction employs both broadcast and point-to-point channels. We also show how to generically transform this protocol into one that only employs point-to-point channels while satisfying statistical security with selective abort property. 

– Our second construction can securely compute any functionality implementable in P/poly and has per-party computation cost (|C|+n^2) poly(k), where C is the circuit being securely computed, n is the number of parties, and k is the security parameter. It is based on one-way functions and satisfies semi-honest security. The per-party computation cost of all previous protocols in this setting was at least |C| n^2 poly(k). With an additional round, we can either upgrade to malicious security or achieve a protocol with total computation cost (|C|+n^2) poly(k).
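Both constructions live in the honest-majority setting, whose textbook building block is Shamir secret sharing over a finite field. As background (this is the standard primitive, not the protocol from the talk), a minimal sketch with an illustrative choice of prime field:

```python
import secrets

P = 2**61 - 1  # illustrative Mersenne prime field

def share(secret, n, t):
    """Split `secret` into n Shamir shares via a random degree-t
    polynomial with constant term `secret`; any t+1 shares reconstruct."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Shares are linear: adding shares pointwise yields shares of the sum,
# which is why honest-majority protocols get additions "for free".
a = share(10, n=5, t=2)
b = share(20, n=5, t=2)
summed = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(a, b)]
assert reconstruct(summed[:3]) == 30
```

The linearity shown at the end is what makes the per-party cost of additions independent of n; the expensive part of any such protocol, and the target of the optimizations above, is multiplication.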

Based on joint works with Arka Rai Choudhuri (JHU), Aarushi Goel (JHU) and Abhishek Jain (JHU). 

Speaker: Aloni Cohen, MIT
Title: Towards Formalizing the GDPR Notion of Singling Out

There is a significant conceptual gap between legal and mathematical thinking around data privacy. The effect is uncertainty as to which technical offerings adequately match the expectations expressed in legal standards. The uncertainty is exacerbated by a litany of successful privacy attacks demonstrating that traditional statistical disclosure limitation techniques often fall short of the sort of privacy envisioned by legal standards.

We define predicate singling out, a new type of privacy attack intended to capture the concept of singling out appearing in the General Data Protection Regulation (GDPR).

Informally, an adversary predicate singles out a dataset X using the output of a data release mechanism M(X) if it manages to output a predicate p matching exactly one row in X with probability much better than a statistical baseline. A data release mechanism that precludes such attacks is secure against predicate singling out (PSO secure).
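The statistical baseline can be made concrete with standard binomial arithmetic: a predicate that each of n i.i.d. rows satisfies independently with probability p matches exactly one row with probability n·p·(1−p)^(n−1), which is maximized at p = 1/n and approaches 1/e. A quick numerical check (illustrative, not code from the paper):

```python
import math

def isolation_prob(n, p):
    """Probability that a predicate of weight p matches exactly one of
    n i.i.d. rows: the Binomial(n, p) probability of the event k = 1."""
    return n * p * (1 - p) ** (n - 1)

n = 10_000
best_p = 1 / n                       # calculus: the maximum is at p = 1/n
print(isolation_prob(n, best_p))     # ~ 0.3679, i.e. about 1/e
print(math.exp(-1))
```

So even a baseline adversary who never sees M(X) isolates a row with probability about 1/e by guessing a weight-1/n predicate; a successful attack must beat this baseline substantially.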

We argue that PSO security is a mathematical concept with legal consequences. Any data release mechanism that purports to “render anonymous” personal data under the GDPR must be secure against singling out, and hence must be PSO secure. We then analyze PSO security, showing that it fails to self-compose. Namely, a combination of $\omega(\log n)$ exact counts, each individually PSO secure, enables an attacker to predicate single out. In fact, the composition of just two PSO secure mechanisms can fail to provide PSO security.

Finally, we ask whether differential privacy and k-anonymity are PSO secure. Leveraging a connection to statistical generalization, we show that differential privacy implies PSO security. However, k-anonymity does not: there exists a simple and general predicate singling out attack under mild assumptions on the k-anonymizer and the data distribution.

When: Friday, Mar 8, 2019.

Where: MIT Stata Center G-882 (Hewlett Room)

Organizers: Ran Canetti, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.

Program:

9:30 – 10:00. Coffee/Breakfast
10:00 – 11:00. Zvika Brakerski, Weizmann
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
11:15 – 12:15. Gilad Asharov, JP Morgan AI Research
OptORAMa: Optimal Oblivious RAM
12:15 – 1:30. Lunch
1:30 – 2:30. Rachel Lin, University of Washington
Pseudo Flawed-smudging Generators and its Application to Indistinguishability Obfuscation
2:45 – 3:45. Mohammad Mahmoody, University of Virginia
Coin-tossing, Concentration of Products, and Limits of Robust Learning
4:00 – 5:00. abhi shelat, Northeastern
Threshold Factoring from Factoring Assumptions

Abstracts:

Speaker: Zvika Brakerski, Weizmann
Title: Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing

We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires extremely high noise rate of $1/2-1/\poly(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst case problem. We note that LPN with noise $1/2-1/\poly(n)$ already implies symmetric cryptography.
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP) which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx {\log^2 n}/{n}$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\poly(n)$.
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $SearchBPP^{SZK}$ (i.e.\ reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $NP$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $BPP^{SZK}$).
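To fix notation, an LPN instance in the high-noise regime discussed above can be sampled as follows; this is an illustrative sketch, not code from the paper, and eps_inv stands in for the poly(n) term in the noise rate 1/2 − 1/poly(n):

```python
import secrets

def lpn_samples(n, m, eps_inv):
    """Generate m LPN samples (a_i, <a_i, s> + e_i mod 2) for a random
    secret s in {0,1}^n, with Bernoulli noise of rate 1/2 - 1/eps_inv
    (requires eps_inv > 2)."""
    s = [secrets.randbelow(2) for _ in range(n)]
    samples = []
    for _ in range(m):
        a = [secrets.randbelow(2) for _ in range(n)]
        # P[e = 1] = (eps_inv - 2) / (2 * eps_inv) = 1/2 - 1/eps_inv
        e = 1 if secrets.randbelow(2 * eps_inv) < eps_inv - 2 else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        samples.append((a, b))
    return s, samples

# With the secret in hand, samples agree with <a, s> at rate 1/2 + 1/eps_inv;
# the distinguisher's task is to detect this slight bias without s.
s, samples = lpn_samples(50, 20_000, 4)
agree = sum(b == sum(ai * si for ai, si in zip(a, s)) % 2
            for a, b in samples) / len(samples)
print(agree)  # close to 0.75 for eps_inv = 4
```

As eps_inv grows polynomially in n, the bias 1/eps_inv that a distinguisher must detect becomes inverse-polynomially small, which is the "very hard" LPN regime of the reduction.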
Joint work with Vadim Lyubashevsky, Vinod Vaikuntanathan and Daniel Wichs.

Speaker: Gilad Asharov, JP Morgan AI Research
Title: OptORAMa: Optimal Oblivious RAM

Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC ’87 and J. ACM ’96), is a technique for provably obfuscating programs’ access patterns, such that the access patterns leak no information about the programs’ secret inputs.  To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky’s original ORAM work for statistical security and in a somewhat restricted model (the so called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO ’18) for computational security.
A long standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS ’18) who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions.
One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: Given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms whose running time is  $O(n \cdot\log n)$ or $O(n \cdot\log \log n)$, respectively. 
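For contrast with the linear-time algorithm above, tight compaction has a classical oblivious baseline via sorting networks: sort by the (inverted) mark bit using a fixed, data-independent sequence of compare-exchanges, at O(n log² n) cost. A sketch of that baseline (illustrative; assumes a power-of-two input length):

```python
def compare_swap(a, i, j):
    # Executed for every scheduled (i, j) pair regardless of the data:
    # the sequence of memory accesses is fixed, hence "oblivious".
    if a[i][0] > a[j][0]:
        a[i], a[j] = a[j], a[i]

def bitonic_sort(a):
    """Batcher's bitonic sorting network; len(a) must be a power of two."""
    n = len(a)
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                l = i ^ j
                if l > i:
                    if (i & k) == 0:
                        compare_swap(a, i, l)   # ascending region
                    else:
                        compare_swap(a, l, i)   # descending region
            j //= 2
        k *= 2

def tight_compact(values, marked):
    """O(n log^2 n) oblivious tight compaction: tag marked elements
    with key 0 and unmarked with key 1, then sort by key so all
    marked elements end up at the front."""
    tagged = [(0 if m else 1, v) for v, m in zip(values, marked)]
    bitonic_sort(tagged)
    return [v for _, v in tagged]
```

The point of the talk's result is that this log²-factor overhead (or the log-factor of randomized approaches) is unnecessary: tight compaction is achievable obliviously and deterministically in O(n).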
With Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, Enoch Peserico and Elaine Shi.

Speaker: Rachel Lin, University of Washington
Title: Pseudo Flawed-smudging Generators and its Application to Indistinguishability Obfuscation

We introduce Pseudo Flawed-smudging Generators (PFGs). They are polynomially expanding functions over Zp with polynomially small outputs that have some weak pseudo-randomness property. More specifically, the output distribution of a PFG is computationally indistinguishable from a so-called flawed smudging distribution y ← Y, satisfying that for every B = poly(n)-bounded noise distribution E, the distributions of (e, e + y) and (e’, e + y) are statistically close, where e and e’ are independent samples from E conditioned on agreeing at a few, o(n), coordinates. Moreover, the statistical closeness only holds with 1/poly(n) probability over the choice of y ← Y. In essence, the output of a PFG computationally hides a small noise vector at all but a few coordinates with noticeable probability.
We use PFGs to construct Indistinguishability Obfuscation (IO) schemes for polynomial-sized circuits. Assuming LWE and the existence of constant-locality pseudorandom generators, we construct IO using PFGs and a Functional Encryption (FE) scheme able to compute them. Instantiating the PFGs with new candidates from [Ananth, Jain, Sahai, Eprint 2018] and instantiating the FE with a new partially hiding FE scheme constructed from bilinear maps, we obtain IO based on i) the security of the new PFG candidates, ii) the SXDH assumption over bilinear maps, iii) LWE, and iv) the security of constant-locality pseudorandom generators.
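The smudging idea underlying these definitions can be checked numerically: adding a wide uniform value y to a small noise value e statistically hides e, with statistical distance exactly |e − e′|/M when y is uniform on {0, …, M−1}. An illustrative computation of ordinary smudging (the "flawed" variant above weakens this to hold only at most coordinates, with noticeable probability):

```python
from fractions import Fraction

def smudge_distance(e1, e2, M):
    """Exact statistical distance between e1 + y and e2 + y for y uniform
    on {0, ..., M-1}: shifting a flat distribution by a small offset only
    moves |e1 - e2| / M of its probability mass."""
    counts1 = {e1 + y: 1 for y in range(M)}   # each point has mass 1/M
    counts2 = {e2 + y: 1 for y in range(M)}
    support = set(counts1) | set(counts2)
    return sum(abs(Fraction(counts1.get(x, 0) - counts2.get(x, 0), M))
               for x in support) / 2

print(smudge_distance(3, 7, 10_000))  # 1/2500, i.e. |3 - 7| / 10000
```

So a polynomially wider y suffices to smudge out a bounded e up to inverse-polynomial statistical distance, which is the property the PFG output must emulate computationally.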

Speaker: Mohammad Mahmoody, University of Virginia
Title: Coin-tossing, Concentration of Products, and Limits of Robust Learning

A recent active line of work in robust learning studies attacks on learning algorithms through adversarial perturbations that happen during the training phase (i.e., poisoning attacks) or testing phase (i.e., evasion attacks; aka adversarial examples). In this talk, I first show the existence of some generic information theoretic poisoning attacks as well as evasion attacks for certain theoretically natural input distributions (e.g., the uniform distribution) based on classical results about concentration of measure in certain metric probability spaces (known as Normal Levy Families). I will then show how to make some of these attacks polynomial time by proving computational variants of the measure concentration for any product space under Hamming distance using new (polynomial time) attacks on cryptographic coin tossing protocols.
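The concentration phenomenon referred to here can be simulated directly: on the Boolean hypercube under Hamming distance, a set of measure about 1/2 (say, all points of weight at most n/2) expands to almost the entire cube within an O(√n) radius. An illustrative simulation, not from the paper:

```python
import math
import random

def dist_to_majority_set(weight, n):
    """Hamming distance from a point of Hamming weight `weight` to the
    set S = {x : weight(x) <= n/2}: flip the excess ones to zeros."""
    return max(0, weight - n // 2)

n, trials = 10_000, 1_000
radius = 2 * math.isqrt(n)           # O(sqrt(n)) blow-up radius
close = sum(
    dist_to_majority_set(bin(random.getrandbits(n)).count("1"), n) <= radius
    for _ in range(trials))
print(close / trials)                # almost always 1.0
```

Since a uniform point's weight concentrates within O(√n) of n/2, nearly every point of the cube lies within distance O(√n) of S; the attacks in the talk exploit exactly this kind of geometry, pushing typical inputs across a decision boundary with few perturbed coordinates.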
Based on joint works with Saeed Mahloujifar and Dimitrios Diochnos from NeurIPS’18, AAAI’19 and ALT’19.

Speaker: abhi shelat, Northeastern
Title: Threshold Factoring from Factoring Assumptions

The problem of jointly generating an RSA modulus N has been well considered since Boneh and Franklin first proposed a solution in 1996.  In this talk, we discuss our latest multi-party protocol for this problem as well as interesting open directions that could improve our solution.  Our solution only requires the assumption of OT and can thus be instantiated from the factoring assumption alone; we discuss the costs of this restriction by studying the benefits/costs of using protocols that rely on homomorphic primitives. We also discuss the new motivation behind this problem arising from the use of an RSA modulus N in the construction of a verifiable delay function (VDF), and the use of VDFs in consensus protocols.
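The basic shape of Boneh–Franklin-style candidate generation, in which each party contributes an additive share of each prime, can be sketched as follows. This toy sketch combines the shares in the clear, which is exactly what the real protocol must avoid: there, the summation, the product N = p·q, and the biprimality test are all computed under MPC (in our protocol, from OT alone). All names and parameters below are illustrative:

```python
import secrets

def miller_rabin(n, rounds=20):
    """Probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def sample_prime_from_shares(parties, bits):
    """Each party contributes an additive share; the sum is the candidate
    prime.  (Insecure toy: shares are combined in the clear.  The real
    protocol performs this combination and the primality/biprimality
    testing under MPC, so no single party ever learns p or q.)"""
    while True:
        shares = [secrets.randbits(bits) for _ in range(parties)]
        shares[0] |= 1                            # force the sum to be odd
        shares[1:] = [sh & -2 for sh in shares[1:]]
        p = sum(shares)
        if miller_rabin(p):
            return shares, p

_, p = sample_prime_from_shares(3, 128)
_, q = sample_prime_from_shares(3, 128)
N = p * q   # the jointly generated modulus
```

Because primality testing must be repeated on many candidate sums, the efficiency of the underlying secure multiplication dominates the protocol's cost, which is where the homomorphic-versus-OT trade-off discussed in the talk arises.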


This is joint work with Megan Chen, Ran Cohen, Jack Doerner, Yash Kondi,  Eysa Lee, Schuyler Rosefield as well as Carmit Hazay and Muthu Venkitasubramaniam from Ligero.