ITC (July 5-7) + Crypto Day (July 8) = Crypto Week!

When: Crypto Day is on Friday, July 8, immediately after ITC (July 5-7).

Logistics: ITC requires a separate registration. No registration is required for Crypto Day.

Where:  MIT Stata Center, Room 32-141.

To enter the building, you need to sign in here: and receive a QR code.

Organizers: Ran Canetti, Henry Corrigan-Gibbs, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs


9:00 – 9:30 Welcome and Breakfast
9:30 – 10:30 Benny Applebaum (Tel Aviv University) – video
Quadratic Multiparty Randomized Encodings Beyond Honest Majority and Their Applications
11:00 – 12:00 Rafael Pass (Cornell Tech) – video
Leakage-Resilient Hardness vs. Randomness
12:00 – 1:30 Lunch (provided)
1:30 – 2:30 Dakshita Khurana (UIUC) – video
SNARGs for P from sub-exponential DDH and QR
3:00 – 4:00 Wei-Kai Lin (Northeastern) – video
Optimal Single-Server Private Information Retrieval

Speaker: Benny Applebaum (Tel Aviv University)
Title: Quadratic Multiparty Randomized Encodings Beyond Honest Majority and Their Applications


Multiparty randomized encodings (Applebaum, Brakerski, and Tsabary, SICOMP 2021) reduce the task of securely computing a complicated multiparty functionality f to the task of securely computing a simpler functionality g. The reduction is non-interactive and preserves information-theoretic security against a passive (semi-honest) adversary, also referred to as *privacy*. The special case of a degree-2 encoding g (2MPRE) has recently found several applications to secure multiparty computation (MPC) with either information-theoretic security or making black-box access to cryptographic primitives. Unfortunately, as all known constructions are based on information-theoretic MPC protocols in the plain model, they can only be private with an honest majority.

In this work, we break the honest-majority barrier and present the first construction of general 2MPRE that remains secure in the presence of a dishonest majority. Our construction encodes every n-party functionality f by a 2MPRE that tolerates at most t=2n/3 passive corruptions. 

We derive several applications, including (1) The first non-interactive client-server MPC protocol with perfect privacy against any coalition of a minority of the servers and up to t of the n clients; (2) Completeness of 3-party functionalities under non-interactive t-private reductions; and (3) A single-round t-private reduction from general-MPC to an ideal oblivious transfer (OT). These positive results partially resolve open questions that were posed in several previous works. We also show that t-private 2MPREs are necessary for solving (2) and (3), thus establishing new equivalence theorems between these three notions. Finally, we present a new approach for constructing fully-private 2MPREs based on multi-round protocols in the OT-hybrid model that achieve *perfect privacy* against *active* attacks. Moreover, by slightly restricting the power of the active adversary, we derive an equivalence between these notions. This forms a surprising and quite unique connection between a non-interactive passively-private primitive and an interactive actively-private primitive.

Based on joint work with Yuval Ishai, Or Karni, and Arpita Patra.

Speaker: Rafael Pass (Cornell Tech)
Title: Leakage-Resilient Hardness vs. Randomness


A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. The celebrated “hardness vs. randomness” paradigm pioneered by Blum-Micali (SIAM JoC’84), Yao (FOCS’84) and Nisan-Wigderson (JCSS’94) presents hardness assumptions under which prBPP = prP, but these hardness assumptions are not known to also be necessary for such derandomization.

In this work, following the recent work by Chen and Tell (STOC’20) that considers “almost-all-input” hardness of a function f (i.e., hardness of computing f on more than a finite number of inputs), we consider “almost-all-input” *leakage-resilient hardness* of a function f—that is, hardness of computing f(x) even given, say, sqrt(|x|) bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., gives a both *necessary* and *sufficient* condition for derandomization). In more detail, we show that there exists a constant c such that the following are equivalent:

  • prBPP = prP;
  • Existence of a poly-time computable function f : {0, 1}^n -> {0,1}^n that is almost-all-input leakage-resilient hard with respect to n^c-time probabilistic algorithms.
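To make the leakage-resilience game concrete, here is a minimal Python sketch of its interface. Everything specific in it is a placeholder: SHA-256 stands in for f (a real candidate must take longer than n^c time to compute, which SHA-256 does not), and the leakage function is fixed rather than adversarially chosen.

```python
import hashlib
import math
import secrets

def f(x: bytes) -> bytes:
    """Toy stand-in for the hard function f (hypothetical choice: SHA-256).
    The theorem only needs *some* poly-time f with the property; nothing
    here is a candidate from the paper."""
    return hashlib.sha256(x).digest()

def leak(y: bytes, n_bits: int) -> int:
    """One fixed leakage function: the top n_bits of f(x). In the real
    definition the adversary may request any efficient leakage."""
    return int.from_bytes(y, "big") >> (8 * len(y) - n_bits)

def leakage_game(adversary, input_len: int) -> bool:
    """One round of the almost-all-input leakage-resilience game: the
    adversary sees x plus sqrt(|x|) bits of leakage on f(x), and wins by
    outputting f(x) exactly. Hardness bounds the adversary to n^c time,
    less than the time needed to compute f directly."""
    x = secrets.token_bytes(input_len)
    y = f(x)
    budget = math.isqrt(8 * input_len)      # sqrt(|x|) leaked bits
    guess = adversary(x, leak(y, budget), budget)
    return guess == y

def trivial_adversary(x, leaked, budget):
    # Pads the leaked bits with zeros; misses the other 256 - budget bits.
    return (leaked << (256 - budget)).to_bytes(32, "big")

print(leakage_game(trivial_adversary, 32))  # False (except with prob. 2^-240)
```

The point of the sketch is only the shape of the definition: the adversary gets the input and a sqrt(|x|)-bit hint about the output, and must still reproduce the output in full.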

Our characterization naturally extends also to the low-end derandomization regime, to derandomization of MA, and also to average-case derandomization, by appropriately weakening the requirements on the function f.

Joint work with Yanyi Liu.

Speaker: Dakshita Khurana (UIUC)
Title: SNARGs for P from sub-exponential DDH and QR


Abstract: We will describe a construction of publicly-verifiable Succinct Non-Interactive Arguments (SNARGs) for arbitrary deterministic computations and bounded space non-deterministic computations from group-based assumptions, without relying on pairings. In particular, assuming the sub-exponential hardness of both the Decisional Diffie-Hellman (DDH) and Quadratic Residuosity (QR) assumptions, we obtain the following results, where n denotes the size of the instance:
1. A SNARG for any language that can be decided in non-deterministic time T and space S, with communication complexity and verifier runtime (n+S)·T^{o(1)}.
2. A SNARG for any language that can be decided in deterministic time T, with communication complexity and verifier runtime n·T^{o(1)}.

Speaker: Wei-Kai Lin (Northeastern)
Title: Optimal Single-Server Private Information Retrieval


In this talk, we present a single-server Private Information Retrieval (PIR) scheme, which allows a client to privately query the database from the server. Our construction achieves optimal bandwidth and server computation (up to poly-logarithmic factors), assuming the hardness of the Learning With Errors (LWE) problem.

In the setting of single-server PIR, the server stores an n-bit database, and the client wants to query some bits in the database privately and adaptively so that the server learns nothing about the indices of the queried bits. Our scheme uses ~O(√n) client storage, but it achieves amortized ~O(√n) server and client computation and ~O(1) bandwidth per query, and completes in a single roundtrip, where the ~O notation hides a security parameter and poly-logarithmic factor. In particular, we achieve a significant reduction in bandwidth over the state-of-the-art scheme by Corrigan-Gibbs, Henzinger, and Kogan (Eurocrypt’22): their scheme requires as much as ~O(√n) bandwidth per query, with comparable computational and storage cost as ours. Since they proved that their scheme is nearly optimal in terms of server computation and client storage, our scheme is additionally optimal in bandwidth.

This is a joint work with Mingxun Zhou, Yiannis Tselekounis, and Elaine Shi.
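For readers new to PIR, the privacy goal described above is easiest to see in the classic two-server, information-theoretic scheme of Chor, Goldreich, Kushilevitz, and Sudan, sketched below. This is only an illustration of the functionality: the talk's scheme is single-server and computational, and works entirely differently.

```python
import secrets

def make_queries(n: int, i: int):
    """Client: pick a uniformly random subset S of the indices and send
    S to server 0 and S with index i toggled to server 1. Each server
    sees a uniformly random subset, so neither learns i."""
    S = {j for j in range(n) if secrets.randbits(1)}
    return S, S ^ {i}   # symmetric difference toggles membership of i

def answer(db, S):
    """Server: XOR of the database bits indexed by the query set."""
    bit = 0
    for j in S:
        bit ^= db[j]
    return bit

def reconstruct(a0, a1):
    """Client: the two answers differ exactly in the contribution of
    db[i], so their XOR is the desired bit."""
    return a0 ^ a1

db = [secrets.randbits(1) for _ in range(16)]
i = 5
S0, S1 = make_queries(len(db), i)
print(reconstruct(answer(db, S0), answer(db, S1)) == db[i])  # True
```

Note the costs that the talk's result attacks: here each server does Θ(n) work per query and the queries themselves are n bits, whereas the scheme above achieves amortized ~O(√n) computation and ~O(1) bandwidth with a single server.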

When: Crypto Day on Friday, April 29.

Where:  Northeastern University, ISEC building (805 Columbus Ave), Room 655 (6th floor)

Logistics: Masks are optional for fully vaccinated attendees.

Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.


9:00 – 9:30 Welcome and Breakfast
9:30 – 10:30 Zvika Brakerski, Weizmann Institute
Constructive Post-Quantum Reductions
11:00 – 12:00 David Wu, UT Austin
Batch Arguments for NP and More from Standard Bilinear Group Assumptions

12:00 – 1:30 Lunch (provided)
1:30 – 2:30 Rahul Ilango, MIT
The Minimum Circuit Size Problem: What is it, what’s new, and where might things go?
3:00 – 4:00 Alexandra Henzinger, MIT
Single-Server Private Information Retrieval with Sublinear Amortized Time

Speaker: Zvika Brakerski (Weizmann Institute)
Title: Constructive Post-Quantum Reductions


Is it possible to convert classical cryptographic reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a non-constructive post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage.

We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.

Along the way, we make several additional contributions:

1. We put forth a framework for reductions (or general interaction) with stateful solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.

2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically “restored” post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.

Joint work with Nir Bitansky and Yael Kalai.

Speaker: David Wu, UT Austin
Title: Batch Arguments for NP and More from Standard Bilinear Group Assumptions


Non-interactive batch arguments for NP provide a way to amortize the cost of NP verification across multiple instances. They enable a prover to convince a verifier of multiple NP statements with communication much smaller than the total witness length and verification time much smaller than individually checking each instance.

In this talk, we introduce a direct approach for constructing batch arguments from standard bilinear map assumptions (e.g., subgroup decision or k-Lin) which avoids heavy tools like correlation-intractable hash functions or probabilistically-checkable proofs common to previous approaches. In turn, we obtain the first construction of batch arguments for NP from standard bilinear map assumptions.

As corollaries to our main construction, we also obtain a delegation scheme for RAM programs (i.e., a SNARG for P) as well as an aggregate signature scheme supporting bounded aggregation from standard bilinear map assumptions in the plain model.

Joint work with Brent Waters

Speaker: Rahul Ilango
Title: The Minimum Circuit Size Problem: What is it, what’s new, and where might things go?



In this talk, I will give a broad introduction to the Minimum Circuit Size Problem (and, more generally, the area of “meta-complexity”) and then survey some exciting recent developments. These new developments include:
a) equivalences between the existence of one-way functions and average-case hardness of the Minimum Circuit Size Problem,
b) worst-case to average-case reductions for NP via meta-complexity, and
c) a framework building towards reducing SAT to the Minimum Circuit Size Problem.
No prior background on this topic is required!

This survey is based on many works by many authors, including (non-exhaustively): Eric Allender, Lijie Chen, Mahdi Cheraghchi, Shuichi Hirahara, Yanyi Liu, Bruno Loff, Dimitrios Myrisiotis, Igor Oliveira, Rafael Pass, Hanlin Ren, Rahul Santhanam, Harsha Tirumala, Neekon Vafa, and Ilya Volkovich.

Speaker: Alexandra Henzinger, MIT
Title: Single-Server Private Information Retrieval with Sublinear Amortized Time


This talk will present new private-information-retrieval protocols in the single-server setting. These schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. 

Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small “hint” about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time—yielding sublinear amortized cost. 

Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.

This talk is based on joint work with Henry Corrigan-Gibbs and Dmitry Kogan.

We are very excited to bring the Boston-area crypto community back together for another in-person crypto day with four exciting talks.

When: *IN-PERSON* Crypto Day on Friday, March 11 at BU

Where:  BU’s Barrister Hall (located on the ground floor of the law school tower).

Logistics: All attendees are required to be fully vaccinated and to wear masks at all times when indoors.

We look forward to seeing you all.

Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.


9:00 – 9:30 Welcome and Coffee
9:30 – 10:30 Yevgeniy Dodis, NYU
Small-Box Cryptography
11:00 – 12:00 Guy Bresler, MIT
Average-case Reductions amongst Statistical Problems
12:00 – 1:30 Lunch (provided)
1:30 – 2:30 David Heath, Georgia Tech
EpiGRAM: Practical Garbled RAM
2:30 – 3:00 Coffee
3:00 – 4:00 Gabe Kaptchuk, Boston University
Weaving Social Accountability into Cryptographic Protocols

Speaker: Yevgeniy Dodis
Title: Small-Box Cryptography


One of the ultimate goals of symmetric-key cryptography is to find a rigorous theoretical framework for building complex cryptographic primitives from small components, such as cryptographic S-boxes. Unfortunately, a fundamental obstacle towards reaching this goal comes from the fact that traditional security proofs cannot get security beyond 1/2^n, where n is the size of the corresponding small component. As a result, prior provably secure approaches — which we call “big-box cryptography” — always made n larger than the security parameter, which led to several problems. Most notably, the design was too coarse to really explain practical constructions, as (arguably) the most interesting design choices, which happen when instantiating such “big-boxes”, were completely abstracted out.

In this work, we introduce a novel paradigm (or heuristic?) for building big objects from small objects, which we call small-box cryptography. 

As an illustration, we apply our framework to the analysis of SPN ciphers (e.g, generalizations of AES), getting quite reasonable concrete hardness estimates for the resulting ciphers. We also apply our framework to the design of stream ciphers. Here, however, we focus on the simplicity of the resulting construction, for which we also managed to find a direct “big-box”-style security justification, under a well-studied and widely believed eXact Linear Parity with Noise (XLPN) assumption.
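For orientation, a substitution-permutation network (SPN) round applies small S-boxes in parallel and then permutes the state bits; the S-boxes are exactly the "small boxes" the framework reasons about. The toy round below uses an arbitrary illustrative 4-bit S-box and bit permutation, not AES components and not anything from the paper.

```python
# Illustrative 4-bit S-box (a permutation of 0..15) and 16-bit wire
# permutation; both are arbitrary choices for demonstration only.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def spn_round(state: int, round_key: int) -> int:
    """One round of a toy 16-bit SPN: key mixing, then four parallel
    4-bit S-boxes, then a bit permutation."""
    state ^= round_key                        # key mixing
    subbed = 0
    for i in range(4):                        # substitution layer
        nib = (state >> (4 * i)) & 0xF
        subbed |= SBOX[nib] << (4 * i)
    permuted = 0
    for i in range(16):                       # permutation layer
        permuted |= ((subbed >> i) & 1) << PERM[i]
    return permuted

# A cipher composes many such rounds; each round is a permutation of the
# 16-bit state (key mixing, S-box layer, and wire permutation are all
# bijective), so the whole cipher is invertible.
assert len({spn_round(s, 0x3A94) for s in range(1 << 16)}) == 1 << 16
```

The obstacle the abstract describes is visible here: each S-box acts on only n = 4 bits, so a traditional proof about the round cannot give security beyond 1/2^4 per component.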

Overall, we hope that our work might initiate new follow-up results in the area of small-box cryptography.

Joint work with Harish Karthikeyan and Daniel Wichs.

Speaker: Guy Bresler
Title: Average-Case Reductions Amongst Statistical Problems


In this talk I will describe a few simple average-case reduction techniques and use these techniques to show how the computational phase transitions in a variety of statistical problems with widely varying structures all follow from a slight generalization of the planted clique conjecture. Some of these problems are robust sparse linear regression, tensor PCA, and certain dense stochastic block models. The talk is based on joint work with Matthew Brennan.

Speaker: David Heath
Title: EpiGRAM: Practical Garbled RAM

Yao’s Garbled Circuit (GC) is a foundational technique for achieving secure multiparty computation (MPC). GC allows two parties to securely compute any Boolean circuit. However, many computations are naturally expressed as RAM programs, not as Boolean circuits. While any RAM program can be polynomially reduced to a circuit, in practice the reduction is often prohibitively expensive.

Garbled RAM (GRAM) is a GC improvement that directly supports random access arrays without adding extra rounds to the GC protocol. Hence, GRAM allows us to securely and directly compute RAM programs. The first non-trivial GRAM (Lu and Ostrovsky, Eurocrypt 2013) required repeated evaluation of a PRF inside GC. Unfortunately, this non-black-box use of a cryptographic primitive is staggeringly expensive. While several subsequent works improved various important aspects of GRAM, no works addressed the central issue: GRAM remained expensive.

In this talk, I will present EpiGRAM, a new construction that dramatically improves GRAM’s efficiency. EpiGRAM improves over prior work by three to four orders of magnitude and opens the door to using GRAM in implementations. Our key insight is a new garbled data structure that we call a lazy permutation network. This data structure allows the garbled circuit to efficiently (and without extra rounds) outsource RAM accesses to the GC evaluator. In this talk, I will present the technical issues inherent to GRAM, and I will show how the lazy permutation network efficiently solves these problems.

EpiGRAM is joint work with Vladimir Kolesnikov (Georgia Tech) and Rafail Ostrovsky (UCLA) and will appear at Eurocrypt 2022.

Speaker: Gabe Kaptchuk
Title: Weaving Social Accountability into Cryptographic Protocols

When deployed, cryptographic systems transform from purely technical systems into sociotechnical constructs. As a result, cryptographers must take special care that those who use these cryptographic systems maliciously can be held accountable, by making misuse difficult and detectable. I investigate this problem in the setting of end-to-end encrypted systems. First, I will discuss Abuse Resistant Law Enforcement Access Systems [Eurocrypt21], which studies the problem of constructing law enforcement access systems (i.e., government backdoors) that ensure accountability in the case of unauthorized surveillance. Second, I will discuss Apple’s recent CSAM scanning proposal, analyzing the system through the lens of accountability.

We are very excited to bring the Boston-area crypto community back together for another in-person crypto day with exciting talks by our own students.

When: *IN-PERSON* Crypto Day on Friday, November 19 at MIT

Where: MIT Stata Center, Star Auditorium (D-463).

Logistics: Masks are mandatory. Non-MIT visitors: please sign up at this link for a Tim ticket, which you need to enter the Stata building. Once you sign up, you will receive a QR code which you can use to activate the elevators and open the doors. If there are any access issues, please contact Debbie Lehto.

We look forward to seeing you all.

Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.


9:15 – 9:30 Welcome and Breakfast
9:30 – 10:30 Alex Lombardi, MIT
10:45 – 11:45 Luowen Qian, Boston University
11:45 – 1:30 Lunch (provided)
1:30 – 2:30 Ari Karchmer, Boston University
2:45 – 3:45 Meghal Gupta, Microsoft Research
4:00 – 5:00 Willy Quach, Northeastern University

Speaker: Alex Lombardi
Title: Post-Quantum Zero Knowledge, Revisited


A major difficulty in quantum rewinding is the fact that measurement is destructive: extracting information from a quantum state irreversibly changes it. This is especially problematic in the context of zero-knowledge simulation, where preserving the adversary’s state is essential.   
In this work, we develop new techniques for quantum rewinding in the context of extraction and zero-knowledge simulation:

1) We show how to extract information from a quantum adversary by rewinding it without disturbing its internal state. We use this technique to prove that important interactive protocols, such as the Goldreich-Micali-Wigderson protocol for graph non-isomorphism and the Feige-Shamir protocol for NP, are zero-knowledge against quantum adversaries.

2) We prove that the Goldreich-Kahan protocol for NP is post-quantum zero knowledge using a simulator that can be seen as a natural quantum extension of the classical simulator.

Our results achieve (constant-round) black-box zero-knowledge with negligible simulation error, appearing to contradict a recent impossibility result due to Chia-Chung-Liu-Yamakawa (FOCS 2021). This brings us to our final contribution:

3) We introduce coherent-runtime expected quantum polynomial time, a computational model that (1) captures all of our zero-knowledge simulators, (2) cannot break any polynomial hardness assumptions, and (3) is not subject to the CCLY impossibility. In light of our positive results and the CCLY negative results, we propose coherent-runtime simulation to be the right quantum analogue of classical expected polynomial-time simulation.

Joint work with Fermi Ma and Nicholas Spooner.

Speaker: Luowen Qian
Title: Beating Classical Impossibility of Position Verification


We introduce the notion of position verification protocols with classical verifiers, and instantiate such a protocol via adapting the proof of quantumness protocol by Brakerski et al (FOCS’18). As a result, we achieve classically verifiable position verification assuming quantum hardness of Learning with Errors.
Towards establishing security, we prove that a natural non-local game for noisy trapdoor claw-free functions (NTCFs) is computationally hard assuming the adaptive hardcore bit property, which could be of independent interest.
Joint work with Jiahui Liu and Qipeng Liu.

Speaker: Ari Karchmer
Title: Covert Verifiable Learning: How to Learn with an Untrusted Intermediary


In this talk, we introduce and motivate the Covert Verifiable Learning (CVL) framework and survey several results. The CVL framework considers the task of learning a function via oracle queries, where the queries and responses are monitored (and perhaps also modified) by an untrusted intermediary. The goal is twofold: First, to prevent the intermediary from gaining any information about either the function or the learner’s intentions (e.g. the particular hypothesis class the learner is considering). Second, to curb the intermediary’s ability to meaningfully interfere with the learning process, even when it can modify the oracles’ responses.

Among the results described in this talk are the following:

  1. A Covert Learning algorithm in the agnostic setting for decision trees, where a polynomial time eavesdropping adversary that observes all queries and responses learns nothing about either the function, or the learned hypothesis.
  2. A Covert Verifiable Learning algorithm that provides similar learning and privacy guarantees, even in the presence of a polynomial-time adversarial intermediary that can modify all oracle responses. Here the learner is granted additional random examples and is allowed to abort whenever the oracle responses are modified.

Aside from its theoretical interest, the study of Covert Verifiable Learning is motivated by applications — outlined in this talk — to the outsourcing of automated scientific discovery in molecular biology, as well as to model extraction attacks.

Joint work with Ran Canetti.

Speaker: Meghal Gupta
Title: Interactive Error Correcting Codes Over Binary Erasure Channels Resilient to >1/2 Adversarial Corruption


An important question in coding theory is how to communicate over a noisy channel while being resilient to a maximum fraction of errors. One way this has been studied is via error correcting codes (ECC); in this talk we focus on two interactive generalizations of ECC’s that also address communication over a noisy channel.

In the first setting, only Alice (the sender) has a message she wants to send to Bob (the receiver). Though this can be solved with ECC’s, is there a better solution where Alice and Bob engage in an interactive protocol? We define this new concept as an interactive error correcting code (iECC), and show that for adversarial erasures over a binary channel, iECC’s can be better than ECC’s! The maximal error-resilience of an ECC is 1/2 of the communicated bits; in this talk, we construct an iECC that is resilient to adversarial erasure of 6/11 (>1/2) of the total communicated bits. 

In the second setting, both Alice and Bob wish to compute some function f of their private inputs x and y by engaging in a protocol to jointly compute f(x,y). This problem introduced the field of interactive coding in the 90’s, which asks the question: how resilient to adversarial error can we make their protocol? In this talk, we resolve a long-standing problem about binary interactive protocols: what is the best possible error resilience one can achieve? We construct a binary protocol resilient to 1/6 errors, for which a matching impossibility result is known. Before our result, the best construction was resilient to 5/39 errors.

This is based on joint work with Yael Tauman Kalai and Rachel Zhang.

Speaker: Willy Quach
Title: Succinct LWE Sampling, Random Polynomials, and Obfuscation


We present a construction of indistinguishability obfuscation (iO) that relies on the learning with errors (LWE) assumption together with a new notion of succinctly sampling pseudorandom LWE samples. We then present a candidate LWE sampler whose security is related to the hardness of solving systems of polynomial equations. Our construction improves on the recent iO candidate of Wee and Wichs (Eurocrypt 2021) in two ways: first, we show that a much weaker and simpler notion of LWE sampling suffices for iO; and secondly, our candidate LWE sampler is secure based on a compactly specified and falsifiable assumption about random polynomials, with a simple error distribution that facilitates cryptanalysis.

Based on joint work with Lalita Devadas, Vinod Vaikuntanathan, Hoeteck Wee and Daniel Wichs.

What: A special *IN-PERSON* crypto day on Friday, July 23 at Northeastern

We’re very excited to bring the Boston-area crypto community back together for a special in-person crypto day. We can’t wait to see all of you in person again! 

Where: Northeastern University, ISEC building (805 Columbus Ave), first floor auditorium (room 102).

Logistics: Masks are optional for fully vaccinated participants and required for everyone else. The talks will be in a large auditorium to give participants plenty of room to socially distance if desired. No food or drinks are allowed in the auditorium. Lunch will be served on the 6th floor in room 655.

Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.


9:15 – 9:30 Welcome
9:30 – 10:30 Jon Ullman, Northeastern University
The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation (video)
11:00 – 12:00 Hoeteck Wee, NTT Research
Breaking the sqrt(N) Barrier in Pairing-Based Broadcast Encryption (video)
12:00 – 1:30 Lunch (provided – in ISEC 655)
1:30 – 2:30 Zhengzhong Jin, Johns Hopkins University
SNARGs for P from LWE (video)
3:00 – 4:00 Yael Tauman Kalai, MSR and MIT
Somewhere Statistical Soundness, Post-Quantum Security, and SNARGs (video)
4:30 – 5:30 Rishab Goyal, MIT
Dynamic Collusion Bounded Functional Encryption from Identity-Based Encryption (video)


Speaker:  Jon Ullman, Northeastern University
Title:  The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation

Abstract: There has been a recent wave of interest in intermediate trust models for differential privacy that eliminate the need for a fully trusted central data collector, but overcome the limitations of local differential privacy. This interest has led to the introduction of the shuffle model (Cheu et al., EUROCRYPT 2019; Erlingsson et al., SODA 2019) and revisiting the pan-private model (Dwork et al., ITCS 2010). The message of this line of work is that, for a variety of low-dimensional problems — such as counts, means, and histograms — these intermediate models offer nearly as much power as central differential privacy. However, there has been considerably less success using these models for high-dimensional learning and estimation problems. In this work, we show that, for a variety of high-dimensional learning and estimation problems, both the shuffle model and the pan-private model inherently incur an exponential price in sample complexity relative to the central model. For example, we show that private agnostic learning of parity functions over $d$ bits requires $\Omega(2^{d/2})$ samples in these models, and privately selecting the most common attribute from a set of $d$ choices requires $\Omega(d^{1/2})$ samples, both of which are exponential separations from the central model. Our work gives the first non-trivial lower bounds for these problems for both the pan-private model and the general multi-message shuffle model.

Joint work with Albert Cheu.
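For context, "learning parity functions" above means recovering a secret s from labeled examples (x, <x,s> mod 2). In the central model (and without noise) this is trivial with O(d) examples via Gaussian elimination over GF(2), which is the baseline the exponential lower bounds separate from. A minimal sketch of that baseline (illustrative only, not from the paper):

```python
import random

def learn_parity(samples, d):
    """Find a parity consistent with noiseless labeled examples
    (x, <x,s> mod 2) by Gaussian elimination over GF(2)."""
    pivots = {}  # pivot bit position -> (row packed as int, label)
    for x, y in samples:
        vec = sum(b << j for j, b in enumerate(x))
        for col in sorted(pivots, reverse=True):  # reduce by known pivots
            if (vec >> col) & 1:
                pvec, py = pivots[col]
                vec ^= pvec
                y ^= py
        if vec:
            pivots[vec.bit_length() - 1] = (vec, y)
    s = [0] * d                      # free variables default to 0
    for col in sorted(pivots):       # back-substitute, lowest pivot first
        vec, y = pivots[col]
        acc = y
        for j in range(col):
            if (vec >> j) & 1:
                acc ^= s[j]
        s[col] = acc
    return s

random.seed(1)
d = 20
secret = [random.getrandbits(1) for _ in range(d)]
samples = []
for _ in range(2 * d):
    x = [random.getrandbits(1) for _ in range(d)]
    samples.append((x, sum(a & b for a, b in zip(x, secret)) & 1))
s_hat = learn_parity(samples, d)
# The learned parity agrees with every training example.
print(all((sum(a & b for a, b in zip(x, s_hat)) & 1) == y
          for x, y in samples))  # True
```

The talk's point is that this O(d)-sample central-model algorithm has no analogue in the shuffle or pan-private models, where 2^{Ω(d)} samples are required.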

Speaker:  Hoeteck Wee, NTT Research
Title: Breaking the sqrt(N) Barrier in Pairing-Based Broadcast Encryption

Abstract: We present the first pairing-based ciphertext-policy attribute-based encryption (CP-ABE) scheme for the class of degree 3 polynomials with compact parameters: the public key, ciphertext and secret keys comprise O(n) group elements, where n is the input length for the function. As an immediate corollary, we obtain a pairing-based broadcast encryption scheme for N users with O(N^{1/3})-sized parameters, breaking the long-standing sqrt{N} barrier for pairing-based broadcast encryption. All of our constructions achieve adaptive security against unbounded collusions, and rely on the (bilateral) $k$-Lin assumption in prime-order bilinear groups.

Speaker:  Zhengzhong Jin, Johns Hopkins University
Title:  SNARGs for P from LWE

Abstract: We provide the first construction of a succinct non-interactive argument (SNARG) for *all* polynomial time deterministic computations based on standard assumptions. For T steps of computation, the size of the proof and the common random string (CRS) as well as the verification time are poly-logarithmic in T. The security of our scheme relies on the hardness of the Learning with Errors (LWE) problem against polynomial-time adversaries. Previously, SNARGs based on standard assumptions could support bounded-depth computations and required sub-exponential hardness assumptions [Jawale-Kalai-Khurana-Zhang, STOC’21].

Along the way, we also provide the first construction of non-interactive batch arguments for NP based solely on the LWE assumption.

Based on the joint work with Arka Rai Choudhuri and Abhishek Jain.

Speaker: Yael Tauman Kalai, MSR and MIT
Title:  Somewhere Statistical Soundness, Post-Quantum Security, and SNARGs


We introduce the notion of a somewhere statistically sound (SSS) interactive argument, which is a hybrid between a statistically sound proof and a computationally sound proof. We show that Kilian’s protocol, instantiated with a computationally non-signaling PCP (Brakerski, Holmgren, and Kalai, STOC 2017) and a somewhere statistically binding hash family (Hubacek and Wichs, ITCS 2015), is an SSS argument. We observe that the first two messages of Kilian’s protocol, instantiated with these primitives, form a sound instantiation of the BMW heuristic (Kalai, Raz, Rothblum, STOC 2013). Armed with this observation, we show how to efficiently convert any succinct non-interactive argument (SNARG) for BatchNP into a SNARG for any language that has a non-signaling PCP, including any deterministic language and any language in NTISP, using a somewhere statistically binding hash family.

Secondly, we show that the soundness of SSS arguments can be proved in a straight-line manner, implying that they are also post-quantum sound if the underlying assumption is post-quantum secure. This provides a straightforward proof that Kilian’s protocol, instantiated this way, is post-quantum sound under the post-quantum hardness of LWE (though we emphasize that a computationally non-signaling PCP exists only for deterministic languages and for specific subclasses of non-deterministic languages such as NTISP, but not for all of NP).  We put forward a natural conjecture that constant-round SSS arguments can be soundly converted into non-interactive arguments via the Fiat-Shamir transformation. We argue that SSS arguments evade the current Fiat-Shamir counterexamples, including the one for Kilian’s protocol (Bartusek, Bronfman, Holmgren, Ma and Rothblum, TCC 2019) by requiring additional properties from both the hash family and the PCP.

This is joint work with Vinod Vaikuntanathan and Rachel Zhang.

Speaker:  Rishab Goyal, MIT
Title: Dynamic Collusion Bounded Functional Encryption from Identity-Based Encryption

Abstract: Functional Encryption is a powerful notion of encryption in which each decryption key is associated with a function $f$ such that decryption recovers the function evaluation $f(m)$. Informally, security states that a user with access to function keys $\mathsf{sk}_{f_1}, \mathsf{sk}_{f_2}, \ldots$ (and so on) can only learn $f_1(m), f_2(m), \ldots$ (and so on) but nothing more about the message. The system is said to be $q$-bounded collusion resistant if the security holds as long as an adversary gets access to at most $q = q(\lambda)$ function keys. A major drawback of such \emph{statically} bounded collusion systems is that the collusion bound $q$ must be declared at setup time and is fixed for the entire lifetime of the system.
We initiate the study of \emph{dynamically} bounded collusion resistant functional encryption systems which provide more flexibility in terms of selecting the collusion bound, while reaping the benefits of statically bounded collusion FE systems (such as quantum resistance, simulation security, and general assumptions).
Briefly, the virtues of a dynamically bounded scheme can be summarized as:

1) [Fine-grained individualized selection.] It lets each encryptor select the collusion bound by weighing the trade-off between performance overhead and the amount of collusion resilience.
2) [Evolving encryption strategies.] Since the system is no longer tied to a single collusion bound, it allows dynamically adjusting the desired collusion resilience based on any number of evolving factors, such as the age of the system or the number of active users.
3) [Ease and simplicity of updatability.] None of the system parameters have to be updated when adjusting the collusion bound. That is, the same key $\mathsf{sk}_f$ can be used to decrypt ciphertexts for collusion bound $q = 2$ as well as $q = 2^\lambda$.

We construct such a dynamically bounded functional encryption scheme for the class of all polynomial-size circuits under the general assumption of Identity-Based Encryption. 

Joint work with Rachit Garg, George Lu, and Brent Waters.
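To make the interface concrete, here is a minimal and deliberately insecure Python mock of a dynamically bounded FE API. All class and method names are hypothetical; the only point being illustrated is the abstract's interface property: the encryptor picks the collusion bound q per ciphertext, while a single function key sk_f decrypts under any bound.

```python
class ToyDynamicFE:
    """Toy mock of the dynamically bounded FE *interface* only.
    It provides NO security whatsoever: plaintexts are stored in the clear."""

    def setup(self):
        return "mpk", "msk"

    def keygen(self, msk, f):
        # sk_f is independent of any collusion bound q.
        return ("sk", f)

    def encrypt(self, mpk, m, q):
        # The encryptor selects the collusion bound q at encryption time.
        return ("ct", m, q)

    def decrypt(self, sk_f, ct):
        _, f = sk_f
        _, m, q = ct
        return f(m)  # decryption reveals only the function evaluation f(m)


fe = ToyDynamicFE()
mpk, msk = fe.setup()
sk = fe.keygen(msk, lambda m: m % 7)
ct_small = fe.encrypt(mpk, 100, q=2)       # tiny collusion bound
ct_huge = fe.encrypt(mpk, 100, q=2**64)    # huge bound, same key still works
assert fe.decrypt(sk, ct_small) == fe.decrypt(sk, ct_huge) == 100 % 7
```

The same `sk` decrypting both `ct_small` and `ct_huge` mirrors virtue (3) above: no parameter updates are needed when the collusion bound changes.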

When: Friday, Feb 12, 2021.

Where: Virtual: A zoom link will be sent to the e-mail list. If you do not receive it, e-mail the organizers to ask for one.

Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.


10:30 – 11:30. Nick Spooner, BU
Proof-Carrying Data without Succinct Arguments

11:45 – 12:45. Dima Kogan, Stanford
Private Information Retrieval with Sublinear Online Time

12:45 – 1:30. Lunch
1:30 – 3:15. Alex Bredariol Grilo (CNRS/Sorbonne)
and James Bartusek (UC Berkeley)
(double feature with 15 min break in the middle)
Secure Computation is in MiniQCrypt

3:30 – 4:30. Justin Holmgren, NTT Research
Error Correcting Codes for Uncompressed Messages



Speaker: Nick Spooner, BU
Title: Proof-Carrying Data without Succinct Arguments

Abstract: Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Prior approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this talk I will describe how to obtain PCD without relying on SNARKs. In particular, we construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size proofs) that has a split accumulation scheme, which is a weak form of accumulation that we introduce. We then exploit this new framework to achieve a more efficient PCD construction, by giving an accumulation scheme for a non-interactive argument of knowledge for R1CS with constant verification time. Our results are supported by a modular and efficient implementation.

Speaker: Dima Kogan (Stanford)
Title: Private Information Retrieval with Sublinear Online Time

Speaker: Alex Bredariol Grilo (CNRS/Sorbonne) and James Bartusek (UC Berkeley)
Title: Secure Computation is in MiniQCrypt

Abstract: MiniQCrypt is a world where quantum-secure one-way functions exist, and quantum communication is possible. This talk will show that simulation-secure oblivious transfer is in MiniQCrypt, and thus that secure multi-party computation of any quantum functionality is in MiniQCrypt. The main technical contribution is a construction of extractable and equivocal bit commitment from quantum-secure one-way functions, which is used to instantiate the Bennett-Brassard-Crépeau-Skubiszewska (CRYPTO ’91) framework to yield simulation-secure OT.

Speaker: Justin Holmgren (NTT Research)
Title: Error Correcting Codes for Uncompressed Messages

Abstract: Most types of messages we transmit (e.g., video, audio, images, text)
are not fully compressed, since efficient compression algorithms
can fail to reach the information-theoretic limit. In this
work, we study the transmission of partially compressed messages
over a noisy channel, noting that these messages may have additional
structure that is unused by standard error correcting codes.

We introduce a model in which “well-formed” messages comprise a
small fraction of all strings, and are recognizable. In this model,
we construct a (probabilistic) encoding procedure that achieves
better tradeoffs between data rates and error-resilience (compared to
just applying a standard error correcting code).

Surprisingly, our techniques also yield better tradeoffs in the standard setting
where all binary strings are valid messages.

When: Thursday and Friday, July 30-31.
How: A Zoom link was sent to the e-mail list. If you did not receive it, e-mail the organizers to ask for one.
Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.

Program (All times are in ET)

Thursday, July 30
12:15–12:30p Virtual Coffee / Hangout
12:30–1:30p Venkata Koppula, Weizmann Institute of Science
Chosen Ciphertext Security from Injective Trapdoor Functions
1:45–2:45p Alex Lombardi, MIT
Fiat-Shamir for Repeated Squaring with Applications to PPAD-Hardness and VDFs
3:00–4:00p Lisa Yang, MIT
PPAD-Hardness and Delegation with Unambiguous Proofs
Friday, July 31
12:30-1:30p Mark Bun, Boston University
An Equivalence between Private Classification and Online Prediction
1:45-2:45p Jonathan Shafer, UC Berkeley
Learning vs. Verifying: Interactive Proofs for Verifying Machine Learning
3:00-4:00p Mark Zhandry, Princeton
Local Quantum Cryptography

Chosen Ciphertext Security from Injective Trapdoor Functions
Venkata Koppula

Abstract: In this talk, I will present a construction of chosen ciphertext secure public-key encryption from (injective) trapdoor functions. This construction is black box and assumes no special properties (e.g. “lossy”, “correlated product secure”) of the trapdoor function.

Joint work with Susan Hohenberger and Brent Waters [ paper ]

Fiat-Shamir for Repeated Squaring with Applications to PPAD-Hardness and VDFs
Alex Lombardi

Abstract: The Fiat-Shamir transform is a methodology for compiling a (public-coin) interactive proof system for a language L into a non-interactive argument system for L. In this work, we consider the problem of compiling Pietrzak’s succinct interactive proof system (ITCS 2019) for the iterated squaring problem. We construct a hash function family (with evaluation time roughly 2^{\lambda^\epsilon}) that guarantees the soundness of Fiat-Shamir for this protocol assuming the sub-exponential (2^{-n^{1-\epsilon}})-hardness of the n-dimensional learning with errors problem. (The latter follows from the worst-case 2^{n^{1-\epsilon}} hardness of lattice problems.) More generally, we extend the “bad-challenge function” methodology of Canetti et al. for proving the soundness of Fiat-Shamir to a class of protocols whose bad-challenge functions are not efficiently computable.

As a corollary (following Choudhuri et al., ePrint 2019 and Ephraim et al., EUROCRYPT 2020), we construct hard-on-average problems in the complexity class CLS \subset PPAD under the 2^{\lambda^\epsilon}-hardness of the repeated squaring problem and the 2^{-n^{1-\epsilon}}-hardness of the learning with errors problem. Under the additional assumption that the repeated squaring problem is “inherently sequential”, we also obtain a Verifiable Delay Function (Boneh et al., EUROCRYPT 2018) in the standard model. Finally, we give additional PPAD-hardness and VDF instantiations demonstrating a broader tradeoff between the strength of the repeated squaring assumption and the strength of the lattice assumption.

Joint work with Vinod Vaikuntanathan.
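For readers unfamiliar with the transform itself, its core mechanic can be sketched in a few lines of Python: the public-coin verifier's random challenge is replaced by a hash of the transcript so far. SHA-256 is used here purely for illustration; the talk's result concerns a specially constructed hash family with evaluation time roughly 2^{\lambda^\epsilon}, not an off-the-shelf hash.

```python
import hashlib


def fs_challenge(transcript: bytes, space: int) -> int:
    """Derive a round challenge by hashing the transcript so far,
    in place of the public-coin verifier's random coins."""
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % space


# Challenges are deterministic and bound to all prior prover messages:
c1 = fs_challenge(b"statement|msg1", 2**128)
c2 = fs_challenge(b"statement|msg1|" + c1.to_bytes(16, "big") + b"|msg2", 2**128)
assert fs_challenge(b"statement|msg1", 2**128) == c1  # same transcript, same challenge
assert 0 <= c2 < 2**128
```

Soundness of the transform hinges on whether hashing can safely stand in for fresh verifier randomness, which is exactly what the bad-challenge-function methodology analyzes.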

Updatable Delegation and PPAD-Hardness
Lisa Yang

Abstract: In this work, we show the hardness of finding a Nash equilibrium, a PPAD-complete problem, based on the quasi-polynomial hardness of the decisional assumption on bilinear groups introduced by Kalai, Paneth and Yang [STOC 2019]. Towards this goal, we construct an unambiguous and updatable delegation scheme that is of independent interest.

This delegation scheme is for super-polynomial time deterministic computations and is publicly verifiable and non-interactive in the common reference string (CRS) model. It is unambiguous meaning that it is hard to find two different proofs for the same statement. It is updatable meaning that given a proof for the statement that a Turing machine reaches some configuration C in T steps, one can efficiently update it into a proof for the statement that the machine reaches the next configuration C’ in T+1 steps.

Joint work with Yael Kalai and Omer Paneth.

An Equivalence between Private Classification and Online Prediction
Mark Bun

Abstract: Differential privacy enables rich statistical analyses on data while provably protecting individual-level privacy. The last decade of research has shown that, at least in principle, a number of fundamental statistical tasks are compatible with differential privacy. However, privacy-preserving analyses often require additional complexity over their non-private counterparts, for instance, in terms of the number of data samples one needs to collect in order to get accurate results. In fact, some infinite concept classes that are “easy” to learn in standard computational learning theory become impossible to learn under differential privacy using any finite number of samples.

In this talk, we will describe these impossibility results for private learning and place them in context. We will present a recent characterization of the privately learnable concept classes as exactly those that are learnable in Littlestone’s mistake-bound model of online learning. This equivalence opens new connections between privacy, combinatorics, and stability in learning theory.

Based primarily on work with contributions from Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran, Kobbi Nissim, Uri Stemmer, and Salil Vadhan.

Learning vs. Verifying: Interactive Proofs for Verifying Machine Learning
Jonathan Shafer

Abstract: This talk will address the following question: Assume an untrusted party claims to have used machine learning to learn a classifier for a certain task. In what cases is it cheaper to verify that the proposed classifier is good (in the PAC sense), compared to independently learning a new classifier from scratch?

If verification is significantly cheaper than learning, that could have important practical implications for delegation of machine learning tasks in commercial settings, and might also have consequences for verification of scientific publications, and for AI safety. Two results will be discussed: (1) There exists a learning problem where verification requires quadratically less random samples than are required for learning. (2) The broad class of Fourier-sparse functions (which includes decision trees, for example) can be efficiently verified using random samples only, even though it is widely believed to be impossible to efficiently learn this class from random samples alone.

Jonathan is a PhD student at UC Berkeley. This talk covers joint work with Shafi Goldwasser (UC Berkeley), Guy Rothblum (Weizmann Institute of Science), and Amir Yehudayoff (Technion-IIT).

Local Quantum Cryptography
Mark Zhandry

Abstract: “Quantum cryptography” uses a quantum communication channel to achieve new functionalities. For example, the unclonability of quantum messages means an attacker cannot both record quantum communication and also pass it along to the recipient. This leads to a form of eavesdropping detection that is central to quantum key distribution.

In this talk, I will discuss some recent work in an emerging area that I call “local quantum cryptography.” Here, all communication channels remain classical, but the various parties leverage local quantum computing to achieve never-before-possible functionalities. Functionalities we achieve include unclonable cryptographic keys, quantum money with classical communication, rate-limited signatures, and more. The central challenge in this area is to take advantage of features of quantum mechanics such as unclonability, even though all “quantumness” is happening locally on the adversary’s device and therefore is entirely under the adversary’s control.

Based on joint works with Ryan Amos, Marios Georgiou, and Aggelos Kiayias

When: Friday, Nov 1, 2019.

Where: Microsoft New England Research and Development Center
Abigail Adams room on floor M.
1 Memorial Drive, Cambridge MA 02142.

Organizers: Ran Canetti, Yael Kalai, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.


9:30 – 10:00. Coffee/Breakfast
10:00 – 11:00. Fermi Ma, Princeton University
On the (In)security of Kilian-Based SNARGs
11:15 – 12:15. Giulio Malavolta, Carnegie Mellon University
Homomorphic Time-Lock Puzzles
12:15 – 1:30. Lunch
1:30 – 2:30. Shai Halevi, Algorand Foundation
Compressible FHE with Applications to PIR
2:45 – 3:45. Noah Stephens-Davidowitz, MIT
Will lattice-based cryptography be broken in practice?
4:00 – 5:00. Prashant Vasudevan, UC Berkeley
Formalizing Data Deletion in the Context of the Right to be Forgotten


Speaker: Fermi Ma, Princeton University
Title: On the (In)security of Kilian-Based SNARGs

Abstract: The Fiat-Shamir transform applied to Kilian’s protocol (and its extension to interactive oracle proofs) is at the heart of both theoretical results as well as practical implementations of highly efficient non-interactive proof systems (e.g., SNARKs and STARKs). In this work, we demonstrate significant obstacles to establishing the soundness of this approach. We exhibit a particular (contrived) interactive oracle proof (IOP) for NP for which Fiat-Shamir is unsound for any concrete hash function and any Merkle tree commitment. We also show that if Kilian’s original protocol for PCPs is instantiated with a particular (contrived) Merkle tree commitment and any “natural” PCP, then the Fiat-Shamir transform cannot be soundly applied using any concrete hash function. Put together, our results suggest that provably-secure PCP/IOP-based SNARGs will require a synergistic choice of the PCP/IOP, the Merkle tree commitment, and Fiat-Shamir hash function.

Joint work with James Bartusek, Liron Bronfman, Justin Holmgren, and
Ron D. Rothblum.

Speaker: Giulio Malavolta, Carnegie Mellon University
Title: Homomorphic Time-Lock Puzzles

Abstract: Time-lock puzzles allow one to encrypt messages for the future, by efficiently generating a puzzle with a solution s that remains hidden until time T has elapsed. The solution is required to be concealed from any algorithm running in (parallel) time less than T. We put forth the notion of homomorphic time-lock puzzles, where one can evaluate functions over puzzles without solving them, i.e., one can manipulate a set of puzzles with solutions (s1,…,sn) to obtain a puzzle that solves to f(s1,…,sn), for any function f. We give a construction of homomorphic time-lock puzzles for any polynomially-computable function based on standard cryptographic assumptions. Then we discuss how homomorphic time-lock puzzles overcome the limitations of classical time-lock puzzles by proposing new protocols for applications of interest, such as e-voting, multi-party coin flipping, and fair contract signing.

Based on a joint work with Sri Aravinda Krishnan Thyagarajan.
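As a toy illustration of the kind of repeated-squaring (RSW-style) puzzle that such constructions build on, the sketch below generates a puzzle quickly using the factorization trapdoor, while solving without the trapdoor takes T sequential squarings. The parameters are far too small to be secure; this is illustrative only and is not the homomorphic scheme from the talk.

```python
# Toy RSW-style time-lock puzzle (parameters far too small to be secure).
p, q = 1009, 1013          # secret factorization = the generation trapdoor
N = p * q
phi = (p - 1) * (q - 1)
T = 1000                   # delay parameter: number of sequential squarings
s = 42                     # secret to lock away
a = 5                      # base with gcd(a, N) = 1

# Fast generation with the trapdoor: reduce the exponent 2^T mod phi(N).
mask = pow(a, pow(2, T, phi), N)   # a^(2^T) mod N, computed cheaply
c = (s * mask) % N                 # the puzzle is (N, a, T, c)

# Slow solving without the trapdoor: T sequential squarings.
y = a % N
for _ in range(T):
    y = (y * y) % N                # after the loop, y = a^(2^T) mod N
recovered = (c * pow(y, -1, N)) % N
assert recovered == s
```

As a side note, if each puzzle uses a fresh random base, multiplying two such ciphertexts yields a puzzle for the product of the secrets; a very weak homomorphism compared to the fully homomorphic evaluation achieved in the paper.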

Speaker: Shai Halevi, Algorand Foundation
Title: Compressible FHE with Applications to PIR

Homomorphic encryption (HE) is often viewed as impractical, both in communication and computation. Here we provide an additively homomorphic encryption scheme based on (ring) LWE with nearly optimal rate (1-\epsilon for any \epsilon>0). Moreover, we describe how to compress many Gentry-Sahai-Waters (GSW) ciphertexts (e.g., ciphertexts that may have come from a homomorphic evaluation) into (fewer) high-rate ciphertexts.

Using our high-rate HE scheme, we are able for the first time to describe a single-server private information retrieval (PIR) scheme with sufficiently low computational overhead so as to be practical for large databases. Single-server PIR inherently requires the server to perform at least one bit operation per database bit, and we describe a rate-(4/9) scheme whose computation is not much worse than this inherent lower bound. In fact it is probably less than whole-database AES encryption: specifically, about 2.3 mod-q multiplications per database byte, where q is about 50 to 60 bits. Asymptotically, the computational overhead of our PIR scheme is $\tilde{O}(\log\log k + \log\log\log N)$, where $k$ is the security parameter and $N$ is the number of database files, which are assumed to be sufficiently large.

Joint work with Craig Gentry.

Speaker: Noah Stephens-Davidowitz, MIT
Title:  Will lattice-based cryptography be broken in practice?

Lattice-based cryptography is currently being standardized for widespread use, in anticipation of the development of quantum computers. I.e., we’re going to use this to secure the internet. So, how secure is it really?
I’ll discuss the intimate relationship between lattice-based cryptography and worst-case lattice problems and present both new lower bounds and new algorithmic progress for these problems.

Based on joint works with Divesh Aggarwal, Zvika Brakerski, Huck Bennett, Sasha Golovnev, and Vinod Vaikuntanathan.

Speaker: Prashant Vasudevan, UC Berkeley
Title:  Formalizing Data Deletion in the Context of the Right to be Forgotten

  The right of an individual to request the deletion of their personal data by an entity that might be storing it — referred to as “the right to be forgotten” —  has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled — of what it means for such personal data to be deleted.
    In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals’ data when a request is made of it to delete some of this data. While it cannot be viewed as expressing the statements of current laws (since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require.
    Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.

Joint work with Sanjam Garg and Shafi Goldwasser

When: Friday, May 3, 2019.

Where: Northeastern University, ISEC Building (805 Columbus Ave) Room 655.

Organizers: Ran Canetti, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.


9:30 – 10:00. Coffee/Breakfast
10:00 – 11:00. Benedikt Bünz, Stanford University
Verifiable Delay Functions and Succinct Proofs in Groups of Unknown Order
11:15 – 12:15. Dakshita Khurana, Microsoft Research
Quantum Advantage and Classical Cryptography
12:15 – 1:30. Lunch
1:30 – 2:30. Prabhanjan Ananth, MIT
Secure MPC with Honest Majority: Minimal Rounds and Efficiency Optimizations
3:00 – 4:00. Aloni Cohen, MIT
Towards Formalizing the GDPR Notion of Singling Out


Speaker: Benedikt Bünz, Stanford University
Title: Verifiable Delay Functions and Succinct Proofs in Groups of Unknown Order

We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.

Further, we survey new VDF constructions by Pietrzak and Wesolowski in groups of unknown order. Building on Wesolowski’s techniques, we build succinct proofs in groups of unknown order. We build Sigma protocols that have good soundness in the challenge space and communication complexity independent of the witness; this circumvents a previous impossibility result. These proofs have applications to accumulators and vector commitments.
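The Wesolowski-style verification idea can be sketched concretely. The toy Python below uses an RSA modulus with known factorization and a fixed prime challenge, both of which defeat the security purpose; it only illustrates the proof equation, namely that checking pi^l * x^r = y is far cheaper than redoing T sequential squarings.

```python
# Toy Wesolowski-style VDF check (illustrative only: the modulus's
# factorization is known here, and l is fixed rather than derived
# by Fiat-Shamir, so this is NOT a secure VDF).
N = 1009 * 1013
x, T = 3, 64

# Evaluation: T sequential squarings -- the intended "delay".
y = x % N
for _ in range(T):
    y = (y * y) % N            # y = x^(2^T) mod N

# Proof for a prime challenge l: write 2^T = qexp * l + r.
l = 257
qexp, r = divmod(2**T, l)
pi = pow(x, qexp, N)           # the succinct proof

# Verification: two modular exponentiations instead of T squarings,
# since pi^l * x^r = x^(qexp*l + r) = x^(2^T) = y (mod N).
assert (pow(pi, l, N) * pow(x, r, N)) % N == y
```

The gap between T sequential squarings for evaluation and the two exponentiations for verification is the exponential evaluation/verification gap the abstract refers to.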


Speaker: Dakshita Khurana, Microsoft Research
Title: Quantum Advantage and Classical Cryptography

We demonstrate applications of quantum advantage to building new protocols for classical cryptography. In particular, we observe that in many applications, the oft-used technique of leveraging (time) complexity can be replaced with leveraging quantum advantage instead. In this talk, I will describe how this observation can be combined with additional techniques to obtain:
1. Non-interactive non-malleable commitments without setup in the plain model, under well-studied assumptions.
2. Two-message (privately verifiable) WI arguments based on the polynomial hardness of factoring and quantum polynomial hardness of LWE.

In the first case, I will begin by discussing how to build simple non-interactive non-malleable commitments for O(log log n) tags assuming the sub-exponential hardness of factoring or discrete log, and *quantum* sub-exponential hardness of LWE. Next, I will describe new combinatorial techniques to amplify the number of tags, assuming only the existence of NIWI (non-interactive witness indistinguishable) proofs. These techniques yield the first construction of non-interactive non-malleable commitments w.r.t. replacement for exponentially many tags in the plain model, based on well-studied assumptions.

I will also discuss how the second application (and some others) follow almost immediately from our observation on quantum advantage. This is based on joint work with Yael Tauman Kalai. The paper is available online.

Speaker: Prabhanjan Ananth, MIT
Title: Secure MPC with Honest Majority: Minimal Rounds and Efficiency Optimizations

We present two constructions of two-round secure multiparty computation protocols with honest majority:

– Our first construction satisfies statistical security with abort and can handle any functionality implementable in NC1. All the previous two-round protocols either required computational assumptions or had a weaker corruption threshold. Our construction employs both broadcast and point-to-point channels. We also show how to generically transform this protocol into one that only employs point-to-point channels while satisfying statistical security with selective abort.

– Our second construction can securely compute any functionality implementable in P/poly and has per-party computation cost (|C|+n^2) poly(k), where C is the circuit being securely computed, n is the number of parties, and k is the security parameter. It is based on one-way functions and satisfies semi-honest security. The per-party computation cost of all previous protocols in this setting was |C| n^2 poly(k) or more. With an additional round, we can either upgrade to malicious security or achieve a protocol with total computation cost (|C|+n^2) poly(k).

Based on joint works with Arka Rai Choudhuri (JHU), Aarushi Goel (JHU) and Abhishek Jain (JHU). 

Speaker: Aloni Cohen, MIT
Title: Towards Formalizing the GDPR Notion of Singling Out

There is a significant conceptual gap between legal and mathematical thinking around data privacy. The effect is uncertainty as to which technical offerings adequately match the expectations expressed in legal standards. The uncertainty is exacerbated by a litany of successful privacy attacks, demonstrating that traditional statistical disclosure limitation techniques often fall short of the sort of privacy envisioned by legal standards.

We define predicate singling out, a new type of privacy attack intended to capture the concept of singling out appearing in the General Data Protection Regulation (GDPR).

Informally, an adversary predicate singles out a dataset X using the output of a data release mechanism M(X) if it manages to find a predicate p matching exactly one row in X with probability much better than a statistical baseline. A data release mechanism that precludes such attacks is secure against predicate singling out (PSO secure).

We argue that PSO security is a mathematical concept with legal consequences. Any data release mechanism that purports to “render anonymous” personal data under the GDPR must be secure against singling out, and hence must be PSO secure. We then analyze PSO security, showing that it fails to self-compose. Namely, a combination of $\omega(\log n)$ exact counts, each individually PSO secure, enables an attacker to predicate single out. In fact, the composition of just two PSO secure mechanisms can fail to provide PSO security.

Finally, we ask whether differential privacy and k-anonymity are PSO secure. Leveraging a connection to statistical generalization, we show that differential privacy implies PSO security. However, k-anonymity does not: there exists a simple and general predicate singling out attack under mild assumptions on the k-anonymizer and the data distribution.
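The flavor of the composition failure can be seen in a tiny example. The toy Python below (with made-up data) shows how exact count queries that each match many rows can, in conjunction, pin down exactly one row; this is a simplified illustration of singling out, not the paper's formal attack on individually PSO-secure mechanisms.

```python
# Toy dataset (entirely made up for illustration).
rows = [
    {"zip": "02139", "age": 34, "lang": "fr"},
    {"zip": "02139", "age": 34, "lang": "en"},
    {"zip": "02139", "age": 51, "lang": "fr"},
    {"zip": "02141", "age": 34, "lang": "fr"},
]


def count(pred):
    """An exact count query: how many rows satisfy the predicate?"""
    return sum(1 for r in rows if pred(r))


# Each attribute-level predicate alone matches several rows ...
assert count(lambda r: r["zip"] == "02139") == 3
assert count(lambda r: r["age"] == 34) == 3
assert count(lambda r: r["lang"] == "fr") == 3

# ... but their conjunction matches exactly one row: it singles out.
assert count(lambda r: r["zip"] == "02139" and r["age"] == 34
             and r["lang"] == "fr") == 1
```

An exact count returning 1 for a predicate is precisely the event that predicate singling out formalizes, which is why releasing many exact counts is dangerous even when each looks harmless.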

When: Friday, Mar 8, 2019.

Where:  MIT Stata Center G-882 (Hewlett Room)

Organizers: Ran Canetti, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.


9:30 – 10:00. Coffee/Breakfast
10:00 – 11:00. Zvika Brakerski, Weizmann
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
11:15 – 12:15. Gilad Asharov, JP Morgan AI Research
OptORAMa: Optimal Oblivious RAM
12:15 – 1:30. Lunch
1:30 – 2:30. Rachel Lin, University of Washington
Pseudo Flawed-smudging Generators and its Application to Indistinguishability Obfuscation
2:45 – 3:45. Mohammad Mahmoody, University of Virginia
Coin-tossing, Concentration of Products, and Limits of Robust Learning
4:00 – 5:00. abhi shelat, Northeastern
Threshold Factoring from Factoring Assumptions


Speaker: Zvika Brakerski, Weizmann
Title: Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing

We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires extremely high noise rate of $1/2-1/\poly(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst-case problem. We note that LPN with noise $1/2-1/\poly(n)$ already implies symmetric cryptography.
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP), which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx {\log^2 n}/{n}$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\poly(n)$.
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $SearchBPP^{SZK}$ (i.e.\ reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $NP$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $BPP^{SZK}$).
Joint work with Vadim Lyubashevsky, Vinod Vaikuntanathan and Daniel Wichs.
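For concreteness, an LPN instance with secret s and noise rate τ consists of samples (a, ⟨a, s⟩ + e mod 2) with a uniform and e a Bernoulli(τ) error bit. The sketch below (with illustrative toy parameters) generates such samples; the distinguishing problem the abstract refers to is telling these apart from uniformly random (a, b) pairs.

```python
import random


def lpn_samples(s, m, noise_rate, rng):
    """Return m LPN samples (a, <a, s> + e mod 2) over GF(2),
    where e is 1 with probability noise_rate."""
    out = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in s]
        e = 1 if rng.random() < noise_rate else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out


rng = random.Random(0)
secret = [1, 0, 1, 1]
# Sanity check: with zero noise, every label is the exact parity <a, s>.
for a, b in lpn_samples(secret, 20, 0.0, rng):
    assert b == sum(ai * si for ai, si in zip(a, secret)) % 2
```

With the abstract's extreme noise rate of $1/2-1/\poly(n)$, each label is nearly an unbiased coin, which is what makes even a distinguishing oracle a strong object to reduce to.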

Speaker: Gilad Asharov, JP Morgan AI Research
Title: OptORAMa: Optimal Oblivious RAM

Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC ’87 and J. ACM ’96) is a technique for provably obfuscating programs’ access patterns, such that the access patterns leak no information about the programs’ secret inputs.  To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky’s original ORAM work for statistical security and in a somewhat restricted model (the so called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO ’18) for computational security.
A long standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS ’18) who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions.
One of our building blocks, of independent interest, is a linear-time deterministic oblivious algorithm for tight compaction: given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up at the front. Our $O(n)$ algorithm improves on the best previously known deterministic and randomized algorithms, whose running times are $O(n \cdot\log n)$ and $O(n \cdot\log \log n)$, respectively.
With Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, Enoch Peserico and Elaine Shi.
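The tight compaction functionality can be sketched with a deliberately simple oblivious network: a bubble pass of branchless compare-swaps whose memory access pattern is independent of the mark bits. This toy version costs $O(n^2)$ swaps; the paper's contribution is achieving the same functionality deterministically in $O(n)$, which this sketch does not attempt.

```python
def oblivious_swap(arr, i, j, do_swap):
    """Branchless conditional swap: which cells are touched depends only
    on (i, j), never on do_swap, so the access pattern leaks nothing."""
    a, b = arr[i], arr[j]
    arr[i] = b if do_swap else a
    arr[j] = a if do_swap else b

def tight_compaction(items, marked):
    """Move all marked items to the front via n bubble passes of adjacent
    compare-swaps: O(n^2) work with a data-independent access pattern."""
    n = len(items)
    arr = list(zip(items, marked))
    for _ in range(n):
        for i in range(n - 1):
            # Swap iff an unmarked item sits just before a marked one.
            do = (not arr[i][1]) and arr[i + 1][1]
            oblivious_swap(arr, i, i + 1, do)
    return [x for x, _ in arr]
```

For example, `tight_compaction(['a', 'b', 'c', 'd'], [False, True, False, True])` returns `['b', 'd', 'a', 'c']`: the marked elements move to the front, preserving their relative order.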

Speaker: Rachel Lin, University of Washington
Title: Pseudo Flawed-smudging Generators and its Application to Indistinguishability Obfuscation

We introduce Pseudo Flawed-smudging Generators (PFGs). They are polynomially expanding functions over Zp with polynomially small outputs that satisfy a weak pseudorandomness property. More specifically, the output distribution of a PFG is computationally indistinguishable from a so-called flawed-smudging distribution y ← Y, satisfying that for every B = poly(n)-bounded noise distribution E, the distributions of (e, e+y) and (e', e+y) are statistically close, where e and e' are independent samples from E conditioned on agreeing at a few, o(n), coordinates. Moreover, the statistical closeness only holds with 1/poly(n) probability over the choice of y ← Y. In essence, the output of a PFG computationally hides a small noise vector at all but a few coordinates with noticeable probability.
We use PFGs to construct Indistinguishability Obfuscation (IO) schemes for polynomial-sized circuits. Assuming LWE and the existence of constant-locality pseudorandom generators, we construct IO using PFGs and a Functional Encryption (FE) scheme able to compute them. Instantiating the PFGs with new candidates from [Ananth, Jain, Sahai, ePrint 2018] and instantiating the FE with a new partially hiding FE scheme constructed from bilinear maps, we obtain IO based on i) the security of the new PFG candidates, ii) the SXDH assumption over bilinear maps, iii) LWE, and iv) the security of constant-locality pseudorandom generators.
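The smudging idea behind the definition above can be illustrated with a one-dimensional toy analogue (not the PFG construction itself): adding a large uniform noise y statistically hides a small value e, with statistical distance |e − e'| / B_y between the shifted distributions. The sketch below computes that distance exactly from the supports.

```python
from fractions import Fraction

def smudging_distance(e, e_prime, B_y):
    """Exact statistical distance between e+y and e'+y for y uniform on
    {0, ..., B_y - 1}. The two shifted uniform distributions overlap on
    all but |e - e'| points at each end, so the distance is |e - e'|/B_y.
    Toy one-dimensional analogue of smudging a bounded noise value."""
    support1 = set(range(e, e + B_y))            # support of e  + y
    support2 = set(range(e_prime, e_prime + B_y))  # support of e' + y
    sym_diff = support1.symmetric_difference(support2)
    # Each distribution is uniform with mass 1/B_y per point; the total
    # variation distance is half the mass on the symmetric difference.
    return Fraction(len(sym_diff), 2 * B_y)
```

For example, with e = 3, e' = 7 and smudging bound B_y = 1000, the distance is 4/1000: the small values are nearly hidden once the smudging noise is polynomially larger than the noise bound.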

Speaker: Mohammad Mahmoody, University of Virginia
Title: Coin-tossing, Concentration of Products, and Limits of Robust Learning

A recent active line of work in robust learning studies attacks on learning algorithms through adversarial perturbations that happen during the training phase (i.e., poisoning attacks) or the testing phase (i.e., evasion attacks, a.k.a. adversarial examples). In this talk, I first show the existence of generic information-theoretic poisoning attacks, as well as evasion attacks, for certain theoretically natural input distributions (e.g., the uniform distribution), based on classical results about concentration of measure in certain metric probability spaces (known as normal Lévy families). I will then show how to make some of these attacks run in polynomial time by proving computational variants of measure concentration for any product space under Hamming distance, using new (polynomial-time) attacks on cryptographic coin-tossing protocols.
Based on joint works with Saeed Mahloujifar and Dimitrios Diochnos from NeurIPS’18, AAAI’19 and ALT’19.
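The measure-concentration phenomenon underlying these attacks can be checked numerically on the Boolean cube: take the majority set (measure about 1/2 under the uniform distribution) and compute the measure of its Hamming $d$-neighborhood exactly via binomial sums. The majority set is just a convenient illustrative example, not the sets arising in the attacks.

```python
from math import comb

def blowup_measure(n, d):
    """Measure of the Hamming d-neighborhood of the majority set
    S = {x in {0,1}^n : weight(x) >= ceil(n/2)} under the uniform
    distribution on {0,1}^n. A point is within distance d of S iff
    its weight is at least ceil(n/2) - d (flip at most d zeros)."""
    thresh = (n + 1) // 2
    lo = max(0, thresh - d)
    return sum(comb(n, k) for k in range(lo, n + 1)) / 2 ** n
```

With n = 100 the majority set has measure about 0.54, yet its neighborhood at distance 10 (about sqrt(n)) already has measure above 0.98: almost every point is an O(sqrt(n))-perturbation away from the set, which is the leverage the perturbation attacks exploit.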

Speaker: abhi shelat, Northeastern
Title: Threshold Factoring from Factoring Assumptions

The problem of jointly generating an RSA modulus N has been well studied since Boneh and Franklin first proposed a solution in 1996.  In this talk, we discuss our latest multi-party protocol for this problem as well as interesting open directions that could improve our solution.  Our solution requires only the assumption of OT and can thus be instantiated from the factoring assumption alone; we discuss the costs of this restriction by studying the benefits and costs of protocols that rely on homomorphic primitives. We also discuss new motivation for this problem arising from the use of an RSA modulus N in the construction of a verifiable delay function (VDF), and the use of VDFs in consensus protocols.

This is joint work with Megan Chen, Ran Cohen, Jack Doerner, Yash Kondi,  Eysa Lee, Schuyler Rosefield as well as Carmit Hazay and Muthu Venkitasubramaniam from Ligero.