Friday, Dec 15 at Northeastern

When: Friday, December 15.

Where: Northeastern University, ISEC Building (805 Columbus Ave).
Talks in Room 136. Coffee + lunch in Room 655.

Attendance is free, as usual, but please register by filling in this rather short Google form: https://goo.gl/forms/PkOmXELG8xPqwG812

Organizers: Ran Canetti, Yael Kalai, Ron Rothblum, Vinod Vaikuntanathan and Daniel Wichs.

Thanks: NSF MACS Project for their generous support.

Program:

9:30 – 10:00. Coffee/Welcome (Room 655)
10:00 – 11:00. Guy Rothblum, Weizmann:
Calibration for the Masses
11:15 – 12:15. Ilan Komargodski, Cornell Tech:
How to Share a Secret: infinitely, dynamically and robustly
12:15 – 1:30. Lunch (Room 655)
1:30 – 2:30. Fabrice Benhamouda, IBM:
k-Round MPC from k-Round OT via Garbled Interactive Circuits
2:45 – 3:45. Adam Smith, BU:
Learning with Fully Distributed Private Protocols

Abstracts:

Speaker: Guy Rothblum, Weizmann Institute of Science
Title: Calibration for the Masses

Abstract: As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. The output of an algorithm can be discriminatory for many reasons, most notably: (1) the data used to train the algorithm might be biased (in various ways) to favor certain populations over others; (2) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. This work focuses on the latter concern.

We develop and study multicalibration — a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a specified class of computations. We think of the class as being quite rich; in particular, it can contain many overlapping subgroups of a protected group.

We show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. Along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw connections to the theory of agnostic learning.

Based on joint work with Úrsula Hébert-Johnson, Michael P. Kim and Omer Reingold.
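
To make the notion concrete, here is a minimal sketch of how one might measure calibration violations across an explicit collection of overlapping subgroups. This is my own illustration, not the paper's algorithm; the binning scheme and all names are assumptions.

    import numpy as np

    def calibration_error(predictions, outcomes, group_mask, n_bins=10):
        # Weighted average over prediction bins of |mean outcome - mean prediction|,
        # restricted to one subgroup. (Illustrative metric, not the paper's exact one.)
        errs, weights = [], []
        for b in range(n_bins):
            lo, hi = b / n_bins, (b + 1) / n_bins
            in_bin = group_mask & (predictions >= lo) & (predictions < hi)
            if not in_bin.any():
                continue
            errs.append(abs(outcomes[in_bin].mean() - predictions[in_bin].mean()))
            weights.append(in_bin.sum())
        return np.average(errs, weights=weights) if errs else 0.0

    def max_violation(predictions, outcomes, subgroups):
        # Multicalibration asks for this worst case over the class to be small.
        return max(calibration_error(predictions, outcomes, g) for g in subgroups)

    rng = np.random.default_rng(0)
    preds = rng.uniform(size=10_000)
    truth = rng.binomial(1, preds)  # calibrated by construction, so violations are tiny
    groups = [rng.random(10_000) < 0.5, rng.random(10_000) < 0.3]  # overlapping subgroups
    print(max_violation(preds, truth, groups))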

Speaker: Ilan Komargodski, Cornell Tech
Title: How to Share a Secret: infinitely, dynamically and robustly

Abstract: Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best-known example is the k-threshold access structure, where the qualified subsets are those of size at least k. When k=2 and there are n parties, there are schemes where the size of the share each party gets is roughly log(n) bits, and this is tight even for secrets of 1 bit. In these schemes, the number of parties n must be given to the dealer in advance.

We consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the t-th arriving party as small a share as possible, as a function of t. We present a scheme for general access structures, and several schemes for variants of the k-threshold access structure in which, at any point in time, some bounded number of parties can recover the secret. Lastly, we discuss other classical notions, such as robustness, and adapt them to the unbounded setting.
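
For background, here is a minimal sketch of the classical bounded-n starting point: standard Shamir k-threshold sharing. The field size and names are my choices, not from the talk.

    import random

    PRIME = 2**61 - 1  # field size; must exceed the number of parties and the secret

    def share(secret, k, n):
        # Random degree-(k-1) polynomial with constant term `secret`;
        # party i receives the evaluation at x = i.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, j, PRIME) for j, c in enumerate(coeffs)) % PRIME
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret from any k shares.
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    shares = share(secret=42, k=3, n=5)
    assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice

Note that n is fixed at sharing time here; removing exactly that requirement is the subject of the talk.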

Speaker: Fabrice Benhamouda, IBM
Title: k-Round MPC from k-Round OT via Garbled Interactive Circuits

Abstract: We present new constructions of round-efficient, or even round-optimal, Multi-Party Computation (MPC) protocols from Oblivious Transfer (OT) protocols. Our constructions establish a tight connection between MPC and OT: in the setting of semi-honest security, for any $k \ge 2$, $k$-round semi-honest OT is necessary and complete for $k$-round semi-honest MPC. In the round-optimal case of $k = 2$, we obtain 2-round semi-honest MPC from 2-round semi-honest OT, resolving the round complexity of semi-honest MPC under a weak and necessary assumption. In comparison, previous 2-round constructions rely on either the heavy machinery of indistinguishability obfuscation or witness encryption, or the algebraic structure of bilinear pairing groups. More generally, for an arbitrary number of rounds $k$, all previous constructions of $k$-round semi-honest MPC require at least OT with $k'$ rounds for $k' \le \lfloor k/2 \rfloor$.
In the setting of malicious security, we show: for any $k \ge 5$, $k$-round malicious OT is necessary and complete for $k$-round malicious MPC. In fact, OT satisfying a weaker notion of delayed-semi-malicious security suffices. In the common reference string model, for any $k \ge 2$, we obtain $k$-round malicious Universally Composable (UC) protocols from any $k$-round semi-malicious OT and non-interactive zero-knowledge. Previous 5-round protocols in the plain model and 2-round protocols in the common reference string model all require algebraic assumptions such as DDH or LWE.
At the core of our constructions is a new framework for garbling interactive circuits. Roughly speaking, it allows for garbling interactive machines that participate in interactions of a special form. The garbled machine can emulate the original interactions, receiving messages sent in the clear (without being encoded using secrets), and reveals only the transcript of the interactions, provided that the transcript is computationally uniquely defined. We show that garbled interactive circuits for the purpose of constructing MPC can be implemented using OT. Along the way, we also propose a new primitive, called a witness selector, that strengthens witness encryption, and a new notion of zero-knowledge functional commitments.
This is joint work with Rachel (Huijia) Lin.
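
As background on the OT-to-garbling connection, here is a toy example of classical Yao-style garbling of a single AND gate, with an idealized OT standing in for a real protocol. This is textbook material, not the paper's garbled interactive circuits; all names are mine.

    import os, hashlib

    def H(*parts):
        return hashlib.sha256(b"".join(parts)).digest()[:16]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def garble_and_gate():
        # One random label per (wire, bit); each table entry encrypts the correct
        # output label under the matching pair of input labels. (A real scheme
        # also permutes the table so that entry positions leak nothing.)
        labels = {w: (os.urandom(16), os.urandom(16)) for w in ("a", "b", "out")}
        table = {(x, y): xor(H(labels["a"][x], labels["b"][y]), labels["out"][x & y])
                 for x in (0, 1) for y in (0, 1)}
        return labels, table

    def ideal_ot(pair, choice):
        # Idealized 1-out-of-2 OT: the receiver learns pair[choice] and nothing
        # else; the sender learns nothing about `choice`. The paper's point is
        # to build round-optimal MPC from real k-round OT in place of this box.
        return pair[choice]

    labels, table = garble_and_gate()
    a_bit, b_bit = 1, 1
    a_label = labels["a"][a_bit]            # garbler sends its own input label
    b_label = ideal_ot(labels["b"], b_bit)  # evaluator obtains its label via OT
    out = xor(H(a_label, b_label), table[(a_bit, b_bit)])
    assert out == labels["out"][a_bit & b_bit]  # AND(1, 1) = 1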

Speaker: Adam Smith, BU
Title: Learning with Fully Distributed Private Protocols

Abstract: I’ll review some recent (and not so recent) results on learning in the “local” model of differential privacy, in which participants retain control of their private data and send only differentially private outputs to a central, untrusted aggregator. I’ll focus, in particular, on the role of interaction in this model: how does restricting our attention to noninteractive protocols limit the class of learning tasks we can accomplish? The model is of both theoretical and practical interest: several recent deployments of differentially private algorithms follow the “local” paradigm.
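
A minimal example of a noninteractive local protocol of the kind the talk considers: classical randomized response for estimating a population mean. The epsilon value and names are illustrative.

    import math, random

    def randomized_response(bit, eps):
        # Report the true bit with probability e^eps / (1 + e^eps):
        # each participant's single message is eps-locally differentially private.
        p = math.exp(eps) / (1 + math.exp(eps))
        return bit if random.random() < p else 1 - bit

    def estimate_mean(reports, eps):
        # The untrusted aggregator debiases: E[report] = (2p-1)*mean + (1-p).
        p = math.exp(eps) / (1 + math.exp(eps))
        observed = sum(reports) / len(reports)
        return (observed - (1 - p)) / (2 * p - 1)

    # 100,000 participants, 30% of whom hold bit 1; one round, no interaction.
    true_bits = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]
    reports = [randomized_response(b, eps=1.0) for b in true_bits]
    print(estimate_mean(reports, eps=1.0))  # close to the true mean 0.3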

