Privacy-Preserving Data Analysis: New Models, New Algorithms, and New Applications

Differential privacy offers a rigorous mathematical definition of privacy, immune to post-processing and composition. This proposal is centered on two research avenues.

The first avenue deals with the trust-free local model of differential privacy, in which computations do not involve a curator (a central, trusted authority); instead, each agent adds noise to her own message, thereby preserving her privacy by herself. Recent works have shown that augmenting the local model with certain trusted primitives, such as a random shuffle of the users' messages or a curator with access to the details of only a few individuals, allows for new capabilities and better privacy-accuracy trade-offs than standard local-model protocols. We therefore propose to study augmentations of the local model with trusted primitives that enable better privacy-accuracy trade-offs.

The second avenue deals with three specific subfields of machine learning and statistics where we believe the study of differentially private algorithms is lacking: clustering, reinforcement learning, and hypothesis testing. We aim to design differentially private algorithms for various tasks in these subfields with optimal privacy-accuracy trade-offs. Since almost all analyses in medicine, econometrics, and the quantitative social sciences rely on hypothesis testing, the design of differentially private hypothesis testers is pivotal for the spread of differential privacy into such data-centric fields.

The grant award is 920,000 NIS.
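
To make the first avenue concrete, the following is a minimal Python sketch (illustrative only, not a deliverable of the proposal) of a local-model protocol augmented with a trusted shuffle: each user privatizes her own bit via classical randomized response with the standard e^eps/(e^eps + 1) flipping probability, a trusted shuffler forwards a random permutation of the messages, and the analyzer debiases the aggregate. All function names and parameters below are illustrative.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Local randomizer: report the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it. Each user runs this on her
    own data, so no trusted curator ever sees the raw bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def shuffle(messages: list[int]) -> list[int]:
    """Trusted shuffler primitive: the analyzer only sees a uniformly
    random permutation of the messages, i.e., their anonymized multiset."""
    permuted = list(messages)
    random.shuffle(permuted)
    return permuted

def estimate_fraction(reports: list[int], epsilon: float) -> float:
    """Analyzer: debias the noisy reports to estimate the true
    fraction of ones among the users."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)

# Example run: 10,000 users, 30% of whom hold the sensitive bit.
random.seed(0)
bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
reports = shuffle([randomized_response(b, epsilon=1.0) for b in bits])
print(estimate_fraction(reports, epsilon=1.0))  # close to 0.3
```

In the plain local model the analyzer would see each report directly; routing the reports through the shuffler reveals only their anonymized multiset, which is exactly the kind of trusted primitive that recent works show can yield better privacy-accuracy trade-offs than standard local-model protocols.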