Writing on Wind and Water: Enforcing File Robustness in the Cloud
Ari Juels, RSA Laboratories
The Cloud abstracts away infrastructural complexity for the benefit of tenants. But to tenants’ detriment, it can also abstract away vital security information. In this talk, I discuss several protocols for remote testing of cloud storage integrity and robustness. These protocols require no detailed infrastructural knowledge and no trust in cloud providers; by executing them, clients or auditors can: (1) verify the integrity of full files without downloading them; (2) distribute files across cloud providers and ensure strong robustness with periodic, inexpensive checks (a cloud analog to RAID); and (3) test whether files are resilient to drive crashes. Joint work with Kevin Bowers, Marten van Dijk, Burt Kaliski, Alina Oprea, and Ron Rivest.
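To make the flavor of (1) concrete, here is a minimal Python sketch of the spot-checking idea that underlies such remote integrity tests. It illustrates the general technique only, not the talk’s actual protocols; the block size, sample count, and the client keeping per-block tags are all invented for the example.

```python
import hmac, hashlib, os, random

BLOCK = 4096                         # invented block size for the example

def tag_blocks(key, blocks):
    """Client MACs each (index, block) pair before upload."""
    return [hmac.new(key, i.to_bytes(8, 'big') + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def verify(key, indices, returned, tags):
    """Check the provider's returned blocks against the stored tags."""
    return all(
        hmac.compare_digest(
            hmac.new(key, i.to_bytes(8, 'big') + blk, hashlib.sha256).digest(),
            tags[i])
        for i, blk in zip(indices, returned))

key = os.urandom(32)
data = os.urandom(BLOCK * 100)       # stand-in for a large file
blocks = [data[i:i+BLOCK] for i in range(0, len(data), BLOCK)]
tags = tag_blocks(key, blocks)       # client state; real PORs shrink this

challenge = random.sample(range(len(blocks)), 10)    # spot-check 10 blocks
response = [blocks[i] for i in challenge]            # provider's answer
assert verify(key, challenge, response, tags)
```

A real proof of retrievability combines such random spot checks with erasure coding, so that passing the check implies, with high probability, that the entire file remains recoverable.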
Cloud Cryptography: A new era for cryptographic research
Giuseppe Ateniese, Johns Hopkins University
Let’s face it: hundreds of elegant cryptographic schemes have been devised in the last 30 years, but only encryption and authentication are deployed in practice. Cloud computing and storage are expected to change this status quo. The Cloud represents an incredible business opportunity, but only if users remain in control of their data. In this talk, we will briefly highlight the opportunities the Cloud offers to cryptographers, and then cover some recent results in the areas of Provable Data Possession and Proxy Re-encryption.
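As background for the proxy re-encryption part, here is a toy sketch in the style of the classic ElGamal-based scheme of Blaze, Bleumer, and Strauss. The group parameters are insecurely tiny, there is no message encoding, and the scheme shown is bidirectional (computing the re-encryption key here uses both secrets, which real unidirectional schemes avoid).

```python
import random

# Toy ElGamal-based proxy re-encryption in the style of BBS'98.
# Insecure illustration: tiny parameters p = 2q + 1, no message encoding.
p, q, g = 1019, 509, 4               # g generates the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)         # (secret key, public key)

def enc(pk, m):                      # Enc_A(m) = (m * g^r, pk_A^r)
    r = random.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pk, r, p)

def rekey(sk_a, sk_b):               # re-encryption key A -> B; this toy
    return (sk_b * pow(sk_a, -1, q)) % q   # needs both secrets (bidirectional)

def reenc(rk, ct):                   # (m*g^r, g^{ar}) -> (m*g^r, g^{br})
    c1, c2 = ct
    return c1, pow(c2, rk, p)

def dec(sk, ct):                     # recover g^r = c2^(1/sk), then divide
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)
    return (c1 * pow(gr, -1, p)) % p

a, pk_a = keygen()
b, pk_b = keygen()
ct = enc(pk_a, 42)
ct_b = reenc(rekey(a, b), ct)        # the proxy never sees 42
assert dec(a, ct) == 42 and dec(b, ct_b) == 42
```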
Cloudy, Without a Chance of Data Loss – Implementation
Kevin Bowers, RSA Laboratories
Cloud storage promises to provide consumers with cheap storage that can be accessed at any time, from anywhere. The appeal of such a system is easy to understand, as is the fear people have of outsourcing the storage of valuable data. Recent failures and losses of data by cloud storage providers have done little to ease these fears. Fear is often a product of the unknown. In this talk we present several newly developed techniques that give consumers better insight into the cloud, as well as ways to leverage this insight to ensure fault tolerance. In particular, we will focus on challenges encountered while implementing Proofs of Retrievability (PORs), the High Availability and Integrity Layer (HAIL), and Remote Assessments of Fault Tolerance (RAFTs). Given the right tools, the benefits of cloud storage can be achieved, without a chance of data loss.
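One ingredient of HAIL-style robustness is dispersing a file across providers with redundancy, so a failed provider can be rebuilt from the others. The sketch below shows only the simplest RAID-5-like XOR parity (one recoverable failure); HAIL itself uses stronger erasure codes combined with integrity checks.

```python
from functools import reduce

def disperse(data, n):
    """Split data into n shares plus one XOR parity share (RAID-5-like)."""
    pad = (-len(data)) % n
    data += b'\x00' * pad
    size = len(data) // n
    shares = [data[i*size:(i+1)*size] for i in range(n)]
    parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*shares))
    return shares + [parity], pad

def recover(shares, lost, pad):
    """Rebuild the share at index `lost` (set to None) from the rest."""
    survivors = [s for s in shares if s is not None]
    shares[lost] = bytes(reduce(lambda x, y: x ^ y, col)
                         for col in zip(*survivors))
    data = b''.join(shares[:-1])     # drop the parity share
    return data[:len(data) - pad] if pad else data

shares, pad = disperse(b'important cloud data', 4)   # 5 providers in all
shares[2] = None                                     # provider 2 fails
assert recover(shares, 2, pad) == b'important cloud data'
```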
Virtual Machine Reset Vulnerabilities and Hedged Cryptography
Tom Ristenpart, UC San Diego
Virtual machines are widely used to, for example, support cloud computing services and improve home desktop security. In this talk I’ll present recent work showing a new class of vulnerabilities, termed VM reset vulnerabilities, that arise due to reuse of VM snapshots. A snapshot is the saved state of a VM, which can include caches, memory, persistent storage, etc. A reset vulnerability occurs when resuming two or more times from the same VM snapshot exposes security bugs. I’ll report on our discovery of several reset vulnerabilities in modern browsers run within commonly used VM managers. These vulnerabilities exploit weaknesses in cryptographic protocols confronted with reused randomness. I’ll then explore a new framework of hedged cryptography, which aims to build into cryptographic protocols mechanisms that provide improved security in the face of reset (or other) vulnerabilities.
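The basic hedging idea fits in a few lines: derive each operation’s coins by hashing the secret key, the message, and whatever system randomness is available, so a snapshot-resumed VM that replays the same randomness still produces distinct, unpredictable coins per message. A minimal sketch (function and variable names are invented for the example):

```python
import hmac, hashlib, os

def hedged_coins(secret_key, message, system_randomness):
    """Per-operation coins from key, message, and whatever entropy the
    (possibly just-resumed) VM supplies. Even if system_randomness
    repeats after a snapshot resume, coins differ across messages and
    stay unpredictable without the key."""
    data = len(message).to_bytes(8, 'big') + message + system_randomness
    return hmac.new(secret_key, data, hashlib.sha256).digest()

sk = os.urandom(32)
vm_rand = os.urandom(16)             # imagine this repeats after a VM reset
r1 = hedged_coins(sk, b'sign me', vm_rand)
r2 = hedged_coins(sk, b'sign me too', vm_rand)
assert r1 != r2                      # distinct despite repeated vm_rand
```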
Founding Cryptography on Tamper-Proof Hardware Tokens?
Vipul Goyal, MSR
A number of works have investigated using tamper-proof hardware tokens as tools to achieve a variety of cryptographic tasks. In particular, there have been works studying obfuscation, UC-secure computation, one-time programs, etc., using tamper-proof hardware. Motivated by the goal of unifying and strengthening these previous notions, we consider the general question of basing secure computation on hardware tokens. We distinguish between using stateful tokens vs. stateless tokens. We show that a surprisingly large number of goals which are impossible to achieve in the “plain” model become feasible if the parties are allowed to generate and exchange tamper-proof hardware tokens. This in particular implies results for program obfuscation with a malicious sender (using hardware tokens), obfuscating machines running interactive protocols, etc. Parts of the work are joint with Yuval Ishai, Mohammad Mahmoody, Amit Sahai, Ramarathnam Venkatesan, and Akshay Wadia.
Sharing Sensitive Information with Privacy
Gene Tsudik, UC Irvine
Modern society is increasingly dependent on, and fearful of, the availability of electronic information. There are numerous realistic scenarios where sensitive data must be (sometimes reluctantly or suspiciously) shared between two or more entities without mutual trust. As often happens, the research community has foreseen the need for mechanisms to enable limited (privacy-preserving) sharing of sensitive information, and a number of effective (if not always efficient) solutions have been proposed. Among them, Private Set Intersection (PSI) techniques are particularly appealing whenever two parties wish to compute the intersection of their respective sets of items without revealing any other information to each other. This talk motivates the need for PSI techniques with various features and privacy properties, and illustrates several concrete PSI protocols that offer appreciably better efficiency than prior work. We also demonstrate their practicality via experimental results obtained from a prototype implementation, and discuss a number of systems issues encountered in developing a toolkit that provides various flavors of PSI.
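For intuition, here is a toy sketch of the classic Diffie-Hellman-style PSI idea (not the talk’s protocols): each party blinds hashed items with a secret exponent, the parties exchange and re-blind, and matching doubly blinded values reveal exactly the intersection. The group parameters are insecurely small, so spurious hash collisions are possible.

```python
import hashlib, random

p, q = 1019, 509                     # toy group p = 2q + 1; insecurely small

def hash_to_group(item):
    h = int.from_bytes(hashlib.sha256(item).digest(), 'big') % p
    return pow(h, 2, p)              # squaring lands in the order-q subgroup

# Each party blinds the hashes of its own items with a secret exponent.
a, b = random.randrange(1, q), random.randrange(1, q)
set_a = [b'alice@x.com', b'carol@x.com']
set_b = [b'carol@x.com', b'dave@x.com']

# Round 1: exchange H(x)^a and H(y)^b. Round 2: each side raises the
# other's values to its own exponent, yielding H(.)^(ab) on both sides.
a_then_b = [pow(pow(hash_to_group(x), a, p), b, p) for x in set_a]
b_then_a = [pow(pow(hash_to_group(y), b, p), a, p) for y in set_b]

common = set(a_then_b) & set(b_then_a)
print([x for x, v in zip(set_a, a_then_b) if v in common])
# expected: [b'carol@x.com'] (with this tiny group, hash collisions can
# occasionally produce spurious matches)
```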
Point Obfuscation and Friends
Ran Canetti, Tel Aviv University
A point program is a program that accepts a single input and rejects all other inputs. A point obfuscator takes a point program (in a certain format) and turns it into a program with the same functionality, but where the program itself does not provide any additional information on the special point that cannot be deduced from simply running the program as a black box. We will review some of the recent research on point obfuscators (POs), including composable POs, non-malleable POs, multi-bit POs (aka Digital Lockers), set obfuscators, hyperplane obfuscators, and applications to strong variants of encryption and digital signatures.
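The folklore hash-based construction conveys the idea in a few lines. A minimal sketch, assuming the hash behaves like a random oracle; provable constructions instead rely on strong number-theoretic assumptions:

```python
import hashlib, os

def obfuscate_point(x):
    """Turn the point program 'accept exactly x' into (salt, H(salt, x)).
    Under a random-oracle-style assumption on the hash, the pair reveals
    nothing about x beyond black-box access to the program."""
    salt = os.urandom(32)
    return salt, hashlib.sha256(salt + x).digest()

def run_obfuscated(program, candidate):
    salt, digest = program
    return hashlib.sha256(salt + candidate).digest() == digest

prog = obfuscate_point(b'secret point')
assert run_obfuscated(prog, b'secret point')
assert not run_obfuscated(prog, b'wrong guess')
```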
Cryptography Against Continuous Memory Attacks
Yevgeniy Dodis, New York University
We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR), if it allows users to refresh their secret keys, using only fresh local randomness, such that:
— The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the “outside world” is neither affected by these key refreshes, nor needs to know about their frequency.
— The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret key of the system, as long as the amount of leaked information is bounded between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.
In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well-established assumption on bilinear groups, called the K-Linear assumption.
Joint work with Kristiyan Haralambiev, Adriana Lopez-Alt, and Daniel Wichs. The extended abstract of the paper will appear at FOCS’10 and can be found at http://eprint.iacr.org/2010/196
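To illustrate just the refresh interface (emphatically not the paper’s constructions), here is a toy in which the secret is held as two additive shares that are re-randomized with fresh local coins. The underlying secret, and hence anything public derived from it, never changes; leakage that is bounded between refreshes sees only ephemeral shares.

```python
import random

q = 2**127 - 1                       # toy modulus (a Mersenne prime)

# The secret s, and any public key derived from it, never changes; only
# its representation as two additive shares is refreshed.
s = random.randrange(q)
share1 = random.randrange(q)
share2 = (s - share1) % q

def refresh(sh1, sh2):
    """Re-randomize with fresh local coins; sh1 + sh2 mod q is invariant."""
    r = random.randrange(q)
    return (sh1 + r) % q, (sh2 - r) % q

for _ in range(5):                   # any number of refreshes
    share1, share2 = refresh(share1, share2)
    assert (share1 + share2) % q == s
```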
Bi-Deniable Encryption
Chris Peikert, Georgia Tech
A *deniable* encryption scheme allows a sender and/or receiver, having already performed some encrypted communication, to produce ‘fake’ but legitimate-looking encryption coins and/or decryption keys that make the ciphertext appear as an encryption of some message other than the ‘true’ one. Deniability is a powerful notion for both theory and practice: apart from its inherent utility for resisting coercion, a deniable scheme is also *noncommitting* (an important property for constructing adaptively secure protocols) and secure under selective-opening attacks. To date, however, known constructions have achieved only limited forms of deniability, requiring at least one party to remain uncoerced, and in some cases using an interactive protocol.
Our main result is a *bideniable* public-key cryptosystem, i.e., one in which both the sender and receiver can simultaneously equivocate; we stress that the scheme is noninteractive and involves no external parties. The construction is based on the (worst-case) hardness of lattice problems.
This is joint work with Adam O’Neill at Georgia Tech.
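The textbook example of such equivocation is the one-time pad, where the receiver can exhibit a fake key that opens the ciphertext to any same-length message; the challenge the talk addresses is achieving this non-interactively for a public-key scheme in which sender and receiver equivocate simultaneously. A minimal sketch of the one-time-pad case:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

true_msg = b'meet at dawn'
key = os.urandom(len(true_msg))
ciphertext = xor(true_msg, key)      # what the coercer has seen

# Under coercion, the receiver exhibits a fake key that opens the very
# same ciphertext to an innocuous message of the same length.
claimed = b'all is fine!'
fake_key = xor(ciphertext, claimed)
assert xor(ciphertext, fake_key) == claimed
assert xor(ciphertext, key) == true_msg
```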
Structured Encryption and Controlled Disclosure
Seny Kamara, Microsoft Research
We consider the problem of encrypting structured data (e.g., a web graph or a social network) in such a way that it can be efficiently and privately queried. For this purpose, we introduce the notion of structured encryption, which generalizes previous work on symmetric searchable encryption (SSE) to the setting of arbitrarily-structured data. In the context of cloud storage, structured encryption allows a client to encrypt data without losing the ability to query and retrieve it efficiently. Another application, which we introduce in this work, is to the problem of controlled disclosure, where a data owner wishes to grant access to only part of a massive data set.
Joint work with Melissa Chase.
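A minimal sketch of the underlying idea for graph data, with all names invented for the example: the client builds an index mapping a PRF label of each node to an encryption of its adjacency list, and a per-node token lets the server answer neighbor queries without learning the node or the list contents. Real schemes treat leakage (e.g., response lengths and query patterns) far more carefully.

```python
import hmac, hashlib, json, os

def prf(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

def stream_xor(key, data):           # toy symmetric cipher (XOR keystream)
    out, ctr = b'', 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, 'big')).digest()
        ctr += 1
    return bytes(x ^ y for x, y in zip(data, out))

def encrypt_graph(k1, k2, adjacency):
    """Index: PRF(k1, node) -> adjacency list encrypted under PRF(k2, node)."""
    return {prf(k1, v.encode()):
            stream_xor(prf(k2, v.encode()), json.dumps(nbrs).encode())
            for v, nbrs in adjacency.items()}

def token_for(k1, k2, node):         # handed to the server per query
    return prf(k1, node.encode()), prf(k2, node.encode())

def query(index, token):             # run by the server
    label, deckey = token
    blob = index.get(label)
    return None if blob is None else json.loads(stream_xor(deckey, blob))

k1, k2 = os.urandom(32), os.urandom(32)
graph = {'alice': ['bob', 'carol'], 'bob': ['alice'], 'carol': []}
index = encrypt_graph(k1, k2, graph)          # outsourced to the cloud
assert query(index, token_for(k1, k2, 'alice')) == ['bob', 'carol']
```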
Rerandomizable Yao Circuits and i-Hop Homomorphic Encryption
Vinod Vaikuntanathan, Microsoft Research
A homomorphic encryption scheme enables computing on encrypted data by means of a public evaluation procedure on ciphertexts, making it possible for anyone to transform an encryption of x into an encryption of f(x) (for any function f). But evaluated ciphertexts may differ from freshly encrypted ones, which brings up the question of whether one can keep computing on evaluated ciphertexts. Namely, is it still possible for anyone to transform the encryption of f(x) into an encryption of g(f(x))?
An i-hop homomorphic encryption is a scheme where the evaluation procedure can be called on its own output up to i times (while still being able to decrypt the result). A multi-hop homomorphic encryption is a scheme that is i-hop for all i. In this work we study i-hop and multi-hop schemes, in conjunction with the properties of circuit privacy (i.e., the evaluation procedure hides the function) and compactness (i.e., the output of evaluation is short). We provide appropriate formal definitions for all these properties and show three constructions:
* We show a DDH-based construction, which is multi-hop and circuit private (but not compact), and where the size of the ciphertext is linear in the size of the composed circuit, but independent of the number of hops.
* More generically, for any constant i, an i-hop circuit-private homomorphic encryption can be constructed from any two-flow protocol for two-party SFE. (Conversely, a two-flow protocol for two-party SFE can be constructed from any 1-hop circuit-private homomorphic encryption.)
* For any polynomial i, an i-hop compact and circuit-private homomorphic encryption can be constructed from a 1-hop compact homomorphic encryption and a 1-hop circuit-private homomorphic encryption, where the size of the public key grows linearly with i. Moreover, a multi-hop scheme can be constructed by making an additional circular-security assumption.
For the first construction, we describe a *re-randomizable* variant of Yao’s garbled circuits. Namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated the circuit (in collusion with the intended recipient) will not be able to recognize it. This construction may be of independent interest.
Joint work with Shai Halevi and Craig Gentry.
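For intuition about hops: with an additively homomorphic scheme such as (toy) Paillier, evaluated ciphertexts have the same form as fresh ones, so Eval composes for free and the scheme is multi-hop for addition. The difficulty the paper addresses arises for circuit-private schemes (e.g., garbled-circuit-based ones) whose evaluated ciphertexts look nothing like fresh ones. A tiny-parameter sketch:

```python
import math, random

# Toy Paillier (tiny primes, illustration only). It is additively
# homomorphic, and evaluated ciphertexts have the same form as fresh
# ones, so Eval can be applied to its own output for any number of hops.
p_, q_ = 1019, 1031
n, n2 = p_ * q_, (p_ * q_) ** 2
lam = math.lcm(p_ - 1, q_ - 1)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * pow(lam, -1, n)) % n

def eval_add(c1, c2):                # homomorphic addition of plaintexts
    return (c1 * c2) % n2

hop1 = eval_add(enc(3), enc(4))      # an encryption of 7
hop2 = eval_add(hop1, enc(5))        # Eval applied to Eval's own output
assert dec(hop2) == 12
```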
Computing with Multivariate Polynomials and Application to Multi-Party Secure Set Intersection
Mariana Raykova, Columbia
We present a robust methodology for computing functions that are efficiently represented as multivariate polynomials, where parties hold different variables as private inputs.
We obtain general protocols that are fully black-box and employ threshold homomorphic encryption (e.g., Paillier over a ring modulo a composite). Our protocols do not assume honest majority, yet are robust and can detect any misbehavior.
As a variant of our general tool, optimized for the specific problem structure, we present a protocol for secure multi-party set intersection. It achieves communication complexity $O(md + d \log^2 d)$ for $m$ parties, each having an input set of maximum size $d$, which is much better than what is achieved by employing generic methods for the problem.
Our method also yields efficient secure multi-party protocols for a large collection of other problems that are naturally and efficiently represented as multivariate polynomials over a field or a ring. These include problems from linear algebra, statistics, logic and set operations, as well as the multi-party variant of the oblivious polynomial evaluation problem.
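For intuition, a set can be encoded as the polynomial whose roots are exactly its elements, so membership and intersection reduce to polynomial evaluation. In the actual protocols these evaluations happen under threshold homomorphic encryption rather than in the clear, as in this plain sketch (field size and names invented for the example):

```python
MOD = 2**61 - 1                      # a Mersenne prime; toy field size

def set_polynomial(elements):
    """Coefficients (low degree first) of prod (x - e): roots = members."""
    coeffs = [1]
    for e in elements:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] = (new[i] - c * e) % MOD      # the  -e * x^i    term
            new[i + 1] = (new[i + 1] + c) % MOD  # the   x^(i+1)    term
        coeffs = new
    return coeffs

def evaluate(coeffs, x):             # Horner's rule, highest degree first
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % MOD
    return result

poly_a = set_polynomial([3, 17, 42])
# x is in the intersection iff it is in the other set and poly_a(x) = 0;
# the secure protocols run this test under threshold homomorphic
# encryption, so neither party ever sees the other's polynomial.
assert [x for x in [42, 5, 3] if evaluate(poly_a, x) == 0] == [42, 3]
```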
Zero Knowledge Sets: Short Commitments and Proofs for Large Sets
Melissa Chase, MSR
Zero Knowledge Sets (introduced by MRK’03) allow a data owner to commit to a large set (or elementary database) without revealing anything about the elements in the set. The data owner can then reply to membership (or key-value) queries and prove that his responses are consistent with the committed set without revealing anything about the other elements of the set. Furthermore, both commitments and proofs can be done without revealing any information about the size of the set (database), which may in many situations be confidential. This also implies that the communication cost between the data owner and querier must be independent of the size of the set.
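For a rough feel of the commit-and-prove interface only, here is a Merkle-tree sketch: a short root commits to the set, and membership proofs are logarithmic sibling paths. This sketch deliberately omits what makes zero-knowledge sets hard: it hides neither the set’s size nor its other elements, and it offers no non-membership proofs, which real ZKS constructions achieve (e.g., via mercurial commitments over a fixed-depth hash tree).

```python
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """All levels of a Merkle tree over the hashed leaves."""
    level = [H(b'leaf:' + l) for l in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]        # duplicate last node if odd
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Sibling path from leaf `index` up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((index % 2, level[index ^ 1]))
        index //= 2
    return path

def verify(root, leaf, path):
    node = H(b'leaf:' + leaf)
    for node_is_right, sibling in path:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

items = sorted([b'carol', b'alice', b'dave', b'bob'])
levels = build_tree(items)
root = levels[-1][0]                           # the short public commitment
proof = prove(levels, items.index(b'carol'))
assert verify(root, b'carol', proof)
```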
Dynamic Symmetric Searchable Encryption
Tom Roeder, MSR
Searchable Symmetric Encryption (SSE) allows a user to generate an encrypted index for terms extracted from a document collection. The encrypted index can be stored with the encrypted documents on a remote host, and the user can generate tokens that the host can use to search the encrypted index without revealing much information. In most applications of SSE, the encrypted documents are updated frequently, hence the encrypted index must also be updated. But published algorithms for SSE do not provide practical update mechanisms.
In this talk, we introduce two new SSE schemes that provide practical update mechanisms; one scheme allows individual terms from a document to be added or deleted, and the other scheme supports adding or deleting all the terms for a document at once. We also describe a prototype implementation of the document-based scheme built over Windows Home Server.
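A minimal sketch of the term-level update interface, with all names invented and the leakage story heavily simplified (the posting sets here expose document ids and update patterns that real schemes protect): the server holds an index from PRF labels to posting sets, and the user’s tokens drive search, add, and delete. The document-based variant would additionally group all of a document’s terms so they can be added or removed at once.

```python
import hmac, hashlib, os

def prf(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

class ToyDynamicSSE:
    """Server-side encrypted inverted index with term-level updates."""
    def __init__(self):
        self.index = {}                        # PRF label -> set of doc ids

    def add(self, token, doc_id):
        self.index.setdefault(token, set()).add(doc_id)

    def delete(self, token, doc_id):
        self.index.get(token, set()).discard(doc_id)

    def search(self, token):
        return self.index.get(token, set())

key = os.urandom(32)                           # held by the user only
token = lambda term: prf(key, term.encode())
server = ToyDynamicSSE()

server.add(token('cloud'), 'doc1')             # index doc1's terms
server.add(token('cloud'), 'doc2')
server.delete(token('cloud'), 'doc1')          # term-level deletion
assert server.search(token('cloud')) == {'doc2'}
```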
Efficient Verification of Outsourced Data and Computations
Charalampos Papamanthou, Brown University
With the prevalence of the Internet in every aspect of our life, there has been increasing interest in remote storage of data and structured information (e.g., emails, photos). This trend has given rise to a new discipline, termed “cloud computing,” widely adopted by many companies (and individuals) in order to save operating and maintenance costs. However, as remote repositories (i.e., the cloud) may lose or modify data due to errors or malicious attacks, it is important to develop methods that give users strong assurances of the integrity of the outsourced data.
In addressing the above problems, one must also ensure that the resulting solutions are efficient. In other words, if the security added to a cloud service leads to slow performance, the user might reject the service: although secure and trusted, the overhead (time, bandwidth) it imposes might be unacceptable.
This talk explores integrity checking solutions that go beyond traditional hash-based methods, towards improving efficiency and achieving better asymptotic bounds. The systematic application of multiple cryptographic primitives, such as accumulators and lattices, leads to the proposal of new authenticated data structures schemes that compare favorably with existing solutions. We conclude by also reporting on some practical work we have done to address the aforementioned problems.
This is joint work with Roberto Tamassia and Nikos Triandopoulos.
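As a taste of one of the primitives involved, here is a toy RSA accumulator: the verifier keeps a single group element, and membership is checked against a short witness with one exponentiation. Parameters are insecurely small; real schemes use a large N whose factorization the prover never learns, and hash arbitrary elements to primes.

```python
# Toy RSA accumulator: one group element commits to the set, and a short
# witness certifies membership. Insecure parameters for illustration.
N = 1019 * 1031
g = 3

def accumulate(elems):
    acc = g
    for e in elems:
        acc = pow(acc, e, N)         # acc = g^(product of elements) mod N
    return acc

def witness(elems, x):               # accumulate everything except x
    return accumulate([e for e in elems if e != x])

def verify(acc, x, wit):
    return pow(wit, x, N) == acc     # one exponentiation for the verifier

elements = [101, 103, 107]           # already primes, as real schemes require
acc = accumulate(elements)
assert verify(acc, 103, witness(elements, 103))
assert not verify(acc, 109, witness(elements, 103))
```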
Predicate Encryption
Emily Shen, MIT
Predicate encryption is a new encryption paradigm which gives a master secret key owner fine-grained control over access to encrypted data. The master secret key owner can generate secret key tokens corresponding to predicates. An encryption of data x can be evaluated using a secret token corresponding to a predicate f; the user learns whether the data satisfies the predicate, i.e., whether f(x) = 1. This talk will survey recent results in this area, and present some ideas behind one of the constructions.
We Have The Technology, Now Where Next?
David Molnar, MSR
What will it take to convince people that cryptography makes the cloud safe? How might our favourite cryptographic constructions work together with systems moving to the cloud? I will describe examples where existing policies blocked movement of data or computation to the cloud. I will then discuss trends in cloud audit approaches and in document labeling that may be complementary to the use of cloud cryptography. Finally I will talk about what is required today for storing highly sensitive data on premises in a large company.
Subspace LWE
Krzysztof Pietrzak
The (decisional) learning with errors (LWE) problem asks one to distinguish “noisy” inner products of a secret vector with random vectors from uniform. The presumed hardness of this and the related learning parity with noise (LPN) problem has found many applications in cryptography. LWE’s appeal stems from the fact that it is provably as hard as well-studied worst-case lattice problems [Regev’05].
We introduce (seemingly) much stronger *adaptive* assumptions, SLWE and SLPN (S for “subspace”), where the adversary can learn inner products of the secret and random vectors after they have been projected into an adaptively and adversarially chosen subspace. We prove that the SLWE/SLPN problems with subspaces of dimension N are almost as hard as the standard LWE/LPN assumptions with secrets of length N.
This implies that the standard LWE/LPN problems are surprisingly robust with respect to tampering with the secret and/or the randomness used to generate the samples. This robustness directly translates to stronger security guarantees for cryptosystems proven secure under the standard LWE/LPN assumptions (e.g., it implies security against a broad class of related-key attacks).
We also present a new, very simple and efficient authentication protocol which is secure against active attacks under the SLPN (and thus the standard LPN) assumption. Our protocol improves upon the HB+ protocol [Juels, Weis’05], [Katz, Shin’05] (the best known protocol based on LPN achieving active security) in terms of round complexity (an optimal two rounds instead of three) and achieves a tight security reduction.
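For background, a minimal sketch of the basic two-round LPN-style challenge-response shape (the talk’s actual protocol differs and is proven actively secure under SLPN): the reader sends a random challenge matrix, the tag returns noisy inner products with the shared secret, and the reader accepts if the error weight is below a threshold. Parameters are toy-sized.

```python
import random

n, m, tau = 16, 48, 0.125            # toy secret length, sample count, noise

def rand_bits(k):
    return [random.randrange(2) for _ in range(k)]

def dot(row, vec):                   # inner product over GF(2)
    return sum(x & y for x, y in zip(row, vec)) % 2

secret = rand_bits(n)                # shared by tag and reader

# Reader -> tag: a random challenge matrix A.
A = [rand_bits(n) for _ in range(m)]

# Tag -> reader: noisy inner products z_i = <A_i, s> xor e_i.
z = [dot(row, secret) ^ (1 if random.random() < tau else 0) for row in A]

# Reader: recompute <A_i, s> and accept iff the error weight is low.
errors = sum(dot(row, secret) ^ zi for row, zi in zip(A, z))
assert errors <= m // 3              # an honest tag passes w.h.p.
```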