82 research outputs found
Lower Bounds on Signatures from Symmetric Primitives
We show that every construction of one-time signature schemes from a random oracle achieves black-box security at most $2^{(1+o(1))q}$, where $q$ is the total number of oracle queries asked by the key generation, signing, and verification algorithms. That is, any such scheme can be broken with probability close to $1$ by a (computationally unbounded) adversary making $2^{(1+o(1))q}$ queries to the oracle. This is tight up to a constant factor in the number of queries, since a simple modification of Lamport's one-time signatures (Lamport '79) achieves $2^{\Omega(q)}$ black-box security using $q$ queries to the oracle.
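For concreteness, here is a minimal Python sketch of plain Lamport one-time signatures, with SHA-256 standing in for the random oracle. The particular modification achieving the bound above is described in the paper; this sketch only illustrates where the oracle queries of key generation, signing, and verification come from (the function names and the 32-byte preimage length are illustrative assumptions, not taken from the paper).

```python
import os, hashlib

def H(x: bytes) -> bytes:
    # Stand-in for the random oracle.
    return hashlib.sha256(x).digest()

def keygen(msg_bits: int = 8):
    # Secret key: two random preimages per message bit.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(msg_bits)]
    # Public key: their images under H -- 2 * msg_bits oracle queries.
    pk = [(H(a), H(b)) for (a, b) in sk]
    return sk, pk

def sign(sk, msg: int):
    # Reveal one preimage per message bit -- no oracle queries here.
    return [sk[i][(msg >> i) & 1] for i in range(len(sk))]

def verify(pk, msg: int, sig) -> bool:
    # One oracle query per message bit.
    return all(H(sig[i]) == pk[i][(msg >> i) & 1] for i in range(len(pk)))

sk, pk = keygen()
sig = sign(sk, 0b10110010)
assert verify(pk, 0b10110010, sig)
assert not verify(pk, 0b10110011, sig)  # flipping a message bit invalidates the signature
```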
Our result also extends (with a loss of a constant factor in the number of queries) to the random permutation and ideal-cipher oracles. Since symmetric primitives (e.g., block ciphers, hash functions, and message authentication codes) can be constructed with a constant number of queries to the mentioned oracles, as a corollary we get lower bounds on the efficiency of signature schemes built from symmetric primitives whenever the construction is black-box. This can be taken as evidence of an inherent efficiency gap between signature schemes and symmetric primitives.
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure
Many modern machine learning classifiers are shown to be vulnerable to
adversarial perturbations of the instances. Despite a massive amount of work
focusing on making classifiers robust, the task seems quite challenging. In
this work, through a theoretical study, we investigate the adversarial risk and
robustness of classifiers and draw a connection to the well-known phenomenon of
concentration of measure in metric measure spaces. We show that if the metric
probability space of the test instance is concentrated, any classifier with
some initial constant error is inherently vulnerable to adversarial
perturbations.
One class of concentrated metric probability spaces are the so-called Levy families, which include many natural distributions. In this special case, our attacks only need to perturb the test instance by at most $O(\sqrt{n})$ to make it misclassified, where $n$ is the data dimension. Using our general result about Levy instance spaces, we first recover as special cases some of the previously proved results about the existence of adversarial examples. However, many more Levy families are known (e.g., product distributions under the Hamming distance) for which we immediately obtain new attacks that find adversarial examples of distance $O(\sqrt{n})$.
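The $O(\sqrt{n})$ scale can be seen, for instance, from the standard McDiarmid/Azuma "blow-up" bound for a product measure $\mu$ on $\{0,1\}^n$ under Hamming distance $d_H$; the generic constants below come from that textbook bound and need not match the paper's exact parameters:
\[
\mu(E) \ge \epsilon
\;\Longrightarrow\;
\mu\bigl(\{x : d_H(x,E) \le \delta\}\bigr) \ge 1 - \eta
\quad\text{for}\quad
\delta = \sqrt{\tfrac{n}{2}\ln\tfrac{1}{\epsilon}} + \sqrt{\tfrac{n}{2}\ln\tfrac{1}{\eta}} = O(\sqrt{n}).
\]
Taking $E$ to be the error region of a classifier with initial error at least $\epsilon$, a random test instance is, with probability at least $1-\eta$, within Hamming distance $O(\sqrt{n})$ of a misclassified point.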
Finally, we show that concentration of measure for product spaces implies the
existence of forms of "poisoning" attacks in which the adversary tampers with
the training data with the goal of degrading the classifier. In particular, we
show that for any learning algorithm that uses $m$ training examples, there is an adversary who can increase the probability of any "bad property" (e.g., failing on a particular test instance) that initially happens with non-negligible probability to $\approx 1$ by substituting only $\tilde{O}(\sqrt{m})$ of the examples with other (still correctly labeled) examples.
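The same blow-up phenomenon, applied to the product space of the $m$ training examples rather than to a single test instance, sketches where a $\tilde{O}(\sqrt{m})$ substitution budget can come from (again with the generic Hamming-cube constants, which need not match the paper's exact parameters): if a bad property $B$ has probability at least $\epsilon$ over the honestly sampled training set $T$, then
\[
\Pr\bigl[\, d_H(T, B) \le O\bigl(\sqrt{m \log \tfrac{1}{\epsilon\eta}}\bigr) \,\bigr] \ge 1 - \eta,
\]
so, with probability at least $1-\eta$, substituting that many training examples moves $T$ into $B$.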
Multi-party Poisoning through Generalized $p$-Tampering
In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data $T$ with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, $T$ might be gathered gradually from $m$ data providers $P_1, \ldots, P_m$ who generate and submit their shares of $T$ in an online way.
In this work, we initiate a formal study of $(k,p)$-poisoning attacks in which an adversary controls $k$ of the $m$ parties, and even for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). For $k=m$, this model becomes the traditional notion of poisoning, and for $p=1$ it coincides with the standard notion of corruption in multi-party computation.
We prove that if there is an initial constant error for the generated hypothesis $h$, there is always a $(k,p)$-poisoning attacker who can decrease the confidence in $h$ having a small error, or alternatively increase the error of $h$, by $\Omega(p \cdot k/m)$. Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy.
At a technical level, we prove a general lemma about biasing bounded functions $f(x_1, \ldots, x_n) \in [0,1]$ through an attack model in which each block $x_i$ might be controlled by an adversary with marginal probability $p$ in an online way. When the probabilities are independent, this coincides with the model of $p$-tampering attacks; thus, we call our model generalized $p$-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the $p$-tampering model, generalizing the results in both of these areas.
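The following toy Python sketch illustrates the attack model rather than the paper's actual attack or its guarantees: the blocks are i.i.d. uniform bits, each block is adversarially controlled with independent probability $p$, and the tamperer greedily picks the bit whose Monte-Carlo estimate of the conditional expectation of $f$ is larger. All names and parameters here are illustrative assumptions.

```python
import random

def greedy_p_tamper(f, n, p, trials=100, runs=1000):
    """Toy online biasing attack: each of the n blocks is adversarial with
    independent probability p; when in control, the adversary picks the bit
    whose estimated conditional expectation of f (under honest completion)
    is larger. Returns the honest and the attacked averages of f."""

    def est(prefix, samples):
        # Monte-Carlo estimate of E[f(prefix, U)] over uniform honest suffixes.
        total = 0.0
        for _ in range(samples):
            suffix = [random.randint(0, 1) for _ in range(n - len(prefix))]
            total += f(prefix + suffix)
        return total / samples

    honest = sum(f([random.randint(0, 1) for _ in range(n)])
                 for _ in range(runs)) / runs

    attacked = 0.0
    for _ in range(runs):
        x = []
        for _ in range(n):
            if random.random() < p:   # adversary controls this block
                x.append(max((0, 1), key=lambda b: est(x + [b], trials)))
            else:                     # block is generated honestly
                x.append(random.randint(0, 1))
        attacked += f(x)
    return honest, attacked / runs

# Example: f is the majority of n bits; the attack biases it toward 1.
maj = lambda bits: float(sum(bits) * 2 > len(bits))
print(greedy_p_tamper(maj, n=9, p=0.3))
```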
Black-Box Uselessness: Composing Separations in Cryptography
Black-box separations have been successfully used to identify the limits of a powerful set of tools in cryptography, namely those of black-box reductions. They allow proving that a large set of techniques are not capable of basing one primitive $P$ on another primitive $Q$. Such separations, however, do not say anything about the power of the combination of primitives $Q_1, Q_2$ for constructing $P$, even if $P$ cannot be based on $Q_1$ or $Q_2$ alone.
By introducing and formalizing the notion of black-box uselessness, we develop a framework that allows us to make such conclusions. At an informal level, we call a primitive $Q$ black-box useless (BBU) for a primitive $P$ if $Q$ cannot help constructing $P$ in a black-box way, even in the presence of another primitive $Z$. This is formalized by saying that $Q$ is BBU for $P$ if for any auxiliary primitive $Z$, whenever there exists a black-box construction of $P$ from $(Q, Z)$, then there must already also exist a black-box construction of $P$ from $Z$ alone. We also formalize various other notions of black-box uselessness, and in particular consider the setting of efficient black-box constructions in which the number of queries to $Q$ is below a certain threshold.
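Written compactly, with $A \Rightarrow_{\mathrm{bb}} B$ denoting "there is a black-box construction of $B$ from $A$" (this arrow notation is introduced here for illustration and is not necessarily the paper's):
\[
Q \text{ is BBU for } P
\quad\iff\quad
\forall Z:\;\;
\bigl((Q, Z) \Rightarrow_{\mathrm{bb}} P\bigr) \;\longrightarrow\; \bigl(Z \Rightarrow_{\mathrm{bb}} P\bigr).
\]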
Impagliazzo and Rudich (STOC'89) initiated the study of black-box separations by separating key agreement from one-way functions. We prove a number of initial results in this direction, which indicate that one-way functions are perhaps also black-box useless for key agreement. In particular, we show that OWFs are black-box useless in any construction of key agreement in either of the following settings: (1) the key agreement has perfect correctness and one of the parties calls the OWF a constant number of times; (2) the key agreement consists of a single round of interaction (as in Merkle-type protocols). We conjecture that OWFs are indeed black-box useless for general key agreement.
We also show that certain techniques for proving black-box separations can be lifted to the uselessness regime. In particular, we show that the lower bounds of Canetti, Kalai, and Paneth (TCC'15) as well as Garg, Mahmoody, and Mohammed (Crypto'17 & TCC'17) for assumptions behind indistinguishability obfuscation (IO) can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO. These results follow the so-called "compiling out" technique, which we prove to imply black-box uselessness.
Finally, we study the complementary landscape of black-box uselessness, namely black-box helpfulness. We put forth the conjecture that one-way functions are black-box helpful for building collision-resistant hash functions. We define two natural relaxations of this conjecture and prove that both are implied by a natural conjecture regarding random permutations equipped with a collision-finder oracle, as defined by Simon (Eurocrypt'98). This conjecture may also be of interest in other contexts, such as amplification of hardness.
