A Quantum-Proof Non-Malleable Extractor, With Application to Privacy Amplification against Active Quantum Adversaries
In privacy amplification, two mutually trusted parties aim to amplify the
secrecy of an initial shared secret in order to establish a shared private
key by exchanging messages over an insecure communication channel. If the
channel is authenticated the task can be solved in a single round of
communication using a strong randomness extractor; choosing a quantum-proof
extractor allows one to establish security against quantum adversaries.
In the case that the channel is not authenticated, Dodis and Wichs (STOC'09)
showed that the problem can be solved in two rounds of communication using a
non-malleable extractor, a stronger pseudo-random construction than a strong
extractor.
We give the first construction of a non-malleable extractor that is secure
against quantum adversaries. The extractor is based on a construction by Li
(FOCS'12), and is able to extract from sources of min-entropy rate larger than
1/2. Combining this construction with a quantum-proof variant of the
reduction of Dodis and Wichs, shown by Cohen and Vidick (unpublished), we
obtain the first privacy amplification protocol secure against active quantum
adversaries.
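For orientation, here is a minimal statement of the two standard notions the abstract contrasts (the usual Dodis-Wichs style definitions; the quantum-proof variants additionally require these bounds to hold given the adversary's quantum side information):
A function $\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a $(k,\varepsilon)$ strong extractor if for every source $X$ with $H_\infty(X)\ge k$ and an independent uniform seed $S$,
$$(\mathrm{Ext}(X,S),\,S) \approx_\varepsilon (U_m,\,S),$$
and a non-malleable extractor if, in addition, for every adversarial seed modification $f$ with $f(s)\neq s$ for all $s$,
$$(\mathrm{nmExt}(X,S),\,\mathrm{nmExt}(X,f(S)),\,S) \approx_\varepsilon (U_m,\,\mathrm{nmExt}(X,f(S)),\,S).$$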
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
We provide formal definitions and efficient secure techniques for
- turning noisy information into keys usable for any cryptographic
application, and, in particular,
- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying
material that, unlike traditional cryptographic keys, is (1) not reproducible
precisely and (2) not distributed uniformly. We propose two primitives: a
"fuzzy extractor" reliably extracts nearly uniform randomness R from its input;
the extraction is error-tolerant in the sense that R will be the same even if
the input changes, as long as it remains reasonably close to the original.
Thus, R can be used as a key in a cryptographic application. A "secure sketch"
produces public information about its input w that does not reveal w, and yet
allows exact recovery of w given another value that is close to w. Thus, it can
be used to reliably reproduce error-prone biometric inputs without incurring
the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile,
generalizing much prior work. In addition, we provide nearly optimal
constructions of both primitives for various measures of ``closeness'' of input
data, such as Hamming distance, edit distance, and set difference.
Comment: 47 pp., 3 figures. Prelim. version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
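As a hedged illustration of how the two primitives compose, below is a toy Python sketch of the folklore code-offset construction for Hamming distance; a repetition code and a salted SHA-256 hash stand in for the near-optimal codes and strong extractors of the actual constructions.

import hashlib
import secrets

# Toy code-offset secure sketch over bit-lists under Hamming distance.
# A 5x repetition code corrects up to 2 flipped bits per block; inputs are
# assumed to have length divisible by REP.  Illustration only.
REP = 5

def _encode(bits):
    # Repetition-code encoder: repeat each bit REP times.
    return [b for b in bits for _ in range(REP)]

def _decode(bits):
    # Majority-vote decoder, one output bit per block of REP bits.
    return [int(sum(bits[i * REP:(i + 1) * REP]) > REP // 2)
            for i in range(len(bits) // REP)]

def sketch(w):
    # SS(w): publish w XOR C(r) for a random codeword C(r); the entropy loss
    # of w is bounded by the redundancy of the code.
    r = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    return [wi ^ ci for wi, ci in zip(w, _encode(r))]

def recover(w_noisy, s):
    # Rec(w', s): w' XOR s is a noisy codeword; decode, re-encode, and unmask.
    r = _decode([wi ^ si for wi, si in zip(w_noisy, s)])
    return [ci ^ si for ci, si in zip(_encode(r), s)]

def fuzzy_extract(w):
    # Gen(w): derive key R and public helper data P = (sketch, salt).
    s, salt = sketch(w), secrets.token_bytes(16)
    return hashlib.sha256(salt + bytes(w)).digest(), (s, salt)

def fuzzy_reproduce(w_noisy, helper):
    # Rep(w', P): reproduce the same R from any w' close to the enrolled w.
    s, salt = helper
    return hashlib.sha256(salt + bytes(recover(w_noisy, s))).digest()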
Multiparty Quantum Coin Flipping
We investigate coin-flipping protocols for multiple parties in a quantum
broadcast setting:
(1) We propose and motivate a definition for quantum broadcast. Our model of
quantum broadcast channel is new.
(2) We show that quantum broadcast is essentially a combination of
pairwise quantum channels and a classical broadcast channel. This is a somewhat
surprising conclusion, but it helps us in both our lower and upper bounds.
(3) We provide tight upper and lower bounds on the optimal bias epsilon of a
coin which can be flipped by k parties of which exactly g parties are honest:
for any 1 <= g <= k, epsilon = 1/2 - Theta(g/k).
Thus, as long as a constant fraction of the players are honest, they can
prevent the coin from being fixed with at least a constant probability. This
result stands in sharp contrast with the classical setting, where no
non-trivial coin-flipping is possible when g <= k/2.
Comment: v2: bounds now tight via new protocol; to appear at IEEE Conference on Computational Complexity 200
A Rational Approach to Cryptographic Protocols
This work initiates an analysis of several cryptographic protocols from a
rational point of view using a game-theoretical approach, which allows us to
represent not only the protocols but also possible misbehaviours of parties.
Concretely, several concepts of two-person games and of two-party cryptographic
protocols are combined here in order to model the latter as the former. One
of the main advantages of analysing a cryptographic protocol in the game-theory
setting is the possibility of describing improved and stronger cryptographic
solutions because possible adversarial behaviours may be taken into account
directly. With those tools, protocols can be studied in a malicious model in
order to find equilibrium conditions that make it possible to protect honest
parties against all possible strategies of adversaries.
When Can Limited Randomness Be Used in Repeated Games?
The central result of classical game theory states that every finite normal
form game has a Nash equilibrium, provided that players are allowed to use
randomized (mixed) strategies. However, in practice, humans are known to be bad
at generating random-like sequences, and true random bits may be unavailable.
Even if the players have access to enough random bits for a single instance of
the game their randomness might be insufficient if the game is played many
times.
In this work, we ask whether randomness is necessary for equilibria to exist
in finitely repeated games. We show that for a large class of games containing
arbitrary two-player zero-sum games, approximate Nash equilibria of the
$n$-stage repeated version of the game exist if and only if both players have
$\Omega(n)$ random bits. In contrast, we show that there exists a class of
games for which no equilibrium exists in pure strategies, yet the $n$-stage
repeated version of the game has an exact Nash equilibrium in which each player
uses only a constant number of random bits.
When the players are assumed to be computationally bounded, if cryptographic
pseudorandom generators (or, equivalently, one-way functions) exist, then the
players can base their strategies on "random-like" sequences derived from only
a small number of truly random bits. We show that, in contrast, in repeated
two-player zero-sum games, if pseudorandom generators \emph{do not} exist, then
random bits remain necessary for equilibria to exist.
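To make the last paragraph concrete, here is a small hypothetical Python sketch of the computationally bounded setting: a short truly random seed is expanded into one pseudorandom coin per stage of a repeated matching-pennies game. SHA-256 in counter mode merely stands in for a cryptographic pseudorandom generator; the construction is illustrative and not taken from the paper.

import hashlib
import secrets

def prg_bits(seed: bytes, n_rounds: int):
    # Expand a short seed into n_rounds pseudorandom bits (counter-mode hash
    # used as a stand-in PRG).
    out, counter = [], 0
    while len(out) < n_rounds:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        out.extend((byte >> i) & 1 for byte in block for i in range(8))
        counter += 1
    return out[:n_rounds]

seed = secrets.token_bytes(16)   # the "small number of truly random bits"
moves = prg_bits(seed, 1000)     # one coin per stage of the repeated game
# Playing heads/tails in matching pennies according to these bits looks, to
# any efficient opponent, like the fully mixed equilibrium strategy.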
Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources
We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions Cond : {0,1}^n × {0,1}^d → {0,1}^m such that if we choose a random seed S ← {0,1}^d, and a source X = A(S) is generated by a randomized circuit A of size t such that X has min-entropy at least k given S, then Cond(X; S) should have min-entropy at least some k' given S. The distinction from the standard notion of randomness condensers is that the source X may be correlated with the seed S (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where k' = m) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS '00). We show that:
– Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size t of the adversarial sampling algorithm A. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy k' = m - O(log t), i.e. logarithmic entropy deficiency.
– Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible "squared advantage" [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.
– Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.
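A minimal sketch of the pattern in the first bullet, with a seed-keyed SHA-256 standing in for a "sufficiently strong collision-resistant hash function"; the O(log t) entropy-deficiency guarantee is a property of the paper's analysis, not of this toy code.

import hashlib
import secrets

def cond(x: bytes, seed: bytes, m_bytes: int = 32) -> bytes:
    # Cond(X; S): hash the (possibly seed-dependent) source with a function
    # selected by the public seed S, and truncate to the output length.
    return hashlib.sha256(seed + x).digest()[:m_bytes]

seed = secrets.token_bytes(16)   # public random seed S
x = secrets.token_bytes(64)      # in the model, X = A(S) may depend on S
key = cond(x, seed)              # derived key with small entropy deficiency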
Quantitative information flow under generic leakage functions and adaptive adversaries
We put forward a model of action-based randomization mechanisms to analyse
quantitative information flow (QIF) under generic leakage functions, and under
possibly adaptive adversaries. This model subsumes many of the QIF models
proposed so far. Our main contributions include the following: (1) we identify
mild general conditions on the leakage function under which it is possible to
derive general and significant results on adaptive QIF; (2) we contrast the
efficiency of adaptive and non-adaptive strategies, showing that the latter are
as efficient as the former in terms of length up to an expansion factor bounded
by the number of available actions; (3) we show that the maximum information
leakage over strategies, given a finite time horizon, can be expressed in terms
of a Bellman equation. This can be used to compute an optimal finite strategy
recursively, by resorting to standard methods like backward induction.
Comment: Revised and extended version of a conference paper with the same title that appeared in Proc. of FORTE 2014, LNC
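As a generic illustration of contribution (3), the sketch below computes the maximum leakage an adaptive adversary can accumulate over a finite horizon by backward induction on a Bellman-style recursion. The state space, action set, per-step leakage function and transition kernel are invented placeholders, not the paper's model.

from functools import lru_cache

ACTIONS = (0, 1)

def leak(s, a):
    # Hypothetical one-step leakage gained by playing action a in state s.
    return (s + 1) * (a + 1) * 0.1

def transition(s, a):
    # Hypothetical kernel: list of (next_state, probability) pairs.
    return [((s + a) % 3, 0.5), ((s + a + 1) % 3, 0.5)]

@lru_cache(maxsize=None)
def max_leakage(s, horizon):
    # V_0(s) = 0;  V_t(s) = max_a [ leak(s, a) + E_{s'}[ V_{t-1}(s') ] ].
    if horizon == 0:
        return 0.0
    return max(leak(s, a) + sum(p * max_leakage(s2, horizon - 1)
                                for s2, p in transition(s, a))
               for a in ACTIONS)

print(max_leakage(0, 5))   # value of an optimal 5-step adaptive strategy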
Weak randomness completely trounces the security of QKD
In usual security proofs of quantum protocols the adversary (Eve) is expected
to have full control over any quantum communication between any communicating
parties (Alice and Bob). Eve is also expected to have full access to an
authenticated classical channel between Alice and Bob. Unconditional security
against any attack by Eve can be proved even in the realistic setting of device
and channel imperfection. In this Letter we show that the security of QKD
protocols is ruined if one allows Eve to possess very limited access to the
random sources used by Alice. Such knowledge should always be expected in
realistic experimental conditions via different side channels.
On the Communication Complexity of Secure Computation
Information theoretically secure multi-party computation (MPC) is a central
primitive of modern cryptography. However, relatively little is known about the
communication complexity of this primitive.
In this work, we develop powerful information theoretic tools to prove lower
bounds on the communication complexity of MPC. We restrict ourselves to a
3-party setting in order to bring out the power of these tools without
introducing too many complications. Our techniques include the use of a data
processing inequality for residual information (i.e., the gap between mutual
information and G\'acs-K\"orner common information), a new information
inequality for 3-party protocols, and the idea of distribution switching by
which lower bounds computed under certain worst-case scenarios can be shown to
apply for the general case.
Using these techniques we obtain tight bounds on the communication complexity
of MPC protocols for various interesting functions. In particular, we show
concrete functions that have "communication-ideal" protocols, which achieve the
minimum communication simultaneously on all links in the network. Also, we
obtain the first explicit example of a function that incurs a higher
communication cost than the input length in the secure computation model of
Feige, Kilian and Naor (1994), who had shown that such functions exist. We also
show that our communication bounds imply tight lower bounds on the amount of
randomness required by MPC protocols for many interesting functions.
Comment: 37 pages
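For readers unfamiliar with the quantity named above: the residual information of a pair $(X,Y)$ is exactly the gap the abstract describes (a standard definition, restated here for convenience),
$$RI(X;Y) \;=\; I(X;Y) \;-\; C_{GK}(X;Y),$$
where the G\'acs-K\"orner common information $C_{GK}(X;Y)$ is the largest entropy $H(Q)$ of a common part $Q$ that can be computed from $X$ alone and from $Y$ alone.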
Modulus Computational Entropy
The so-called {\em leakage-chain rule} is a very important tool used in many
security proofs. It gives an upper bound on the entropy loss of a random
variable in the case where the adversary, having already learned some random
variables correlated with it, obtains some further piece of
information about it. Analogously to the information-theoretic
case, one might expect that also for the \emph{computational} variants of
entropy the loss depends only on the actual (new) leakage.
Surprisingly, Krenn et al.\ have shown recently that for the most commonly used
definitions of computational entropy this holds only if the computational
quality of the entropy deteriorates exponentially in the length of the
leakage learned so far. This means that the current standard definitions
of computational entropy do not allow one to fully capture leakage that occurred
"in the past", which severely limits the applicability of this notion.
As a remedy for this problem we propose a slightly stronger definition of the
computational entropy, which we call the \emph{modulus computational entropy},
and use it as a technical tool that allows us to prove a desired chain rule
that depends only on the actual leakage and not on its history. Moreover, we
show that the modulus computational entropy unifies other, sometimes seemingly
unrelated, notions already studied in the literature in the context of
information leakage and chain rules. Our results indicate that the modulus
entropy is, up to now, the weakest restriction that guarantees that the chain
rule for the computational entropy works. As an example of application we
demonstrate a few interesting cases where our restricted definition is
fulfilled and the chain rule holds.
Comment: Accepted at ICTS 201
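For reference, the information-theoretic chain rule that the computational analogue is measured against can be stated as follows (a standard bound for average conditional min-entropy; the notation is ours, not the paper's):
$$\widetilde H_\infty(X \mid Z_1,\ldots,Z_i) \;\ge\; \widetilde H_\infty(X \mid Z_1,\ldots,Z_{i-1}) \;-\; \lambda_i \qquad \text{whenever } Z_i \text{ takes at most } 2^{\lambda_i} \text{ values},$$
so the information-theoretic loss is bounded by the bit-length of the new leakage alone, independently of the leakage already observed.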