
    Security analysis of standard authentication and key agreement protocols utilising timestamps

    We propose a generic modelling technique that can be used to extend existing frameworks for theoretical security analysis in order to capture the use of timestamps. We apply this technique to two of the most popular models adopted in the literature (Bellare-Rogaway and Canetti-Krawczyk). We analyse previous results obtained using these models in light of the proposed extensions, and demonstrate their application to a new class of protocols. In the timed CK model we concentrate on modular design and analysis of protocols, and propose a more efficient timed authenticator relying on timestamps. The structure of this new authenticator implies that an authentication mechanism standardised in ISO-9798 is secure. Finally, we use our timed extension to the BR model to establish the security of an efficient ISO protocol for key transport and unilateral entity authentication.
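    To give a feel for the kind of mechanism being modelled, the sketch below shows a generic MAC-plus-timestamp check with a freshness window. It is a toy illustration of timestamp-based authentication only, not the paper's timed authenticator or the ISO-9798 mechanism; the window size and helper names are assumptions.

```python
# Toy sketch of a timestamp-based authenticator: a MAC over (message, timestamp)
# is accepted only if the timestamp lies within a freshness window.
# Illustrative only; not the ISO-9798 mechanism or the paper's construction.
import hmac, hashlib, time

WINDOW_SECONDS = 30  # assumed acceptance window (a protocol parameter)

def send_authenticated(key: bytes, message: bytes) -> tuple[bytes, int, bytes]:
    ts = int(time.time())
    tag = hmac.new(key, message + ts.to_bytes(8, "big"), hashlib.sha256).digest()
    return message, ts, tag

def verify_authenticated(key: bytes, message: bytes, ts: int, tag: bytes) -> bool:
    expected = hmac.new(key, message + ts.to_bytes(8, "big"), hashlib.sha256).digest()
    fresh = abs(int(time.time()) - ts) <= WINDOW_SECONDS
    return fresh and hmac.compare_digest(expected, tag)
```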

    Lower bounds for polynomials using geometric programming

    We make use of a result of Hurwitz and Reznick, and a consequence of this result due to Fidalgo and Kovacec, to determine a new sufficient condition for a polynomial $f \in \mathbb{R}[X_1,\ldots,X_n]$ of even degree to be a sum of squares. This result generalizes a result of Lasserre and a result of Fidalgo and Kovacec, and it also generalizes the improvements of these results given in [6]. We apply this result to obtain a new lower bound $f_{gp}$ for $f$, and we explain how $f_{gp}$ can be computed using geometric programming. The lower bound $f_{gp}$ is generally not as good as the lower bound $f_{sos}$ introduced by Lasserre and Parrilo and Sturmfels, which is computed using semidefinite programming, but a run time comparison shows that, in practice, the computation of $f_{gp}$ is much faster. The computation is simplest when the highest degree term of $f$ has the form $\sum_{i=1}^n a_i X_i^{2d}$, $a_i > 0$, $i = 1,\ldots,n$. The lower bounds for $f$ established in [6] are obtained by evaluating the objective function of the geometric program at the appropriate feasible points.
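    To fix notation, the quantities named above fit the following schematic picture. This is only a restatement of what the abstract itself states (a diagonal highest-degree term in the simplest case, and the general ordering of the two lower bounds relative to the global infimum), not the paper's precise sufficient condition.

```latex
% Simplest case named in the abstract: f has even degree 2d and a diagonal
% highest-degree term.
f(X_1,\ldots,X_n) = \sum_{i=1}^{n} a_i X_i^{2d} + (\text{terms of degree} < 2d),
  \qquad a_i > 0,\ i = 1,\ldots,n.
% Writing f^* := \inf_{x \in \mathbb{R}^n} f(x), both quantities are lower
% bounds for f^*, with f_{gp} generally the weaker (but cheaper) of the two:
f_{gp} \;\le\; f_{sos} \;\le\; f^*.
```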

    Forward-Security in Private-Key Cryptography

    This paper provides a comprehensive treatment of forward-security in the context of shared-key-based cryptographic primitives, as a practical means to mitigate the damage caused by key exposure. We provide definitions of security, practical proven-secure constructions, and applications for the main primitives in this area. We identify forward-secure pseudorandom bit generators as the central primitive, providing several constructions and then showing how forward-secure message authentication schemes and symmetric encryption schemes can be built based on standard schemes for these problems coupled with forward-secure pseudorandom bit generators. We then apply forward-secure message authentication schemes to the problem of maintaining secure access logs in the presence of break-ins.
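    A minimal sketch of the iterate-and-erase idea behind forward-secure pseudorandom generators follows. It captures the general shape of such constructions rather than any particular scheme from the paper, and the use of HMAC-SHA256 as the underlying length expander is an assumption for illustration: each period the current state yields one output block and a successor state, and the old state is discarded, so exposing the state at period i reveals nothing about earlier outputs.

```python
# Sketch of a forward-secure PRG via iterate-and-erase: each step derives an
# output block and the next state from the current state, then overwrites the
# current state. Exposure of state_i does not compromise blocks 0..i-1.
# HMAC-SHA256 as the underlying expander is an illustrative assumption.
import hmac, hashlib

class ForwardSecurePRG:
    def __init__(self, seed: bytes):
        self._state = seed

    def next_block(self) -> bytes:
        out = hmac.new(self._state, b"output", hashlib.sha256).digest()
        new_state = hmac.new(self._state, b"update", hashlib.sha256).digest()
        self._state = new_state  # old state is overwritten ("erased")
        return out
```

    Keying a standard MAC or symmetric encryption scheme with the current output block, and ratcheting the state forward each period, is the shape of the generic compositions the abstract refers to.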

    Towards Bidirectional Ratcheted Key Exchange

    Ratcheted key exchange (RKE) is a cryptographic technique used in instant messaging systems like Signal and the WhatsApp messenger for attaining strong security in the face of state exposure attacks. RKE received academic attention in the recent works of Cohn-Gordon et al. (EuroS&P 2017) and Bellare et al. (CRYPTO 2017). While the former is analytical in the sense that it aims primarily at assessing the security that one particular protocol does achieve (which might be weaker than the notion that it should achieve), the authors of the latter develop and instantiate a notion of security from scratch, independently of existing implementations. Unfortunately, however, their model is quite restricted, e.g. it considers only unidirectional communication and the exposure of only one of the two parties. In this article we resolve the limitations of prior work by developing alternative security definitions, for unidirectional RKE as well as for RKE where both parties contribute. We follow a purist approach, aiming at finding strong yet convincing notions that cover a realistic communication model with fully concurrent operation of both participants. We further propose secure instantiations (as the protocols analyzed or proposed by Cohn-Gordon et al. and Bellare et al. turn out to be weak in our models). While our scheme for the unidirectional case builds on a generic KEM as the main building block (differently from prior work, which explicitly requires Diffie–Hellman), our schemes for bidirectional RKE require a stronger, HIBE-like component.
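    To convey the flavour of the "generic KEM as the main building block" approach mentioned for the unidirectional case, the toy sketch below shows a single ratchet step in which the sender encapsulates against the receiver's current public key and both sides fold the resulting secret into their key material. This is not the paper's protocol, and the KEM interface (kem_encaps, kem_decaps) is a hypothetical placeholder for any concrete KEM.

```python
# Toy sketch of one unidirectional ratchet step built from a generic KEM.
# Not the construction from the paper; kem_encaps/kem_decaps are hypothetical
# placeholders standing in for any IND-CCA-secure KEM.
import hashlib

def kem_encaps(pk):
    """Placeholder: KEM encapsulation returning (ciphertext, shared_secret)."""
    raise NotImplementedError("plug in a concrete KEM")

def kem_decaps(sk, ciphertext):
    """Placeholder: matching KEM decapsulation returning shared_secret."""
    raise NotImplementedError("plug in a concrete KEM")

def kdf(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def sender_step(chain_key: bytes, receiver_pk):
    ciphertext, shared = kem_encaps(receiver_pk)  # fresh KEM secret per step
    return kdf(chain_key, shared), ciphertext     # fold it into the chain key

def receiver_step(chain_key: bytes, receiver_sk, ciphertext):
    shared = kem_decaps(receiver_sk, ciphertext)
    return kdf(chain_key, shared)
```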

    Efficient public-key cryptography with bounded leakage and tamper resilience

    We revisit the question of constructing public-key encryption and signature schemes with security in the presence of bounded leakage and tampering memory attacks. For signatures we obtain the first construction in the standard model; for public-key encryption we obtain the first construction free of pairing (avoiding non-interactive zero-knowledge proofs). Our constructions are based on generic building blocks, and, as we show, also admit efficient instantiations under fairly standard number-theoretic assumptions. The model of bounded tamper resistance was recently put forward by Damgård et al. (Asiacrypt 2013) as an attractive path to achieve security against arbitrary memory tampering attacks without making hardware assumptions (such as the existence of a protected self-destruct or key-update mechanism), the only restriction being on the number of allowed tampering attempts (which is a parameter of the scheme). This allows one to circumvent known impossibility results for unrestricted tampering (Gennaro et al., TCC 2010), while still being able to capture realistic tampering attacks.

    Implementation of higher-order absorbing boundary conditions for the Einstein equations

    We present an implementation of absorbing boundary conditions for the Einstein equations based on the recent work of Buchman and Sarbach. In this paper, we assume that spacetime may be linearized about Minkowski space close to the outer boundary, which is taken to be a coordinate sphere. We reformulate the boundary conditions as conditions on the gauge-invariant Regge-Wheeler-Zerilli scalars. Higher-order radial derivatives are eliminated by rewriting the boundary conditions as a system of ODEs for a set of auxiliary variables intrinsic to the boundary. From these we construct boundary data for a set of well-posed constraint-preserving boundary conditions for the Einstein equations in a first-order generalized harmonic formulation. This construction has direct applications to outer boundary conditions in simulations of isolated systems (e.g., binary black holes) as well as to the problem of Cauchy-perturbative matching. As a test problem for our numerical implementation, we consider linearized multipolar gravitational waves in TT gauge, with angular momentum numbers l=2 (Teukolsky waves), 3 and 4. We demonstrate that the perfectly absorbing boundary condition B_L of order L=l yields no spurious reflections to linear order in perturbation theory. This is in contrast to the lower-order absorbing boundary conditions B_L with L<l, which include the widely used freezing-Psi_0 boundary condition that imposes the vanishing of the Newman-Penrose scalar Psi_0. Comment: 25 pages, 9 figures. Minor clarifications. Final version to appear in Class. Quantum Grav.

    The chaining lemma and its application

    We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called “chain” of random variables, defined by a source distribution $X_{(0)}$ with high min-entropy and a number (say, $t$ in total) of arbitrary functions $(T_1,\ldots,T_t)$ which are applied in succession to that source to generate the chain $X_{(0)} \rightarrow X_{(1)} \rightarrow \cdots \rightarrow X_{(t)}$, where $X_{(i)} = T_i(X_{(i-1)})$. Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is “highly random”, in that every variable has high min-entropy; or (ii) it is possible to find a point $j$ ($1 \le j \le t$) in the chain such that, conditioned on the end of the chain, the preceding part remains highly random. We think this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case analysis to prove. We believe that the above lemma will find applications in cryptography. We give an example of this, namely we show an application of the lemma to protect essentially any cryptographic scheme against memory tampering attacks. We allow several tampering requests; the tampering functions can be arbitrary, but they must be chosen from a bounded-size set of functions that is fixed a priori.

    Unpicking PLAID: a cryptographic analysis of an ISO-standards-track authentication protocol

    The Protocol for Lightweight Authentication of Identity (PLAID) aims at secure and private authentication between a smart card and a terminal. Originally developed by a unit of the Australian Department of Human Services for physical and logical access control, PLAID has now been standardized as Australian standard AS-5185-2010 and is currently in the fast-track standardization process for ISO/IEC 25185-1.2. We present a cryptographic evaluation of PLAID. As well as reporting a number of undesirable cryptographic features of the protocol, we show that the privacy properties of PLAID are significantly weaker than claimed: using a variety of techniques we can fingerprint and then later identify cards. These techniques involve a novel application of standard statistical and data analysis techniques.

    Algorithmic and Hardness Results for the Colorful Components Problems

    In this paper we investigate the colorful components framework, motivated by applications emerging from comparative genomics. The general goal is to remove a collection of edges from an undirected vertex-colored graph $G$ such that in the resulting graph $G'$ all the connected components are colorful (i.e., any two vertices of the same color belong to different connected components). We want $G'$ to optimize an objective function, the selection of this function being specific to each problem in the framework. We analyze three objective functions, and thus three different problems, which are believed to be relevant for the biological applications: minimizing the number of singleton vertices, maximizing the number of edges in the transitive closure, and minimizing the number of connected components. Our main result is a polynomial time algorithm for the first problem. This result disproves the conjecture of Zheng et al. that the problem is NP-hard (assuming $P \neq NP$). Then, we show that the second problem is APX-hard, thus proving and strengthening the conjecture of Zheng et al. that the problem is NP-hard. Finally, we show that the third problem does not admit polynomial time approximation within a factor of $|V|^{1/14 - \epsilon}$ for any $\epsilon > 0$, assuming $P \neq NP$ (or within a factor of $|V|^{1/2 - \epsilon}$, assuming $ZPP \neq NP$). Comment: 18 pages, 3 figures.
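    To make the central definition concrete, the sketch below checks whether a vertex-colored graph, after a proposed set of edge deletions, has only colorful components (no color repeated within a component). It illustrates the framework's feasibility condition only, not any of the paper's algorithms; the graph encoding is an assumption for illustration.

```python
# Check the "colorful" condition: after removing the given edges from G,
# every connected component must contain each color at most once.
# Illustrates the framework's feasibility notion, not the paper's algorithms.
from collections import defaultdict

def is_colorful(n, edges, colors, removed):
    removed = {frozenset(e) for e in removed}
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for u, v in edges:
        if frozenset((u, v)) not in removed:
            parent[find(u)] = find(v)  # union the endpoints of surviving edges

    seen = defaultdict(set)  # component root -> colors seen so far
    for v in range(n):
        root = find(v)
        if colors[v] in seen[root]:
            return False  # two same-colored vertices share a component
        seen[root].add(colors[v])
    return True

# Example: a path 0-1-2 with colors r, g, r becomes colorful once edge (1, 2)
# is removed, since the two red vertices end up in different components.
print(is_colorful(3, [(0, 1), (1, 2)], ["r", "g", "r"], [(1, 2)]))  # True
```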

    Computer-aided verification in mechanism design

    In mechanism design, the gold standard solution concepts are dominant strategy incentive compatibility and Bayesian incentive compatibility. These solution concepts relieve the (possibly unsophisticated) bidders from the need to engage in complicated strategizing. While incentive properties are simple to state, their proofs are specific to the mechanism and can be quite complex. This raises two concerns. From a practical perspective, checking a complex proof can be a tedious process, often requiring experts knowledgeable in mechanism design. Furthermore, from a modeling perspective, if unsophisticated agents are unconvinced of incentive properties, they may strategize in unpredictable ways. To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties. Because formal proofs can be automatically checked, agents do not need to manually check the properties, or even understand the proof. To demonstrate, we present the verification of a sophisticated mechanism: the generic reduction from Bayesian incentive compatible mechanism design to algorithm design given by Hartline, Kleinberg, and Malekian. This mechanism presents new challenges for formal verification, including essential use of randomness from both the execution of the mechanism and from the prior type distributions. As an immediate consequence, our work also formalizes Bayesian incentive compatibility for the entire family of mechanisms derived via this reduction. Finally, as an intermediate step in our formalization, we provide the first formal verification of incentive compatibility for the celebrated Vickrey-Clarke-Groves mechanism.
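    The flavour of property being verified can be illustrated on the simplest VCG instance, a single-item second-price (Vickrey) auction: the toy script below exhaustively checks, over a small grid of values and bids, that truthful bidding is a dominant strategy. This is an informal sanity check in the spirit of the verified property, nothing like the machine-checked proofs in the paper; the grid, tie-breaking rule, and utility model are assumptions.

```python
# Brute-force check, on a small grid, that truthful bidding is a dominant
# strategy in a single-item second-price (Vickrey) auction, the simplest
# instance of VCG. An informal illustration only, not a formal proof.
from itertools import product

GRID = range(0, 6)  # assumed discrete value/bid space

def utility(value, my_bid, other_bid):
    # Winner pays the other bid; ties broken in our favour for concreteness.
    return value - other_bid if my_bid >= other_bid else 0

def truthful_is_dominant():
    for value, deviation, other_bid in product(GRID, GRID, GRID):
        if utility(value, deviation, other_bid) > utility(value, value, other_bid):
            return False  # a profitable deviation would refute dominance
    return True

print(truthful_is_dominant())  # True on this grid
```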