
    Brief Announcement: Memory Lower Bounds for Self-Stabilization

    In the context of self-stabilization, a silent algorithm guarantees that the communication registers of every node do not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent deterministic algorithm must use a memory of Ω(log n) bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling schemes, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of Ω(log² n) bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim to measure how much can be gained, in terms of memory, by using arbitrary deterministic self-stabilizing algorithms that are not necessarily silent. To our knowledge, the only known lower bound on the memory requirement of deterministic general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a lower bound of Ω(log log n) bits per register for deterministic self-stabilizing algorithms solving (Δ+1)-coloring, electing a leader, or constructing a spanning tree in networks of maximum degree Δ.

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a silent algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of Ω(log n) bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling schemes, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of Ω(log² n) bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim to measure how much can be gained, in terms of memory, by using arbitrary self-stabilizing algorithms that are not necessarily silent. To our knowledge, the only known lower bound on the memory requirement of general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight bound of Θ(log Δ + log log n) bits per register for self-stabilizing algorithms solving (Δ+1)-coloring or constructing a spanning tree in networks of maximum degree Δ. The lower bound of Ω(log log n) bits per register also holds for leader election.
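
    To make the memory measure above concrete, here is a minimal Python sketch (our own toy example, not taken from the paper) of a silent self-stabilizing spanning-tree rule: each node repeatedly recomputes its distance register from its neighbors until no register changes. Since each register holds a value in 0..n, it uses O(log n) bits, which is the regime the silent lower bounds refer to.

    ```python
    # Toy silent self-stabilizing BFS spanning tree (illustration only).
    # Each node keeps one register `dist`; the rule is applied until no
    # register changes, i.e. the algorithm becomes silent. Registers range
    # over 0..n, hence O(log n) bits per register.

    def stabilize(adj, root):
        n = len(adj)
        dist = {v: n for v in adj}       # arbitrary (corrupted) initial state
        changed = True
        while changed:                   # repeat until silence
            changed = False
            for v in adj:
                new = 0 if v == root else min(dist[u] for u in adj[v]) + 1
                new = min(new, n)        # keep registers bounded
                if new != dist[v]:
                    dist[v], changed = new, True
        # Each non-root node adopts a closest neighbor as its tree parent.
        parent = {v: (None if v == root else
                      min(adj[v], key=lambda u: dist[u])) for v in adj}
        return dist, parent

    # Path graph 0-1-2-3 rooted at 0
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    dist, parent = stabilize(adj, 0)
    ```

    Once stabilized, every register stores the node's BFS distance to the root, and the parent pointers form a spanning tree; no register changes afterwards, which is exactly the silence property the lower bounds exploit.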

    Physical functions : the common factor of side-channel and fault attacks ?

    Security is a key component of information technologies and communication. Among security threats, a very important one is certainly due to vulnerabilities of the integrated circuits that implement cryptographic algorithms. These electronic devices (such as smartcards) could fall into the hands of malicious people and then be subject to "physical attacks". These attacks are generally classified into two categories: fault attacks and side-channel attacks. One of the main challenges in securing circuits against such attacks is to propose methods and tools to estimate, as soundly as possible, the efficiency of protections. Numerous works attempt to provide tools based on sound statistical techniques but, to our knowledge, they only address side-channel attacks. In this article, a formal link between fault and side-channel attacks is presented. The common factor between them is what we call the 'physical function', an extension of the concept of 'leakage function' widely used in the side-channel community. We think that our work could enable the re-use for fault attacks (certainly modulo some adjustments) of the strong theoretical background developed for side-channel attacks. This work could also make it easier to combine side-channel and fault attacks, and could thus facilitate the discovery of new attack paths. More importantly, the notion of physical functions opens new challenges in estimating the protection of circuits.

    Understanding barriers to attending live performances through individuals' conceptions of art.

    The objective of our research is to explore the conception that individuals have of art, in order to identify the barriers to attending live performances. The discourse of sixty-eight non-consumers of live performances reveals two conceptions of the relation to art: one individual and affective, associated with painting and music; the other collective and cognitive, associated with live performances. These dissonances, which relate to the dimensions of the perceived value of the consumption experience, could explain the barriers to attending live performances. Strategies aiming to create experiential contexts are then proposed to cultural professionals in order to encourage the appropriation of this art form by non-audiences.

    A unified formalism for side-channel and fault attacks on cryptographic circuits

    Security is a key component of information technologies and communication. Security is a very large research area spanning the whole of information technology, related to both hardware and software. This paper focuses on hardware security, and more specifically on hardware cryptanalysis, whose aim is to extract confidential information (such as encryption keys) from cryptographic circuits. Many physical cryptanalysis techniques have been proposed in the last ten years, but they always belong to one of two very distinct categories: fault attacks and side-channel attacks. In this article, a formal link between these two categories is proposed. To the best of our knowledge, this is the first time that such a wide class of attacks is described in such a generic manner.

    Evidence for an Altered Cytoskeleton Mobilization Pathway in Splenic Dendritic Cells (DC) from HLA-B27/human β2-microglobulin Transgenic Rats (B27-rats)

    Background: Although the association of the MHC class I allele HLA-B27 with spondyloarthropathy (SpA) has been known for almost 35 years, different hypotheses on its relation to the disease mechanism still exist in parallel. Several lines of rats transgenic for HLA-B27 and human β2-microglobulin develop an inflammatory disease that strikingly resembles human SpA. It is hypothesized that disease in HLA-B27-transgenic rats arises as a consequence of interaction between antigen-presenting cells expressing high levels of HLA-B27 and peripheral T lymphocytes, and may result from a rupture of tolerance towards gut bacteria. Methods: We used 2D PAGE and iTRAQ to compare the protein expression profile of HLA-B27 dendritic cells (DCs) to that of healthy HLA-B7-expressing and nontransgenic (NTG) rat DCs. MHC II surface expression and apoptotic sensitivity were quantified using flow cytometry. Results: Three protein sets from the proteome analysis were indicative of aberrant cellular processes. First, all proteins involved in protein processing and MHC I assembly were upregulated in B27 DCs, illustrating the higher pressure on the ER due to misfolding of the HLA-B27 heavy chain. Second, all proteins directly influencing actin dynamics were downregulated. We showed earlier that this not only influences motility, but also plays an important role in deficient immunological synapse formation. Third, the key thiol protease cathepsin S, involved in MHC II synthesis, was downregulated, which led us to quantify RT1-B and RT1-D surface expression. Downregulation concerned both CD4+ and CD4- OX62+ HLA-B27 DC subpopulations, and maturation enlarged the differences in both population bias and expression intensity. Deficient actin dynamics could also contribute to this lower MHC II surface expression. A study of sensitivity to MHC class II-mediated apoptosis by antibody stimulation showed that, compared to NTG, both B7 and B27 CD4+ DCs were more prone to apoptosis but did not mutually differ. In contrast, overnight culturing resulted in higher cell death in B27 than in control CD4- DCs, even without antibody stimulation. Interestingly, decreased actin dynamics could also be involved in DC apoptosis. Conclusions: We have demonstrated that DCs are a very vulnerable cell type in HLA-B27 rats. Deficient cytoskeletal dynamics could immobilize matured DCs in the tissue or induce aberrant migration patterns upon activation. On top of that, abnormal intracellular trafficking and membrane organization, together with a reduced expression of MHC class II molecules, impair their communication with T cells through deficient immunological synapse formation. In particular, the reduced motility and viability of the tolerogenic CD4- DCs could play an important role in initiating a systemic autoimmune response.

    A Template Attack Against VERIFY PIN Algorithms

    This paper presents the first side-channel analysis based on electromagnetic emissions of VERIFY PIN algorithms. To enter a PIN code, a user has a limited number of trials; the main difficulty of the attack is therefore to succeed with very few traces. More precisely, this work implements a template attack and experimentally verifies its success rate. The attack constitutes a new, realistic threat, as it is feasible on a low-cost and portable platform. Moreover, this paper shows that some protections of VERIFY PIN algorithms against fault attacks introduce new vulnerabilities with respect to side-channel analysis.
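
    To make the profiling/matching structure of a template attack concrete, here is a hedged Python sketch using a purely synthetic leakage model (a noisy sample proportional to the secret digit); the real attack profiles electromagnetic traces, and all names here are illustrative. The attack phase classifies a single observation, reflecting the few-traces constraint imposed by the trial counter.

    ```python
    # Toy template attack on one PIN digit (illustrative model, not the
    # paper's experimental setup). Profiling: fit a Gaussian template
    # (mean, stdev) per digit value from many traces of a device the
    # attacker controls. Attack: classify ONE observed sample by
    # maximum likelihood.
    import math
    import random
    import statistics

    random.seed(0)

    # Assumed toy leakage: one noisy sample proportional to the digit value.
    def trace(digit):
        return digit + random.gauss(0, 0.3)

    # Profiling phase: many traces per candidate digit.
    templates = {}
    for d in range(10):
        samples = [trace(d) for _ in range(500)]
        templates[d] = (statistics.mean(samples), statistics.stdev(samples))

    def classify(observed):
        # Gaussian log-likelihood of the single observed sample under
        # each template; return the best-matching digit.
        def loglik(d):
            mu, sd = templates[d]
            return -math.log(sd) - (observed - mu) ** 2 / (2 * sd * sd)
        return max(range(10), key=loglik)
    ```

    The same structure scales to multivariate templates over several points of interest per trace; the profiling phase is what lets the attack succeed from a single VERIFY PIN execution.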

    A Unified Formalism for Physical Attacks

    The security of cryptographic algorithms can be considered in two contexts. On the one hand, these algorithms can be proven secure mathematically. On the other hand, physical attacks can defeat the implementation of an algorithm even though the algorithm itself has been proven secure. Under the common name of physical attacks, several distinct attack families are grouped together: side-channel attacks and fault injection attacks. This paper presents a common formalism for these attacks and highlights their underlying principles. All physical attacks on symmetric algorithms can be described as a 3-step process. Moreover, it becomes possible to compare different physical attacks by separating the theoretical attack path from the experimental parts of the attack.
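
    Our reading of the 3-step view can be sketched as follows (a toy model, not the report's exact formalism): a physical function maps inputs and the key to observables, a distinguisher scores key hypotheses against those observables, and the best-scoring hypothesis is retained. Here the physical function is a Hamming-weight side channel on a 1-byte XOR; all names are illustrative.

    ```python
    # Toy 3-step physical attack on a 1-byte XOR "cipher" (illustration).
    SECRET = 0x3A

    def hw(x):
        return bin(x).count("1")     # Hamming-weight leakage model

    # Step 1: physical function -- what the device leaks for plaintext p.
    def side_channel(p):
        return hw(p ^ SECRET)

    def attack(observe, model, inputs):
        # Step 2: distinguisher -- score each key hypothesis by how often
        # the modeled leakage matches the observed leakage.
        score = {k: sum(model(p, k) == observe(p) for p in inputs)
                 for k in range(256)}
        # Step 3: retain the best-scoring hypothesis.
        return max(score, key=score.get)

    recovered = attack(side_channel, lambda p, k: hw(p ^ k), range(256))
    ```

    A fault attack fits the same frame by swapping the physical function (a faulty ciphertext instead of a leakage sample) while keeping steps 2 and 3 unchanged, which is the kind of unification the formalism aims at.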