
    The Crypto-democracy and the Trustworthy

    In the current architecture of the Internet, there is a strong asymmetry of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers) and the individuals from whom this data originates. In particular, individuals have no choice but to trust blindly that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements from the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, in particular secure multiparty computation, offer a practical solution for protecting privacy while minimizing trust assumptions. In the crypto-democracy envisioned, individuals do not have to trust a single physical entity with their personal data; rather, their data is distributed among several institutions. Together, these institutions form a virtual entity called the Trustworthy, which is responsible for storing this data but can also compute on it (provided that all the institutions agree). Finally, we also propose a realistic proof-of-concept of the Trustworthy, in which the roles of the institutions are played by universities. This proof-of-concept would have an important impact in demonstrating the possibilities offered by the crypto-democracy paradigm.
    Comment: DPM 201
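    The distributed-trust idea can be illustrated with additive secret sharing, a standard building block of secure multiparty computation. The sketch below is a hypothetical toy, not the paper's protocol: each "institution" holds one share of an individual's value, no single share reveals anything, and the institutions can jointly compute a sum by adding shares component-wise.

    ```python
    import random

    PRIME = 2_147_483_647  # field modulus (a Mersenne prime); an arbitrary choice here

    def share(secret, n_parties):
        """Split a secret into n additive shares; any n-1 of them reveal nothing."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        """Recover the secret only when all shares are combined."""
        return sum(shares) % PRIME

    # Two individuals each split their value among three institutions.
    alice = share(42, 3)
    bob = share(100, 3)

    # Each institution adds the shares it holds; reconstructing the results
    # yields the total without any institution seeing an individual value.
    per_institution = [(a + b) % PRIME for a, b in zip(alice, bob)]
    total = reconstruct(per_institution)  # 142
    ```

    The same share-wise trick extends to any linear computation; multiplications require heavier multiparty protocols, which is where the real engineering cost of the Trustworthy would lie.
    
    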

    AnKLe: Automatic Attack Detection via Information Divergence

    In this article, we consider the context of very large distributed systems, in which each node must be able to quickly analyze a large quantity of information arriving as a stream. Since this stream may have been tampered with by an adversary, a fundamental problem is the detection and quantification of malicious actions performed on it. To this end, we propose AnKLe (Attack-tolerant eNhanced Kullback-Leibler divergence Estimator), a local algorithm for estimating the Kullback-Leibler divergence between an observed stream and the expected stream. AnKLe combines sampling techniques with methods from information theory. It is efficient in both space and time complexity, and requires only a single pass over the stream. Experimental results show that the estimator provided by AnKLe is accurate for several types of attacks, for which other existing methods perform significantly worse.
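    To make the underlying quantity concrete, here is a naive plug-in estimator of the Kullback-Leibler divergence between an observed stream and an expected distribution. This is not AnKLe itself (which uses sampling to stay sublinear in space); it simply shows why the divergence flags a biased stream.

    ```python
    from collections import Counter
    from math import log

    def kl_divergence(observed, expected_probs):
        """Empirical KL divergence D(p_hat || q) between the frequency
        distribution of an observed stream and an expected distribution q.
        Large values suggest the stream has been tampered with."""
        counts = Counter(observed)
        n = len(observed)
        d = 0.0
        for symbol, c in counts.items():
            p = c / n
            q = expected_probs.get(symbol, 1e-12)  # floor avoids log(0)
            d += p * log(p / q)
        return d

    uniform = {s: 0.25 for s in "abcd"}
    clean = list("abcd" * 250)               # matches the expected distribution
    skewed = list("a" * 700 + "bcd" * 100)   # adversarially biased toward 'a'

    kl_divergence(clean, uniform)   # 0.0
    kl_divergence(skewed, uniform)  # ≈ 0.45, clearly positive
    ```

    The exact estimator requires storing all counts, i.e., space linear in the alphabet size; AnKLe's contribution is approximating this value in a single pass with sampling.
    
    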

    On the Power of the Adversary to Solve the Node Sampling Problem

    We study the problem of achieving uniform and fresh peer sampling in large-scale dynamic systems under adversarial behavior. Briefly, uniform and fresh peer sampling guarantees that any node in the system is equally likely to appear as a sample at any non-malicious node, and that infinitely often any node has a non-null probability of appearing as a sample at honest nodes. This sample is built locally out of a stream of node identifiers received at each node. An important issue that seriously hampers the feasibility of node sampling in open, large-scale systems is the unavoidable presence of malicious nodes. The main objective of malicious nodes is to continuously and heavily bias the input data stream from which samples are obtained, so as to prevent (honest) nodes from being selected as samples. First, we demonstrate that restricting the number of requests that malicious nodes can issue, together with providing full knowledge of the composition of the system, is a necessary and sufficient condition to guarantee uniform and fresh sampling. We also define and study two types of adversary models: an omniscient adversary, which can eavesdrop on all messages exchanged within the system, and a blind adversary, which can only observe messages sent or received by the nodes it controls. The former model allows us to derive lower bounds on the impact the adversary has on the sampling functionality, while the latter corresponds to a more realistic setting. Given any sampling strategy, we quantify the minimum effort both types of adversary must exert on the input stream to prevent that strategy from outputting a uniform and fresh sample.
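    A minimal sketch of building a sample locally from a stream of node identifiers is single-slot reservoir sampling: the i-th identifier replaces the current sample with probability 1/i, so the result is uniform over the identifiers seen so far. This is an illustrative baseline, not the paper's strategy, and it also illustrates the paper's problem: an adversary flooding the stream with its own identifiers biases the sample, which is why bounding malicious request rates matters.

    ```python
    import random

    def sample_stream(stream):
        """Return one element chosen uniformly at random from the stream,
        using O(1) memory and a single pass (reservoir sampling, k = 1)."""
        sample = None
        for i, node_id in enumerate(stream, start=1):
            # The i-th identifier becomes the sample with probability 1/i,
            # which keeps every identifier seen so far equally likely.
            if random.randrange(i) == 0:
                sample = node_id
        return sample
    ```

    Uniformity here holds only over the stream actually received: if malicious nodes inject most of the identifiers, the sample is uniform over a poisoned stream, not over the honest membership.
    
    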