
    Conclave: secure multi-party computation on big data (extended TR)

    Secure Multi-Party Computation (MPC) allows mutually distrusting parties to run joint computations without revealing private data. Current MPC algorithms scale poorly with data size, which makes MPC on "big data" prohibitively slow and inhibits its practical use. Many relational analytics queries can maintain MPC's end-to-end security guarantee without using cryptographic MPC techniques for all operations. Conclave is a query compiler that accelerates such queries by transforming them into a combination of data-parallel, local cleartext processing and small MPC steps. When parties trust others with specific subsets of the data, Conclave applies new hybrid MPC-cleartext protocols to run additional steps outside of MPC and improve scalability further. Our Conclave prototype generates code for cleartext processing in Python and Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave scales to data sets between three and six orders of magnitude larger than state-of-the-art MPC frameworks support on their own. Thanks to its hybrid protocols, Conclave also substantially outperforms SMCQL, the most similar existing system. Comment: Extended technical report for the EuroSys 2019 paper.
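
    As a rough illustration of the idea (our own sketch, not Conclave's actual API or generated code), a hybrid plan can keep the bulky aggregation in local cleartext processing and reserve the secure computation for a small combine step:

```python
# Hypothetical sketch of a hybrid cleartext/MPC aggregate; all names are ours.

def local_preaggregate(rows):
    """Each party pre-aggregates its own cleartext data locally (cheap, parallel)."""
    totals = {}
    for key, value in rows:
        totals[key] = totals.get(key, 0) + value
    return totals

def mpc_combine(party_aggregates):
    """Stand-in for the small MPC step: only the pre-aggregated values cross
    the trust boundary, so the secure computation stays small."""
    combined = {}
    for totals in party_aggregates:          # in a real system this runs under MPC
        for key, value in totals.items():
            combined[key] = combined.get(key, 0) + value
    return combined

party_a = [("x", 3), ("y", 5), ("x", 1)]
party_b = [("x", 2), ("z", 4)]
print(mpc_combine([local_preaggregate(party_a), local_preaggregate(party_b)]))
# {'x': 6, 'y': 5, 'z': 4}
```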

    Vertical Field Effect Transistor based on Graphene-WS2 Heterostructures for flexible and transparent electronics

    The celebrated electronic properties of graphene have opened the way for materials just one atom thick to be used in the post-silicon electronic era. An important milestone was the creation of heterostructures based on graphene and other two-dimensional (2D) crystals, which can be assembled in 3D stacks with atomic-layer precision. These layered structures have already led to a range of fascinating physical phenomena, and have also been used to demonstrate a prototype field-effect tunnelling transistor, a candidate for post-CMOS technology. The range of possible materials which could be incorporated into such stacks is very large. Indeed, there are many other materials whose layers are linked by weak van der Waals forces and which can be exfoliated and combined together to create novel, highly tailored heterostructures. Here we describe a new generation of field-effect vertical tunnelling transistors in which 2D tungsten disulphide serves as an atomically thin barrier between two layers of either mechanically exfoliated or CVD-grown graphene. Our devices show unprecedented current modulation exceeding one million at room temperature and can also operate on transparent and flexible substrates.

    Test of OPE and OGE through mixing angles of negative parity N* resonances in electromagnetic transitions

    In this report, using the mixing angles of the one-gluon-exchange (OGE) and one-pion-exchange (OPE) models together with the electromagnetic Hamiltonian of Close and Li, we calculate the amplitudes of the L=1 N* resonances for photoproduction and electroproduction. The results are compared with experimental data; we find that the data support OGE rather than OPE. Comment: 5 pages, LaTeX, 1 figure, accepted by Phys.Rev.

    Monotonicity of Fitness Landscapes and Mutation Rate Control

    A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and show that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.
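
    The geometric case can be made concrete with a small numeric experiment (our illustration under the stated assumptions, not the paper's code): on binary strings, take fitness to be minus the Hamming distance to an optimum, and for each distance scan per-bit mutation rates for the one that maximises the chance of moving strictly closer.

```python
# Fitness = -Hamming distance on binary strings of length L; scan mutation rates.
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_improve(L, d, p):
    """P(more of the d incorrect bits flip than of the L - d correct bits)."""
    return sum(binom_pmf(d, good, p) * binom_pmf(L - d, bad, p)
               for good in range(1, d + 1)
               for bad in range(good))       # bad < good => offspring is closer

L = 20
for d in (1, 5, 10):
    best_p = max((i / 200 for i in range(1, 101)),
                 key=lambda p: prob_improve(L, d, p))
    print(f"distance {d:2d}: best per-bit rate ~ {best_p:.3f}")
# In this toy model the optimal rate grows with distance, i.e. as fitness falls.
```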

    Context-aware Approach for Determining the Threshold Price in Name-Your-Own-Price Channels

    A key feature of a context-aware application is the ability to adapt based on a change of context. Two approaches that are widely used in this regard are context-action pair mapping, where developers match an action to execute for a particular context change, and adaptive learning, where a context-aware application refines its action over time based on the preceding action's outcome. Both approaches have limitations that make them unsuitable in situations where a context-aware application has to deal with unknown context changes. In this paper, we propose a framework where adaptation is carried out via concurrent multi-action evaluation of a dynamically created action space. This dynamic creation of the action space eliminates the need to rely on developers to create context-action pairs, and the concurrent multi-action evaluation reduces the adaptation time compared with the iterative approach used by adaptive learning techniques. Using our reference implementation of the framework, we show how it can be used to dynamically determine the threshold price in an e-commerce system that uses the name-your-own-price (NYOP) strategy.
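
    A minimal sketch of the idea (hypothetical names and scoring, not the paper's reference implementation) is to generate the candidate thresholds from the current context and score them concurrently rather than iteratively:

```python
# Concurrent multi-action evaluation over a dynamically created action space.
from concurrent.futures import ThreadPoolExecutor

def candidate_prices(context):
    """Build the action space from the context instead of a fixed mapping."""
    base = context["cost"] * (1 + context["target_margin"])
    return [round(base * f, 2) for f in (0.9, 0.95, 1.0, 1.05, 1.1)]

def evaluate(price, context):
    """Toy score: expected profit = margin * estimated acceptance probability."""
    accept_prob = max(0.0, 1.0 - price / (2 * context["reference_price"]))
    return (price - context["cost"]) * accept_prob

def threshold_price(context):
    prices = candidate_prices(context)
    with ThreadPoolExecutor() as pool:       # evaluate all candidates at once
        scores = list(pool.map(lambda p: evaluate(p, context), prices))
    return prices[scores.index(max(scores))]

print(threshold_price({"cost": 60.0, "target_margin": 0.25, "reference_price": 100.0}))
```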

    Acupuncture for chronic neck pain: a pilot for a randomised controlled trial

    Background: Acupuncture is increasingly being used for many conditions, including chronic neck pain. However, the evidence remains inconclusive, indicating the need for further well-designed research. The aim of this study was to conduct a pilot randomised controlled parallel-arm trial to establish key features required for the design and implementation of a large-scale trial on acupuncture for chronic neck pain. Methods: Patients whose GPs had diagnosed neck pain were recruited from one general practice, and randomised to receive either usual GP care only, or acupuncture (up to 10 treatments over 3 months) as an adjunctive treatment to usual GP care. The primary outcome measure was the Northwick Park Neck Pain Questionnaire (NPQ) at 3 months. The primary analysis was to determine the sample size for the full-scale study. Results: Of the 227 patients with neck pain identified from the GP database, 28 (12.3%) consenting patients were eligible to participate in the pilot and 24 (10.5%) were recruited to the trial. Ten patients were randomised to acupuncture, receiving an average of eight treatments from one of four acupuncturists, and 14 were randomised to usual GP care alone. The sample size for the full-scale trial was calculated from a clinically meaningful difference of 5% on the NPQ and, from this pilot, an adjusted standard deviation of 15.3%. Assuming 90% power at the 5% significance level, a sample size of 229 would be required in each arm of a large-scale trial when allowing for a loss to follow-up rate of 14%. In order to achieve this sample, one would need to identify patients from databases of GP practices with a total population of 230,000 patients, or approximately 15 GP practices roughly equal in size to the one involved in this study (i.e. 15,694 patients). Conclusion: This pilot study has allowed a number of recommendations to be made to facilitate the design of a large-scale trial, which in turn will help to clarify the existing evidence base on acupuncture for neck pain.
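
    The reported figure of 229 per arm follows from the standard two-arm sample-size formula; the sketch below reproduces it (the z-values are the usual 1.96 and 1.2816 for two-sided 5% significance and 90% power, an assumption the abstract does not spell out):

```python
# Two-arm sample size from the pilot's numbers: delta = 5%, SD = 15.3% on the NPQ.
from math import ceil

delta, sd = 5.0, 15.3            # clinically meaningful difference and adjusted SD (%)
z_alpha, z_beta = 1.96, 1.2816   # two-sided alpha = 0.05, power = 0.90
dropout = 0.14                   # allowed loss to follow-up

n_per_arm = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
print(round(n_per_arm), ceil(n_per_arm / (1 - dropout)))  # 197 before dropout, 229 after
```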

    Classical and semi-classical energy conditions

    The standard energy conditions of classical general relativity are (mostly) linear in the stress-energy tensor, and have clear physical interpretations in terms of geodesic focussing, but suffer the significant drawback that they are often violated by semi-classical quantum effects. In contrast, it is possible to develop non-standard energy conditions that are intrinsically non-linear in the stress-energy tensor, and which exhibit much better-controlled behaviour when semi-classical quantum effects are introduced, at the cost of a less direct applicability to geodesic focussing. In this article we first review the standard energy conditions and their various limitations, including the connection to the Hawking-Ellis type I, II, III, and IV classification of stress-energy tensors. We shall then turn to the averaged, nonlinear, and semi-classical energy conditions, and see how much can be done once semi-classical quantum effects are included. Comment: V1: 25 pages. Draft chapter on which the related chapter of the book "Wormholes, Warp Drives and Energy Conditions" (to be published by Springer) will be based. V2: typos fixed. V3: small typo fixed.
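
    For reference, the standard pointwise conditions under review are the textbook ones, stated here in the (-,+,+,+) signature:

```latex
\begin{align*}
\text{NEC:}\quad & T_{ab}\,k^a k^b \ge 0 && \text{for all null } k^a,\\
\text{WEC:}\quad & T_{ab}\,V^a V^b \ge 0 && \text{for all timelike } V^a,\\
\text{SEC:}\quad & \bigl(T_{ab} - \tfrac{1}{2}\,T\,g_{ab}\bigr)V^a V^b \ge 0 && \text{for all timelike } V^a,\\
\text{DEC:}\quad & T_{ab}\,V^a V^b \ge 0 \ \text{and}\ -T^{a}{}_{b}\,V^b \ \text{causal} && \text{for all timelike } V^a.
\end{align*}
```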

    Algorithmically Effective Differentially Private Synthetic Data

    We present a highly effective algorithmic approach for generating $\varepsilon$-differentially private synthetic data in a bounded metric space with near-optimal utility guarantees under the 1-Wasserstein distance. In particular, for a dataset $X$ in the hypercube $[0,1]^d$, our algorithm generates a synthetic dataset $Y$ such that the expected 1-Wasserstein distance between the empirical measures of $X$ and $Y$ is $O((\varepsilon n)^{-1/d})$ for $d \geq 2$, and is $O(\log^2(\varepsilon n)\,(\varepsilon n)^{-1})$ for $d = 1$. The accuracy guarantee is optimal up to a constant factor for $d \geq 2$, and up to a logarithmic factor for $d = 1$. Our algorithm has a fast running time of $O(\varepsilon n)$ for all $d \geq 1$ and demonstrates improved accuracy compared to the method in (Boedihardjo et al., 2022) for $d \geq 2$. Comment: 23 pages.
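
    For contrast with the near-optimal algorithm described above, the classic baseline it improves on can be sketched in a few lines (a toy one-dimensional noisy-histogram generator, explicitly not the paper's method):

```python
# Toy epsilon-DP synthetic data on [0,1] via a Laplace-noised histogram.
import random

def dp_synthetic_1d(data, epsilon, bins=32, size=None):
    counts = [0] * bins
    for x in data:
        counts[min(int(x * bins), bins - 1)] += 1
    # Laplace(1/epsilon) noise per bin (add/remove-neighbour sensitivity 1),
    # drawn as a difference of two exponentials; clip negatives to zero.
    noisy = [max(0.0, c + random.expovariate(epsilon) - random.expovariate(epsilon))
             for c in counts]
    if sum(noisy) == 0:
        noisy = [1.0] * bins                 # degenerate case: fall back to uniform
    return [(i + random.random()) / bins     # uniform draw inside the sampled bin
            for i in random.choices(range(bins), weights=noisy, k=size or len(data))]

data = [random.betavariate(2, 5) for _ in range(1000)]
print(dp_synthetic_1d(data, epsilon=1.0)[:5])
```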