
    Computation of local exchange coefficients in strongly interacting one-dimensional few-body systems: local density approximation and exact results

    One-dimensional multi-component Fermi or Bose systems with strong zero-range interactions can be described in terms of local exchange coefficients, and the problem can thus be mapped onto a spin model. For arbitrary external confining potentials, the local exchanges are given by highly non-trivial geometric factors that depend solely on the geometry of the confinement through the single-particle eigenstates of the external potential. To obtain accurate effective Hamiltonians for such systems, one must compute these geometric factors with high precision, which is difficult due to the computational complexity of the high-dimensional integrals involved. An approach using the local density approximation would therefore be a most welcome simplification. Here we assess the accuracy of the local density approximation by going beyond the simple harmonic oscillator that has been the focus of previous studies and considering some double wells of current experimental interest. We find that the local density approximation works quite well as long as the potentials resemble harmonic wells, but breaks down for larger barriers. To explore the consequences of applying the local density approximation in a concrete setup, we consider quantum state transfer in the effective spin models one obtains. Here we find that even minute deviations in the local exchange coefficients between the exact and the local density approximation results can induce large deviations in the fidelity of state transfer for four, five, and six particles.
    Comment: 12 pages, 7 figures, 1 table, final version
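    The sensitivity of state transfer to the exchange coefficients can be illustrated in the single-excitation sector of an effective XX spin chain. The sketch below is a toy illustration, not the paper's computation: the perfect-state-transfer coupling profile J_k = sqrt(k(N-k)) and the few-percent perturbation are assumed for demonstration only.

```python
import numpy as np
from scipy.linalg import expm

def transfer_fidelity(J, t):
    """Transfer fidelity |<N|exp(-iHt)|1>|^2 in the single-excitation
    subspace of an XX chain with nearest-neighbour couplings J[0..N-2]."""
    H = np.diag(J, 1) + np.diag(J, -1)   # tridiagonal hopping matrix
    psi = expm(-1j * t * H)[:, 0]        # evolve an excitation starting on site 1
    return abs(psi[-1]) ** 2             # probability of finding it on site N

N = 5
k = np.arange(1, N)
J_exact = np.sqrt(k * (N - k))           # perfect-state-transfer profile
t_star = np.pi / 2                       # transfer time for this profile

f_exact = transfer_fidelity(J_exact, t_star)
# a few percent of asymmetric error in the couplings (made-up numbers)
f_pert = transfer_fidelity(J_exact * (1 + 0.05 * np.array([1, -1, 1, -1])),
                           t_star)
print(f_exact, f_pert)
```

    With the exact profile the fidelity is essentially unity, while the perturbed couplings give a strictly lower value, mirroring the paper's observation that small coefficient errors degrade transfer.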

    An Experimental Evaluation of Deliberate Unsoundness in a Static Program Analyzer

    Abstract. Many practical static analyzers are not completely sound by design. Their designers trade soundness in order to increase automation, improve performance, and reduce the number of false positives or the annotation overhead. However, the impact of such design decisions on the effectiveness of an analyzer is not well understood. In this paper, we report on the first systematic effort to document and evaluate the sources of unsoundness in a static analyzer. We present a code instrumentation that reflects the sources of deliberate unsoundness in the .NET static analyzer Clousot. We have instrumented code from several open-source projects to evaluate how often concrete executions violate Clousot’s unsound assumptions. In our experiments, this was the case in 8–29% of all analyzed methods. Our approach and findings can guide users of static analyzers in using them fruitfully, and designers in finding good trade-offs.

    Scattering statistics of rock outcrops: Model-data comparisons and Bayesian inference using mixture distributions

    The probability density function of the acoustic field amplitude scattered by the seafloor was measured in a rocky environment off the coast of Norway using a synthetic aperture sonar system, and is reported here in terms of the probability of false alarm. Interpretation of the measurements focused on finding the appropriate class of statistical models (single versus two-component mixture models), and on appropriate models within these two classes. It was found that two-component mixture models performed better than single models. The two mixture models that performed best (and had a basis in the physics of scattering) were a mixture of two K distributions, and a mixture of a Rayleigh and a generalized Pareto distribution. Bayes' theorem was used to estimate the probability density function of the mixture model parameters. It was found that the K-K mixture exhibits significant correlation between its parameters. The mixture of the Rayleigh and generalized Pareto distributions also had significant parameter correlation, but additionally contained multiple modes. We conclude that the mixture of two K distributions is the most applicable to this dataset.
    Comment: 15 pages, 7 figures, accepted to the Journal of the Acoustical Society of America
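    As a rough illustration of how such a two-component mixture yields a probability of false alarm, the snippet below evaluates the survival function of a Rayleigh plus generalized Pareto mixture with scipy.stats. The mixing weight and shape parameters are illustrative placeholders, not the values inferred in the paper.

```python
import numpy as np
from scipy.stats import rayleigh, genpareto

def mixture_pfa(a, eps=0.05, sigma=1.0, shape=0.3, gp_scale=2.0):
    """P(amplitude > a) for a (1-eps) Rayleigh + eps generalized Pareto
    mixture. All parameter values here are illustrative placeholders."""
    return ((1 - eps) * rayleigh.sf(a, scale=sigma)
            + eps * genpareto.sf(a, shape, scale=gp_scale))

# the heavy Pareto tail dominates the false-alarm probability at high
# thresholds, where the Rayleigh component is negligible
for a in (0.0, 2.0, 4.0, 6.0):
    print(a, mixture_pfa(a))
```

    At high amplitude thresholds the heavy-tailed component controls the false-alarm rate, which is why a pure Rayleigh model badly underestimates it in rocky environments.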

    Differentially Testing Soundness and Precision of Program Analyzers

    In the last decades, numerous program analyzers have been developed by both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
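    A differential testing loop of this kind can be sketched in a few lines. The toy below is illustrative only: the "analyzers" are stub predicates over a small integer domain, and the classification mirrors the idea, not the paper's implementation. It flags a soundness issue when an analyzer claims safety for a program whose exhaustive execution violates the assertion, and a precision issue for the converse.

```python
import random

DOMAIN = range(-8, 8)   # toy "programs" are predicates over this domain

def ground_truth(pred):
    """Oracle: exhaustive concrete execution over the whole domain."""
    return "safe" if all(pred(x) for x in DOMAIN) else "unsafe"

def exact_analyzer(pred):
    return ground_truth(pred)        # sound and complete on this toy domain

def optimistic_analyzer(pred):
    sample = random.sample(list(DOMAIN), 3)   # deliberately unsound: tests 3 inputs
    return "safe" if all(pred(x) for x in sample) else "unsafe"

def differential_test(programs, analyzers):
    issues = []
    for i, pred in enumerate(programs):
        truth = ground_truth(pred)
        for name, analyze in analyzers.items():
            verdict = analyze(pred)
            if verdict == "safe" and truth == "unsafe":
                issues.append((i, name, "soundness"))   # missed a real violation
            elif verdict == "unsafe" and truth == "safe":
                issues.append((i, name, "precision"))   # spurious alarm
    return issues

random.seed(0)
programs = [lambda x, a=a: x * x >= a for a in (-1, 0, 10)]  # x*x >= a assertions
issues = differential_test(programs, {"exact": exact_analyzer,
                                      "optimistic": optimistic_analyzer})
print(issues)
```

    Only the deliberately unsound analyzer can be flagged here; scaling this idea to real analyzers requires automatic program generation and verdict normalization, which is where the paper's contribution lies.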

    Efficient family-based model checking via variability abstractions

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in the tool Snip, scale much better than the ``brute force'' approach, where all individual systems are verified one by one using a single-system model checker. Nevertheless, their computational cost still depends heavily on the number of features and variants. For variational models with many features and variants, family-based model checking may be too costly or even infeasible.
    In this work, we address two key problems of family-based model checking. First, we improve scalability by introducing abstractions that simplify variability. Second, we reduce the burden of maintaining specialized family-based model checkers by showing how the presented variability abstractions can be used to model check variational models using the standard version of (single-system) Spin. The variability abstractions are first defined as Galois connections on semantic domains. We then show how to use them to define abstract family-based model checking, where a variability model is replaced with an abstract version of it that preserves the satisfaction of LTL properties. Moreover, given an abstraction, we define a syntactic source-to-source transformation on high-level modelling languages that describe variational models, such that the model checking of the transformed high-level variational model coincides with the abstract model checking of the concrete high-level variational model. This allows the use of Spin, with all its accumulated optimizations, for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate the practicality of this method on several case studies.
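    The core idea of an over-approximating variability abstraction can be sketched on a toy variational transition system: replacing every feature guard by true yields a single system whose behaviours include those of every variant, so a universal safety property verified on it holds for all variants. The model and the "join" abstraction below are minimal illustrations, not the paper's Promela/Spin encoding.

```python
from itertools import product

# toy variational transition system: (src, dst, guard over a configuration)
TRANSITIONS = [
    (0, 1, lambda cfg: cfg["A"]),
    (0, 2, lambda cfg: not cfg["A"]),
    (1, 2, lambda cfg: cfg["B"]),
]

def reachable(transitions, start=0):
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for src, dst, _ in transitions:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def project(cfg):
    """Brute-force route: the single system for one configuration."""
    return [(s, d, g) for s, d, g in TRANSITIONS if g(cfg)]

def join_abstraction(transitions):
    """Over-approximation: every guard becomes true, variability disappears."""
    return [(s, d, lambda cfg: True) for s, d, _ in transitions]

# safety property: "state 2 is never reached"; if it held in the joined
# abstract system, it would hold in every variant (here it does not)
abstract_bad = 2 in reachable(join_abstraction(TRANSITIONS))
per_variant = {(a, b): 2 in reachable(project({"A": a, "B": b}))
               for a, b in product([True, False], repeat=2)}
print(abstract_bad, per_variant)
```

    Because the abstraction only adds behaviours, a counterexample in the abstract system may be spurious for some variants (here the variant A=true, B=false never reaches state 2), which is the usual precision cost of over-approximation.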

    From Transition Systems to Variability Models and from Lifted Model Checking Back to UPPAAL

    Variational systems (system families) allow effective building of many custom system variants for various configurations. Lifted (family-based) verification is capable of verifying all variants of the family simultaneously, in a single run, by exploiting the similarities between the variants. These algorithms scale much better than the simple enumerative “brute-force” approach. Still, the design of family-based verification algorithms depends greatly on the existence of compact variability models (state representations). Moreover, developing the corresponding family-based tools for each particular analysis is often tedious and labor intensive.
    In this work, we make two contributions. First, we survey the history of development of variability models of computation that compactly represent the behavior of variational systems. Second, we introduce variability abstractions that simplify variability away, in order to achieve efficient lifted (family-based) model checking for real-time variability models. This reduces the cost of maintaining specialized family-based real-time model checkers: real-time variability models can be model checked using the standard UPPAAL. We have implemented the abstractions as syntactic source-to-source transformations on UPPAAL input files, and we illustrate the practicality of this method on a real-time case study.
    Both authors are supported by The Danish Council for Independent Research under a Sapere Aude project, VARIETE.

    Detecting chiral pairing and topological superfluidity using circular dichroism

    Realising and probing topological superfluids is a key goal for fundamental science, with exciting technological promises. Here, we show that chiral p_x + ip_y pairing in a two-dimensional topological superfluid can be detected through circular dichroism, namely, as a difference in the excitation rates induced by a clockwise and a counter-clockwise circular drive. For weak pairing, this difference is to a very good approximation determined by the Chern number of the superfluid, whereas there is a non-topological contribution scaling as the square of the superfluid gap that becomes significant for stronger pairing. This gives rise to a competition between the experimentally driven goal of maximising the critical temperature of the superfluid, and observing a signal governed by the underlying topology. Using a combination of strong-coupling Eliashberg and Berezinskii-Kosterlitz-Thouless theory, we analyse this tension for an atomic Bose-Fermi gas, which represents a promising platform for realising a chiral superfluid. We identify a wide range of system parameters where both the critical temperature is high and the topological contribution to the dichroic signal is dominant.
    Comment: 6 pages, 3 figures
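    The Chern number that the dichroic signal tracks in the weak-pairing regime can be computed numerically for a lattice p_x + ip_y Bogoliubov-de Gennes model using the standard Fukui-Hatsugai link-variable method. The tight-binding parameters below are illustrative placeholders, not the paper's Bose-Fermi system.

```python
import numpy as np

def h_bdg(kx, ky, t=1.0, mu=-2.0, delta=0.5):
    """2x2 BdG Hamiltonian of a lattice p_x + i p_y superfluid
    (illustrative tight-binding parameters)."""
    xi = -2 * t * (np.cos(kx) + np.cos(ky)) - mu
    d = delta * (np.sin(kx) - 1j * np.sin(ky))   # chiral pairing
    return np.array([[xi, d], [np.conj(d), -xi]])

def chern_number(nk=40, **params):
    """Chern number of the lower band via the Fukui-Hatsugai
    link-variable method on an nk x nk Brillouin-zone grid."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_bdg(kx, ky, **params))
            u[i, j] = v[:, 0]                    # lower-band eigenvector
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    flux = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            flux += np.angle(link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                             * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
    return int(round(flux / (2 * np.pi)))

print(chern_number())          # topological phase: |C| = 1 for these parameters
print(chern_number(mu=-5.0))   # chemical potential below the band: C = 0
```

    The link-variable construction guarantees an integer result on a closed Brillouin zone even on a coarse grid, which makes it a convenient sanity check when scanning the gap-versus-topology trade-off discussed above.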

    Developing a digital intervention for cancer survivors: an evidence-, theory- and person-based approach

    This paper illustrates a rigorous approach to developing digital interventions using an evidence-, theory- and person-based approach. Intervention planning included a rapid scoping review, which identified cancer survivors’ needs, including barriers and facilitators to intervention success. Review evidence (N=49 papers) informed the intervention’s Guiding Principles, theory-based behavioural analysis and logic model. The intervention was optimised based on feedback on a prototype, gathered through interviews (N=96) with cancer survivors and focus groups with NHS staff and cancer charity workers (N=31). Interviews with cancer survivors highlighted barriers to engagement, such as concerns about physical activity worsening fatigue. Focus groups highlighted concerns about the length of support appointments and how to support distressed participants. This feedback informed intervention modifications to maximise acceptability, feasibility and likelihood of behaviour change. Our systematic method for understanding user views enabled us to anticipate and address important barriers to engagement. This methodology may be useful to others developing digital interventions.