
    Information Entropy in Cosmology

    The effective evolution of an inhomogeneous cosmological model may be described in terms of spatially averaged variables. We point out that in this context a measure arises quite naturally which is identical to a fluid model of the `Kullback-Leibler Relative Information Entropy', expressing the distinguishability of the local inhomogeneous mass density field from its spatial average on arbitrary compact domains. We discuss the time-evolution of `effective information' and explore some implications. We conjecture that the information content of the Universe -- measured by the Relative Information Entropy of a cosmological model containing dust matter -- is increasing. Comment: LaTeX, PRL style, 4 pages; to appear in PR
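As a concrete discretized illustration of this measure, the relative entropy S{ρ‖⟨ρ⟩} = ∫ ρ ln(ρ/⟨ρ⟩) dV of a sampled density field with respect to its spatial average can be computed directly. This is a minimal sketch assuming equal-volume cells; the function name is not from the paper.

```python
import numpy as np

def relative_information_entropy(density, cell_volume=1.0):
    """KL-type relative entropy of a density field with respect to its
    spatial average over a compact domain (equal-volume cells assumed)."""
    rho = np.asarray(density, dtype=float)
    rho_avg = rho.mean()  # spatial average <rho> on the domain
    # S = integral of rho * ln(rho / <rho>) dV; non-negative by Gibbs' inequality
    return float(cell_volume * np.sum(rho * np.log(rho / rho_avg)))
```

A homogeneous field gives zero; any inhomogeneity makes the measure positive, matching the reading of S as the distinguishability of the local field from its average.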

    How `hot' are mixed quantum states?

    Given a mixed quantum state ρ\rho of a qudit, we consider any observable MM as a kind of `thermometer' in the following sense. Given a source which emits pure states with these or those distributions, we select such distributions that the appropriate average value of the observable MM is equal to the average TrMρM\rho of MM in the stare ρ\rho. Among those distributions we find the most typical one, namely, having the highest differential entropy. We call this distribution conditional Gibbs ensemble as it turns out to be a Gibbs distribution characterized by a temperature-like parameter β\beta. The expressions establishing the liaisons between the density operator ρ\rho and its temperature parameter β\beta are provided. Within this approach, the uniform mixed state has the highest `temperature', which tends to zero as the state in question approaches to a pure state.Comment: Contribution to Quantum 2006: III workshop ad memoriam of Carlo Novero: Advances in Foundations of Quantum Mechanics and Quantum Information with atoms and photons. 2-5 May 2006 - Turin, Ital
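A classical sketch of the temperature-like parameter: for an observable with eigenvalues m_i, the maximum-entropy (Gibbs) distribution matching a target mean Tr(Mρ) is p_i ∝ exp(-β m_i), and β can be found by bisection on the monotone map β ↦ ⟨M⟩_β. The function names and the bisection bracket below are illustrative assumptions, not the paper's formalism.

```python
import math

def gibbs_weights(eigenvalues, beta):
    """Gibbs distribution over eigenstates of the observable M."""
    ws = [math.exp(-beta * m) for m in eigenvalues]
    z = sum(ws)
    return [w / z for w in ws]

def solve_beta(eigenvalues, target_mean, lo=-50.0, hi=50.0):
    """Find beta such that the Gibbs average of M equals target_mean.

    Uses bisection: the map beta -> <M>_beta is monotonically decreasing.
    """
    def avg(beta):
        p = gibbs_weights(eigenvalues, beta)
        return sum(pi * m for pi, m in zip(p, eigenvalues))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if avg(mid) > target_mean:
            lo = mid  # average too high -> need larger beta
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a qubit observable with eigenvalues {0, 1}, a target mean of 1/2 (the uniform mixed state) gives β ≈ 0, illustrating the abstract's point that the uniform state has the highest `temperature'.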

    Comparing compact binary parameter distributions I: Methods

    Being able to measure each merger's sky location, distance, component masses, and conceivably spins, ground-based gravitational-wave detectors will provide an extensive and detailed sample of coalescing compact binaries (CCBs) in the local and, with third-generation detectors, distant universe. These measurements will distinguish between competing progenitor formation models. In this paper we develop practical tools to characterize the amount of experimentally accessible information available to distinguish between two a priori progenitor models. Using a simple time-independent model, we demonstrate that the information content scales strongly with the number of observations. The exact scaling depends on how significantly mass distributions change between similar models. We develop phenomenological diagnostics to estimate how many models can be distinguished, using first-generation and future instruments. Finally, we emphasize that multi-observable distributions can be fully exploited only with very precisely calibrated detectors, search pipelines, parameter estimation, and Bayesian model inference.
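The scaling of distinguishability with the number of observations can be sketched classically: for events drawn from a mass distribution p, the expected log Bayes factor favoring p over a rival model q grows linearly with N at rate D_KL(p‖q). The discrete binning and function names here are illustrative assumptions, not the paper's diagnostics.

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) for two discrete (e.g. binned chirp-mass) distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def expected_log_bayes_factor(p, q, n_observations):
    """Expected evidence for model p over model q after n independent events.

    Grows linearly in n: similar models (small KL divergence) need many
    more observations to be told apart.
    """
    return n_observations * kl_divergence(p, q)
```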

    Universality of optimal measurements

    We present optimal and minimal measurements on identical copies of an unknown state of a qubit when the quality of measuring strategies is quantified with the gain of information (Kullback divergence of probability distributions). We also show that, among isotropic priors, the maximal gain of information occurs when the state is known to be pure. Universality of optimal measurements follows from our results: using the fidelity or the gain of information, two different figures of merit, leads to exactly the same conclusions. We finally investigate the optimal capacity of N copies of an unknown state as a quantum channel of information. Comment: RevTeX, 5 pages, no figures

    Quantifying the complexity of random Boolean networks

    We study two measures of the complexity of heterogeneous extended systems, taking random Boolean networks as prototypical cases. A measure defined by Shalizi et al. for cellular automata, based on a criterion for optimal statistical prediction [Shalizi et al., Phys. Rev. Lett. 93, 118701 (2004)], does not distinguish between the spatial inhomogeneity of the ordered phase and the dynamical inhomogeneity of the disordered phase. A modification in which complexities of individual nodes are calculated yields vanishing complexity values for networks in the ordered and critical regimes and for highly disordered networks, peaking somewhere in the disordered regime. Individual nodes with high complexity are the ones that pass the most information from the past to the future, a quantity that depends in a nontrivial way on both the Boolean function of a given node and its location within the network. Comment: 8 pages, 4 figures
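The per-node quantity described, how much information a node's past carries about its future, is a mutual information. A minimal plug-in estimator from observed (past, future) state pairs might look like the following; the estimator choice and names are assumptions for the sketch.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(past; future) in bits from (past, future) samples."""
    n = len(pairs)
    joint = Counter(pairs)                 # empirical joint distribution
    px = Counter(x for x, _ in pairs)      # marginal over past states
    py = Counter(y for _, y in pairs)      # marginal over future states
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy * log2( p_xy / (p_x * p_y) )
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi
```

A node whose future perfectly copies its past yields 1 bit; a node whose future is independent of its past yields 0, matching the contrast between ordered (frozen, predictable) and highly disordered (unpredictable) regimes.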

    Pairwise Confusion for Fine-Grained Visual Classification

    Fine-Grained Visual Classification (FGVC) datasets contain small sample sizes, along with significant intra-class variation and inter-class similarity. While prior work has addressed intra-class variation using localization and segmentation techniques, inter-class similarity may also affect feature learning and reduce classification performance. In this work, we address this problem using a novel optimization procedure for the end-to-end neural network training on FGVC tasks. Our procedure, called Pairwise Confusion (PC), reduces overfitting by intentionally introducing confusion in the activations. With PC regularization, we obtain state-of-the-art performance on six of the most widely-used FGVC datasets and demonstrate improved localization ability. PC is easy to implement, does not need excessive hyperparameter tuning during training, and does not add significant overhead at test time. Comment: Camera-Ready version for ECCV 201
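The confusion term can be sketched as a Euclidean penalty that pulls the softmax outputs of a pair of training samples toward each other. This is a NumPy sketch of the idea only; the paper's exact loss weighting and pairing scheme may differ.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pairwise_confusion_loss(logits_a, logits_b):
    """Mean squared Euclidean distance between the probability vectors of
    a pair of samples; added to the cross-entropy loss, it intentionally
    'confuses' activations and discourages overconfident predictions."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return float(np.mean(np.sum((pa - pb) ** 2, axis=1)))
```

Identical activations incur zero penalty; the further apart two samples' predicted distributions are, the stronger the pull toward each other.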

    WIMP astronomy and particle physics with liquid-noble and cryogenic direct-detection experiments

    Once weakly-interacting massive particles (WIMPs) are unambiguously detected in direct-detection experiments, the challenge will be to determine what one may infer from the data. Here, I examine the prospects for reconstructing the local speed distribution of WIMPs in addition to WIMP particle-physics properties (mass, cross sections) from next-generation cryogenic and liquid-noble direct-detection experiments. I find that the common method of fixing the form of the velocity distribution when estimating constraints on WIMP mass and cross sections means losing out on the information on the speed distribution contained in the data and may lead to biases in the inferred values of the particle-physics parameters. I show that using a more general, empirical form of the speed distribution can lead to good constraints on the speed distribution. Moreover, one can use Bayesian model-selection criteria to determine if a theoretically-inspired functional form for the speed distribution (such as a Maxwell-Boltzmann distribution) fits better than an empirical model. The shape of the degeneracy between WIMP mass and cross sections and their offset from the true values of those parameters depends on the hypothesis for the speed distribution, which has significant implications for consistency checks between direct-detection and collider data. In addition, I find that the uncertainties on theoretical parameters depend sensitively on the upper end of the energy range used for WIMP searches. Better constraints on the WIMP particle-physics parameters and speed distribution are obtained if the WIMP search is extended to higher energy (~ 1 MeV). Comment: 25 pages, 27 figures, matches published version
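The model-selection step, asking whether a theoretically-inspired form such as Maxwell-Boltzmann fits the data better than a flexible empirical model, can be sketched with an information criterion that trades fit quality against parameter count. BIC is used here as a large-sample stand-in for the full Bayesian evidence the abstract refers to; the function names are assumptions.

```python
import math

def bic(log_likelihood, n_params, n_events):
    """Bayesian information criterion: lower is better.
    Approximates -2 ln(evidence) in the large-sample limit."""
    return n_params * math.log(n_events) - 2.0 * log_likelihood

def prefer_theoretical(ll_theory, k_theory, ll_empirical, k_empirical, n_events):
    """True if the theoretically-inspired speed model (e.g. Maxwell-Boltzmann,
    few parameters) is favored over the more flexible empirical model."""
    return bic(ll_theory, k_theory, n_events) < bic(ll_empirical, k_empirical, n_events)
```

A slightly worse fit with far fewer parameters can still win, which is exactly the trade-off the model-selection criterion encodes.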

    Quantum estimation via minimum Kullback entropy principle

    We address quantum estimation in situations where one has at one's disposal data from the measurement of an incomplete set of observables and some a priori information on the state itself. By expressing the a priori information in terms of a bias toward a given state, the problem may be faced by minimizing the quantum relative entropy (Kullback entropy) with the constraint of reproducing the data. We exploit the resulting minimum Kullback entropy principle for the estimation of a quantum state from the measurement of a single observable, either from the sole mean value or from the complete probability distribution, and apply it as a tool for the estimation of weak Hamiltonian processes. Qubit and harmonic oscillator systems are analyzed in some detail. Comment: 7 pages, slightly revised version, no figures
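A classical analogue of the minimum Kullback entropy principle: among all distributions reproducing a measured mean, the one closest in relative entropy to a prior bias q is the exponentially tilted family p_i ∝ q_i exp(λ x_i), with λ fixed by the data. The bisection bracket and function names below are illustrative assumptions; the paper works with the quantum relative entropy and density operators.

```python
import math

def min_kullback_estimate(prior, values, mean, lo=-50.0, hi=50.0):
    """Minimize KL(p || prior) subject to sum_i p_i * values_i == mean.

    The solution is the exponentially tilted prior; the tilt lambda is
    found by bisection on the monotonically increasing map lambda -> mean.
    """
    def tilted(lam):
        w = [q * math.exp(lam * x) for q, x in zip(prior, values)]
        z = sum(w)
        return [wi / z for wi in w]
    def avg(lam):
        p = tilted(lam)
        return sum(pi * x for pi, x in zip(p, values))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if avg(mid) < mean:
            lo = mid  # average too low -> need larger tilt
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))
```

When the measured mean already matches the prior's mean, the estimate reduces to the prior itself, i.e. the a priori bias is kept unless the data force a departure.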

    Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

    We present a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems. We first propose complementary measures to quantify bias with respect to protected attributes such as gender and age. We then present algorithms for computing fairness-aware re-ranking of results. For a given search or recommendation task, our algorithms seek to achieve a desired distribution of top ranked results with respect to one or more protected attributes. We show that such a framework can be tailored to achieve fairness criteria such as equality of opportunity and demographic parity depending on the choice of the desired distribution. We evaluate the proposed algorithms via extensive simulations over different parameter choices, and study the effect of fairness-aware ranking on both bias and utility measures. We finally present the online A/B testing results from applying our framework to representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice. Our approach resulted in a tremendous improvement in the fairness metrics (nearly threefold increase in the number of search queries with representative results) without affecting the business metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users worldwide. Ours is the first large-scale deployed framework for ensuring fairness in the hiring domain, with potential positive impact for more than 630M LinkedIn members. Comment: This paper has been accepted for publication at ACM KDD 201
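The re-ranking idea, enforcing that every top-k prefix respects a desired attribute distribution, can be sketched greedily. This is an illustrative simplification: the names, tie-breaking, and floor-based feasibility test are assumptions, not the paper's exact algorithms.

```python
from math import floor

def fair_rerank(ranked, targets, k):
    """Greedy fairness-aware re-ranking (sketch of the idea).

    ranked:  list of (score, attribute) in descending score order
    targets: dict mapping attribute value -> desired fraction of top results
    Returns the top-k list in which, at every prefix length, each attribute
    meets its floor(target * prefix) guarantee when candidates remain.
    """
    queues = {}  # per-attribute queues, each already sorted by score
    for item in ranked:
        queues.setdefault(item[1], []).append(item)
    out, counts = [], {a: 0 for a in targets}
    for pos in range(1, k + 1):
        # attributes currently below their guaranteed count at this prefix
        needy = [a for a in targets
                 if queues.get(a) and counts.get(a, 0) < floor(targets[a] * pos)]
        pool = needy or [a for a in queues if queues[a]]
        if not pool:
            break
        pick = max(pool, key=lambda a: queues[a][0][0])  # best available score
        out.append(queues[pick].pop(0))
        counts[pick] = counts.get(pick, 0) + 1
    return out
```

With a 50/50 target over two attribute values, a result list dominated by one group at the top is interleaved so that every prefix stays close to the desired distribution, while otherwise preserving score order.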

    Holevo's bound from a general quantum fluctuation theorem

    We give a novel derivation of Holevo's bound using an important result from nonequilibrium statistical physics, the fluctuation theorem. To do so we develop a general formalism of quantum fluctuation theorems for two-time measurements, which explicitly accounts for the back action of quantum measurements as well as possibly non-unitary time evolution. For a specific choice of observables this fluctuation theorem yields a measurement-dependent correction to the Holevo bound, leading to a tighter inequality. We conclude by analyzing equality conditions for the improved bound. Comment: 5 pages
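The Holevo quantity itself, χ = S(ρ̄) − Σᵢ pᵢ S(ρᵢ), which upper-bounds the accessible information about which state was sent, can be computed directly for small ensembles. A NumPy sketch; the function names are assumptions.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 * log 0 = 0 convention
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(probs, states):
    """Holevo quantity chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    avg_state = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg_state) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states))
```

For two orthogonal pure qubit states sent with equal probability, χ = 1 bit (the states are perfectly distinguishable); for an ensemble of identical states, χ = 0 (no information can be extracted).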