
    Constraints on Light Dark Matter From Core-Collapse Supernovae

    We show that light ($\simeq 1$--$30$ MeV) dark matter particles can play a significant role in core-collapse supernovae, if they have relatively large annihilation and scattering cross sections, as compared to neutrinos. We find that if such particles are lighter than $\simeq 10$ MeV and reproduce the observed dark matter relic density, supernovae would cool on a much longer time scale and would emit neutrinos with significantly smaller energies than in the standard scenario, in disagreement with observations. This constraint may be avoided, however, in certain situations for which the neutrino--dark matter scattering cross sections remain comparatively small. Comment: 4 pages, 1 figure
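
    For orientation, the relic-density condition invoked above ties the annihilation cross section to the observed abundance through the standard thermal freeze-out estimate; this is a textbook relation quoted for context, not a result of the paper:
    \[
    \Omega_\chi h^2 \;\simeq\; \frac{3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle \sigma_A v \rangle},
    \]
    so matching $\Omega_\chi h^2 \approx 0.1$ requires $\langle \sigma_A v \rangle \approx 3\times 10^{-26}\,\mathrm{cm^3\,s^{-1}}$, the benchmark against which "relatively large" cross sections can be read.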

    Improving information/disturbance and estimation/distortion trade-offs with non universal protocols

    We analyze in detail a conditional measurement scheme based on linear optical components, a feed-forward loop, and homodyne detection. The scheme may be used to achieve two different tasks. On the one hand, it allows the extraction of information with minimum disturbance about a set of coherent states. On the other hand, it represents a nondemolition measurement scheme for the annihilation operator, i.e., an indirect measurement of the Q-function. We investigate the information/disturbance trade-off for state inference and introduce the estimation/distortion trade-off to assess estimation of the Q-function. For coherent states chosen from a Gaussian set we evaluate both information/disturbance and estimation/distortion trade-offs and find that non-universal protocols may be optimized in order to achieve better performance than universal ones. For Fock number states we prove that universal protocols do not exist and evaluate the estimation/distortion trade-off for a thermal distribution. Comment: 10 pages, 6 figures; published version
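
    For reference, the Husimi Q-function that the indirect measurement above targets is the coherent-state expectation of the density operator (the standard definition, not specific to this paper):
    \[
    Q(\alpha) = \frac{1}{\pi}\,\langle \alpha | \rho | \alpha \rangle , \qquad \int Q(\alpha)\, d^2\alpha = 1 ,
    \]
    which is exactly the outcome distribution of an ideal double-homodyne (heterodyne) measurement of the annihilation operator.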

    Phase estimation for thermal Gaussian states

    We give the optimal bounds on the phase-estimation precision for mixed Gaussian states in the single-copy and many-copy regimes. Specifically, we focus on displaced thermal and squeezed thermal states. We find that while for displaced thermal states an increase in temperature reduces the estimation fidelity, for squeezed thermal states a larger temperature can enhance the estimation fidelity. The many-copy optimal bounds are compared with the minimum variance achieved by three important single-shot measurement strategies. We show that the single-copy canonical phase measurement does not always attain the optimal bounds in the many-copy scenario. Adaptive homodyning schemes do attain the bounds for displaced thermal states, but for squeezed states they yield fidelities that are insensitive to temperature variations and are, therefore, sub-optimal. Finally, we find that heterodyne measurements perform very poorly for pure states but can attain the optimal bounds for sufficiently mixed states. We apply our results to investigate the influence of losses in an optical metrology experiment. In the presence of losses, squeezed states cease to provide Heisenberg-limited precision and their performance is close to that of coherent states with the same mean photon number. Comment: typos corrected
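
    The many-copy bounds discussed above are instances of the quantum Cramér–Rao inequality (standard estimation theory, quoted here for context):
    \[
    \operatorname{Var}(\hat\varphi) \;\ge\; \frac{1}{M\, F_Q[\rho_\varphi]} ,
    \]
    where $M$ is the number of copies and $F_Q$ the quantum Fisher information; for probes of mean photon number $N$ this interpolates between the shot-noise scaling $\Delta\varphi \sim 1/\sqrt{N}$ and the Heisenberg scaling $\Delta\varphi \sim 1/N$ referenced in the loss discussion.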

    Balancing efficiencies by squeezing in realistic eight-port homodyne detection

    We address measurements of covariant phase observables (CPOs) by means of realistic eight-port homodyne detectors. We do not assume equal quantum efficiencies for the four photodetectors and investigate the conditions under which the measurement of a CPO may be achieved. We show that balancing the efficiencies using an additional beam splitter allows one to achieve a CPO at the price of reducing the overall effective efficiency, and we prove that it is never a smearing of the ideal CPO achievable with unit quantum efficiency. An alternative strategy, based on employing a squeezed vacuum as a parameter field, is also suggested; it allows one to increase the overall efficiency in comparison to the passive case using only a moderate amount of squeezing. Both methods are suitable for implementation with current technology. Comment: 8 pages, 5 figures, revised version
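
    To make the balancing step concrete: a detector of efficiency $\eta$ preceded by a beam splitter of transmissivity $\tau$ behaves as a detector of efficiency $\tau\eta$ (a standard loss-composition fact, not taken from this paper). Two mismatched detectors with $\eta_1 > \eta_2$ are thus balanced by choosing
    \[
    \tau = \frac{\eta_2}{\eta_1} \quad\Longrightarrow\quad \eta_{\rm eff} = \tau\,\eta_1 = \eta_2 ,
    \]
    which exhibits the trade-off noted above: balance is bought by lowering the effective efficiency of the better detector down to that of the worse one.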

    Laplace deconvolution on the basis of time domain data and its application to Dynamic Contrast Enhanced imaging

    In the present paper we consider the problem of Laplace deconvolution with noisy, discrete, non-equally spaced observations on a finite time interval. We propose a new method for Laplace deconvolution which is based on expansions of the convolution kernel, the unknown function, and the observed signal over the Laguerre function basis (which acts as a surrogate eigenfunction basis of the Laplace convolution operator) in a regression setting. The expansion results in a small system of linear equations whose matrix is triangular and Toeplitz. Due to this triangular structure, there is a common number $m$ of terms in the function expansions to control, which is realized via a complexity penalty. The advantage of this methodology is that it leads to very fast computations, produces no boundary effects due to extension at zero and cut-off at $T$, and provides an estimator with risk within a logarithmic factor of the oracle risk. We emphasize that, in the present paper, we consider the true observational model with possibly non-equispaced observations available on a finite interval of length $T$, which appears in many different contexts, and we account for the bias associated with this model (which is not present when $T \rightarrow \infty$). The study is motivated by perfusion imaging using a short injection of contrast agent, a procedure applied for medical assessment of micro-circulation within tissues such as cancerous tumors. The presence of a tuning parameter $a$ allows one to choose the most advantageous time units, so that both the kernel and the unknown right-hand side of the equation are well represented for the deconvolution. The methodology is illustrated by an extensive simulation study and a real data example, which confirm that the proposed technique is fast, efficient, accurate, usable from a practical point of view, and very competitive. Comment: 36 pages, 9 figures. arXiv admin note: substantial text overlap with arXiv:1207.223
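
    A minimal numerical sketch of the pipeline described above: regress the data on a Laguerre function basis, project the convolution operator onto the same basis (lower triangular and Toeplitz by the Laguerre convolution identity; here the projection is done by plain quadrature rather than the paper's closed form), and solve the small triangular system. The example kernel, signal, noise level, and all names are illustrative assumptions, not the authors' code, and the complexity-penalized choice of $m$ is replaced by a fixed value.

```python
# Sketch of Laguerre-basis Laplace deconvolution (illustrative only):
# observe y_i = (g * f)(t_i) + noise on [0, T], recover f.
import numpy as np
from numpy.polynomial.laguerre import lagval
from scipy.linalg import solve_triangular

def phi(k, t, a=1.0):
    """Laguerre function phi_k(t) = sqrt(2a) exp(-a t) L_k(2 a t),
    orthonormal on [0, infinity)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2 * a) * np.exp(-a * t) * lagval(2 * a * t, c)

rng = np.random.default_rng(0)
T, n, m, a = 10.0, 400, 8, 1.0                  # m fixed here; the paper
t_obs = np.sort(rng.uniform(0.0, T, n))         # selects it by penalization
g = lambda s: np.exp(-s)                        # example convolution kernel
f_true = lambda s: s * np.exp(-0.5 * s)         # "unknown" function (demo)

# Synthetic observations of the Laplace convolution q = g * f.
s = np.linspace(0.0, T, 2000)
ds = s[1] - s[0]
q = lambda ti: np.sum(g(ti - s[s <= ti]) * f_true(s[s <= ti])) * ds
y = np.array([q(ti) for ti in t_obs]) + 0.01 * rng.standard_normal(n)

# Step 1: regression of y on the first m basis functions -> coeffs of q.
Phi = np.column_stack([phi(k, t_obs, a) for k in range(m)])
q_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Step 2: project the convolution operator onto the basis; triangularity
# is enforced explicitly to suppress quadrature error.
B = np.empty((m, m))
for k in range(m):
    g_phi_k = np.array([np.sum(g(si - s[s <= si]) * phi(k, s[s <= si], a))
                        for si in s]) * ds      # (g * phi_k) on the grid
    for j in range(m):
        B[j, k] = np.sum(g_phi_k * phi(j, s, a)) * ds
B = np.tril(B)

# Step 3: solve the small triangular system for the coefficients of f.
f_coef = solve_triangular(B, q_hat, lower=True)
f_rec = Phi @ f_coef                            # estimate of f at t_obs
```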

    On the influence of the cosmological constant on gravitational lensing in small systems

    The cosmological constant $\Lambda$ affects gravitational lensing phenomena. The contribution of $\Lambda$ to the observable angular positions of multiple images and to their amplification and time delay is computed here through a study, in the weak-deflection limit, of the equations of motion in the Schwarzschild-de Sitter metric. Due to $\Lambda$, the unresolved images are slightly demagnified, the radius of the Einstein ring decreases, and the time delay increases. The effect is, however, negligible for nearby lenses. In the case of a null cosmological constant, we provide some updated results on lensing by a Schwarzschild black hole. Comment: 8 pages, 1 figure; v2: extended discussion on the lens equation, references added, results unchanged, in press on PR
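
    For scale, the $\Lambda = 0$ benchmark that these corrections perturb is the standard weak-deflection Einstein ring of a Schwarzschild lens (a textbook expression, quoted here for context):
    \[
    \theta_E = \sqrt{\frac{4 G M}{c^2}\,\frac{D_{LS}}{D_L D_S}} ,
    \]
    with $D_L$, $D_S$, and $D_{LS}$ the observer-lens, observer-source, and lens-source distances; the statement above is that a positive $\Lambda$ slightly shrinks this radius.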

    Constrained MaxLik reconstruction of multimode photon distributions

    We address the reconstruction of the full photon distribution of multimode fields generated by seeded parametric down-conversion (PDC). Our scheme is based on on/off avalanche photodetection assisted by maximum-likelihood (MaxLik) estimation and does not involve photon counting. We present a novel constrained MaxLik method that incorporates the constraint of finite energy to improve the rate of convergence and, in turn, the overall accuracy of the reconstruction.
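
    A minimal single-mode sketch of the unconstrained baseline that the scheme above builds on: expectation-maximization reconstruction of $\rho_n$ from on/off frequencies recorded at several quantum efficiencies. The iteration below is the standard one for this detection model; the paper's finite-energy constraint is not implemented here (only a crude truncation at n_max), and all names and parameters are illustrative assumptions.

```python
# EM-style MaxLik reconstruction of a photon distribution from on/off data
# at several efficiencies (single-mode illustration; the multimode case of
# the paper raises the dimension of the problem).
import numpy as np

def em_onoff(f_on, etas, n_max=30, iters=2000):
    """Reconstruct rho_n from 'on' frequencies f_on at efficiencies etas."""
    n = np.arange(n_max + 1)
    B = (1.0 - etas[:, None]) ** n[None, :]    # P(no click | n, eta)
    A = 1.0 - B                                # P(click    | n, eta)
    rho = np.full(n_max + 1, 1.0 / (n_max + 1))
    for _ in range(iters):
        p_on, p_off = A @ rho, B @ rho
        w = (f_on / p_on) @ A + ((1 - f_on) / p_off) @ B
        rho *= w / len(etas)
        rho /= rho.sum()                       # guard against roundoff
    return rho

# Demo: thermal state with mean photon number 2, noiseless frequencies.
nbar, n_max = 2.0, 30
n = np.arange(n_max + 1)
rho_true = nbar**n / (1 + nbar) ** (n + 1)
etas = np.linspace(0.05, 0.95, 12)
f_on = 1.0 - ((1 - etas[:, None]) ** n[None, :]) @ rho_true

rho_hat = em_onoff(f_on, etas)
print(np.round(rho_hat[:6], 4), "vs", np.round(rho_true[:6], 4))
```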

    Remote state preparation and teleportation in phase space

    Continuous-variable remote state preparation and teleportation are analyzed using Wigner functions in phase space. We suggest a remote squeezed-state preparation scheme between two parties sharing an entangled twin beam, where homodyne detection on one beam is used as a conditional source of squeezing for the other beam. The scheme also works with noisy measurements, and it provides squeezing if the homodyne quantum efficiency is larger than 50%. The phase-space approach is shown to provide a convenient framework to describe teleportation as a generalized conditional measurement, and to evaluate relevant degrading effects, such as the finite amount of entanglement, the losses along the line, and the nonunit quantum efficiency at the sender location. Comment: 2 figures, revised version to appear in J.Opt.
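
    The 50% threshold quoted above can be checked with a few lines of Gaussian phase-space algebra: condition one arm of a twin beam on an inefficient homodyne of the other arm and read off the remaining quadrature variance. The conventions (vacuum variance = 1), the loss model, and all names below are illustrative assumptions, not the paper's notation.

```python
# Conditional squeezing from homodyning one arm of a twin beam: the
# conditioned variance of the other arm dips below the vacuum value 1
# only when the homodyne efficiency eta exceeds 1/2.
import numpy as np

def conditional_variance(r, eta):
    """x-quadrature variance of mode A after inefficient x-homodyne on B."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    A = c * I2                              # reduced covariance of mode A
    B = eta * c * I2 + (1 - eta) * I2       # mode B after a loss channel eta
    C = np.sqrt(eta) * s * Z                # correlation block after loss
    # Homodyne of x on B: Schur complement over the measured quadrature.
    A_cond = A - C @ np.diag([1.0 / B[0, 0], 0.0]) @ C.T
    return A_cond[0, 0]

for eta in (0.4, 0.5, 0.6, 0.9):
    print(eta, round(conditional_variance(r=1.5, eta=eta), 3))
```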

    How to measure the wave-function absolute squared of a moving particle by using mirrors

    We consider a slow particle with wave function $\psi_t(\vec{x})$, moving freely in some direction. A mirror is briefly switched on around a time $T$ and its position is scanned. It is shown that the measured reflection probability then allows the determination of $|\psi_T(\vec{x})|^2$. Experimentally available atomic mirrors should make this method applicable to the center-of-mass wave function of atoms with velocities in the cm/s range. Comment: 4 pages, 5 figures
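
    A minimal sketch of the quantity the scanned mirror maps out: $|\psi_T(x)|^2$ for a freely moving one-dimensional Gaussian packet, computed with a plain FFT free-evolution propagator. Units and parameters are arbitrary illustrative choices, not taken from the paper.

```python
# Free evolution of a moving Gaussian packet; |psi_T(x)|^2 is the profile
# that scanning the briefly switched-on mirror would reconstruct.
import numpy as np

hbar, m = 1.0, 1.0                       # natural units for the demo
x = np.linspace(-40.0, 40.0, 2**12)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

sigma, k0, T = 2.0, 1.0, 10.0            # width, mean momentum, switch time
psi0 = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Free propagator exp(-i hbar k^2 T / 2m), applied in momentum space.
psiT = np.fft.ifft(np.exp(-1j * hbar * k**2 * T / (2 * m)) * np.fft.fft(psi0))
prob = np.abs(psiT)**2
print("norm:", np.sum(prob) * dx, "peak near x =", x[np.argmax(prob)])
```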