Squashing Models for Optical Measurements in Quantum Communication
Measurements with photodetectors must be described in the
infinite dimensional Fock space of one or several modes. For some measurements
a model has been postulated which describes the full mode measurement as a
composition of a mapping (squashing) of the signal into a small dimensional
Hilbert space followed by a specified target measurement. We present a
formalism to investigate whether a given measurement pair of mode and target
measurements can be connected by a squashing model. We show that the
measurements used in the BB84 protocol do admit a squashing description,
whereas the measurements of the six-state protocol do not. As a result,
security proofs for the BB84 protocol can be based on the assumption that the
eavesdropper forwards at most one photon, while the same does not hold for the
six-state protocol.
On single-photon quantum key distribution in the presence of loss
We investigate two-way and one-way single-photon quantum key distribution
(QKD) protocols in the presence of loss introduced by the quantum channel. Our
analysis is based on a simple precondition for secure QKD in each case. In
particular, the legitimate users need to prove that there exists no separable
state (in the case of two-way QKD), or that there exists no quantum state
having a symmetric extension (one-way QKD), that is compatible with the
available measurement results. We show that both criteria can be formulated as
a convex optimisation problem known as a semidefinite program, which can be
efficiently solved. Moreover, we prove that the solution to the dual
optimisation corresponds to the evaluation of an optimal witness operator that
belongs to the minimal verification set of witnesses for the given two-way (or
one-way) QKD protocol. A positive expectation value of this optimal witness
operator indicates that no secret key can be distilled from the available
measurement results. We apply this analysis to several well-known
single-photon QKD protocols under losses.
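The witness-evaluation step described above can be illustrated with a fixed (non-optimal) two-qubit entanglement witness. This is only a sketch of the general idea: the optimal witness in the papers arises as the dual solution of a semidefinite program, which requires an SDP solver; here we simply evaluate the standard witness W = I/2 - |Φ+⟩⟨Φ+| in pure Python.

```python
# Illustrative sketch (not the papers' construction): evaluating a fixed
# entanglement witness W = I/2 - |Phi+><Phi+| on two-qubit states.
# A negative expectation value Tr(W rho) certifies entanglement; a
# non-negative value is consistent with separability.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Projector onto |Phi+> = (|00> + |11>)/sqrt(2); real entries suffice here.
phi_plus = [[0.5, 0, 0, 0.5],
            [0,   0, 0, 0  ],
            [0,   0, 0, 0  ],
            [0.5, 0, 0, 0.5]]

# Witness W = I/2 - |Phi+><Phi+|
W = [[0.5 * (i == j) - phi_plus[i][j] for j in range(4)] for i in range(4)]

rho_bell = phi_plus                                     # maximally entangled
rho_mixed = [[0.25 * (i == j) for j in range(4)] for i in range(4)]  # separable

print(trace(matmul(W, rho_bell)))    # negative: entanglement detected
print(trace(matmul(W, rho_mixed)))   # positive: no detection
```

For the Bell state the expectation value is -1/2, while for the maximally mixed (separable) state it is +1/4, matching the separability criterion Tr(Wρ) ≥ 0.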
Upper bounds for the secure key rate of decoy state quantum key distribution
The use of decoy states in quantum key distribution (QKD) has provided a
method for substantially increasing the secret key rate and distance that can
be covered by QKD protocols with practical signals. The security analysis of
these schemes, however, leaves open the possibility that the development of
better proof techniques, or better classical post-processing methods, might
further improve their performance in realistic scenarios. In this paper, we
derive upper bounds on the secure key rate for decoy state QKD. These bounds
rely essentially only on the classical correlations established by the
legitimate users during the quantum communication phase of the protocol. The
only assumption about the possible post-processing methods is that double click
events are randomly assigned to single click events. Furthermore, we consider only
secure key rates based on the uncalibrated device scenario, which assigns
imperfections such as detection inefficiency to the eavesdropper. Our analysis
relies on two preconditions for secure two-way and one-way QKD: The legitimate
users need to prove that there exists no separable state (in the case of
two-way QKD), or that there exists no quantum state having a symmetric
extension (one-way QKD), that is compatible with the available measurement
results. Both criteria have been previously applied to evaluate single-photon
implementations of QKD. Here we use them to investigate a realistic source of
weak coherent pulses. The resulting upper bounds can be formulated as a convex
optimization problem known as a semidefinite program which can be efficiently
solved. For the standard four-state QKD protocol, they are quite close to known
lower bounds, thus showing that there are clear limits to the further
improvement of classical post-processing techniques in decoy state QKD.
One-way quantum key distribution: Simple upper bound on the secret key rate
We present a simple method to obtain an upper bound on the achievable secret
key rate in quantum key distribution (QKD) protocols that use only
unidirectional classical communication during the public-discussion phase. This
method is based on a necessary precondition for one-way secret key
distillation; the legitimate users need to prove that there exists no quantum
state having a symmetric extension that is compatible with the available
measurement results. The main advantage of the obtained upper bound is that it
can be formulated as a semidefinite program, which can be efficiently solved.
We illustrate our results by analysing two well-known qubit-based QKD
protocols: the four-state protocol and the six-state protocol. Recent results
by Renner et al., Phys. Rev. A 72, 012332 (2005), also show that the given
precondition is only necessary but not sufficient for unidirectional secret key
distillation.
Security of distributed-phase-reference quantum key distribution
Distributed-phase-reference quantum key distribution stands out for its easy
implementation with present-day technology. For many years, a full security
proof of these schemes in a realistic setting has remained elusive. For the first
time, we solve this long-standing problem and present a generic method to prove
the security of such protocols against general attacks. To illustrate our
result we provide lower bounds on the key generation rate of a variant of the
coherent-one-way quantum key distribution protocol. In contrast to standard
predictions, it appears to scale quadratically with the system transmittance.
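The difference between the two scalings above is easy to make concrete. The sketch below uses placeholder constants (not values from the paper) to compare a key rate linear in the channel transmittance η with one quadratic in η, as reported here for the coherent-one-way variant.

```python
# Back-of-the-envelope comparison; the unit prefactors are placeholders,
# not rates from the paper. For a fibre with the given loss in dB,
# eta = 10**(-loss_db / 10).

def transmittance(loss_db):
    return 10 ** (-loss_db / 10)

def linear_rate(eta):
    return eta          # ~ standard prediction for many QKD protocols

def quadratic_rate(eta):
    return eta ** 2     # ~ scaling reported for this COW variant

for loss_db in (10, 20, 30):
    eta = transmittance(loss_db)
    print(loss_db, linear_rate(eta), quadratic_rate(eta))
```

At 30 dB of loss (η = 10⁻³) the quadratic rate is a factor of 10³ below the linear one, which is why the scaling exponent matters so much for long-distance operation.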
Passive decoy state quantum key distribution with practical light sources
Decoy states have been proven to be a very useful method for significantly
enhancing the performance of quantum key distribution systems with practical
light sources. While active modulation of the intensity of the laser pulses is
an effective way of preparing decoy states in principle, in practice passive
preparation might be desirable in some scenarios. Typical passive schemes
involve parametric down-conversion. More recently, it has been shown that phase
randomized weak coherent pulses (WCP) can also be used for the same purpose [M.
Curty {\it et al.}, Opt. Lett. {\bf 34}, 3238 (2009)]. This proposal requires
only linear optics together with a simple threshold photon detector, which
shows the practical feasibility of the method. Most importantly, the resulting
secret key rate is comparable to the one delivered by an active decoy state
setup with an infinite number of decoy settings. In this paper we extend these
results, now showing specifically the analysis for other practical scenarios
with different light sources and photo-detectors. In particular, we consider
sources emitting thermal states, phase randomized WCP, and strong coherent
light in combination with several types of photo-detectors, like, for instance,
threshold photon detectors, photon number resolving detectors, and classical
photo-detectors. Our analysis includes as well the effect that detection
inefficiencies and noise in the form of dark counts shown by current threshold
detectors might have on the final secret key rate. Moreover, we provide
estimations on the effects that statistical fluctuations due to a finite data
size can have in practical implementations.
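The photon statistics underlying the sources above are standard. A phase-randomized weak coherent pulse of mean photon number μ is Poissonian in photon number, and a threshold detector with efficiency η and dark-count probability p_dark clicks with the usual probability 1 − (1 − p_dark)e^(−ημ). The sketch below implements these textbook formulas; the parameter values are illustrative only.

```python
import math

def poisson(n, mu):
    """Photon-number distribution of a phase-randomized WCP with mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def click_prob(mu, eta=1.0, p_dark=0.0):
    """Click probability of a threshold detector with efficiency eta and
    dark-count probability p_dark (standard independent-loss model)."""
    return 1.0 - (1.0 - p_dark) * math.exp(-eta * mu)

# Illustrative parameters (not from the paper):
mu = 0.5
print(poisson(0, mu))                       # vacuum component
print(poisson(1, mu))                       # single-photon component
print(click_prob(mu, eta=0.1, p_dark=1e-6)) # overall click probability
```

Note that with μ = 0 the click probability reduces to p_dark, which is how dark counts enter the final key-rate analysis.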
Upper bound on the secret key rate distillable from effective quantum correlations with imperfect detectors
We provide a simple method to obtain an upper bound on the secret key rate
that is particularly suited to analyze practical realizations of quantum key
distribution protocols with imperfect devices. We consider the so-called
trusted device scenario where Eve cannot modify the actual detection devices
employed by Alice and Bob. The upper bound obtained is based on the available
measurements results, but it includes the effect of the noise and losses
present in the detectors of the legitimate users.
Biomechanical analysis of the effect of congruence, depth and radius on the stability ratio of a simplistic ‘ball-and-socket’ joint model
Objectives The bony shoulder stability ratio (BSSR) allows for quantification
of the bony stabilisers in vivo. We aimed to biomechanically validate the
BSSR, determine whether joint incongruence affects the stability ratio (SR) of
a shoulder model, and determine the correct parameters (glenoid concavity
versus humeral head radius) for calculation of the BSSR in vivo. Methods Four
polyethylene balls (radii: 19.1 mm to 38.1 mm) were used to mould four fitting
sockets of four different depths (3.2 mm to 19.1 mm). The SR was measured in
biomechanical congruent and incongruent experimental series. The experimental
SR of a congruent system was compared with the calculated SR based on the BSSR
approach. Differences in SR between congruent and incongruent experimental
conditions were quantified. Finally, the experimental SR was compared with the
calculated SR based on either the socket concavity radius or the plastic ball radius.
Results The experimental SR is comparable with the calculated SR (mean
difference 10%, sd 8%; relative values). The experimental incongruence study
observed almost no differences (2%, sd 2%). The calculated SR on the basis of
the socket concavity radius is superior in predicting the experimental SR
(mean difference 10%, sd 9%) compared with the calculated SR based on the
plastic ball radius (mean difference 42%, sd 55%). Conclusion The present
biomechanical investigation confirmed the validity of the BSSR. Incongruence
has no significant effect on the SR of a shoulder model. In the event of an
incongruent system, the calculation of the BSSR on the basis of the glenoid
concavity radius is recommended.
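A purely geometric sketch of a ball-and-socket stability ratio can make the role of concavity depth and radius concrete. The model below is an assumption for illustration, not the authors' formula: for a spherical socket of concavity radius r and depth d, the wall angle at the rim satisfies cos θ = (r − d)/r, and a frictionless stability ratio can be taken as tan θ, i.e. the lateral force needed to lever the ball out per unit compressive load.

```python
import math

# Assumed geometric model (illustrative only, not the paper's BSSR formula):
# rim wall angle theta from cos(theta) = (r - d) / r, stability ratio tan(theta).

def stability_ratio(depth_mm, radius_mm):
    if not 0 < depth_mm <= radius_mm:
        raise ValueError("depth must lie in (0, radius]")
    theta = math.acos((radius_mm - depth_mm) / radius_mm)
    return math.tan(theta)

# Dimensions taken from the experiment: concavity radius 19.1 mm, depth 3.2 mm
print(stability_ratio(3.2, 19.1))
```

In this toy model the ratio grows with depth for a fixed radius, consistent with the intuition that a deeper socket resists dislocation more strongly.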
Rank-based model selection for multiple ions quantum tomography
The statistical analysis of measurement data has become a key component of
many quantum engineering experiments. As standard full state tomography becomes
unfeasible for large dimensional quantum systems, one needs to exploit prior
information and the "sparsity" properties of the experimental state in order to
reduce the dimensionality of the estimation problem. In this paper we propose
model selection as a general principle for finding the simplest, or most
parsimonious explanation of the data, by fitting different models and choosing
the estimator with the best trade-off between likelihood fit and model
complexity. We apply two well established model selection methods -- the Akaike
information criterion (AIC) and the Bayesian information criterion (BIC) -- to
models consisting of states of fixed rank and datasets such as are currently
produced in multiple ions experiments. We test the performance of AIC and BIC
on randomly chosen low rank states of 4 ions, and study the dependence of the
selected rank on the number of measurement repetitions for one-ion states. We
then apply the methods to real data from a 4 ions experiment aimed at creating
a Smolin state of rank 4. The two methods indicate that the optimal model for
describing the data lies between ranks 6 and 9, and the Pearson test
is applied to validate this conclusion. Additionally we find that the mean
square error of the maximum likelihood estimator for pure states is close to
that of the optimal over all possible measurements.
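The two criteria used above are standard and easy to state: AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L, where k is the number of free model parameters, n the number of observations, and ln L the maximised log-likelihood; the model minimising the criterion is selected. The sketch below applies them to made-up log-likelihoods for two fixed-rank models, using the standard parameter count 2dr − r² − 1 for a rank-r density matrix of dimension d.

```python
import math

# Generic AIC/BIC comparison. The log-likelihood values below are invented
# for illustration; in the paper they come from fitting fixed-rank state
# models to ion-trap measurement data.

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return k * math.log(n) - 2 * log_likelihood

d = 16                                # Hilbert-space dimension for 4 qubits
n = 10000                             # number of measurement repetitions
models = {1: -5050.0, 4: -5010.0}     # rank -> maximised ln L (made up)

for rank, ll in models.items():
    k = 2 * d * rank - rank * rank - 1  # free parameters of a rank-r state
    print(rank, aic(ll, k), bic(ll, k, n))
```

BIC penalises parameters more heavily than AIC for n > e², so it tends to select lower ranks on the same data, which matches the qualitative behaviour reported for the two criteria.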
On asymptotic continuity of functions of quantum states
A useful kind of continuity of functions of quantum states in the asymptotic
regime is so-called asymptotic continuity. In this paper we provide general
tools for checking whether a function possesses this property. First we prove
the equivalence of asymptotic continuity with so-called robustness under
admixture. This allows us to show that the relative entropy distance from a
convex set containing the maximally mixed state is asymptotically continuous.
Subsequently, we consider arrowing, a way of building a new function out of a
given one. The procedure originates
from constructions of intrinsic information and entanglement of formation. We
show that arrowing preserves asymptotic continuity for a class of functions
(so-called subextensive ones). The result is illustrated by means of several
examples.
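A concrete instance of the functions discussed above is the relative entropy distance to the maximally mixed state. For commuting (diagonal) states S(ρ‖σ) reduces to the classical Kullback-Leibler divergence, and the sketch below, with illustrative eigenvalues, shows how a small admixture ρ_ε = (1 − ε)ρ + ετ perturbs the value only slightly, the behaviour that asymptotic continuity quantifies.

```python
import math

# Illustrative sketch: relative entropy between commuting (diagonal) states,
# where S(rho || sigma) is the classical KL divergence of the eigenvalue
# distributions. Admixing a small amount of another state changes the value
# by an amount controlled by the admixture parameter eps.

def relative_entropy(p, q):
    """KL divergence (in bits) between diagonal states with eigenvalues p, q."""
    return sum(pi * (math.log2(pi) - math.log2(qi))
               for pi, qi in zip(p, q) if pi > 0)

rho   = [0.9, 0.1]          # illustrative qubit spectrum
tau   = [0.5, 0.5]          # admixed state
sigma = [0.5, 0.5]          # maximally mixed reference state

for eps in (0.0, 0.01, 0.1):
    mixed = [(1 - eps) * r + eps * t for r, t in zip(rho, tau)]
    print(eps, relative_entropy(mixed, sigma))
```

For σ = I/2 the value equals 1 − H(ρ) in bits, so the ε = 0 line reproduces 1 minus the binary entropy of 0.9, about 0.531.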