    On Protected Realizations of Quantum Information

    There are two complementary approaches to realizing quantum information so that it is protected from a given set of error operators. Both involve encoding information by means of subsystems. One is initialization-based error protection, which involves a quantum operation that is applied before error events occur. The other is operator quantum error correction, which uses a recovery operation applied after the errors. Together, the two approaches make it clear how quantum information can be stored at all stages of a process involving alternating error and quantum operations. In particular, there is always a subsystem that faithfully represents the desired quantum information. We give a definition of faithful realization of quantum information and show that it always involves subsystems. This justifies the "subsystems principle" for realizing quantum information. In the presence of errors, one can make use of noiseless, (initialization) protectable, or error-correcting subsystems. We give an explicit algorithm for finding optimal noiseless subsystems. Finding optimal protectable or error-correcting subsystems is in general difficult. Verifying that a subsystem is error-correcting involves only linear algebra. We discuss the verification problem for protectable subsystems and reduce it to a simpler version of the problem of finding error-detecting codes. Comment: 17 pages
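
    A compact way to state the subsystems principle sketched in this abstract, in standard operator-QEC notation (an illustration only; the paper's own definitions may differ):

        % The Hilbert space splits into a protected factor A, a gauge factor B,
        % and a remainder K; encoded information is carried by the A factor alone.
        \[
          \mathcal{H} \;\cong\; (\mathcal{H}_A \otimes \mathcal{H}_B) \,\oplus\, \mathcal{K}
        \]
        % Roughly, A is a noiseless subsystem for error operators {E_k} if,
        % with P the projector onto H_A (x) H_B, every error acts trivially on A:
        \[
          P\, E_k\, P \;=\; I_A \otimes g_k \qquad \text{for all } k.
        \]
        % Error-correcting subsystems weaken this: the errors may disturb A,
        % provided a recovery operation applied afterwards can undo the effect.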

    Zeno effect for quantum computation and control

    It is well known that the quantum Zeno effect can protect specific quantum states from decoherence by using projective measurements. Here we combine the theory of weak measurements with stabilizer quantum error correction and detection codes. We derive rigorous performance bounds which demonstrate that the Zeno effect can be used to protect appropriately encoded arbitrary states to arbitrary accuracy, while at the same time allowing for universal quantum computation or quantum control. Comment: Significant modifications, including a new author. To appear in PR
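
    A minimal numerical sketch of the underlying projective Zeno effect (the textbook version with projective measurements, not the weak-measurement stabilizer scheme of this abstract), showing how frequent measurement freezes a driven qubit:

        # Toy projective Zeno effect: a qubit driven from |0> toward |1> by a
        # Rabi rotation is measured n times in the computational basis.
        # Illustrative sketch only.
        import numpy as np

        def survival_probability(total_time, n_measurements, rabi_rate=1.0):
            """P(qubit is still found in |0>) after n equally spaced projective
            measurements, with free Rabi rotation in between."""
            dt = total_time / n_measurements
            p_stay = np.cos(rabi_rate * dt) ** 2   # per-interval survival probability
            return p_stay ** n_measurements

        for n in (1, 5, 20, 100):
            print(f"n = {n:3d}  P(survive) = {survival_probability(np.pi / 2, n):.4f}")
        # The probability tends to 1 as n grows: frequent measurement suppresses
        # the evolution out of |0>.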

    Scaling the neutral atom Rydberg gate quantum computer by collective encoding in Holmium atoms

    We discuss a method for scaling a neutral atom Rydberg gate quantum processor to a large number of qubits. Limits are derived showing that the number of qubits that can be directly connected by entangling gates with errors at the $10^{-3}$ level using long range Rydberg interactions between sites in an optical lattice, without mechanical motion or swap chains, is about 500 in two dimensions and 7500 in three dimensions. A scaling factor of 60 at a smaller number of sites can be obtained using collective register encoding in the hyperfine ground states of the rare earth atom Holmium. We present a detailed analysis of operation of the 60 qubit register in Holmium. Combining a lattice of multi-qubit ensembles with collective encoding results in a feasible design for a 1000 qubit fully connected quantum processor. Comment: 6 figures

    Loss tolerant linear optical quantum memory by measurement-based quantum computing

    We give a scheme for building, in a loss-tolerant manner, a linear optical quantum memory which is itself tolerant to qubit loss. We use the encoding recently introduced in Varnava et al 2006 Phys. Rev. Lett. 97 120501, and give a method for achieving it efficiently. The entire approach resides within the 'one-way' model for quantum computing (Raussendorf and Briegel 2001 Phys. Rev. Lett. 86 5188–91; Raussendorf et al 2003 Phys. Rev. A 68 022312). Our results suggest that it is possible to build a loss-tolerant quantum memory in which data can be kept stored for arbitrarily long times using only polynomially increasing resources and logarithmically increasing individual photon lifetimes.

    Scalability of quantum computation with addressable optical lattices

    We make a detailed analysis of error mechanisms, gate fidelity, and scalability of proposals for quantum computation with neutral atoms in addressable (large lattice constant) optical lattices. We have identified possible limits to the size of quantum computations, arising in 3D optical lattices from current limitations on the ability to perform single qubit gates in parallel and in 2D lattices from constraints on laser power. Our results suggest that 3D arrays as large as 100 x 100 x 100 sites (i.e., $\sim 10^6$ qubits) may be achievable, provided two-qubit gates can be performed with sufficiently high precision and degree of parallelizability. Parallelizability of long range interaction-based two-qubit gates is qualitatively compared to that of collisional gates. Different methods of performing single qubit gates are compared, and a lower bound of $1 \times 10^{-5}$ is determined on the error rate for the error mechanisms affecting $^{133}$Cs in a blue-detuned lattice with Raman transition-based single qubit gates, given reasonable limits on experimental parameters. Comment: 17 pages, 5 figures. Accepted for publication in Physical Review

    Polynomial-time algorithm for simulation of weakly interacting quantum spin systems

    We describe an algorithm that computes the ground state energy and correlation functions for 2-local Hamiltonians in which interactions between qubits are weak compared to single-qubit terms. The running time of the algorithm is polynomial in the number of qubits and the required precision. Specifically, we consider Hamiltonians of the form $H = H_0 + \epsilon V$, where $H_0$ describes non-interacting qubits, $V$ is a perturbation that involves arbitrary two-qubit interactions on a graph of bounded degree, and $\epsilon$ is a small parameter. The algorithm works if $|\epsilon|$ is below a certain threshold value that depends only upon the spectral gap of $H_0$, the maximal degree of the graph, and the maximal norm of the two-qubit interactions. The main technical ingredient of the algorithm is a generalized Kirkwood-Thomas ansatz for the ground state. The parameters of the ansatz are computed using perturbative expansions in powers of $\epsilon$. Our algorithm is closely related to the coupled cluster method used in quantum chemistry. Comment: 27 pages
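
    A toy illustration of the weak-interaction regime $H = H_0 + \epsilon V$, using ordinary second-order perturbation theory on a small chain (for orientation only; this is not the Kirkwood-Thomas ansatz algorithm described above):

        # Weakly interacting qubits, H = H0 + eps*V: compare exact diagonalization
        # with textbook second-order perturbation theory. Sketch only; NOT the
        # polynomial-time algorithm of the abstract.
        import numpy as np

        I2 = np.eye(2)
        X = np.array([[0.0, 1.0], [1.0, 0.0]])
        Z = np.array([[1.0, 0.0], [0.0, -1.0]])

        def embed(op, site, n):
            """Single-site operator `op` acting on qubit `site` of an n-qubit chain."""
            out = np.array([[1.0]])
            for i in range(n):
                out = np.kron(out, op if i == site else I2)
            return out

        n, eps = 6, 0.05
        H0 = sum(-embed(Z, i, n) for i in range(n))                          # strong single-qubit terms
        V = sum(embed(X, i, n) @ embed(X, i + 1, n) for i in range(n - 1))   # weak nearest-neighbour coupling
        exact = np.linalg.eigvalsh(H0 + eps * V)[0]

        # Ground state of H0 is |0...0> at energy -n; V connects it only to states
        # with two adjacent flipped spins at energy -n + 4, so the second-order
        # shift is -(n - 1) * eps**2 / 4.
        perturbative = -n - (n - 1) * eps**2 / 4.0
        print(f"exact ground energy       : {exact:.6f}")
        print(f"second-order perturbation : {perturbative:.6f}")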

    The Stability of Quantum Concatenated Code Hamiltonians

    Protecting quantum information from the detrimental effects of decoherence and lack of precise quantum control is a central challenge that must be overcome if a large robust quantum computer is to be constructed. The traditional approach to achieving this is via active quantum error correction using fault-tolerant techniques. An alternative to this approach is to engineer strongly interacting many-body quantum systems that enact the quantum error correction via the natural dynamics of these systems. Here we present a method for achieving this based on the concept of concatenated quantum error correcting codes. We define a class of Hamiltonians whose ground states are concatenated quantum codes and whose energy landscape naturally causes quantum error correction. We analyze these Hamiltonians for robustness and suggest methods for implementing these highly unnatural Hamiltonians. Comment: 18 pages, small corrections and clarification

    Optimal, reliable estimation of quantum states

    Accurately inferring the state of a quantum device from the results of measurements is a crucial task in building quantum information processing hardware. The predominant state estimation procedure, maximum likelihood estimation (MLE), generally reports an estimate with zero eigenvalues. These cannot be justified. Furthermore, the MLE estimate is incompatible with error bars, so conclusions drawn from it are suspect. I propose an alternative procedure, Bayesian mean estimation (BME). BME never yields zero eigenvalues, its eigenvalues provide a bound on their own uncertainties, and it is the most accurate procedure possible. I show how to implement BME numerically, and how to obtain natural error bars that are compatible with the estimate. Finally, I briefly discuss the differences between Bayesian and frequentist estimation techniques. Comment: RevTeX; 14 pages, 2 embedded figures. Comments enthusiastically welcomed
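
    A one-parameter caricature of the MLE-versus-BME contrast described above, estimating a single excited-state probability from measurement counts (a sketch, not the paper's full density-matrix procedure):

        # MLE vs Bayesian mean estimation for a binomial parameter p, e.g. the
        # excited-state population of a qubit measured n times. MLE can report a
        # hard zero; the posterior mean under a uniform prior never does.
        def mle(k, n):
            """Maximum-likelihood estimate of p from k 'excited' outcomes in n shots."""
            return k / n

        def bme(k, n):
            """Posterior mean of p under a uniform Beta(1,1) prior (rule of succession)."""
            return (k + 1) / (n + 2)

        k, n = 0, 100               # no excited outcomes observed
        print("MLE:", mle(k, n))    # 0.0     -- asserts p = 0 exactly
        print("BME:", bme(k, n))    # ~0.0098 -- small but nonzero, with finite uncertainty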

    Holonomic quantum computing in symmetry-protected ground states of spin chains

    While solid-state devices offer naturally reliable hardware for modern classical computers, thus far quantum information processors resemble vacuum tube computers in being neither reliable nor scalable. Strongly correlated many-body states stabilized in topologically ordered matter offer the possibility of naturally fault tolerant computing, but are both challenging to engineer and coherently control and cannot be easily adapted to different physical platforms. We propose an architecture which achieves some of the robustness properties of topological models but with a drastically simpler construction. Quantum information is stored in the symmetry-protected degenerate ground states of spin-1 chains, while quantum gates are performed by adiabatic non-Abelian holonomies using only single-site fields and nearest-neighbor couplings. Gate operations respect the symmetry, and so inherit some protection from noise and disorder from the symmetry-protected ground states. Comment: 19 pages, 4 figures. v2: published version

    Topological fault-tolerance in cluster state quantum computation

    We describe a fault-tolerant version of the one-way quantum computer using a cluster state in three spatial dimensions. Topologically protected quantum gates are realized by choosing appropriate boundary conditions on the cluster. We provide equivalence transformations for these boundary conditions that can be used to simplify fault-tolerant circuits and to derive circuit identities in a topological manner. The spatial dimensionality of the scheme can be reduced to two by converting one spatial axis of the cluster into time. The error threshold is 0.75% for each source in an error model with preparation, gate, storage and measurement errors. The operational overhead is poly-logarithmic in the circuit size. Comment: 20 pages, 12 figures