1,686 research outputs found

    Optimal quantum control of Bose-Einstein condensates in magnetic microtraps

    Get PDF
    Transport of Bose-Einstein condensates in magnetic microtraps, controllable by external parameters such as wire currents or radio-frequency fields, is studied within the framework of optimal control theory (OCT). We derive from the Gross-Pitaevskii equation the optimality system for the OCT fields that allow the condensate to be channelled efficiently between given initial and desired states. For a variety of magnetic confinement potentials we study transport and wavefunction splitting of the condensate, and demonstrate that OCT drastically outperforms simpler schemes for the time variation of the microtrap control parameters. Comment: 11 pages, 7 figures
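    The forward problem underlying such an OCT iteration is repeated propagation of the Gross-Pitaevskii equation under the time-dependent control. Below is a minimal split-step Fourier sketch in one dimension, with a hypothetical harmonic microtrap whose centre x0(t) acts as the control parameter; it illustrates the kind of solver involved, not the paper's actual potentials or optimization loop.

```python
import numpy as np

# Minimal split-step Fourier propagation of the 1D Gross-Pitaevskii equation
#   i dpsi/dt = [ -(1/2) d^2/dx^2 + V(x, t) + g |psi|^2 ] psi     (hbar = m = 1),
# with a hypothetical harmonic microtrap whose centre x0(t) is the control parameter.

def gpe_step(psi, x, k, dt, g, x0):
    V = 0.5 * (x - x0) ** 2                                            # trap centred at x0 (control)
    psi = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi        # half step: potential + nonlinearity
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))   # full kinetic step in k-space
    psi = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi        # second half step
    return psi

x = np.linspace(-10.0, 10.0, 512)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
psi = np.exp(-x ** 2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)                          # normalize the wavefunction

dt, T = 1e-3, 5.0
for t in np.arange(0.0, T, dt):
    psi = gpe_step(psi, x, k, dt, g=1.0, x0=t / T)                     # slowly ramp the trap centre
```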

    Learning-aided Stochastic Network Optimization with Imperfect State Prediction

    Full text link
    We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historic and predicted network state information for decision making. PLC is an online algorithm that requires zero a priori system statistical information, and consists of three key components, namely sequential distribution estimation and change detection, dual learning, and online queue-based control. Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. In particular, for stationary networks, PLC achieves a near-optimal $[O(\epsilon), O(\log^2(1/\epsilon))]$ utility-delay tradeoff. For non-stationary networks, PLC obtains an $[O(\epsilon), O(\log^2(1/\epsilon) + \min(\epsilon^{c/2-1}, e_w/\epsilon))]$ utility-backlog tradeoff for distributions that last $\Theta(\frac{\max(\epsilon^{-c}, e_w^{-2})}{\epsilon^{1+a}})$ time, where $e_w$ is the prediction accuracy and $a = \Theta(1) > 0$ is a constant (the Backpressure algorithm \cite{neelynowbook} requires an $O(\epsilon^{-2})$ length for the same utility performance with a larger backlog). Moreover, PLC detects distribution changes $O(w)$ slots faster with high probability ($w$ is the prediction size) and achieves an $O(\min(\epsilon^{-1+c/2}, e_w/\epsilon) + \log^2(1/\epsilon))$ convergence time. Our results demonstrate that state prediction (even imperfect) can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs.
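    The online queue-based control component mentioned above is in the spirit of standard Lyapunov drift-plus-penalty control. A generic single-queue sketch (hypothetical action set and arrival rate; not the PLC algorithm itself, whose learning and prediction components are omitted):

```python
import numpy as np

# Generic drift-plus-penalty (queue-based) control for a single queue.
# Each slot: random arrivals a(t); choose a service rate mu from a finite set at cost c(mu)
# by minimizing V * c(mu) - Q(t) * mu; then update Q(t+1) = max(Q(t) + a(t) - mu, 0).

rng = np.random.default_rng(0)
actions = [(0.0, 0.0), (1.0, 1.0), (2.0, 3.0)]        # (service rate, cost) pairs (hypothetical)
V, Q, total_cost = 20.0, 0.0, 0.0

for t in range(10_000):
    a = rng.poisson(0.8)                               # random arrivals this slot
    mu, c = min(actions, key=lambda mc: V * mc[1] - Q * mc[0])
    Q = max(Q + a - mu, 0.0)
    total_cost += c

print("average cost:", total_cost / 10_000, "  final queue length:", Q)
```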

    Ancilla-assisted sequential approximation of nonlocal unitary operations

    Get PDF
    We consider the recently proposed "no-go" theorem of Lamata et al. [Phys. Rev. Lett. 101, 180506 (2008)] on the impossibility of sequential implementation of global unitary operations with the aid of an itinerant ancillary system, and view the claim within the language of the Kraus representation. By virtue of an extremely useful tool for analyzing the entanglement properties of quantum operations, namely the operator-Schmidt decomposition, we provide an alternative proof of the "no-go" theorem and also study the role of initial correlations between the qubits and ancilla in the sequential preparation of unitary entanglers. Despite the negative response from the "no-go" theorem, we demonstrate explicitly how the matrix-product operator (MPO) formalism provides a flexible structure for developing protocols for sequential implementation of such entanglers with optimal fidelity. The proposed numerical technique, which we call the variational matrix-product operator (VMPO) method, offers a computationally efficient tool for characterizing the "globalness" and entangling capabilities of nonlocal unitary operations. Comment: Slightly improved version as published in Phys. Rev.
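    The operator-Schmidt decomposition of a two-qubit unitary can be obtained by a simple realignment-plus-SVD construction. A minimal sketch of that standard procedure (not code from the paper), shown on the CNOT gate:

```python
import numpy as np

# Operator-Schmidt decomposition of a two-qubit unitary U = sum_k s_k A_k (x) B_k:
# reshuffle the 4x4 matrix so that an SVD separates the A and B factors.

def operator_schmidt_coeffs(U):
    # Index U as U[(i,j),(k,l)] with i,k acting on qubit A and j,l on qubit B,
    # then realign to R[(i,k),(j,l)]; the singular values of R are the Schmidt coefficients.
    R = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    return np.linalg.svd(R, compute_uv=False)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(operator_schmidt_coeffs(CNOT))   # two equal coefficients (sqrt(2) each): Schmidt rank 2
```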

    Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decision Theory

    Get PDF
    Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for an unknown distribution. We unify both theories and give strong arguments that the resulting universal AIXI model behaves optimally in any computable environment. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI^tl, which is still superior to any other time t and space l bounded agent. The computation time of AIXI^tl is of the order t x 2^l. Comment: 8 two-column pages, LaTeX2e, 1 figure, submitted to IJCAI

    Distinguishing between optical coherent states with imperfect detection

    Full text link
    Several proposed techniques for distinguishing between optical coherent states are analyzed under a physically realistic model of photodetection. Quantum error probabilities are derived for the Kennedy receiver, the Dolinar receiver, and the unitary rotation scheme proposed by Sasaki and Hirota for sub-unity detector efficiency. Monte Carlo simulations are performed to assess the effects of detector dark counts, dead time, signal-processing bandwidth, and phase noise in the communication channel. The feedback strategy employed by the Dolinar receiver is found to achieve the Helstrom bound for sub-unity detection efficiency and to provide robustness to these other detector imperfections, making it more attractive for laboratory implementation than previously believed.
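    For ideal detection, the two baselines being compared have closed forms: for the binary coherent states $\{|\alpha\rangle, |-\alpha\rangle\}$ the Helstrom bound is $P_H = \tfrac{1}{2}(1 - \sqrt{1 - e^{-4|\alpha|^2}})$ and the Kennedy receiver reaches $P_K = \tfrac{1}{2} e^{-4|\alpha|^2}$. A short sketch evaluating these textbook expressions (ideal detectors only; the paper's imperfection model is not reproduced):

```python
import numpy as np

# Ideal-detection error probabilities for distinguishing |alpha> from |-alpha>
# (textbook baselines; dark counts, dead time, etc. from the paper's Monte Carlo
# model are not included here):
#   Helstrom bound:   P_H = (1 - sqrt(1 - exp(-4 |alpha|^2))) / 2
#   Kennedy receiver: P_K = exp(-4 |alpha|^2) / 2

for n_bar in (0.1, 0.5, 1.0, 2.0):                     # mean photon number |alpha|^2
    p_h = 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * n_bar)))
    p_k = 0.5 * np.exp(-4.0 * n_bar)
    print(f"|alpha|^2 = {n_bar:4.1f}   Helstrom = {p_h:.3e}   Kennedy = {p_k:.3e}")
```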

    Time Optimal Unitary Operations

    Get PDF
    Extending our previous work on time-optimal quantum state evolution, we formulate a variational principle for the time-optimal unitary operation, which has direct relevance to quantum computation. We demonstrate our method with three examples, i.e. the swap of qubits, the quantum Fourier transform, and the entangler gate, by choosing a two-qubit anisotropic Heisenberg model. Comment: 4 pages, 1 figure. References added

    Functional Maps Representation on Product Manifolds

    Get PDF
    We consider the tasks of representing, analyzing and manipulating maps between shapes. We model maps as densities over the product manifold of the input shapes; these densities can be treated as scalar functions and are therefore manipulable using the language of signal processing on manifolds. Being a manifold itself, the product space endows the set of maps with a geometry of its own, which we exploit to define map operations in the spectral domain; we also derive relationships with other existing representations (soft maps and functional maps). To apply these ideas in practice, we discretize product manifolds and their Laplace-Beltrami operators, and we introduce localized spectral analysis of the product manifold as a novel tool for map processing. Our framework applies to maps defined between and across 2D and 3D shapes without requiring special adjustment, and it can be implemented efficiently with simple operations on sparse matrices. Comment: Accepted to Computer Graphics Forum
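    One convenient property behind spectral constructions on product spaces: the Laplace-Beltrami eigenfunctions of a product manifold are products of the factors' eigenfunctions, and the eigenvalues add. A minimal sketch assembling a truncated product-space spectrum from precomputed per-shape eigenpairs (names such as evals_M, evecs_M are hypothetical inputs; how the per-shape Laplacians are discretized is left out):

```python
import numpy as np

# On a product manifold M x N, Laplace-Beltrami eigenpairs are (lam_i + mu_j, phi_i * psi_j).
# Build the first k product-space eigenpairs from per-shape eigenpairs.

def product_spectrum(evals_M, evecs_M, evals_N, evecs_N, k):
    pairs = [(lm + ln, i, j) for i, lm in enumerate(evals_M)
                             for j, ln in enumerate(evals_N)]
    pairs.sort()                                           # sort by product-space eigenvalue
    evals, evecs = [], []
    for lam, i, j in pairs[:k]:
        evals.append(lam)
        # eigenfunction on M x N: outer product phi_i(x) * psi_j(y), flattened to a vector
        evecs.append(np.outer(evecs_M[:, i], evecs_N[:, j]).ravel())
    return np.array(evals), np.stack(evecs, axis=1)
```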

    Generalizing movements with information-theoretic stochastic optimal control

    Get PDF
    Stochastic optimal control is typically used to plan a movement for a specific situation. Although most stochastic optimal control methods fail to generalize this movement plan to a new situation without replanning, a stochastic optimal control method is presented that allows reuse of the obtained policy in a new situation, as the policy is more robust to slight deviations from the initial movement plan. To improve the robustness of the policy, we employ information-theoretic policy updates that explicitly operate on trajectory distributions instead of single trajectories. To ensure a stable and smooth policy update, the "distance" between the trajectory distributions of the old and the new control policies is limited. The introduced bound offers a closed-form solution for the resulting policy and extends results from recent developments in stochastic optimal control. In contrast to many standard stochastic optimal control algorithms, the current approach can directly infer the system dynamics from data points, and hence can also be used for model-based reinforcement learning. This paper is an extension of the paper by Lioutikov et al. ("Sample-Based Information-Theoretic Stochastic Optimal Control," Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, Piscataway, NJ, 2014, pp. 3896-3902). In addition to revisiting the content, an extensive theoretical comparison of the approach with related work is presented, additional aspects of the implementation are discussed, and further evaluations are introduced.
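    When the KL divergence between the old and new trajectory distributions is bounded, the new distribution takes the familiar exponential-reweighting closed form p_new(tau) proportional to p_old(tau) exp(R(tau)/eta), with the temperature eta obtained from a convex dual. A generic sample-based sketch of this kind of update (illustrative only; the paper's full model-based formulation and policy parameterization are not shown):

```python
import numpy as np
from scipy.optimize import minimize

# KL-bounded trajectory reweighting: given returns R_i sampled under the old policy,
# the new distribution is p_new ∝ p_old * exp(R / eta), where eta minimizes the dual
#   g(eta) = eta * epsilon + eta * log( mean_i exp(R_i / eta) ).

def kl_bounded_weights(returns, epsilon):
    R = returns - returns.max()                           # shift for numerical stability
    dual = lambda eta: eta * epsilon + eta * np.log(np.mean(np.exp(R / eta)))
    eta = minimize(lambda x: dual(x[0]), x0=[1.0], bounds=[(1e-6, None)]).x[0]
    w = np.exp(R / eta)
    return w / w.sum(), eta

rng = np.random.default_rng(0)
returns = rng.normal(size=200)                            # hypothetical sampled returns
weights, eta = kl_bounded_weights(returns, epsilon=0.5)   # weights for a weighted policy fit
```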

    Approximate policy iteration: A survey and some new methods

    Get PDF
    We consider the classical policy iteration method of dynamic programming (DP), where approximations and simulation are used to deal with the curse of dimensionality. We survey a number of issues: convergence and rate of convergence of approximate policy evaluation methods, singularity and susceptibility to simulation noise of policy evaluation, exploration issues, constrained and enhanced policy iteration, policy oscillation and chattering, and optimistic and distributed policy iteration. Our discussion of policy evaluation is couched in general terms and aims to unify the available methods in the light of recent research developments and to compare the two main policy evaluation approaches: projected equations and temporal differences (TD), and aggregation. In the context of these approaches, we survey two different types of simulation-based algorithms: matrix inversion methods, such as least-squares temporal difference (LSTD), and iterative methods, such as least-squares policy evaluation (LSPE) and TD(λ), and their scaled variants. We discuss a recent method, based on regression and regularization, which rectifies the unreliability of LSTD for nearly singular projected Bellman equations. An iterative version of this method belongs to the LSPE class of methods and provides the connecting link between LSTD and LSPE. Our discussion of policy improvement focuses on the role of policy oscillation and its effect on performance guarantees. We illustrate that policy evaluation, when done by the projected equation/TD approach, may lead to policy oscillation, but when done by aggregation it does not. This implies better error bounds and more regular performance for aggregation, at the expense of some loss of generality in cost function representation capability. Hard aggregation provides the connecting link between projected equation/TD-based and aggregation-based policy evaluation, and is characterized by favorable error bounds. (Sponsors: National Science Foundation (U.S.), grant ECCS-0801549; Los Alamos National Laboratory, Information Science and Technology Institute; United States Air Force, grant FA9550-10-1-0412.)
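    As a concrete instance of the matrix-inversion class discussed above, LSTD(0) estimates linear value-function weights from a batch of transitions by solving a small linear system. A minimal sketch in the generic textbook form, with a small ridge term standing in, very loosely, for the regularization idea; the feature matrices and data here are hypothetical:

```python
import numpy as np

# LSTD(0): from transitions (s_t, r_t, s_{t+1}) collected under a fixed policy,
# with feature rows phi(s_t) and phi(s_{t+1}), solve A theta = b where
#   A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T,   b = sum_t phi(s_t) r_t.

def lstd(phi, phi_next, rewards, gamma=0.95, reg=1e-8):
    A = phi.T @ (phi - gamma * phi_next)                      # (d, d) system matrix
    b = phi.T @ rewards                                       # (d,) right-hand side
    return np.linalg.solve(A + reg * np.eye(A.shape[0]), b)   # small ridge for near-singular A

# Usage with placeholder data: phi and phi_next are (T, d) feature matrices, rewards is (T,).
rng = np.random.default_rng(0)
T, d = 1000, 5
theta = lstd(rng.normal(size=(T, d)), rng.normal(size=(T, d)), rng.normal(size=T))
```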

    The stochastic matching problem

    Get PDF
    The matching problem plays a basic role in combinatorial optimization and in statistical mechanics. In its stochastic variants, optimization decisions have to be taken given only some probabilistic information about the instance. While the deterministic case can be solved in polynomial time, stochastic variants are worst-case intractable. We propose an efficient method to solve stochastic matching problems which combines features of the survey propagation equations and of the cavity method. We test it on random bipartite graphs, for which we analyze the phase diagram and compare the results with exact bounds. Our approach is shown numerically to be effective on the full range of parameters and to outperform state-of-the-art methods. Finally, we discuss how the method can be generalized to other problems of optimization under uncertainty. Comment: Published version has very minor changes
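    For contrast with the stochastic variants, the deterministic bipartite case referred to above is solvable exactly in polynomial time. A tiny baseline on a random instance (illustrative only; the paper's survey-propagation/cavity solver is not reproduced):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Exact minimum-cost bipartite matching on a random deterministic instance,
# the polynomial-time baseline mentioned in the abstract.

rng = np.random.default_rng(0)
costs = rng.random((50, 50))                     # random edge weights
rows, cols = linear_sum_assignment(costs)        # Hungarian-type exact solver
print("optimal matching cost:", costs[rows, cols].sum())
```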