61 research outputs found

    Fireside Chat: AI in Canada and the United States


    MSVIPER: Improved Policy Distillation for Reinforcement-Learning-Based Robot Navigation

    We present Multiple Scenario Verifiable Reinforcement Learning via Policy Extraction (MSVIPER), a new method for distilling policies into decision trees for improved robot navigation. MSVIPER learns an "expert" policy using any reinforcement learning (RL) technique that produces a state-action mapping, then uses imitation learning to extract a decision-tree policy from it. We demonstrate that MSVIPER produces efficient decision trees that accurately mimic the behavior of the expert policy. Moreover, we present efficient policy-distillation and tree-modification techniques that exploit the decision-tree structure to improve a policy without retraining. We use our approach to improve the performance of RL-based robot navigation algorithms for indoor and outdoor scenes, demonstrating reduced freezing and oscillation behaviors (by up to 95%) for mobile robots navigating among dynamic obstacles, and reduced vibration and oscillation (by up to 17%) for outdoor navigation on complex, uneven terrain.
    Comment: 6 pages main paper, 2 pages of references, 5-page appendix (13 pages total); 5 tables, 9 algorithms, 4 figures
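    The distillation loop the abstract describes can be sketched in a few lines: roll out the expert policy to collect state-action pairs, then fit a decision tree to imitate that mapping. This is a minimal, generic sketch of the idea (not the authors' MSVIPER implementation); the `expert_policy` stand-in, sample counts, and tree depth are illustrative assumptions.

    ```python
    # Sketch of policy distillation to a decision tree (VIPER-style idea):
    # roll out an "expert" policy, collect (state, action) pairs, and fit a
    # decision tree that imitates the expert's state-action mapping.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def expert_policy(state):
        # Stand-in for a trained RL expert: a simple rule on a 2-D state.
        # In practice this would be a learned neural-network policy.
        return int(state[0] + state[1] > 0)

    def collect_rollouts(policy, n_samples=1000, state_dim=2, seed=0):
        # Sample states and record the expert's action at each one.
        rng = np.random.default_rng(seed)
        states = rng.uniform(-1.0, 1.0, size=(n_samples, state_dim))
        actions = np.array([policy(s) for s in states])
        return states, actions

    def distill_to_tree(policy, max_depth=4):
        # Fit a shallow, interpretable tree to the expert's demonstrations.
        states, actions = collect_rollouts(policy)
        tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        tree.fit(states, actions)
        return tree

    tree = distill_to_tree(expert_policy)
    ```

    The resulting tree can then be inspected or edited directly, which is what makes the paper's "tree-modification without retraining" step possible: changing a leaf's action changes the policy's behavior in that region of state space immediately.
    
    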

    AI Risk Management Framework

    As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer a resource to organizations designing, developing, deploying, or using AI systems, to help them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes, in all sectors, and throughout society to implement its approaches. The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so that society can benefit from AI while also being protected from its potential harms.

    A Taxonomy and Terminology of Adversarial Machine Learning


    Image Specific Error Rate: A Biometric Performance Metric


    Biometric Sample Quality, Standardization



    NIST fingerprint image quality (NFIQ) compliance test


    Biometric Sample Quality
