
    Measurement of Longitudinal Electron Diffusion in Liquid Argon

    We report the measurement of longitudinal electron diffusion coefficients in liquid argon for electric fields between 100 and 2000 V/cm, using a gold photocathode as a bright electron source. The measurement principle, apparatus, and data analysis are described. Our results are consistent with previous measurements between 100 and 350 V/cm [1], are systematically higher than the prediction of Atrazhev-Timoshkin [2], and represent the world's best measurement between 350 and 2000 V/cm. The quantum efficiency of the gold photocathode, as well as the drift velocity and longitudinal diffusion coefficients in gaseous argon, are also presented. Comment: Accepted by NIM on January 29th. 201
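    For orientation, the longitudinal diffusion coefficient is conventionally related to the spread of a drifting electron cloud by the standard relation below (textbook background, not this paper's specific analysis):

        \sigma_L^2 = 2 D_L t_d, \qquad t_d = \frac{d}{v_d}, \qquad \varepsilon_L = \frac{e D_L}{\mu},

    where \sigma_L is the longitudinal spatial spread after drift time t_d over distance d, v_d is the drift velocity, \mu = v_d / E is the mobility, and \varepsilon_L is the effective longitudinal electron energy of the kind compared against the Atrazhev-Timoshkin prediction.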

    Distributed optimization with inexact oracle

    In this paper, we study the distributed optimization problem using approximate first-order information. We suppose each agent can repeatedly call an inexact first-order oracle for its individual objective function and exchange information with its time-varying neighbors. We revisit the distributed subgradient method in this setting and show its suboptimality under square-summable but not summable step sizes. We also present several conditions on the inexactness of the local oracles that ensure exact convergence of the iterative sequences to the global optimal solution. A numerical example is given to verify the efficiency of our algorithm.
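    A minimal sketch of the kind of distributed subgradient iteration the abstract describes is given below; the update order, error model, and step-size rule are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    def distributed_subgradient(x0, subgrad, noise, W_seq, iters=1000):
        """Distributed subgradient method with an inexact first-order oracle.

        x0      : (n_agents, dim) initial points
        subgrad : subgrad(i, x) -> exact subgradient of agent i's objective at x
        noise   : noise(i, k)   -> oracle error at iteration k (the inexactness)
        W_seq   : W_seq(k)      -> row-stochastic mixing matrix of the time-varying graph
        """
        x = np.array(x0, dtype=float)
        n = x.shape[0]
        for k in range(iters):
            W = W_seq(k)
            # consensus step: mix with time-varying neighbors
            x = W @ x
            # inexact oracle step with a square-summable, not summable step size
            step = 1.0 / (k + 1)
            g = np.stack([subgrad(i, x[i]) + noise(i, k) for i in range(n)])
            x = x - step * g
        return x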

    Acceleration of stochastic gradient descent with momentum by averaging: finite-sample rates and asymptotic normality

    Stochastic gradient descent with momentum (SGDM) has been widely used in many machine learning and statistical applications. Despite the observed empirical benefits of SGDM over traditional SGD, the theoretical understanding of the role of momentum for different learning rates in the optimization process remains largely open. We analyze the finite-sample convergence rate of SGDM in the strongly convex setting and show that, with a large batch size, mini-batch SGDM converges faster than mini-batch SGD to a neighborhood of the optimal value. Furthermore, we analyze the Polyak-averaging version of the SGDM estimator, establish its asymptotic normality, and justify its asymptotic equivalence to averaged SGD.
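    The two objects the abstract studies, the SGDM iterate and its Polyak average, have a simple generic form; the sketch below is a standard heavy-ball implementation under assumed hyperparameters, not the paper's specific estimator.

    import numpy as np

    def sgdm_polyak(grad_fn, x0, lr=0.01, beta=0.9, iters=1000, batch=64, rng=None):
        """Mini-batch SGD with momentum plus Polyak-Ruppert averaging.

        grad_fn(x, batch, rng) should return a stochastic gradient estimate.
        Returns both the last iterate and the averaged iterate.
        """
        rng = rng or np.random.default_rng(0)
        x = np.array(x0, dtype=float)
        v = np.zeros_like(x)
        x_bar = np.zeros_like(x)
        for k in range(1, iters + 1):
            g = grad_fn(x, batch, rng)      # mini-batch stochastic gradient
            v = beta * v + g                # momentum (heavy-ball) buffer
            x = x - lr * v                  # SGDM update
            x_bar += (x - x_bar) / k        # running Polyak average of iterates
        return x, x_bar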

    Observer-based Leader-following Consensus for Positive Multi-agent Systems Over Time-varying Graphs

    This paper addresses the leader-following consensus problem for discrete-time positive multi-agent systems over time-varying graphs. We assume that the followers may have mutually different positive dynamics, which can also differ from those of the leader. Compared with most existing positive consensus works for homogeneous multi-agent systems, the formulated problem is more general and challenging due to the interplay between the positivity requirement and high-order heterogeneous dynamics. To solve the problem, we present an extended version of the existing observer-based design for positive multi-agent systems. By virtue of the common quadratic Lyapunov function technique, we show that the followers keep their state variables in the positive orthant and finally achieve an output consensus specified by the leader. A numerical example is used to verify the efficacy of our algorithms.
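    As background, a generic observer-based leader-following scheme has each follower run a distributed observer of the leader's state; the sketch below shows one such update step with placeholder gains, and it does not capture the positivity constraints that are the paper's main concern.

    import numpy as np

    def observer_consensus_step(x_hat, x0_leader, A0, L_gain, adj, pins):
        """One step of a generic distributed observer of the leader state.

        x_hat     : (n_followers, m) current estimates of the leader state
        x0_leader : (m,) leader state (seen only by pinned followers)
        A0        : (m, m) leader system matrix
        L_gain    : (m, m) observer gain (placeholder, not the paper's design)
        adj       : (n, n) adjacency weights of the (time-varying) graph at this step
        pins      : (n,) 1 if follower i directly observes the leader, else 0
        """
        n = x_hat.shape[0]
        new = np.empty_like(x_hat)
        for i in range(n):
            innovation = sum(adj[i, j] * (x_hat[j] - x_hat[i]) for j in range(n))
            innovation = innovation + pins[i] * (x0_leader - x_hat[i])
            new[i] = A0 @ x_hat[i] + L_gain @ innovation
        return new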

    Advancing Continual Learning for Robust Deepfake Audio Classification

    The emergence of new spoofing attacks poses an increasing challenge to audio security. Current detection methods often falter when faced with unseen spoofing attacks. Traditional strategies, such as retraining with new data, are not always feasible because of extensive storage requirements. This paper introduces a novel continual learning method, the Continual Audio Defense Enhancer (CADE). First, by using a fixed memory size to store randomly selected samples from previous datasets, our approach conserves resources and adheres to privacy constraints. Additionally, we apply two distillation losses in CADE. Through distillation in the classifiers, CADE ensures that the student model closely resembles the teacher model. This resemblance helps the model retain old information while facing unseen data. We further refine our model's performance with a novel embedding similarity loss that extends across multiple depth layers, facilitating superior positive-sample alignment. Experiments conducted on the ASVspoof2019 dataset show that our proposed method outperforms the baseline methods. Comment: Submitted to IEEE Tencon. 5 page
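    The three ingredients named in the abstract (a fixed-size random memory, a classifier-level distillation loss, and a multi-layer embedding similarity loss) can be sketched generically as below; class names, temperatures, and similarity choices are illustrative assumptions rather than CADE's actual implementation.

    import random
    import torch.nn.functional as F

    class FixedMemory:
        """Fixed-size memory of past samples filled by reservoir sampling."""
        def __init__(self, capacity):
            self.capacity, self.seen, self.data = capacity, 0, []

        def add(self, sample):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(sample)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = sample

    def classifier_distillation_loss(student_logits, teacher_logits, T=2.0):
        """KL distillation pulling the student classifier toward the frozen teacher."""
        p_t = F.softmax(teacher_logits / T, dim=-1)
        log_p_s = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

    def embedding_similarity_loss(student_feats, teacher_feats):
        """Cosine alignment of intermediate embeddings across several depth layers."""
        return sum(1 - F.cosine_similarity(s, t, dim=-1).mean()
                   for s, t in zip(student_feats, teacher_feats))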

    Retrieval-Augmented Embodied Agents

    Embodied agents operating in complex and uncertain environments face considerable challenges. While some advanced agents handle complex manipulation tasks proficiently, their success often hinges on extensive training data to develop their capabilities. In contrast, humans typically rely on recalling past experiences and analogous situations to solve new problems. Aiming to emulate this human approach in robotics, we introduce the Retrieval-Augmented Embodied Agent (RAEA). This system equips robots with a form of shared memory, significantly enhancing their performance. Our approach integrates a policy retriever, allowing robots to access relevant strategies from an external policy memory bank based on multi-modal inputs. Additionally, a policy generator is employed to assimilate these strategies into the learning process, enabling robots to formulate effective responses to tasks. Extensive testing of RAEA in both simulated and real-world scenarios demonstrates its superior performance over traditional methods, representing a major leap forward in robotic technology. Comment: CVPR202
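    The retrieval half of such a system can be sketched as nearest-neighbor lookup over embedded past experiences; the embedding model, memory layout, and value of k below are assumptions for illustration, and the retrieved policies would then condition a separate policy generator.

    import numpy as np

    def retrieve_policies(query_embedding, memory_embeddings, memory_policies, k=5):
        """Return the k stored policies whose (multi-modal) embeddings are most
        similar to the current observation embedding, by cosine similarity."""
        q = query_embedding / (np.linalg.norm(query_embedding) + 1e-8)
        M = memory_embeddings / (np.linalg.norm(memory_embeddings, axis=1, keepdims=True) + 1e-8)
        scores = M @ q
        top = np.argsort(-scores)[:k]
        return [memory_policies[i] for i in top], scores[top]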