
    On Security and Sparsity of Linear Classifiers for Adversarial Settings

    Machine-learning techniques are widely used in security-related applications, like spam and malware detection. However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection. In this work, we focus on the vulnerability of linear classifiers to evasion attacks. This is a relevant problem, as linear classifiers are increasingly used in embedded systems and mobile devices for their low processing time and memory requirements. We exploit recent findings in robust optimization to investigate the link between regularization and security of linear classifiers, depending on the type of attack. We also analyze the relationship between the sparsity of feature weights, which is desirable for reducing processing cost, and the security of linear classifiers. We further propose a novel octagonal regularizer that allows us to achieve a proper trade-off between the two. Finally, we empirically show how this regularizer can improve classifier security and sparsity in real-world application examples, including spam and malware detection.
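
    A minimal NumPy sketch of training a linear classifier with an octagonal-style penalty, i.e. a weighted mix of the L1 norm (sparsity) and the Linf norm (evenly spread weights, which hardens evasion). The hinge loss, the exact form Omega(w) = rho*||w||_1 + (1 - rho)*||w||_inf, and all hyperparameters are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def octagonal_penalty(w, rho):
            # rho in [0, 1]: 1 -> pure L1 (sparse), 0 -> pure Linf (evenly spread).
            return rho * np.sum(np.abs(w)) + (1 - rho) * np.max(np.abs(w))

        def train_linear(X, y, lam=0.1, rho=0.5, lr=0.01, epochs=200):
            # X: (n, d) feature matrix, y: labels in {-1, +1}.
            n, d = X.shape
            w, b = np.zeros(d), 0.0
            for _ in range(epochs):
                margins = y * (X @ w + b)
                viol = margins < 1                          # hinge-loss violators
                grad_w = -(y[viol, None] * X[viol]).mean(axis=0) if viol.any() else np.zeros(d)
                grad_b = -y[viol].mean() if viol.any() else 0.0
                sub = rho * np.sign(w)                      # subgradient of the L1 part
                i = int(np.argmax(np.abs(w)))               # subgradient of the Linf part
                sub[i] += (1 - rho) * np.sign(w[i])
                w -= lr * (grad_w + lam * sub)
                b -= lr * grad_b
            return w, b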

    Unitarity of the Leptonic Mixing Matrix

    We determine the elements of the leptonic mixing matrix, without assuming unitarity, combining data from neutrino oscillation experiments and weak decays. To that end, we first develop a formalism for studying neutrino oscillations in vacuum and matter when the leptonic mixing matrix is not unitary. To be conservative, only three light neutrino species are considered, whose propagation is generically affected by non-unitary effects. Precision improvements within future facilities are discussed as well. Comment: Standard Model radiative corrections to the invisible Z width included. Some numerical results modified at the percent level. Updated with latest bounds on the rare tau decay. Physical conclusions unchanged.
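
    For reference, a standard way to write vacuum oscillation probabilities when the mixing matrix N is not unitary (a sketch consistent with the abstract's setup; the normalisation by the diagonal entries of N N-dagger reflects the non-orthonormal flavour states, and the notation is an assumption):

        % Flavour state built from mass eigenstates via a non-unitary N:
        %   |nu_alpha> = (N N^dagger)_{alpha alpha}^{-1/2} \sum_i N_{alpha i}^* |nu_i>
        P_{\alpha\beta} =
          \frac{\bigl|\sum_i N_{\beta i}\, e^{-i m_i^2 L / 2E}\, N_{\alpha i}^{*}\bigr|^{2}}
               {(N N^{\dagger})_{\alpha\alpha}\,(N N^{\dagger})_{\beta\beta}}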

    MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

    Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. The design of general defense strategies against a wide range of such attacks still remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image, a trained network is picked randomly from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that this approach, MTDeep, reduces misclassification on perturbed images in various datasets such as MNIST, FashionMNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide additional resilience against adversarial attacks beyond what those mechanisms alone afford. Lastly, to quantify the increase in robustness of an ensemble-based classification system when we use MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability. Comment: Accepted to the Conference on Decision and Game Theory for Security (GameSec), 2019.
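
    As a simplified stand-in for the BSG formulation (which additionally models legitimate users), the sketch below computes a maximin randomization over networks against a worst-case attacker, given an assumed matrix of per-network accuracies under each attack. All numbers and names are illustrative.

        import numpy as np
        from scipy.optimize import linprog

        # Rows: networks in the ensemble; columns: attacks. U[i, j] is the
        # accuracy of network i on inputs perturbed by attack j (toy values).
        U = np.array([[0.0, 0.9, 0.8],
                      [0.7, 0.1, 0.9],
                      [0.8, 0.9, 0.0]])
        n, m = U.shape

        # Maximize v subject to sum_i p_i * U[i, j] >= v for every attack j,
        # with p on the probability simplex. Variables: [p_1..p_n, v].
        c = np.zeros(n + 1); c[-1] = -1.0                  # linprog minimizes -v
        A_ub = np.hstack([-U.T, np.ones((m, 1))])          # v - sum_i p_i U[i,j] <= 0
        b_ub = np.zeros(m)
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
        b_eq = np.ones(1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * n + [(None, None)])
        p, v = res.x[:n], res.x[-1]
        print("selection probabilities:", p.round(3), "guaranteed accuracy:", round(v, 3))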

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository. Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector Machine Applications'.
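
    A minimal sketch of a gradient-descent evasion attack on a trained RBF-kernel SVM, in the spirit of the evasion attacks the chapter evaluates. The step size, iteration budget, box constraint, and the assumption that the negative class is the attacker's goal are all illustrative; gamma must match the value used to train the model.

        import numpy as np
        from sklearn.svm import SVC

        def evade(clf: SVC, x: np.ndarray, gamma: float, step: float = 0.1, iters: int = 100):
            # Descend the decision function g(x) = sum_i alpha_i k(x_i, x) + b
            # until the sample crosses into the negative (e.g. 'benign') class.
            x = x.copy()
            sv, alpha = clf.support_vectors_, clf.dual_coef_.ravel()
            for _ in range(iters):
                k = np.exp(-gamma * ((sv - x) ** 2).sum(axis=1))    # RBF kernel values
                grad = (alpha[:, None] * k[:, None] * 2 * gamma * (sv - x)).sum(axis=0)
                x = np.clip(x - step * grad, 0.0, 1.0)              # keep features in [0, 1]
                if clf.decision_function(x[None])[0] < 0:
                    break
            return x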

    Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks

    Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack with two constraints: 1) the classification of the other images should be unchanged and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications (using the L0 norm to measure the number of modifications and the L2 norm to measure their magnitude). Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without losing the overall test accuracy performance. Comment: Accepted by the 56th Design Automation Conference (DAC 2019).
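
    Schematically, the attack can be posed as the constrained problem below, which ADMM then splits into alternating L0 and L2 subproblems over auxiliary copies of the parameter perturbation. The symbols (delta for the parameter modification, S_target and S_keep for the two image sets) are notational assumptions reconstructed from the abstract:

        % Minimise how many parameters change (L0) and by how much (L2),
        % while forcing the designated misclassifications and preserving
        % the labels of all other images:
        \min_{\delta}\ \|\delta\|_{0} + c\,\|\delta\|_{2}^{2}
        \quad\text{s.t.}\quad
        f(x;\,\theta+\delta) = t_{x}\ \ \forall x \in S_{\text{target}},
        \qquad
        f(x;\,\theta+\delta) = y_{x}\ \ \forall x \in S_{\text{keep}}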

    Keyed Non-Parametric Hypothesis Tests

    The recent popularity of machine learning calls for a deeper understanding of AI security. Amongst the numerous AI threats published so far, poisoning attacks currently attract considerable attention. In a poisoning attack the opponent partially tampers with the dataset used for learning, to mislead the classifier during the testing phase. This paper proposes a new protection strategy against poisoning attacks. The technique relies on a new primitive called keyed non-parametric hypothesis tests, which allow evaluating, under adversarial conditions, the training input's conformance with a previously learned distribution D. To do so we use a secret key κ unknown to the opponent. Keyed non-parametric hypothesis tests differ from classical tests in that the secrecy of κ prevents the opponent from misleading the keyed test into concluding that a (significantly) tampered dataset belongs to D. Comment: Paper published in NSS 2019.
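
    A minimal sketch of one way to "key" a non-parametric test (the paper's exact construction may differ): the secret key seeds a random projection the attacker cannot anticipate, and a two-sample Kolmogorov-Smirnov test is run on the projected data.

        import hashlib
        import numpy as np
        from scipy.stats import ks_2samp

        def keyed_ks_test(key: bytes, reference: np.ndarray, incoming: np.ndarray):
            # Derive a deterministic seed from the secret key.
            seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
            rng = np.random.default_rng(seed)
            direction = rng.normal(size=reference.shape[1])
            direction /= np.linalg.norm(direction)
            # The 1-D projection is unknown without the key, so the attacker
            # cannot craft poisoned data that passes the test by design.
            stat, p_value = ks_2samp(reference @ direction, incoming @ direction)
            return stat, p_value

        # Usage: reject the incoming batch if p_value falls below a threshold.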

    Feature-Guided Black-Box Safety Testing of Deep Neural Networks

    Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples necessitate some knowledge (architecture, parameters, etc.) of the network at hand. In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as SIFT (Scale Invariant Feature Transform) to extract features from an image. These features are converted into a mutable saliency distribution, where high probability is assigned to pixels that affect the composition of the image with respect to the human visual system. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, where the first player's objective is to minimise the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random. We show that, theoretically, the two-player game can converge to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. For Lipschitz networks, we also identify conditions that provide safety guarantees that no adversarial examples exist. Using Monte Carlo tree search we gradually explore the game state space to search for adversarial examples. Our experiments show that, despite the black-box setting, manipulations guided by a perception-based saliency distribution are competitive with state-of-the-art methods that rely on white-box saliency matrices or sophisticated optimization procedures. Finally, we show how our method can be used to evaluate the robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars. Comment: 35 pages, 5 tables, 23 figures.
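
    A sketch of turning SIFT keypoints into a pixel-level saliency distribution from which manipulations can be sampled. The Gaussian spread and response weighting are illustrative assumptions, not the paper's exact construction.

        import cv2
        import numpy as np

        def saliency_from_sift(gray_image: np.ndarray) -> np.ndarray:
            # gray_image: single-channel uint8 image.
            sift = cv2.SIFT_create()
            keypoints = sift.detect(gray_image, None)
            h, w = gray_image.shape
            sal = np.full((h, w), 1e-8)
            ys, xs = np.mgrid[0:h, 0:w]
            for kp in keypoints:
                cx, cy, sigma = kp.pt[0], kp.pt[1], max(kp.size / 2, 1.0)
                # Spread each keypoint's response over a Gaussian neighbourhood.
                sal += kp.response * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                            / (2 * sigma ** 2))
            return sal / sal.sum()   # normalise into a probability distribution

        # Pixels are sampled from this distribution and perturbed; the
        # two-player game then searches over such manipulations with MCTS.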

    Are You Tampering With My Data?

    We propose a novel approach towards adversarial attacks on neural networks (NN), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all the images of a class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any image to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained on public datasets can be subject to attacks by a skillful adversary. Comment: 18 pages.
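
    A minimal sketch of the class-wide one-pixel poisoning described above. The pixel location, replacement value, and target class are illustrative assumptions.

        import numpy as np

        def poison(images: np.ndarray, labels: np.ndarray, victim_class: int,
                   px: int = 0, py: int = 0, value: float = 1.0) -> np.ndarray:
            # images: (N, H, W, C) training set; labels: (N,) class labels.
            poisoned = images.copy()
            idx = labels == victim_class
            # The same single pixel is overwritten in every image of the class.
            poisoned[idx, py, px, :] = value
            return poisoned

        # At test time, applying the same one-pixel modification to any image
        # triggers the corrupted behaviour learned during training.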

    Low energy effects of neutrino masses

    While all models of Majorana neutrino masses lead to the same dimension-five effective operator, which does not conserve lepton number, the dimension-six operators induced at low energies conserve lepton number and differ depending on the high-energy model of new physics. We derive the low-energy dimension-six operators which are characteristic of generic Seesaw models, in which neutrino masses result from the exchange of heavy fields which may be either fermionic singlets, fermionic triplets or scalar triplets. The resulting operators may lead to effects observable in the near future, if the coefficients of the dimension-five and -six operators are decoupled along a certain pattern, which turns out to be common to all models. The phenomenological consequences are explored as well, including their contributions to μ → eγ and new bounds on the Yukawa couplings for each model. Comment: modifications: couplings in appendix B, formulas (121)-(122) on rare lepton decays (to match the published version) and consequently the bounds in the table.
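
    For concreteness, the unique dimension-five (Weinberg) operator that every such model generates, alongside the schematic form of the model-dependent dimension-six terms (the coefficient names and normalisation are notational assumptions):

        % Dimension five: violates lepton number, yields Majorana masses
        % after electroweak symmetry breaking.
        \delta\mathcal{L}^{d=5} =
          \frac{c^{d=5}_{\alpha\beta}}{\Lambda}
          \left(\overline{L^{c}_{\alpha}}\,\tilde{\phi}^{*}\right)
          \left(\tilde{\phi}^{\dagger}\,L_{\beta}\right) + \text{h.c.}
        % Dimension six: lepton-number conserving, model dependent.
        \delta\mathcal{L}^{d=6} = \frac{c^{d=6}}{\Lambda^{2}}\,\mathcal{O}^{d=6}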

    Neutrino masses in SU(5)×U(1)_F with adjoint flavons

    We present an SU(5)×U(1)_F supersymmetric model for neutrino masses and mixings that implements the seesaw mechanism by means of the heavy SU(2) singlet and triplet states contained in three adjoints of SU(5). We discuss how Abelian U(1)_F symmetries can naturally yield non-hierarchical light neutrinos even when the heavy states are strongly hierarchical, and how they can also ensure that R-parity arises as an exact accidental symmetry. By assigning two flavons that break U(1)_F to the adjoint representation of SU(5) and assuming universality for all the fundamental couplings, the coefficients of the effective Yukawa and Majorana mass operators become calculable in terms of group-theoretical quantities. The model has a single free parameter; at leading order, however, the structure of the light neutrino mass matrix is determined in a parameter-independent way. Comment: 16 pages, 9 figures. Included contributions to neutrino masses from the triplet states contained in the three adjoints of SU(5).
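
    Since each adjoint of SU(5) contains both a fermionic singlet and an SU(2) triplet, the light neutrino mass matrix takes the combined type-I plus type-III seesaw form below (a standard schematic; the Yukawa and heavy-mass symbols are notational assumptions):

        % Type-I (singlet S) and type-III (triplet T) contributions:
        m_\nu \simeq -\,v^{2}\left(
          Y_{S}\, M_{S}^{-1}\, Y_{S}^{T} + Y_{T}\, M_{T}^{-1}\, Y_{T}^{T}
        \right)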