    Low energy processes to distinguish among seesaw models

    We consider the three basic seesaw scenarios (with fermionic singlets, scalar triplets or fermionic triplets) and discuss their phenomenology beyond neutrino masses. Using the effective field theory approach, we compare the dimension-six operators characteristic of each model. We discuss the possibility of having large dimension-six operators together with small dimension-five ones (small neutrino masses), without any fine-tuning, if lepton number is violated at a low energy scale. Finally, we discuss some peculiarities of the phenomenology of the fermionic triplet seesaw model.
    Comment: 3 pages, to appear in the proceedings of IFAE08, Bologna, Italy
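For context, the dimension-five operator referred to above is the standard Weinberg operator; as a reminder (standard notation, not taken from the abstract itself):

```latex
% Unique dimension-five operator built from SM fields; after electroweak
% symmetry breaking it yields Majorana neutrino masses suppressed by the
% lepton-number-violating scale \Lambda.
\mathcal{L}_5 = \frac{c_5}{\Lambda}
  \left(\overline{L^c}\,\tilde{\phi}^*\right)\left(\tilde{\phi}^\dagger L\right)
  + \mathrm{h.c.},
\qquad
m_\nu \sim c_5\,\frac{v^2}{\Lambda}.
```

In this language, the abstract's point is that $m_\nu$ can stay small even for low $\Lambda$ if $c_5$ is suppressed by an approximate lepton-number symmetry, while the dimension-six coefficients, which conserve lepton number, remain unsuppressed.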

    On Security and Sparsity of Linear Classifiers for Adversarial Settings

    Machine-learning techniques are widely used in security-related applications, like spam and malware detection. However, in such settings they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection. In this work, we focus on the vulnerability of linear classifiers to evasion attacks. This is a relevant problem, as linear classifiers are increasingly used in embedded systems and mobile devices for their low processing time and memory requirements. We exploit recent findings in robust optimization to investigate the link between regularization and security of linear classifiers, depending on the type of attack. We also analyze the relationship between the sparsity of the feature weights, which is desirable for reducing processing cost, and the security of linear classifiers, and propose a novel octagonal regularizer that allows us to achieve a proper trade-off between the two. Finally, we empirically show how this regularizer can improve classifier security and sparsity in real-world applications, including spam and malware detection.
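As a rough illustration of how an octagonal penalty could trade off sparsity (an L1 term) against evenly spread weights (an L-infinity term, which makes sparse evasion attacks more expensive), here is a minimal sketch; the function name, the mixing parameter `rho`, and its default value are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def octagonal_penalty(w, rho=0.5):
    # Hypothetical sketch: convex combination of the L1 norm (promotes
    # sparse weights, reducing processing cost) and the L-infinity norm
    # (spreads weight across features, so no single feature dominates
    # and evading the classifier requires modifying many features).
    w = np.asarray(w, dtype=float)
    return rho * np.abs(w).sum() + (1.0 - rho) * np.abs(w).max()

# Example: w = [3, -4, 0] gives 0.5*7 + 0.5*4 = 5.5
print(octagonal_penalty([3.0, -4.0, 0.0]))  # -> 5.5
```

Its unit ball in two dimensions is an octagon, interpolating between the L1 diamond (`rho=1`) and the L-infinity square (`rho=0`); such a penalty would be added to a hinge or logistic loss when training the linear classifier.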

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
    Comment: Accepted for publication in Pattern Recognition, 2018

    Higgs-gauge unification without tadpoles

    In orbifold gauge theories, localized tadpoles can be radiatively generated at the fixed points where U(1) subgroups are conserved. If the Standard Model Higgs fields are identified with internal components of the bulk gauge fields (Higgs-gauge unification), in the presence of these tadpoles the Higgs mass becomes sensitive to the UV cutoff and electroweak symmetry breaking is spoiled. We find the general conditions, based on symmetry arguments, for the absence or presence of localized tadpoles in models with an arbitrary number of dimensions D. We show that in the class of orbifold compactifications based on T^{D-4}/Z_N (D even, N>2) tadpoles are always allowed, while on T^{D-4}/\mathbb Z_2 (arbitrary D) with fermions in arbitrary representations of the bulk gauge group, tadpoles can only appear in D=6 dimensions. We explicitly check this with one- and two-loop calculations.
    Comment: 19 pages, 3 figures, axodraw.sty. v2: version to appear in Nucl. Phys. B

    Is the 125 GeV Higgs the superpartner of a neutrino?

    Recent LHC searches have provided strong evidence for the Higgs, a boson whose gauge quantum numbers coincide with those of a SM fermion, the neutrino. This raises the mandatory question of whether the Higgs and the neutrino can be related by supersymmetry. We study this possibility in a model in which an approximate R-symmetry acts as a lepton number. We show that Higgs physics resembles that of the SM Higgs, with the exception of a novel invisible decay into a Goldstino and a neutrino, with a branching fraction that can be as large as ~10%. Based on naturalness criteria, only stops and sbottoms are required to be lighter than a TeV, with a phenomenology dictated by the R-symmetry. They have novel decays into quarks+leptons that could be seen at the LHC, allowing one to distinguish these scenarios from the ordinary MSSM.
    Comment: 19 pages, 8 figures

    Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

    Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs crafted to mislead classification. This work proposes a detection method that combines non-linear dimensionality reduction and density estimation techniques. Our empirical findings show that the proposed approach is able to effectively detect adversarial examples crafted by non-adaptive attackers, i.e., attackers not specifically tuned to bypass the detection method. Given our promising results, we plan to extend our analysis to adaptive attackers in future work.
    Comment: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2019
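To make the pipeline concrete: the idea is to embed inputs in a low-dimensional space via a non-linear reduction and flag points that fall in low-density regions of the benign data. The toy sketch below assumes 2-D embeddings are already available and uses a hand-rolled Gaussian kernel density estimate; the bandwidth, threshold, and all names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def kde_scores(train, queries, bandwidth=0.5):
    # Unnormalized Gaussian kernel density estimate of each query
    # point under the distribution of benign training embeddings.
    d2 = ((queries[:, None, :] - train[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
# Benign embeddings cluster near the origin; the second query point is
# far off-manifold, standing in for an adversarial example.
benign = rng.normal(0.0, 0.3, size=(200, 2))
queries = np.array([[0.0, 0.1], [4.0, 4.0]])

scores = kde_scores(benign, queries)
flagged = scores < 0.05  # low density -> flag as adversarial
```

A non-adaptive attacker does not optimize against the density score, so its perturbed inputs typically land off the benign manifold and receive low scores; an adaptive attacker could fold the score into its attack objective, which is why the abstract defers that case to future work.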