29 research outputs found

    A Compromise between Neutrino Masses and Collider Signatures in the Type-II Seesaw Model

    A natural extension of the standard $SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge model to accommodate massive neutrinos is to introduce one Higgs triplet and three right-handed Majorana neutrinos, leading to a $6\times 6$ neutrino mass matrix which contains three $3\times 3$ sub-matrices $M_{\rm L}$, $M_{\rm D}$ and $M_{\rm R}$. We show that the three light Majorana neutrinos (i.e., the mass eigenstates of $\nu_e$, $\nu_\mu$ and $\nu_\tau$) are exactly massless in this model if and only if $M_{\rm L} = M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ exactly holds. This no-go theorem implies that small but non-vanishing neutrino masses may result from a significant but incomplete cancellation between the $M_{\rm L}$ and $M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ terms in the Type-II seesaw formula, provided the three right-handed Majorana neutrinos are of ${\cal O}(1)$ TeV and experimentally detectable at the LHC. We propose three simple Type-II seesaw scenarios with the $A_4 \times U(1)_{\rm X}$ flavor symmetry to interpret the observed neutrino mass spectrum and neutrino mixing pattern. Such a TeV-scale neutrino model can be tested in two complementary ways: (1) searching for possible collider signatures of lepton number violation induced by the right-handed Majorana neutrinos and doubly-charged Higgs particles; and (2) searching for possible consequences of unitarity violation of the $3\times 3$ neutrino mixing matrix in future long-baseline neutrino oscillation experiments. (Comment: RevTeX, 19 pages, no figure)
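    For orientation, the Type-II seesaw relation behind this no-go theorem can be sketched as follows (standard leading-order block-diagonalization of the $6\times 6$ mass matrix, in the notation of the abstract; overall sign and phase conventions are assumed):

        \mathcal{M} = \begin{pmatrix} M_{\rm L} & M_{\rm D} \\ M_{\rm D}^T & M_{\rm R} \end{pmatrix},
        \qquad
        M_\nu \simeq M_{\rm L} - M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T ,

    so the three light neutrinos are massless exactly when the two terms cancel, and small but non-zero masses arise from an incomplete cancellation between them.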

    Dynamic Supervised Learning: Some Basic Issues and Application Aspects

    Incremental Learning

    Performance Analysis of Classification Methods for Cardio Vascular Disease (CVD)

    Procedural Creation of Behavior Trees for NPCs

    Based on an emerging need for automated AI generation, we present a machine learning approach to generate behavior trees controlling NPCs in a “Capture the Flag” game. After discussing the game’s mechanics and rule set, we present the implemented logic and how trees are generated. Subsequently, teams of agents controlled by the generated trees are matched up against each other, allowing the underlying trees to be refined by learning from victorious opponents. Following three program executions, featuring 1600, 8000 and 16000 matches respectively, the highest-scoring trees are presented and discussed in this paper.
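    As a rough illustration of this generate-and-refine loop, the sketch below procedurally creates small random behavior trees and lets losing trees copy subtrees from winners. The tree encoding, the match simulation and the grafting rule are placeholder assumptions for illustration, not the paper's actual game logic or rule set.

        import copy
        import random

        ACTIONS = ["move_to_flag", "attack_enemy", "defend_base", "return_flag"]

        def random_tree(depth=2):
            # Procedurally generate a small random tree as nested lists:
            # ["selector" | "sequence", child, child], with action names as leaves.
            if depth == 0:
                return random.choice(ACTIONS)
            return [random.choice(["selector", "sequence"]),
                    random_tree(depth - 1), random_tree(depth - 1)]

        def random_subtree(tree):
            # Pick a random subtree (possibly the whole tree or a single leaf).
            if isinstance(tree, str) or random.random() < 0.5:
                return tree
            return random_subtree(random.choice(tree[1:]))

        def refine(loser, winner):
            # Crude "learning from the victorious opponent": graft a random
            # subtree of the winner into a random slot of the loser.
            loser = copy.deepcopy(loser)
            if isinstance(loser, str):
                return copy.deepcopy(random_subtree(winner))
            loser[random.randrange(1, len(loser))] = copy.deepcopy(random_subtree(winner))
            return loser

        def play_match(tree_a, tree_b):
            # Placeholder for the Capture-the-Flag simulation; returns the winner.
            return tree_a if random.random() < 0.5 else tree_b

        population = [random_tree() for _ in range(8)]
        for _ in range(1600):  # smallest of the three reported match counts
            i, j = random.sample(range(len(population)), 2)
            winner = play_match(population[i], population[j])
            loser = j if winner is population[i] else i
            population[loser] = refine(population[loser], winner)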

    ForestNet – Automatic Design of Sparse Multilayer Perceptron Network Architectures Using Ensembles of Randomized Trees

    In this paper, we introduce a mechanism for designing the architecture of a Sparse Multi-Layer Perceptron network, for classification, called ForestNet. Networks built using our approach are capable of handling high-dimensional data and learning representations of both visual and non-visual data. The proposed approach first builds an ensemble of randomized trees in order to gather information on the hierarchy of features and their separability among the classes. Subsequently, such information is used to design the architecture of a sparse network for a specific data set and application. The number of neurons is automatically adapted to the dataset. The proposed approach was evaluated using two non-visual and two visual datasets. For each dataset, 4 ensembles of randomized trees with different sizes were built. In turn, per ensemble, a sparse network architecture was designed using our approach, and a fully connected network with the same architecture was also constructed. The sparse networks defined using our approach consistently outperformed their respective tree ensembles, achieving statistically significant improvements in classification accuracy. While we do not beat state-of-the-art results, given our network size and the lack of data augmentation techniques, our method exhibits very promising results, as the sparse networks performed similarly to their fully connected counterparts with a reduction of more than 98% of the connections in the visual tasks.
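    One plausible way to realize such a tree-to-network mapping is sketched below: each randomized tree contributes one hidden neuron, connected only to the input features that the tree splits on. This grouping rule and the dataset are illustrative assumptions for the sketch, not necessarily ForestNet's exact design procedure.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.ensemble import ExtraTreesClassifier

        X, y = load_digits(return_X_y=True)
        forest = ExtraTreesClassifier(n_estimators=32, max_depth=6,
                                      random_state=0).fit(X, y)

        # Connection mask for the first layer: hidden neuron j is wired only
        # to the features used as split variables in tree j.
        n_features = X.shape[1]
        mask = np.zeros((len(forest.estimators_), n_features), dtype=bool)
        for j, tree in enumerate(forest.estimators_):
            used = tree.tree_.feature      # split feature per node, -2 at leaves
            mask[j, used[used >= 0]] = True

        print(f"hidden units: {mask.shape[0]}, "
              f"connection sparsity: {1.0 - mask.mean():.1%}")

        # During training, the mask would constrain the first weight matrix of
        # the perceptron (e.g. W1 *= mask after every update), leaving the
        # upper classification layers dense.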

    Learning Sparse Features with an Auto-Associator

    A major issue in statistical machine learning is the design of a representation, or feature space, facilitating the resolution of the learning task at hand. Sparse representations in particular facilitate discriminant learning: on the one hand, they are robust to noise; on the other hand, they disentangle the factors of variation mixed up in dense representations, favoring the separability and interpretation of data. This chapter focuses on auto-associators (AAs), i.e. multi-layer neural networks trained to encode/decode the data and thus de facto defining a feature space. AAs, first investigated in the 80s, were recently reconsidered as building blocks for deep neural networks. This chapter surveys related work on building sparse representations, and presents a new non-linear explicit sparse representation method referred to as the Sparse Auto-Associator (SAA), integrating a sparsity objective within the standard auto-associator learning criterion. The comparative empirical validation of SAAs on state-of-the-art handwritten digit recognition benchmarks shows that SAAs outperform standard auto-associators in terms of classification performance and yield results similar to denoising auto-associators. Furthermore, SAAs make it possible to control the representation size to some extent, through a conservative pruning of the feature space.
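    A minimal sketch of the underlying idea, an auto-associator whose reconstruction objective is augmented with a sparsity term on the hidden code, is given below (in PyTorch; the L1 penalty, its weight and the layer sizes are illustrative assumptions, not the SAA's actual sparsity criterion or pruning scheme).

        import torch
        import torch.nn as nn

        class SparseAutoAssociator(nn.Module):
            # Encoder/decoder pair; the hidden code h is returned so that a
            # sparsity penalty can be added to the reconstruction loss.
            def __init__(self, n_in=784, n_hidden=256):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
                self.decoder = nn.Linear(n_hidden, n_in)

            def forward(self, x):
                h = self.encoder(x)
                return self.decoder(h), h

        model = SparseAutoAssociator()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        x = torch.rand(64, 784)             # stand-in for a batch of digit images
        recon, h = model(x)
        loss = nn.functional.mse_loss(recon, x) + 1e-3 * h.abs().mean()  # sparsity term
        opt.zero_grad()
        loss.backward()
        opt.step()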