
    Mass spectrometric and first principles study of AlnC− clusters

    We study carbon-doped aluminum clusters using time-of-flight mass spectrometry experiments and ab initio calculations. Mass abundance distributions are obtained for anionic aluminum and aluminum-carbon mixed clusters. Besides the well-known magic aluminum clusters such as Al13− and Al23−, the Al7C− cluster is found to be particularly stable among the AlnC− clusters. Density functional calculations are performed to determine the ground-state structures of AlnC− clusters. Our results show that Al7C− is a magic cluster with extremely high stability, which might serve as a building block of cluster-assembled materials. Comment: 4 pages, 6 figures
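The relative stability behind the "magic cluster" claim is commonly quantified by the second-order difference of total energies, Δ₂E(n) = E(n+1) + E(n−1) − 2E(n), whose peaks mark especially stable sizes. A minimal sketch of that indicator; the energy values below are hypothetical placeholders, not the paper's DFT results:

```python
def second_order_difference(energies):
    """Return {n: Delta2_E(n)} for interior cluster sizes n, where
    Delta2_E(n) = E(n+1) + E(n-1) - 2*E(n); peaks indicate magic sizes."""
    sizes = sorted(energies)
    return {
        n: energies[n + 1] + energies[n - 1] - 2 * energies[n]
        for n in sizes[1:-1]
        if (n - 1) in energies and (n + 1) in energies
    }

# Hypothetical total energies E(n) (eV) for Al_nC^- clusters, chosen so
# that n = 7 stands out; these are NOT the paper's computed values.
E = {5: -120.0, 6: -145.5, 7: -173.0, 8: -196.0, 9: -220.0}
d2 = second_order_difference(E)
magic = max(d2, key=d2.get)  # size with the largest stability peak
```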

    Multi-Level Firing with Spiking DS-ResNet: Enabling Better and Deeper Directly-Trained Spiking Neural Networks

    Spiking neural networks (SNNs) are bio-inspired neural networks with asynchronous, discrete, and sparse characteristics, which have increasingly demonstrated their superiority in low energy consumption. Recent research has been devoted to utilizing spatio-temporal information to train SNNs directly by backpropagation. However, the binary and non-differentiable properties of spike activities cause directly trained SNNs to suffer from serious gradient vanishing and network degradation, which greatly limits their performance and prevents them from going deeper. In this paper, we propose a multi-level firing (MLF) method based on the existing spatio-temporal backpropagation (STBP) method, together with a spiking dormant-suppressed residual network (spiking DS-ResNet). MLF enables more efficient gradient propagation and incremental expression ability in the neurons. Spiking DS-ResNet can efficiently perform identity mapping of discrete spikes and provides a more suitable connection for gradient propagation in deep SNNs. With the proposed method, our model achieves superior performance on one non-neuromorphic dataset and two neuromorphic datasets with far fewer trainable parameters, and demonstrates a strong ability to combat gradient vanishing and degradation in deep SNNs. Comment: Accepted by the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22)
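The multi-level firing idea, letting a neuron emit a graded spike count by crossing several thresholds rather than a single binary one, with a surrogate derivative so gradients can flow, can be sketched as follows. The threshold values and the rectangular surrogate window here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def multi_level_fire(membrane, thresholds=(1.0, 2.0, 3.0)):
    """Graded firing: count how many thresholds the membrane potential
    crosses, giving a multi-level (rather than binary) spike output."""
    return sum((membrane >= th).astype(float) for th in thresholds)

def surrogate_grad(membrane, thresholds=(1.0, 2.0, 3.0), width=0.5):
    """STBP-style rectangular surrogate derivative: nonzero in a window
    around each threshold, replacing the non-differentiable step."""
    return sum((np.abs(membrane - th) < width).astype(float) / (2 * width)
               for th in thresholds)
```

The stack of steps widens the region where the surrogate gradient is nonzero, which is one intuition for why graded firing eases gradient vanishing in deep SNNs.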

    Effective AER Object Classification Using Segmented Probability-Maximization Learning in Spiking Neural Networks

    Address event representation (AER) cameras have recently attracted more attention due to their high temporal resolution and low power consumption compared with traditional frame-based cameras. Since AER cameras record the visual input as asynchronous discrete events, they are inherently suitable to coordinate with the spiking neural network (SNN), which is biologically plausible and energy-efficient on neuromorphic hardware. However, using SNNs to perform AER object classification is still challenging, due to the lack of effective learning algorithms for this new representation. To tackle this issue, we propose an AER object classification model using a novel segmented probability-maximization (SPA) learning algorithm. Technically, 1) the SPA learning algorithm iteratively maximizes the probability of the classes that samples belong to, in order to improve the reliability of neuron responses and the effectiveness of learning; 2) a peak detection (PD) mechanism is introduced in SPA to locate informative time points segment by segment, so that information within the whole event stream can be fully utilized during learning. Extensive experimental results show that, compared to state-of-the-art methods, our model is not only more effective but also requires less information to reach a given level of accuracy. Comment: AAAI 2020 (Oral)
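The segment-wise peak detection can be pictured as scanning a response trace and keeping the most informative time point within each segment. A toy version over a firing-rate array; the fixed-length segmentation here is a simplifying assumption, not the paper's exact mechanism:

```python
import numpy as np

def segment_peaks(rates, segment_len):
    """Return the global index of the peak response in each fixed-length
    segment of a 1-D response trace (trailing partial segment dropped)."""
    n_segments = len(rates) // segment_len
    return [
        seg * segment_len
        + int(np.argmax(rates[seg * segment_len:(seg + 1) * segment_len]))
        for seg in range(n_segments)
    ]
```

Learning only at these per-segment peaks is how information spread across the whole event stream can still be exploited without processing every time step.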

    Revealing the mechanisms of semantic satiation with deep learning models

    The phenomenon of semantic satiation, in which a word or phrase loses its meaning after being repeated many times, is a well-known psychological effect. However, the microscopic neural computational principles responsible for it remain unknown. In this study, we use a deep learning model of continuous coupled neural networks to investigate the mechanism underlying semantic satiation and describe the process precisely in terms of neuronal components. Our results suggest that, from a mesoscopic perspective, semantic satiation may be a bottom-up process. Whereas existing macroscopic psychological studies suggest that semantic satiation is a top-down process, our simulations use an experimental paradigm similar to classical psychology experiments and observe similar results. Satiation of semantic objectives, like the learning process of our network model used for object recognition, relies on continuous learning and switching between objects. The underlying neural coupling strengthens or weakens satiation. Taken together, both neural and network mechanisms play a role in controlling semantic satiation.
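The bottom-up account can be caricatured with a simple resource-depletion neuron: each repetition of the same stimulus drains a synaptic resource, so the response decays with repetition. This toy habituation model is only an illustration of satiation-like dynamics, not the paper's coupled-network simulation:

```python
def repeated_presentation(n_repeats, gain=1.0, adapt=0.15):
    """Responses to n_repeats presentations of the same stimulus under
    synaptic resource depletion (a toy bottom-up satiation mechanism)."""
    responses, resource = [], 1.0
    for _ in range(n_repeats):
        responses.append(gain * resource)  # response scales with resource
        resource *= 1.0 - adapt            # resource depletes each repeat
    return responses
```

Switching to a new stimulus would reset the (stimulus-specific) resource, matching the observation that satiation is object-specific.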

    Manipulating Electromagnetic Waves with Zero Index Materials

    Zero-index material is a typical metamaterial with an effective zero refractive index, possessing a variety of exotic electromagnetic properties and particular functionalities. We consider two kinds of zero-index materials: the first a nearly matched zero-index medium made of magnetic metamaterial, and the second a radially anisotropic zero-index medium. The magnetic metamaterial-based systems are shown to be significant in wavefront engineering and flexibly tunable by an external magnetic field and a temperature field. The radially anisotropic zero-index-based systems can remarkably enhance omnidirectional isotropic radiation by enclosing a line source and a dielectric particle within a shell configuration. The physical origin is that the dielectric particle effectively rescatters the trapped anisotropic higher-order modes and converts them into the isotropic zeroth-order mode radiated outside the system. The lossy case is then examined, and energy compensation with a gain particle is also demonstrated.
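The wavefront-engineering role of a near-zero index follows from the phase a plane wave accumulates crossing a slab, φ = 2π·n_eff·L/λ: as n_eff → 0 the accumulated phase vanishes regardless of path length, so the exiting wavefront stays flat. A minimal check; the specific dimensions are arbitrary examples:

```python
import numpy as np

def phase_accumulation(n_eff, length, wavelength):
    """Phase (radians) picked up by a plane wave crossing a slab of
    effective index n_eff and thickness length: phi = 2*pi*n_eff*L/lambda."""
    return 2.0 * np.pi * n_eff * length / wavelength

# A zero-index slab imposes no phase even over unequal path lengths,
# while an ordinary dielectric slab of the same thickness does.
```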

    ESL-SNNs: An Evolutionary Structure Learning Strategy for Spiking Neural Networks

    Spiking neural networks (SNNs) have manifested remarkable advantages in power consumption and event-driven operation during inference. To take full advantage of this low power consumption and further improve the efficiency of these models, pruning methods have been explored to find sparse SNNs without redundant connections after training. However, parameter redundancy still hinders the efficiency of SNNs during training. In the human brain, the rewiring process of neural networks is highly dynamic, while synaptic connections remain relatively sparse during brain development. Inspired by this, we propose an efficient evolutionary structure learning (ESL) framework for SNNs, named ESL-SNNs, to implement sparse SNN training from scratch. The pruning and regeneration of synaptic connections in SNNs evolve dynamically during learning, while keeping the structural sparsity at a fixed level. As a result, ESL-SNNs can search for optimal sparse connectivity by exploring all possible parameters across time. Our experiments show that the proposed ESL-SNNs framework learns SNNs with sparse structures effectively at only a limited cost in accuracy: ESL-SNNs incur merely 0.28% accuracy loss at 10% connection density on the DVS-Cifar10 dataset. Our work presents a brand-new approach for sparse training of SNNs from scratch with biologically plausible evolutionary mechanisms, closing the expressibility gap between sparse and dense training. Hence, it has great potential for lightweight SNN training and inference with low power consumption and a small memory footprint.
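The dynamic prune-and-regenerate step at fixed sparsity can be sketched as a connectivity-mask update: drop the weakest active synapses, then regrow the same number at random positions. The magnitude-based pruning criterion and random regrowth below are common choices assumed for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def evolve_mask(weights, mask, prune_frac=0.2, rng=None):
    """One prune-and-regrow step on a binary connectivity mask, keeping
    the number of active synapses (i.e. the density) constant."""
    rng = rng or np.random.default_rng(0)
    w, m = weights.ravel(), mask.ravel().copy()
    active = np.flatnonzero(m)
    k = max(1, int(active.size * prune_frac))
    weakest = active[np.argsort(np.abs(w[active]))[:k]]
    m[weakest] = 0                                   # prune weakest synapses
    regrow = rng.choice(np.flatnonzero(m == 0), size=k, replace=False)
    m[regrow] = 1                                    # regrow at random sites
    return m.reshape(mask.shape)
```

Repeating this step during training lets the network explore many sparse topologies while never exceeding the connection budget.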

    Biologically inspired structure learning with reverse knowledge distillation for spiking neural networks

    Spiking neural networks (SNNs) have superb characteristics in sensory information recognition tasks due to their biological plausibility. However, the performance of some current spiking-based models is limited by their structures: either fully connected or overly deep structures introduce too much redundancy. This redundancy, in both connections and neurons, is one of the key factors hindering the practical application of SNNs. Although some pruning methods have been proposed to tackle this problem, they normally ignore the fact that the neural topology in the human brain can be adjusted dynamically. Inspired by this, this paper proposes an evolutionary structure construction method for building more reasonable SNNs. By integrating knowledge distillation and connection pruning, the synaptic connections in SNNs can be optimized dynamically to reach an optimal state. As a result, the structure of an SNN can not only absorb knowledge from the teacher model but also search for a deep yet sparse network topology. Experimental results on CIFAR100 and DVS-Gesture show that the proposed structure learning method achieves good performance while reducing connection redundancy. The proposed method explores a novel dynamic way of learning structure from scratch in SNNs, which could build a bridge to close the gap between deep learning and bio-inspired neural dynamics.
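The knowledge-transfer ingredient, training the pruned student against the teacher's softened outputs, reduces to a temperature-scaled soft cross-entropy between the two output distributions. A minimal sketch; the temperature value is a common default assumed here, not the paper's setting:

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft cross-entropy between temperature-scaled output distributions,
    the standard knowledge-distillation objective."""
    def soften(logits):
        # softmax of logits / temperature (max-shifted for stability)
        e = np.exp((logits - logits.max()) / temperature)
        return e / e.sum()
    p_teacher = soften(teacher_logits)
    p_student = soften(student_logits)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum())
```

Minimizing this term while the pruning schedule removes connections is what lets the sparse topology keep absorbing the teacher's knowledge.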
