Mitigating Architectural Mismatch During the Evolutionary Synthesis of Deep Neural Networks
Evolutionary deep intelligence has recently shown great promise for producing
small, powerful deep neural network models via the organic synthesis of
increasingly efficient architectures over successive generations. Existing
evolutionary synthesis processes, however, have allowed the mating of parent
networks independent of architectural alignment, resulting in a mismatch of
network structures. We present a preliminary study into the effects of
architectural alignment during evolutionary synthesis using a gene tagging
system. Surprisingly, the network architectures synthesized using the gene
tagging approach resulted in slower decreases in performance accuracy and
storage size; however, the resultant networks were comparable in size and
performance accuracy to the non-gene tagging networks. Furthermore, we
speculate that there is a noticeable decrease in network variability for
networks synthesized with gene tagging, indicating that enforcing a
like-with-like mating policy potentially restricts the exploration of the
search space of possible network architectures.
Comment: 5 page
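The abstract above does not describe how the gene tagging system is implemented, so the following is only a minimal sketch of what a like-with-like mating policy could look like. The per-gene tag format, the averaging rule, and the keep_prob parameter are illustrative assumptions, not the authors' method.

```python
import random

# Minimal sketch of a like-with-like mating policy enforced via gene tags.
# Assumptions (not from the abstract): each layer is a list of "genes"
# (e.g., convolutional filters), every gene carries a tag inherited from the
# ancestor network, and offspring may only combine genes whose tags match.

def tag_ancestor(layers):
    """Attach a unique tag to every gene of the first-generation network."""
    return [[{"tag": f"L{i}G{j}", "w": w} for j, w in enumerate(layer)]
            for i, layer in enumerate(layers)]

def mate_aligned(parent_a, parent_b, keep_prob=0.8):
    """Synthesize a child network, merging only architecturally aligned genes."""
    child = []
    for layer_a, layer_b in zip(parent_a, parent_b):
        partners = {g["tag"]: g for g in layer_b}
        new_layer = []
        for gene in layer_a:
            match = partners.get(gene["tag"])
            if match is None:
                continue  # no aligned counterpart in the other parent
            if random.random() < keep_prob:  # stochastic synthesis step
                new_layer.append({"tag": gene["tag"],
                                  "w": 0.5 * (gene["w"] + match["w"])})
        child.append(new_layer)
    return child

# Toy usage: two parents descended from a common, tagged ancestor.
parent_a = tag_ancestor([[0.1, 0.2, 0.3], [0.4, 0.5]])
parent_b = tag_ancestor([[0.2, 0.1, 0.4], [0.5, 0.6]])
child = mate_aligned(parent_a, parent_b)
print([len(layer) for layer in child])  # genes surviving per layer
```

Dropping the tag check recovers the mismatched baseline described above, where genes from arbitrary positions may be merged regardless of architectural alignment.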
Efficient Deep Feature Learning and Extraction via StochasticNets
Deep neural networks are a powerful tool for feature learning and extraction
given their ability to model high-level abstractions in highly complex data.
One area worth exploring in feature learning and extraction using deep neural
networks is efficient neural connectivity formation for faster feature learning
and extraction. Motivated by findings of stochastic synaptic connectivity
formation in the brain as well as the brain's uncanny ability to efficiently
represent information, we propose the efficient learning and extraction of
features via StochasticNets, where sparsely-connected deep neural networks can
be formed via stochastic connectivity between neurons. To evaluate the
feasibility of such a deep neural network architecture for feature learning and
extraction, we train deep convolutional StochasticNets to learn abstract
features using the CIFAR-10 dataset, and extract the learned features from
images to perform classification on the SVHN and STL-10 datasets. Experimental
results show that features learned using deep convolutional StochasticNets,
with fewer neural connections than conventional deep convolutional neural
networks, can allow for better or comparable classification accuracy than
conventional deep neural networks: relative test error decrease of ~4.5% for
classification on the STL-10 dataset and ~1% for classification on the SVHN
dataset. Furthermore, it was shown that the deep features extracted using deep
convolutional StochasticNets can provide comparable classification accuracy
even when only 10% of the training data is used for feature learning. Finally,
it was also shown that significant gains in feature extraction speed can be
achieved in embedded applications using StochasticNets. As such, StochasticNets
allow for faster feature learning and extraction while providing better or
comparable classification accuracy.
Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1508.0546
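As a rough illustration of stochastic connectivity formation, the sketch below samples a fixed Bernoulli mask over a convolutional layer's weights so that only a random subset of connections ever exists or is trained. The module name, the masking strategy, and the connection probability are assumptions made for illustration; they are not the exact StochasticNet formation process.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch: a convolutional layer whose synaptic connections are formed
# stochastically. A Bernoulli mask sampled once at construction decides which
# weights exist; masked connections contribute nothing to the output and
# receive zero gradient.

class StochasticConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, connection_prob=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # Fixed connectivity pattern, stored as a buffer (saved, never trained).
        mask = (torch.rand_like(self.conv.weight) < connection_prob).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Convolve with only the sampled subset of connections.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        padding=self.conv.padding)

# Toy usage on a CIFAR-10-sized input.
layer = StochasticConv2d(3, 16, kernel_size=3, connection_prob=0.4)
out = layer(torch.randn(1, 3, 32, 32))
print(out.shape, float(layer.mask.mean()))  # roughly 40% of connections retained
```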
Assessing Architectural Similarity in Populations of Deep Neural Networks
Evolutionary deep intelligence has recently shown great promise for producing
small, powerful deep neural network models via the synthesis of increasingly
efficient architectures over successive generations. Despite recent research
showing the efficacy of multi-parent evolutionary synthesis, little has been
done to directly assess architectural similarity between networks during the
synthesis process for improved parent network selection. In this work, we
present a preliminary study into quantifying architectural similarity via the
percentage overlap of architectural clusters. Results show that networks
synthesized using architectural alignment (via gene tagging) maintain higher
architectural similarities within each generation, potentially restricting the
search space of highly efficient network architectures.
Comment: 3 pages. arXiv admin note: text overlap with arXiv:1811.0796
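The abstract above does not define the architectural clusters precisely; the sketch below only illustrates the kind of percentage-overlap measure being described, under the assumption that a cluster can be represented as a set of retained filter indices and that corresponding clusters are compared with a Jaccard-style overlap.

```python
# Illustrative sketch of percentage overlap between architectural clusters.
# Assumption (not from the abstract): a cluster is a set of retained filter
# indices, and corresponding clusters of two networks are compared pairwise.

def cluster_overlap(clusters_a, clusters_b):
    """Mean percentage overlap between corresponding clusters of two networks."""
    overlaps = []
    for ca, cb in zip(clusters_a, clusters_b):
        union = ca | cb
        overlaps.append(len(ca & cb) / len(union) if union else 1.0)
    return 100.0 * sum(overlaps) / len(overlaps)

# Example: two networks of the same generation sharing most retained filters.
net_1 = [{0, 1, 2, 5}, {1, 3, 4}]
net_2 = [{0, 2, 5},    {1, 3, 7}]
print(f"{cluster_overlap(net_1, net_2):.1f}%")  # 62.5%
```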
Texture Classification in Extreme Scale Variations using GANet
Research in texture recognition often concentrates on recognizing textures
with intraclass variations such as illumination, rotation, viewpoint and small
scale changes. In contrast, in real-world applications a change in scale can
have a dramatic impact on texture appearance, to the point of changing
completely from one texture category to another. As a result, texture
variations due to changes in scale are amongst the hardest to handle. In this
work we conduct the first study of classifying textures with extreme variations
in scale. To address this issue, we first generate scale proposals and then
prune them on the basis of dominant texture patterns. Motivated by the
challenges posed by this problem, we propose a new GANet network where we use a
Genetic Algorithm to change the units in the hidden layers during network
training, in order to promote the learning of more informative semantic texture
patterns. Finally, we adopt a FVCNN (Fisher Vector pooling of a Convolutional
Neural Network filter bank) feature encoder for global texture representation.
Because extreme scale variations are not necessarily present in most standard
texture databases, to support the proposed extreme-scale aspects of texture
understanding we are developing a new dataset, the Extreme Scale Variation
Textures (ESVaT), to test the performance of our framework. It is demonstrated
that the proposed framework significantly outperforms gold-standard texture
features by more than 10% on ESVaT. We also test the performance of our
proposed approach on the KTHTIPS2b and OS datasets and a further dataset
synthetically derived from Forrest, showing superior performance compared to
the state of the art.
Comment: submitted to IEEE Transactions on Image Processin
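No algorithmic detail of the Genetic Algorithm is given in the abstract above, so the loop below is only a generic GA sketch over per-layer unit counts: mutation, crossover, and selection by fitness. The placeholder fitness function, population size, and mutation step are illustrative assumptions; in practice the fitness would be obtained by briefly training and validating each candidate network.

```python
import random

# Generic genetic-algorithm sketch for evolving hidden-layer unit counts.
# Everything here (fitness stand-in, ranges, rates) is illustrative.

def fitness(hidden_units):
    # Placeholder: favours moderately sized layers. Replace with short
    # training plus validation accuracy of the candidate architecture.
    return -sum((u - 256) ** 2 for u in hidden_units)

def mutate(units, step=32):
    # Nudge each layer's unit count up or down by one step (or keep it).
    return [max(16, u + random.choice((-step, 0, step))) for u in units]

def crossover(a, b):
    # Pick each layer's unit count from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=8, generations=20, n_layers=3):
    population = [[random.choice(range(64, 513, 32)) for _ in range(n_layers)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)      # best candidates first
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # e.g. [256, 256, 256] once converged
```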
Observation of Target Electron Momentum Effects in Single-Arm Møller Polarimetry
In 1992, L.G. Levchuk noted that the asymmetries measured in Møller
scattering polarimeters could be significantly affected by the intrinsic
momenta of the target electrons. This effect is largest in devices with very
small acceptance or very high resolution in laboratory scattering angle. We use
a high resolution polarimeter in the linac of the polarized SLAC Linear
Collider to study this effect. We observe that the inclusion of the effect
alters the measured beam polarization by -14% of itself and produces a result
that is consistent with measurements from a Compton polarimeter. Additionally,
the inclusion of the effect is necessary to correctly simulate the observed
shape of the two-body elastic scattering peak.
Comment: 29 pages, uuencoded gzip-compressed postscript (351 kb). Uncompressed
postscript file (898 kb) available to DECNET users as
SLC::USER_DISK_SLC1:[MORRIS]levpre.p
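For orientation only, these are the textbook Møller-polarimetry relations (not quoted from the paper above): the beam polarization P_b is extracted from the measured asymmetry via the acceptance-averaged analyzing power, which is why anything that smears the mapping between laboratory angle and center-of-mass angle, such as the target electrons' intrinsic momenta, feeds directly into the extracted polarization.

```latex
% Standard Moller asymmetry relation and longitudinal analyzing power
% (background material, not taken from the paper above).
\epsilon \;=\; P_b \, P_t \, \langle A_{zz} \rangle ,
\qquad
A_{zz}(\theta^{*}) \;=\; -\,\frac{\sin^{2}\theta^{*}\,\bigl(7+\cos^{2}\theta^{*}\bigr)}
                               {\bigl(3+\cos^{2}\theta^{*}\bigr)^{2}} .
```

Since the intrinsic momenta are largest for the unpolarized inner-shell target electrons, neglecting them biases the acceptance-averaged analyzing power and hence the extracted beam polarization, which is the kind of shift the paper above quantifies.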
