GM-Net: Learning Features with More Efficiency
Deep Convolutional Neural Networks (CNNs) are capable of learning
unprecedentedly effective features from images. Some researchers have sought
to improve parameter efficiency using grouped convolution. However, the
relation between the optimal number of convolutional groups and the recognition
performance remains an open problem. In this paper, we propose a series of
Basic Units (BUs) and a two-level merging strategy to construct deep CNNs,
referred to as a joint Grouped Merging Net (GM-Net), which can produce joint
grouped and reused deep features while maintaining the feature discriminability
for classification tasks. Our GM-Net architectures with the proposed BU_A
(dense connection) and BU_B (straight mapping) lead to a significant reduction
in the number of network parameters and improved performance on image
classification tasks. Extensive experiments are conducted to validate the
superior performance of GM-Net over state-of-the-art methods on benchmark
datasets, e.g., MNIST, CIFAR-10, CIFAR-100, and SVHN.
Comment: 6 pages, 5 figures
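The abstract does not spell out the internal layout of the Basic Units, but grouped convolution and the two connection styles it names, a dense connection (BU_A) and a straight mapping (BU_B), can be illustrated with a short sketch. The PyTorch code below is an illustration only: the channel count, kernel size, and number of groups are assumptions chosen for the example, not the paper's configuration.

```python
# Illustrative sketch, not the GM-Net implementation: channel counts, kernel
# size, and the number of groups below are assumptions for demonstration.
import torch
import torch.nn as nn


class BasicUnitA(nn.Module):
    """Grouped-convolution block with a dense (concatenation) connection."""

    def __init__(self, channels: int = 64, groups: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=groups),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Dense connection: concatenate the input with the block output.
        return torch.cat([x, self.conv(x)], dim=1)


class BasicUnitB(nn.Module):
    """Grouped-convolution block with a straight (identity) mapping."""

    def __init__(self, channels: int = 64, groups: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=groups),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Straight mapping: add the input to the block output.
        return x + self.conv(x)


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)      # CIFAR-sized feature map
    print(BasicUnitA()(x).shape)        # torch.Size([2, 128, 32, 32])
    print(BasicUnitB()(x).shape)        # torch.Size([2, 64, 32, 32])
```

With groups=4, the 3x3 convolution here uses 4 x (16 x 16 x 3 x 3) weights instead of 64 x 64 x 3 x 3, i.e., a quarter of the parameters of its ungrouped counterpart, which is the parameter-efficiency effect that motivates grouped convolution.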
Distributed Learning over Unreliable Networks
Most of today's distributed machine learning systems assume {\em reliable
networks}: whenever two machines exchange information (e.g., gradients or
models), the network should guarantee the delivery of the message. At the same
time, recent work has demonstrated that machine learning algorithms are
impressively tolerant to errors or noise arising from relaxed communication or
synchronization. In this paper, we connect these two trends, and consider the
following question: {\em Can we design machine learning systems that are
tolerant to network unreliability during training?} With this motivation, we
focus on a theoretical problem of independent interest---given a standard
distributed parameter server architecture, if every communication between the
worker and the server has a non-zero probability of being dropped, does
there exist an algorithm that still converges, and at what speed? The technical
contribution of this paper is a novel theoretical analysis proving that
distributed learning over an unreliable network can achieve a convergence
rate comparable to that of centralized or distributed learning over reliable
networks. Further, we prove that the influence of the packet drop rate
diminishes as the number of parameter servers grows. We map this theoretical
result onto a real-world scenario, training deep neural networks over an
unreliable network layer, and conduct network simulations to validate the
system-level improvement obtained by allowing the network to be unreliable.
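As a rough illustration of the setting rather than the paper's algorithm or analysis, the toy Python simulation below runs parameter-server-style SGD on a synthetic least-squares problem and drops each worker-to-server gradient message independently with a given probability; the drop-handling rule (dropped gradients simply do not enter the average), the model, and the hyper-parameters are all assumptions made for the example.

```python
# Toy simulation of training with dropped worker->server messages.
# Not the paper's algorithm: model, drop handling, and hyper-parameters are
# assumptions chosen only to illustrate the unreliable-network setting.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem, data split across workers.
n_workers, dim, n_per_worker = 8, 10, 100
A = [rng.normal(size=(n_per_worker, dim)) for _ in range(n_workers)]
w_true = rng.normal(size=dim)
b = [a @ w_true + 0.01 * rng.normal(size=n_per_worker) for a in A]


def local_gradient(k, w):
    """Gradient of worker k's local least-squares loss."""
    return A[k].T @ (A[k] @ w - b[k]) / n_per_worker


def train(drop_prob, steps=200, lr=0.05):
    """Single parameter server; each worker->server gradient message is
    dropped i.i.d. with probability drop_prob and ignored in the average."""
    w = np.zeros(dim)
    for _ in range(steps):
        grads = [local_gradient(k, w) for k in range(n_workers)
                 if rng.random() > drop_prob]      # messages that arrive
        if grads:                                  # skip the step if all dropped
            w -= lr * np.mean(grads, axis=0)
    return np.linalg.norm(w - w_true)


for p in (0.0, 0.1, 0.3):
    print(f"drop_prob={p:.1f}  final error={train(p):.4f}")
```

In this toy setup the average over the surviving messages is, by symmetry, an unbiased estimate of the full gradient whenever at least one message arrives, so a moderate drop rate mainly adds variance rather than bias; the paper's analysis establishes a rigorous result of this kind for its actual protocol, including how the effect of the drop rate shrinks as the number of parameter servers grows.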
