Adversarial Data Programming: Using GANs to Relax the Bottleneck of Curated Labeled Data
The paucity of large curated hand-labeled training data for every
domain of interest forms a major bottleneck in the deployment of machine
learning models in computer vision and other fields. Recent work (Data
Programming) has shown how distant supervision signals in the form of labeling
functions can be used to obtain labels for given data in near-constant time. In
this work, we present Adversarial Data Programming (ADP), an adversarial
methodology to generate data as well as a curated, aggregated label, given a
set of weak labeling functions. We validated our method on the
MNIST, Fashion-MNIST, CIFAR-10 and SVHN datasets, and it outperformed many
state-of-the-art models. We conducted extensive experiments to study its
usefulness, and showed how the proposed ADP framework can be used for
transfer learning and multi-task learning, where data from two domains are
generated simultaneously using the framework along with the label
information. Our future work will involve understanding the theoretical
implications of this new framework from a game-theoretic perspective, as well
as exploring the performance of the method on more complex datasets.
Comment: CVPR 2018 main conference paper
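To make the weak-supervision setting concrete, here is a minimal sketch of how a set of labeling functions can be aggregated into a single label by majority vote. This is only the classical data-programming ingredient, not the authors' ADP implementation (which learns the aggregation adversarially, jointly with a generator); the labeling functions shown are hypothetical.

```python
import numpy as np

# Hypothetical labeling functions for a binary task: each returns
# +1, -1, or 0 (abstain) for an input x. In data programming, such
# weak signals stand in for hand-curated labels.
def lf_keyword(x): return 1 if "cat" in x else 0
def lf_length(x):  return -1 if len(x) < 5 else 0
def lf_suffix(x):  return 1 if x.endswith("s") else -1

LABELING_FUNCTIONS = [lf_keyword, lf_length, lf_suffix]

def aggregate_label(x, lfs=LABELING_FUNCTIONS):
    """Majority-vote aggregation of weak labels; 0 votes are abstentions.
    ADP instead learns this aggregation adversarially, together with a
    generator that synthesizes (data, label) pairs."""
    votes = np.array([lf(x) for lf in lfs])
    total = votes.sum()
    if total == 0:
        return None  # no consensus; a learned model would break the tie
    return 1 if total > 0 else -1

print(aggregate_label("cats"))  # -> 1
```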
ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent
Two major momentum-based techniques that have achieved tremendous success in
optimization are Polyak's heavy ball method and Nesterov's accelerated
gradient. A crucial step in all momentum-based methods is the choice of the
momentum parameter m, which is always suggested to be set to less than 1.
Although the choice of m < 1 is justified only under very strong theoretical
assumptions, it works well in practice even when the assumptions do not
necessarily hold. In this paper, we propose a new momentum-based method,
ADINE, which relaxes the constraint of m < 1 and allows the learning
algorithm to use adaptive higher momentum. We motivate our hypothesis on m by
experimentally verifying that a higher momentum (≥ 1) can help escape
saddles much faster. Using this motivation, we propose our method ADINE,
which helps weigh the previous updates more (by setting the momentum
parameter m ≥ 1), evaluate our proposed algorithm on deep neural networks,
and show that ADINE helps the learning algorithm to converge much faster
without compromising on the generalization error.
Comment: 8 + 1 pages, 12 figures, accepted at CoDS-COMAD 2018
STWalk: Learning Trajectory Representations in Temporal Graphs
Analyzing the temporal behavior of nodes in time-varying graphs is useful for
many applications such as targeted advertising, community evolution and outlier
detection. In this paper, we present a novel approach, STWalk, for learning
trajectory representations of nodes in temporal graphs. The proposed framework
makes use of structural properties of graphs at current and previous time-steps
to learn effective node trajectory representations. STWalk performs random
walks on a graph at a given time step (called space-walk) as well as on graphs
from past time-steps (called time-walk) to capture the spatio-temporal behavior
of nodes. We propose two variants of STWalk to learn trajectory
representations. In one algorithm, we perform space-walk and time-walk as part
of a single step. In the other variant, we perform space-walk and time-walk
separately and combine the learned representations to get the final trajectory
embedding. Extensive experiments on three real-world temporal graph datasets
validate the effectiveness of the learned representations when compared to
three baseline methods. We also demonstrate the usefulness of the learned
trajectory embeddings for change point detection, and show that arithmetic
operations on these trajectory representations yield interesting and
interpretable results.
Comment: 10 pages, 5 figures, 2 tables
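A minimal sketch of the walk-generation idea follows. The toy graph layout, walk lengths, and the downstream skip-gram step are assumptions, not the paper's exact procedure.

```python
import random

def space_walk(graph_t, node, length=5):
    """Random walk over neighbors within a single time step's graph."""
    walk = [node]
    for _ in range(length - 1):
        nbrs = graph_t.get(walk[-1], [])
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return walk

def time_walk(graphs, node, t, hops=3, length=3):
    """Walk that steps back through past snapshots of the node's
    neighborhood, capturing temporal context."""
    walk = []
    for dt in range(min(hops, t + 1)):
        walk.extend(space_walk(graphs[t - dt], node, length))
    return walk

# graphs[t] is an adjacency dict for time step t (toy example).
graphs = [
    {"a": ["b"], "b": ["a", "c"], "c": ["b"]},  # t = 0
    {"a": ["c"], "c": ["a", "b"], "b": ["c"]},  # t = 1
]
corpus = [time_walk(graphs, "a", t=1)]
# These walk "sentences" would then be fed to a skip-gram model
# (e.g., word2vec) to learn the trajectory embeddings.
```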
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Over the last decade, Convolutional Neural Network (CNN) models have been
highly successful in solving complex vision problems. However, these deep
models are perceived as "black box" methods, given the lack of
understanding of their internal functioning. There has been a significant
recent interest in developing explainable deep learning models, and this paper
is an effort in this direction. Building on a recently proposed method called
Grad-CAM, we propose a generalized method called Grad-CAM++ that can provide
better visual explanations of CNN model predictions, in terms of better object
localization as well as explaining occurrences of multiple object instances in
a single image, when compared to state-of-the-art. We provide a mathematical
derivation for the proposed method, which uses a weighted combination of the
positive partial derivatives of the last convolutional layer feature maps with
respect to a specific class score as weights to generate a visual explanation
for the corresponding class label. Our extensive experiments and evaluations,
both subjective and objective, on standard datasets showed that Grad-CAM++
provides promising human-interpretable visual explanations for a given CNN
architecture across multiple tasks including classification, image caption
generation and 3D action recognition; as well as in new settings such as
knowledge distillation.
Comment: 17 pages, 15 figures, 11 tables. Accepted in the proceedings of the
IEEE Winter Conference on Applications of Computer Vision (WACV 2018). An
extended version is under review at IEEE Transactions on Pattern Analysis and
Machine Intelligence
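Based only on the description above (a weighted combination of the positive partial derivatives of the last conv layer's feature maps with respect to the class score), a simplified sketch might look like this. The pixel-wise coefficients use the paper's closed-form approximation (valid under an exponential output activation); the shapes, names, and epsilon handling are assumptions.

```python
import numpy as np

def grad_cam_pp(feature_maps, grads):
    """Simplified Grad-CAM++ saliency map.
    feature_maps: (K, H, W) activations A^k of the last conv layer.
    grads:        (K, H, W) gradients dY^c/dA^k of the class score."""
    g2, g3 = grads ** 2, grads ** 3
    # Pixel-wise weighting coefficients alpha (closed-form approximation).
    denom = 2.0 * g2 + feature_maps.sum(axis=(1, 2), keepdims=True) * g3
    alpha = g2 / np.where(denom != 0, denom, 1e-8)
    # Channel weights: alpha-weighted sum of the positive gradients only.
    weights = (alpha * np.maximum(grads, 0)).sum(axis=(1, 2))
    # Saliency: ReLU of the weighted combination of feature maps.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    return cam / (cam.max() + 1e-8)  # normalized to [0, 1]

# Usage with random tensors standing in for a real network's outputs:
cam = grad_cam_pp(np.random.rand(8, 7, 7), np.random.randn(8, 7, 7))
```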
Borrow from Anywhere: Pseudo Multi-modal Object Detection in Thermal Imagery
Can we improve detection in the thermal domain by borrowing features from
rich domains like visual RGB? In this paper, we propose a pseudo-multimodal
object detector trained on natural image domain data to help improve the
performance of object detection in thermal images. We assume access to a
large-scale dataset in the visual RGB domain and a relatively smaller dataset (in
terms of instances) in the thermal domain, as is common today. We propose the
use of well-known image-to-image translation frameworks to generate pseudo-RGB
equivalents of a given thermal image and then use a multi-modal architecture
for object detection in the thermal image. We show that our framework
outperforms existing benchmarks without the explicit need for paired training
examples from the two domains. We also show that our framework can learn
with less data from the thermal domain. Our code and pre-trained models are
made available at https://github.com/tdchaitanya/MMTOD
Comment: Accepted at the Perception Beyond Visible Spectrum Workshop, CVPR 2019
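A minimal sketch of the pseudo-multimodal pipeline described above follows. Here `translate_to_rgb` stands in for an off-the-shelf image-to-image translation generator (e.g., CycleGAN) and every callable passed to `detect_multimodal` is a hypothetical placeholder, not the paper's exact architecture.

```python
import numpy as np

def translate_to_rgb(thermal):
    """Stand-in for an unpaired image-to-image translation generator
    mapping a 1-channel thermal image to a pseudo-RGB equivalent;
    channel replication is a trivial placeholder, not a GAN."""
    return np.repeat(thermal, 3, axis=0)

def detect_multimodal(thermal, rgb_branch, thermal_branch, fuse, head):
    """Pseudo-multimodal detection: extract features from the generated
    pseudo-RGB image and the raw thermal image in separate branches,
    fuse them, and decode detections."""
    pseudo_rgb = translate_to_rgb(thermal)
    feats = fuse(rgb_branch(pseudo_rgb), thermal_branch(thermal))
    return head(feats)

# Toy usage with identity-like stand-ins for the real networks:
thermal = np.random.rand(1, 64, 64)
score = detect_multimodal(
    thermal,
    rgb_branch=lambda x: x.mean(axis=0),
    thermal_branch=lambda x: x[0],
    fuse=lambda a, b: a + b,
    head=lambda f: f.max(),  # placeholder "detection" output
)
```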
