Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks
Deep neural networks (DNNs) are powerful learning machines that have enabled
breakthroughs in several domains. In this work, we introduce a new
retrospective loss to improve the training of deep neural network models by
utilizing the prior experience available in past model states during training.
Minimizing the retrospective loss, along with the task-specific loss, pushes
the parameter state at the current training step towards the optimal parameter
state while pulling it away from the parameter state at a previous training
step. Although the idea is simple, we analyze the method and conduct a
comprehensive set of experiments across domains - images, speech, text, and
graphs - to show that the proposed loss improves performance across input
domains, tasks, and architectures.
Comment: Accepted at KDD 2020; the first two authors contributed equally.
DeepSearch: A Simple and Effective Blackbox Attack for Deep Neural Networks
Although deep neural networks have been very successful in
image-classification tasks, they are prone to adversarial attacks. A wide
variety of techniques for generating adversarial inputs has emerged, including
black- and whitebox attacks on neural networks. In this paper, we present
DeepSearch, a novel fuzzing-based, query-efficient, blackbox attack for image
classifiers. Despite its simplicity, DeepSearch is shown to be more effective
in finding adversarial inputs than state-of-the-art blackbox approaches.
Moreover, DeepSearch generates the subtlest adversarial inputs among these
approaches.
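The abstract does not spell out DeepSearch's algorithm, so as a generic illustration of a query-efficient blackbox attack (explicitly not the paper's method), here is a minimal random-search sketch under an L-infinity budget; the `classify` oracle, `epsilon` budget, and query limit are all assumptions.

```python
import numpy as np

def random_blackbox_attack(classify, image, true_label,
                           epsilon=0.03, queries=1000, rng=None):
    """Generic blackbox attack sketch (NOT DeepSearch): random search
    within an L-infinity ball of radius epsilon, using only the model's
    top-1 label. `classify` is a hypothetical query oracle mapping an
    image array to its predicted class."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(queries):
        # Sample a candidate at a corner of the L-inf ball, a common
        # heuristic since extreme pixel perturbations shift logits most.
        delta = epsilon * rng.choice([-1.0, 1.0], size=image.shape)
        candidate = np.clip(image + delta, 0.0, 1.0)
        if classify(candidate) != true_label:
            return candidate  # misclassified: adversarial input found
    return None  # attack failed within the query budget
```

Each iteration costs exactly one model query, which is why attacks of this family are judged by query efficiency as well as success rate.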
