Exfiltration of Data from Air-gapped Networks via Unmodulated LED Status Indicators
The light-emitting diode (LED) is widely used as a status indicator on information devices. As early as 2002, Loughry et al. studied data exfiltration through LED indicators and found that LEDs which are not modulated by software, i.e., which merely indicate some state of the device, can hardly be used to establish covert channels. In this paper, a novel approach is proposed to modulate this kind of LED: we replace on-off keying (OOK) with binary frequency shift keying (B-FSK). To verify its validity, we implement a prototype of exfiltration malware. Our experiments show a great improvement in the imperceptibility of the covert communication, demonstrating that data can be leaked covertly from air-gapped networks via unmodulated LED status indicators.
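As a rough, non-authoritative sketch of the modulation idea (not the paper's implementation), the toy code below generates an on/off toggle pattern in which each bit selects one of two blink frequencies, i.e., B-FSK instead of OOK; the frequencies, bit duration, and toggle rate are arbitrary placeholder values.
```python
import numpy as np

# Illustrative parameters only: two carrier frequencies encode the bit values,
# and the toggle rate stands in for how fast software could switch the LED.
F0, F1 = 1000.0, 2000.0   # Hz used for bit 0 / bit 1 (assumed values)
BIT_DURATION = 0.05       # seconds spent on each bit (assumed)
SAMPLE_RATE = 20000       # software toggle samples per second (assumed)

def bfsk_led_waveform(bits):
    """Return a 0/1 waveform that B-FSK-modulates `bits` onto an LED:
    1 means LED on, 0 means LED off."""
    t = np.arange(int(BIT_DURATION * SAMPLE_RATE)) / SAMPLE_RATE
    chunks = []
    for b in bits:
        f = F1 if b else F0
        # Square wave at the carrier frequency chosen by the current bit.
        chunks.append((np.sin(2 * np.pi * f * t) > 0).astype(np.uint8))
    return np.concatenate(chunks)

waveform = bfsk_led_waveform([1, 0, 1, 1, 0])
```
A receiver (e.g., a photodiode or camera) would recover the bits by estimating the dominant blink frequency within each bit interval.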
CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network
In this paper, we propose an improvement of Adversarial Transformation Networks (ATN) to generate adversarial examples that can fool both white-box and black-box models with state-of-the-art performance; the method won 2nd place in the non-targeted task of CAAD 2018.
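As a minimal sketch of the underlying ATN idea (a network trained to transform inputs into adversarial examples), the toy code below trains a small fully connected perturbation generator against a placeholder classifier; the architectures, the perturbation budget eps, and the non-targeted objective are assumptions for illustration, not the competition submission.
```python
import torch
import torch.nn as nn

# Placeholder target classifier (assumption: any differentiable model can stand in here).
target = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Minimal ATN-style perturbation network: maps an image to a bounded additive perturbation.
atn = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

eps = 0.1                                   # perturbation budget (assumed value)
opt = torch.optim.Adam(atn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(64, 1, 28, 28)               # placeholder batch of images in [0, 1]
y = torch.randint(0, 10, (64,))             # their labels

for _ in range(100):
    delta = eps * atn(x).view_as(x)         # bounded perturbation produced by the ATN
    x_adv = (x + delta).clamp(0, 1)
    loss = -loss_fn(target(x_adv), y)       # non-targeted: push the classifier away from y
    opt.zero_grad()
    loss.backward()
    opt.step()
```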
CAAD 2018: Iterative Ensemble Adversarial Attack
Deep Neural Networks (DNNs) have recently led to significant improvements in many fields. However, DNNs are vulnerable to adversarial examples: samples with imperceptible perturbations that dramatically mislead the DNNs. Adversarial attacks can be used to evaluate the robustness of deep learning models before they are deployed. Unfortunately, most existing adversarial attacks fool a black-box model only with a low success rate. To improve the success rate of black-box adversarial attacks, we propose an iterated adversarial attack against an ensemble of image classifiers. With this method, we won 5th place in the CAAD 2018 Targeted Adversarial Attack competition.
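A minimal sketch of what such an iterated ensemble attack can look like is given below: a targeted, I-FGSM-style update on the averaged cross-entropy loss of several placeholder models; the step size, budget, and iteration count are assumed values rather than the competition settings.
```python
import torch
import torch.nn as nn

# Placeholder ensemble of image classifiers (the real attack targeted ImageNet-scale models).
models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
loss_fn = nn.CrossEntropyLoss()

def iterative_ensemble_attack(x, y_target, eps=8 / 255, alpha=1 / 255, steps=20):
    """Targeted, iterated FGSM-style attack on the averaged loss of an ensemble (sketch)."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Averaging the targeted loss over the ensemble encourages perturbations
        # that transfer to unseen (black-box) models.
        loss = sum(loss_fn(m(x_adv), y_target) for m in models) / len(models)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()         # step toward the target class
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                   # keep a valid image
    return x_adv.detach()

x = torch.rand(4, 3, 32, 32)
adv = iterative_ensemble_attack(x, y_target=torch.randint(0, 10, (4,)))
```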
Enhanced Attacks on Defensively Distilled Deep Neural Networks
Deep neural networks (DNNs) have achieved tremendous success in many machine learning tasks, such as image classification. Unfortunately, researchers have shown that DNNs are easily attacked by adversarial examples: slightly perturbed images that mislead DNNs into giving incorrect classification results. Such attacks have seriously hampered the deployment of DNN systems in areas with strict security or safety requirements, such as autonomous cars, face recognition, and malware detection. Defensive distillation is a mechanism aimed at training a robust DNN that significantly reduces the effectiveness of adversarial example generation. The state-of-the-art attack does succeed on distilled networks with 100% probability, but it is a white-box attack that needs to know the inner information of the DNN, whereas the black-box scenario is more general. In this paper, we first propose the epsilon-neighborhood attack, which fools defensively distilled networks with a 100% success rate in the white-box setting and quickly generates adversarial examples with good visual quality. On the basis of this attack, we further propose a region-based attack against defensively distilled DNNs in the black-box setting. We also perform a bypass attack that indirectly breaks the distillation defense as a complementary method. The experimental results show that our black-box attacks have a considerable success rate on defensively distilled networks.
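The paper's exact epsilon-neighborhood and region-based procedures are not reproduced here, but the sketch below conveys the flavor of a region-based criterion: rather than judging a single adversarial point, it measures how much of a small L_inf neighborhood around a candidate is misclassified; the classifier, radius, and sample count are placeholders.
```python
import numpy as np

def region_misclassification_rate(classifier, x_adv, true_label, eps=0.03, n_samples=100, rng=None):
    """Fraction of random points inside the L_inf eps-ball around x_adv that the
    classifier misclassifies; a region-based attack prefers candidates where this is high."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-eps, eps, size=(n_samples,) + x_adv.shape)
    samples = np.clip(x_adv + noise, 0.0, 1.0)
    preds = np.array([classifier(s) for s in samples])
    return float(np.mean(preds != true_label))

# Dummy classifier standing in for the distilled network: predicts class 1
# whenever the mean pixel value exceeds 0.5.
dummy = lambda img: int(img.mean() > 0.5)
rate = region_misclassification_rate(dummy, np.full((8, 8), 0.55), true_label=0)
```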
Reversible Adversarial Examples
Deep Neural Networks have recently led to significant improvements in many fields such as image classification and speech recognition. However, these machine learning models are vulnerable to adversarial examples, which can mislead machine learning classifiers into giving incorrect classifications. In this paper, we take advantage of reversible data hiding to construct reversible adversarial examples that are still misclassified by Deep Neural Networks. Furthermore, the proposed method can recover the original images from the reversible adversarial examples with no distortion.
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense
Neural networks are vulnerable to adversarial examples, which poses a threat
to their application in security sensitive systems. We propose a Denoiser and
UPsampler Network (DUP-Net) structure as defenses for 3D adversarial point
cloud classification, where the two modules reconstruct surface smoothness by
dropping or adding points. In this paper, statistical outlier removal (SOR) and a data-driven upsampling network are considered as the denoiser and upsampler, respectively. Compared with baseline defenses, DUP-Net has three advantages.
First, with DUP-Net as a defense, the target model is more robust to white-box
adversarial attacks. Second, the statistical outlier removal provides added
robustness since it is a non-differentiable denoising operation. Third, the
upsampler network can be trained on a small dataset and defends well against
adversarial attacks generated from other point cloud datasets. We conduct
various experiments to validate that DUP-Net is very effective as a defense in practice. Our best defense eliminates 83.8% of the C&W and l_2-loss-based attack (point shifting), 50.0% of the C&W and Hausdorff-distance-loss-based attack (point adding), and 9.0% of the saliency-map-based attack (point dropping, with 200 dropped points) on PointNet.
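For the denoiser module, the statistical outlier removal step can be pictured with a brute-force sketch like the one below, which drops points whose mean k-nearest-neighbor distance is unusually large; k and alpha are illustrative values, not the paper's settings.
```python
import numpy as np

def statistical_outlier_removal(points, k=10, alpha=1.1):
    """Drop points whose mean distance to their k nearest neighbors exceeds the
    cloud-wide mean of that statistic by more than alpha standard deviations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # pairwise distances
    knn = np.sort(d, axis=1)[:, 1:k + 1]       # k nearest neighbors, excluding the point itself
    mean_knn = knn.mean(axis=1)
    threshold = mean_knn.mean() + alpha * mean_knn.std()
    return points[mean_knn <= threshold]

cloud = np.random.rand(1024, 3)                # placeholder point cloud
clean = statistical_outlier_removal(cloud)
```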
When Provably Secure Steganography Meets Generative Models
Steganography is the art and science of hiding secret messages in public
communication so that the presence of the secret messages cannot be detected.
There are two provably secure steganographic frameworks, one is black-box
sampling based and the other is compression based. The former requires a
perfect sampler which yields data following the same distribution, and the
latter needs explicit distributions of generative objects. However, these two
conditions are too strict, even unrealistic, in the traditional data environment, because it is hard to model the explicit distribution of natural images. With
the development of deep learning, generative models bring new vitality to
provably secure steganography, which can serve as the black-box sampler or
provide the explicit distribution of generative media. Motivated by this, this
paper proposes two types of provably secure stegosystems with generative
models. Specifically, we first design a black-box-sampling-based provably secure stegosystem for broad generative models without an explicit distribution, such as
GAN, VAE, and flow-based generative models, where the generative network can
serve as the perfect sampler. For the compression-based stegosystem, we leverage
the generative models with explicit distribution such as autoregressive models
instead, where the adaptive arithmetic coding plays the role of the perfect
compressor, decompressing the encrypted message bits into generative media, and
the receiver can compress the generative media into the encrypted message bits.
To show the effectiveness of our method, we take DFC-VAE, Glow, WaveNet as
instances of generative models and demonstrate the perfectly secure performance
of these stegosystems against state-of-the-art steganalysis methods.
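To make the compression-based construction concrete, the toy sketch below arithmetic-decodes a bitstream into symbols that follow a fixed categorical distribution; in the actual stegosystem the per-step distribution would come from an autoregressive model such as WaveNet, and the bit values here are only an example of an encrypted message.
```python
import numpy as np

def decode_bits_to_symbols(bits, probs, n_symbols):
    """Toy arithmetic 'decompression': read the (encrypted) message bits as a binary
    fraction and map it to a symbol sequence distributed according to `probs`.
    The receiver recovers the bits by re-running the matching arithmetic encoder."""
    value = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))   # bits -> number in [0, 1)
    cdf = np.concatenate(([0.0], np.cumsum(probs)))
    low, high, out = 0.0, 1.0, []
    for _ in range(n_symbols):
        span = high - low
        target = (value - low) / span               # where `value` falls inside [low, high)
        s = int(np.searchsorted(cdf, target, side="right")) - 1
        out.append(s)
        low, high = low + span * cdf[s], low + span * cdf[s + 1]
    return out

probs = np.array([0.5, 0.3, 0.2])                   # placeholder explicit model distribution
symbols = decode_bits_to_symbols([1, 0, 1, 1, 0, 0, 1, 0], probs, n_symbols=6)
```
Because uniformly random (encrypted) bits decode to symbols that follow the model's own distribution, the resulting generative media is distributed like an ordinary sample, which is the intuition behind the security of the compression-based construction.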
Detection based Defense against Adversarial Examples from the Steganalysis Point of View
Deep Neural Networks (DNNs) have recently led to significant improvements in
many fields. However, DNNs are vulnerable to adversarial examples which are
samples with imperceptible perturbations that dramatically mislead the
DNNs. Moreover, adversarial examples can be used to perform an attack on
various kinds of DNN based systems, even if the adversary has no access to the
underlying model. Many defense methods have been proposed, such as obfuscating
gradients of the networks or detecting adversarial examples. However, it has been shown that these defense methods are either not effective or cannot resist secondary adversarial attacks. In this paper, we point out that steganalysis
can be applied to adversarial examples detection, and propose a method to
enhance steganalysis features by estimating the probability of modifications
caused by adversarial attacks. Experimental results show that the proposed
method can accurately detect adversarial examples. Moreover, secondary adversarial attacks cannot be directly mounted against our method, because it is based not on a neural network but on high-dimensional hand-crafted features and an ensemble of FLD (Fisher Linear Discriminant) classifiers.
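A bare-bones version of the final classification stage might look like the sketch below: a two-class Fisher Linear Discriminant fit on feature vectors extracted from clean and adversarial images; the random features are placeholders for real steganalysis features, and the single FLD stands in for the full ensemble.
```python
import numpy as np

def fisher_linear_discriminant(X_clean, X_adv):
    """Fit a two-class FLD and return the projection direction and a threshold
    placed midway between the projected class means."""
    mu0, mu1 = X_clean.mean(axis=0), X_adv.mean(axis=0)
    S_w = np.cov(X_clean, rowvar=False) + np.cov(X_adv, rowvar=False)   # within-class scatter
    w = np.linalg.solve(S_w + 1e-6 * np.eye(S_w.shape[0]), mu1 - mu0)
    threshold = 0.5 * ((X_clean @ w).mean() + (X_adv @ w).mean())
    return w, threshold

# Placeholder high-dimensional features; a real detector would use steganalysis-style
# statistics computed from the image and an ensemble of many such classifiers.
X_clean = np.random.randn(200, 64)
X_adv = np.random.randn(200, 64) + 0.3
w, t = fisher_linear_discriminant(X_clean, X_adv)
flag_adversarial = (np.random.randn(64) @ w) > t    # decision for one test feature vector
```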
Modeling of Laser wakefield acceleration in Lorentz boosted frame using EM-PIC code with spectral solver
Simulating laser wakefield acceleration (LWFA) in a Lorentz boosted frame in which the plasma drifts towards the laser with velocity $v_b$ can speed up the simulation by factors of $\gamma_b^2$, where $\gamma_b$ is the Lorentz factor of the boost. In these simulations the relativistically drifting plasma inevitably induces a high-frequency numerical instability that contaminates the physics of interest. Various approaches have
been proposed to mitigate this instability. One approach is to solve Maxwell
equations in Fourier space (a spectral solver) as this has been shown to
suppress the fastest growing modes of this instability in simple test problems
using a simple low pass, ring (in two dimensions), or shell (in three
dimensions) filter in Fourier space. We describe the development of a fully
parallelized, multi-dimensional, particle-in-cell code that uses a spectral
solver to solve Maxwell's equations and that includes the ability to launch a
laser using a moving antenna. This new EM-PIC code is called UPIC-EMMA and it
is based on the components of the UCLA PIC framework (UPIC). We show that by
using UPIC-EMMA, LWFA simulations in boosted frames with arbitrary $\gamma_b$ can be conducted without the presence of the numerical instability. We also compare the results of a few LWFA cases for several values of $\gamma_b$, including lab frame simulations using OSIRIS, an EM-PIC code with a finite-difference time-domain (FDTD) Maxwell solver. These comparisons include cases in both the linear and nonlinear regimes. We also investigate some issues associated with numerical dispersion in lab and boosted frame simulations and between FDTD and spectral solvers.
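The low-pass (ring) filtering in Fourier space mentioned above can be illustrated with a short 2D sketch; the cutoff fraction and the field array are placeholders, and in UPIC-EMMA the filter is applied inside the spectral Maxwell update rather than as a standalone step.
```python
import numpy as np

def spectral_lowpass(field, k_cut_frac=0.6):
    """Zero out Fourier modes beyond a cutoff radius (a 2D 'ring' filter), which is
    the kind of filtering used to suppress the fastest-growing instability modes."""
    nx, ny = field.shape
    kx = np.fft.fftfreq(nx)[:, None]
    ky = np.fft.fftfreq(ny)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    mask = k <= k_cut_frac * k.max()
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

Ez = np.random.randn(128, 128)        # placeholder slice of a field component
Ez_filtered = spectral_lowpass(Ez)
```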
Controlling the Numerical Cerenkov Instability in PIC simulations using a customized finite difference Maxwell solver and a local FFT based current correction
In this paper we present a customized finite-difference-time-domain (FDTD)
Maxwell solver for the particle-in-cell (PIC) algorithm. The solver is
customized to effectively eliminate the numerical Cerenkov instability (NCI)
which arises when a plasma (neutral or non-neutral) relativistically drifts on
a grid when using the PIC algorithm. We control the EM dispersion curve in the
direction of the plasma drift of an FDTD Maxwell solver by using a customized higher-order finite-difference operator for the spatial derivative along the direction of the drift (the $\hat{1}$ direction). We show that this eliminates the main NCI modes at moderate $|k_1|$, while keeping the additional main NCI modes well outside the range of physical interest at higher $|k_1|$. These main NCI modes can be easily filtered out along with the first
spatial aliasing NCI modes which are also at the edge of the fundamental
Brillouin zone. The customized solver has the possible advantage of improved
parallel scalability because it can be easily partitioned along $\hat{1}$, which typically has many more cells than the other directions for the problems of interest. We show that FFTs can be performed locally on the current of each partition to filter out the main and first spatial aliasing NCI modes, and to
correct the current so that it satisfies the continuity equation for the
customized spatial derivative. This ensures that Gauss' Law is satisfied. We
present simulation examples of a single relativistically drifting plasma, of two colliding relativistically drifting plasmas, and of nonlinear laser wakefield acceleration (LWFA) in a Lorentz boosted frame, in which no evidence of the NCI is observed when using this customized Maxwell solver together with its NCI elimination scheme.
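The local current filtering described above can be pictured with the sketch below: on each domain partition, the current is transformed along the drift ($\hat{1}$) direction and a band of $|k_1|$ modes is zeroed; the band limits are arbitrary placeholders, and the correction that restores the continuity equation for the customized derivative is omitted.
```python
import numpy as np

def filter_current_band(j_local, k1_lo=0.25, k1_hi=0.45):
    """On one partition, FFT the current along the drift (x1) direction and zero a
    band of |k1| modes where the main and first-aliasing NCI modes are assumed to sit."""
    n1 = j_local.shape[0]
    k1 = np.abs(np.fft.fftfreq(n1))                 # normalized |k1| in [0, 0.5]
    mask = ~((k1 >= k1_lo) & (k1 <= k1_hi))         # keep modes outside the assumed NCI band
    jk = np.fft.fft(j_local, axis=0)
    return np.real(np.fft.ifft(jk * mask[:, None], axis=0))

j_slab = np.random.randn(256, 64)                   # current on a local slab (placeholder)
j_filtered = filter_current_band(j_slab)
```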
