CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression
Lossy image compression algorithms are pervasively used to reduce the size of
images transmitted over the web and recorded on data storage media. However, we
pay for their high compression rate with visual artifacts degrading the user
experience. Deep convolutional neural networks have become a widespread tool to
address high-level computer vision tasks very successfully. Recently, they have
found their way into the areas of low-level computer vision and image
processing to solve regression problems mostly with relatively shallow
networks.
We present a novel 12-layer deep convolutional network for image compression
artifact suppression with hierarchical skip connections and a multi-scale loss
function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an
improvement of up to 0.36 dB over the best previous ConvNet result. We show
that a network trained for a specific quality factor (QF) is resilient to the
QF used to compress the input image - a single network trained for QF 60
provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76.
Comment: 8 pages
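The multi-scale loss mentioned above can be sketched as follows. This is a hedged illustration, not CAS-CNN's exact formulation: the scale count, pooling choice, and equal weighting are assumptions. The idea is that the reconstruction error is penalized at several resolutions, so coarse structure and fine detail both contribute to training.

```python
import numpy as np

def downsample2x(img):
    # 2x2 average pooling; assumes even height and width
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def multiscale_mse(pred, target, n_scales=3):
    # Sum the mean squared error over progressively downsampled versions
    # of the prediction and the target (equal per-scale weights assumed).
    loss = 0.0
    for _ in range(n_scales):
        loss += np.mean((pred - target) ** 2)
        pred, target = downsample2x(pred), downsample2x(target)
    return loss
```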
Extended Bit-Plane Compression for Convolutional Neural Network Accelerators
After the tremendous success of convolutional neural networks in image
classification, object detection, speech recognition, etc., there is now rising
demand for deployment of these compute-intensive ML models on tightly power
constrained embedded and mobile systems at low cost as well as for pushing the
throughput in data centers. This has triggered a wave of research towards
specialized hardware accelerators. Their performance is often constrained by
I/O bandwidth and the energy consumption is dominated by I/O transfers to
off-chip memory. We introduce and evaluate a novel, hardware-friendly
compression scheme for the feature maps present within convolutional neural
networks. We show that an average compression ratio of 4.4x relative to
uncompressed data and a gain of 60% over existing method can be achieved for
ResNet-34 with a compression block requiring <300 bit of sequential cells and
minimal combinational logic
Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks
Detecting and classifying targets in video streams from surveillance cameras
is a cumbersome, error-prone and expensive task. Often, the incurred costs are
prohibitive for real-time monitoring. This leads to data being stored locally
or transmitted to a central storage site for post-incident examination. The
required communication links and archiving of the video data are still
expensive and this setup excludes preemptive actions to respond to imminent
threats. An effective way to overcome these limitations is to build a smart
camera that transmits alerts when relevant video sequences are detected. Deep
neural networks (DNNs) have come to outperform humans in visual classification
tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be
extended to make use of higher-dimensional input data such as multispectral
data. We explore this opportunity in terms of achievable accuracy and required
computational effort. To analyze the precision of DNNs for scene labeling in an
urban surveillance scenario we have created a dataset with 8 classes obtained
in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR
snapshot sensor to assess the potential of multispectral image data for target
classification. We evaluate several new DNNs, showing that the spectral
information fused together with the RGB frames can be used to improve the
accuracy of the system or to achieve similar accuracy with 3x less
computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even
for scarcely occurring, but particularly interesting classes, such as cars, 75%
of the pixels are labeled correctly with errors occurring only around the
border of the objects. This high accuracy was obtained with a training set of
only 30 labeled images, paving the way for fast adaptation to various
application scenarios.
Comment: Presented at SPIE Security + Defence 2016, Proc. SPIE 9997, Target and
Background Signatures I
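The input fusion described above can be sketched in a few lines. This is an assumed preprocessing step, not necessarily the paper's exact pipeline: extending a ConvNet to multispectral data amounts to stacking the co-registered RGB and 25-channel VIS-NIR frames along the channel axis, giving a 28-channel tensor for the first convolutional layer.

```python
import numpy as np

def fuse_inputs(rgb, visnir):
    """rgb: (H, W, 3), visnir: (H, W, 25) -> fused (H, W, 28)."""
    # The two sensors must be spatially aligned before stacking.
    assert rgb.shape[:2] == visnir.shape[:2], "frames must be co-registered"
    return np.concatenate([rgb, visnir], axis=-1)
```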
YodaNN: An Architecture for Ultra-Low Power Binary-Weight CNN Acceleration
Convolutional neural networks (CNNs) have revolutionized the world of
computer vision over the last few years, pushing image classification beyond
human accuracy. The computational effort of today's CNNs requires power-hungry
parallel processors or GP-GPUs. Recent developments in CNN accelerators for
system-on-chip integration have reduced energy consumption significantly.
Unfortunately, even these highly optimized devices are above the power envelope
imposed by mobile and deeply embedded applications and face hard limitations
caused by CNN weight I/O and storage. This prevents the adoption of CNNs in
future ultra-low power Internet of Things end-nodes for near-sensor analytics.
Recent algorithmic and theoretical advancements enable competitive
classification accuracy even when limiting CNNs to binary (+1/-1) weights
during training. These new findings bring major optimization opportunities in
the arithmetic core by removing the need for expensive multiplications, as well
as reducing I/O bandwidth and storage. In this work, we present an accelerator
optimized for binary-weight CNNs that achieves 1510 GOp/s at 1.2 V on a core
area of only 1.33 MGE (Million Gate Equivalent) or 0.19 mm² and with a power
dissipation of 895 µW in UMC 65 nm technology at 0.6 V. Our accelerator
significantly outperforms the state-of-the-art in terms of energy and area
efficiency, achieving 61.2 TOp/s/W at 0.6 V and 1135 GOp/s/MGE at 1.2 V,
respectively.
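The arithmetic simplification that binary weights enable can be seen in a minimal software sketch (a 1-D example for clarity; the accelerator of course implements 2-D convolutions in hardware): with weights restricted to +1/-1, each multiply-accumulate degenerates to a conditional add or subtract, so no multiplier is needed.

```python
import numpy as np

def binary_conv1d(x, w_sign):
    """Valid 1-D convolution with weights restricted to {+1, -1}.

    x:      input activations, shape (N,)
    w_sign: binary weights, shape (K,), entries in {+1, -1}
    """
    n, k = len(x), len(w_sign)
    out = np.empty(n - k + 1)
    for i in range(n - k + 1):
        acc = 0.0
        for j in range(k):
            # No multiplier: add when the weight is +1, subtract when -1.
            acc += x[i + j] if w_sign[j] > 0 else -x[i + j]
        out[i] = acc
    return out
```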
Deep Structured Features for Semantic Segmentation
We propose a highly structured neural network architecture for semantic
segmentation with an extremely small model size, suitable for low-power
embedded and mobile platforms. Specifically, our architecture combines i) a
Haar wavelet-based tree-like convolutional neural network (CNN), ii) a random
layer realizing a radial basis function kernel approximation, and iii) a linear
classifier. While stages i) and ii) are completely pre-specified, only the
linear classifier is learned from data. We apply the proposed architecture to
outdoor scene and aerial image semantic segmentation and show that the accuracy
of our architecture is competitive with conventional pixel classification CNNs.
Furthermore, we demonstrate that the proposed architecture is data efficient in
the sense of matching the accuracy of pixel classification CNNs when trained on
a much smaller data set.
Comment: EUSIPCO 2017, 5 pages, 2 figures
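Stage ii), the random layer, can be sketched with random Fourier features in the style of Rahimi and Recht; whether the paper uses exactly this construction is an assumption. The feature map z satisfies z(x)·z(y) ≈ exp(-‖x-y‖²/(2σ²)), i.e., an inner product in the random feature space approximates the RBF kernel.

```python
import numpy as np

def rff_map(X, n_features=2000, sigma=1.0, seed=0):
    # Random Fourier features: project onto random Gaussian directions,
    # add random phases, take cosines. A fixed seed makes the (random but
    # pre-specified, not learned) layer reproducible across calls.
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```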
Hydra: An Accelerator for Real-Time Edge-Aware Permeability Filtering in 65nm CMOS
Many modern video processing pipelines rely on edge-aware (EA) filtering
methods. However, recent high-quality methods are challenging to run in
real-time on embedded hardware due to their computational load. To this end, we
propose an area-efficient and real-time capable hardware implementation of a
high quality EA method. In particular, we focus on the recently proposed
permeability filter (PF) that delivers promising quality and performance in the
domains of HDR tone mapping, disparity and optical flow estimation. We present
an efficient hardware accelerator that implements a tiled variant of the PF
with low on-chip memory requirements and a significantly reduced external
memory bandwidth (6.4x w.r.t. the non-tiled PF). The design has been taped out
in 65 nm CMOS technology, is able to filter 720p grayscale video at 24.8 Hz and
achieves a high compute density of 6.7 GFLOPS/mm² (12x higher than embedded
GPUs when scaled to the same technology node). The low area and bandwidth
requirements make the accelerator highly suitable for integration into SoCs
where silicon area budget is constrained and external memory is typically a
heavily contended resource.
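The bandwidth/memory trade-off behind the tiled variant can be illustrated generically (tile size and halo width here are assumptions, and the permeability filter itself is far more involved than this skeleton): processing the frame tile by tile keeps on-chip buffering small, at the cost of re-fetching a halo of overlap pixels around each tile.

```python
import numpy as np

def filter_tiled(img, local_filter, tile=64, halo=2):
    # Apply a local filter (support radius <= halo) tile by tile.
    # Each tile is fetched with a surrounding halo so the filter output
    # inside the tile matches the non-tiled result.
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y0, x0 = max(0, y - halo), max(0, x - halo)
            y1, x1 = min(h, y + tile + halo), min(w, x + tile + halo)
            patch = local_filter(img[y0:y1, x0:x1])
            # Crop the halo off and store only the tile's interior.
            th, tw = min(tile, h - y), min(tile, w - x)
            out[y:y + th, x:x + tw] = patch[y - y0:y - y0 + th,
                                            x - x0:x - x0 + tw]
    return out
```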
Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features
Accurate, fast, and reliable multiclass classification of
electroencephalography (EEG) signals is a challenging task towards the
development of motor imagery brain-computer interface (MI-BCI) systems. We
propose enhancements to different feature extractors, along with a support
vector machine (SVM) classifier, to simultaneously improve classification
accuracy and execution time during training and testing. We focus on the
well-known common spatial pattern (CSP) and Riemannian covariance methods, and
significantly extend these two feature extractors to multiscale temporal and
spectral cases. The multiscale CSP features achieve 73.70±15.90% (mean ±
standard deviation across 9 subjects) classification accuracy, surpassing
the state-of-the-art method [1] at 70.6±14.70% on the 4-class BCI
competition IV-2a dataset. The Riemannian covariance features outperform
CSP by achieving 74.27±15.5% accuracy and executing 9x faster in training
and 4x faster in testing. Using more temporal windows for Riemannian features
results in 75.47±12.8% accuracy with 1.6x faster testing than CSP.
Comment: Published as a conference paper at the IEEE European Signal
Processing Conference (EUSIPCO), 201
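The core CSP step can be sketched in a few lines; the paper's multiscale extension computes such filters over many temporal windows and frequency bands, so this shows only the basic two-class case with assumed class covariances. The idea: whiten the composite covariance, then diagonalize class A in the whitened space; the extreme eigenvectors yield spatial filters with maximal variance ratio between the two classes.

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=1):
    # Whitening transform: p @ (cov_a + cov_b) @ p.T = I
    comp = cov_a + cov_b
    d, u = np.linalg.eigh(comp)
    p = u @ np.diag(d ** -0.5) @ u.T
    # Diagonalize class A in the whitened space (eigh sorts ascending),
    # then keep the eigenvectors at both extremes of the spectrum.
    d2, v = np.linalg.eigh(p @ cov_a @ p.T)
    picks = np.concatenate([np.arange(n_pairs),
                            np.arange(len(d2) - n_pairs, len(d2))])
    return v[:, picks].T @ p  # rows are spatial filters
```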
Hyperdrive: A Multi-Chip Systolically Scalable Binary-Weight CNN Inference Engine
Deep neural networks have achieved impressive results in computer vision and
machine learning. Unfortunately, state-of-the-art networks are extremely
compute and memory intensive which makes them unsuitable for mW-devices such as
IoT end-nodes. Aggressive quantization of these networks dramatically reduces
the computation and memory footprint. Binary-weight neural networks (BWNs)
follow this trend, pushing weight quantization to the limit. Hardware
accelerators for BWNs presented up to now have focused on core efficiency,
disregarding I/O bandwidth and system-level efficiency that are crucial for
deployment of accelerators in ultra-low power devices. We present Hyperdrive,
a BWN accelerator that dramatically reduces I/O bandwidth by exploiting a novel
binary-weight streaming approach. It supports arbitrarily sized convolutional
neural network architectures and input resolutions by exploiting the natural
scalability of its compute units at both chip and system level: Hyperdrive
chips are arranged systolically in a 2D mesh and process the entire feature
map together in parallel. Hyperdrive achieves 4.3 TOp/s/W system-level
efficiency (i.e., including I/Os), 3.1x higher than state-of-the-art BWN
accelerators, even though its core uses resource-intensive FP16 arithmetic
for increased robustness.
