628 research outputs found
Indirect estimation of signal-dependent noise with non-adaptive heterogeneous samples
Abstract—We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixtures of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images. Index Terms—Noise estimation, signal-dependent noise, Poisson noise
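For context, the conventional mean-variance scatterplot baseline that this abstract argues against can be sketched in a few lines; the block size, the homogeneity heuristic, and the affine model var = a * mean + b below are illustrative assumptions, not the paper's proposed heterogeneous-patch method.

```python
import numpy as np

def estimate_signal_dependent_noise(img, block=8):
    """Conventional baseline: fit var = a * mean + b from local mean-variance pairs.

    `img` is a 2-D float array (e.g. a normalized raw channel). Simplified
    illustration only; block size and the homogeneity heuristic are assumptions.
    """
    h, w = img.shape
    means, variances = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            means.append(patch.mean())
            variances.append(patch.var(ddof=1))
    means, variances = np.array(means), np.array(variances)

    # Keep the blocks most likely to be homogeneous (lower-variance half),
    # so that texture does not inflate the fitted noise curve.
    keep = variances < np.quantile(variances, 0.5)
    A = np.stack([means[keep], np.ones(keep.sum())], axis=1)
    a, b = np.linalg.lstsq(A, variances[keep], rcond=None)[0]
    return a, b  # signal-dependent (Poisson-like) gain and signal-independent floor
```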
Wiener Weltausstellung 1873: A ‘Peripheral’ Perspective of the Triester Zeitung
A consideration of the phenomenon of international exhibitions in the political and
cultural history of central-European powers as opposed to the models represented by the
London and Paris great exhibitions offers relevant insights into this topic. The Exposition
organized in Vienna in 1873 – the first in the German language area – should be studied
in the light of the strategic urgency which impelled the Habsburg Empire to fashion
or redefine a representation of its multinational formation, in the wake of the military
defeats it suffered on the Franco-Piedmontese and Prussian fronts. As would become
apparent in the later Berlin exhibition of 1879, the Wiener Weltausstellung already made clear its
desire to exhibit the network of global relations in which the central-European Empires
were also trying to gain prominence, despite the essential irrelevance of their extra-
European colonial enterprise, as compared to British and French imperialist ventures. The essay comprises a critical reassessment of the existing historiographies specifically
devoted to the Viennese Exposition (the most significant of which dates to 1989), to be
revised in the light of updated interpretive paradigms, and a further analysis which aims
at a first systematic taxonomy of the most significant literary and journalistic echoes of
this first central-European Weltausstellung. More specifically, the investigation will focus
on the hundreds of articles, correspondence and notes which appeared in the Triester
Zeitung, the principal newspaper in German in Habsburg Trieste. These textual sources
have not as yet received scholarly attention and they make it possible to investigate
the reception of the Exhibition within the geographical and cultural context of the
multilingual and multicultural port of Trieste which, despite its peripheral position, was,
nonetheless, of primary strategic importance to the central Austrian government.
Joint Removal of Random and Fixed-Pattern Noise through Spatiotemporal Video Filtering
Abstract—We propose a framework for the denoising of videos jointly corrupted by spatially correlated (i.e. non-white) random noise and spatially correlated fixed-pattern noise. Our approach is based on motion-compensated 3-D spatiotemporal volumes, i.e. a sequence of 2-D square patches extracted along the motion trajectories of the noisy video. First, the spatial and temporal correlations within each volume are leveraged to sparsify the data in the 3-D spatiotemporal transform domain, and then the coefficients of the 3-D volume spectrum are shrunk using an adaptive 3-D threshold array. This array depends on the particular motion trajectory of the volume, the individual power spectral densities of the random and fixed-pattern noise, and also the noise variances, which are adaptively estimated in the transform domain. Experimental results on both synthetically corrupted data and real infrared videos demonstrate a superior suppression of the random and fixed-pattern noise from both an objective and a subjective point of view. Index Terms—Video denoising, spatiotemporal filtering, fixed-pattern noise, power spectral density, adaptive transforms, thermal imaging.
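As a rough illustration of transform-domain shrinkage on a spatiotemporal volume (ignoring motion compensation, the per-trajectory adaptive threshold array, and the separate power spectral densities of the two noise components), one might hard-threshold the 3-D DCT spectrum of a patch volume as below; the fixed threshold of 2.7 * sigma is a common convention assumed here, not the paper's adaptive array.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shrink_volume(volume, sigma, thr=2.7):
    """Hard-threshold the 3-D DCT spectrum of a spatiotemporal volume.

    `volume` has shape (T, N, N): T square patches extracted along a motion
    trajectory. `sigma` is an (assumed white) noise standard deviation.
    """
    spectrum = dctn(volume, norm="ortho")
    mask = np.abs(spectrum) > thr * sigma   # keep only significant coefficients
    return idctn(spectrum * mask, norm="ortho")
```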
Modeling Camera Effects to Improve Visual Learning from Synthetic Data
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes.
This includes increasing the occurrence of occlusions or varying environmental
and weather effects. However, few have addressed modeling variation in the
sensor domain. Sensor effects can degrade real images, limiting how well
networks trained on synthetic data for visual tasks generalize when tested in
real environments. This paper proposes an efficient,
automatic, physically-based augmentation pipeline to vary sensor effects
--chromatic aberration, blur, exposure, noise, and color cast-- for synthetic
imagery. In particular, this paper illustrates that augmenting synthetic
training datasets with the proposed pipeline reduces the domain gap between
synthetic and real domains for the task of object detection in urban driving
scenes.
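A toy version of such a sensor-effect augmentation chain might look like the sketch below; all parameter ranges and the order of the operations are illustrative assumptions rather than the physically based models of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def augment_sensor_effects(img, rng=np.random.default_rng()):
    """Apply random chromatic aberration, blur, exposure, color cast and noise.

    `img` is an HxWx3 float image in [0, 1]. All parameter ranges are
    illustrative assumptions, not the values used in the paper.
    """
    out = img.astype(np.float64).copy()

    # Chromatic aberration: sub-pixel shifts of the red and blue channels.
    for c in (0, 2):
        out[..., c] = shift(out[..., c], rng.uniform(-1, 1, 2), mode="nearest")

    # Blur: small random Gaussian point-spread function (spatial axes only).
    out = gaussian_filter(out, sigma=(rng.uniform(0, 1.5),) * 2 + (0,))

    # Exposure: global gain in linear intensity.
    out *= rng.uniform(0.7, 1.3)

    # Color cast: independent per-channel gains.
    out *= rng.uniform(0.9, 1.1, size=3)

    # Noise: simple signal-dependent (Poisson-like) plus Gaussian read noise.
    out += rng.normal(0.0, np.sqrt(0.001 * np.clip(out, 0, 1) + 1e-5))
    return np.clip(out, 0.0, 1.0)
```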
Robust Multi-Image HDR Reconstruction for the Modulo Camera
Photographing scenes with high dynamic range (HDR) poses great challenges to
consumer cameras with their limited sensor bit depth. To address this, Zhao et
al. recently proposed a novel sensor concept - the modulo camera - which
captures the least significant bits of the recorded scene instead of going into
saturation. Similar to conventional pipelines, HDR images can be reconstructed
from multiple exposures, but significantly fewer images are needed than with a
typical saturating sensor. While the concept is appealing, we show that the
original reconstruction approach assumes noise-free measurements and quickly
breaks down otherwise. To address this, we propose a novel reconstruction
algorithm that is robust to image noise and produces significantly fewer
artifacts. We theoretically analyze correctness as well as limitations, and
show that our approach significantly outperforms the baseline on real data.
Comment: to appear at the 39th German Conference on Pattern Recognition (GCPR) 201
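To make the failure mode concrete, a simplified noise-free multi-exposure unwrapping in the spirit of the original scheme can be written in a few lines (the variable names and the anchoring on the shortest exposure are assumptions for illustration); it is the rounding step that breaks down as soon as the wrapped measurements are noisy.

```python
import numpy as np

def unwrap_exposures(wrapped, ratios, max_val=256.0):
    """Naive noise-free unwrapping of modulo-camera measurements.

    `wrapped[i]` is the modulo image of exposure i (values in [0, max_val)),
    `ratios[i]` is its exposure time relative to the shortest exposure.
    Assumes the shortest exposure does not wrap and serves as the anchor.
    """
    radiance = wrapped[0] / ratios[0]            # shortest exposure as reference
    for y, r in zip(wrapped[1:], ratios[1:]):
        predicted = radiance * r                 # what this exposure should read
        k = np.round((predicted - y) / max_val)  # estimated number of wraps
        radiance = (y + k * max_val) / r         # refine with the longer exposure
    return radiance
```

With noisy measurements the estimate of `k` is easily off by one near wrap boundaries, which is exactly the sensitivity the abstract points to.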
Actual curriculum development practices instrument: testing for factorial validity
The Actual Curriculum Development Practices Instrument (ACDP-I) was developed and its factorial validity was tested (n = 107) using exploratory factor analysis procedures in the earlier work of [1]. Although the ACDP-I appears to be a content- and construct-valid instrument with very high internal reliability for use in Malaysia, accumulated evidence is still needed to provide a sound scientific basis for the proposed score interpretations. The present study therefore addresses this concern by utilising confirmatory factor analysis to further confirm the theoretical structure of the variable Actual Curriculum Development Practices (ACDP) and to enrich the psychometric properties of the ACDP-I. The results of this study have practical implications for both researchers and educators whose concerns focus on teachers' classroom practices and on the instrument development and validation process.
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single-frame denoising
model, and design it to handle an arbitrary number of noisy input frames. We show
that it achieves state of the art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
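A schematic of the recurrent multi-frame idea, written as a minimal PyTorch sketch rather than the paper's actual architecture or training setup, could carry a convolutional hidden state across the frames of a burst like this:

```python
import torch
import torch.nn as nn

class RecurrentBurstDenoiser(nn.Module):
    """Toy recurrent denoiser: a per-frame CNN plus a convolutional hidden state."""

    def __init__(self, channels=1, features=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(channels + features, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Conv2d(features, channels, 3, padding=1)
        self.features = features

    def forward(self, burst):
        # burst: (B, T, C, H, W), i.e. an arbitrary number T of noisy frames.
        b, t, c, h, w = burst.shape
        hidden = burst.new_zeros(b, self.features, h, w)
        outputs = []
        for i in range(t):
            # Fuse the current frame with the state accumulated from earlier frames.
            hidden = self.encode(torch.cat([burst[:, i], hidden], dim=1))
            outputs.append(burst[:, i] + self.decode(hidden))  # residual denoising
        return torch.stack(outputs, dim=1)
```

Because the hidden state is the only thing carried between frames, the same module accepts bursts of any length, which mirrors the "arbitrary number of noisy input frames" property described above.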
A multiresolution framework for local similarity based image denoising
In this paper, we present a generic framework for the denoising of images corrupted with additive white Gaussian noise based on the idea of regional similarity. The proposed framework employs a similarity function using the distance between pixels in a multidimensional feature space, whereby multiple feature maps describing various local regional characteristics can be utilized, giving higher weight to pixels having similar regional characteristics. An extension of the proposed framework into a multiresolution setting using wavelets and scale space is presented. It is shown that the resulting multiresolution multilateral (MRM) filtering algorithm not only eliminates coarse-grained noise but can also faithfully reconstruct anisotropic features, particularly in the presence of high levels of noise.
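A minimal, deliberately loop-based sketch of the single-scale multilateral weighting idea is given below; the choice of feature maps (local mean and gradient magnitude), the bandwidths, and the omission of the wavelet/scale-space extension are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_gradient_magnitude

def multilateral_filter(img, radius=3, sigma_spatial=2.0, sigma_feat=0.1):
    """Weight neighbours by spatial distance and by distance in a feature space."""
    feats = np.stack([img,
                      uniform_filter(img, size=5),                   # local mean
                      gaussian_gradient_magnitude(img, sigma=1.0)],  # local activity
                     axis=-1)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            d_spatial = (yy - y) ** 2 + (xx - x) ** 2
            d_feat = np.sum((feats[y0:y1, x0:x1] - feats[y, x]) ** 2, axis=-1)
            wgt = np.exp(-d_spatial / (2 * sigma_spatial ** 2)
                         - d_feat / (2 * sigma_feat ** 2))
            out[y, x] = np.sum(wgt * img[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```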
ActiveStereoNet: End-to-End Self-Supervised Learning for Active Stereo Systems
In this paper we present ActiveStereoNet, the first deep learning solution
for active stereo systems. Due to the lack of ground truth, our method is fully
self-supervised, yet it produces precise depth with subpixel precision; it
does not suffer from the common over-smoothing issues; it preserves the edges;
and it explicitly handles occlusions. We introduce a
novel reconstruction loss that is more robust to noise and texture-less
patches, and is invariant to illumination changes. The proposed loss is
optimized using a window-based cost aggregation with an adaptive support weight
scheme. This cost aggregation is edge-preserving and smooths the loss function,
which is key to allow the network to reach compelling results. Finally we show
how the task of predicting invalid regions, such as occlusions, can be trained
end-to-end without ground-truth. This component is crucial to reduce blur and
particularly improves predictions along depth discontinuities. Extensive
quantitative and qualitative evaluations on real and synthetic data
demonstrate state-of-the-art results in many challenging scenes. Comment:
Accepted by ECCV2018, Oral Presentation, Main paper + Supplementary Material
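As a rough sketch of window-based cost aggregation with adaptive support weights, applied here to a precomputed per-pixel reconstruction residual, one could write the following; the exponential intensity weighting, its parameters, and the wrap-around border handling are generic assumptions, not the exact formulation of the paper.

```python
import numpy as np

def aggregate_cost(residual, reference, radius=4, sigma_c=0.1):
    """Aggregate a per-pixel residual over a window, down-weighting neighbours
    whose reference intensity differs from the centre pixel (adaptive support
    weights). `residual` and `reference` are HxW float arrays; borders wrap
    around (np.roll) purely for brevity.
    """
    num = np.zeros_like(residual, dtype=float)
    den = np.zeros_like(residual, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_res = np.roll(residual, (dy, dx), axis=(0, 1))
            shifted_ref = np.roll(reference, (dy, dx), axis=(0, 1))
            weight = np.exp(-np.abs(reference - shifted_ref) / sigma_c)
            num += weight * shifted_res
            den += weight
    return num / den
```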
Schiller’s Demetrius Project: Questions of Legitimacy for the Twentieth Century
This essay explores the way in which Schiller problematizes power legitimation in his dramatic fragment Demetrius, and demonstrates how relevant this unfinished text was to early twentieth-century political thinking on legitimacy in Germany. This is not a matter of mere speculation: there is evidence, both direct and indirect, of the influence of Schiller, and in particular of his Demetrius, on three important intellectuals: Ferdinand Toennies, Max Weber, and Carl Schmitt. Rather than discuss the reception of Schiller’s text by legal philosophers, this essay shows, via a close reading of the play, how Schiller’s theater around 1800 reveals the problematic nature of various aspects of power legitimation and how these emerged with particular force during the early twentieth century, when they intersected with the legal and political reflections of the time.
