Human Rights And Development In The 21st Century: The Complex Path To Peace And Democracy: Themes From The 2000 Goodwin Seminar
As the twenty-first century begins, the international human rights system faces a profound anomaly.
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
We consider linear inverse problems where the solution is assumed to have a
sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that
replacing the usual quadratic regularizing penalties by weighted l^p-penalties
on the coefficients of such expansions, with 1 ≤ p ≤ 2, still
regularizes the problem. If p < 2, regularized solutions of such l^p-penalized
problems will have sparser expansions, with respect to the basis under
consideration. To compute the corresponding regularized solutions we propose an
iterative algorithm that amounts to a Landweber iteration with thresholding (or
nonlinear shrinkage) applied at each iteration step. We prove that this
algorithm converges in norm. We also review some potential applications of this
method.
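The proposed iteration is what is now commonly called iterative soft thresholding (ISTA). A minimal sketch of the p = 1 case follows, assuming the operator is given as a matrix A and using componentwise soft thresholding as the shrinkage step; the function names are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Componentwise nonlinear shrinkage: S_t(x) = sign(x) * max(|x| - t, 0).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Landweber iteration with thresholding applied at each step, for
    # min_x 0.5 * ||A x - y||^2 + lam * ||x||_1  (the p = 1 case).
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/||A||^2 for norm convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
    return x
```

Each iteration alternates a Landweber (gradient) step on the data-fidelity term with the shrinkage step that promotes sparsity of the expansion coefficients.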
Time-frequency detection algorithm for gravitational wave bursts
An efficient algorithm is presented for the identification of short bursts of
gravitational radiation in the data from broad-band interferometric detectors.
The algorithm consists of three steps: pixels of the time-frequency
representation of the data that have power above a fixed threshold are first
identified. Clusters of such pixels that conform to a set of rules on their
size and their proximity to other clusters are formed, and a final threshold is
applied on the power integrated over all pixels in such clusters. Formal
arguments are given to support the conjecture that this algorithm is very
efficient for a wide class of signals. A precise model for the false alarm rate
of this algorithm is presented, and it is shown using a number of
representative numerical simulations to be accurate at the 1% level for most
values of the parameters, with maximal error around 10%.
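The three-step pipeline (pixel thresholding, clustering, integrated-power thresholding) can be sketched directly on a time-frequency power map. The sketch below uses simple connected-component labelling and a minimum-size rule as stand-ins; the paper's precise rules on cluster size and proximity are not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def tf_cluster_detect(tf_power, pixel_thresh, cluster_power_thresh, min_size=2):
    # Step 1: flag time-frequency pixels whose power exceeds a fixed threshold.
    hot = tf_power > pixel_thresh
    # Step 2: group neighbouring hot pixels into clusters; keep clusters that
    # satisfy the size rule (a simplified version of the paper's rules).
    labels, n_clusters = ndimage.label(hot)
    events = []
    for k in range(1, n_clusters + 1):
        mask = labels == k
        if mask.sum() < min_size:
            continue
        # Step 3: apply a final threshold on the power integrated over
        # all pixels in the cluster.
        if tf_power[mask].sum() > cluster_power_thresh:
            events.append(mask)
    return events
```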
Towards a comprehensive evaluation of ultrasound speckle reduction
Over the last three decades, several despeckling filters have been developed to reduce the speckle noise inherently present in ultrasound images without losing the diagnostic information. In this paper, a new intensity- and feature-preservation evaluation metric for full speckle-reduction evaluation is proposed, based on contrast and feature similarities. A comparison of the despeckling methods is carried out, using quality metrics and visual interpretation of image profiles to evaluate their performance and show the benefits each one can contribute to noise reduction and feature preservation. To test the methods, noise-free images and simulated B-mode ultrasound images are used. This way, the despeckling techniques can be compared using numeric metrics, taking the noise-free image as a reference. In this study, a total of seventeen different speckle reduction algorithms have been documented, based on adaptive filtering, diffusion filtering and wavelet filtering, and sixteen quality metrics have been estimated.
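The reference-based evaluation workflow (score a despeckled image against the noise-free original) can be illustrated with standard quality metrics. The sketch below uses PSNR and SSIM as generic stand-ins, not the specific metric proposed in the paper; the speckle model and all names are assumptions for illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_despeckling(clean, despeckled):
    # Reference-based scores: the noise-free image serves as ground truth,
    # as with the simulated B-mode images in the study.
    return {
        "psnr": peak_signal_noise_ratio(clean, despeckled, data_range=1.0),
        "ssim": structural_similarity(clean, despeckled, data_range=1.0),
    }

# Toy example: fully developed multiplicative speckle (unit-mean gamma noise)
# applied to a synthetic noise-free image.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = np.clip(clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape), 0, 1)
print(evaluate_despeckling(clean, noisy))
```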
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are central themes in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of l^2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, that is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
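A minimal sketch of the forward-backward proximal splitting scheme follows, instantiated with the group-sparsity (l^1/l^2) prior mentioned above; the group structure, parameters, and names are illustrative assumptions.

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=300):
    # Forward-backward splitting: a gradient step on the smooth data-fidelity
    # term f, then a proximal step on the nonsmooth low-complexity prior g.
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

def prox_group_l1(x, t, groups):
    # Block soft thresholding: the proximal operator of the group-sparsity
    # penalty, one of the partly smooth regularizers covered by the review.
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * x[g]
    return out

# Example: recover a group-sparse signal from noisy random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60)
x_true[:5] = rng.standard_normal(5)  # one active group of size 5
y = A @ x_true + 0.01 * rng.standard_normal(40)
groups = [np.arange(i, i + 5) for i in range(0, 60, 5)]
lam = 0.5
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient of f
x_hat = forward_backward(lambda x: A.T @ (A @ x - y),
                         lambda v, t: prox_group_l1(v, lam * t, groups),
                         np.zeros(60), 1.0 / L)
```

Swapping the prox (soft thresholding, singular-value thresholding, etc.) changes the prior while the splitting scheme stays the same, which is what makes the unified treatment possible.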
Nonlinear mixture-wise expansion approach to underdetermined blind separation of nonnegative dependent sources
Underdetermined blind separation of nonnegative dependent sources consists in decomposing a set of observed mixed signals into a greater number of original nonnegative and dependent component (source) signals. This is an important problem for which very few algorithms exist. It is also practically relevant for contemporary metabolic profiling of biological samples, such as biomarker identification studies, where sources (a.k.a. pure components or analytes) are to be extracted from mass spectra of complex multicomponent mixtures. This paper presents a method for underdetermined blind separation of nonnegative dependent sources. The method performs a nonlinear mixture-wise mapping of the observed data into a high-dimensional reproducing kernel Hilbert space (RKHS) of functions, followed by sparseness-constrained nonnegative matrix factorization (NMF) therein. Thus, the original problem is converted into a new one with an increased number of mixtures, an increased number of dependent sources, and higher-order (error) terms generated by the nonlinear mapping. Provided that the amplitudes of the original components are sparsely distributed, as is the case for mass spectra of analytes, sparseness-constrained NMF in the RKHS yields, with significant probability, improved accuracy relative to the case when the same NMF algorithm is performed on the original problem. The method is exemplified on numerical and experimental examples related, respectively, to the extraction of ten dependent components from five mixtures and to the extraction of ten dependent analytes from mass spectra of two to five mixtures. The analytes thereby mimic the complexity of components expected to be found in biological samples.
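The two ingredients (mixture-wise expansion, then sparseness-constrained NMF) can be sketched as follows. The polynomial expansion is an explicit stand-in for the paper's kernel-induced RKHS map, and the l^1-penalized multiplicative updates are a generic sparse-NMF variant, not the paper's exact algorithm; all names and parameters are illustrative.

```python
import numpy as np

def mixturewise_expand(X, degree=2):
    # Nonlinear mixture-wise expansion: augment each observed mixture with
    # its elementwise monomials, increasing the effective number of mixtures.
    return np.vstack([X ** d for d in range(1, degree + 1)])

def sparse_nmf(V, rank, beta=0.1, n_iter=500, eps=1e-9):
    # Multiplicative-update NMF for min ||V - W H||_F^2 + beta * sum(H),
    # i.e. NMF with an l^1 sparseness constraint on the source matrix H.
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + beta + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= W.sum(axis=0, keepdims=True) + eps  # fix the scaling ambiguity
    return W, H

# Separate more sources than observed mixtures after the expansion:
X = np.abs(np.random.default_rng(1).random((5, 1000)))  # 5 observed mixtures
W, H = sparse_nmf(mixturewise_expand(X), rank=10)       # 10 recovered sources
```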
Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut.
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts like shadows, attenuation and signal dropout. Existing methods need to include strong priors like shape priors or analytical intensity models to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the limitation of fully automatic segmentation, that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum-cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm compares favorably with the literature.
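The seeded graph-cut idea can be illustrated with a discrete min cut on a pixel graph; this is a simplified stand-in for the paper's patch-graph continuous min cut, and all names and parameters are assumptions for illustration.

```python
import numpy as np
import networkx as nx

def seeded_min_cut(image, fg_seeds, bg_seeds, sigma=0.1, lam=1.0):
    # Seeded segmentation: user-labelled pixels act as priors via terminal
    # links, and neighbour links discourage cuts through homogeneous regions.
    h, w = image.shape
    G = nx.DiGraph()
    nid = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            # Terminal links from the user seeds; edges without a 'capacity'
            # attribute are treated as unbounded by networkx.
            if (i, j) in fg_seeds:
                G.add_edge("s", nid(i, j))
            if (i, j) in bg_seeds:
                G.add_edge(nid(i, j), "t")
            # Neighbour links: high capacity where intensities are similar.
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    wgt = lam * np.exp(-(image[i, j] - image[ni, nj]) ** 2
                                       / (2.0 * sigma ** 2))
                    G.add_edge(nid(i, j), nid(ni, nj), capacity=wgt)
                    G.add_edge(nid(ni, nj), nid(i, j), capacity=wgt)
    _, (reachable, _) = nx.minimum_cut(G, "s", "t")
    # Pixels on the source side of the cut form the foreground mask.
    return np.array([[nid(i, j) in reachable for j in range(w)]
                     for i in range(h)])
```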
