Generalized Qualification and Qualification Levels for Spectral Regularization Methods
The concept of qualification for spectral regularization methods for inverse
ill-posed problems is strongly associated with the optimal order of convergence
of the regularization error. In this article, the definition of qualification
is extended and three different levels are introduced: weak, strong and
optimal. It is shown that the weak qualification extends the definition
introduced by Mathe and Pereverzev in 2003, mainly in the sense that the
functions associated with orders of convergence and source sets need not be the
same. It is shown that certain methods possessing infinite classical
qualification, e.g. truncated singular value decomposition (TSVD), Landweber's
method and Showalter's method, also have generalized qualification leading to
an optimal order of convergence of the regularization error. Sufficient
conditions for a spectral regularization method to have weak qualification are provided, and necessary and
sufficient conditions for a given order of convergence to be strong or optimal
qualification are found. Examples of all three qualification levels are
provided and the relationships between them as well as with the classical
concept of qualification and the qualification introduced by Mathe and
Pereverzev are shown. In particular, spectral regularization methods having
extended qualification in each one of the three levels and having zero or
infinite classical qualification are presented. Finally, several implications of this theory in the context of orders of convergence, converse results, and maximal source sets for inverse ill-posed problems are shown.
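For orientation, a standard sketch of the classical notion that these levels generalize (textbook background, not a result of the article): a spectral regularization method with filter functions $g_\alpha$ and residual function $r_\alpha(\lambda) = 1 - \lambda\, g_\alpha(\lambda)$ has classical qualification $\mu_0$ if $\mu_0$ is the largest exponent such that
$$\sup_{0 < \lambda \le \|T\|^2} \lambda^{\mu}\, |r_\alpha(\lambda)| \le c_\mu\, \alpha^{\mu} \qquad \text{for all } 0 < \mu \le \mu_0 .$$
Tikhonov regularization, $g_\alpha(\lambda) = (\lambda + \alpha)^{-1}$, has $\mu_0 = 1$, whereas TSVD, Landweber's method and Showalter's method have $\mu_0 = \infty$, which is why a finer (weak/strong/optimal) notion is needed to attach a concrete order of convergence to them.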
Global Saturation of Regularization Methods for Inverse Ill-Posed Problems
In this article, the concept of saturation of an arbitrary regularization
method is formalized based upon the original idea of saturation for spectral
regularization methods introduced by A. Neubauer in 1994. Necessary and
sufficient conditions for a regularization method to have global saturation are
provided. It is shown that for a method to have global saturation the total
error must be optimal in two senses: it must be of optimal order of convergence over a certain set which, at the same time, must be optimal (in a very precise sense) with respect to the error. Two converse results are then proved, and
the theory is applied to find sufficient conditions which ensure the existence
of global saturation for spectral methods with classical qualification of
finite positive order and for methods with maximal qualification. Finally,
several examples of regularization methods possessing global saturation are
shown.
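A classical illustration of the saturation phenomenon being formalized (standard background, not taken from the article): for Tikhonov regularization $x_\alpha^\delta = (T^*T + \alpha I)^{-1} T^* y^\delta$ with noise level $\|y - y^\delta\| \le \delta$ and the a-priori choice $\alpha \sim \delta^{2/(2\mu+1)}$, one has
$$\|x_\alpha^\delta - x^\dagger\| = O\!\left(\delta^{2\mu/(2\mu+1)}\right) \quad \text{for } x^\dagger \in \mathcal{R}\big((T^*T)^{\mu}\big),\ 0 < \mu \le 1,$$
but the rate never improves beyond $O(\delta^{2/3})$, no matter how smooth $x^\dagger$ is. This saturation at $\mu = 1$ (qualification one) is the spectral-method prototype that the global notion above extends to arbitrary regularization methods.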
Elastic-Net Regularization: Error estimates and Active Set Methods
This paper investigates theoretical properties and efficient numerical
algorithms for the so-called elastic-net regularization originating from
statistics, which simultaneously enforces l^1 and l^2 regularization. The
stability of the minimizer and its consistency are studied, and convergence
rates for both a priori and a posteriori parameter choice rules are
established. Two iterative numerical algorithms of active set type are
proposed, and their convergence properties are discussed. Numerical results are
presented to illustrate the features of the functional and the algorithms.
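As a generic illustration of the functional being minimized, here is a minimal proximal-gradient (ISTA-type) sketch in Python; it is not one of the active set methods proposed in the paper, and the operator A, data y and weights alpha, beta are placeholders:

    import numpy as np

    def elastic_net_ista(A, y, alpha, beta, n_iter=500):
        # Minimize 0.5*||A x - y||^2 + alpha*||x||_1 + 0.5*beta*||x||^2
        # by proximal gradient descent (generic sketch, not the paper's
        # active set algorithms).
        L = np.linalg.norm(A, 2) ** 2 + beta   # Lipschitz constant of the smooth part
        t = 1.0 / L                            # step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # gradient of 0.5*||Ax - y||^2 + 0.5*beta*||x||^2
            grad = A.T @ (A @ x - y) + beta * x
            v = x - t * grad
            # soft-thresholding = proximal map of alpha*||.||_1
            x = np.sign(v) * np.maximum(np.abs(v) - t * alpha, 0.0)
        return x

Given a minimizer, e.g. x_hat = elastic_net_ista(A, y, 1e-2, 1e-3), the active set {i : x_hat[i] != 0} is the object that active set methods track directly.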
Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints
The Tikhonov regularization of linear ill-posed problems with an l^1 penalty is considered. We recall results for linear convergence rates and
results on exact recovery of the support. Moreover, we derive conditions for
exact support recovery which are especially applicable in the case of ill-posed
problems, where other conditions, e.g. based on the so-called coherence or the
restricted isometry property are usually not applicable. The obtained results
also show that the regularized solutions not only converge in the l^1-norm but also in the vector space of finitely supported sequences (when considered as the strict inductive limit of the spaces R^n as n tends to infinity).
Additionally, the relations between different conditions for exact support
recovery and linear convergence rates are investigated.
With an imaging example from digital holography the applicability of the obtained results is illustrated, i.e. one may check a priori whether the experimental setup guarantees exact recovery with Tikhonov regularization with sparsity constraints.
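For concreteness, the setting can be written in the standard form (generic notation, not necessarily that of the paper): the regularized solutions are minimizers
$$x_\alpha^\delta \in \operatorname{argmin}_{x} \ \tfrac12 \|Ax - y^\delta\|^2 + \alpha \|x\|_{\ell^1},$$
and exact support recovery means that, for a suitable choice $\alpha = \alpha(\delta)$, the minimizer has exactly the same support as the sparse solution $x^\dagger$ of $Ax = y$. The conditions derived in the paper play the role that coherence or restricted-isometry assumptions play in compressed sensing, but remain checkable when $A$ is an ill-posed (e.g. compact) operator.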
Parameter identification in a semilinear hyperbolic system
We consider the identification of a nonlinear friction law in a
one-dimensional damped wave equation from additional boundary measurements.
Well-posedness of the governing semilinear hyperbolic system is established via
semigroup theory and contraction arguments. We then investigate the inverse
problem of recovering the unknown nonlinear damping law from additional
boundary measurements of the pressure drop along the pipe. This coefficient
inverse problem is shown to be ill-posed and a variational regularization
method is considered for its stable solution. We prove existence of minimizers
for the Tikhonov functional and discuss the convergence of the regularized
solutions under an approximate source condition. The meaning of this condition
and some arguments for its validity are discussed in detail and numerical
results are presented for illustration of the theoretical findings.
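As a generic sketch of the variational regularization referred to above (the concrete forward operator, norms and penalty are those of the paper; the form below is only the standard template): with $F$ denoting the parameter-to-measurement map that sends a candidate friction law $d$ to the predicted boundary pressure data, the Tikhonov functional has the form
$$J_\alpha(d) = \|F(d) - y^\delta\|^2 + \alpha \|d - d_0\|^2,$$
where $y^\delta$ are the noisy measurements and $d_0$ is an a-priori guess; existence of minimizers and convergence of the regularized solutions under an approximate source condition are precisely the points addressed above.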
The equivalence of fluctuation scale dependence and autocorrelations
We define optimal per-particle fluctuation and correlation measures, relate
fluctuations and correlations through an integral equation and show how to
invert that equation to obtain precise autocorrelations from fluctuation scale
dependence. We test the precision of the inversion with Monte Carlo data and
compare autocorrelations to conditional distributions conventionally used to
study high-p_t jet structure.
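The inversion step can be pictured, in a deliberately generic way, as solving a discretized integral relation f = K a for the autocorrelation a, given the measured fluctuation scale dependence f; the sketch below (placeholder kernel K and regularization weight reg) only illustrates this idea and is not the specific inversion procedure of the paper:

    import numpy as np

    def invert_scale_dependence(K, f, reg=1e-3):
        # Solve the discretized integral relation f = K @ a for the
        # autocorrelation a by Tikhonov-regularized least squares.
        # K, f and reg are placeholders for illustration only.
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + reg * np.eye(n), K.T @ f)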
Necessary conditions for variational regularization schemes
We study variational regularization methods in a general framework, more
precisely those methods that use a discrepancy and a regularization functional.
While several sets of sufficient conditions are known to obtain a
regularization method, we start with an investigation of the converse question:
What could necessary conditions for a variational method to provide a
regularization method look like? To this end, we formalize the notion of a
variational scheme and start with a comparison of three different instances of
variational methods. Then we focus on the data space model and investigate the
role and interplay of the topological structure, the convergence notion and the
discrepancy functional. In particular, we deduce necessary conditions for the
discrepancy functional to fulfill usual continuity assumptions. The results are
applied to discrepancy functionals given by Bregman distances and especially to
the Kullback-Leibler divergence.
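For reference, the generic scheme under investigation and the discrepancy mentioned at the end can be written as (standard definitions, not results of the paper)
$$x_\alpha^\delta \in \operatorname{argmin}_{x} \ S\big(Ax, y^\delta\big) + \alpha\, R(x),$$
with discrepancy functional $S$ and regularization functional $R$, and, for nonnegative densities $y, z$, the Kullback-Leibler divergence
$$\mathrm{KL}(y, z) = \int \Big( y \log\frac{y}{z} - y + z \Big)\, \mathrm{d}\mu,$$
whose continuity properties with respect to the chosen topology on the data space are exactly the kind of necessary conditions studied here.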
