357 research outputs found

    Some matrix nearness problems suggested by Tikhonov regularization

    The numerical solution of linear discrete ill-posed problems typically requires regularization, i.e., replacement of the available ill-conditioned problem by a nearby better-conditioned one. The most popular regularization methods for problems of small to moderate size are Tikhonov regularization and truncated singular value decomposition (TSVD). By considering matrix nearness problems related to Tikhonov regularization, several novel regularization methods are derived. These methods share properties with both Tikhonov regularization and TSVD, and can give approximate solutions of higher quality than either one of these methods.
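    Both methods act as spectral filters on the SVD of the problem matrix. A minimal numpy sketch of the two filters on a hypothetical test problem (the Hilbert matrix, mu, and k below are illustrative choices, not values from the paper):

    ```python
    import numpy as np

    # Hypothetical ill-conditioned test problem: a Hilbert matrix with exact data.
    n = 8
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    b = A @ np.ones(n)

    U, s, Vt = np.linalg.svd(A)

    # Tikhonov: damp each SVD component by the filter factor s_i^2 / (s_i^2 + mu^2).
    mu = 1e-4
    x_tik = Vt.T @ ((s / (s**2 + mu**2)) * (U.T @ b))

    # TSVD: keep the k largest singular values and discard the rest outright.
    k = 4
    x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
    ```

    Tikhonov damps all components smoothly, while TSVD applies a sharp cutoff; the methods proposed in the paper interpolate between these two behaviors.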

    Fractional regularization matrices for linear discrete ill-posed problems

    The numerical solution of linear discrete ill-posed problems typically requires regularization. Two of the most popular regularization methods are due to Tikhonov and Lavrentiev. These methods require the choice of a regularization matrix. Common choices include the identity matrix and finite difference approximations of a derivative operator. It is the purpose of the present paper to explore the use of fractional powers of the matrices A^T A (for Tikhonov regularization) and A (for Lavrentiev regularization) as regularization matrices, where A is the matrix that defines the linear discrete ill-posed problem. Both small- and large-scale problems are considered.
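    A minimal sketch of the idea for the Tikhonov case, using an eigendecomposition to form the fractional power of the symmetric positive semidefinite matrix A^T A (the test matrix, alpha, and mu are hypothetical illustrative choices):

    ```python
    import numpy as np

    # Hypothetical discrete ill-posed operator and exact data.
    n = 6
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    b = A @ np.ones(n)

    # Fractional power (A^T A)^alpha via the eigendecomposition of A^T A.
    alpha = 0.5
    w, V = np.linalg.eigh(A.T @ A)
    L = (V * np.maximum(w, 0.0) ** alpha) @ V.T   # clamp tiny negative eigenvalues

    # Tikhonov normal equations with L in place of the identity:
    # (A^T A + mu * L) x = A^T b.
    mu = 1e-3
    x = np.linalg.solve(A.T @ A + mu * L, A.T @ b)
    ```

    With alpha = 1 this reduces to the classical normal-equations form, and with alpha = 0 (and L = I) to standard Tikhonov regularization, so alpha interpolates between the two.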

    On the generation of Krylov subspace bases

    Many problems in scientific computing involving a large sparse matrix A are solved by Krylov subspace methods. This includes methods for the solution of large linear systems of equations with A, for the computation of a few eigenvalues and associated eigenvectors of A, and for the approximation of nonlinear matrix functions of A. When the matrix A is non-Hermitian, the Arnoldi process commonly is used to compute an orthonormal basis of a Krylov subspace associated with A. The Arnoldi process often is implemented with the aid of the modified Gram-Schmidt method. It is well known that the latter constitutes a bottleneck in parallel computing environments, and to some extent also on sequential computers. Several approaches to circumvent orthogonalization by the modified Gram-Schmidt method have been described in the literature, including the generation of Krylov subspace bases with the aid of suitably chosen Chebyshev or Newton polynomials. We review these schemes and describe new ones. Numerical examples are presented.
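    For reference, the modified Gram-Schmidt Arnoldi process that the polynomial-basis schemes aim to avoid can be sketched as follows (a generic numpy implementation, not code from the paper):

    ```python
    import numpy as np

    def arnoldi_mgs(A, v, m):
        """Arnoldi with modified Gram-Schmidt: builds an orthonormal basis V of
        span{v, Av, ..., A^(m-1) v} and the (m+1) x m Hessenberg matrix H
        satisfying the Arnoldi relation A @ V[:, :m] == V @ H."""
        n = A.shape[0]
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):           # sequential MGS sweep: the bottleneck
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20))
    V, H = arnoldi_mgs(A, rng.standard_normal(20), 5)
    ```

    The inner loop orthogonalizes against every earlier basis vector in sequence, which is what limits parallelism; the Chebyshev/Newton polynomial schemes first generate a (non-orthonormal) basis and orthogonalize it afterwards in a block fashion.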

    An Analogue for Szegő Polynomials of the Clenshaw Algorithm

    NSF grant DMS 9002884. National Research Council fellowship.

    Convergence rates for inverse-free rational approximation of matrix functions

    This article deduces geometric convergence rates for approximating matrix functions via inverse-free rational Krylov methods. In applications one frequently encounters matrix functions such as the matrix exponential or matrix logarithm; often the matrix under consideration is too large to compute the matrix function directly, and Krylov subspace methods are used to determine a reduced problem. If many evaluations of a matrix function of the form f(A)v with a large matrix A are required, then it may be advantageous to determine a reduced problem using rational Krylov subspaces. These methods may give more accurate approximations of f(A)v with subspaces of smaller dimension than standard Krylov subspace methods. Unfortunately, the system solves required to construct an orthogonal basis for a rational Krylov subspace may create numerical difficulties and/or require excessive computing time. This paper investigates a novel approach to determine an orthogonal basis of an approximation of a rational Krylov subspace of (small) dimension from a standard orthogonal Krylov subspace basis of larger dimension. The approximation error will depend on properties of the matrix A and on the dimension of the original standard Krylov subspace. We show that our inverse-free method for approximating the rational Krylov subspace converges geometrically (for increasing dimension of the standard Krylov subspace) to a rational Krylov subspace. The convergence rate may be used to predict the dimension of the standard Krylov subspace necessary to obtain a certain accuracy in the approximation. Computed examples illustrate the theory developed.
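    For context, the standard (polynomial) Krylov approximation of f(A)v that the inverse-free approach starts from projects A onto a Krylov subspace and evaluates f on the small projected matrix. A hedged numpy/scipy sketch with a hypothetical diagonal test matrix and f = exp (the matrix, dimensions, and f are illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    n, m = 200, 30
    A = -np.diag(np.linspace(0.0, 10.0, n))   # simple symmetric stable test matrix
    v = rng.standard_normal(n)

    # Arnoldi (modified Gram-Schmidt) gives V_{m+1} and the Hessenberg matrix H.
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]

    # Standard Krylov approximation: f(A) v ~= ||v|| * V_m * f(H_m) * e_1.
    fAv = np.linalg.norm(v) * V[:, :m] @ expm(H[:m, :m])[:, 0]
    exact = np.exp(np.diag(A)) * v            # exp of a diagonal matrix acting on v
    ```

    No system solves with A appear here; the paper's contribution is to extract from such a basis an approximation to a rational Krylov subspace, which would ordinarily require solves with shifted versions of A.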

    Regularization matrices determined by matrix nearness problems

    This paper is concerned with the solution of large-scale linear discrete ill-posed problems with error-contaminated data. Tikhonov regularization is a popular approach to determine meaningful approximate solutions of such problems. The choice of regularization matrix in Tikhonov regularization may significantly affect the quality of the computed approximate solution. This matrix should be chosen to promote the recovery of known important features of the desired solution, such as smoothness and monotonicity. We describe a novel approach to determine regularization matrices with desired properties by solving a matrix nearness problem. The constructed regularization matrix is the closest matrix in the Frobenius norm with a prescribed null space to a given matrix. Numerical examples illustrate the performance of the regularization matrices so obtained.
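    In the simplest setting, prescribing that the null space contain the span of a matrix W with orthonormal columns, the Frobenius-nearest matrix to a given L is L(I - W W^T), i.e. L with its action on span(W) projected out. A small numpy sketch (the choices of L and null space below are illustrative, not from the paper):

    ```python
    import numpy as np

    def nearest_with_null_space(L, W):
        """Closest matrix to L in the Frobenius norm whose null space contains
        the columns of W (assumed orthonormal): L @ (I - W @ W.T)."""
        return L - (L @ W) @ W.T

    # Illustrative example: force the constant vector into the null space, so the
    # resulting regularization matrix does not penalize constant solutions.
    n = 5
    L = np.eye(n)
    W = np.ones((n, 1)) / np.sqrt(n)     # orthonormal basis of span{(1,...,1)}
    M = nearest_with_null_space(L, W)
    ```

    Right-multiplying by the orthogonal projector I - W W^T is optimal because the Frobenius inner product splits the error orthogonally into a part with M W = 0 and the discarded part L W W^T.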

    Stieltjes-type polynomials on the unit circle

    29 pages, no figures. MSC2000 codes: Primary 65D32, 42A10, 42C05; Secondary 30E20. MR#: MR2476567.
    Stieltjes-type polynomials corresponding to measures supported on the unit circle T are introduced and their asymptotic properties away from T are studied for general classes of measures. As an application, we prove the convergence of an associated sequence of interpolating rational functions to the corresponding Carathéodory function. In turn, this is used to give an estimate of the rate of convergence of certain quadrature formulae that resemble the Gauss-Kronrod rule, provided that the integrand is analytic in a neighborhood of T.
    The work of B. de la Calle received support from Dirección General de Investigación (DGI), Ministerio de Educación y Ciencia, under grants MTM2006-13000-C03-02 and MTM2006-07186 and from UPM-CAM under grants CCG07-UPM/000-1652 and CCG07-UPM/ESP-1896. The work of G. López was supported by DGI under grant MTM2006-13000-C03-02 and by UC3M-CAM through CCG06-UC3M/ESP-0690. The work of L. Reichel was supported by an OBR Research Challenge Grant.

    Simple Square Smoothing Regularization Operators

    Tikhonov regularization of linear discrete ill-posed problems often is applied with a finite difference regularization operator that approximates a low-order derivative. These operators generally are represented by a banded rectangular matrix with fewer rows than columns. They therefore cannot be applied in iterative methods that are based on the Arnoldi process, which requires the regularization operator to be represented by a square matrix. This paper discusses two approaches to circumvent this difficulty: zero-padding the rectangular matrices to make them square and extending the rectangular matrix to a square circulant. We also describe how to combine these operators by weighted averaging and with orthogonal projection. Applications to Arnoldi and Lanczos bidiagonalization-based Tikhonov regularization, as well as to truncated iteration with a range-restricted minimal residual method, are presented.
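    The two constructions can be illustrated for the rectangular first-difference operator (a minimal numpy sketch; the operator size is an arbitrary illustrative choice):

    ```python
    import numpy as np

    n = 6
    # Banded rectangular first-difference operator: (n-1) x n, one fewer row.
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0

    # Zero-padding: append a zero row to make the operator square.
    D_pad = np.vstack([D, np.zeros((1, n))])

    # Circulant extension: wrap the difference around, giving a square circulant.
    D_circ = np.vstack([D, np.zeros((1, n))])
    D_circ[-1, -1], D_circ[-1, 0] = -1.0, 1.0
    ```

    Both square operators still annihilate constant vectors, so neither penalizes the constant component of the solution, and both can be used within Arnoldi-based iterations.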