
    Scaled and squared subdiagonal Padé approximation for the matrix exponential

    The scaling and squaring method is the most widely used algorithm for computing the exponential of a square matrix A. We introduce an efficient variant that uses a much smaller squaring factor when ||A|| ≫ 1 and a subdiagonal Padé approximant of low degree, thereby significantly reducing the overall cost and avoiding the potential instability caused by overscaling, while giving a forward error of the same magnitude as that of the standard algorithm. The new algorithm performs well if a rough estimate of the rightmost eigenvalue of A is available and the rightmost eigenvalues do not have widely varying imaginary parts, and it achieves a significant speedup over the conventional algorithm, especially when A has large norm. Our algorithm uses the partial fraction form to evaluate the Padé approximant, which makes it suitable for parallelization and directly applicable to computing the action of the matrix exponential exp(A)b, where b is a vector or a tall skinny matrix. For this problem, the significantly smaller squaring factor has an even more pronounced benefit for efficiency when evaluating the action of the Padé approximant.
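
    As a rough illustration of the structure described above (not the paper's algorithm, which also exploits an estimate of the rightmost eigenvalue and its own choice of parameters), the Python sketch below evaluates a strictly subdiagonal [p/q] Padé approximant to the exponential in partial-fraction form, so that each application to a vector reduces to a few independent shifted linear solves, and then undoes the scaling by applying the approximant 2^s times. The degrees p, q and the squaring factor s are placeholders.

        import numpy as np
        from math import factorial

        def pade_exp_coeffs(p, q):
            """Coefficients (ascending powers) of the [p/q] Pade approximant N(z)/D(z) to exp(z)."""
            num = np.array([factorial(p + q - j) * factorial(p)
                            / (factorial(p + q) * factorial(j) * factorial(p - j))
                            for j in range(p + 1)])
            den = np.array([(-1) ** j * factorial(p + q - j) * factorial(q)
                            / (factorial(p + q) * factorial(j) * factorial(q - j))
                            for j in range(q + 1)])
            return num, den

        def partial_fractions(num, den):
            """Poles theta_j and residues alpha_j of N/D; assumes simple poles and deg N < deg D."""
            poles = np.roots(den[::-1])          # np.roots expects descending powers
            residues = np.polyval(num[::-1], poles) / np.polyval(np.polyder(den[::-1]), poles)
            return poles, residues

        def expm_action(A, b, p=3, q=4, s=2):
            """Approximate exp(A) @ b by applying the [p/q] (p < q) Pade approximant of
            exp at A / 2**s, in partial-fraction form, 2**s times to the vector b."""
            poles, residues = partial_fractions(*pade_exp_coeffs(p, q))
            As = A / 2 ** s
            eye = np.eye(A.shape[0])
            x = b.astype(complex)
            for _ in range(2 ** s):              # "squaring" phase, acting on the vector
                x = sum(alpha * np.linalg.solve(As - theta * eye, x)
                        for alpha, theta in zip(residues, poles))
            return x.real if np.isrealobj(A) and np.isrealobj(b) else x

    Each shifted solve in the partial-fraction sum is independent of the others, which is the property the abstract points to when it mentions parallelization.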

    Convergence of linear barycentric rational interpolation for analytic functions

    Polynomial interpolation to analytic functions can be very accurate, depending on the distribution of the interpolation nodes. However, at equispaced nodes and the like, these interpolants are not only badly conditioned but in some cases fail to converge even in exact arithmetic. Linear barycentric rational interpolation with the weights presented by Floater and Hormann can be viewed as blended polynomial interpolation and often yields better approximation in such cases. This has been proven for differentiable functions and indicated in several experiments for analytic functions. So far, these rational interpolants have been used mainly with a constant parameter, usually denoted by d, the degree of the blended polynomials, which leads to small condition numbers but to merely algebraic convergence. With the help of logarithmic potential theory we derive asymptotic convergence results for analytic functions when this parameter varies with the number of nodes. Moreover, we present suggestions on how to choose d in order to observe fast and stable convergence, even at equispaced nodes, where stable geometric convergence is provably impossible. We demonstrate our results with several numerical examples.
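
    To make the objects in this abstract concrete, here is a small Python sketch (an assumed implementation, not the authors' code) that builds the Floater-Hormann barycentric weights for a given blending degree d and evaluates the barycentric formula at equispaced nodes. The fixed choice d = 3 below is only a placeholder; how to let d grow with the number of nodes is exactly what the paper analyses.

        import numpy as np

        def floater_hormann_weights(x, d):
            """Barycentric weights of the Floater-Hormann interpolant with blending degree d."""
            n = len(x) - 1
            w = np.zeros(n + 1)
            for k in range(n + 1):
                for i in range(max(0, k - d), min(k, n - d) + 1):
                    prod = 1.0
                    for j in range(i, i + d + 1):
                        if j != k:
                            prod /= x[k] - x[j]
                    w[k] += (-1) ** i * prod
            return w

        def barycentric_eval(x, fx, w, t):
            """Evaluate r(t) = (sum_k w_k fx_k / (t - x_k)) / (sum_k w_k / (t - x_k))."""
            t = np.asarray(t, dtype=float)
            num, den = np.zeros_like(t), np.zeros_like(t)
            exact = np.full(t.shape, -1)          # indices where t coincides with a node
            for k in range(len(x)):
                diff = t - x[k]
                exact[diff == 0] = k
                diff[diff == 0] = 1.0             # dummy value; fixed after the loop
                num += w[k] * fx[k] / diff
                den += w[k] / diff
            r = num / den
            r[exact >= 0] = fx[exact[exact >= 0]]  # enforce interpolation at the nodes
            return r

        # Runge's function at equispaced nodes, where pure polynomial interpolation diverges.
        n, d = 40, 3
        x = np.linspace(-1.0, 1.0, n + 1)
        fx = 1.0 / (1.0 + 25.0 * x ** 2)
        w = floater_hormann_weights(x, d)
        t = np.linspace(-1.0, 1.0, 1001)
        err = np.max(np.abs(barycentric_eval(x, fx, w, t) - 1.0 / (1.0 + 25.0 * t ** 2)))
        print(f"max error with n = {n}, d = {d}: {err:.2e}")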

    A generalization of the steepest descent method for matrix functions

    We consider the restarted Arnoldi method for approximating the product of a function of a Hermitian matrix with a vector in the special case where the restart length is set to one. When applied to the solution of a linear system of equations, this approach coincides with the method of steepest descent. We show that the method is equivalent to an interpolation process in which the node sequence has at most two points of accumulation. This knowledge is used to quantify the asymptotic convergence rate.
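
    Below is a minimal Python sketch of the restart-length-one approximation of f(A)b for Hermitian A, following the standard restarted Arnoldi framework for matrix functions (an assumed formulation, not necessarily the authors' exact algorithm): each cycle performs a single Arnoldi step, the accumulated Hessenberg matrix is lower bidiagonal with the Ritz values (Rayleigh quotients) on its diagonal, and the approximation is beta * V f(H) e_1; the Ritz values play the role of the interpolation nodes mentioned in the abstract. For clarity the sketch stores all basis vectors; a genuinely restarted implementation would update the approximation from the last entry of f(H) e_1 only, so that just the current vector needs to be kept.

        import numpy as np
        from scipy.linalg import expm

        def restart_length_one(A, b, f_of_matrix, k):
            """Approximate f(A) @ b with k cycles of the Arnoldi method restarted after every step."""
            beta0 = np.linalg.norm(b)
            v = b / beta0
            V, theta, beta = [v], [], []
            for j in range(k):
                w = A @ v
                t = np.vdot(v, w).real              # Rayleigh quotient (Ritz value)
                w = w - t * v
                theta.append(t)
                if j < k - 1:                       # prepare the next cycle (assumes no breakdown)
                    beta.append(np.linalg.norm(w))
                    v = w / beta[-1]
                    V.append(v)
            H = np.diag(theta) + np.diag(beta, -1)  # k-by-k lower bidiagonal matrix
            e1 = np.zeros(k)
            e1[0] = 1.0
            return beta0 * np.column_stack(V) @ (f_of_matrix(H) @ e1)

        # Example with f = exp on a Hermitian matrix with spectrum in [-2, 0].
        rng = np.random.default_rng(0)
        n = 200
        Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
        A = Q @ np.diag(np.linspace(-2.0, 0.0, n)) @ Q.T
        b = rng.standard_normal(n)
        approx = restart_length_one(A, b, expm, 30)
        print(np.linalg.norm(approx - expm(A) @ b) / np.linalg.norm(expm(A) @ b))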