Eigenvalue Algorithms for Symmetric Hierarchical Matrices
Author: Thomas Mach
Publisher: Thomas Mach
Total Pages: 173
Release: 2012
Genre: Mathematics
ISBN:
This thesis is about the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDL^T factorizations.

The investigation of QR decompositions for H-matrices leads to a new QR decomposition. It has some properties that are superior to the existing ones, which is shown by experiments. Using the HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices, however, does not lead to an algorithm more efficient than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates grow strongly in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices.

There is an exact LDL^T factorization for Hl-matrices and an approximate LDL^T factorization for H-matrices of linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. Knowing the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm computes all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log⁴ n). Since the LDL^T factorization for general H-matrices is only approximate, the accuracy of the LDL^T slicing algorithm is limited. The local ranks of the LDL^T factorization for indefinite matrices are generally unknown, so no statement on the complexity of the algorithm can be made beyond the numerical results in Table 5.7.

The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. The method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is required. The squared and shifted matrix (M - mu I)² is positive definite, however, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M - mu I)² is more expensive than the approximate inversion of M, so the computation of inner eigenvalues is more expensive as well.

We compare the different eigenvalue algorithms. For computing the smallest eigenvalues, the preconditioned inverse iteration for hierarchical matrices is better than the LDL^T slicing algorithm, especially if the inverse is already available. Computing inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; here the LDL^T slicing algorithm is competitive with H-PINVIT. For large, sparse matrices, algorithms tailored to sparse matrices, such as the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDL^T slicing algorithm appears to be better than the LR Cholesky algorithm.
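To make the slicing-the-spectrum step concrete, the following Python sketch runs bisection on the inertia read off an LDL^T factorization, but in plain dense arithmetic. It is only a toy under stated assumptions: scipy.linalg.ldl on a full matrix stands in for the H-matrix LDL^T factorization of the thesis, the helper names are ad hoc, and the bracket, tolerance, and test matrix are illustrative choices.

```python
import numpy as np
from scipy.linalg import ldl

def eigs_below(M, mu):
    """Count eigenvalues of the symmetric matrix M below the shift mu,
    via Sylvester's law of inertia applied to an LDL^T factorization of M - mu*I."""
    _, D, _ = ldl(M - mu * np.eye(M.shape[0]))
    # D is block diagonal (1x1 and 2x2 blocks); its number of negative
    # eigenvalues equals the number of eigenvalues of M below mu.
    return int(np.count_nonzero(np.linalg.eigvalsh(D) < 0))

def kth_smallest_eigenvalue(M, k, a, b, tol=1e-10):
    """Bisection ("slicing the spectrum") for the k-th smallest eigenvalue of M,
    assuming it lies in the bracket [a, b]."""
    while b - a > tol:
        mu = 0.5 * (a + b)
        if eigs_below(M, mu) >= k:
            b = mu          # at least k eigenvalues below mu: move the upper end down
        else:
            a = mu          # fewer than k below mu: move the lower end up
    return 0.5 * (a + b)

# Toy check against a dense eigensolver.
rng = np.random.default_rng(1)
G = rng.standard_normal((100, 100))
M = 0.5 * (G + G.T)
bound = np.linalg.norm(M, 2)            # all eigenvalues lie in [-||M||_2, ||M||_2]
print(kth_smallest_eigenvalue(M, 5, -bound, bound), np.linalg.eigvalsh(M)[4])
```

The H-matrix version replaces the dense factorization with the (approximate) H-LDL^T factorization described above, which is what brings the cost per shift down to linear-polylogarithmic complexity.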
If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers such as the LAPACK function dsyev are superior. The H-PINVIT and the LDL^T slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDL^T slicing algorithm and the LR Cholesky algorithm need almost the same time to compute all eigenvalues; for large matrices, both are faster than the dense LAPACK function dsyev.
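For the folded spectrum approach mentioned above, a minimal dense sketch of preconditioned inverse iteration (PINVIT) applied to (M - mu I)² might look as follows. The explicit inverse used as preconditioner, the synthetic test matrix, the shift, and the helper names are illustrative stand-ins for the approximate H-matrix inverse and the problems treated in the thesis.

```python
import numpy as np

def pinvit(A, apply_prec, x0, tol=1e-8, maxit=500):
    """Preconditioned inverse iteration (PINVIT) for the smallest eigenpair of a
    symmetric positive definite matrix A.  `apply_prec(r)` should approximate
    A^{-1} r (here: an exact dense inverse, purely for illustration)."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x                      # Rayleigh quotient
    for _ in range(maxit):
        r = A @ x - lam * x              # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        x = x - apply_prec(r)            # preconditioned correction
        x /= np.linalg.norm(x)
        lam = x @ A @ x
    return lam, x

# Folded spectrum: the eigenvalue of a symmetric M closest to a shift mu is
# obtained via the smallest eigenvalue of the positive (semi)definite (M - mu*I)^2.
rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q @ np.diag(np.linspace(-5.0, 5.0, n)) @ Q.T      # synthetic symmetric test matrix
mu = 1.3                                              # illustrative interior shift
S = M - mu * np.eye(n)
A = S @ S
A_inv = np.linalg.inv(A)                              # stand-in for an approximate H-inverse
_, x = pinvit(A, lambda r: A_inv @ r, rng.standard_normal(n))
print("inner eigenvalue near mu:", x @ M @ x)
```

With the exact inverse as preconditioner the update reduces to plain inverse iteration on (M - mu I)²; in the H-matrix setting an approximate inverse takes its place.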
Author: Wolfgang Hackbusch
Publisher: Springer
Total Pages: 532
Release: 2015-12-21
Genre: Mathematics
ISBN: 3662473240
This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists in computational mathematics, physics, chemistry and engineering.
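As a small illustration of the storage idea behind hierarchical matrices, the sketch below compresses a single admissible off-diagonal block by a truncated SVD; the kernel, block partition, and tolerance are made-up examples, and a real H-matrix code organizes such low-rank blocks recursively via a cluster tree.

```python
import numpy as np

# Toy kernel matrix: entries vary smoothly away from the diagonal, so
# well-separated (off-diagonal) blocks are numerically low-rank.
n = 512
x = np.linspace(0.0, 1.0, n)
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))

block = K[: n // 2, n // 2 :]                 # one admissible off-diagonal block
U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.count_nonzero(s > 1e-8 * s[0]))    # numerical rank at a fixed tolerance
Uk, Vk = U[:, :k] * s[:k], Vt[:k, :]          # store 2*k thin factors, not a dense block

print("block rank:", k,
      "relative error:", np.linalg.norm(block - Uk @ Vk) / np.linalg.norm(block))
```

Keeping the two thin factors instead of the full block is what reduces the storage for such blocks from quadratic to roughly linear in the block size.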
Author: Susanne C. Brenner
Publisher: Springer Nature
Total Pages: 778
Release: 2023-03-15
Genre: Mathematics
ISBN: 3030950255
These are the proceedings of the 26th International Conference on Domain Decomposition Methods in Science and Engineering, which was hosted by the Chinese University of Hong Kong and held online in December 2020. Domain decomposition methods are iterative methods for solving the often very large systems of equations that arise when engineering problems are discretized, frequently using finite elements or other modern techniques. These methods are specifically designed to make effective use of massively parallel, high-performance computing systems. The book presents both theoretical and computational advances in this domain, reflecting the state of the art in 2020.
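As a minimal, self-contained illustration of the domain decomposition idea, the sketch below uses a one-level additive Schwarz operator (overlapping local solves) as a preconditioner inside conjugate gradients for a 1D model problem; the subdomain count, overlap, and model matrix are arbitrary illustrative choices, and practical solvers add a coarse space and run in parallel.

```python
import numpy as np

# 1D Poisson matrix as a stand-in for a discretized engineering problem.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Overlapping index sets and local solvers (one-level additive Schwarz).
nsub, overlap = 8, 5
bounds = np.linspace(0, n, nsub + 1, dtype=int)
subdomains = [np.arange(max(lo - overlap, 0), min(hi + overlap, n))
              for lo, hi in zip(bounds[:-1], bounds[1:])]
local_inv = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in subdomains]

def additive_schwarz(r):
    """Apply the preconditioner: sum of local subdomain solves of the residual."""
    z = np.zeros_like(r)
    for idx, Ai in zip(subdomains, local_inv):
        z[idx] += Ai @ r[idx]
    return z

def pcg(A, b, prec, tol=1e-8, maxit=500):
    """Conjugate gradients preconditioned by `prec` (here: additive Schwarz)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = prec(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it + 1
        z = prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x, iters = pcg(A, b, additive_schwarz)
print("iterations:", iters,
      "relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```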
Author: Peter Benner
Publisher: Springer
Total Pages: 635
Release: 2015-05-09
Genre: Mathematics
ISBN: 3319152602
This edited volume highlights the scientific contributions of Volker Mehrmann, a leading expert in the areas of numerical (linear) algebra, matrix theory, differential-algebraic equations and control theory. These mathematical research areas are strongly related and often occur in the same real-world applications. The main areas where such applications emerge are computational engineering and sciences, but increasingly also the social sciences and economics. The book also reflects some of Volker Mehrmann's major career stages. Starting out working in the areas of numerical linear algebra (his first full professorship at TU Chemnitz was in "Numerical Algebra," hence the title of the book) and matrix theory, Volker Mehrmann has made significant contributions to these areas ever since; the highlights are discussed in Parts I and II of the present book. Often the development of new algorithms in numerical linear algebra is motivated by problems in system and control theory. These problems, together with his later major work on differential-algebraic equations, to which he and Peter Kunkel made many groundbreaking contributions, are the topic of the chapters in Part III. Besides providing a scientific discussion of Volker Mehrmann's work and its impact on the development of several areas of applied mathematics, the individual chapters stand on their own as reference works for selected topics in the fields of numerical (linear) algebra, matrix theory, differential-algebraic equations and control theory.
Author: Roumen Kountchev
Publisher: Springer
Total Pages: 389
Release: 2016-05-19
Genre: Technology & Engineering
ISBN: 3319321927
This book presents an introduction and 11 independent chapters devoted to various new approaches to intelligent image processing and analysis. It presents new methods, algorithms, and applied systems for intelligent image processing on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Image Sequences; Kidney Cyst Segmentation in CT Images; Audio-Visual Attention Models in Mobile Robot Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; and Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to theoretical aspects, while the others present practical aspects and analyze the developed algorithms in different application areas.
Author: Daniel Kressner
Publisher: Springer Science & Business Media
Total Pages: 272
Release: 2006-01-20
Genre: Mathematics
ISBN: 3540285024
This book is about computing eigenvalues, eigenvectors, and invariant subspaces of matrices. Treatment includes generalized and structured eigenvalue problems and all vital aspects of eigenvalue computations. A unique feature is the detailed treatment of structured eigenvalue problems, providing insight on accuracy and efficiency gains to be expected from algorithms that take the structure of a matrix into account.
Author: Rio Yokota
Publisher: Springer
Total Pages: 301
Release: 2018-03-20
Genre: Computers
ISBN: 3319699539
This book constitutes the refereed proceedings of the 4th Asian Supercomputing Conference, SCFA 2018, held in Singapore in March 2018. Supercomputing Frontiers will be rebranded as Supercomputing Frontiers Asia (SCFA), which serves as the technical programme for SCA18. The technical programme for SCA18 consists of four tracks: Application, Algorithms & Libraries; Programming System Software; Architecture, Network/Communications & Management; and Data, Storage & Visualisation. The 20 papers presented in this volume were carefully reviewed and selected from 60 submissions.
Author: Gene H. Golub
Publisher: JHU Press
Total Pages: 781
Release: 2013-02-15
Genre: Mathematics
ISBN: 1421408597
A comprehensive treatment of numerical linear algebra from the standpoint of both theory and practice. The fourth edition of Gene H. Golub and Charles F. Van Loan's classic is an essential reference for computational scientists and engineers in addition to researchers in the numerical linear algebra community. Anyone whose work requires the solution to a matrix problem and an appreciation of its mathematical properties will find this book to be an indispensable tool. This revision is a cover-to-cover expansion and renovation of the third edition. It now includes an introduction to tensor computations and brand new sections on:
• fast transforms
• parallel LU
• discrete Poisson solvers
• pseudospectra
• structured linear equation problems
• structured eigenvalue problems
• large-scale SVD methods
• polynomial eigenvalue problems
Matrix Computations is packed with challenging problems, insightful derivations, and pointers to the literature: everything needed to become a matrix-savvy developer of numerical methods and software. The second most cited math book of 2012 according to MathSciNet, the book has placed in the top 10 since 2005.
Author: Tetsuya Sakurai
Publisher: Springer
Total Pages: 312
Release: 2018-01-03
Genre: Computers
ISBN: 3319624261
This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas, and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.
Author: Raf Vandebril
Publisher: JHU Press
Total Pages: 594
Release: 2008-01-14
Genre: Mathematics
ISBN: 0801896797
In recent years several new classes of matrices have been discovered and their structure exploited to design fast and accurate algorithms. In this new reference work, Raf Vandebril, Marc Van Barel, and Nicola Mastronardi present the first comprehensive overview of the mathematical and numerical properties of the family's newest member: semiseparable matrices. The text is divided into three parts. The first provides some historical background and introduces concepts and definitions concerning structured rank matrices. The second offers some traditional methods for solving systems of equations involving the basic subclasses of these matrices. The third section discusses structured rank matrices in a broader context, presents algorithms for solving higher-order structured rank matrices, and examines hybrid variants such as block quasiseparable matrices. An accessible case study clearly demonstrates the general topic of each new concept discussed. Many of the routines featured are implemented in Matlab and can be downloaded from the Web for further exploration.
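To make the structured-rank notion concrete, the sketch below builds a symmetric generator-representable semiseparable matrix from two vectors and checks the defining property that every block taken out of its lower triangular part (diagonal included) has rank at most one; the generators and dimensions are illustrative, and this is only the simplest member of the family covered in the book.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
u, v = rng.standard_normal(n), rng.standard_normal(n)

# Symmetric generator-representable semiseparable matrix: the lower triangle
# (diagonal included) is taken from u*v^T, the strict upper triangle is mirrored.
S = np.tril(np.outer(u, v))
S = S + np.triu(S.T, 1)

# Structured-rank property: every block from the lower triangular part
# (diagonal included) has rank at most 1.
for i in range(n):
    block = S[i:, : i + 1]
    assert np.linalg.matrix_rank(block, tol=1e-10) <= 1
print("all lower-triangular blocks have rank <= 1")
```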