Computing matrix symmetrizers, part 2: New methods using eigendata and linear means; a comparison
- September 2016
- Over any field F, every square matrix A can be factored into a product of two symmetric matrices, A = S1·S2 with S1 = S1ᵀ and S2 = S2ᵀ, and either factor can be chosen nonsingular, as Frobenius discovered in 1910. Frobenius' symmetric matrix factorization has lain almost dormant for a century. The first successful method for computing matrix symmetrizers, i.e., symmetric matrices S such that SA is symmetric, appeared in 2013 and was inspired by an iterative linear systems algorithm of Huang and Nong (2010). The resulting iterative algorithm solves this computational problem over R and C, but at high computational cost. This paper develops and tests another linear equations solver, as well as eigen- and principal vector or Schur normal form based algorithms for solving the matrix symmetrizer problem numerically. Four new eigendata based algorithms use, respectively, SVD based principal vector chain constructions, Gram-Schmidt orthogonalization techniques, the Arnoldi method, or the Schur normal form of A in their formulations. They are aided by Datta's 1973 method, which symmetrizes unreduced Hessenberg matrices directly. The eigendata based methods work well and quickly for generic matrices A and create well conditioned matrix symmetrizers through eigenvector dyad accumulation. However, all of the eigen-based methods exhibit differing deficiencies for matrices A with ill-conditioned or complicated eigenstructures, i.e., nontrivial Jordan normal forms. Our symmetrizer studies for matrices with ill-conditioned eigensystems lead to two open problems of matrix optimization.
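To illustrate the eigenvector dyad accumulation mentioned in the abstract: if A = VΛV⁻¹ is diagonalizable, then S = V⁻ᵀV⁻¹ (a sum of dyads wᵢwᵢᵀ built from the rows wᵢ of V⁻¹, the left eigenvectors of A) is symmetric and satisfies SA = V⁻ᵀΛV⁻¹ = (SA)ᵀ. A minimal sketch of this construction, assuming A is diagonalizable with real eigendata; the function name `eigen_symmetrizer` is illustrative and not from the paper:

```python
import numpy as np

def eigen_symmetrizer(A):
    """Build a symmetrizer S (S = S^T, SA = (SA)^T) for a
    diagonalizable matrix A by accumulating left-eigenvector dyads.

    Sketch only: no treatment of complex eigenpairs or Jordan blocks,
    which the paper handles with principal vector chains / Schur forms."""
    _, V = np.linalg.eig(A)       # columns of V are right eigenvectors
    W = np.linalg.inv(V)          # rows of W are left eigenvectors
    S = W.T @ W                   # sum of dyads w_i w_i^T, symmetric by construction
    return S

# Generic matrix with distinct real eigenvalues (2 and 3):
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = eigen_symmetrizer(A)
# Both S and S @ A come out symmetric.
```

Since S = V⁻ᵀV⁻¹ is a Gram matrix of linearly independent rows, this symmetrizer is automatically nonsingular; its conditioning degrades as the eigenvector basis V becomes ill-conditioned, which is the regime the paper flags as problematic.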
- Keywords: symmetric matrix factorization; symmetrizer; symmetrizer computation; eigenvalue method; linear equation; principal subspace computation; matrix optimization; numerical algorithm; MATLAB code