Singular Value Decomposition
The Singular Value Decomposition (SVD) is one of the most powerful and widely used tools in linear algebra. Unlike eigenvalue decomposition, which is only defined for square matrices, the SVD exists for any matrix $A \in \mathbb{R}^{m \times n}$. It factors $A$ as $A = U \Sigma V^T$, where $U$ and $V$ are orthogonal and $\Sigma$ is diagonal with non-negative entries $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$, the singular values.
Construction
The SVD is closely related to the eigenvalues and eigenvectors of the symmetric matrices $A^T A$ and $A A^T$.
- The columns of $U$ are the orthonormal eigenvectors of $A A^T$.
- The columns of $V$ are the orthonormal eigenvectors of $A^T A$.
- The non-zero singular values $\sigma_i$ are the square roots of the non-zero eigenvalues of both $A A^T$ and $A^T A$.
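This construction can be checked numerically. The sketch below (using a small, arbitrary example matrix) verifies that the singular values returned by NumPy's SVD match the square roots of the eigenvalues of $A^T A$:

```python
import numpy as np

# A small example matrix chosen for illustration.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

# Eigenvalues of the symmetric matrix A^T A, sorted in descending order
# (eigvalsh returns them in ascending order for symmetric matrices).
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]

# Singular values of A, which NumPy returns in descending order.
s = np.linalg.svd(A, compute_uv=False)

# The singular values are the square roots of the eigenvalues of A^T A.
print(np.allclose(s, np.sqrt(eigvals)))  # True
```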
Geometric Interpretation
The SVD can be thought of as decomposing any linear transformation $A = U \Sigma V^T$ into three simple steps:
- Rotation in the domain ($V^T$): Rotate the input vector to align with the principal axes.
- Scaling ($\Sigma$): Stretch or compress the vector along these axes by the singular values $\sigma_i$.
- Rotation in the codomain ($U$): Rotate the resulting vector to its final orientation.
This tells us that every matrix $A$ maps the unit sphere to a (possibly degenerate) hyper-ellipse. The singular values $\sigma_i$ are the lengths of the semi-axes of this ellipse.
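Both claims are easy to verify on a random example matrix (a sketch, not tied to any specific application): applying the three steps one at a time reproduces $Ax$, and the right singular vectors map onto the semi-axes of the ellipse with lengths $\sigma_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # an arbitrary square example matrix
x = rng.standard_normal(3)        # any input vector

U, s, Vt = np.linalg.svd(A)

# Step 1: rotate in the domain, Step 2: scale by the singular values
# (Sigma is diagonal, so this is elementwise), Step 3: rotate in the codomain.
step1 = Vt @ x
step2 = s * step1
step3 = U @ step2
print(np.allclose(step3, A @ x))  # True

# The right singular vectors v_i (rows of Vt) are sent to the semi-axes
# of the ellipse: ||A v_i|| = sigma_i for each i.
print(np.allclose(np.linalg.norm(A @ Vt.T, axis=0), s))  # True
```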
Low-rank Approximation
One of the most important applications of the SVD is the Eckart–Young–Mirsky theorem. It states that the best rank-$k$ approximation of a matrix $A$ (in the Frobenius or spectral norm) is obtained by keeping only the top $k$ singular values and their corresponding singular vectors.
If $A = \sum_{i=1}^{r} \sigma_i u_i v_i^T$, then the best rank-$k$ approximation ($k < r$) is:

$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$$
This is the foundation for techniques like Principal Component Analysis (PCA) and image compression.
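A short sketch of the truncation on a random matrix: the theorem also predicts the exact Frobenius error of the best rank-$k$ approximation, namely $\sqrt{\sigma_{k+1}^2 + \dots + \sigma_r^2}$, which we can confirm numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))   # an arbitrary example matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2  # target rank for the approximation
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the Frobenius error of the best rank-k approximation
# equals sqrt(sigma_{k+1}^2 + ... + sigma_r^2).
err = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))  # True
```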
Reduced SVD
In many cases, specifically when $m \gg n$ or the matrix is not full rank, we use the Reduced SVD (or thin SVD). If $A$ is $m \times n$ with $m \ge n$, the reduced SVD keeps only the first $n$ columns of $U$ and the upper $n \times n$ square of $\Sigma$:

$$A = \hat{U} \hat{\Sigma} V^T$$
This is more computationally efficient while preserving all the information needed to reconstruct $A$.
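In NumPy, the thin SVD is selected with `full_matrices=False`. A sketch on a tall random matrix shows the smaller factor shapes while the reconstruction remains exact:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 4))   # tall matrix: m >> n

# Reduced (thin) SVD: U_hat is 100x4 instead of 100x100,
# and Sigma_hat is the 4x4 upper square of Sigma.
U_hat, s_hat, Vt = np.linalg.svd(A, full_matrices=False)
print(U_hat.shape)  # (100, 4)

# The thin factors still reconstruct A exactly.
print(np.allclose(A, U_hat @ np.diag(s_hat) @ Vt))  # True
```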
Properties and Applications
- Rank: The number of non-zero singular values is equal to the rank of the matrix.
- Invertibility: A square matrix is invertible if and only if all its singular values are non-zero.
- Condition Number: The ratio $\kappa = \sigma_{\max} / \sigma_{\min}$ is the condition number of the matrix, which measures how sensitive a system of linear equations is to errors.
- Pseudo-inverse: The Moore–Penrose pseudo-inverse is easily computed as $A^+ = V \Sigma^+ U^T$, where $\Sigma^+$ is obtained by inverting the non-zero singular values (and transposing).
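The last two properties can be sketched directly from the SVD factors, assuming a full-rank example matrix so that all singular values are safely invertible:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))   # generic full-rank example matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Condition number: ratio of largest to smallest singular value
# (NumPy's default cond uses the same 2-norm definition).
cond = s[0] / s[-1]
print(np.isclose(cond, np.linalg.cond(A)))  # True

# Moore-Penrose pseudo-inverse: invert the non-zero singular values
# and swap the roles of U and V.
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```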