* Eigen-decomposition: If $latex \mathbf{v}$ is a vector such that multiplication by a matrix $latex \mathbf{A}$ on the left (or, for a row vector, on the right) only scales $latex \mathbf{v}$ by a scalar $latex \lambda$, i.e. $latex \mathbf{A}\mathbf{v} = \lambda\mathbf{v}$, then $latex \mathbf{v}$ is called an eigenvector of the matrix $latex \mathbf{A}$, with corresponding eigenvalue $latex \lambda$. Since any scaled version of $latex \mathbf{v}$ is also a valid eigenvector, we are usually only interested in unit eigenvectors. The eigen-decomposition of a matrix $latex \mathbf{A}$ is then written as $latex \mathbf{A} = \mathbf{V}\text{diag}(\mathbf{\lambda})\mathbf{V}^{-1}$, where $latex \mathbf{V}$ is a matrix whose every column is an eigenvector of $latex \mathbf{A}$, and $latex \mathbf{\lambda}$ is the vector of corresponding eigenvalues, which $latex \text{diag}(\mathbf{\lambda})$ places on the diagonal of a matrix. Although not every matrix is eigen-decomposable, every real symmetric matrix can be decomposed using only real-valued eigenvectors and eigenvalues, as $latex \mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^T$, where $latex \mathbf{Q}$ is an orthogonal matrix of eigenvectors and $latex \mathbf{\Lambda}$ is a diagonal matrix of eigenvalues. Even then, the decomposition may not be unique; for convenience, we can require the eigenvalues to be sorted in descending order, which makes the decomposition unique whenever the eigenvalues are all distinct. (A short NumPy sketch of this and each of the decompositions below follows the list.)

* Singular value decomposition (SVD): Eigen-decomposition is not defined for a non-square matrix, in which case we must use another kind of decomposition, the SVD. It is written as $latex \mathbf{A} = \mathbf{U}\mathbf{D}\mathbf{V}^T$, where $latex \mathbf{U}$ and $latex \mathbf{V}$ are orthogonal matrices and $latex \mathbf{D}$ is a diagonal matrix, not necessarily square, whose entries are called the singular values of the matrix. The columns of $latex \mathbf{U}$ are called the left-singular vectors, and the columns of $latex \mathbf{V}$ are called the right-singular vectors. There is an interesting relationship between the singular vectors and eigen-decomposition: the left-singular vectors are the eigenvectors of the matrix $latex \mathbf{A}\mathbf{A}^T$, the right-singular vectors are the eigenvectors of the matrix $latex \mathbf{A}^T\mathbf{A}$, and the non-zero singular values of $latex \mathbf{A}$ are the square roots of the non-zero eigenvalues of $latex \mathbf{A}\mathbf{A}^T$ (equivalently, of $latex \mathbf{A}^T\mathbf{A}$).

* Cholesky decomposition: A positive-definite matrix can be decomposed into a lower triangular matrix and its transpose (conjugate transpose, in the complex case) as $latex \mathbf{A} = \mathbf{L}\mathbf{L}^T$. Every Hermitian positive-definite matrix (and therefore every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.

* LDL decomposition: Related to the Cholesky decomposition, $latex \mathbf{A} = \mathbf{L}\mathbf{D}\mathbf{L}^T$, where $latex \mathbf{D}$ is a diagonal matrix, is called the LDL decomposition. It is possible for some indefinite matrices too, unlike Cholesky, which requires the matrix to be positive definite. In this case, though, $latex \mathbf{L}$ is required to be a unit lower triangular matrix, i.e. with ones on its diagonal.

* QR decomposition: $latex \mathbf{A} = \mathbf{Q}\mathbf{R}$, where $latex \mathbf{Q}$ is a square orthogonal matrix and $latex \mathbf{R}$ is an upper triangular matrix. It is typically used as an alternative way of solving systems of linear equations without explicitly computing the inverse: since $latex \mathbf{Q}^{-1} = \mathbf{Q}^T$, the system $latex \mathbf{A}\mathbf{x} = \mathbf{b}$ reduces to $latex \mathbf{R}\mathbf{x} = \mathbf{Q}^T\mathbf{b}$, which can be solved by back substitution.

* Rank factorization: For an $latex m \times n$ matrix $latex \mathbf{A}$ of rank $latex r$, $latex \mathbf{A} = \mathbf{C}\mathbf{F}$, where $latex \mathbf{C}$ is a full-rank matrix of size $latex m \times r$ and $latex \mathbf{F}$ is a full-rank matrix of size $latex r \times n$.
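
The snippets below are minimal NumPy/SciPy sketches of each decomposition above; the matrices are small, arbitrary examples chosen for illustration, not drawn from any particular application. First, the eigen-decomposition of a real symmetric matrix via `np.linalg.eigh`, which returns eigenvalues in ascending order, so we flip them to match the descending convention above.

```python
import numpy as np

# An arbitrary real symmetric matrix (this sketch's example).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# eigh is specialized for symmetric/Hermitian matrices: it returns real
# eigenvalues (ascending) and orthonormal eigenvectors as the columns of Q.
eigvals, Q = np.linalg.eigh(A)

# Reorder to the descending-eigenvalue convention mentioned above.
idx = np.argsort(eigvals)[::-1]
eigvals, Q = eigvals[idx], Q[:, idx]

# Verify A = Q diag(lambda) Q^T, and that A q = lambda q for a column q of Q.
assert np.allclose(A, Q @ np.diag(eigvals) @ Q.T)
assert np.allclose(A @ Q[:, 0], eigvals[0] * Q[:, 0])
```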
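
Next, the SVD of a non-square matrix, with a numerical check of the stated relationship between the singular values of $latex \mathbf{A}$ and the eigenvalues of $latex \mathbf{A}^T\mathbf{A}$; the random 4×3 input is just a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # a random non-square matrix

# full_matrices=False gives the "thin" SVD: U is 4x3, s holds the singular
# values in descending order, and Vt is V transposed (3x3).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vt)

# The squared singular values equal the eigenvalues of A^T A
# (eigvalsh returns eigenvalues in ascending order, hence the sort).
assert np.allclose(np.sort(s**2), np.linalg.eigvalsh(A.T @ A))
```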
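
For Cholesky we need a symmetric positive-definite input, so the sketch manufactures one as $latex \mathbf{B}\mathbf{B}^T + \mathbf{I}$, a standard trick rather than anything specific to this post.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T + np.eye(3)            # symmetric positive definite by construction

L = np.linalg.cholesky(A)          # the unique lower triangular factor
assert np.allclose(L, np.tril(L))  # L really is lower triangular
assert np.allclose(A, L @ L.T)     # and A = L L^T
```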
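
The LDL sketch uses `scipy.linalg.ldl` on a symmetric indefinite matrix, for which `np.linalg.cholesky` would raise an error. One caveat: SciPy may pivot, and in general its $latex \mathbf{D}$ can contain 2×2 blocks, but the reconstruction identity below holds either way.

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric indefinite matrix: eigenvalues 3 and -1, so no Cholesky
# factorization exists, but an LDL^T factorization still does.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

L, D, perm = ldl(A, lower=True)    # L is (permuted) unit lower triangular
assert np.allclose(A, L @ D @ L.T)
print(np.diag(D))                  # D absorbs the signs; expect a negative entry
```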
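
The QR sketch solves $latex \mathbf{A}\mathbf{x} = \mathbf{b}$ without ever forming $latex \mathbf{A}^{-1}$, using back substitution as described above.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
b = rng.standard_normal(3)

Q, R = np.linalg.qr(A)             # Q orthogonal, R upper triangular
# A x = b  =>  Q R x = b  =>  R x = Q^T b, since Q^{-1} = Q^T.
x = solve_triangular(R, Q.T @ b)   # back substitution on the triangular system
assert np.allclose(A @ x, b)
```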
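
Finally, a rank factorization. NumPy has no dedicated routine for this, so the sketch builds one from the thin SVD (one of several valid constructions, and merely this sketch's choice): take $latex \mathbf{C}$ as the first $latex r$ left-singular vectors and fold $latex \mathbf{D}$ into $latex \mathbf{F}$.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 4x5 matrix of rank 2, built as the product of thin random factors.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))         # numerical rank; 2 for this construction
C = U[:, :r]                       # m x r, full column rank
F = np.diag(s[:r]) @ Vt[:r]        # r x n, full row rank
assert np.allclose(A, C @ F)
```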
