# Eigenvalues: Spectral Decomposition

#### 2020-10-29

library(matlib)   # provides tr(), R(), and other matrix utilities used below

## Setup

This vignette uses a $$3 \times 3$$ matrix to illustrate some properties of eigenvalues and eigenvectors. We could consider this to be the variance-covariance matrix of three variables. The main thing is that the matrix is square and symmetric, which guarantees that the eigenvalues $$\lambda_i$$ are real numbers; because a covariance matrix is also positive semi-definite, the eigenvalues are in addition non-negative, $$\lambda_i \ge 0$$.

A <- matrix(c(13, -4, 2, -4, 11, -2, 2, -2, 8), 3, 3, byrow=TRUE)
A
##      [,1] [,2] [,3]
## [1,]   13   -4    2
## [2,]   -4   11   -2
## [3,]    2   -2    8
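
As a quick check of the symmetry assumption (a minimal sketch using base R's isSymmetric()):

isSymmetric(A)
##  TRUE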

Get the eigenvalues and eigenvectors using eigen(); this returns a named list, with the eigenvalues named values and the eigenvectors named vectors. We call these L and V here, but in formulas they correspond to a diagonal matrix, $$\mathbf{\Lambda} = \operatorname{diag}(\lambda_1, \lambda_2, \lambda_3)$$, and an orthogonal matrix, $$\mathbf{V}$$.

ev <- eigen(A)
# extract components
(L <- ev$values)
##  17  8  7
(V <- ev$vectors)
##         [,1]    [,2]   [,3]
## [1,]  0.7454  0.6667 0.0000
## [2,] -0.5963  0.6667 0.4472
## [3,]  0.2981 -0.3333 0.8944
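
Because $$\mathbf{A}$$ is symmetric, the eigenvectors in V are orthonormal, so $$\mathbf{V}' \mathbf{V} = \mathbf{I}$$. A quick check, using zapsmall() to suppress rounding error:

zapsmall(crossprod(V))   # V'V = I
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1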

## Matrix factorization

1. Factorization of A: A = V diag(L) V'. That is, the matrix $$\mathbf{A}$$ can be represented as the product $$\mathbf{A} = \mathbf{V} \mathbf{\Lambda} \mathbf{V}'$$.
V %*% diag(L) %*% t(V)
##      [,1] [,2] [,3]
## [1,]   13   -4    2
## [2,]   -4   11   -2
## [3,]    2   -2    8
1. V diagonalizes A: L = V' A V. That is, the matrix $$\mathbf{V}$$ transforms $$\mathbf{A}$$ into the diagonal matrix $$\mathbf{\Lambda} = \mathbf{V}' \mathbf{A} \mathbf{V}$$, corresponding to orthogonal (uncorrelated) variables.
diag(L)
##      [,1] [,2] [,3]
## [1,]   17    0    0
## [2,]    0    8    0
## [3,]    0    0    7
zapsmall(t(V) %*% A %*% V)
##      [,1] [,2] [,3]
## [1,]   17    0    0
## [2,]    0    8    0
## [3,]    0    0    7
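
Two familiar consequences of this factorization: the trace of $$\mathbf{A}$$ equals the sum of the eigenvalues, and the determinant equals their product. A quick check in base R:

c( sum(diag(A)), sum(L) )    # trace = sum of eigenvalues
##  32  32
c( det(A), prod(L) )         # determinant = product of eigenvalues
##  952  952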

## Spectral decomposition

The basic idea here is that each eigenvalue–eigenvector pair generates a rank 1 matrix, $$\lambda_i \mathbf{v}_i \mathbf{v}_i '$$, and these sum to the original matrix, $$\mathbf{A} = \sum_i \lambda_i \mathbf{v}_i \mathbf{v}_i '$$.

A1 <- L[1] * V[,1] %*% t(V[,1])   # lambda_1 * v_1 v_1'
A1
##        [,1]   [,2]   [,3]
## [1,]  9.444 -7.556  3.778
## [2,] -7.556  6.044 -3.022
## [3,]  3.778 -3.022  1.511
A2 <- L[2] * V[,2] %*% t(V[,2])   # lambda_2 * v_2 v_2'
A2
##        [,1]   [,2]    [,3]
## [1,]  3.556  3.556 -1.7778
## [2,]  3.556  3.556 -1.7778
## [3,] -1.778 -1.778  0.8889
A3 <- L[3] * V[,3] %*% t(V[,3])   # lambda_3 * v_3 v_3'
A3
##      [,1] [,2] [,3]
## [1,]    0  0.0  0.0
## [2,]    0  1.4  2.8
## [3,]    0  2.8  5.6
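
Each of these components should have rank 1; matlib's R() function (used again below) confirms this:

c( R(A1), R(A2), R(A3) )
##  1  1  1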

Then, summing them reproduces the original matrix, so these rank-1 components do decompose A:

A1 + A2 + A3
##      [,1] [,2] [,3]
## [1,]   13   -4    2
## [2,]   -4   11   -2
## [3,]    2   -2    8
all.equal(A, A1+A2+A3)
##  TRUE
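
As a more compact sketch (using base R's lapply() and Reduce(), not part of the original code), the whole sum can be formed in one expression:

Ai <- lapply(1:3, function(i) L[i] * V[,i] %*% t(V[,i]))   # list of rank-1 terms
Reduce("+", Ai)                                            # their sum recovers A
##      [,1] [,2] [,3]
## [1,]   13   -4    2
## [2,]   -4   11   -2
## [3,]    2   -2    8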

### Further properties

1. The sum of squares of A equals the total of the sums of squares of A1, A2, and A3:
sum(A^2)
##  402
c( sum(A1^2), sum(A2^2), sum(A3^2) )
##  289  64  49
sum( sum(A1^2), sum(A2^2), sum(A3^2) )
##  402
# same as tr(A'A)
tr(crossprod(A))
##  402
1. Each squared eigenvalue gives the sum of squares accounted for by the corresponding latent vector:
L^2
##  289  64  49
cumsum(L^2)   # cumulative
##  289 353 402
1. The first $$i$$ eigenvalues and vectors give a rank $$i$$ approximation to $$\mathbf{A}$$; a check on the approximation error follows this list.
R(A1)
##  1
R(A1 + A2)
##  2
R(A1 + A2 + A3)
##  3
# two dimensions
sum((A1+A2)^2)
##  353
sum((A1+A2)^2) / sum(A^2)   # proportion
##  0.8781
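
Equivalently, the residual of the rank-2 approximation is exactly the omitted rank-1 component $$\mathbf{A}_3$$, so its sum of squares equals $$\lambda_3^2$$:

sum( (A - (A1 + A2))^2 )   # residual sum of squares
##  49
L[3]^2
##  49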