Low Dimensional Topology

May 8, 2012

Principal Component Analysis

Filed under: Data topology — Jesse Johnson @ 11:58 am

I will continue my series of posts on the geometry and topology of big data with a description of principal component analysis (PCA), a technique from the statistics side of data analysis. As in my last post, there will be nothing explicitly related to or drawn from low dimensional topology, but bear with me. After a few more posts about the standard approaches to these problems, I will address how ideas drawn from topology can be applied to these sorts of problems. As mentioned before, I believe these applications have the potential to enrich the pure field with new problems and new techniques.

In statistics, given a set of numbers \{ x_i \} with mean 0, the variance of the set is the sum \sum x_i^2 (ignoring the usual normalization by the number of points, which won't matter here). This measures how tightly packed the numbers are around 0. Given a set of vectors, we can calculate the variance in each dimension, i.e. we take all the values in the first coordinate of the different vectors and calculate the variance of this set, and so on. (We will assume that the vectors have been translated so that the mean in each dimension is zero.)
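
For concreteness, here is a minimal numpy sketch, with a small made-up set of two-dimensional vectors, that centers the data and computes the variance in each dimension, following the convention above of using the plain sum of squares:

    import numpy as np

    # Toy data: five two-dimensional vectors, one per row.
    X = np.array([[ 2.0,  1.0],
                  [ 1.0,  3.0],
                  [-1.0, -2.0],
                  [ 0.0, -1.0],
                  [-2.0, -1.0]])

    # Translate so that the mean in each dimension is zero.
    X = X - X.mean(axis=0)

    # Variance of each dimension: the sum of squares down each column.
    variance = (X ** 2).sum(axis=0)
    print(variance)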

For a set of two-dimensional vectors \{ (x_i, y_i)\}, the covariance of the two dimensions is \sum x_i y_i. This measures how the two dimensions vary together, i.e. how likely it is for y_i to be large when x_i is large, and so on. For any set of n-dimensional vectors, we can calculate the covariance for any pair of dimensions, and we can think of the results as an n \times n matrix (with the variance of each dimension down the diagonal), called the covariance matrix.
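
Continuing with the same toy data, a sketch that builds the covariance matrix entry by entry might look like this:

    import numpy as np

    # Same made-up data as above; the columns already have mean zero.
    X = np.array([[ 2.0,  1.0],
                  [ 1.0,  3.0],
                  [-1.0, -2.0],
                  [ 0.0, -1.0],
                  [-2.0, -1.0]])

    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Covariance of dimensions i and j: sum over all vectors of the
            # product of the i-th and j-th entries.
            C[i, j] = (X[:, i] * X[:, j]).sum()
    print(C)  # variances on the diagonal, covariances off the diagonal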

If we think of the set of k vectors of dimension n as a k \times n matrix X, with each vector as a row, then the covariance matrix is the product C = X^T X (where X^T is the transpose). Because of this definition, the reader can check that if we apply a basis change to the vectors, defined by a matrix M, then the covariance matrix becomes M C M^T.
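
Here is a quick numerical check of both claims, again with the toy data above and an arbitrary invertible matrix M standing in for the basis change:

    import numpy as np

    X = np.array([[ 2.0,  1.0],
                  [ 1.0,  3.0],
                  [-1.0, -2.0],
                  [ 0.0, -1.0],
                  [-2.0, -1.0]])

    # Covariance matrix as a single matrix product: C = X^T X.
    C = X.T @ X

    # A basis change sending each vector v to Mv acts on the rows of X as X M^T,
    # so the covariance matrix of the transformed data is M C M^T.
    M = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    X_new = X @ M.T
    print(np.allclose(X_new.T @ X_new, M @ C @ M.T))  # True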

Since C is a symmetric matrix, we can find an orthogonal matrix M so that M C M^T is a diagonal matrix, i.e. all the covariances are zero. So in this new basis for the data vectors, the data is “evenly balanced” between the variables. The intuition from the statistical point of view is that the new basis vectors (which are eigenvectors of C) are the main axes (i.e. the principal components) of the set of data. In particular, this is the case if you take a Gaussian distribution and apply a linear transformation that stretches it more in some directions than in others.
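
In numpy this diagonalization is the standard symmetric eigenvalue decomposition; taking M to have the (orthonormal) eigenvectors of C as its rows, M C M^T comes out diagonal:

    import numpy as np

    # Same made-up data as before.
    X = np.array([[ 2.0,  1.0],
                  [ 1.0,  3.0],
                  [-1.0, -2.0],
                  [ 0.0, -1.0],
                  [-2.0, -1.0]])
    C = X.T @ X

    # eigh is the eigensolver for symmetric matrices; it returns real
    # eigenvalues and orthonormal eigenvectors (as the columns of V).
    eigenvalues, V = np.linalg.eigh(C)

    M = V.T  # rows of M are the eigenvectors of C
    print(np.round(M @ C @ M.T, 10))  # diagonal, with the eigenvalues down the diagonal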

The eigenvalues of C, which are the diagonal entries of M C M^T, give you the variance along the main axes, i.e. how much the data is stretched in each direction. The way this is most often used is to orthogonally project onto the few principal components with the largest eigenvalues. In particular, if there is a sudden drop-off in the eigenvalues – the first few eigenvalues are much larger than the later ones – this suggests that the data set is very flat in most directions, so we should project onto the eigenvectors corresponding to the larger eigenvalues. For visualization, however, it is more common to project onto the two eigenvectors with the largest eigenvalues, regardless of how large the remaining eigenvalues are. As you can imagine, this has the potential to greatly distort the picture.
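
A sketch of the projection step, this time with made-up three-dimensional data projected orthogonally onto the two principal components with the largest eigenvalues:

    import numpy as np

    # Toy data: five three-dimensional vectors, one per row.
    X = np.array([[ 2.0,  1.0,  0.5],
                  [ 1.0,  3.0, -0.5],
                  [-1.0, -2.0,  0.1],
                  [ 0.0, -1.0,  0.2],
                  [-2.0, -1.0, -0.3]])
    X = X - X.mean(axis=0)
    C = X.T @ X

    # eigh returns eigenvalues in ascending order; reorder so the largest come first.
    eigenvalues, V = np.linalg.eigh(C)
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, V = eigenvalues[order], V[:, order]

    # Orthogonal projection of the data onto the top two principal components.
    projected = X @ V[:, :2]
    print(eigenvalues)
    print(projected)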

As a topologist, of course, you should be skeptical that it is a good idea to restrict oneself to linear projections, particularly when projecting from many dimensions down to two. The main advantage of this type of method is that it uses basic linear algebra operations and is thus very computationally efficient. In my next post I will discuss a trick called a kernel that can be used to make algorithms like PCA less linear without giving up too much computational efficiency.

2 Comments

  1. I think you want to say “eigenvectors of C” and “eigenvalues of C” instead of “eigenvectors of M” and “eigenvalues of M”, i.e., PCA is essentially the spectral decomposition of the covariance matrix (M in this case is the matrix of eigenvectors)

    Comment by hrf — March 30, 2013 @ 8:20 am

