7.3 Factor analysis and principal-components analysis
Retaining the Gaussian emissions from the GMM but exchanging the categorical latent variable for a standard normal variate yields "factor analysis" (see Section 2.1.2). We also restrict the emission covariance $\boldsymbol{\Sigma}$ to be diagonal, in order to remove a degree of freedom that is not (as we shall see) identifiable from the data. The model is fully described by Eq. 2.20. However, we depart slightly from that formulation by augmenting the latent variable vector $\mathbf{x}$ with a "random" scalar that is 1 with probability 1. This allows us to absorb the bias into a column of the emission matrix $\mathbf{C}$, reducing clutter without reducing generality.
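The augmentation trick can be sketched in a few lines of NumPy. This is only an illustration of sampling from the generative model; the dimensions, variable names, and noise level are our own choices, not the book's.

```python
import numpy as np

# A minimal sketch of sampling from the factor-analysis model, with the
# bias absorbed into the last column of the emission matrix via an
# always-1 latent component. All names and sizes here are illustrative.
rng = np.random.default_rng(0)
K, M, N = 3, 8, 500                      # latent dim., observed dim., no. of samples

C = rng.standard_normal((M, K + 1))      # last column acts as the bias
sigma = 0.5 * np.ones(M)                 # diagonal emission standard deviations

X = rng.standard_normal((K, N))          # standard-normal latents
X_aug = np.vstack([X, np.ones((1, N))])  # append the "random" scalar that is 1 w.p. 1
Y = C @ X_aug + sigma[:, None] * rng.standard_normal((M, N))
```

Note that the sample mean of each row of `Y` approximates the corresponding entry of the absorbed bias column, exactly as the augmentation intends.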
The learning problem starts once again with minimization of the joint cross entropy:
\[
\min_{\mathbf{C},\,\boldsymbol{\Sigma}}\; \big\langle -\log \hat{p}\big(\mathbf{X}, \mathbf{Y};\, \mathbf{C}, \boldsymbol{\Sigma}\big) \big\rangle,
\]
where the angle brackets denote an expectation under the data distribution for $\mathbf{Y}$ and under the posterior distribution (computed in the E step) for $\mathbf{X}$; we write $\langle\cdot\rangle$ for such expectations throughout.
The M step.
The model prior distribution does not depend on any parameters, so only the model emission is differentiated. Starting with the emission matrix $\mathbf{C}$, setting the gradient of the cross entropy to zero yields
\[
\mathbf{C} = \big\langle \mathbf{y}\mathbf{x}^{\mathrm{T}} \big\rangle \big\langle \mathbf{x}\mathbf{x}^{\mathrm{T}} \big\rangle^{-1},
\]
the normal equations. Thus, in a fully observed model, finding $\mathbf{C}$ amounts to linear regression.
The emission covariance also takes on a familiar form:
\[
\boldsymbol{\Sigma} = \big\langle (\mathbf{y} - \mathbf{C}\mathbf{x})(\mathbf{y} - \mathbf{C}\mathbf{x})^{\mathrm{T}} \big\rangle.
\]
The final line can be simplified using our newly acquired formula for $\mathbf{C}$. First expanding the quadratic and then applying the identity $\mathbf{C}\langle \mathbf{x}\mathbf{x}^{\mathrm{T}}\rangle = \langle \mathbf{y}\mathbf{x}^{\mathrm{T}}\rangle$:
\[
\boldsymbol{\Sigma}
= \big\langle \mathbf{y}\mathbf{y}^{\mathrm{T}}\big\rangle
- \mathbf{C}\big\langle \mathbf{x}\mathbf{y}^{\mathrm{T}}\big\rangle
- \big\langle \mathbf{y}\mathbf{x}^{\mathrm{T}}\big\rangle\mathbf{C}^{\mathrm{T}}
+ \mathbf{C}\big\langle \mathbf{x}\mathbf{x}^{\mathrm{T}}\big\rangle\mathbf{C}^{\mathrm{T}}
= \big\langle \mathbf{y}\mathbf{y}^{\mathrm{T}}\big\rangle - \mathbf{C}\big\langle \mathbf{x}\mathbf{y}^{\mathrm{T}}\big\rangle.
\]
Now, we require $\boldsymbol{\Sigma}$ to be diagonal. It may be observed that the derivative with respect to any particular entry of $\boldsymbol{\Sigma}$ is independent of all other entries, so simply setting some components to zero does not change the optimum for the other components. So we merely extract the diagonal from the final equation:
\[
\boldsymbol{\Sigma} = \operatorname{diag}\!\big( \big\langle \mathbf{y}\mathbf{y}^{\mathrm{T}} \big\rangle - \mathbf{C}\big\langle \mathbf{x}\mathbf{y}^{\mathrm{T}} \big\rangle \big).
\]
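The M step above can be sketched directly in NumPy. This is a hedged illustration, not the book's code: the function name `m_step` and the argument names (the expected sufficient statistics, averaged over samples) are our own.

```python
import numpy as np

def m_step(Eyx, Exx, Eyy):
    """One M step of EM for factor analysis (illustrative sketch).

    Eyx = <y x^T>, Exx = <x x^T>, Eyy = <y y^T> are the expected
    sufficient statistics, averaged over samples.
    """
    C = Eyx @ np.linalg.inv(Exx)               # the normal equations
    Sigma = np.diag(np.diag(Eyy - C @ Eyx.T))  # extract the diagonal
    return C, Sigma
```

In practice one would solve the normal equations with `np.linalg.solve` rather than an explicit inverse; the inverse is kept here to mirror the equation.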
The E step.
In Section 2.1.2, we derived the posterior distribution for factor analysis:
\[
\mathbf{X} \mid \mathbf{y} \sim \mathcal{N}\!\big( \boldsymbol{\Sigma}_{x|y}\mathbf{C}^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}\mathbf{y},\; \boldsymbol{\Sigma}_{x|y} \big),
\qquad
\boldsymbol{\Sigma}_{x|y} := \big( \mathbf{C}^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}\mathbf{C} + \mathbf{I} \big)^{-1}.
\tag{7.3}
\]
In the E step, then, the expected sufficient statistics for $\langle \mathbf{x}\mathbf{x}^{\mathrm{T}}\rangle$ and $\langle \mathbf{y}\mathbf{x}^{\mathrm{T}}\rangle$ are calculated as
\[
\big\langle \mathbf{x}\mathbf{x}^{\mathrm{T}} \big\rangle = \boldsymbol{\Sigma}_{x|y} + \frac{1}{N}\sum_{n=1}^{N} \boldsymbol{\mu}_n \boldsymbol{\mu}_n^{\mathrm{T}},
\qquad
\boldsymbol{\mu}_n := \boldsymbol{\Sigma}_{x|y}\mathbf{C}^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}\mathbf{y}_n,
\]
and
\[
\big\langle \mathbf{y}\mathbf{x}^{\mathrm{T}} \big\rangle = \frac{1}{N}\sum_{n=1}^{N} \mathbf{y}_n \boldsymbol{\mu}_n^{\mathrm{T}}.
\]
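The E step admits an equally short sketch. Again the function and variable names (`e_step`, `Y` holding one sample per column) are our own conventions, assumed for illustration.

```python
import numpy as np

def e_step(Y, C, Sigma):
    """Posterior moments from Eq. 7.3, accumulated into the expected
    sufficient statistics (illustrative sketch). Y is (M, N), one
    sample per column; C is (M, K); Sigma is (M, M) and diagonal.
    """
    K, N = C.shape[1], Y.shape[1]
    Sinv = np.linalg.inv(Sigma)
    cov = np.linalg.inv(C.T @ Sinv @ C + np.eye(K))  # posterior covariance
    Mu = cov @ C.T @ Sinv @ Y                        # posterior means (one per column)
    Exx = cov + (Mu @ Mu.T) / N
    Eyx = (Y @ Mu.T) / N
    return Exx, Eyx, Mu
```

As a sanity check, in the small-noise limit the posterior means approach the pseudo-inverse projection derived in the next subsection.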
7.3.1 Principal-components analysis
We saw that in the limit of equal and infinite emission precisions, EM for the GMM reduces to $K$-means. Now we investigate this limit in the case of EM for the factor analyzer. In this case the only parameter to estimate is $\mathbf{C}$.
From Eq. 7.3, we see that the posterior covariance goes to zero as the emission precision goes to infinity: inference becomes deterministic. With slightly more work, we can also determine the mean, $\boldsymbol{\mu}_{x|y}$, to which each $\mathbf{y}$ is deterministically assigned. Setting $\boldsymbol{\Sigma} = \sigma^2\mathbf{I}$ and letting $\sigma^2 \to 0$, we find
\[
\boldsymbol{\mu}_{x|y}
= \lim_{\sigma^2 \to 0} \big( \mathbf{C}^{\mathrm{T}}\mathbf{C} + \sigma^2\mathbf{I} \big)^{-1} \mathbf{C}^{\mathrm{T}}\mathbf{y}
= \big( \mathbf{C}^{\mathrm{T}}\mathbf{C} \big)^{-1} \mathbf{C}^{\mathrm{T}}\mathbf{y}.
\]
The final expression applies the Moore-Penrose pseudo-inverse of $\mathbf{C}$; i.e., $\boldsymbol{\mu}_{x|y}$ is the latent-space projection of $\mathbf{y}$ that yields the smallest reconstruction error under the emission matrix $\mathbf{C}$.
[……]
Iterative Principal-Components Analysis [Tipping1999]
E step: $\mathbf{X} = \big(\mathbf{C}^{\mathrm{T}}\mathbf{C}\big)^{-1}\mathbf{C}^{\mathrm{T}}\mathbf{Y}$
M step: $\mathbf{C} = \mathbf{Y}\mathbf{X}^{\mathrm{T}}\big(\mathbf{X}\mathbf{X}^{\mathrm{T}}\big)^{-1}$
Here $\mathbf{Y}$ collects the (centered) observations into columns, and $\mathbf{X}$ the corresponding latent projections.
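The two steps of the algorithm can be sketched as a short loop. This is an illustration under our own naming conventions (`pca_em`, data in columns of `Y`), not a reference implementation.

```python
import numpy as np

def pca_em(Y, K, n_iters=50, seed=0):
    """Iterative PCA sketch: alternate the pseudo-inverse projection
    (E step) with the normal equations (M step)."""
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((Y.shape[0], K))
    for _ in range(n_iters):
        X = np.linalg.solve(C.T @ C, C.T @ Y)  # E step: pseudo-inverse projection
        C = Y @ X.T @ np.linalg.inv(X @ X.T)   # M step: regress Y onto X
    return C, X
```

At convergence the columns of `C` span the principal subspace of the data, though they are not in general orthonormal or ordered; orthogonalizing `C` recovers the principal directions up to rotation within the subspace.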