9.1 InfoMax ICA, revisited
Historically, this equivalence was noted first [3] for a specific model, InfoMax ICA [1], which we first encountered in Section 5.2. Consider the very simple “generative model” in which the observations are related to the “latent” variables by a square, full-rank matrix:

$$\mathbf{x} = \mathbf{W}^{-1}\mathbf{s}, \qquad \text{i.e.} \qquad \mathbf{s} = \mathbf{W}\mathbf{x}.$$
Substituting this relationship (cf. Eq. 9.1) into Eq. 9.3, we see that the marginal distribution of the observed variables is

$$p(\mathbf{x}) = \left|\det \mathbf{W}\right| \prod_i \sigma_i'\!\left(\mathbf{w}_i^\mathsf{T} \mathbf{x}\right), \tag{9.5}$$

where again $\sigma_i$ is the CDF of the corresponding “latent” variable, $s_i$ (so its derivative $\sigma_i'$ is that variable’s density), and $\mathbf{w}_i^\mathsf{T}$ is a row of $\mathbf{W}$. Clearly, fitting this marginal density follows the same gradient as in InfoMax ICA, Eq. 5.31.
That is, InfoMax ICA can be implemented as density estimation in a generative model with latent variables distributed independently and cumulatively according to $\sigma_i$ [3]; see Fig. 9.1.
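To make this equivalence concrete, the following sketch (our own illustration in NumPy; the function names are not from the text) verifies by finite differences that the gradient of the sample average of $\log p(\mathbf{x})$ in Eq. 9.5, with logistic $\sigma_i$, coincides with the InfoMax gradient, which for logistic outputs takes the familiar Bell–Sejnowski form $\mathbf{W}^{-\mathsf{T}} + \mathbb{E}\big[(1 - 2\sigma(\mathbf{W}\mathbf{x}))\mathbf{x}^\mathsf{T}\big]$.

    import numpy as np

    rng = np.random.default_rng(0)

    def avg_loglik(W, X):
        # average of log p(x) = log|det W| + sum_i log sigma_i'(w_i^T x)
        # (Eq. 9.5), with sigma the logistic CDF, so sigma' = sigma*(1 - sigma)
        S = 1.0 / (1.0 + np.exp(-(X @ W.T)))
        return (np.log(np.abs(np.linalg.det(W)))
                + np.log(S * (1.0 - S)).sum(axis=1)).mean()

    def infomax_grad(W, X):
        # the InfoMax/Bell-Sejnowski gradient: W^{-T} + E[(1 - 2 sigma(Wx)) x^T]
        S = 1.0 / (1.0 + np.exp(-(X @ W.T)))
        return np.linalg.inv(W).T + (1.0 - 2.0 * S).T @ X / len(X)

    d, N, eps = 3, 500, 1e-6
    X = rng.laplace(size=(N, d)) @ rng.standard_normal((d, d))
    W = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # keep W well-conditioned
    G = infomax_grad(W, X)
    for i in range(d):                                 # finite-difference check
        for j in range(d):
            dW = np.zeros((d, d)); dW[i, j] = eps
            num = (avg_loglik(W + dW, X) - avg_loglik(W - dW, X)) / (2 * eps)
            assert np.isclose(num, G[i, j], atol=1e-5)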
But we haven’t specified $\sigma_i$! This omission may have seemed minor in the discriminative model—sigmoidal nonlinearities in neural networks are typically selected rather freely—but is striking in a generative model. And indeed, the choice matters. Suppose we had let the sigmoidal function be the CDF of a standard normal distribution. Then since we are modeling the observations as linear functions of the latent variables, $\mathbf{x} = \mathbf{W}^{-1}\mathbf{s}$, their marginal distribution (Eq. 9.5) is clearly another mean-zero normal distribution, in particular $\mathbf{x} \sim \mathcal{N}\big(\mathbf{0},\, (\mathbf{W}^\mathsf{T}\mathbf{W})^{-1}\big)$††margin: Exercise LABEL:ex:ICAwithGaussianCDFsMarginal . Minimizing the loss in Eq. 9.4 then amounts merely to fitting the covariance of the observed data: $(\mathbf{W}^\mathsf{T}\mathbf{W})^{-1} = \frac{1}{N}\sum_n \mathbf{x}_n \mathbf{x}_n^\mathsf{T}$. (This can also be shown by setting the gradient of Eq. 9.4, i.e. Eq. 5.31, to zero and solving for $\mathbf{W}$.††margin: Exercise LABEL:ex:ICAwithGaussianCDFsMinimizer )
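The covariance-fitting claim can be checked numerically. In the sketch below (again ours, not the text’s), the gradient of the average log-likelihood under a Gaussian-CDF $\sigma_i$ reduces to $\mathbf{W}^{-\mathsf{T}} - \mathbf{W}\hat{\boldsymbol{\Sigma}}$; it vanishes at a whitening matrix $\mathbf{W}_0$, and equally at any rotation $\mathbf{R}\mathbf{W}_0$ of it, a first hint that this choice of $\sigma$ sees nothing beyond second-order statistics.

    import numpy as np

    rng = np.random.default_rng(1)
    d, N = 3, 2000
    X = rng.laplace(size=(N, d)) @ rng.standard_normal((d, d))  # non-normal data
    Sigma = X.T @ X / N                                         # sample covariance

    def gauss_cdf_grad(W):
        # with sigma the standard normal CDF, log p(x) (Eq. 9.5) becomes
        # log|det W| - ||Wx||^2/2 - (d/2) log(2 pi), so the gradient of its
        # average depends on the data only through Sigma: W^{-T} - W Sigma
        return np.linalg.inv(W).T - W @ Sigma

    # a whitening matrix: W0 Sigma W0^T = I, i.e. (W0^T W0)^{-1} = Sigma
    lam, V = np.linalg.eigh(Sigma)
    W0 = np.diag(lam ** -0.5) @ V.T

    R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # an arbitrary orthogonal matrix
    print(np.abs(gauss_cdf_grad(W0)).max())           # ~0: W0 is optimal...
    print(np.abs(gauss_cdf_grad(R @ W0)).max())       # ...but so is R @ W0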
If the observations are indeed normal, then whitening them in this way would render them independent (since for jointly Gaussian random variables, uncorrelatedness implies independence)—but we do not need such an elaborate procedure to arrive at this conclusion! ICA is of interest precisely when the observations are not normal, in which case the optimal linear transformation cannot generally be stated a priori. Critically, squashing the data with the Gaussian CDF makes the outputs blind to higher-order correlations, and is therefore not a suitable nonlinearity in cases of interest. In contrast, the density corresponding to the (standard) logistic function is super-Gaussian (leptokurtic), so InfoMax ICA with logistic outputs will generally do more than decorrelate its inputs. This may seem remarkable, given the visually minor discrepancy between the Gaussian CDF and the logistic function (Fig. LABEL:fig:; B.A. Olshausen, personal communication). Now we see the advantage of the generative perspective, from which this difference is more salient—and, at long last, we can shed light on how to choose the feedforward nonlinearities, $\sigma_i$, in InfoMax ICA.
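Finally, a toy run showing the logistic nonlinearity earning its keep. This sketch is ours; for stability it uses the natural-gradient form of the InfoMax update (the gradient right-multiplied by $\mathbf{W}^\mathsf{T}\mathbf{W}$, a standard device that does not change the fixed points). On Laplacian (super-Gaussian) sources, the learned $\mathbf{W}$ inverts the mixing up to permutation and scale, rather than merely whitening.

    import numpy as np

    rng = np.random.default_rng(2)
    d, N = 2, 5000
    A = rng.standard_normal((d, d))       # ground-truth mixing matrix
    S = rng.laplace(size=(N, d))          # super-Gaussian (Laplacian) sources
    X = S @ A.T                           # observations x = A s

    W = np.eye(d)
    for _ in range(1000):
        U = X @ W.T                       # putative latents s = W x
        Y = 1.0 / (1.0 + np.exp(-U))      # logistic outputs
        # natural-gradient InfoMax update: (I + E[(1 - 2y) u^T]) W
        W += 0.1 * (np.eye(d) + (1.0 - 2.0 * Y).T @ U / N) @ W

    # success: W A is close to a scaled permutation, not merely a whitener
    P = W @ A
    print(np.round(P / np.abs(P).max(axis=1, keepdims=True), 2))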