9.2 Nonlinear independent-component estimation
We now expand our view to nonlinear, albeit still invertible, transformations [6, 40, 7, 24]. In particular, consider a “generative function” that consists of a series of invertible transformations. Once again, to emphasize that it is the inverse of a recognition or discriminative function, we write it as that function's inverse:
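For instance, writing the individual invertible transformations with placeholder symbols $f_1, \ldots, f_K$, the latent variables as $\boldsymbol{z}$, and the data as $\boldsymbol{y}$ (the symbols here are illustrative and may differ from those used elsewhere in the text), such a composition might be sketched as
\[
\boldsymbol{y} \;=\; f^{-1}(\boldsymbol{z}) \;=\; f_1^{-1}\!\circ f_2^{-1}\circ\cdots\circ f_K^{-1}(\boldsymbol{z}),
\qquad\text{equivalently}\qquad
\boldsymbol{z} \;=\; f(\boldsymbol{y}) \;=\; f_K\circ\cdots\circ f_2\circ f_1(\boldsymbol{y}).
\]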
This change of variables is called a flow [40]. Let us still assume a factorial prior, Eq. 9.2, and furthermore that it does not depend on any parameters. Since the transformations are (by assumption) invertible, the change-of-variables formula still applies. Therefore, Eq. 9.3 still holds, but the Jacobian determinant of composed functions becomes the product of the individual Jacobian determinants:
with Jacobians given by
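In the same placeholder notation (with intermediate variables $\boldsymbol{u}_1 := \boldsymbol{y}$ and $\boldsymbol{u}_{k+1} := f_k(\boldsymbol{u}_k)$, names assumed here for illustration), the product formula and its Jacobians might together be sketched as
\[
\hat{p}(\boldsymbol{y}) \;=\; p_{\boldsymbol{z}}\!\big(f(\boldsymbol{y})\big)\,
\prod_{k=1}^{K}\big|\det\boldsymbol{J}_k\big|,
\qquad
\boldsymbol{J}_k \;=\; \frac{\partial f_k}{\partial\boldsymbol{u}_k}(\boldsymbol{u}_k),
\]
that is, each Jacobian is evaluated at the intermediate variable produced by the preceding transformations.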
(For the sake of writing derivatives, we have given a name to the argument of each function in the composition; each Jacobian is then evaluated at the output of the preceding transformations.) The functions induced by multiplying the initial distribution (at left) by, in turn, the determinants of each of the Jacobians (at right) are “automatically” normalized and positive, and consequently valid probability distributions. This sequence is accordingly called a normalizing flow [40].
Since the generative function is invertible, we can certainly compute the arguments to the Jacobians. However, to keep the problem tractable, we also need to be able to compute the Jacobian determinants efficiently. Generically, this computation is cubic in the dimension of the data. This is intolerable, so we will generally limit the expressiveness of each transformation in order to achieve something more practical.
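To make the computational point concrete, here is a small numerical sketch (the code and names are illustrative, not from the text): the log-determinant of a generic $D \times D$ Jacobian requires an $O(D^3)$ factorization, whereas for a triangular Jacobian it is an $O(D)$ sum over the diagonal, which is exactly the kind of structure the restricted transformations below provide.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 500

# Generic (dense) Jacobian: slogdet performs an O(D^3) LU factorization.
J_dense = rng.standard_normal((D, D))
_, logdet_dense = np.linalg.slogdet(J_dense)

# Triangular Jacobian: the determinant is the product of the diagonal entries,
# so the log-determinant is just a sum of D logarithms -- an O(D) operation.
J_tri = np.triu(rng.standard_normal((D, D)))
np.fill_diagonal(J_tri, np.abs(np.diag(J_tri)) + 0.1)  # keep the diagonal away from zero
logdet_tri = np.sum(np.log(np.abs(np.diag(J_tri))))

# Sanity check: the cheap diagonal formula agrees with the generic routine.
assert np.isclose(logdet_tri, np.linalg.slogdet(J_tri)[1])
```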
Perhaps the most obvious limitation is to require that the transformations be “volume preserving”; that is, to require that their Jacobian determinants always be unity [6]. This can be achieved, for example, by splitting the data vector into two parts and requiring (1) that, at any particular step of the flow, only one of the parts may depend on the other (this ensures that the Jacobian is block triangular); and (2) that each part depends on its own previous value only through an identity transformation (this ensures that the two diagonal blocks of the Jacobian are identity matrices). In equations,
… [[multiple layers of this]]
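As an illustrative sketch of one such coupling step (the particular split into halves $\boldsymbol{u}_A, \boldsymbol{u}_B$ and the shift function $m$ are placeholders in the spirit of [6], not necessarily the text's own definitions): one half is passed through unchanged, while the other is shifted by an arbitrary function of the first,
\[
\boldsymbol{v}_A = \boldsymbol{u}_A,
\qquad
\boldsymbol{v}_B = \boldsymbol{u}_B + m(\boldsymbol{u}_A),
\qquad
\frac{\partial \boldsymbol{v}}{\partial \boldsymbol{u}} =
\begin{bmatrix} \boldsymbol{I} & \boldsymbol{0} \\ \partial m/\partial\boldsymbol{u}_A & \boldsymbol{I} \end{bmatrix}.
\]
The Jacobian is block triangular with identity matrices on its diagonal, so its determinant is unity; and the step is inverted exactly by subtracting $m(\boldsymbol{v}_A)$ from $\boldsymbol{v}_B$. Successive layers swap the roles of the two halves so that every coordinate is eventually transformed.

A minimal, self-contained sketch of such an additive coupling step in Python (the function names and the tiny fixed-weight network standing in for $m$ are assumptions for illustration, not an implementation from the text or from [6]):

```python
import numpy as np

def shift_fn(x, W1, b1, W2, b2):
    """A tiny fixed-weight network standing in for the learned coupling function m(.)."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def coupling_forward(u, params):
    """One additive (volume-preserving) coupling step: leave one half alone,
    shift the other half by a function of it.  Jacobian determinant = 1."""
    d = u.shape[-1] // 2
    u_a, u_b = u[..., :d], u[..., d:]
    v_a = u_a                               # identity on the first half
    v_b = u_b + shift_fn(u_a, *params)      # shift the second half
    return np.concatenate([v_a, v_b], axis=-1)

def coupling_inverse(v, params):
    """Exact inverse: recompute the shift from the untouched half and subtract it."""
    d = v.shape[-1] // 2
    v_a, v_b = v[..., :d], v[..., d:]
    u_a = v_a
    u_b = v_b - shift_fn(v_a, *params)
    return np.concatenate([u_a, u_b], axis=-1)

# Quick check of invertibility on random data.
rng = np.random.default_rng(0)
D, H = 6, 16
params = (rng.standard_normal((D // 2, H)), np.zeros(H),
          rng.standard_normal((H, D // 2)), np.zeros(D // 2))
y = rng.standard_normal((4, D))
assert np.allclose(coupling_inverse(coupling_forward(y, params), params), y)
```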
Now our loss is, as usual, the relative entropy. With the “recognition functions” of Eq. 9.6 and the corresponding model density of Eq. 9.7, the relative entropy becomes
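In the placeholder notation above, and writing the model parameters as $\boldsymbol{\theta}$ (again an assumption about notation, not necessarily the text's), the loss might be sketched as
\[
\mathrm{D}_{\mathrm{KL}}\!\big\{p_{\text{data}}(\boldsymbol{Y})\,\big\|\,\hat{p}(\boldsymbol{Y};\boldsymbol{\theta})\big\}
\;=\;
-\,\mathbb{E}_{\boldsymbol{Y}}\!\left[\log p_{\boldsymbol{Z}}\!\big(f(\boldsymbol{Y};\boldsymbol{\theta})\big)
\;+\;\sum_{k=1}^{K}\log\big|\det\boldsymbol{J}_k(\boldsymbol{Y};\boldsymbol{\theta})\big|\right]
\;+\;\text{const},
\]
where the expectation is under the data distribution, the constant (the negative entropy of the data) does not depend on $\boldsymbol{\theta}$, and the factorial prior makes the first term decompose into a sum over components.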
(For concision, the Jacobians are written as functions directly of the data.)
The discriminative dual.
The model defined by Eq. 9.7, along with the loss in Eq. 9.8, has been called “nonlinear independent-components estimation” (NICE) [6]. To see whether the name is apposite, we employ our discriminative/generative duality, reinterpreting the minimization of the relative entropy in Eq. 9.8 as a maximization of mutual information between the data and a random variable defined by the (reverse) flow in Eq. 9.6, followed by (elementwise) transformation by the CDF of the prior. Can this still be thought of as an “unmixing” operation, as in InfoMax ICA? The question is particularly acute in the case where the prior is chosen to be normal, since (as we have just seen) ICA reduces to whitening in such circumstances.
In this case, the generative marginal given by the normalizing flow, Eq. 9.7, becomes
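In the same placeholder notation, and taking the prior to be standard normal, this marginal might be sketched as
\[
\hat{p}(\boldsymbol{y};\boldsymbol{\theta}) \;=\;
\mathcal{N}\!\big(f(\boldsymbol{y};\boldsymbol{\theta});\,\boldsymbol{0},\,\boldsymbol{I}\big)\,
\prod_{k=1}^{K}\big|\det\boldsymbol{J}_k(\boldsymbol{y};\boldsymbol{\theta})\big|.
\]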
Despite the appearance of a normal distribution in this expression, this marginal distribution is certainly not normal, even though the generative prior is: the normal density is evaluated at a nonlinear transformation of the data and multiplied by data-dependent Jacobian factors. So fitting this to the data will not in general merely fit their second-order statistics.
[[Connection to HMC]]