8.2 Variational Autoencoders

The use of a separate recognition distribution for every observation, $\bm{y}$, will clearly fail if the number of data is too large. An alternative is to encourage the recognition model to use a single set of parameters for all data, by specifying a single, parameter-dependent mapping from the observations to the moments of the recognition distribution. For example, sticking with Gaussian recognition distributions, we can use

\[
\check{p}(\bm{\check{x}} \mid \bm{y}; \bm{\phi}) = \mathcal{N}\!\left( \bm{\nu}(\bm{y}, \bm{\phi}_{\bm{\nu}}),\; \mathbf{\Upsilon}(\bm{y}, \bm{\phi}_{\mathbf{\Upsilon}}) \right). \tag{8.12}
\]

The moments, $\bm{\nu}(\bm{y},\bm{\phi}_{\bm{\nu}})$ and $\mathbf{\Upsilon}(\bm{y},\bm{\phi}_{\mathbf{\Upsilon}})$, are now interpreted as mappings or functions rather than parameters. (It will now be slightly simpler to work with covariance rather than precision matrices.) For example, the functions can be deep neural networks, and so in some sense arbitrarily flexible (see Section 5.1.3). For sufficiently large $\bm{\phi}$, this flexibility could include (depending on the ratio of parameters to observations) the ability to store a separate mean and covariance matrix for every datum, and therefore to mimic Eq. 8.2; but ideally it does not. Limiting the flexibility of the functions encourages the computational cost of learning $\bm{\phi}$ to be “amortized” across observations, and the recognition model to provide good inferences for new observations $\bm{y}$.
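To make the amortized mapping concrete, here is a minimal sketch of such a recognition network. PyTorch, the layer sizes, and the choice to output a log-variance for a diagonal covariance (anticipating Eq. 8.26) are all illustrative assumptions, not prescribed by the text.

```python
import torch
import torch.nn as nn

class RecognitionNetwork(nn.Module):
    """Amortized recognition model: maps an observation y to the moments
    of Eq. 8.12. A single set of weights (phi) serves every observation;
    the per-datum "parameters" are just the network's outputs."""
    def __init__(self, y_dim=784, x_dim=20, hidden=400):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(y_dim, hidden), nn.ReLU())
        self.nu = nn.Linear(hidden, x_dim)            # mean nu(y, phi_nu)
        self.log_upsilon2 = nn.Linear(hidden, x_dim)  # log diagonal of Upsilon(y, phi_Upsilon)

    def forward(self, y):
        h = self.trunk(y)
        return self.nu(h), self.log_upsilon2(h)
```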

Replacing the moments of the recognition distribution with deep neural networks is the point of departure for variational autoencoders [27, 41]. Note well that they are not limited to Gaussian recognition distributions (Eq. 8.12). However, we start with this case because it is a natural extension of the approach to sparse coding just discussed; it will likewise make use of the identities Eqs. 8.3 and 8.4. We subsequently generalize.

The M step.

How does this change in the parameterization of the recognition model alter the optimization? Since the recognition distribution is still normal, we can still use Eq. 8.1 for the joint relative entropy, and Eqs. 8.3 and 8.4 for its gradients. Beginning with the parameters $\bm{\phi}_{\bm{\nu}}$ of the mean function:

\[
\begin{split}
\frac{\partial \mathcal{L}_{\text{JRE}}}{\partial \bm{\phi}_{\bm{\nu}}}
&= \mathbb{E}_{\bm{Y}}\!\left[ \frac{\partial \bm{\nu}^{\text{T}}}{\partial \bm{\phi}_{\bm{\nu}}} \frac{\partial}{\partial \bm{\nu}} \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ -\log \hat{p}(\bm{\check{X}}, \bm{Y}; \bm{\theta}) \;\middle|\; \bm{Y} \right] \right] \\
&= \mathbb{E}_{\bm{Y}}\!\left[ \frac{\partial \bm{\nu}^{\text{T}}}{\partial \bm{\phi}_{\bm{\nu}}} \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ \frac{\partial E_{\text{joint}}}{\partial \bm{\hat{x}}}(\bm{\check{X}}, \bm{Y}, \bm{\theta}) \;\middle|\; \bm{Y} \right] \right].
\end{split} \tag{8.13}
\]

Likewise, for the covariance, we follow Eq. 8.4, although for notational simplicity we write the derivative with respect to only a single parameter, $\phi_{\mathbf{\Upsilon}}^{i}$:

\[
\begin{split}
\frac{\partial \mathcal{L}_{\text{JRE}}}{\partial \phi_{\mathbf{\Upsilon}}^{i}}
&= \mathbb{E}_{\bm{Y}}\!\left[ \operatorname{tr}\!\left[ \left( \frac{\partial}{\partial \mathbf{\Upsilon}} \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ -\log \hat{p}(\bm{\check{X}}, \bm{Y}; \bm{\theta}) \;\middle|\; \bm{Y} \right] - \frac{1}{2} \frac{\partial}{\partial \mathbf{\Upsilon}} \log\lvert \mathbf{\Upsilon} \rvert \right) \frac{\partial \mathbf{\Upsilon}}{\partial \phi_{\mathbf{\Upsilon}}^{i}} \right] \right] \\
&= \mathbb{E}_{\bm{Y}}\!\left[ \frac{1}{2} \operatorname{tr}\!\left[ \left( \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ \frac{\partial^2 E_{\text{joint}}}{\partial \bm{\hat{x}}\,\partial \bm{\hat{x}}^{\text{T}}}(\bm{\check{X}}, \bm{Y}, \bm{\theta}) \;\middle|\; \bm{Y} \right] - \mathbf{\Upsilon}^{-1} \right) \frac{\partial \mathbf{\Upsilon}}{\partial \phi_{\mathbf{\Upsilon}}^{i}} \right] \right].
\end{split} \tag{8.14}
\]

Computing the Hessian $\partial^2 E_{\text{post}}/\partial\bm{\hat{x}}\,\partial\bm{\hat{x}}^{\text{T}}$ naïvely is expensive ($\mathcal{O}(K^3)$), but there are clever, $\mathcal{O}(K^2)$, alternatives [41].

Now, for sufficiently expressive generative models, the conditional expectations under the recognition distribution will in general be difficult to carry out (although cf. Eqs. 8.10 and 8.11). Instead we might replace the conditional expectation with a sample average, since it is trivial to draw samples from Eq. 8.12. For example, the generative model could be yet more normal distributions,

\[
\hat{p}(\bm{\hat{x}}) = \mathcal{N}(\bm{0},\, \mathbf{I}), \qquad
\hat{p}(\bm{\hat{y}} \mid \bm{\hat{x}}; \bm{\theta}) = \mathcal{N}\!\left( \bm{\mu}(\bm{\hat{x}}, \bm{\theta}),\; \mathbf{\Sigma}(\bm{\hat{x}}, \bm{\theta}) \right), \tag{8.15}
\]

but with the mean and covariance of the emission, like those of the recognition distribution, flexible parameterized functions—indeed, neural networks—operating on their “inputs,” in this case $\bm{\hat{x}}$. Then Eqs. 8.13 and 8.14 simplify slightly [41]:
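Continuing the illustrative sketch above, the emission side of Eq. 8.15 has the same shape; the sizes and the diagonal parameterization of $\mathbf{\Sigma}$ are again assumptions made for simplicity.

```python
class GenerativeNetwork(nn.Module):
    """Emission model of Eq. 8.15: maps a latent x to the moments of
    p(y|x; theta). The prior p(x) = N(0, I) has no parameters, so all
    generative parameters theta live here."""
    def __init__(self, x_dim=20, y_dim=784, hidden=400):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, y_dim)          # mean mu(x, theta)
        self.log_sigma2 = nn.Linear(hidden, y_dim)  # log diagonal of Sigma(x, theta)

    def forward(self, x):
        h = self.trunk(x)
        return self.mu(h), self.log_sigma2(h)
```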

\[
\frac{\partial \mathcal{L}_{\text{JRE}}}{\partial \bm{\phi}_{\bm{\nu}}}
= \left\langle \frac{\partial \bm{\nu}^{\text{T}}}{\partial \bm{\phi}_{\bm{\nu}}} \left( \left\langle \frac{\partial E_{\text{emiss}}}{\partial \bm{\hat{x}}}(\bm{\check{X}}, \bm{Y}, \bm{\theta}) \right\rangle_{\bm{\check{X}}|\bm{Y}} + \bm{\nu}(\bm{Y}, \bm{\phi}_{\bm{\nu}}) \right) \right\rangle_{\bm{Y}}, \tag{8.16}
\]
\[
\frac{\partial \mathcal{L}_{\text{JRE}}}{\partial \phi_{\mathbf{\Upsilon}}^{i}}
= \left\langle \frac{1}{2}\operatorname{tr}\!\left[ \left( \left\langle \frac{\partial^2 E_{\text{emiss}}}{\partial \bm{\hat{x}}\,\partial \bm{\hat{x}}^{\text{T}}}(\bm{\check{X}}, \bm{Y}, \bm{\theta}) \right\rangle_{\bm{\check{X}}|\bm{Y}} - \mathbf{\Upsilon}^{-1} + \mathbf{I} \right) \frac{\partial \mathbf{\Upsilon}}{\partial \phi_{\mathbf{\Upsilon}}^{i}} \right] \right\rangle_{\bm{Y}}, \tag{8.17}
\]

with the emission energy $E_{\text{emiss}}$ equal to the energy of the Gaussian distribution on the right in Eq. 8.15,

\[
E_{\text{emiss}}(\bm{\hat{x}},\bm{\hat{y}}) = \frac{1}{2}\left( \bm{\hat{y}} - \bm{\mu}(\bm{\hat{x}},\bm{\theta}) \right)^{\text{T}} \mathbf{\Sigma}^{-1}(\bm{\hat{x}},\bm{\theta}) \left( \bm{\hat{y}} - \bm{\mu}(\bm{\hat{x}},\bm{\theta}) \right). \tag{8.18}
\]

To optimize these parameters, we can descend these gradients—but we defer discussion of precisely how this is carried out until after deriving the E step.

The E step.

Starting again with the joint relative entropy under Gaussian recognition distributions, Eq. 8.1, assuming the Gaussian generative model of Eq. 8.15, and differentiating with respect to the parameters of the generative model, $\bm{\theta}$, we find

\[
\begin{split}
\frac{\partial \mathcal{L}_{\text{JRE}}}{\partial \bm{\theta}}
&= \frac{\mathrm{d}}{\mathrm{d}\bm{\theta}}\, \mathbb{E}_{\bm{\check{X}},\bm{Y}}\!\left[ -\log\hat{p}(\bm{\check{X}},\bm{Y};\bm{\theta}) \right] \\
&= \mathbb{E}_{\bm{\check{X}},\bm{Y}}\!\left[ \frac{\partial E_{\text{emiss}}}{\partial \bm{\theta}}(\bm{\check{X}},\bm{Y},\bm{\theta}) \right] \\
&\approx \left\langle \frac{\partial E_{\text{emiss}}}{\partial \bm{\theta}}(\bm{\check{X}},\bm{Y},\bm{\theta}) \right\rangle_{\bm{\check{X}},\bm{Y}}.
\end{split} \tag{8.19}
\]

Again we propose to descend this gradient iteratively.

Stochastic backpropagation.

Computing the gradients on the left-hand sides of Eqs. 8.16 and 8.17 evidently requires acquiring the gradients appearing on the right-hand sides. The gradients of $\bm{\nu}(\bm{y},\bm{\phi}_{\bm{\nu}})$ and $\mathbf{\Upsilon}(\bm{y},\bm{\phi}_{\mathbf{\Upsilon}})$, for their part, obviously pass backwards through the recognition neural networks. But the gradients of the emission energy with respect to $\bm{\hat{x}}$ must pass backwards through the generative neural networks, $\bm{\mu}(\bm{\hat{x}},\bm{\theta})$ and $\mathbf{\Sigma}(\bm{\hat{x}},\bm{\theta})$, for which $\bm{\hat{x}}$ is an input. Along this backward propagating computation, we will compute precisely the quantities required for the gradients $\partial\bm{\mu}/\partial\bm{\theta}$ and $\partial\mathbf{\Sigma}/\partial\bm{\theta}$ that are needed for the E step, Eq. 8.19.

This suggests that the E and M steps be simultaneous rather than alternating. In particular, a set of $N$ observations $\bm{y}_1,\ldots,\bm{y}_N$ can be passed “upward” through the recognition distribution, Eq. 8.12, yielding $N$ sample latent vectors, $\bm{\check{x}}_1,\ldots,\bm{\check{x}}_N$.\footnote{It might seem that we need multiple samples of $\bm{\check{X}}$ for every sample of $\bm{Y}$, and indeed we do need (roughly) quadratically more samples to estimate the joint distribution of $\bm{\check{X}},\bm{Y}$ as opposed to just $\bm{Y}$—say, $L^2$. But it is more efficient to have $L^2$ different values of each of $\bm{\check{X}},\bm{Y}$ rather than just $L$ different values for $\bm{Y}$ and $L^2$ values of $\bm{\check{X}}$; see Fig. LABEL:fig:XXX.} These are then passed on through the generative model to produce values for $\bm{\mu}(\bm{\check{x}}_n,\bm{\theta})$ and $\mathbf{\Sigma}(\bm{\check{x}}_n,\bm{\theta})$. Notice that this entire process looks like a forward pass through a single neural network that maps each observation to itself—i.e., an autoencoder—or, more precisely, to a mean and variance for that sample, encoding the network’s “best guess” for that sample and its (inverse) confidence in that guess.
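This upward pass is easy to write down; the following sketch assumes the two illustrative modules above (how gradients flow backward through it is taken up next).

```python
def autoencoding_pass(recog, gen, y):
    """One "upward-then-downward" pass: y -> (nu, Upsilon) -> sampled x
    -> (mu, Sigma). Writing the draw as nu + upsilon * eps is simply one
    way of sampling Eq. 8.12."""
    nu, log_upsilon2 = recog(y)                          # recognition moments
    eps = torch.randn_like(nu)                           # base noise
    x_check = nu + torch.exp(0.5 * log_upsilon2) * eps   # x ~ N(nu, Upsilon)
    mu, log_sigma2 = gen(x_check)                        # emission moments
    return x_check, mu, log_sigma2
```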

The gradient is therefore evaluated by initializing a backward pass through the emission energy, Eq. 8.18, at the samples $\bm{y}_1,\ldots,\bm{y}_N$, working backwards through $\bm{\mu}(\bm{\check{x}}_n,\bm{\theta})$ and $\mathbf{\Sigma}(\bm{\check{x}}_n,\bm{\theta})$ toward the “inputs” $\bm{\hat{X}}$. The gradient at each layer of these neural networks is evaluated at the values computed during the preceding forward pass. Averaging these gradients over all $N$ samples yields the right-hand side of Eq. 8.19, the “E-step” gradients. But rather than stop at the parameters of the first layer, the gradient computation is continued into the inputs $\bm{\hat{X}}$. The chain of backward computations then proceeds through the parameters of the recognition distribution according to Eqs. 8.16 and 8.17, finally terminating at the first layers of $\bm{\nu}(\bm{y}_n,\bm{\phi}_{\bm{\nu}})$ and $\mathbf{\Upsilon}(\bm{y}_n,\bm{\phi}_{\mathbf{\Upsilon}})$. Averaging these gradients over the $N$ samples yields the “M-step” gradients for updating the recognition distribution.

A probabilistic autoencoder.

We observed above that this optimization procedure suggests thinking of the generative and recognition models together as a single, feedforward neural network (Fig. LABEL:fig:VAE). To emphasize this intuition, let us again reorganize the joint relative entropy:

\[
\begin{split}
\mathcal{L}_{\text{JRE}}(\bm{\theta},\bm{\phi})
&:= \mathbb{E}_{\bm{\check{X}},\bm{Y}}\!\left[ \log\!\big( \check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi})\, p(\bm{Y}) \big) - \log \hat{p}(\bm{\check{X}},\bm{Y};\bm{\theta}) \right] \\
&= \mathbb{E}_{\bm{\check{X}},\bm{Y}}\!\left[ \log \check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi}) - \log \hat{p}(\bm{\check{X}};\bm{\theta}) + \log p(\bm{Y}) - \log \hat{p}(\bm{Y}|\bm{\check{X}};\bm{\theta}) \right] \\
&= \operatorname{D_{\text{KL}}}\left\{ \check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi}) \,\middle\|\, \hat{p}(\bm{\check{X}};\bm{\theta}) \right\} + \text{H}_{(p\check{p})\hat{p}}[\bm{Y}|\bm{\check{X}};\bm{\phi},\bm{\theta}] - \text{H}_p[\bm{Y}].
\end{split} \tag{8.20}
\]

The second term, $\text{H}_{(p\check{p})\hat{p}}[\bm{Y}|\bm{\check{X}};\bm{\phi},\bm{\theta}]$, can be interpreted as a reconstruction cost (see Section 6.3.2): the number of bits\footnote{If a base-2 logarithm is used.} required to encode the observations under the generative model, given latent states inferred with the recognition model. It measures the “lossiness” of the autoencoder. By itself, this term enforces a discriminative rather than generative cost. Note that, without further constraint, the reconstruction cost could be either greater or less than the marginal entropy, $\text{H}_p[\bm{Y}]$; i.e., the final two terms could together be either positive or negative.\footnote{For example, let $Y$ be a biased coin (entropy less than 1 bit). A model could simply copy through the observations (reconstruction cost of 0 bits), or again predict heads and tails with equal probability (reconstruction cost of 1 bit).}

We have previously (Section 6.3.2) interpreted the first term as the sum of a “code cost,” $\text{H}_{(p\check{p})\hat{p}}[\bm{\check{X}};\bm{\phi},\bm{\theta}]$, required to encode the latent states inferred by the recognition model; and $-\text{H}_{(p\check{p})}[\bm{\check{X}}|\bm{Y};\bm{\phi}]$, a regularizer that prevents the optimization from decreasing reconstruction and code costs merely by making recognition more confident. Alternatively, $\operatorname{D_{\text{KL}}}\{\check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi}) \,\|\, \hat{p}(\bm{\check{X}};\bm{\theta})\}$ can be interpreted [27] as a regularizer that encourages the recognition posterior to resemble the generative prior. Intuitively, optimizing reconstruction cost obliges the emission density to “play well” with the recognition model (as well as to fit the observations); but to complete the generative model, we also need it to play well with the generative prior! This requires the recognition model and the generative prior to resemble each other. Indeed, the regularization that enforces this resemblance is the only place that the generative prior enters the loss.

It is important to realize that we do not expect this relative entropy to vanish in the optimal model, which will embody a compromise between this regularizer and the reconstruction cost (cf. Section 6.3.2). Instead, it is the joint relative entropy $\mathcal{L}_{\text{JRE}}$ that vanishes at the optimum, so the most we can say is

\[
\operatorname{D_{\text{KL}}}\left\{ \check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi}) \,\middle\|\, \hat{p}(\bm{\check{X}};\bm{\theta}) \right\} = \text{H}_p[\bm{Y}] - \text{H}_{(p\check{p})\hat{p}}[\bm{Y}|\bm{\check{X}};\bm{\phi},\bm{\theta}].
\]

This equation tells us something intuitive: Since the left-hand side is non-negative, so must the right-hand side be; or in other words, the optimal reconstruction cost never exceeds the entropy of the data distribution.\footnote{Returning to our previous example of a biased coin, we find that the optimal model should correctly identify heads-vs.-tails at least as frequently as the bias of the coin.}

In the simplest VAEs, the generative priors are not parameterized, $\hat{p}(\bm{\hat{x}};\bm{\theta}) = \hat{p}(\bm{\hat{x}})$, as in Eq. 8.15, in which case the KL regularizer pushes the recognition model toward the generative prior but not vice versa. For the generative model considered in isolation, such a truly structureless prior distribution is desirable. The ideal latent variables are akin to the fundamental particles of physics: they should be as independent as possible, since any remaining structure is something that still needs to be explained. We prefer to let the emission, $\hat{p}(\bm{\hat{y}}|\bm{\hat{x}};\bm{\theta})$, capture all structure.

However, the recognition model makes additional demands on the generative prior: A simple calculation shows that if the recognition model, $\check{p}(\bm{\check{x}}|\bm{y};\bm{\phi})$, matches the generative posterior, $\hat{p}(\bm{\hat{x}}|\bm{\hat{y}};\bm{\theta})$, and the generative distribution, $\hat{p}(\bm{\hat{y}};\bm{\theta})$, matches the data, $p(\bm{y})$, then the generative prior, $\hat{p}(\bm{\hat{x}};\bm{\theta})$, must match the “aggregated posterior,”

\[
\check{p}(\bm{\check{x}};\bm{\phi}) := \left\langle \check{p}(\bm{\check{x}}|\bm{y};\bm{\phi}) \right\rangle_{\bm{y}}.
\]

In brief, for optimal generative and recognition models,

\[
\operatorname{D_{\text{KL}}}\left\{ \check{p}(\bm{\check{X}};\bm{\phi}) \,\middle\|\, \hat{p}(\bm{\check{X}};\bm{\theta}) \right\} = 0
\]

must hold. To understand this requirement, we yet again rewrite the joint relative entropy, Eq. 8.20 (Exercise LABEL:ex:JREELBOsurgery), this time in terms of this relative entropy on “priors” [17]:

\[
\mathcal{L}_{\text{JRE}}(\bm{\theta},\bm{\phi}) = \operatorname{D_{\text{KL}}}\left\{ \check{p}(\bm{\check{X}};\bm{\phi}) \,\middle\|\, \hat{p}(\bm{\check{X}};\bm{\theta}) \right\} + \mathcal{I}(\bm{\check{X}};\bm{Y}) + \text{H}_{(p\check{p})\hat{p}}[\bm{Y}|\bm{\check{X}};\bm{\phi},\bm{\theta}] - \text{H}_p[\bm{Y}]. \tag{8.21}
\]

The penalty on information transmission prevents the reconstruction loss from being driven to zero, and vice versa. But the relative entropy in this equation—unlike the relative entropy in Eq. 8.20—does vanish in the optimal model.

8.2.1 Monte Carlo gradient estimators

In our discussion thus far of probabilistic autoencoders, we have considered only Gaussian generative (Eq. 8.15) and recognition (Eq. 8.12) models. To see what other possibilities are available, let us pause to appraise our situation.

Our aim throughout these last few chapters has been to fit models to observed, complex distributions of data, $p(\bm{y})$, but under the implicit guiding hypothesis that the complexity is a consequence of some kind of marginalization. Accordingly, we have attempted to “undo” the marginalization by constructing latent-variable generative models, $\hat{p}(\bm{\hat{x}};\bm{\theta})\,\hat{p}(\bm{\hat{y}}|\bm{\hat{x}};\bm{\theta})$, with relatively simple distributions, but which when marginalized result in complex distributions, $\hat{p}(\bm{\hat{y}};\bm{\theta})$.

However, fitting the joint distribution of observed and latent variables is not equivalent to fitting the marginal, since the former can be achieved in part simply by making inference under the model more deterministic. Ideally, one would just additionally penalize deterministic posteriors, but computing $\hat{p}(\bm{\hat{x}}|\bm{\hat{y}};\bm{\theta})$ (typically) requires computing the marginal distribution, $\hat{p}(\bm{\hat{y}};\bm{\theta})$, which by our original hypothesis is too complicated to work with.

We therefore opted instead to separate the inferential and generative functions, introducing a second model, $\check{p}(\bm{\check{x}}|\bm{y};\bm{\phi})$, for the former. The generative-model joint distribution is consequently fit by minimizing $\operatorname{D_{\text{KL}}}\{\check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi})\,p(\bm{Y}) \,\|\, \hat{p}(\bm{\check{X}},\bm{Y};\bm{\theta})\}$. This can likewise be improved without improving the fit of the marginal, $\operatorname{D_{\text{KL}}}\{p(\bm{Y}) \,\|\, \hat{p}(\bm{Y};\bm{\theta})\}$, merely by reducing posterior entropy—in this case of the recognition model, $\check{p}(\bm{\check{x}}|\bm{y};\bm{\phi})$—so a regularizer is still required. But the crucial difference is that this regularizer does not depend on the intractable generative posterior distribution, $\hat{p}(\bm{\hat{x}}|\bm{y};\bm{\theta})$.

Since we no longer need to invert the generative model, it can be almost arbitrarily complex. But what about the recognition model, $\check{p}(\bm{\check{x}}|\bm{y};\bm{\phi})$? The key constraint here is that the joint relative entropy is an expectation under this distribution, and we also need to compute gradients with respect to its parameters, $\bm{\phi}$. The joint relative entropy is computed from the joint cross entropy and the recognition entropy (ignoring the constant data entropy). Let us assume that the latter can be computed in closed form (it can for most distributions), and focus on the former, $\text{H}_{(p\check{p})\hat{p}}[\bm{\check{X}},\bm{Y};\bm{\phi},\bm{\theta}]$, since in this term the recognition parameters $\bm{\phi}$ occur only in the expectation (the surprisal depends on the generative parameters $\bm{\theta}$ only). Therefore, we need to compute

\[
\frac{\mathrm{d}}{\mathrm{d}\bm{\phi}}\, \mathbb{E}_{\bm{\check{X}},\bm{Y}}\!\left[ f(\bm{\check{X}},\bm{Y}) \right]
= \mathbb{E}_{\bm{Y}}\!\left[ \frac{\mathrm{d}}{\mathrm{d}\bm{\phi}}\, \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ f(\bm{\check{X}},\bm{Y}) \,\middle|\, \bm{Y} \right] \right], \tag{8.22}
\]

where $f(\bm{\hat{X}},\bm{\hat{Y}}) = -\log\hat{p}(\bm{\hat{X}},\bm{\hat{Y}};\bm{\theta})$. The fundamental problem is that for functions $f$ that are at all complicated in $\bm{\hat{X}}$—for example, in generative models employing neural networks, like Eq. 8.15—the expectations will not be available in closed form. Instead, we shall need to make use of sample averages, i.e., Monte Carlo estimates of this gradient.

In deciding how to carry out this derivative, we are concerned with the constraints that the usual desiderata for estimators—consistency, unbiasedness, low variance—impose on the class of tractable recognition distributions and on the dimensionality of the latent space. Up to this point, we have considered a single pair of estimators, Eqs. 8.16 and 8.17, based on the identities in Eqs. 8.13 and 8.14. But these identities hold only for normal recognition distributions.

The score-function gradient estimator.

Arguably the simplest and most general gradient estimator can be derived by passing the derivative directly into the integral defined by the expectation:

\[
\begin{split}
\mathbb{E}_{\bm{Y}}\!\left[ \frac{\mathrm{d}}{\mathrm{d}\bm{\phi}}\, \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ f(\bm{\check{X}},\bm{Y}) \,\middle|\, \bm{Y} \right] \right]
&= \mathbb{E}_{\bm{Y}}\!\left[ \int_{\bm{\check{x}}} \frac{\mathrm{d}}{\mathrm{d}\bm{\phi}}\, \check{p}(\bm{\check{x}}|\bm{Y};\bm{\phi})\, f(\bm{\check{x}},\bm{Y})\, \mathrm{d}\bm{\check{x}} \right] \\
&= \mathbb{E}_{\bm{Y}}\!\left[ \int_{\bm{\check{x}}} \check{p}(\bm{\check{x}}|\bm{Y};\bm{\phi})\, \frac{\mathrm{d}\log\check{p}(\bm{\check{x}}|\bm{Y};\bm{\phi})}{\mathrm{d}\bm{\phi}}\, f(\bm{\check{x}},\bm{Y})\, \mathrm{d}\bm{\check{x}} \right] \\
&= \mathbb{E}_{\bm{\check{X}},\bm{Y}}\!\left[ \frac{\mathrm{d}\log\check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi})}{\mathrm{d}\bm{\phi}}\, f(\bm{\check{X}},\bm{Y}) \right] \\
&\approx \left\langle \frac{\mathrm{d}\log\check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi})}{\mathrm{d}\bm{\phi}}\, f(\bm{\check{X}},\bm{Y}) \right\rangle_{\bm{\check{X}},\bm{Y}},
\end{split}
\]

where we have used Leibniz’s rule on the first line.\footnote{This is generally licit for differentiable recognition distributions; for more precise conditions, see [33].} The point of the second line is to turn the expression back into an expectation under $\check{p}(\bm{\check{x}}|\bm{y};\bm{\phi})$, in order to allow for a Monte Carlo estimate. The derivative on the final line is known as the score function, from which this gradient estimator inherits its name.
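To make the estimator concrete, here is a minimal NumPy sketch for a toy problem in which the exact answer is known: $X \sim \mathcal{N}(\phi, 1)$ and $f(x) = x^2$, so that $\frac{\mathrm{d}}{\mathrm{d}\phi}\mathbb{E}[f(X)] = 2\phi$. The problem and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(phi, f, n_samples=100_000):
    """Score-function estimate of d/d(phi) E[f(X)] for X ~ N(phi, 1),
    whose score is d log p(x; phi)/d(phi) = (x - phi)."""
    x = rng.normal(phi, 1.0, size=n_samples)
    return np.mean((x - phi) * f(x))

phi = 1.5
print(score_function_grad(phi, lambda x: x**2))  # fluctuates around 2*phi = 3.0
```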

The score-function estimator is consistent and unbiased [33], but the variance scales poorly with latent dimension. This can be seen (e.g.) by considering the gradient estimator in the case of factorial recognition distributions:

\[
\left( \sum_k^K \frac{\mathrm{d}\log\check{p}(\check{x}_k|\bm{y};\bm{\phi})}{\mathrm{d}\bm{\phi}} \right) f(\bm{\check{x}},\bm{y}).
\]

Clearly, the variance of this estimator scales with the number of terms, $K$, in the sum—even though some dimensions may have little or no relationship with the cost $f$. Therefore, in problems with large latent spaces, a prohibitively large number of samples must be drawn at each gradient step [27, 37].

On the other hand, the score-function estimator makes only very weak assumptions about $\check{p}(\bm{\check{x}}|\bm{y};\bm{\phi})$ and $f$ (the generative-model surprisal); and the cost of computing it scales (favorably) as $\mathcal{O}(N(D+L))$, with $D$ and $L$ the dimensionality of the parameters $\bm{\phi}$ and the cost of evaluating $f$ [33]. Furthermore, variance can often be reduced with the method of “control variates”; e.g., using as our estimator

\[
\left\langle \frac{\mathrm{d}\log\check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi})}{\mathrm{d}\bm{\phi}} \left( f(\bm{\check{X}},\bm{Y}) - \beta \right) \right\rangle_{\bm{\check{X}},\bm{Y}}.
\]

Because the expected score is always zero (see Section B.2), this alteration does not change the expected value of the estimator, but can potentially lower its variance. More sophisticated control variates are also possible.
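Continuing the toy example above, even a constant baseline helps; the value $\beta = \mathbb{E}[f(X)] = \phi^2 + 1$ used here is just an illustrative choice, since any constant leaves the estimator unbiased.

```python
def score_function_grad_cv(phi, f, beta, n_samples=100_000):
    """Score-function estimator with a constant control variate: subtracting
    beta leaves the mean unchanged (the expected score is zero) but can
    shrink the variance."""
    rng = np.random.default_rng(1)
    x = rng.normal(phi, 1.0, size=n_samples)
    return np.mean((x - phi) * (f(x) - beta))

print(score_function_grad_cv(1.5, lambda x: x**2, beta=1.5**2 + 1))  # still near 3.0
```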

The pathwise gradient estimator.

Still, since we anticipate working with high-dimensional latent spaces, it behooves us to find an estimator whose variance is independent of this dimension. Here we exploit the fact that many continuous random variables with highly expressive distributions can be written as transformations of random variables with simpler, “base” distributions. This will allow us to separate parameter dependence (in the transformation) from the sampling procedure (for the base random variable). For example, any computationally tractable inverse CDF can be used to transform a uniformly distributed random variable into a random variable with the corresponding PDF:

\[
\check{Z} \sim \mathcal{U}(0,1), \qquad \check{X} \overset{\text{set}}{=} -\log(1-\check{Z})/\lambda =: g(\check{Z},\lambda) \implies \check{X} \sim \text{Expo}(\lambda). \tag{8.23}
\]

Similarly, random variables with distributions parameterized by location and scale can frequently be expressed as affine functions of random variables with the “standard” version of the distribution:

\[
\bm{\check{Z}} \sim \mathcal{N}(\bm{0},\,\mathbf{I}), \qquad \bm{\check{X}} \overset{\text{set}}{=} \mathbf{\Sigma}^{1/2}\bm{\check{Z}} + \bm{\mu} =: \bm{g}(\bm{\check{Z}},\mathbf{\Sigma},\bm{\mu}) \implies \bm{\check{X}} \sim \mathcal{N}(\bm{\mu},\,\mathbf{\Sigma}). \tag{8.24}
\]

More ambitiously, as long as the “path” $\bm{g}$ from auxiliary variables $\bm{\check{Z}}$ to latent variables $\bm{\check{X}}$ is invertible, we can construct complex distributions from simple ones with the change-of-variables formula:

\[
\bm{\check{Z}} \sim \check{p}_{\bm{\check{Z}}}(\bm{\check{z}}), \qquad \bm{\check{X}} \overset{\text{set}}{=} \bm{g}(\bm{\check{Z}},\bm{\phi}) \implies \bm{\check{X}} \sim \check{p}_{\bm{\check{Z}}}\!\left( \bm{g}^{-1}(\bm{\check{x}},\bm{\phi}) \right) \left\lvert \frac{\partial \bm{g}^{-1}}{\partial \bm{\check{x}}^{\text{T}}} \right\rvert. \tag{8.25}
\]

Indeed, the two preceding examples qualify as special cases of this technique. More subtle variations still on this theme allow for sampling from yet more distributions [27, 41, 33].
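Both worked examples, Eqs. 8.23 and 8.24, are easy to check numerically; a NumPy sketch (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Eq. 8.23: exponential samples from a uniform base variable
lam = 2.0
z = rng.uniform(size=100_000)
x_expo = -np.log(1.0 - z) / lam
print(x_expo.mean())                  # close to 1/lam = 0.5

# Eq. 8.24: correlated Gaussian samples via an affine map of standard normals
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)         # one valid square root of Sigma
z = rng.standard_normal((100_000, 2))
x_gauss = z @ L.T + mu
print(np.cov(x_gauss.T))              # close to Sigma
```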

In all such cases, the function $\bm{g}(\bm{\check{z}},\bm{y},\bm{\phi})$ (notice that we have additionally allowed it to depend on $\bm{y}$) can be used in conjunction with the “law of the unconscious statistician” (LotUS) to remove the parameters from the expectation in Eq. 8.22:

\[
\begin{split}
\mathbb{E}_{\bm{Y}}\!\left[ \frac{\mathrm{d}}{\mathrm{d}\bm{\phi}}\, \mathbb{E}_{\bm{\check{X}}|\bm{Y}}\!\left[ f(\bm{\check{X}},\bm{Y}) \,\middle|\, \bm{Y} \right] \right]
&= \mathbb{E}_{\bm{Y}}\!\left[ \frac{\mathrm{d}}{\mathrm{d}\bm{\phi}}\, \mathbb{E}_{\bm{\check{Z}}|\bm{Y}}\!\left[ f\!\left( \bm{g}(\bm{\check{Z}},\bm{Y},\bm{\phi}),\, \bm{Y} \right) \,\middle|\, \bm{Y} \right] \right] \\
&= \mathbb{E}_{\bm{Y}}\!\left[ \mathbb{E}_{\bm{\check{Z}}|\bm{Y}}\!\left[ \frac{\partial \bm{g}^{\text{T}}}{\partial \bm{\phi}}(\bm{\check{Z}},\bm{Y},\bm{\phi})\, \frac{\partial f}{\partial \bm{\check{x}}}(\bm{g},\bm{Y}) \,\middle|\, \bm{Y} \right] \right] \\
&\approx \left\langle \frac{\partial \bm{g}^{\text{T}}}{\partial \bm{\phi}}(\bm{\check{Z}},\bm{Y},\bm{\phi})\, \frac{\partial f}{\partial \bm{\check{x}}}(\bm{g},\bm{Y}) \right\rangle_{\bm{\check{Z}},\bm{Y}}.
\end{split}
\]

Crucially, under this formulation, the gradient passes into the cost function itself. Therefore, dimensions of the latent space that have little effect on the cost are guaranteed to have little effect on the estimator itself—in contrast to the score-function estimator. This is what allows low-variance estimates to be made even in high-dimensional spaces. The computational cost also turns out to be identical to that of the score-function estimator [33].
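The contrast with the score-function estimator can be seen directly in the toy problem used earlier: with $X = \phi + Z$, $Z \sim \mathcal{N}(0,1)$, and $f(x) = x^2$, the pathwise per-sample gradient is $f'(\phi + z) = 2(\phi + z)$. A sketch comparing the two (illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 1.5
z = rng.standard_normal(100_000)
x = phi + z                                 # the "path" g(z, phi) = phi + z

grad_score = (x - phi) * x**2               # score-function per-sample gradients
grad_path = 2.0 * x                         # pathwise per-sample gradients, f'(x) * dg/dphi

print(grad_score.mean(), grad_path.mean())  # both near 2*phi = 3.0
print(grad_score.var(), grad_path.var())    # pathwise variance is far smaller
```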

Nomenclature.

This class of density estimator, with neural-network parameterized generative and recognition models, has come to be known as the variational autoencoder [27]. We have seen how the model resembles the classical autoencoder. As for the term “variational,” it is something of a misnomer; we shall see shortly how that term entered the literature (Section 8.4).

The “pathwise gradient estimator” that made this model practical has for its part come to be known as the “reparameterization trick.” I have followed the terminology of Mohamed and colleagues [33].

8.2.2 Examples

With this model, we have made considerable progress toward the goal of generating high-quality samples from complex distributions. …

Independent Gaussian random variables

Let us first proceed with the example we have been considering so far, Eqs. 8.12 and 8.15, but for simplicity impose a few more restrictions. In particular, we insist that the generative covariance be isotropic, and the recognition covariance matrix be diagonal:

\[
\mathbf{\Sigma}(\bm{\hat{x}},\bm{\theta}) = \alpha\mathbf{I}, \qquad
\mathbf{\Upsilon}(\bm{y},\bm{\phi}_{\mathbf{\Upsilon}}) = \operatorname{diag}\!\left( \bm{\upsilon}^2(\bm{y},\bm{\phi}_{\mathbf{\Upsilon}}) \right). \tag{8.26}
\]

Then the emission energy, Eq. 8.18, and its derivatives simplify to

\[
\begin{split}
E_{\text{emiss}}(\bm{\hat{x}},\bm{\hat{y}}) &= \frac{\lVert \bm{\hat{y}} - \bm{\mu}(\bm{\hat{x}},\bm{\theta}) \rVert^2}{2\alpha} \\
\implies \frac{\partial E_{\text{emiss}}}{\partial \bm{\theta}^{\text{T}}}(\bm{\hat{x}},\bm{\hat{y}}) &= \frac{1}{\alpha}\left( \bm{\mu}(\bm{\hat{x}},\bm{\theta}) - \bm{\hat{y}} \right)^{\text{T}} \frac{\partial \bm{\mu}}{\partial \bm{\theta}^{\text{T}}}(\bm{\hat{x}},\bm{\theta}), \\
\implies \frac{\partial E_{\text{emiss}}}{\partial \hat{x}_k}(\bm{\hat{x}},\bm{\hat{y}}) &= \frac{1}{\alpha}\left( \bm{\mu}(\bm{\hat{x}},\bm{\theta}) - \bm{\hat{y}} \right)^{\text{T}} \frac{\partial \bm{\mu}}{\partial \hat{x}_k}(\bm{\hat{x}},\bm{\theta}), \\
\implies \frac{\partial^2 E_{\text{emiss}}}{\partial \hat{x}_j\,\partial \hat{x}_k}(\bm{\hat{x}},\bm{\hat{y}}) &= \frac{1}{\alpha}\left[ \left( \bm{\mu}(\bm{\hat{x}},\bm{\theta}) - \bm{\hat{y}} \right)^{\text{T}} \frac{\partial^2 \bm{\mu}}{\partial \hat{x}_j\,\partial \hat{x}_k}(\bm{\hat{x}},\bm{\theta}) + \frac{\partial \bm{\mu}^{\text{T}}}{\partial \hat{x}_j}(\bm{\hat{x}},\bm{\theta})\, \frac{\partial \bm{\mu}}{\partial \hat{x}_k}(\bm{\hat{x}},\bm{\theta}) \right].
\end{split}
\]

These gradients can be accumulated with backpropagation through the entire “autoencoder.” Parameters are then updated so as to descend the (joint) relative-entropy gradients, Eqs. 8.16, 8.17, and 8.19, which require the expectations of the above gradients under the recognition distribution. These can be approximated with sample averages. Notice that no pathwise gradient estimator is required.

We have avoided the pathwise gradient estimator by employing the identities Eqs. 8.3 and 8.4. However, it is useful to consider the more generic approach. In particular, we will simply derive the loss function, since in modern software packages, this is handed directly to an automatic differentiator. Let us take the perspective of an autoencoder, i.e. the last line in Eq. 8.20.

Anticipating the terms we shall need, we calculate the log partition functions:

\[
\log Z_{\text{r}} = \log\lvert \tau\mathbf{\Upsilon} \rvert^{1/2} = \frac{K}{2}\log\tau + \frac{1}{2}\sum_k^K \log\upsilon_k^2, \qquad
\log Z_{\text{g}} = \log\lvert \tau\mathbf{I} \rvert^{1/2} = \frac{K}{2}\log\tau.
\]

Then the KL regularizer is

\[
\begin{split}
\operatorname{D_{\text{KL}}}\left\{ \check{p}(\bm{\check{X}}|\bm{Y};\bm{\phi}) \,\middle\|\, \hat{p}(\bm{\check{X}};\bm{\theta}) \right\}
&= \left\langle \log\!\left( \frac{Z_{\text{g}}}{Z_{\text{r}}} \exp\!\left\{ -\tfrac{1}{2}(\bm{\nu}-\bm{\check{X}})^{\text{T}}\mathbf{\Upsilon}^{-1}(\bm{\nu}-\bm{\check{X}}) + \tfrac{1}{2}\bm{\check{X}}^{\text{T}}\bm{\check{X}} \right\} \right) \right\rangle_{\bm{\check{X}},\bm{Y}} \\
&= \left\langle -\frac{1}{2}\sum_k^K \log\upsilon_k^2 - \frac{1}{2}(\bm{\nu}-\bm{\check{X}})^{\text{T}}\mathbf{\Upsilon}^{-1}(\bm{\nu}-\bm{\check{X}}) + \frac{1}{2}\bm{\check{X}}^{\text{T}}\bm{\check{X}} \right\rangle_{\bm{\check{X}},\bm{Y}} \\
&= \frac{1}{2}\left\langle -\sum_k^K \log\upsilon_k^2 - \operatorname{tr}\!\left[\mathbf{\Upsilon}\mathbf{\Upsilon}^{-1}\right] + \operatorname{tr}\!\left[\mathbf{\Upsilon}\right] + \bm{\nu}^{\text{T}}\bm{\nu} \right\rangle_{\bm{Y}} \\
&= \boxed{\sum_k^K \frac{1}{2}\left\langle -\log\upsilon_k^2 - 1 + \upsilon_k^2 + \nu_k^2 \right\rangle_{\bm{Y}}}.
\end{split}
\]

In moving to the third line we twice used the identity Eq. B.13. Notice that this obviated the need for a sample average under the recognition distribution. Note also that the KL regularizer never depends on the generative-model parameters.

The reconstruction error is a cross entropy:

\[
\begin{split}
\left\langle -\log\hat{p}(\bm{Y}|\bm{\check{X}};\bm{\theta}) \right\rangle_{\bm{Y},\bm{\check{X}}}
&= \left\langle \log\lvert\tau\alpha\mathbf{I}\rvert^{1/2} + \frac{\lVert \bm{\mu}(\bm{\check{X}},\bm{\theta}) - \bm{Y} \rVert^2}{2\alpha} \right\rangle_{\bm{\check{X}},\bm{Y}} \\
&= \frac{1}{2}\left\langle K\log(\tau\alpha) + \frac{\lVert \bm{\mu}(\bm{\check{X}},\bm{\theta}) - \bm{Y} \rVert^2}{\alpha} \right\rangle_{\bm{\check{X}},\bm{Y}} \\
&= \boxed{\frac{1}{2}\left\langle K\log(\tau\alpha) + \frac{\lVert \bm{\mu}(\mathbf{\Upsilon}^{1/2}\bm{\check{Z}} + \bm{\nu},\bm{\theta}) - \bm{Y} \rVert^2}{\alpha} \right\rangle_{\bm{Y},\bm{\check{Z}}}}.
\end{split}
\]

We have reparameterized the average on the last line in anticipation of differentiating with respect to the parameters of $\mathbf{\Upsilon}$ and $\bm{\nu}$.
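Assembled into code, the two boxed terms give the loss that would be handed to an automatic differentiator. A PyTorch sketch, reusing the illustrative recognition module above and writing `gen_mu` for a network that maps $\bm{x}$ to $\bm{\mu}(\bm{x},\bm{\theta})$ (the emission covariance is the fixed $\alpha\mathbf{I}$ of Eq. 8.26, and additive constants like $\frac{K}{2}\log(\tau\alpha)$ are dropped):

```python
import torch

def vae_loss(recog, gen_mu, y, alpha=1.0):
    """Joint relative entropy (up to additive constants) for the
    diagonal/isotropic Gaussian VAE of Eq. 8.26: closed-form KL
    regularizer plus reparameterized reconstruction error."""
    nu, log_upsilon2 = recog(y)
    upsilon2 = torch.exp(log_upsilon2)

    # Boxed KL term: available in closed form, so no samples are required.
    kl = 0.5 * torch.sum(-log_upsilon2 - 1.0 + upsilon2 + nu**2, dim=-1)

    # Boxed reconstruction term, with x = Upsilon^{1/2} z + nu so that
    # gradients reach the recognition parameters through the sample.
    z = torch.randn_like(nu)
    x = nu + torch.sqrt(upsilon2) * z
    recon = torch.sum((gen_mu(x) - y)**2, dim=-1) / (2.0 * alpha)

    return (kl + recon).mean()
```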

Discrete VAEs with the Gumbel-Softmax trick

So far we have considered VAEs with Gaussian recognition distributions (Eq. 8.12), but what if we want discrete latent variables (as in e.g. the GMM)? As it stands, the pathwise gradient estimator will not work, because any function that discretizes will be flat almost everywhere. Any gradient that must pass through this function will be zero.

The most obvious workaround is to relax the discretization into some real-valued function, and we consider this first, although in a more subtle variation. Less obviously, we can employ a discretization that still passes real-valued information….

We begin simply with the relaxation itself and then bring in the VAE at the end. Suppose we want to draw categorical samples—which we interpret as one-hot vectors, $\bm{y}$—according to real-valued natural parameters $\bm{\eta}$. The standard way to do this is first to exponentiate (to make positive) and normalize the vector of natural parameters,

\[
\forall l, \quad \text{Pr}[Y_l = 1] = \frac{\exp\{\eta_l\}}{\sum_k^K \exp\{\eta_k\}} =: \sigma_l(\bm{\eta}); \tag{8.27}
\]

then to draw a uniformly distributed random variable, $U\sim\mathcal{U}(0,1)$; and finally to select the category at which the cumulative mass (under Eq. 8.27) first reaches $U$. (Picturesquely, one can imagine dividing up the interval $[0,1]$ into $K$ bins of widths given by the $\sigma_l(\bm{\eta})$. The bin into which $U$ falls determines the category.)
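In code, this standard inverse-CDF sampler might look as follows (a sketch; the names are ours):

```python
import numpy as np

def categorical_sample(eta, rng=np.random.default_rng(0)):
    """Draw one category (as an index) from Eq. 8.27."""
    p = np.exp(eta - eta.max())                   # subtract max for numerical stability
    p /= p.sum()                                  # sigma(eta)
    u = rng.uniform()                             # U ~ Uniform(0, 1)
    return int(np.searchsorted(np.cumsum(p), u))  # first bin whose cumulative mass reaches U
```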

However, now suppose we want to differentiate through the sampling operation. The inverse cumulative mass function is flat almost everywhere, so the derivative gives us no information. Perhaps we could smooth the CDF, although it is not obvious how we ought to. (The two papers that proposed the Gumbel-softmax reparameterization do not even consider this possibility [32, 19].) Alternatively, perhaps there are other ways to sample from Eq. 8.27.

Gumbel perturbations.

Consider a set of $K$ independent, Gumbel-distributed random variables, each with its own location parameter $\eta_k$. (The mean of such a variable is actually $\eta_k$ plus the Euler–Mascheroni constant, but only the locations will matter in what follows.) The cumulative-distribution and probability-density functions of each such random variable are, respectively,

\[
F_k(g,\eta_k)=\exp\!\left\{-e^{-(g-\eta_k)}\right\},\qquad
f_k(g,\eta_k)=\exp\!\left\{-\!\left(g-\eta_k+e^{-(g-\eta_k)}\right)\right\}.
\]

Equivalently, these random variables can be produced by adding independent, standard (location-zero) Gumbel perturbations, $\bm{G}_0$, to our vector of locations: $\bm{G}=\bm{\eta}+\bm{G}_0$.
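Such perturbations are easy to generate by inverting the CDF above: if $U\sim\mathcal{U}(0,1)$, then $-\log(-\log U)$ has CDF $\exp\{-e^{-g}\}$, i.e. it is standard Gumbel. A sketch (names ours):

```python
import numpy as np

def gumbel_perturb(eta, rng=np.random.default_rng(0)):
    """Return G = eta + G0, with G0 i.i.d. standard Gumbel noise."""
    u = rng.uniform(size=np.shape(eta))
    return np.asarray(eta) - np.log(-np.log(u))   # inverse-CDF Gumbel draws
```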

Distribution of the argmax.

If we know that $G_l=g$, then the probability that $G_l$ is the largest Gumbel random variable is the probability that all the other variables are smaller than $g$. In anticipation of what follows, we interpret the output of the argmax as a one-hot vector, and assign the symbol $\bm{\sigma}_0$ to this vector-valued function:

\[
\bm{Y}:=\operatorname*{argmax}_{k\in\{1,\ldots,K\}}\left\{G_k\right\}=:\bm{\sigma}_0(\bm{G})=\bm{\sigma}_0(\bm{\eta}+\bm{G}_0).
\]

Then

\[
\text{Pr}[Y_l=1\,|\,G_l=g]=\text{Pr}[G_k<g\ \forall k\neq l\,|\,G_l=g]=\prod_{k\neq l}\exp\!\left\{-e^{-(g-\eta_k)}\right\}.
\]

To convert this conditional probability into the marginal probability that $G_l$ is the largest, we multiply by the probability density of $G_l$ at $g$ and integrate over all possible values of $g$:

\[
\begin{split}
\text{Pr}[Y_l=1]&=\int_{-\infty}^{\infty}\prod_{k\neq l}\exp\!\left\{-e^{-(g-\eta_k)}\right\}\exp\!\left\{-\!\left(g-\eta_l+e^{-(g-\eta_l)}\right)\right\}\mathrm{d}g\\
&=\int_{-\infty}^{\infty}\prod_{k}^{K}\exp\!\left\{-e^{-(g-\eta_k)}\right\}\exp\!\left\{-(g-\eta_l)\right\}\mathrm{d}g\\
&=\exp\{\eta_l\}\int_{-\infty}^{\infty}\exp\!\left\{-e^{-g}\sum_k^K e^{\eta_k}\right\}\exp\{-g\}\,\mathrm{d}g\\
t:=e^{-g}\implies\quad&=\exp\{\eta_l\}\int_0^{\infty}\exp\!\left\{-t\sum_k^K e^{\eta_k}\right\}\mathrm{d}t\\
&=\frac{\exp\{\eta_l\}}{\sum_k^K\exp\{\eta_k\}}.
\end{split}
\tag{8.28}
\]

In words, the probability of each Gumbel variate $G_k$ being the largest is given by the soft(arg)max function of the locations. Eq. 8.28 matches Eq. 8.27, so we have arrived at an alternative method for drawing categorical samples: “perturb” the vector $\bm{\eta}$ with independent, standard Gumbel noise, and then select the index of the largest element of the resulting vector.
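The equivalence is easy to check numerically; in the following sketch (with arbitrary $\bm{\eta}$ and sample size), the empirical argmax frequencies should approach the soft(arg)max probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.array([0.5, -1.0, 2.0, 0.0])

# Gumbel-max trick: perturb eta with standard Gumbel noise, take the argmax.
g0 = -np.log(-np.log(rng.uniform(size=(100_000, eta.size))))
counts = np.bincount(np.argmax(eta + g0, axis=1), minlength=eta.size)

softmax = np.exp(eta) / np.exp(eta).sum()
print(counts / counts.sum())   # empirical frequencies of being the largest
print(softmax)                 # sigma(eta), per Eqs. 8.27/8.28
```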

Relaxing the categorical distribution.

Unfortunately, the final step of the sampling procedure, the argmax, is not differentiable, so it seems we have made no progress over the standard sampling technique. On the other hand, it is more or less clear how to approximate the argmax: with a soft(arg)max, whose softness ($s$) we can titrate:

\[
Y_l\approx\frac{\exp\{G_l/s\}}{\sum_k^K\exp\{G_k/s\}}=\sigma_l(\bm{G}/s)=\sigma_l(\bm{\eta}/s+\bm{G}_0/s).
\tag{8.29}
\]

The softmax function is evidently differentiable:

\[
\frac{\mathrm{d}\sigma_l}{\mathrm{d}g_m}=\sigma_l\frac{\mathrm{d}}{\mathrm{d}g_m}\log\sigma_l
=\sigma_l\!\left[\frac{1}{s}\mathbb{1}[l=m]-\frac{1}{s}\frac{\exp\{g_m/s\}}{\sum_k^K\exp\{g_k/s\}}\right]
=\sigma_l\frac{1}{s}\bigl[\mathbb{1}[l=m]-\sigma_m(\bm{g}/s)\bigr].
\]

For very soft functions ($s\gg 1$), the gradient magnitudes are all manageable (indeed, less than 1), but the elements of $\bm{Y}$ become nearly equal, no matter the values of $\bm{\eta}$, and the approximation to categorical random variables is poor. At $s=1$, we are very nearly approximating $\bm{Y}$ by its mean, although because $\bm{\eta}$ is perturbed by Gumbel noise before passing through the softmax, this is not quite the case, and samples vary even for fixed $\bm{\eta}$. For very hard functions ($s\ll 1$), $\bm{Y}$ approaches a one-hot vector, that is, an actual categorical sample. But in this same limit, $s\to 0$, the gradient becomes unbounded. In practice, then, the softness is typically decreased (starting from $s\approx 1$) over the course of learning.
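A sketch of the resulting sampler, with the softness $s$ exposed so that it can be annealed during learning (the names are ours):

```python
import numpy as np

def gumbel_softmax_sample(eta, s, rng=np.random.default_rng(0)):
    """Soft categorical sample per Eq. 8.29: softmax((eta + G0)/s).
    Large s -> nearly uniform; small s -> nearly one-hot."""
    g = eta - np.log(-np.log(rng.uniform(size=eta.shape)))  # eta + Gumbel noise
    z = (g - g.max()) / s                                   # shift for stability
    e = np.exp(z)
    return e / e.sum()
```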

If we interpret Eq. 8.29 not as an approximation but as the definition of $\bm{Y}$, then we can no longer say that $\bm{Y}$ is categorically distributed (except in the limit $s\to 0$, which we never reach). Instead, $\bm{Y}$ has a novel distribution, which its inventors call the “concrete” (a portmanteau of “continuous” and “discrete”) [32] or “Gumbel-softmax” [19] distribution. We will simply refer to it as the soft categorical distribution. What is the probability density for this distribution?

Using the Gumbel-softmax trick in a loss function.

Let the loss be an average over some function of $\bm{Y}$, $\mathcal{L}=\left\langle f(\bm{Y})\right\rangle_{\bm{Y}}$, and let the natural parameters be functions of some other parameters $\bm{\theta}$, i.e. $\bm{\eta}=\bm{\eta}(\bm{\theta})$. First, we reparameterize to allow the derivative to pass into the average:

\[
\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\bm{\theta}}
=\frac{\mathrm{d}}{\mathrm{d}\bm{\theta}}\left\langle f(\bm{Y})\right\rangle_{\bm{Y}}
=\frac{\mathrm{d}}{\mathrm{d}\bm{\theta}}\left\langle f(\bm{\sigma}_0(\bm{G}))\right\rangle_{\bm{G}}
=\frac{\mathrm{d}}{\mathrm{d}\bm{\theta}}\left\langle f(\bm{\sigma}_0(\bm{\eta}(\bm{\theta})+\bm{G}_0))\right\rangle_{\bm{G}_0}
=\left\langle\frac{\mathrm{d}}{\mathrm{d}\bm{\theta}}f(\bm{\sigma}_0(\bm{\eta}(\bm{\theta})+\bm{G}_0))\right\rangle_{\bm{G}_0}.
\]

Now, replacing the hard argmax $\bm{\sigma}_0$ with the soft approximation $\bm{\sigma}$ of Eq. 8.29,

\[
\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\bm{\theta}}
\approx\left\langle\frac{\mathrm{d}}{\mathrm{d}\bm{\theta}}f(\bm{\sigma}(\bm{\eta}(\bm{\theta})+\bm{G}_0))\right\rangle_{\bm{G}_0}
=\left\langle\frac{\partial\bm{\eta}^{\text{T}}}{\partial\bm{\theta}}\frac{\partial\bm{\sigma}^{\text{T}}}{\partial\bm{g}}(\bm{\eta}(\bm{\theta})+\bm{G}_0)\,\frac{\partial f}{\partial\bm{y}}(\bm{\sigma}(\bm{\eta}(\bm{\theta})+\bm{G}_0))\right\rangle_{\bm{G}_0}.
\]
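The following sketch assembles this chain rule by hand for a Monte Carlo estimate of the gradient, using the softmax Jacobian derived above; `eta_fn`, `deta_dtheta`, and `f_grad` are hypothetical caller-supplied functions, not anything from the text:

```python
import numpy as np

def pathwise_grad(theta, eta_fn, deta_dtheta, f_grad, s=1.0,
                  n_samples=1000, rng=np.random.default_rng(0)):
    """Estimate dL/dtheta = < (deta/dtheta)^T (dsigma/dg)^T (df/dy) >_{G0}.

    eta_fn      : theta -> eta, shape (K,)
    deta_dtheta : theta -> Jacobian deta/dtheta, shape (K, D)
    f_grad      : y -> df/dy, shape (K,)
    """
    eta = eta_fn(theta)
    K = eta.size
    total = np.zeros(theta.size)
    for _ in range(n_samples):
        g = eta - np.log(-np.log(rng.uniform(size=K)))   # eta + Gumbel noise
        z = (g - g.max()) / s
        sig = np.exp(z) / np.exp(z).sum()                # sigma((eta + G0)/s)
        # softmax Jacobian: dsigma_l/dg_m = sigma_l (1[l=m] - sigma_m)/s
        J = (np.diag(sig) - np.outer(sig, sig)) / s
        total += deta_dtheta(theta).T @ (J.T @ f_grad(sig))
    return total / n_samples
```

In automatic-differentiation frameworks this bookkeeping is handled implicitly; the point of the sketch is only that every factor in the chain is finite and well defined for $s>0$.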