10.1 Noise-Contrastive Objectives

Here we focus on (2) (we set aside the problem of generating samples). One possible solution is to design a loss function that is minimized only for an energy corresponding to a normalized distribution, i.e. for which $Z(\bm{\theta})=1$. We will not constrain the energy itself; that is, there exist settings of the parameters $\bm{\theta}$ for which $Z(\bm{\theta})\neq 1$. However, none of these settings minimizes the loss. What objectives have this property?

10.1.1 Noise-Contrastive Estimation

The basic intuition behind noise-contrastive estimation (NCE) [9] is that one such objective is distinguishing data from noise. More precisely, we let the task be to discriminate good or "positive" samples, drawn from $p(\bm{y}^+)$, from "negative" samples, drawn from a "noise" distribution $p_{\text{n}}(\bm{y}^-)$, by improving an unnormalized model $\exp\{-E(\bm{\hat{y}}^+,\bm{\theta})\}$ for the "positive" data. The dual demands of minimizing both false alarms and misses will prevent the model from making its implicit normalizer either too big or too small. We choose the noise distribution, so we can (to some extent) control how hard this task is.

Mathematically, if $X$ is the Bernoulli random variable indicating from which of the distributions $\bm{Y}$ was drawn, the problem becomes that of minimizing the posterior cross entropy $\text{H}_{p\hat{p}}[X|\bm{Y}]$. There is no reason to make negative or positive samples more common, so we let the prior probability of $X$ be uniform. Therefore the data distribution is

p(x) \coloneqq 1/2, \qquad p(\bm{y} \mid x) \coloneqq p(\bm{y})^{x}\,p_{\text{n}}(\bm{y})^{1-x}.   (10.1)

We will give our generative model the same form, except that our model for the positive data will not be normalized. For notational symmetry between the model and noise distribution, we also write the noise distribution in terms of an energy,

p_{\text{n}}(\bm{y}^-) = \exp\{-E_{\text{n}}(\bm{y}^-)\}.

However, we define this energy such that this noise distribution is indeed normalized. Our generative model is then

\hat{p}(\hat{x};\bm{\theta}) \coloneqq 1/2, \qquad \hat{q}(\bm{\hat{y}} \mid \hat{x};\bm{\theta}) \coloneqq \exp\{-\hat{x}\,E(\bm{\hat{y}},\bm{\theta}) - (1-\hat{x})\,E_{\text{n}}(\bm{\hat{y}})\}.

Note well that $\hat{q}(\bm{\hat{y}} \mid \hat{X}=1;\bm{\theta})$ is not normalized: at the beginning of training, at least, it will not integrate to 1. Nevertheless, if we ignore this and compute the posterior in the usual way with Bayes' rule, we get a perfectly legitimate probability distribution. In particular, the posterior probability of an example being positive is

\hat{p}(\hat{X}=1 \mid \bm{\hat{y}};\bm{\theta}) = \frac{\exp\{-E(\bm{\hat{y}},\bm{\theta})\}}{\exp\{-E(\bm{\hat{y}},\bm{\theta})\} + \exp\{-E_{\text{n}}(\bm{\hat{y}})\}} = \sigma\{E_{\text{n}}(\bm{\hat{y}}) - E(\bm{\hat{y}},\bm{\theta})\} = \sigma\{h(\bm{\hat{y}},\bm{\theta})\},   (10.2)

with $\sigma$ the logistic function and $h(\bm{\hat{y}},\bm{\theta})$ the difference in energies:

h(\bm{\hat{y}},\bm{\theta}) \coloneqq E_{\text{n}}(\bm{\hat{y}}) - E(\bm{\hat{y}},\bm{\theta}).   (10.3)

The key result that makes NCE work is that the cross entropy of this posterior (see Eq. 10.4 below) is minimized only when $E(\bm{y}^+,\bm{\theta}) = -\log p(\bm{y}^+)$, as opposed to $E(\bm{y}^+,\bm{\theta}) = -\log p(\bm{y}^+) + C$ for some constant $C$ [9]. (Technically, the proof requires the noise distribution to be supported wherever the data distribution is.) So we will not need to compute the normalizer, i.e. the integral of $\exp\{-E(\bm{\hat{y}},\bm{\theta})\}$. Intuitively, this works because the model and noise energies always show up together and must balance. If the learned (implicit) normalizer is too small, for example if the model energy $E(\bm{y},\bm{\theta})$ is smaller than the noise energy for most values of $\bm{y}$, then most negative samples will be assigned to the positive distribution. The reverse, also undesirable, holds when the implicit normalizer is too large. Both kinds of mistakes increase the cross entropy.

Notice, however, that these mistakes will be less noticeable if the data and noise distributions are very different from each other—e.g., if the bulks of their probability masses are very far apart. In this case, the model could assign (e.g.) overly high probability to the data (by making the normalizer too small) without making the noise samples particularly probable under the model. Technically, the normalized energy of the data distribution is guaranteed to be the unique solution to the loss based on the posterior in Eq. 10.2 (see below) as long as the noise distribution is supported wherever the data distribution is. But for finite training samples (the situation in which we usually find ourselves), the guarantee is voided. The problem would appear to be more acute for more expressive model distributions.

Quasi-generative learning.

The cross-entropy loss is the negative log of the posterior distribution (Eq. 10.2), averaged under the data (Eq. 10.1):

\begin{split}
\mathcal{L} = \text{H}_{(p\check{p})\hat{p}}[X \mid \bm{Y}]
&\approx \Big\langle -\log\Big(\hat{p}(\hat{X}=1 \mid \bm{Y};\bm{\theta})^{X}\,\hat{p}(\hat{X}=0 \mid \bm{Y};\bm{\theta})^{1-X}\Big)\Big\rangle_{X,\bm{Y}}\\
&= \tfrac{1}{2}\Big\langle -\log\hat{p}(\hat{X}=1 \mid \bm{Y}^+;\bm{\theta})\Big\rangle_{\bm{Y}^+} + \tfrac{1}{2}\Big\langle -\log\big(1 - \hat{p}(\hat{X}=1 \mid \bm{Y}^-;\bm{\theta})\big)\Big\rangle_{\bm{Y}^-}\\
&= \tfrac{1}{2}\Big\langle -\log\sigma\{h(\bm{Y}^+,\bm{\theta})\}\Big\rangle_{\bm{Y}^+} + \tfrac{1}{2}\Big\langle -\log\sigma\{-h(\bm{Y}^-,\bm{\theta})\}\Big\rangle_{\bm{Y}^-}.
\end{split}   (10.4)
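To make Eq. 10.4 concrete, here is a minimal PyTorch sketch of how the loss could be estimated from minibatches, assuming the energy difference $h(\cdot,\bm{\theta})$ of Eq. 10.3 has already been evaluated on positive and negative samples (the function and variable names are illustrative, not from the original papers):

import torch
import torch.nn.functional as F

def nce_loss(h_pos, h_neg):
    # h_pos: energy differences h(y+, theta) for a batch of positive samples
    # h_neg: energy differences h(y-, theta) for a batch of noise samples
    # Eq. 10.4: -log sigma(h) on positives, -log sigma(-h) on negatives,
    # each averaged under its own distribution and weighted by the 1/2 prior.
    return 0.5 * (-F.logsigmoid(h_pos).mean() - F.logsigmoid(-h_neg).mean())

Because gradients flow through h_pos and h_neg, the same function can be used to train whatever parameterized energy produced them.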

This is evidently a discriminative problem, but with a twist. The canonical generative approach to binary classification is to model the generative distribution $\hat{p}(\hat{x};\bm{\theta})\,\hat{p}(\bm{\hat{y}} \mid \hat{x};\bm{\theta})$ (like NCE); acquire the parameters by minimizing the joint cross entropy $\text{H}_{p\hat{p}}[X,\bm{Y}]$ (unlike NCE); and then invert to $\hat{p}(\hat{x} \mid \bm{\hat{y}};\bm{\theta})$ with Bayes' rule. For Gaussian mixtures, this is known as linear/quadratic discriminant analysis (depending on whether the covariance is the same/different across classes). The canonical discriminative approach to binary classification is to model $\hat{p}(\hat{x} \mid \bm{\hat{y}};\bm{\theta})$ directly (unlike NCE); and then minimize the cross entropy $\text{H}_{p\hat{p}}[X|\bm{Y}]$ (like NCE). This is logistic regression. NCE mixes both methods: it models the generative distribution $\hat{p}(\hat{x};\bm{\theta})\,\hat{p}(\bm{\hat{y}} \mid \hat{x};\bm{\theta})$, but first inverts with Bayes' rule, and only then minimizes the discriminative cross entropy $\text{H}_{p\hat{p}}[X|\bm{Y}]$. In the classic case of a mixture of two Gaussians, this would amount to learning the two (mean, covariance) pairs by minimizing the posterior cross entropy—as opposed to learning these parameters by minimizing the joint cross entropy (generative), or learning a separating hyperplane by minimizing the posterior cross entropy (discriminative).

[[Nice properties of the estimator….]]

10.1.2 InfoNCE

Van den Oord and colleagues propose to put NCE to a very different purpose [51]. Rather than attempting to learn a parametric form for the probability of observed samples, they aim to extract useful features from data. In order to do so, they introduce what amounts to four novel variations on NCE, which we discuss one at a time.

(1) Generalizing to multiple “examples.”

Suppose that the observation $\bm{y}$ is not a single sample but a collection of $K$ "examples," $(\bm{y}^1,\ldots,\bm{y}^K)$, precisely one of which is not noise. Then the goal is not to determine whether or not a sample is noise, but rather to determine which of the examples is not. This means that rather than use the model-noise energy difference (Eq. 10.3) directly to assign the example to the positive or negative class, as in NCE, we will compare $K$ energy differences to each other (with the softmax function).

Figure 10.1: Generalizations of NCE for representation learning. (A) InfoNCE. For each toss of a $K$-sided die, $\bm{\hat{X}}$, with $\hat{p}(\bm{\hat{x}}) = 1/K$, one sample (corresponding to the face of the die that came up) is drawn from the model distribution $\hat{q}(\bm{\hat{y}};\bm{\theta})$ and the remaining $K-1$ samples are drawn from the noise distribution, so the emission for each of the $K$ examples is $\hat{p}(\bm{\hat{y}} \mid \bm{\hat{x}};\bm{\theta}) = \hat{q}(\bm{\hat{y}};\bm{\theta})^{\hat{x}^k}\,p_{\text{n}}(\bm{\hat{y}})^{1-\hat{x}^k}$, repeated over $N$ trials. (B) "Local NCE" (as in wav2vec). For each of $N$ trials, a coin is flipped $K$ times, $\hat{p}(\bm{\hat{x}}) = (1/2)^K$, to determine whether each example is drawn from the model distribution or the noise distribution, $\hat{p}(\bm{\hat{y}} \mid \bm{\hat{x}};\bm{\theta}) = \prod_{k=1}^K \hat{q}(\hat{y}^k;\bm{\theta})^{\hat{x}^k}\,p_{\text{n}}(\hat{y}^k)^{1-\hat{x}^k}$. Note that the data distribution is the same for both models, and corresponds to (A), with $\hat{q}(\bm{\hat{y}};\bm{\theta})$ replaced by the data (Eq. 10.5).

In this setup, the latent variable is categorical (conceived as a one-hot vector $\bm{\hat{X}}$) rather than Bernoulli, and the data distribution is:

p(\bm{x}) = \frac{1}{K}, \qquad p(\bm{y}^1,\ldots,\bm{y}^K \mid \bm{x}) = \prod_{k=1}^K p(\bm{y}^k)^{x^k}\,p_{\text{n}}(\bm{y}^k)^{1-x^k}.   (10.5)

Again we have set the prior uniform, since we have no reason to make any one of the elements more or less likely to be noise than any other. We emphasize that this is not a mixture model: a single sample contains $K$ "examples": one positive, and $K-1$ negative.

The generative model takes the same form, with the model distribution taking the place of the data marginal. Writing it in terms of energies, we obtain

\hat{p}(\bm{\hat{x}};\bm{\theta}) = \frac{1}{K}, \qquad \hat{q}(\bm{\hat{y}}^1,\ldots,\bm{\hat{y}}^K \mid \bm{\hat{x}};\bm{\theta}) = \exp\Big\{\sum_{k=1}^K\big(-\hat{x}^k E(\bm{\hat{y}}^k,\bm{\theta}) - (1-\hat{x}^k)E_{\text{n}}(\bm{\hat{y}}^k)\big)\Big\}.

Again we ignore the fact that the emission is unnormalized and simply compute a (normalized) posterior distribution with Bayes’ rule

\begin{split}
\hat{p}(\hat{X}^i=1 \mid \bm{\hat{y}}^1,\ldots,\bm{\hat{y}}^K;\bm{\theta})
&= \frac{\frac{1}{K}\exp\big\{-E(\bm{\hat{y}}^i,\bm{\theta}) - \sum_{k\neq i}^K E_{\text{n}}(\bm{\hat{y}}^k)\big\}}{\sum_{j=1}^K \frac{1}{K}\exp\big\{-E(\bm{\hat{y}}^j,\bm{\theta}) - \sum_{k\neq j}^K E_{\text{n}}(\bm{\hat{y}}^k)\big\}}\\
&= \frac{\frac{1}{K}\exp\big\{h(\bm{\hat{y}}^i,\bm{\theta}) - \sum_{k=1}^K E_{\text{n}}(\bm{\hat{y}}^k)\big\}}{\sum_{j=1}^K \frac{1}{K}\exp\big\{h(\bm{\hat{y}}^j,\bm{\theta}) - \sum_{k=1}^K E_{\text{n}}(\bm{\hat{y}}^k)\big\}}\\
&= \frac{\exp\{h(\bm{\hat{y}}^i,\bm{\theta})\}}{\sum_{j=1}^K \exp\{h(\bm{\hat{y}}^j,\bm{\theta})\}}\\
&= \operatorname{softmax}\{h(\bm{\hat{y}}^1,\bm{\theta}),\ldots,h(\bm{\hat{y}}^K,\bm{\theta})\}_i;
\end{split}   (10.6)

that is, the $i^{\text{th}}$ output of the softmax function. Eq. 10.6 is evidently a kind of generalization of Eq. 10.2. (However, note that the multi-example version of NCE does not quite reduce to the single-example case even when $K=2$. Eq. 10.2 can indeed be re-written with a softmax as in Eq. 10.6, with the first argument equal to $h(\bm{\hat{y}},\bm{\theta})$ and the second equal to 0. The latter reflects our indifferent prior, which provides no additional information. In the two-example version of the generalization under discussion, on the other hand, the second argument encodes the relative probability of the second example being data or noise. In short, deciding which of two samples is "real" is easier than deciding whether or not a single sample is.) Putting this together with the data distribution, we can write the conditional cross entropy as

\begin{split}
\mathcal{L} = \text{H}_{(p\check{p})\hat{p}}[\bm{X} \mid \bm{Y}^1,\ldots,\bm{Y}^K]
&\approx \Big\langle -\log\Big(\prod_{k=1}^K \hat{p}(\hat{X}^k=1 \mid \bm{Y}^1,\ldots,\bm{Y}^K;\bm{\theta})^{X^k}\Big)\Big\rangle_{\bm{X},\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= \Big\langle -\sum_{k=1}^K X^k \log\hat{p}(\hat{X}^k=1 \mid \bm{Y}^1,\ldots,\bm{Y}^K;\bm{\theta})\Big\rangle_{\bm{X},\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= \Big\langle -\log\hat{p}(\hat{X}^+=1 \mid \bm{Y}^1,\ldots,\bm{Y}^K;\bm{\theta})\Big\rangle_{\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= \Big\langle -\log\big(\operatorname{softmax}\{h(\bm{Y}^1,\bm{\theta}),\ldots,h(\bm{Y}^K,\bm{\theta})\}_+\big)\Big\rangle_{\bm{Y}^1,\ldots,\bm{Y}^K}.
\end{split}   (10.7)

In the final line, we are selecting only that output of the softmax function that corresponds to the actual positive sample (whose index will of course differ from trial to trial).
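In code, Eqs. 10.6 and 10.7 amount to a softmax cross entropy over the $K$ energy differences of each trial. A minimal PyTorch sketch, assuming $h$ has already been evaluated for all $K$ examples of every trial and the index of the positive example has been recorded (names are illustrative):

import torch
import torch.nn.functional as F

def multi_example_nce_loss(h, pos_index):
    # h:         (N, K) tensor, h[n, k] = h(y^k, theta) for trial n (Eq. 10.3)
    # pos_index: (N,) tensor of indices of the positive example in each trial
    log_posterior = F.log_softmax(h, dim=1)               # Eq. 10.6, in the log domain
    picked = log_posterior[torch.arange(h.shape[0]), pos_index]
    return -picked.mean()                                 # Eq. 10.7

This is exactly what F.cross_entropy(h, pos_index) computes, with the energy differences playing the role of logits.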

Negative samples enforce normalization.

We can shed light on the role played by the negative examples by considering them separately from the positive example in the posterior probability of a positive example:

\hat{p}(\hat{X}^+=1 \mid \bm{\hat{y}}^1,\ldots,\bm{\hat{y}}^K;\bm{\theta}) = \frac{\exp\{h(\bm{\hat{y}}^+,\bm{\theta})\}}{\exp\{h(\bm{\hat{y}}^+,\bm{\theta})\} + \sum_{j\neq+}^K \exp\{h(\bm{\hat{y}}^j,\bm{\theta})\}}.

Now notice that the negative-sample terms sum approximately to a constant:

\begin{split}
\sum_{j\neq+}^K \exp\{h(\bm{y}^j,\bm{\theta})\} = Z(\bm{\theta})\sum_{j\neq+}^K \frac{\hat{p}(\bm{y}^j;\bm{\theta})}{p_{\text{n}}(\bm{y}^j)}
&= Z(\bm{\theta})(K-1)\Big\langle\frac{\hat{p}(\bm{Y}^-;\bm{\theta})}{p_{\text{n}}(\bm{Y}^-)}\Big\rangle_{\bm{Y}^-}\\
&\approx Z(\bm{\theta})(K-1)\int_{\bm{y}^-} p_{\text{n}}(\bm{y}^-)\,\frac{\hat{p}(\bm{y}^-;\bm{\theta})}{p_{\text{n}}(\bm{y}^-)}\,\mathrm{d}\bm{y}^-\\
&= Z(\bm{\theta})(K-1).
\end{split}   (10.8)

The approximate equality becomes more exact as the number of negative examples increases. (And technically, the final equality requires the model and noise distributions to have the same support.) Eq. 10.8 says that, if we had in hand an expression for the normalizer, we could do without the negative samples altogether—they drop out of the loss function. Indeed, the loss now becomes

\begin{split}
\mathcal{L} = \text{H}_{(p\check{p})\hat{p}}[\bm{X} \mid \bm{Y}]
&\approx \Big\langle -\log\Big(\frac{\exp\{h(\bm{Y}^+,\bm{\theta})\}}{\exp\{h(\bm{Y}^+,\bm{\theta})\} + Z(\bm{\theta})(K-1)}\Big)\Big\rangle_{\bm{Y}^+}\\
&= \Big\langle \log\big(1 + Z(\bm{\theta})(K-1)\exp\{-h(\bm{Y}^+,\bm{\theta})\}\big)\Big\rangle_{\bm{Y}^+}\\
&= \Big\langle \log\Big(1 + \frac{p_{\text{n}}(\bm{Y}^+)}{\hat{p}(\bm{Y}^+;\bm{\theta})}(K-1)\Big)\Big\rangle_{\bm{Y}^+}\\
&\approx \Big\langle \log\Big(\frac{p_{\text{n}}(\bm{Y}^+)}{\hat{p}(\bm{Y}^+;\bm{\theta})}\Big)\Big\rangle_{\bm{Y}^+} + \log K,
\end{split}   (10.9)

where the final line follows for large $K$. (The authors of the original paper [51] interpret this approximation as a lower bound when the model distribution matches the data distribution. Presumably the idea is that, for a very good model $\hat{p}(\bm{\hat{y}};\bm{\theta})$, the noise-to-model ratio will usually be less than one when evaluated on positive examples, so the neglected $+1$ will dominate the neglected $-1$. It would take more work to prove this.) This makes sense: the whole point of using negative examples was to force unnormalized models to learn the correct normalization. Since we want to use models for which computing $Z(\bm{\theta})$ is intractable, we will not use Eq. 10.9 as our objective—but we will use it below to prove that optimizing the multi-example NCE loss (Eq. 10.7) increases mutual information in a certain setting.
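Eq. 10.8 is also easy to check numerically. The following sketch uses a standard-normal noise distribution and a deliberately unnormalized (and narrower) Gaussian model, so that $Z(\bm{\theta})$ is known in closed form; all numbers are illustrative:

import math
import torch

torch.manual_seed(0)
log_c, mu, sigma = 1.3, 0.5, 0.7            # implicit normalizer Z(theta) = exp(log_c)

def E_model(y):                             # E(y, theta): an unnormalized Gaussian
    return 0.5 * ((y - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi)) - log_c

def E_noise(y):                             # E_n(y): standard normal (properly normalized)
    return 0.5 * y ** 2 + 0.5 * math.log(2 * math.pi)

K = 5000
y_neg = torch.randn(K - 1)                  # K - 1 noise samples
h = E_noise(y_neg) - E_model(y_neg)         # Eq. 10.3
print(torch.exp(h).sum().item())            # Monte Carlo sum of the negative-sample terms
print(math.exp(log_c) * (K - 1))            # Z(theta) * (K - 1), as Eq. 10.8 predicts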

(2) Modeling the energy difference.

We have assumed up to this point that, in addition to providing samples, our source of "noise" also provides an expression for their probability that we can evaluate. What if we have only samples from the noise distribution? Can we still learn a model of the positive data?

One obvious solution is to learn a model for the negative as well as the positive samples; for example, to build a parameterized model for the noise energy, $E_{\text{n}}(\bm{\hat{y}}^-,\bm{\theta})$, and use it in the generative model. But if we wanted to get a normalized version of the model energy, $E(\bm{\hat{y}}^+,\bm{\theta})$, we would have to know, or be able to compute, the normalizer for this noise energy, $E_{\text{n}}(\bm{\hat{y}}^-,\bm{\theta})$, which is troubling. However, as noted at the outset, getting a probability model for the data, normalized or unnormalized, is not the goal of InfoNCE. So instead we will directly model the energy difference, i.e. the left-hand rather than right-hand side of Eq. 10.3. Rather than asking for the probabilities of an example $\bm{y}^k$ under the two models (positive and negative), we are asking only for its relative probability.

One subtlety with modeling $h(\bm{\hat{y}},\bm{\theta})$ directly is that we are still at liberty to interpret this as fitting $E(\bm{\hat{y}}^+,\bm{\theta})$ only, that is to say, not fitting the noise energy, $E_{\text{n}}(\bm{y}^-)$. In other words, we can attribute any error in $h(\bm{\hat{y}},\bm{\theta})$ to an error in $E(\bm{\hat{y}}^+,\bm{\theta})$ rather than $E_{\text{n}}(\bm{y}^-)$. Consequently, the denominator in Eq. 10.8 can still be interpreted as $p_{\text{n}}(\bm{y}^-)$, and the equation still goes through. We will use it below.

(3) Contrasting a conditional with a marginal distribution.

In the third departure from the original NCE, the InfoNCE method proposes to learn to model a conditional distribution $p(\bm{y} \mid \bm{z})$, given some auxiliary variable $\bm{z}$. More importantly, we use the data marginal, $p(\bm{y})$, as the noise distribution. Thus, $p(\bm{y})$ has switched roles, from data to "noise," or more felicitously from the source of positive to negative examples. The intuition behind this choice of distributions is that a model that can distinguish them must be able to extract information about $\bm{Y}$ from $\bm{Z}$.

This can be made precise in the language of information theory. However, the information we aim to increase is not precisely a mutual information, neither between $\bm{Z}$ and $\bm{\hat{Y}}$ nor anything else, because it depends on two different distributions: the model, $\hat{p}(\bm{\hat{y}} \mid \bm{z};\bm{\theta})$, and the data, $p(\bm{y},\bm{z})$. The standard mutual information can of course be written as

\mathcal{I}(\bm{Y};\bm{Z}) = \text{H}_p[\bm{Y}] - \text{H}_p[\bm{Y} \mid \bm{Z}].

The information quantity we are interested in retains the marginal entropy over $\bm{Y}$, since the model has no effect on it (see previous section), but replaces the conditional entropy with the conditional cross entropy:

\text{H}_p[\bm{Y}] - \text{H}_{p\hat{p}}[\bm{Y} \mid \bm{Z}] \eqqcolon \mathcal{I}_{p\hat{p}}(\bm{Y};\bm{Z}).   (10.10)

We might accordingly call this (for want of something better) the "cross mutual information." Intuitively, it is the portion of the (actual) entropy of $\bm{Y}$ that is explained by $\bm{Z}$ under the model $\hat{p}(\bm{\hat{y}} \mid \bm{z};\bm{\theta})$.

Now, Gibbs's inequality tells us that $\text{H}_{p\hat{p}}[\bm{Y} \mid \bm{Z}] \geq \text{H}_p[\bm{Y} \mid \bm{Z}]$, so consequently $\mathcal{I}_{p\hat{p}}(\bm{Y};\bm{Z}) \leq \mathcal{I}(\bm{Y};\bm{Z})$: the cross mutual information is never greater than the actual mutual information. Equality is reached when the model matches the true data conditional. Although this is also the point at which the posterior cross entropy in Eq. 10.7 reaches its minimum, this is not quite the same as saying that improving the latter increases the cross mutual information. Still, it is intuitive, since we expect training to oblige the model to make increasing use of $\bm{Z}$ in order to distinguish the conditional data from the marginal data. And indeed, we can show this. The cross mutual information of Eq. 10.10 between $\bm{Y}$ and $\bm{Z}$ can be written more explicitly in terms of log probabilities, and then related to the (approximate) loss function in Eq. 10.9:

\mathcal{I}(\bm{Y};\bm{Z}) \geq \mathcal{I}_{p\hat{p}}(\bm{Y};\bm{Z}) = \mathbb{E}_{\bm{Y},\bm{Z}}\!\left[\log\frac{\hat{p}(\bm{Y} \mid \bm{Z};\bm{\theta})}{p(\bm{Y})}\right] \approx -\text{H}_{(p\check{p})\hat{p}}[\bm{X} \mid \bm{Y}] + \log K.   (10.11)

The final (approximate) equality follows because of the (somewhat subtle) fact that the expectation includes only positive samples, to wit, samples in which $\bm{z}$ is correctly paired with $\bm{y}$, and therefore Eq. 10.9 applies.

Eq. 10.11 tells us that decreasing the posterior cross entropy (on the right-hand side of Eq. 10.11) increases, at least approximately, the cross mutual information (on the left). The larger $K$, the less approximate the final equality (see Eq. 10.8). (And although this also increases the $\log K$ term in Eq. 10.11 and therefore the discrepancy between the cross mutual information and the cross entropy, it does not increase the discrepancy between their gradients.) In sum, minimizing the NCE loss in Eq. 10.7, with $h(\bm{\hat{y}},\bm{\hat{z}},\bm{\theta})$ defined to be the difference between the conditional and marginal energies, maximizes the information extracted from $\bm{Z}$ by the function that assigns energies to $\bm{Y}$, $E(\bm{\hat{y}},\bm{\hat{z}},\bm{\theta})$.
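In practice, Eq. 10.11 is often read as an estimator: $\log K$ minus the empirical multi-example NCE loss gives an approximate lower bound on the mutual information. A minimal sketch, here with random (hence uninformative) scores so that the bound comes out near zero (names and sizes are illustrative):

import math
import torch
import torch.nn.functional as F

N, K = 512, 64
h_scores = torch.randn(N, K)                 # h(y^k, z_n, theta) for each trial n
pos_index = torch.randint(K, (N,))           # column holding the correctly paired y
loss = F.cross_entropy(h_scores, pos_index)  # the multi-example NCE loss, Eq. 10.7
mi_lower_bound = math.log(K) - loss.item()   # approximate bound of Eq. 10.11; near 0 here
print(mi_lower_bound)

Since the cross entropy is nonnegative, this estimate can never exceed $\log K$, however informative $\bm{Z}$ actually is.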

But now notice that there is nothing mathematical to distinguish the roles played by $\bm{Y}$ and $\bm{Z}$. In terms of the data, they are either paired (positive examples) or unpaired (negative examples), so they play symmetrical roles. In terms of the model, they enter the loss only through the generic function $h(\bm{\hat{y}},\bm{\hat{z}},\bm{\theta})$, which is learned and has no pre-specified role for its first and second arguments. So we can equally interpret descent of the InfoNCE loss as learning to extract useful information from $\bm{Y}$ about $\bm{Z}$ rather than the other way around. Indeed, perhaps the most felicitous interpretation, which emphasizes this symmetry, is that the training scheme asks the model to distinguish between the joint distribution $p(\bm{z},\bm{y})$ and the product of the marginals, $p(\bm{z})\,p(\bm{y})$.

(4) Modeling future samples of a sequence.

There are many possibilities, but one nice application of InfoNCE is to time-series data, and in particular to learning to extract useful information from the "auxiliary variable" $(\bm{Y}_1,\ldots,\bm{Y}_t)$ about the variable of interest, $\bm{Y}_{t+s}$ (for some positive integer $s$). That is, we want to learn how to "summarize" sequences of random variables so as best to predict their future state. (For example, for linear dynamical systems, the optimal summary is a weighted sum of past states, with weights decaying exponentially into the past.) Thus the positive and negative ("noise") distributions are, respectively, $p(\bm{y}_{t+s} \mid \bm{y}_1,\ldots,\bm{y}_t)$ and $p(\bm{y}_{t+s})$.

As lately discussed, the authors model the difference between the conditional and unconditional energies, rather than the energies themselves. In particular, they let this model have the form

h(\bm{y}_{t+s},\bm{y}_1,\ldots,\bm{y}_t,\bm{\theta}) = f_{\text{s}}(\bm{y}_{t+s},\bm{\theta})^{\text{T}}\,\mathbf{W}\,f_{\text{RNN}}\big(f_{\text{s}}(\bm{y}_1,\bm{\theta}),\ldots,f_{\text{s}}(\bm{y}_t,\bm{\theta}),\bm{\theta}\big),

where $f_{\text{s}}$ is a static "encoder" ANN and $f_{\text{RNN}}$ is an RNN. In order to decrease the posterior cross entropy (Eq. 10.7), the encoder and the RNN must extract representations from the data history (on the one hand) and a future sample (on the other) that expose the shared information between them to a bilinear form. The parameters $\bm{\theta}$ and $\mathbf{W}$ are all learned by stochastic gradient descent of Eq. 10.7.
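A minimal PyTorch sketch of this form of $h$, using a small fully-connected network for $f_{\text{s}}$ and a GRU for $f_{\text{RNN}}$, and treating the other sequences in a minibatch as the source of negative (marginal) samples; the module names and sizes are illustrative, not the original architecture of [51]:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearScorer(nn.Module):
    # Computes h(y_{t+s}, y_1..y_t, theta) = f_s(y_{t+s})^T W f_RNN(f_s(y_1), ..., f_s(y_t)).
    def __init__(self, obs_dim, feat_dim=64, context_dim=64):
        super().__init__()
        self.f_s = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))        # static encoder
        self.f_rnn = nn.GRU(feat_dim, context_dim, batch_first=True)   # summary of the past
        self.W = nn.Linear(context_dim, feat_dim, bias=False)          # bilinear map W

    def forward(self, past, future):
        # past:   (N, t, obs_dim) histories y_1, ..., y_t
        # future: (N, obs_dim)    candidate future samples y_{t+s}
        _, c = self.f_rnn(self.f_s(past))      # final hidden state: (1, N, context_dim)
        z = self.f_s(future)                   # encoded candidates:  (N, feat_dim)
        # (N, N) scores: entry (n, k) pairs history n with candidate k;
        # the diagonal holds the correctly paired (positive) examples.
        return self.W(c.squeeze(0)) @ z.T

N, t, obs_dim = 32, 20, 8
scorer = BilinearScorer(obs_dim)
h = scorer(torch.randn(N, t, obs_dim), torch.randn(N, obs_dim))
loss = F.cross_entropy(h, torch.arange(N))     # Eq. 10.7 with K = N in-batch examples
loss.backward()                                # gradients for theta and W alike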

10.1.3 “Local” NCE

There is another, subtly different (from InfoNCE) way of generalizing NCE [43]. In short, although (as before) only one out of $K$ samples will be positive, our generative model will now be ignorant of this fact (cf. Fig. 10.1A, the graphical model for InfoNCE, with Fig. 10.1B). It will instead (incorrectly) treat the "examples" as independent of one another, and furthermore assume (incorrectly) that positive and negative examples are equally likely. We can still compute the posterior distribution over categorical random variables (one-hot vectors) under this model by aggregating together the relevant $K$ samples, even though the model doesn't know that they form a group:

\begin{split}
\hat{p}(\hat{X}^i=1,\hat{X}^{j\neq i}=0 \mid \bm{\hat{y}}^1,\ldots,\bm{\hat{y}}^K;\bm{\theta})
&= \hat{p}(\hat{X}^i=1 \mid \bm{\hat{y}}^i;\bm{\theta})\prod_{j\neq i}^K \hat{p}(\hat{X}^j=0 \mid \bm{\hat{y}}^j;\bm{\theta})\\
&= \hat{p}(\hat{X}^i=1 \mid \bm{\hat{y}}^i;\bm{\theta})\prod_{j\neq i}^K\big(1 - \hat{p}(\hat{X}^j=1 \mid \bm{\hat{y}}^j;\bm{\theta})\big)\\
&= \sigma\{h(\bm{\hat{y}}^i,\bm{\theta})\}\prod_{j\neq i}^K\big(1 - \sigma\{h(\bm{\hat{y}}^j,\bm{\theta})\}\big)
\end{split}   (10.12)

The loss under the data distribution is then

\begin{split}
\mathcal{L} &= \Big\langle -\log\Big(\prod_{k=1}^K \hat{p}(\hat{X}^k=1,\hat{X}^{j\neq k}=0 \mid \bm{Y}^1,\ldots,\bm{Y}^K;\bm{\theta})^{X^k}\Big)\Big\rangle_{\bm{X},\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= \Big\langle -\sum_{k=1}^K X^k\log\hat{p}(\hat{X}^k=1,\hat{X}^{j\neq k}=0 \mid \bm{Y}^1,\ldots,\bm{Y}^K;\bm{\theta})\Big\rangle_{\bm{X},\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= \Big\langle -\log\hat{p}(\hat{X}^+=1,\hat{X}^{j\neq+}=0 \mid \bm{Y}^1,\ldots,\bm{Y}^K;\bm{\theta})\Big\rangle_{\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= \Big\langle -\log\Big(\sigma\{h(\bm{Y}^+,\bm{\theta})\}\prod_{j\neq+}^K\big(1 - \sigma\{h(\bm{Y}^j,\bm{\theta})\}\big)\Big)\Big\rangle_{\bm{Y}^1,\ldots,\bm{Y}^K}\\
&= -\Big\langle\log\sigma\{h(\bm{Y}^+,\bm{\theta})\} + \sum_{j\neq+}^K\log\sigma\{-h(\bm{Y}^j,\bm{\theta})\}\Big\rangle_{\bm{Y}^1,\ldots,\bm{Y}^K}.
\end{split}   (10.13)
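In code, the only difference from the multi-example NCE loss of Eq. 10.7 is that each example is scored with its own sigmoid rather than jointly with a softmax. A minimal PyTorch sketch of Eq. 10.13, again assuming precomputed energy differences (names are illustrative):

import torch
import torch.nn.functional as F

def local_nce_loss(h_pos, h_neg):
    # h_pos: (N,)     energy differences for the positive example of each trial
    # h_neg: (N, K-1) energy differences for the negative examples of each trial
    # Eq. 10.13: -log sigma(h) for the positive, -log sigma(-h) for each negative.
    return (-F.logsigmoid(h_pos) - F.logsigmoid(-h_neg).sum(dim=1)).mean()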