Chapter 3 Undirected Generative Models

In Chapter 2, we considered modeling tasks in which we begin with some knowledge or intuition about the conditional probabilities of certain variables, given certain others. After assembling distributions for all the relevant variables, we can construct the joint distribution as their product. Now we consider modeling tasks in which we begin with knowledge or intuition only about how “stable” certain configurations of variables are; that is, with unnormalized probability distributions. It might seem that we could simply normalize all of these, turn them into conditionals, and so recover a directed graphical model. But some undirected graphical models cannot be converted into directed ones without distorting their independence structure. The classic example is the square: a cycle of four variables in which each variable is independent of its opposite corner, given its two neighbors. No directed acyclic graph encodes exactly this pair of conditional independencies. In practice, we usually perform the conversion in the reverse direction, turning directed models into undirected ones (by “moralizing” the graph: marrying the parents of each node and dropping edge directions), at the cost of obscuring some independencies.
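To make the idea of unnormalized probabilities concrete, here is a minimal sketch of the square just described, with illustrative ingredients not drawn from the text: four binary variables joined in a cycle, one pairwise potential per edge (the name psi and the values 2.0 and 1.0 are arbitrary choices), and the normalizer computed by brute force over all sixteen configurations.

```python
# A minimal sketch (illustrative, not the text's model): the "square,"
# an undirected cycle x1 - x2 - x3 - x4 - x1 over binary variables,
# with one pairwise potential per edge.
import itertools

def psi(a, b, agree=2.0, disagree=1.0):
    """Unnormalized pairwise potential: scores agreeing neighbors as more stable."""
    return agree if a == b else disagree

def score(x1, x2, x3, x4):
    """Unnormalized probability of one joint configuration (product of edge potentials)."""
    return psi(x1, x2) * psi(x2, x3) * psi(x3, x4) * psi(x4, x1)

# Normalizer (partition function), computed by brute force over all 2^4 configurations.
Z = sum(score(*x) for x in itertools.product([0, 1], repeat=4))

# The normalized joint distribution.
p = {x: score(*x) / Z for x in itertools.product([0, 1], repeat=4)}
assert abs(sum(p.values()) - 1.0) < 1e-12
```

Even in this toy case, the normalizer couples all four potentials at once, which is why normalization, trivial here, becomes the expensive step in larger undirected models.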

…. We saw in Chapter 2 that inference in directed graphical models is essentially an application, sometimes simple and sometimes elaborate, of Bayes’s rule. But if we abstract away from the precise meaning of the probability distributions in Eq. 2.1, we see that the fundamental operations are multiplication, marginalization, and normalization, and these carry over to the undirected setting. This will be relevant for our investigation into probabilistic computation in the brain.
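As a concrete illustration of these three operations, the following sketch manipulates discrete factors represented as arrays, one axis per variable; the factor names (phi_ab, phi_bc) and their numerical entries are invented for the example.

```python
# A minimal sketch of the three core operations on discrete factors (tables).
# Each factor is a NumPy array with one axis per variable; values are illustrative.
import numpy as np

# phi_ab(a, b) and phi_bc(b, c), both over binary variables, sharing variable b.
phi_ab = np.array([[4.0, 1.0],
                   [1.0, 4.0]])
phi_bc = np.array([[3.0, 1.0],
                   [1.0, 3.0]])

# Multiplication: combine the factors on their shared variable b.
# Broadcasting yields a table with axes (a, b, c).
joint_unnorm = phi_ab[:, :, None] * phi_bc[None, :, :]

# Marginalization: sum out b to get an unnormalized table over (a, c).
phi_ac = joint_unnorm.sum(axis=1)

# Normalization: divide by the total mass to obtain p(a, c).
p_ac = phi_ac / phi_ac.sum()
print(p_ac)
```

Chaining these three operations (multiply, marginalize, normalize) is all that exact inference in small discrete models requires, whether the factors come from conditional distributions or from unnormalized potentials.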