Notation
This author firmly holds the (seemingly unpopular) view that good notation makes mathematical texts much easier to understand. More precisely, bad notation is much easier to parse—indeed, unremarkable—when one has already mastered the concepts; it can also mask deep underlying conceptual issues. I have attempted, although not everywhere with success, to use good notation in what follows.
symbol | use
$X$, $Y$, $Z$ | scalar random variables
$x$, $y$, $z$ | scalar instantiations
$\bm{X}$, $\bm{Y}$, $\bm{Z}$ | vector random variables
$\bm{x}$, $\bm{y}$, $\bm{z}$ | vector instantiations
$\mathbf{A}$, $\mathbf{B}$, etc. | matrices
$\bm{\theta}$, $\bm{\phi}$ | (non-random) parameters
$\bm{\pi}$ | vector of categorical probabilities
$\bm{\mu}$ | mean (vector)
$\mathbf{\Sigma}$ | covariance matrix
Basic symbols.
Basic notational conventions are for the most part standard. This book uses capital letters for random variables, lowercase for their instantiations, boldface italic font for vectors, and italic for scalar variables. The (generally Latin) letters for matrices are capitalized and bolded, but (unless random) in Roman font, and not necessarily from the front of the alphabet.
The set of all standard parameters (means, variances, and the like) of a distribution is generally denoted as a single vector, either $\bm{\theta}$ or $\bm{\phi}$ (or, in a pinch, some nearby Greek letter). But note well that in the context of Bayesian statistics, their status as random variables is marked in the notation: $\bm{\Theta}$, $\bm{\Phi}$. The Greek letters $\bm{\pi}$, $\bm{\mu}$, and $\mathbf{\Sigma}$ are generally reserved for particular parameters: the vector of categorical probabilities, the mean (vector), and the covariance matrix, respectively. Note that we do not use $\pi$ for the transcendental constant; we use $\tau$ [11].
Arguments and variables
In this textbook, I distinguish notationally between the arguments of functions (on the one hand) and variables, at which a function might be evaluated (on the other). Why?
An ambiguity in argument binding.
In standard notation, a function might be defined with an expression like
$$f(x) := x^2.$$
Although this usually causes no problems, note that $x$ does not indicate any particular value; or, to put it another way, the expression is completed by an implicit (omitted) $\forall x$. On the occasions that we do not intend universal quantification, then, problems can arise. For example, suppose we want to say that the unary function $f$ is identical to the binary function $g$ when its second argument is set to the value $y$ (or, alternatively, that such a value exists: $\exists y$). We could write
$$f(x) = g(x, y), \tag{1.1}$$
but the fact that we are (mentally) to insert $\forall x$ but not $\forall y$ is not evident from the equation, but only from the surrounding verbal context.
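For illustration, the candidate readings can be spelled out with explicit quantifiers:
$$\forall x\,\forall y\colon\ f(x) = g(x, y), \qquad \forall x\colon\ f(x) = g(x, y), \qquad \exists y\ \forall x\colon\ f(x) = g(x, y).$$
Only the verbal context tells us that the second reading (with $y$ bound elsewhere) or the third, rather than the first, is the one intended.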
There are several standard alternatives, but none is wholly satisfactory. We could include all quantifiers whenever there is ambiguity—but ambiguity is often in the eye of the beholder, and it is dangerous for a textbook to assume that an expression is perfectly transparent. We could simply include all quantifiers, but equations with many arguments would be littered with $\forall$ statements. Or again, we could use the mechanism of raised dots,
$$f(\cdot) = g(\cdot, y),$$
although $y$ still violates the standard convention in being unbound to a universal quantifier, and this has to be extracted from the verbal context. But more fatally, this mechanism doesn't generalize well to functions of more variables:
$$f(\cdot, \cdot) = g(\cdot, y, \cdot).$$
Which dots on the left correspond to which on the right?
Subscripts to the rescue?
Now, in a statistics textbook, the probability-mass function associated with a discrete random variable $X$ is usually written $p_X$ or (to emphasize that it is a function) $p_X(\cdot)$, and the probability of a particular observation correspondingly as $p_X(x)$. The subscript distinguishes this mass function from, say, one associated with the random variable $Y$, namely $p_Y$. Conditional distributions, in turn, are written $p_{X|Y}$, and the value of a conditional distribution $p_{X|Y}(x|y)$. This might seem exactly the mechanism we seek to identify the (universally quantified) arguments of functions. For example, consider this instance of Bayes' rule:
$$p_{X|Y}(\cdot|y) = \frac{p_{Y|X}(y|\cdot)\, p_X(\cdot)}{p_Y(y)}. \tag{1.2}$$
The convention for understanding it is that omitted arguments ($\cdot$) are universally quantified, whereas included variables ($y$) have been bound to something in the enclosing context.
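Read according to that convention, Eq. 1.2 abbreviates
$$\forall x\colon\quad p_{X|Y}(x|y) = \frac{p_{Y|X}(y|x)\, p_X(x)}{p_Y(y)},$$
with $y$ fixed by the enclosing context.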
But this proposal, too, has problems. First of all, although the subscripts make it possible to infer which omitted arguments on the left correspond to which on the right, the dots themselves are just noise; for the reader who is not convinced by Eq. 1.2, I suggest Eq. 1.3, a partially evaluated function that we will encounter in Chapter 2. Second, and fatally, raised dots can't be used for variables occurring outside of the list of function arguments. For example, how are we to write Eq. 1.1?
Gray arguments.
We can get at the fundamental issue that we are grappling with here by distinguishing function arguments from variables. This is most intuitive in terms of a programming language. For example, in the following snippet of (Python) code,
def quadratic(x):
    # `c` is not an argument; it must be bound somewhere in the enclosing scope.
    return (x - c)**2
x is an argument to the quadratic function, whereas c is a variable that is (presumably) bound somewhere in the enclosing scope. Critically, x is an argument both in the function declaration, def quadratic(x):, and in the function body, return (x - c)**2. A function can also be defined as a partially evaluated instance of another function:
def shifted_quadratic(x, c):
    return (x - c)**2

def centered_quadratic(x):
    # A partial evaluation of shifted_quadratic, with c fixed at 0.
    return shifted_quadratic(x, 0)
Both x and c are arguments of shifted_quadratic, but centered_quadratic has only a single argument, x. It is analogous to the partially evaluated function exhibited in Eq. 1.3, whose only arguments are the ones left unevaluated.
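The standard library offers the same construction directly; the following minimal sketch uses functools.partial to build centered_quadratic by fixing c, rather than by writing a wrapper by hand:

from functools import partial

def shifted_quadratic(x, c):
    return (x - c)**2

# Fix the second argument at 0; the result is a function of x alone.
centered_quadratic = partial(shifted_quadratic, c=0)

print(shifted_quadratic(3.0, 1.0))   # 4.0
print(centered_quadratic(3.0))       # 9.0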
With some reservations, I have introduced a new notational convention in this book to mark this distinction between arguments and variables, employing a gray font color for the former. Eq. 1.3, for example, will be written with its arguments in gray; the fact that the function is partially evaluated is still indicated, but the value at which it is evaluated appears in the standard (black) font.
This notational convention neatly solves the problems just discussed. That is, it makes clear which variables are universally quantified—namely, the arguments, in gray—without resorting to explicit quantification, verbal context, or subscripts and dots. Dispensing with subscripts and dots is particularly appealing, and not only because the result is easier to read and generalizes better (recall Eq. 1.4), although these are its chief merits. It also provides an alternative mechanism for disambiguating probability-mass and -density functions from each other: namely, by their (gray) arguments rather than by subscripts. Indeed, this is the standard device employed in the machine-learning literature—but without the distinction between arguments and variables that solves our main problem.
And then, finally, we will see below that this distinction is exceedingly useful for another purpose: distinguishing partial and total derivatives.
Probabilistic functions and functionals
Symbols for probability mass and density.
symbol | use
 | the data mass/density function
 | the model mass/density functions
This text indiscriminately uses the same letter, the usual (for mass functions) $p$, for both probability-mass and probability-density functions. Further semantic content is, however, communicated by diacritics. In particular, one diacritic is reserved for "the data distribution," i.e., the true source in the world of our samples, as opposed to a model. Often in the literature, but not in this book, the data distribution is taken to be a discrete set of points corresponding to a particular sample, that is, a collection of delta functions. Here, the data distribution is interpreted to be a full-fledged distribution, known not in form but only through the samples that we have observed from it.
For model distributions we generally employ a second diacritic, although we shall also have occasion to use a third for certain model distributions.
Now, it is a fact from elementary probability theory [XXX] that a random variable carries with it a probability distribution. Conversely, it makes no sense to talk about two different probability distributions over the same random variable—although texts on machine learning routinely do, usually in the context of relative or cross entropy [GoodfellowXXX]. We will indeed often be interested in (e.g.) the relative entropy (KL divergence) of two distributions, but this text takes pains to note that these are distributions over different random variables. In general, the text marks random variables, their corresponding distributions, and the arguments of those distributions with the same diacritics.
It may at first blush seem surprising, then, to see the relative entropy (KL divergence) written with the same variable appearing in both mass (or density) functions. But that variable is not an argument of these distributions; it is the point at which they are being evaluated. Notice that even though the arguments do not appear in such an expression, the two distributions are still distinguishable—by their diacritics.
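To make this concrete, here is a minimal Python sketch of a relative entropy estimated by Monte Carlo. The two Gaussians, the sample size, and the names data_dist and model_dist are illustrative stand-ins rather than the book's notation; the point is only that both (log-)densities are evaluated at the same sample points.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-ins: a "data" distribution and a model distribution.
data_dist = stats.norm(loc=0.0, scale=1.0)
model_dist = stats.norm(loc=0.5, scale=1.5)

# D(data || model) = E_data[log data(Y) - log model(Y)], estimated by a
# sample average over draws from the data distribution.
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)
kl_estimate = np.mean(data_dist.logpdf(samples) - model_dist.logpdf(samples))

# Closed form for two Gaussians, for comparison.
m0, s0, m1, s1 = 0.0, 1.0, 0.5, 1.5
kl_exact = np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

print(kl_estimate, kl_exact)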
Still, the conventions are not bulletproof. Consider, for example, density functions for two different data distributions. These are distinguished not by any diacritics, but by their arguments. According to our convention, these arguments are generally listed (in gray), so it is usually possible to tell the two distributions apart. And even when considering evaluated density functions, we can typically disambiguate by our choice of letter for the observations. Occasionally, however, we will need to consider evaluating such functions at some other point. Then we will be thrown back on one of the other, standard conventions.
symbol | use
$\mathbb{E}[X]$ | expectation of $X$
$\langle X \rangle$ | sample average of $X$
$\mathrm{Var}[X]$ | variance of $X$
$\mathrm{Cov}[\bm{X}]$ | covariance matrix of $\bm{X}$
$\mathrm{Cov}[\bm{X}, \bm{Y}]$ | covariance between $\bm{X}$ and $\bm{Y}$
Expectation, covariance, and sample averages.
The symbol $\mathrm{Cov}[\cdot]$ is used with a single argument to denote the operator that turns a random variable into a covariance matrix, but with two arguments, $\mathrm{Cov}[\cdot, \cdot]$, to indicate the (cross) covariance between two random variables. Perhaps more idiosyncratically, angle brackets, $\langle \cdot \rangle$, are usually reserved for sample averages, as opposed to expectation values, although occasionally this stricture is relaxed.
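A minimal numerical sketch of these conventions (the variable names and the synthetic data below are illustrative only): a single-argument cov yields a covariance matrix, the two-argument form a cross-covariance, and sample_average plays the role of the angle brackets.

import numpy as np

rng = np.random.default_rng(3)

# Draws of two 2-D random vectors, stacked as rows; illustrative data only.
X = rng.standard_normal(size=(10_000, 2))
Y = X @ np.array([[1.0, 0.5], [0.0, 2.0]]) + rng.standard_normal(size=(10_000, 2))

def sample_average(Z):
    # Angle-bracket-style sample average: the empirical counterpart of an expectation.
    return Z.mean(axis=0)

def cov(X, Y=None):
    # One argument: the (sample) covariance matrix of X.
    # Two arguments: the (sample) cross-covariance between X and Y.
    if Y is None:
        Y = X
    Xc = X - sample_average(X)
    Yc = Y - sample_average(Y)
    return Xc.T @ Yc / (len(X) - 1)

print(cov(X))     # roughly the 2x2 identity
print(cov(X, Y))  # 2x2 cross-covariance between X and Y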
The distribution with respect to which an expectation is taken will only occasionally be inferable from its argument, so we will typically resort to subscripts (the previous discussion notwithstanding). For example, subscripting an expectation with a conditional distribution tells us that the expectation is taken under that conditional distribution. Of course, only the variable being averaged is integrated out in such expressions; the conditioning variable is free to take on any value, which need not match the argument symbol.
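As a purely illustrative sketch, suppose $Y \mid X = x$ is normal with mean $2x$ (this model, and the names below, are assumptions for the example, not the book's notation). The conditional expectation can then be estimated by a sample average over draws from that conditional distribution, evaluated at whatever conditioning value we please:

import numpy as np

rng = np.random.default_rng(1)

# Illustrative model: Y | X = x  ~  Normal(2*x, 1).
def sample_y_given_x(x, size, rng):
    return rng.normal(loc=2.0 * x, scale=1.0, size=size)

def cond_expectation_of_y(x, num_samples=100_000, rng=rng):
    # Monte Carlo estimate of E[Y | X = x]: only Y is averaged out;
    # the conditioning value x is free to be anything.
    return sample_y_given_x(x, num_samples, rng).mean()

print(cond_expectation_of_y(0.5))   # ~ 1.0
print(cond_expectation_of_y(-3.0))  # ~ -6.0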
Let us put together some of our conventions with an iterated expectation, that is, an outer expectation of an inner, conditional expectation, written out as integrals against the corresponding densities. There are a few things to notice. First of all, the variables of integration do not appear in gray. This is because they are dummy variables, not arguments; or, to put it a different way, they are bound to the integral operators, not (implicitly) to universal quantifiers. (Accordingly, they do not appear outside of the integrals, e.g. on the other side of the equation.) Second, bear in mind that the symbol on the right side of the conditioning bar need not match the subscript of the outer expectation. The subscripts to the conditional expectation tell us with respect to which distribution it is taken, but we are not forbidden from filling the vacant argument with a different random variable and taking an expectation.
Third, the “vector differentials” are to be interpreted as products of scalar differentials, $\mathrm{d}\bm{x} = \mathrm{d}x_1\, \mathrm{d}x_2 \cdots \mathrm{d}x_N$, and the integral as an iterated integral, one (scalar) integral per element of $\bm{x}$. The subscript to the integral tells us that it is to be taken over the entire support of the corresponding random variable.
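As a concrete (again, illustrative) example, the sketch below computes an expectation over a two-dimensional random vector with independent standard-normal components as an iterated integral, and checks it against a sample average; the integrand and the truncation of the infinite support at $\pm 8$ are arbitrary choices made for the example.

import numpy as np
from scipy import stats
from scipy.integrate import dblquad

# Joint density of X = (X1, X2) with independent standard-normal components.
def density(x1, x2):
    return stats.norm.pdf(x1) * stats.norm.pdf(x2)

def integrand(x2, x1):
    # dblquad passes the inner variable of integration first.
    return (x1 + x2**2) * density(x1, x2)

# E[X1 + X2^2] = 0 + 1 = 1; truncate the support at +/- 8 standard deviations.
expectation, _ = dblquad(integrand, -8, 8, -8, 8)

# The same expectation approximated by a sample average.
rng = np.random.default_rng(2)
samples = rng.standard_normal(size=(100_000, 2))
sample_average = np.mean(samples[:, 0] + samples[:, 1]**2)

print(expectation, sample_average)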
Derivatives
[[The use of transposes in vector (and matrix) derivatives. The total derivative vs. partial derivatives. The “gradient” and the Hessian.]]