I often return to Jaynes’s wonderful 1957 paper “Information theory and statistical mechanics”. Starting from the entropy as defined by Shannon, Jaynes explains lucidly how to derive the correct probability distribution to describe a statistical phenomenon. The correct distribution derives from the pieces of information that we might have about the phenomenon. Using those pieces and obeying the “principle of maximum ignorance”, one can compute the proper distribution.
The form of the definition of Shannon entropy is interesting in itself. Shannon defined the entropy as the expected value of the information contained in a message. The information of a message with probability $p$ was quantified as $I(p) = -\log_2 p$, because small changes in the messages make little impact on the overall information, and information in a message is additive. For example, a message consisting of a single fair coin toss has one bit of information, $-\log_2(1/2) = 1$, while two coin tosses have two bits, $-\log_2(1/4) = 2$. Thus the average information, now called the entropy, is computed for a probability distribution $\{p_i\}$ on an $n$-state system as

$$S = -\sum_{i=1}^{n} p_i \log_2 p_i.$$

The two coin flipping situation has (if the coins are fair) an equal probability $p_i = 1/4$ for each of the four outcomes, and thus $S = -\sum_{i=1}^{4} \frac{1}{4} \log_2 \frac{1}{4} = 2$ bits. The maximum entropy occurs, in fact, when the probabilities are even, and we take forward this idea of entropy as maximal evenness. As an aside, the base of the logarithm is unimportant; base two is chosen for convenience in bit systems, while in statistical mechanics the natural logarithm is frequently used.
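These numbers are easy to check numerically. Here is a minimal sketch in Python (the `entropy` helper is our own, not from Shannon or Jaynes):

```python
import math

def entropy(probs, base=2):
    """Shannon entropy: the expected information, -sum p_i * log(p_i)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # one fair coin toss -> 1.0 bit
print(entropy([0.25] * 4))    # two fair coin tosses -> 2.0 bits
print(entropy([0.9, 0.1]))    # a biased coin carries less: ~0.469 bits
```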
Let $x$ be a discrete variable drawn from a length-$n$ set $\{x_1, x_2, \ldots, x_n\}$. We don’t know the probability distribution $p_i$ on $x$. But what if we assume we do know the mean of some function $f(x)$? This mean is defined by the rules of discrete probability to be

$$\langle f(x) \rangle = \sum_{i=1}^{n} p_i f(x_i).$$

Can we now infer the mean of some other function $g(x)$? At first, Jaynes says, that inference seems impossible. Even using the normalization condition on the probability distribution, $\sum_{i=1}^{n} p_i = 1$, we are still $n - 2$ equations short of determining all $n$ unknown probabilities.
Laplace answered such questions with his ‘principle of insufficient reason’: that no assumptions should be made about the probabilities beyond what the given information supports. Using his principle, for example, if no information is given about the probability distribution, we must assume that all $x_i$ are equally likely; thus $x$ is a uniform random variable and its probability distribution is flat, $p_i = 1/n$.
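Laplace’s flat distribution is also what the entropy singles out: among distributions on $n$ states, the uniform one has maximal entropy $\ln n$. A quick numerical spot check (a sketch of ours, not from the paper):

```python
import math
import random

def entropy(probs):
    """Shannon entropy in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

n = 5
uniform_entropy = entropy([1.0 / n] * n)   # ln(n), the maximum

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    q = [wi / sum(w) for wi in w]          # a random normalized distribution
    assert entropy(q) <= uniform_entropy + 1e-12

print(round(uniform_entropy, 4))           # ln(5), about 1.6094
```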
When we know the mean $\langle f(x) \rangle$ and the definition of a probability distribution, we can use Lagrange multipliers to enforce two constraints:

$$\sum_{i=1}^{n} p_i = 1,$$ the constraint that the total probability is unity, and

$$\sum_{i=1}^{n} p_i f(x_i) = \langle f(x) \rangle,$$ the constraint of the definition of the mean.
Using $S = -\sum_i p_i \ln p_i$ we write the total entropy together with the Lagrange multipliers $\lambda_0$ and $\lambda_1$ as

$$\mathcal{L} = -\sum_i p_i \ln p_i - \lambda_0 \left( \sum_i p_i - 1 \right) - \lambda_1 \left( \sum_i p_i f(x_i) - \langle f(x) \rangle \right),$$

where then to be maximal we must have $\partial \mathcal{L} / \partial p_j = 0$ for every $j$. Computing the derivative:

$$\frac{\partial \mathcal{L}}{\partial p_j} = -\left[ \ln p_j + 1 + \lambda_0 + \lambda_1 f(x_j) \right],$$

where the chosen function $f$ does not depend on the ultimately maximally entropic distribution, so that $\partial f(x_i) / \partial p_j = 0$ and thus $f$ passes through the derivative unchanged. The term in the brackets above must be zero for each $j$. Solving for $p_j$ leads to

$$p_j = e^{-1 - \lambda_0} \, e^{-\lambda_1 f(x_j)},$$

so that the maximally entropic distribution is an exponential function. This should seem familiar to those who have seen the Maxwell-Boltzmann distribution for the canonical ensemble in statistical physics. The normalization $e^{-1 - \lambda_0} = 1 / \sum_j e^{-\lambda_1 f(x_j)}$ can be recovered from the unit-probability constraint.
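The exponential result can be checked numerically. In the sketch below (our own construction, not Jaynes’s), we pick states $x \in \{0, 1, 2, 3\}$ with $f(x) = x$, impose a mean, and solve for the multiplier $\lambda_1$ by bisection; the resulting distribution satisfies both constraints and decays geometrically, i.e. exponentially in $f(x)$:

```python
import math

def maxent_dist(xs, f, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution p_j ~ exp(-lam * f(x_j)) subject to a
    fixed mean of f, solving for the multiplier lam by bisection."""
    def mean_for(lam):
        w = [math.exp(-lam * f(x)) for x in xs]
        z = sum(w)
        return sum(wi * f(x) for wi, x in zip(w, xs)) / z
    # mean_for is decreasing in lam, so bisect until the mean matches.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * f(x)) for x in xs]
    z = sum(w)  # the normalization, 1 / exp(-1 - lam0) in the text's notation
    return [wi / z for wi in w], lam

xs = [0, 1, 2, 3]
p, lam = maxent_dist(xs, lambda x: x, target_mean=1.2)
print([round(pi, 4) for pi in p], round(lam, 4))
```

Note that successive ratios $p_{j+1}/p_j = e^{-\lambda_1}$ are constant, which is exactly the exponential form derived above.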
What if we add another constraint? For example, if we constrain the standard deviation, thereby constraining the second moment, we end up with the omnipresent Gaussian distribution. It is really cool that the “max-ent” method for inferring distributions agrees with common assumptions made with intuition but less mathematical rigor. This note is meant to illustrate this powerful technique, as well as to show how we can actually define maximum ignorance when making inferences.
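As a closing sketch of that second-moment claim (our own quick computation, in the same spirit as the derivation above): adding the constraint $\sum_i p_i x_i^2 = \langle x^2 \rangle$ with a multiplier $\lambda_2$ changes the bracketed term to

$$\ln p_j + 1 + \lambda_0 + \lambda_1 x_j + \lambda_2 x_j^2 = 0,$$

so that

$$p_j = e^{-1 - \lambda_0} \, e^{-\lambda_1 x_j - \lambda_2 x_j^2},$$

which, after completing the square in the exponent, is precisely a (discretized) Gaussian in $x_j$.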