1 Deductive and inductive posets

1.1 Definition

If you have a collection \(P\) of logical propositions, you can order them by implication: \(a\) precedes \(b\) if and only if \(a\) implies \(b\). This makes \(P\) into a poset. Since the ordering arose from deductive implication, we'll call this a deductive poset.

If you have a deductive poset \(P\), you can create a related poset \(P^*\) as follows: the underlying set is the same, and for any two propositions \(a\) and \(b\) in \(P\), \(a\) precedes the conjunction \(ab\) in \(P^*\). We'll call this an inductive poset.
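As a concrete sketch (a toy model of my own, not part of the construction above): take each proposition to be the set of possible worlds in which it holds, so that implication is the subset relation and the conjunction \(ab\) is intersection. Then both orders can be computed directly:

```python
# Toy model (an illustrative assumption, not from the text): a
# proposition is the frozenset of possible worlds in which it holds,
# implication is the subset relation, and the conjunction ab is
# set intersection.
a = frozenset({1, 2})    # true in worlds 1 and 2
b = frozenset({2, 3})    # true in worlds 2 and 3
ab = a & b               # the conjunction ab: true only in world 2

def precedes_deductive(x, y):
    """x precedes y in the deductive poset P iff x implies y."""
    return x <= y

def precedes_inductive(x, y):
    """x precedes y in P* iff y is a conjunction xb, i.e. y implies x."""
    return y <= x

print(precedes_deductive(ab, a))   # True: ab implies a
print(precedes_inductive(a, ab))   # True: a precedes ab in P*
```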

1.2 A canonical map from deductive posets to inductive posets

Each poset corresponds to a poset-category, that is, a category with at most one arrow between any two objects. Considered as categories, inductive and deductive posets are related as follows: there is a map \(\mathscr{F}\) which sends each arrow \(a\rightarrow b\) in \(P\) to the arrow \(a\rightarrow ab\) in \(P^*\). In fact, since \(a\) implies \(b\) if and only if \(a = ab\), \(\mathscr{F}\) sends each arrow in \(P\) to an identity arrow in \(P^*\) (specifically, it sends the arrow \(a\rightarrow b\) to the identity arrow \(a\rightarrow a\)).
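The fact used here, that \(a\) implies \(b\) exactly when \(a = ab\), can be verified exhaustively in a toy model where propositions are sets of possible worlds and conjunction is intersection (an illustrative assumption, not part of the argument):

```python
from itertools import chain, combinations

# Toy model (an illustrative assumption): propositions are sets of
# possible worlds, implication is inclusion, conjunction is intersection.
worlds = (1, 2, 3)
props = [frozenset(s) for s in chain.from_iterable(
    combinations(worlds, r) for r in range(len(worlds) + 1))]

# "a implies b" holds exactly when a equals the conjunction ab:
assert all((a <= b) == (a == (a & b)) for a in props for b in props)
print("a implies b iff a = ab, for all", len(props) ** 2, "pairs")
```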

2 Assigning plausibilities to inductive posets

Inductive posets encode the relative (qualitative) plausibilities of their propositions: there exists an arrow \(x\rightarrow y\) only if \(x\) is at least as plausible as \(y\).

2.1 Consistent reasoning as a commutative diagram

Inductive categories enable the following neat trick: we can interpret the objects of \(P^*\) as states of given information and interpret each arrow \(a\rightarrow ab\) in \(P^*\) as an inductive inference: the arrow \(a\rightarrow ab\) represents an inferential leap from the state of knowledge where only \(a\) is given to the state of knowledge where both \(a\) and \(b\) are given. In this way, it represents the process of inferring \(b\) when given \(a\), and we label the arrow with \((b|a)\).

This trick has several important features that suggest its usefulness, namely

  • Composition of arrows corresponds to compound inference.
  • In the special case of deductive inference, the inferential arrow is an identity; the source and destination states of knowledge are the same.
  • One aspect of the consistency requirement of Jaynes [1] takes the form of a commutative square: \((x\rightarrow ax \rightarrow abx) = (x\rightarrow bx \rightarrow abx)\) is the categorified version of \((AB|X)=(A|X)\cdot(B|AX)=(B|X)\cdot(A|BX)\).
  • We can make plausibility assignments by enriching the inductive category \(P^*\) over some monoidal category, e.g. the set of real numbers (considered as a category) with its usual multiplication. When we do, the identity arrows of \(P^*\), corresponding to deductive inferences, are assigned a value of certainty automatically.
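To make the commutative-square condition concrete, here is a quick numerical check, using an arbitrary joint distribution of my own choosing for the plausibility assignments:

```python
# Check the consistency square numerically with an arbitrary joint
# distribution over A and B (given X). The numbers are illustrative.
p = {(1, 1): 0.2, (1, 0): 0.3, (0, 1): 0.4, (0, 0): 0.1}  # P(A=a, B=b | X)

pA = p[(1, 1)] + p[(1, 0)]        # P(A|X)
pB = p[(1, 1)] + p[(0, 1)]        # P(B|X)
pAB = p[(1, 1)]                   # P(AB|X)
pB_given_A = pAB / pA             # P(B|AX)
pA_given_B = pAB / pB             # P(A|BX)

# Both paths around the square compose to the same plausibility:
assert abs(pA * pB_given_A - pAB) < 1e-9
assert abs(pB * pA_given_B - pAB) < 1e-9
print(pAB, pA * pB_given_A, pB * pA_given_B)
```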

2.2 "Multiplicity" is reciprocal probability

The natural numbers have a comparatively concrete origin: they are the result of decategorifying the category of finite sets [2], or the coequalizer of the arrows from a one-object category to a two-object category with a single nonidentity arrow. Extensions of the set of natural numbers, such as the set of integers or rational numbers or real numbers, strike me as being somewhat more abstract (however, see the Eudoxus construction of the real numbers).

Jaynes points out that our existing choice of scale for probabilities (i.e., the scale from 0 for impossibility to 1 for certainty) has a degree of freedom: any monotonic function of probability encodes the same information that probability does. Though the resulting laws for compound probability and so on change in form when probabilities are changed, they do not change in content.

With this in mind, it seems natural and permissible to use not probability but reciprocal probability instead. This scale, which we might call multiplicity, ranges from 1 (certainty) to positive infinity (impossibility); higher numbers are ascribed to less-plausible events.

In this way, the "probability" associated with choosing one out of \(n\) indistinguishable choices becomes identified with \(n\).
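A small sketch of the two scales (the function name `multiplicity` is mine, for illustration):

```python
# Multiplicity as reciprocal probability: choosing one of n
# indistinguishable options has probability 1/n, hence multiplicity n.
def multiplicity(p):
    """Convert a probability to the multiplicity scale (1 = certain)."""
    return float('inf') if p == 0 else 1.0 / p

n = 6                      # e.g. one face of a fair die
p = 1.0 / n
print(multiplicity(p))     # approximately 6.0
print(multiplicity(1.0))   # 1.0  (certainty)
print(multiplicity(0.0))   # inf  (impossibility)
```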

2.3 Laws for multiplicity

Jaynes derives laws of probability; either his method or his results can be used to obtain laws for multiplicities.

  • Product rule: the product rule is unchanged: \(\xi(AB|X)=\xi(A|X)\cdot \xi(B|AX) = \xi(B|X)\cdot \xi(A|BX)\).
  • Certainty: states of absolute certainty are assigned a multiplicity of 1; states of absolute impossibility, a multiplicity of positive infinity.
  • Entropy: in terms of probability, entropy has the form \(S=-\sum_i p_i \ln{p_i} = \sum_i p_i (-\ln{p_i}) = \sum_i p_i \ln{(1/p_i)} \). Hence, in terms of multiplicity, entropy has the form \(S = \sum_i \frac{\ln{\xi_i}}{\xi_i} \).

Another interesting quantity is \(\exp{S}\), which behaves multiplicatively rather than additively: \(\exp{S} = \prod_i \exp{\frac{\ln{\xi_i}}{\xi_i}} = \prod_i \left(\exp{\ln{\xi_i}}\right)^{1/\xi_i} = \prod_i \xi_i^{1/\xi_i} \)
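These laws are easy to check numerically; the probabilities below are illustrative, with \(\xi_i = 1/p_i\):

```python
import math

# Product rule keeps its form on the multiplicity scale:
# xi(AB|X) = xi(A|X) * xi(B|AX), where xi = 1/p.
pA, pB_given_A = 0.5, 0.4                  # P(A|X), P(B|AX): illustrative
xiA, xiB_given_A = 1 / pA, 1 / pB_given_A
xiAB = 1 / (pA * pB_given_A)
assert abs(xiAB - xiA * xiB_given_A) < 1e-9

# Entropy agrees on both scales, and exp(S) = prod_i xi_i**(1/xi_i).
p = [0.5, 0.25, 0.25]
xi = [1 / pi for pi in p]
S_prob = -sum(pi * math.log(pi) for pi in p)
S_mult = sum(math.log(x) / x for x in xi)
expS = math.prod(x ** (1 / x) for x in xi)

assert abs(S_prob - S_mult) < 1e-9
assert abs(expS - math.exp(S_mult)) < 1e-9
print(S_mult, expS)
```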

Footnotes:

[1] (IIIa) If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.

[2] As Baez explains.

Date: 2011-07-09 14:19:42 EDT

Author: Dylan Holmes
