#+title: A Category-Theoretic View of Inductive Reasoning
#+author: Dylan Holmes
#+email: ocsenave@gmail.com
##+description: An insight into plausible reasoning comes from experimenting with mathematical models.
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

# Mathematics and computer science are the refineries of ideas. By
# demanding unwavering precision and lucidness

I've discovered a nifty mathematical presentation of plausible
reasoning, which I've given the label *inductive posets* so that I can
refer to the idea later. Though the idea of inductive posets has a
number of shortcomings, it also shows some promise---there were a few
resounding /clicks/ of agreement between the model and my intuition,
and I got to see some exciting category-theoretic manifestations of
some of my vaguer ideas. In this article, I'll talk about what I found
particularly suggestive, and also what I found improvable.

First, when you have a /deductive/ logical system, you can use a
boolean lattice as a model. These boolean lattices capture ideas like
deductive implication, negation, and identical truth/falsity.

Suppose you have such a boolean lattice, \(L\), considered as a poset
category with products defined between each of its members [fn::I
haven't begun to think about big lattices, i.e. those with infinitely
many atomic propositions. As such, let's consider just the finite case
here.] and both an initial (\ldquo{}0\rdquo{}) and final
(\ldquo{}1\rdquo{}) element.
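As a concrete sketch of this setup, we can model a small finite boolean lattice in Python by encoding propositions as sets of "possible worlds" (a hypothetical encoding chosen for illustration, not part of the article): implication is the subset order, the categorical product is intersection (conjunction), the initial object \(0\) is the empty set, and the final object \(1\) is the full set.

```python
from itertools import combinations

# Hypothetical encoding: propositions are subsets of a small set of
# possible worlds; names like WORLDS and arrow() are illustrative only.
WORLDS = frozenset({"w1", "w2", "w3"})
L = [frozenset(c) for r in range(len(WORLDS) + 1)
     for c in combinations(sorted(WORLDS), r)]   # objects of L

def arrow(A, B):
    """There is an arrow A -> B in the poset L exactly when A implies B."""
    return A <= B                                # subset order = implication

def product(A, B):
    """The categorical product in a poset is the meet: the conjunction AB."""
    return A & B

ZERO = frozenset()   # initial object "0": identical falsity
ONE = WORLDS         # final object "1": identical truth

# Every object receives an arrow from 0 and sends one to 1:
assert all(arrow(ZERO, A) and arrow(A, ONE) for A in L)

# A x B really is the product: it maps to both factors, and any object
# mapping to both A and B maps through it.
for A in L:
    for B in L:
        P = product(A, B)
        assert arrow(P, A) and arrow(P, B)
        assert all(arrow(C, P) for C in L if arrow(C, A) and arrow(C, B))
```

Running the assertions confirms that this toy lattice has the structure the construction below relies on: products, an initial element, and a final element.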
Now, using $L$ as a starting point, you can construct a new category
$M$ as follows: the objects of $M$ are the same as the objects of $L$,
and there is exactly one arrow \(A\rightarrow A\times B\) in $M$ for
every pair of objects $A,B\in L$.

Whereas we used $L$ to model deductive reasoning in a certain logical
system, we will use this new lattice $M$ to model inductive reasoning
in the same system. To do so, we will assign certain meanings to the
features of $M$. Here is the key idea:

#+begin_quote
We'll interpret each arrow $A\rightarrow A\times B$ as the
plausibility of $B$ given $A$. To strengthen the analogy, we'll
sometimes borrow notation from probability theory, writing \((B|A)\)
for \(A\rightarrow A\times B\).
#+end_quote

This interpretation leads to some suggestive observations:

- Certainty is represented by 1 :: You may know that the proposition
     \(A\Rightarrow B\) is logically equivalent to \(A=AB\). (If you
     haven't encountered this interesting fact yet, you should confirm
     it!) In our deductive lattice $L$, this equivalence means that
     there is an arrow $A\rightarrow B$ just if \(A\cong A\times B\)
     in \(L\). Relatedly, in our inductive lattice \(M\), this
     equivalence means that whenever $A\Rightarrow B$ in $L$, the
     arrow \(A\rightarrow A\times B\) is actually the (unique) arrow
     \(A\rightarrow A\). In probability theory notation, we write this
     as \((B|A)=1_A\) (!) This is a neat category-theoretic
     declaration of the usual result that the plausibility of a
     certainly true proposition is 1.
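The construction of $M$, and the certainty observation above, can be sketched directly on the same toy subset encoding (again a hypothetical illustration: propositions as sets of possible worlds, product as intersection).

```python
from itertools import combinations

# Toy boolean lattice L: propositions as subsets of possible worlds
# (illustrative encoding, not prescribed by the article).
WORLDS = frozenset({"w1", "w2", "w3"})
L = [frozenset(c) for r in range(len(WORLDS) + 1)
     for c in combinations(sorted(WORLDS), r)]

# Objects of M are the objects of L; there is exactly one arrow
# A -> A x B (here A -> A ∩ B) for each pair A, B.  We record each
# arrow as a (source, target) pair.
M_arrows = {(A, A & B) for A in L for B in L}

# Every arrow of M runs from A to some conjunction A ∩ B, so its
# target always implies its source: induction only ever *adds*
# given information.
assert all(target <= source for source, target in M_arrows)

# Whenever A implies B, we have A ∩ B = A, so the arrow A -> A x B
# is the identity arrow A -> A: the statement (B|A) = 1_A.
assert all((A, A & B) == (A, A) for A in L for B in L if A <= B)
```

Note how the arrows of $M$ all point "downward" in the lattice, toward states carrying more given information; this is the structure the travel metaphor below builds on.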
- Deduction is included as a special case :: Because implications
     (arrows) in $L$ correspond to identity arrows in $M$, we have an
     inclusion functor \(\mathfrak{F}:L\rightarrow M\), which acts on
     arrows by sending \(A\rightarrow B\) to \(A\rightarrow A\times
     B\). This functor exhibits deductive reasoning as a special case
     of inductive reasoning.
- Bayes' Law is a commutative diagram :: In his book on probability
     theory, Jaynes derives a product rule for plausibilities based on
     his [[http://books.google.com/books?id=tTN4HuUNXjgC&lpg=PP1&dq=Jaynes%20probability%20theory&pg=PA19#v=onepage&q&f=fals][criterion for consistent reasoning]]. This product rule
     states that \((AB|X) = (A|X)\cdot (B|AX) =
     (B|X)\cdot(A|BX)\). If we now work backwards to see what this
     statement in probability theory means in our inductive lattice
     \(M\), we find that it's astonishingly simple---Jaynes' product
     rule is just a commutative square: \((X\rightarrow ABX) =
     (X\rightarrow AX \rightarrow ABX) = (X\rightarrow BX\rightarrow
     ABX)\).
- Inductive reasoning as uphill travel :: There is a certain analogy
     between the process of inductive reasoning and uphill travel: You
     begin in a particular state (your state of given
     information). From this starting point, you can choose to travel
     to other states. But travel is almost always uphill: to climb
     from a state of less information to a state of greater
     information incurs a cost in the form of low
     probability [fn::There are a number of reasons why I favor
     reciprocal probability---perhaps we could call it
     multiplicity?---and why I think reciprocal probability works
     better for category-theoretic approaches to probability theory.
     One of these is that, as you can see, reciprocal probabilities
     capture the idea of uphill costs.]. Treating your newfound state
     as your new starting point, you can climb further, reaching
     states of successively higher information, while accumulating all
     the uphill costs. This analogy works well in a number of ways: it
     correctly shows that the probability of an event utterly depends
     on your current state of given information (the difficulty of a
     journey depends utterly on your starting point). It depicts
     deductive reasoning as zero-cost travel (the step from a
     proposition to one of its implications is /certain/ [fn::This is
     a thoroughly significant pun.]---the travel is neither precarious
     nor uphill, and there is no cost.) With the inductive lattice
     model in this article, we gain a new perspective on this travel
     metaphor: we can visualize inductive reasoning as the /accretion
     of given information/, going from \(X\rightarrow AX\rightarrow
     ABX\), and getting permission to use our current hypotheses as
     contingent givens by paying the uphill toll.

# - The propositions are entirely syntactic; they lack internal
#   structure. This model has forgotten /why/ certain relations
#   hold. Possible repair is to
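The commuting square for Jaynes' product rule, and the accumulation of uphill tolls along a path \(X\rightarrow AX\rightarrow ABX\), can be checked numerically. The plausibility values below are made up for illustration; the only constraint is the consistency the square demands.

```python
# Hypothetical plausibility assignments, one per leg of the square.
p_A_given_X = 0.6    # (A|X):  toll for the leg X  -> AX
p_B_given_AX = 0.5   # (B|AX): toll for the leg AX -> ABX
p_B_given_X = 0.75   # (B|X):  toll for the leg X  -> BX
p_A_given_BX = 0.4   # (A|BX): toll for the leg BX -> ABX

# Composing along either side of the square accumulates the tolls:
path_via_A = p_A_given_X * p_B_given_AX   # X -> AX -> ABX
path_via_B = p_B_given_X * p_A_given_BX   # X -> BX -> ABX

# Jaynes' product rule (AB|X) = (A|X)(B|AX) = (B|X)(A|BX) is exactly
# the statement that the square commutes:
assert abs(path_via_A - path_via_B) < 1e-12

# In reciprocal form, the uphill costs multiply along a path:
cost_via_A = (1 / p_A_given_X) * (1 / p_B_given_AX)
assert abs(cost_via_A - 1 / path_via_A) < 1e-9
```

Here both paths yield \((AB|X) = 0.3\); an inconsistent assignment of plausibilities would make the assertion fail, which is the numeric face of Jaynes' consistency criterion.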