# HG changeset patch
# User Dylan Holmes
# Date 1320130526 18000
# Node ID 543b1dbf821d12b958dea8749e22bfd06eb71c78
# Parent 23db8b1f0ee7de02cc739a55c43e34c6f6579844
New article: Inductive lattices

diff -r 23db8b1f0ee7 -r 543b1dbf821d org/visualizing-reason.org

#+title: How to model Inductive Reasoning
#+author: Dylan Holmes
#+email: ocsenave@gmail.com
##+description: An insight into plausible reasoning comes from experimenting with mathematical models.
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

# Mathematics and computer science are the refineries of ideas. By
# demanding unwavering precision and lucidness

I've discovered a nifty mathematical presentation of plausible
reasoning, which I've given the label *inductive posets* so that I can
refer to the idea later. Though the idea of inductive posets has a
number of shortcomings, it also shows some promise---there were a few
resounding /clicks/ of agreement between the model and my intuition,
and I got to see some exciting category-theoretic manifestations of
some of my vaguer ideas. In this article, I'll talk about what I found
particularly suggestive, and also what I found improvable.

First, when you have a /deductive/ logical system, you can use a
boolean lattice as a model. These boolean lattices capture ideas like
deductive implication, negation, and identical truth and falsity.

Suppose you have such a boolean lattice \(L\), considered as a poset
category with products defined between each pair of its
members [fn::I haven't begun to think about big lattices, i.e. those
with infinitely many atomic propositions. As such, let's consider just
the finite case here.] and with both an initial
(\ldquo{}0\rdquo{}) and a final (\ldquo{}1\rdquo{}) element.
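Before going further, it may help to make the deductive side concrete. Here is a minimal Python sketch (my own illustration; the set of worlds and the helper names =powerset=, =implies=, and =meet= are assumptions, not anything from this article). It models a finite boolean lattice as the powerset of a set of possible worlds: the poset order is inclusion (deductive implication), the categorical product of two propositions is their meet (intersection), the initial object 0 is the empty set, and the final object 1 is the full set.

#+begin_src python
from itertools import chain, combinations

# Model the boolean lattice L as the powerset of a small set of
# "possible worlds".  (The worlds themselves are arbitrary labels.)
worlds = frozenset({"w1", "w2", "w3"})

def powerset(s):
    """All subsets of s -- the objects of the lattice L."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

L = powerset(worlds)

def implies(a, b):
    """There is an arrow a -> b in the poset category L iff a <= b."""
    return a <= b

def meet(a, b):
    """The categorical product A x B in a poset is the meet A AND B."""
    return a & b

bottom = frozenset()   # the initial object 0 (falsity)
top = worlds           # the final object 1 (truth)

# The equivalence used later in the article: A => B exactly when A = A x B.
assert all(implies(a, b) == (a == meet(a, b)) for a in L for b in L)

# 0 is initial (it implies everything) and 1 is final (everything implies it).
assert all(implies(bottom, a) and implies(a, top) for a in L)
#+end_src

With three atomic worlds the lattice has \(2^3 = 8\) elements, and the assertions confirm that the meet really is the categorical product and that 0 and 1 play their initial/final roles.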
Now, using $L$ as a starting point, you can construct a new
category $M$ as follows: the objects of $M$ are the same
as the objects of $L$, and there is exactly one arrow
\(A\rightarrow A\times B\) in $M$ for every pair of objects
$A,B\in L$.

Whereas we used $L$ to model deductive reasoning in a certain logical
system, we will use this new lattice $M$ to model inductive reasoning
in the same system. To do so, we will assign certain meanings to the
features of $M$. Here is the key idea:

#+begin_quote
We'll interpret each arrow $A\rightarrow A\times B$ as the
plausibility of $B$ given $A$. To strengthen the analogy, we'll
sometimes borrow notation from probability theory, writing \((B|A)\)
for \(A\rightarrow A\times B\).
#+end_quote

This interpretation leads to some suggestive observations:

- Certainty is represented by 1 :: You may know that the proposition
  \(A\Rightarrow B\) is logically equivalent to \(A=AB\). (If you
  haven't encountered this interesting fact yet, you should confirm
  it!) In our deductive lattice $L$, this equivalence means that
  there is an arrow $A\rightarrow B$ exactly when \(A\cong A\times
  B\) in \(L\). Relatedly, in our inductive lattice \(M\), it means
  that whenever $A\Rightarrow B$ in $L$, the arrow \(A\rightarrow
  A\times B\) is actually the (unique) arrow \(A\rightarrow A\). In
  probability-theory notation, we write this as \((B|A)=1_A\) (!)
  This is a neat category-theoretic declaration of the usual result
  that the plausibility of a certainly true proposition is 1.
- Deduction is included as a special case :: Because implications
  (arrows) in $L$ correspond to identity arrows in $M$, we have an
  inclusion functor \(\mathfrak{F}:L\rightarrow M\), which acts on
  arrows by sending \(A\rightarrow B\) to \(A\rightarrow A\times B\).
  This inclusion exhibits deduction as a special case of inductive
  reasoning.
- Bayes' Law is a commutative diagram :: In his book on probability
  theory, Jaynes derives a product rule for plausibilities based on
  his [[http://books.google.com/books?id=tTN4HuUNXjgC&lpg=PP1&dq=Jaynes%20probability%20theory&pg=PA19#v=onepage&q&f=fals][criterion for consistent reasoning]]. This product rule states
  that \((AB|X) = (A|X)\cdot (B|AX) = (B|X)\cdot(A|BX)\). If we now
  work backwards to see what this statement in probability theory
  means in our inductive lattice \(M\), we find that it's
  astonishingly simple---Jaynes' product rule is just a commutative
  square: \((X\rightarrow ABX) = (X\rightarrow AX \rightarrow ABX) =
  (X\rightarrow BX\rightarrow ABX)\).
- Inductive reasoning as uphill travel :: There is a certain analogy
  between the process of inductive reasoning and uphill travel: you
  begin in a particular state (your state of given information). From
  this starting point, you can choose to travel to other states. But
  travel is almost always uphill: to climb from a state of less
  information to a state of greater information incurs a cost in the
  form of low probability [fn::There are a number of reasons why I
  favor reciprocal probability---perhaps we could call it
  multiplicity?---and why I think reciprocal probability works better
  for category-theoretic approaches to probability theory. One of
  these is that, as you can see, reciprocal probabilities capture the
  idea of uphill costs.]. Treating your newfound state as your new
  starting point, you can climb further, reaching states of
  successively higher information while accumulating all the uphill
  costs. This analogy works well in a number of ways: it correctly
  shows that the probability of an event depends utterly on your
  current state of given information (just as the difficulty of a
  journey depends utterly on your starting point).
  It depicts deductive reasoning as zero-cost travel (the step from a
  proposition to one of its implications is
  /certain/ [fn::This is a thoroughly significant pun.]---the travel
  is neither precarious nor uphill, and there is no cost). With the
  inductive lattice model in this article, we gain a new perspective
  on this travel metaphor: we can visualize inductive reasoning as
  the /accretion of given information/, traveling \(X\rightarrow
  AX\rightarrow ABX\), and getting permission to use our current
  hypotheses as contingent givens by paying the uphill toll.


# - The propositions are entirely syntactic; they lack internal
#   structure. This model has forgotten /why/ certain relations
#   hold. Possible repair is to
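The commutative square and the uphill-cost picture can both be checked numerically. Here is a small Python sketch (again my own illustration; the particular worlds and the propositions \(A\), \(B\), \(X\) are made up for the example) in which plausibilities come from a uniform measure on a finite set of worlds. The two paths around the square agree, and the reciprocal-probability costs of the legs of \(X\rightarrow AX\rightarrow ABX\) multiply to the cost of the whole climb.

#+begin_src python
from fractions import Fraction

# Propositions are sets of worlds; X is the current state of given
# information.  These particular sets are arbitrary example data.
worlds = {"w1", "w2", "w3", "w4", "w5", "w6"}
A = {"w1", "w2", "w3"}
B = {"w2", "w3", "w4"}
X = {"w1", "w2", "w3", "w4", "w5"}

def plaus(b, a):
    """(B|A): plausibility of b given a, under a uniform measure."""
    return Fraction(len(b & a), len(a))

AB = A & B

# Jaynes' product rule (AB|X) = (A|X)(B|AX) = (B|X)(A|BX): the two
# paths X -> AX -> ABX and X -> BX -> ABX around the square agree.
assert plaus(AB, X) == plaus(A, X) * plaus(B, A & X)
assert plaus(AB, X) == plaus(B, X) * plaus(A, B & X)

# Uphill costs: reciprocal probabilities multiply along a path, so the
# legs of X -> AX -> ABX together cost as much as the whole climb.
cost = (1 / plaus(A, X)) * (1 / plaus(B, A & X))
assert cost == 1 / plaus(AB, X)

# A deductive step is free: AB implies A, so (A|ABX) = 1 and its
# reciprocal cost is 1.
assert plaus(A, AB & X) == 1
#+end_src

Using =Fraction= keeps the arithmetic exact, so the commutativity checks are genuine equalities rather than floating-point coincidences.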