Mercurial > dylan : org/visualizing-reason.org @ 10:543b1dbf821d
New article: Inductive lattices

| author   | Dylan Holmes <ocsenave@gmail.com> |
| date     | Tue, 01 Nov 2011 01:55:26 -0500   |
| children | 1f112b4f9e8f                      |
#+title: How to model Inductive Reasoning
#+author: Dylan Holmes
#+email: ocsenave@gmail.com
##+description: An insight into plausible reasoning comes from experimenting with mathematical models.
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

# Mathematics and computer science are the refineries of ideas. By
# demanding unwavering precision and lucidness

I've discovered a nifty mathematical presentation of plausible
reasoning, which I've given the label *inductive posets* so that I can
refer to the idea later. Though the idea of inductive posets has a
number of shortcomings, it also shows some promise---there were a few
resounding /clicks/ of agreement between the model and my intuition,
and I got to see some exciting category-theoretic manifestations of
some of my vaguer ideas. In this article, I'll talk about what I found
particularly suggestive, and also what I found improvable.

First, when you have a /deductive/ logical system, you can use a
boolean lattice as a model. These boolean lattices capture ideas like
deductive implication, negation, and identical truth/falsity.

Suppose you have such a boolean lattice, \(L\), considered as a poset
category with products defined between each pair of its
members [fn::I haven't begun to think about big lattices, i.e. those
with infinitely many atomic propositions. As such, let's consider just
the finite case here.] and both an initial (\ldquo{}0\rdquo{}) and
final (\ldquo{}1\rdquo{}) element. Now, using $L$ as a starting point,
you can construct a new category $M$ as follows: the objects of $M$
are the same as the objects of $L$, and there is exactly one arrow
\(A\rightarrow A\times B\) in $M$ for every pair of objects
$A,B\in L$.

Whereas we used $L$ to model deductive reasoning in a certain logical
system, we will use this new lattice $M$ to model inductive reasoning
in the same system.
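To make the construction concrete, here is a small sketch (my own toy
encoding, not from this article): propositions in a finite boolean
lattice are modeled as sets of "possible worlds", the categorical
product \(A\times B\) is conjunction (set intersection), and the
arrows of \(M\) are then enumerated directly.

```python
from itertools import chain, combinations

# Toy model: L is the powerset of a 3-element set of worlds.
# A proposition is the set of worlds where it holds; the product
# A x B in the poset category is the meet (conjunction), i.e.
# intersection of sets.

worlds = frozenset({1, 2, 3})

def powerset(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

L = powerset(worlds)

def implies(A, B):   # A => B  means  A <= B in the lattice
    return A <= B

def meet(A, B):      # the product A x B (the conjunction AB)
    return A & B

# The logical fact quoted later in the article: A => B iff A = AB.
for A in L:
    for B in L:
        assert implies(A, B) == (A == meet(A, B))

# The arrows of M: exactly one arrow A -> A x B for each pair A, B.
arrows_M = {(A, meet(A, B)) for A in L for B in L}

# When A => B, the arrow A -> A x B collapses to the identity A -> A.
assert all(meet(A, B) == A for A in L for B in L if implies(A, B))

print(len(L), len(arrows_M))  # prints: 8 27
```

(The counts 8 and 27 are just features of this particular toy lattice
on three atoms; nothing in the construction depends on them.)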
To do so, we will assign certain meanings to the features of
$M$. Here is the key idea:

#+begin_quote
We'll interpret each arrow $A\rightarrow A\times B$ as the
plausibility of $B$ given $A$. To strengthen the analogy, we'll
sometimes borrow notation from probability theory, writing \((B|A)\)
for \(A\rightarrow A\times B\).
#+end_quote

This interpretation leads to some suggestive observations:

- Certainty is represented by 1 :: You may know that the proposition
  \(A\Rightarrow B\) is logically equivalent to \(A=AB\). (If you
  haven't encountered this interesting fact yet, you should confirm
  it!) In our deductive lattice $L$, this equivalence means that there
  is an arrow $A\rightarrow B$ just if \(A\cong A\times B\) in
  \(L\). Relatedly, in our inductive lattice \(M\), this equivalence
  means that whenever $A\Rightarrow B$ in $L$, the arrow
  \(A\rightarrow A\times B\) is actually the (unique) arrow
  \(A\rightarrow A\). In probability theory notation, we write this as
  \((B|A)=1_A\) (!) This is a neat category-theoretic declaration of
  the usual result that the plausibility of a certainly true
  proposition is 1.
- Deduction is included as a special case :: Because implications
  (arrows) in $L$ correspond to identity arrows in $M$, we have an
  inclusion functor \(\mathfrak{F}:L\rightarrow M\), which acts on
  arrows by sending \(A\rightarrow B\) to \(A\rightarrow A\times
  B\). This functor exhibits deductive reasoning as the special case
  of inductive reasoning in which every step is certain.
- Bayes' Law is a commutative diagram :: In his book on probability
  theory, Jaynes derives a product rule for plausibilities based on
  his [[http://books.google.com/books?id=tTN4HuUNXjgC&lpg=PP1&dq=Jaynes%20probability%20theory&pg=PA19#v=onepage&q&f=fals][criterion for consistent reasoning]]. This product rule
  states that \((AB|X) = (A|X)\cdot (B|AX) = (B|X)\cdot(A|BX)\).
  If we now work backwards to see what this statement in probability
  theory means in our inductive lattice \(M\), we find that it's
  astonishingly simple---Jaynes' product rule is just a commutative
  square: \((X\rightarrow ABX) = (X\rightarrow AX \rightarrow ABX) =
  (X\rightarrow BX\rightarrow ABX)\).
- Inductive reasoning as uphill travel :: There is a certain analogy
  between the process of inductive reasoning and uphill travel: You
  begin in a particular state (your state of given information). From
  this starting point, you can choose to travel to other states. But
  travel is almost always uphill: to climb from a state of less
  information to a state of greater information incurs a cost in the
  form of low probability [fn::There are a number of reasons why I
  favor reciprocal probability---perhaps we could call it
  multiplicity?---and why I think reciprocal probability works better
  for category-theoretic approaches to probability theory. One of
  these is that, as you can see, reciprocal probabilities capture the
  idea of uphill costs.]. Treating your newfound state as your new
  starting point, you can climb further, reaching states of
  successively higher information while accumulating all the uphill
  costs. This analogy works well in a number of ways: it correctly
  shows that the probability of an event depends utterly on your
  current state of given information (the difficulty of a journey
  depends utterly on your starting point). It depicts deductive
  reasoning as zero-cost travel (the step from a proposition to one of
  its implications is /certain/ [fn::This is a thoroughly significant
  pun.]---the travel is neither precarious nor uphill, and there is no
  cost.)
  With the inductive lattice model in this article, we gain a new
  perspective on this travel metaphor: we can visualize inductive
  reasoning as the /accretion of given information/, going from
  \(X\rightarrow AX\rightarrow ABX\), and getting permission to use
  our current hypotheses as contingent givens by paying the uphill
  toll.

# - The propositions are entirely syntactic; they lack internal
#   structure. This model has forgotten /why/ certain relations
#   hold. Possible repair is to
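The commutative square above can also be checked numerically. The
following sketch (my own toy setup, not from the article) puts a
probability measure on the possible worlds and verifies that the two
paths \(X\rightarrow AX\rightarrow ABX\) and
\(X\rightarrow BX\rightarrow ABX\) compose to the same plausibility,
which is exactly Jaynes' product rule.

```python
from fractions import Fraction

# Toy probability assignment over three possible worlds; exact
# rationals so that equality of the two composite paths is exact.
prob = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}

def P(S):
    return sum(prob[w] for w in S)

def given(B, A):   # the plausibility (B|A), assuming P(A) > 0
    return P(A & B) / P(A)

X = frozenset({1, 2, 3})   # the state of given information
A = frozenset({1, 2})
B = frozenset({2, 3})

lhs   = given(A & B, X)                  # (AB|X): the diagonal X -> ABX
via_A = given(A, X) * given(B, A & X)    # (A|X)(B|AX): X -> AX -> ABX
via_B = given(B, X) * given(A, B & X)    # (B|X)(A|BX): X -> BX -> ABX

# The square commutes: both composite paths equal the diagonal.
assert lhs == via_A == via_B
print(lhs)  # prints: 1/3
```

The same check passes for any choice of \(A\), \(B\), and \(X\) with
nonzero probability, since the product rule is an identity of
conditional probabilities rather than a fact about this particular
measure.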