#+title:How to model Inductive Reasoning
#+author: Dylan Holmes
#+email: ocsenave@gmail.com
##+description: An insight into plausible reasoning comes from experimenting with mathematical models.
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

#Mathematics and computer science are the refineries of ideas. By
#demanding unwavering precision and lucidness

I've discovered a nifty mathematical presentation of plausible
reasoning, which I've given the label *inductive posets* so that I can
refer to the idea later. Though the idea of inductive posets has a
number of shortcomings, it also shows some promise---there were a few
resounding /clicks/ of agreement between the model and my intuition,
and I got to see some exciting category-theoretic manifestations of
some of my vaguer ideas. In this article, I'll talk about what I found
particularly suggestive, and also what I found improvable.

First, when you have a /deductive/ logical system, you can use a
boolean lattice as a model. These boolean lattices capture ideas like
deductive implication, negation, and identical truth/falsity.

Suppose you have such a boolean lattice, \(L\), considered as a poset
category with products defined between each pair of its
members [fn::I haven't begun to think about big lattices, i.e. those
with infinitely many atomic propositions. As such, let's consider just
the finite case here.] and with both an initial (\ldquo{}0\rdquo{})
and final (\ldquo{}1\rdquo{}) element. Now, using $L$ as a starting
point, you can construct a new category $M$ as follows: the objects of
$M$ are the same as the objects of $L$, and there is exactly one arrow
\(A\rightarrow A\times B\) in $M$ for every pair of objects
$A,B\in L$.

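To make the construction concrete, here is a small computational
sketch (my own illustration; the three worlds and their labels are
arbitrary). It takes \(L\) to be the powerset of a set of worlds, so
that the product \(A\times B\) is the intersection \(A\cap B\), and
then enumerates the arrows of \(M\):

#+begin_src python
from itertools import combinations

worlds = {1, 2, 3}

def powerset(s):
    """All subsets of s, as frozensets."""
    elems = list(s)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

L = powerset(worlds)       # objects of L (and of M)
bottom = frozenset()       # the initial element "0"
top = frozenset(worlds)    # the final element "1"

# In the poset category L, there is an arrow A -> B exactly when A <= B.
arrows_L = [(A, B) for A in L for B in L if A <= B]

# In M, there is exactly one arrow A -> A x B for each pair (A, B);
# since the product is the meet, every arrow of M lands on a subset
# of its source.
arrows_M = {(A, A & B) for A in L for B in L}

print(len(L), "objects")                      # 8
print(len(arrows_L), "arrows in L")           # 27
print(len(arrows_M), "distinct arrows in M")  # 27
#+end_src

(Notice that \(M\) has just as many arrows as \(L\), pointed the other
way: one arrow \(A\rightarrow A\times B\) for each object \(A\times
B\) lying below \(A\).)
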
Whereas we used $L$ to model deductive reasoning in a certain logical
system, we will use this new lattice $M$ to model inductive reasoning
in the same system. To do so, we will assign certain meanings to the
features of $M$. Here is the key idea:

#+begin_quote
We'll interpret each arrow $A\rightarrow A\times B$ as the
plausibility of $B$ given $A$. To strengthen the analogy, we'll
sometimes borrow notation from probability theory, writing \((B|A)\)
for the arrow \(A\rightarrow A\times B\).
#+end_quote
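
As a quick sanity check on this interpretation (my own gloss, not part
of the model itself), we can realize \((B|A)\) concretely: weight the
worlds from the previous sketch with probabilities, and label the
arrow \(A\rightarrow A\times B\) with the conditional probability
\(P(A\cap B)/P(A)\). Implications then get plausibility 1, as the
first observation below expects:

#+begin_src python
from fractions import Fraction

# Arbitrary example weights for the three worlds (they sum to 1).
weights = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}

def P(event):
    """Total weight of a set of worlds."""
    return sum(weights[w] for w in event)

def plausibility(B, A):
    """(B|A): the label on the arrow A -> A x B."""
    return P(A & B) / P(A)

A = frozenset({1, 2})
B = frozenset({2, 3})
print(plausibility(B, A))                      # 2/5
print(plausibility(frozenset({1, 2, 3}), A))   # A implies this, so we get 1
#+end_src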

This interpretation leads to some suggestive observations:

- Certainty is represented by 1 :: You may know that the proposition
  \(A\Rightarrow B\) is logically equivalent to \(A=AB\). (If you
  haven't encountered this interesting fact yet, you should confirm
  it!) In our deductive lattice $L$, this equivalence means that there
  is an arrow \(A\rightarrow B\) exactly when \(A\cong A\times B\) in
  \(L\). Relatedly, in our inductive lattice \(M\), this equivalence
  means that whenever \(A\Rightarrow B\) in $L$, the arrow
  \(A\rightarrow A\times B\) is actually the (unique) arrow
  \(A\rightarrow A\). In probability theory notation, we write this as
  \((B|A)=1_A\) (!). This is a neat category-theoretic declaration of
  the usual result that the plausibility of a certainly true
  proposition is 1.
- Deduction is included as a special case :: Because implications
  (arrows) in $L$ correspond to identity arrows in $M$, we have an
  inclusion functor \(\mathfrak{F}:L\rightarrow M\), which acts on
  arrows by sending \(A\rightarrow B\) to \(A\rightarrow A\times
  B\). This functor exhibits deductive reasoning as a special case of
  inductive reasoning.
- Bayes' Law is a commutative diagram :: In his book on probability
  theory, Jaynes derives a product rule for plausibilities based on
  his [[http://books.google.com/books?id=tTN4HuUNXjgC&lpg=PP1&dq=Jaynes%20probability%20theory&pg=PA19#v=onepage&q&f=fals][criterion for consistent reasoning]]. This product rule states
  that \((AB|X) = (A|X)\cdot (B|AX) = (B|X)\cdot(A|BX)\). If we now
  work backwards to see what this statement in probability theory
  means in our inductive lattice \(M\), we find that it's
  astonishingly simple---Jaynes' product rule is just a commutative
  square: \((X\rightarrow ABX) = (X\rightarrow AX \rightarrow ABX) =
  (X\rightarrow BX\rightarrow ABX)\). (See the sketch just after this
  list.)
- Inductive reasoning as uphill travel :: There is a certain analogy
  between the process of inductive reasoning and uphill travel: You
  begin in a particular state (your state of given information). From
  this starting point, you can choose to travel to other states. But
  travel is almost always uphill: to climb from a state of less
  information to a state of greater information incurs a cost in the
  form of low probability [fn::There are a number of reasons why I
  favor reciprocal probability---perhaps we could call it
  multiplicity?---and why I think reciprocal probability works better
  for category-theoretic approaches to probability theory. One of
  these is that, as you can see, reciprocal probabilities capture the
  idea of uphill costs.]. Treating your newfound state as your new
  starting point, you can climb further, reaching states of
  successively higher information while accumulating all the uphill
  costs. This analogy works well in a number of ways: it correctly
  shows that the probability of an event utterly depends on your
  current state of given information (the difficulty of a journey
  depends utterly on your starting point). It depicts deductive
  reasoning as zero-cost travel (the step from a proposition to one of
  its implications is /certain/ [fn::This is a thoroughly significant
  pun.]---the travel is neither precarious nor uphill, and there is no
  cost.) With the inductive lattice model in this article, we gain a
  new perspective on this travel metaphor: we can visualize inductive
  reasoning as the /accretion of given information/, going from
  \(X\rightarrow AX\rightarrow ABX\), and getting permission to use
  our current hypotheses as contingent givens by paying the uphill
  toll.
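
Both the commutative square and the uphill toll can be checked
numerically in the same toy model as before (again my own
illustration, with arbitrarily chosen weights). Taking the cost of an
arrow to be the /reciprocal/ of its plausibility, the two paths around
the square from \(X\) to \(ABX\) accumulate exactly the same cost:

#+begin_src python
from fractions import Fraction

weights = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}

def P(event):
    """Total weight of a set of worlds."""
    return sum(weights[w] for w in event)

def cost(src, dst):
    """Uphill cost of the arrow src -> dst: the reciprocal plausibility."""
    return P(src) / P(dst)

X = frozenset({1, 2, 3})
A = frozenset({1, 2})
B = frozenset({2, 3})
AX, BX, ABX = A & X, B & X, A & B & X

# Jaynes' product rule as a commutative square: both ways around
# the square cost the same as the direct climb X -> ABX.
via_A = cost(X, AX) * cost(AX, ABX)
via_B = cost(X, BX) * cost(BX, ABX)
assert via_A == via_B == cost(X, ABX)
print(via_A)   # 3: the total toll for the climb X -> ABX
#+end_src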

# - The propositions are entirely syntactic; they lack internal
#   structure. This model has forgotten /why/ certain relations
#   hold. Possible repair is to