changeset 10:543b1dbf821d

New article: Inductive lattices
author Dylan Holmes <ocsenave@gmail.com>
date Tue, 01 Nov 2011 01:55:26 -0500
parents 23db8b1f0ee7
children 1f112b4f9e8f
files org/visualizing-reason.org
diffstat 1 files changed, 104 insertions(+), 0 deletions(-)
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/org/visualizing-reason.org	Tue Nov 01 01:55:26 2011 -0500
     1.3 @@ -0,0 +1,104 @@
     1.4 +#+title: How to Model Inductive Reasoning
     1.5 +#+author: Dylan Holmes
     1.6 +#+email: ocsenave@gmail.com
     1.7 +#+description: An insight into plausible reasoning comes from experimenting with mathematical models.
     1.8 +#+SETUPFILE: ../../aurellem/org/setup.org
     1.9 +#+INCLUDE: ../../aurellem/org/level-0.org
    1.10 +
    1.11 +#Mathematics and computer science are the refineries of ideas. By
    1.12 +#demanding unwavering precision and lucidness  
    1.13 +
     1.14 +I've discovered a nifty mathematical presentation of
     1.15 +plausible reasoning, which I've given the label *inductive
     1.16 +lattices* so that I can refer to the idea later. Though the idea of
     1.17 +inductive lattices has a number of shortcomings, it also shows some
     1.18 +promise---there were a few resounding /clicks/ of agreement between
     1.19 +the model and my intuition, and I got to see some exciting
     1.20 +category-theoretic manifestations of some of my vaguer ideas. In this
     1.21 +article, I'll talk about what I found particularly suggestive, and also what I think could be improved.
    1.22 +
    1.23 +
     1.24 +First, when you have a /deductive/ logical system, you can use a
     1.25 +Boolean lattice as a model. These Boolean lattices capture ideas like
     1.26 +deductive implication, negation, and identical truth/falsity.
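          +
          +To make this concrete, here's a quick sketch of such a lattice in
          +Python (a toy model of my own devising, not part of the formal
          +development): propositions are sets of \ldquo{}worlds\rdquo{}, the
          +product is intersection, and implication is set inclusion.
          +
          +#+begin_src python
          +  from itertools import combinations
          +
          +  # A finite Boolean lattice, modeled as the powerset of a set of
          +  # "worlds": a proposition is the set of worlds where it holds.
          +  worlds = frozenset({1, 2, 3})
          +
          +  def powerset(s):
          +      return [frozenset(c) for r in range(len(s) + 1)
          +              for c in combinations(sorted(s), r)]
          +
          +  L = powerset(worlds)
          +  bottom, top = frozenset(), worlds    # identically false / true
          +
          +  def meet(A, B):        # the product A x B
          +      return A & B
          +
          +  def negate(A):
          +      return worlds - A
          +
          +  def implies(A, B):     # a deductive arrow A -> B exists just when A <= B
          +      return A <= B
          +
          +  assert all(implies(bottom, A) and implies(A, top) for A in L)
          +#+end_src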
    1.27 +
     1.28 +Suppose you have such a Boolean lattice, \(L\), considered as a poset
    1.29 +category with products defined between each of its members [fn::I haven't begun to think about big
    1.30 +lattices, i.e. those with infinitely many atomic propositions. As
    1.31 +such, let's consider just the finite case here.] and both an initial
    1.32 +(\ldquo{}0\rdquo{}) and final (\ldquo{}1\rdquo{}) element. Now, using
    1.33 +$L$ as a starting point, you can construct a new
    1.34 +category $M$ as follows: the objects of $M$ are the same
     1.35 +as the objects of $L$, and there is exactly one arrow
    1.36 +\(A\rightarrow A\times B\) in $M$ for every pair of objects
    1.37 +$A,B\in L$. 
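          +
          +Continuing the same toy model from above (again, my own
          +illustration rather than anything canonical), the arrows of \(M\)
          +can be enumerated directly:
          +
          +#+begin_src python
          +  from itertools import combinations
          +
          +  worlds = frozenset({1, 2, 3})
          +
          +  def powerset(s):
          +      return [frozenset(c) for r in range(len(s) + 1)
          +              for c in combinations(sorted(s), r)]
          +
          +  L = powerset(worlds)
          +
          +  # M has the same objects as L.  For each pair of objects (A, B),
          +  # M has exactly one arrow A -> A x B, recorded as (source, target).
          +  M_arrows = {(A, A & B) for A in L for B in L}
          +
          +  # Every arrow of M points from a proposition to a conjunction
          +  # that entails it: the target always refines the source.
          +  assert all(target <= source for source, target in M_arrows)
          +#+end_src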
    1.38 +
    1.39 +Whereas we used $L$ to model deductive reasoning in a certain logical system, we will use
    1.40 +this new lattice $M$ to model inductive reasoning in the same
    1.41 +system. To do so, we will assign certain meanings to the features of
    1.42 +$M$. Here is the key idea:
    1.43 +
    1.44 +#+begin_quote
    1.45 +We'll interpret each arrow $A\rightarrow A\times B$ as the
    1.46 +  plausibility of $B$ given $A$. To strengthen the analogy, we'll
    1.47 +  sometimes borrow notation from probability theory, writing \((B|A)\)
     1.48 +  for the arrow \(A\rightarrow A\times B\).
    1.49 +#+end_quote
    1.50 +
    1.51 +This interpretation leads to some suggestive observations:
    1.52 +
     1.53 +- Certainty is represented by 1 :: You may know that the proposition \(A\Rightarrow B\) is logically
     1.54 +     equivalent to \(A=AB\). (If you haven't encountered this
     1.55 +     interesting fact yet, you should confirm it!) In our deductive
     1.56 +     lattice $L$, this equivalence means that there is an arrow $A\rightarrow B$ precisely when
     1.57 +     \(A\cong A\times B\) in \(L\). Relatedly, in our inductive lattice
     1.58 +     \(M\), this equivalence means that whenever $A\Rightarrow
     1.59 +     B$ in $L$, the arrow \(A\rightarrow A\times
     1.60 +     B\) is actually the (unique) identity arrow \(A\rightarrow A\). In
     1.61 +     probability theory notation, we write this as \((B|A)=1_A\)
     1.62 +     (!)---a neat category-theoretic declaration of the usual
     1.63 +     result that the plausibility of a certainly true proposition is 1.
    1.64 +- Deduction is included as a special case :: Because implications (arrows) in $L$
    1.65 +     correspond to identity arrows in $M$, we have an inclusion
    1.66 +     functor \(\mathfrak{F}:L\rightarrow M\), which acts on arrows by
     1.67 +     sending \(A\rightarrow B\) to \(A\rightarrow A\times B\); this functor exhibits deduction as a special case of inductive reasoning.
    1.68 +- Bayes' Law is a commutative diagram ::  In his book on probability
    1.69 +     theory, Jaynes derives a product rule for plausibilities based
    1.70 +     on his [[http://books.google.com/books?id=tTN4HuUNXjgC&lpg=PP1&dq=Jaynes%20probability%20theory&pg=PA19#v=onepage&q&f=fals][criterion for consistent reasoning]]. This product rule
    1.71 +     states that \((AB|X) = (A|X)\cdot (B|AX) = (B|X)\cdot(A|BX)\). If
    1.72 +     we now work backwards to see what this statement in probability 
    1.73 +     theory means in our inductive lattice \(M\), we find that it's
    1.74 +     astonishingly simple---Jaynes' product rule is just a commutative
    1.75 +     square: \((X\rightarrow ABX) = (X\rightarrow AX \rightarrow ABX) =
     1.76 +     (X\rightarrow BX\rightarrow ABX)\). (See the numerical sketch after this list.)
    1.77 +- Inductive reasoning as uphill travel :: There is a certain analogy
    1.78 +     between the process of inductive reasoning and uphill travel: You
    1.79 +     begin in a particular state (your state of
    1.80 +     given information). From this starting point, you can choose to
    1.81 +     travel to other states. But travel is almost always uphill: to
    1.82 +     climb from a state of less information to a state of greater
    1.83 +     information incurs a cost in the form of low
    1.84 +     probability [fn::There are a number of reasons why I favor
    1.85 +     reciprocal probability---perhaps we could call it
    1.86 +     multiplicity?---and why I think reciprocal probability works
    1.87 +     better for category-theoretic approaches to probability
    1.88 +     theory. One of these is that, as you can see, reciprocal probabilities 
    1.89 +     capture the idea of uphill costs. ]. Treating your newfound state
     1.90 +     as your new starting point, you can climb further, reaching states of successively higher information, while
    1.91 +     accumulating all the uphill costs. This analogy works well in a
    1.92 +     number of ways: it correctly shows that the probability of an
    1.93 +     event utterly depends on your current state of given information
    1.94 +     (the difficulty of a journey depends utterly on your starting
    1.95 +     point). It depicts deductive reasoning as zero-cost travel (the
     1.96 +     step from a proposition to one of its implications is /certain/ [fn::This is a thoroughly significant pun.]---the travel is
     1.97 +     neither precarious nor uphill, and there is no cost). With the inductive
     1.98 +     lattice model in this article, we gain a new perspective on this
    1.99 +     travel metaphor: we can visualize inductive reasoning as the /accretion of given
    1.100 +     information/, moving along \(X\rightarrow AX\rightarrow ABX\), and
   1.101 +     getting permission to use our current hypotheses as contingent
   1.102 +     givens by paying the uphill toll.
   1.103 +      
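          +
          +As a concrete check of the observations above, here's a toy
          +numerical sketch (my own illustration; the distribution and the
          +propositions are invented for the example). It weights each arrow
          +\((B|A)\) by a probability ratio, confirms that \((B|A)\) has
          +weight 1 whenever \(A\Rightarrow B\), and verifies the product rule
          +as the commuting square \(X\rightarrow AX\rightarrow ABX\),
          +\(X\rightarrow BX\rightarrow ABX\).
          +
          +#+begin_src python
          +  import math
          +
          +  # Toy model: three worlds with made-up probabilities.
          +  p = {1: 0.5, 2: 0.3, 3: 0.2}
          +
          +  def P(S):
          +      return sum(p[w] for w in S)
          +
          +  def plaus(B, A):       # weight of the arrow (B|A) : A -> A x B
          +      return P(A & B) / P(A)
          +
          +  A = frozenset({1, 2})
          +  B = frozenset({2, 3})
          +  X = frozenset({1, 2, 3})          # our given information
          +
          +  # Certainty is represented by 1: here A => X, so (X|A) = 1_A.
          +  assert math.isclose(plaus(X, A), 1.0)
          +
          +  # Jaynes' product rule as a commutative square: both composite
          +  # paths X -> AX -> ABX and X -> BX -> ABX agree with X -> ABX.
          +  lhs = plaus(A & B, X)
          +  assert math.isclose(lhs, plaus(A, X) * plaus(B, A & X))
          +  assert math.isclose(lhs, plaus(B, X) * plaus(A, B & X))
          +
          +  # "Uphill cost" as reciprocal probability: composing arrows
          +  # multiplies costs, and a deductive (certain) step costs exactly 1.
          +  def cost(B, A):
          +      return 1.0 / plaus(B, A)
          +
          +  assert math.isclose(cost(A & B, X), cost(A, X) * cost(B, A & X))
          +#+end_src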
   1.104 +
   1.105 +# - The propositions are entirely syntactic; they lack internal
   1.106 +#  structure. This model has forgotten /why/ certain relations
   1.107 +#  hold. Possible repair is to