changeset 536:0b0fef5e817b

more clarification.
author Robert McIntyre <rlm@mit.edu>
date Sun, 27 Apr 2014 20:39:33 -0400
parents 8a5abd51cd4f
children bc5eb476693a
files thesis/cortex.org
diffstat 1 files changed, 34 insertions(+), 15 deletions(-)
     1.1 --- a/thesis/cortex.org	Sun Apr 27 20:25:22 2014 -0400
     1.2 +++ b/thesis/cortex.org	Sun Apr 27 20:39:33 2014 -0400
     1.3 @@ -2946,7 +2946,7 @@
     1.4     #+end_listing
     1.5  
     1.6     #+caption: =longest-thread= finds the longest path of consecutive 
     1.7 -   #+caption: experiences to explain proprioceptive worm data from 
     1.8 +   #+caption: past experiences to explain proprioceptive worm data.
     1.9     #+caption: Here, the film strip represents the
    1.10    #+caption: creature's previous experience. Short sequences of
    1.11     #+caption: memories are spliced together to match the
    1.12 @@ -2967,13 +2967,13 @@
    1.13     =longest-thread= takes time proportional to the average number of
    1.14     entries in a proprioceptive bin, because for each element in the
    1.15     starting bin it performs a series of set lookups in the preceding
    1.16 -   bins. If the total history is limited, then this is only a constant
    1.17 -   multiple times the number of entries in the starting bin. This
    1.18 -   analysis also applies even if the action requires multiple longest
    1.19 -   chains -- it's still the average number of entries in a
    1.20 -   proprioceptive bin times the desired chain length. Because
    1.21 -   =longest-thread= is so efficient and simple, I can interpret
    1.22 -   worm-actions in real time.
    1.23 +   bins. If the total history is limited, then this takes time
    1.24 +   proportional to only a constant multiple of the number of entries
    1.25 +   in the starting bin. This analysis also applies even if the action
    1.26 +   requires multiple longest chains -- it's still the average number
    1.27 +   of entries in a proprioceptive bin times the desired chain length.
    1.28 +   Because =longest-thread= is so efficient and simple, I can
    1.29 +   interpret worm-actions in real time.
    1.30  
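   As a rough illustration of this cost model, the following sketch
   (hypothetical numbers, chosen only for illustration, not
   measurements from the thesis) counts the set lookups implied by the
   analysis above:

#+begin_src clojure
;; Back-of-envelope cost of longest-thread under the analysis above.
;; Both numbers are hypothetical and serve only to illustrate scale.
(let [avg-bin-size 20    ; average entries per proprioceptive bin
      chain-length 300]  ; desired length of the recovered thread
  ;; Each candidate in the starting bin performs one set lookup per
  ;; preceding bin, so the total work is roughly their product.
  (* avg-bin-size chain-length))
;; => 6000 lookups, comfortably within a real-time budget.
#+end_src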
    1.31    #+caption: Program to calculate empathy by tracing through \Phi-space
    1.32    #+caption: and finding the longest (i.e. most coherent) interpretation
    1.33 @@ -3015,8 +3015,13 @@
    1.34     using a gradient over the closest known sensory data points,
    1.35     averages can be misleading. It is certainly possible to create an
    1.36     impossible sensory state by averaging two possible sensory states.
    1.37 -   Therefore, I simply replicate the most recent sensory experience to
    1.38 -   fill in the gaps.
    1.39 +   For example, consider moving your hand in an arc over your head. If
    1.40 +   for some reason you only have the initial and final positions of
    1.41 +   this movement in your \Phi-space, averaging them together will
    1.42 +   produce the proprioceptive sensation of having your hand /inside/
    1.43 +   your head, which is physically impossible to ever experience
    1.44 +   (barring motor adaptation illusions). Therefore, I simply
    1.45 +   replicate the most recent sensory experience to fill in the gaps.
    1.46  
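   Before the full listing, here is a minimal sketch of this
   replication strategy. It assumes, purely for illustration, that
   experiences arrive as a sequence in which missing frames are =nil=;
   the helper name is hypothetical, not the thesis's actual function:

#+begin_src clojure
;; Sketch: fill each nil gap with the most recent non-nil experience.
;; Hypothetical helper for illustration; not the listing below.
(defn replicate-gaps
  "Replace every nil in xs with the latest preceding non-nil value."
  [xs]
  (rest (reductions (fn [prev x] (or x prev)) nil xs)))

;; (replicate-gaps [:a nil nil :b nil]) => (:a :a :a :b :b)
#+end_src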
    1.47     #+caption: Fill in blanks in sensory experience by replicating the most 
    1.48     #+caption: recent experience.
    1.49 @@ -3079,7 +3084,7 @@
    1.50     #+end_src
    1.51     #+end_listing
    1.52  
    1.53 -   #+caption: Use longest thread and a phi-space generated from a short
    1.54 +   #+caption: Use =longest-thread= and a \Phi-space generated from a short
    1.55     #+caption: exercise routine to interpret actions during free play.
    1.56     #+name: empathy-debug
    1.57  #+begin_listing clojure
    1.58 @@ -3102,13 +3107,25 @@
    1.59             (curled? empathy)       (.setText text "Curled")
    1.60             (wiggling? empathy)     (.setText text "Wiggling")
    1.61             (resting? empathy)      (.setText text "Resting")
    1.62 -           :else                       (.setText text "Unknown")))))))
    1.63 +           :else                   (.setText text "Unknown")))))))
    1.64  
    1.65  (defn empathy-experiment [record]
    1.66    (.start (worm-world :experience-watch (debug-experience-phi)
    1.67                        :record record :worm worm*)))
    1.68     #+end_src
    1.69     #+end_listing
    1.70 +
    1.71 +   These programs create a test for the empathy system. First, the
    1.72 +   worm's \Phi-space is generated from a simple motor script. Then
    1.73 +   the worm is re-created in an environment almost identical to the
    1.74 +   testing environment for the action-predicates, with one major
    1.75 +   difference: the only sensory information available to the system
    1.76 +   is proprioception. From just the proprioception data and the
    1.77 +   \Phi-space, =longest-thread= synthesizes a complete record of the
    1.78 +   last 300 sensory experiences of the worm. These synthesized
    1.79 +   experiences are fed directly into the action predicates
    1.80 +   =grand-circle?=, =curled?=, =wiggling?=, and =resting?= from
    1.81 +   before, and their output is printed to the screen at each frame.
    1.82     
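   A typical invocation, assuming the worm-world setup from the
   earlier sections is in place and that passing =false= for =record=
   disables recording, as in the earlier =worm-world= examples:

#+begin_src clojure
;; Run the empathy test interactively; false for record is assumed
;; here to skip recording, matching earlier worm-world runs.
(empathy-experiment false)
#+end_src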
    1.83     The result of running =empathy-experiment= is that the system is
    1.84     generally able to interpret worm actions using the action-predicates
    1.85 @@ -3192,9 +3209,11 @@
    1.86    boundaries of transitioning from one type of action to another.
    1.87    During these transitions the exact label for the action is more open
    1.88    to interpretation, and disagreement between empathy and experience
    1.89 -  is more excusable.
    1.90 -
    1.91 -** COMMENT Digression: Learn touch sensor layout through free play
    1.92 +  is essentially irrelevant at this point, giving a practical
    1.93 +  identification accuracy even higher than 95%. When I watch this
    1.94 +  system myself, I generally see no errors in action identification.
    1.95 +
    1.96 +** COMMENT Digression: Learning touch sensor layout through free play
    1.97  
    1.98     In the previous section I showed how to compute actions in terms of
    1.99     body-centered predicates which relied on the average touch