changeset 555:c8f7d10b1a2a

further clarification about skin partitioning per Winston's request
author Robert McIntyre <rlm@mit.edu>
date Fri, 02 May 2014 14:27:29 -0400
parents 663e3d4f98c1
children 531bcd85d153
files thesis/cortex.org
diffstat 1 files changed, 41 insertions(+), 33 deletions(-)
     1.1 --- a/thesis/cortex.org	Fri May 02 14:16:35 2014 -0400
     1.2 +++ b/thesis/cortex.org	Fri May 02 14:27:29 2014 -0400
     1.3 @@ -2520,18 +2520,17 @@
     1.4    This is all demonstrated with using a simple worm-like creature, and
     1.5    recognizing worm-actions based on limited data.
     1.6  
     1.7 -  #+caption: Here is the worm with which we will be working. 
     1.8 -  #+caption: It is composed of 5 segments. Each segment has a 
     1.9 -  #+caption: pair of extensor and flexor muscles. Each of the 
    1.10 -  #+caption: worm's four joints is a hinge joint which allows 
    1.11 -  #+caption: about 30 degrees of rotation to either side. Each segment
    1.12 -  #+caption: of the worm is touch-capable and has a uniform 
    1.13 -  #+caption: distribution of touch sensors on each of its faces.
    1.14 -  #+caption: Each joint has a proprioceptive sense to detect 
    1.15 -  #+caption: relative positions. The worm segments are all the 
    1.16 -  #+caption: same except for the first one, which has a much
    1.17 -  #+caption: higher weight than the others to allow for easy 
    1.18 -  #+caption: manual motor control.
    1.19 +  #+caption: Here is the worm with which we will be working. It is
    1.20 +  #+caption: composed of 5 segments. Each segment has a pair of
    1.21 +  #+caption: extensor and flexor muscles. Each of the worm's four
    1.22 +  #+caption: joints is a hinge joint which allows about 30 degrees of
    1.23 +  #+caption: rotation to either side. Each segment of the worm is
    1.24 +  #+caption: touch-capable and has a uniform distribution of touch
    1.25 +  #+caption: sensors on each of its faces. Each joint has a
    1.26 +  #+caption: proprioceptive sense to detect relative positions. The
    1.27 +  #+caption: worm segments are all the same except for the first one,
    1.28 +  #+caption: which has a much higher weight than the others to allow
    1.29 +  #+caption: for easy manual motor control.
    1.30    #+name: basic-worm-view
    1.31    #+ATTR_LaTeX: :width 10cm
    1.32    [[./images/basic-worm-view.png]]
    1.33 @@ -3188,10 +3187,14 @@
    1.34     #+end_listing
    1.35  
    1.36    Running =test-empathy-accuracy= using the very short exercise
    1.37 -  program defined in listing \ref{generate-phi-space}, and then doing
    1.38 -  a similar pattern of activity manually yields an accuracy of around
    1.39 -  73%. This is based on very limited worm experience. By training the
    1.40 -  worm for longer, the accuracy dramatically improves.
    1.41 +  program =do-all-the-things= defined in listing
    1.42 +  \ref{generate-phi-space}, and then doing a similar pattern of
    1.43 +  activity using manual control of the worm, yields an accuracy of
    1.44 +  around 73%. This is based on very limited worm experience, and
    1.45 +  almost all errors are due to the worm's \Phi-space being too
    1.46 +  incomplete to properly interpret common poses. By manually training
    1.47 +  the worm for longer using =init-interactive= defined in listing
    1.48 +  \ref{manual-phi-space}, the accuracy dramatically improves:
    1.49  
    1.50     #+caption: Program to generate \Phi-space using manual training.
    1.51     #+name: manual-phi-space
    1.52 @@ -3211,10 +3214,12 @@
    1.53     #+end_src
    1.54     #+end_listing
    1.55  
    1.56 -  After about 1 minute of manual training, I was able to achieve 95%
    1.57 -  accuracy on manual testing of the worm using =init-interactive= and
    1.58 +  =init-interactive= allows me to take direct control of the worm's
    1.59 +  muscles and run it through each characteristic movement I care
    1.60 +  about. After about 1 minute of manual training, I was able to
    1.61 +  achieve 95% accuracy on manual testing of the worm using
    1.62    =test-empathy-accuracy=. The majority of disagreements are near the
    1.63 -  transition boundaries from one type of action to another.  During
    1.64 +  transition boundaries from one type of action to another. During
    1.65    these transitions the exact label for the action is often unclear,
    1.66    and disagreement between empathy and experience is practically
    1.67    irrelevant. Thus, the system's effective identification accuracy is
    1.68 @@ -3228,21 +3233,24 @@
    1.69     body-centered predicates, but some of those predicates relied on
    1.70     the average touch activation of pre-defined regions of the worm's
    1.71     skin. What if, instead of receiving touch pre-grouped into the six
    1.72 -   faces of each worm segment, the true topology of the worm's skin
    1.73 -   was unknown? This is more similar to how a nerve fiber bundle might
    1.74 -   be arranged inside an animal. While two fibers that are close in a
    1.75 -   nerve bundle /might/ correspond to two touch sensors that are close
    1.76 -   together on the skin, the process of taking a complicated surface
    1.77 -   and forcing it into essentially a circle requires that some regions
    1.78 -   of skin that are close together in the animal end up far apart in
    1.79 -   the nerve bundle.
    1.80 +   faces of each worm segment, the true partitioning of the worm's
    1.81 +   skin was unknown? This is more similar to how a nerve fiber bundle
    1.82 +   might be arranged inside an animal. While two fibers that are close
    1.83 +   in a nerve bundle /might/ correspond to two touch sensors that are
    1.84 +   close together on the skin, the process of taking a complicated
    1.85 +   surface and forcing it into essentially a 2D circle requires that
    1.86 +   some regions of skin that are close together in the animal end up
    1.87 +   far apart in the nerve bundle.
    1.88     
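The claim above, that forcing a 2D skin surface into an essentially 1D nerve bundle must push some neighboring regions far apart, can be illustrated with a small sketch. This is written in Python rather than the thesis's Clojure, and the row-major flattening is just one hypothetical bundle ordering, but every ordering suffers the same problem: on a grid of touch sensors, horizontal neighbors stay adjacent in the bundle while vertical neighbors end up a full row apart.

```python
# Illustrative sketch (Python, not the thesis's Clojure): flatten a 2D
# grid of touch sensors into a 1D "nerve bundle" ordering and measure
# how far apart neighboring sensors end up.

def bundle_index(x, y, width):
    """Row-major position of sensor (x, y) in the 1D bundle."""
    return y * width + x

width = 10  # sensors per row on a toy skin patch

# Horizontal neighbors stay adjacent in the bundle...
d_horizontal = abs(bundle_index(1, 0, width) - bundle_index(0, 0, width))

# ...but vertical neighbors end up a whole row apart.
d_vertical = abs(bundle_index(0, 1, width) - bundle_index(0, 0, width))

print(d_horizontal)  # 1
print(d_vertical)    # 10
```

Any other flattening just moves the distortion elsewhere, which is why a fiber's position in the bundle alone cannot reveal the skin's true partitioning.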
    1.89 -   In this chapter I show how to automatically learn the skin-topology of
    1.90 -   a worm segment by free exploration. As the worm rolls around on the
    1.91 -   floor, large sections of its surface get activated. If the worm has
    1.92 -   stopped moving, then whatever region of skin that is touching the
    1.93 -   floor is probably an important region, and should be recorded.
    1.94 -   
    1.95 +   In this chapter I show how to automatically learn the
    1.96 +   skin-partitioning of a worm segment by free exploration. As the
    1.97 +   worm rolls around on the floor, large sections of its surface get
     1.98 +   activated. If the worm has stopped moving, then whatever region of
     1.99 +   skin is touching the floor is probably an important region,
   1.100 +   and should be recorded. The code I provide relies on the worm
   1.101 +   segment having flat faces, but still demonstrates a primitive kind
   1.102 +   of multi-sensory bootstrapping that I find appealing.
   1.103 +
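The recording idea described above can be sketched as follows. This is a hypothetical Python illustration, not the thesis's Clojure implementation; `resting`, `record_regions`, and the sensor-id sets are invented names for the sake of the example: whenever the worm has stopped moving, the set of currently active touch sensors is taken to be one coherent skin region.

```python
# Hypothetical sketch (Python, not the thesis's Clojure) of learning
# skin regions by free exploration: while the worm is at rest, the
# sensors currently firing are grouped into one recorded region.

def resting(velocity_history, threshold=1e-3):
    """The worm counts as resting if it has barely moved recently."""
    return all(abs(v) < threshold for v in velocity_history)

def record_regions(frames, regions):
    """frames: list of (velocity_history, active_sensor_ids) samples.
    Each time the worm is at rest, the set of active sensors is
    recorded as a candidate skin region (duplicates ignored)."""
    for velocity_history, active in frames:
        region = frozenset(active)
        if resting(velocity_history) and region and region not in regions:
            regions.append(region)
    return regions

# Two resting samples on the same face produce one recorded region:
frames = [([0.0, 0.0], {3, 4, 5}),   # at rest, sensors 3-5 touching
          ([0.9, 0.8], {1, 2}),      # rolling -- ignored
          ([0.0, 0.0], {3, 4, 5})]   # same face again -- not re-recorded
print(record_regions(frames, []))    # [frozenset({3, 4, 5})]
```

The sketch depends, as the text notes, on the segment having flat faces: only then does a resting pose activate one face's sensors cleanly enough for the active set to coincide with a true partition of the skin.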
   1.104     #+caption: Program to detect whether the worm is in a resting state 
   1.105     #+caption: with one face touching the floor.
   1.106     #+name: pure-touch