changeset 517:68665d2c32a7

spellcheck; almost done with first draft!
author Robert McIntyre <rlm@mit.edu>
date Mon, 31 Mar 2014 00:18:26 -0400
parents ced955c3c84f
children d78f5102d693
files thesis/cortex.bib thesis/cortex.org thesis/rlm-cortex-meng.tex
diffstat 3 files changed, 237 insertions(+), 259 deletions(-)
line diff
     1.1 --- a/thesis/cortex.bib	Sun Mar 30 22:48:19 2014 -0400
     1.2 +++ b/thesis/cortex.bib	Mon Mar 31 00:18:26 2014 -0400
     1.3 @@ -12,7 +12,7 @@
     1.4    year = 2013,
     1.5    addendum = {\why{All complicated creatures in {\tt CORTEX} are
     1.6                    described using Blender's extensive 3D modeling
     1.7 -                  capabilities. Blender is a very sophistaced 3D
     1.8 +                  capabilities. Blender is a very sophisticated 3D
     1.9                    modeling environment and has been used to create a
    1.10                    short movie called Sintel \url{http://www.sintel.org/}.}}
    1.11  }
    1.12 @@ -90,10 +90,10 @@
    1.13    year = "1998",
    1.14    title = "The Man Who Mistook His Wife For A Hat: And Other Clinical Tales",
    1.15    ISBN = "9780330700580",
    1.16 -  addendum = {\why{This book describes exoitic cases where the human
    1.17 +  addendum = {\why{This book describes exotic cases where the human
    1.18                    mind goes wrong. The section on proprioception is
    1.19 -                  particurally relevant to this thesis, and one of the
    1.20 -                  best explinations of how important proprioception
    1.21 +                  particularly relevant to this thesis, and one of the
    1.22 +                  best explanations of how important proprioception
     1.23                    is, through the eyes of someone who has lost the
    1.24                    sense.}}
    1.25  }
    1.26 @@ -158,7 +158,7 @@
    1.27                    be improved with {\tt CORTEX}. Larson uses a simple
    1.28                    blocks world simulator to explore using
    1.29                    self-organizing maps to bootstrap symbols just from
    1.30 -                  exploration with a simule arm and colored blocks.}}
     1.31 +                  exploration with a simulated arm and colored blocks.}}
    1.32  }
    1.33  
    1.34  @phdthesis{sussman-hacker,
    1.35 @@ -174,7 +174,7 @@
    1.36                    problem solving is begging to be implemented in {\tt
    1.37                    CORTEX}'s rich world. Will program debugging still
    1.38                    work well with many more senses and a more
    1.39 -                  complicated environement?}}
    1.40 +                  complicated environment?}}
    1.41  }
    1.42  
    1.43  @phdthesis{coen-x-modal,
     2.1 --- a/thesis/cortex.org	Sun Mar 30 22:48:19 2014 -0400
     2.2 +++ b/thesis/cortex.org	Mon Mar 31 00:18:26 2014 -0400
     2.3 @@ -59,7 +59,6 @@
     2.4    constraint can be the difference between easily understanding what
     2.5    is happening in a video and being completely lost in a sea of
     2.6    incomprehensible color and movement.
     2.7 -
     2.8    
     2.9  ** The problem: recognizing actions in video is hard!
    2.10     
    2.11 @@ -77,7 +76,7 @@
    2.12     the problem is that many computer vision systems focus on
    2.13     pixel-level details or comparisons to example images (such as
    2.14     \cite{volume-action-recognition}), but the 3D world is so variable
    2.15 -   that it is hard to descrive the world in terms of possible images.
    2.16 +   that it is hard to describe the world in terms of possible images.
    2.17  
     2.18     In fact, the contents of a scene may have much less to do with pixel
    2.19     probabilities than with recognizing various affordances: things you
    2.20 @@ -102,7 +101,7 @@
    2.21     [[./images/wall-push.png]]
    2.22    
    2.23     Each of these examples tells us something about what might be going
    2.24 -   on in our minds as we easily solve these recognition problems.
    2.25 +   on in our minds as we easily solve these recognition problems:
    2.26     
    2.27     The hidden chair shows us that we are strongly triggered by cues
    2.28     relating to the position of human bodies, and that we can determine
    2.29 @@ -115,6 +114,11 @@
    2.30     most positions, and we can easily project this self-knowledge to
    2.31     imagined positions triggered by images of the human body.
    2.32  
    2.33 +   The cat tells us that imagination of some kind plays an important
    2.34 +   role in understanding actions. The question is: Can we be more
    2.35 +   precise about what sort of imagination is required to understand
    2.36 +   these actions?
    2.37 +
    2.38  ** A step forward: the sensorimotor-centered approach
    2.39  
    2.40     In this thesis, I explore the idea that our knowledge of our own
    2.41 @@ -139,13 +143,13 @@
    2.42        model of its own body in place of the cat. Possibly also create
    2.43        a simulation of the stream of water.
    2.44  
    2.45 -   2. Play out this simulated scene and generate imagined sensory
    2.46 +   2. ``Play out'' this simulated scene and generate imagined sensory
    2.47        experience. This will include relevant muscle contractions, a
    2.48        close up view of the stream from the cat's perspective, and most
    2.49 -      importantly, the imagined feeling of water entering the
    2.50 -      mouth. The imagined sensory experience can come from a
    2.51 -      simulation of the event, but can also be pattern-matched from
    2.52 -      previous, similar embodied experience.
    2.53 +      importantly, the imagined feeling of water entering the mouth.
    2.54 +      The imagined sensory experience can come from a simulation of
    2.55 +      the event, but can also be pattern-matched from previous,
    2.56 +      similar embodied experience.
    2.57  
    2.58     3. The action is now easily identified as drinking by the sense of
    2.59        taste alone. The other senses (such as the tongue moving in and
    2.60 @@ -160,7 +164,7 @@
    2.61      2. Generate proprioceptive sensory data from this alignment.
    2.62    
    2.63      3. Use the imagined proprioceptive data as a key to lookup related
    2.64 -       sensory experience associated with that particular proproceptive
    2.65 +       sensory experience associated with that particular proprioceptive
    2.66         feeling.
    2.67  
    2.68      4. Retrieve the feeling of your bottom resting on a surface, your
    2.69 @@ -194,14 +198,14 @@
    2.70     viewpoint.
    2.71  
    2.72     Another powerful advantage is that using the language of multiple
    2.73 -   body-centered rich senses to describe body-centerd actions offers a
    2.74 +   body-centered rich senses to describe body-centered actions offers a
    2.75     massive boost in descriptive capability. Consider how difficult it
    2.76     would be to compose a set of HOG filters to describe the action of
    2.77     a simple worm-creature ``curling'' so that its head touches its
     2.78    tail, and then behold the simplicity of describing this action in a
    2.79     language designed for the task (listing \ref{grand-circle-intro}):
    2.80  
    2.81 -   #+caption: Body-centerd actions are best expressed in a body-centered 
    2.82 +   #+caption: Body-centered actions are best expressed in a body-centered 
    2.83     #+caption: language. This code detects when the worm has curled into a 
    2.84     #+caption: full circle. Imagine how you would replicate this functionality
    2.85     #+caption: using low-level pixel features such as HOG filters!
    2.86 @@ -220,30 +224,23 @@
    2.87     #+end_src
    2.88     #+end_listing
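   To give a concrete feel for that simplicity, here is a self-contained
   toy predicate in the same spirit -- the names and the shape of the
   touch data are invented for illustration and this is not the actual
   grand-circle-intro listing:

   #+begin_src clojure
;; "curled into a full circle" read straight off the touch sense,
;; assuming touch arrives as a map from named skin regions to pressure.
(defn head-touches-tail?
  [touch]
  (and (pos? (:head touch 0.0))
       (pos? (:tail touch 0.0))))

(head-touches-tail? {:head 0.7 :tail 0.4 :belly 0.0})  ;=> true
   #+end_src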
    2.89  
    2.90 -** =EMPATH= regognizes actions using empathy
    2.91 -
    2.92 -   First, I built a system for constructing virtual creatures with
    2.93 +** =EMPATH= recognizes actions using empathy
    2.94 +
    2.95 +   Exploring these ideas further demands a concrete implementation, so
    2.96 +   first, I built a system for constructing virtual creatures with
    2.97     physiologically plausible sensorimotor systems and detailed
    2.98     environments. The result is =CORTEX=, which is described in section
    2.99 -   \ref{sec-2}. (=CORTEX= was built to be flexible and useful to other
   2.100 -   AI researchers; it is provided in full with detailed instructions
   2.101 -   on the web [here].)
   2.102 +   \ref{sec-2}.
   2.103  
   2.104     Next, I wrote routines which enabled a simple worm-like creature to
   2.105     infer the actions of a second worm-like creature, using only its
   2.106     own prior sensorimotor experiences and knowledge of the second
   2.107     worm's joint positions. This program, =EMPATH=, is described in
   2.108 -   section \ref{sec-3}, and the key results of this experiment are
   2.109 -   summarized below.
   2.110 -
   2.111 -   I have built a system that can express the types of recognition
   2.112 -   problems in a form amenable to computation. It is split into
   2.113 -   four parts:
   2.114 -
   2.115 -   - Free/Guided Play :: The creature moves around and experiences the
   2.116 -        world through its unique perspective. Many otherwise
   2.117 -        complicated actions are easily described in the language of a
   2.118 -        full suite of body-centered, rich senses. For example,
    2.119 +   section \ref{sec-3}. Its main components are:
   2.120 +
   2.121 +   - Embodied Action Definitions :: Many otherwise complicated actions
   2.122 +        are easily described in the language of a full suite of
   2.123 +        body-centered, rich senses and experiences. For example,
   2.124          drinking is the feeling of water sliding down your throat, and
   2.125          cooling your insides. It's often accompanied by bringing your
   2.126          hand close to your face, or bringing your face close to water.
   2.127 @@ -251,26 +248,35 @@
   2.128          your quadriceps, then feeling a surface with your bottom and
   2.129          relaxing your legs. These body-centered action descriptions
   2.130          can be either learned or hard coded.
   2.131 -   - Posture Imitation :: When trying to interpret a video or image,
   2.132 +
   2.133 +   - Guided Play      :: The creature moves around and experiences the
   2.134 +        world through its unique perspective. As the creature moves,
   2.135 +        it gathers experiences that satisfy the embodied action
   2.136 +        definitions. 
   2.137 +
   2.138 +   - Posture imitation :: When trying to interpret a video or image,
   2.139          the creature takes a model of itself and aligns it with
   2.140 -        whatever it sees. This alignment can even cross species, as
   2.141 +        whatever it sees. This alignment might even cross species, as
   2.142          when humans try to align themselves with things like ponies,
   2.143          dogs, or other humans with a different body type.
   2.144 -   - Empathy         :: The alignment triggers associations with
   2.145 +
   2.146 +   - Empathy          :: The alignment triggers associations with
   2.147          sensory data from prior experiences. For example, the
   2.148          alignment itself easily maps to proprioceptive data. Any
   2.149          sounds or obvious skin contact in the video can to a lesser
   2.150 -        extent trigger previous experience. Segments of previous
   2.151 -        experiences are stitched together to form a coherent and
   2.152 -        complete sensory portrait of the scene.
   2.153 -   - Recognition      :: With the scene described in terms of first
   2.154 -        person sensory events, the creature can now run its
   2.155 -        action-identification programs on this synthesized sensory
   2.156 -        data, just as it would if it were actually experiencing the
   2.157 -        scene first-hand. If previous experience has been accurately
   2.158 +        extent trigger previous experience keyed to hearing or touch.
   2.159 +        Segments of previous experiences gained from play are stitched
   2.160 +        together to form a coherent and complete sensory portrait of
   2.161 +        the scene.
   2.162 +
   2.163 +   - Recognition      :: With the scene described in terms of
   2.164 +        remembered first person sensory events, the creature can now
    2.165 +        run its action-identification programs (such as the one in listing
    2.166 +        \ref{grand-circle-intro}) on this synthesized sensory data,
   2.167 +        just as it would if it were actually experiencing the scene
   2.168 +        first-hand. If previous experience has been accurately
   2.169          retrieved, and if it is analogous enough to the scene, then
   2.170          the creature will correctly identify the action in the scene.
   2.171 -   
   2.172  
    2.173     My program, =EMPATH=, uses this empathic problem solving technique
   2.174     to interpret the actions of a simple, worm-like creature. 
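   To make the flow between these components concrete, here is a
   runnable toy sketch of how they could fit together. The function
   names and data shapes are invented for illustration; they are not
   =EMPATH='s actual API.

   #+begin_src clojure
;; posture imitation: aligning the self-model yields proprioception
(defn align-posture [joint-angles]
  {:proprioception joint-angles})

;; empathy: fill in missing senses from the closest prior experience
(defn imagine [phi-space partial-sense]
  (or (first (filter #(= (:proprioception %)
                         (:proprioception partial-sense))
                     phi-space))
      partial-sense))

;; recognition: run embodied action predicates on the imagined data
(defn recognize [predicates sense]
  (map first (filter (fn [[_ pred?]] (pred? sense)) predicates)))

(let [phi-space [{:proprioception [0.1 0.9] :touch :belly-down}] ; from guided play
      curled?   #(= :belly-down (:touch %))]                     ; embodied definition
  (recognize {:curled curled?}
             (imagine phi-space (align-posture [0.1 0.9]))))
;; => (:curled)
   #+end_src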
   2.175 @@ -287,28 +293,31 @@
   2.176     #+name: worm-recognition-intro
   2.177     #+ATTR_LaTeX: :width 15cm
   2.178     [[./images/worm-poses.png]]
   2.179 -
   2.180 -   #+caption: From only \emph{proprioceptive} data, =EMPATH= was able to infer 
   2.181 -   #+caption: the complete sensory experience and classify these four poses.
   2.182 -   #+caption: The last image is a composite, depicting the intermediate stages
   2.183 -   #+caption: of \emph{wriggling}.
   2.184 -   #+name: worm-recognition-intro-2
   2.185 -   #+ATTR_LaTeX: :width 15cm
   2.186 -   [[./images/empathy-1.png]]
   2.187     
   2.188 -   Next, I developed an experiment to test the power of =CORTEX='s
   2.189 -   sensorimotor-centered language for solving recognition problems. As
   2.190 -   a proof of concept, I wrote routines which enabled a simple
   2.191 -   worm-like creature to infer the actions of a second worm-like
   2.192 -   creature, using only its own previous sensorimotor experiences and
   2.193 -   knowledge of the second worm's joints (figure
   2.194 -   \ref{worm-recognition-intro-2}). The result of this proof of
   2.195 -   concept was the program =EMPATH=, described in section \ref{sec-3}.
   2.196 -
   2.197 -** =EMPATH= is built on =CORTEX=, en environment for making creatures.
   2.198 -
   2.199 - # =CORTEX= provides a language for describing the sensorimotor
   2.200 -   # experiences of various creatures. 
   2.201 +*** Main Results 
   2.202 +
    2.203 +   - After one-shot supervised training, =EMPATH= was able to recognize a
   2.204 +     wide variety of static poses and dynamic actions---ranging from
   2.205 +     curling in a circle to wiggling with a particular frequency ---
   2.206 +     with 95\% accuracy.
   2.207 +
    2.208 +   - These results were completely independent of viewing angle
    2.209 +     because the underlying body-centered language is fundamentally
    2.210 +     viewpoint-independent; once an action is learned, it can be
    2.211 +     recognized equally well from any viewing angle.
   2.212 +
   2.213 +   - =EMPATH= is surprisingly short; the sensorimotor-centered
   2.214 +     language provided by =CORTEX= resulted in extremely economical
   2.215 +     recognition routines --- about 500 lines in all --- suggesting
   2.216 +     that such representations are very powerful, and often
   2.217 +     indispensable for the types of recognition tasks considered here.
   2.218 +
   2.219 +   - Although for expediency's sake, I relied on direct knowledge of
   2.220 +     joint positions in this proof of concept, it would be
   2.221 +     straightforward to extend =EMPATH= so that it (more
   2.222 +     realistically) infers joint positions from its visual data.
   2.223 +
   2.224 +** =EMPATH= is built on =CORTEX=, a creature builder.
   2.225  
   2.226     I built =CORTEX= to be a general AI research platform for doing
   2.227     experiments involving multiple rich senses and a wide variety and
   2.228 @@ -319,19 +328,21 @@
   2.229     language of creatures and senses, but in order to explore those
   2.230     ideas they must first build a platform in which they can create
   2.231     simulated creatures with rich senses! There are many ideas that
   2.232 -   would be simple to execute (such as =EMPATH=), but attached to them
   2.233 -   is the multi-month effort to make a good creature simulator. Often,
   2.234 -   that initial investment of time proves to be too much, and the
   2.235 -   project must make do with a lesser environment.
   2.236 +   would be simple to execute (such as =EMPATH= or
   2.237 +   \cite{larson-symbols}), but attached to them is the multi-month
   2.238 +   effort to make a good creature simulator. Often, that initial
   2.239 +   investment of time proves to be too much, and the project must make
   2.240 +   do with a lesser environment.
   2.241  
   2.242     =CORTEX= is well suited as an environment for embodied AI research
   2.243     for three reasons:
   2.244  
   2.245 -   - You can create new creatures using Blender, a popular 3D modeling
   2.246 -     program. Each sense can be specified using special blender nodes
   2.247 -     with biologically inspired paramaters. You need not write any
   2.248 -     code to create a creature, and can use a wide library of
   2.249 -     pre-existing blender models as a base for your own creatures.
   2.250 +   - You can create new creatures using Blender (\cite{blender}), a
   2.251 +     popular 3D modeling program. Each sense can be specified using
   2.252 +     special blender nodes with biologically inspired parameters. You
   2.253 +     need not write any code to create a creature, and can use a wide
   2.254 +     library of pre-existing blender models as a base for your own
   2.255 +     creatures.
   2.256  
   2.257     - =CORTEX= implements a wide variety of senses: touch,
   2.258       proprioception, vision, hearing, and muscle tension. Complicated
   2.259 @@ -343,24 +354,25 @@
   2.260       available.
   2.261  
   2.262     - =CORTEX= supports any number of creatures and any number of
   2.263 -     senses. Time in =CORTEX= dialates so that the simulated creatures
   2.264 -     always precieve a perfectly smooth flow of time, regardless of
   2.265 +     senses. Time in =CORTEX= dilates so that the simulated creatures
   2.266 +     always perceive a perfectly smooth flow of time, regardless of
   2.267       the actual computational load.
   2.268  
   2.269 -   =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
   2.270 -   engine designed to create cross-platform 3D desktop games. =CORTEX=
   2.271 -   is mainly written in clojure, a dialect of =LISP= that runs on the
   2.272 -   java virtual machine (JVM). The API for creating and simulating
   2.273 -   creatures and senses is entirely expressed in clojure, though many
   2.274 -   senses are implemented at the layer of jMonkeyEngine or below. For
   2.275 -   example, for the sense of hearing I use a layer of clojure code on
   2.276 -   top of a layer of java JNI bindings that drive a layer of =C++=
   2.277 -   code which implements a modified version of =OpenAL= to support
   2.278 -   multiple listeners. =CORTEX= is the only simulation environment
   2.279 -   that I know of that can support multiple entities that can each
   2.280 -   hear the world from their own perspective. Other senses also
   2.281 -   require a small layer of Java code. =CORTEX= also uses =bullet=, a
   2.282 -   physics simulator written in =C=.
   2.283 +   =CORTEX= is built on top of =jMonkeyEngine3=
   2.284 +   (\cite{jmonkeyengine}), which is a video game engine designed to
   2.285 +   create cross-platform 3D desktop games. =CORTEX= is mainly written
   2.286 +   in clojure, a dialect of =LISP= that runs on the java virtual
   2.287 +   machine (JVM). The API for creating and simulating creatures and
   2.288 +   senses is entirely expressed in clojure, though many senses are
   2.289 +   implemented at the layer of jMonkeyEngine or below. For example,
   2.290 +   for the sense of hearing I use a layer of clojure code on top of a
   2.291 +   layer of java JNI bindings that drive a layer of =C++= code which
   2.292 +   implements a modified version of =OpenAL= to support multiple
   2.293 +   listeners. =CORTEX= is the only simulation environment that I know
   2.294 +   of that can support multiple entities that can each hear the world
   2.295 +   from their own perspective. Other senses also require a small layer
   2.296 +   of Java code. =CORTEX= also uses =bullet=, a physics simulator
   2.297 +   written in =C=.
   2.298  
   2.299     #+caption: Here is the worm from figure \ref{worm-intro} modeled 
   2.300     #+caption: in Blender, a free 3D-modeling program. Senses and 
   2.301 @@ -375,8 +387,8 @@
   2.302     - distributed communication among swarm creatures
   2.303     - self-learning using free exploration, 
   2.304     - evolutionary algorithms involving creature construction
   2.305 -   - exploration of exoitic senses and effectors that are not possible
   2.306 -     in the real world (such as telekenisis or a semantic sense)
   2.307 +   - exploration of exotic senses and effectors that are not possible
   2.308 +     in the real world (such as telekinesis or a semantic sense)
   2.309     - imagination using subworlds
   2.310  
   2.311     During one test with =CORTEX=, I created 3,000 creatures each with
   2.312 @@ -400,37 +412,6 @@
   2.313     \end{sidewaysfigure}
   2.314  #+END_LaTeX
   2.315  
   2.316 -** Contributions
   2.317 -
   2.318 -   - I built =CORTEX=, a comprehensive platform for embodied AI
   2.319 -     experiments. =CORTEX= supports many features lacking in other
   2.320 -     systems, such proper simulation of hearing. It is easy to create
   2.321 -     new =CORTEX= creatures using Blender, a free 3D modeling program.
   2.322 -
   2.323 -   - I built =EMPATH=, which uses =CORTEX= to identify the actions of
   2.324 -     a worm-like creature using a computational model of empathy.
   2.325 -
   2.326 -   - After one-shot supervised training, =EMPATH= was able recognize a
   2.327 -     wide variety of static poses and dynamic actions---ranging from
   2.328 -     curling in a circle to wriggling with a particular frequency ---
   2.329 -     with 95\% accuracy.
   2.330 -
   2.331 -   - These results were completely independent of viewing angle
   2.332 -     because the underlying body-centered language fundamentally is
   2.333 -     independent; once an action is learned, it can be recognized
   2.334 -     equally well from any viewing angle.
   2.335 -
   2.336 -   - =EMPATH= is surprisingly short; the sensorimotor-centered
   2.337 -     language provided by =CORTEX= resulted in extremely economical
   2.338 -     recognition routines --- about 500 lines in all --- suggesting
   2.339 -     that such representations are very powerful, and often
   2.340 -     indispensible for the types of recognition tasks considered here.
   2.341 -
   2.342 -   - Although for expediency's sake, I relied on direct knowledge of
   2.343 -     joint positions in this proof of concept, it would be
   2.344 -     straightforward to extend =EMPATH= so that it (more
   2.345 -     realistically) infers joint positions from its visual data.
   2.346 -
   2.347  * Designing =CORTEX=
   2.348  
   2.349    In this section, I outline the design decisions that went into
   2.350 @@ -441,18 +422,18 @@
   2.351  
   2.352    Throughout this project, I intended for =CORTEX= to be flexible and
   2.353    extensible enough to be useful for other researchers who want to
   2.354 -  test out ideas of their own. To this end, wherver I have had to make
   2.355 -  archetictural choices about =CORTEX=, I have chosen to give as much
   2.356 +  test out ideas of their own. To this end, wherever I have had to make
   2.357 +  architectural choices about =CORTEX=, I have chosen to give as much
   2.358    freedom to the user as possible, so that =CORTEX= may be used for
   2.359 -  things I have not forseen.
   2.360 +  things I have not foreseen.
   2.361  
   2.362  ** Building in simulation versus reality
   2.363 -   The most important archetictural decision of all is the choice to
   2.364 -   use a computer-simulated environemnt in the first place! The world
   2.365 +   The most important architectural decision of all is the choice to
   2.366 +   use a computer-simulated environment in the first place! The world
   2.367     is a vast and rich place, and for now simulations are a very poor
   2.368     reflection of its complexity. It may be that there is a significant
   2.369 -   qualatative difference between dealing with senses in the real
   2.370 -   world and dealing with pale facilimilies of them in a simulation
   2.371 +   qualitative difference between dealing with senses in the real
   2.372 +   world and dealing with pale facsimiles of them in a simulation
   2.373     \cite{brooks-representation}. What are the advantages and
   2.374     disadvantages of a simulation vs. reality?
   2.375     
   2.376 @@ -519,13 +500,13 @@
   2.377     The need for real time processing only increases if multiple senses
   2.378     are involved. In the extreme case, even simple algorithms will have
   2.379     to be accelerated by ASIC chips or FPGAs, turning what would
   2.380 -   otherwise be a few lines of code and a 10x speed penality into a
   2.381 +   otherwise be a few lines of code and a 10x speed penalty into a
   2.382     multi-month ordeal. For this reason, =CORTEX= supports
   2.383 -   /time-dialiation/, which scales back the framerate of the
   2.384 +   /time-dilation/, which scales back the framerate of the
   2.385     simulation in proportion to the amount of processing each frame.
   2.386     From the perspective of the creatures inside the simulation, time
   2.387     always appears to flow at a constant rate, regardless of how
   2.388 -   complicated the envorimnent becomes or how many creatures are in
   2.389 +   complicated the environment becomes or how many creatures are in
   2.390     the simulation. The cost is that =CORTEX= can sometimes run slower
   2.391     than real time. This can also be an advantage, however ---
   2.392     simulations of very simple creatures in =CORTEX= generally run at
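   The time-dilation mechanism itself can be pictured in a few lines.
   The sketch below is an assumed illustration, not =CORTEX='s actual
   implementation:

   #+begin_src clojure
;; Advance the simulation by the same simulated interval every frame,
;; however long sensing and acting took in real (wall-clock) time.
(def sim-timestep (/ 1.0 60.0))   ; hypothetical target: 60 simulated frames/sec

(defn dilated-frame!
  "tick-physics! and process-senses! are caller-supplied, side-effecting
   functions; the creature always perceives a constant flow of time."
  [tick-physics! process-senses!]
  (tick-physics! sim-timestep)
  (process-senses!))
   #+end_src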
   2.393 @@ -536,7 +517,7 @@
   2.394     If =CORTEX= is to support a wide variety of senses, it would help
   2.395     to have a better understanding of what a ``sense'' actually is!
   2.396     While vision, touch, and hearing all seem like they are quite
   2.397 -   different things, I was supprised to learn during the course of
   2.398 +   different things, I was surprised to learn during the course of
   2.399     this thesis that they (and all physical senses) can be expressed as
   2.400     exactly the same mathematical object due to a dimensional argument!
   2.401  
   2.402 @@ -561,13 +542,13 @@
   2.403     Most human senses consist of many discrete sensors of various
   2.404     properties distributed along a surface at various densities. For
   2.405     skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
   2.406 -   disks, and Ruffini's endings, which detect pressure and vibration
   2.407 -   of various intensities. For ears, it is the stereocilia distributed
   2.408 -   along the basilar membrane inside the cochlea; each one is
   2.409 -   sensitive to a slightly different frequency of sound. For eyes, it
   2.410 -   is rods and cones distributed along the surface of the retina. In
   2.411 -   each case, we can describe the sense with a surface and a
   2.412 -   distribution of sensors along that surface.
    2.413 +   disks, and Ruffini's endings (\cite{9.01-textbook}), which detect
   2.414 +   pressure and vibration of various intensities. For ears, it is the
   2.415 +   stereocilia distributed along the basilar membrane inside the
   2.416 +   cochlea; each one is sensitive to a slightly different frequency of
   2.417 +   sound. For eyes, it is rods and cones distributed along the surface
   2.418 +   of the retina. In each case, we can describe the sense with a
   2.419 +   surface and a distribution of sensors along that surface.
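   As a toy illustration of this description (the data shapes here are
   assumed, not =CORTEX='s representation), a sense can be reduced to a
   fixed set of sensor coordinates plus a reading function that returns
   one activation per sensor:

   #+begin_src clojure
(def skin-patch
  ;; a flattened 4x4 patch of "skin": one [x y] coordinate per sensor
  (vec (for [x (range 4) y (range 4)] [x y])))

(defn feel
  "world maps [x y] to pressure; unsensed points read 0.0."
  [world patch]
  (mapv #(get world % 0.0) patch))

(feel {[1 2] 0.8, [3 3] 0.2} skin-patch)
;; => sixteen activations, always in the same sensor order
   #+end_src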
   2.420  
   2.421     The neat idea is that every human sense can be effectively
   2.422     described in terms of a surface containing embedded sensors. If the
   2.423 @@ -614,7 +595,7 @@
   2.424     I did not need to write my own physics simulation code or shader to
   2.425     build =CORTEX=. Doing so would lead to a system that is impossible
   2.426     for anyone but myself to use anyway. Instead, I use a video game
   2.427 -   engine as a base and modify it to accomodate the additional needs
   2.428 +   engine as a base and modify it to accommodate the additional needs
   2.429     of =CORTEX=. Video game engines are an ideal starting point to
   2.430     build =CORTEX=, because they are not far from being creature
   2.431     building systems themselves.
   2.432 @@ -684,7 +665,7 @@
   2.433     for other projects, it needs a way to construct complicated
   2.434     creatures. If possible, it would be nice to leverage work that has
   2.435     already been done by the community of 3D modelers, or at least
   2.436 -   enable people who are talented at moedling but not programming to
   2.437 +   enable people who are talented at modeling but not programming to
   2.438     design =CORTEX= creatures.
   2.439  
   2.440     Therefore, I use Blender, a free 3D modeling program, as the main
   2.441 @@ -704,7 +685,7 @@
   2.442       sensors if applicable.
   2.443     - Make each empty-node the child of the top-level node.
   2.444       
   2.445 -   #+caption: An example of annoting a creature model with empty
   2.446 +   #+caption: An example of annotating a creature model with empty
   2.447     #+caption: nodes to describe the layout of senses. There are 
   2.448     #+caption: multiple empty nodes which each describe the position
   2.449     #+caption: of muscles, ears, eyes, or joints.
   2.450 @@ -717,7 +698,7 @@
   2.451     Blender is a general purpose animation tool, which has been used in
   2.452     the past to create high quality movies such as Sintel
   2.453     \cite{blender}. Though Blender can model and render even complicated
   2.454 -   things like water, it is crucual to keep models that are meant to
   2.455 +   things like water, it is crucial to keep models that are meant to
   2.456     be simulated as creatures simple. =Bullet=, which =CORTEX= uses
    2.457    through jMonkeyEngine3, is a rigid-body physics system. This offers
   2.458     a compromise between the expressiveness of a game level and the
   2.459 @@ -725,9 +706,9 @@
   2.460     should be naturally expressed as rigid components held together by
   2.461     joint constraints.
   2.462  
   2.463 -   But humans are more like a squishy bag with wrapped around some
   2.464 -   hard bones which define the overall shape. When we move, our skin
   2.465 -   bends and stretches to accomodate the new positions of our bones. 
   2.466 +   But humans are more like a squishy bag wrapped around some hard
   2.467 +   bones which define the overall shape. When we move, our skin bends
   2.468 +   and stretches to accommodate the new positions of our bones.
   2.469  
   2.470     One way to make bodies composed of rigid pieces connected by joints
    2.471    /seem/ more human-like is to use an /armature/ (or /rigging/)
   2.472 @@ -735,17 +716,16 @@
   2.473     mesh deforms as a function of the position of each ``bone'' which
   2.474     is a standard rigid body. This technique is used extensively to
   2.475     model humans and create realistic animations. It is not a good
   2.476 -   technique for physical simulation, however because it creates a lie
   2.477 -   -- the skin is not a physical part of the simulation and does not
   2.478 -   interact with any objects in the world or itself. Objects will pass
   2.479 -   right though the skin until they come in contact with the
   2.480 -   underlying bone, which is a physical object. Whithout simulating
   2.481 -   the skin, the sense of touch has little meaning, and the creature's
   2.482 -   own vision will lie to it about the true extent of its body.
   2.483 -   Simulating the skin as a physical object requires some way to
   2.484 -   continuously update the physical model of the skin along with the
   2.485 -   movement of the bones, which is unacceptably slow compared to rigid
   2.486 -   body simulation. 
   2.487 +   technique for physical simulation because it is a lie -- the skin
   2.488 +   is not a physical part of the simulation and does not interact with
    2.489 +   any objects in the world or itself. Objects will pass right through
   2.490 +   the skin until they come in contact with the underlying bone, which
   2.491 +   is a physical object. Without simulating the skin, the sense of
   2.492 +   touch has little meaning, and the creature's own vision will lie to
   2.493 +   it about the true extent of its body. Simulating the skin as a
   2.494 +   physical object requires some way to continuously update the
   2.495 +   physical model of the skin along with the movement of the bones,
   2.496 +   which is unacceptably slow compared to rigid body simulation.
   2.497  
   2.498     Therefore, instead of using the human-like ``deformable bag of
   2.499     bones'' approach, I decided to base my body plans on multiple solid
   2.500 @@ -762,7 +742,7 @@
   2.501     together by invisible joint constraints. This is what I mean by
   2.502     ``eve-like''. The main reason that I use eve-style bodies is for
   2.503     efficiency, and so that there will be correspondence between the
   2.504 -   AI's semses and the physical presence of its body. Each individual
   2.505 +   AI's senses and the physical presence of its body. Each individual
   2.506     section is simulated by a separate rigid body that corresponds
   2.507     exactly with its visual representation and does not change.
   2.508     Sections are connected by invisible joints that are well supported
   2.509 @@ -870,7 +850,7 @@
   2.510      must be called /after/ =physical!= is called.
   2.511     
   2.512      #+caption: Program to find the targets of a joint node by 
   2.513 -    #+caption: exponentiallly growth of a search cube.
    2.514 +    #+caption: exponential growth of a search cube.
   2.515      #+name: joint-targets
   2.516      #+begin_listing clojure
   2.517      #+begin_src clojure
   2.518 @@ -905,7 +885,7 @@
   2.519      a dispatch on the metadata of each joint node.
   2.520  
   2.521      #+caption: Program to dispatch on blender metadata and create joints
   2.522 -    #+caption: sutiable for physical simulation.
   2.523 +    #+caption: suitable for physical simulation.
   2.524      #+name: joint-dispatch
   2.525      #+begin_listing clojure
   2.526      #+begin_src clojure
   2.527 @@ -985,8 +965,8 @@
   2.528      In general, whenever =CORTEX= exposes a sense (or in this case
   2.529      physicality), it provides a function of the type =sense!=, which
   2.530      takes in a collection of nodes and augments it to support that
   2.531 -    sense. The function returns any controlls necessary to use that
   2.532 -    sense. In this case =body!= cerates a physical body and returns no
   2.533 +    sense. The function returns any controls necessary to use that
   2.534 +    sense. In this case =body!= creates a physical body and returns no
   2.535      control functions.
   2.536  
   2.537      #+caption: Program to give joints to a creature.
   2.538 @@ -1022,7 +1002,7 @@
   2.539      creature.
   2.540     
   2.541      #+caption: With the ability to create physical creatures from blender,
   2.542 -    #+caption: =CORTEX= gets one step closer to becomming a full creature
   2.543 +    #+caption: =CORTEX= gets one step closer to becoming a full creature
   2.544      #+caption: simulation environment.
   2.545      #+name: name
   2.546      #+ATTR_LaTeX: :width 15cm
   2.547 @@ -1085,7 +1065,7 @@
   2.548      hold the data. It does not do any copying from the GPU to the CPU
   2.549      itself because it is a slow operation.
   2.550  
   2.551 -    #+caption: Function to make the rendered secne in jMonkeyEngine 
   2.552 +    #+caption: Function to make the rendered scene in jMonkeyEngine 
   2.553      #+caption: available for further processing.
   2.554      #+name: pipeline-1 
   2.555      #+begin_listing clojure
   2.556 @@ -1160,7 +1140,7 @@
   2.557    (let [target (closest-node creature eye)
   2.558          [cam-width cam-height] 
   2.559          ;;[640 480] ;; graphics card on laptop doesn't support
   2.560 -                    ;; arbitray dimensions.
   2.561 +                    ;; arbitrary dimensions.
   2.562          (eye-dimensions eye)
   2.563          cam (Camera. cam-width cam-height)
   2.564          rot (.getWorldRotation eye)]
   2.565 @@ -1345,7 +1325,7 @@
   2.566  
   2.567     =CORTEX='s hearing is unique because it does not have any
   2.568     limitations compared to other simulation environments. As far as I
   2.569 -   know, there is no other system that supports multiple listerers,
   2.570 +   know, there is no other system that supports multiple listeners,
   2.571     and the sound demo at the end of this section is the first time
   2.572     it's been done in a video game environment.
   2.573  
   2.574 @@ -1384,7 +1364,7 @@
   2.575      Extending =OpenAL= to support multiple listeners requires 500
   2.576      lines of =C= code and is too hairy to mention here. Instead, I
   2.577      will show a small amount of extension code and go over the high
   2.578 -    level stragety. Full source is of course available with the
   2.579 +    level strategy. Full source is of course available with the
   2.580      =CORTEX= distribution if you're interested.
   2.581  
   2.582      =OpenAL= goes to great lengths to support many different systems,
   2.583 @@ -1406,7 +1386,7 @@
   2.584      sound it receives to a file, if everything has been set up
   2.585      correctly when configuring =OpenAL=.
   2.586  
   2.587 -    Actual mixing (doppler shift and distance.environment-based
    2.588 +    Actual mixing (Doppler shift and distance- and environment-based
   2.589      attenuation) of the sound data happens in the Devices, and they
   2.590      are the only point in the sound rendering process where this data
   2.591      is available.
   2.592 @@ -1623,10 +1603,10 @@
   2.593      #+END_SRC
   2.594      #+end_listing
   2.595  
   2.596 -    #+caption: First ever simulation of multiple listerners in =CORTEX=.
   2.597 +    #+caption: First ever simulation of multiple listeners in =CORTEX=.
   2.598      #+caption: Each cube is a creature which processes sound data with
   2.599      #+caption: the =process= function from listing \ref{sound-test}. 
   2.600 -    #+caption: the ball is constantally emiting a pure tone of
    2.601 +    #+caption: The ball is constantly emitting a pure tone of
   2.602      #+caption: constant volume. As it approaches the cubes, they each
   2.603      #+caption: change color in response to the sound.
   2.604      #+name: sound-cubes.
   2.605 @@ -1756,7 +1736,7 @@
   2.606      fit the height and width of the UV image).
   2.607  
   2.608      #+caption: Programs to extract triangles from a geometry and get 
   2.609 -    #+caption: their verticies in both world and UV-coordinates.
   2.610 +    #+caption: their vertices in both world and UV-coordinates.
   2.611      #+name: get-triangles
   2.612      #+begin_listing clojure
   2.613      #+BEGIN_SRC clojure
   2.614 @@ -1851,7 +1831,7 @@
   2.615      jMonkeyEngine's =Matrix4f= objects, which can describe any affine
   2.616      transformation.
   2.617  
   2.618 -    #+caption: Program to interpert triangles as affine transforms.
   2.619 +    #+caption: Program to interpret triangles as affine transforms.
   2.620      #+name: triangle-affine
   2.621      #+begin_listing clojure
   2.622      #+BEGIN_SRC clojure
   2.623 @@ -1894,7 +1874,7 @@
   2.624  =inside-triangle?= determines whether a point is inside a triangle
   2.625  in 2D pixel-space.
   2.626  
   2.627 -    #+caption: Program to efficiently determine point includion 
   2.628 +    #+caption: Program to efficiently determine point inclusion 
   2.629      #+caption: in a triangle.
   2.630      #+name: in-triangle
   2.631      #+begin_listing clojure
   2.632 @@ -2089,7 +2069,7 @@
   2.633  
   2.634      Armed with the =touch!= function, =CORTEX= becomes capable of
   2.635      giving creatures a sense of touch. A simple test is to create a
   2.636 -    cube that is outfitted with a uniform distrubition of touch
   2.637 +    cube that is outfitted with a uniform distribution of touch
   2.638      sensors. It can feel the ground and any balls that it touches.
   2.639  
   2.640      #+caption: =CORTEX= interface for creating touch in a simulated
   2.641 @@ -2111,7 +2091,7 @@
   2.642      #+end_listing
   2.643      
   2.644      The tactile-sensor-profile image for the touch cube is a simple
   2.645 -    cross with a unifom distribution of touch sensors:
   2.646 +    cross with a uniform distribution of touch sensors:
   2.647  
   2.648      #+caption: The touch profile for the touch-cube. Each pure white 
   2.649      #+caption: pixel defines a touch sensitive feeler.
   2.650 @@ -2119,7 +2099,7 @@
   2.651      #+ATTR_LaTeX: :width 7cm
   2.652      [[./images/touch-profile.png]]
   2.653  
   2.654 -    #+caption: The touch cube reacts to canonballs. The black, red, 
   2.655 +    #+caption: The touch cube reacts to cannonballs. The black, red, 
   2.656      #+caption: and white cross on the right is a visual display of 
   2.657      #+caption: the creature's touch. White means that it is feeling 
   2.658      #+caption: something strongly, black is not feeling anything,
   2.659 @@ -2171,7 +2151,7 @@
   2.660      like a normal dot-product angle is.
   2.661  
   2.662      The purpose of these functions is to build a system of angle
   2.663 -    measurement that is biologically plausable.
   2.664 +    measurement that is biologically plausible.
   2.665  
   2.666      #+caption: Program to measure angles along a vector
   2.667      #+name: helpers
   2.668 @@ -2201,7 +2181,7 @@
   2.669      connects. The only tricky part here is making the angles relative
   2.670      to the joint's initial ``straightness''.
   2.671  
   2.672 -    #+caption: Program to return biologially reasonable proprioceptive
   2.673 +    #+caption: Program to return biologically reasonable proprioceptive
   2.674      #+caption: data for each joint.
   2.675      #+name: proprioception
   2.676      #+begin_listing clojure
   2.677 @@ -2359,7 +2339,7 @@
   2.678  
   2.679  *** Creating muscles
   2.680  
   2.681 -    #+caption: This is the core movement functoion in =CORTEX=, which
   2.682 +    #+caption: This is the core movement function in =CORTEX=, which
   2.683      #+caption: implements muscles that report on their activation.
   2.684      #+name: muscle-kernel
   2.685      #+begin_listing clojure
   2.686 @@ -2417,7 +2397,7 @@
   2.687     intricate marionette hand with several strings for each finger:
   2.688  
   2.689     #+caption: View of the hand model with all sense nodes. You can see 
   2.690 -   #+caption: the joint, muscle, ear, and eye nodess here.
   2.691 +   #+caption: the joint, muscle, ear, and eye nodes here.
   2.692     #+name: hand-nodes-1
   2.693     #+ATTR_LaTeX: :width 11cm
   2.694     [[./images/hand-with-all-senses2.png]]
   2.695 @@ -2430,7 +2410,7 @@
    2.696    With the hand fully rigged with senses, I can run it through a test
   2.697     that will test everything. 
   2.698  
   2.699 -   #+caption: A full test of the hand with all senses. Note expecially 
   2.700 +   #+caption: A full test of the hand with all senses. Note especially 
   2.701     #+caption: the interactions the hand has with itself: it feels 
   2.702     #+caption: its own palm and fingers, and when it curls its fingers, 
   2.703     #+caption: it sees them with its eye (which is located in the center
   2.704 @@ -2440,7 +2420,7 @@
   2.705     #+ATTR_LaTeX: :width 16cm
   2.706     [[./images/integration.png]]
   2.707  
   2.708 -** =CORTEX= enables many possiblities for further research
   2.709 +** =CORTEX= enables many possibilities for further research
   2.710  
   2.711     Often times, the hardest part of building a system involving
   2.712     creatures is dealing with physics and graphics. =CORTEX= removes
   2.713 @@ -2561,14 +2541,14 @@
   2.714    #+end_src
   2.715    #+end_listing
   2.716  
   2.717 -** Embodiment factors action recognition into managable parts
   2.718 +** Embodiment factors action recognition into manageable parts
   2.719  
   2.720     Using empathy, I divide the problem of action recognition into a
    2.721    recognition process expressed in the language of a full complement
   2.722 -   of senses, and an imaganitive process that generates full sensory
   2.723 +   of senses, and an imaginative process that generates full sensory
   2.724     data from partial sensory data. Splitting the action recognition
   2.725     problem in this manner greatly reduces the total amount of work to
   2.726 -   recognize actions: The imaganitive process is mostly just matching
   2.727 +   recognize actions: The imaginative process is mostly just matching
   2.728     previous experience, and the recognition process gets to use all
   2.729     the senses to directly describe any action.
   2.730  
   2.731 @@ -2586,8 +2566,8 @@
   2.732     experience, observe however much of it they desire, and decide
   2.733     whether the worm is doing the action they describe. =curled?=
   2.734     relies on proprioception, =resting?= relies on touch, =wiggling?=
   2.735 -   relies on a fourier analysis of muscle contraction, and
   2.736 -   =grand-circle?= relies on touch and reuses =curled?= as a gaurd.
   2.737 +   relies on a Fourier analysis of muscle contraction, and
   2.738 +   =grand-circle?= relies on touch and reuses =curled?= as a guard.
   2.739     
   2.740     #+caption: Program for detecting whether the worm is curled. This is the 
   2.741     #+caption: simplest action predicate, because it only uses the last frame 
   2.742 @@ -2634,7 +2614,7 @@
   2.743     #+caption: uses a summary of the tactile information from the underbelly 
   2.744     #+caption: of the worm, and is only true if every segment is touching the 
   2.745     #+caption: floor. Note that this function contains no references to 
   2.746 -   #+caption: proprioction at all.
   2.747 +   #+caption: proprioception at all.
   2.748     #+name: resting
   2.749  #+begin_listing clojure
   2.750     #+begin_src clojure
   2.751 @@ -2675,9 +2655,9 @@
   2.752  
   2.753  
   2.754     #+caption: Program for detecting whether the worm has been wiggling for 
   2.755 -   #+caption: the last few frames. It uses a fourier analysis of the muscle 
   2.756 +   #+caption: the last few frames. It uses a Fourier analysis of the muscle 
   2.757     #+caption: contractions of the worm's tail to determine wiggling. This is 
   2.758 -   #+caption: signigicant because there is no particular frame that clearly 
   2.759 +   #+caption: significant because there is no particular frame that clearly 
   2.760     #+caption: indicates that the worm is wiggling --- only when multiple frames 
   2.761     #+caption: are analyzed together is the wiggling revealed. Defining 
   2.762     #+caption: wiggling this way also gives the worm an opportunity to learn 
   2.763 @@ -2738,7 +2718,7 @@
   2.764     #+end_listing
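   To make the Fourier idea tangible, here is a self-contained toy
   sketch. It is not the thesis's =wiggling?= definition, just an
   illustration of detecting a dominant frequency in a window of muscle
   activations:

   #+begin_src clojure
(defn dft-magnitude
  "Magnitude of the k-th discrete Fourier component of xs."
  [xs k]
  (let [n  (count xs)
        re (reduce + (map-indexed
                      (fn [t x] (* x (Math/cos (/ (* 2 Math/PI k t) n)))) xs))
        im (reduce + (map-indexed
                      (fn [t x] (* x (Math/sin (/ (* 2 Math/PI k t) n)))) xs))]
    (Math/hypot re im)))

(defn toy-wiggling?
  "True when one non-zero frequency dominates the window --- no single
   frame of activations can tell you this."
  [activations]
  (let [mags (map #(dft-magnitude activations %)
                  (range 1 (quot (count activations) 2)))]
    (and (> (apply max mags) 1.0)
         (> (apply max mags) (* 0.5 (reduce + mags))))))

;; a sinusoidal contraction pattern over 40 frames reads as wiggling:
(toy-wiggling? (map #(Math/sin (* 2 Math/PI (/ % 10.0))) (range 40)))  ;=> true
   #+end_src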
   2.765  
   2.766     #+caption: Using =debug-experience=, the body-centered predicates
   2.767 -   #+caption: work together to classify the behaviour of the worm. 
   2.768 +   #+caption: work together to classify the behavior of the worm. 
    2.769    #+caption: The predicates are operating with access to the worm's
   2.770     #+caption: full sensory data.
   2.771     #+name: basic-worm-view
   2.772 @@ -2749,10 +2729,10 @@
   2.773     empathic recognition system. There is power in the simplicity of
   2.774     the action predicates. They describe their actions without getting
   2.775     confused in visual details of the worm. Each one is frame
   2.776 -   independent, but more than that, they are each indepent of
   2.777 +   independent, but more than that, they are each independent of
   2.778     irrelevant visual details of the worm and the environment. They
   2.779     will work regardless of whether the worm is a different color or
   2.780 -   hevaily textured, or if the environment has strange lighting.
   2.781 +   heavily textured, or if the environment has strange lighting.
   2.782  
   2.783     The trick now is to make the action predicates work even when the
   2.784     sensory data on which they depend is absent. If I can do that, then
   2.785 @@ -2776,7 +2756,7 @@
   2.786  
   2.787     As the worm moves around during free play and its experience vector
   2.788     grows larger, the vector begins to define a subspace which is all
   2.789 -   the sensations the worm can practicaly experience during normal
   2.790 +   the sensations the worm can practically experience during normal
   2.791     operation. I call this subspace \Phi-space, short for
   2.792     physical-space. The experience vector defines a path through
   2.793     \Phi-space. This path has interesting properties that all derive
   2.794 @@ -2801,7 +2781,7 @@
   2.795     body along a specific path through \Phi-space.
   2.796  
   2.797     There is a simple way of taking \Phi-space and the total ordering
   2.798 -   provided by an experience vector and reliably infering the rest of
   2.799 +   provided by an experience vector and reliably inferring the rest of
   2.800     the senses.
   2.801  
    2.802  ** Empathy is the process of tracing through \Phi-space 
   2.803 @@ -2817,8 +2797,8 @@
   2.804     matching experience records for each input, using the tiered
   2.805     proprioceptive bins. 
   2.806  
   2.807 -   Finally, to infer sensory data, select the longest consective chain
   2.808 -   of experiences. Conecutive experience means that the experiences
   2.809 +   Finally, to infer sensory data, select the longest consecutive chain
   2.810 +   of experiences. Consecutive experience means that the experiences
   2.811     appear next to each other in the experience vector.
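   The multi-resolution binning can be pictured with a small
   self-contained sketch (the real =bin= used by =gen-phi-scan= below
   may differ):

   #+begin_src clojure
;; Round each joint angle to a given number of decimal digits, so that
;; nearby postures share a bin key at coarse resolutions.
(defn toy-bin [digits]
  (fn [angles]
    (mapv #(Math/round (* (double %) (Math/pow 10 digits))) angles)))

;; two nearby postures collide at coarse resolution, but not at fine:
(= ((toy-bin 1) [0.52 1.04]) ((toy-bin 1) [0.53 1.01]))  ;=> true
(= ((toy-bin 2) [0.52 1.04]) ((toy-bin 2) [0.53 1.01]))  ;=> false
   #+end_src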
   2.812  
   2.813     This algorithm has three advantages: 
   2.814 @@ -2833,8 +2813,8 @@
   2.815  
   2.816     2. It protects from wrong interpretations of transient ambiguous
   2.817        proprioceptive data. For example, if the worm is flat for just
   2.818 -      an instant, this flattness will not be interpreted as implying
   2.819 -      that the worm has its muscles relaxed, since the flattness is
   2.820 +      an instant, this flatness will not be interpreted as implying
   2.821 +      that the worm has its muscles relaxed, since the flatness is
   2.822        part of a longer chain which includes a distinct pattern of
   2.823        muscle activation. Markov chains or other memoryless statistical
   2.824        models that operate on individual frames may very well make this
   2.825 @@ -2855,7 +2835,7 @@
   2.826  
   2.827  (defn gen-phi-scan 
   2.828    "Nearest-neighbors with binning. Only returns a result if
   2.829 -   the propriceptive data is within 10% of a previously recorded
   2.830 +   the proprioceptive data is within 10% of a previously recorded
   2.831     result in all dimensions."
   2.832    [phi-space]
   2.833    (let [bin-keys (map bin [3 2 1])
   2.834 @@ -2882,13 +2862,13 @@
   2.835     from previous experience. It prefers longer chains of previous
   2.836     experience to shorter ones. For example, during training the worm
   2.837     might rest on the ground for one second before it performs its
   2.838 -   excercises. If during recognition the worm rests on the ground for
   2.839 -   five seconds, =longest-thread= will accomodate this five second
   2.840 +   exercises. If during recognition the worm rests on the ground for
   2.841 +   five seconds, =longest-thread= will accommodate this five second
   2.842     rest period by looping the one second rest chain five times.
   2.843  
   2.844 -   =longest-thread= takes time proportinal to the average number of
   2.845 +   =longest-thread= takes time proportional to the average number of
   2.846     entries in a proprioceptive bin, because for each element in the
   2.847 -   starting bin it performes a series of set lookups in the preceeding
   2.848 +   starting bin it performs a series of set lookups in the preceding
   2.849     bins. If the total history is limited, then this is only a constant
   2.850     multiple times the number of entries in the starting bin. This
   2.851     analysis also applies even if the action requires multiple longest
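   The ``longest consecutive chain'' selection itself can be sketched in
   a self-contained toy form (illustrative only; the real
   =longest-thread= is defined elsewhere in the thesis source):

   #+begin_src clojure
(defn longest-chain-length
  "candidate-sets holds, for each observed frame, the set of experience
   indices whose proprioception matched.  Starting from each candidate
   in the first frame, extend a chain while every following frame
   contains the next consecutive index."
  [candidate-sets]
  (apply max 0
         (for [start (first candidate-sets)]
           (count (take-while true?
                              (map contains? candidate-sets
                                   (iterate inc start)))))))

;; frames 0-3 matched experiences {5,9} {6} {2,7} {8}; the run 5,6,7,8
;; is the longest consecutive chain, four frames long:
(longest-chain-length [#{5 9} #{6} #{2 7} #{8}])  ;=> 4
   #+end_src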
   2.852 @@ -2966,7 +2946,7 @@
   2.853     experiences from the worm that includes the actions I want to
   2.854     recognize. The =generate-phi-space= program (listing
    2.855    \ref{generate-phi-space}) runs the worm through a series of
   2.856 -   exercices and gatheres those experiences into a vector. The
    2.857 +   exercises and gathers those experiences into a vector. The
   2.858     =do-all-the-things= program is a routine expressed in a simple
   2.859     muscle contraction script language for automated worm control. It
   2.860     causes the worm to rest, curl, and wiggle over about 700 frames
   2.861 @@ -2975,7 +2955,7 @@
   2.862     #+caption: Program to gather the worm's experiences into a vector for 
   2.863     #+caption: further processing. The =motor-control-program= line uses
   2.864     #+caption: a motor control script that causes the worm to execute a series
   2.865 -   #+caption: of ``exercices'' that include all the action predicates.
   2.866 +   #+caption: of ``exercises'' that include all the action predicates.
   2.867     #+name: generate-phi-space
   2.868  #+begin_listing clojure 
   2.869     #+begin_src clojure
   2.870 @@ -3039,14 +3019,14 @@
   2.871  
   2.872    #+caption: From only proprioceptive data, =EMPATH= was able to infer 
   2.873    #+caption: the complete sensory experience and classify four poses
   2.874 -  #+caption: (The last panel shows a composite image of \emph{wriggling}, 
   2.875 +  #+caption: (The last panel shows a composite image of /wiggling/, 
   2.876    #+caption: a dynamic pose.)
   2.877    #+name: empathy-debug-image
   2.878    #+ATTR_LaTeX: :width 10cm :placement [H]
   2.879    [[./images/empathy-1.png]]
   2.880  
   2.881    One way to measure the performance of =EMPATH= is to compare the
   2.882 -  sutiability of the imagined sense experience to trigger the same
   2.883 +  suitability of the imagined sense experience to trigger the same
   2.884    action predicates as the real sensory experience. 
   2.885    
   2.886     #+caption: Determine how closely empathy approximates actual 
   2.887 @@ -3086,7 +3066,7 @@
   2.888  
   2.889    Running =test-empathy-accuracy= using the very short exercise
   2.890    program defined in listing \ref{generate-phi-space}, and then doing
   2.891 -  a similar pattern of activity manually yeilds an accuracy of around
   2.892 +  a similar pattern of activity manually yields an accuracy of around
   2.893    73%. This is based on very limited worm experience. By training the
   2.894    worm for longer, the accuracy dramatically improves.
   2.895  
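  The comparison behind =test-empathy-accuracy= is easy to picture.
  Here is a toy sketch of the kind of agreement score involved (not the
  actual listing):

  #+begin_src clojure
;; fraction of frames on which a predicate agrees when run on the real
;; experience versus the imagined (empathized) experience.
(defn agreement
  [pred? real-frames imagined-frames]
  (/ (count (filter true? (map #(= (pred? %1) (pred? %2))
                               real-frames imagined-frames)))
     (count real-frames)))

;; e.g. a touch-based predicate agreeing on 3 of 4 frames scores 3/4:
(agreement :touching-floor?
           [{:touching-floor? true}  {:touching-floor? true}
            {:touching-floor? false} {:touching-floor? true}]
           [{:touching-floor? true}  {:touching-floor? false}
            {:touching-floor? false} {:touching-floor? true}])
;; => 3/4
  #+end_src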
   2.896 @@ -3113,21 +3093,21 @@
   2.897    =test-empathy-accuracy=. The majority of errors are near the
   2.898    boundaries of transitioning from one type of action to another.
   2.899    During these transitions the exact label for the action is more open
   2.900 -  to interpretation, and dissaggrement between empathy and experience
   2.901 +  to interpretation, and disagreement between empathy and experience
   2.902    is more excusable.
   2.903  
   2.904  ** Digression: Learn touch sensor layout through free play
   2.905  
   2.906     In the previous section I showed how to compute actions in terms of
   2.907 -   body-centered predicates which relied averate touch activation of
   2.908 -   pre-defined regions of the worm's skin. What if, instead of
   2.909 -   recieving touch pre-grouped into the six faces of each worm
   2.910 -   segment, the true topology of the worm's skin was unknown? This is
   2.911 -   more similiar to how a nerve fiber bundle might be arranged. While
   2.912 -   two fibers that are close in a nerve bundle /might/ correspond to
   2.913 -   two touch sensors that are close together on the skin, the process
   2.914 -   of taking a complicated surface and forcing it into essentially a
   2.915 -   circle requires some cuts and rerragenments.
   2.916 +   body-centered predicates which relied on the average touch
   2.917 +   activation of pre-defined regions of the worm's skin. What if,
   2.918 +   instead of receiving touch pre-grouped into the six faces of each
   2.919 +   worm segment, the true topology of the worm's skin was unknown?
   2.920 +   This is more similar to how a nerve fiber bundle might be
   2.921 +   arranged. While two fibers that are close in a nerve bundle /might/
   2.922 +   correspond to two touch sensors that are close together on the
   2.923 +   skin, the process of taking a complicated surface and forcing it
   2.924 +   into essentially a circle requires some cuts and rearrangements.
   2.925     
   2.926     In this section I show how to automatically learn the skin-topology of
   2.927     a worm segment by free exploration. As the worm rolls around on the
   2.928 @@ -3151,15 +3131,15 @@
   2.929     #+end_listing
   2.930  
   2.931     After collecting these important regions, there will be many nearly
   2.932 -   similiar touch regions. While for some purposes the subtle
   2.933 +   similar touch regions. While for some purposes the subtle
   2.934     differences between these regions will be important, for my
   2.935 -   purposes I colapse them into mostly non-overlapping sets using
   2.936 -   =remove-similiar= in listing \ref{remove-similiar}
   2.937 -
   2.938 -   #+caption: Program to take a lits of set of points and ``collapse them''
   2.939 -   #+caption: so that the remaining sets in the list are siginificantly 
   2.940 +   purposes I collapse them into mostly non-overlapping sets using
   2.941 +   =remove-similar= in listing \ref{remove-similar}.
   2.942 +
   2.943 +   #+caption: Program to take a list of sets of points and ``collapse them''
   2.944 +   #+caption: so that the remaining sets in the list are significantly 
   2.945     #+caption: different from each other. Prefer smaller sets to larger ones.
   2.946 -   #+name: remove-similiar
   2.947 +   #+name: remove-similar
   2.948     #+begin_listing clojure
   2.949     #+begin_src clojure
   2.950  (defn remove-similar
   2.951 @@ -3181,7 +3161,7 @@
   2.952     Actually running this simulation is easy given =CORTEX='s facilities.
   2.953  
   2.954     #+caption: Collect experiences while the worm moves around. Filter the touch 
   2.955 -   #+caption: sensations by stable ones, collapse similiar ones together, 
   2.956 +   #+caption: sensations by stable ones, collapse similar ones together, 
   2.957     #+caption: and report the regions learned.
   2.958     #+name: learn-touch
   2.959     #+begin_listing clojure
   2.960 @@ -3216,7 +3196,7 @@
   2.961     #+end_src
   2.962     #+end_listing
   2.963  
   2.964 -   The only thing remining to define is the particular motion the worm
   2.965 +   The only thing remaining to define is the particular motion the worm
   2.966     must take. I accomplish this with a simple motor control program.
   2.967  
   2.968     #+caption: Motor control program for making the worm roll on the ground.
   2.969 @@ -3275,7 +3255,7 @@
   2.970     the worm's physiology and the worm's environment to correctly
   2.971     deduce that the worm has six sides. Note that =learn-touch-regions=
   2.972     would work just as well even if the worm's touch sense data were
   2.973 -   completely scrambled. The cross shape is just for convienence. This
   2.974 +   completely scrambled. The cross shape is just for convenience. This
   2.975     example justifies the use of pre-defined touch regions in =EMPATH=.
   2.976  
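   The reason scrambling is harmless is that the learning procedure
   only attends to which sensors are active together over time, never
   to the sensors' indices. A minimal sketch of that invariance
   follows; =scramble= and the six-sensor frames are invented for this
   illustration and are not part of =CORTEX=.

   #+begin_src clojure
;; Hypothetical sketch (not CORTEX code): a fixed permutation of the
;; touch-sensor indices relabels every frame in the same way, but
;; leaves each co-activation pattern intact, which is all the region
;; learning depends on.
(defn scramble
  "Re-order one frame of touch readings by a fixed permutation."
  [permutation touch-frame]
  (mapv touch-frame permutation))

(let [perm   (vec (shuffle (range 6)))
      frames [[1 1 0 0 0 0]
              [0 0 1 1 0 0]]]
  (mapv #(scramble perm %) frames))
   #+end_src
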
   2.977  * Contributions
   2.978 @@ -3283,19 +3263,18 @@
   2.979    In this thesis you have seen the =CORTEX= system, a complete
   2.980    environment for creating simulated creatures. You have seen how to
   2.981    implement five senses: touch, proprioception, hearing, vision, and
   2.982 -  muscle tension. You have seen how to create new creatues using
   2.983 +  muscle tension. You have seen how to create new creatures using
   2.984    Blender, a 3D modeling tool. I hope that =CORTEX= will be useful in
   2.985    further research projects. To this end I have included the full
   2.986    source to =CORTEX= along with a large suite of tests and examples. I
   2.987 -  have also created a user guide for =CORTEX= which is inculded in an
   2.988 -  appendix to this thesis \ref{}.
   2.989 -# dxh: todo reference appendix
   2.990 +  have also created a user guide for =CORTEX= which is included in an
   2.991 +  appendix to this thesis.
   2.992  
   2.993    You have also seen how I used =CORTEX= as a platform to attack the
   2.994    /action recognition/ problem, which is the problem of recognizing
   2.995    actions in video. You saw a simple system called =EMPATH= which
   2.996 -  ientifies actions by first describing actions in a body-centerd,
   2.997 -  rich sense language, then infering a full range of sensory
   2.998 +  identifies actions by first describing actions in a body-centered,
   2.999 +  rich sense language, then inferring a full range of sensory
  2.1000    experience from limited data using previous experience gained from
  2.1001    free play.
  2.1002  
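  In outline, the recognition step looks something like the sketch
  below. The four action predicates are the body-centered ones defined
  earlier in the thesis; =infer-experience= is only an illustrative
  stand-in for the experience-completion step built from
  =generate-phi-space='s output, not its real name.

  #+begin_src clojure
;; Hedged sketch of the EMPATH pipeline summarized above.  The
;; declared symbols are defined elsewhere in the thesis or are
;; stand-ins named only for this illustration.
(declare infer-experience grand-circle? curled? wiggling? resting?)

(defn classify-action [phi-space proprioception-stream]
  (let [imagined (infer-experience phi-space proprioception-stream)]
    (cond (grand-circle? imagined) :grand-circle
          (curled?       imagined) :curled
          (wiggling?     imagined) :wiggling
          (resting?      imagined) :resting
          :else                    :unknown)))
  #+end_src
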
  2.1003 @@ -3305,23 +3284,22 @@
  2.1004  
  2.1005    In conclusion, the main contributions of this thesis are:
  2.1006  
  2.1007 -  - =CORTEX=, a system for creating simulated creatures with rich
  2.1008 -    senses.
  2.1009 -  - =EMPATH=, a program for recognizing actions by imagining sensory
  2.1010 -    experience. 
  2.1011 -
  2.1012 -# An anatomical joke:
  2.1013 -# - Training
  2.1014 -# - Skeletal imitation
  2.1015 -# - Sensory fleshing-out
  2.1016 -# - Classification
  2.1017 +   - =CORTEX=, a comprehensive platform for embodied AI experiments.
  2.1018 +     =CORTEX= supports many features lacking in other systems, such as
  2.1019 +     proper simulation of hearing. It is easy to create new =CORTEX=
  2.1020 +     creatures using Blender, a free 3D modeling program.
  2.1021 +
  2.1022 +   - =EMPATH=, which uses =CORTEX= to identify the actions of a
  2.1023 +     worm-like creature using a computational model of empathy.
  2.1024 +
  2.1025  #+BEGIN_LaTeX
  2.1026  \appendix
  2.1027  #+END_LaTeX
  2.1028 +
  2.1029  * Appendix: =CORTEX= User Guide
  2.1030  
  2.1031    Those who write a thesis should endeavor to make their code not only
  2.1032 -  accessable, but actually useable, as a way to pay back the community
  2.1033 +  accessible, but actually usable, as a way to pay back the community
  2.1034    that made the thesis possible in the first place. This thesis would
  2.1035    not be possible without Free Software such as jMonkeyEngine3,
  2.1036    Blender, clojure, emacs, ffmpeg, and many other tools. That is why I
  2.1037 @@ -3349,7 +3327,7 @@
  2.1038  
  2.1039     Creatures are created using /Blender/, a free 3D modeling program.
  2.1040     You will need Blender version 2.6 when using the =CORTEX= included
  2.1041 -   in this thesis. You create a =CORTEX= creature in a similiar manner
  2.1042 +   in this thesis. You create a =CORTEX= creature in a similar manner
  2.1043     to modeling anything in Blender, except that you also create
  2.1044     several trees of empty nodes which define the creature's senses.
  2.1045  
  2.1046 @@ -3417,7 +3395,7 @@
  2.1047      to set the empty node's display mode to ``Arrows'' so that you can
  2.1048      clearly see the direction of the axes.
  2.1049  
  2.1050 -    Each retina file should contain white pixels whever you want to be
  2.1051 +    Each retina file should contain white pixels wherever you want to be
  2.1052      sensitive to your chosen color. If you want the entire field of
  2.1053      view, specify :all of 0xFFFFFF and a retinal map that is entirely
  2.1054      white. 
  2.1055 @@ -3453,7 +3431,7 @@
  2.1056      #+END_EXAMPLE
  2.1057  
  2.1058      You may also include an optional ``scale'' metadata number to
  2.1059 -    specifiy the length of the touch feelers. The default is $0.1$,
  2.1060 +    specify the length of the touch feelers. The default is $0.1$,
  2.1061      and this is generally sufficient.
  2.1062  
  2.1063      The touch UV should contain white pixels for each touch sensor.
  2.1064 @@ -3475,7 +3453,7 @@
  2.1065      #+ATTR_LaTeX: :width 9cm :placement [H]
  2.1066      [[./images/finger-2.png]]
  2.1067  
  2.1068 -*** Propriocepotion
  2.1069 +*** Proprioception
  2.1070  
  2.1071      Proprioception is tied to each joint node -- nothing special must
  2.1072      be done in a blender model to enable proprioception other than
  2.1073 @@ -3582,10 +3560,10 @@
  2.1074          representing that described in a blender file.
  2.1075  
  2.1076     - =(light-up-everything world)= :: distribute a standard complement
  2.1077 -        of lights throught the simulation. Should be adequate for most
  2.1078 +        of lights throughout the simulation. Should be adequate for most
  2.1079          purposes.
  2.1080  
  2.1081 -   - =(node-seq node)= :: return a recursuve list of the node's
  2.1082 +   - =(node-seq node)= :: return a recursive list of the node's
  2.1083          children.
  2.1084  
  2.1085     - =(nodify name children)= :: construct a node given a node-name and
  2.1086 @@ -3638,7 +3616,7 @@
  2.1087     - =(proprioception! creature)= :: give the creature the sense of
  2.1088          proprioception. Returns a list of functions, one for each
  2.1089          joint, that when called during a running simulation will
  2.1090 -        report the =[headnig, pitch, roll]= of the joint.
  2.1091 +        report the =[heading, pitch, roll]= of the joint.
  2.1092  
  2.1093     - =(movement! creature)= :: give the creature the power of movement.
  2.1094          Creates a list of functions, one for each muscle, that when
  2.1095 @@ -3677,7 +3655,7 @@
  2.1096          function will import all jMonkeyEngine3 classes for immediate
  2.1097          use.
  2.1098  
  2.1099 -   - =(display-dialated-time world timer)= :: Shows the time as it is
  2.1100 +   - =(display-dilated-time world timer)= :: Shows the time as it is
  2.1101          flowing in the simulation on a HUD display.
  2.1102  
  2.1103  
     3.1 --- a/thesis/rlm-cortex-meng.tex	Sun Mar 30 22:48:19 2014 -0400
     3.2 +++ b/thesis/rlm-cortex-meng.tex	Mon Mar 31 00:18:26 2014 -0400
     3.3 @@ -53,7 +53,7 @@
     3.4  \usepackage{minted}    
     3.5  \usepackage[backend=bibtex,style=alphabetic]{biblatex}
     3.6  %\usepackage[section]{placeins}
     3.7 -\usepackage[section,subsection,subsubsection]{extraplaceins}
     3.8 +\usepackage[section,subsection]{extraplaceins}
     3.9  %\floatsetup[listing]{style=Plaintop}    
    3.10  
    3.11