# HG changeset patch
# User Robert McIntyre <rlm@mit.edu>
# Date 1398704348 14400
# Node ID b2c66ea58c39d1062c7dacdf04bfa21fe840a846
# Parent  431e6aedf67db7df6125189eb3fb5a27559ea4aa
changes from athena.

diff -r 431e6aedf67d -r b2c66ea58c39 thesis/cortex.org
--- a/thesis/cortex.org	Mon Apr 28 01:06:03 2014 -0400
+++ b/thesis/cortex.org	Mon Apr 28 12:59:08 2014 -0400
@@ -349,7 +349,7 @@
 
    - =CORTEX= implements a wide variety of senses: touch,
      proprioception, vision, hearing, and muscle tension. Complicated
-     senses like touch, and vision involve multiple sensory elements
+     senses like touch and vision involve multiple sensory elements
      embedded in a 2D surface. You have complete control over the
      distribution of these sensor elements through the use of simple
      png image files. In particular, =CORTEX= implements more
@@ -1132,6 +1132,7 @@
     #+caption: with =bind-sense=
     #+name: add-eye
     #+begin_listing clojure
+    #+begin_src clojure
 (defn add-eye!
   "Create a Camera centered on the current position of 'eye which
    follows the closest physical node in 'creature. The camera will
@@ -1157,6 +1158,7 @@
      (float 1)
      (float 1000))
     (bind-sense target cam) cam))
+    #+end_src
     #+end_listing
 
 *** Simulated Retina 
@@ -1191,8 +1193,8 @@
     #+ATTR_LaTeX: :width 7cm
     [[./images/retina-small.png]]
 
-    Together, the number 0xFF0000 and the image image above describe
-    the placement of red-sensitive sensory elements.
+    Together, the number 0xFF0000 and the image above describe the
+    placement of red-sensitive sensory elements.
 
     Meta-data to very crudely approximate a human eye might be
     something like this:
@@ -2179,7 +2181,7 @@
 *** Proprioception Kernel
     
     Given a joint, =proprioception-kernel= produces a function that
-    calculates the Euler angles between the the objects the joint
+    calculates the Euler angles between the objects the joint
     connects. The only tricky part here is making the angles relative
     to the joint's initial ``straightness''.
 
@@ -2559,7 +2561,7 @@
 ** Action recognition is easy with a full gamut of senses
 
    Embodied representations using multiple senses such as touch,
-   proprioception, and muscle tension turns out be be exceedingly
+   proprioception, and muscle tension turns out to be exceedingly
    efficient at describing body-centered actions. It is the right
    language for the job. For example, it takes only around 5 lines of
    LISP code to describe the action of curling using embodied
@@ -3049,7 +3051,7 @@
    experiences from the worm that includes the actions I want to
    recognize. The =generate-phi-space= program (listing
   \ref{generate-phi-space}) runs the worm through a series of
-   exercises and gatherers those experiences into a vector. The
+   exercises and gathers those experiences into a vector. The
    =do-all-the-things= program is a routine expressed in a simple
    muscle contraction script language for automated worm control. It
    causes the worm to rest, curl, and wiggle over about 700 frames
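The =proprioception-kernel= hunk above describes computing Euler angles between the two objects a joint connects, made relative to the joint's initial "straightness". A minimal sketch of that idea in Python (not the actual Clojure implementation; the names =rot_z=, =euler_zyx=, and =proprioception_kernel= here are illustrative, and a ZYX Euler convention is assumed):

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a rotation of theta radians about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def euler_zyx(R):
    """Extract (yaw, pitch, roll) from a rotation matrix, ZYX convention."""
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

def proprioception_kernel(R_a0, R_b0):
    """Given the initial orientations of the two objects a joint connects,
    return a function mapping their current orientations to Euler angles
    relative to the joint's initial configuration."""
    R_rel0 = R_a0.T @ R_b0               # the joint's initial "straightness"
    def kernel(R_a, R_b):
        R_rel = R_a.T @ R_b              # current relative rotation
        return euler_zyx(R_rel0.T @ R_rel)  # angles relative to initial pose
    return kernel
```

With both objects starting aligned, rotating the second object by 0.5 radians about z yields a yaw of 0.5 and zero pitch and roll; if the joint starts out bent, that initial bend reads as zero, matching the "relative to initial straightness" behavior the hunk describes.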