changeset 545:b2c66ea58c39

changes from athena.
author Robert McIntyre <rlm@mit.edu>
date Mon, 28 Apr 2014 12:59:08 -0400
parents 431e6aedf67d
children f4770e3d30ae
files thesis/cortex.org
diffstat 1 files changed, 8 insertions(+), 6 deletions(-)
     1.1 --- a/thesis/cortex.org	Mon Apr 28 01:06:03 2014 -0400
     1.2 +++ b/thesis/cortex.org	Mon Apr 28 12:59:08 2014 -0400
     1.3 @@ -349,7 +349,7 @@
     1.4  
     1.5     - =CORTEX= implements a wide variety of senses: touch,
     1.6       proprioception, vision, hearing, and muscle tension. Complicated
     1.7 -     senses like touch, and vision involve multiple sensory elements
     1.8 +     senses like touch and vision involve multiple sensory elements
     1.9       embedded in a 2D surface. You have complete control over the
    1.10       distribution of these sensor elements through the use of simple
    1.11       png image files. In particular, =CORTEX= implements more
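    The png placement mechanism above is simple enough to sketch. As
    a purely illustrative aside (this is not CORTEX's actual reader,
    and the white-pixel convention is an assumption), sensor
    coordinates could be recovered from such an image like this:

    #+begin_src clojure
;; Illustrative sketch only, not CORTEX's implementation: recover
;; sensor-element coordinates from a png, treating every white pixel
;; as one sensor. The white-pixel convention is an assumption.
(import '(javax.imageio ImageIO)
        '(java.io File))

(defn sensor-positions
  "Return the [x y] coordinates of every white pixel in png-file."
  [png-file]
  (let [img (ImageIO/read (File. png-file))]
    (for [x (range (.getWidth img))
          y (range (.getHeight img))
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB img x y)))]
      [x y])))
    #+end_src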
    1.12 @@ -1132,6 +1132,7 @@
    1.13      #+caption: with =bind-sense=
    1.14      #+name: add-eye
    1.15      #+begin_listing clojure
    1.16 +    #+begin_src clojure
    1.17  (defn add-eye!
    1.18    "Create a Camera centered on the current position of 'eye which
    1.19     follows the closest physical node in 'creature. The camera will
    1.20 @@ -1157,6 +1158,7 @@
    1.21       (float 1)
    1.22       (float 1000))
    1.23      (bind-sense target cam) cam))
    1.24 +    #+end_src
    1.25      #+end_listing
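    A hypothetical call, assuming creature and eye-node are spatials
    already loaded into the scene graph; the [creature eye] argument
    order is inferred from the docstring, not shown in this hunk:

    #+begin_src clojure
;; Hypothetical usage; argument order inferred from the docstring.
(def eye-cam (add-eye! creature eye-node))
    #+end_src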
    1.26  
    1.27  *** Simulated Retina 
    1.28 @@ -1191,8 +1193,8 @@
    1.29      #+ATTR_LaTeX: :width 7cm
    1.30      [[./images/retina-small.png]]
    1.31  
    1.32 -    Together, the number 0xFF0000 and the image image above describe
    1.33 -    the placement of red-sensitive sensory elements.
    1.34 +    Together, the number 0xFF0000 and the image above describe the
    1.35 +    placement of red-sensitive sensory elements.
    1.36  
    1.37      Meta-data to very crudely approximate a human eye might be
    1.38      something like this:
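    The actual listing falls outside this hunk. Purely as a
    hypothetical illustration, such meta-data could map color codes
    to sensor-placement images (these paths are invented):

    #+begin_src clojure
;; Hypothetical retinal profile; the png paths are invented.
(def hypothetical-eye-profile
  {0xFF0000 "Models/eye/red-sensors.png"    ; red-sensitive elements
   0x00FF00 "Models/eye/green-sensors.png"  ; green-sensitive elements
   0x0000FF "Models/eye/blue-sensors.png"}) ; blue-sensitive elements
    #+end_src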
    1.39 @@ -2179,7 +2181,7 @@
    1.40  *** Proprioception Kernel
    1.41      
    1.42      Given a joint, =proprioception-kernel= produces a function that
    1.43 -    calculates the Euler angles between the the objects the joint
    1.44 +    calculates the Euler angles between the objects the joint
    1.45      connects. The only tricky part here is making the angles relative
    1.46      to the joint's initial ``straightness''.
    1.47  
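    A minimal sketch of that ``relative to initial straightness''
    step, assuming jMonkeyEngine's Quaternion; this is not the
    thesis's =proprioception-kernel=:

    #+begin_src clojure
;; Sketch only, not the thesis's proprioception-kernel: the Euler
;; angles of 'current measured against an 'initial reference
;; rotation, via jMonkeyEngine's Quaternion.
(import '(com.jme3.math Quaternion))

(defn relative-euler-angles
  "Three Euler angles of current relative to initial."
  [#^Quaternion initial #^Quaternion current]
  (let [relative (.mult (.inverse initial) current)]
    (vec (.toAngles relative (float-array 3)))))
    #+end_src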
    1.48 @@ -2559,7 +2561,7 @@
    1.49  ** Action recognition is easy with a full gamut of senses
    1.50  
    1.51     Embodied representations using multiple senses such as touch,
    1.52 -   proprioception, and muscle tension turns out be be exceedingly
    1.53 +   proprioception, and muscle tension turns out to be exceedingly
    1.54     efficient at describing body-centered actions. It is the right
    1.55     language for the job. For example, it takes only around 5 lines of
    1.56     LISP code to describe the action of curling using embodied
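    In that spirit, a hypothetical predicate of roughly that size;
    the frame layout (:proprioception holding [heading pitch bend]
    triples) is an assumption for illustration, not the thesis's
    exact representation:

    #+begin_src clojure
;; Hypothetical embodied predicate; the [heading pitch bend] triple
;; layout under :proprioception is assumed, not the thesis's code.
(defn curled?
  "True when every joint in the latest experience frame is bent."
  [experiences]
  (every?
   (fn [[_ _ bend]] (> (Math/sin bend) 0.64))
   (:proprioception (peek experiences))))
    #+end_src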
    1.57 @@ -3049,7 +3051,7 @@
    1.58     experiences from the worm that includes the actions I want to
    1.59     recognize. The =generate-phi-space= program (listing
    1.60    \ref{generate-phi-space}) runs the worm through a series of
    1.61 -   exercises and gatherers those experiences into a vector. The
    1.62 +   exercises and gathers those experiences into a vector. The
    1.63     =do-all-the-things= program is a routine expressed in a simple
    1.64     muscle contraction script language for automated worm control. It
    1.65     causes the worm to rest, curl, and wiggle over about 700 frames
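    A skeletal sketch of that gathering step; step-worm!, standing in
    for whatever advances the simulation one frame and returns that
    frame's experience map, is a pure assumption, not the thesis's
    =generate-phi-space=:

    #+begin_src clojure
;; Sketch only: drive a muscle-control script for n frames and
;; collect one experience map per frame. step-worm! is a
;; hypothetical helper, not a CORTEX function.
(defn gather-phi-space
  [control-script n]
  (reduce (fn [phi-space frame]
            (conj phi-space (step-worm! (control-script frame))))
          [] (range n)))
    #+end_src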