changeset 563:3c3dc4dbb973

adding changes from winston
author Robert McIntyre <rlm@mit.edu>
date Mon, 12 May 2014 12:49:18 -0400
parents 6299450f9618
children ecae29320b00
files thesis/cortex.org
diffstat 1 files changed, 27 insertions(+), 27 deletions(-)
     1.1 --- a/thesis/cortex.org	Mon May 12 10:04:16 2014 -0400
     1.2 +++ b/thesis/cortex.org	Mon May 12 12:49:18 2014 -0400
     1.3 @@ -324,7 +324,7 @@
     1.4       structure of =EMPATH= and =CORTEX= will make future work to
     1.5       enable video analysis much easier than it would otherwise be.
     1.6  
     1.7 -** COMMENT =EMPATH= is built on =CORTEX=, a creature builder.
     1.8 +** =EMPATH= is built on =CORTEX=, a creature builder.
     1.9  
    1.10     I built =CORTEX= to be a general AI research platform for doing
    1.11     experiments involving multiple rich senses and a wide variety and
    1.12 @@ -577,12 +577,12 @@
    1.13     three-dimensional object.
    1.14  
    1.15     To make a sense, interpret the UV-image as describing the
    1.16 -   distribution of that senses sensors. To get different types of
    1.17 +   distribution of that sense's sensors. To get different types of
    1.18     sensors, you can either use a different color for each type of
    1.19     sensor, or use multiple UV-maps, each labeled with that sensor
    1.20     type. I generally use a white pixel to mean the presence of a
    1.21     sensor and a black pixel to mean the absence of a sensor, and use
    1.22 -   one UV-map for each sensor-type within a given sense. 
    1.23 +   one UV-map for each sensor-type within a given sense.
    1.24  
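   To make this concrete, here is a minimal sketch of reading such a
   sensor map, assuming the UV-image has been loaded as a
   =java.awt.image.BufferedImage=; the name =sensor-coords= is
   illustrative, not part of =CORTEX=.

   #+begin_src clojure
   ;; Collect the [x y] coordinates of every white pixel; each white
   ;; pixel marks one sensor of the type this UV-map is labeled with.
   (defn sensor-coords
     [^java.awt.image.BufferedImage image]
     (for [x (range (.getWidth image))
           y (range (.getHeight image))
           :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
       [x y]))
   #+end_src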
    1.25     #+CAPTION: The UV-map for an elongated icosphere. The white
    1.26     #+caption: dots each represent a touch sensor. They are dense 
    1.27 @@ -593,7 +593,7 @@
    1.28     #+ATTR_LaTeX: :width 10cm
    1.29     [[./images/finger-UV.png]]
    1.30  
    1.31 -   #+caption: Ventral side of the UV-mapped finger. Notice the 
    1.32 +   #+caption: Ventral side of the UV-mapped finger. Note the 
    1.33     #+caption: density of touch sensors at the tip.
    1.34     #+name: finger-side-view
    1.35     #+ATTR_LaTeX: :width 10cm
    1.36 @@ -612,16 +612,16 @@
    1.37     First off, general purpose video game engines come with a physics
    1.38     engine and lighting / sound system. The physics system provides
    1.39     tools that can be co-opted to serve as touch, proprioception, and
    1.40 -   muscles. Since some games support split screen views, a good video
    1.41 -   game engine will allow you to efficiently create multiple cameras
    1.42 -   in the simulated world that can be used as eyes. Video game systems
    1.43 -   offer integrated asset management for things like textures and
    1.44 -   creature models, providing an avenue for defining creatures. They
    1.45 -   also understand UV-mapping, since this technique is used to apply a
    1.46 -   texture to a model. Finally, because video game engines support a
    1.47 -   large number of developers, as long as =CORTEX= doesn't stray too
    1.48 -   far from the base system, other researchers can turn to this
    1.49 -   community for help when doing their research.
    1.50 +   muscles. Because some games support split screen views, a good
    1.51 +   video game engine will allow you to efficiently create multiple
    1.52 +   cameras in the simulated world that can be used as eyes. Video game
    1.53 +   systems offer integrated asset management for things like textures
    1.54 +   and creature models, providing an avenue for defining creatures.
    1.55 +   They also understand UV-mapping, because this technique is used to
    1.56 +   apply a texture to a model. Finally, because video game engines
    1.57 +   support a large number of developers, as long as =CORTEX= doesn't
    1.58 +   stray too far from the base system, other researchers can turn to
    1.59 +   this community for help when doing their research.
    1.60     
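   To illustrate the multiple-camera point, a second camera with its
   own pre-view can be created in jMonkeyEngine3 roughly as follows;
   =add-eye!= and its arguments are illustrative, not =CORTEX= API.

   #+begin_src clojure
   (import '(com.jme3.renderer Camera))

   ;; Create an extra camera and attach it to its own ViewPort so its
   ;; rendered output can later be read back and used as an eye.
   (defn add-eye!
     [render-manager root-node width height]
     (let [cam  (Camera. (int width) (int height))
           view (.createPreView render-manager "eye-view" cam)]
       (.setClearFlags view true true true)
       (.attachScene view root-node)
       view))
   #+end_src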
    1.61  ** =CORTEX= is based on jMonkeyEngine3
    1.62  
    1.63 @@ -824,8 +824,8 @@
    1.64      node.
    1.65  
    1.66     #+caption: Retrieving the child empty nodes of a single 
    1.67 -   #+caption: named empty node is a common pattern in =CORTEX=
    1.68 -   #+caption: further instances of this technique for the senses 
    1.69 +   #+caption: named empty node is a common pattern in =CORTEX=.
    1.70 +   #+caption: Further instances of this technique for the senses 
    1.71     #+caption: will be omitted.
    1.72     #+name: get-empty-nodes
    1.73     #+begin_listing clojure
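   #+begin_src clojure
   ;; A hedged sketch of the pattern named above; `children-of` and its
   ;; arguments are illustrative, assuming a jMonkeyEngine3 Node.
   (defn children-of
     "Return the child nodes of the empty node named `parent-name`
      inside `creature`, or an empty seq if no such node exists."
     [creature parent-name]
     (if-let [parent (.getChild creature parent-name)]
       (seq (.getChildren parent))
       []))
   #+end_src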
    1.74 @@ -853,9 +853,9 @@
    1.75      intersects two physical objects. The objects are ordered according
    1.76      to the joint's rotation, with the first one being the object that
    1.77      has more negative coordinates in the joint's reference frame.
    1.78 -    Since the objects must be physical, the empty-node itself escapes
    1.79 -    detection. Because the objects must be physical, =joint-targets=
    1.80 -    must be called /after/ =physical!= is called.
    1.81 +    Because the objects must be physical, the empty-node itself
    1.82 +    escapes detection. For the same reason, =joint-targets= must
    1.83 +    be called /after/ =physical!= is called.
    1.84     
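    The growing search can be sketched with jME3's collision API as
    follows; =grow-search= is illustrative, and ordering the two
    targets by the joint's rotation is omitted here.

    #+begin_src clojure
    (import '(com.jme3.bounding BoundingBox)
            '(com.jme3.collision CollisionResults))

    ;; Double a cube centered on the joint until it intersects at
    ;; least two distinct physical objects among `parts`.
    (defn grow-search
      [parts joint]
      (loop [radius 0.01]
        (let [results (CollisionResults.)]
          (.collideWith parts
                        (BoundingBox. (.getWorldTranslation joint)
                                      (float radius) (float radius)
                                      (float radius))
                        results)
          (let [targets (distinct (map #(.getGeometry %) results))]
            (if (<= 2 (count targets))
              (take 2 targets)
              (recur (* 2.0 radius)))))))
    #+end_src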
    1.85      #+caption: Program to find the targets of a joint node by 
    1.86     #+caption: exponential growth of a search cube.
    1.87 @@ -1270,12 +1270,12 @@
    1.88      #+END_SRC
    1.89      #+end_listing
    1.90  
    1.91 -    Note that since each of the functions generated by =vision-kernel=
    1.92 -    shares the same =register-eye!= function, the eye will be
    1.93 -    registered only once the first time any of the functions from the
    1.94 -    list returned by =vision-kernel= is called. Each of the functions
    1.95 -    returned by =vision-kernel= also allows access to the =Viewport=
    1.96 -    through which it receives images.
    1.97 +    Note that because each of the functions generated by
    1.98 +    =vision-kernel= shares the same =register-eye!= function, the eye
    1.99 +    will be registered only once, the first time any of the functions
   1.100 +    from the list returned by =vision-kernel= is called. Each of the
   1.101 +    functions returned by =vision-kernel= also allows access to the
   1.102 +    =Viewport= through which it receives images.
   1.103  
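    One way to get this once-only behavior is to have every generated
    function force the same =delay=, as in this sketch; the names here
    are illustrative rather than =CORTEX='s own.

    #+begin_src clojure
    ;; `register-eye!` runs exactly once, on whichever of the returned
    ;; functions happens to be called first.
    (defn vision-fns
      [register-eye! read-channel channels]
      (let [registered (delay (register-eye!))]
        (for [channel channels]
          (fn [world]
            @registered
            (read-channel world channel)))))
    #+end_src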
   1.104      All the hard work has been done; all that remains is to apply
   1.105      =vision-kernel= to each eye in the creature and gather the results
   1.106 @@ -1372,8 +1372,8 @@
   1.107  *** Extending =OpenAL=
   1.108  
   1.109      Extending =OpenAL= to support multiple listeners requires 500
   1.110 -    lines of =C= code and is too hairy to mention here. Instead, I
   1.111 -    will show a small amount of extension code and go over the high
   1.112 +    lines of =C= code and is too complicated to cover here. Instead,
   1.113 +    I will show a small amount of extension code and go over the high
   1.114      level strategy. Full source is of course available with the
   1.115      =CORTEX= distribution if you're interested.
   1.116