# HG changeset patch
# User Robert McIntyre
# Date 1399913358 14400
# Node ID 3c3dc4dbb9738f40f1a5d0b7f7c6e44cc5c9b157
# Parent 6299450f96182f35a66c1c219114be6bf83f9227
adding changes from winston

diff -r 6299450f9618 -r 3c3dc4dbb973 thesis/cortex.org
--- a/thesis/cortex.org	Mon May 12 10:04:16 2014 -0400
+++ b/thesis/cortex.org	Mon May 12 12:49:18 2014 -0400
@@ -324,7 +324,7 @@
    structure of =EMPATH= and =CORTEX= will make future work to enable
    video analysis much easier than it would otherwise be.
 
-** COMMENT =EMPATH= is built on =CORTEX=, a creature builder.
+** =EMPATH= is built on =CORTEX=, a creature builder.
 
    I built =CORTEX= to be a general AI research platform for doing
    experiments involving multiple rich senses and a wide variety and
@@ -577,12 +577,12 @@
    three-dimensional object.
 
    To make a sense, interpret the UV-image as describing the
-   distribution of that senses sensors. To get different types of
+   distribution of that senses' sensors. To get different types of
    sensors, you can either use a different color for each type of
    sensor, or use multiple UV-maps, each labeled with that sensor
    type. I generally use a white pixel to mean the presence of a
    sensor and a black pixel to mean the absence of a sensor, and use
-   one UV-map for each sensor-type within a given sense. 
+   one UV-map for each sensor-type within a given sense.
 
    #+CAPTION: The UV-map for an elongated icososphere. The white
    #+caption: dots each represent a touch sensor. They are dense
@@ -593,7 +593,7 @@
    #+ATTR_latex: :width 10cm
    [[./images/finger-UV.png]]
 
-   #+caption: Ventral side of the UV-mapped finger. Notice the
+   #+caption: Ventral side of the UV-mapped finger. Note the
    #+caption: density of touch sensors at the tip.
    #+name: finger-side-view
    #+ATTR_LaTeX: :width 10cm
@@ -612,16 +612,16 @@
    First off, general purpose video game engines come with a physics
    engine and lighting / sound system. The physics system provides
    tools that can be co-opted to serve as touch, proprioception, and
-   muscles. Since some games support split screen views, a good video
-   game engine will allow you to efficiently create multiple cameras
-   in the simulated world that can be used as eyes. Video game systems
-   offer integrated asset management for things like textures and
-   creature models, providing an avenue for defining creatures. They
-   also understand UV-mapping, since this technique is used to apply a
-   texture to a model. Finally, because video game engines support a
-   large number of developers, as long as =CORTEX= doesn't stray too
-   far from the base system, other researchers can turn to this
-   community for help when doing their research.
+   muscles. Because some games support split screen views, a good
+   video game engine will allow you to efficiently create multiple
+   cameras in the simulated world that can be used as eyes. Video game
+   systems offer integrated asset management for things like textures
+   and creature models, providing an avenue for defining creatures.
+   They also understand UV-mapping, because this technique is used to
+   apply a texture to a model. Finally, because video game engines
+   support a large number of developers, as long as =CORTEX= doesn't
+   stray too far from the base system, other researchers can turn to
+   this community for help when doing their research.
 
 ** =CORTEX= is based on jMonkeyEngine3
 
@@ -824,8 +824,8 @@
    node.
 
    #+caption: Retrieving the children empty nodes from a single
-   #+caption: named empty node is a common pattern in =CORTEX=
-   #+caption: further instances of this technique for the senses
+   #+caption: named empty node is a common pattern in =CORTEX=.
+   #+caption: Further instances of this technique for the senses
    #+caption: will be omitted
    #+name: get-empty-nodes
    #+begin_listing clojure
@@ -853,9 +853,9 @@
    intersects two physical objects. The objects are ordered according
    to the joint's rotation, with the first one being the object that
    has more negative coordinates in the joint's reference frame.
-   Since the objects must be physical, the empty-node itself escapes
-   detection. Because the objects must be physical, =joint-targets=
-   must be called /after/ =physical!= is called.
+   Because the objects must be physical, the empty-node itself
+   escapes detection. Because the objects must be physical,
+   =joint-targets= must be called /after/ =physical!= is called.
 
    #+caption: Program to find the targets of a joint node by
    #+caption: exponentially growth of a search cube.
@@ -1270,12 +1270,12 @@
    #+END_SRC
    #+end_listing
 
-   Note that since each of the functions generated by =vision-kernel=
-   shares the same =register-eye!= function, the eye will be
-   registered only once the first time any of the functions from the
-   list returned by =vision-kernel= is called. Each of the functions
-   returned by =vision-kernel= also allows access to the =Viewport=
-   through which it receives images.
+   Note that because each of the functions generated by
+   =vision-kernel= shares the same =register-eye!= function, the eye
+   will be registered only once the first time any of the functions
+   from the list returned by =vision-kernel= is called. Each of the
+   functions returned by =vision-kernel= also allows access to the
+   =Viewport= through which it receives images.
 
    All the hard work has been done; all that remains is to apply
    =vision-kernel= to each eye in the creature and gather the results
@@ -1372,8 +1372,8 @@
 *** Extending =OpenAl=
 
    Extending =OpenAL= to support multiple listeners requires 500
-   lines of =C= code and is too hairy to mention here. Instead, I
-   will show a small amount of extension code and go over the high
+   lines of =C= code and is too complicated to mention here. Instead,
+   I will show a small amount of extension code and go over the high
    level strategy. Full source is of course available with the
    =CORTEX= distribution if you're interested.
 