changeset 572:202c6d19acad

add index page as part of aurellem redesign.
author Robert McIntyre <rlm@mit.edu>
date Sun, 08 Mar 2015 22:08:17 -0700
parents 819968c8a391
children ebdedb039cbb
files org/index.org thesis/abstract.tex thesis/cortex.tex
diffstat 3 files changed, 3833 insertions(+), 0 deletions(-)
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/org/index.org	Sun Mar 08 22:08:17 2015 -0700
     1.3 @@ -0,0 +1,33 @@
     1.4 +#+title: CORTEX
     1.5 +#+author: Robert McIntyre
     1.6 +#+email: rlm@mit.edu
     1.7 +#+description: cortex, a simulated environment for sensate AI
     1.8 +#+keywords: computer vision, jMonkeyEngine3, clojure
     1.9 +#+SETUPFILE: ../../aurellem/org/setup.org
    1.10 +#+INCLUDE: ../../aurellem/org/level-0.org
    1.11 +#+babel: :mkdirp yes :noweb yes :exports both
    1.12 +
    1.13 +
    1.14 +** Cortex: a virtual world for sensate AI 
    1.15 +
    1.16 + This was an MEng thesis project while I was at MIT. It won the 2014
     1.17 + Charles and Jennifer Johnson Thesis Award -- 1st Place!
    1.18 +
    1.19 + - [[http://aurellem.org/dl/rlm-meng-cortex-final.pdf][Thesis]]
    1.20 + - [[http://aurellem.org/dl/cortex-1.0.0.tar.bz2][Code]]
    1.21 +
    1.22 + 1. [[../../cortex/html/intro.html][Intro: Choosing between Virtual and Real Time]]
    1.23 + 2. [[../../cortex/html/setup.html][Installing jMonkeyEngine3, a 3D Game Engine]]
    1.24 + 3. [[../../cortex/html/world.html][Creating a Virtual World]]
    1.25 + 4. [[../../cortex/html/util.html][Utilities that Integrate jMonkeyEngine3 into Clojure]]
     1.26 + 5. [[../../cortex/html/games.html][Showing off: Games and Examples]]
    1.27 + 6. *Sensors and effectors*
     1.28 +     1. [[../../cortex/html/sense.html][Preamble: auxiliary functions]]
    1.29 +     2. [[../../cortex/html/body.html][Building a Body]]
    1.30 +     3. [[../../cortex/html/vision.html][Vision]]
    1.31 +     4. [[../../cortex/html/hearing.html][Hearing]]
    1.32 +     5. [[../../cortex/html/touch.html][Touch]]
    1.33 +     6. [[../../cortex/html/proprioception.html][Proprioception]]
    1.34 +     7. [[../../cortex/html/movement.html][Movement]]
    1.35 +     8. [[../../cortex/html/integration.html][Integration]]
    1.36 + 7. [[../../cortex/html/gabor.html][Gabor Filters]]
     2.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     2.2 +++ b/thesis/abstract.tex	Sun Mar 08 22:08:17 2015 -0700
     2.3 @@ -0,0 +1,25 @@
     2.4 +Here I demonstrate the power of using embodied artificial intelligence
     2.5 +to attack the \emph{action recognition} problem, which is the challenge of
     2.6 +recognizing actions performed by a creature given limited data about
     2.7 +the creature's actions, such as a video recording. I solve this
     2.8 +problem in the case of a worm-like creature performing actions such as
     2.9 +curling and wiggling. 
    2.10 +
    2.11 +To attack the action recognition problem, I developed a computational
    2.12 +model of empathy (\texttt{EMPATH}) which allows me to recognize actions using
    2.13 +simple, embodied representations of actions (which require rich
    2.14 +sensory data), even when that sensory data is not actually available.
    2.15 +The missing sense data is imagined by combining previous experiences
    2.16 +gained from unsupervised free play. The worm is a five-segment
    2.17 +creature equipped with touch, proprioception, and muscle tension
    2.18 +senses. It recognizes actions using only proprioception data.
    2.19 +
    2.20 +In order to build this empathic, action-recognizing system, I created
    2.21 +a program called \texttt{CORTEX}, which is a complete platform for embodied
    2.22 +AI research. It provides multiple senses for simulated creatures,
    2.23 +including vision, touch, proprioception, muscle tension, and hearing.
    2.24 +Each of these senses provides a wealth of parameters that are
    2.25 +biologically inspired. \texttt{CORTEX} is able to simulate any number of
    2.26 +creatures and senses, and provides facilities for easily modeling and
    2.27 +creating new creatures. As a research platform it is more complete
    2.28 +than any other system currently available.
     3.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     3.2 +++ b/thesis/cortex.tex	Sun Mar 08 22:08:17 2015 -0700
     3.3 @@ -0,0 +1,3775 @@
     3.4 +
     3.5 +\section{Empathy \& Embodiment: problem solving strategies}
     3.6 +\label{sec-1}
     3.7 +
     3.8 +By the time you have read this thesis, you will understand a novel
     3.9 +approach to representing and recognizing physical actions using
    3.10 +embodiment and empathy. You will also see one way to efficiently
    3.11 +implement physical empathy for embodied creatures. Finally, you will
    3.12 +become familiar with \texttt{CORTEX}, a system for designing and simulating
    3.13 +creatures with rich senses, which I have designed as a library that
    3.14 +you can use in your own research. Note that I \emph{do not} process video
    3.15 +directly --- I start with knowledge of the positions of a creature's
    3.16 +body parts and work from there.
    3.17 +
    3.18 +This is the core vision of my thesis: That one of the important ways
    3.19 +in which we understand others is by imagining ourselves in their
    3.20 +position and empathically feeling experiences relative to our own
    3.21 +bodies. By understanding events in terms of our own previous
    3.22 +corporeal experience, we greatly constrain the possibilities of what
    3.23 +would otherwise be an unwieldy exponential search. This extra
    3.24 +constraint can be the difference between easily understanding what
    3.25 +is happening in a video and being completely lost in a sea of
    3.26 +incomprehensible color and movement.
    3.27 +
    3.28 +\subsection{The problem: recognizing actions is hard!}
    3.29 +\label{sec-1-1}
    3.30 +
    3.31 +Examine figure \ref{cat-drink}. What is happening? As you, and
    3.32 +indeed very young children, can easily determine, this is an image
    3.33 +of drinking.
    3.34 +
    3.35 +\begin{figure}[htb]
    3.36 +\centering
    3.37 +\includegraphics[width=7cm]{./images/cat-drinking.jpg}
    3.38 +\caption{\label{cat-drink}A cat drinking some water. Identifying this action is beyond the capabilities of existing computer vision systems.}
    3.39 +\end{figure}
    3.40 +
    3.41 +Nevertheless, it is beyond the state of the art for a computer
    3.42 +vision program to describe what's happening in this image. Part of
    3.43 +the problem is that many computer vision systems focus on
    3.44 +pixel-level details or comparisons to example images (such as
    3.45 +\cite{volume-action-recognition}), but the 3D world is so variable
    3.46 +that it is hard to describe the world in terms of possible images.
    3.47 +
    3.48 +In fact, the contents of a scene may have much less to do with
    3.49 +pixel probabilities than with recognizing various affordances:
    3.50 +things you can move, objects you can grasp, spaces that can be
    3.51 +filled. For example, what processes might enable you to see the
    3.52 +chair in figure \ref{hidden-chair}?
    3.53 +
    3.54 +\begin{figure}[htb]
    3.55 +\centering
    3.56 +\includegraphics[width=10cm]{./images/fat-person-sitting-at-desk.jpg}
    3.57 +\caption{\label{hidden-chair}The chair in this image is quite obvious to humans, but it can't be found by any modern computer vision program.}
    3.58 +\end{figure}
    3.59 +
    3.60 +Finally, how is it that you can easily tell the difference in how
    3.61 +the girl's \emph{muscles} are working between the two images in figure \ref{girl}?
    3.62 +
    3.63 +\begin{figure}[htb]
    3.64 +\centering
    3.65 +\includegraphics[width=7cm]{./images/wall-push.png}
    3.66 +\caption{\label{girl}The mysterious ``common sense'' appears here as you are able to discern the difference in how the girl's arm muscles are activated between the two images. When you compare these two images, do you feel something in your own arm muscles?}
    3.67 +\end{figure}
    3.68 +
    3.69 +Each of these examples tells us something about what might be going
    3.70 +on in our minds as we easily solve these recognition problems:
    3.71 +
    3.72 +\begin{itemize}
    3.73 +\item The hidden chair shows us that we are strongly triggered by cues
    3.74 +relating to the position of human bodies, and that we can
    3.75 +determine the overall physical configuration of a human body even
    3.76 +if much of that body is occluded.
    3.77 +
    3.78 +\item The picture of the girl pushing against the wall tells us that we
    3.79 +have common sense knowledge about the kinetics of our own bodies.
    3.80 +We know well how our muscles would have to work to maintain us in
    3.81 +most positions, and we can easily project this self-knowledge to
    3.82 +imagined positions triggered by images of the human body.
    3.83 +
    3.84 +\item The cat tells us that imagination of some kind plays an important
    3.85 +role in understanding actions. The question is: Can we be more
    3.86 +precise about what sort of imagination is required to understand
    3.87 +these actions?
    3.88 +\end{itemize}
    3.89 +
    3.90 +\subsection{A step forward: the sensorimotor-centered approach}
    3.91 +\label{sec-1-2}
    3.92 +
    3.93 +In this thesis, I explore the idea that our knowledge of our own
    3.94 +bodies, combined with our own rich senses, enables us to recognize
    3.95 +the actions of others.
    3.96 +
    3.97 +For example, I think humans are able to label the cat video as
    3.98 +``drinking'' because they imagine \emph{themselves} as the cat, and
    3.99 +imagine putting their face up against a stream of water and
   3.100 +sticking out their tongue. In that imagined world, they can feel
   3.101 +the cool water hitting their tongue, and feel the water entering
   3.102 +their body, and are able to recognize that \emph{feeling} as drinking.
   3.103 +So, the label of the action is not really in the pixels of the
   3.104 +image, but is found clearly in a simulation / recollection inspired
   3.105 +by those pixels. An imaginative system, having been trained on
   3.106 +drinking and non-drinking examples and learning that the most
   3.107 +important component of drinking is the feeling of water flowing
   3.108 +down one's throat, would analyze a video of a cat drinking in the
   3.109 +following manner:
   3.110 +
   3.111 +\begin{enumerate}
   3.112 +\item Create a physical model of the video by putting a ``fuzzy''
   3.113 +model of its own body in place of the cat. Possibly also create
   3.114 +a simulation of the stream of water.
   3.115 +
   3.116 +\item Play out this simulated scene and generate imagined sensory
   3.117 +experience. This will include relevant muscle contractions, a
   3.118 +close up view of the stream from the cat's perspective, and most
   3.119 +importantly, the imagined feeling of water entering the mouth.
   3.120 +The imagined sensory experience can come from a simulation of
   3.121 +the event, but can also be pattern-matched from previous,
   3.122 +similar embodied experience.
   3.123 +
   3.124 +\item The action is now easily identified as drinking by the sense of
   3.125 +taste alone. The other senses (such as the tongue moving in and
   3.126 +out) help to give plausibility to the simulated action. Note that
   3.127 +the sense of vision, while critical in creating the simulation,
   3.128 +is not critical for identifying the action from the simulation.
   3.129 +\end{enumerate}
   3.130 +
   3.131 +For the chair examples, the process is even easier:
   3.132 +
   3.133 +\begin{enumerate}
   3.134 +\item Align a model of your body to the person in the image.
   3.135 +
   3.136 +\item Generate proprioceptive sensory data from this alignment.
   3.137 +
   3.138 +\item Use the imagined proprioceptive data as a key to lookup related
   3.139 +sensory experience associated with that particular proprioceptive
   3.140 +feeling.
   3.141 +
   3.142 +\item Retrieve the feeling of your bottom resting on a surface, your
   3.143 +knees bent, and your leg muscles relaxed.
   3.144 +
   3.145 +\item This sensory information is consistent with your \texttt{sitting?}
   3.146 +sensory predicate, so you (and the entity in the image) must be
   3.147 +sitting.
   3.148 +
   3.149 +\item There must be a chair-like object since you are sitting.
   3.150 +\end{enumerate}
   3.151 +
   3.152 +Empathy offers yet another alternative to the age-old AI
   3.153 +representation question: ``What is a chair?'' --- A chair is the
   3.154 +feeling of sitting!
   3.155 +
   3.156 +One powerful advantage of empathic problem solving is that it
   3.157 +factors the action recognition problem into two easier problems. To
   3.158 +use empathy, you need an \emph{aligner}, which takes the video and a
   3.159 +model of your body, and aligns the model with the video. Then, you
   3.160 +need a \emph{recognizer}, which uses the aligned model to interpret the
   3.161 +action. The power in this method lies in the fact that you describe
   3.162 +all actions from a body-centered viewpoint. You are less tied to
   3.163 +the particulars of any visual representation of the actions. If you
   3.164 +teach the system what ``running'' is, and you have a good enough
   3.165 +aligner, the system will from then on be able to recognize running
   3.166 +from any point of view -- even strange points of view like above or
   3.167 +underneath the runner. This is in contrast to action recognition
   3.168 +schemes that try to identify actions using a non-embodied approach.
   3.169 +If these systems learn about running as viewed from the side, they
   3.170 +will not automatically be able to recognize running from any other
   3.171 +viewpoint.
   3.172 +
   3.173 +Another powerful advantage is that using the language of multiple
   3.174 +body-centered rich senses to describe body-centered actions offers
   3.175 +a massive boost in descriptive capability. Consider how difficult
   3.176 +it would be to compose a set of HOG (Histogram of Oriented
   3.177 +Gradients) filters to describe the action of a simple worm-creature
   3.178 +``curling'' so that its head touches its tail, and then behold the
    3.179 +simplicity of describing this action in a language designed for the
   3.180 +task (listing \ref{grand-circle-intro}):
   3.181 +
   3.182 +\begin{listing}
   3.183 +\begin{verbatim}
   3.184 +(defn grand-circle?
   3.185 +  "Does the worm form a majestic circle (one end touching the other)?"
   3.186 +  [experiences]
   3.187 +  (and (curled? experiences)
   3.188 +       (let [worm-touch (:touch (peek experiences))
   3.189 +             tail-touch (worm-touch 0)
   3.190 +             head-touch (worm-touch 4)]
   3.191 +         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
   3.192 +              (< 0.2 (contact worm-segment-top-tip    head-touch))))))
   3.193 +\end{verbatim}
   3.194 +\caption{\label{grand-circle-intro}Body-centered actions are best expressed in a body-centered language. This code detects when the worm has curled into a full circle. Imagine how you would replicate this functionality using low-level pixel features such as HOG filters!}
   3.195 +\end{listing}
   3.196 +
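         +The same style extends beyond the worm. The \texttt{sitting?}
         +predicate from the chair example above could be sketched in just
         +the same way; here \texttt{bent-knees?}, \texttt{seat-contact?},
         +and \texttt{relaxed-legs?} are hypothetical helpers standing in
         +for whatever proprioceptive, touch, and muscle tests a humanoid
         +model would actually need.
         +
         +\begin{listing}
         +\begin{verbatim}
         +(defn sitting?
         +  "Does the aligned model feel like it is sitting? (sketch; the
         +   helper predicates are hypothetical.)"
         +  [experiences]
         +  (let [{proprio :proprioception
         +         touch   :touch
         +         muscle  :muscle} (peek experiences)]
         +    (and (bent-knees?   proprio)
         +         (seat-contact? touch)
         +         (relaxed-legs? muscle))))
         +\end{verbatim}
         +\caption{A hypothetical \texttt{sitting?} predicate, written in the same body-centered style as listing \ref{grand-circle-intro}.}
         +\end{listing}
         +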
   3.197 +\subsection{\texttt{EMPATH} recognizes actions using empathy}
   3.198 +\label{sec-1-3}
   3.199 +
   3.200 +Exploring these ideas further demands a concrete implementation, so
   3.201 +first, I built a system for constructing virtual creatures with
   3.202 +physiologically plausible sensorimotor systems and detailed
   3.203 +environments. The result is \texttt{CORTEX}, which I describe in chapter
   3.204 +\ref{sec-2}.
   3.205 +
   3.206 +Next, I wrote routines which enabled a simple worm-like creature to
   3.207 +infer the actions of a second worm-like creature, using only its
   3.208 +own prior sensorimotor experiences and knowledge of the second
   3.209 +worm's joint positions. This program, \texttt{EMPATH}, is described in
    3.210 +chapter \ref{sec-3}. Its main components are:
   3.211 +
   3.212 +\begin{description}
   3.213 +\item[{Embodied Action Definitions}] Many otherwise complicated actions
   3.214 +are easily described in the language of a full suite of
   3.215 +body-centered, rich senses and experiences. For example,
   3.216 +drinking is the feeling of water flowing down your throat, and
   3.217 +cooling your insides. It's often accompanied by bringing your
   3.218 +hand close to your face, or bringing your face close to water.
   3.219 +Sitting down is the feeling of bending your knees, activating
   3.220 +your quadriceps, then feeling a surface with your bottom and
   3.221 +relaxing your legs. These body-centered action descriptions
   3.222 +can be either learned or hard coded.
   3.223 +
    3.224 +\item[{Guided Play}] The creature moves around and experiences the
   3.225 +world through its unique perspective. As the creature moves,
   3.226 +it gathers experiences that satisfy the embodied action
   3.227 +definitions.
   3.228 +
   3.229 +\item[{Posture Imitation}] When trying to interpret a video or image,
   3.230 +the creature takes a model of itself and aligns it with
   3.231 +whatever it sees. This alignment might even cross species, as
   3.232 +when humans try to align themselves with things like ponies,
   3.233 +dogs, or other humans with a different body type.
   3.234 +
    3.235 +\item[{Empathy}] The alignment triggers associations with
    3.236 +sensory data from prior experiences. For example, the
    3.237 +alignment itself easily maps to proprioceptive data. Any
    3.238 +sounds or obvious skin contact in the video can, to a lesser
    3.239 +extent, trigger previous experience keyed to hearing or touch.
    3.240 +Segments of previous experiences gained from play are stitched
    3.241 +together to form a coherent and complete sensory portrait of
    3.242 +the scene (a sketch of this retrieval step follows this list).
   3.243 +
   3.244 +\item[{Recognition}] With the scene described in terms of remembered
   3.245 +first person sensory events, the creature can now run its
   3.246 +action-definition programs (such as the one in listing
   3.247 +\ref{grand-circle-intro}) on this synthesized sensory data,
   3.248 +just as it would if it were actually experiencing the scene
   3.249 +first-hand. If previous experience has been accurately
   3.250 +retrieved, and if it is analogous enough to the scene, then
   3.251 +the creature will correctly identify the action in the scene.
   3.252 +\end{description}
   3.253 +
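         +To make the Empathy step concrete, here is a minimal sketch of
         +the retrieval it describes, assuming experiences are stored as
         +maps with a \texttt{:proprioception} key and that some distance
         +function \texttt{proprio-distance} over proprioceptive signatures
         +exists. Both names are assumptions for illustration; this is not
         +\texttt{EMPATH}'s actual retrieval code.
         +
         +\begin{listing}
         +\begin{verbatim}
         +(defn imagine-experience
         +  "Retrieve the prior experience whose proprioceptive signature
         +   best matches the alignment-derived data. Sketch only;
         +   proprio-distance and the experience-db format are assumed."
         +  [experience-db observed-proprio]
         +  (apply min-key
         +         #(proprio-distance (:proprioception %) observed-proprio)
         +         experience-db))
         +\end{verbatim}
         +\caption{A sketch of nearest-neighbor retrieval of prior experience, keyed on proprioception.}
         +\end{listing}
         +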
   3.254 +My program \texttt{EMPATH} uses this empathic problem solving technique
   3.255 +to interpret the actions of a simple, worm-like creature. 
   3.256 +
   3.257 +\begin{figure}[htb]
   3.258 +\centering
   3.259 +\includegraphics[width=15cm]{./images/worm-intro-white.png}
   3.260 +\caption{\label{worm-intro}The worm performs many actions during free play such as curling, wiggling, and resting.}
   3.261 +\end{figure}
   3.262 +
   3.263 +\begin{figure}[htb]
   3.264 +\centering
   3.265 +\includegraphics[width=15cm]{./images/worm-poses.png}
   3.266 +\caption{\label{worm-recognition-intro}\texttt{EMPATH} recognized and classified each of these poses by inferring the complete sensory experience from proprioceptive data.}
   3.267 +\end{figure}
   3.268 +
   3.269 +\subsubsection{Main Results}
   3.270 +\label{sec-1-3-1}
   3.271 +
   3.272 +\begin{itemize}
    3.273 +\item After one-shot supervised training, \texttt{EMPATH} was able to
    3.274 +recognize a wide variety of static poses and dynamic
    3.275 +actions --- ranging from curling in a circle to wiggling with a
    3.276 +particular frequency --- with 95\% accuracy.
   3.277 +
    3.278 +\item These results were completely independent of viewing angle
    3.279 +because the underlying body-centered language is fundamentally
    3.280 +viewpoint-independent; once an action is learned, it can be
    3.281 +recognized equally well from any viewing angle.
   3.282 +
   3.283 +\item \texttt{EMPATH} is surprisingly short; the sensorimotor-centered
   3.284 +language provided by \texttt{CORTEX} resulted in extremely economical
   3.285 +recognition routines --- about 500 lines in all --- suggesting
   3.286 +that such representations are very powerful, and often
   3.287 +indispensable for the types of recognition tasks considered here.
   3.288 +
   3.289 +\item For expediency's sake, I relied on direct knowledge of joint
   3.290 +positions in this proof of concept. However, I believe that the
   3.291 +structure of \texttt{EMPATH} and \texttt{CORTEX} will make future work to
   3.292 +enable video analysis much easier than it would otherwise be.
   3.293 +\end{itemize}
   3.294 +
   3.295 +\subsection{\texttt{EMPATH} is built on \texttt{CORTEX}, a creature builder.}
   3.296 +\label{sec-1-4}
   3.297 +
   3.298 +I built \texttt{CORTEX} to be a general AI research platform for doing
   3.299 +experiments involving multiple rich senses and a wide variety and
   3.300 +number of creatures. I intend it to be useful as a library for many
    3.301 +more projects than just this thesis. \texttt{CORTEX} meets a real
    3.302 +need among AI researchers at CSAIL and beyond: people often
    3.303 +invent wonderful ideas that are best expressed in the language of
    3.304 +creatures and senses, but in order to explore those ideas they
    3.305 +must first build a platform in which they can create simulated
    3.306 +creatures with rich senses! There are many ideas that
   3.307 +would be simple to execute (such as \texttt{EMPATH} or Larson's
   3.308 +self-organizing maps (\cite{larson-symbols})), but attached to them
   3.309 +is the multi-month effort to make a good creature simulator. Often,
   3.310 +that initial investment of time proves to be too much, and the
   3.311 +project must make do with a lesser environment or be abandoned
   3.312 +entirely.
   3.313 +
   3.314 +\texttt{CORTEX} is well suited as an environment for embodied AI research
   3.315 +for three reasons:
   3.316 +
   3.317 +\begin{itemize}
   3.318 +\item You can design new creatures using Blender (\cite{blender}), a
   3.319 +popular, free 3D modeling program. Each sense can be specified
   3.320 +using special Blender nodes with biologically inspired
   3.321 +parameters. You need not write any code to create a creature, and
   3.322 +can use a wide library of pre-existing Blender models as a base
   3.323 +for your own creatures.
   3.324 +
   3.325 +\item \texttt{CORTEX} implements a wide variety of senses: touch,
   3.326 +proprioception, vision, hearing, and muscle tension. Complicated
   3.327 +senses like touch and vision involve multiple sensory elements
   3.328 +embedded in a 2D surface. You have complete control over the
   3.329 +distribution of these sensor elements through the use of simple
   3.330 +image files. \texttt{CORTEX} implements more comprehensive hearing than
   3.331 +any other creature simulation system available.
   3.332 +
   3.333 +\item \texttt{CORTEX} supports any number of creatures and any number of
   3.334 +senses. Time in \texttt{CORTEX} dilates so that the simulated creatures
   3.335 +always perceive a perfectly smooth flow of time, regardless of
   3.336 +the actual computational load.
   3.337 +\end{itemize}
   3.338 +
   3.339 +\texttt{CORTEX} is built on top of \texttt{jMonkeyEngine3}
   3.340 +(\cite{jmonkeyengine}), which is a video game engine designed to
   3.341 +create cross-platform 3D desktop games. \texttt{CORTEX} is mainly written
   3.342 +in clojure, a dialect of \texttt{LISP} that runs on the Java Virtual
   3.343 +Machine (JVM). The API for creating and simulating creatures and
   3.344 +senses is entirely expressed in clojure, though many senses are
   3.345 +implemented at the layer of jMonkeyEngine or below. For example,
   3.346 +for the sense of hearing I use a layer of clojure code on top of a
   3.347 +layer of java JNI bindings that drive a layer of \texttt{C++} code which
   3.348 +implements a modified version of \texttt{OpenAL} to support multiple
   3.349 +listeners. \texttt{CORTEX} is the only simulation environment that I know
   3.350 +of that can support multiple entities that can each hear the world
   3.351 +from their own perspective. Other senses also require a small layer
   3.352 +of Java code. \texttt{CORTEX} also uses \texttt{bullet}, a physics simulator
   3.353 +written in \texttt{C}.
   3.354 +
   3.355 +\begin{figure}[htb]
   3.356 +\centering
   3.357 +\includegraphics[width=12cm]{./images/blender-worm.png}
   3.358 +\caption{\label{worm-recognition-intro-2}Here is the worm from figure \ref{worm-intro} modeled in Blender, a free 3D-modeling program. Senses and joints are described using special nodes in Blender.}
   3.359 +\end{figure}
   3.360 +
   3.361 +Here are some things I anticipate that \texttt{CORTEX} might be used for:
   3.362 +
   3.363 +\begin{itemize}
   3.364 +\item exploring new ideas about sensory integration
   3.365 +\item distributed communication among swarm creatures
   3.366 +\item self-learning using free exploration,
   3.367 +\item evolutionary algorithms involving creature construction
   3.368 +\item exploration of exotic senses and effectors that are not possible
   3.369 +in the real world (such as telekinesis or a semantic sense)
   3.370 +\item imagination using subworlds
   3.371 +\end{itemize}
   3.372 +
    3.373 +During one test with \texttt{CORTEX}, I created 3,000 creatures, each
    3.374 +with its own independent senses, and ran them all at only 1/80 of real time.
   3.375 +In another test, I created a detailed model of my own hand,
   3.376 +equipped with a realistic distribution of touch (more sensitive at
   3.377 +the fingertips), as well as eyes and ears, and it ran at around 1/4
   3.378 +real time.
   3.379 +
   3.380 +\begin{sidewaysfigure}
   3.381 +\includegraphics[width=8.5in]{images/full-hand.png}
   3.382 +\caption{
   3.383 +I modeled my own right hand in Blender and rigged it with all the
   3.384 +senses that {\tt CORTEX} supports. My simulated hand has a
   3.385 +biologically inspired distribution of touch sensors. The senses are
   3.386 +displayed on the right (the red/black squares are raw sensory output), 
   3.387 +and the simulation is displayed on the
   3.388 +left. Notice that my hand is curling its fingers, that it can see
   3.389 +its own finger from the eye in its palm, and that it can feel its
   3.390 +own thumb touching its palm.}
   3.391 +\end{sidewaysfigure}
   3.392 +
   3.393 +\section{Designing \texttt{CORTEX}}
   3.394 +\label{sec-2}
   3.395 +
   3.396 +In this chapter, I outline the design decisions that went into
   3.397 +making \texttt{CORTEX}, along with some details about its implementation.
   3.398 +(A practical guide to getting started with \texttt{CORTEX}, which skips
   3.399 +over the history and implementation details presented here, is
   3.400 +provided in an appendix at the end of this thesis.)
   3.401 +
   3.402 +Throughout this project, I intended for \texttt{CORTEX} to be flexible and
   3.403 +extensible enough to be useful for other researchers who want to
   3.404 +test ideas of their own. To this end, wherever I have had to make
   3.405 +architectural choices about \texttt{CORTEX}, I have chosen to give as much
   3.406 +freedom to the user as possible, so that \texttt{CORTEX} may be used for
   3.407 +things I have not foreseen.
   3.408 +
   3.409 +\subsection{Building in simulation versus reality}
   3.410 +\label{sec-2-1}
   3.411 +The most important architectural decision of all is the choice to
   3.412 +use a computer-simulated environment in the first place! The world
   3.413 +is a vast and rich place, and for now simulations are a very poor
   3.414 +reflection of its complexity. It may be that there is a significant
   3.415 +qualitative difference between dealing with senses in the real
   3.416 +world and dealing with pale facsimiles of them in a simulation
   3.417 +(\cite{brooks-representation}). What are the advantages and
   3.418 +disadvantages of a simulation vs. reality?
   3.419 +
   3.420 +\subsubsection{Simulation}
   3.421 +\label{sec-2-1-1}
   3.422 +
   3.423 +The advantages of virtual reality are that when everything is a
   3.424 +simulation, experiments in that simulation are absolutely
   3.425 +reproducible. It's also easier to change the creature and
   3.426 +environment to explore new situations and different sensory
   3.427 +combinations.
   3.428 +
   3.429 +If the world is to be simulated on a computer, then not only do
   3.430 +you have to worry about whether the creature's senses are rich
   3.431 +enough to learn from the world, but whether the world itself is
   3.432 +rendered with enough detail and realism to give enough working
   3.433 +material to the creature's senses. To name just a few
   3.434 +difficulties facing modern physics simulators: destructibility of
   3.435 +the environment, simulation of water/other fluids, large areas,
   3.436 +nonrigid bodies, lots of objects, smoke. I don't know of any
   3.437 +computer simulation that would allow a creature to take a rock
   3.438 +and grind it into fine dust, then use that dust to make a clay
   3.439 +sculpture, at least not without spending years calculating the
   3.440 +interactions of every single small grain of dust. Maybe a
   3.441 +simulated world with today's limitations doesn't provide enough
   3.442 +richness for real intelligence to evolve.
   3.443 +
   3.444 +\subsubsection{Reality}
   3.445 +\label{sec-2-1-2}
   3.446 +
   3.447 +The other approach for playing with senses is to hook your
   3.448 +software up to real cameras, microphones, robots, etc., and let it
   3.449 +loose in the real world. This has the advantage of eliminating
   3.450 +concerns about simulating the world at the expense of increasing
   3.451 +the complexity of implementing the senses. Instead of just
   3.452 +grabbing the current rendered frame for processing, you have to
   3.453 +use an actual camera with real lenses and interact with photons to
   3.454 +get an image. It is much harder to change the creature, which is
   3.455 +now partly a physical robot of some sort, since doing so involves
   3.456 +changing things around in the real world instead of modifying
   3.457 +lines of code. While the real world is very rich and definitely
   3.458 +provides enough stimulation for intelligence to develop (as
   3.459 +evidenced by our own existence), it is also uncontrollable in the
   3.460 +sense that a particular situation cannot be recreated perfectly or
   3.461 +saved for later use. It is harder to conduct Science because it is
   3.462 +harder to repeat an experiment. The worst thing about using the
   3.463 +real world instead of a simulation is the matter of time. Instead
   3.464 +of simulated time you get the constant and unstoppable flow of
   3.465 +real time. This severely limits the sorts of software you can use
   3.466 +to program an AI, because all sense inputs must be handled in real
   3.467 +time. Complicated ideas may have to be implemented in hardware or
   3.468 +may simply be impossible given the current speed of our
   3.469 +processors. Contrast this with a simulation, in which the flow of
   3.470 +time in the simulated world can be slowed down to accommodate the
   3.471 +limitations of the creature's programming. In terms of cost, doing
   3.472 +everything in software is far cheaper than building custom
   3.473 +real-time hardware. All you need is a laptop and some patience.
   3.474 +
    3.475 +\subsection{Simulated time enables rapid prototyping \& simple programs}
   3.476 +\label{sec-2-2}
   3.477 +
   3.478 +I envision \texttt{CORTEX} being used to support rapid prototyping and
   3.479 +iteration of ideas. Even if I could put together a well constructed
   3.480 +kit for creating robots, it would still not be enough because of
   3.481 +the scourge of real-time processing. Anyone who wants to test their
   3.482 +ideas in the real world must always worry about getting their
   3.483 +algorithms to run fast enough to process information in real time.
   3.484 +The need for real time processing only increases if multiple senses
   3.485 +are involved. In the extreme case, even simple algorithms will have
   3.486 +to be accelerated by ASIC chips or FPGAs, turning what would
   3.487 +otherwise be a few lines of code and a 10x speed penalty into a
   3.488 +multi-month ordeal. For this reason, \texttt{CORTEX} supports
   3.489 +\emph{time-dilation}, which scales back the framerate of the simulation
   3.490 +in proportion to the amount of processing each frame. From the
   3.491 +perspective of the creatures inside the simulation, time always
   3.492 +appears to flow at a constant rate, regardless of how complicated
   3.493 +the environment becomes or how many creatures are in the
   3.494 +simulation. The cost is that \texttt{CORTEX} can sometimes run slower than
   3.495 +real time. Time dilation works both ways, however --- simulations
   3.496 +of very simple creatures in \texttt{CORTEX} generally run at 40x real-time
   3.497 +on my machine!
   3.498 +
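         +The principle is simple enough to sketch in a few lines: instead
         +of advancing the physics simulation by the wall-clock time elapsed
         +since the last frame, always advance it by the same fixed
         +timestep, however long the frame actually took to compute. The
         +functions \texttt{step-physics!} and \texttt{process-senses!}
         +below are hypothetical; this is a sketch of the idea, not
         +\texttt{CORTEX}'s actual game loop.
         +
         +\begin{listing}
         +\begin{verbatim}
         +(def simulated-timestep (/ 1 60)) ; simulated seconds per frame
         +
         +(defn advance-world!
         +  "Advance the simulation by one fixed quantum of simulated
         +   time, regardless of how much real time the frame took.
         +   Sketch; step-physics! and process-senses! are hypothetical."
         +  [world creatures]
         +  (step-physics! world simulated-timestep)
         +  (doseq [creature creatures]
         +    (process-senses! creature)))
         +\end{verbatim}
         +\caption{A sketch of time dilation: the simulated timestep is decoupled from wall-clock frame time.}
         +\end{listing}
         +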
   3.499 +\subsection{All sense organs are two-dimensional surfaces}
   3.500 +\label{sec-2-3}
   3.501 +
   3.502 +If \texttt{CORTEX} is to support a wide variety of senses, it would help
   3.503 +to have a better understanding of what a sense actually is! While
   3.504 +vision, touch, and hearing all seem like they are quite different
   3.505 +things, I was surprised to learn during the course of this thesis
   3.506 +that they (and all physical senses) can be expressed as exactly the
   3.507 +same mathematical object!
   3.508 +
   3.509 +Human beings are three-dimensional objects, and the nerves that
   3.510 +transmit data from our various sense organs to our brain are
   3.511 +essentially one-dimensional. This leaves up to two dimensions in
   3.512 +which our sensory information may flow. For example, imagine your
   3.513 +skin: it is a two-dimensional surface around a three-dimensional
   3.514 +object (your body). It has discrete touch sensors embedded at
   3.515 +various points, and the density of these sensors corresponds to the
   3.516 +sensitivity of that region of skin. Each touch sensor connects to a
   3.517 +nerve, all of which eventually are bundled together as they travel
   3.518 +up the spinal cord to the brain. Intersect the spinal nerves with a
   3.519 +guillotining plane and you will see all of the sensory data of the
   3.520 +skin revealed in a roughly circular two-dimensional image which is
   3.521 +the cross section of the spinal cord. Points on this image that are
   3.522 +close together in this circle represent touch sensors that are
   3.523 +\emph{probably} close together on the skin, although there is of course
   3.524 +some cutting and rearrangement that has to be done to transfer the
   3.525 +complicated surface of the skin onto a two dimensional image.
   3.526 +
   3.527 +Most human senses consist of many discrete sensors of various
   3.528 +properties distributed along a surface at various densities. For
   3.529 +skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
   3.530 +disks, and Ruffini's endings (\cite{textbook901}), which detect
   3.531 +pressure and vibration of various intensities. For ears, it is the
   3.532 +stereocilia distributed along the basilar membrane inside the
   3.533 +cochlea; each one is sensitive to a slightly different frequency of
   3.534 +sound. For eyes, it is rods and cones distributed along the surface
   3.535 +of the retina. In each case, we can describe the sense with a
   3.536 +surface and a distribution of sensors along that surface.
   3.537 +
   3.538 +In fact, almost every human sense can be effectively described in
   3.539 +terms of a surface containing embedded sensors. If the sense had
   3.540 +any more dimensions, then there wouldn't be enough room in the
   3.541 +spinal cord to transmit the information!
   3.542 +
   3.543 +Therefore, \texttt{CORTEX} must support the ability to create objects and
   3.544 +then be able to ``paint'' points along their surfaces to describe
   3.545 +each sense. 
   3.546 +
   3.547 +Fortunately this idea is already a well known computer graphics
   3.548 +technique called \emph{UV-mapping}. In UV-mapping, the three-dimensional
   3.549 +surface of a model is cut and smooshed until it fits on a
   3.550 +two-dimensional image. You paint whatever you want on that image,
   3.551 +and when the three-dimensional shape is rendered in a game the
   3.552 +smooshing and cutting is reversed and the image appears on the
   3.553 +three-dimensional object.
   3.554 +
   3.555 +To make a sense, interpret the UV-image as describing the
    3.556 +distribution of that sense's sensors. To get different types of
   3.557 +sensors, you can either use a different color for each type of
   3.558 +sensor, or use multiple UV-maps, each labeled with that sensor
   3.559 +type. I generally use a white pixel to mean the presence of a
   3.560 +sensor and a black pixel to mean the absence of a sensor, and use
   3.561 +one UV-map for each sensor-type within a given sense.
   3.562 +
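         +In code, recovering a sensor distribution from such a UV-image
         +reduces to collecting the coordinates of its white pixels. The
         +following sketch assumes the image is available as a Java
         +\texttt{BufferedImage} and that \texttt{white?} is a hypothetical
         +predicate that thresholds a packed RGB value.
         +
         +\begin{listing}
         +\begin{verbatim}
         +(defn sensor-coordinates
         +  "Return the [x y] UV coordinates of every white pixel in a
         +   sensor-distribution image. Sketch; white? is a hypothetical
         +   predicate over a packed RGB value."
         +  [#^java.awt.image.BufferedImage image]
         +  (for [x (range (.getWidth image))
         +        y (range (.getHeight image))
         +        :when (white? (.getRGB image x y))]
         +    [x y]))
         +\end{verbatim}
         +\caption{A sketch of reading a sensor distribution from a UV-image by collecting the coordinates of its white pixels.}
         +\end{listing}
         +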
   3.563 +\begin{figure}[htb]
   3.564 +\centering
   3.565 +\includegraphics[width=10cm]{./images/finger-UV.png}
    3.566 +\caption{\label{finger-UV}The UV-map for an elongated icosphere. The white dots each represent a touch sensor. They are dense in the regions that describe the tip of the finger, and less dense along the dorsal side of the finger opposite the tip.}
   3.567 +\end{figure}
   3.568 +
   3.569 +\begin{figure}[htb]
   3.570 +\centering
   3.571 +\includegraphics[width=10cm]{./images/finger-1.png}
   3.572 +\caption{\label{finger-side-view}Ventral side of the UV-mapped finger. Note the density of touch sensors at the tip.}
   3.573 +\end{figure}
   3.574 +
   3.575 +\subsection{Video game engines provide ready-made physics and shading}
   3.576 +\label{sec-2-4}
   3.577 +
   3.578 +I did not need to write my own physics simulation code or shader to
   3.579 +build \texttt{CORTEX}. Doing so would lead to a system that is impossible
   3.580 +for anyone but myself to use anyway. Instead, I use a video game
   3.581 +engine as a base and modify it to accommodate the additional needs
   3.582 +of \texttt{CORTEX}. Video game engines are an ideal starting point to
   3.583 +build \texttt{CORTEX}, because they are not far from being creature
   3.584 +building systems themselves.
   3.585 +
   3.586 +First off, general purpose video game engines come with a physics
   3.587 +engine and lighting / sound system. The physics system provides
   3.588 +tools that can be co-opted to serve as touch, proprioception, and
   3.589 +muscles. Because some games support split screen views, a good
   3.590 +video game engine will allow you to efficiently create multiple
   3.591 +cameras in the simulated world that can be used as eyes. Video game
   3.592 +systems offer integrated asset management for things like textures
   3.593 +and creature models, providing an avenue for defining creatures.
   3.594 +They also understand UV-mapping, because this technique is used to
   3.595 +apply a texture to a model. Finally, because video game engines
   3.596 +support a large number of developers, as long as \texttt{CORTEX} doesn't
   3.597 +stray too far from the base system, other researchers can turn to
   3.598 +this community for help when doing their research.
   3.599 +
   3.600 +\subsection{\texttt{CORTEX} is based on jMonkeyEngine3}
   3.601 +\label{sec-2-5}
   3.602 +
   3.603 +While preparing to build \texttt{CORTEX} I studied several video game
   3.604 +engines to see which would best serve as a base. The top contenders
   3.605 +were:
   3.606 +
   3.607 +\begin{description}
    3.608 +\item[{\href{http://www.idsoftware.com}{Quake II}/\href{http://www.bytonic.de/html/jake2.html}{Jake2}}] The Quake II engine was designed by id Software
    3.609 +in 1997. All the source code was later released by id Software
    3.610 +under the GPL, and as a result it has
   3.611 +been ported to many different languages. This engine was
   3.612 +famous for its advanced use of realistic shading and it had
   3.613 +decent and fast physics simulation. The main advantage of the
   3.614 +Quake II engine is its simplicity, but I ultimately rejected
   3.615 +it because the engine is too tied to the concept of a
   3.616 +first-person shooter game. One of the problems I had was that
   3.617 +there does not seem to be any easy way to attach multiple
   3.618 +cameras to a single character. There are also several physics
   3.619 +clipping issues that are corrected in a way that only applies
   3.620 +to the main character and do not apply to arbitrary objects.
   3.621 +
    3.622 +\item[{\href{http://source.valvesoftware.com/}{Source Engine}}] The Source Engine evolved from the Quake II
   3.623 +and Quake I engines and is used by Valve in the Half-Life
   3.624 +series of games. The physics simulation in the Source Engine
   3.625 +is quite accurate and probably the best out of all the engines
   3.626 +I investigated. There is also an extensive community actively
   3.627 +working with the engine. However, applications that use the
   3.628 +Source Engine must be written in C++, the code is not open, it
   3.629 +only runs on Windows, and the tools that come with the SDK to
   3.630 +handle models and textures are complicated and awkward to use.
   3.631 +
   3.632 +\item[{\href{http://jmonkeyengine.com/}{jMonkeyEngine3}}] jMonkeyEngine3 is a new library for creating
   3.633 +games in Java. It uses OpenGL to render to the screen and uses
    3.634 +scene graphs to avoid drawing things that do not appear on the
   3.635 +screen. It has an active community and several games in the
   3.636 +pipeline. The engine was not built to serve any particular
   3.637 +game but is instead meant to be used for any 3D game.
   3.638 +\end{description}
   3.639 +
   3.640 +I chose jMonkeyEngine3 because it had the most features out of all
   3.641 +the free projects I looked at, and because I could then write my
    3.642 +code in clojure, a dialect of \texttt{LISP} that runs on the JVM.
   3.643 +
   3.644 +\subsection{\texttt{CORTEX} uses Blender to create creature models}
   3.645 +\label{sec-2-6}
   3.646 +
   3.647 +For the simple worm-like creatures I will use later on in this
   3.648 +thesis, I could define a simple API in \texttt{CORTEX} that would allow
   3.649 +one to create boxes, spheres, etc., and leave that API as the sole
   3.650 +way to create creatures. However, for \texttt{CORTEX} to truly be useful
   3.651 +for other projects, it needs a way to construct complicated
   3.652 +creatures. If possible, it would be nice to leverage work that has
   3.653 +already been done by the community of 3D modelers, or at least
   3.654 +enable people who are talented at modeling but not programming to
   3.655 +design \texttt{CORTEX} creatures.
   3.656 +
   3.657 +Therefore I use Blender, a free 3D modeling program, as the main
   3.658 +way to create creatures in \texttt{CORTEX}. However, the creatures modeled
   3.659 +in Blender must also be simple to simulate in jMonkeyEngine3's game
   3.660 +engine, and must also be easy to rig with \texttt{CORTEX}'s senses. I
   3.661 +accomplish this with extensive use of Blender's ``empty nodes.''
   3.662 +
   3.663 +Empty nodes have no mass, physical presence, or appearance, but
   3.664 +they can hold metadata and have names. I use a tree structure of
    3.665 +empty nodes to specify senses in the following manner (a hypothetical example follows the list):
   3.666 +
   3.667 +\begin{itemize}
   3.668 +\item Create a single top-level empty node whose name is the name of
   3.669 +the sense.
   3.670 +\item Add empty nodes which each contain meta-data relevant to the
   3.671 +sense, including a UV-map describing the number/distribution of
   3.672 +sensors if applicable.
   3.673 +\item Make each empty-node the child of the top-level node.
   3.674 +\end{itemize}
   3.675 +
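         +As a purely hypothetical example, an ``eyes'' sense for a two-eyed
         +creature might be specified by a tree like the following, where
         +the metadata on each child locates one eye and describes its
         +retina:
         +
         +\begin{verbatim}
         +creature
         + |-- eyes           <- top-level empty node, named for the sense
         +      |-- eye.L     <- meta-data: retina UV-map, position, ...
         +      |-- eye.R     <- meta-data: retina UV-map, position, ...
         +\end{verbatim}
         +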
   3.676 +\begin{figure}[htb]
   3.677 +\centering
   3.678 +\includegraphics[width=10cm]{./images/empty-sense-nodes.png}
   3.679 +\caption{\label{sense-nodes}An example of annotating a creature model with empty nodes to describe the layout of senses. There are multiple empty nodes which each describe the position of muscles, ears, eyes, or joints.}
   3.680 +\end{figure}
   3.681 +
   3.682 +\subsection{Bodies are composed of segments connected by joints}
   3.683 +\label{sec-2-7}
   3.684 +
   3.685 +Blender is a general purpose animation tool, which has been used in
   3.686 +the past to create high quality movies such as Sintel
   3.687 +(\cite{blender}). Though Blender can model and render even
   3.688 +complicated things like water, it is crucial to keep models that
   3.689 +are meant to be simulated as creatures simple. \texttt{Bullet}, which
    3.690 +\texttt{CORTEX} uses through jMonkeyEngine3, is a rigid-body physics
   3.691 +system. This offers a compromise between the expressiveness of a
   3.692 +game level and the speed at which it can be simulated, and it means
   3.693 +that creatures should be naturally expressed as rigid components
   3.694 +held together by joint constraints.
   3.695 +
   3.696 +But humans are more like a squishy bag wrapped around some hard
   3.697 +bones which define the overall shape. When we move, our skin bends
   3.698 +and stretches to accommodate the new positions of our bones.
   3.699 +
   3.700 +One way to make bodies composed of rigid pieces connected by joints
    3.701 +\emph{seem} more human-like is to use an \emph{armature} (or \emph{rigging})
    3.702 +system, which defines an overall ``body mesh'' and specifies how the
    3.703 +mesh deforms as a function of the position of each ``bone,'' which
    3.704 +is a standard rigid body. This technique is used extensively to
   3.705 +model humans and create realistic animations. It is not a good
   3.706 +technique for physical simulation because it is a lie -- the skin
   3.707 +is not a physical part of the simulation and does not interact with
    3.708 +any objects in the world or itself. Objects will pass right through
   3.709 +the skin until they come in contact with the underlying bone, which
   3.710 +is a physical object. Without simulating the skin, the sense of
   3.711 +touch has little meaning, and the creature's own vision will lie to
   3.712 +it about the true extent of its body. Simulating the skin as a
   3.713 +physical object requires some way to continuously update the
   3.714 +physical model of the skin along with the movement of the bones,
   3.715 +which is unacceptably slow compared to rigid body simulation.
   3.716 +
   3.717 +Therefore, instead of using the human-like ``bony meatbag''
   3.718 +approach, I decided to base my body plans on multiple solid objects
   3.719 +that are connected by joints, inspired by the robot \texttt{EVE} from the
   3.720 +movie WALL-E.
   3.721 +
   3.722 +\begin{figure}[htb]
   3.723 +\centering
   3.724 +\includegraphics[width=10cm]{./images/Eve.jpg}
   3.725 +\caption{\texttt{EVE} from the movie WALL-E.  This body plan turns out to be much better suited to my purposes than a more human-like one.}
   3.726 +\end{figure}
   3.727 +
   3.728 +\texttt{EVE}'s body is composed of several rigid components that are held
   3.729 +together by invisible joint constraints. This is what I mean by
   3.730 +\emph{eve-like}. The main reason that I use eve-like bodies is for
   3.731 +simulation efficiency, and so that there will be correspondence
   3.732 +between the AI's senses and the physical presence of its body. Each
   3.733 +individual section is simulated by a separate rigid body that
   3.734 +corresponds exactly with its visual representation and does not
   3.735 +change. Sections are connected by invisible joints that are well
   3.736 +supported in jMonkeyEngine3. Bullet, the physics backend for
   3.737 +jMonkeyEngine3, can efficiently simulate hundreds of rigid bodies
   3.738 +connected by joints. Just because sections are rigid does not mean
   3.739 +they have to stay as one piece forever; they can be dynamically
   3.740 +replaced with multiple sections to simulate splitting in two. This
   3.741 +could be used to simulate retractable claws or \texttt{EVE}'s hands, which
   3.742 +are able to coalesce into one object in the movie.
   3.743 +
   3.744 +\subsubsection{Solidifying/Connecting a body}
   3.745 +\label{sec-2-7-1}
   3.746 +
   3.747 +\texttt{CORTEX} creates a creature in two steps: first, it traverses the
   3.748 +nodes in the Blender file and creates physical representations for
   3.749 +any of them that have mass defined in their Blender meta-data.
   3.750 +
   3.751 +\begin{listing}
   3.752 +\begin{verbatim}
   3.753 +(defn physical!
   3.754 +  "Iterate through the nodes in creature and make them real physical
   3.755 +   objects in the simulation."
   3.756 +  [#^Node creature]
   3.757 +  (dorun
   3.758 +   (map
   3.759 +    (fn [geom]
   3.760 +      (let [physics-control
   3.761 +            (RigidBodyControl.
   3.762 +             (HullCollisionShape.
   3.763 +              (.getMesh geom))
   3.764 +             (if-let [mass (meta-data geom "mass")]
   3.765 +               (float mass) (float 1)))]
   3.766 +        (.addControl geom physics-control)))
    3.767 +    (filter #(isa? (class %) Geometry)
   3.768 +            (node-seq creature)))))
   3.769 +\end{verbatim}
   3.770 +\caption{\label{physical}Program for iterating through the nodes in a Blender file and generating physical jMonkeyEngine3 objects with mass and a matching physics shape.}
   3.771 +\end{listing}
   3.772 +
   3.773 +The next step to making a proper body is to connect those pieces
   3.774 +together with joints. jMonkeyEngine has a large array of joints
   3.775 +available via \texttt{bullet}, such as Point2Point, Cone, Hinge, and a
   3.776 +generic Six Degree of Freedom joint, with or without spring
   3.777 +restitution. 
   3.778 +
   3.779 +Joints are treated a lot like proper senses, in that there is a
   3.780 +top-level empty node named ``joints'' whose children each
   3.781 +represent a joint.
   3.782 +
   3.783 +\begin{figure}[htb]
   3.784 +\centering
   3.785 +\includegraphics[width=10cm]{./images/hand-screenshot1.png}
   3.786 +\caption{\label{blender-hand}View of the hand model in Blender showing the main ``joints'' node (highlighted in yellow) and its children which each represent a joint in the hand. Each joint node has metadata specifying what sort of joint it is.}
   3.787 +\end{figure}
   3.788 +
   3.789 +
   3.790 +\texttt{CORTEX}'s procedure for binding the creature together with joints
   3.791 +is as follows:
   3.792 +
   3.793 +\begin{itemize}
   3.794 +\item Find the children of the ``joints'' node.
   3.795 +\item Determine the two spatials the joint is meant to connect.
   3.796 +\item Create the joint based on the meta-data of the empty node.
   3.797 +\end{itemize}
   3.798 +
    3.799 +The higher-order function \texttt{sense-nodes} from \texttt{cortex.sense}
   3.800 +simplifies finding the joints based on their parent ``joints''
   3.801 +node.
   3.802 +
   3.803 +\begin{listing}
   3.804 +\begin{verbatim}
   3.805 +(defn sense-nodes
   3.806 +  "For some senses there is a special empty Blender node whose
   3.807 +   children are considered markers for an instance of that sense. This
   3.808 +   function generates functions to find those children, given the name
   3.809 +   of the special parent node."
   3.810 +  [parent-name]
   3.811 +  (fn [#^Node creature]
   3.812 +    (if-let [sense-node (.getChild creature parent-name)]
   3.813 +      (seq (.getChildren sense-node)) [])))
   3.814 +
   3.815 +(def
   3.816 +  ^{:doc "Return the children of the creature's \"joints\" node."
   3.817 +    :arglists '([creature])}
   3.818 +  joints
   3.819 +  (sense-nodes "joints"))
   3.820 +\end{verbatim}
    3.821 +\caption{\label{get-empty-nodes}Retrieving the child empty nodes of a single named empty node is a common pattern in \texttt{CORTEX}. Further instances of this technique for the senses will be omitted.}
   3.822 +\end{listing}
   3.823 +
   3.824 +To find a joint's targets, \texttt{CORTEX} creates a small cube, centered
   3.825 +around the empty-node, and grows the cube exponentially until it
   3.826 +intersects two physical objects. The objects are ordered according
   3.827 +to the joint's rotation, with the first one being the object that
    3.828 +has more negative coordinates in the joint's reference frame.
    3.829 +Because the objects must be physical, the empty-node itself
    3.830 +escapes detection; for the same reason, \texttt{joint-targets}
    3.831 +must be called \emph{after} \texttt{physical!}.
   3.832 +
   3.833 +\begin{listing}
   3.834 +\begin{verbatim}
   3.835 +(defn joint-targets
   3.836 +  "Return the two closest two objects to the joint object, ordered
   3.837 +  from bottom to top according to the joint's rotation."
   3.838 +  [#^Node parts #^Node joint]
   3.839 +  (loop [radius (float 0.01)]
   3.840 +    (let [results (CollisionResults.)]
   3.841 +      (.collideWith
   3.842 +       parts
   3.843 +       (BoundingBox. (.getWorldTranslation joint)
   3.844 +                     radius radius radius) results)
   3.845 +      (let [targets
   3.846 +            (distinct
   3.847 +             (map  #(.getGeometry %) results))]
   3.848 +        (if (>= (count targets) 2)
   3.849 +          (sort-by
   3.850 +           #(let [joint-ref-frame-position
   3.851 +                  (jme-to-blender
   3.852 +                   (.mult
   3.853 +                    (.inverse (.getWorldRotation joint))
   3.854 +                    (.subtract (.getWorldTranslation %)
   3.855 +                               (.getWorldTranslation joint))))]
   3.856 +              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))                  
   3.857 +           (take 2 targets))
   3.858 +          (recur (float (* radius 2))))))))
   3.859 +\end{verbatim}
    3.860 +\caption{\label{joint-targets}Program to find the targets of a joint node by exponential growth of a search cube.}
   3.861 +\end{listing}
   3.862 +
   3.863 +Once \texttt{CORTEX} finds all joints and targets, it creates them using
   3.864 +a dispatch on the metadata of each joint node.
   3.865 +
   3.866 +\begin{listing}
   3.867 +\begin{verbatim}
   3.868 +(defmulti joint-dispatch
   3.869 +  "Translate Blender pseudo-joints into real JME joints."
   3.870 +  (fn [constraints & _] 
   3.871 +    (:type constraints)))
   3.872 +
   3.873 +(defmethod joint-dispatch :point
   3.874 +  [constraints control-a control-b pivot-a pivot-b rotation]
   3.875 +  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
   3.876 +    (.setLinearLowerLimit Vector3f/ZERO)
   3.877 +    (.setLinearUpperLimit Vector3f/ZERO)))
   3.878 +
   3.879 +(defmethod joint-dispatch :hinge
   3.880 +  [constraints control-a control-b pivot-a pivot-b rotation]
   3.881 +  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
   3.882 +        [limit-1 limit-2] (:limit constraints)
   3.883 +        hinge-axis (.mult rotation (blender-to-jme axis))]
   3.884 +    (doto (HingeJoint. control-a control-b pivot-a pivot-b 
   3.885 +                       hinge-axis hinge-axis)
   3.886 +      (.setLimit limit-1 limit-2))))
   3.887 +
   3.888 +(defmethod joint-dispatch :cone
   3.889 +  [constraints control-a control-b pivot-a pivot-b rotation]
   3.890 +  (let [limit-xz (:limit-xz constraints)
   3.891 +        limit-xy (:limit-xy constraints)
   3.892 +        twist    (:twist constraints)]
   3.893 +    (doto (ConeJoint. control-a control-b pivot-a pivot-b
   3.894 +                      rotation rotation)
   3.895 +      (.setLimit (float limit-xz) (float limit-xy)
   3.896 +                 (float twist)))))
   3.897 +\end{verbatim}
   3.898 +\caption{\label{joint-dispatch}Program to dispatch on Blender metadata and create joints suitable for physical simulation.}
   3.899 +\end{listing}
   3.900 +
   3.901 +All that is left for joints is to combine the above pieces into
   3.902 +something that can operate on the collection of nodes that a
   3.903 +Blender file represents.
   3.904 +
   3.905 +\begin{listing}
   3.906 +\begin{verbatim}
   3.907 +(defn connect
   3.908 +  "Create a joint between 'obj-a and 'obj-b at the location of
   3.909 +  'joint. The type of joint is determined by the metadata on 'joint.
   3.910 +
   3.911 +   Here are some examples:
   3.912 +   {:type :point}
   3.913 +   {:type :hinge  :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
   3.914 +   (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
   3.915 +
   3.916 +   {:type :cone :limit-xz 0
   3.917 +                :limit-xy 0
   3.918 +                :twist 0}   (use XZY rotation mode in Blender!)"
   3.919 +  [#^Node obj-a #^Node obj-b #^Node joint]
   3.920 +  (let [control-a (.getControl obj-a RigidBodyControl)
   3.921 +        control-b (.getControl obj-b RigidBodyControl)
   3.922 +        joint-center (.getWorldTranslation joint)
   3.923 +        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
   3.924 +        pivot-a (world-to-local obj-a joint-center)
   3.925 +        pivot-b (world-to-local obj-b joint-center)]
   3.926 +    (if-let
   3.927 +        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
   3.928 +      ;; A side-effect of creating a joint registers
   3.929 +      ;; it with both physics objects which in turn
   3.930 +      ;; will register the joint with the physics system
   3.931 +      ;; when the simulation is started.
   3.932 +        (joint-dispatch constraints
   3.933 +                        control-a control-b
   3.934 +                        pivot-a pivot-b
   3.935 +                        joint-rotation))))
   3.936 +\end{verbatim}
   3.937 +\caption{\label{connect}Program to completely create a joint given information from a Blender file.}
   3.938 +\end{listing}
   3.939 +
   3.940 +In general, whenever \texttt{CORTEX} exposes a sense (or in this case
   3.941 +physicality), it provides a function of the form \texttt{sense!}, which
   3.942 +takes in a collection of nodes and augments it to support that
   3.943 +sense. The function returns any controls necessary to use that
   3.944 +sense. In this case \texttt{body!} creates a physical body and returns no
   3.945 +control functions.
   3.946 +
   3.947 +\begin{listing}
   3.948 +\begin{verbatim}
   3.949 +(defn joints!
   3.950 +  "Connect the solid parts of the creature with physical joints. The
   3.951 +   joints are taken from the \"joints\" node in the creature."
   3.952 +  [#^Node creature]
   3.953 +  (dorun
   3.954 +   (map
   3.955 +    (fn [joint]
   3.956 +      (let [[obj-a obj-b] (joint-targets creature joint)]
   3.957 +        (connect obj-a obj-b joint)))
   3.958 +    (joints creature))))
         +
   3.959 +(defn body!
   3.960 +  "Endow the creature with a physical body connected with joints.  The
   3.961 +   particulars of the joints and the masses of each body part are
   3.962 +   determined in Blender."
   3.963 +  [#^Node creature]
   3.964 +  (physical! creature)
   3.965 +  (joints! creature))
   3.966 +\end{verbatim}
   3.967 +\caption{\label{joints}Program to give joints to a creature.}
   3.968 +\end{listing}
   3.969 +
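         +As a usage sketch: assuming the creature has already been loaded
         +from its Blender file (here \texttt{load-blender-model} is a
         +hypothetical stand-in for whatever loader is in use), endowing it
         +with a physical body is a single call.
         +
         +\begin{verbatim}
         +;; hypothetical usage sketch -- `load-blender-model' and the path
         +;; below are stand-ins, not part of the listings above.
         +(let [creature (load-blender-model "Models/test-creature/hand.blend")]
         +  (body! creature))  ; builds solid parts and joints; returns no controls
         +\end{verbatim}
         +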
   3.970 +All of the code you have just seen amounts to only 130 lines, yet
   3.971 +because it builds on top of Blender and jMonkeyEngine3, those few
   3.972 +lines pack quite a punch!
   3.973 +
   3.974 +The hand from figure \ref{blender-hand}, which was modeled after
   3.975 +my own right hand, can now be given joints and simulated as a
   3.976 +creature.
   3.977 +
   3.978 +\begin{figure}[htb]
   3.979 +\centering
   3.980 +\includegraphics[width=15cm]{./images/physical-hand.png}
   3.981 +\caption{\label{physical-hand}With the ability to create physical creatures from Blender, \texttt{CORTEX} gets one step closer to becoming a full creature simulation environment.}
   3.982 +\end{figure}
   3.983 +
   3.984 +\subsection{Sight reuses standard video game components\ldots{}}
   3.985 +\label{sec-2-8}
   3.986 +
   3.987 +Vision is one of the most important senses for humans, so I need to
   3.988 +build a simulated sense of vision for my AI. I will do this with
   3.989 +simulated eyes. Each eye can be independently moved and should see
   3.990 +its own version of the world depending on where it is.
   3.991 +
   3.992 +Making these simulated eyes a reality is simple because
   3.993 +jMonkeyEngine already contains extensive support for multiple views
   3.994 +of the same 3D simulated world. jMonkeyEngine has this support
   3.995 +because it is necessary to create games with split-screen
   3.996 +views. Multiple views are also used to create
   3.997 +efficient pseudo-reflections by rendering the scene from a certain
   3.998 +perspective and then projecting it back onto a surface in the 3D
   3.999 +world.
  3.1000 +
  3.1001 +\begin{figure}[htb]
  3.1002 +\centering
  3.1003 +\includegraphics[width=10cm]{./images/goldeneye-4-player.png}
  3.1004 +\caption{\label{goldeneye}jMonkeyEngine supports multiple views to enable split-screen games, like GoldenEye, which was one of the first games to use split-screen views.}
  3.1005 +\end{figure}
  3.1006 +
  3.1007 +\subsubsection{A Brief Description of jMonkeyEngine's Rendering Pipeline}
  3.1008 +\label{sec-2-8-1}
  3.1009 +
  3.1010 +jMonkeyEngine allows you to create a \texttt{ViewPort}, which represents a
  3.1011 +view of the simulated world. You can create as many of these as you
  3.1012 +want. Every frame, the \texttt{RenderManager} iterates through each
  3.1013 +\texttt{ViewPort}, rendering the scene on the GPU. For each \texttt{ViewPort} there
  3.1014 +is a \texttt{FrameBuffer} which represents the rendered image in the GPU.
  3.1015 +
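         +As a sketch of this API (assuming a running jMonkeyEngine3
         +application; \texttt{render-manager} and \texttt{root-node} are
         +assumed to come from that application), an extra off-screen view
         +might be created like this:
         +
         +\begin{verbatim}
         +;; sketch: create an extra ViewPort that renders the scene into
         +;; its own FrameBuffer instead of onto the screen.
         +(let [cam  (Camera. 640 480)
         +      view (.createPreView render-manager "off-screen-view" cam)
         +      fb   (FrameBuffer. 640 480 1)]
         +  (.setDepthBuffer fb Image$Format/Depth)
         +  (.setColorBuffer fb Image$Format/RGBA8)
         +  (.setOutputFrameBuffer view fb)
         +  (.attachScene view root-node))
         +\end{verbatim}
         +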
  3.1016 +\begin{figure}[htb]
  3.1017 +\centering
  3.1018 +\includegraphics[width=10cm]{./images/diagram_rendermanager2.png}
  3.1019 +\caption{\label{rendermanagers}\texttt{ViewPorts} are cameras in the world. During each frame, the \texttt{RenderManager} records a snapshot of what each view is currently seeing; these snapshots are \texttt{FrameBuffer} objects.}
  3.1020 +\end{figure}
  3.1021 +
  3.1022 +Each \texttt{ViewPort} can have any number of attached \texttt{SceneProcessor}
  3.1023 +objects, which are called every time a new frame is rendered. A
  3.1024 +\texttt{SceneProcessor} receives its \texttt{ViewPort's} \texttt{FrameBuffer} and can do
  3.1025 +whatever it wants to the data.  Often this consists of invoking
  3.1026 +GPU-specific operations on the rendered image.  The \texttt{SceneProcessor} can
  3.1027 +also copy the GPU image data to RAM and process it with the CPU.
  3.1028 +
  3.1029 +\subsubsection{Appropriating Views for Vision}
  3.1030 +\label{sec-2-8-2}
  3.1031 +
  3.1032 +Each eye in the simulated creature needs its own \texttt{ViewPort} so
  3.1033 +that it can see the world from its own perspective. To this
  3.1034 +\texttt{ViewPort}, I add a \texttt{SceneProcessor} that feeds the visual data to
  3.1035 +any arbitrary continuation function for further processing. That
  3.1036 +continuation function may perform both CPU and GPU operations on
  3.1037 +the data. To make this easy for the continuation function, the
  3.1038 +\texttt{SceneProcessor} maintains appropriately sized buffers in RAM to
  3.1039 +hold the data. It does not do any copying from the GPU to the CPU
  3.1040 +itself, because that is a slow operation.
  3.1041 +
  3.1042 +\begin{listing}
  3.1043 +\begin{verbatim}
  3.1044 +(defn vision-pipeline
  3.1045 +  "Create a SceneProcessor object which wraps a vision processing
  3.1046 +  continuation function. The continuation is a function that takes 
  3.1047 +  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  3.1048 +  each of which has already been appropriately sized."
  3.1049 +  [continuation]
  3.1050 +  (let [byte-buffer (atom nil)
  3.1051 +        renderer (atom nil)
  3.1052 +        image (atom nil)]
  3.1053 +    (proxy [SceneProcessor] []
  3.1054 +      (initialize
  3.1055 +       [renderManager viewPort]
  3.1056 +       (let [cam (.getCamera viewPort)
  3.1057 +             width (.getWidth cam)
  3.1058 +             height (.getHeight cam)]
  3.1059 +         (reset! renderer (.getRenderer renderManager))
  3.1060 +         (reset! byte-buffer
  3.1061 +                 (BufferUtils/createByteBuffer
  3.1062 +                  (* width height 4)))
  3.1063 +         (reset! image (BufferedImage.
  3.1064 +                        width height
  3.1065 +                        BufferedImage/TYPE_4BYTE_ABGR))))
  3.1066 +      (isInitialized [] (not (nil? @byte-buffer)))
  3.1067 +      (reshape [_ _ _])
  3.1068 +      (preFrame [_])
  3.1069 +      (postQueue [_])
  3.1070 +      (postFrame
  3.1071 +       [#^FrameBuffer fb]
  3.1072 +       (.clear @byte-buffer)
  3.1073 +       (continuation @renderer fb @byte-buffer @image))
  3.1074 +      (cleanup []))))
  3.1075 +\end{verbatim}
  3.1076 +\caption{\label{pipeline-1}Function to make the rendered scene in jMonkeyEngine available for further processing.}
  3.1077 +\end{listing}
  3.1078 +
  3.1079 +The continuation function given to \texttt{vision-pipeline} above will be
  3.1080 +given a \texttt{Renderer} and three containers for image data. The
  3.1081 +\texttt{FrameBuffer} references the GPU image data, but the pixel data
  3.1082 +can not be used directly on the CPU. The \texttt{ByteBuffer} and
  3.1083 +\texttt{BufferedImage} are initially ``empty'' but are sized to hold the
  3.1084 +data in the \texttt{FrameBuffer}. I call transferring the GPU image data
  3.1085 +to the CPU structures ``mixing'' the image data.
  3.1086 +
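         +A minimal sketch of such a continuation, assuming jMonkeyEngine3's
         +\texttt{Renderer.readFrameBuffer} method and its \texttt{Screenshots}
         +utility class (the helper \texttt{CORTEX} actually uses is
         +\texttt{BufferedImage!}, which appears in listing \ref{vision-kernel}):
         +
         +\begin{verbatim}
         +;; sketch: a continuation that "mixes" the GPU image into the
         +;; pre-sized CPU-side buffers.
         +(defn mix-to-image
         +  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
         +  (.readFrameBuffer r fb bb)             ; GPU -> RAM bytes
         +  (Screenshots/convertScreenShot bb bi)  ; raw bytes -> BufferedImage
         +  bi)
         +\end{verbatim}
         +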
  3.1087 +\subsubsection{Optical sensor arrays are described with images and referenced with metadata}
  3.1088 +\label{sec-2-8-3}
  3.1089 +
  3.1090 +The vision pipeline described above handles the flow of rendered
  3.1091 +images. Now, \texttt{CORTEX} needs simulated eyes to serve as the source
  3.1092 +of these images.
  3.1093 +
  3.1094 +Eyes are described in Blender in the same way as joints: they
  3.1095 +are zero-dimensional empty objects with no geometry, whose local
  3.1096 +coordinate systems determine the orientation of the resulting eyes.
  3.1097 +All eyes are children of a parent node named "eyes" just as all
  3.1098 +joints have a parent named "joints". An eye binds to the nearest
  3.1099 +physical object with \texttt{bind-sense}.
  3.1100 +
  3.1101 +\begin{listing}
  3.1102 +\begin{verbatim}
  3.1103 +(defn add-eye!
  3.1104 +  "Create a Camera centered on the current position of 'eye which
  3.1105 +   follows the closest physical node in 'creature. The camera will
  3.1106 +   point in the X direction and use the Z vector as up as determined
  3.1107 +   by the rotation of these vectors in Blender coordinate space. Use
  3.1108 +   XZY rotation for the node in Blender."
  3.1109 +  [#^Node creature #^Spatial eye]
  3.1110 +  (let [target (closest-node creature eye)
  3.1111 +        [cam-width cam-height] 
  3.1112 +        ;;[640 480] ;; graphics card on laptop doesn't support
  3.1113 +                    ;; arbitrary dimensions.
  3.1114 +        (eye-dimensions eye)
  3.1115 +        cam (Camera. cam-width cam-height)
  3.1116 +        rot (.getWorldRotation eye)]
  3.1117 +    (.setLocation cam (.getWorldTranslation eye))
  3.1118 +    (.lookAtDirection
  3.1119 +     cam                           ; this part is not a mistake and
  3.1120 +     (.mult rot Vector3f/UNIT_X)   ; is consistent with using Z in
  3.1121 +     (.mult rot Vector3f/UNIT_Y))  ; Blender as the UP vector.
  3.1122 +    (.setFrustumPerspective
  3.1123 +     cam (float 45)
  3.1124 +     (float (/ (.getWidth cam) (.getHeight cam)))
  3.1125 +     (float 1)
  3.1126 +     (float 1000))
  3.1127 +    (bind-sense target cam) cam))
  3.1128 +\end{verbatim}
  3.1129 +\caption{\label{add-eye}Here, the camera is created based on metadata on the eye-node and attached to the nearest physical object with \texttt{bind-sense}.}
  3.1130 +\end{listing}
  3.1131 +
  3.1132 +\subsubsection{Simulated Retina}
  3.1133 +\label{sec-2-8-4}
  3.1134 +
  3.1135 +An eye is a surface (the retina) which contains many discrete
  3.1136 +sensors to detect light. These sensors can have different
  3.1137 +light-sensing properties. In humans, each discrete sensor is
  3.1138 +sensitive to red, blue, green, or gray. These different types of
  3.1139 +sensors can have different spatial distributions along the retina.
  3.1140 +In humans, there is a fovea in the center of the retina which has
  3.1141 +a very high density of color sensors, and a blind spot which has
  3.1142 +no sensors at all. Sensor density decreases in proportion to
  3.1143 +distance from the fovea.
  3.1144 +
  3.1145 +I want to be able to model any retinal configuration, so my
  3.1146 +eye-nodes in Blender contain metadata pointing to images that
  3.1147 +describe the precise position of the individual sensors using
  3.1148 +white pixels. The metadata also describes the sensitivity to
  3.1149 +light of the sensors described in the image. An eye can
  3.1150 +contain any number of these images. For example, the metadata for
  3.1151 +an eye might look like this:
  3.1152 +
  3.1153 +\begin{verbatim}
  3.1154 +{0xFF0000 "Models/test-creature/retina-small.png"}
  3.1155 +\end{verbatim}
  3.1156 +
  3.1157 +\begin{figure}[htb]
  3.1158 +\centering
  3.1159 +\includegraphics[width=7cm]{./images/retina-small.png}
  3.1160 +\caption{\label{retina}An example retinal profile image. White pixels are photo-sensitive elements. The distribution of white pixels is denser in the middle and falls off at the edges and is inspired by the human retina.}
  3.1161 +\end{figure}
  3.1162 +
  3.1163 +Together, the number 0xFF0000 and the image above describe the
  3.1164 +placement of red-sensitive sensory elements.
  3.1165 +
  3.1166 +Metadata to very crudely approximate a human eye might be
  3.1167 +something like this:
  3.1168 +
  3.1169 +\begin{verbatim}
  3.1170 +(let [retinal-profile "Models/test-creature/retina-small.png"]
  3.1171 +  {0xFF0000 retinal-profile
  3.1172 +   0x00FF00 retinal-profile
  3.1173 +   0x0000FF retinal-profile
  3.1174 +   0xFFFFFF retinal-profile})
  3.1175 +\end{verbatim}
  3.1176 +
  3.1177 +The numbers that serve as keys in the map determine a sensor's
  3.1178 +relative sensitivity to the channels red, green, and blue. These
  3.1179 +sensitivity values are packed into an integer in the order
  3.1180 +\texttt{|\_|R|G|B|} in 8-bit fields. The RGB values of a pixel in the
  3.1181 +image are added together with these sensitivities as linear
  3.1182 +weights. Therefore, 0xFF0000 means sensitive to red only while
  3.1183 +0xFFFFFF means sensitive to all colors equally (gray).
  3.1184 +
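         +To make the weighting concrete, here is a minimal sketch of how a
         +sensitivity value might be applied to a single pixel; the actual
         +\texttt{pixel-sense} used in listing \ref{vision-kernel} may differ
         +in details such as normalization.
         +
         +\begin{verbatim}
         +;; sketch: weight a pixel's RGB channels by the 8-bit fields of
         +;; `sensitivity' and return a response normalized to [0,1].
         +(defn pixel-sense-sketch
         +  [sensitivity pixel]
         +  (let [field    (fn [x n] (bit-and 0xFF (bit-shift-right x n)))
         +        weights  [(field sensitivity 16) (field sensitivity 8)
         +                  (field sensitivity 0)]
         +        channels [(field pixel 16) (field pixel 8) (field pixel 0)]]
         +    (float (/ (reduce + (map * weights channels))
         +              (* 3 255 255)))))
         +\end{verbatim}
         +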
  3.1185 +\begin{listing}
  3.1186 +\begin{verbatim}
  3.1187 +(defn vision-kernel
  3.1188 +  "Returns a list of functions, each of which will return a color
  3.1189 +   channel's worth of visual information when called inside a running
  3.1190 +   simulation."
  3.1191 +  [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
  3.1192 +  (let [retinal-map (retina-sensor-profile eye)
  3.1193 +        camera (add-eye! creature eye)
  3.1194 +        vision-image
  3.1195 +        (atom
  3.1196 +         (BufferedImage. (.getWidth camera)
  3.1197 +                         (.getHeight camera)
  3.1198 +                         BufferedImage/TYPE_BYTE_BINARY))
  3.1199 +        register-eye!
  3.1200 +        (runonce
  3.1201 +         (fn [world]
  3.1202 +           (add-camera!
  3.1203 +            world camera
  3.1204 +            (let [counter  (atom 0)]
  3.1205 +              (fn [r fb bb bi]
  3.1206 +                (if (zero? (rem (swap! counter inc) (inc skip)))
  3.1207 +                  (reset! vision-image
  3.1208 +                          (BufferedImage! r fb bb bi))))))))]
  3.1209 +     (vec
  3.1210 +      (map
  3.1211 +       (fn [[key image]]
  3.1212 +         (let [whites (white-coordinates image)
  3.1213 +               topology (vec (collapse whites))
  3.1214 +               sensitivity (sensitivity-presets key key)]
  3.1215 +           (attached-viewport.
  3.1216 +            (fn [world]
  3.1217 +              (register-eye! world)
  3.1218 +              (vector
  3.1219 +               topology
  3.1220 +               (vec 
  3.1221 +                (for [[x y] whites]
  3.1222 +                  (pixel-sense 
  3.1223 +                   sensitivity
  3.1224 +                   (.getRGB @vision-image x y))))))
  3.1225 +            register-eye!)))
  3.1226 +         retinal-map))))
  3.1227 +\end{verbatim}
  3.1228 +\caption{\label{vision-kernel}This is the core of vision in \texttt{CORTEX}. A given eye node is converted into a function that returns visual information from the simulation.}
  3.1229 +\end{listing}
  3.1230 +
  3.1231 +Note that because each of the functions generated by
  3.1232 +\texttt{vision-kernel} shares the same \texttt{register-eye!} function, the eye
  3.1233 +will be registered only once, the first time any of the functions
  3.1234 +from the list returned by \texttt{vision-kernel} is called. Each of the
  3.1235 +functions returned by \texttt{vision-kernel} also allows access to the
  3.1236 +\texttt{ViewPort} through which it receives images.
  3.1237 +
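         +The \texttt{runonce} wrapper is what enforces this
         +register-at-most-once behavior. As a sketch of the idea (the
         +actual \texttt{CORTEX} helper may differ in details):
         +
         +\begin{verbatim}
         +;; sketch: wrap f so its body runs at most once; later calls
         +;; return the cached result of the first call.
         +(defn runonce-sketch [f]
         +  (let [result (atom nil)
         +        done?  (atom false)]
         +    (fn [& args]
         +      (locking done?
         +        (when-not @done?
         +          (reset! result (apply f args))
         +          (reset! done? true))
         +        @result))))
         +\end{verbatim}
         +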
  3.1238 +All the hard work has been done; all that remains is to apply
  3.1239 +\texttt{vision-kernel} to each eye in the creature and gather the results
  3.1240 +into one list of functions.
  3.1241 +
  3.1242 +
  3.1243 +\begin{listing}
  3.1244 +\begin{verbatim}
  3.1245 +(defn vision!
  3.1246 +  "Returns a list of functions, each of which returns visual sensory
  3.1247 +   data when called inside a running simulation."
  3.1248 +  [#^Node creature & {skip :skip :or {skip 0}}]
  3.1249 +  (reduce
  3.1250 +   concat 
  3.1251 +   (for [eye (eyes creature)]
  3.1252 +     (vision-kernel creature eye))))
  3.1253 +\end{verbatim}
  3.1254 +\caption{\label{vision}With \texttt{vision!}, \texttt{CORTEX} is already a fine simulation environment for experimenting with different types of eyes.}
  3.1255 +\end{listing}
  3.1256 +
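         +As a hypothetical usage sketch, polling every eye during a
         +simulation step (where \texttt{world} is the running application)
         +might look like this:
         +
         +\begin{verbatim}
         +;; sketch: gather a [topology sensor-values] pair from every eye.
         +(defn all-visual-data [creature world]
         +  (vec (map (fn [eye-fn] (eye-fn world)) (vision! creature))))
         +\end{verbatim}
         +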
  3.1257 +\begin{figure}[htb]
  3.1258 +\centering
  3.1259 +\includegraphics[width=13cm]{./images/worm-vision.png}
  3.1260 +\caption{\label{worm-vision-test.}Simulated vision with a test creature and the human-like eye approximation. Notice how each channel of the eye responds differently to the differently colored balls.}
  3.1261 +\end{figure}
  3.1262 +
  3.1263 +The vision code is not much more complicated than the body code,
  3.1264 +and enables multiple further paths for simulated vision. For
  3.1265 +example, it is quite easy to create binocular vision -- you just
  3.1266 +make two eyes next to each other in Blender! It is also possible
  3.1267 +to encode vision transforms in the retinal files. For example, the
  3.1268 +human-like retina file in figure \ref{retina} approximates a
  3.1269 +log-polar transform.
  3.1270 +
  3.1271 +This vision code has already been absorbed by the jMonkeyEngine
  3.1272 +community and is now (in modified form) part of a system for
  3.1273 +capturing in-game video to a file.
  3.1274 +
  3.1275 +\subsection{\ldots{}but hearing must be built from scratch}
  3.1276 +\label{sec-2-9}
  3.1277 +
  3.1278 +At the end of this section I will have simulated ears that work the
  3.1279 +same way as the simulated eyes in the previous section. I will be able to
  3.1280 +place any number of ear-nodes in a Blender file, and they will bind to
  3.1281 +the closest physical object and follow it as it moves around. Each ear
  3.1282 +will provide access to the sound data it picks up between every frame.
  3.1283 +
  3.1284 +Hearing is one of the more difficult senses to simulate, because there
  3.1285 +is less support for obtaining the actual sound data that is processed
  3.1286 +by jMonkeyEngine3. There is no "split-screen" support for rendering
  3.1287 +sound from different points of view, and there is no way to directly
  3.1288 +access the rendered sound data.
  3.1289 +
  3.1290 +\texttt{CORTEX}'s hearing is unique because it has no such
  3.1291 +limitations. As far as I know, there is no other system that
  3.1292 +supports multiple listeners, and the sound demo at the end of this
  3.1293 +section is the first time it has been done in a video game
  3.1294 +environment.
  3.1295 +
  3.1296 +\subsubsection{Brief Description of jMonkeyEngine's Sound System}
  3.1297 +\label{sec-2-9-1}
  3.1298 +
  3.1299 +jMonkeyEngine's sound system works as follows:
  3.1300 +
  3.1301 +\begin{itemize}
  3.1302 +\item jMonkeyEngine uses the \texttt{AppSettings} for the particular
  3.1303 +application to determine what sort of \texttt{AudioRenderer} should be
  3.1304 +used.
  3.1305 +\item Although some support is provided for multiple AudioRenderer
  3.1306 +backends, jMonkeyEngine at the time of this writing will either
  3.1307 +pick no \texttt{AudioRenderer} at all, or the \texttt{LwjglAudioRenderer}.
  3.1308 +\item jMonkeyEngine tries to figure out what sort of system you're
  3.1309 +running and extracts the appropriate native libraries.
  3.1310 +\item The \texttt{LwjglAudioRenderer} uses the \href{http://lwjgl.org/}{\texttt{LWJGL}} (LightWeight Java Game
  3.1311 +Library) bindings to interface with a C library called \href{http://kcat.strangesoft.net/openal.html}{\texttt{OpenAL}}.
  3.1312 +\item \texttt{OpenAL} renders the 3D sound and feeds the rendered sound
  3.1313 +directly to any of various sound output devices with which it
  3.1314 +knows how to communicate.
  3.1315 +\end{itemize}
  3.1316 +
  3.1317 +A consequence of this is that there's no way to access the actual
  3.1318 +sound data produced by \texttt{OpenAL}. Even worse, \texttt{OpenAL} only supports
  3.1319 +one \emph{listener} (it renders sound data from only one perspective),
  3.1320 +which normally isn't a problem for games, but becomes a problem
  3.1321 +when trying to make multiple AI creatures that can each hear the
  3.1322 +world from a different perspective.
  3.1323 +
  3.1324 +To make many AI creatures in jMonkeyEngine that can each hear the
  3.1325 +world from their own perspective, or to make a single creature with
  3.1326 +many ears, it is necessary to go all the way back to \texttt{OpenAL} and
  3.1327 +implement support for simulated hearing there.
  3.1328 +
  3.1329 +\subsubsection{Extending \texttt{OpenAL}}
  3.1330 +\label{sec-2-9-2}
  3.1331 +
  3.1332 +Extending \texttt{OpenAL} to support multiple listeners requires 500
  3.1333 +lines of \texttt{C} code and is too long to include here. Instead,
  3.1334 +I will show a small amount of extension code and go over the
  3.1335 +high-level strategy. Full source is of course available with the
  3.1336 +\texttt{CORTEX} distribution if you're interested.
  3.1337 +
  3.1338 +\texttt{OpenAL} goes to great lengths to support many different systems,
  3.1339 +all with different sound capabilities and interfaces. It
  3.1340 +accomplishes this difficult task by providing code for many
  3.1341 +different sound backends in pseudo-objects called \emph{Devices}.
  3.1342 +There's a device for the Linux Open Sound System and the Advanced
  3.1343 +Linux Sound Architecture, there's one for Direct Sound on Windows,
  3.1344 +and there's even one for Solaris. \texttt{OpenAL} solves the problem of
  3.1345 +platform independence by providing all these Devices.
  3.1346 +
  3.1347 +Wrapper libraries such as LWJGL are free to examine the system on
  3.1348 +which they are running and then select an appropriate device for
  3.1349 +that system.
  3.1350 +
  3.1351 +There are also a few "special" devices that don't interface with
  3.1352 +any particular system. These include the Null Device, which
  3.1353 +doesn't do anything, and the Wave Device, which writes whatever
  3.1354 +sound it receives to a file, if everything has been set up
  3.1355 +correctly when configuring \texttt{OpenAL}.
  3.1356 +
  3.1357 +Actual mixing (Doppler shift and distance- and environment-based
  3.1358 +attenuation) of the sound data happens in the Devices, and they
  3.1359 +are the only point in the sound rendering process where this data
  3.1360 +is available.
  3.1361 +
  3.1362 +Therefore, in order to support multiple listeners, and get the
  3.1363 +sound data in a form that the AIs can use, it is necessary to
  3.1364 +create a new Device which supports this feature.
  3.1365 +
  3.1366 +Adding a device to OpenAL is rather tricky -- there are five
  3.1367 +separate files in the \texttt{OpenAL} source tree that must be modified
  3.1368 +to do so. I named my device the "Multiple Audio Send" Device, or
  3.1369 +\texttt{Send} Device for short, since it sends audio data back to the
  3.1370 +calling application like an Aux-Send cable on a mixing board.
  3.1371 +
  3.1372 +The main idea behind the Send device is to take advantage of the
  3.1373 +fact that LWJGL only manages one \emph{context} when using OpenAL. A
  3.1374 +\emph{context} is like a container that holds samples and keeps track
  3.1375 +of where the listener is. In order to support multiple listeners,
  3.1376 +the Send device identifies the LWJGL context as the master
  3.1377 +context, and creates any number of slave contexts to represent
  3.1378 +additional listeners. Every time the device renders sound, it
  3.1379 +synchronizes every source from the master LWJGL context to the
  3.1380 +slave contexts. Then, it renders each context separately, using a
  3.1381 +different listener for each one. The rendered sound is made
  3.1382 +available via JNI to jMonkeyEngine.
  3.1383 +
  3.1384 +Switching between contexts is not the normal operation of a
  3.1385 +Device, and one of the problems with doing so is that a Device
  3.1386 +normally keeps around a few pieces of state such as the
  3.1387 +\texttt{ClickRemoval} array, which will become corrupted if the
  3.1388 +contexts are not rendered in parallel. The solution is to create a
  3.1389 +copy of this normally global device state for each context, and
  3.1390 +copy it back and forth into and out of the actual device state
  3.1391 +whenever a context is rendered.
  3.1392 +
  3.1393 +The core of the \texttt{Send} device is the \texttt{syncSources} function, which
  3.1394 +does the job of copying all relevant data from one context to
  3.1395 +another. 
  3.1396 +
  3.1397 +\begin{listing}
  3.1398 +\begin{verbatim}
  3.1399 +void syncSources(ALsource *masterSource, ALsource *slaveSource, 
  3.1400 +		 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  3.1401 +  ALuint master = masterSource->source;
  3.1402 +  ALuint slave = slaveSource->source;
  3.1403 +  ALCcontext *current = alcGetCurrentContext();
  3.1404 +
  3.1405 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  3.1406 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  3.1407 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  3.1408 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  3.1409 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  3.1410 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  3.1411 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  3.1412 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  3.1413 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  3.1414 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  3.1415 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  3.1416 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  3.1417 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
  3.1418 +    
  3.1419 +  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  3.1420 +  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  3.1421 +  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
  3.1422 +  
  3.1423 +  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  3.1424 +  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
  3.1425 +
  3.1426 +  alcMakeContextCurrent(masterCtx);
  3.1427 +  ALint source_type;
  3.1428 +  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
  3.1429 +
  3.1430 +  // Only static sources are currently synchronized! 
  3.1431 +  if (AL_STATIC == source_type){
  3.1432 +    ALint master_buffer;
  3.1433 +    ALint slave_buffer;
  3.1434 +    alGetSourcei(master, AL_BUFFER, &master_buffer);
  3.1435 +    alcMakeContextCurrent(slaveCtx);
  3.1436 +    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
  3.1437 +    if (master_buffer != slave_buffer){
  3.1438 +      alSourcei(slave, AL_BUFFER, master_buffer);
  3.1439 +    }
  3.1440 +  }
  3.1441 +  
  3.1442 +  // Synchronize the state of the two sources.
  3.1443 +  alcMakeContextCurrent(masterCtx);
  3.1444 +  ALint masterState;
  3.1445 +  ALint slaveState;
  3.1446 +
  3.1447 +  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  3.1448 +  alcMakeContextCurrent(slaveCtx);
  3.1449 +  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
  3.1450 +
  3.1451 +  if (masterState != slaveState){
  3.1452 +    switch (masterState){
  3.1453 +    case AL_INITIAL : alSourceRewind(slave); break;
  3.1454 +    case AL_PLAYING : alSourcePlay(slave);   break;
  3.1455 +    case AL_PAUSED  : alSourcePause(slave);  break;
  3.1456 +    case AL_STOPPED : alSourceStop(slave);   break;
  3.1457 +    }
  3.1458 +  }
  3.1459 +  // Restore whatever context was previously active.
  3.1460 +  alcMakeContextCurrent(current);
  3.1461 +}
  3.1462 +\end{verbatim}
  3.1463 +\caption{\label{sync-openal-sources}Program for extending \texttt{OpenAL} to support multiple listeners via context copying/switching.}
  3.1464 +\end{listing}
  3.1465 +
  3.1466 +With this special context-switching device, and some ugly JNI
  3.1467 +bindings that are not worth mentioning, \texttt{CORTEX} gains the ability
  3.1468 +to access multiple sound streams from \texttt{OpenAL}. 
  3.1469 +
  3.1470 +\begin{listing}
  3.1471 +\begin{verbatim}
  3.1472 +(defn add-ear!  
  3.1473 +  "Create a Listener centered on the current position of 'ear 
  3.1474 +   which follows the closest physical node in 'creature and 
  3.1475 +   sends sound data to 'continuation."
  3.1476 +  [#^Application world #^Node creature #^Spatial ear continuation]
  3.1477 +  (let [target (closest-node creature ear)
  3.1478 +        lis (Listener.)
  3.1479 +        audio-renderer (.getAudioRenderer world)
  3.1480 +        sp (hearing-pipeline continuation)]
  3.1481 +    (.setLocation lis (.getWorldTranslation ear))
  3.1482 +    (.setRotation lis (.getWorldRotation ear))
  3.1483 +    (bind-sense target lis)
  3.1484 +    (update-listener-velocity! target lis)
  3.1485 +    (.addListener audio-renderer lis)
  3.1486 +    (.registerSoundProcessor audio-renderer lis sp)))
  3.1487 +\end{verbatim}
  3.1488 +\caption{\label{add-ear}Program to create an ear from a Blender empty node. The ear follows around the nearest physical object and passes all sensory data to a continuation function.}
  3.1489 +\end{listing}
  3.1490 +
  3.1491 +The \texttt{Send} device, unlike most of the other devices in \texttt{OpenAL},
  3.1492 +does not render sound unless asked. This enables the system to
  3.1493 +slow down or speed up depending on the needs of the AIs who are
  3.1494 +using it to listen. If the device tried to render samples in
  3.1495 +real-time, a complicated AI whose mind takes 100 seconds of
  3.1496 +computer time to simulate 1 second of AI-time would miss almost
  3.1497 +all of the sound in its environment!
  3.1498 +
  3.1499 +\begin{listing}
  3.1500 +\begin{verbatim}
  3.1501 +(defn hearing-kernel
  3.1502 +  "Returns a function which returns auditory sensory data when called
  3.1503 +   inside a running simulation."
  3.1504 +  [#^Node creature #^Spatial ear]
  3.1505 +  (let [hearing-data (atom [])
  3.1506 +        register-listener!
  3.1507 +        (runonce 
  3.1508 +         (fn [#^Application world]
  3.1509 +           (add-ear!
  3.1510 +            world creature ear
  3.1511 +            (comp #(reset! hearing-data %)
  3.1512 +                  byteBuffer->pulse-vector))))]
  3.1513 +    (fn [#^Application world]
  3.1514 +      (register-listener! world)
  3.1515 +      (let [data @hearing-data
  3.1516 +            topology              
  3.1517 +            (vec (map #(vector % 0) (range 0 (count data))))]
  3.1518 +        [topology data]))))
  3.1519 +    
  3.1520 +(defn hearing!
  3.1521 +  "Endow the creature in a particular world with the sense of
  3.1522 +   hearing. Will return a sequence of functions, one for each ear,
  3.1523 +   which when called will return the auditory data from that ear."
  3.1524 +  [#^Node creature]
  3.1525 +  (for [ear (ears creature)]
  3.1526 +    (hearing-kernel creature ear)))
  3.1527 +\end{verbatim}
  3.1528 +\caption{\label{hearing}Program to enable arbitrary hearing in \texttt{CORTEX}.}
  3.1529 +\end{listing}
  3.1530 +
  3.1531 +Armed with these functions, \texttt{CORTEX} is able to test possibly the
  3.1532 +first-ever instance of multiple listeners in a simulation based on
  3.1533 +a video game engine!
  3.1534 +
  3.1535 +\begin{listing}
  3.1536 +\begin{verbatim}
  3.1537 +/**
  3.1538 + * Respond to sound!  This is the brain of an AI entity that 
  3.1539 + * hears its surroundings and reacts to them.
  3.1540 + */
  3.1541 +public void process(ByteBuffer audioSamples, 
  3.1542 +		    int numSamples, AudioFormat format) {
  3.1543 +    audioSamples.clear();
  3.1544 +    byte[] data = new byte[numSamples];
  3.1545 +    float[] out = new float[numSamples];
  3.1546 +    audioSamples.get(data);
  3.1547 +    FloatSampleTools.
  3.1548 +	byte2floatInterleaved
  3.1549 +	(data, 0, out, 0, numSamples/format.getFrameSize(), format);
  3.1550 +
  3.1551 +    float max = Float.NEGATIVE_INFINITY;
  3.1552 +    for (float f : out){if (f > max) max = f;}
  3.1553 +    audioSamples.clear();
  3.1554 +
  3.1555 +    if (max > 0.1){
  3.1556 +	entity.getMaterial().setColor("Color", ColorRGBA.Green);
  3.1557 +    }
  3.1558 +    else {
  3.1559 +	entity.getMaterial().setColor("Color", ColorRGBA.Gray);
  3.1560 +    }
         +}
  3.1561 +\end{verbatim}
  3.1562 +\caption{\label{sound-test}Here a simple creature responds to sound by changing its color from gray to green when the total volume goes over a threshold.}
  3.1563 +\end{listing}
  3.1564 +
  3.1565 +\begin{figure}[htb]
  3.1566 +\centering
  3.1567 +\includegraphics[width=10cm]{./images/java-hearing-test.png}
  3.1568 +\caption{\label{sound-cubes.}First ever simulation of multiple listeners in \texttt{CORTEX}. Each cube is a creature which processes sound data with the \texttt{process} function from listing \ref{sound-test}. The ball is constantly emitting a pure tone of constant volume. As it approaches the cubes, they each change color in response to the sound.}
  3.1569 +\end{figure}
  3.1570 +
  3.1571 +This system of hearing has also been co-opted by the
  3.1572 +jMonkeyEngine3 community and is used to record audio for demo
  3.1573 +videos.
  3.1574 +
  3.1575 +\subsection{Hundreds of hair-like elements provide a sense of touch}
  3.1576 +\label{sec-2-10}
  3.1577 +
  3.1578 +Touch is critical to navigation and spatial reasoning and as such I
  3.1579 +need a simulated version of it to give to my AI creatures.
  3.1580 +
  3.1581 +Human skin has a wide array of touch sensors, each of which
  3.1582 +specializes in detecting different vibrational modes and pressures.
  3.1583 +These sensors can integrate a vast expanse of skin (e.g. your
  3.1584 +entire palm), or a tiny patch of skin at the tip of your finger.
  3.1585 +The hairs of the skin help detect objects before they even come
  3.1586 +into contact with the skin proper.
  3.1587 +
  3.1588 +However, touch in my simulated world cannot exactly correspond to
  3.1589 +human touch because my creatures are made out of completely rigid
  3.1590 +segments that don't deform like human skin.
  3.1591 +
  3.1592 +Instead of measuring deformation or vibration, I surround each
  3.1593 +rigid part with a plenitude of hair-like objects (\emph{feelers}) which
  3.1594 +do not interact with the physical world. Physical objects can pass
  3.1595 +through them with no effect. The feelers are able to tell when
  3.1596 +other objects pass through them, and they constantly report how
  3.1597 +much of their extent is covered. So even though the creature's body
  3.1598 +parts do not deform, the feelers create a margin around those body
  3.1599 +parts which achieves a sense of touch which is a hybrid between a
  3.1600 +human's sense of deformation and sense from hairs.
  3.1601 +
  3.1602 +Implementing touch in jMonkeyEngine follows a different technical
  3.1603 +route than vision and hearing. Those two senses piggybacked off
  3.1604 +jMonkeyEngine's 3D audio and video rendering subsystems. To
  3.1605 +simulate touch, I use jMonkeyEngine's physics system to execute
  3.1606 +many small collision detections, one for each feeler. The placement
  3.1607 +of the feelers is determined by a UV-mapped image which shows where
  3.1608 +each feeler should be on the 3D surface of the body.
  3.1609 +
  3.1610 +\subsubsection{Defining Touch Meta-Data in Blender}
  3.1611 +\label{sec-2-10-1}
  3.1612 +
  3.1613 +Each geometry can have a single UV map which describes the
  3.1614 +position of the feelers which will constitute its sense of touch.
  3.1615 +The path to this image is stored under the ``touch'' key. The image itself
  3.1616 +is black and white, with black meaning a feeler length of 0 (no
  3.1617 +feeler is present) and white meaning a feeler length of \texttt{scale},
  3.1618 +which is a float stored under the ``scale'' key.
  3.1619 +
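         +For example, the metadata on a fingertip geometry might look
         +something like this (the path is hypothetical):
         +
         +\begin{verbatim}
         +touch = "Models/test-creature/finger-UV.png"
         +scale = 0.01
         +\end{verbatim}
         +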
  3.1620 +\begin{listing}
  3.1621 +\begin{verbatim}
  3.1622 +(defn tactile-sensor-profile
  3.1623 +  "Return the touch-sensor distribution image in BufferedImage format,
  3.1624 +   or nil if it does not exist."
  3.1625 +  [#^Geometry obj]
  3.1626 +  (if-let [image-path (meta-data obj "touch")]
  3.1627 +    (load-image image-path)))
  3.1628 +
  3.1629 +(defn tactile-scale
  3.1630 +  "Return the length of each feeler. Default scale is 0.01
  3.1631 +  jMonkeyEngine units."
  3.1632 +  [#^Geometry obj]
  3.1633 +  (if-let [scale (meta-data obj "scale")]
  3.1634 +    scale 0.1))
  3.1635 +\end{verbatim}
  3.1636 +\caption{\label{touch-meta-data}Touch does not use empty nodes to store metadata, because the metadata of each solid part of a creature's body is sufficient.}
  3.1637 +\end{listing}
  3.1638 +
  3.1639 +Here is an example of a UV-map which specifies the position of
  3.1640 +touch sensors along the surface of the upper segment of a fingertip.
  3.1641 +
  3.1642 +\begin{figure}[htb]
  3.1643 +\centering
  3.1644 +\includegraphics[width=13cm]{./images/finger-UV.png}
  3.1645 +\caption{\label{fingertip-UV}This is the tactile-sensor-profile for the upper segment of a fingertip. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse).}
  3.1646 +\end{figure}
  3.1647 +
  3.1648 +\subsubsection{Implementation Summary}
  3.1649 +\label{sec-2-10-2}
  3.1650 +
  3.1651 +To simulate touch there are three conceptual steps. For each solid
  3.1652 +object in the creature, you first have to get the UV image and
  3.1653 +scale parameter which define the position and length of the
  3.1654 +feelers. Then, you use the triangles which comprise the mesh and
  3.1655 +the UV data stored in the mesh to determine the world-space
  3.1656 +position and orientation of each feeler. Finally, once every
  3.1657 +frame, you update these positions and orientations to match the
  3.1658 +current position and orientation of the object, and use physics
  3.1659 +collision detection to gather tactile data.
  3.1660 +
  3.1661 +Extracting the meta-data has already been described. The third
  3.1662 +step, physics collision detection, is handled in \texttt{touch-kernel}.
  3.1663 +Translating the positions and orientations of the feelers from the
  3.1664 +UV-map to world-space is itself a three-step process.
  3.1665 +
  3.1666 +\begin{itemize}
  3.1667 +\item Find the triangles which make up the mesh in pixel-space and in
  3.1668 +world-space. (\texttt{triangles}, \texttt{pixel-triangles}).
  3.1669 +
  3.1670 +\item Find the coordinates of each feeler in world-space. These are
  3.1671 +the origins of the feelers. (\texttt{feeler-origins}).
  3.1672 +
  3.1673 +\item Calculate the normals of the triangles in world space, and add
  3.1674 +them to each of the origins of the feelers. These are the
  3.1675 +coordinates of the tips of the feelers.
  3.1676 +(\texttt{feeler-tips}).
  3.1677 +\end{itemize}
  3.1678 +
  3.1679 +\subsubsection{Triangle Math}
  3.1680 +\label{sec-2-10-3}
  3.1681 +
  3.1682 +The rigid objects which make up a creature have an underlying
  3.1683 +\texttt{Geometry}, which is a \texttt{Mesh} plus a \texttt{Material} and other
  3.1684 +important data involved with displaying the object.
  3.1685 +
  3.1686 +A \texttt{Mesh} is composed of \texttt{Triangles}, and each \texttt{Triangle} has three
  3.1687 +vertices which have coordinates in world space and UV space.
  3.1688 +
  3.1689 +Here, \texttt{triangles} gets all the world-space triangles which
  3.1690 +comprise a mesh, while \texttt{pixel-triangles} gets those same triangles
  3.1691 +expressed in pixel coordinates (which are UV coordinates scaled to
  3.1692 +fit the height and width of the UV image).
  3.1693 +
  3.1694 +\begin{listing}
  3.1695 +\begin{verbatim}
  3.1696 +(defn triangle
  3.1697 +  "Get the triangle specified by triangle-index from the mesh."
  3.1698 +  [#^Geometry geo triangle-index]
  3.1699 +  (triangle-seq
  3.1700 +   (let [scratch (Triangle.)]
  3.1701 +     (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))
  3.1702 +
  3.1703 +(defn triangles
  3.1704 +  "Return a sequence of all the Triangles which comprise a given
  3.1705 +   Geometry." 
  3.1706 +  [#^Geometry geo]
  3.1707 +  (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))
  3.1708 +
  3.1709 +(defn triangle-vertex-indices
  3.1710 +  "Get the triangle vertex indices of a given triangle from a given
  3.1711 +   mesh."
  3.1712 +  [#^Mesh mesh triangle-index]
  3.1713 +  (let [indices (int-array 3)]
  3.1714 +    (.getTriangle mesh triangle-index indices)
  3.1715 +    (vec indices)))
  3.1716 +
  3.1717 +(defn vertex-UV-coord
  3.1718 +  "Get the UV-coordinates of the vertex named by vertex-index"
  3.1719 +  [#^Mesh mesh vertex-index]
  3.1720 +  (let [UV-buffer
  3.1721 +        (.getData
  3.1722 +         (.getBuffer
  3.1723 +          mesh
  3.1724 +          VertexBuffer$Type/TexCoord))]
  3.1725 +    [(.get UV-buffer (* vertex-index 2))
  3.1726 +     (.get UV-buffer (+ 1 (* vertex-index 2)))]))
  3.1727 +
  3.1728 +(defn pixel-triangle [#^Geometry geo image index]
  3.1729 +  (let [mesh (.getMesh geo)
  3.1730 +        width (.getWidth image)
  3.1731 +        height (.getHeight image)]
  3.1732 +    (vec (map (fn [[u v]] (vector (* width u) (* height v)))
  3.1733 +              (map (partial vertex-UV-coord mesh)
  3.1734 +                   (triangle-vertex-indices mesh index))))))
  3.1735 +
  3.1736 +(defn pixel-triangles 
  3.1737 +  "The pixel-space triangles of the Geometry, in the same order as
  3.1738 +   (triangles geo)"
  3.1739 +  [#^Geometry geo image]
  3.1740 +  (let [height (.getHeight image)
  3.1741 +        width (.getWidth image)]
  3.1742 +    (map (partial pixel-triangle geo image)
  3.1743 +         (range (.getTriangleCount (.getMesh geo))))))
  3.1744 +\end{verbatim}
  3.1745 +\caption{\label{get-triangles}Programs to extract triangles from a geometry and get their vertices in both world and UV-coordinates.}
  3.1746 +\end{listing}
  3.1747 +
  3.1748 +\subsubsection{The Affine Transform from one Triangle to Another}
  3.1749 +\label{sec-2-10-4}
  3.1750 +
  3.1751 +\texttt{pixel-triangles} gives us the mesh triangles expressed in pixel
  3.1752 +coordinates and \texttt{triangles} gives us the mesh triangles expressed
  3.1753 +in world coordinates. The tactile-sensor-profile gives the
  3.1754 +position of each feeler in pixel-space. In order to convert
  3.1755 +pixel-space coordinates into world-space coordinates we need
  3.1756 +something that takes coordinates on the surface of one triangle
  3.1757 +and gives the corresponding coordinates on the surface of another
  3.1758 +triangle.
  3.1759 +
  3.1760 +Triangles are \href{http://mathworld.wolfram.com/AffineTransformation.html}{affine}, which means any triangle can be transformed
  3.1761 +into any other by a combination of translation, scaling, and
  3.1762 +rotation. The affine transformation from one triangle to another
  3.1763 +is readily computable if each triangle is expressed as a
  3.1764 +\(4 \times 4\) matrix.
  3.1765 +
  3.1766 +$$
  3.1767 +\begin{bmatrix}
  3.1768 +x_1 & x_2 & x_3 & n_x \\
  3.1769 +y_1 & y_2 & y_3 & n_y \\ 
  3.1770 +z_1 & z_2 & z_3 & n_z \\
  3.1771 +1 & 1 & 1 & 1 
  3.1772 +\end{bmatrix}
  3.1773 +$$
  3.1774 +
  3.1775 +Here, the first three columns of the matrix are the vertices of
  3.1776 +the triangle. The last column is the right-handed unit normal of
  3.1777 +the triangle.
  3.1778 +
  3.1779 +With two triangles \(T_{1}\) and \(T_{2}\) each expressed as a
  3.1780 +matrix like above, the affine transform from \(T_{1}\) to \(T_{2}\)
  3.1781 +is \(T_{2}T_{1}^{-1}\), since \((T_{2}T_{1}^{-1})T_{1} = T_{2}\)
         +carries each vertex (and the normal) of the first triangle onto
         +the corresponding column of the second.
  3.1782 +
  3.1783 +The Clojure code below recapitulates the formulas above, using
  3.1784 +jMonkeyEngine's \texttt{Matrix4f} objects, which can describe any affine
  3.1785 +transformation.
  3.1786 +
  3.1787 +\begin{listing}
  3.1788 +\begin{verbatim}
  3.1789 +(defn triangle->matrix4f
  3.1790 +  "Converts the triangle into a 4x4 matrix: The first three columns
  3.1791 +   contain the vertices of the triangle; the last contains the unit
  3.1792 +   normal of the triangle. The bottom row is filled with 1s."
  3.1793 +  [#^Triangle t]
  3.1794 +  (let [mat (Matrix4f.)
  3.1795 +        [vert-1 vert-2 vert-3]
  3.1796 +        (mapv #(.get t %) (range 3))
  3.1797 +        unit-normal (do (.calculateNormal t)(.getNormal t))
  3.1798 +        vertices [vert-1 vert-2 vert-3 unit-normal]]
  3.1799 +    (dorun 
  3.1800 +     (for [row (range 4) col (range 3)]
  3.1801 +       (do
  3.1802 +         (.set mat col row (.get (vertices row) col))
  3.1803 +         (.set mat 3 row 1)))) mat))
  3.1804 +
  3.1805 +(defn triangles->affine-transform
  3.1806 +  "Returns the affine transformation that converts each vertex in the
  3.1807 +   first triangle into the corresponding vertex in the second
  3.1808 +   triangle."
  3.1809 +  [#^Triangle tri-1 #^Triangle tri-2]
  3.1810 +  (.mult 
  3.1811 +   (triangle->matrix4f tri-2)
  3.1812 +   (.invert (triangle->matrix4f tri-1))))
  3.1813 +\end{verbatim}
  3.1814 +\caption{\label{triangle-affine}Program to interpret triangles as affine transforms.}
  3.1815 +\end{listing}
  3.1816 +
  3.1817 +\subsubsection{Triangle Boundaries}
  3.1818 +\label{sec-2-10-5}
  3.1819 +
  3.1820 +For efficiency's sake I will divide the tactile-profile image into
  3.1821 +small rectangles which bound each pixel-triangle, then extract the
  3.1822 +points which lie inside the triangle and map them to 3D-space using
  3.1823 +\texttt{triangles->affine-transform} above. To do this I need a
  3.1824 +function, \texttt{convex-bounds}, which finds the smallest box that
  3.1825 +bounds a 2D triangle.
  3.1826 +
  3.1827 +\texttt{inside-triangle?} determines whether a point is inside a triangle
  3.1828 +in 2D pixel-space.
  3.1829 +
  3.1830 +\begin{listing}
  3.1831 +\begin{verbatim}
  3.1832 +(defn convex-bounds
  3.1833 +  "Returns the smallest square containing the given vertices, as a
  3.1834 +   vector of integers [left top width height]."
  3.1835 +  [verts]
  3.1836 +  (let [xs (map first verts)
  3.1837 +        ys (map second verts)
  3.1838 +        x0 (Math/floor (apply min xs))
  3.1839 +        y0 (Math/floor (apply min ys))
  3.1840 +        x1 (Math/ceil (apply max xs))
  3.1841 +        y1 (Math/ceil (apply max ys))]
  3.1842 +    [x0 y0 (- x1 x0) (- y1 y0)]))
  3.1843 +
  3.1844 +(defn same-side?
  3.1845 +  "Given the points p1 and p2 and the reference point ref, is point p
  3.1846 +  on the same side of the line that goes through p1 and p2 as ref is?" 
  3.1847 +  [p1 p2 ref p]
  3.1848 +  (<=
  3.1849 +   0
  3.1850 +   (.dot 
  3.1851 +    (.cross (.subtract p2 p1) (.subtract p p1))
  3.1852 +    (.cross (.subtract p2 p1) (.subtract ref p1)))))
  3.1853 +
  3.1854 +(defn inside-triangle?
  3.1855 +  "Is the point inside the triangle?"
  3.1856 +  {:author "Dylan Holmes"}
  3.1857 +  [#^Triangle tri #^Vector3f p]
  3.1858 +  (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]
  3.1859 +    (and
  3.1860 +     (same-side? vert-1 vert-2 vert-3 p)
  3.1861 +     (same-side? vert-2 vert-3 vert-1 p)
  3.1862 +     (same-side? vert-3 vert-1 vert-2 p))))
  3.1863 +\end{verbatim}
  3.1864 +\caption{\label{in-triangle}Program to efficiently determine point inclusion in a triangle.}
  3.1865 +\end{listing}
  3.1866 +
  3.1867 +\subsubsection{Feeler Coordinates}
  3.1868 +\label{sec-2-10-6}
  3.1869 +
  3.1870 +The triangle-related functions above make short work of
  3.1871 +calculating the positions and orientations of each feeler in
  3.1872 +world-space.
  3.1873 +
  3.1874 +\begin{listing}
  3.1875 +\begin{verbatim}
  3.1876 +(defn feeler-pixel-coords
  3.1877 + "Returns the coordinates of the feelers in pixel space in lists, one
  3.1878 +  list for each triangle, ordered in the same way as (triangles) and
  3.1879 +  (pixel-triangles)."
  3.1880 + [#^Geometry geo image]
  3.1881 + (map 
  3.1882 +  (fn [pixel-triangle]
  3.1883 +    (filter
  3.1884 +     (fn [coord]
  3.1885 +       (inside-triangle? (->triangle pixel-triangle)
  3.1886 +                         (->vector3f coord)))
  3.1887 +       (white-coordinates image (convex-bounds pixel-triangle))))
  3.1888 +  (pixel-triangles geo image)))
  3.1889 +
  3.1890 +(defn feeler-world-coords 
  3.1891 + "Returns the coordinates of the feelers in world space in lists, one
  3.1892 +  list for each triangle, ordered in the same way as (triangles) and
  3.1893 +  (pixel-triangles)."
  3.1894 + [#^Geometry geo image]
  3.1895 + (let [transforms
  3.1896 +       (map #(triangles->affine-transform
  3.1897 +              (->triangle %1) (->triangle %2))
  3.1898 +            (pixel-triangles geo image)
  3.1899 +            (triangles geo))]
  3.1900 +   (map (fn [transform coords]
  3.1901 +          (map #(.mult transform (->vector3f %)) coords))
  3.1902 +        transforms (feeler-pixel-coords geo image))))
  3.1903 +\end{verbatim}
  3.1904 +\caption{\label{feeler-coordinates}Program to get the coordinates of ``feelers'' in both world and UV-coordinates.}
  3.1905 +\end{listing}
  3.1906 +
  3.1907 +\begin{listing}
  3.1908 +\begin{verbatim}
  3.1909 +(defn feeler-origins
  3.1910 +  "The world space coordinates of the root of each feeler."
  3.1911 +  [#^Geometry geo image]
  3.1912 +   (reduce concat (feeler-world-coords geo image)))
  3.1913 +
  3.1914 +(defn feeler-tips
  3.1915 +  "The world space coordinates of the tip of each feeler."
  3.1916 +  [#^Geometry geo image]
  3.1917 +  (let [world-coords (feeler-world-coords geo image)
  3.1918 +        normals
  3.1919 +        (map
  3.1920 +         (fn [triangle]
  3.1921 +           (.calculateNormal triangle)
  3.1922 +           (.clone (.getNormal triangle)))
  3.1923 +         (map ->triangle (triangles geo)))]
  3.1924 +
  3.1925 +    (mapcat (fn [origins normal]
  3.1926 +              (map #(.add % normal) origins))
  3.1927 +            world-coords normals)))
  3.1928 +
  3.1929 +(defn touch-topology
  3.1930 +  [#^Geometry geo image]
  3.1931 +  (collapse (reduce concat (feeler-pixel-coords geo image))))
  3.1932 +\end{verbatim}
  3.1933 +\caption{\label{feeler-tips}Program to get the position of the base and tip of each ``feeler''.}
  3.1934 +\end{listing}
  3.1935 +
  3.1936 +\subsubsection{Simulated Touch}
  3.1937 +\label{sec-2-10-7}
  3.1938 +
  3.1939 +Now that the functions to construct feelers are complete,
  3.1940 +\texttt{touch-kernel} generates functions to be called from within a
  3.1941 +simulation that perform the necessary physics collisions to
  3.1942 +collect tactile data, and \texttt{touch!} recursively applies it to every
  3.1943 +node in the creature.
  3.1944 +
  3.1945 +\begin{listing}
  3.1946 +\begin{verbatim}
  3.1947 +(defn set-ray [#^Ray ray #^Matrix4f transform
  3.1948 +               #^Vector3f origin #^Vector3f tip]
  3.1949 +  ;; Doing everything locally reduces garbage collection by enough to
  3.1950 +  ;; be worth it.
  3.1951 +  (.mult transform origin (.getOrigin ray))
  3.1952 +  (.mult transform tip (.getDirection ray))
  3.1953 +  (.subtractLocal (.getDirection ray) (.getOrigin ray))
  3.1954 +  (.normalizeLocal (.getDirection ray)))
  3.1955 +\end{verbatim}
  3.1956 +\caption{\label{set-ray}Efficient program to transform a ray from one position to another.}
  3.1957 +\end{listing}
  3.1958 +
  3.1959 +\begin{listing}
  3.1960 +\begin{verbatim}
  3.1961 +(defn touch-kernel
  3.1962 +  "Constructs a function which will return tactile sensory data from
  3.1963 +   'geo when called from inside a running simulation"
  3.1964 +  [#^Geometry geo]
  3.1965 +  (if-let
  3.1966 +      [profile (tactile-sensor-profile geo)]
  3.1967 +    (let [ray-reference-origins (feeler-origins geo profile)
  3.1968 +          ray-reference-tips (feeler-tips geo profile)
  3.1969 +          ray-length (tactile-scale geo)
  3.1970 +          current-rays (map (fn [_] (Ray.)) ray-reference-origins)
  3.1971 +          topology (touch-topology geo profile)
  3.1972 +          correction (float (* ray-length -0.2))]
  3.1973 +      ;; slight tolerance for very close collisions.
  3.1974 +      (dorun
  3.1975 +       (map (fn [origin tip]
  3.1976 +              (.addLocal origin (.mult (.subtract tip origin)
  3.1977 +                                       correction)))
  3.1978 +            ray-reference-origins ray-reference-tips))
  3.1979 +      (dorun (map #(.setLimit % ray-length) current-rays))
  3.1980 +      (fn [node]
  3.1981 +        (let [transform (.getWorldMatrix geo)]
  3.1982 +          (dorun
  3.1983 +           (map (fn [ray ref-origin ref-tip]
  3.1984 +                  (set-ray ray transform ref-origin ref-tip))
  3.1985 +                current-rays ray-reference-origins
  3.1986 +                ray-reference-tips))
  3.1987 +          (vector
  3.1988 +           topology
  3.1989 +           (vec
  3.1990 +            (for [ray current-rays]
  3.1991 +              (do
  3.1992 +                (let [results (CollisionResults.)]
  3.1993 +                  (.collideWith node ray results)
  3.1994 +                  (let [touch-objects
  3.1995 +                        (filter #(not (= geo (.getGeometry %)))
  3.1996 +                                results)
  3.1997 +                        limit (.getLimit ray)]
  3.1998 +                    [(if (empty? touch-objects)
  3.1999 +                       limit
  3.2000 +                       (let [response
  3.2001 +                             (apply min (map #(.getDistance %)
  3.2002 +                                             touch-objects))]
  3.2003 +                         (FastMath/clamp
  3.2004 +                          (float 
  3.2005 +                           (if (> response limit) (float 0.0)
  3.2006 +                               (+ response correction)))
  3.2007 +                           (float 0.0)
  3.2008 +                           limit)))
  3.2009 +                     limit])))))))))))
  3.2010 +\end{verbatim}
  3.2011 +\caption{\label{touch-kernel}This is the core of touch in \texttt{CORTEX}: each feeler follows the object it is bound to, reporting any collisions that may happen.}
  3.2012 +\end{listing}
  3.2013 +
  3.2014 +Armed with the \texttt{touch!} function, \texttt{CORTEX} becomes capable of
  3.2015 +giving creatures a sense of touch. A simple test is to create a
  3.2016 +cube that is outfitted with a uniform distribution of touch
  3.2017 +sensors. It can feel the ground and any balls that it touches.
  3.2018 +
  3.2019 +\begin{listing}
  3.2020 +\begin{verbatim}
  3.2021 +(defn touch! 
  3.2022 +  "Endow the creature with the sense of touch. Returns a sequence of
  3.2023 +   functions, one for each body part with a tactile-sensor-profile,
  3.2024 +   each of which when called returns sensory data for that body part."
  3.2025 +  [#^Node creature]
  3.2026 +  (filter
  3.2027 +   (comp not nil?)
  3.2028 +   (map touch-kernel
  3.2029 +        (filter #(isa? (class %) Geometry)
  3.2030 +                (node-seq creature)))))
  3.2031 +\end{verbatim}
  3.2032 +\caption{\label{touch}\texttt{CORTEX} interface for creating touch in a simulated creature.}
  3.2033 +\end{listing}
  3.2034 +
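The full test program is not shown here, but a minimal sketch looks
something like the following. The model path is made up, and
\texttt{world}, \texttt{box}, and \texttt{nodify} are the helpers from
the utilities chapter:

\begin{verbatim}
;; Sketch only: the model path is hypothetical, and the cube's
;; Blender file is assumed to carry a tactile-sensor-profile.
(defn test-touch-cube []
  (let [cube (load-blender-model "Models/test-touch/touch-cube.blend")
        floor (box 10 0.1 10 :mass 0)
        touch-fns (touch! cube)]
    (world
     (nodify [cube floor])
     standard-debug-controls
     identity                      ; no special setup
     (fn [world tpf]
       ;; each touch function takes a node to collide against and
       ;; returns [topology sensor-values] for its body part
       (dorun (map #(% (.getRootNode world)) touch-fns))))))
\end{verbatim}
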
  3.2035 +The \texttt{tactile-sensor-profile} image for the touch cube is a simple
  3.2036 +cross with a uniform distribution of touch sensors:
  3.2037 +
  3.2038 +\begin{figure}[htb]
  3.2039 +\centering
  3.2040 +\includegraphics[width=7cm]{./images/touch-profile.png}
  3.2041 +\caption{\label{touch-cube-uv-map}The touch profile for the touch-cube. Each pure white pixel defines a touch sensitive feeler.}
  3.2042 +\end{figure}
  3.2043 +
  3.2044 +\begin{figure}[htb]
  3.2045 +\centering
  3.2046 +\includegraphics[width=15cm]{./images/touch-cube.png}
  3.2047 +\caption{\label{touch-cube-uv-map-2}The touch cube reacts to cannonballs. The black, red, and white cross on the right is a visual display of the creature's touch. White means that it is feeling something strongly, black is not feeling anything, and gray is in-between. The cube can feel both the floor and the ball. Notice that when the ball causes the cube to tip, the bottom face can still feel part of the ground.}
  3.2048 +\end{figure}
  3.2049 +
  3.2050 +\subsection{Proprioception provides knowledge of your own body's position}
  3.2051 +\label{sec-2-11}
  3.2052 +
  3.2053 +Close your eyes, and touch your nose with your right index finger.
  3.2054 +How did you do it? You could not see your hand, and neither your
  3.2055 +hand nor your nose could use the sense of touch to guide the path
  3.2056 +of your hand. There are no sound cues, and taste and smell
  3.2057 +certainly don't provide any help. You know where your hand is
  3.2058 +without your other senses because of proprioception.
  3.2059 +
  3.2060 +Humans can sometimes lose this sense through viral infections or
  3.2061 +damage to the spinal cord or brain, and when they do, they lose
  3.2062 +the ability to control their own bodies without looking directly at
  3.2063 +the parts they want to move. In \href{http://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat}{The Man Who Mistook His Wife for a
  3.2064 +Hat} (\cite{man-wife-hat}), a woman named Christina loses this
  3.2065 +sense and has to learn how to move by carefully watching her arms
  3.2066 +and legs. She describes proprioception as the "eyes of the body,
  3.2067 +the way the body sees itself".
  3.2068 +
  3.2069 +Proprioception in humans is mediated by \href{http://en.wikipedia.org/wiki/Articular_capsule}{joint capsules}, \href{http://en.wikipedia.org/wiki/Muscle_spindle}{muscle
  3.2070 +spindles}, and the \href{http://en.wikipedia.org/wiki/Golgi_tendon_organ}{Golgi tendon organs}. These measure the relative
  3.2071 +positions of each body part by monitoring muscle strain and length.
  3.2072 +
  3.2073 +It's clear that this is a vital sense for fluid, graceful movement.
  3.2074 +It's also particularly easy to implement in jMonkeyEngine.
  3.2075 +
  3.2076 +My simulated proprioception calculates the relative angles of each
  3.2077 +joint from the rest position defined in the Blender file. This
  3.2078 +simulates the muscle-spindles and joint capsules. I will deal with
  3.2079 +Golgi tendon organs, which calculate muscle strain, in the next
  3.2080 +section (2.12).
  3.2081 +
  3.2082 +\subsubsection{Helper functions}
  3.2083 +\label{sec-2-11-1}
  3.2084 +
  3.2085 +\texttt{absolute-angle} calculates the angle between two vectors,
  3.2086 +relative to a third axis vector. This angle is the number of
  3.2087 +radians you have to move counterclockwise around the axis vector
  3.2088 +to get from the first to the second vector. Unlike a normal
  3.2089 +dot-product angle, it is not commutative: the argument order matters.
  3.2090 +
  3.2091 +The purpose of these functions is to build a system of angle
  3.2092 +measurement that is biologically plausible.
  3.2093 +
  3.2094 +\begin{listing}
  3.2095 +\begin{verbatim}
  3.2096 +(defn right-handed?
  3.2097 +  "true iff the three vectors form a right handed coordinate
  3.2098 +   system. The three vectors do not have to be normalized or
  3.2099 +   orthogonal."
  3.2100 +  [vec1 vec2 vec3]
  3.2101 +  (pos? (.dot (.cross vec1 vec2) vec3)))
  3.2102 +
  3.2103 +(defn absolute-angle
  3.2104 +  "The angle between 'vec1 and 'vec2 around 'axis. In the range 
  3.2105 +   [0 (* 2 Math/PI)]."
  3.2106 +  [vec1 vec2 axis]
  3.2107 +  (let [angle (.angleBetween vec1 vec2)]
  3.2108 +    (if (right-handed? vec1 vec2 axis)
  3.2109 +      angle (- (* 2 Math/PI) angle))))
  3.2110 +\end{verbatim}
  3.2111 +\caption{\label{helpers}Program to measure angles along a vector}
  3.2112 +\end{listing}
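
For example, a quarter turn counterclockwise around the Z axis
measures \(\pi/2\), while the same two vectors given in the opposite
order measure \(3\pi/2\) (a REPL sketch):

\begin{verbatim}
(absolute-angle Vector3f/UNIT_X Vector3f/UNIT_Y Vector3f/UNIT_Z)
;; => 1.5707964     ; pi/2 -- a quarter turn counterclockwise
(absolute-angle Vector3f/UNIT_Y Vector3f/UNIT_X Vector3f/UNIT_Z)
;; => 4.7123889...  ; 3*pi/2 -- the order of the vectors matters
\end{verbatim}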
  3.2113 +
  3.2114 +\subsubsection{Proprioception Kernel}
  3.2115 +\label{sec-2-11-2}
  3.2116 +
  3.2117 +Given a joint, \texttt{proprioception-kernel} produces a function that
  3.2118 +calculates the Euler angles between the objects the joint
  3.2119 +connects. The only tricky part here is making the angles relative
  3.2120 +to the joint's initial ``straightness''.
  3.2121 +
  3.2122 +\begin{listing}
  3.2123 +\begin{verbatim}
  3.2124 +(defn proprioception-kernel
  3.2125 +  "Returns a function which returns proprioceptive sensory data when
  3.2126 +  called inside a running simulation."
  3.2127 +  [#^Node parts #^Node joint]
  3.2128 +  (let [[obj-a obj-b] (joint-targets parts joint)
  3.2129 +        joint-rot (.getWorldRotation joint)
  3.2130 +        x0 (.mult joint-rot Vector3f/UNIT_X)
  3.2131 +        y0 (.mult joint-rot Vector3f/UNIT_Y)
  3.2132 +        z0 (.mult joint-rot Vector3f/UNIT_Z)]
  3.2133 +    (fn []
  3.2134 +      (let [rot-a (.clone (.getWorldRotation obj-a))
  3.2135 +            rot-b (.clone (.getWorldRotation obj-b))
  3.2136 +            x (.mult rot-a x0)
  3.2137 +            y (.mult rot-a y0)
  3.2138 +            z (.mult rot-a z0)
  3.2139 +
  3.2140 +            X (.mult rot-b x0)
  3.2141 +            Y (.mult rot-b y0)
  3.2142 +            Z (.mult rot-b z0)
  3.2143 +            heading  (Math/atan2 (.dot X z) (.dot X x))
  3.2144 +            pitch  (Math/atan2 (.dot X y) (.dot X x))
  3.2145 +
  3.2146 +            ;; rotate x-vector back to origin
  3.2147 +            reverse
  3.2148 +            (doto (Quaternion.)
  3.2149 +              (.fromAngleAxis
  3.2150 +               (.angleBetween X x)
  3.2151 +               (let [cross (.normalize (.cross X x))]
  3.2152 +                 (if (= 0 (.length cross)) y cross))))
  3.2153 +            roll (absolute-angle (.mult reverse Y) y x)]
  3.2154 +        [heading pitch roll]))))
  3.2155 +
  3.2156 +(defn proprioception!
  3.2157 +  "Endow the creature with the sense of proprioception. Returns a
  3.2158 +   sequence of functions, one for each child of the \"joints\" node in
  3.2159 +   the creature, which each report proprioceptive information about
  3.2160 +   that joint."
  3.2161 +  [#^Node creature]
  3.2162 +  ;; extract the body's joints
  3.2163 +  (let [senses (map (partial proprioception-kernel creature)
  3.2164 +                    (joints creature))]
  3.2165 +    (fn []
  3.2166 +      (map #(%) senses))))
  3.2167 +\end{verbatim}
  3.2168 +\caption{\label{proprioception}Program to return biologically reasonable proprioceptive data for each joint.}
  3.2169 +\end{listing}
  3.2170 +
  3.2171 +\texttt{proprioception!} maps \texttt{proprioception-kernel} across all the
  3.2172 +joints of the creature. It uses the same list of joints that
  3.2173 +\texttt{joints} uses. Proprioception is the easiest sense to implement in
  3.2174 +\texttt{CORTEX}, and it will play a crucial role when efficiently
  3.2175 +implementing empathy.
  3.2176 +
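In use, \texttt{proprioception!} yields a single thunk that reports
one \texttt{[heading pitch roll]} triple per joint. A hypothetical
REPL interaction (the model path is made up):

\begin{verbatim}
;; Hypothetical REPL sketch; the model path is made up.
(def hand (load-blender-model "Models/test-creature/hand.blend"))
(def proprio (proprioception! hand))
;; Inside a running simulation, each call reports the current
;; [heading pitch roll] of every joint, in radians:
(proprio)
;; => ([0.0 0.0 0.0] [0.31 -0.02 0.0] ...)
\end{verbatim}
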
  3.2177 +\begin{figure}[htb]
  3.2178 +\centering
  3.2179 +\includegraphics[width=11cm]{./images/proprio.png}
  3.2180 +\caption{\label{proprio}In the upper right corner, the three proprioceptive angle measurements are displayed. Red is yaw, Green is pitch, and White is roll.}
  3.2181 +\end{figure}
  3.2182 +
  3.2183 +\subsection{Muscles contain both sensors and effectors}
  3.2184 +\label{sec-2-12}
  3.2185 +
  3.2186 +Surprisingly enough, terrestrial creatures only move by using
  3.2187 +torque applied about their joints. There's not a single straight
  3.2188 +line of force in the human body at all! (A straight line of force
  3.2189 +would correspond to some sort of jet or rocket propulsion.)
  3.2190 +
  3.2191 +In humans, muscles are composed of muscle fibers which can contract
  3.2192 +to exert force. The muscle fibers which compose a muscle are
  3.2193 +partitioned into discrete groups which are each controlled by a
  3.2194 +single alpha motor neuron. A single alpha motor neuron might
  3.2195 +control as few as three or as many as one thousand muscle
  3.2196 +fibers. When the alpha motor neuron is engaged by the spinal cord,
  3.2197 +it activates all of the muscle fibers to which it is attached. The
  3.2198 +spinal cord generally engages the alpha motor neurons which control
  3.2199 +few muscle fibers before the motor neurons which control many
  3.2200 +muscle fibers. This recruitment strategy allows for precise
  3.2201 +movements at low strength. The collection of all motor neurons that
  3.2202 +control a muscle is called the motor pool. The brain essentially
  3.2203 +says "activate 30\% of the motor pool" and the spinal cord recruits
  3.2204 +motor neurons until 30\% are activated. Since the distribution of
  3.2205 +power among motor neurons is unequal and recruitment goes from
  3.2206 +weakest to strongest, the first 30\% of the motor pool might be 5\%
  3.2207 +of the strength of the muscle.
  3.2208 +
  3.2209 +My simulated muscles follow a similar design: Each muscle is
  3.2210 +defined by a 1-D array of numbers (the "motor pool"). Each entry in
  3.2211 +the array represents a motor neuron which controls a number of
  3.2212 +muscle fibers equal to the value of the entry. Each muscle has a
  3.2213 +scalar strength factor which determines the total force the muscle
  3.2214 +can exert when all motor neurons are activated. The effector
  3.2215 +function for a muscle takes a number to index into the motor pool,
  3.2216 +and then "activates" all the motor neurons whose index is lower or
  3.2217 +equal to the number. Each motor-neuron will apply force in
  3.2218 +proportion to its value in the array. Lower values cause less
  3.2219 +force. The lower values can be put at the "beginning" of the 1-D
  3.2220 +array to simulate the layout of actual human muscles, which are
  3.2221 +capable of more precise movements when exerting less force. Or, the
  3.2222 +motor pool can simulate more exotic recruitment strategies which do
  3.2223 +not correspond to human muscles.
  3.2224 +
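To make the recruitment arithmetic concrete, here is a toy
calculation (illustrative numbers, not \texttt{CORTEX} code):

\begin{verbatim}
(def pool [1 2 4 8])                     ; motor neuron strengths
(def pool-integral (reductions + pool))  ; => (1 3 7 15)
(def strength 10.0)
(defn force-at [n]                       ; activate neurons 0..n
  (* strength (/ (nth pool-integral n) (last pool-integral))))
(force-at 0)  ;; => 0.666...  weakest neurons first: fine control
(force-at 3)  ;; => 10.0      full recruitment: maximum force
\end{verbatim}
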
  3.2225 +This 1D array is defined in an image file for ease of
  3.2226 +creation/visualization. Here is an example muscle profile image.
  3.2227 +
  3.2228 +\begin{figure}[htb]
  3.2229 +\centering
  3.2230 +\includegraphics[width=7cm]{./images/basic-muscle.png}
  3.2231 +\caption{\label{muscle-recruit}A muscle profile image that describes the strengths of each motor neuron in a muscle. White is weakest and dark red is strongest. This particular pattern has weaker motor neurons at the beginning, just like human muscle.}
  3.2232 +\end{figure}
  3.2233 +
  3.2234 +\subsubsection{Muscle meta-data}
  3.2235 +\label{sec-2-12-1}
  3.2236 +
  3.2237 +\begin{listing}
  3.2238 +\begin{verbatim}
  3.2239 +(defn muscle-profile-image
  3.2240 +  "Get the muscle-profile image from the node's Blender meta-data."
  3.2241 +  [#^Node muscle]
  3.2242 +  (if-let [image (meta-data muscle "muscle")]
  3.2243 +    (load-image image)))
  3.2244 +
  3.2245 +(defn muscle-strength
  3.2246 +  "Return the strength of this muscle, or 1 if it is not defined."
  3.2247 +  [#^Node muscle]
  3.2248 +  (if-let [strength (meta-data muscle "strength")]
  3.2249 +    strength 1))
  3.2250 +
  3.2251 +(defn motor-pool
  3.2252 +  "Return a vector where each entry is the strength of the \"motor
  3.2253 +   neuron\" at that part in the muscle."
  3.2254 +  [#^Node muscle]
  3.2255 +  (let [profile (muscle-profile-image muscle)]
  3.2256 +    (vec
  3.2257 +     (let [width (.getWidth profile)]
  3.2258 +       (for [x (range width)]
  3.2259 +         (- 255
  3.2260 +            (bit-and
  3.2261 +             0x0000FF
  3.2262 +             (.getRGB profile x 0))))))))
  3.2263 +\end{verbatim}
  3.2264 +\caption{\label{motor-pool}Program to deal with loading muscle data from a Blender file's metadata.}
  3.2265 +\end{listing}
  3.2266 +
  3.2267 +Of note here is \texttt{motor-pool}, which interprets the muscle-profile
  3.2268 +image in a way that allows me to use gradients between white and
  3.2269 +red, instead of the shades of gray I've been using for all the
  3.2270 +other senses. This is purely an aesthetic touch.
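
Concretely, the expression keeps only the blue channel of each pixel,
so a worked example looks like this:

\begin{verbatim}
;; Worked examples of the blue-channel trick:
(- 255 (bit-and 0x0000FF 0xFFFFFF)) ;; white    => 0   (weakest)
(- 255 (bit-and 0x0000FF 0xFF0000)) ;; dark red => 255 (strongest)
\end{verbatim}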
  3.2271 +
  3.2272 +\subsubsection{Creating muscles}
  3.2273 +\label{sec-2-12-2}
  3.2274 +
  3.2275 +\begin{listing}
  3.2276 +\begin{verbatim}
  3.2277 +(defn movement-kernel
  3.2278 +  "Returns a function which when called with a integer value inside a
  3.2279 +   running simulation will cause movement in the creature according
  3.2280 +   to the muscle's position and strength profile. Each function
  3.2281 +   returns the amount of force applied / max force."
  3.2282 +  [#^Node creature #^Node muscle]
  3.2283 +  (let [target (closest-node creature muscle)
  3.2284 +        axis
  3.2285 +        (.mult (.getWorldRotation muscle) Vector3f/UNIT_Y)
  3.2286 +        strength (muscle-strength muscle)
  3.2287 +        
  3.2288 +        pool (motor-pool muscle)
  3.2289 +        pool-integral (reductions + pool)
  3.2290 +        forces
  3.2291 +        (vec (map #(float (* strength (/ % (last pool-integral))))
  3.2292 +                  pool-integral))
  3.2293 +        control (.getControl target RigidBodyControl)]
  3.2294 +    (fn [n]
  3.2295 +      (let [pool-index (max 0 (min n (dec (count pool))))
  3.2296 +            force (forces pool-index)]
  3.2297 +        (.applyTorque control (.mult axis force))
  3.2298 +        (float (/ force strength))))))
  3.2299 +
  3.2300 +(defn movement!
  3.2301 +  "Endow the creature with the power of movement. Returns a sequence
  3.2302 +   of functions, each of which accept an integer value and will
  3.2303 +   activate their corresponding muscle."
  3.2304 +  [#^Node creature]
  3.2305 +    (for [muscle (muscles creature)]
  3.2306 +      (movement-kernel creature muscle)))
  3.2307 +\end{verbatim}
  3.2308 +\caption{\label{muscle-kernel}This is the core movement function in \texttt{CORTEX}, which implements muscles that report on their activation.}
  3.2309 +\end{listing}
  3.2310 +
  3.2311 +
  3.2312 +\texttt{movement-kernel} creates a function that controls the movement
  3.2313 +of the nearest physical node to the muscle node. The muscle exerts
  3.2314 +a rotational force dependent on its orientation to the object in
  3.2315 +the Blender file. The function returned by \texttt{movement-kernel} is
  3.2316 +also a sense function: it returns the percent of the total muscle
  3.2317 +strength that is currently being employed. This is analogous to
  3.2318 +muscle tension in humans and completes the sense of proprioception
  3.2319 +begun in the last section.
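
A hypothetical frame of motor control (\texttt{creature} stands for
any model whose Blender file defines muscle nodes):

\begin{verbatim}
;; Sketch only, to be called inside a running simulation:
(def move-fns (movement! creature))
;; Activate the first muscle up to motor neuron 30; the return
;; value doubles as the muscle-tension sense:
(def tension ((first move-fns) 30))
\end{verbatim}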
  3.2320 +
  3.2321 +\subsection{\texttt{CORTEX} brings complex creatures to life!}
  3.2322 +\label{sec-2-13}
  3.2323 +
  3.2324 +The ultimate test of \texttt{CORTEX} is to create a creature with the full
  3.2325 +gamut of senses and put it through its paces. 
  3.2326 +
  3.2327 +With all senses enabled, my right hand model looks like an
  3.2328 +intricate marionette hand with several strings for each finger:
  3.2329 +
  3.2330 +\begin{figure}[htb]
  3.2331 +\centering
  3.2332 +\includegraphics[width=11cm]{./images/hand-with-all-senses2.png}
  3.2333 +\caption{\label{hand-nodes-1}View of the hand model with all sense nodes. You can see the joint, muscle, ear, and eye nodes here.}
  3.2334 +\end{figure}
  3.2335 +
  3.2336 +\begin{figure}[htb]
  3.2337 +\centering
  3.2338 +\includegraphics[width=15cm]{./images/hand-with-all-senses3.png}
  3.2339 +\caption{\label{hand-nodes-2}An alternate view of the hand.}
  3.2340 +\end{figure}
  3.2341 +
  3.2342 +With the hand fully rigged with senses, I can run it through a
  3.2343 +test that exercises everything at once. 
  3.2344 +
  3.2345 +\begin{figure}[htb]
  3.2346 +\centering
  3.2347 +\includegraphics[width=15cm]{./images/integration.png}
  3.2348 +\caption{\label{integration}Selected frames from a full test of the hand with all senses. Note especially the interactions the hand has with itself: it feels its own palm and fingers, and when it curls its fingers, it sees them with its eye (which is located in the center of the palm). The red block appears with a pure tone sound. The hand then uses its muscles to launch the cube!}
  3.2349 +\end{figure}
  3.2350 +
  3.2351 +\subsection{\texttt{CORTEX} enables many possibilities for further research}
  3.2352 +\label{sec-2-14}
  3.2353 +
  3.2354 +Oftentimes, the hardest part of building a system involving
  3.2355 +creatures is dealing with physics and graphics. \texttt{CORTEX} removes
  3.2356 +much of this initial difficulty and leaves researchers free to
  3.2357 +directly pursue their ideas. I hope that even novices with a
  3.2358 +passing curiosity about simulated touch or creature evolution will
  3.2359 +be able to use \texttt{CORTEX} for experimentation. \texttt{CORTEX} is a completely
  3.2360 +simulated world, and far from being a disadvantage, its simulated
  3.2361 +nature enables you to create senses and creatures that would be
  3.2362 +impossible to make in the real world.
  3.2363 +
  3.2364 +While not by any means a complete list, here are some paths
  3.2365 +\texttt{CORTEX} is well suited to help you explore:
  3.2366 +
  3.2367 +\begin{description}
  3.2368 +\item[{Empathy        }] my empathy program leaves many areas for
  3.2369 +improvement, among which are using vision to infer
  3.2370 +proprioception and looking up sensory experience with imagined
  3.2371 +vision, touch, and sound.
  3.2372 +\item[{Evolution}] Karl Sims created a rich environment for simulating
  3.2373 +the evolution of creatures on a Connection Machine
  3.2374 +(\cite{sims-evolving-creatures}). Today, this can be redone
  3.2375 +and expanded with \texttt{CORTEX} on an ordinary computer.
  3.2376 +\item[{Exotic senses }] \texttt{CORTEX} enables many fascinating senses that are
  3.2377 +not possible to build in the real world. For example,
  3.2378 +telekinesis is an interesting avenue to explore. You can also
  3.2379 +make a ``semantic'' sense which looks up metadata tags on
  3.2380 +objects in the environment; the metadata tags might contain
  3.2381 +other sensory information.
  3.2382 +\item[{Imagination via subworlds}] this would involve a creature with
  3.2383 +an effector which creates an entire new sub-simulation where
  3.2384 +the creature has direct control over placement/creation of
  3.2385 +objects via simulated telekinesis. The creature observes this
  3.2386 +sub-world through its normal senses and uses its observations
  3.2387 +to make predictions about its top level world.
  3.2388 +\item[{Simulated prescience}] step the simulation forward a few ticks,
  3.2389 +gather sensory data, then supply this data for the creature as
  3.2390 +one of its actual senses. The cost of prescience is slowing
  3.2391 +the simulation down by a factor proportional to however far
  3.2392 +you want the entities to see into the future. What happens
  3.2393 +when two evolved creatures that can each see into the future
  3.2394 +fight each other?
  3.2395 +\item[{Swarm creatures}] Program a group of creatures that cooperate
  3.2396 +with each other. Because the creatures would be simulated, you
  3.2397 +could investigate computationally complex rules of behavior
  3.2398 +which still, from the group's point of view, would happen in
  3.2399 +real time. Interactions could be as simple as cellular
  3.2400 +organisms communicating via flashing lights, or as complex as
  3.2401 +humanoids completing social tasks, etc.
  3.2402 +\item[{\texttt{HACKER} for writing muscle-control programs}] Presented with a
  3.2403 +low-level muscle control / sense API, generate higher level
  3.2404 +programs for accomplishing various stated goals. Example goals
  3.2405 +might be "extend all your fingers" or "move your hand into the
  3.2406 +area with blue light" or "decrease the angle of this joint".
  3.2407 +It would be like Sussman's HACKER, except it would operate
  3.2408 +with much more data in a more realistic world. Start off with
  3.2409 +"calisthenics" to develop subroutines over the motor control
  3.2410 +API. The low level programming code might be a Turing machine
  3.2411 +that could develop programs to iterate over a "tape" where
  3.2412 +each entry in the tape could control recruitment of the fibers
  3.2413 +in a muscle.
  3.2414 +\item[{Sense fusion}] There is much work to be done on sense
  3.2415 +integration -- building up a coherent picture of the world and
  3.2416 +the things in it. With \texttt{CORTEX} as a base, you can explore
  3.2417 +concepts like self-organizing maps or cross modal clustering
  3.2418 +in ways that have never before been tried.
  3.2419 +\item[{Inverse kinematics}] experiments in sense guided motor control
  3.2420 +are easy given \texttt{CORTEX}'s support -- you can get right to the
  3.2421 +hard control problems without worrying about physics or
  3.2422 +senses.
  3.2423 +\end{description}
  3.2424 +
  3.2425 +\newpage
  3.2426 +
  3.2427 +\section{\texttt{EMPATH}: action recognition in a simulated worm}
  3.2428 +\label{sec-3}
  3.2429 +
  3.2430 +Here I develop a computational model of empathy, using \texttt{CORTEX} as a
  3.2431 +base. Empathy in this context is the ability to observe another
  3.2432 +creature and infer what sorts of sensations that creature is
  3.2433 +feeling. My empathy algorithm involves multiple phases. First is
  3.2434 +free-play, where the creature moves around and gains sensory
  3.2435 +experience. From this experience I construct a representation of the
  3.2436 +creature's sensory state space, which I call \(\Phi\)-space. Using
  3.2437 +\(\Phi\)-space, I construct an efficient function which takes the
  3.2438 +limited data that comes from observing another creature and enriches
  3.2439 +it with a full complement of imagined sensory data. I can then use
  3.2440 +the imagined sensory data to recognize what the observed creature is
  3.2441 +doing and feeling, using straightforward embodied action predicates.
  3.2442 +This is all demonstrated using a simple worm-like creature, and
  3.2443 +recognizing worm-actions based on limited data.
  3.2444 +
  3.2445 +\begin{figure}[htb]
  3.2446 +\centering
  3.2447 +\includegraphics[width=10cm]{./images/basic-worm-view.png}
  3.2448 +\caption{\label{basic-worm-view}Here is the worm with which we will be working. It is composed of 5 segments. Each segment has a pair of extensor and flexor muscles. Each of the worm's four joints is a hinge joint which allows about 30 degrees of rotation to either side. Each segment of the worm is touch-capable and has a uniform distribution of touch sensors on each of its faces. Each joint has a proprioceptive sense to detect relative positions. The worm segments are all the same except for the first one, which has a much higher weight than the others to allow for easy manual motor control.}
  3.2449 +\end{figure}
  3.2450 +
  3.2451 +\begin{listing}
  3.2452 +\begin{verbatim}
  3.2453 +(defn worm []
  3.2454 +  (let [model (load-blender-model "Models/worm/worm.blend")]
  3.2455 +    {:body (doto model (body!))
  3.2456 +     :touch (touch! model)
  3.2457 +     :proprioception (proprioception! model)
  3.2458 +     :muscles (movement! model)}))
  3.2459 +\end{verbatim}
  3.2460 +\caption{\label{get-worm}Program for reading a worm from a Blender file and outfitting it with the senses of proprioception, touch, and the ability to move, as specified in the Blender file.}
  3.2461 +\end{listing}
  3.2462 +
  3.2463 +\subsection{Embodiment factors action recognition into manageable parts}
  3.2464 +\label{sec-3-1}
  3.2465 +
  3.2466 +Using empathy, I divide the problem of action recognition into a
  3.2467 +recognition process expressed in the language of a full complement
  3.2468 +of senses, and an imaginative process that generates full sensory
  3.2469 +data from partial sensory data. Splitting the action recognition
  3.2470 +problem in this manner greatly reduces the total amount of work to
  3.2471 +recognize actions: The imaginative process is mostly just matching
  3.2472 +previous experience, and the recognition process gets to use all
  3.2473 +the senses to directly describe any action.
  3.2474 +
  3.2475 +\subsection{Action recognition is easy with a full gamut of senses}
  3.2476 +\label{sec-3-2}
  3.2477 +
  3.2478 +Embodied representation using multiple senses such as touch,
  3.2479 +proprioception, and muscle tension turns out to be exceedingly
  3.2480 +efficient at describing body-centered actions. It is the right
  3.2481 +language for the job. For example, it takes only around 5 lines of
  3.2482 +Clojure code to describe the action of curling using embodied
  3.2483 +primitives. It takes about 10 lines to describe the seemingly
  3.2484 +complicated action of wiggling.
  3.2485 +
  3.2486 +The following action predicates each take a stream of sensory
  3.2487 +experience, observe however much of it they desire, and decide
  3.2488 +whether the worm is doing the action they describe. \texttt{curled?}
  3.2489 +relies on proprioception, \texttt{resting?} relies on touch, \texttt{wiggling?}
  3.2490 +relies on a Fourier analysis of muscle contraction, and
  3.2491 +\texttt{grand-circle?} relies on touch and reuses \texttt{curled?} in its
  3.2492 +definition, showing how embodied predicates can be composed.
  3.2493 +
  3.2494 +
  3.2495 +\begin{listing}
  3.2496 +\begin{verbatim}
  3.2497 +(defn curled?
  3.2498 +  "Is the worm curled up?"
  3.2499 +  [experiences]
  3.2500 +  (every?
  3.2501 +   (fn [[_ _ bend]]
  3.2502 +     (> (Math/sin bend) 0.64))
  3.2503 +   (:proprioception (peek experiences))))
  3.2504 +\end{verbatim}
  3.2505 +\caption{\label{curled}Program for detecting whether the worm is curled. This is the simplest action predicate, because it only uses the last frame of sensory experience, and only uses proprioceptive data. Even this simple predicate, however, is automatically frame independent and ignores vermopomorphic\protect\footnotemark \space differences such as worm textures and colors.}
  3.2506 +\end{listing}
  3.2507 +
  3.2508 +\footnotetext{Like \emph{anthropomorphic} except for worms instead of humans.}
  3.2509 +
  3.2510 +\begin{listing}
  3.2511 +\begin{verbatim}
  3.2512 +(defn contact
  3.2513 +  "Determine how much contact a particular worm segment has with
  3.2514 +   other objects. Returns a value between 0 and 1, where 1 is full
  3.2515 +   contact and 0 is no contact."
  3.2516 +  [touch-region [coords contact :as touch]]
  3.2517 +  (-> (zipmap coords contact)
  3.2518 +      (select-keys touch-region)
  3.2519 +      (vals)
  3.2520 +      (#(map first %))
  3.2521 +      (average)
  3.2522 +      (* 10)
  3.2523 +      (- 1)
  3.2524 +      (Math/abs)))
  3.2525 +\end{verbatim}
  3.2526 +\caption{\label{touch-summary}Program for summarizing the touch information in a patch of skin.}
  3.2527 +\end{listing}
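
To see the arithmetic, suppose a region of three feelers, each
reporting a \texttt{[response limit]} pair with a limit of 0.1 (the
numbers are illustrative; \texttt{average} is the helper from the
utilities chapter):

\begin{verbatim}
(def region #{[0 0] [1 0] [2 0]})
(contact region
         [[[0 0] [1 0] [2 0]]               ; feeler coordinates
          [[0.0 0.1] [0.0 0.1] [0.0 0.1]]]) ; all feelers touching
;; => 1.0  ; all-untouched pairs [0.1 0.1] would give 0.0 instead
\end{verbatim}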
  3.2528 +
  3.2529 +
  3.2530 +\begin{listing}
  3.2531 +\begin{verbatim}
  3.2532 +(def worm-segment-bottom (rect-region [8 15] [14 22]))
  3.2533 +
  3.2534 +(defn resting?
  3.2535 +  "Is the worm resting on the ground?"
  3.2536 +  [experiences]
  3.2537 +  (every?
  3.2538 +   (fn [touch-data]
  3.2539 +     (< 0.9 (contact worm-segment-bottom touch-data)))
  3.2540 +   (:touch (peek experiences))))
  3.2541 +\end{verbatim}
  3.2542 +\caption{\label{resting}Program for detecting whether the worm is at rest. This program uses a summary of the tactile information from the underbelly of the worm, and is only true if every segment is touching the floor. Note that this function contains no references to proprioception at all.}
  3.2543 +\end{listing}
  3.2544 +
  3.2545 +\begin{listing}
  3.2546 +\begin{verbatim}
  3.2547 +(def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
  3.2548 +
  3.2549 +(def worm-segment-top-tip (rect-region [0 15] [7 22]))
  3.2550 +
  3.2551 +(defn grand-circle?
  3.2552 +  "Does the worm form a majestic circle (one end touching the other)?"
  3.2553 +  [experiences]
  3.2554 +  (and (curled? experiences)
  3.2555 +       (let [worm-touch (:touch (peek experiences))
  3.2556 +             tail-touch (worm-touch 0)
  3.2557 +             head-touch (worm-touch 4)]
  3.2558 +         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
  3.2559 +              (< 0.55 (contact worm-segment-top-tip    head-touch))))))
  3.2560 +\end{verbatim}
  3.2561 +\caption{\label{grand-circle}Program for detecting whether the worm is curled up into a full circle. Here the embodied approach begins to shine, as I am able to both use a previous action predicate (\texttt{curled?}) as well as the direct tactile experience of the head and tail.}
  3.2562 +\end{listing}
  3.2563 +
  3.2564 +
  3.2565 +\begin{listing}
  3.2566 +\begin{verbatim}
  3.2567 +(defn fft [nums]
  3.2568 +  (map
  3.2569 +   #(.getReal %)
  3.2570 +   (.transform
  3.2571 +    (FastFourierTransformer. DftNormalization/STANDARD)
  3.2572 +    (double-array nums) TransformType/FORWARD)))
  3.2573 +
  3.2574 +(def indexed (partial map-indexed vector))
  3.2575 +
  3.2576 +(defn max-indexed [s]
  3.2577 +  (first (sort-by (comp - second) (indexed s))))
  3.2578 +
  3.2579 +(defn wiggling?
  3.2580 +  "Is the worm wiggling?"
  3.2581 +  [experiences]
  3.2582 +  (let [analysis-interval 0x40]
  3.2583 +    (when (> (count experiences) analysis-interval)
  3.2584 +      (let [a-flex 3
  3.2585 +            a-ex   2
  3.2586 +            muscle-activity
  3.2587 +            (map :muscle (vector:last-n experiences analysis-interval))
  3.2588 +            base-activity
  3.2589 +            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
  3.2590 +        (= 2
  3.2591 +           (first
  3.2592 +            (max-indexed
  3.2593 +             (map #(Math/abs %)
  3.2594 +                  (take 20 (fft base-activity))))))))))
  3.2595 +\end{verbatim}
  3.2596 +\caption{\label{wiggling}Program for detecting whether the worm has been wiggling for the last few frames. It uses a Fourier analysis of the muscle contractions of the worm's tail to determine wiggling. This is significant because there is no particular frame that clearly indicates that the worm is wiggling --- only when multiple frames are analyzed together is the wiggling revealed. Defining wiggling this way also gives the worm an opportunity to learn and recognize ``frustrated wiggling'', where the worm tries to wiggle but can't. Frustrated wiggling is very visually different from actual wiggling, but this definition gives it to us for free.}
  3.2597 +\end{listing}
  3.2598 +
  3.2599 +With these action predicates, I can now recognize the actions of
  3.2600 +the worm while it is moving under my control and I have access to
  3.2601 +all the worm's senses.
  3.2602 +
  3.2603 +\begin{listing}
  3.2604 +\begin{verbatim}
  3.2605 +(defn debug-experience
  3.2606 +  [experiences text]
  3.2607 +  (cond
  3.2608 +   (grand-circle? experiences) (.setText text "Grand Circle")
  3.2609 +   (curled? experiences)       (.setText text "Curled")
  3.2610 +   (wiggling? experiences)     (.setText text "Wiggling")
  3.2611 +   (resting? experiences)      (.setText text "Resting")))
  3.2612 +\end{verbatim}
  3.2613 +\caption{\label{report-worm-activity}Use the action predicates defined earlier to report on what the worm is doing while in simulation.}
  3.2614 +\end{listing}
  3.2615 +
  3.2616 +\begin{figure}[htb]
  3.2617 +\centering
  3.2618 +\includegraphics[width=10cm]{./images/worm-identify-init.png}
  3.2619 +\caption{\label{worm-identify-init}Using \texttt{debug-experience}, the body-centered predicates work together to classify the behavior of the worm. The predicates are operating with access to the worm's full sensory data.}
  3.2620 +\end{figure}
  3.2621 +
  3.2622 +These action predicates satisfy the recognition requirement of an
  3.2623 +empathic recognition system. There is power in the simplicity of
  3.2624 +the action predicates. They describe their actions without getting
  3.2625 +confused by visual details of the worm. Each one is independent of
  3.2626 +position and rotation, but more than that, they are each
  3.2627 +independent of irrelevant visual details of the worm and the
  3.2628 +environment. They will work regardless of whether the worm is a
  3.2629 +different color or heavily textured, or if the environment has
  3.2630 +strange lighting.
  3.2631 +
  3.2632 +Consider how the human act of jumping might be described with
  3.2633 +body-centered action predicates: You might specify that jumping is
  3.2634 +mainly the feeling of your knees bending, your thigh muscles
  3.2635 +contracting, and your inner ear experiencing a certain sort of back
  3.2636 +and forth acceleration. This representation is a very concrete
  3.2637 +description of jumping, couched in terms of muscles and senses, but
  3.2638 +it also has the ability to describe almost all kinds of jumping, a
  3.2639 +generality that you might think could only be achieved by a very
  3.2640 +abstract description. The body centered jumping predicate does not
  3.2641 +have terms that consider the color of a person's skin or whether
  3.2642 +they are male or female; instead it gets right to the meat of what
  3.2643 +jumping actually \emph{is}.
  3.2644 +
  3.2645 +Of course, the action predicates are not directly applicable to
  3.2646 +video data, which lacks the advanced sensory information that they
  3.2647 +require!
  3.2648 +
  3.2649 +The trick now is to make the action predicates work even when the
  3.2650 +sensory data on which they depend is absent!
  3.2651 +
  3.2652 +\subsection{\(\Phi\)-space describes the worm's experiences}
  3.2653 +\label{sec-3-3}
  3.2654 +
  3.2655 +As a first step towards building empathy, I need to gather all of
  3.2656 +the worm's experiences during free play. I use a simple vector to
  3.2657 +store all the experiences. 
  3.2658 +
  3.2659 +Each element of the experience vector exists in the vast space of
  3.2660 +all possible worm-experiences. Most of this vast space is actually
  3.2661 +unreachable due to physical constraints of the worm's body. For
  3.2662 +example, the worm's segments are connected by hinge joints that put
  3.2663 +a practical limit on the worm's range of motions without limiting
  3.2664 +its degrees of freedom. Some groupings of senses are impossible;
  3.2665 +the worm cannot be bent into a circle so that its ends are
  3.2666 +touching and at the same time not also experience the sensation of
  3.2667 +touching itself.
  3.2668 +
  3.2669 +As the worm moves around during free play and its experience vector
  3.2670 +grows larger, the vector begins to define a subspace which is all
  3.2671 +the sensations the worm can practically experience during normal
  3.2672 +operation. I call this subspace \(\Phi\)-space, short for
  3.2673 +physical-space. The experience vector defines a path through
  3.2674 +\(\Phi\)-space. This path has interesting properties that all derive
  3.2675 +from physical embodiment. The proprioceptive components of the path
  3.2676 +vary smoothly, because in order for the worm to move from one
  3.2677 +position to another, it must pass through the intermediate
  3.2678 +positions. The path invariably forms loops as common actions are
  3.2679 +repeated. Finally and most importantly, proprioception alone
  3.2680 +actually gives very strong inference about the other senses. For
  3.2681 +example, when the worm is proprioceptively flat over several
  3.2682 +frames, you can infer that it is touching the ground and that its
  3.2683 +muscles are not active, because if the muscles were active, the
  3.2684 +worm would be moving and would not remain perfectly flat. In order
  3.2685 +to stay flat, the worm has to be touching the ground, or it would
  3.2686 +again be moving out of the flat position due to gravity. If the
  3.2687 +worm is positioned in such a way that it interacts with itself,
  3.2688 +then it is very likely to be feeling the same tactile feelings as
  3.2689 +the last time it was in that position, because it has the same body
  3.2690 +as then. As you observe multiple frames of proprioceptive data, you
  3.2691 +can become increasingly confident about the exact activations of
  3.2692 +the worm's muscles, because it generally takes a unique combination
  3.2693 +of muscle contractions to transform the worm's body along a
  3.2694 +specific path through \(\Phi\)-space.
  3.2695 +
  3.2696 +The worm's total life experience is a long looping path through
  3.2697 +\(\Phi\)-space. I will now introduce a simple way of taking that
  3.2698 +experience path and building a function that can infer complete
  3.2699 +sensory experience given only a stream of proprioceptive data. This
  3.2700 +\emph{empathy} function will provide a bridge to use the body-centered
  3.2701 +action predicates on video-like streams of information.
  3.2702 +
  3.2703 +\subsection{Empathy is the process of building paths in \(\Phi\)-space}
  3.2704 +\label{sec-3-4}
  3.2705 +
  3.2706 +Here is the core of a basic empathy algorithm, starting with an
  3.2707 +experience vector:
  3.2708 +
  3.2709 +An \emph{experience-index} is an index into the grand experience vector
  3.2710 +that defines the worm's life. It is a time-stamp for each set of
  3.2711 +sensations the worm has experienced.
  3.2712 +
  3.2713 +First, I group the experience-indices into bins according to the
  3.2714 +similarity of their proprioceptive data. I organize my bins into a
  3.2715 +3 level hierarchy. The smallest bins have an approximate size of
  3.2716 +0.001 radians in all proprioceptive dimensions. Each higher level
  3.2717 +is 10x bigger than the level below it.
  3.2718 +
  3.2719 +The bins serve as a hashing function for proprioceptive data. Given
  3.2720 +a single piece of proprioceptive experience, the bins allow me to
  3.2721 +rapidly find all other similar experience-indices of past
  3.2722 +experience that had a very similar proprioceptive configuration.
  3.2723 +When looking up a proprioceptive experience, if the smallest bin
  3.2724 +does not match any previous experience, then I use successively
  3.2725 +larger bins until a match is found or I reach the largest bin.
  3.2726 +
  3.2727 +Given a sequence of proprioceptive input, I use the tiered bins
  3.2728 +to generate a set of similar past experiences for each input
  3.2729 +frame.
  3.2730 +
  3.2731 +Finally, to infer sensory data, I select the longest consecutive
  3.2732 +chain of experiences that threads through the sets of similar
  3.2733 +experiences, starting with the current moment as a root and going
  3.2734 +backwards. Consecutive experience means that the experiences appear
  3.2735 +next to each other in the experience vector.
  3.2736 +
  3.2737 +A stream of proprioceptive input might be:
  3.2738 +
  3.2739 +\begin{verbatim}
  3.2740 +[ flat, flat, flat, flat, flat, flat, flat, lift-head ]
  3.2741 +\end{verbatim}
  3.2742 +
  3.2743 +The worm's previous experience of lying on the ground and lifting
  3.2744 +its head generates possible interpretations for each frame (the
  3.2745 +numbers are experience-indices):
  3.2746 +
  3.2747 +\clearpage
  3.2748 +
  3.2749 +\begin{verbatim}
  3.2750 +[ flat, flat, flat, flat, flat, flat, flat, lift-head ]
  3.2751 +   1     1     1     1     1     1     1     4     
  3.2752 +   2     2     2     2     2     2     2   
  3.2753 +   3     3     3     3     3     3     3
  3.2754 +   6     6     6     6     6     6     6
  3.2755 +   7     7     7     7     7     7     7
  3.2756 +   8     8     8     8     8     8     8
  3.2757 +   9     9     9     9     9     9     9
  3.2758 +\end{verbatim}
  3.2759 +
  3.2760 +These interpretations suggest a new path through phi space:
  3.2761 +
  3.2762 +\begin{verbatim}
  3.2763 +[ flat, flat, flat, flat, flat, flat, flat, lift-head ]
  3.2764 +   6     7     8     9     1     2     3     4
  3.2765 +\end{verbatim}
  3.2766 +
  3.2767 +The new path through \(\Phi\)-space is synthesized from two actual
  3.2768 +paths that the creature has experienced: the "1-2-3-4" chain and
  3.2769 +the "6-7-8-9" chain. The "1-2-3-4" chain is necessary because it
  3.2770 +ends with the worm lifting its head. It originated from a short
  3.2771 +training session where the worm rested on the floor for a brief
  3.2772 +while and then raised its head. The "6-7-8-9" chain is part of a
  3.2773 +longer chain of inactivity where the worm simply rested on the
  3.2774 +floor without moving. It is preferred over a "1-2-3" chain (which
  3.2775 +also describes inactivity) because it is longer. The main ideas
  3.2776 +again:
  3.2777 +
  3.2778 +\begin{itemize}
  3.2779 +\item Imagined \(\Phi\)-space paths are synthesized by looping and mixing
  3.2780 +previous experiences.
  3.2781 +
  3.2782 +\item Longer experience paths (less edits) are preferred.
  3.2783 +
  3.2784 +\item The present is more important than the past --- more recent
  3.2785 +events take precedence in interpretation.
  3.2786 +\end{itemize}
  3.2787 +
  3.2788 +This algorithm has three advantages: 
  3.2789 +
  3.2790 +\begin{enumerate}
  3.2791 +\item It's simple
  3.2792 +
  3.2793 +\item It's very fast -- retrieving possible interpretations takes
  3.2794 +constant time. Tracing through chains of interpretations takes
  3.2795 +time proportional to the average number of experiences in a
  3.2796 +proprioceptive bin. Redundant experiences in \(\Phi\)-space can be
  3.2797 +merged to save computation.
  3.2798 +
  3.2799 +\item It protects from wrong interpretations of transient ambiguous
  3.2800 +proprioceptive data. For example, if the worm is flat for just
  3.2801 +an instant, this flatness will not be interpreted as implying
  3.2802 +that the worm has its muscles relaxed, since the flatness is
  3.2803 +part of a longer chain which includes a distinct pattern of
  3.2804 +muscle activation. Markov chains or other memoryless statistical
  3.2805 +models that operate on individual frames may very well make this
  3.2806 +mistake.
  3.2807 +\end{enumerate}
  3.2808 +
  3.2809 +\begin{listing}
  3.2810 +\begin{verbatim}
  3.2811 +(defn bin [digits]
  3.2812 +  (fn [angles]
  3.2813 +    (->> angles
  3.2814 +         (flatten)
  3.2815 +         (map (juxt #(Math/sin %) #(Math/cos %)))
  3.2816 +         (flatten)
  3.2817 +         (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
  3.2818 +
  3.2819 +(defn gen-phi-scan 
  3.2820 +  "Nearest-neighbors with binning. Only returns a result if
  3.2821 +   the proprioceptive data is within 10% of a previously recorded
  3.2822 +   result in all dimensions."
  3.2823 +  [phi-space]
  3.2824 +  (let [bin-keys (map bin [3 2 1])
  3.2825 +        bin-maps
  3.2826 +        (map (fn [bin-key]
  3.2827 +               (group-by
  3.2828 +                (comp bin-key :proprioception phi-space)
  3.2829 +                (range (count phi-space)))) bin-keys)
  3.2830 +        lookups (map (fn [bin-key bin-map]
  3.2831 +                       (fn [proprio] (bin-map (bin-key proprio))))
  3.2832 +                     bin-keys bin-maps)]
  3.2833 +    (fn lookup [proprio-data]
  3.2834 +      (set (some #(% proprio-data) lookups)))))
  3.2835 +\end{verbatim}
  3.2836 +\caption{\label{bin}Program to convert an experience vector into a proprioceptively binned lookup function.}
  3.2837 +\end{listing}
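
For example, at the finest level of binning, one joint bent to a
right angle and one straight joint hash like this (REPL sketch):

\begin{verbatim}
((bin 3) [[0.0 0.0 (/ Math/PI 2)]
          [0.0 0.0 0.0]])
;; => [0 100 0 100 100 0 0 100 0 100 0 100]
;; each angle becomes a rounded (sin, cos) pair scaled by 10^2
\end{verbatim}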
  3.2838 +
  3.2839 +\begin{figure}[htb]
  3.2840 +\centering
  3.2841 +\includegraphics[width=10cm]{./images/film-of-imagination.png}
  3.2842 +\caption{\label{phi-space-history-scan}\texttt{longest-thread} finds the longest path of consecutive past experiences that explains the observed proprioceptive worm data. Here, the film strip represents the creature's previous experience. Short sequences of memories are spliced together to match the proprioceptive data, and they carry the other senses along with them.}
  3.2843 +\end{figure}
  3.2844 +
  3.2845 +\texttt{longest-thread} infers sensory data by stitching together pieces
  3.2846 +from previous experience. It prefers longer chains of previous
  3.2847 +experience to shorter ones. For example, during training the worm
  3.2848 +might rest on the ground for one second before it performs its
  3.2849 +exercises. If during recognition the worm rests on the ground for
  3.2850 +five seconds, \texttt{longest-thread} will accommodate this five second
  3.2851 +rest period by looping the one second rest chain five times.
  3.2852 +
  3.2853 +\texttt{longest-thread} takes time proportional to the average number of
  3.2854 +entries in a proprioceptive bin, because for each element in the
  3.2855 +starting bin it performs a series of set lookups in the preceding
  3.2856 +bins. If the total history is limited, then this takes time
  3.2857 +proportional to only a constant multiple of the number of entries
  3.2858 +in the starting bin. This analysis also applies, even if the action
  3.2859 +requires multiple longest chains -- it's still the average number
  3.2860 +of entries in a proprioceptive bin times the desired chain length.
  3.2861 +Because \texttt{longest-thread} is so efficient and simple, I can
  3.2862 +interpret worm-actions in real time.
  3.2863 +
  3.2864 +\begin{listing}
  3.2865 +\begin{verbatim}
  3.2866 +(defn longest-thread
  3.2867 +  "Find the longest thread from phi-index-sets. The index sets should
  3.2868 +   be ordered from most recent to least recent."
  3.2869 +  [phi-index-sets]
  3.2870 +  (loop [result '()
  3.2871 +         [thread-bases & remaining :as phi-index-sets] phi-index-sets]
  3.2872 +    (if (empty? phi-index-sets)
  3.2873 +      (vec result)
  3.2874 +      (let [threads
  3.2875 +            (for [thread-base thread-bases]
  3.2876 +              (loop [thread (list thread-base)
  3.2877 +                     remaining remaining]
  3.2878 +                (let [next-index (dec (first thread))]
  3.2879 +                  (cond (empty? remaining) thread
  3.2880 +                        (contains? (first remaining) next-index)
  3.2881 +                        (recur
  3.2882 +                         (cons next-index thread) (rest remaining))
  3.2883 +                        :else thread))))
  3.2884 +            longest-thread
  3.2885 +            (reduce (fn [thread-a thread-b]
  3.2886 +                      (if (> (count thread-a) (count thread-b))
  3.2887 +                        thread-a thread-b))
  3.2888 +                    '(nil)
  3.2889 +                    threads)]
  3.2890 +        (recur (concat longest-thread result)
  3.2891 +               (drop (count longest-thread) phi-index-sets))))))
  3.2892 +\end{verbatim}
  3.2893 +\caption{\label{longest-thread}Program to calculate empathy by tracing through \(\Phi\)-space and finding the longest (i.e. most coherent) interpretation of the data.}
  3.2894 +\end{listing}
  3.2895 +
  3.2896 +There is one final piece, which is to replace missing sensory data
  3.2897 +with a best-guess estimate. While I could fill in missing data by
  3.2898 +using a gradient over the closest known sensory data points,
  3.2899 +averages can be misleading. It is certainly possible to create an
  3.2900 +impossible sensory state by averaging two possible sensory states.
  3.2901 +For example, consider moving your hand in an arc over your head. If
  3.2902 +for some reason you only have the initial and final positions of
  3.2903 +this movement in your \(\Phi\)-space, averaging them together will
  3.2904 +produce the proprioceptive sensation of having your hand \emph{inside}
  3.2905 +your head, which is physically impossible to ever experience
  3.2906 +(barring motor adaptation illusions). Therefore I simply replicate
  3.2907 +the most recent sensory experience to fill in the gaps.
  3.2908 +
  3.2909 +\begin{listing}
  3.2910 +\begin{verbatim}
  3.2911 +(defn infer-nils
  3.2912 +  "Replace nils with the next available non-nil element in the
  3.2913 +   sequence, or barring that, 0."
  3.2914 +  [s]
  3.2915 +  (loop [i (dec (count s))
  3.2916 +         v (transient s)]
  3.2917 +    (if (zero? i) (persistent! v)
  3.2918 +        (if-let [cur (v i)]
  3.2919 +          (if (get v (dec i) 0)
  3.2920 +            (recur (dec i) v)
  3.2921 +            (recur (dec i) (assoc! v (dec i) cur)))
  3.2922 +          (recur i (assoc! v i 0))))))
  3.2923 +\end{verbatim}
  3.2924 +\caption{\label{infer-nils}Fill in blanks in sensory experience by replicating the most recent experience.}
  3.2925 +\end{listing}
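
For example (the trailing \texttt{nil} has no later value to copy, so
it falls back to 0):

\begin{verbatim}
(infer-nils [nil 1 nil 2 nil]) ;; => [1 1 2 2 0]
\end{verbatim}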
  3.2926 +
  3.2927 +\subsection{\texttt{EMPATH} recognizes actions efficiently}
  3.2928 +\label{sec-3-5}
  3.2929 +
  3.2930 +To use \texttt{EMPATH} with the worm, I first need to gather a set of
  3.2931 +experiences from the worm that includes the actions I want to
  3.2932 +recognize. The \texttt{generate-phi-space} program (listing
  3.2933 +\ref{generate-phi-space}) runs the worm through a series of
  3.2934 +exercises and gathers those experiences into a vector. The
  3.2935 +\texttt{do-all-the-things} program is a routine expressed in a simple
  3.2936 +muscle contraction script language for automated worm control. It
  3.2937 +causes the worm to rest, curl, and wiggle over about 700 frames
  3.2938 +(approx. 11 seconds).
  3.2939 +
  3.2940 +\begin{listing}
  3.2941 +\begin{verbatim}
  3.2942 +(def do-all-the-things 
  3.2943 +  (concat
  3.2944 +   curl-script
  3.2945 +   [[300 :d-ex 40]
  3.2946 +    [320 :d-ex 0]]
  3.2947 +   (shift-script 280 (take 16 wiggle-script))))
  3.2948 +
  3.2949 +(defn generate-phi-space []
  3.2950 +  (let [experiences (atom [])]
  3.2951 +    (run-world
  3.2952 +     (apply-map 
  3.2953 +      worm-world
  3.2954 +      (merge
  3.2955 +       (worm-world-defaults)
  3.2956 +       {:end-frame 700
  3.2957 +        :motor-control
  3.2958 +        (motor-control-program worm-muscle-labels do-all-the-things)
  3.2959 +        :experiences experiences})))
  3.2960 +    @experiences))
  3.2961 +\end{verbatim}
  3.2962 +\caption{\label{generate-phi-space}Program to gather the worm's experiences into a vector for further processing. The \texttt{motor-control-program} line uses a motor control script that causes the worm to execute a series of ``exercises'' that include all the action predicates.}
  3.2963 +\end{listing}
  3.2964 +
  3.2965 +\begin{listing}
  3.2966 +\begin{verbatim}
  3.2967 +(defn init []
  3.2968 +  (def phi-space (generate-phi-space))
  3.2969 +  (def phi-scan (gen-phi-scan phi-space)))
  3.2970 +
  3.2971 +(defn empathy-demonstration []
  3.2972 +  (let [proprio (atom ())]
  3.2973 +    (fn
  3.2974 +      [experiences text]
  3.2975 +      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
  3.2976 +        (swap! proprio (partial cons phi-indices))
  3.2977 +        (let [exp-thread (longest-thread (take 300 @proprio))
  3.2978 +              empathy (mapv phi-space (infer-nils exp-thread))]
  3.2979 +          (println-repl (vector:last-n exp-thread 22))
  3.2980 +          (cond
  3.2981 +           (grand-circle? empathy) (.setText text "Grand Circle")
  3.2982 +           (curled? empathy)       (.setText text "Curled")
  3.2983 +           (wiggling? empathy)     (.setText text "Wiggling")
  3.2984 +           (resting? empathy)      (.setText text "Resting")
  3.2985 +           :else                   (.setText text "Unknown")))))))
  3.2986 +
  3.2987 +(defn empathy-experiment [record]
  3.2988 +  (.start (worm-world :experience-watch (empathy-demonstration)
  3.2989 +                      :record record :worm worm*)))
  3.2990 +\end{verbatim}
  3.2991 +\caption{\label{empathy-debug}Use \texttt{longest-thread} and a \(\Phi\)-space generated from a short exercise routine to interpret actions during free play.}
  3.2992 +\end{listing}
  3.2993 +
  3.2994 +These programs create a test for the empathy system. First, the
  3.2995 +worm's \(\Phi\)-space is generated from a simple motor script. Then the
  3.2996 +worm is re-created in an environment almost exactly identical to
  3.2997 +the testing environment for the action-predicates, with one major
  3.2998 +difference: the only sensory information available to the system
  3.2999 +is proprioception. From just the proprioception data and
  3.3000 +\(\Phi\)-space, \texttt{longest-thread} synthesizes a complete record of the last
  3.3001 +300 sensory experiences of the worm. These synthesized experiences
  3.3002 +are fed directly into the action predicates \texttt{grand-circle?},
  3.3003 +\texttt{curled?}, \texttt{wiggling?}, and \texttt{resting?} and their outputs are
  3.3004 +printed to the screen at each frame.
  3.3005 +
  3.3006 +The result of running \texttt{empathy-experiment} is that the system is
  3.3007 +generally able to interpret worm actions using the action-predicates
  3.3008 +on simulated sensory data just as well as with actual data. Figure
  3.3009 +\ref{empathy-debug-image} was generated using \texttt{empathy-experiment}:
  3.3010 +
  3.3011 +\begin{figure}[htb]
  3.3012 +\centering
  3.3013 +\includegraphics[width=10cm]{./images/empathy-1.png}
  3.3014 +\caption{\label{empathy-debug-image}From only proprioceptive data, \texttt{EMPATH} was able to infer the complete sensory experience and classify four poses (The last panel shows a composite image of \emph{wiggling}, a dynamic pose.)}
  3.3015 +\end{figure}
  3.3016 +
  3.3017 +One way to measure the performance of \texttt{EMPATH} is to compare the
  3.3018 +suitability of the imagined sense experience to trigger the same
  3.3019 +action predicates as the real sensory experience. 
  3.3020 +
  3.3021 +\begin{listing}
  3.3022 +\begin{verbatim}
  3.3023 +(def worm-action-label
  3.3024 +  (juxt grand-circle? curled? wiggling?))
  3.3025 +
  3.3026 +(defn compare-empathy-with-baseline [matches]
  3.3027 +  (let [proprio (atom ())]
  3.3028 +    (fn
  3.3029 +      [experiences text]
  3.3030 +      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
  3.3031 +        (swap! proprio (partial cons phi-indices))
  3.3032 +        (let [exp-thread (longest-thread (take 300 @proprio))
  3.3033 +              empathy (mapv phi-space (infer-nils exp-thread))
  3.3034 +              experience-matches-empathy
  3.3035 +              (= (worm-action-label experiences)
  3.3036 +                 (worm-action-label empathy))]
  3.3037 +          (println-repl experience-matches-empathy)
  3.3038 +          (swap! matches #(conj % experience-matches-empathy)))))))
  3.3039 +              
  3.3040 +(defn accuracy [v]
  3.3041 +  (float (/ (count (filter true? v)) (count v))))
  3.3042 +
  3.3043 +(defn test-empathy-accuracy []
  3.3044 +  (let [res (atom [])]
  3.3045 +    (run-world
  3.3046 +     (worm-world :experience-watch
  3.3047 +                 (compare-empathy-with-baseline res)
  3.3048 +                 :worm worm*))
  3.3049 +    (accuracy @res)))
  3.3050 +\end{verbatim}
  3.3051 +\caption{\label{test-empathy-accuracy}Determine how closely empathy approximates actual sensory data.}
  3.3052 +\end{listing}
  3.3053 +
  3.3054 +Running \texttt{test-empathy-accuracy} using the very short exercise
  3.3055 +program \texttt{do-all-the-things} defined in listing
  3.3056 +\ref{generate-phi-space}, and then doing a similar pattern of
  3.3057 +activity using manual control of the worm, yields an accuracy of
  3.3058 +around 73\%. This is based on very limited worm experience, and
  3.3059 +almost all errors are due to the worm's \(\Phi\)-space being too
  3.3060 +incomplete to properly interpret common poses. By manually training
  3.3061 +the worm for longer using \texttt{init-interactive} defined in listing
  3.3062 +\ref{manual-phi-space}, the accuracy dramatically improves:
  3.3063 +
  3.3064 +\begin{listing}
  3.3065 +\begin{verbatim}
  3.3066 +(defn init-interactive []
  3.3067 +  (def phi-space
  3.3068 +    (let [experiences (atom [])]
  3.3069 +      (run-world
  3.3070 +       (apply-map 
  3.3071 +        worm-world
  3.3072 +        (merge
  3.3073 +         (worm-world-defaults)
  3.3074 +         {:experiences experiences})))
  3.3075 +      @experiences))
  3.3076 +  (def phi-scan (gen-phi-scan phi-space)))
  3.3077 +\end{verbatim}
  3.3078 +\caption{\label{manual-phi-space}Program to generate \(\Phi\)-space using manual training.}
  3.3079 +\end{listing}
  3.3080 +
  3.3081 +\texttt{init-interactive} allows me to take direct control of the worm's
  3.3082 +muscles and run it through each characteristic movement I care
  3.3083 +about. After about 1 minute of manual training, I was able to
  3.3084 +achieve 95\% accuracy on manual testing of the worm using
  3.3085 +\texttt{test-empathy-accuracy}. The majority of disagreements are near the
  3.3086 +transition boundaries from one type of action to another. During
  3.3087 +these transitions the exact label for the action is often unclear,
  3.3088 +and disagreement between empathy and experience is practically
  3.3089 +irrelevant. Thus, the system's effective identification accuracy is
  3.3090 +even higher than 95\%. When I watch this system myself, I generally
  3.3091 +see no errors in action identification compared to my own judgment
  3.3092 +of what the worm is doing.
  3.3093 +
  3.3094 +\subsection{Digression: Learning touch sensor layout through free play}
  3.3095 +\label{sec-3-6}
  3.3096 +
  3.3097 +In the previous chapter I showed how to compute actions in terms of
  3.3098 +body-centered predicates, but some of those predicates relied on
  3.3099 +the average touch activation of pre-defined regions of the worm's
  3.3100 +skin. What if, instead of receiving touch pre-grouped into the six
  3.3101 +faces of each worm segment, the true partitioning of the worm's
  3.3102 +skin was unknown? This is more similar to how a nerve fiber bundle
  3.3103 +might be arranged inside an animal. While two fibers that are close
  3.3104 +in a nerve bundle \emph{might} correspond to two touch sensors that are
  3.3105 +close together on the skin, the process of taking a complicated
  3.3106 +surface and forcing it into essentially a 2D circle requires that
  3.3107 +some regions of skin that are close together in the animal end up
  3.3108 +far apart in the nerve bundle.
  3.3109 +
  3.3110 +In this chapter I show how to automatically learn the
  3.3111 +skin-partitioning of a worm segment by free exploration. As the
  3.3112 +worm rolls around on the floor, large sections of its surface get
activated. If the worm has stopped moving, then whatever region of
skin is touching the floor is probably an important region and
should be recorded. The code I provide relies on the worm
  3.3116 +segment having flat faces, but still demonstrates a primitive kind
  3.3117 +of multi-sensory bootstrapping that I find appealing.
  3.3118 +
  3.3119 +\begin{listing}
  3.3120 +\begin{verbatim}
  3.3121 +(def full-contact [(float 0.0) (float 0.1)])
  3.3122 +
  3.3123 +(defn pure-touch?
  3.3124 +  "This is worm specific code to determine if a large region of touch
  3.3125 +   sensors is either all on or all off."
  3.3126 +  [[coords touch :as touch-data]]
  3.3127 +  (= (set (map first touch)) (set full-contact)))
  3.3128 +\end{verbatim}
  3.3129 +\caption{\label{pure-touch}Program to detect whether the worm is in a resting state with one face touching the floor.}
  3.3130 +\end{listing}
  3.3131 +
After collecting these important regions, there will be many
nearly identical touch regions. While for some purposes the subtle
differences between these regions may be important, for my
purposes I collapse them into mostly non-overlapping sets using
\texttt{remove-similar} in listing \ref{remove-similar}.
  3.3137 +
  3.3138 +\begin{listing}
  3.3139 +\begin{verbatim}
  3.3140 +(defn remove-similar
  3.3141 +  [coll]
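  ;; Visit sets from largest to smallest; drop the current set
  ;; whenever some smaller set is at least 90% contained in it,
  ;; keeping the smaller, more specific regions.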
  3.3142 +  (loop [result () coll (sort-by (comp - count) coll)]
  3.3143 +    (if (empty? coll) result
  3.3144 +        (let  [[x & xs] coll
  3.3145 +               c (count x)]
  3.3146 +          (if (some
  3.3147 +               (fn [other-set]
  3.3148 +                 (let [oc (count other-set)]
  3.3149 +                   (< (- (count (union other-set x)) c) (* oc 0.1))))
  3.3150 +               xs)
  3.3151 +            (recur result xs)
  3.3152 +            (recur (cons x result) xs))))))
  3.3153 +\end{verbatim}
  3.3154 +\caption{\label{remove-similar}Program to take a list of sets of points and ``collapse them'' so that the remaining sets in the list are significantly different from each other. Prefer smaller sets to larger ones.}
  3.3155 +\end{listing}
  3.3156 +
  3.3157 +Actually running this simulation is easy given \texttt{CORTEX}'s facilities.
  3.3158 +
  3.3159 +\begin{listing}
  3.3160 +\begin{verbatim}
  3.3161 +(defn learn-touch-regions []
  3.3162 +  (let [experiences (atom [])
  3.3163 +        world (apply-map
  3.3164 +               worm-world
  3.3165 +               (assoc (worm-segment-defaults)
  3.3166 +                 :experiences experiences))]
  3.3167 +    (run-world world)
  3.3168 +    (->>
  3.3169 +     @experiences
  3.3170 +     (drop 175)
  3.3171 +     ;; access the single segment's touch data
  3.3172 +     (map (comp first :touch))
  3.3173 +     ;; only deal with "pure" touch data to determine surfaces
  3.3174 +     (filter pure-touch?)
  3.3175 +     ;; associate coordinates with touch values
  3.3176 +     (map (partial apply zipmap))
  3.3177 +     ;; select those regions where contact is being made
  3.3178 +     (map (partial group-by second))
  3.3179 +     (map #(get % full-contact))
  3.3180 +     (map (partial map first))
  3.3181 +     ;; remove redundant/subset regions
  3.3182 +     (map set)
  3.3183 +     remove-similar)))
  3.3184 +
  3.3185 +(defn learn-and-view-touch-regions []
  3.3186 +  (map view-touch-region
  3.3187 +       (learn-touch-regions)))
  3.3188 +\end{verbatim}
  3.3189 +\caption{\label{learn-touch}Collect experiences while the worm moves around. Filter the touch sensations by stable ones, collapse similar ones together, and report the regions learned.}
  3.3190 +\end{listing}
  3.3191 +
  3.3192 +The only thing remaining to define is the particular motion the worm
  3.3193 +must take. I accomplish this with a simple motor control program.
  3.3194 +
  3.3195 +\begin{listing}
  3.3196 +\begin{verbatim}
  3.3197 +(defn touch-kinesthetics []
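  ;; each entry: [time-index muscle-id recruitment-level]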
  3.3198 +  [[170 :lift-1 40]
  3.3199 +   [190 :lift-1 19]
  3.3200 +   [206 :lift-1  0]
  3.3201 +
  3.3202 +   [400 :lift-2 40]
  3.3203 +   [410 :lift-2  0]
  3.3204 +
  3.3205 +   [570 :lift-2 40]
  3.3206 +   [590 :lift-2 21]
  3.3207 +   [606 :lift-2  0]
  3.3208 +
  3.3209 +   [800 :lift-1 30]
  3.3210 +   [809 :lift-1 0]
  3.3211 +
  3.3212 +   [900 :roll-2 40]
  3.3213 +   [905 :roll-2 20]
  3.3214 +   [910 :roll-2  0]
  3.3215 +
  3.3216 +   [1000 :roll-2 40]
  3.3217 +   [1005 :roll-2 20]
  3.3218 +   [1010 :roll-2  0]
  3.3219 +   
  3.3220 +   [1100 :roll-2 40]
  3.3221 +   [1105 :roll-2 20]
  3.3222 +   [1110 :roll-2  0]
  3.3223 +   ])
  3.3224 +\end{verbatim}
  3.3225 +\caption{\label{worm-roll}Motor control program for making the worm roll on the ground. This could also be replaced with random motion.}
  3.3226 +\end{listing}
  3.3227 +
  3.3228 +
  3.3229 +\begin{figure}[htb]
  3.3230 +\centering
  3.3231 +\includegraphics[width=12cm]{./images/worm-roll.png}
\caption{\label{worm-roll-figure}The small worm rolls around on the floor, driven by the motor control program in listing \ref{worm-roll}.}
  3.3233 +\end{figure}
  3.3234 +
  3.3235 +\begin{figure}[htb]
  3.3236 +\centering
  3.3237 +\includegraphics[width=12cm]{./images/touch-learn.png}
\caption{\label{worm-touch-map}After completing its adventures, the worm now knows how its touch sensors are arranged along its skin. Each of these six rectangles is a touch sensory pattern that was deemed important by \texttt{learn-touch-regions}. Each white square in the rectangles above is a cluster of ``related'' touch nodes as determined by the system. The worm has correctly discovered that it has six faces, and has partitioned its sensory map into these six faces.}
  3.3239 +\end{figure}
  3.3240 +
  3.3241 +While simple, \texttt{learn-touch-regions} exploits regularities in both
  3.3242 +the worm's physiology and the worm's environment to correctly
  3.3243 +deduce that the worm has six sides. Note that \texttt{learn-touch-regions}
would work just as well even if the worm's touch sense data were
completely scrambled, since the algorithm only ever treats sensor
coordinates as opaque set elements. The cross shape of the touch
map is just for convenience. This example justifies the use of
pre-defined touch regions in \texttt{EMPATH}.
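
To make the scrambling claim concrete, here is a minimal sketch; the
helper \texttt{scramble-touch} is hypothetical and not part of
\texttt{CORTEX}. It permutes the touch coordinates with one fixed
random bijection before learning. Because the learned regions are
sets of coordinates, they come out as the permuted images of the
originals: six regions of the same sizes as before.

\begin{verbatim}
;; Hypothetical helper (not part of CORTEX): scramble every
;; experience's touch coordinates with one fixed random bijection.
(defn scramble-touch [experiences]
  (let [perm (zipmap all-touch-coordinates
                     (shuffle all-touch-coordinates))]
    (map (fn [experience]
           (update-in experience [:touch]
                      (fn [touch-data]
                        (map (fn [[coords data]]
                               [(map perm coords) data])
                             touch-data))))
         experiences)))
\end{verbatim}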
  3.3247 +
  3.3248 +\subsection{Recognizing an object using embodied representation}
  3.3249 +\label{sec-3-7}
  3.3250 +
  3.3251 +At the beginning of the thesis, I suggested that we might recognize
  3.3252 +the chair in Figure \ref{hidden-chair} by imagining ourselves in
  3.3253 +the position of the man and realizing that he must be sitting on
something in order to maintain that position. Here, I present a
brief elaboration on how this might be done.
  3.3256 +
  3.3257 +First, I need the feeling of leaning or resting \emph{on} some other
  3.3258 +object that is not the floor. This feeling is easy to describe
  3.3259 +using an embodied representation. 
  3.3260 +
  3.3261 +\begin{listing}
  3.3262 +\begin{verbatim}
  3.3263 +(defn draped?
  3.3264 +  "Is the worm:
  3.3265 +    -- not flat (the floor is not a 'chair')
  3.3266 +    -- supported (not using its muscles to hold its position)
  3.3267 +    -- stable (not changing its position)
  3.3268 +    -- touching something (must register contact)"
  3.3269 +  [experiences]
  3.3270 +  (let [b2-hash (bin 2)
  3.3271 +        touch (:touch (peek experiences))
  3.3272 +        total-contact
  3.3273 +        (reduce
  3.3274 +         +
  3.3275 +         (map #(contact all-touch-coordinates %)
  3.3276 +              (rest touch)))]
  3.3277 +    (println total-contact)
  3.3278 +    (and (not (resting? experiences))
  3.3279 +         (every?
  3.3280 +          zero?
  3.3281 +          (-> experiences
  3.3282 +              (vector:last-n 25)
  3.3283 +              (#(map :muscle %))
  3.3284 +              (flatten)))
  3.3285 +         (-> experiences
  3.3286 +             (vector:last-n 20)
  3.3287 +             (#(map (comp b2-hash flatten :proprioception) %))
  3.3288 +             (set)
  3.3289 +             (count) (= 1))
  3.3290 +         (< 0.03 total-contact))))
  3.3291 +\end{verbatim}
  3.3292 +\caption{\label{draped}Program describing the sense of leaning or resting on something. This involves a relaxed posture, the feeling of touching something, and a period of stability where the worm does not move.}
  3.3293 +\end{listing}
  3.3294 +
  3.3295 +\begin{figure}[htb]
  3.3296 +\centering
  3.3297 +\includegraphics[width=13cm]{./images/draped.png}
  3.3298 +\caption{\label{draped-video}The \texttt{draped?} predicate detects the presence of the cube whenever the worm interacts with it. The details of the cube are irrelevant; only the way it influences the worm's body matters. The ``unknown'' label on the fifth frame is due to the fact that the worm is not stationary. \texttt{draped?} will only declare that the worm is draped if it has been still for a while.}
  3.3299 +\end{figure}
  3.3300 +
  3.3301 +Though this is a simple example, using the \texttt{draped?} predicate to
  3.3302 +detect a cube has interesting advantages. The \texttt{draped?} predicate
  3.3303 +describes the cube not in terms of properties that the cube has,
  3.3304 +but instead in terms of how the worm interacts with it physically.
  3.3305 +This means that the cube can still be detected even if it is not
  3.3306 +visible, as long as its influence on the worm's body is visible.
  3.3307 +
  3.3308 +This system will also see the virtual cube created by a
  3.3309 +``mimeworm", which uses its muscles in a very controlled way to
  3.3310 +mimic the appearance of leaning on a cube. The system will
  3.3311 +anticipate that there is an actual invisible cube that provides
  3.3312 +support!
  3.3313 +
  3.3314 +\begin{figure}[htb]
  3.3315 +\centering
  3.3316 +\includegraphics[width=6cm]{./images/pablo-the-mime.png}
  3.3317 +\caption{\label{mime}Can you see the thing that this person is leaning on? What properties does it have, other than how it makes the man's elbow and shoulder feel? I wonder if people who can actually maintain this pose easily still see the support?}
  3.3318 +\end{figure}
  3.3319 +
  3.3320 +This makes me wonder about the psychology of actual mimes. Suppose
  3.3321 +for a moment that people have something analogous to \(\Phi\)-space and
  3.3322 +that one of the ways that they find objects in a scene is by their
relation to other people's bodies. Suppose that a person watches
someone miming an invisible wall. For a watcher with no experience
of miming, their \(\Phi\)-space will only have entries that describe
the scene with the sensation of their hands touching a wall. This
sensation of touch will create a strong impression of a wall, even
though the wall would have to be invisible. A person with
experience in miming, however, will have entries in their
\(\Phi\)-space that describe the wall-miming position without a sense
of touch. It will not seem to such a person that an invisible wall
is present, but merely that the mime is holding out their hands in
a special way. Thus, the theory that humans use something like
\(\Phi\)-space
  3.3334 +weakly predicts that learning how to mime should break the power of
  3.3335 +miming illusions. Most optical illusions still work no matter how
  3.3336 +much you know about them, so this proposal would be quite
  3.3337 +interesting to test, as it predicts a non-standard result!
  3.3338 +
  3.3339 +
  3.3340 +\clearpage
  3.3341 +
  3.3342 +\section{Contributions}
  3.3343 +\label{sec-4}
  3.3344 +
  3.3345 +The big idea behind this thesis is a new way to represent and
  3.3346 +recognize physical actions, which I call \emph{empathic representation}.
  3.3347 +Actions are represented as predicates which have access to the
  3.3348 +totality of a creature's sensory abilities. To recognize the
  3.3349 +physical actions of another creature similar to yourself, you
  3.3350 +imagine what they would feel by examining the position of their body
  3.3351 +and relating it to your own previous experience.
  3.3352 +
  3.3353 +Empathic representation of physical actions is robust and general.
  3.3354 +Because the representation is body-centered, it avoids baking in a
  3.3355 +particular viewpoint like you might get from learning from example
  3.3356 +videos. Because empathic representation relies on all of a
  3.3357 +creature's senses, it can describe exactly what an action \emph{feels
  3.3358 +like} without getting caught up in irrelevant details such as visual
  3.3359 +appearance. I think it is important that a correct description of
  3.3360 +jumping (for example) should not include irrelevant details such as
  3.3361 +the color of a person's clothes or skin; empathic representation can
  3.3362 +get right to the heart of what jumping is by describing it in terms
  3.3363 +of touch, muscle contractions, and a brief feeling of
  3.3364 +weightlessness. Empathic representation is very low-level in that it
  3.3365 +describes actions using concrete sensory data with little
  3.3366 +abstraction, but it has the generality of much more abstract
  3.3367 +representations!
  3.3368 +
  3.3369 +Another important contribution of this thesis is the development of
  3.3370 +the \texttt{CORTEX} system, a complete environment for creating simulated
  3.3371 +creatures. You have seen how to implement five senses: touch,
  3.3372 +proprioception, hearing, vision, and muscle tension. You have seen
  3.3373 +how to create new creatures using Blender, a 3D modeling tool.
  3.3374 +
  3.3375 +As a minor digression, you also saw how I used \texttt{CORTEX} to enable a
  3.3376 +tiny worm to discover the topology of its skin simply by rolling on
  3.3377 +the ground.  You also saw how to detect objects using only embodied
  3.3378 +predicates. 
  3.3379 +
  3.3380 +In conclusion, for this thesis I:
  3.3381 +
  3.3382 +\begin{itemize}
  3.3383 +\item Developed the idea of embodied representation, which describes
  3.3384 +actions that a creature can do in terms of first-person sensory
  3.3385 +data.
  3.3386 +
  3.3387 +\item Developed a method of empathic action recognition which uses
  3.3388 +previous embodied experience and embodied representation of
  3.3389 +actions to greatly constrain the possible interpretations of an
  3.3390 +action.
  3.3391 +
  3.3392 +\item Created \texttt{EMPATH}, a program which uses empathic action
  3.3393 +recognition to recognize physical actions in a simple model
  3.3394 +involving segmented worm-like creatures.
  3.3395 +
  3.3396 +\item Created \texttt{CORTEX}, a comprehensive platform for embodied AI
  3.3397 +experiments. It is the base on which \texttt{EMPATH} is built.
  3.3398 +\end{itemize}
  3.3399 +
  3.3400 +\clearpage
  3.3401 +\appendix
  3.3402 +
  3.3403 +\section{Appendix: \texttt{CORTEX} User Guide}
  3.3404 +\label{sec-5}
  3.3405 +
  3.3406 +Those who write a thesis should endeavor to make their code not only
  3.3407 +accessible, but actually usable, as a way to pay back the community
  3.3408 +that made the thesis possible in the first place. This thesis would
  3.3409 +not be possible without Free Software such as jMonkeyEngine3,
Blender, Clojure, \texttt{emacs}, \texttt{ffmpeg}, and many other tools. That is
  3.3411 +why I have included this user guide, in the hope that someone else
  3.3412 +might find \texttt{CORTEX} useful.
  3.3413 +
  3.3414 +\subsection{Obtaining \texttt{CORTEX}}
  3.3415 +\label{sec-5-1}
  3.3416 +
You can get \texttt{CORTEX} from its Mercurial repository at
  3.3418 +\url{http://hg.bortreb.com/cortex}. You may also download \texttt{CORTEX}
  3.3419 +releases at \url{http://aurellem.org/cortex/releases/}. As a condition of
  3.3420 +making this thesis, I have also provided Professor Winston the
  3.3421 +\texttt{CORTEX} source, and he knows how to run the demos and get started.
  3.3422 +You may also email me at \texttt{cortex@aurellem.org} and I may help where
  3.3423 +I can.
  3.3424 +
  3.3425 +\subsection{Running \texttt{CORTEX}}
  3.3426 +\label{sec-5-2}
  3.3427 +
  3.3428 +\texttt{CORTEX} comes with README and INSTALL files that will guide you
  3.3429 +through installation and running the test suite. In particular you
should look at the \texttt{cortex.test} namespace, which contains test
suites that exercise all senses and multiple creatures.
  3.3432 +
  3.3433 +\subsection{Creating creatures}
  3.3434 +\label{sec-5-3}
  3.3435 +
  3.3436 +Creatures are created using \emph{Blender}, a free 3D modeling program.
  3.3437 +You will need Blender version 2.6 when using the \texttt{CORTEX} included
  3.3438 +in this thesis. You create a \texttt{CORTEX} creature in a similar manner
  3.3439 +to modeling anything in Blender, except that you also create
  3.3440 +several trees of empty nodes which define the creature's senses.
  3.3441 +
  3.3442 +\subsubsection{Mass}
  3.3443 +\label{sec-5-3-1}
  3.3444 +
  3.3445 +To give an object mass in \texttt{CORTEX}, add a ``mass'' metadata label
  3.3446 +to the object with the mass in jMonkeyEngine units. Note that
  3.3447 +setting the mass to 0 causes the object to be immovable.
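
For example (the value is hypothetical), a segment weighing 1.5
jMonkeyEngine units would carry:

\begin{verbatim}
;; under the label "mass":
1.5
\end{verbatim}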
  3.3448 +
  3.3449 +\subsubsection{Joints}
  3.3450 +\label{sec-5-3-2}
  3.3451 +
  3.3452 +Joints are created by creating an empty node named \texttt{joints} and
  3.3453 +then creating any number of empty child nodes to represent your
  3.3454 +creature's joints. The joint will automatically connect the
  3.3455 +closest two physical objects. It will help to set the empty node's
  3.3456 +display mode to ``Arrows'' so that you can clearly see the
  3.3457 +direction of the axes.
  3.3458 +
  3.3459 +Joint nodes should have the following metadata under the ``joint''
  3.3460 +label:
  3.3461 +
  3.3462 +\begin{verbatim}
  3.3463 +;; ONE of the following, under the label "joint":
  3.3464 +{:type :point}
  3.3465 +
  3.3466 +;; OR
  3.3467 +
  3.3468 +{:type :hinge
  3.3469 + :limit [<limit-low> <limit-high>]
  3.3470 + :axis (Vector3f. <x> <y> <z>)}
  3.3471 +;;(:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
  3.3472 +
  3.3473 +;; OR
  3.3474 +
  3.3475 +{:type :cone
  3.3476 + :limit-xz <lim-xz>
  3.3477 + :limit-xy <lim-xy>
  3.3478 + :twist    <lim-twist>}   ;(use XZY rotation mode in Blender!)
  3.3479 +\end{verbatim}
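
For instance, a hinge joint that swings half a radian either way
about the node's Y-axis might carry the following (the numbers are
hypothetical; the limits are assumed to be in radians, as in Bullet):

\begin{verbatim}
{:type :hinge
 :limit [-0.5 0.5]
 :axis (Vector3f. 0 1 0)}
\end{verbatim}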
  3.3480 +
  3.3481 +\subsubsection{Eyes}
  3.3482 +\label{sec-5-3-3}
  3.3483 +
  3.3484 +Eyes are created by creating an empty node named \texttt{eyes} and then
  3.3485 +creating any number of empty child nodes to represent your
  3.3486 +creature's eyes.
  3.3487 +
  3.3488 +Eye nodes should have the following metadata under the ``eye''
  3.3489 +label:
  3.3490 +
  3.3491 +\begin{verbatim}
  3.3492 +{:red    <red-retina-definition>
  3.3493 + :blue   <blue-retina-definition>
  3.3494 + :green  <green-retina-definition>
  3.3495 + :all    <all-retina-definition>
  3.3496 + (<0xrrggbb> <custom-retina-image>)...
  3.3497 +}
  3.3498 +\end{verbatim}
  3.3499 +
Any of the color channels may be omitted. You may also include
your own color selectors, and in fact \texttt{:red} is equivalent to
\texttt{0xFF0000} and so forth. The eye will be placed at the same
position as the empty node and will bind to the nearest physical
object. The eye will point outward along the X-axis of the node,
and ``up'' will be in the direction of the Y-axis of the node. It
will help to set the empty node's display mode to ``Arrows'' so
that you can clearly see the direction of the axes.
  3.3508 +
Each retina file should contain white pixels wherever you want to
be sensitive to your chosen color. If you want the entire field of
view, use the \texttt{:all} channel (equivalently \texttt{0xFFFFFF}) with a
retinal map that is entirely white.
  3.3513 +
  3.3514 +Here is a sample retinal map:
  3.3515 +
  3.3516 +\begin{figure}[H]
  3.3517 +\centering
  3.3518 +\includegraphics[width=7cm]{./images/retina-small.png}
  3.3519 +\caption{\label{retina}An example retinal profile image. White pixels are photo-sensitive elements. The distribution of white pixels is denser in the middle and falls off at the edges and is inspired by the human retina.}
  3.3520 +\end{figure}
  3.3521 +
  3.3522 +\subsubsection{Hearing}
  3.3523 +\label{sec-5-3-4}
  3.3524 +
  3.3525 +Ears are created by creating an empty node named \texttt{ears} and then
  3.3526 +creating any number of empty child nodes to represent your
  3.3527 +creature's ears. 
  3.3528 +
  3.3529 +Ear nodes do not require any metadata.
  3.3530 +
  3.3531 +The ear will bind to and follow the closest physical node.
  3.3532 +
  3.3533 +\subsubsection{Touch}
  3.3534 +\label{sec-5-3-5}
  3.3535 +
  3.3536 +Touch is handled similarly to mass. To make a particular object
  3.3537 +touch sensitive, add metadata of the following form under the
  3.3538 +object's ``touch'' metadata field:
  3.3539 +
  3.3540 +\begin{verbatim}
  3.3541 +<touch-UV-map-file-name>
  3.3542 +\end{verbatim}
  3.3543 +
  3.3544 +You may also include an optional ``scale'' metadata number to
  3.3545 +specify the length of the touch feelers. The default is \(0.1\),
  3.3546 +and this is generally sufficient.
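
For example (file name and scale hypothetical):

\begin{verbatim}
;; under the label "touch":
"fingertip-UV.png"

;; and, optionally, under the label "scale":
0.05
\end{verbatim}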
  3.3547 +
  3.3548 +The touch UV should contain white pixels for each touch sensor.
  3.3549 +
  3.3550 +Here is an example touch-uv map that approximates a human finger,
  3.3551 +and its corresponding model.
  3.3552 +
  3.3553 +\begin{figure}[htb]
  3.3554 +\centering
  3.3555 +\includegraphics[width=9cm]{./images/finger-UV.png}
  3.3556 +\caption{\label{guide-fingertip-UV}This is the tactile-sensor-profile for the upper segment of a fingertip. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse).}
  3.3557 +\end{figure}
  3.3558 +
  3.3559 +\begin{figure}[htb]
  3.3560 +\centering
  3.3561 +\includegraphics[width=9cm]{./images/finger-1.png}
  3.3562 +\caption{\label{guide-fingertip}The fingertip UV-image form above applied to a simple model of a fingertip.}
  3.3563 +\end{figure}
  3.3564 +
  3.3565 +\subsubsection{Proprioception}
  3.3566 +\label{sec-5-3-6}
  3.3567 +
  3.3568 +Proprioception is tied to each joint node -- nothing special must
  3.3569 +be done in a Blender model to enable proprioception other than
  3.3570 +creating joint nodes.
  3.3571 +
  3.3572 +\subsubsection{Muscles}
  3.3573 +\label{sec-5-3-7}
  3.3574 +
  3.3575 +Muscles are created by creating an empty node named \texttt{muscles} and
  3.3576 +then creating any number of empty child nodes to represent your
  3.3577 +creature's muscles.
  3.3578 +
  3.3579 +
  3.3580 +Muscle nodes should have the following metadata under the
  3.3581 +``muscle'' label:
  3.3582 +
  3.3583 +\begin{verbatim}
  3.3584 +<muscle-profile-file-name>
  3.3585 +\end{verbatim}
  3.3586 +
  3.3587 +Muscles should also have a ``strength'' metadata entry describing
  3.3588 +the muscle's total strength at full activation. 
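
For example (both values hypothetical):

\begin{verbatim}
;; under the label "muscle":
"bicep-profile.png"

;; and under the label "strength":
150
\end{verbatim}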
  3.3589 +
Muscle profiles are simple images that contain the relative amount
of muscle power in each simulated alpha motor neuron. The width of
the image is the total size of the motor pool, and the redness of
each pixel is the relative power of the corresponding motor neuron.
  3.3594 +
  3.3595 +While the profile image can have any dimensions, only the first
  3.3596 +line of pixels is used to define the muscle. Here is a sample
  3.3597 +muscle profile image that defines a human-like muscle.
  3.3598 +
  3.3599 +\begin{figure}[htb]
  3.3600 +\centering
  3.3601 +\includegraphics[width=7cm]{./images/basic-muscle.png}
  3.3602 +\caption{\label{muscle-recruit}A muscle profile image that describes the strengths of each motor neuron in a muscle. White is weakest and dark red is strongest. This particular pattern has weaker motor neurons at the beginning, just like human muscle.}
  3.3603 +\end{figure}
  3.3604 +
  3.3605 +Muscles twist the nearest physical object about the muscle node's
  3.3606 +Z-axis. I recommend using the ``Single Arrow'' display mode for
  3.3607 +muscles and using the right hand rule to determine which way the
  3.3608 +muscle will twist. To make a segment that can twist in multiple
  3.3609 +directions, create multiple, differently aligned muscles.
  3.3610 +
  3.3611 +\subsection{\texttt{CORTEX} API}
  3.3612 +\label{sec-5-4}
  3.3613 +
These are some of the functions exposed by \texttt{CORTEX} for creating
  3.3615 +worlds and simulating creatures. These are in addition to
  3.3616 +jMonkeyEngine3's extensive library, which is documented elsewhere.
  3.3617 +
  3.3618 +\subsubsection{Simulation}
  3.3619 +\label{sec-5-4-1}
  3.3620 +\begin{description}
  3.3621 +\item[{\texttt{(world root-node key-map setup-fn update-fn)}}] create
  3.3622 +a simulation.
  3.3623 +\begin{description}
  3.3624 +\item[{\emph{root-node}    }] a \texttt{com.jme3.scene.Node} object which
  3.3625 +contains all of the objects that should be in the
  3.3626 +simulation.
  3.3627 +
\item[{\emph{key-map}      }] a map from strings describing keys to
functions that should be executed whenever that key is
pressed. The functions should take a \texttt{SimpleApplication}
object and a boolean value. The \texttt{SimpleApplication} is the
current simulation that is running, and the boolean is true
if the key is being pressed, and false if it is being
released. As an example,
  3.3635 +\begin{verbatim}
  3.3636 +       {"key-j" (fn [game value] (if value (println "key j pressed")))}
  3.3637 +\end{verbatim}
  3.3638 +is a valid key-map which will cause the simulation to print
  3.3639 +a message whenever the 'j' key on the keyboard is pressed.
  3.3640 +
  3.3641 +\item[{\emph{setup-fn}     }] a function that takes a \texttt{SimpleApplication}
  3.3642 +object. It is called once when initializing the simulation.
  3.3643 +Use it to create things like lights, change the gravity,
  3.3644 +initialize debug nodes, etc.
  3.3645 +
\item[{\emph{update-fn}    }] this function takes a \texttt{SimpleApplication}
object and a float and is called every frame of the
simulation. The float tells how many seconds it has been
since the last frame was rendered, according to whatever
clock jME is currently using. The default is to use
\texttt{IsoTimer}, which will result in this value always being the same.
  3.3652 +\end{description}
  3.3653 +
  3.3654 +\item[{\texttt{(position-camera world position rotation)}}] set the position
  3.3655 +of the simulation's main camera.
  3.3656 +
  3.3657 +\item[{\texttt{(enable-debug world)}}] turn on debug wireframes for each
  3.3658 +simulated object.
  3.3659 +
  3.3660 +\item[{\texttt{(set-gravity world gravity)}}] set the gravity of a running
  3.3661 +simulation.
  3.3662 +
  3.3663 +\item[{\texttt{(box length width height \& \{options\})}}] create a box in the
  3.3664 +simulation. Options is a hash map specifying texture, mass,
  3.3665 +etc. Possible options are \texttt{:name}, \texttt{:color}, \texttt{:mass},
  3.3666 +\texttt{:friction}, \texttt{:texture}, \texttt{:material}, \texttt{:position},
  3.3667 +\texttt{:rotation}, \texttt{:shape}, and \texttt{:physical?}.
  3.3668 +
  3.3669 +\item[{\texttt{(sphere radius \& \{options\})}}] create a sphere in the simulation.
  3.3670 +Options are the same as in \texttt{box}.
  3.3671 +
  3.3672 +\item[{\texttt{(load-blender-model file-name)}}] create a node structure
  3.3673 +representing the model described in a Blender file.
  3.3674 +
\item[{\texttt{(light-up-everything world)}}] distribute a standard complement
  3.3676 +of lights throughout the simulation. Should be adequate for most
  3.3677 +purposes.
  3.3678 +
  3.3679 +\item[{\texttt{(node-seq node)}}] return a recursive list of the node's
  3.3680 +children.
  3.3681 +
  3.3682 +\item[{\texttt{(nodify name children)}}] construct a node given a node-name and
  3.3683 +desired children.
  3.3684 +
  3.3685 +\item[{\texttt{(add-element world element)}}] add an object to a running world
  3.3686 +simulation.
  3.3687 +
  3.3688 +\item[{\texttt{(set-accuracy world accuracy)}}] change the accuracy of the
  3.3689 +world's physics simulator.
  3.3690 +
  3.3691 +\item[{\texttt{(asset-manager)}}] get an \emph{AssetManager}, a jMonkeyEngine
  3.3692 +construct that is useful for loading textures and is required
  3.3693 +for smooth interaction with jMonkeyEngine library functions.
  3.3694 +
  3.3695 +\item[{\texttt{(load-bullet)}  }] unpack native libraries and initialize the
  3.3696 +bullet physics subsystem. This function is required before
  3.3697 +other world building functions are called.
  3.3698 +\end{description}
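
As a rough usage sketch (not from the thesis; the dimensions, key
binding, and overall structure are illustrative only), a minimal
simulation built from the functions above might look like this:

\begin{verbatim}
(mega-import-jme3) ; bring jMonkeyEngine3 classes into scope
(load-bullet)      ; required before building the world

(def floor (box 10 0.1 10 :mass 0)) ; mass 0 => immovable
(def brick (box 0.5 0.5 0.5 :mass 1
                :position (Vector3f. 0 5 0)))

(run-world
 (world
  (nodify "demo" [floor brick])
  {"key-j" (fn [game value]
             (if value (println-repl "key j pressed")))}
  (fn [world]
    (enable-debug world)
    (light-up-everything world))
  (fn [world tpf] nil)))
\end{verbatim}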
  3.3699 +
  3.3700 +\subsubsection{Creature Manipulation / Import}
  3.3701 +\label{sec-5-4-2}
  3.3702 +
  3.3703 +\begin{description}
  3.3704 +\item[{\texttt{(body! creature)}}] give the creature a physical body.
  3.3705 +
  3.3706 +\item[{\texttt{(vision! creature)}}] give the creature a sense of vision.
  3.3707 +Returns a list of functions which will each, when called
  3.3708 +during a simulation, return the vision data for the channel of
  3.3709 +one of the eyes. The functions are ordered depending on the
  3.3710 +alphabetical order of the names of the eye nodes in the
  3.3711 +Blender file. The data returned by the functions is a vector
  3.3712 +containing the eye's \emph{topology}, a vector of coordinates, and
  3.3713 +the eye's \emph{data}, a vector of RGB values filtered by the eye's
  3.3714 +sensitivity.
  3.3715 +
  3.3716 +\item[{\texttt{(hearing! creature)}}] give the creature a sense of hearing.
  3.3717 +Returns a list of functions, one for each ear, that when
  3.3718 +called will return a frame's worth of hearing data for that
  3.3719 +ear. The functions are ordered depending on the alphabetical
  3.3720 +order of the names of the ear nodes in the Blender file. The
  3.3721 +data returned by the functions is an array of PCM (pulse code
  3.3722 +modulated) wav data.
  3.3723 +
  3.3724 +\item[{\texttt{(touch! creature)}}] give the creature a sense of touch. Returns
  3.3725 +a single function that must be called with the \emph{root node} of
the world, and which will return a vector of \emph{touch-data},
one entry for each touch-sensitive component. Each entry
contains a \emph{topology} that specifies the distribution of
touch sensors, and the \emph{data}, which is a vector of
\texttt{[activation, length]} pairs for each touch hair.
  3.3731 +
  3.3732 +\item[{\texttt{(proprioception! creature)}}] give the creature the sense of
  3.3733 +proprioception. Returns a list of functions, one for each
  3.3734 +joint, that when called during a running simulation will
  3.3735 +report the \texttt{[heading, pitch, roll]} of the joint.
  3.3736 +
  3.3737 +\item[{\texttt{(movement! creature)}}] give the creature the power of movement.
  3.3738 +Creates a list of functions, one for each muscle, that when
  3.3739 +called with an integer, will set the recruitment of that
  3.3740 +muscle to that integer, and will report the current power
  3.3741 +being exerted by the muscle. Order of muscles is determined by
  3.3742 +the alphabetical sort order of the names of the muscle nodes.
  3.3743 +\end{description}
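
A hedged sketch of a typical import sequence using the functions
above (the model path is hypothetical):

\begin{verbatim}
(def creature (load-blender-model "Models/worm/worm.blend"))
(body! creature)                         ; physical body first
(def eyes    (vision! creature))         ; one fn per eye channel
(def ears    (hearing! creature))        ; one fn per ear
(def feelers (touch! creature))          ; one fn; call with root node
(def joints  (proprioception! creature)) ; one fn per joint
(def muscles (movement! creature))       ; one fn per muscle
\end{verbatim}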
  3.3744 +
  3.3745 +\subsubsection{Visualization/Debug}
  3.3746 +\label{sec-5-4-3}
  3.3747 +
  3.3748 +\begin{description}
\item[{\texttt{(view-vision)}}] create a function that, when called with a
list of visual data returned by the functions made by \texttt{vision!},
will display that data on the screen.
  3.3752 +
  3.3753 +\item[{\texttt{(view-hearing)}}] same as \texttt{view-vision} but for hearing.
  3.3754 +
  3.3755 +\item[{\texttt{(view-touch)}}] same as \texttt{view-vision} but for touch.
  3.3756 +
  3.3757 +\item[{\texttt{(view-proprioception)}}] same as \texttt{view-vision} but for
  3.3758 +proprioception.
  3.3759 +
  3.3760 +\item[{\texttt{(view-movement)}}] same as \texttt{view-vision} but for muscles.
  3.3761 +
  3.3762 +\item[{\texttt{(view anything)}}] \texttt{view} is a polymorphic function that allows
  3.3763 +you to inspect almost anything you could reasonably expect to
  3.3764 +be able to ``see'' in \texttt{CORTEX}.
  3.3765 +
  3.3766 +\item[{\texttt{(text anything)}}] \texttt{text} is a polymorphic function that allows
  3.3767 +you to convert practically anything into a text string.
  3.3768 +
  3.3769 +\item[{\texttt{(println-repl anything)}}] print messages to clojure's repl
  3.3770 +instead of the simulation's terminal window.
  3.3771 +
  3.3772 +\item[{\texttt{(mega-import-jme3)}}] for experimenting at the REPL. This
  3.3773 +function will import all jMonkeyEngine3 classes for immediate
  3.3774 +use.
  3.3775 +
\item[{\texttt{(display-dilated-time world timer)}}] show the time as it
flows in the simulation on a HUD display.
  3.3778 +\end{description}