diff thesis/old-cortex.org @ 516:ced955c3c84f

resurrect old cortex to fix flow issues.
author Robert McIntyre <rlm@mit.edu>
date Sun, 30 Mar 2014 22:48:19 -0400
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/thesis/old-cortex.org	Sun Mar 30 22:48:19 2014 -0400
     1.3 @@ -0,0 +1,3611 @@
     1.4 +#+title: =CORTEX=
     1.5 +#+author: Robert McIntyre
     1.6 +#+email: rlm@mit.edu
     1.7 +#+description: Using embodied AI to facilitate Artificial Imagination.
     1.8 +#+keywords: AI, clojure, embodiment
     1.9 +#+LaTeX_CLASS_OPTIONS: [nofloat]
    1.10 +
    1.11 +* COMMENT templates
    1.12 +   #+caption: 
    1.13 +   #+caption: 
    1.14 +   #+caption: 
    1.15 +   #+caption: 
    1.16 +   #+name: name
    1.17 +   #+begin_listing clojure
    1.18 +   #+BEGIN_SRC clojure
    1.19 +   #+END_SRC
    1.20 +   #+end_listing
    1.21 +
    1.22 +   #+caption: 
    1.23 +   #+caption: 
    1.24 +   #+caption: 
    1.25 +   #+name: name
    1.26 +   #+ATTR_LaTeX: :width 10cm
    1.27 +   [[./images/aurellem-gray.png]]
    1.28 +
    1.29 +    #+caption: 
    1.30 +    #+caption: 
    1.31 +    #+caption: 
    1.32 +    #+caption: 
    1.33 +    #+name: name
    1.34 +    #+begin_listing clojure
    1.35 +    #+BEGIN_SRC clojure
    1.36 +    #+END_SRC
    1.37 +    #+end_listing
    1.38 +
    1.39 +    #+caption: 
    1.40 +    #+caption: 
    1.41 +    #+caption: 
    1.42 +    #+name: name
    1.43 +    #+ATTR_LaTeX: :width 10cm
    1.44 +    [[./images/aurellem-gray.png]]
    1.45 +
    1.46 +
    1.47 +* Empathy and Embodiment as problem solving strategies
    1.48 +  
    1.49 +  By the end of this thesis, you will have seen a novel approach to
    1.50 +  interpreting video using embodiment and empathy. You will have also
    1.51 +  seen one way to efficiently implement empathy for embodied
    1.52 +  creatures. Finally, you will become familiar with =CORTEX=, a system
    1.53 +  for designing and simulating creatures with rich senses, which you
    1.54 +  may choose to use in your own research.
    1.55 +  
    1.56 +  This is the core vision of my thesis: That one of the important ways
    1.57 +  in which we understand others is by imagining ourselves in their
    1.58 +  position and empathically feeling experiences relative to our own
    1.59 +  bodies. By understanding events in terms of our own previous
    1.60 +  corporeal experience, we greatly constrain the possibilities of what
    1.61 +  would otherwise be an unwieldy exponential search. This extra
    1.62 +  constraint can be the difference between easily understanding what
    1.63 +  is happening in a video and being completely lost in a sea of
    1.64 +  incomprehensible color and movement.
    1.65 +  
    1.66 +** Recognizing actions in video is extremely difficult
    1.67 +
    1.68 +   Consider for example the problem of determining what is happening
    1.69 +   in a video of which this is one frame:
    1.70 +
    1.71 +   #+caption: A cat drinking some water. Identifying this action is 
    1.72 +   #+caption: beyond the state of the art for computers.
    1.73 +   #+ATTR_LaTeX: :width 7cm
    1.74 +   [[./images/cat-drinking.jpg]]
    1.75 +   
    1.76 +   It is currently impossible for any computer program to reliably
    1.77 +   label such a video as ``drinking''. And rightly so -- it is a very
    1.78 +   hard problem! What features can you describe in terms of low level
    1.79 +   functions of pixels that can even begin to describe at a high level
    1.80 +   what is happening here?
    1.81 +  
    1.82 +   Or suppose that you are building a program that recognizes chairs.
    1.83 +   How could you ``see'' the chair in figure \ref{hidden-chair}?
    1.84 +   
    1.85 +   #+caption: The chair in this image is quite obvious to humans, but I 
    1.86 +   #+caption: doubt that any modern computer vision program can find it.
    1.87 +   #+name: hidden-chair
    1.88 +   #+ATTR_LaTeX: :width 10cm
    1.89 +   [[./images/fat-person-sitting-at-desk.jpg]]
    1.90 +   
    1.91 +   Finally, how is it that you can easily tell the difference between
    1.92 +   how the girl's /muscles/ are working in figure \ref{girl}?
    1.93 +   
    1.94 +   #+caption: The mysterious ``common sense'' appears here as you are able 
    1.95 +   #+caption: to discern the difference in how the girl's arm muscles
    1.96 +   #+caption: are activated between the two images.
    1.97 +   #+name: girl
    1.98 +   #+ATTR_LaTeX: :width 7cm
    1.99 +   [[./images/wall-push.png]]
   1.100 +  
   1.101 +   Each of these examples tells us something about what might be going
   1.102 +   on in our minds as we easily solve these recognition problems.
   1.103 +   
   1.104 +   The hidden chairs show us that we are strongly triggered by cues
   1.105 +   relating to the position of human bodies, and that we can determine
   1.106 +   the overall physical configuration of a human body even if much of
   1.107 +   that body is occluded.
   1.108 +
   1.109 +   The picture of the girl pushing against the wall tells us that we
   1.110 +   have common sense knowledge about the kinetics of our own bodies.
   1.111 +   We know well how our muscles would have to work to maintain us in
   1.112 +   most positions, and we can easily project this self-knowledge to
   1.113 +   imagined positions triggered by images of the human body.
   1.114 +
   1.115 +** =EMPATH= neatly solves recognition problems  
   1.116 +   
   1.117 +   I propose a system that can express the types of recognition
   1.118 +   problems above in a form amenable to computation. It is split into
   1.119 +   four parts:
   1.120 +
   1.121 +   - Free/Guided Play :: The creature moves around and experiences the
   1.122 +        world through its unique perspective. Many otherwise
   1.123 +        complicated actions are easily described in the language of a
   1.124 +        full suite of body-centered, rich senses. For example,
   1.125 +        drinking is the feeling of water sliding down your throat, and
   1.126 +        cooling your insides. It's often accompanied by bringing your
   1.127 +        hand close to your face, or bringing your face close to water.
   1.128 +        Sitting down is the feeling of bending your knees, activating
   1.129 +        your quadriceps, then feeling a surface with your bottom and
   1.130 +        relaxing your legs. These body-centered action descriptions
   1.131 +        can be either learned or hard coded.
   1.132 +   - Posture Imitation :: When trying to interpret a video or image,
   1.133 +        the creature takes a model of itself and aligns it with
   1.134 +        whatever it sees. This alignment can even cross species, as
   1.135 +        when humans try to align themselves with things like ponies,
   1.136 +        dogs, or other humans with a different body type.
   1.137 +   - Empathy         :: The alignment triggers associations with
   1.138 +        sensory data from prior experiences. For example, the
   1.139 +        alignment itself easily maps to proprioceptive data. Any
   1.140 +        sounds or obvious skin contact in the video can to a lesser
   1.141 +        extent trigger previous experience. Segments of previous
   1.142 +        experiences are stitched together to form a coherent and
   1.143 +        complete sensory portrait of the scene.
   1.144 +   - Recognition      :: With the scene described in terms of first
   1.145 +        person sensory events, the creature can now run its
   1.146 +        action-identification programs on this synthesized sensory
   1.147 +        data, just as it would if it were actually experiencing the
   1.148 +        scene first-hand. If previous experience has been accurately
   1.149 +        retrieved, and if it is analogous enough to the scene, then
   1.150 +        the creature will correctly identify the action in the scene.
   1.151 +   
   1.152 +   For example, I think humans are able to label the cat video as
   1.153 +   ``drinking'' because they imagine /themselves/ as the cat, and
   1.154 +   imagine putting their face up against a stream of water and
   1.155 +   sticking out their tongue. In that imagined world, they can feel
   1.156 +   the cool water hitting their tongue, and feel the water entering
   1.157 +   their body, and are able to recognize that /feeling/ as drinking.
   1.158 +   So, the label of the action is not really in the pixels of the
   1.159 +   image, but is found clearly in a simulation inspired by those
   1.160 +   pixels. An imaginative system, having been trained on drinking and
   1.161 +   non-drinking examples and learning that the most important
   1.162 +   component of drinking is the feeling of water sliding down one's
   1.163 +   throat, would analyze a video of a cat drinking in the following
   1.164 +   manner:
   1.165 +   
   1.166 +   1. Create a physical model of the video by putting a ``fuzzy''
   1.167 +      model of its own body in place of the cat. Possibly also create
   1.168 +      a simulation of the stream of water.
   1.169 +
   1.170 +   2. Play out this simulated scene and generate imagined sensory
   1.171 +      experience. This will include relevant muscle contractions, a
   1.172 +      close up view of the stream from the cat's perspective, and most
   1.173 +      importantly, the imagined feeling of water entering the
   1.174 +      mouth. The imagined sensory experience can come from a
   1.175 +      simulation of the event, but can also be pattern-matched from
   1.176 +      previous, similar embodied experience.
   1.177 +
   1.178 +   3. The action is now easily identified as drinking by the sense of
   1.179 +      taste alone. The other senses (such as the tongue moving in and
   1.180 +      out) help to give plausibility to the simulated action. Note that
   1.181 +      the sense of vision, while critical in creating the simulation,
   1.182 +      is not critical for identifying the action from the simulation.
   1.183 +
   1.184 +   For the chair examples, the process is even easier:
   1.185 +
   1.186 +    1. Align a model of your body to the person in the image.
   1.187 +
   1.188 +    2. Generate proprioceptive sensory data from this alignment.
   1.189 +  
   1.190 +    3. Use the imagined proprioceptive data as a key to lookup related
   1.191 +       sensory experience associated with that particular proprioceptive
   1.192 +       feeling.
   1.193 +
   1.194 +    4. Retrieve the feeling of your bottom resting on a surface, your
   1.195 +       knees bent, and your leg muscles relaxed.
   1.196 +
   1.197 +    5. This sensory information is consistent with the =sitting?=
   1.198 +       sensory predicate, so you (and the entity in the image) must be
   1.199 +       sitting.
   1.200 +
   1.201 +    6. There must be a chair-like object since you are sitting.
   1.202 +
   1.203 +   Empathy offers yet another alternative to the age-old AI
   1.204 +   representation question: ``What is a chair?'' --- A chair is the
   1.205 +   feeling of sitting.
   1.206 +
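         +   As a purely illustrative sketch (not =EMPATH='s actual code), the
         +   =sitting?= predicate invoked in step 5 might look something like
         +   the following. The helpers =bent?=, =relaxed?=, and =pressure-on?=
         +   are hypothetical names assumed for this example; only the general
         +   shape of a body-centered predicate matters here.
         +
         +   #+begin_listing clojure
         +   #+begin_src clojure
         +;; Hypothetical sketch only: bent?, relaxed?, and pressure-on? are
         +;; assumed helper names, not functions that exist in EMPATH.
         +(defn sitting?
         +  "True when the most recent experience involves bent knees, relaxed
         +   leg muscles, and sustained pressure on the posterior."
         +  [experiences]
         +  (let [now (peek experiences)]
         +    (and (bent?        (:proprioception now) :knees)
         +         (relaxed?     (:muscle now) :quadriceps)
         +         (pressure-on? (:touch now) :posterior))))
         +   #+end_src
         +   #+end_listing
         +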
   1.207 +   My program, =EMPATH=, uses this empathic problem solving technique
   1.208 +   to interpret the actions of a simple, worm-like creature. 
   1.209 +   
   1.210 +   #+caption: The worm performs many actions during free play such as 
   1.211 +   #+caption: curling, wiggling, and resting.
   1.212 +   #+name: worm-intro
   1.213 +   #+ATTR_LaTeX: :width 15cm
   1.214 +   [[./images/worm-intro-white.png]]
   1.215 +
   1.216 +   #+caption: =EMPATH= recognized and classified each of these 
   1.217 +   #+caption: poses by inferring the complete sensory experience 
   1.218 +   #+caption: from proprioceptive data.
   1.219 +   #+name: worm-recognition-intro
   1.220 +   #+ATTR_LaTeX: :width 15cm
   1.221 +   [[./images/worm-poses.png]]
   1.222 +   
   1.223 +   One powerful advantage of empathic problem solving is that it
   1.224 +   factors the action recognition problem into two easier problems. To
   1.225 +   use empathy, you need an /aligner/, which takes the video and a
   1.226 +   model of your body, and aligns the model with the video. Then, you
   1.227 +   need a /recognizer/, which uses the aligned model to interpret the
   1.228 +   action. The power in this method lies in the fact that you describe
   1.229 +   all actions from a body-centered viewpoint. You are less tied to
   1.230 +   the particulars of any visual representation of the actions. If you
   1.231 +   teach the system what ``running'' is, and you have a good enough
   1.232 +   aligner, the system will from then on be able to recognize running
   1.233 +   from any point of view, even strange points of view like above or
   1.234 +   underneath the runner. This is in contrast to action recognition
   1.235 +   schemes that try to identify actions using a non-embodied approach.
   1.236 +   If these systems learn about running as viewed from the side, they
   1.237 +   will not automatically be able to recognize running from any other
   1.238 +   viewpoint.
   1.239 +
   1.240 +   Another powerful advantage is that using the language of multiple
   1.241 +   body-centered rich senses to describe body-centered actions offers a
   1.242 +   massive boost in descriptive capability. Consider how difficult it
   1.243 +   would be to compose a set of HOG filters to describe the action of
   1.244 +   a simple worm-creature ``curling'' so that its head touches its
   1.245 +   tail, and then behold the simplicity of describing this action in a
   1.246 +   language designed for the task (listing \ref{grand-circle-intro}):
   1.247 +
   1.248 +   #+caption: Body-centered actions are best expressed in a body-centered 
   1.249 +   #+caption: language. This code detects when the worm has curled into a 
   1.250 +   #+caption: full circle. Imagine how you would replicate this functionality
   1.251 +   #+caption: using low-level pixel features such as HOG filters!
   1.252 +   #+name: grand-circle-intro
   1.253 +   #+begin_listing clojure
   1.254 +   #+begin_src clojure
   1.255 +(defn grand-circle?
   1.256 +  "Does the worm form a majestic circle (one end touching the other)?"
   1.257 +  [experiences]
   1.258 +  (and (curled? experiences)
   1.259 +       (let [worm-touch (:touch (peek experiences))
   1.260 +             tail-touch (worm-touch 0)
   1.261 +             head-touch (worm-touch 4)]
   1.262 +         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
   1.263 +              (< 0.2 (contact worm-segment-top-tip    head-touch))))))
   1.264 +   #+end_src
   1.265 +   #+end_listing
   1.266 +
   1.267 +**  =CORTEX= is a toolkit for building sensate creatures
   1.268 +
   1.269 +   I built =CORTEX= to be a general AI research platform for doing
   1.270 +   experiments involving multiple rich senses and a wide variety and
   1.271 +   number of creatures. I intend it to be useful as a library for many
   1.272 +   more projects than just this thesis. =CORTEX= meets a need among AI
   1.273 +   researchers at CSAIL and beyond: people often invent neat ideas
   1.274 +   that are best expressed in the
   1.275 +   language of creatures and senses, but in order to explore those
   1.276 +   ideas they must first build a platform in which they can create
   1.277 +   simulated creatures with rich senses! There are many ideas that
   1.278 +   would be simple to execute (such as =EMPATH=), but attached to them
   1.279 +   is the multi-month effort to make a good creature simulator. Often,
   1.280 +   that initial investment of time proves to be too much, and the
   1.281 +   project must make do with a lesser environment.
   1.282 +
   1.283 +   =CORTEX= is well suited as an environment for embodied AI research
   1.284 +   for three reasons:
   1.285 +
   1.286 +   - You can create new creatures using Blender, a popular 3D modeling
   1.287 +     program. Each sense can be specified using special blender nodes
   1.288 +     with biologically inspired parameters. You need not write any
   1.289 +     code to create a creature, and can use a wide library of
   1.290 +     pre-existing blender models as a base for your own creatures.
   1.291 +
   1.292 +   - =CORTEX= implements a wide variety of senses, including touch,
   1.293 +     proprioception, vision, hearing, and muscle tension. Complicated
   1.294 +     senses like touch and vision involve multiple sensory elements
   1.295 +     embedded in a 2D surface. You have complete control over the
   1.296 +     distribution of these sensor elements through the use of simple
   1.297 +     png image files. In particular, =CORTEX= implements more
   1.298 +     comprehensive hearing than any other creature simulation system
   1.299 +     available. 
   1.300 +
   1.301 +   - =CORTEX= supports any number of creatures and any number of
   1.302 +     senses. Time in =CORTEX= dilates so that the simulated creatures
   1.303 +     always perceive a perfectly smooth flow of time, regardless of
   1.304 +     the actual computational load.
   1.305 +
   1.306 +   =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
   1.307 +   engine designed to create cross-platform 3D desktop games. =CORTEX=
   1.308 +   is mainly written in clojure, a dialect of =LISP= that runs on the
   1.309 +   java virtual machine (JVM). The API for creating and simulating
   1.310 +   creatures and senses is entirely expressed in clojure, though many
   1.311 +   senses are implemented at the layer of jMonkeyEngine or below. For
   1.312 +   example, for the sense of hearing I use a layer of clojure code on
   1.313 +   top of a layer of java JNI bindings that drive a layer of =C++=
   1.314 +   code which implements a modified version of =OpenAL= to support
   1.315 +   multiple listeners. =CORTEX= is the only simulation environment
   1.316 +   that I know of that can support multiple entities that can each
   1.317 +   hear the world from their own perspective. Other senses also
   1.318 +   require a small layer of Java code. =CORTEX= also uses =bullet=, a
   1.319 +   physics simulator written in =C=.
   1.320 +
   1.321 +   #+caption: Here is the worm from figure \ref{worm-intro} modeled 
   1.322 +   #+caption: in Blender, a free 3D-modeling program. Senses and 
   1.323 +   #+caption: joints are described using special nodes in Blender.
   1.324 +   #+name: worm-recognition-intro
   1.325 +   #+ATTR_LaTeX: :width 12cm
   1.326 +   [[./images/blender-worm.png]]
   1.327 +
   1.328 +   Here are some things I anticipate that =CORTEX= might be used for:
   1.329 +
   1.330 +   - exploring new ideas about sensory integration
   1.331 +   - distributed communication among swarm creatures
   1.332 +   - self-learning using free exploration
   1.333 +   - evolutionary algorithms involving creature construction
   1.334 +   - exploration of exotic senses and effectors that are not possible
   1.335 +     in the real world (such as telekinesis or a semantic sense)
   1.336 +   - imagination using subworlds
   1.337 +
   1.338 +   During one test with =CORTEX=, I created 3,000 creatures each with
   1.339 +   their own independent senses and ran them all at only 1/80 real
   1.340 +   time. In another test, I created a detailed model of my own hand,
   1.341 +   equipped with a realistic distribution of touch (more sensitive at
   1.342 +   the fingertips), as well as eyes and ears, and it ran at around 1/4
   1.343 +   real time.
   1.344 +
   1.345 +#+BEGIN_LaTeX
   1.346 +   \begin{sidewaysfigure}
   1.347 +   \includegraphics[width=9.5in]{images/full-hand.png}
   1.348 +   \caption{
   1.349 +   I modeled my own right hand in Blender and rigged it with all the
   1.350 +   senses that {\tt CORTEX} supports. My simulated hand has a
   1.351 +   biologically inspired distribution of touch sensors. The senses are
   1.352 +   displayed on the right, and the simulation is displayed on the
   1.353 +   left. Notice that my hand is curling its fingers, that it can see
   1.354 +   its own finger from the eye in its palm, and that it can feel its
   1.355 +   own thumb touching its palm.}
   1.356 +   \end{sidewaysfigure}
   1.357 +#+END_LaTeX
   1.358 +
   1.359 +** Contributions
   1.360 +
   1.361 +   - I built =CORTEX=, a comprehensive platform for embodied AI
   1.362 +     experiments. =CORTEX= supports many features lacking in other
   1.363 +     systems, such as proper simulation of hearing. It is easy to create
   1.364 +     new =CORTEX= creatures using Blender, a free 3D modeling program.
   1.365 +
   1.366 +   - I built =EMPATH=, which uses =CORTEX= to identify the actions of
   1.367 +     a worm-like creature using a computational model of empathy.
   1.368 +   
   1.369 +* Building =CORTEX=
   1.370 +
   1.371 +  I intend for =CORTEX= to be used as a general-purpose library for
   1.372 +  building creatures and outfitting them with senses, so that it will
   1.373 +  be useful for other researchers who want to test out ideas of their
   1.374 +  own. To this end, wherever I have had to make architectural choices
   1.375 +  about =CORTEX=, I have chosen to give as much freedom to the user as
   1.376 +  possible, so that =CORTEX= may be used for things I have not
   1.377 +  foreseen.
   1.378 +
   1.379 +** Simulation or Reality?
   1.380 +   
   1.381 +   The most important architectural decision of all is the choice to
   1.382 +   use a computer-simulated environment in the first place! The world
   1.383 +   is a vast and rich place, and for now simulations are a very poor
   1.384 +   reflection of its complexity. It may be that there is a significant
   1.385 +   qualitative difference between dealing with senses in the real
   1.386 +   world and dealing with pale facsimiles of them in a simulation.
   1.387 +   What are the advantages and disadvantages of a simulation vs.
   1.388 +   reality?
   1.389 +
   1.390 +*** Simulation
   1.391 +
   1.392 +    The advantages of virtual reality are that when everything is a
   1.393 +    simulation, experiments in that simulation are absolutely
   1.394 +    reproducible. It's also easier to change the character and world
   1.395 +    to explore new situations and different sensory combinations.
   1.396 +
   1.397 +    If the world is to be simulated on a computer, then not only do
   1.398 +    you have to worry about whether the character's senses are rich
   1.399 +    enough to learn from the world, but whether the world itself is
   1.400 +    rendered with enough detail and realism to give enough working
   1.401 +    material to the character's senses. To name just a few
   1.402 +    difficulties facing modern physics simulators: destructibility of
   1.403 +    the environment, simulation of water/other fluids, large areas,
   1.404 +    nonrigid bodies, lots of objects, smoke. I don't know of any
   1.405 +    computer simulation that would allow a character to take a rock
   1.406 +    and grind it into fine dust, then use that dust to make a clay
   1.407 +    sculpture, at least not without spending years calculating the
   1.408 +    interactions of every single small grain of dust. Maybe a
   1.409 +    simulated world with today's limitations doesn't provide enough
   1.410 +    richness for real intelligence to evolve.
   1.411 +
   1.412 +*** Reality
   1.413 +
   1.414 +    The other approach for playing with senses is to hook your
   1.415 +    software up to real cameras, microphones, robots, etc., and let it
   1.416 +    loose in the real world. This has the advantage of eliminating
   1.417 +    concerns about simulating the world at the expense of increasing
   1.418 +    the complexity of implementing the senses. Instead of just
   1.419 +    grabbing the current rendered frame for processing, you have to
   1.420 +    use an actual camera with real lenses and interact with photons to
   1.421 +    get an image. It is much harder to change the character, which is
   1.422 +    now partly a physical robot of some sort, since doing so involves
   1.423 +    changing things around in the real world instead of modifying
   1.424 +    lines of code. While the real world is very rich and definitely
   1.425 +    provides enough stimulation for intelligence to develop as
   1.426 +    evidenced by our own existence, it is also uncontrollable in the
   1.427 +    sense that a particular situation cannot be recreated perfectly or
   1.428 +    saved for later use. It is harder to conduct science because it is
   1.429 +    harder to repeat an experiment. The worst thing about using the
   1.430 +    real world instead of a simulation is the matter of time. Instead
   1.431 +    of simulated time you get the constant and unstoppable flow of
   1.432 +    real time. This severely limits the sorts of software you can use
   1.433 +    to program the AI because all sense inputs must be handled in real
   1.434 +    time. Complicated ideas may have to be implemented in hardware or
   1.435 +    may simply be impossible given the current speed of our
   1.436 +    processors. Contrast this with a simulation, in which the flow of
   1.437 +    time in the simulated world can be slowed down to accommodate the
   1.438 +    limitations of the character's programming. In terms of cost,
   1.439 +    doing everything in software is far cheaper than building custom
   1.440 +    real-time hardware. All you need is a laptop and some patience.
   1.441 +
   1.442 +** Because of Time, simulation is preferable to reality
   1.443 +
   1.444 +   I envision =CORTEX= being used to support rapid prototyping and
   1.445 +   iteration of ideas. Even if I could put together a well constructed
   1.446 +   kit for creating robots, it would still not be enough because of
   1.447 +   the scourge of real-time processing. Anyone who wants to test their
   1.448 +   ideas in the real world must always worry about getting their
   1.449 +   algorithms to run fast enough to process information in real time.
   1.450 +   The need for real time processing only increases if multiple senses
   1.451 +   are involved. In the extreme case, even simple algorithms will have
   1.452 +   to be accelerated by ASIC chips or FPGAs, turning what would
   1.453 +   otherwise be a few lines of code and a 10x speed penalty into a
   1.454 +   multi-month ordeal. For this reason, =CORTEX= supports
   1.455 +   /time-dilation/, which scales back the framerate of the
   1.456 +   simulation in proportion to the amount of processing each frame
   1.457 +   requires. From the perspective of the creatures inside the simulation,
   1.458 +   time always appears to flow at a constant rate, regardless of how
   1.459 +   complicated the environment becomes or how many creatures are in
   1.460 +   the simulation. The cost is that =CORTEX= can sometimes run slower
   1.461 +   than real time. This can also be an advantage, however ---
   1.462 +   simulations of very simple creatures in =CORTEX= generally run at
   1.463 +   40x on my machine!
   1.464 +
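         +   To make the idea concrete, here is a minimal sketch of what
         +   time-dilation amounts to. This illustrates the principle only and
         +   is not =CORTEX='s actual implementation: the creature's update
         +   function is always handed the same fixed simulated timestep, no
         +   matter how long the real frame took to compute.
         +
         +   #+begin_listing clojure
         +   #+begin_src clojure
         +;; Illustrative sketch of time-dilation, not CORTEX's actual code.
         +(def simulated-dt (/ 1.0 60.0)) ; each frame advances 1/60 s of sim-time
         +
         +(defn dilate-time
         +  "Wrap an update function so that the creature always perceives a
         +   constant flow of time, regardless of real computation time."
         +  [update-fn]
         +  (fn [world real-tpf]      ; the real time-per-frame is ignored
         +    (update-fn world simulated-dt)))
         +   #+end_src
         +   #+end_listing
         +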
   1.465 +** What is a sense?
   1.466 +   
   1.467 +   If =CORTEX= is to support a wide variety of senses, it would help
   1.468 +   to have a better understanding of what a ``sense'' actually is!
   1.469 +   While vision, touch, and hearing all seem like they are quite
   1.470 +   different things, I was surprised to learn during the course of
   1.471 +   this thesis that they (and all physical senses) can be expressed as
   1.472 +   exactly the same mathematical object due to a dimensional argument!
   1.473 +
   1.474 +   Human beings are three-dimensional objects, and the nerves that
   1.475 +   transmit data from our various sense organs to our brain are
   1.476 +   essentially one-dimensional. This leaves up to two dimensions in
   1.477 +   which our sensory information may flow. For example, imagine your
   1.478 +   skin: it is a two-dimensional surface around a three-dimensional
   1.479 +   object (your body). It has discrete touch sensors embedded at
   1.480 +   various points, and the density of these sensors corresponds to the
   1.481 +   sensitivity of that region of skin. Each touch sensor connects to a
   1.482 +   nerve, all of which eventually are bundled together as they travel
   1.483 +   up the spinal cord to the brain. Intersect the spinal nerves with a
   1.484 +   guillotining plane and you will see all of the sensory data of the
   1.485 +   skin revealed in a roughly circular two-dimensional image which is
   1.486 +   the cross section of the spinal cord. Points on this image that are
   1.487 +   close together in this circle represent touch sensors that are
   1.488 +   /probably/ close together on the skin, although there is of course
   1.489 +   some cutting and rearrangement that has to be done to transfer the
   1.490 +   complicated surface of the skin onto a two dimensional image.
   1.491 +
   1.492 +   Most human senses consist of many discrete sensors of various
   1.493 +   properties distributed along a surface at various densities. For
   1.494 +   skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
   1.495 +   disks, and Ruffini's endings, which detect pressure and vibration
   1.496 +   of various intensities. For ears, it is the stereocilia distributed
   1.497 +   along the basilar membrane inside the cochlea; each one is
   1.498 +   sensitive to a slightly different frequency of sound. For eyes, it
   1.499 +   is rods and cones distributed along the surface of the retina. In
   1.500 +   each case, we can describe the sense with a surface and a
   1.501 +   distribution of sensors along that surface.
   1.502 +
   1.503 +   The neat idea is that every human sense can be effectively
   1.504 +   described in terms of a surface containing embedded sensors. If the
   1.505 +   sense had any more dimensions, then there wouldn't be enough room
   1.506 +   in the spinal cord to transmit the information!
   1.507 +
   1.508 +   Therefore, =CORTEX= must support the ability to create objects and
   1.509 +   then be able to ``paint'' points along their surfaces to describe
   1.510 +   each sense. 
   1.511 +
   1.512 +   Fortunately this idea is already a well-known computer graphics
   1.513 +   technique called /UV-mapping/. The three-dimensional surface
   1.514 +   of a model is cut and smooshed until it fits on a two-dimensional
   1.515 +   image. You paint whatever you want on that image, and when the
   1.516 +   three-dimensional shape is rendered in a game the smooshing and
   1.517 +   cutting is reversed and the image appears on the three-dimensional
   1.518 +   object.
   1.519 +
   1.520 +   To make a sense, interpret the UV-image as describing the
   1.521 +   distribution of that sense's sensors. To get different types of
   1.522 +   sensors, you can either use a different color for each type of
   1.523 +   sensor, or use multiple UV-maps, each labeled with that sensor
   1.524 +   type. I generally use a white pixel to mean the presence of a
   1.525 +   sensor and a black pixel to mean the absence of a sensor, and use
   1.526 +   one UV-map for each sensor-type within a given sense. 
   1.527 +
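         +   As a rough sketch of this idea (the function name and the exact
         +   pixel test are assumptions for illustration, not =CORTEX='s own
         +   code), collecting sensor positions from a UV-map amounts to
         +   scanning the image for white pixels:
         +
         +   #+begin_listing clojure
         +   #+begin_src clojure
         +;; Illustrative sketch: treat every white pixel of a UV-map image as
         +;; one sensor, and return its [x y] position on the image.
         +(import '(javax.imageio ImageIO) '(java.io File))
         +
         +(defn white-pixel-coordinates
         +  "Return the [x y] coordinates of every white pixel in the image file."
         +  [image-path]
         +  (let [image (ImageIO/read (File. image-path))]
         +    (for [x (range (.getWidth image))
         +          y (range (.getHeight image))
         +          ;; getRGB returns packed ARGB; opaque white is -1 (0xFFFFFFFF)
         +          :when (= -1 (.getRGB image x y))]
         +      [x y])))
         +   #+end_src
         +   #+end_listing
         +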
   1.528 +   #+caption: The UV-map for an elongated icosphere. The white
   1.529 +   #+caption: dots each represent a touch sensor. They are dense 
   1.530 +   #+caption: in the regions that describe the tip of the finger, 
   1.531 +   #+caption: and less dense along the dorsal side of the finger 
   1.532 +   #+caption: opposite the tip.
   1.533 +   #+name: finger-UV
   1.534 +   #+ATTR_LaTeX: :width 10cm
   1.535 +   [[./images/finger-UV.png]]
   1.536 +
   1.537 +   #+caption: Ventral side of the UV-mapped finger. Notice the 
   1.538 +   #+caption: density of touch sensors at the tip.
   1.539 +   #+name: finger-side-view
   1.540 +   #+ATTR_LaTeX: :width 10cm
   1.541 +   [[./images/finger-1.png]]
   1.542 +
   1.543 +** Video game engines provide ready-made physics and shading
   1.544 +   
   1.545 +   I did not need to write my own physics simulation code or shader to
   1.546 +   build =CORTEX=. Doing so would lead to a system that is impossible
   1.547 +   for anyone but myself to use anyway. Instead, I use a video game
   1.548 +   engine as a base and modify it to accommodate the additional needs
   1.549 +   of =CORTEX=. Video game engines are an ideal starting point to
   1.550 +   build =CORTEX=, because they are not far from being creature
   1.551 +   building systems themselves.
   1.552 +   
   1.553 +   First off, general purpose video game engines come with a physics
   1.554 +   engine and lighting / sound system. The physics system provides
   1.555 +   tools that can be co-opted to serve as touch, proprioception, and
   1.556 +   muscles. Since some games support split screen views, a good video
   1.557 +   game engine will allow you to efficiently create multiple cameras
   1.558 +   in the simulated world that can be used as eyes. Video game systems
   1.559 +   offer integrated asset management for things like textures and
   1.560 +     creature models, providing an avenue for defining creatures. They
   1.561 +   also understand UV-mapping, since this technique is used to apply a
   1.562 +   texture to a model. Finally, because video game engines support a
   1.563 +   large number of users, as long as =CORTEX= doesn't stray too far
   1.564 +   from the base system, other researchers can turn to this community
   1.565 +   for help when doing their research.
   1.566 +   
   1.567 +** =CORTEX= is based on jMonkeyEngine3
   1.568 +
   1.569 +   While preparing to build =CORTEX= I studied several video game
   1.570 +   engines to see which would best serve as a base. The top contenders
   1.571 +   were:
   1.572 +
   1.573 +   - [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]]    :: The Quake II engine was designed by ID
   1.574 +        software in 1997.  All the source code was released by ID
   1.575 +        software into the Public Domain several years ago, and as a
   1.576 +        result it has been ported to many different languages. This
   1.577 +        engine was famous for its advanced use of realistic shading
   1.578 +        and had decent and fast physics simulation. The main advantage
   1.579 +        of the Quake II engine is its simplicity, but I ultimately
   1.580 +        rejected it because the engine is too tied to the concept of a
   1.581 +        first-person shooter game. One of the problems I had was that
   1.582 +        there does not seem to be any easy way to attach multiple
   1.583 +        cameras to a single character. There are also several physics
   1.584 +        clipping issues that are corrected in a way that only applies
   1.585 +        to the main character and do not apply to arbitrary objects.
   1.586 +
   1.587 +   - [[http://source.valvesoftware.com/][Source Engine]]     :: The Source Engine evolved from the Quake II
   1.588 +        and Quake I engines and is used by Valve in the Half-Life
   1.589 +        series of games. The physics simulation in the Source Engine
   1.590 +        is quite accurate and probably the best out of all the engines
   1.591 +        I investigated. There is also an extensive community actively
   1.592 +        working with the engine. However, applications that use the
   1.593 +        Source Engine must be written in C++, the code is not open, it
   1.594 +        only runs on Windows, and the tools that come with the SDK to
   1.595 +        handle models and textures are complicated and awkward to use.
   1.596 +
   1.597 +   -  [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
   1.598 +        games in Java. It uses OpenGL to render to the screen and uses
   1.599 +        scene graphs to avoid drawing things that do not appear on the
   1.600 +        screen. It has an active community and several games in the
   1.601 +        pipeline. The engine was not built to serve any particular
   1.602 +        game but is instead meant to be used for any 3D game. 
   1.603 +
   1.604 +   I chose jMonkeyEngine3 because it had the most features
   1.605 +   out of all the free projects I looked at, and because I could then
   1.606 +   write my code in clojure, an implementation of =LISP= that runs on
   1.607 +   the JVM.
   1.608 +
   1.609 +** =CORTEX= uses Blender to create creature models
   1.610 +
   1.611 +   For the simple worm-like creatures I will use later on in this
   1.612 +   thesis, I could define a simple API in =CORTEX= that would allow
   1.613 +   one to create boxes, spheres, etc., and leave that API as the sole
   1.614 +   way to create creatures. However, for =CORTEX= to truly be useful
   1.615 +   for other projects, it needs a way to construct complicated
   1.616 +   creatures. If possible, it would be nice to leverage work that has
   1.617 +   already been done by the community of 3D modelers, or at least
   1.618 +   enable people who are talented at modeling but not programming to
   1.619 +   design =CORTEX= creatures.
   1.620 +
   1.621 +   Therefore, I use Blender, a free 3D modeling program, as the main
   1.622 +   way to create creatures in =CORTEX=. However, the creatures modeled
   1.623 +   in Blender must also be simple to simulate in jMonkeyEngine3's game
   1.624 +   engine, and must also be easy to rig with =CORTEX='s senses. I
   1.625 +   accomplish this with extensive use of Blender's ``empty nodes.'' 
   1.626 +
   1.627 +   Empty nodes have no mass, physical presence, or appearance, but
   1.628 +   they can hold metadata and have names. I use a tree structure of
   1.629 +   empty nodes to specify senses in the following manner:
   1.630 +
   1.631 +   - Create a single top-level empty node whose name is the name of
   1.632 +     the sense.
   1.633 +   - Add empty nodes which each contain meta-data relevant to the
   1.634 +     sense, including a UV-map describing the number/distribution of
   1.635 +     sensors if applicable.
   1.636 +   - Make each empty-node the child of the top-level node.
   1.637 +     
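         +   As a hedged illustration of this convention (in practice the tree
         +   is built in Blender rather than in code, and the node names and
         +   metadata below are made up for this example), the ``joints'' sense
         +   used later in this chapter is just a named parent node with one
         +   metadata-carrying child per joint:
         +
         +   #+begin_listing clojure
         +   #+begin_src clojure
         +;; Illustration only: building the node layout in code rather than in
         +;; Blender. The node names and metadata string are assumptions.
         +(import 'com.jme3.scene.Node)
         +
         +(let [joints-node (Node. "joints")       ; top-level node named after the sense
         +      elbow       (Node. "joint.elbow")] ; one child per joint
         +  ;; in a real creature this metadata comes from Blender's custom properties
         +  (.setUserData elbow "joint" "{:type :hinge :limit [0 (/ Math/PI 2)]}")
         +  (.attachChild joints-node elbow)
         +  joints-node)
         +   #+end_src
         +   #+end_listing
         +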
   1.638 +   #+caption: An example of annotating a creature model with empty
   1.639 +   #+caption: nodes to describe the layout of senses. There are 
   1.640 +   #+caption: multiple empty nodes which each describe the position
   1.641 +   #+caption: of muscles, ears, eyes, or joints.
   1.642 +   #+name: sense-nodes
   1.643 +   #+ATTR_LaTeX: :width 10cm
   1.644 +   [[./images/empty-sense-nodes.png]]
   1.645 +
   1.646 +** Bodies are composed of segments connected by joints
   1.647 +
   1.648 +   Blender is a general purpose animation tool, which has been used in
   1.649 +   the past to create high quality movies such as Sintel
   1.650 +   \cite{blender}. Though Blender can model and render even complicated
   1.651 +   things like water, it is crucial to keep models that are meant to
   1.652 +   be simulated as creatures simple. =Bullet=, which =CORTEX= uses
   1.653 +   through jMonkeyEngine3, is a rigid-body physics system. This offers
   1.654 +   a compromise between the expressiveness of a game level and the
   1.655 +   speed at which it can be simulated, and it means that creatures
   1.656 +   should be naturally expressed as rigid components held together by
   1.657 +   joint constraints.
   1.658 +
   1.659 +   But humans are more like a squishy bag wrapped around some
   1.660 +   hard bones which define the overall shape. When we move, our skin
   1.661 +   bends and stretches to accommodate the new positions of our bones.
   1.662 +
   1.663 +   One way to make bodies composed of rigid pieces connected by joints
   1.664 +   /seem/ more human-like is to use an /armature/ (or /rigging/)
   1.665 +   system, which defines an overall ``body mesh'' and defines how the
   1.666 +   mesh deforms as a function of the position of each ``bone'' which
   1.667 +   is a standard rigid body. This technique is used extensively to
   1.668 +   model humans and create realistic animations. It is not a good
   1.669 +   technique for physical simulation, however, because it creates a lie
   1.670 +   -- the skin is not a physical part of the simulation and does not
   1.671 +   interact with any objects in the world or itself. Objects will pass
   1.672 +   right through the skin until they come in contact with the
   1.673 +   underlying bone, which is a physical object. Without simulating
   1.674 +   the skin, the sense of touch has little meaning, and the creature's
   1.675 +   own vision will lie to it about the true extent of its body.
   1.676 +   Simulating the skin as a physical object requires some way to
   1.677 +   continuously update the physical model of the skin along with the
   1.678 +   movement of the bones, which is unacceptably slow compared to rigid
   1.679 +   body simulation. 
   1.680 +
   1.681 +   Therefore, instead of using the human-like ``deformable bag of
   1.682 +   bones'' approach, I decided to base my body plans on multiple solid
   1.683 +   objects that are connected by joints, inspired by the robot =EVE=
   1.684 +   from the movie WALL-E.
   1.685 +   
   1.686 +   #+caption: =EVE= from the movie WALL-E.  This body plan turns 
   1.687 +   #+caption: out to be much better suited to my purposes than a more 
   1.688 +   #+caption: human-like one.
   1.689 +   #+ATTR_LaTeX: :width 10cm
   1.690 +   [[./images/Eve.jpg]]
   1.691 +
   1.692 +   =EVE='s body is composed of several rigid components that are held
   1.693 +   together by invisible joint constraints. This is what I mean by
   1.694 +   ``eve-like''. The main reason that I use eve-style bodies is for
   1.695 +   efficiency, and so that there will be correspondence between the
   1.696 +   AI's senses and the physical presence of its body. Each individual
   1.697 +   section is simulated by a separate rigid body that corresponds
   1.698 +   exactly with its visual representation and does not change.
   1.699 +   Sections are connected by invisible joints that are well supported
   1.700 +   in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
   1.701 +   can efficiently simulate hundreds of rigid bodies connected by
   1.702 +   joints. Just because sections are rigid does not mean they have to
   1.703 +   stay as one piece forever; they can be dynamically replaced with
   1.704 +   multiple sections to simulate splitting in two. This could be used
   1.705 +   to simulate retractable claws or =EVE='s hands, which are able to
   1.706 +   coalesce into one object in the movie.
   1.707 +
   1.708 +*** Solidifying/Connecting a body
   1.709 +
   1.710 +    =CORTEX= creates a creature in two steps: first, it traverses the
   1.711 +    nodes in the blender file and creates physical representations for
   1.712 +    any of them that have mass defined in their blender meta-data.
   1.713 +
   1.714 +   #+caption: Program for iterating through the nodes in a blender file
   1.715 +   #+caption: and generating physical jMonkeyEngine3 objects with mass
   1.716 +   #+caption: and a matching physics shape.
   1.717 +   #+name: name
   1.718 +   #+begin_listing clojure
   1.719 +   #+begin_src clojure
   1.720 +(defn physical!
   1.721 +  "Iterate through the nodes in creature and make them real physical
   1.722 +   objects in the simulation."
   1.723 +  [#^Node creature]
   1.724 +  (dorun
   1.725 +   (map
   1.726 +    (fn [geom]
   1.727 +      (let [physics-control
   1.728 +            (RigidBodyControl.
   1.729 +             (HullCollisionShape.
   1.730 +              (.getMesh geom))
   1.731 +             (if-let [mass (meta-data geom "mass")]
   1.732 +               (float mass) (float 1)))]
   1.733 +        (.addControl geom physics-control)))
   1.734 +    (filter #(isa? (class %) Geometry )
   1.735 +            (node-seq creature)))))
   1.736 +   #+end_src
   1.737 +   #+end_listing
   1.738 +   
   1.739 +    The next step to making a proper body is to connect those pieces
   1.740 +    together with joints. jMonkeyEngine has a large array of joints
   1.741 +    available via =bullet=, such as Point2Point, Cone, Hinge, and a
   1.742 +    generic Six Degree of Freedom joint, with or without spring
   1.743 +    restitution. 
   1.744 +
   1.745 +    Joints are treated a lot like proper senses, in that there is a
   1.746 +    top-level empty node named ``joints'' whose children each
   1.747 +    represent a joint.
   1.748 +
   1.749 +    #+caption: View of the hand model in Blender showing the main ``joints''
   1.750 +    #+caption: node (highlighted in yellow) and its children which each
   1.751 +    #+caption: represent a joint in the hand. Each joint node has metadata
   1.752 +    #+caption: specifying what sort of joint it is.
   1.753 +    #+name: blender-hand
   1.754 +    #+ATTR_LaTeX: :width 10cm
   1.755 +    [[./images/hand-screenshot1.png]]
   1.756 +
   1.757 +
   1.758 +    =CORTEX='s procedure for binding the creature together with joints
   1.759 +    is as follows:
   1.760 +    
   1.761 +    - Find the children of the ``joints'' node.
   1.762 +    - Determine the two spatials the joint is meant to connect.
   1.763 +    - Create the joint based on the meta-data of the empty node.
   1.764 +
   1.765 +    The higher order function =sense-nodes= from =cortex.sense=
   1.766 +    simplifies finding the joints based on their parent ``joints''
   1.767 +    node.
   1.768 +
   1.769 +   #+caption: Retrieving the child empty nodes from a single 
   1.770 +   #+caption: named empty node is a common pattern in =CORTEX=;
   1.771 +   #+caption: further instances of this technique for the senses 
   1.772 +   #+caption: will be omitted.
   1.773 +   #+name: get-empty-nodes
   1.774 +   #+begin_listing clojure
   1.775 +   #+begin_src clojure
   1.776 +(defn sense-nodes
   1.777 +  "For some senses there is a special empty blender node whose
   1.778 +   children are considered markers for an instance of that sense. This
   1.779 +   function generates functions to find those children, given the name
   1.780 +   of the special parent node."
   1.781 +  [parent-name]
   1.782 +  (fn [#^Node creature]
   1.783 +    (if-let [sense-node (.getChild creature parent-name)]
   1.784 +      (seq (.getChildren sense-node)) [])))
   1.785 +
   1.786 +(def
   1.787 +  ^{:doc "Return the children of the creature's \"joints\" node."
   1.788 +    :arglists '([creature])}
   1.789 +  joints
   1.790 +  (sense-nodes "joints"))
   1.791 +   #+end_src
   1.792 +   #+end_listing
   1.793 +
   1.794 +    To find a joint's targets, =CORTEX= creates a small cube, centered
   1.795 +    around the empty-node, and grows the cube exponentially until it
   1.796 +    intersects two physical objects. The objects are ordered according
   1.797 +    to the joint's rotation, with the first one being the object that
   1.798 +    has more negative coordinates in the joint's reference frame.
   1.799 +    Since the objects must be physical, the empty-node itself escapes
   1.800 +    detection; this also means that =joint-targets= must be called
   1.801 +    /after/ =physical!= is called.
   1.802 +   
   1.803 +    #+caption: Program to find the targets of a joint node by 
   1.804 +    #+caption: exponential growth of a search cube.
   1.805 +    #+name: joint-targets
   1.806 +    #+begin_listing clojure
   1.807 +    #+begin_src clojure
   1.808 +(defn joint-targets
   1.809 +  "Return the two closest two objects to the joint object, ordered
   1.810 +  from bottom to top according to the joint's rotation."
   1.811 +  [#^Node parts #^Node joint]
   1.812 +  (loop [radius (float 0.01)]
   1.813 +    (let [results (CollisionResults.)]
   1.814 +      (.collideWith
   1.815 +       parts
   1.816 +       (BoundingBox. (.getWorldTranslation joint)
   1.817 +                     radius radius radius) results)
   1.818 +      (let [targets
   1.819 +            (distinct
   1.820 +             (map  #(.getGeometry %) results))]
   1.821 +        (if (>= (count targets) 2)
   1.822 +          (sort-by
   1.823 +           #(let [joint-ref-frame-position
   1.824 +                  (jme-to-blender
   1.825 +                   (.mult
   1.826 +                    (.inverse (.getWorldRotation joint))
   1.827 +                    (.subtract (.getWorldTranslation %)
   1.828 +                               (.getWorldTranslation joint))))]
   1.829 +              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))                  
   1.830 +           (take 2 targets))
   1.831 +          (recur (float (* radius 2))))))))
   1.832 +    #+end_src
   1.833 +    #+end_listing
   1.834 +   
   1.835 +    Once =CORTEX= finds all joints and targets, it creates them using
   1.836 +    a dispatch on the metadata of each joint node.
   1.837 +
   1.838 +    #+caption: Program to dispatch on blender metadata and create joints
   1.839 +    #+caption: suitable for physical simulation.
   1.840 +    #+name: joint-dispatch
   1.841 +    #+begin_listing clojure
   1.842 +    #+begin_src clojure
   1.843 +(defmulti joint-dispatch
   1.844 +  "Translate blender pseudo-joints into real JME joints."
   1.845 +  (fn [constraints & _] 
   1.846 +    (:type constraints)))
   1.847 +
   1.848 +(defmethod joint-dispatch :point
   1.849 +  [constraints control-a control-b pivot-a pivot-b rotation]
   1.850 +  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
   1.851 +    (.setLinearLowerLimit Vector3f/ZERO)
   1.852 +    (.setLinearUpperLimit Vector3f/ZERO)))
   1.853 +
   1.854 +(defmethod joint-dispatch :hinge
   1.855 +  [constraints control-a control-b pivot-a pivot-b rotation]
   1.856 +  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
   1.857 +        [limit-1 limit-2] (:limit constraints)
   1.858 +        hinge-axis (.mult rotation (blender-to-jme axis))]
   1.859 +    (doto (HingeJoint. control-a control-b pivot-a pivot-b 
   1.860 +                       hinge-axis hinge-axis)
   1.861 +      (.setLimit limit-1 limit-2))))
   1.862 +
   1.863 +(defmethod joint-dispatch :cone
   1.864 +  [constraints control-a control-b pivot-a pivot-b rotation]
   1.865 +  (let [limit-xz (:limit-xz constraints)
   1.866 +        limit-xy (:limit-xy constraints)
   1.867 +        twist    (:twist constraints)]
   1.868 +    (doto (ConeJoint. control-a control-b pivot-a pivot-b
   1.869 +                      rotation rotation)
   1.870 +      (.setLimit (float limit-xz) (float limit-xy)
   1.871 +                 (float twist)))))
   1.872 +    #+end_src
   1.873 +    #+end_listing
   1.874 +
   1.875 +    All that is left for joints is to combine the above pieces into
   1.876 +    something that can operate on the collection of nodes that a
   1.877 +    blender file represents.
   1.878 +
   1.879 +    #+caption: Program to completely create a joint given information 
   1.880 +    #+caption: from a blender file.
   1.881 +    #+name: connect
   1.882 +    #+begin_listing clojure
   1.883 +   #+begin_src clojure
   1.884 +(defn connect
   1.885 +  "Create a joint between 'obj-a and 'obj-b at the location of
   1.886 +  'joint. The type of joint is determined by the metadata on 'joint.
   1.887 +
   1.888 +   Here are some examples:
   1.889 +   {:type :point}
   1.890 +   {:type :hinge  :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
   1.891 +   (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
   1.892 +
   1.893 +   {:type :cone :limit-xz 0
   1.894 +                :limit-xy 0
   1.895 +                :twist 0}   (use XZY rotation mode in blender!)"
   1.896 +  [#^Node obj-a #^Node obj-b #^Node joint]
   1.897 +  (let [control-a (.getControl obj-a RigidBodyControl)
   1.898 +        control-b (.getControl obj-b RigidBodyControl)
   1.899 +        joint-center (.getWorldTranslation joint)
   1.900 +        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
   1.901 +        pivot-a (world-to-local obj-a joint-center)
   1.902 +        pivot-b (world-to-local obj-b joint-center)]
   1.903 +    (if-let
   1.904 +        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
   1.905 +      ;; A side-effect of creating a joint registers
   1.906 +      ;; it with both physics objects which in turn
   1.907 +      ;; will register the joint with the physics system
   1.908 +      ;; when the simulation is started.
   1.909 +        (joint-dispatch constraints
   1.910 +                        control-a control-b
   1.911 +                        pivot-a pivot-b
   1.912 +                        joint-rotation))))
   1.913 +    #+end_src
   1.914 +    #+end_listing
   1.915 +
   1.916 +    In general, whenever =CORTEX= exposes a sense (or in this case
   1.917 +    physicality), it provides a function of the type =sense!=, which
   1.918 +    takes in a collection of nodes and augments it to support that
   1.919 +    sense. The function returns any controls necessary to use that
   1.920 +    sense. In this case =body!= creates a physical body and returns no
   1.921 +    control functions.
   1.922 +
   1.923 +    #+caption: Program to give joints to a creature.
   1.924 +    #+name: name
   1.925 +    #+begin_listing clojure
   1.926 +    #+begin_src clojure
   1.927 +(defn joints!
   1.928 +  "Connect the solid parts of the creature with physical joints. The
   1.929 +   joints are taken from the \"joints\" node in the creature."
   1.930 +  [#^Node creature]
   1.931 +  (dorun
   1.932 +   (map
   1.933 +    (fn [joint]
   1.934 +      (let [[obj-a obj-b] (joint-targets creature joint)]
   1.935 +        (connect obj-a obj-b joint)))
   1.936 +    (joints creature))))
   1.937 +(defn body!
   1.938 +  "Endow the creature with a physical body connected with joints.  The
   1.939 +   particulars of the joints and the masses of each body part are
   1.940 +   determined in blender."
   1.941 +  [#^Node creature]
   1.942 +  (physical! creature)
   1.943 +  (joints! creature))
   1.944 +    #+end_src
   1.945 +    #+end_listing
   1.946 +
   1.947 +    All of the code you have just seen amounts to only 130 lines, yet
   1.948 +    because it builds on top of Blender and jMonkeyEngine3, those few
   1.949 +    lines pack quite a punch!
   1.950 +
   1.951 +    The hand from figure \ref{blender-hand}, which was modeled after
   1.952 +    my own right hand, can now be given joints and simulated as a
   1.953 +    creature.
   1.954 +   
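         +    As a usage sketch, making a modeled creature physical is a single
         +    call. The model path and the =load-blender-model= helper name are
         +    assumptions for illustration.
         +
         +    #+begin_listing clojure
         +    #+begin_src clojure
         +;; Usage sketch: load-blender-model and the path below are assumed
         +;; names for whatever loads a creature's blender file into a Node.
         +(def hand (load-blender-model "Models/test-creature/hand.blend"))
         +
         +(body! hand) ; solidify the segments and connect them with joints
         +    #+end_src
         +    #+end_listing
         +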
   1.955 +    #+caption: With the ability to create physical creatures from blender,
   1.956 +    #+caption: =CORTEX= gets one step closer to becoming a full creature
   1.957 +    #+caption: simulation environment.
   1.958 +    #+name: name
   1.959 +    #+ATTR_LaTeX: :width 15cm
   1.960 +    [[./images/physical-hand.png]]
   1.961 +
   1.962 +** Eyes reuse standard video game components
   1.963 +
   1.964 +   Vision is one of the most important senses for humans, so I need to
   1.965 +   build a simulated sense of vision for my AI. I will do this with
   1.966 +   simulated eyes. Each eye can be independently moved and should see
   1.967 +   its own version of the world depending on where it is.
   1.968 +
   1.969 +   Making these simulated eyes a reality is simple because
   1.970 +   jMonkeyEngine already contains extensive support for multiple views
   1.971 +   of the same 3D simulated world. jMonkeyEngine has this support
   1.972 +   because it is necessary for creating games with
   1.973 +   split-screen views. Multiple views are also used to create
   1.974 +   efficient pseudo-reflections by rendering the scene from a certain
   1.975 +   perspective and then projecting it back onto a surface in the 3D
   1.976 +   world.
   1.977 +
   1.978 +   #+caption: jMonkeyEngine supports multiple views to enable 
   1.979 +   #+caption: split-screen games, like GoldenEye, which was one of 
   1.980 +   #+caption: the first games to use split-screen views.
   1.981 +   #+name: name
   1.982 +   #+ATTR_LaTeX: :width 10cm
   1.983 +   [[./images/goldeneye-4-player.png]]
   1.984 +
   1.985 +*** A Brief Description of jMonkeyEngine's Rendering Pipeline
   1.986 +
   1.987 +    jMonkeyEngine allows you to create a =ViewPort=, which represents a
   1.988 +    view of the simulated world. You can create as many of these as you
   1.989 +    want. Every frame, the =RenderManager= iterates through each
    1.990 +    =ViewPort=, rendering the scene on the GPU. For each =ViewPort= there
    1.991 +    is a =FrameBuffer= which represents the rendered image on the GPU.
   1.992 +  
   1.993 +    #+caption: =ViewPorts= are cameras in the world. During each frame, 
   1.994 +    #+caption: the =RenderManager= records a snapshot of what each view 
   1.995 +    #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
   1.996 +    #+name: rendermanagers
   1.997 +    #+ATTR_LaTeX: :width 10cm
   1.998 +    [[./images/diagram_rendermanager2.png]]
   1.999 +
  1.1000 +    Each =ViewPort= can have any number of attached =SceneProcessor=
  1.1001 +    objects, which are called every time a new frame is rendered. A
  1.1002 +    =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
  1.1003 +    whatever it wants to the data.  Often this consists of invoking GPU
  1.1004 +    specific operations on the rendered image.  The =SceneProcessor= can
  1.1005 +    also copy the GPU image data to RAM and process it with the CPU.
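
    To make the pieces concrete, here is a minimal sketch (not =CORTEX=
    code; the names =attach-processor-sketch=, =root-node=, and =cam=
    are illustrative) of how a view is wired up in jMonkeyEngine3: a
    =ViewPort= is created for a camera, given a scene to render, and
    given a =SceneProcessor= to receive each rendered frame. This is
    roughly the wiring that =CORTEX= performs in its =add-camera!=
    helper, which appears later inside =vision-kernel=.

    #+begin_src clojure
(defn attach-processor-sketch
  "Sketch: create a ViewPort for 'cam that renders 'root-node and
   passes every rendered frame to 'processor."
  [render-manager root-node cam processor]
  (let [view-port (.createMainView render-manager "sketch-view" cam)]
    (.attachScene view-port root-node)    ; what this view renders
    (.addProcessor view-port processor)   ; who receives each FrameBuffer
    view-port))
    #+end_src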
  1.1006 +
  1.1007 +*** Appropriating Views for Vision
  1.1008 +
  1.1009 +    Each eye in the simulated creature needs its own =ViewPort= so
  1.1010 +    that it can see the world from its own perspective. To this
  1.1011 +    =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
  1.1012 +    any arbitrary continuation function for further processing. That
  1.1013 +    continuation function may perform both CPU and GPU operations on
  1.1014 +    the data. To make this easy for the continuation function, the
  1.1015 +    =SceneProcessor= maintains appropriately sized buffers in RAM to
  1.1016 +    hold the data. It does not do any copying from the GPU to the CPU
  1.1017 +    itself because it is a slow operation.
  1.1018 +
   1.1019 +    #+caption: Function to make the rendered scene in jMonkeyEngine 
  1.1020 +    #+caption: available for further processing.
  1.1021 +    #+name: pipeline-1 
  1.1022 +    #+begin_listing clojure
  1.1023 +    #+begin_src clojure
  1.1024 +(defn vision-pipeline
  1.1025 +  "Create a SceneProcessor object which wraps a vision processing
  1.1026 +  continuation function. The continuation is a function that takes 
  1.1027 +  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  1.1028 +  each of which has already been appropriately sized."
  1.1029 +  [continuation]
  1.1030 +  (let [byte-buffer (atom nil)
  1.1031 +	renderer (atom nil)
  1.1032 +        image (atom nil)]
  1.1033 +  (proxy [SceneProcessor] []
  1.1034 +    (initialize
  1.1035 +     [renderManager viewPort]
  1.1036 +     (let [cam (.getCamera viewPort)
  1.1037 +	   width (.getWidth cam)
  1.1038 +	   height (.getHeight cam)]
  1.1039 +       (reset! renderer (.getRenderer renderManager))
  1.1040 +       (reset! byte-buffer
  1.1041 +	     (BufferUtils/createByteBuffer
  1.1042 +	      (* width height 4)))
  1.1043 +        (reset! image (BufferedImage.
  1.1044 +                      width height
  1.1045 +                      BufferedImage/TYPE_4BYTE_ABGR))))
  1.1046 +    (isInitialized [] (not (nil? @byte-buffer)))
  1.1047 +    (reshape [_ _ _])
  1.1048 +    (preFrame [_])
  1.1049 +    (postQueue [_])
  1.1050 +    (postFrame
  1.1051 +     [#^FrameBuffer fb]
  1.1052 +     (.clear @byte-buffer)
  1.1053 +     (continuation @renderer fb @byte-buffer @image))
  1.1054 +    (cleanup []))))
  1.1055 +    #+end_src
  1.1056 +    #+end_listing
  1.1057 +
  1.1058 +    The continuation function given to =vision-pipeline= above will be
  1.1059 +    given a =Renderer= and three containers for image data. The
  1.1060 +    =FrameBuffer= references the GPU image data, but the pixel data
  1.1061 +    can not be used directly on the CPU. The =ByteBuffer= and
  1.1062 +    =BufferedImage= are initially "empty" but are sized to hold the
  1.1063 +    data in the =FrameBuffer=. I call transferring the GPU image data
  1.1064 +    to the CPU structures "mixing" the image data.
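
    As an illustration, a continuation that does nothing but ``mix''
    the image down to the CPU might look like the following sketch.
    This is not the exact =CORTEX= helper (=BufferedImage!= in listing
    \ref{vision-kernel} plays that role); it assumes jMonkeyEngine's
    =com.jme3.util.Screenshots= utility is available.

    #+begin_src clojure
(defn mix-image-sketch
  "Sketch: copy the GPU pixels in 'fb into 'bb, then unpack 'bb into
   the BufferedImage 'bi so that ordinary Java2D code can use it."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (.readFrameBuffer r fb bb)             ; GPU --> ByteBuffer
  (Screenshots/convertScreenShot bb bi)  ; ByteBuffer --> BufferedImage
  bi)
    #+end_src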
  1.1065 +
  1.1066 +*** Optical sensor arrays are described with images and referenced with metadata
  1.1067 +
  1.1068 +    The vision pipeline described above handles the flow of rendered
  1.1069 +    images. Now, =CORTEX= needs simulated eyes to serve as the source
  1.1070 +    of these images.
  1.1071 +
   1.1072 +    An eye is described in blender in the same way as a joint: it is
   1.1073 +    a zero-dimensional empty object with no geometry whose local
   1.1074 +    coordinate system determines the orientation of the resulting eye.
  1.1075 +    All eyes are children of a parent node named "eyes" just as all
  1.1076 +    joints have a parent named "joints". An eye binds to the nearest
  1.1077 +    physical object with =bind-sense=.
  1.1078 +
  1.1079 +    #+caption: Here, the camera is created based on metadata on the
  1.1080 +    #+caption: eye-node and attached to the nearest physical object 
  1.1081 +    #+caption: with =bind-sense=
  1.1082 +    #+name: add-eye
   1.1083 +    #+begin_listing clojure
    #+begin_src clojure
  1.1084 +(defn add-eye!
  1.1085 +  "Create a Camera centered on the current position of 'eye which
  1.1086 +   follows the closest physical node in 'creature. The camera will
  1.1087 +   point in the X direction and use the Z vector as up as determined
  1.1088 +   by the rotation of these vectors in blender coordinate space. Use
  1.1089 +   XZY rotation for the node in blender."
  1.1090 +  [#^Node creature #^Spatial eye]
  1.1091 +  (let [target (closest-node creature eye)
  1.1092 +        [cam-width cam-height] 
  1.1093 +        ;;[640 480] ;; graphics card on laptop doesn't support
   1.1094 +                    ;; arbitrary dimensions.
  1.1095 +        (eye-dimensions eye)
  1.1096 +        cam (Camera. cam-width cam-height)
  1.1097 +        rot (.getWorldRotation eye)]
  1.1098 +    (.setLocation cam (.getWorldTranslation eye))
  1.1099 +    (.lookAtDirection
  1.1100 +     cam                           ; this part is not a mistake and
  1.1101 +     (.mult rot Vector3f/UNIT_X)   ; is consistent with using Z in
  1.1102 +     (.mult rot Vector3f/UNIT_Y))  ; blender as the UP vector.
  1.1103 +    (.setFrustumPerspective
  1.1104 +     cam (float 45)
  1.1105 +     (float (/ (.getWidth cam) (.getHeight cam)))
  1.1106 +     (float 1)
  1.1107 +     (float 1000))
  1.1108 +    (bind-sense target cam) cam))
    #+end_src
   1.1109 +    #+end_listing
  1.1110 +
  1.1111 +*** Simulated Retina 
  1.1112 +
  1.1113 +    An eye is a surface (the retina) which contains many discrete
  1.1114 +    sensors to detect light. These sensors can have different
  1.1115 +    light-sensing properties. In humans, each discrete sensor is
  1.1116 +    sensitive to red, blue, green, or gray. These different types of
  1.1117 +    sensors can have different spatial distributions along the retina.
  1.1118 +    In humans, there is a fovea in the center of the retina which has
  1.1119 +    a very high density of color sensors, and a blind spot which has
  1.1120 +    no sensors at all. Sensor density decreases in proportion to
  1.1121 +    distance from the fovea.
  1.1122 +
  1.1123 +    I want to be able to model any retinal configuration, so my
  1.1124 +    eye-nodes in blender contain metadata pointing to images that
  1.1125 +    describe the precise position of the individual sensors using
   1.1126 +    white pixels. The metadata also specifies the light sensitivity of
   1.1127 +    the sensors described in the image. An eye can
  1.1128 +    contain any number of these images. For example, the metadata for
  1.1129 +    an eye might look like this:
  1.1130 +
  1.1131 +    #+begin_src clojure
  1.1132 +{0xFF0000 "Models/test-creature/retina-small.png"}
  1.1133 +    #+end_src
  1.1134 +
  1.1135 +    #+caption: An example retinal profile image. White pixels are 
  1.1136 +    #+caption: photo-sensitive elements. The distribution of white 
  1.1137 +    #+caption: pixels is denser in the middle and falls off at the 
  1.1138 +    #+caption: edges and is inspired by the human retina.
  1.1139 +    #+name: retina
  1.1140 +    #+ATTR_LaTeX: :width 7cm
  1.1141 +    [[./images/retina-small.png]]
  1.1142 +
   1.1143 +    Together, the number 0xFF0000 and the image above describe the
   1.1144 +    placement of red-sensitive sensory elements.
  1.1145 +
  1.1146 +    Meta-data to very crudely approximate a human eye might be
  1.1147 +    something like this:
  1.1148 +
  1.1149 +    #+begin_src clojure
  1.1150 +(let [retinal-profile "Models/test-creature/retina-small.png"]
  1.1151 +  {0xFF0000 retinal-profile
  1.1152 +   0x00FF00 retinal-profile
  1.1153 +   0x0000FF retinal-profile
  1.1154 +   0xFFFFFF retinal-profile})
  1.1155 +    #+end_src
  1.1156 +
  1.1157 +    The numbers that serve as keys in the map determine a sensor's
  1.1158 +    relative sensitivity to the channels red, green, and blue. These
  1.1159 +    sensitivity values are packed into an integer in the order
  1.1160 +    =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
  1.1161 +    image are added together with these sensitivities as linear
  1.1162 +    weights. Therefore, 0xFF0000 means sensitive to red only while
  1.1163 +    0xFFFFFF means sensitive to all colors equally (gray).
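
    To make the weighting concrete, the following sketch (not the
    =pixel-sense= used in the listing below; the normalization to
    $[0,1]$ is just one reasonable choice) shows how a packed
    sensitivity value and a packed pixel value could combine into a
    single sense reading:

    #+begin_src clojure
(defn pixel-sense-sketch
  "Sketch: weight the R, G, and B channels of 'pixel by the matching
   channels of 'sensitivity and normalize the result to [0,1]."
  [sensitivity pixel]
  (let [channel (fn [x shift] (bit-and 0xFF (bit-shift-right x shift)))
        weights [(channel sensitivity 16) (channel sensitivity 8)
                 (channel sensitivity 0)]
        values  [(channel pixel 16) (channel pixel 8) (channel pixel 0)]
        total   (reduce + weights)]
    (if (zero? total)
      0.0
      (double (/ (reduce + (map * weights values))
                 (* 255 total))))))
    #+end_src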
  1.1164 +
  1.1165 +    #+caption: This is the core of vision in =CORTEX=. A given eye node 
  1.1166 +    #+caption: is converted into a function that returns visual
  1.1167 +    #+caption: information from the simulation.
  1.1168 +    #+name: vision-kernel
  1.1169 +    #+begin_listing clojure
  1.1170 +    #+BEGIN_SRC clojure
  1.1171 +(defn vision-kernel
  1.1172 +  "Returns a list of functions, each of which will return a color
  1.1173 +   channel's worth of visual information when called inside a running
  1.1174 +   simulation."
  1.1175 +  [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
  1.1176 +  (let [retinal-map (retina-sensor-profile eye)
  1.1177 +        camera (add-eye! creature eye)
  1.1178 +        vision-image
  1.1179 +        (atom
  1.1180 +         (BufferedImage. (.getWidth camera)
  1.1181 +                         (.getHeight camera)
  1.1182 +                         BufferedImage/TYPE_BYTE_BINARY))
  1.1183 +        register-eye!
  1.1184 +        (runonce
  1.1185 +         (fn [world]
  1.1186 +           (add-camera!
  1.1187 +            world camera
  1.1188 +            (let [counter  (atom 0)]
  1.1189 +              (fn [r fb bb bi]
  1.1190 +                (if (zero? (rem (swap! counter inc) (inc skip)))
  1.1191 +                  (reset! vision-image
  1.1192 +                          (BufferedImage! r fb bb bi))))))))]
  1.1193 +     (vec
  1.1194 +      (map
  1.1195 +       (fn [[key image]]
  1.1196 +         (let [whites (white-coordinates image)
  1.1197 +               topology (vec (collapse whites))
  1.1198 +               sensitivity (sensitivity-presets key key)]
  1.1199 +           (attached-viewport.
  1.1200 +            (fn [world]
  1.1201 +              (register-eye! world)
  1.1202 +              (vector
  1.1203 +               topology
  1.1204 +               (vec 
  1.1205 +                (for [[x y] whites]
  1.1206 +                  (pixel-sense 
  1.1207 +                   sensitivity
  1.1208 +                   (.getRGB @vision-image x y))))))
  1.1209 +            register-eye!)))
  1.1210 +         retinal-map))))
  1.1211 +    #+END_SRC
  1.1212 +    #+end_listing
  1.1213 +
  1.1214 +    Note that since each of the functions generated by =vision-kernel=
  1.1215 +    shares the same =register-eye!= function, the eye will be
   1.1216 +    registered only once, the first time any of the functions from the
   1.1217 +    list returned by =vision-kernel= is called. Each of the functions
   1.1218 +    returned by =vision-kernel= also allows access to the =ViewPort=
  1.1219 +    through which it receives images.
  1.1220 +
  1.1221 +    All the hard work has been done; all that remains is to apply
  1.1222 +    =vision-kernel= to each eye in the creature and gather the results
  1.1223 +    into one list of functions.
  1.1224 +
  1.1225 +
  1.1226 +    #+caption: With =vision!=, =CORTEX= is already a fine simulation 
  1.1227 +    #+caption: environment for experimenting with different types of 
  1.1228 +    #+caption: eyes.
  1.1229 +    #+name: vision!
  1.1230 +    #+begin_listing clojure
  1.1231 +    #+BEGIN_SRC clojure
  1.1232 +(defn vision!
  1.1233 +  "Returns a list of functions, each of which returns visual sensory
  1.1234 +   data when called inside a running simulation."
  1.1235 +  [#^Node creature & {skip :skip :or {skip 0}}]
  1.1236 +  (reduce
  1.1237 +   concat 
  1.1238 +   (for [eye (eyes creature)]
  1.1239 +     (vision-kernel creature eye))))
  1.1240 +    #+END_SRC
  1.1241 +    #+end_listing
  1.1242 +
  1.1243 +    #+caption: Simulated vision with a test creature and the 
  1.1244 +    #+caption: human-like eye approximation. Notice how each channel
  1.1245 +    #+caption: of the eye responds differently to the differently 
  1.1246 +    #+caption: colored balls.
   1.1247 +    #+name: worm-vision-test
  1.1248 +    #+ATTR_LaTeX: :width 13cm
  1.1249 +    [[./images/worm-vision.png]]
  1.1250 +
  1.1251 +    The vision code is not much more complicated than the body code,
  1.1252 +    and enables multiple further paths for simulated vision. For
  1.1253 +    example, it is quite easy to create bifocal vision -- you just
  1.1254 +    make two eyes next to each other in blender! It is also possible
  1.1255 +    to encode vision transforms in the retinal files. For example, the
   1.1256 +    human-like retina file in figure \ref{retina} approximates a
  1.1257 +    log-polar transform.
  1.1258 +
  1.1259 +    This vision code has already been absorbed by the jMonkeyEngine
  1.1260 +    community and is now (in modified form) part of a system for
  1.1261 +    capturing in-game video to a file.
  1.1262 +
  1.1263 +** Hearing is hard; =CORTEX= does it right
  1.1264 +   
  1.1265 +   At the end of this section I will have simulated ears that work the
  1.1266 +   same way as the simulated eyes in the last section. I will be able to
  1.1267 +   place any number of ear-nodes in a blender file, and they will bind to
  1.1268 +   the closest physical object and follow it as it moves around. Each ear
  1.1269 +   will provide access to the sound data it picks up between every frame.
  1.1270 +
  1.1271 +   Hearing is one of the more difficult senses to simulate, because there
  1.1272 +   is less support for obtaining the actual sound data that is processed
  1.1273 +   by jMonkeyEngine3. There is no "split-screen" support for rendering
  1.1274 +   sound from different points of view, and there is no way to directly
  1.1275 +   access the rendered sound data.
  1.1276 +
   1.1277 +   =CORTEX='s hearing is unique in that it overcomes these
   1.1278 +   limitations. As far as I know, there is no other system that
   1.1279 +   supports multiple listeners, and the sound demo at the end of
   1.1280 +   this section is the first time multiple listeners have been
   1.1281 +   demonstrated in a video game environment.
  1.1282 +
  1.1283 +*** Brief Description of jMonkeyEngine's Sound System
  1.1284 +
   1.1285 +   jMonkeyEngine's sound system works as follows (a brief configuration sketch follows the list):
  1.1286 +
  1.1287 +   - jMonkeyEngine uses the =AppSettings= for the particular
  1.1288 +     application to determine what sort of =AudioRenderer= should be
  1.1289 +     used.
   1.1290 +   - Although some support is provided for multiple audio rendering
  1.1291 +     backends, jMonkeyEngine at the time of this writing will either
  1.1292 +     pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
  1.1293 +   - jMonkeyEngine tries to figure out what sort of system you're
  1.1294 +     running and extracts the appropriate native libraries.
  1.1295 +   - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
   1.1296 +     Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
  1.1297 +   - =OpenAL= renders the 3D sound and feeds the rendered sound
  1.1298 +     directly to any of various sound output devices with which it
  1.1299 +     knows how to communicate.
  1.1300 +  
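   In practice the first two points boil down to a single settings
   call made before the application starts. A minimal sketch (the
   names =request-openal-sketch= and =app= are illustrative, not
   =CORTEX= code):

   #+begin_src clojure
(defn request-openal-sketch
  "Sketch: ask jMonkeyEngine to use the LWJGL/OpenAL audio renderer
   for the given Application before it starts."
  [app]
  (let [settings (AppSettings. true)]      ; load the jME3 defaults
    (.setAudioRenderer settings AppSettings/LWJGL_OPENAL)
    (.setSettings app settings)
    app))
   #+end_src

   This sketch only selects which backend renders the sound; it says
   nothing about getting the rendered data back out.
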
  1.1301 +   A consequence of this is that there's no way to access the actual
  1.1302 +   sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
  1.1303 +   one /listener/ (it renders sound data from only one perspective),
  1.1304 +   which normally isn't a problem for games, but becomes a problem
  1.1305 +   when trying to make multiple AI creatures that can each hear the
  1.1306 +   world from a different perspective.
  1.1307 +
  1.1308 +   To make many AI creatures in jMonkeyEngine that can each hear the
  1.1309 +   world from their own perspective, or to make a single creature with
  1.1310 +   many ears, it is necessary to go all the way back to =OpenAL= and
  1.1311 +   implement support for simulated hearing there.
  1.1312 +
   1.1313 +*** Extending =OpenAL=
  1.1314 +
  1.1315 +    Extending =OpenAL= to support multiple listeners requires 500
  1.1316 +    lines of =C= code and is too hairy to mention here. Instead, I
  1.1317 +    will show a small amount of extension code and go over the high
   1.1318 +    level strategy. Full source is of course available with the
  1.1319 +    =CORTEX= distribution if you're interested.
  1.1320 +
  1.1321 +    =OpenAL= goes to great lengths to support many different systems,
  1.1322 +    all with different sound capabilities and interfaces. It
  1.1323 +    accomplishes this difficult task by providing code for many
  1.1324 +    different sound backends in pseudo-objects called /Devices/.
  1.1325 +    There's a device for the Linux Open Sound System and the Advanced
  1.1326 +    Linux Sound Architecture, there's one for Direct Sound on Windows,
  1.1327 +    and there's even one for Solaris. =OpenAL= solves the problem of
  1.1328 +    platform independence by providing all these Devices.
  1.1329 +
  1.1330 +    Wrapper libraries such as LWJGL are free to examine the system on
  1.1331 +    which they are running and then select an appropriate device for
  1.1332 +    that system.
  1.1333 +
  1.1334 +    There are also a few "special" devices that don't interface with
  1.1335 +    any particular system. These include the Null Device, which
  1.1336 +    doesn't do anything, and the Wave Device, which writes whatever
  1.1337 +    sound it receives to a file, if everything has been set up
  1.1338 +    correctly when configuring =OpenAL=.
  1.1339 +
   1.1340 +    Actual mixing (Doppler shift and distance- and environment-based
   1.1341 +    attenuation) of the sound data happens in the Devices, and they
  1.1342 +    are the only point in the sound rendering process where this data
  1.1343 +    is available.
  1.1344 +
  1.1345 +    Therefore, in order to support multiple listeners, and get the
  1.1346 +    sound data in a form that the AIs can use, it is necessary to
  1.1347 +    create a new Device which supports this feature.
  1.1348 +
   1.1349 +    Adding a device to =OpenAL= is rather tricky -- there are five
  1.1350 +    separate files in the =OpenAL= source tree that must be modified
  1.1351 +    to do so. I named my device the "Multiple Audio Send" Device, or
  1.1352 +    =Send= Device for short, since it sends audio data back to the
  1.1353 +    calling application like an Aux-Send cable on a mixing board.
  1.1354 +
  1.1355 +    The main idea behind the Send device is to take advantage of the
  1.1356 +    fact that LWJGL only manages one /context/ when using OpenAL. A
  1.1357 +    /context/ is like a container that holds samples and keeps track
  1.1358 +    of where the listener is. In order to support multiple listeners,
  1.1359 +    the Send device identifies the LWJGL context as the master
  1.1360 +    context, and creates any number of slave contexts to represent
  1.1361 +    additional listeners. Every time the device renders sound, it
  1.1362 +    synchronizes every source from the master LWJGL context to the
  1.1363 +    slave contexts. Then, it renders each context separately, using a
  1.1364 +    different listener for each one. The rendered sound is made
  1.1365 +    available via JNI to jMonkeyEngine.
  1.1366 +
  1.1367 +    Switching between contexts is not the normal operation of a
  1.1368 +    Device, and one of the problems with doing so is that a Device
  1.1369 +    normally keeps around a few pieces of state such as the
   1.1370 +    =ClickRemoval= array, which will become corrupted if the
  1.1371 +    contexts are not rendered in parallel. The solution is to create a
  1.1372 +    copy of this normally global device state for each context, and
  1.1373 +    copy it back and forth into and out of the actual device state
  1.1374 +    whenever a context is rendered.
  1.1375 +
  1.1376 +    The core of the =Send= device is the =syncSources= function, which
  1.1377 +    does the job of copying all relevant data from one context to
  1.1378 +    another. 
  1.1379 +
  1.1380 +    #+caption: Program for extending =OpenAL= to support multiple
  1.1381 +    #+caption: listeners via context copying/switching.
  1.1382 +    #+name: sync-openal-sources
  1.1383 +    #+begin_listing c
  1.1384 +    #+BEGIN_SRC c
  1.1385 +void syncSources(ALsource *masterSource, ALsource *slaveSource, 
  1.1386 +		 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  1.1387 +  ALuint master = masterSource->source;
  1.1388 +  ALuint slave = slaveSource->source;
  1.1389 +  ALCcontext *current = alcGetCurrentContext();
  1.1390 +
  1.1391 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  1.1392 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  1.1393 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  1.1394 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  1.1395 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  1.1396 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  1.1397 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  1.1398 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  1.1399 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  1.1400 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  1.1401 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  1.1402 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  1.1403 +  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
  1.1404 +    
  1.1405 +  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  1.1406 +  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  1.1407 +  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
  1.1408 +  
  1.1409 +  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  1.1410 +  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
  1.1411 +
  1.1412 +  alcMakeContextCurrent(masterCtx);
  1.1413 +  ALint source_type;
  1.1414 +  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
  1.1415 +
  1.1416 +  // Only static sources are currently synchronized! 
  1.1417 +  if (AL_STATIC == source_type){
  1.1418 +    ALint master_buffer;
  1.1419 +    ALint slave_buffer;
  1.1420 +    alGetSourcei(master, AL_BUFFER, &master_buffer);
  1.1421 +    alcMakeContextCurrent(slaveCtx);
  1.1422 +    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
  1.1423 +    if (master_buffer != slave_buffer){
  1.1424 +      alSourcei(slave, AL_BUFFER, master_buffer);
  1.1425 +    }
  1.1426 +  }
  1.1427 +  
  1.1428 +  // Synchronize the state of the two sources.
  1.1429 +  alcMakeContextCurrent(masterCtx);
  1.1430 +  ALint masterState;
  1.1431 +  ALint slaveState;
  1.1432 +
  1.1433 +  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  1.1434 +  alcMakeContextCurrent(slaveCtx);
  1.1435 +  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
  1.1436 +
  1.1437 +  if (masterState != slaveState){
  1.1438 +    switch (masterState){
  1.1439 +    case AL_INITIAL : alSourceRewind(slave); break;
  1.1440 +    case AL_PLAYING : alSourcePlay(slave);   break;
  1.1441 +    case AL_PAUSED  : alSourcePause(slave);  break;
  1.1442 +    case AL_STOPPED : alSourceStop(slave);   break;
  1.1443 +    }
  1.1444 +  }
  1.1445 +  // Restore whatever context was previously active.
  1.1446 +  alcMakeContextCurrent(current);
  1.1447 +}
  1.1448 +    #+END_SRC
  1.1449 +    #+end_listing
  1.1450 +
  1.1451 +    With this special context-switching device, and some ugly JNI
  1.1452 +    bindings that are not worth mentioning, =CORTEX= gains the ability
  1.1453 +    to access multiple sound streams from =OpenAL=. 
  1.1454 +
  1.1455 +    #+caption: Program to create an ear from a blender empty node. The ear
  1.1456 +    #+caption: follows around the nearest physical object and passes 
  1.1457 +    #+caption: all sensory data to a continuation function.
  1.1458 +    #+name: add-ear
  1.1459 +    #+begin_listing clojure
  1.1460 +    #+BEGIN_SRC clojure
  1.1461 +(defn add-ear!  
  1.1462 +  "Create a Listener centered on the current position of 'ear 
  1.1463 +   which follows the closest physical node in 'creature and 
  1.1464 +   sends sound data to 'continuation."
  1.1465 +  [#^Application world #^Node creature #^Spatial ear continuation]
  1.1466 +  (let [target (closest-node creature ear)
  1.1467 +        lis (Listener.)
  1.1468 +        audio-renderer (.getAudioRenderer world)
  1.1469 +        sp (hearing-pipeline continuation)]
  1.1470 +    (.setLocation lis (.getWorldTranslation ear))
  1.1471 +    (.setRotation lis (.getWorldRotation ear))
  1.1472 +    (bind-sense target lis)
  1.1473 +    (update-listener-velocity! target lis)
  1.1474 +    (.addListener audio-renderer lis)
  1.1475 +    (.registerSoundProcessor audio-renderer lis sp)))
  1.1476 +    #+END_SRC
  1.1477 +    #+end_listing
  1.1478 +    
  1.1479 +    The =Send= device, unlike most of the other devices in =OpenAL=,
  1.1480 +    does not render sound unless asked. This enables the system to
  1.1481 +    slow down or speed up depending on the needs of the AIs who are
  1.1482 +    using it to listen. If the device tried to render samples in
  1.1483 +    real-time, a complicated AI whose mind takes 100 seconds of
  1.1484 +    computer time to simulate 1 second of AI-time would miss almost
  1.1485 +    all of the sound in its environment!
  1.1486 +
  1.1487 +    #+caption: Program to enable arbitrary hearing in =CORTEX=
  1.1488 +    #+name: hearing
  1.1489 +    #+begin_listing clojure
   1.1490 +    #+BEGIN_SRC clojure
  1.1491 +(defn hearing-kernel
  1.1492 +  "Returns a function which returns auditory sensory data when called
  1.1493 +   inside a running simulation."
  1.1494 +  [#^Node creature #^Spatial ear]
  1.1495 +  (let [hearing-data (atom [])
  1.1496 +        register-listener!
  1.1497 +        (runonce 
  1.1498 +         (fn [#^Application world]
  1.1499 +           (add-ear!
  1.1500 +            world creature ear
  1.1501 +            (comp #(reset! hearing-data %)
  1.1502 +                  byteBuffer->pulse-vector))))]
  1.1503 +    (fn [#^Application world]
  1.1504 +      (register-listener! world)
  1.1505 +      (let [data @hearing-data
  1.1506 +            topology              
  1.1507 +            (vec (map #(vector % 0) (range 0 (count data))))]
  1.1508 +        [topology data]))))
  1.1509 +    
  1.1510 +(defn hearing!
  1.1511 +  "Endow the creature in a particular world with the sense of
  1.1512 +   hearing. Will return a sequence of functions, one for each ear,
  1.1513 +   which when called will return the auditory data from that ear."
  1.1514 +  [#^Node creature]
  1.1515 +  (for [ear (ears creature)]
  1.1516 +    (hearing-kernel creature ear)))
  1.1517 +    #+END_SRC
  1.1518 +    #+end_listing
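
    To give a sense of how these functions are consumed, the following
    sketch (the names =hearing-sketch=, =creature=, and =world= are
    illustrative, not =CORTEX= code) polls each ear once and
    summarizes the chunk of sound it has rendered since the last call:

    #+begin_src clojure
(defn hearing-sketch
  "Sketch: call each ear function once and report how many samples it
   returned and the loudest sample in that chunk."
  [creature world]
  (for [ear-fn (hearing! creature)]
    (let [[topology data] (ear-fn world)]
      {:samples (count data)
       :peak (if (empty? data)
               0
               (apply max (map #(Math/abs (double %)) data)))})))
    #+end_src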
  1.1519 +
  1.1520 +    Armed with these functions, =CORTEX= is able to test possibly the
   1.1521 +    first ever instance of multiple listeners in a simulation built
   1.1522 +    on a video game engine!
  1.1523 +
  1.1524 +    #+caption: Here a simple creature responds to sound by changing
  1.1525 +    #+caption: its color from gray to green when the total volume
  1.1526 +    #+caption: goes over a threshold.
  1.1527 +    #+name: sound-test
  1.1528 +    #+begin_listing java
  1.1529 +    #+BEGIN_SRC java
  1.1530 +/**
  1.1531 + * Respond to sound!  This is the brain of an AI entity that 
  1.1532 + * hears its surroundings and reacts to them.
  1.1533 + */
  1.1534 +public void process(ByteBuffer audioSamples, 
  1.1535 +		    int numSamples, AudioFormat format) {
  1.1536 +    audioSamples.clear();
  1.1537 +    byte[] data = new byte[numSamples];
  1.1538 +    float[] out = new float[numSamples];
  1.1539 +    audioSamples.get(data);
  1.1540 +    FloatSampleTools.
  1.1541 +	byte2floatInterleaved
  1.1542 +	(data, 0, out, 0, numSamples/format.getFrameSize(), format);
  1.1543 +
  1.1544 +    float max = Float.NEGATIVE_INFINITY;
  1.1545 +    for (float f : out){if (f > max) max = f;}
  1.1546 +    audioSamples.clear();
  1.1547 +
  1.1548 +    if (max > 0.1){
  1.1549 +	entity.getMaterial().setColor("Color", ColorRGBA.Green);
  1.1550 +    }
  1.1551 +    else {
  1.1552 +	entity.getMaterial().setColor("Color", ColorRGBA.Gray);
   1.1553 +    }
}
  1.1554 +    #+END_SRC
  1.1555 +    #+end_listing
  1.1556 +
   1.1557 +    #+caption: First ever simulation of multiple listeners in =CORTEX=.
   1.1558 +    #+caption: Each cube is a creature which processes sound data with
   1.1559 +    #+caption: the =process= function from listing \ref{sound-test}. 
   1.1560 +    #+caption: The ball is constantly emitting a pure tone of
   1.1561 +    #+caption: constant volume. As it approaches the cubes, they each
   1.1562 +    #+caption: change color in response to the sound.
   1.1563 +    #+name: sound-cubes
  1.1564 +    #+ATTR_LaTeX: :width 10cm
  1.1565 +    [[./images/java-hearing-test.png]]
  1.1566 +
  1.1567 +    This system of hearing has also been co-opted by the
  1.1568 +    jMonkeyEngine3 community and is used to record audio for demo
  1.1569 +    videos.
  1.1570 +
  1.1571 +** Touch uses hundreds of hair-like elements
  1.1572 +
  1.1573 +   Touch is critical to navigation and spatial reasoning and as such I
  1.1574 +   need a simulated version of it to give to my AI creatures.
  1.1575 +   
  1.1576 +   Human skin has a wide array of touch sensors, each of which
   1.1577 +   specializes in detecting different vibrational modes and pressures.
   1.1578 +   These sensors can integrate a vast expanse of skin (e.g. your
  1.1579 +   entire palm), or a tiny patch of skin at the tip of your finger.
  1.1580 +   The hairs of the skin help detect objects before they even come
  1.1581 +   into contact with the skin proper.
  1.1582 +   
  1.1583 +   However, touch in my simulated world can not exactly correspond to
  1.1584 +   human touch because my creatures are made out of completely rigid
  1.1585 +   segments that don't deform like human skin.
  1.1586 +   
  1.1587 +   Instead of measuring deformation or vibration, I surround each
  1.1588 +   rigid part with a plenitude of hair-like objects (/feelers/) which
  1.1589 +   do not interact with the physical world. Physical objects can pass
  1.1590 +   through them with no effect. The feelers are able to tell when
  1.1591 +   other objects pass through them, and they constantly report how
  1.1592 +   much of their extent is covered. So even though the creature's body
  1.1593 +   parts do not deform, the feelers create a margin around those body
   1.1594 +   parts that provides a sense of touch which is a hybrid between a
  1.1595 +   human's sense of deformation and sense from hairs.
  1.1596 +   
  1.1597 +   Implementing touch in jMonkeyEngine follows a different technical
  1.1598 +   route than vision and hearing. Those two senses piggybacked off
  1.1599 +   jMonkeyEngine's 3D audio and video rendering subsystems. To
  1.1600 +   simulate touch, I use jMonkeyEngine's physics system to execute
  1.1601 +   many small collision detections, one for each feeler. The placement
  1.1602 +   of the feelers is determined by a UV-mapped image which shows where
  1.1603 +   each feeler should be on the 3D surface of the body.
  1.1604 +
  1.1605 +*** Defining Touch Meta-Data in Blender
  1.1606 +
  1.1607 +    Each geometry can have a single UV map which describes the
  1.1608 +    position of the feelers which will constitute its sense of touch.
   1.1609 +    The path to this image is stored under the ``touch'' key. The
   1.1610 +    image itself is black and white, with black meaning a feeler
   1.1611 +    length of 0 (no feeler is present) and white meaning a feeler
   1.1612 +    length of =scale=, which is a float stored under the ``scale'' key.
  1.1613 +
   1.1614 +    #+caption: Touch does not use empty nodes to store metadata, 
  1.1615 +    #+caption: because the metadata of each solid part of a 
  1.1616 +    #+caption: creature's body is sufficient.
  1.1617 +    #+name: touch-meta-data
  1.1618 +    #+begin_listing clojure
  1.1619 +    #+BEGIN_SRC  clojure
  1.1620 +(defn tactile-sensor-profile
  1.1621 +  "Return the touch-sensor distribution image in BufferedImage format,
  1.1622 +   or nil if it does not exist."
  1.1623 +  [#^Geometry obj]
  1.1624 +  (if-let [image-path (meta-data obj "touch")]
  1.1625 +    (load-image image-path)))
  1.1626 +
  1.1627 +(defn tactile-scale
  1.1628 +  "Return the length of each feeler. Default scale is 0.01
  1.1629 +  jMonkeyEngine units."
  1.1630 +  [#^Geometry obj]
  1.1631 +  (if-let [scale (meta-data obj "scale")]
  1.1632 +    scale 0.1))
  1.1633 +    #+END_SRC
  1.1634 +    #+end_listing
  1.1635 +
  1.1636 +    Here is an example of a UV-map which specifies the position of
  1.1637 +    touch sensors along the surface of the upper segment of a fingertip.
  1.1638 +
  1.1639 +    #+caption: This is the tactile-sensor-profile for the upper segment 
  1.1640 +    #+caption: of a fingertip. It defines regions of high touch sensitivity 
  1.1641 +    #+caption: (where there are many white pixels) and regions of low 
  1.1642 +    #+caption: sensitivity (where white pixels are sparse).
  1.1643 +    #+name: fingertip-UV
  1.1644 +    #+ATTR_LaTeX: :width 13cm
  1.1645 +    [[./images/finger-UV.png]]
  1.1646 +
  1.1647 +*** Implementation Summary
  1.1648 +  
  1.1649 +    To simulate touch there are three conceptual steps. For each solid
   1.1650 +    object in the creature, you first have to get the UV image and
   1.1651 +    scale parameter which define the position and length of the feelers.
  1.1652 +    Then, you use the triangles which comprise the mesh and the UV
  1.1653 +    data stored in the mesh to determine the world-space position and
  1.1654 +    orientation of each feeler. Then once every frame, update these
  1.1655 +    positions and orientations to match the current position and
  1.1656 +    orientation of the object, and use physics collision detection to
  1.1657 +    gather tactile data.
  1.1658 +    
  1.1659 +    Extracting the meta-data has already been described. The third
  1.1660 +    step, physics collision detection, is handled in =touch-kernel=.
  1.1661 +    Translating the positions and orientations of the feelers from the
  1.1662 +    UV-map to world-space is itself a three-step process.
  1.1663 +
  1.1664 +    - Find the triangles which make up the mesh in pixel-space and in
   1.1665 +      world-space. (=triangles=, =pixel-triangles=).
  1.1666 +
  1.1667 +    - Find the coordinates of each feeler in world-space. These are
  1.1668 +      the origins of the feelers. (=feeler-origins=).
  1.1669 +    
  1.1670 +    - Calculate the normals of the triangles in world space, and add
   1.1671 +      them to each of the origins of the feelers. This gives the
   1.1672 +      coordinates of the tips of the feelers.
  1.1673 +      (=feeler-tips=).
  1.1674 +
  1.1675 +*** Triangle Math
  1.1676 +
  1.1677 +    The rigid objects which make up a creature have an underlying
  1.1678 +    =Geometry=, which is a =Mesh= plus a =Material= and other
  1.1679 +    important data involved with displaying the object.
  1.1680 +    
  1.1681 +    A =Mesh= is composed of =Triangles=, and each =Triangle= has three
  1.1682 +    vertices which have coordinates in world space and UV space.
  1.1683 +    
  1.1684 +    Here, =triangles= gets all the world-space triangles which
  1.1685 +    comprise a mesh, while =pixel-triangles= gets those same triangles
  1.1686 +    expressed in pixel coordinates (which are UV coordinates scaled to
  1.1687 +    fit the height and width of the UV image).
  1.1688 +
  1.1689 +    #+caption: Programs to extract triangles from a geometry and get 
   1.1690 +    #+caption: their vertices in both world and UV-coordinates.
  1.1691 +    #+name: get-triangles
  1.1692 +    #+begin_listing clojure
  1.1693 +    #+BEGIN_SRC clojure
  1.1694 +(defn triangle
  1.1695 +  "Get the triangle specified by triangle-index from the mesh."
  1.1696 +  [#^Geometry geo triangle-index]
  1.1697 +  (triangle-seq
  1.1698 +   (let [scratch (Triangle.)]
  1.1699 +     (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))
  1.1700 +
  1.1701 +(defn triangles
  1.1702 +  "Return a sequence of all the Triangles which comprise a given
  1.1703 +   Geometry." 
  1.1704 +  [#^Geometry geo]
  1.1705 +  (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))
  1.1706 +
  1.1707 +(defn triangle-vertex-indices
  1.1708 +  "Get the triangle vertex indices of a given triangle from a given
  1.1709 +   mesh."
  1.1710 +  [#^Mesh mesh triangle-index]
  1.1711 +  (let [indices (int-array 3)]
  1.1712 +    (.getTriangle mesh triangle-index indices)
  1.1713 +    (vec indices)))
  1.1714 +
   1.1715 +(defn vertex-UV-coord
  1.1716 +  "Get the UV-coordinates of the vertex named by vertex-index"
  1.1717 +  [#^Mesh mesh vertex-index]
  1.1718 +  (let [UV-buffer
  1.1719 +        (.getData
  1.1720 +         (.getBuffer
  1.1721 +          mesh
  1.1722 +          VertexBuffer$Type/TexCoord))]
  1.1723 +    [(.get UV-buffer (* vertex-index 2))
  1.1724 +     (.get UV-buffer (+ 1 (* vertex-index 2)))]))
  1.1725 +
  1.1726 +(defn pixel-triangle [#^Geometry geo image index]
  1.1727 +  (let [mesh (.getMesh geo)
  1.1728 +        width (.getWidth image)
  1.1729 +        height (.getHeight image)]
  1.1730 +    (vec (map (fn [[u v]] (vector (* width u) (* height v)))
  1.1731 +              (map (partial vertex-UV-coord mesh)
  1.1732 +                   (triangle-vertex-indices mesh index))))))
  1.1733 +
  1.1734 +(defn pixel-triangles 
  1.1735 +  "The pixel-space triangles of the Geometry, in the same order as
  1.1736 +   (triangles geo)"
  1.1737 +  [#^Geometry geo image]
  1.1738 +  (let [height (.getHeight image)
  1.1739 +        width (.getWidth image)]
  1.1740 +    (map (partial pixel-triangle geo image)
  1.1741 +         (range (.getTriangleCount (.getMesh geo))))))
  1.1742 +    #+END_SRC
  1.1743 +    #+end_listing
  1.1744 +    
  1.1745 +*** The Affine Transform from one Triangle to Another
  1.1746 +
  1.1747 +    =pixel-triangles= gives us the mesh triangles expressed in pixel
  1.1748 +    coordinates and =triangles= gives us the mesh triangles expressed
  1.1749 +    in world coordinates. The tactile-sensor-profile gives the
  1.1750 +    position of each feeler in pixel-space. In order to convert
  1.1751 +    pixel-space coordinates into world-space coordinates we need
  1.1752 +    something that takes coordinates on the surface of one triangle
  1.1753 +    and gives the corresponding coordinates on the surface of another
  1.1754 +    triangle.
  1.1755 +    
   1.1756 +    Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which means any triangle can be transformed
   1.1757 +    into any other by a combination of translation, rotation, scaling,
   1.1758 +    and shearing. The affine transformation from one triangle to another
   1.1759 +    is readily computable if the triangle is expressed in terms of a
   1.1760 +    $4 \times 4$ matrix.
  1.1761 +
  1.1762 +    #+BEGIN_LaTeX
  1.1763 +    $$
  1.1764 +    \begin{bmatrix}
  1.1765 +    x_1 & x_2 & x_3 & n_x \\
  1.1766 +    y_1 & y_2 & y_3 & n_y \\ 
  1.1767 +    z_1 & z_2 & z_3 & n_z \\
  1.1768 +    1 & 1 & 1 & 1 
  1.1769 +    \end{bmatrix}
  1.1770 +    $$
  1.1771 +    #+END_LaTeX
  1.1772 +    
  1.1773 +    Here, the first three columns of the matrix are the vertices of
  1.1774 +    the triangle. The last column is the right-handed unit normal of
  1.1775 +    the triangle.
  1.1776 +    
  1.1777 +    With two triangles $T_{1}$ and $T_{2}$ each expressed as a
  1.1778 +    matrix like above, the affine transform from $T_{1}$ to $T_{2}$
  1.1779 +    is $T_{2}T_{1}^{-1}$.
  1.1780 +    
   1.1781 +    The Clojure code below recapitulates the formulas above, using
  1.1782 +    jMonkeyEngine's =Matrix4f= objects, which can describe any affine
  1.1783 +    transformation.
  1.1784 +
   1.1785 +    #+caption: Program to interpret triangles as affine transforms.
  1.1786 +    #+name: triangle-affine
  1.1787 +    #+begin_listing clojure
  1.1788 +    #+BEGIN_SRC clojure
  1.1789 +(defn triangle->matrix4f
  1.1790 +  "Converts the triangle into a 4x4 matrix: The first three columns
  1.1791 +   contain the vertices of the triangle; the last contains the unit
  1.1792 +   normal of the triangle. The bottom row is filled with 1s."
  1.1793 +  [#^Triangle t]
  1.1794 +  (let [mat (Matrix4f.)
  1.1795 +        [vert-1 vert-2 vert-3]
  1.1796 +        (mapv #(.get t %) (range 3))
  1.1797 +        unit-normal (do (.calculateNormal t)(.getNormal t))
  1.1798 +        vertices [vert-1 vert-2 vert-3 unit-normal]]
  1.1799 +    (dorun 
  1.1800 +     (for [row (range 4) col (range 3)]
  1.1801 +       (do
  1.1802 +         (.set mat col row (.get (vertices row) col))
  1.1803 +         (.set mat 3 row 1)))) mat))
  1.1804 +
  1.1805 +(defn triangles->affine-transform
  1.1806 +  "Returns the affine transformation that converts each vertex in the
  1.1807 +   first triangle into the corresponding vertex in the second
  1.1808 +   triangle."
  1.1809 +  [#^Triangle tri-1 #^Triangle tri-2]
  1.1810 +  (.mult 
  1.1811 +   (triangle->matrix4f tri-2)
  1.1812 +   (.invert (triangle->matrix4f tri-1))))
  1.1813 +    #+END_SRC
  1.1814 +    #+end_listing
  1.1815 +
  1.1816 +*** Triangle Boundaries
  1.1817 +  
   1.1818 +    For efficiency's sake I will divide the tactile-profile image into
   1.1819 +    small rectangles which bound each pixel-triangle, then extract the
   1.1820 +    points which lie inside the triangle and map them to 3D-space using
   1.1821 +    =triangles->affine-transform= above. To do this I need a function,
   1.1822 +    =convex-bounds=, which finds the smallest box containing a 2D
   1.1823 +    triangle.
   1.1824 +
   1.1825 +    =inside-triangle?= determines whether a point is inside a triangle
   1.1826 +    in 2D pixel-space.
  1.1827 +
   1.1828 +    #+caption: Program to efficiently determine point inclusion 
  1.1829 +    #+caption: in a triangle.
  1.1830 +    #+name: in-triangle
  1.1831 +    #+begin_listing clojure
  1.1832 +    #+BEGIN_SRC clojure
  1.1833 +(defn convex-bounds
  1.1834 +  "Returns the smallest square containing the given vertices, as a
  1.1835 +   vector of integers [left top width height]."
  1.1836 +  [verts]
  1.1837 +  (let [xs (map first verts)
  1.1838 +        ys (map second verts)
  1.1839 +        x0 (Math/floor (apply min xs))
  1.1840 +        y0 (Math/floor (apply min ys))
  1.1841 +        x1 (Math/ceil (apply max xs))
  1.1842 +        y1 (Math/ceil (apply max ys))]
  1.1843 +    [x0 y0 (- x1 x0) (- y1 y0)]))
  1.1844 +
  1.1845 +(defn same-side?
  1.1846 +  "Given the points p1 and p2 and the reference point ref, is point p
  1.1847 +  on the same side of the line that goes through p1 and p2 as ref is?" 
  1.1848 +  [p1 p2 ref p]
  1.1849 +  (<=
  1.1850 +   0
  1.1851 +   (.dot 
  1.1852 +    (.cross (.subtract p2 p1) (.subtract p p1))
  1.1853 +    (.cross (.subtract p2 p1) (.subtract ref p1)))))
  1.1854 +
  1.1855 +(defn inside-triangle?
  1.1856 +  "Is the point inside the triangle?"
  1.1857 +  {:author "Dylan Holmes"}
  1.1858 +  [#^Triangle tri #^Vector3f p]
  1.1859 +  (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]
  1.1860 +    (and
  1.1861 +     (same-side? vert-1 vert-2 vert-3 p)
  1.1862 +     (same-side? vert-2 vert-3 vert-1 p)
  1.1863 +     (same-side? vert-3 vert-1 vert-2 p))))
  1.1864 +    #+END_SRC
  1.1865 +    #+end_listing
  1.1866 +
  1.1867 +*** Feeler Coordinates
  1.1868 +
  1.1869 +    The triangle-related functions above make short work of
  1.1870 +    calculating the positions and orientations of each feeler in
  1.1871 +    world-space.
  1.1872 +
   1.1873 +    #+caption: Program to get the coordinates of ``feelers'' in 
  1.1874 +    #+caption: both world and UV-coordinates.
  1.1875 +    #+name: feeler-coordinates
  1.1876 +    #+begin_listing clojure
  1.1877 +    #+BEGIN_SRC clojure
  1.1878 +(defn feeler-pixel-coords
  1.1879 + "Returns the coordinates of the feelers in pixel space in lists, one
  1.1880 +  list for each triangle, ordered in the same way as (triangles) and
  1.1881 +  (pixel-triangles)."
  1.1882 + [#^Geometry geo image]
  1.1883 + (map 
  1.1884 +  (fn [pixel-triangle]
  1.1885 +    (filter
  1.1886 +     (fn [coord]
  1.1887 +       (inside-triangle? (->triangle pixel-triangle)
  1.1888 +                         (->vector3f coord)))
  1.1889 +       (white-coordinates image (convex-bounds pixel-triangle))))
  1.1890 +  (pixel-triangles geo image)))
  1.1891 +
  1.1892 +(defn feeler-world-coords 
  1.1893 + "Returns the coordinates of the feelers in world space in lists, one
  1.1894 +  list for each triangle, ordered in the same way as (triangles) and
  1.1895 +  (pixel-triangles)."
  1.1896 + [#^Geometry geo image]
  1.1897 + (let [transforms
  1.1898 +       (map #(triangles->affine-transform
  1.1899 +              (->triangle %1) (->triangle %2))
  1.1900 +            (pixel-triangles geo image)
  1.1901 +            (triangles geo))]
  1.1902 +   (map (fn [transform coords]
  1.1903 +          (map #(.mult transform (->vector3f %)) coords))
  1.1904 +        transforms (feeler-pixel-coords geo image))))
  1.1905 +    #+END_SRC
  1.1906 +    #+end_listing
  1.1907 +
  1.1908 +    #+caption: Program to get the position of the base and tip of 
  1.1909 +    #+caption: each ``feeler''
  1.1910 +    #+name: feeler-tips
  1.1911 +    #+begin_listing clojure
  1.1912 +    #+BEGIN_SRC clojure
  1.1913 +(defn feeler-origins
  1.1914 +  "The world space coordinates of the root of each feeler."
  1.1915 +  [#^Geometry geo image]
  1.1916 +   (reduce concat (feeler-world-coords geo image)))
  1.1917 +
  1.1918 +(defn feeler-tips
  1.1919 +  "The world space coordinates of the tip of each feeler."
  1.1920 +  [#^Geometry geo image]
  1.1921 +  (let [world-coords (feeler-world-coords geo image)
  1.1922 +        normals
  1.1923 +        (map
  1.1924 +         (fn [triangle]
  1.1925 +           (.calculateNormal triangle)
  1.1926 +           (.clone (.getNormal triangle)))
  1.1927 +         (map ->triangle (triangles geo)))]
  1.1928 +
  1.1929 +    (mapcat (fn [origins normal]
  1.1930 +              (map #(.add % normal) origins))
  1.1931 +            world-coords normals)))
  1.1932 +
  1.1933 +(defn touch-topology
  1.1934 +  [#^Geometry geo image]
  1.1935 +  (collapse (reduce concat (feeler-pixel-coords geo image))))
  1.1936 +    #+END_SRC
  1.1937 +    #+end_listing
  1.1938 +
  1.1939 +*** Simulated Touch
  1.1940 +
   1.1941 +    Now that the functions to construct feelers are complete,
   1.1942 +    =touch-kernel= generates functions which, when called from within
   1.1943 +    a simulation, perform the necessary physics collisions to collect
   1.1944 +    tactile data, and =touch!= applies =touch-kernel= to every
   1.1945 +    =Geometry= in the creature.
  1.1946 +
  1.1947 +    #+caption: Efficient program to transform a ray from 
  1.1948 +    #+caption: one position to another.
  1.1949 +    #+name: set-ray
  1.1950 +    #+begin_listing clojure
  1.1951 +    #+BEGIN_SRC clojure
  1.1952 +(defn set-ray [#^Ray ray #^Matrix4f transform
  1.1953 +               #^Vector3f origin #^Vector3f tip]
  1.1954 +  ;; Doing everything locally reduces garbage collection by enough to
  1.1955 +  ;; be worth it.
  1.1956 +  (.mult transform origin (.getOrigin ray))
  1.1957 +  (.mult transform tip (.getDirection ray))
  1.1958 +  (.subtractLocal (.getDirection ray) (.getOrigin ray))
  1.1959 +  (.normalizeLocal (.getDirection ray)))
  1.1960 +    #+END_SRC
  1.1961 +    #+end_listing
  1.1962 +
   1.1963 +    #+caption: This is the core of touch in =CORTEX=: each feeler 
  1.1964 +    #+caption: follows the object it is bound to, reporting any 
  1.1965 +    #+caption: collisions that may happen.
  1.1966 +    #+name: touch-kernel
  1.1967 +    #+begin_listing clojure
  1.1968 +    #+BEGIN_SRC clojure
  1.1969 +(defn touch-kernel
  1.1970 +  "Constructs a function which will return tactile sensory data from
  1.1971 +   'geo when called from inside a running simulation"
  1.1972 +  [#^Geometry geo]
  1.1973 +  (if-let
  1.1974 +      [profile (tactile-sensor-profile geo)]
  1.1975 +    (let [ray-reference-origins (feeler-origins geo profile)
  1.1976 +          ray-reference-tips (feeler-tips geo profile)
  1.1977 +          ray-length (tactile-scale geo)
  1.1978 +          current-rays (map (fn [_] (Ray.)) ray-reference-origins)
  1.1979 +          topology (touch-topology geo profile)
  1.1980 +          correction (float (* ray-length -0.2))]
  1.1981 +      ;; slight tolerance for very close collisions.
  1.1982 +      (dorun
  1.1983 +       (map (fn [origin tip]
  1.1984 +              (.addLocal origin (.mult (.subtract tip origin)
  1.1985 +                                       correction)))
  1.1986 +            ray-reference-origins ray-reference-tips))
  1.1987 +      (dorun (map #(.setLimit % ray-length) current-rays))
  1.1988 +      (fn [node]
  1.1989 +        (let [transform (.getWorldMatrix geo)]
  1.1990 +          (dorun
  1.1991 +           (map (fn [ray ref-origin ref-tip]
  1.1992 +                  (set-ray ray transform ref-origin ref-tip))
  1.1993 +                current-rays ray-reference-origins
  1.1994 +                ray-reference-tips))
  1.1995 +          (vector
  1.1996 +           topology
  1.1997 +           (vec
  1.1998 +            (for [ray current-rays]
  1.1999 +              (do
  1.2000 +                (let [results (CollisionResults.)]
  1.2001 +                  (.collideWith node ray results)
  1.2002 +                  (let [touch-objects
  1.2003 +                        (filter #(not (= geo (.getGeometry %)))
  1.2004 +                                results)
  1.2005 +                        limit (.getLimit ray)]
  1.2006 +                    [(if (empty? touch-objects)
  1.2007 +                       limit
  1.2008 +                       (let [response
  1.2009 +                             (apply min (map #(.getDistance %)
  1.2010 +                                             touch-objects))]
  1.2011 +                         (FastMath/clamp
  1.2012 +                          (float 
  1.2013 +                           (if (> response limit) (float 0.0)
  1.2014 +                               (+ response correction)))
  1.2015 +                           (float 0.0)
  1.2016 +                           limit)))
  1.2017 +                     limit])))))))))))
  1.2018 +    #+END_SRC
  1.2019 +    #+end_listing
  1.2020 +
  1.2021 +    Armed with the =touch!= function, =CORTEX= becomes capable of
  1.2022 +    giving creatures a sense of touch. A simple test is to create a
   1.2023 +    cube that is outfitted with a uniform distribution of touch
  1.2024 +    sensors. It can feel the ground and any balls that it touches.
  1.2025 +
  1.2026 +    #+caption: =CORTEX= interface for creating touch in a simulated
  1.2027 +    #+caption: creature.
  1.2028 +    #+name: touch
  1.2029 +    #+begin_listing clojure
  1.2030 +    #+BEGIN_SRC clojure
  1.2031 +(defn touch! 
  1.2032 +  "Endow the creature with the sense of touch. Returns a sequence of
  1.2033 +   functions, one for each body part with a tactile-sensor-profile,
  1.2034 +   each of which when called returns sensory data for that body part."
  1.2035 +  [#^Node creature]
  1.2036 +  (filter
  1.2037 +   (comp not nil?)
  1.2038 +   (map touch-kernel
  1.2039 +        (filter #(isa? (class %) Geometry)
  1.2040 +                (node-seq creature)))))
  1.2041 +    #+END_SRC
  1.2042 +    #+end_listing
  1.2043 +    
  1.2044 +    The tactile-sensor-profile image for the touch cube is a simple
   1.2045 +    cross with a uniform distribution of touch sensors:
  1.2046 +
  1.2047 +    #+caption: The touch profile for the touch-cube. Each pure white 
  1.2048 +    #+caption: pixel defines a touch sensitive feeler.
  1.2049 +    #+name: touch-cube-uv-map
  1.2050 +    #+ATTR_LaTeX: :width 7cm
  1.2051 +    [[./images/touch-profile.png]]
  1.2052 +
   1.2053 +    #+caption: The touch cube reacts to cannonballs. The black, red, 
   1.2054 +    #+caption: and white cross on the right is a visual display of 
   1.2055 +    #+caption: the creature's touch. White means that it is feeling 
   1.2056 +    #+caption: something strongly, black is not feeling anything,
   1.2057 +    #+caption: and gray is in-between. The cube can feel both the 
   1.2058 +    #+caption: floor and the ball. Notice that when the ball causes 
   1.2059 +    #+caption: the cube to tip, the bottom face can still feel 
   1.2060 +    #+caption: part of the ground.
   1.2061 +    #+name: touch-cube-response
  1.2062 +    #+ATTR_LaTeX: :width 15cm
  1.2063 +    [[./images/touch-cube.png]]
  1.2064 +
  1.2065 +** Proprioception is the sense that makes everything ``real''
  1.2066 +
  1.2067 +   Close your eyes, and touch your nose with your right index finger.
  1.2068 +   How did you do it? You could not see your hand, and neither your
  1.2069 +   hand nor your nose could use the sense of touch to guide the path
   1.2070 +   of your hand. There are no sound cues, and taste and smell
   1.2071 +   certainly don't provide any help. You know where your hand is
   1.2072 +   without your other senses because of proprioception.
  1.2073 +   
   1.2074 +   Humans can sometimes lose this sense through viral infections or
   1.2075 +   damage to the spinal cord or brain, and when they do, they lose
   1.2076 +   the ability to control their own bodies without looking directly at
   1.2077 +   the parts they want to move. In [[http://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat][The Man Who Mistook His Wife for a
   1.2078 +   Hat]], a woman named Christina loses this sense and has to learn how
  1.2079 +   to move by carefully watching her arms and legs. She describes
  1.2080 +   proprioception as the "eyes of the body, the way the body sees
  1.2081 +   itself".
  1.2082 +   
  1.2083 +   Proprioception in humans is mediated by [[http://en.wikipedia.org/wiki/Articular_capsule][joint capsules]], [[http://en.wikipedia.org/wiki/Muscle_spindle][muscle
  1.2084 +   spindles]], and the [[http://en.wikipedia.org/wiki/Golgi_tendon_organ][Golgi tendon organs]]. These measure the relative
  1.2085 +   positions of each body part by monitoring muscle strain and length.
  1.2086 +   
  1.2087 +   It's clear that this is a vital sense for fluid, graceful movement.
  1.2088 +   It's also particularly easy to implement in jMonkeyEngine.
  1.2089 +   
  1.2090 +   My simulated proprioception calculates the relative angles of each
  1.2091 +   joint from the rest position defined in the blender file. This
  1.2092 +   simulates the muscle-spindles and joint capsules. I will deal with
  1.2093 +   Golgi tendon organs, which calculate muscle strain, in the next
  1.2094 +   section.
  1.2095 +
  1.2096 +*** Helper functions
  1.2097 +
  1.2098 +    =absolute-angle= calculates the angle between two vectors,
  1.2099 +    relative to a third axis vector. This angle is the number of
  1.2100 +    radians you have to move counterclockwise around the axis vector
  1.2101 +    to get from the first to the second vector. It is not commutative
  1.2102 +    like a normal dot-product angle is.
  1.2103 +
  1.2104 +    The purpose of these functions is to build a system of angle
   1.2105 +    measurement that is biologically plausible.
  1.2106 +
  1.2107 +    #+caption: Program to measure angles around an axis vector
  1.2108 +    #+name: helpers
  1.2109 +    #+begin_listing clojure
  1.2110 +    #+BEGIN_SRC clojure
  1.2111 +(defn right-handed?
  1.2112 +  "true iff the three vectors form a right handed coordinate
  1.2113 +   system. The three vectors do not have to be normalized or
  1.2114 +   orthogonal."
  1.2115 +  [vec1 vec2 vec3]
  1.2116 +  (pos? (.dot (.cross vec1 vec2) vec3)))
  1.2117 +
  1.2118 +(defn absolute-angle
  1.2119 +  "The angle between 'vec1 and 'vec2 around 'axis. In the range 
  1.2120 +   [0 (* 2 Math/PI)]."
  1.2121 +  [vec1 vec2 axis]
  1.2122 +  (let [angle (.angleBetween vec1 vec2)]
  1.2123 +    (if (right-handed? vec1 vec2 axis)
  1.2124 +      angle (- (* 2 Math/PI) angle))))
  1.2125 +    #+END_SRC
  1.2126 +    #+end_listing
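          +
          +    To make the asymmetry concrete, here is a short REPL sketch (not
          +    part of =CORTEX= itself) using jMonkeyEngine's =Vector3f= unit
          +    vectors. Swapping the first two arguments changes the result from
          +    \pi/2 to 3\pi/2, which a plain dot-product angle never would:
          +
          +    #+begin_src clojure
          +;; counterclockwise from X to Y around Z is a quarter turn
          +(absolute-angle Vector3f/UNIT_X Vector3f/UNIT_Y Vector3f/UNIT_Z)
          +;; => ~1.5707964   (pi/2)
          +
          +;; going from Y back to X around the same axis is three quarters
          +(absolute-angle Vector3f/UNIT_Y Vector3f/UNIT_X Vector3f/UNIT_Z)
          +;; => ~4.7123890   (3*pi/2)
          +    #+end_src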
  1.2127 +
  1.2128 +*** Proprioception Kernel
  1.2129 +    
  1.2130 +    Given a joint, =proprioception-kernel= produces a function that
  1.2131 +    calculates the Euler angles between the objects the joint
  1.2132 +    connects. The only tricky part here is making the angles relative
  1.2133 +    to the joint's initial ``straightness''.
  1.2134 +
  1.2135 +    #+caption: Program to return biologically reasonable proprioceptive
  1.2136 +    #+caption: data for each joint.
  1.2137 +    #+name: proprioception
  1.2138 +    #+begin_listing clojure
  1.2139 +    #+BEGIN_SRC clojure
  1.2140 +(defn proprioception-kernel
  1.2141 +  "Returns a function which returns proprioceptive sensory data when
  1.2142 +  called inside a running simulation."
  1.2143 +  [#^Node parts #^Node joint]
  1.2144 +  (let [[obj-a obj-b] (joint-targets parts joint)
  1.2145 +        joint-rot (.getWorldRotation joint)
  1.2146 +        x0 (.mult joint-rot Vector3f/UNIT_X)
  1.2147 +        y0 (.mult joint-rot Vector3f/UNIT_Y)
  1.2148 +        z0 (.mult joint-rot Vector3f/UNIT_Z)]
  1.2149 +    (fn []
  1.2150 +      (let [rot-a (.clone (.getWorldRotation obj-a))
  1.2151 +            rot-b (.clone (.getWorldRotation obj-b))
  1.2152 +            x (.mult rot-a x0)
  1.2153 +            y (.mult rot-a y0)
  1.2154 +            z (.mult rot-a z0)
  1.2155 +
  1.2156 +            X (.mult rot-b x0)
  1.2157 +            Y (.mult rot-b y0)
  1.2158 +            Z (.mult rot-b z0)
  1.2159 +            heading  (Math/atan2 (.dot X z) (.dot X x))
  1.2160 +            pitch  (Math/atan2 (.dot X y) (.dot X x))
  1.2161 +
  1.2162 +            ;; rotate x-vector back to origin
  1.2163 +            reverse
  1.2164 +            (doto (Quaternion.)
  1.2165 +              (.fromAngleAxis
  1.2166 +               (.angleBetween X x)
  1.2167 +               (let [cross (.normalize (.cross X x))]
  1.2168 +                 (if (= 0 (.length cross)) y cross))))
  1.2169 +            roll (absolute-angle (.mult reverse Y) y x)]
  1.2170 +        [heading pitch roll]))))
  1.2171 +
  1.2172 +(defn proprioception!
  1.2173 +  "Endow the creature with the sense of proprioception. Returns a
  1.2174 +   sequence of functions, one for each child of the \"joints\" node in
  1.2175 +   the creature, which each report proprioceptive information about
  1.2176 +   that joint."
  1.2177 +  [#^Node creature]
  1.2178 +  ;; extract the body's joints
  1.2179 +  (let [senses (map (partial proprioception-kernel creature)
  1.2180 +                    (joints creature))]
  1.2181 +    (fn []
  1.2182 +      (map #(%) senses))))
  1.2183 +    #+END_SRC
  1.2184 +    #+end_listing
  1.2185 +
  1.2186 +    =proprioception!= maps =proprioception-kernel= across all the
  1.2187 +    joints of the creature. It uses the same list of joints that
  1.2188 +    =joints= uses. Proprioception is the easiest sense to implement in
  1.2189 +    =CORTEX=, and it will play a crucial role when efficiently
  1.2190 +    implementing empathy.
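          +
          +    As a usage sketch (the =creature= var and the numbers below are
          +    hypothetical, for illustration only), the sense is sampled by
          +    calling the function returned by =proprioception!= once per
          +    simulation step:
          +
          +    #+begin_src clojure
          +(def proprio (proprioception! creature))
          +;; inside a running simulation, one sample covers every joint:
          +(first (proprio))
          +;; => [0.02 -0.01 1.57]   ; hypothetical [heading pitch roll] in radians
          +    #+end_src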
  1.2191 +
  1.2192 +    #+caption: In the upper right corner, the three proprioceptive
  1.2193 +    #+caption: angle measurements are displayed. Red is yaw, Green is 
  1.2194 +    #+caption: pitch, and White is roll.
  1.2195 +    #+name: proprio
  1.2196 +    #+ATTR_LaTeX: :width 11cm
  1.2197 +    [[./images/proprio.png]]
  1.2198 +
  1.2199 +** Muscles are both effectors and sensors
  1.2200 +
  1.2201 +   Surprisingly enough, terrestrial creatures only move by using
  1.2202 +   torque applied about their joints. There's not a single straight
  1.2203 +   line of force in the human body at all! (A straight line of force
  1.2204 +   would correspond to some sort of jet or rocket propulsion.)
  1.2205 +   
  1.2206 +   In humans, muscles are composed of muscle fibers which can contract
  1.2207 +   to exert force. The muscle fibers which compose a muscle are
  1.2208 +   partitioned into discrete groups which are each controlled by a
  1.2209 +   single alpha motor neuron. A single alpha motor neuron might
  1.2210 +   control as few as three or as many as one thousand muscle
  1.2211 +   fibers. When the alpha motor neuron is engaged by the spinal cord,
  1.2212 +   it activates all of the muscle fibers to which it is attached. The
  1.2213 +   spinal cord generally engages the alpha motor neurons which control
  1.2214 +   few muscle fibers before the motor neurons which control many
  1.2215 +   muscle fibers. This recruitment strategy allows for precise
  1.2216 +   movements at low strength. The collection of all motor neurons that
  1.2217 +   control a muscle is called the motor pool. The brain essentially
  1.2218 +   says "activate 30% of the motor pool" and the spinal cord recruits
  1.2219 +   motor neurons until 30% are activated. Since the distribution of
  1.2220 +   power among motor neurons is unequal and recruitment goes from
  1.2221 +   weakest to strongest, the first 30% of the motor pool might be 5%
  1.2222 +   of the strength of the muscle.
  1.2223 +   
  1.2224 +   My simulated muscles follow a similar design: Each muscle is
  1.2225 +   defined by a 1-D array of numbers (the "motor pool"). Each entry in
  1.2226 +   the array represents a motor neuron which controls a number of
  1.2227 +   muscle fibers equal to the value of the entry. Each muscle has a
  1.2228 +   scalar strength factor which determines the total force the muscle
  1.2229 +   can exert when all motor neurons are activated. The effector
  1.2230 +   function for a muscle takes a number to index into the motor pool,
  1.2231 +   and then "activates" all the motor neurons whose index is lower or
  1.2232 +   equal to the number. Each motor-neuron will apply force in
  1.2233 +   proportion to its value in the array. Lower values cause less
  1.2234 +   force. The lower values can be put at the "beginning" of the 1-D
  1.2235 +   array to simulate the layout of actual human muscles, which are
  1.2236 +   capable of more precise movements when exerting less force. Or, the
  1.2237 +   motor pool can simulate more exotic recruitment strategies which do
  1.2238 +   not correspond to human muscles.
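          +
          +   As a toy illustration of this recruitment arithmetic (the pool
          +   below is made up and does not come from any =CORTEX= model),
          +   activating the first 30% of a pool that is weighted toward weak
          +   fibers yields only a small fraction of the total force:
          +
          +   #+begin_src clojure
          +(def toy-pool [1 1 1 2 3 5 8 13 21 34])   ; weakest motor neurons first
          +(def recruited (take 3 toy-pool))          ; "activate 30% of the pool"
          +(float (/ (reduce + recruited) (reduce + toy-pool)))
          +;; => ~0.034  -- roughly 3% of the muscle's total strength
          +   #+end_src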
  1.2239 +   
  1.2240 +   This 1D array is defined in an image file for ease of
  1.2241 +   creation/visualization. Here is an example muscle profile image.
  1.2242 +
  1.2243 +   #+caption: A muscle profile image that describes the strengths
  1.2244 +   #+caption: of each motor neuron in a muscle. White is weakest 
  1.2245 +   #+caption: and dark red is strongest. This particular pattern 
  1.2246 +   #+caption: has weaker motor neurons at the beginning, just 
  1.2247 +   #+caption: like human muscle.
  1.2248 +   #+name: muscle-recruit
  1.2249 +   #+ATTR_LaTeX: :width 7cm
  1.2250 +   [[./images/basic-muscle.png]]
  1.2251 +
  1.2252 +*** Muscle meta-data
  1.2253 +
  1.2254 +    #+caption: Program to deal with loading muscle data from a blender
  1.2255 +    #+caption: file's metadata.
  1.2256 +    #+name: motor-pool
  1.2257 +    #+begin_listing clojure
  1.2258 +    #+BEGIN_SRC clojure
  1.2259 +(defn muscle-profile-image
  1.2260 +  "Get the muscle-profile image from the node's blender meta-data."
  1.2261 +  [#^Node muscle]
  1.2262 +  (if-let [image (meta-data muscle "muscle")]
  1.2263 +    (load-image image)))
  1.2264 +
  1.2265 +(defn muscle-strength
  1.2266 +  "Return the strength of this muscle, or 1 if it is not defined."
  1.2267 +  [#^Node muscle]
  1.2268 +  (if-let [strength (meta-data muscle "strength")]
  1.2269 +    strength 1))
  1.2270 +
  1.2271 +(defn motor-pool
  1.2272 +  "Return a vector where each entry is the strength of the \"motor
  1.2273 +   neuron\" at that part in the muscle."
  1.2274 +  [#^Node muscle]
  1.2275 +  (let [profile (muscle-profile-image muscle)]
  1.2276 +    (vec
  1.2277 +     (let [width (.getWidth profile)]
  1.2278 +       (for [x (range width)]
  1.2279 +       (- 255
  1.2280 +          (bit-and
  1.2281 +           0x0000FF
  1.2282 +           (.getRGB profile x 0))))))))
  1.2283 +    #+END_SRC
  1.2284 +    #+end_listing
  1.2285 +
  1.2286 +    Of note here is =motor-pool= which interprets the muscle-profile
  1.2287 +    image in a way that allows me to use gradients between white and
  1.2288 +    red, instead of shades of gray as I've been using for all the
  1.2289 +    other senses. This is purely an aesthetic touch.
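          +
          +    The following sketch (a standalone helper, not part of =CORTEX=)
          +    spells out the arithmetic =motor-pool= applies to each pixel: only
          +    the blue channel matters, so white maps to the weakest possible
          +    motor neuron and pure red to the strongest:
          +
          +    #+begin_src clojure
          +(defn pixel->strength
          +  "Strength of a motor neuron encoded as a packed RGB pixel value."
          +  [rgb]
          +  (- 255 (bit-and 0x0000FF rgb)))
          +
          +(pixel->strength 0xFFFFFF)  ;; => 0    white: weakest
          +(pixel->strength 0xFF0000)  ;; => 255  pure red: strongest
          +(pixel->strength 0xFF0080)  ;; => 127  a middling shade of red
          +    #+end_src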
  1.2290 +
  1.2291 +*** Creating muscles
  1.2292 +
  1.2293 +    #+caption: This is the core movement function in =CORTEX=, which
  1.2294 +    #+caption: implements muscles that report on their activation.
  1.2295 +    #+name: muscle-kernel
  1.2296 +    #+begin_listing clojure
  1.2297 +    #+BEGIN_SRC clojure
  1.2298 +(defn movement-kernel
  1.2299 +  "Returns a function which when called with a integer value inside a
  1.2300 +   running simulation will cause movement in the creature according
  1.2301 +   to the muscle's position and strength profile. Each function
  1.2302 +   returns the amount of force applied / max force."
  1.2303 +  [#^Node creature #^Node muscle]
  1.2304 +  (let [target (closest-node creature muscle)
  1.2305 +        axis
  1.2306 +        (.mult (.getWorldRotation muscle) Vector3f/UNIT_Y)
  1.2307 +        strength (muscle-strength muscle)
  1.2308 +        
  1.2309 +        pool (motor-pool muscle)
  1.2310 +        pool-integral (reductions + pool)
  1.2311 +        forces
  1.2312 +        (vec (map  #(float (* strength (/ % (last pool-integral))))
  1.2313 +              pool-integral))
  1.2314 +        control (.getControl target RigidBodyControl)]
  1.2315 +    ;;(println-repl (.getName target) axis)
  1.2316 +    (fn [n]
  1.2317 +      (let [pool-index (max 0 (min n (dec (count pool))))
  1.2318 +            force (forces pool-index)]
  1.2319 +        (.applyTorque control (.mult axis force))
  1.2320 +        (float (/ force strength))))))
  1.2321 +
  1.2322 +(defn movement!
  1.2323 +  "Endow the creature with the power of movement. Returns a sequence
  1.2324 +   of functions, each of which accept an integer value and will
  1.2325 +   activate their corresponding muscle."
  1.2326 +  [#^Node creature]
  1.2327 +    (for [muscle (muscles creature)]
  1.2328 +      (movement-kernel creature muscle)))
  1.2329 +    #+END_SRC
  1.2330 +    #+end_listing
  1.2331 +
  1.2332 +
  1.2333 +    =movement-kernel= creates a function that moves the physical object
  1.2334 +    nearest to the muscle node by applying torque to it. The muscle exerts
  1.2335 +    a rotational force dependent on its orientation to the object in the blender
  1.2336 +    file. The function returned by =movement-kernel= is also a sense
  1.2337 +    function: it returns the percent of the total muscle strength that
  1.2338 +    is currently being employed. This is analogous to muscle tension
  1.2339 +    in humans and completes the sense of proprioception begun in the
  1.2340 +    last section.
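          +
          +    A hypothetical usage sketch (the =creature= var and the returned
          +    number are illustrative only):
          +
          +    #+begin_src clojure
          +(def muscle-fns (movement! creature))   ; one control fn per muscle node
          +;; each frame, recruit the first muscle up to motor-pool index 20:
          +((first muscle-fns) 20)
          +;; => 0.13   ; fraction of that muscle's maximum force being applied
          +    #+end_src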
  1.2341 +    
  1.2342 +** =CORTEX= brings complex creatures to life!
  1.2343 +   
  1.2344 +   The ultimate test of =CORTEX= is to create a creature with the full
  1.2345 +   gamut of senses and put it through its paces. 
  1.2346 +
  1.2347 +   With all senses enabled, my right hand model looks like an
  1.2348 +   intricate marionette hand with several strings for each finger:
  1.2349 +
  1.2350 +   #+caption: View of the hand model with all sense nodes. You can see 
  1.2351 +   #+caption: the joint, muscle, ear, and eye nodes here.
  1.2352 +   #+name: hand-nodes-1
  1.2353 +   #+ATTR_LaTeX: :width 11cm
  1.2354 +   [[./images/hand-with-all-senses2.png]]
  1.2355 +
  1.2356 +   #+caption: An alternate view of the hand.
  1.2357 +   #+name: hand-nodes-2
  1.2358 +   #+ATTR_LaTeX: :width 15cm
  1.2359 +   [[./images/hand-with-all-senses3.png]]
  1.2360 +
  1.2361 +   With the hand fully rigged with senses, I can run it through a test
  1.2362 +   that exercises everything. 
  1.2363 +
  1.2364 +   #+caption: A full test of the hand with all senses. Note especially 
  1.2365 +   #+caption: the interactions the hand has with itself: it feels 
  1.2366 +   #+caption: its own palm and fingers, and when it curls its fingers, 
  1.2367 +   #+caption: it sees them with its eye (which is located in the center
  1.2368 +   #+caption: of the palm). The red block appears with a pure tone sound.
  1.2369 +   #+caption: The hand then uses its muscles to launch the cube!
  1.2370 +   #+name: integration
  1.2371 +   #+ATTR_LaTeX: :width 16cm
  1.2372 +   [[./images/integration.png]]
  1.2373 +
  1.2374 +** =CORTEX= enables many possibilities for further research
  1.2375 +
  1.2376 +   Oftentimes, the hardest part of building a system involving
  1.2377 +   creatures is dealing with physics and graphics. =CORTEX= removes
  1.2378 +   much of this initial difficulty and leaves researchers free to
  1.2379 +   directly pursue their ideas. I hope that even undergrads with a
  1.2380 +   passing curiosity about simulated touch or creature evolution will
  1.2381 +   be able to use =CORTEX= for experimentation. =CORTEX= is a completely
  1.2382 +   simulated world, and far from being a disadvantage, its simulated
  1.2383 +   nature enables you to create senses and creatures that would be
  1.2384 +   impossible to make in the real world.
  1.2385 +
  1.2386 +   While not by any means a complete list, here are some paths
  1.2387 +   =CORTEX= is well suited to help you explore:
  1.2388 +
  1.2389 +   - Empathy         :: my empathy program leaves many areas for
  1.2390 +        improvement, among which are using vision to infer
  1.2391 +        proprioception and looking up sensory experience with imagined
  1.2392 +        vision, touch, and sound.
  1.2393 +   - Evolution       :: Karl Sims created a rich environment for
  1.2394 +        simulating the evolution of creatures on a connection
  1.2395 +        machine. Today, this can be redone and expanded with =CORTEX=
  1.2396 +        on an ordinary computer.
  1.2397 +   - Exotic senses  :: =CORTEX= enables many fascinating senses that are
  1.2398 +        not possible to build in the real world. For example,
  1.2399 +        telekinesis is an interesting avenue to explore. You can also
  1.2400 +        make a ``semantic'' sense which looks up metadata tags on
  1.2401 +        objects in the environment; the metadata tags might contain
  1.2402 +        other sensory information.
  1.2403 +   - Imagination via subworlds :: this would involve a creature with
  1.2404 +        an effector which creates an entire new sub-simulation where
  1.2405 +        the creature has direct control over placement/creation of
  1.2406 +        objects via simulated telekinesis. The creature observes this
  1.2407 +        sub-world through its normal senses and uses its observations
  1.2408 +        to make predictions about its top level world.
  1.2409 +   - Simulated prescience :: step the simulation forward a few ticks,
  1.2410 +        gather sensory data, then supply this data to the creature as
  1.2411 +        one of its actual senses. The cost of prescience is slowing
  1.2412 +        the simulation down by a factor proportional to however far
  1.2413 +        you want the entities to see into the future. What happens
  1.2414 +        when two evolved creatures that can each see into the future
  1.2415 +        fight each other?
  1.2416 +   - Swarm creatures :: Program a group of creatures that cooperate
  1.2417 +        with each other. Because the creatures would be simulated, you
  1.2418 +        could investigate computationally complex rules of behavior
  1.2419 +        which still, from the group's point of view, would happen in
  1.2420 +        ``real time''. Interactions could be as simple as cellular
  1.2421 +        organisms communicating via flashing lights, or as complex as
  1.2422 +        humanoids completing social tasks, etc.
  1.2423 +   - =HACKER= for writing muscle-control programs :: Presented with
  1.2424 +        a low-level muscle control / sense API, generate higher-level
  1.2425 +        programs for accomplishing various stated goals. Example goals
  1.2426 +        might be "extend all your fingers" or "move your hand into the
  1.2427 +        area with blue light" or "decrease the angle of this joint".
  1.2428 +        It would be like Sussman's HACKER, except it would operate
  1.2429 +        with much more data in a more realistic world. Start off with
  1.2430 +        "calisthenics" to develop subroutines over the motor control
  1.2431 +        API. This would be the "spinal chord" of a more intelligent
  1.2432 +        creature. The low level programming code might be a turning
  1.2433 +        machine that could develop programs to iterate over a "tape"
  1.2434 +        where each entry in the tape could control recruitment of the
  1.2435 +        fibers in a muscle.
  1.2436 +   - Sense fusion    :: There is much work to be done on sense
  1.2437 +        integration -- building up a coherent picture of the world and
  1.2438 +        the things in it. With =CORTEX= as a base, you can explore
  1.2439 +        concepts like self-organizing maps or cross modal clustering
  1.2440 +        in ways that have never before been tried.
  1.2441 +   - Inverse kinematics :: experiments in sense guided motor control
  1.2442 +        are easy given =CORTEX='s support -- you can get right to the
  1.2443 +        hard control problems without worrying about physics or
  1.2444 +        senses.
  1.2445 +
  1.2446 +* Empathy in a simulated worm
  1.2447 +
  1.2448 +  Here I develop a computational model of empathy, using =CORTEX= as a
  1.2449 +  base. Empathy in this context is the ability to observe another
  1.2450 +  creature and infer what sorts of sensations that creature is
  1.2451 +  feeling. My empathy algorithm involves multiple phases. First is
  1.2452 +  free-play, where the creature moves around and gains sensory
  1.2453 +  experience. From this experience I construct a representation of the
  1.2454 +  creature's sensory state space, which I call \Phi-space. Using
  1.2455 +  \Phi-space, I construct an efficient function which takes the
  1.2456 +  limited data that comes from observing another creature and enriches
  1.2457 +  it with a full complement of imagined sensory data. I can then use the
  1.2458 +  imagined sensory data to recognize what the observed creature is
  1.2459 +  doing and feeling, using straightforward embodied action predicates.
  1.2460 +  This is all demonstrated using a simple worm-like creature, and
  1.2461 +  recognizing worm-actions based on limited data.
  1.2462 +
  1.2463 +  #+caption: Here is the worm with which we will be working. 
  1.2464 +  #+caption: It is composed of 5 segments. Each segment has a 
  1.2465 +  #+caption: pair of extensor and flexor muscles. Each of the 
  1.2466 +  #+caption: worm's four joints is a hinge joint which allows 
  1.2467 +  #+caption: about 30 degrees of rotation to either side. Each segment
  1.2468 +  #+caption: of the worm is touch-capable and has a uniform 
  1.2469 +  #+caption: distribution of touch sensors on each of its faces.
  1.2470 +  #+caption: Each joint has a proprioceptive sense to detect 
  1.2471 +  #+caption: relative positions. The worm segments are all the 
  1.2472 +  #+caption: same except for the first one, which has a much
  1.2473 +  #+caption: higher weight than the others to allow for easy 
  1.2474 +  #+caption: manual motor control.
  1.2475 +  #+name: basic-worm-view
  1.2476 +  #+ATTR_LaTeX: :width 10cm
  1.2477 +  [[./images/basic-worm-view.png]]
  1.2478 +
  1.2479 +  #+caption: Program for reading a worm from a blender file and 
  1.2480 +  #+caption: outfitting it with the senses of proprioception, 
  1.2481 +  #+caption: touch, and the ability to move, as specified in the 
  1.2482 +  #+caption: blender file.
  1.2483 +  #+name: get-worm
  1.2484 +  #+begin_listing clojure
  1.2485 +  #+begin_src clojure
  1.2486 +(defn worm []
  1.2487 +  (let [model (load-blender-model "Models/worm/worm.blend")]
  1.2488 +    {:body (doto model (body!))
  1.2489 +     :touch (touch! model)
  1.2490 +     :proprioception (proprioception! model)
  1.2491 +     :muscles (movement! model)}))
  1.2492 +  #+end_src
  1.2493 +  #+end_listing
  1.2494 +
  1.2495 +** Embodiment factors action recognition into manageable parts
  1.2496 +
  1.2497 +   Using empathy, I divide the problem of action recognition into a
  1.2498 +   recognition process expressed in the language of a full complement
  1.2499 +   of senses, and an imaginative process that generates full sensory
  1.2500 +   data from partial sensory data. Splitting the action recognition
  1.2501 +   problem in this manner greatly reduces the total amount of work to
  1.2502 +   recognize actions: The imaginative process is mostly just matching
  1.2503 +   previous experience, and the recognition process gets to use all
  1.2504 +   the senses to directly describe any action.
  1.2505 +
  1.2506 +** Action recognition is easy with a full gamut of senses
  1.2507 +
  1.2508 +   Embodied representations using multiple senses such as touch,
  1.2509 +   proprioception, and muscle tension turn out to be exceedingly
  1.2510 +   efficient at describing body-centered actions. It is the ``right
  1.2511 +   language for the job''. For example, it takes only around 5 lines
  1.2512 +   of LISP code to describe the action of ``curling'' using embodied
  1.2513 +   primitives. It takes about 10 lines to describe the seemingly
  1.2514 +   complicated action of wiggling.
  1.2515 +
  1.2516 +   The following action predicates each take a stream of sensory
  1.2517 +   experience, observe however much of it they desire, and decide
  1.2518 +   whether the worm is doing the action they describe. =curled?=
  1.2519 +   relies on proprioception, =resting?= relies on touch, =wiggling?=
  1.2520 +   relies on a Fourier analysis of muscle contraction, and
  1.2521 +   =grand-circle?= relies on touch and reuses =curled?= as a guard.
  1.2522 +   
  1.2523 +   #+caption: Program for detecting whether the worm is curled. This is the 
  1.2524 +   #+caption: simplest action predicate, because it only uses the last frame 
  1.2525 +   #+caption: of sensory experience, and only uses proprioceptive data. Even 
  1.2526 +   #+caption: this simple predicate, however, is automatically frame 
  1.2527 +   #+caption: independent and ignores vermopomorphic differences such as 
  1.2528 +   #+caption: worm textures and colors.
  1.2529 +   #+name: curled
  1.2530 +   #+begin_listing clojure
  1.2531 +   #+begin_src clojure
  1.2532 +(defn curled?
  1.2533 +  "Is the worm curled up?"
  1.2534 +  [experiences]
  1.2535 +  (every?
  1.2536 +   (fn [[_ _ bend]]
  1.2537 +     (> (Math/sin bend) 0.64))
  1.2538 +   (:proprioception (peek experiences))))
  1.2539 +   #+end_src
  1.2540 +   #+end_listing
  1.2541 +
  1.2542 +   #+caption: Program for summarizing the touch information in a patch 
  1.2543 +   #+caption: of skin.
  1.2544 +   #+name: touch-summary
  1.2545 +   #+begin_listing clojure
  1.2546 +   #+begin_src clojure
  1.2547 +(defn contact
  1.2548 +  "Determine how much contact a particular worm segment has with
  1.2549 +   other objects. Returns a value between 0 and 1, where 1 is full
  1.2550 +   contact and 0 is no contact."
  1.2551 +  [touch-region [coords contact :as touch]]
  1.2552 +  (-> (zipmap coords contact)
  1.2553 +      (select-keys touch-region)
  1.2554 +      (vals)
  1.2555 +      (#(map first %))
  1.2556 +      (average)
  1.2557 +      (* 10)
  1.2558 +      (- 1)
  1.2559 +      (Math/abs)))
  1.2560 +   #+end_src
  1.2561 +   #+end_listing
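          +
          +   The arithmetic at the end of =contact= is terse, so here is a
          +   worked sketch, assuming (as with the touch sense) that a feeler
          +   reports 0.0 when it is touching something and 0.1 when it feels
          +   nothing, and that =average= is the same helper used above:
          +
          +   #+begin_src clojure
          +(defn contact-strength
          +  "The same arithmetic as the tail of =contact=, applied to raw
          +   feeler values."
          +  [feeler-values]
          +  (-> feeler-values (average) (* 10) (- 1) (Math/abs)))
          +
          +(contact-strength [0.0 0.0 0.0 0.0])  ;; => 1.0  every feeler touching
          +(contact-strength [0.1 0.1 0.1 0.1])  ;; => 0.0  nothing touching (approx.)
          +(contact-strength [0.0 0.1 0.0 0.1])  ;; => 0.5  half the region touching
          +   #+end_src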
  1.2562 +
  1.2563 +
  1.2564 +   #+caption: Program for detecting whether the worm is at rest. This program
  1.2565 +   #+caption: uses a summary of the tactile information from the underbelly 
  1.2566 +   #+caption: of the worm, and is only true if every segment is touching the 
  1.2567 +   #+caption: floor. Note that this function contains no references to 
  1.2568 +   #+caption: proprioception at all.
  1.2569 +   #+name: resting
  1.2570 +   #+begin_listing clojure
  1.2571 +   #+begin_src clojure
  1.2572 +(def worm-segment-bottom (rect-region [8 15] [14 22]))
  1.2573 +
  1.2574 +(defn resting?
  1.2575 +  "Is the worm resting on the ground?"
  1.2576 +  [experiences]
  1.2577 +  (every?
  1.2578 +   (fn [touch-data]
  1.2579 +     (< 0.9 (contact worm-segment-bottom touch-data)))
  1.2580 +   (:touch (peek experiences))))
  1.2581 +   #+end_src
  1.2582 +   #+end_listing
  1.2583 +
  1.2584 +   #+caption: Program for detecting whether the worm is curled up into a 
  1.2585 +   #+caption: full circle. Here the embodied approach begins to shine, as
  1.2586 +   #+caption: I am able to both use a previous action predicate (=curled?=)
  1.2587 +   #+caption: as well as the direct tactile experience of the head and tail.
  1.2588 +   #+name: grand-circle
  1.2589 +   #+begin_listing clojure
  1.2590 +   #+begin_src clojure
  1.2591 +(def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
  1.2592 +
  1.2593 +(def worm-segment-top-tip (rect-region [0 15] [7 22]))
  1.2594 +
  1.2595 +(defn grand-circle?
  1.2596 +  "Does the worm form a majestic circle (one end touching the other)?"
  1.2597 +  [experiences]
  1.2598 +  (and (curled? experiences)
  1.2599 +       (let [worm-touch (:touch (peek experiences))
  1.2600 +             tail-touch (worm-touch 0)
  1.2601 +             head-touch (worm-touch 4)]
  1.2602 +         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
  1.2603 +              (< 0.55 (contact worm-segment-top-tip    head-touch))))))
  1.2604 +   #+end_src
  1.2605 +   #+end_listing
  1.2606 +
  1.2607 +
  1.2608 +   #+caption: Program for detecting whether the worm has been wiggling for 
  1.2609 +   #+caption: the last few frames. It uses a Fourier analysis of the muscle 
  1.2610 +   #+caption: contractions of the worm's tail to determine wiggling. This is 
  1.2611 +   #+caption: significant because there is no particular frame that clearly 
  1.2612 +   #+caption: indicates that the worm is wiggling --- only when multiple frames 
  1.2613 +   #+caption: are analyzed together is the wiggling revealed. Defining 
  1.2614 +   #+caption: wiggling this way also gives the worm an opportunity to learn 
  1.2615 +   #+caption: and recognize ``frustrated wiggling'', where the worm tries to 
  1.2616 +   #+caption: wiggle but can't. Frustrated wiggling is very visually different 
  1.2617 +   #+caption: from actual wiggling, but this definition gives it to us for free.
  1.2618 +   #+name: wiggling
  1.2619 +   #+begin_listing clojure
  1.2620 +   #+begin_src clojure
  1.2621 +(defn fft [nums]
  1.2622 +  (map
  1.2623 +   #(.getReal %)
  1.2624 +   (.transform
  1.2625 +    (FastFourierTransformer. DftNormalization/STANDARD)
  1.2626 +    (double-array nums) TransformType/FORWARD)))
  1.2627 +
  1.2628 +(def indexed (partial map-indexed vector))
  1.2629 +
  1.2630 +(defn max-indexed [s]
  1.2631 +  (first (sort-by (comp - second) (indexed s))))
  1.2632 +
  1.2633 +(defn wiggling?
  1.2634 +  "Is the worm wiggling?"
  1.2635 +  [experiences]
  1.2636 +  (let [analysis-interval 0x40]
  1.2637 +    (when (> (count experiences) analysis-interval)
  1.2638 +      (let [a-flex 3
  1.2639 +            a-ex   2
  1.2640 +            muscle-activity
  1.2641 +            (map :muscle (vector:last-n experiences analysis-interval))
  1.2642 +            base-activity
  1.2643 +            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
  1.2644 +        (= 2
  1.2645 +           (first
  1.2646 +            (max-indexed
  1.2647 +             (map #(Math/abs %)
  1.2648 +                  (take 20 (fft base-activity))))))))))
  1.2649 +   #+end_src
  1.2650 +   #+end_listing
  1.2651 +
  1.2652 +   With these action predicates, I can now recognize the actions of
  1.2653 +   the worm while it is moving under my control and I have access to
  1.2654 +   all the worm's senses.
  1.2655 +
  1.2656 +   #+caption: Use the action predicates defined earlier to report on 
  1.2657 +   #+caption: what the worm is doing while in simulation.
  1.2658 +   #+name: report-worm-activity
  1.2659 +   #+begin_listing clojure
  1.2660 +   #+begin_src clojure
  1.2661 +(defn debug-experience
  1.2662 +  [experiences text]
  1.2663 +  (cond
  1.2664 +   (grand-circle? experiences) (.setText text "Grand Circle")
  1.2665 +   (curled? experiences)       (.setText text "Curled")
  1.2666 +   (wiggling? experiences)     (.setText text "Wiggling")
  1.2667 +   (resting? experiences)      (.setText text "Resting")))
  1.2668 +   #+end_src
  1.2669 +   #+end_listing
  1.2670 +
  1.2671 +   #+caption: Using =debug-experience=, the body-centered predicates
  1.2672 +   #+caption: work together to classify the behaviour of the worm. 
  1.2673 +   #+caption: The predicates are operating with access to the worm's
  1.2674 +   #+caption: full sensory data.
  1.2675 +   #+name: worm-identify-init
  1.2676 +   #+ATTR_LaTeX: :width 10cm
  1.2677 +   [[./images/worm-identify-init.png]]
  1.2678 +
  1.2679 +   These action predicates satisfy the recognition requirement of an
  1.2680 +   empathic recognition system. There is power in the simplicity of
  1.2681 +   the action predicates. They describe their actions without getting
  1.2682 +   confused in visual details of the worm. Each one is frame
  1.2683 +   independent, but more than that, they are each independent of
  1.2684 +   irrelevant visual details of the worm and the environment. They
  1.2685 +   will work regardless of whether the worm is a different color or
  1.2686 +   heavily textured, or if the environment has strange lighting.
  1.2687 +
  1.2688 +   The trick now is to make the action predicates work even when the
  1.2689 +   sensory data on which they depend is absent. If I can do that, then
  1.2690 +   I will have gained much.
  1.2691 +
  1.2692 +** \Phi-space describes the worm's experiences
  1.2693 +   
  1.2694 +   As a first step towards building empathy, I need to gather all of
  1.2695 +   the worm's experiences during free play. I use a simple vector to
  1.2696 +   store all the experiences. 
  1.2697 +
  1.2698 +   Each element of the experience vector exists in the vast space of
  1.2699 +   all possible worm-experiences. Most of this vast space is actually
  1.2700 +   unreachable due to physical constraints of the worm's body. For
  1.2701 +   example, the worm's segments are connected by hinge joints that put
  1.2702 +   a practical limit on the worm's range of motions without limiting
  1.2703 +   its degrees of freedom. Some groupings of senses are impossible;
  1.2704 +   the worm can not be bent into a circle so that its ends are
  1.2705 +   touching and at the same time not also experience the sensation of
  1.2706 +   touching itself.
  1.2707 +
  1.2708 +   As the worm moves around during free play and its experience vector
  1.2709 +   grows larger, the vector begins to define a subspace which is all
  1.2710 +   the sensations the worm can practically experience during normal
  1.2711 +   operation. I call this subspace \Phi-space, short for
  1.2712 +   physical-space. The experience vector defines a path through
  1.2713 +   \Phi-space. This path has interesting properties that all derive
  1.2714 +   from physical embodiment. The proprioceptive components are
  1.2715 +   completely smooth, because in order for the worm to move from one
  1.2716 +   position to another, it must pass through the intermediate
  1.2717 +   positions. The path invariably forms loops as actions are repeated.
  1.2718 +   Finally and most importantly, proprioception actually gives very
  1.2719 +   strong inference about the other senses. For example, when the worm
  1.2720 +   is flat, you can infer that it is touching the ground and that its
  1.2721 +   muscles are not active, because if the muscles were active, the
  1.2722 +   worm would be moving and would not be perfectly flat. In order to
  1.2723 +   stay flat, the worm has to be touching the ground, or it would
  1.2724 +   again be moving out of the flat position due to gravity. If the
  1.2725 +   worm is positioned in such a way that it interacts with itself,
  1.2726 +   then it is very likely to be feeling the same tactile feelings as
  1.2727 +   the last time it was in that position, because it has the same body
  1.2728 +   as then. If you observe multiple frames of proprioceptive data,
  1.2729 +   then you can become increasingly confident about the exact
  1.2730 +   activations of the worm's muscles, because it generally takes a
  1.2731 +   unique combination of muscle contractions to transform the worm's
  1.2732 +   body along a specific path through \Phi-space.
  1.2733 +
  1.2734 +   There is a simple way of taking \Phi-space and the total ordering
  1.2735 +   provided by an experience vector and reliably inferring the rest of
  1.2736 +   the senses.
  1.2737 +
  1.2738 +** Empathy is the process of tracing through \Phi-space 
  1.2739 +
  1.2740 +   Here is the core of a basic empathy algorithm, starting with an
  1.2741 +   experience vector:
  1.2742 +
  1.2743 +   First, group the experiences into tiered proprioceptive bins. I use
  1.2744 +   three tiers of bins whose sizes are powers of 10; the smallest bin
  1.2745 +   has an approximate size of 0.001 radians in all proprioceptive dimensions.
  1.2746 +   
  1.2747 +   Then, given a sequence of proprioceptive input, generate a set of
  1.2748 +   matching experience records for each input, using the tiered
  1.2749 +   proprioceptive bins. 
  1.2750 +
  1.2751 +   Finally, to infer sensory data, select the longest consecutive chain
  1.2752 +   of experiences. Consecutive experiences are experiences that
  1.2753 +   appear next to each other in the experience vector.
  1.2754 +
  1.2755 +   This algorithm has three advantages: 
  1.2756 +
  1.2757 +   1. It's simple
  1.2758 +
  1.2759 +   2. It's very fast -- retrieving possible interpretations takes
  1.2760 +      constant time. Tracing through chains of interpretations takes
  1.2761 +      time proportional to the average number of experiences in a
  1.2762 +      proprioceptive bin. Redundant experiences in \Phi-space can be
  1.2763 +      merged to save computation.
  1.2764 +
  1.2765 +   3. It protects from wrong interpretations of transient ambiguous
  1.2766 +      proprioceptive data. For example, if the worm is flat for just
  1.2767 +      an instant, this flatness will not be interpreted as implying
  1.2768 +      that the worm has its muscles relaxed, since the flatness is
  1.2769 +      part of a longer chain which includes a distinct pattern of
  1.2770 +      muscle activation. Markov chains or other memoryless statistical
  1.2771 +      models that operate on individual frames may very well make this
  1.2772 +      mistake.
  1.2773 +
  1.2774 +   #+caption: Program to convert an experience vector into a 
  1.2775 +   #+caption: proprioceptively binned lookup function.
  1.2776 +   #+name: bin
  1.2777 +   #+begin_listing clojure
  1.2778 +   #+begin_src clojure
  1.2779 +(defn bin [digits]
  1.2780 +  (fn [angles]
  1.2781 +    (->> angles
  1.2782 +         (flatten)
  1.2783 +         (map (juxt #(Math/sin %) #(Math/cos %)))
  1.2784 +         (flatten)
  1.2785 +         (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
  1.2786 +
  1.2787 +(defn gen-phi-scan 
  1.2788 +  "Nearest-neighbors with binning. Only returns a result if
  1.2789 +   the proprioceptive data is within 10% of a previously recorded
  1.2790 +   result in all dimensions."
  1.2791 +  [phi-space]
  1.2792 +  (let [bin-keys (map bin [3 2 1])
  1.2793 +        bin-maps
  1.2794 +        (map (fn [bin-key]
  1.2795 +               (group-by
  1.2796 +                (comp bin-key :proprioception phi-space)
  1.2797 +                (range (count phi-space)))) bin-keys)
  1.2798 +        lookups (map (fn [bin-key bin-map]
  1.2799 +                       (fn [proprio] (bin-map (bin-key proprio))))
  1.2800 +                     bin-keys bin-maps)]
  1.2801 +    (fn lookup [proprio-data]
  1.2802 +      (set (some #(% proprio-data) lookups)))))
  1.2803 +   #+end_src
  1.2804 +   #+end_listing
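          +
          +   To make the binning concrete, here is what =bin= does to a single
          +   hypothetical joint reading of [0.0 0.0 1.57] radians: each angle
          +   becomes a (sin, cos) pair, and the pairs are rounded at that
          +   tier's resolution:
          +
          +   #+begin_src clojure
          +((bin 1) [[0.0 0.0 1.57]])
          +;; => [0 1 0 1 1 0]         ; coarse tier: sin/cos rounded to integers
          +((bin 3) [[0.0 0.0 1.57]])
          +;; => [0 100 0 100 100 0]   ; fine tier: rounded at 1/100 resolution
          +   #+end_src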
  1.2805 +
  1.2806 +   #+caption: =longest-thread= finds the longest path of consecutive 
  1.2807 +   #+caption: experiences to explain proprioceptive worm data.
  1.2808 +   #+name: phi-space-history-scan
  1.2809 +   #+ATTR_LaTeX: :width 10cm
  1.2810 +   [[./images/aurellem-gray.png]]
  1.2811 +
  1.2812 +   =longest-thread= infers sensory data by stitching together pieces
  1.2813 +   from previous experience. It prefers longer chains of previous
  1.2814 +   experience to shorter ones. For example, during training the worm
  1.2815 +   might rest on the ground for one second before it performs its
  1.2816 +   exercises. If during recognition the worm rests on the ground for
  1.2817 +   five seconds, =longest-thread= will accommodate this five second
  1.2818 +   rest period by looping the one second rest chain five times.
  1.2819 +
  1.2820 +   =longest-thread= takes time proportional to the average number of
  1.2821 +   entries in a proprioceptive bin, because for each element in the
  1.2822 +   starting bin it performs a series of set lookups in the preceding
  1.2823 +   bins. If the total history is limited, then this is only a constant
  1.2824 +   multiple times the number of entries in the starting bin. This
  1.2825 +   analysis also applies even if the action requires multiple longest
  1.2826 +   chains -- it's still the average number of entries in a
  1.2827 +   proprioceptive bin times the desired chain length. Because
  1.2828 +   =longest-thread= is so efficient and simple, I can interpret
  1.2829 +   worm-actions in real time.
  1.2830 +
  1.2831 +   #+caption: Program to calculate empathy by tracing though \Phi-space
  1.2832 +   #+caption: and finding the longest (i.e. most coherent) interpretation
  1.2833 +   #+caption: of the data.
  1.2834 +   #+name: longest-thread
  1.2835 +   #+begin_listing clojure
  1.2836 +   #+begin_src clojure
  1.2837 +(defn longest-thread
  1.2838 +  "Find the longest thread from phi-index-sets. The index sets should
  1.2839 +   be ordered from most recent to least recent."
  1.2840 +  [phi-index-sets]
  1.2841 +  (loop [result '()
  1.2842 +         [thread-bases & remaining :as phi-index-sets] phi-index-sets]
  1.2843 +    (if (empty? phi-index-sets)
  1.2844 +      (vec result)
  1.2845 +      (let [threads
  1.2846 +            (for [thread-base thread-bases]
  1.2847 +              (loop [thread (list thread-base)
  1.2848 +                     remaining remaining]
  1.2849 +                (let [next-index (dec (first thread))]
  1.2850 +                  (cond (empty? remaining) thread
  1.2851 +                        (contains? (first remaining) next-index)
  1.2852 +                        (recur
  1.2853 +                         (cons next-index thread) (rest remaining))
  1.2854 +                        :else thread))))
  1.2855 +            longest-thread
  1.2856 +            (reduce (fn [thread-a thread-b]
  1.2857 +                      (if (> (count thread-a) (count thread-b))
  1.2858 +                        thread-a thread-b))
  1.2859 +                    '(nil)
  1.2860 +                    threads)]
  1.2861 +        (recur (concat longest-thread result)
  1.2862 +               (drop (count longest-thread) phi-index-sets))))))
  1.2863 +   #+end_src
  1.2864 +   #+end_listing
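          +
          +   A toy example of =longest-thread= on hand-made index sets, most
          +   recent bin first: indices 7, 8 and 9 appear consecutively across
          +   the three bins, so they are stitched into a single chain and the
          +   stray indices are ignored:
          +
          +   #+begin_src clojure
          +(longest-thread [#{9 3} #{8 0} #{7 1}])
          +;; => [7 8 9]
          +   #+end_src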
  1.2865 +
  1.2866 +   There is one final piece, which is to replace missing sensory data
  1.2867 +   with a best-guess estimate. While I could fill in missing data by
  1.2868 +   using a gradient over the closest known sensory data points,
  1.2869 +   averages can be misleading. It is certainly possible to create an
  1.2870 +   impossible sensory state by averaging two possible sensory states.
  1.2871 +   Therefore, I simply replicate the most recent sensory experience to
  1.2872 +   fill in the gaps.
  1.2873 +
  1.2874 +   #+caption: Fill in blanks in sensory experience by replicating the most 
  1.2875 +   #+caption: recent experience.
  1.2876 +   #+name: infer-nils
  1.2877 +   #+begin_listing clojure
  1.2878 +   #+begin_src clojure
  1.2879 +(defn infer-nils
  1.2880 +  "Replace nils with the next available non-nil element in the
  1.2881 +   sequence, or barring that, 0."
  1.2882 +  [s]
  1.2883 +  (loop [i (dec (count s))
  1.2884 +         v (transient s)]
  1.2885 +    (if (zero? i) (persistent! v)
  1.2886 +        (if-let [cur (v i)]
  1.2887 +          (if (get v (dec i) 0)
  1.2888 +            (recur (dec i) v)
  1.2889 +            (recur (dec i) (assoc! v (dec i) cur)))
  1.2890 +          (recur i (assoc! v i 0))))))
  1.2891 +   #+end_src
  1.2892 +   #+end_listing
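          +
          +   For example, on hand-made data a gap is filled with the next
          +   non-nil entry that follows it, and a trailing gap falls back to 0:
          +
          +   #+begin_src clojure
          +(infer-nils [1 nil nil 2 nil])
          +;; => [1 2 2 2 0]
          +   #+end_src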
  1.2893 +  
  1.2894 +** Efficient action recognition with =EMPATH=
  1.2895 +   
  1.2896 +   To use =EMPATH= with the worm, I first need to gather a set of
  1.2897 +   experiences from the worm that includes the actions I want to
  1.2898 +   recognize. The =generate-phi-space= program (listing
  1.2899 +   \ref{generate-phi-space}) runs the worm through a series of
  1.2900 +   exercises and gathers those experiences into a vector. The
  1.2901 +   =do-all-the-things= program is a routine expressed in a simple
  1.2902 +   muscle contraction script language for automated worm control. It
  1.2903 +   causes the worm to rest, curl, and wiggle over about 700 frames
  1.2904 +   (approx. 11 seconds).
  1.2905 +
  1.2906 +   #+caption: Program to gather the worm's experiences into a vector for 
  1.2907 +   #+caption: further processing. The =motor-control-program= line uses
  1.2908 +   #+caption: a motor control script that causes the worm to execute a series
  1.2909 +   #+caption: of ``exercises'' that include all the action predicates.
  1.2910 +   #+name: generate-phi-space
  1.2911 +   #+begin_listing clojure
  1.2912 +   #+begin_src clojure
  1.2913 +(def do-all-the-things 
  1.2914 +  (concat
  1.2915 +   curl-script
  1.2916 +   [[300 :d-ex 40]
  1.2917 +    [320 :d-ex 0]]
  1.2918 +   (shift-script 280 (take 16 wiggle-script))))
  1.2919 +
  1.2920 +(defn generate-phi-space []
  1.2921 +  (let [experiences (atom [])]
  1.2922 +    (run-world
  1.2923 +     (apply-map 
  1.2924 +      worm-world
  1.2925 +      (merge
  1.2926 +       (worm-world-defaults)
  1.2927 +       {:end-frame 700
  1.2928 +        :motor-control
  1.2929 +        (motor-control-program worm-muscle-labels do-all-the-things)
  1.2930 +        :experiences experiences})))
  1.2931 +    @experiences))
  1.2932 +   #+end_src
  1.2933 +   #+end_listing
  1.2934 +
  1.2935 +   #+caption: Use longest thread and a phi-space generated from a short
  1.2936 +   #+caption: exercise routine to interpret actions during free play.
  1.2937 +   #+name: empathy-debug
  1.2938 +   #+begin_listing clojure
  1.2939 +   #+begin_src clojure
  1.2940 +(defn init []
  1.2941 +  (def phi-space (generate-phi-space))
  1.2942 +  (def phi-scan (gen-phi-scan phi-space)))
  1.2943 +
  1.2944 +(defn empathy-demonstration []
  1.2945 +  (let [proprio (atom ())]
  1.2946 +    (fn
  1.2947 +      [experiences text]
  1.2948 +      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
  1.2949 +        (swap! proprio (partial cons phi-indices))
  1.2950 +        (let [exp-thread (longest-thread (take 300 @proprio))
  1.2951 +              empathy (mapv phi-space (infer-nils exp-thread))]
  1.2952 +          (println-repl (vector:last-n exp-thread 22))
  1.2953 +          (cond
  1.2954 +           (grand-circle? empathy) (.setText text "Grand Circle")
  1.2955 +           (curled? empathy)       (.setText text "Curled")
  1.2956 +           (wiggling? empathy)     (.setText text "Wiggling")
  1.2957 +           (resting? empathy)      (.setText text "Resting")
  1.2958 +           :else                       (.setText text "Unknown")))))))
  1.2959 +
  1.2960 +(defn empathy-experiment [record]
  1.2961 +  (.start (worm-world :experience-watch (empathy-demonstration)
  1.2962 +                      :record record :worm worm*)))
  1.2963 +   #+end_src
  1.2964 +   #+end_listing
  1.2965 +   
  1.2966 +   The result of running =empathy-experiment= is that the system is
  1.2967 +   generally able to interpret worm actions using the action-predicates
  1.2968 +   on simulated sensory data just as well as with actual data. Figure
  1.2969 +   \ref{empathy-debug-image} was generated using =empathy-experiment=:
  1.2970 +
  1.2971 +  #+caption: From only proprioceptive data, =EMPATH= was able to infer 
  1.2972 +  #+caption: the complete sensory experience and classify four poses.
  1.2973 +  #+caption: (The last panel shows a composite image of \emph{wiggling}, 
  1.2974 +  #+caption: a dynamic pose.)
  1.2975 +  #+name: empathy-debug-image
  1.2976 +  #+ATTR_LaTeX: :width 10cm :placement [H]
  1.2977 +  [[./images/empathy-1.png]]
  1.2978 +
  1.2979 +  One way to measure the performance of =EMPATH= is to compare the
  1.2980 +  suitability of the imagined sense experience to trigger the same
  1.2981 +  action predicates as the real sensory experience. 
  1.2982 +  
  1.2983 +   #+caption: Determine how closely empathy approximates actual 
  1.2984 +   #+caption: sensory data.
  1.2985 +   #+name: test-empathy-accuracy
  1.2986 +   #+begin_listing clojure
  1.2987 +   #+begin_src clojure
  1.2988 +(def worm-action-label
  1.2989 +  (juxt grand-circle? curled? wiggling?))
  1.2990 +
  1.2991 +(defn compare-empathy-with-baseline [matches]
  1.2992 +  (let [proprio (atom ())]
  1.2993 +    (fn
  1.2994 +      [experiences text]
  1.2995 +      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
  1.2996 +        (swap! proprio (partial cons phi-indices))
  1.2997 +        (let [exp-thread (longest-thread (take 300 @proprio))
  1.2998 +              empathy (mapv phi-space (infer-nils exp-thread))
  1.2999 +              experience-matches-empathy
  1.3000 +              (= (worm-action-label experiences)
  1.3001 +                 (worm-action-label empathy))]
  1.3002 +          (println-repl experience-matches-empathy)
  1.3003 +          (swap! matches #(conj % experience-matches-empathy)))))))
  1.3004 +              
  1.3005 +(defn accuracy [v]
  1.3006 +  (float (/ (count (filter true? v)) (count v))))
  1.3007 +
  1.3008 +(defn test-empathy-accuracy []
  1.3009 +  (let [res (atom [])]
  1.3010 +    (run-world
  1.3011 +     (worm-world :experience-watch
  1.3012 +                 (compare-empathy-with-baseline res)
  1.3013 +                 :worm worm*))
  1.3014 +    (accuracy @res)))
  1.3015 +   #+end_src
  1.3016 +   #+end_listing
  1.3017 +
  1.3018 +  Running =test-empathy-accuracy= using the very short exercise
  1.3019 +  program defined in listing \ref{generate-phi-space}, and then doing
  1.3020 +  a similar pattern of activity manually yields an accuracy of around
  1.3021 +  73%. This is based on very limited worm experience. By training the
  1.3022 +  worm for longer, the accuracy dramatically improves.
  1.3023 +
  1.3024 +   #+caption: Program to generate \Phi-space using manual training.
  1.3025 +   #+name: manual-phi-space
  1.3026 +   #+begin_listing clojure
  1.3027 +   #+begin_src clojure
  1.3028 +(defn init-interactive []
  1.3029 +  (def phi-space
  1.3030 +    (let [experiences (atom [])]
  1.3031 +      (run-world
  1.3032 +       (apply-map 
  1.3033 +        worm-world
  1.3034 +        (merge
  1.3035 +         (worm-world-defaults)
  1.3036 +         {:experiences experiences})))
  1.3037 +      @experiences))
  1.3038 +  (def phi-scan (gen-phi-scan phi-space)))
  1.3039 +   #+end_src
  1.3040 +   #+end_listing
  1.3041 +
  1.3042 +  After about 1 minute of manual training, I was able to achieve 95%
  1.3043 +  accuracy on manual testing of the worm using =init-interactive= and
  1.3044 +  =test-empathy-accuracy=. The majority of errors are near the
  1.3045 +  boundaries of transitioning from one type of action to another.
  1.3046 +  During these transitions the exact label for the action is more open
  1.3047 +  to interpretation, and disagreement between empathy and experience
  1.3048 +  is more excusable.
  1.3049 +
  1.3050 +** Digression: bootstrapping touch using free exploration
  1.3051 +
  1.3052 +   In the previous section I showed how to compute actions in terms of
  1.3053 +   body-centered predicates which relied on average touch activation of
  1.3054 +   pre-defined regions of the worm's skin. What if, instead of receiving
  1.3055 +   touch pre-grouped into the six faces of each worm segment, the true
  1.3056 +   topology of the worm's skin were unknown? This is more similar to how
  1.3057 +   a nerve fiber bundle might be arranged. While two fibers that are
  1.3058 +   close in a nerve bundle /might/ correspond to two touch sensors that
  1.3059 +   are close together on the skin, the process of taking a complicated
  1.3060 +   surface and forcing it into essentially a circle requires some cuts
  1.3061 +   and rearrangements.
  1.3062 +   
  1.3063 +   In this section I show how to automatically learn the skin-topology of
  1.3064 +   a worm segment by free exploration. As the worm rolls around on the
  1.3065 +   floor, large sections of its surface get activated. If the worm has
  1.3066 +   stopped moving, then whatever region of skin is touching the
  1.3067 +   floor is probably an important region, and should be recorded.
  1.3068 +   
  1.3069 +   #+caption: Program to detect whether the worm is in a resting state 
  1.3070 +   #+caption: with one face touching the floor.
  1.3071 +   #+name: pure-touch
  1.3072 +   #+begin_listing clojure
  1.3073 +   #+begin_src clojure
  1.3074 +(def full-contact [(float 0.0) (float 0.1)])
  1.3075 +
  1.3076 +(defn pure-touch?
  1.3077 +  "This is worm specific code to determine if a large region of touch
  1.3078 +   sensors is either all on or all off."
  1.3079 +  [[coords touch :as touch-data]]
  1.3080 +  (= (set (map first touch)) (set full-contact)))
  1.3081 +   #+end_src
  1.3082 +   #+end_listing
  1.3083 +
  1.3084 +   After collecting these important regions, there will be many nearly
  1.3085 +   similar touch regions. While for some purposes the subtle
  1.3086 +   differences between these regions will be important, for my
  1.3087 +   purposes I collapse them into mostly non-overlapping sets using
  1.3088 +   =remove-similar= in listing \ref{remove-similar}.
  1.3089 +
  1.3090 +   #+caption: Program to take a list of sets of points and ``collapse them''
  1.3091 +   #+caption: so that the remaining sets in the list are significantly 
  1.3092 +   #+caption: different from each other. Prefer smaller sets to larger ones.
  1.3093 +   #+name: remove-similar
  1.3094 +   #+begin_listing clojure
  1.3095 +   #+begin_src clojure
  1.3096 +(defn remove-similar
  1.3097 +  [coll]
  1.3098 +  (loop [result () coll (sort-by (comp - count) coll)]
  1.3099 +    (if (empty? coll) result
  1.3100 +        (let  [[x & xs] coll
  1.3101 +               c (count x)]
  1.3102 +          (if (some
  1.3103 +               (fn [other-set]
  1.3104 +                 (let [oc (count other-set)]
  1.3105 +                   (< (- (count (union other-set x)) c) (* oc 0.1))))
  1.3106 +               xs)
  1.3107 +            (recur result xs)
  1.3108 +            (recur (cons x result) xs))))))
  1.3109 +   #+end_src
  1.3110 +   #+end_listing
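          +
          +   A toy example (assuming clojure.set's =union= is referred, as it
          +   is in =CORTEX=): the larger set #{1 2 3 4} is discarded because
          +   the smaller #{1 2 3} covers nearly the same points, while the
          +   unrelated #{7 8} survives:
          +
          +   #+begin_src clojure
          +(remove-similar [#{1 2 3} #{1 2 3 4} #{7 8}])
          +;; => (#{7 8} #{1 2 3})
          +   #+end_src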
  1.3111 +
  1.3112 +   Actually running this simulation is easy given =CORTEX='s facilities.
  1.3113 +
  1.3114 +   #+caption: Collect experiences while the worm moves around. Filter the touch 
  1.3115 +   #+caption: sensations down to the stable ones, collapse similar ones together, 
  1.3116 +   #+caption: and report the regions learned.
  1.3117 +   #+name: learn-touch
  1.3118 +   #+begin_listing clojure
  1.3119 +   #+begin_src clojure
  1.3120 +(defn learn-touch-regions []
  1.3121 +  (let [experiences (atom [])
  1.3122 +        world (apply-map
  1.3123 +               worm-world
  1.3124 +               (assoc (worm-segment-defaults)
  1.3125 +                 :experiences experiences))]
  1.3126 +    (run-world world)
  1.3127 +    (->>
  1.3128 +     @experiences
  1.3129 +     (drop 175)
  1.3130 +     ;; access the single segment's touch data
  1.3131 +     (map (comp first :touch))
  1.3132 +     ;; only deal with "pure" touch data to determine surfaces
  1.3133 +     (filter pure-touch?)
  1.3134 +     ;; associate coordinates with touch values
  1.3135 +     (map (partial apply zipmap))
  1.3136 +     ;; select those regions where contact is being made
  1.3137 +     (map (partial group-by second))
  1.3138 +     (map #(get % full-contact))
  1.3139 +     (map (partial map first))
  1.3140 +     ;; remove redundant/subset regions
  1.3141 +     (map set)
  1.3142 +     remove-similar)))
  1.3143 +
  1.3144 +(defn learn-and-view-touch-regions []
  1.3145 +  (map view-touch-region
  1.3146 +       (learn-touch-regions)))
  1.3147 +   #+end_src
  1.3148 +   #+end_listing
  1.3149 +
  1.3150 +   The only thing remaining to define is the particular motion the worm
  1.3151 +   must take. I accomplish this with a simple motor control program.
  1.3152 +
  1.3153 +   #+caption: Motor control program for making the worm roll on the ground.
  1.3154 +   #+caption: This could also be replaced with random motion.
  1.3155 +   #+name: worm-roll
  1.3156 +   #+begin_listing clojure
  1.3157 +   #+begin_src clojure
  1.3158 +(defn touch-kinesthetics []
  1.3159 +  [[170 :lift-1 40]
  1.3160 +   [190 :lift-1 19]
  1.3161 +   [206 :lift-1  0]
  1.3162 +
  1.3163 +   [400 :lift-2 40]
  1.3164 +   [410 :lift-2  0]
  1.3165 +
  1.3166 +   [570 :lift-2 40]
  1.3167 +   [590 :lift-2 21]
  1.3168 +   [606 :lift-2  0]
  1.3169 +
  1.3170 +   [800 :lift-1 30]
  1.3171 +   [809 :lift-1 0]
  1.3172 +
  1.3173 +   [900 :roll-2 40]
  1.3174 +   [905 :roll-2 20]
  1.3175 +   [910 :roll-2  0]
  1.3176 +
  1.3177 +   [1000 :roll-2 40]
  1.3178 +   [1005 :roll-2 20]
  1.3179 +   [1010 :roll-2  0]
  1.3180 +   
  1.3181 +   [1100 :roll-2 40]
  1.3182 +   [1105 :roll-2 20]
  1.3183 +   [1110 :roll-2  0]
  1.3184 +   ])
  1.3185 +   #+end_src
  1.3186 +   #+end_listing
  1.3187 +
  1.3188 +
  1.3189 +   #+caption: The small worm rolls around on the floor, driven
  1.3190 +   #+caption: by the motor control program in listing \ref{worm-roll}.
  1.3191 +   #+name: worm-rolling
  1.3192 +   #+ATTR_LaTeX: :width 12cm
  1.3193 +   [[./images/worm-roll.png]]
  1.3194 +
  1.3195 +
  1.3196 +   #+caption: After completing its adventures, the worm now knows 
  1.3197 +   #+caption: how its touch sensors are arranged along its skin. These 
  1.3198 +   #+caption: are the regions that were deemed important by 
  1.3199 +   #+caption: =learn-touch-regions=. Note that the worm has discovered
  1.3200 +   #+caption: that it has six sides.
  1.3201 +   #+name: worm-touch-map
  1.3202 +   #+ATTR_LaTeX: :width 12cm
  1.3203 +   [[./images/touch-learn.png]]
  1.3204 +
  1.3205 +   While simple, =learn-touch-regions= exploits regularities in both
  1.3206 +   the worm's physiology and the worm's environment to correctly
  1.3207 +   deduce that the worm has six sides. Note that =learn-touch-regions=
  1.3208 +   would work just as well even if the worm's touch sense data were
  1.3209 +   completely scrambled. The cross shape is just for convenience. This
  1.3210 +   example justifies the use of pre-defined touch regions in =EMPATH=.
  1.3211 +
  1.3212 +* Contributions
  1.3213 +  
  In this thesis you have seen the =CORTEX= system, a complete
  environment for creating simulated creatures. You have seen how to
  implement five senses: touch, proprioception, hearing, vision, and
  muscle tension. You have seen how to create new creatures using
  Blender, a 3D modeling tool. I hope that =CORTEX= will be useful in
  further research projects. To this end I have included the full
  source to =CORTEX= along with a large suite of tests and examples. I
  have also created a user guide for =CORTEX=, which is included in an
  appendix to this thesis.
  1.3223 +
  You have also seen how I used =CORTEX= as a platform to attack the
  /action recognition/ problem, which is the problem of recognizing
  actions in video. You saw a simple system called =EMPATH= which
  identifies actions by first describing actions in a body-centered,
  rich sense language, then inferring a full range of sensory
  experience from limited data using previous experience gained from
  free play.
  1.3231 +
  1.3232 +  As a minor digression, you also saw how I used =CORTEX= to enable a
  1.3233 +  tiny worm to discover the topology of its skin simply by rolling on
  1.3234 +  the ground. 
  1.3235 +
  1.3236 +  In conclusion, the main contributions of this thesis are:
  1.3237 +
  1.3238 +  - =CORTEX=, a system for creating simulated creatures with rich
  1.3239 +    senses.
  1.3240 +  - =EMPATH=, a program for recognizing actions by imagining sensory
  1.3241 +    experience. 
  1.3242 +
  1.3243 +# An anatomical joke:
  1.3244 +# - Training
  1.3245 +# - Skeletal imitation
  1.3246 +# - Sensory fleshing-out
  1.3247 +# - Classification
  1.3248 +#+BEGIN_LaTeX
  1.3249 +\appendix
  1.3250 +#+END_LaTeX
  1.3251 +* Appendix: =CORTEX= User Guide
  1.3252 +
  Those who write a thesis should endeavor to make their code not only
  accessible, but actually usable, as a way to pay back the community
  that made the thesis possible in the first place. This thesis would
  not be possible without Free Software such as jMonkeyEngine3,
  Blender, Clojure, Emacs, ffmpeg, and many other tools. That is why I
  1.3258 +  have included this user guide, in the hope that someone else might
  1.3259 +  find =CORTEX= useful.
  1.3260 +
  1.3261 +** Obtaining =CORTEX= 
  1.3262 +
   You can get =CORTEX= from its Mercurial repository at
  1.3264 +   http://hg.bortreb.com/cortex. You may also download =CORTEX=
  1.3265 +   releases at http://aurellem.org/cortex/releases/. As a condition of
  1.3266 +   making this thesis, I have also provided Professor Winston the
  1.3267 +   =CORTEX= source, and he knows how to run the demos and get started.
  1.3268 +   You may also email me at =cortex@aurellem.org= and I may help where
  1.3269 +   I can.
  1.3270 +
  1.3271 +** Running =CORTEX= 
  1.3272 +   
  1.3273 +   =CORTEX= comes with README and INSTALL files that will guide you
   through installation and running the test suite. In particular you
   should look at =cortex.test=, which contains test suites that run
   through all senses and multiple creatures.
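
   As a sketch, you might run these tests from a REPL started in the
   =CORTEX= project directory. The exact organization of the test
   namespaces may differ from what is shown here; check the README.

   #+BEGIN_SRC clojure
;; Hypothetical REPL session -- adjust the namespace names to match
;; what you find in the CORTEX source tree.
(require 'clojure.test)
(require 'cortex.test)
;; run every loaded test namespace whose name starts with cortex.test
(clojure.test/run-all-tests #"cortex\.test.*")
   #+END_SRC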
  1.3277 +
  1.3278 +** Creating creatures
  1.3279 +
  1.3280 +   Creatures are created using /Blender/, a free 3D modeling program.
  1.3281 +   You will need Blender version 2.6 when using the =CORTEX= included
   in this thesis. You create a =CORTEX= creature in a similar manner
  1.3283 +   to modeling anything in Blender, except that you also create
  1.3284 +   several trees of empty nodes which define the creature's senses.
  1.3285 +
  1.3286 +*** Mass 
  1.3287 +    
  1.3288 +    To give an object mass in =CORTEX=, add a ``mass'' metadata label
  1.3289 +    to the object with the mass in jMonkeyEngine units. Note that
  1.3290 +    setting the mass to 0 causes the object to be immovable.
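
    For example, a hypothetical segment weighing one jMonkeyEngine unit
    would carry the following value under the ``mass'' label:

    #+BEGIN_EXAMPLE
    1.0
    #+END_EXAMPLE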
  1.3291 +
  1.3292 +*** Joints
  1.3293 +
  1.3294 +    Joints are created by creating an empty node named =joints= and
  1.3295 +    then creating any number of empty child nodes to represent your
  1.3296 +    creature's joints. The joint will automatically connect the
  1.3297 +    closest two physical objects. It will help to set the empty node's
  1.3298 +    display mode to ``Arrows'' so that you can clearly see the
  1.3299 +    direction of the axes.
  1.3300 +   
  1.3301 +    Joint nodes should have the following metadata under the ``joint''
  1.3302 +    label:
  1.3303 +
  1.3304 +    #+BEGIN_SRC clojure
  1.3305 +;; ONE OF the following, under the label "joint":
  1.3306 +{:type :point}
  1.3307 +
  1.3308 +;; OR
  1.3309 +
  1.3310 +{:type :hinge
  1.3311 + :limit [<limit-low> <limit-high>]
  1.3312 + :axis (Vector3f. <x> <y> <z>)}
  1.3313 +;;(:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
  1.3314 +
  1.3315 +;; OR
  1.3316 +
  1.3317 +{:type :cone
  1.3318 + :limit-xz <lim-xz>
  1.3319 + :limit-xy <lim-xy>
  1.3320 + :twist    <lim-twist>}   ;(use XZY rotation mode in blender!)
  1.3321 +    #+END_SRC
  1.3322 +
  1.3323 +*** Eyes
  1.3324 +
  1.3325 +    Eyes are created by creating an empty node named =eyes= and then
  1.3326 +    creating any number of empty child nodes to represent your
  1.3327 +    creature's eyes.
  1.3328 +
  1.3329 +    Eye nodes should have the following metadata under the ``eye''
  1.3330 +    label:
  1.3331 +
  1.3332 +#+BEGIN_SRC clojure
  1.3333 +{:red    <red-retina-definition>
  1.3334 + :blue   <blue-retina-definition>
  1.3335 + :green  <green-retina-definition>
  1.3336 + :all    <all-retina-definition>
  1.3337 + (<0xrrggbb> <custom-retina-image>)...
  1.3338 +}
  1.3339 +#+END_SRC
  1.3340 +
  1.3341 +    Any of the color channels may be omitted. You may also include
  1.3342 +    your own color selectors, and in fact :red is equivalent to
  1.3343 +    0xFF0000 and so forth. The eye will be placed at the same position
    as the empty node and will bind to the nearest physical object.
  1.3345 +    The eye will point outward from the X-axis of the node, and ``up''
  1.3346 +    will be in the direction of the X-axis of the node. It will help
  1.3347 +    to set the empty node's display mode to ``Arrows'' so that you can
  1.3348 +    clearly see the direction of the axes.
  1.3349 +
    Each retina file should contain white pixels wherever you want to be
  1.3351 +    sensitive to your chosen color. If you want the entire field of
  1.3352 +    view, specify :all of 0xFFFFFF and a retinal map that is entirely
  1.3353 +    white. 
  1.3354 +
  1.3355 +    Here is a sample retinal map:
  1.3356 +
  1.3357 +    #+caption: An example retinal profile image. White pixels are 
  1.3358 +    #+caption: photo-sensitive elements. The distribution of white 
  1.3359 +    #+caption: pixels is denser in the middle and falls off at the 
  1.3360 +    #+caption: edges and is inspired by the human retina.
  1.3361 +    #+name: retina
  1.3362 +    #+ATTR_LaTeX: :width 7cm :placement [H]
  1.3363 +    [[./images/retina-small.png]]
  1.3364 +
  1.3365 +*** Hearing
  1.3366 +
  1.3367 +    Ears are created by creating an empty node named =ears= and then
  1.3368 +    creating any number of empty child nodes to represent your
  1.3369 +    creature's ears. 
  1.3370 +
  1.3371 +    Ear nodes do not require any metadata.
  1.3372 +
  1.3373 +    The ear will bind to and follow the closest physical node.
  1.3374 +
  1.3375 +*** Touch
  1.3376 +
  1.3377 +    Touch is handled similarly to mass. To make a particular object
  1.3378 +    touch sensitive, add metadata of the following form under the
  1.3379 +    object's ``touch'' metadata field:
  1.3380 +    
  1.3381 +    #+BEGIN_EXAMPLE
  1.3382 +    <touch-UV-map-file-name>    
  1.3383 +    #+END_EXAMPLE
  1.3384 +
  1.3385 +    You may also include an optional ``scale'' metadata number to
    specify the length of the touch feelers. The default is $0.1$,
  1.3387 +    and this is generally sufficient.
  1.3388 +
  1.3389 +    The touch UV should contain white pixels for each touch sensor.
  1.3390 +
  1.3391 +    Here is an example touch-uv map that approximates a human finger,
  1.3392 +    and its corresponding model.
  1.3393 +
  1.3394 +    #+caption: This is the tactile-sensor-profile for the upper segment 
  1.3395 +    #+caption: of a fingertip. It defines regions of high touch sensitivity 
  1.3396 +    #+caption: (where there are many white pixels) and regions of low 
  1.3397 +    #+caption: sensitivity (where white pixels are sparse).
  1.3398 +    #+name: guide-fingertip-UV
  1.3399 +    #+ATTR_LaTeX: :width 9cm :placement [H]
  1.3400 +    [[./images/finger-UV.png]]
  1.3401 +
    #+caption: The fingertip UV-image from above applied to a simple
  1.3403 +    #+caption: model of a fingertip.
  1.3404 +    #+name: guide-fingertip
  1.3405 +    #+ATTR_LaTeX: :width 9cm :placement [H]
  1.3406 +    [[./images/finger-2.png]]
  1.3407 +
*** Proprioception
  1.3409 +
  1.3410 +    Proprioception is tied to each joint node -- nothing special must
  1.3411 +    be done in a blender model to enable proprioception other than
  1.3412 +    creating joint nodes.
  1.3413 +
  1.3414 +*** Muscles
  1.3415 +
  1.3416 +    Muscles are created by creating an empty node named =muscles= and
  1.3417 +    then creating any number of empty child nodes to represent your
  1.3418 +    creature's muscles.
  1.3419 +
  1.3420 +    
  1.3421 +    Muscle nodes should have the following metadata under the
  1.3422 +    ``muscle'' label:
  1.3423 +
  1.3424 +    #+BEGIN_EXAMPLE
  1.3425 +    <muscle-profile-file-name>
  1.3426 +    #+END_EXAMPLE
  1.3427 +
  1.3428 +    Muscles should also have a ``strength'' metadata entry describing
  1.3429 +    the muscle's total strength at full activation. 
  1.3430 +
  1.3431 +    Muscle profiles are simple images that contain the relative amount
  1.3432 +    of muscle power in each simulated alpha motor neuron. The width of
    the image is the total size of the motor pool, and the redness of
    each pixel is the relative power of the corresponding motor neuron.
  1.3435 +
  1.3436 +    While the profile image can have any dimensions, only the first
  1.3437 +    line of pixels is used to define the muscle. Here is a sample
  1.3438 +    muscle profile image that defines a human-like muscle.
  1.3439 +
  1.3440 +    #+caption: A muscle profile image that describes the strengths
  1.3441 +    #+caption: of each motor neuron in a muscle. White is weakest 
  1.3442 +    #+caption: and dark red is strongest. This particular pattern 
  1.3443 +    #+caption: has weaker motor neurons at the beginning, just 
  1.3444 +    #+caption: like human muscle.
  1.3445 +    #+name: muscle-recruit
  1.3446 +    #+ATTR_LaTeX: :width 7cm :placement [H]
  1.3447 +    [[./images/basic-muscle.png]]
  1.3448 +    
  1.3449 +    Muscles twist the nearest physical object about the muscle node's
  1.3450 +    Z-axis. I recommend using the ``Single Arrow'' display mode for
  1.3451 +    muscles and using the right hand rule to determine which way the
  1.3452 +    muscle will twist. To make a segment that can twist in multiple
  1.3453 +    directions, create multiple, differently aligned muscles.
  1.3454 +
  1.3455 +** =CORTEX= API
  1.3456 +
   These are some of the functions exposed by =CORTEX= for creating
   worlds and simulating creatures. They are in addition to
   jMonkeyEngine3's extensive library, which is documented elsewhere.
   A short usage sketch follows the list of simulation functions
   below.
  1.3460 +
  1.3461 +*** Simulation
  1.3462 +   - =(world root-node key-map setup-fn update-fn)= :: create
  1.3463 +        a simulation.
  1.3464 +     - /root-node/     :: a =com.jme3.scene.Node= object which
  1.3465 +          contains all of the objects that should be in the
  1.3466 +          simulation.
  1.3467 +
  1.3468 +     - /key-map/       :: a map from strings describing keys to
  1.3469 +          functions that should be executed whenever that key is
          pressed. The functions should take a SimpleApplication
  1.3471 +          object and a boolean value. The SimpleApplication is the
  1.3472 +          current simulation that is running, and the boolean is true
  1.3473 +          if the key is being pressed, and false if it is being
  1.3474 +          released. As an example,
  1.3475 +          #+BEGIN_SRC clojure
  1.3476 +       {"key-j" (fn [game value] (if value (println "key j pressed")))}		  
  1.3477 +	  #+END_SRC
  1.3478 +	  is a valid key-map which will cause the simulation to print
  1.3479 +          a message whenever the 'j' key on the keyboard is pressed.
  1.3480 +
  1.3481 +     - /setup-fn/      :: a function that takes a =SimpleApplication=
  1.3482 +          object. It is called once when initializing the simulation.
  1.3483 +          Use it to create things like lights, change the gravity,
  1.3484 +          initialize debug nodes, etc.
  1.3485 +
  1.3486 +     - /update-fn/     :: this function takes a =SimpleApplication=
  1.3487 +          object and a float and is called every frame of the
          simulation. The float tells how many seconds it has been
          since the last frame was rendered, according to whatever
          clock jMonkeyEngine is currently using. The default is to use
          =IsoTimer=, which will result in this value always being the
          same.
  1.3492 +
  1.3493 +   - =(position-camera world position rotation)= :: set the position
  1.3494 +        of the simulation's main camera.
  1.3495 +
  1.3496 +   - =(enable-debug world)= :: turn on debug wireframes for each
  1.3497 +        simulated object.
  1.3498 +
  1.3499 +   - =(set-gravity world gravity)= :: set the gravity of a running
  1.3500 +        simulation.
  1.3501 +
  1.3502 +   - =(box length width height & {options})= :: create a box in the
  1.3503 +        simulation. Options is a hash map specifying texture, mass,
  1.3504 +        etc. Possible options are =:name=, =:color=, =:mass=,
  1.3505 +        =:friction=, =:texture=, =:material=, =:position=,
  1.3506 +        =:rotation=, =:shape=, and =:physical?=.
  1.3507 +
  1.3508 +   - =(sphere radius & {options})= :: create a sphere in the simulation.
  1.3509 +        Options are the same as in =box=.
  1.3510 +
   - =(load-blender-model file-name)= :: create a node structure
        representing the scene described in a Blender file.
  1.3513 +
   - =(light-up-everything world)= :: distribute a standard complement
        of lights throughout the simulation. Should be adequate for
        most purposes.
  1.3517 +
   - =(node-seq node)= :: return a recursive list of the node's
  1.3519 +        children.
  1.3520 +
  1.3521 +   - =(nodify name children)= :: construct a node given a node-name and
  1.3522 +        desired children.
  1.3523 +
  1.3524 +   - =(add-element world element)= :: add an object to a running world
  1.3525 +        simulation.
  1.3526 +
  1.3527 +   - =(set-accuracy world accuracy)= :: change the accuracy of the
  1.3528 +        world's physics simulator.
  1.3529 +
  1.3530 +   - =(asset-manager)= :: get an /AssetManager/, a jMonkeyEngine
  1.3531 +        construct that is useful for loading textures and is required
  1.3532 +        for smooth interaction with jMonkeyEngine library functions.
  1.3533 +
   - =(load-bullet)=   :: unpack native libraries and initialize the
        Bullet physics engine. This function must be called before the
        other world-building functions.
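
   Putting these together, here is a minimal sketch (not taken from the
   =CORTEX= test suite) of building and starting a simple world. The
   shapes, positions, camera placement, and key binding are
   illustrative only; =run-world= is the helper used in the earlier
   worm listings.

   #+BEGIN_SRC clojure
;; A minimal sketch, not from the CORTEX sources; the objects and
;; parameters below are illustrative only.
(import 'com.jme3.math.Vector3f 'com.jme3.math.Quaternion)

(load-bullet)                                ; native libraries first

(def floor (box 10 0.5 10 :mass 0 :name "floor"))        ; immovable
(def ball  (box 0.5 0.5 0.5 :mass 1 :name "ball"
                :position (Vector3f. 0 5 0)))            ; falls down

(def my-world
  (world (nodify "root" [floor ball])        ; root-node
         {"key-j" (fn [game value]           ; key-map
                    (if value (println "key j pressed")))}
         (fn [world]                         ; setup-fn, run once
           (light-up-everything world)
           (position-camera world (Vector3f. 0 5 15) (Quaternion.)))
         (fn [world tpf])))                  ; update-fn, no-op

(run-world my-world)  ; run-world as used in the earlier worm listings
   #+END_SRC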
  1.3537 +	
  1.3538 +*** Creature Manipulation / Import
  1.3539 +
  1.3540 +   - =(body! creature)= :: give the creature a physical body.
  1.3541 +
  1.3542 +   - =(vision! creature)= :: give the creature a sense of vision.
  1.3543 +        Returns a list of functions which will each, when called
  1.3544 +        during a simulation, return the vision data for the channel of
  1.3545 +        one of the eyes. The functions are ordered depending on the
  1.3546 +        alphabetical order of the names of the eye nodes in the
  1.3547 +        blender file. The data returned by the functions is a vector
  1.3548 +        containing the eye's /topology/, a vector of coordinates, and
  1.3549 +        the eye's /data/, a vector of RGB values filtered by the eye's
  1.3550 +        sensitivity. 
  1.3551 +
  1.3552 +   - =(hearing! creature)= :: give the creature a sense of hearing.
  1.3553 +        Returns a list of functions, one for each ear, that when
  1.3554 +        called will return a frame's worth of hearing data for that
  1.3555 +        ear. The functions are ordered depending on the alphabetical
  1.3556 +        order of the names of the ear nodes in the blender file. The
        data returned by the functions is an array of PCM-encoded WAV
        data.
  1.3559 +
  1.3560 +   - =(touch! creature)= :: give the creature a sense of touch. Returns
  1.3561 +        a single function that must be called with the /root node/ of
        the world, and which will return a vector of /touch-data/,
        one entry for each touch-sensitive component, each entry of
  1.3564 +        which contains a /topology/ that specifies the distribution of
  1.3565 +        touch sensors, and the /data/, which is a vector of
  1.3566 +        =[activation, length]= pairs for each touch hair.
  1.3567 +
  1.3568 +   - =(proprioception! creature)= :: give the creature the sense of
  1.3569 +        proprioception. Returns a list of functions, one for each
  1.3570 +        joint, that when called during a running simulation will
        report the =[heading, pitch, roll]= of the joint.
  1.3572 +
  1.3573 +   - =(movement! creature)= :: give the creature the power of movement.
  1.3574 +        Creates a list of functions, one for each muscle, that when
  1.3575 +        called with an integer, will set the recruitment of that
  1.3576 +        muscle to that integer, and will report the current power
  1.3577 +        being exerted by the muscle. Order of muscles is determined by
  1.3578 +        the alphabetical sort order of the names of the muscle nodes.
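
   As a sketch of how these functions fit together (the file name
   =worm.blend= is hypothetical; substitute your own Blender model):

   #+BEGIN_SRC clojure
;; Sketch only: load a creature and equip it with senses. The file
;; name "worm.blend" is illustrative, not part of the CORTEX sources.
(def creature (load-blender-model "worm.blend"))

(body! creature)                              ; give it a physical body
(def vision-fns  (vision! creature))          ; one fn per eye channel
(def hearing-fns (hearing! creature))         ; one fn per ear
(def touch-fn    (touch! creature))           ; call with the world's root node
(def proprio-fns (proprioception! creature))  ; one fn per joint
(def muscle-fns  (movement! creature))        ; one fn per muscle

;; During a running simulation, e.g. inside an update-fn, the senses
;; can be sampled like this:
;;   (map #(%) vision-fns)        ; vision data for each eye channel
;;   (touch-fn root-node)         ; touch data for each sensitive part
;;   ((first muscle-fns) 20)      ; recruit 20 motor neurons of muscle 0
   #+END_SRC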
  1.3579 +	
  1.3580 +*** Visualization/Debug
  1.3581 +
  1.3582 +   - =(view-vision)= :: create a function that when called with a list
  1.3583 +        of visual data returned from the functions made by =vision!=, 
  1.3584 +        will display that visual data on the screen.
  1.3585 +
  1.3586 +   - =(view-hearing)= :: same as =view-vision= but for hearing.
  1.3587 +
  1.3588 +   - =(view-touch)= :: same as =view-vision= but for touch.
  1.3589 +
  1.3590 +   - =(view-proprioception)= :: same as =view-vision= but for
  1.3591 +        proprioception.
  1.3592 +
   - =(view-movement)= :: same as =view-vision= but for muscle
        movement data.
  1.3595 +
  1.3596 +   - =(view anything)= :: =view= is a polymorphic function that allows
  1.3597 +        you to inspect almost anything you could reasonably expect to
  1.3598 +        be able to ``see'' in =CORTEX=.
  1.3599 +
  1.3600 +   - =(text anything)= :: =text= is a polymorphic function that allows
  1.3601 +        you to convert practically anything into a text string.	
  1.3602 +
   - =(println-repl anything)= :: print messages to Clojure's REPL
  1.3604 +        instead of the simulation's terminal window.
  1.3605 +
  1.3606 +   - =(mega-import-jme3)= :: for experimenting at the REPL. This
  1.3607 +        function will import all jMonkeyEngine3 classes for immediate
  1.3608 +        use.
  1.3609 +
  1.3610 +   - =(display-dialated-time world timer)= :: Shows the time as it is
  1.3611 +        flowing in the simulation on a HUD display.
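
   As a brief sketch, the viewers can be combined with the sense
   functions inside an update-fn. The names =vision-fns= and =touch-fn=
   below refer to the hypothetical bindings from the creature-import
   sketch above.

   #+BEGIN_SRC clojure
;; Sketch: display vision and touch data every frame. Assumes the
;; hypothetical vision-fns and touch-fn bindings from the previous
;; sketch. The viewers are created once, outside the update-fn.
(def display-vision (view-vision))
(def display-touch  (view-touch))

(defn debug-update-fn [world tpf]
  (display-vision (map #(%) vision-fns))
  (display-touch  (touch-fn (.getRootNode world))))
   #+END_SRC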
  1.3612 +
  1.3613 +
  1.3614 +