\section{Empathy \& Embodiment: problem solving strategies}
\label{sec-1}

By the time you have read this thesis, you will understand a novel approach to representing and recognizing physical actions using embodiment and empathy. You will also see one way to efficiently implement physical empathy for embodied creatures. Finally, you will become familiar with \texttt{CORTEX}, a system for designing and simulating creatures with rich senses, which I have designed as a library that you can use in your own research. Note that I \emph{do not} process video directly --- I start with knowledge of the positions of a creature's body parts and work from there.

This is the core vision of my thesis: that one of the important ways in which we understand others is by imagining ourselves in their position and empathically feeling experiences relative to our own bodies. By understanding events in terms of our own previous corporeal experience, we greatly constrain the possibilities of what would otherwise be an unwieldy exponential search. This extra constraint can be the difference between easily understanding what is happening in a video and being completely lost in a sea of incomprehensible color and movement.

\subsection{The problem: recognizing actions is hard!}
\label{sec-1-1}

Examine figure \ref{cat-drink}. What is happening? As you, and indeed very young children, can easily determine, this is an image of drinking.

\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{./images/cat-drinking.jpg}
\caption{\label{cat-drink}A cat drinking some water. Identifying this action is beyond the capabilities of existing computer vision systems.}
\end{figure}

Nevertheless, it is beyond the state of the art for a computer vision program to describe what's happening in this image. Part of the problem is that many computer vision systems focus on pixel-level details or comparisons to example images (such as \cite{volume-action-recognition}), but the 3D world is so variable that it is hard to describe the world in terms of possible images.

In fact, the contents of a scene may have much less to do with pixel probabilities than with recognizing various affordances: things you can move, objects you can grasp, spaces that can be filled. For example, what processes might enable you to see the chair in figure \ref{hidden-chair}?

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/fat-person-sitting-at-desk.jpg}
\caption{\label{hidden-chair}The chair in this image is quite obvious to humans, but it can't be found by any modern computer vision program.}
\end{figure}

Finally, how is it that you can easily tell the difference between how the girl's \emph{muscles} are working in figure \ref{girl}?

\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{./images/wall-push.png}
\caption{\label{girl}The mysterious ``common sense'' appears here as you are able to discern the difference in how the girl's arm muscles are activated between the two images. When you compare these two images, do you feel something in your own arm muscles?}
\end{figure}

Each of these examples tells us something about what might be going on in our minds as we easily solve these recognition problems:

\begin{itemize}
\item The hidden chair shows us that we are strongly triggered by cues relating to the position of human bodies, and that we can determine the overall physical configuration of a human body even if much of that body is occluded.

\item The picture of the girl pushing against the wall tells us that we have common sense knowledge about the kinetics of our own bodies. We know well how our muscles would have to work to maintain us in most positions, and we can easily project this self-knowledge to imagined positions triggered by images of the human body.

\item The cat tells us that imagination of some kind plays an important role in understanding actions. The question is: can we be more precise about what sort of imagination is required to understand these actions?
\end{itemize}

\subsection{A step forward: the sensorimotor-centered approach}
\label{sec-1-2}

In this thesis, I explore the idea that our knowledge of our own bodies, combined with our own rich senses, enables us to recognize the actions of others.

For example, I think humans are able to label the cat video as ``drinking'' because they imagine \emph{themselves} as the cat, and imagine putting their face up against a stream of water and sticking out their tongue. In that imagined world, they can feel the cool water hitting their tongue, and feel the water entering their body, and are able to recognize that \emph{feeling} as drinking. So, the label of the action is not really in the pixels of the image, but is found clearly in a simulation / recollection inspired by those pixels. An imaginative system, having been trained on drinking and non-drinking examples and learning that the most important component of drinking is the feeling of water flowing down one's throat, would analyze a video of a cat drinking in the following manner:

\begin{enumerate}
\item Create a physical model of the video by putting a ``fuzzy'' model of its own body in place of the cat. Possibly also create a simulation of the stream of water.

\item Play out this simulated scene and generate imagined sensory experience. This will include relevant muscle contractions, a close up view of the stream from the cat's perspective, and most importantly, the imagined feeling of water entering the mouth. The imagined sensory experience can come from a simulation of the event, but can also be pattern-matched from previous, similar embodied experience.

\item The action is now easily identified as drinking by the sense of taste alone. The other senses (such as the tongue moving in and out) help to give plausibility to the simulated action. Note that the sense of vision, while critical in creating the simulation, is not critical for identifying the action from the simulation.
\end{enumerate}

For the chair examples, the process is even easier:

\begin{enumerate}
\item Align a model of your body to the person in the image.

\item Generate proprioceptive sensory data from this alignment.

\item Use the imagined proprioceptive data as a key to lookup related sensory experience associated with that particular proprioceptive feeling.

\item Retrieve the feeling of your bottom resting on a surface, your knees bent, and your leg muscles relaxed.

\item This sensory information is consistent with your \texttt{sitting?} sensory predicate, so you (and the entity in the image) must be sitting.

\item There must be a chair-like object since you are sitting.
\end{enumerate}

Empathy offers yet another alternative to the age-old AI representation question: ``What is a chair?'' --- A chair is the feeling of sitting!
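To make this concrete, here is a hypothetical sketch of what such a \texttt{sitting?} predicate might look like in the body-centered language developed later in this thesis. The helper predicates (\texttt{bottom-touching-surface?}, \texttt{knees-bent?}, and \texttt{legs-relaxed?}) are assumptions introduced purely for illustration; they are not part of \texttt{CORTEX}.

\begin{verbatim}
;; Hypothetical sketch only: the helper predicates used here are
;; illustrative assumptions, not part of CORTEX.
(defn sitting?
  "Does the most recent experience feel like sitting?"
  [experiences]
  (let [now (peek experiences)]
    (and (bottom-touching-surface? now)
         (knees-bent? now)
         (legs-relaxed? now))))
\end{verbatim}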
One powerful advantage of empathic problem solving is that it factors the action recognition problem into two easier problems. To use empathy, you need an \emph{aligner}, which takes the video and a model of your body, and aligns the model with the video. Then, you need a \emph{recognizer}, which uses the aligned model to interpret the action. The power in this method lies in the fact that you describe all actions from a body-centered viewpoint. You are less tied to the particulars of any visual representation of the actions. If you teach the system what ``running'' is, and you have a good enough aligner, the system will from then on be able to recognize running from any point of view -- even strange points of view like above or underneath the runner. This is in contrast to action recognition schemes that try to identify actions using a non-embodied approach. If these systems learn about running as viewed from the side, they will not automatically be able to recognize running from any other viewpoint.

Another powerful advantage is that using the language of multiple body-centered rich senses to describe body-centered actions offers a massive boost in descriptive capability. Consider how difficult it would be to compose a set of HOG (Histogram of Oriented Gradients) filters to describe the action of a simple worm-creature ``curling'' so that its head touches its tail, and then behold the simplicity of describing this action in a language designed for the task (listing \ref{grand-circle-intro}):

\begin{listing}
\begin{verbatim}
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip head-touch))))))
\end{verbatim}
\caption{\label{grand-circle-intro}Body-centered actions are best expressed in a body-centered language. This code detects when the worm has curled into a full circle. Imagine how you would replicate this functionality using low-level pixel features such as HOG filters!}
\end{listing}

\subsection{\texttt{EMPATH} recognizes actions using empathy}
\label{sec-1-3}

Exploring these ideas further demands a concrete implementation, so first, I built a system for constructing virtual creatures with physiologically plausible sensorimotor systems and detailed environments. The result is \texttt{CORTEX}, which I describe in chapter \ref{sec-2}.

Next, I wrote routines which enabled a simple worm-like creature to infer the actions of a second worm-like creature, using only its own prior sensorimotor experiences and knowledge of the second worm's joint positions. This program, \texttt{EMPATH}, is described in chapter \ref{sec-3}. Its main components are:

\begin{description}
\item[{Embodied Action Definitions}] Many otherwise complicated actions are easily described in the language of a full suite of body-centered, rich senses and experiences. For example, drinking is the feeling of water flowing down your throat, and cooling your insides. It's often accompanied by bringing your hand close to your face, or bringing your face close to water. Sitting down is the feeling of bending your knees, activating your quadriceps, then feeling a surface with your bottom and relaxing your legs. These body-centered action descriptions can be either learned or hard coded.

\item[{Guided Play}] The creature moves around and experiences the world through its unique perspective. As the creature moves, it gathers experiences that satisfy the embodied action definitions.

\item[{Posture Imitation}] When trying to interpret a video or image, the creature takes a model of itself and aligns it with whatever it sees. This alignment might even cross species, as when humans try to align themselves with things like ponies, dogs, or other humans with a different body type.

\item[{Empathy}] The alignment triggers associations with sensory data from prior experiences. For example, the alignment itself easily maps to proprioceptive data. Any sounds or obvious skin contact in the video can to a lesser extent trigger previous experience keyed to hearing or touch. Segments of previous experiences gained from play are stitched together to form a coherent and complete sensory portrait of the scene.

\item[{Recognition}] With the scene described in terms of remembered first person sensory events, the creature can now run its action-definition programs (such as the one in listing \ref{grand-circle-intro}) on this synthesized sensory data, just as it would if it were actually experiencing the scene first-hand. If previous experience has been accurately retrieved, and if it is analogous enough to the scene, then the creature will correctly identify the action in the scene.
\end{description}

My program \texttt{EMPATH} uses this empathic problem solving technique to interpret the actions of a simple, worm-like creature.

\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{./images/worm-intro-white.png}
\caption{\label{worm-intro}The worm performs many actions during free play such as curling, wiggling, and resting.}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{./images/worm-poses.png}
\caption{\label{worm-recognition-intro}\texttt{EMPATH} recognized and classified each of these poses by inferring the complete sensory experience from proprioceptive data.}
\end{figure}

\subsubsection{Main Results}
\label{sec-1-3-1}

\begin{itemize}
\item After one-shot supervised training, \texttt{EMPATH} was able to recognize a wide variety of static poses and dynamic actions --- ranging from curling in a circle to wiggling with a particular frequency --- with 95\% accuracy.

\item These results were completely independent of viewing angle because the underlying body-centered language is itself viewpoint-independent; once an action is learned, it can be recognized equally well from any viewing angle.

\item \texttt{EMPATH} is surprisingly short; the sensorimotor-centered language provided by \texttt{CORTEX} resulted in extremely economical recognition routines --- about 500 lines in all --- suggesting that such representations are very powerful, and often indispensable for the types of recognition tasks considered here.

\item For expediency's sake, I relied on direct knowledge of joint positions in this proof of concept. However, I believe that the structure of \texttt{EMPATH} and \texttt{CORTEX} will make future work to enable video analysis much easier than it would otherwise be.
\end{itemize}

\subsection{\texttt{EMPATH} is built on \texttt{CORTEX}, a creature builder.}
\label{sec-1-4}

I built \texttt{CORTEX} to be a general AI research platform for doing experiments involving multiple rich senses and a wide variety and number of creatures. I intend it to be useful as a library for many more projects than just this thesis. \texttt{CORTEX} was necessary to meet a need among AI researchers at CSAIL and beyond, which is that people often invent wonderful ideas that are best expressed in the language of creatures and senses, but in order to explore those ideas they must first build a platform in which they can create simulated creatures with rich senses! There are many ideas that would be simple to execute (such as \texttt{EMPATH} or Larson's self-organizing maps (\cite{larson-symbols})), but attached to them is the multi-month effort to make a good creature simulator. Often, that initial investment of time proves to be too much, and the project must make do with a lesser environment or be abandoned entirely.

\texttt{CORTEX} is well suited as an environment for embodied AI research for three reasons:

\begin{itemize}
\item You can design new creatures using Blender (\cite{blender}), a popular, free 3D modeling program. Each sense can be specified using special Blender nodes with biologically inspired parameters. You need not write any code to create a creature, and can use a wide library of pre-existing Blender models as a base for your own creatures.

\item \texttt{CORTEX} implements a wide variety of senses: touch, proprioception, vision, hearing, and muscle tension. Complicated senses like touch and vision involve multiple sensory elements embedded in a 2D surface. You have complete control over the distribution of these sensor elements through the use of simple image files. \texttt{CORTEX} implements more comprehensive hearing than any other creature simulation system available.

\item \texttt{CORTEX} supports any number of creatures and any number of senses. Time in \texttt{CORTEX} dilates so that the simulated creatures always perceive a perfectly smooth flow of time, regardless of the actual computational load.
\end{itemize}

\texttt{CORTEX} is built on top of \texttt{jMonkeyEngine3} (\cite{jmonkeyengine}), which is a video game engine designed to create cross-platform 3D desktop games. \texttt{CORTEX} is mainly written in clojure, a dialect of \texttt{LISP} that runs on the Java Virtual Machine (JVM). The API for creating and simulating creatures and senses is entirely expressed in clojure, though many senses are implemented at the layer of jMonkeyEngine or below. For example, for the sense of hearing I use a layer of clojure code on top of a layer of Java JNI bindings that drive a layer of \texttt{C++} code which implements a modified version of \texttt{OpenAL} to support multiple listeners. \texttt{CORTEX} is the only simulation environment that I know of that can support multiple entities that can each hear the world from their own perspective. Other senses also require a small layer of Java code. \texttt{CORTEX} also uses \texttt{bullet}, a physics simulator written in \texttt{C++}.

\begin{figure}[htb]
\centering
\includegraphics[width=12cm]{./images/blender-worm.png}
\caption{\label{worm-recognition-intro-2}Here is the worm from figure \ref{worm-intro} modeled in Blender, a free 3D-modeling program. Senses and joints are described using special nodes in Blender.}
\end{figure}

Here are some things I anticipate that \texttt{CORTEX} might be used for:

\begin{itemize}
\item exploring new ideas about sensory integration
\item distributed communication among swarm creatures
\item self-learning using free exploration
\item evolutionary algorithms involving creature construction
\item exploration of exotic senses and effectors that are not possible in the real world (such as telekinesis or a semantic sense)
\item imagination using subworlds
\end{itemize}

During one test with \texttt{CORTEX}, I created 3,000 creatures each with its own independent senses and ran them all at only 1/80 real time. In another test, I created a detailed model of my own hand, equipped with a realistic distribution of touch (more sensitive at the fingertips), as well as eyes and ears, and it ran at around 1/4 real time.

\begin{sidewaysfigure}
\includegraphics[width=8.5in]{images/full-hand.png}
\caption{
I modeled my own right hand in Blender and rigged it with all the senses that {\tt CORTEX} supports. My simulated hand has a biologically inspired distribution of touch sensors. The senses are displayed on the right (the red/black squares are raw sensory output), and the simulation is displayed on the left. Notice that my hand is curling its fingers, that it can see its own finger from the eye in its palm, and that it can feel its own thumb touching its palm.}
\end{sidewaysfigure}

\section{Designing \texttt{CORTEX}}
\label{sec-2}

In this chapter, I outline the design decisions that went into making \texttt{CORTEX}, along with some details about its implementation. (A practical guide to getting started with \texttt{CORTEX}, which skips over the history and implementation details presented here, is provided in an appendix at the end of this thesis.)

Throughout this project, I intended for \texttt{CORTEX} to be flexible and extensible enough to be useful for other researchers who want to test ideas of their own. To this end, wherever I have had to make architectural choices about \texttt{CORTEX}, I have chosen to give as much freedom to the user as possible, so that \texttt{CORTEX} may be used for things I have not foreseen.

\subsection{Building in simulation versus reality}
\label{sec-2-1}

The most important architectural decision of all is the choice to use a computer-simulated environment in the first place! The world is a vast and rich place, and for now simulations are a very poor reflection of its complexity. It may be that there is a significant qualitative difference between dealing with senses in the real world and dealing with pale facsimiles of them in a simulation (\cite{brooks-representation}). What are the advantages and disadvantages of a simulation vs. reality?

\subsubsection{Simulation}
\label{sec-2-1-1}

The advantages of virtual reality are that when everything is a simulation, experiments in that simulation are absolutely reproducible. It's also easier to change the creature and environment to explore new situations and different sensory combinations.

If the world is to be simulated on a computer, then not only do you have to worry about whether the creature's senses are rich enough to learn from the world, but whether the world itself is rendered with enough detail and realism to give enough working material to the creature's senses. To name just a few difficulties facing modern physics simulators: destructibility of the environment, simulation of water/other fluids, large areas, nonrigid bodies, lots of objects, smoke. I don't know of any computer simulation that would allow a creature to take a rock and grind it into fine dust, then use that dust to make a clay sculpture, at least not without spending years calculating the interactions of every single small grain of dust. Maybe a simulated world with today's limitations doesn't provide enough richness for real intelligence to evolve.

\subsubsection{Reality}
\label{sec-2-1-2}

The other approach for playing with senses is to hook your software up to real cameras, microphones, robots, etc., and let it loose in the real world. This has the advantage of eliminating concerns about simulating the world at the expense of increasing the complexity of implementing the senses. Instead of just grabbing the current rendered frame for processing, you have to use an actual camera with real lenses and interact with photons to get an image. It is much harder to change the creature, which is now partly a physical robot of some sort, since doing so involves changing things around in the real world instead of modifying lines of code. While the real world is very rich and definitely provides enough stimulation for intelligence to develop (as evidenced by our own existence), it is also uncontrollable in the sense that a particular situation cannot be recreated perfectly or saved for later use. It is harder to conduct science because it is harder to repeat an experiment. The worst thing about using the real world instead of a simulation is the matter of time. Instead of simulated time you get the constant and unstoppable flow of real time. This severely limits the sorts of software you can use to program an AI, because all sense inputs must be handled in real time. Complicated ideas may have to be implemented in hardware or may simply be impossible given the current speed of our processors. Contrast this with a simulation, in which the flow of time in the simulated world can be slowed down to accommodate the limitations of the creature's programming. In terms of cost, doing everything in software is far cheaper than building custom real-time hardware. All you need is a laptop and some patience.

\subsection{Simulated time enables rapid prototyping \& simple programs}
\label{sec-2-2}

I envision \texttt{CORTEX} being used to support rapid prototyping and iteration of ideas. Even if I could put together a well constructed kit for creating robots, it would still not be enough because of the scourge of real-time processing. Anyone who wants to test their ideas in the real world must always worry about getting their algorithms to run fast enough to process information in real time. The need for real time processing only increases if multiple senses are involved. In the extreme case, even simple algorithms will have to be accelerated by ASIC chips or FPGAs, turning what would otherwise be a few lines of code and a 10x speed penalty into a multi-month ordeal. For this reason, \texttt{CORTEX} supports \emph{time-dilation}, which scales back the framerate of the simulation in proportion to the amount of processing required for each frame. From the perspective of the creatures inside the simulation, time always appears to flow at a constant rate, regardless of how complicated the environment becomes or how many creatures are in the simulation. The cost is that \texttt{CORTEX} can sometimes run slower than real time. Time dilation works both ways, however --- simulations of very simple creatures in \texttt{CORTEX} generally run at 40x real-time on my machine!
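The essence of time-dilation can be captured in a few lines. The sketch below is illustrative only: the names \texttt{simulated-fps}, \texttt{advance-world!}, and \texttt{step-fn} are assumptions made for this example, and the real implementation hooks into jMonkeyEngine's timer rather than driving the loop itself.

\begin{verbatim}
;; Illustrative sketch only: the simulation always advances by a
;; fixed timestep per rendered frame, no matter how long the frame
;; took to compute in wall-clock time.
(def simulated-fps 60)

(defn advance-world!
  "Advance the world by exactly one frame of simulated time."
  [world step-fn]
  (step-fn world (/ 1.0 simulated-fps)))
\end{verbatim}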
\subsection{All sense organs are two-dimensional surfaces}
\label{sec-2-3}

If \texttt{CORTEX} is to support a wide variety of senses, it would help to have a better understanding of what a sense actually is! While vision, touch, and hearing all seem like they are quite different things, I was surprised to learn during the course of this thesis that they (and all physical senses) can be expressed as exactly the same mathematical object!

Human beings are three-dimensional objects, and the nerves that transmit data from our various sense organs to our brain are essentially one-dimensional. This leaves up to two dimensions in which our sensory information may flow. For example, imagine your skin: it is a two-dimensional surface around a three-dimensional object (your body). It has discrete touch sensors embedded at various points, and the density of these sensors corresponds to the sensitivity of that region of skin. Each touch sensor connects to a nerve, all of which eventually are bundled together as they travel up the spinal cord to the brain. Intersect the spinal nerves with a guillotining plane and you will see all of the sensory data of the skin revealed in a roughly circular two-dimensional image which is the cross section of the spinal cord. Points on this image that are close together in this circle represent touch sensors that are \emph{probably} close together on the skin, although there is of course some cutting and rearrangement that has to be done to transfer the complicated surface of the skin onto a two-dimensional image.

Most human senses consist of many discrete sensors of various properties distributed along a surface at various densities. For skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's disks, and Ruffini's endings (\cite{textbook901}), which detect pressure and vibration of various intensities. For ears, it is the stereocilia distributed along the basilar membrane inside the cochlea; each one is sensitive to a slightly different frequency of sound. For eyes, it is rods and cones distributed along the surface of the retina. In each case, we can describe the sense with a surface and a distribution of sensors along that surface.

In fact, almost every human sense can be effectively described in terms of a surface containing embedded sensors. If the sense had any more dimensions, then there wouldn't be enough room in the spinal cord to transmit the information!

Therefore, \texttt{CORTEX} must support the ability to create objects and then be able to ``paint'' points along their surfaces to describe each sense.

Fortunately this idea is already a well known computer graphics technique called \emph{UV-mapping}. In UV-mapping, the three-dimensional surface of a model is cut and smooshed until it fits on a two-dimensional image. You paint whatever you want on that image, and when the three-dimensional shape is rendered in a game the smooshing and cutting is reversed and the image appears on the three-dimensional object.

To make a sense, interpret the UV-image as describing the distribution of that sense's sensors. To get different types of sensors, you can either use a different color for each type of sensor, or use multiple UV-maps, each labeled with that sensor type. I generally use a white pixel to mean the presence of a sensor and a black pixel to mean the absence of a sensor, and use one UV-map for each sensor-type within a given sense.
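Reading such a sensor-profile image back into a list of sensor coordinates is straightforward. The following is a minimal sketch of the idea using the standard Java image API; the function name is an assumption, and \texttt{CORTEX}'s actual reader differs in its details.

\begin{verbatim}
;; Minimal sketch, not CORTEX's actual code: collect the UV
;; coordinates of every white pixel in a sensor-profile image.
(import '(javax.imageio ImageIO) '(java.io File))

(defn white-pixel-coordinates
  [image-path]
  (let [image (ImageIO/read (File. image-path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
      [x y])))
\end{verbatim}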
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/finger-UV.png}
\caption{\label{finger-UV}The UV-map for an elongated icosphere. The white dots each represent a touch sensor. They are dense in the regions that describe the tip of the finger, and less dense along the dorsal side of the finger opposite the tip.}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/finger-1.png}
\caption{\label{finger-side-view}Ventral side of the UV-mapped finger. Note the density of touch sensors at the tip.}
\end{figure}

\subsection{Video game engines provide ready-made physics and shading}
\label{sec-2-4}

I did not need to write my own physics simulation code or shader to build \texttt{CORTEX}. Doing so would lead to a system that is impossible for anyone but myself to use anyway. Instead, I use a video game engine as a base and modify it to accommodate the additional needs of \texttt{CORTEX}. Video game engines are an ideal starting point to build \texttt{CORTEX}, because they are not far from being creature building systems themselves.

First off, general purpose video game engines come with a physics engine and lighting / sound system. The physics system provides tools that can be co-opted to serve as touch, proprioception, and muscles. Because some games support split screen views, a good video game engine will allow you to efficiently create multiple cameras in the simulated world that can be used as eyes. Video game systems offer integrated asset management for things like textures and creature models, providing an avenue for defining creatures. They also understand UV-mapping, because this technique is used to apply a texture to a model. Finally, because video game engines support a large number of developers, as long as \texttt{CORTEX} doesn't stray too far from the base system, other researchers can turn to this community for help when doing their research.

\subsection{\texttt{CORTEX} is based on jMonkeyEngine3}
\label{sec-2-5}

While preparing to build \texttt{CORTEX} I studied several video game engines to see which would best serve as a base. The top contenders were:

\begin{description}
\item[{\href{http://www.idsoftware.com}{Quake II}/\href{http://www.bytonic.de/html/jake2.html}{Jake2}}] The Quake II engine was designed by id Software in 1997. All the source code was released by id Software under the GPL several years ago, and as a result it has been ported to many different languages. This engine was famous for its advanced use of realistic shading and it had decent and fast physics simulation. The main advantage of the Quake II engine is its simplicity, but I ultimately rejected it because the engine is too tied to the concept of a first-person shooter game. One of the problems I had was that there does not seem to be any easy way to attach multiple cameras to a single character. There are also several physics clipping issues that are corrected in a way that only applies to the main character and does not apply to arbitrary objects.

\item[{\href{http://source.valvesoftware.com/}{Source Engine}}] The Source Engine evolved from the Quake II and Quake I engines and is used by Valve in the Half-Life series of games. The physics simulation in the Source Engine is quite accurate and probably the best out of all the engines I investigated. There is also an extensive community actively working with the engine. However, applications that use the Source Engine must be written in C++, the code is not open, it only runs on Windows, and the tools that come with the SDK to handle models and textures are complicated and awkward to use.

\item[{\href{http://jmonkeyengine.com/}{jMonkeyEngine3}}] jMonkeyEngine3 is a new library for creating games in Java. It uses OpenGL to render to the screen and uses scene graphs to avoid drawing things that do not appear on the screen. It has an active community and several games in the pipeline. The engine was not built to serve any particular game but is instead meant to be used for any 3D game.
\end{description}

I chose jMonkeyEngine3 because it had the most features out of all the free projects I looked at, and because I could then write my code in clojure, an implementation of \texttt{LISP} that runs on the JVM.

\subsection{\texttt{CORTEX} uses Blender to create creature models}
\label{sec-2-6}

For the simple worm-like creatures I will use later on in this thesis, I could define a simple API in \texttt{CORTEX} that would allow one to create boxes, spheres, etc., and leave that API as the sole way to create creatures. However, for \texttt{CORTEX} to truly be useful for other projects, it needs a way to construct complicated creatures. If possible, it would be nice to leverage work that has already been done by the community of 3D modelers, or at least enable people who are talented at modeling but not programming to design \texttt{CORTEX} creatures.

Therefore I use Blender, a free 3D modeling program, as the main way to create creatures in \texttt{CORTEX}. However, the creatures modeled in Blender must also be simple to simulate in jMonkeyEngine3's game engine, and must also be easy to rig with \texttt{CORTEX}'s senses. I accomplish this with extensive use of Blender's ``empty nodes.''

Empty nodes have no mass, physical presence, or appearance, but they can hold metadata and have names. I use a tree structure of empty nodes to specify senses in the following manner:

\begin{itemize}
\item Create a single top-level empty node whose name is the name of the sense.
\item Add empty nodes which each contain meta-data relevant to the sense, including a UV-map describing the number/distribution of sensors if applicable.
\item Make each empty-node the child of the top-level node.
\end{itemize}
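The metadata stored on these empty nodes is what functions such as \texttt{physical!} and \texttt{connect} (shown below) consume. As a rough sketch of the idea, and assuming that Blender custom properties are imported as jMonkeyEngine user data on each node (an assumption made only for this illustration), a metadata reader could look like the following; \texttt{CORTEX}'s actual \texttt{meta-data} helper plays this role but may work differently.

\begin{verbatim}
;; Rough sketch only. Assumes Blender custom properties arrive as
;; jMonkeyEngine user data; CORTEX's real meta-data helper may differ.
(defn meta-data-sketch
  "Retrieve a piece of named metadata from a Blender node."
  [#^com.jme3.scene.Spatial node key]
  (.getUserData node key))
\end{verbatim}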
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/empty-sense-nodes.png}
\caption{\label{sense-nodes}An example of annotating a creature model with empty nodes to describe the layout of senses. There are multiple empty nodes which each describe the position of muscles, ears, eyes, or joints.}
\end{figure}

\subsection{Bodies are composed of segments connected by joints}
\label{sec-2-7}

Blender is a general purpose animation tool, which has been used in the past to create high quality movies such as Sintel (\cite{blender}). Though Blender can model and render even complicated things like water, it is crucial to keep models that are meant to be simulated as creatures simple. \texttt{Bullet}, which \texttt{CORTEX} uses through jMonkeyEngine3, is a rigid-body physics system. This offers a compromise between the expressiveness of a game level and the speed at which it can be simulated, and it means that creatures should be naturally expressed as rigid components held together by joint constraints.

But humans are more like a squishy bag wrapped around some hard bones which define the overall shape. When we move, our skin bends and stretches to accommodate the new positions of our bones.

One way to make bodies composed of rigid pieces connected by joints \emph{seem} more human-like is to use an \emph{armature} (or \emph{rigging}) system, which defines an overall ``body mesh'' and defines how the mesh deforms as a function of the position of each ``bone'', which is a standard rigid body. This technique is used extensively to model humans and create realistic animations. It is not a good technique for physical simulation because it is a lie -- the skin is not a physical part of the simulation and does not interact with any objects in the world or itself. Objects will pass right through the skin until they come in contact with the underlying bone, which is a physical object. Without simulating the skin, the sense of touch has little meaning, and the creature's own vision will lie to it about the true extent of its body. Simulating the skin as a physical object requires some way to continuously update the physical model of the skin along with the movement of the bones, which is unacceptably slow compared to rigid body simulation.

Therefore, instead of using the human-like ``bony meatbag'' approach, I decided to base my body plans on multiple solid objects that are connected by joints, inspired by the robot \texttt{EVE} from the movie WALL-E.

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/Eve.jpg}
\caption{\texttt{EVE} from the movie WALL-E. This body plan turns out to be much better suited to my purposes than a more human-like one.}
\end{figure}

\texttt{EVE}'s body is composed of several rigid components that are held together by invisible joint constraints. This is what I mean by \emph{eve-like}. The main reason that I use eve-like bodies is for simulation efficiency, and so that there will be correspondence between the AI's senses and the physical presence of its body. Each individual section is simulated by a separate rigid body that corresponds exactly with its visual representation and does not change. Sections are connected by invisible joints that are well supported in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3, can efficiently simulate hundreds of rigid bodies connected by joints. Just because sections are rigid does not mean they have to stay as one piece forever; they can be dynamically replaced with multiple sections to simulate splitting in two. This could be used to simulate retractable claws or \texttt{EVE}'s hands, which are able to coalesce into one object in the movie.

\subsubsection{Solidifying/Connecting a body}
\label{sec-2-7-1}

\texttt{CORTEX} creates a creature in two steps: first, it traverses the nodes in the Blender file and creates physical representations for any of them that have mass defined in their Blender meta-data.

\begin{listing}
\begin{verbatim}
(defn physical!
  "Iterate through the nodes in creature and make them real physical
   objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
\end{verbatim}
\caption{\label{physical}Program for iterating through the nodes in a Blender file and generating physical jMonkeyEngine3 objects with mass and a matching physics shape.}
\end{listing}

The next step to making a proper body is to connect those pieces together with joints. jMonkeyEngine has a large array of joints available via \texttt{bullet}, such as Point2Point, Cone, Hinge, and a generic Six Degree of Freedom joint, with or without spring restitution.

Joints are treated a lot like proper senses, in that there is a top-level empty node named ``joints'' whose children each represent a joint.

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/hand-screenshot1.png}
\caption{\label{blender-hand}View of the hand model in Blender showing the main ``joints'' node (highlighted in yellow) and its children which each represent a joint in the hand. Each joint node has metadata specifying what sort of joint it is.}
\end{figure}

\texttt{CORTEX}'s procedure for binding the creature together with joints is as follows:

\begin{itemize}
\item Find the children of the ``joints'' node.
\item Determine the two spatials the joint is meant to connect.
\item Create the joint based on the meta-data of the empty node.
\end{itemize}

The higher order function \texttt{sense-nodes} from \texttt{cortex.sense} simplifies finding the joints based on their parent ``joints'' node.

\begin{listing}
\begin{verbatim}
(defn sense-nodes
  "For some senses there is a special empty Blender node whose
   children are considered markers for an instance of that sense. This
   function generates functions to find those children, given the name
   of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
\end{verbatim}
\caption{\label{get-empty-nodes}Retrieving the child empty nodes of a single named empty node is a common pattern in \texttt{CORTEX}. Further instances of this technique for the senses will be omitted.}
\end{listing}

To find a joint's targets, \texttt{CORTEX} creates a small cube, centered around the empty-node, and grows the cube exponentially until it intersects two physical objects. The objects are ordered according to the joint's rotation, with the first one being the object that has more negative coordinates in the joint's reference frame. Because the objects must be physical, the empty-node itself escapes detection. For the same reason, \texttt{joint-targets} must be called \emph{after} \texttt{physical!} is called.

\begin{listing}
\begin{verbatim}
(defn joint-targets
  "Return the two closest objects to the joint object, ordered
   from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
\end{verbatim}
\caption{\label{joint-targets}Program to find the targets of a joint node by exponential growth of a search cube.}
\end{listing}

Once \texttt{CORTEX} finds all joints and targets, it creates them using a dispatch on the metadata of each joint node.

\begin{listing}
\begin{verbatim}
(defmulti joint-dispatch
  "Translate Blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist    (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
\end{verbatim}
\caption{\label{joint-dispatch}Program to dispatch on Blender metadata and create joints suitable for physical simulation.}
\end{listing}

All that is left for joints is to combine the above pieces into something that can operate on the collection of nodes that a Blender file represents.

\begin{listing}
\begin{verbatim}
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
   'joint. The type of joint is determined by the metadata on 'joint.

   Here are some examples:
   {:type :point}
   {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
   (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

   {:type :cone :limit-xz 0
                :limit-xy 0
                :twist 0}   (use XZY rotation mode in Blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
\end{verbatim}
\caption{\label{connect}Program to completely create a joint given information from a Blender file.}
\end{listing}

In general, whenever \texttt{CORTEX} exposes a sense (or in this case physicality), it provides a function of the type \texttt{sense!}, which takes in a collection of nodes and augments it to support that sense. The function returns any controls necessary to use that sense. In this case \texttt{body!} creates a physical body and returns no control functions.

\begin{listing}
\begin{verbatim}
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
   joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
   particulars of the joints and the masses of each body part are
   determined in Blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
\end{verbatim}
\caption{\label{joints}Program to give joints to a creature.}
\end{listing}

All of the code you have just seen amounts to only 130 lines, yet because it builds on top of Blender and jMonkeyEngine3, those few lines pack quite a punch!

The hand from figure \ref{blender-hand}, which was modeled after my own right hand, can now be given joints and simulated as a creature.

\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{./images/physical-hand.png}
\caption{\label{physical-hand}With the ability to create physical creatures from Blender, \texttt{CORTEX} gets one step closer to becoming a full creature simulation environment.}
\end{figure}

\subsection{Sight reuses standard video game components\ldots{}}
\label{sec-2-8}

Vision is one of the most important senses for humans, so I need to build a simulated sense of vision for my AI. I will do this with simulated eyes. Each eye can be independently moved and should see its own version of the world depending on where it is.

Making these simulated eyes a reality is simple because jMonkeyEngine already contains extensive support for multiple views of the same 3D simulated world. jMonkeyEngine includes this support because it is necessary for creating games with split-screen views. Multiple views are also used to create efficient pseudo-reflections by rendering the scene from a certain perspective and then projecting it back onto a surface in the 3D world.

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/goldeneye-4-player.png}
\caption{\label{goldeneye}jMonkeyEngine supports multiple views to enable split-screen games, like GoldenEye, which was one of the first games to use split-screen views.}
\end{figure}

\subsubsection{A Brief Description of jMonkeyEngine's Rendering Pipeline}
\label{sec-2-8-1}

jMonkeyEngine allows you to create a \texttt{ViewPort}, which represents a view of the simulated world. You can create as many of these as you want. Every frame, the \texttt{RenderManager} iterates through each \texttt{ViewPort}, rendering the scene in the GPU. For each \texttt{ViewPort} there is a \texttt{FrameBuffer} which represents the rendered image in the GPU.

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/diagram_rendermanager2.png}
\caption{\label{rendermanagers}\texttt{ViewPorts} are cameras in the world. During each frame, the \texttt{RenderManager} records a snapshot of what each view is currently seeing; these snapshots are \texttt{FrameBuffer} objects.}
\end{figure}

Each \texttt{ViewPort} can have any number of attached \texttt{SceneProcessor} objects, which are called every time a new frame is rendered. A \texttt{SceneProcessor} receives its \texttt{ViewPort}'s \texttt{FrameBuffer} and can do whatever it wants to the data. Often this consists of invoking GPU specific operations on the rendered image. The \texttt{SceneProcessor} can also copy the GPU image data to RAM and process it with the CPU.

\subsubsection{Appropriating Views for Vision}
\label{sec-2-8-2}

Each eye in the simulated creature needs its own \texttt{ViewPort} so that it can see the world from its own perspective. To this \texttt{ViewPort}, I add a \texttt{SceneProcessor} that feeds the visual data to any arbitrary continuation function for further processing. That continuation function may perform both CPU and GPU operations on the data. To make this easy for the continuation function, the \texttt{SceneProcessor} maintains appropriately sized buffers in RAM to hold the data. It does not do any copying from the GPU to the CPU itself because it is a slow operation.

\begin{listing}
\begin{verbatim}
(defn vision-pipeline
  "Create a SceneProcessor object which wraps a vision processing
   continuation function. The continuation is a function that takes
   [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
   each of which has already been appropriately sized."
  [continuation]
  (let [byte-buffer (atom nil)
        renderer (atom nil)
        image (atom nil)]
    (proxy [SceneProcessor] []
      (initialize
       [renderManager viewPort]
       (let [cam (.getCamera viewPort)
             width (.getWidth cam)
             height (.getHeight cam)]
         (reset! renderer (.getRenderer renderManager))
         (reset! byte-buffer
                 (BufferUtils/createByteBuffer
                  (* width height 4)))
         (reset! image (BufferedImage.
                        width height
                        BufferedImage/TYPE_4BYTE_ABGR))))
      (isInitialized [] (not (nil? @byte-buffer)))
      (reshape [_ _ _])
      (preFrame [_])
      (postQueue [_])
      (postFrame
       [#^FrameBuffer fb]
       (.clear @byte-buffer)
       (continuation @renderer fb @byte-buffer @image))
      (cleanup []))))
\end{verbatim}
\caption{\label{pipeline-1}Function to make the rendered scene in jMonkeyEngine available for further processing.}
\end{listing}

The continuation function given to \texttt{vision-pipeline} above will be given a \texttt{Renderer} and three containers for image data. The \texttt{FrameBuffer} references the GPU image data, but the pixel data cannot be used directly on the CPU. The \texttt{ByteBuffer} and \texttt{BufferedImage} are initially "empty" but are sized to hold the data in the \texttt{FrameBuffer}. I call transferring the GPU image data to the CPU structures "mixing" the image data.

\subsubsection{Optical sensor arrays are described with images and referenced with metadata}
\label{sec-2-8-3}

The vision pipeline described above handles the flow of rendered images. Now, \texttt{CORTEX} needs simulated eyes to serve as the source of these images.

Eyes are described in Blender in the same way as joints: they are zero-dimensional empty objects with no geometry whose local coordinate system determines the orientation of the resulting eye. All eyes are children of a parent node named "eyes" just as all joints have a parent named "joints". An eye binds to the nearest physical object with \texttt{bind-sense}.
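Following the pattern of the \texttt{joints} definition in listing \ref{get-empty-nodes}, the eye nodes can be collected with \texttt{sense-nodes}. The definition below is a sketch written here for clarity; the definition actually used by \texttt{CORTEX}'s vision code may differ in detail, but \texttt{vision!} below does rely on an \texttt{eyes} function of this shape.

\begin{verbatim}
;; Sketch, following the pattern of the joints definition above; the
;; definition used by CORTEX's vision code may differ in detail.
(def
  ^{:doc "Return the children of the creature's \"eyes\" node."
    :arglists '([creature])}
  eyes
  (sense-nodes "eyes"))
\end{verbatim}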
\begin{listing}
\begin{verbatim}
(defn add-eye!
  "Create a Camera centered on the current position of 'eye which
   follows the closest physical node in 'creature. The camera will
   point in the X direction and use the Z vector as up as determined
   by the rotation of these vectors in Blender coordinate space. Use
   XZY rotation for the node in Blender."
  [#^Node creature #^Spatial eye]
  (let [target (closest-node creature eye)
        [cam-width cam-height]
        ;;[640 480] ;; graphics card on laptop doesn't support
        ;; arbitrary dimensions.
        (eye-dimensions eye)
        cam (Camera. cam-width cam-height)
        rot (.getWorldRotation eye)]
    (.setLocation cam (.getWorldTranslation eye))
    (.lookAtDirection
     cam                           ; this part is not a mistake and
     (.mult rot Vector3f/UNIT_X)   ; is consistent with using Z in
     (.mult rot Vector3f/UNIT_Y))  ; Blender as the UP vector.
    (.setFrustumPerspective
     cam (float 45)
     (float (/ (.getWidth cam) (.getHeight cam)))
     (float 1)
     (float 1000))
    (bind-sense target cam) cam))
\end{verbatim}
\caption{\label{add-eye}Here, the camera is created based on metadata on the eye-node and attached to the nearest physical object with \texttt{bind-sense}.}
\end{listing}

\subsubsection{Simulated Retina}
\label{sec-2-8-4}

An eye is a surface (the retina) which contains many discrete sensors to detect light. These sensors can have different light-sensing properties. In humans, each discrete sensor is sensitive to red, blue, green, or gray. These different types of sensors can have different spatial distributions along the retina. In humans, there is a fovea in the center of the retina which has a very high density of color sensors, and a blind spot which has no sensors at all. Sensor density decreases in proportion to distance from the fovea.

I want to be able to model any retinal configuration, so my eye-nodes in Blender contain metadata pointing to images that describe the precise position of the individual sensors using white pixels. The meta-data also describes the precise sensitivity to light that the sensors described in the image have. An eye can contain any number of these images. For example, the metadata for an eye might look like this:

\begin{verbatim}
{0xFF0000 "Models/test-creature/retina-small.png"}
\end{verbatim}

\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{./images/retina-small.png}
\caption{\label{retina}An example retinal profile image. White pixels are photo-sensitive elements. The distribution of white pixels is denser in the middle and falls off at the edges, and is inspired by the human retina.}
\end{figure}

Together, the number 0xFF0000 and the image above describe the placement of red-sensitive sensory elements.

Meta-data to very crudely approximate a human eye might be something like this:

\begin{verbatim}
(let [retinal-profile "Models/test-creature/retina-small.png"]
  {0xFF0000 retinal-profile
   0x00FF00 retinal-profile
   0x0000FF retinal-profile
   0xFFFFFF retinal-profile})
\end{verbatim}

The numbers that serve as keys in the map determine a sensor's relative sensitivity to the channels red, green, and blue. These sensitivity values are packed into an integer in the order \texttt{|\_|R|G|B|} in 8-bit fields. The RGB values of a pixel in the image are added together with these sensitivities as linear weights. Therefore, 0xFF0000 means sensitive to red only while 0xFFFFFF means sensitive to all colors equally (gray).
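For concreteness, here is a minimal sketch of that weighted sum. The \texttt{pixel-sense} function used by \texttt{vision-kernel} below plays this role; the normalization chosen here is an assumption made for the illustration.

\begin{verbatim}
;; Illustrative sketch of the weighted sum described above; the
;; normalization (dividing by 3 * 255 * 255) is an assumption, and
;; CORTEX's actual pixel-sense may normalize differently.
(defn channel [color shift]
  (bit-and 0xFF (bit-shift-right color shift)))

(defn pixel-sense-sketch
  "Combine a packed sensitivity value with a packed RGB pixel."
  [sensitivity pixel]
  (/ (+ (* (channel sensitivity 16) (channel pixel 16))
        (* (channel sensitivity  8) (channel pixel  8))
        (* (channel sensitivity  0) (channel pixel  0)))
     (* 3.0 255 255)))
\end{verbatim}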
The RGB values of a pixel in the1178 image are added together with these sensitivities as linear1179 weights. Therefore, 0xFF0000 means sensitive to red only while1180 0xFFFFFF means sensitive to all colors equally (gray).1182 \begin{listing}1183 \begin{verbatim}1184 (defn vision-kernel1185 "Returns a list of functions, each of which will return a color1186 channel's worth of visual information when called inside a running1187 simulation."1188 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]1189 (let [retinal-map (retina-sensor-profile eye)1190 camera (add-eye! creature eye)1191 vision-image1192 (atom1193 (BufferedImage. (.getWidth camera)1194 (.getHeight camera)1195 BufferedImage/TYPE_BYTE_BINARY))1196 register-eye!1197 (runonce1198 (fn [world]1199 (add-camera!1200 world camera1201 (let [counter (atom 0)]1202 (fn [r fb bb bi]1203 (if (zero? (rem (swap! counter inc) (inc skip)))1204 (reset! vision-image1205 (BufferedImage! r fb bb bi))))))))]1206 (vec1207 (map1208 (fn [[key image]]1209 (let [whites (white-coordinates image)1210 topology (vec (collapse whites))1211 sensitivity (sensitivity-presets key key)]1212 (attached-viewport.1213 (fn [world]1214 (register-eye! world)1215 (vector1216 topology1217 (vec1218 (for [[x y] whites]1219 (pixel-sense1220 sensitivity1221 (.getRGB @vision-image x y))))))1222 register-eye!)))1223 retinal-map))))1224 \end{verbatim}1225 \caption{\label{vision-kernel}This is the core of vision in \texttt{CORTEX}. A given eye node is converted into a function that returns visual information from the simulation.}1226 \end{listing}1228 Note that because each of the functions generated by1229 \texttt{vision-kernel} shares the same \texttt{register-eye!} function, the eye1230 will be registered only once the first time any of the functions1231 from the list returned by \texttt{vision-kernel} is called. Each of the1232 functions returned by \texttt{vision-kernel} also allows access to the1233 \texttt{Viewport} through which it receives images.1235 All the hard work has been done; all that remains is to apply1236 \texttt{vision-kernel} to each eye in the creature and gather the results1237 into one list of functions.1240 \begin{listing}1241 \begin{verbatim}1242 (defn vision!1243 "Returns a list of functions, each of which returns visual sensory1244 data when called inside a running simulation."1245 [#^Node creature & {skip :skip :or {skip 0}}]1246 (reduce1247 concat1248 (for [eye (eyes creature)]1249 (vision-kernel creature eye))))1250 \end{verbatim}1251 \caption{\label{vision}With \texttt{vision!}, \texttt{CORTEX} is already a fine simulation environment for experimenting with different types of eyes.}1252 \end{listing}1254 \begin{figure}[htb]1255 \centering1256 \includegraphics[width=13cm]{./images/worm-vision.png}1257 \caption{\label{worm-vision-test.}Simulated vision with a test creature and the human-like eye approximation. Notice how each channel of the eye responds differently to the differently colored balls.}1258 \end{figure}1260 The vision code is not much more complicated than the body code,1261 and enables multiple further paths for simulated vision. For1262 example, it is quite easy to create bifocal vision -- you just1263 make two eyes next to each other in Blender! It is also possible1264 to encode vision transforms in the retinal files. 
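Such transforms, and retinal profiles generally, need not be drawn by hand. As a sketch (a hypothetical helper, not part of \texttt{CORTEX}), one could generate a roughly foveal profile by letting the probability of a white pixel fall off with distance from the center:

\begin{verbatim}
;; Sketch: programmatically generate a retina-profile image whose
;; sensor density falls off away from the center, roughly like a
;; fovea.  Any such image can then be referenced from an eye node's
;; metadata.
(import '(java.awt.image BufferedImage)
        '(javax.imageio ImageIO)
        '(java.io File))

(defn write-foveal-profile!
  [path size]
  (let [img    (BufferedImage. size size BufferedImage/TYPE_INT_RGB)
        center (/ size 2.0)]
    (doseq [x (range size) y (range size)]
      (let [r       (/ (Math/hypot (- x center) (- y center)) center)
            density (Math/pow (- 1.0 (min 1.0 r)) 2)]
        ;; keep a sensor here with probability 'density
        (when (< (rand) density)
          (.setRGB img x y 0xFFFFFF))))
    (ImageIO/write img "png" (File. path))))
\end{verbatim}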
The human-like retina file in figure \ref{retina}, for example, already approximates a log-polar transform.

This vision code has already been absorbed by the jMonkeyEngine community and is now (in modified form) part of a system for capturing in-game video to a file.

\subsection{\ldots{}but hearing must be built from scratch}
\label{sec-2-9}

At the end of this chapter I will have simulated ears that work the same way as the simulated eyes in the last chapter. I will be able to place any number of ear-nodes in a Blender file, and they will bind to the closest physical object and follow it as it moves around. Each ear will provide access to the sound data it picks up between every frame.

Hearing is one of the more difficult senses to simulate, because there is less support for obtaining the actual sound data that is processed by jMonkeyEngine3. There is no "split-screen" support for rendering sound from different points of view, and there is no way to directly access the rendered sound data.

\texttt{CORTEX}'s hearing is unique because, unlike other simulation environments, it places no limit on the number of listeners. As far as I know, there is no other system that supports multiple listeners, and the sound demo at the end of this chapter is the first time it has been done in a video game environment.

\subsubsection{Brief Description of jMonkeyEngine's Sound System}
\label{sec-2-9-1}

jMonkeyEngine's sound system works as follows:

\begin{itemize}
\item jMonkeyEngine uses the \texttt{AppSettings} for the particular application to determine what sort of \texttt{AudioRenderer} should be used.
\item Although some support is provided for multiple \texttt{AudioRenderer} backends, jMonkeyEngine at the time of this writing will either pick no \texttt{AudioRenderer} at all, or the \texttt{LwjglAudioRenderer}.
\item jMonkeyEngine tries to figure out what sort of system you're running and extracts the appropriate native libraries.
\item The \texttt{LwjglAudioRenderer} uses the \href{http://lwjgl.org/}{\texttt{LWJGL}} (LightWeight Java Game Library) bindings to interface with a C library called \href{http://kcat.strangesoft.net/openal.html}{\texttt{OpenAL}}.
\item \texttt{OpenAL} renders the 3D sound and feeds the rendered sound directly to any of various sound output devices with which it knows how to communicate.
\end{itemize}

A consequence of this is that there's no way to access the actual sound data produced by \texttt{OpenAL}. Even worse, \texttt{OpenAL} only supports one \emph{listener} (it renders sound data from only one perspective), which normally isn't a problem for games, but becomes a problem when trying to make multiple AI creatures that can each hear the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the world from their own perspective, or to make a single creature with many ears, it is necessary to go all the way back to \texttt{OpenAL} and implement support for simulated hearing there.

\subsubsection{Extending \texttt{OpenAL}}
\label{sec-2-9-2}

Extending \texttt{OpenAL} to support multiple listeners requires 500 lines of \texttt{C} code and is too long to reproduce here in full. Instead, I will show a small amount of the extension code and go over the high-level strategy.
Full source is of course available with the \texttt{CORTEX} distribution if you're interested.

\texttt{OpenAL} goes to great lengths to support many different systems, all with different sound capabilities and interfaces. It accomplishes this difficult task by providing code for many different sound backends in pseudo-objects called \emph{Devices}. There's a device for the Linux Open Sound System and the Advanced Linux Sound Architecture, there's one for Direct Sound on Windows, and there's even one for Solaris. \texttt{OpenAL} solves the problem of platform independence by providing all these Devices.

Wrapper libraries such as LWJGL are free to examine the system on which they are running and then select an appropriate device for that system.

There are also a few "special" devices that don't interface with any particular system. These include the Null Device, which doesn't do anything, and the Wave Device, which writes whatever sound it receives to a file, if everything has been set up correctly when configuring \texttt{OpenAL}.

Actual mixing (Doppler shift and distance- and environment-based attenuation) of the sound data happens in the Devices, and they are the only point in the sound rendering process where this data is available.

Therefore, in order to support multiple listeners, and get the sound data in a form that the AIs can use, it is necessary to create a new Device which supports this feature.

Adding a device to OpenAL is rather tricky -- there are five separate files in the \texttt{OpenAL} source tree that must be modified to do so. I named my device the "Multiple Audio Send" Device, or \texttt{Send} Device for short, since it sends audio data back to the calling application like an Aux-Send cable on a mixing board.

The main idea behind the Send device is to take advantage of the fact that LWJGL only manages one \emph{context} when using OpenAL. A \emph{context} is like a container that holds samples and keeps track of where the listener is. In order to support multiple listeners, the Send device identifies the LWJGL context as the master context, and creates any number of slave contexts to represent additional listeners. Every time the device renders sound, it synchronizes every source from the master LWJGL context to the slave contexts. Then, it renders each context separately, using a different listener for each one. The rendered sound is made available via JNI to jMonkeyEngine.

Switching between contexts is not the normal operation of a Device, and one of the problems with doing so is that a Device normally keeps around a few pieces of global state, such as the \texttt{ClickRemoval} array, which will become corrupted if the contexts are not rendered in parallel.
The solution is to create a1386 copy of this normally global device state for each context, and1387 copy it back and forth into and out of the actual device state1388 whenever a context is rendered.1390 The core of the \texttt{Send} device is the \texttt{syncSources} function, which1391 does the job of copying all relevant data from one context to1392 another.1394 \begin{listing}1395 \begin{verbatim}1396 void syncSources(ALsource *masterSource, ALsource *slaveSource,1397 ALCcontext *masterCtx, ALCcontext *slaveCtx){1398 ALuint master = masterSource->source;1399 ALuint slave = slaveSource->source;1400 ALCcontext *current = alcGetCurrentContext();1402 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);1403 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);1404 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);1405 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);1406 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);1407 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);1408 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);1409 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);1410 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);1411 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);1412 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);1413 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);1414 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);1416 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);1417 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);1418 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);1420 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);1421 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);1423 alcMakeContextCurrent(masterCtx);1424 ALint source_type;1425 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);1427 // Only static sources are currently synchronized!1428 if (AL_STATIC == source_type){1429 ALint master_buffer;1430 ALint slave_buffer;1431 alGetSourcei(master, AL_BUFFER, &master_buffer);1432 alcMakeContextCurrent(slaveCtx);1433 alGetSourcei(slave, AL_BUFFER, &slave_buffer);1434 if (master_buffer != slave_buffer){1435 alSourcei(slave, AL_BUFFER, master_buffer);1436 }1437 }1439 // Synchronize the state of the two sources.1440 alcMakeContextCurrent(masterCtx);1441 ALint masterState;1442 ALint slaveState;1444 alGetSourcei(master, AL_SOURCE_STATE, &masterState);1445 alcMakeContextCurrent(slaveCtx);1446 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);1448 if (masterState != slaveState){1449 switch (masterState){1450 case AL_INITIAL : alSourceRewind(slave); break;1451 case AL_PLAYING : alSourcePlay(slave); break;1452 case AL_PAUSED : alSourcePause(slave); break;1453 case AL_STOPPED : alSourceStop(slave); break;1454 }1455 }1456 // Restore whatever context was previously active.1457 alcMakeContextCurrent(current);1458 }1459 \end{verbatim}1460 \caption{\label{sync-openal-sources}Program for extending \texttt{OpenAL} to support multiple listeners via context copying/switching.}1461 \end{listing}1463 With this special context-switching device, and some ugly JNI1464 bindings that are not worth mentioning, \texttt{CORTEX} gains the ability1465 to access multiple sound streams from \texttt{OpenAL}.1467 \begin{listing}1468 \begin{verbatim}1469 (defn add-ear!1470 "Create a Listener centered on the current position of 'ear1471 which follows the closest physical node in 'creature and1472 sends 
sound data to 'continuation."1473 [#^Application world #^Node creature #^Spatial ear continuation]1474 (let [target (closest-node creature ear)1475 lis (Listener.)1476 audio-renderer (.getAudioRenderer world)1477 sp (hearing-pipeline continuation)]1478 (.setLocation lis (.getWorldTranslation ear))1479 (.setRotation lis (.getWorldRotation ear))1480 (bind-sense target lis)1481 (update-listener-velocity! target lis)1482 (.addListener audio-renderer lis)1483 (.registerSoundProcessor audio-renderer lis sp)))1484 \end{verbatim}1485 \caption{\label{add-ear}Program to create an ear from a Blender empty node. The ear follows around the nearest physical object and passes all sensory data to a continuation function.}1486 \end{listing}1488 The \texttt{Send} device, unlike most of the other devices in \texttt{OpenAL},1489 does not render sound unless asked. This enables the system to1490 slow down or speed up depending on the needs of the AIs who are1491 using it to listen. If the device tried to render samples in1492 real-time, a complicated AI whose mind takes 100 seconds of1493 computer time to simulate 1 second of AI-time would miss almost1494 all of the sound in its environment!1496 \begin{listing}1497 \begin{verbatim}1498 (defn hearing-kernel1499 "Returns a function which returns auditory sensory data when called1500 inside a running simulation."1501 [#^Node creature #^Spatial ear]1502 (let [hearing-data (atom [])1503 register-listener!1504 (runonce1505 (fn [#^Application world]1506 (add-ear!1507 world creature ear1508 (comp #(reset! hearing-data %)1509 byteBuffer->pulse-vector))))]1510 (fn [#^Application world]1511 (register-listener! world)1512 (let [data @hearing-data1513 topology1514 (vec (map #(vector % 0) (range 0 (count data))))]1515 [topology data]))))1517 (defn hearing!1518 "Endow the creature in a particular world with the sense of1519 hearing. Will return a sequence of functions, one for each ear,1520 which when called will return the auditory data from that ear."1521 [#^Node creature]1522 (for [ear (ears creature)]1523 (hearing-kernel creature ear)))1524 \end{verbatim}1525 \caption{\label{hearing}Program to enable arbitrary hearing in \texttt{CORTEX}}1526 \end{listing}1528 Armed with these functions, \texttt{CORTEX} is able to test possibly the1529 first ever instance of multiple listeners in a video game engine1530 based simulation!1532 \begin{listing}1533 \begin{verbatim}1534 /**1535 * Respond to sound! 
This is the brain of an AI entity that
 * hears its surroundings and reacts to them.
 */
public void process(ByteBuffer audioSamples,
                    int numSamples, AudioFormat format) {
  audioSamples.clear();
  byte[] data = new byte[numSamples];
  float[] out = new float[numSamples];
  audioSamples.get(data);
  // convert the raw bytes into interleaved float samples
  FloatSampleTools.
    byte2floatInterleaved
    (data, 0, out, 0, numSamples/format.getFrameSize(), format);

  // find the peak amplitude in this batch of samples
  float max = Float.NEGATIVE_INFINITY;
  for (float f : out){if (f > max) max = f;}
  audioSamples.clear();

  // turn green when the volume crosses the threshold
  if (max > 0.1){
    entity.getMaterial().setColor("Color", ColorRGBA.Green);
  }
  else {
    entity.getMaterial().setColor("Color", ColorRGBA.Gray);
  }
\end{verbatim}
\caption{\label{sound-test}Here a simple creature responds to sound by changing its color from gray to green when the total volume goes over a threshold.}
\end{listing}

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/java-hearing-test.png}
\caption{\label{sound-cubes.}First ever simulation of multiple listeners in \texttt{CORTEX}. Each cube is a creature which processes sound data with the \texttt{process} function from listing \ref{sound-test}. The ball is constantly emitting a pure tone of constant volume. As it approaches the cubes, they each change color in response to the sound.}
\end{figure}

This system of hearing has also been co-opted by the jMonkeyEngine3 community and is used to record audio for demo videos.

\subsection{Hundreds of hair-like elements provide a sense of touch}
\label{sec-2-10}

Touch is critical to navigation and spatial reasoning and as such I need a simulated version of it to give to my AI creatures.

Human skin has a wide array of touch sensors, each of which specializes in detecting different vibrational modes and pressures. These sensors can integrate a vast expanse of skin (e.g. your entire palm), or a tiny patch of skin at the tip of your finger. The hairs of the skin help detect objects before they even come into contact with the skin proper.

However, touch in my simulated world cannot exactly correspond to human touch because my creatures are made out of completely rigid segments that don't deform like human skin.

Instead of measuring deformation or vibration, I surround each rigid part with a plenitude of hair-like objects (\emph{feelers}) which do not interact with the physical world. Physical objects can pass through them with no effect. The feelers are able to tell when other objects pass through them, and they constantly report how much of their extent is covered. So even though the creature's body parts do not deform, the feelers create a margin around those body parts which achieves a sense of touch which is a hybrid between a human's sense of deformation and sense from hairs.

Implementing touch in jMonkeyEngine follows a different technical route than vision and hearing. Those two senses piggybacked off jMonkeyEngine's 3D audio and video rendering subsystems. To simulate touch, I use jMonkeyEngine's physics system to execute many small collision detections, one for each feeler.
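As a preview of the collision detection at the heart of this approach, a single feeler's check might look roughly like the following sketch (simplified and with a hypothetical name; the full version, which also ignores collisions with the creature's own geometry, appears later in \texttt{touch-kernel}):

\begin{verbatim}
;; Sketch only: one feeler is one short ray cast against the world.
(import '(com.jme3.math Ray)
        '(com.jme3.collision CollisionResults))

(defn feeler-depth
  "Cast 'ray (rooted at the feeler's base, with its limit set to the
   feeler's length) against 'node.  Return the distance to the
   nearest hit, or the full feeler length if nothing touches it."
  [node #^Ray ray]
  (let [results (CollisionResults.)]
    (.collideWith node ray results)
    (if (zero? (.size results))
      (.getLimit ray)
      (.getDistance (.getClosestCollision results)))))
\end{verbatim}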
The placement1604 of the feelers is determined by a UV-mapped image which shows where1605 each feeler should be on the 3D surface of the body.1607 \subsubsection{Defining Touch Meta-Data in Blender}1608 \label{sec-2-10-1}1610 Each geometry can have a single UV map which describes the1611 position of the feelers which will constitute its sense of touch.1612 This image path is stored under the ``touch'' key. The image itself1613 is black and white, with black meaning a feeler length of 0 (no1614 feeler is present) and white meaning a feeler length of \texttt{scale},1615 which is a float stored under the key "scale".1617 \begin{listing}1618 \begin{verbatim}1619 (defn tactile-sensor-profile1620 "Return the touch-sensor distribution image in BufferedImage format,1621 or nil if it does not exist."1622 [#^Geometry obj]1623 (if-let [image-path (meta-data obj "touch")]1624 (load-image image-path)))1626 (defn tactile-scale1627 "Return the length of each feeler. Default scale is 0.011628 jMonkeyEngine units."1629 [#^Geometry obj]1630 (if-let [scale (meta-data obj "scale")]1631 scale 0.1))1632 \end{verbatim}1633 \caption{\label{touch-meta-data}Touch does not use empty nodes, to store metadata, because the metadata of each solid part of a creature's body is sufficient.}1634 \end{listing}1636 Here is an example of a UV-map which specifies the position of1637 touch sensors along the surface of the upper segment of a fingertip.1639 \begin{figure}[htb]1640 \centering1641 \includegraphics[width=13cm]{./images/finger-UV.png}1642 \caption{\label{fingertip-UV}This is the tactile-sensor-profile for the upper segment of a fingertip. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse).}1643 \end{figure}1645 \subsubsection{Implementation Summary}1646 \label{sec-2-10-2}1648 To simulate touch there are three conceptual steps. For each solid1649 object in the creature, you first have to get UV image and scale1650 parameter which define the position and length of the feelers.1651 Then, you use the triangles which comprise the mesh and the UV1652 data stored in the mesh to determine the world-space position and1653 orientation of each feeler. Then once every frame, update these1654 positions and orientations to match the current position and1655 orientation of the object, and use physics collision detection to1656 gather tactile data.1658 Extracting the meta-data has already been described. The third1659 step, physics collision detection, is handled in \texttt{touch-kernel}.1660 Translating the positions and orientations of the feelers from the1661 UV-map to world-space is itself a three-step process.1663 \begin{itemize}1664 \item Find the triangles which make up the mesh in pixel-space and in1665 world-space. $\backslash$(\texttt{triangles}, \texttt{pixel-triangles}).1667 \item Find the coordinates of each feeler in world-space. These are1668 the origins of the feelers. (\texttt{feeler-origins}).1670 \item Calculate the normals of the triangles in world space, and add1671 them to each of the origins of the feelers. 
These are the1672 normalized coordinates of the tips of the feelers.1673 (\texttt{feeler-tips}).1674 \end{itemize}1676 \subsubsection{Triangle Math}1677 \label{sec-2-10-3}1679 The rigid objects which make up a creature have an underlying1680 \texttt{Geometry}, which is a \texttt{Mesh} plus a \texttt{Material} and other1681 important data involved with displaying the object.1683 A \texttt{Mesh} is composed of \texttt{Triangles}, and each \texttt{Triangle} has three1684 vertices which have coordinates in world space and UV space.1686 Here, \texttt{triangles} gets all the world-space triangles which1687 comprise a mesh, while \texttt{pixel-triangles} gets those same triangles1688 expressed in pixel coordinates (which are UV coordinates scaled to1689 fit the height and width of the UV image).1691 \begin{listing}1692 \begin{verbatim}1693 (defn triangle1694 "Get the triangle specified by triangle-index from the mesh."1695 [#^Geometry geo triangle-index]1696 (triangle-seq1697 (let [scratch (Triangle.)]1698 (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))1700 (defn triangles1701 "Return a sequence of all the Triangles which comprise a given1702 Geometry."1703 [#^Geometry geo]1704 (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))1706 (defn triangle-vertex-indices1707 "Get the triangle vertex indices of a given triangle from a given1708 mesh."1709 [#^Mesh mesh triangle-index]1710 (let [indices (int-array 3)]1711 (.getTriangle mesh triangle-index indices)1712 (vec indices)))1714 (defn vertex-UV-coord1715 "Get the UV-coordinates of the vertex named by vertex-index"1716 [#^Mesh mesh vertex-index]1717 (let [UV-buffer1718 (.getData1719 (.getBuffer1720 mesh1721 VertexBuffer$Type/TexCoord))]1722 [(.get UV-buffer (* vertex-index 2))1723 (.get UV-buffer (+ 1 (* vertex-index 2)))]))1725 (defn pixel-triangle [#^Geometry geo image index]1726 (let [mesh (.getMesh geo)1727 width (.getWidth image)1728 height (.getHeight image)]1729 (vec (map (fn [[u v]] (vector (* width u) (* height v)))1730 (map (partial vertex-UV-coord mesh)1731 (triangle-vertex-indices mesh index))))))1733 (defn pixel-triangles1734 "The pixel-space triangles of the Geometry, in the same order as1735 (triangles geo)"1736 [#^Geometry geo image]1737 (let [height (.getHeight image)1738 width (.getWidth image)]1739 (map (partial pixel-triangle geo image)1740 (range (.getTriangleCount (.getMesh geo))))))1741 \end{verbatim}1742 \caption{\label{get-triangles}Programs to extract triangles from a geometry and get their vertices in both world and UV-coordinates.}1743 \end{listing}1745 \subsubsection{The Affine Transform from one Triangle to Another}1746 \label{sec-2-10-4}1748 \texttt{pixel-triangles} gives us the mesh triangles expressed in pixel1749 coordinates and \texttt{triangles} gives us the mesh triangles expressed1750 in world coordinates. The tactile-sensor-profile gives the1751 position of each feeler in pixel-space. In order to convert1752 pixel-space coordinates into world-space coordinates we need1753 something that takes coordinates on the surface of one triangle1754 and gives the corresponding coordinates on the surface of another1755 triangle.1757 Triangles are \href{http://mathworld.wolfram.com/AffineTransformation.html }{affine}, which means any triangle can be transformed1758 into any other by a combination of translation, scaling, and1759 rotation. 
The affine transformation from one triangle to another1760 is readily computable if the triangle is expressed in terms of a1761 \(4x4\) matrix.1763 $$1764 \begin{bmatrix}1765 x_1 & x_2 & x_3 & n_x \\1766 y_1 & y_2 & y_3 & n_y \\1767 z_1 & z_2 & z_3 & n_z \\1768 1 & 1 & 1 & 11769 \end{bmatrix}1770 $$1772 Here, the first three columns of the matrix are the vertices of1773 the triangle. The last column is the right-handed unit normal of1774 the triangle.1776 With two triangles \(T_{1}\) and \(T_{2}\) each expressed as a1777 matrix like above, the affine transform from \(T_{1}\) to \(T_{2}\)1778 is \(T_{2}T_{1}^{-1}\).1780 The clojure code below recapitulates the formulas above, using1781 jMonkeyEngine's \texttt{Matrix4f} objects, which can describe any affine1782 transformation.1784 \begin{listing}1785 \begin{verbatim}1786 (defn triangle->matrix4f1787 "Converts the triangle into a 4x4 matrix: The first three columns1788 contain the vertices of the triangle; the last contains the unit1789 normal of the triangle. The bottom row is filled with 1s."1790 [#^Triangle t]1791 (let [mat (Matrix4f.)1792 [vert-1 vert-2 vert-3]1793 (mapv #(.get t %) (range 3))1794 unit-normal (do (.calculateNormal t)(.getNormal t))1795 vertices [vert-1 vert-2 vert-3 unit-normal]]1796 (dorun1797 (for [row (range 4) col (range 3)]1798 (do1799 (.set mat col row (.get (vertices row) col))1800 (.set mat 3 row 1)))) mat))1802 (defn triangles->affine-transform1803 "Returns the affine transformation that converts each vertex in the1804 first triangle into the corresponding vertex in the second1805 triangle."1806 [#^Triangle tri-1 #^Triangle tri-2]1807 (.mult1808 (triangle->matrix4f tri-2)1809 (.invert (triangle->matrix4f tri-1))))1810 \end{verbatim}1811 \caption{\label{triangle-affine}Program to interpret triangles as affine transforms.}1812 \end{listing}1814 \subsubsection{Triangle Boundaries}1815 \label{sec-2-10-5}1817 For efficiency's sake I will divide the tactile-profile image into1818 small squares which inscribe each pixel-triangle, then extract the1819 points which lie inside the triangle and map them to 3D-space using1820 \texttt{triangle-transform} above. To do this I need a function,1821 \texttt{convex-bounds} which finds the smallest box which inscribes a 2D1822 triangle.1824 \texttt{inside-triangle?} determines whether a point is inside a triangle1825 in 2D pixel-space.1827 \begin{listing}1828 \begin{verbatim}1829 (defn convex-bounds1830 "Returns the smallest square containing the given vertices, as a1831 vector of integers [left top width height]."1832 [verts]1833 (let [xs (map first verts)1834 ys (map second verts)1835 x0 (Math/floor (apply min xs))1836 y0 (Math/floor (apply min ys))1837 x1 (Math/ceil (apply max xs))1838 y1 (Math/ceil (apply max ys))]1839 [x0 y0 (- x1 x0) (- y1 y0)]))1841 (defn same-side?1842 "Given the points p1 and p2 and the reference point ref, is point p1843 on the same side of the line that goes through p1 and p2 as ref is?"1844 [p1 p2 ref p]1845 (<=1846 01847 (.dot1848 (.cross (.subtract p2 p1) (.subtract p p1))1849 (.cross (.subtract p2 p1) (.subtract ref p1)))))1851 (defn inside-triangle?1852 "Is the point inside the triangle?"1853 {:author "Dylan Holmes"}1854 [#^Triangle tri #^Vector3f p]1855 (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]1856 (and1857 (same-side? vert-1 vert-2 vert-3 p)1858 (same-side? vert-2 vert-3 vert-1 p)1859 (same-side? 
vert-3 vert-1 vert-2 p))))1860 \end{verbatim}1861 \caption{\label{in-triangle}Program to efficiently determine point inclusion in a triangle.}1862 \end{listing}1864 \subsubsection{Feeler Coordinates}1865 \label{sec-2-10-6}1867 The triangle-related functions above make short work of1868 calculating the positions and orientations of each feeler in1869 world-space.1871 \begin{listing}1872 \begin{verbatim}1873 (defn feeler-pixel-coords1874 "Returns the coordinates of the feelers in pixel space in lists, one1875 list for each triangle, ordered in the same way as (triangles) and1876 (pixel-triangles)."1877 [#^Geometry geo image]1878 (map1879 (fn [pixel-triangle]1880 (filter1881 (fn [coord]1882 (inside-triangle? (->triangle pixel-triangle)1883 (->vector3f coord)))1884 (white-coordinates image (convex-bounds pixel-triangle))))1885 (pixel-triangles geo image)))1887 (defn feeler-world-coords1888 "Returns the coordinates of the feelers in world space in lists, one1889 list for each triangle, ordered in the same way as (triangles) and1890 (pixel-triangles)."1891 [#^Geometry geo image]1892 (let [transforms1893 (map #(triangles->affine-transform1894 (->triangle %1) (->triangle %2))1895 (pixel-triangles geo image)1896 (triangles geo))]1897 (map (fn [transform coords]1898 (map #(.mult transform (->vector3f %)) coords))1899 transforms (feeler-pixel-coords geo image))))1900 \end{verbatim}1901 \caption{\label{feeler-coordinates}Program to get the coordinates of ``feelers '' in both world and UV-coordinates.}1902 \end{listing}1904 \begin{listing}1905 \begin{verbatim}1906 (defn feeler-origins1907 "The world space coordinates of the root of each feeler."1908 [#^Geometry geo image]1909 (reduce concat (feeler-world-coords geo image)))1911 (defn feeler-tips1912 "The world space coordinates of the tip of each feeler."1913 [#^Geometry geo image]1914 (let [world-coords (feeler-world-coords geo image)1915 normals1916 (map1917 (fn [triangle]1918 (.calculateNormal triangle)1919 (.clone (.getNormal triangle)))1920 (map ->triangle (triangles geo)))]1922 (mapcat (fn [origins normal]1923 (map #(.add % normal) origins))1924 world-coords normals)))1926 (defn touch-topology1927 [#^Geometry geo image]1928 (collapse (reduce concat (feeler-pixel-coords geo image))))1929 \end{verbatim}1930 \caption{\label{feeler-tips}Program to get the position of the base and tip of each ``feeler''}1931 \end{listing}1933 \subsubsection{Simulated Touch}1934 \label{sec-2-10-7}1936 Now that the functions to construct feelers are complete,1937 \texttt{touch-kernel} generates functions to be called from within a1938 simulation that perform the necessary physics collisions to1939 collect tactile data, and \texttt{touch!} recursively applies it to every1940 node in the creature.1942 \begin{listing}1943 \begin{verbatim}1944 (defn set-ray [#^Ray ray #^Matrix4f transform1945 #^Vector3f origin #^Vector3f tip]1946 ;; Doing everything locally reduces garbage collection by enough to1947 ;; be worth it.1948 (.mult transform origin (.getOrigin ray))1949 (.mult transform tip (.getDirection ray))1950 (.subtractLocal (.getDirection ray) (.getOrigin ray))1951 (.normalizeLocal (.getDirection ray)))1952 \end{verbatim}1953 \caption{\label{set-ray}Efficient program to transform a ray from one position to another.}1954 \end{listing}1956 \begin{listing}1957 \begin{verbatim}1958 (defn touch-kernel1959 "Constructs a function which will return tactile sensory data from1960 'geo when called from inside a running simulation"1961 [#^Geometry geo]1962 (if-let1963 [profile 
(tactile-sensor-profile geo)]1964 (let [ray-reference-origins (feeler-origins geo profile)1965 ray-reference-tips (feeler-tips geo profile)1966 ray-length (tactile-scale geo)1967 current-rays (map (fn [_] (Ray.)) ray-reference-origins)1968 topology (touch-topology geo profile)1969 correction (float (* ray-length -0.2))]1970 ;; slight tolerance for very close collisions.1971 (dorun1972 (map (fn [origin tip]1973 (.addLocal origin (.mult (.subtract tip origin)1974 correction)))1975 ray-reference-origins ray-reference-tips))1976 (dorun (map #(.setLimit % ray-length) current-rays))1977 (fn [node]1978 (let [transform (.getWorldMatrix geo)]1979 (dorun1980 (map (fn [ray ref-origin ref-tip]1981 (set-ray ray transform ref-origin ref-tip))1982 current-rays ray-reference-origins1983 ray-reference-tips))1984 (vector1985 topology1986 (vec1987 (for [ray current-rays]1988 (do1989 (let [results (CollisionResults.)]1990 (.collideWith node ray results)1991 (let [touch-objects1992 (filter #(not (= geo (.getGeometry %)))1993 results)1994 limit (.getLimit ray)]1995 [(if (empty? touch-objects)1996 limit1997 (let [response1998 (apply min (map #(.getDistance %)1999 touch-objects))]2000 (FastMath/clamp2001 (float2002 (if (> response limit) (float 0.0)2003 (+ response correction)))2004 (float 0.0)2005 limit)))2006 limit])))))))))))2007 \end{verbatim}2008 \caption{\label{touch-kernel}This is the core of touch in \texttt{CORTEX} each feeler follows the object it is bound to, reporting any collisions that may happen.}2009 \end{listing}2011 Armed with the \texttt{touch!} function, \texttt{CORTEX} becomes capable of2012 giving creatures a sense of touch. A simple test is to create a2013 cube that is outfitted with a uniform distribution of touch2014 sensors. It can feel the ground and any balls that it touches.2016 \begin{listing}2017 \begin{verbatim}2018 (defn touch!2019 "Endow the creature with the sense of touch. Returns a sequence of2020 functions, one for each body part with a tactile-sensor-profile,2021 each of which when called returns sensory data for that body part."2022 [#^Node creature]2023 (filter2024 (comp not nil?)2025 (map touch-kernel2026 (filter #(isa? (class %) Geometry)2027 (node-seq creature)))))2028 \end{verbatim}2029 \caption{\label{touch}\texttt{CORTEX} interface for creating touch in a simulated creature.}2030 \end{listing}2032 The tactile-sensor-profile image for the touch cube is a simple2033 cross with a uniform distribution of touch sensors:2035 \begin{figure}[htb]2036 \centering2037 \includegraphics[width=7cm]{./images/touch-profile.png}2038 \caption{\label{touch-cube-uv-map}The touch profile for the touch-cube. Each pure white pixel defines a touch sensitive feeler.}2039 \end{figure}2041 \begin{figure}[htb]2042 \centering2043 \includegraphics[width=15cm]{./images/touch-cube.png}2044 \caption{\label{touch-cube-uv-map-2}The touch cube reacts to cannonballs. The black, red, and white cross on the right is a visual display of the creature's touch. White means that it is feeling something strongly, black is not feeling anything, and gray is in-between. The cube can feel both the floor and the ball. Notice that when the ball causes the cube to tip, that the bottom face can still feel part of the ground.}2045 \end{figure}2047 \subsection{Proprioception provides knowledge of your own body's position}2048 \label{sec-2-11}2050 Close your eyes, and touch your nose with your right index finger.2051 How did you do it? 
You could not see your hand, and neither your hand nor your nose could use the sense of touch to guide the path of your hand. There are no sound cues, and taste and smell certainly don't provide any help. You know where your hand is without your other senses because of proprioception.

Humans can sometimes lose this sense through viral infections or damage to the spinal cord or brain, and when they do, they lose the ability to control their own bodies without looking directly at the parts they want to move. In \href{http://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat}{The Man Who Mistook His Wife for a Hat} (\cite{man-wife-hat}), a woman named Christina loses this sense and has to learn how to move by carefully watching her arms and legs. She describes proprioception as the "eyes of the body, the way the body sees itself".

Proprioception in humans is mediated by \href{http://en.wikipedia.org/wiki/Articular_capsule}{joint capsules}, \href{http://en.wikipedia.org/wiki/Muscle_spindle}{muscle spindles}, and the \href{http://en.wikipedia.org/wiki/Golgi_tendon_organ}{Golgi tendon organs}. These measure the relative positions of each body part by monitoring muscle strain and length.

It's clear that this is a vital sense for fluid, graceful movement. It's also particularly easy to implement in jMonkeyEngine.

My simulated proprioception calculates the relative angles of each joint from the rest position defined in the Blender file. This simulates the muscle-spindles and joint capsules. I will deal with Golgi tendon organs, which calculate muscle strain, in the next section (2.12).

\subsubsection{Helper functions}
\label{sec-2-11-1}

\texttt{absolute-angle} calculates the angle between two vectors, relative to a third axis vector. This angle is the number of radians you have to move counterclockwise around the axis vector to get from the first to the second vector. It is not commutative like a normal dot-product angle is.

The purpose of these functions is to build a system of angle measurement that is biologically plausible.

\begin{listing}
\begin{verbatim}
(defn right-handed?
  "true iff the three vectors form a right handed coordinate
   system. The three vectors do not have to be normalized or
   orthogonal."
  [vec1 vec2 vec3]
  (pos? (.dot (.cross vec1 vec2) vec3)))

(defn absolute-angle
  "The angle between 'vec1 and 'vec2 around 'axis. In the range
   [0 (* 2 Math/PI)]."
  [vec1 vec2 axis]
  (let [angle (.angleBetween vec1 vec2)]
    (if (right-handed? vec1 vec2 axis)
      angle (- (* 2 Math/PI) angle))))
\end{verbatim}
\caption{\label{helpers}Program to measure angles relative to an axis vector.}
\end{listing}

\subsubsection{Proprioception Kernel}
\label{sec-2-11-2}

Given a joint, \texttt{proprioception-kernel} produces a function that calculates the Euler angles between the objects the joint connects.
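Since \texttt{absolute-angle} does the heavy lifting inside the kernel, it is worth seeing how it behaves on jMonkeyEngine's unit vectors (an illustrative REPL sketch, not part of the library):

\begin{verbatim}
;; Illustrative only: absolute-angle on the standard basis vectors.
(absolute-angle Vector3f/UNIT_X Vector3f/UNIT_Y Vector3f/UNIT_Z)
;; => ~1.5708  (pi/2 -- UNIT_Y is a quarter turn counterclockwise
;;              from UNIT_X around UNIT_Z)

(absolute-angle Vector3f/UNIT_Y Vector3f/UNIT_X Vector3f/UNIT_Z)
;; => ~4.7124  (3*pi/2 -- the same pair measured the other way
;;              around the axis, showing the non-commutativity)
\end{verbatim}

With that behavior in mind, the kernel itself follows.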
The only tricky part here is making the angles relative2117 to the joint's initial ``straightness''.2119 \begin{listing}2120 \begin{verbatim}2121 (defn proprioception-kernel2122 "Returns a function which returns proprioceptive sensory data when2123 called inside a running simulation."2124 [#^Node parts #^Node joint]2125 (let [[obj-a obj-b] (joint-targets parts joint)2126 joint-rot (.getWorldRotation joint)2127 x0 (.mult joint-rot Vector3f/UNIT_X)2128 y0 (.mult joint-rot Vector3f/UNIT_Y)2129 z0 (.mult joint-rot Vector3f/UNIT_Z)]2130 (fn []2131 (let [rot-a (.clone (.getWorldRotation obj-a))2132 rot-b (.clone (.getWorldRotation obj-b))2133 x (.mult rot-a x0)2134 y (.mult rot-a y0)2135 z (.mult rot-a z0)2137 X (.mult rot-b x0)2138 Y (.mult rot-b y0)2139 Z (.mult rot-b z0)2140 heading (Math/atan2 (.dot X z) (.dot X x))2141 pitch (Math/atan2 (.dot X y) (.dot X x))2143 ;; rotate x-vector back to origin2144 reverse2145 (doto (Quaternion.)2146 (.fromAngleAxis2147 (.angleBetween X x)2148 (let [cross (.normalize (.cross X x))]2149 (if (= 0 (.length cross)) y cross))))2150 roll (absolute-angle (.mult reverse Y) y x)]2151 [heading pitch roll]))))2153 (defn proprioception!2154 "Endow the creature with the sense of proprioception. Returns a2155 sequence of functions, one for each child of the \"joints\" node in2156 the creature, which each report proprioceptive information about2157 that joint."2158 [#^Node creature]2159 ;; extract the body's joints2160 (let [senses (map (partial proprioception-kernel creature)2161 (joints creature))]2162 (fn []2163 (map #(%) senses))))2164 \end{verbatim}2165 \caption{\label{proprioception}Program to return biologically reasonable proprioceptive data for each joint.}2166 \end{listing}2168 \texttt{proprioception!} maps \texttt{proprioception-kernel} across all the2169 joints of the creature. It uses the same list of joints that2170 \texttt{joints} uses. Proprioception is the easiest sense to implement in2171 \texttt{CORTEX}, and it will play a crucial role when efficiently2172 implementing empathy.2174 \begin{figure}[htb]2175 \centering2176 \includegraphics[width=11cm]{./images/proprio.png}2177 \caption{\label{proprio}In the upper right corner, the three proprioceptive angle measurements are displayed. Red is yaw, Green is pitch, and White is roll.}2178 \end{figure}2180 \subsection{Muscles contain both sensors and effectors}2181 \label{sec-2-12}2183 Surprisingly enough, terrestrial creatures only move by using2184 torque applied about their joints. There's not a single straight2185 line of force in the human body at all! (A straight line of force2186 would correspond to some sort of jet or rocket propulsion.)2188 In humans, muscles are composed of muscle fibers which can contract2189 to exert force. The muscle fibers which compose a muscle are2190 partitioned into discrete groups which are each controlled by a2191 single alpha motor neuron. A single alpha motor neuron might2192 control as little as three or as many as one thousand muscle2193 fibers. When the alpha motor neuron is engaged by the spinal cord,2194 it activates all of the muscle fibers to which it is attached. The2195 spinal cord generally engages the alpha motor neurons which control2196 few muscle fibers before the motor neurons which control many2197 muscle fibers. This recruitment strategy allows for precise2198 movements at low strength. The collection of all motor neurons that2199 control a muscle is called the motor pool. 
The brain essentially2200 says "activate 30\% of the motor pool" and the spinal cord recruits2201 motor neurons until 30\% are activated. Since the distribution of2202 power among motor neurons is unequal and recruitment goes from2203 weakest to strongest, the first 30\% of the motor pool might be 5\%2204 of the strength of the muscle.2206 My simulated muscles follow a similar design: Each muscle is2207 defined by a 1-D array of numbers (the "motor pool"). Each entry in2208 the array represents a motor neuron which controls a number of2209 muscle fibers equal to the value of the entry. Each muscle has a2210 scalar strength factor which determines the total force the muscle2211 can exert when all motor neurons are activated. The effector2212 function for a muscle takes a number to index into the motor pool,2213 and then "activates" all the motor neurons whose index is lower or2214 equal to the number. Each motor-neuron will apply force in2215 proportion to its value in the array. Lower values cause less2216 force. The lower values can be put at the "beginning" of the 1-D2217 array to simulate the layout of actual human muscles, which are2218 capable of more precise movements when exerting less force. Or, the2219 motor pool can simulate more exotic recruitment strategies which do2220 not correspond to human muscles.2222 This 1D array is defined in an image file for ease of2223 creation/visualization. Here is an example muscle profile image.2225 \begin{figure}[htb]2226 \centering2227 \includegraphics[width=7cm]{./images/basic-muscle.png}2228 \caption{\label{muscle-recruit}A muscle profile image that describes the strengths of each motor neuron in a muscle. White is weakest and dark red is strongest. This particular pattern has weaker motor neurons at the beginning, just like human muscle.}2229 \end{figure}2231 \subsubsection{Muscle meta-data}2232 \label{sec-2-12-1}2234 \begin{listing}2235 \begin{verbatim}2236 (defn muscle-profile-image2237 "Get the muscle-profile image from the node's Blender meta-data."2238 [#^Node muscle]2239 (if-let [image (meta-data muscle "muscle")]2240 (load-image image)))2242 (defn muscle-strength2243 "Return the strength of this muscle, or 1 if it is not defined."2244 [#^Node muscle]2245 (if-let [strength (meta-data muscle "strength")]2246 strength 1))2248 (defn motor-pool2249 "Return a vector where each entry is the strength of the \"motor2250 neuron\" at that part in the muscle."2251 [#^Node muscle]2252 (let [profile (muscle-profile-image muscle)]2253 (vec2254 (let [width (.getWidth profile)]2255 (for [x (range width)]2256 (- 2552257 (bit-and2258 0x0000FF2259 (.getRGB profile x 0))))))))2260 \end{verbatim}2261 \caption{\label{motor-pool}Program to deal with loading muscle data from a Blender file's metadata.}2262 \end{listing}2264 Of note here is \texttt{motor-pool} which interprets the muscle-profile2265 image in a way that allows me to use gradients between white and2266 red, instead of shades of gray as I've been using for all the2267 other senses. This is purely an aesthetic touch.2269 \subsubsection{Creating muscles}2270 \label{sec-2-12-2}2272 \begin{listing}2273 \begin{verbatim}2274 (defn movement-kernel2275 "Returns a function which when called with a integer value inside a2276 running simulation will cause movement in the creature according2277 to the muscle's position and strength profile. 
Each function
  returns the amount of force applied / max force."
  [#^Node creature #^Node muscle]
  (let [target (closest-node creature muscle)
        axis
        (.mult (.getWorldRotation muscle) Vector3f/UNIT_Y)
        strength (muscle-strength muscle)

        pool (motor-pool muscle)
        pool-integral (reductions + pool)
        forces
        (vec (map #(float (* strength (/ % (last pool-integral))))
                  pool-integral))
        control (.getControl target RigidBodyControl)]
    (fn [n]
      (let [pool-index (max 0 (min n (dec (count pool))))
            force (forces pool-index)]
        (.applyTorque control (.mult axis force))
        (float (/ force strength))))))

(defn movement!
  "Endow the creature with the power of movement. Returns a sequence
   of functions, each of which accepts an integer value and will
   activate its corresponding muscle."
  [#^Node creature]
  (for [muscle (muscles creature)]
    (movement-kernel creature muscle)))
\end{verbatim}
\caption{\label{muscle-kernel}This is the core movement function in \texttt{CORTEX}, which implements muscles that report on their activation.}
\end{listing}

\texttt{movement-kernel} creates a function that controls the movement of the nearest physical node to the muscle node. The muscle exerts a rotational force dependent on its orientation to the object in the Blender file. The function returned by \texttt{movement-kernel} is also a sense function: it returns the percent of the total muscle strength that is currently being employed. This is analogous to muscle tension in humans and completes the sense of proprioception begun in the last chapter.

\subsection{\texttt{CORTEX} brings complex creatures to life!}
\label{sec-2-13}

The ultimate test of \texttt{CORTEX} is to create a creature with the full gamut of senses and put it through its paces.

With all senses enabled, my right hand model looks like an intricate marionette hand with several strings for each finger:

\begin{figure}[htb]
\centering
\includegraphics[width=11cm]{./images/hand-with-all-senses2.png}
\caption{\label{hand-nodes-1}View of the hand model with all sense nodes. You can see the joint, muscle, ear, and eye nodes here.}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{./images/hand-with-all-senses3.png}
\caption{\label{hand-nodes-2}An alternate view of the hand.}
\end{figure}

With the hand fully rigged with senses, I can run it through a test that exercises everything at once.

\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{./images/integration.png}
\caption{\label{integration}Selected frames from a full test of the hand with all senses. Note especially the interactions the hand has with itself: it feels its own palm and fingers, and when it curls its fingers, it sees them with its eye (which is located in the center of the palm). The red block appears with a pure tone sound. The hand then uses its muscles to launch the cube!}
\end{figure}

\subsection{\texttt{CORTEX} enables many possibilities for further research}
\label{sec-2-14}

Often, the hardest part of building a system involving creatures is dealing with physics and graphics. \texttt{CORTEX} removes much of this initial difficulty and leaves researchers free to directly pursue their ideas. I hope that even novices with a passing curiosity about simulated touch or creature evolution will be able to use \texttt{CORTEX} for experimentation.
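To give a sense of how little code this requires, outfitting an arbitrary Blender model with every sense amounts to a handful of calls. The following sketch mirrors the style of the worm definition in listing \ref{get-worm}; the name \texttt{full-senses} is mine, but the constructors it calls are \texttt{CORTEX}'s own:

\begin{verbatim}
;; Sketch: a fully-sensed creature from a Blender file.
;; 'full-senses is a hypothetical convenience wrapper.
(defn full-senses [path]
  (let [model (load-blender-model path)]
    {:body           (doto model (body!))
     :vision         (vision! model)
     :hearing        (hearing! model)
     :touch          (touch! model)
     :proprioception (proprioception! model)
     :muscles        (movement! model)}))
\end{verbatim}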
\texttt{CORTEX} is a completely simulated world, and far from being a disadvantage, its simulated nature enables you to create senses and creatures that would be impossible to make in the real world.

While not by any means a complete list, here are some paths \texttt{CORTEX} is well suited to help you explore:

\begin{description}
\item[{Empathy}] My empathy program leaves many areas for improvement, among which are using vision to infer proprioception and looking up sensory experience with imagined vision, touch, and sound.
\item[{Evolution}] Karl Sims created a rich environment for simulating the evolution of creatures on a Connection Machine (\cite{sims-evolving-creatures}). Today, this can be redone and expanded with \texttt{CORTEX} on an ordinary computer.
\item[{Exotic senses}] \texttt{CORTEX} enables many fascinating senses that are not possible to build in the real world. For example, telekinesis is an interesting avenue to explore. You can also make a ``semantic'' sense which looks up metadata tags on objects in the environment; the metadata tags might contain other sensory information.
\item[{Imagination via subworlds}] This would involve a creature with an effector which creates an entire new sub-simulation where the creature has direct control over placement/creation of objects via simulated telekinesis. The creature observes this sub-world through its normal senses and uses its observations to make predictions about its top-level world.
\item[{Simulated prescience}] Step the simulation forward a few ticks, gather sensory data, then supply this data for the creature as one of its actual senses. The cost of prescience is slowing the simulation down by a factor proportional to however far you want the entities to see into the future. What happens when two evolved creatures that can each see into the future fight each other?
\item[{Swarm creatures}] Program a group of creatures that cooperate with each other. Because the creatures would be simulated, you could investigate computationally complex rules of behavior which still, from the group's point of view, would happen in real time. Interactions could be as simple as cellular organisms communicating via flashing lights, or as complex as humanoids completing social tasks, etc.
\item[{\texttt{HACKER} for writing muscle-control programs}] Presented with a low-level muscle control / sense API, generate higher-level programs for accomplishing various stated goals. Example goals might be "extend all your fingers" or "move your hand into the area with blue light" or "decrease the angle of this joint". It would be like Sussman's HACKER, except it would operate with much more data in a more realistic world. Start off with "calisthenics" to develop subroutines over the motor control API. The low-level programming code might be a Turing machine that could develop programs to iterate over a "tape" where each entry in the tape could control recruitment of the fibers in a muscle.
\item[{Sense fusion}] There is much work to be done on sense integration -- building up a coherent picture of the world and the things in it.
With \texttt{CORTEX} as a base, you can explore concepts like self-organizing maps or cross-modal clustering in ways that have never before been tried.
\item[{Inverse kinematics}] Experiments in sense-guided motor control are easy given \texttt{CORTEX}'s support -- you can get right to the hard control problems without worrying about physics or senses.
\end{description}

\newpage

\section{\texttt{EMPATH}: action recognition in a simulated worm}
\label{sec-3}

Here I develop a computational model of empathy, using \texttt{CORTEX} as a base. Empathy in this context is the ability to observe another creature and infer what sorts of sensations that creature is feeling. My empathy algorithm involves multiple phases. First is free-play, where the creature moves around and gains sensory experience. From this experience I construct a representation of the creature's sensory state space, which I call \(\Phi\)-space. Using \(\Phi\)-space, I construct an efficient function which takes the limited data that comes from observing another creature and enriches it with a full complement of imagined sensory data. I can then use the imagined sensory data to recognize what the observed creature is doing and feeling, using straightforward embodied action predicates. All of this is demonstrated using a simple worm-like creature, by recognizing worm-actions based on limited data.

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/basic-worm-view.png}
\caption{\label{basic-worm-view}Here is the worm with which we will be working. It is composed of 5 segments. Each segment has a pair of extensor and flexor muscles. Each of the worm's four joints is a hinge joint which allows about 30 degrees of rotation to either side. Each segment of the worm is touch-capable and has a uniform distribution of touch sensors on each of its faces. Each joint has a proprioceptive sense to detect relative positions. The worm segments are all the same except for the first one, which has a much higher weight than the others to allow for easy manual motor control.}
\end{figure}

\begin{listing}
\begin{verbatim}
(defn worm []
  (let [model (load-blender-model "Models/worm/worm.blend")]
    {:body (doto model (body!))
     :touch (touch! model)
     :proprioception (proprioception! model)
     :muscles (movement! model)}))
\end{verbatim}
\caption{\label{get-worm}Program for reading a worm from a Blender file and outfitting it with the senses of proprioception, touch, and the ability to move, as specified in the Blender file.}
\end{listing}

\subsection{Embodiment factors action recognition into manageable parts}
\label{sec-3-1}

Using empathy, I divide the problem of action recognition into a recognition process expressed in the language of a full complement of senses, and an imaginative process that generates full sensory data from partial sensory data.
Splitting the action recognition2467 problem in this manner greatly reduces the total amount of work to2468 recognize actions: The imaginative process is mostly just matching2469 previous experience, and the recognition process gets to use all2470 the senses to directly describe any action.2472 \subsection{Action recognition is easy with a full gamut of senses}2473 \label{sec-3-2}2475 Embodied representation using multiple senses such as touch,2476 proprioception, and muscle tension turns out be exceedingly2477 efficient at describing body-centered actions. It is the right2478 language for the job. For example, it takes only around 5 lines of2479 clojure code to describe the action of curling using embodied2480 primitives. It takes about 10 lines to describe the seemingly2481 complicated action of wiggling.2483 The following action predicates each take a stream of sensory2484 experience, observe however much of it they desire, and decide2485 whether the worm is doing the action they describe. \texttt{curled?}2486 relies on proprioception, \texttt{resting?} relies on touch, \texttt{wiggling?}2487 relies on a Fourier analysis of muscle contraction, and2488 \texttt{grand-circle?} relies on touch and reuses \texttt{curled?} in its2489 definition, showing how embodied predicates can be composed.2492 \begin{listing}2493 \begin{verbatim}2494 (defn curled?2495 "Is the worm curled up?"2496 [experiences]2497 (every?2498 (fn [[_ _ bend]]2499 (> (Math/sin bend) 0.64))2500 (:proprioception (peek experiences))))2501 \end{verbatim}2502 \caption{\label{curled}Program for detecting whether the worm is curled. This is the simplest action predicate, because it only uses the last frame of sensory experience, and only uses proprioceptive data. Even this simple predicate, however, is automatically frame independent and ignores vermopomorphic\protect\footnotemark \space differences such as worm textures and colors.}2503 \end{listing}2505 \footnotetext{Like \emph{anthropomorphic} except for worms instead of humans.}2507 \begin{listing}2508 \begin{verbatim}2509 (defn contact2510 "Determine how much contact a particular worm segment has with2511 other objects. Returns a value between 0 and 1, where 1 is full2512 contact and 0 is no contact."2513 [touch-region [coords contact :as touch]]2514 (-> (zipmap coords contact)2515 (select-keys touch-region)2516 (vals)2517 (#(map first %))2518 (average)2519 (* 10)2520 (- 1)2521 (Math/abs)))2522 \end{verbatim}2523 \caption{\label{touch-summary}Program for summarizing the touch information in a patch of skin.}2524 \end{listing}2527 \begin{listing}2528 \begin{verbatim}2529 (def worm-segment-bottom (rect-region [8 15] [14 22]))2531 (defn resting?2532 "Is the worm resting on the ground?"2533 [experiences]2534 (every?2535 (fn [touch-data]2536 (< 0.9 (contact worm-segment-bottom touch-data)))2537 (:touch (peek experiences))))2538 \end{verbatim}2539 \caption{\label{resting}Program for detecting whether the worm is at rest. This program uses a summary of the tactile information from the underbelly of the worm, and is only true if every segment is touching the floor. Note that this function contains no references to proprioception at all.}2540 \end{listing}2542 \begin{listing}2543 \begin{verbatim}2544 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))2546 (def worm-segment-top-tip (rect-region [0 15] [7 22]))2548 (defn grand-circle?2549 "Does the worm form a majestic circle (one end touching the other)?"2550 [experiences]2551 (and (curled? 
\begin{listing}
\begin{verbatim}
(def worm-segment-bottom-tip (rect-region [15 15] [22 22]))

(def worm-segment-top-tip (rect-region [0 15] [7 22]))

(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
\end{verbatim}
\caption{\label{grand-circle}Program for detecting whether the worm is curled up into a full circle. Here the embodied approach begins to shine, as I am able to both use a previous action predicate (\texttt{curled?}) as well as the direct tactile experience of the head and tail.}
\end{listing}

\begin{listing}
\begin{verbatim}
(defn fft [nums]
  (map
   #(.getReal %)
   (.transform
    (FastFourierTransformer. DftNormalization/STANDARD)
    (double-array nums) TransformType/FORWARD)))

(def indexed (partial map-indexed vector))

(defn max-indexed [s]
  (first (sort-by (comp - second) (indexed s))))

(defn wiggling?
  "Is the worm wiggling?"
  [experiences]
  (let [analysis-interval 0x40]
    (when (> (count experiences) analysis-interval)
      (let [a-flex 3
            a-ex 2
            muscle-activity
            (map :muscle (vector:last-n experiences analysis-interval))
            base-activity
            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
        (= 2
           (first
            (max-indexed
             (map #(Math/abs %)
                  (take 20 (fft base-activity))))))))))
\end{verbatim}
\caption{\label{wiggling}Program for detecting whether the worm has been wiggling for the last few frames. It uses a Fourier analysis of the muscle contractions of the worm's tail to determine wiggling. This is significant because there is no particular frame that clearly indicates that the worm is wiggling --- only when multiple frames are analyzed together is the wiggling revealed. Defining wiggling this way also gives the worm an opportunity to learn and recognize ``frustrated wiggling'', where the worm tries to wiggle but can't. Frustrated wiggling is very visually different from actual wiggling, but this definition gives it to us for free.}
\end{listing}

With these action predicates, I can now recognize the actions of the worm while it is moving under my control and I have access to all the worm's senses.

\begin{listing}
\begin{verbatim}
(defn debug-experience
  [experiences text]
  (cond
   (grand-circle? experiences) (.setText text "Grand Circle")
   (curled? experiences)       (.setText text "Curled")
   (wiggling? experiences)     (.setText text "Wiggling")
   (resting? experiences)      (.setText text "Resting")))
\end{verbatim}
\caption{\label{report-worm-activity}Use the action predicates defined earlier to report on what the worm is doing while in simulation.}
\end{listing}

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/worm-identify-init.png}
\caption{\label{worm-identify-init}Using \texttt{debug-experience}, the body-centered predicates work together to classify the behavior of the worm. The predicates are operating with access to the worm's full sensory data.}
\end{figure}

These action predicates satisfy the recognition requirement of an empathic recognition system. There is power in the simplicity of the action predicates. They describe their actions without getting confused in visual details of the worm. Each one is independent of position and rotation, but more than that, they are each independent of irrelevant visual details of the worm and the environment.
They will work regardless of whether the worm is a different color or heavily textured, or if the environment has strange lighting.

Consider how the human act of jumping might be described with body-centered action predicates: you might specify that jumping is mainly the feeling of your knees bending, your thigh muscles contracting, and your inner ear experiencing a certain sort of back-and-forth acceleration. This representation is a very concrete description of jumping, couched in terms of muscles and senses, but it also has the ability to describe almost all kinds of jumping, a generality that you might think could only be achieved by a very abstract description. The body-centered jumping predicate does not have terms that consider the color of a person's skin or whether they are male or female; instead, it gets right to the meat of what jumping actually \emph{is}.

Of course, the action predicates are not directly applicable to video data, which lacks the advanced sensory information that they require!

The trick now is to make the action predicates work even when the sensory data on which they depend is absent.

\subsection{\(\Phi\)-space describes the worm's experiences}
\label{sec-3-3}

As a first step towards building empathy, I need to gather all of the worm's experiences during free play. I use a simple vector to store all the experiences.

Each element of the experience vector exists in the vast space of all possible worm-experiences. Most of this vast space is actually unreachable due to physical constraints of the worm's body. For example, the worm's segments are connected by hinge joints that put a practical limit on the worm's range of motions without limiting its degrees of freedom. Some groupings of senses are impossible; the worm cannot be bent into a circle so that its ends are touching and at the same time not also experience the sensation of touching itself.

As the worm moves around during free play and its experience vector grows larger, the vector begins to define a subspace which is all the sensations the worm can practically experience during normal operation. I call this subspace \(\Phi\)-space, short for physical-space. The experience vector defines a path through \(\Phi\)-space. This path has interesting properties that all derive from physical embodiment. The proprioceptive components of the path vary smoothly, because in order for the worm to move from one position to another, it must pass through the intermediate positions. The path invariably forms loops as common actions are repeated. Finally, and most importantly, proprioception alone actually gives very strong inference about the other senses. For example, when the worm is proprioceptively flat over several frames, you can infer that it is touching the ground and that its muscles are not active, because if the muscles were active, the worm would be moving and would not remain perfectly flat. In order to stay flat, the worm has to be touching the ground, or it would again be moving out of the flat position due to gravity. If the worm is positioned in such a way that it interacts with itself, then it is very likely to be feeling the same tactile feelings as the last time it was in that position, because it has the same body as then. As you observe multiple frames of proprioceptive data, you can become increasingly confident about the exact activations of the worm's muscles, because it generally takes a unique combination of muscle contractions to transform the worm's body along a specific path through \(\Phi\)-space.
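
As a rough sketch of this kind of proprioception-only inference (the helper \texttt{proprio-flat?} and its 0.1 radian threshold are invented here purely for illustration; only \texttt{vector:last-n} comes from the actual code base), a predicate that consults nothing but joint angles can already stand in for \texttt{resting?}:

\begin{verbatim}
(defn proprio-flat?
  "Invented helper: is every joint within ~0.1 radians of straight?"
  [proprio]
  (every? (fn [[_ _ bend]] (< (Math/abs bend) 0.1)) proprio))

(defn probably-resting?
  "Sketch of proprioception-only inference: if the worm has been flat
   for the last 20 frames, it is almost certainly resting on the
   ground with relaxed muscles, even though no touch or muscle data
   is ever consulted."
  [experiences]
  (every? (comp proprio-flat? :proprioception)
          (vector:last-n experiences 20)))
\end{verbatim}
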
The worm's total life experience is a long looping path through \(\Phi\)-space. I will now introduce a simple way of taking that experience path and building a function that can infer complete sensory experience given only a stream of proprioceptive data. This \emph{empathy} function will provide a bridge to use the body-centered action predicates on video-like streams of information.

\subsection{Empathy is the process of building paths in \(\Phi\)-space}
\label{sec-3-4}

Here is the core of a basic empathy algorithm, starting with an experience vector:

An \emph{experience-index} is an index into the grand experience vector that defines the worm's life. It is a time-stamp for each set of sensations the worm has experienced.

First, I group the experience-indices into bins according to the similarity of their proprioceptive data. I organize my bins into a three-level hierarchy. The smallest bins have an approximate size of 0.001 radians in all proprioceptive dimensions. Each higher level is 10x bigger than the level below it.

The bins serve as a hashing function for proprioceptive data. Given a single piece of proprioceptive experience, the bins allow me to rapidly find all other experience-indices of past experience that had a very similar proprioceptive configuration. When looking up a proprioceptive experience, if the smallest bin does not match any previous experience, then I use successively larger bins until a match is found or I reach the largest bin.

Given a sequence of proprioceptive input, I use the bins to generate a set of similar experiences for each input using the tiered proprioceptive bins.

Finally, to infer sensory data, I select the longest consecutive chain of experiences that threads through the sets of similar experiences, starting with the current moment as a root and going backwards. Consecutive experience means that the experiences appear next to each other in the experience vector.

A stream of proprioceptive input might be:

\begin{verbatim}
[ flat, flat, flat, flat, flat, flat, flat, lift-head ]
\end{verbatim}

The worm's previous experience of lying on the ground and lifting its head generates possible interpretations for each frame (the numbers are experience-indices):

\clearpage

\begin{verbatim}
[ flat, flat, flat, flat, flat, flat, flat, lift-head ]
   1     1     1     1     1     1     1     4
   2     2     2     2     2     2     2
   3     3     3     3     3     3     3
   6     6     6     6     6     6     6
   7     7     7     7     7     7     7
   8     8     8     8     8     8     8
   9     9     9     9     9     9     9
\end{verbatim}

These interpretations suggest a new path through \(\Phi\)-space:

\begin{verbatim}
[ flat, flat, flat, flat, flat, flat, flat, lift-head ]
   6     7     8     9     1     2     3     4
\end{verbatim}

The new path through \(\Phi\)-space is synthesized from two actual paths that the creature has experienced: the ``1-2-3-4'' chain and the ``6-7-8-9'' chain. The ``1-2-3-4'' chain is necessary because it ends with the worm lifting its head. It originated from a short training session where the worm rested on the floor for a brief while and then raised its head.
The ``6-7-8-9'' chain is part of a longer chain of inactivity where the worm simply rested on the floor without moving. It is preferred over a ``1-2-3'' chain (which also describes inactivity) because it is longer. The main ideas again:

\begin{itemize}
\item Imagined \(\Phi\)-space paths are synthesized by looping and mixing previous experiences.

\item Longer experience paths (fewer edits) are preferred.

\item The present is more important than the past --- more recent events take precedence in interpretation.
\end{itemize}

This algorithm has three advantages:

\begin{enumerate}
\item It's simple.

\item It's very fast -- retrieving possible interpretations takes constant time. Tracing through chains of interpretations takes time proportional to the average number of experiences in a proprioceptive bin. Redundant experiences in \(\Phi\)-space can be merged to save computation.

\item It protects against wrong interpretations of transient, ambiguous proprioceptive data. For example, if the worm is flat for just an instant, this flatness will not be interpreted as implying that the worm has its muscles relaxed, since the flatness is part of a longer chain which includes a distinct pattern of muscle activation. Markov chains or other memoryless statistical models that operate on individual frames may very well make this mistake.
\end{enumerate}

\begin{listing}
\begin{verbatim}
(defn bin [digits]
  (fn [angles]
    (->> angles
         (flatten)
         (map (juxt #(Math/sin %) #(Math/cos %)))
         (flatten)
         (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))

(defn gen-phi-scan
  "Nearest-neighbors with binning. Only returns a result if
   the proprioceptive data is within 10% of a previously recorded
   result in all dimensions."
  [phi-space]
  (let [bin-keys (map bin [3 2 1])
        bin-maps
        (map (fn [bin-key]
               (group-by
                (comp bin-key :proprioception phi-space)
                (range (count phi-space)))) bin-keys)
        lookups (map (fn [bin-key bin-map]
                       (fn [proprio] (bin-map (bin-key proprio))))
                     bin-keys bin-maps)]
    (fn lookup [proprio-data]
      (set (some #(% proprio-data) lookups)))))
\end{verbatim}
\caption{\label{bin}Program to convert an experience vector into a proprioceptively binned lookup function.}
\end{listing}

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/film-of-imagination.png}
\caption{\label{phi-space-history-scan}\texttt{longest-thread} finds the longest path of consecutive past experiences that explains the observed proprioceptive worm data. Here, the film strip represents the creature's previous experience. Short sequences of memories are spliced together to match the proprioceptive data. They carry the other senses along with them.}
\end{figure}

\texttt{longest-thread} infers sensory data by stitching together pieces from previous experience. It prefers longer chains of previous experience to shorter ones. For example, during training the worm might rest on the ground for one second before it performs its exercises. If during recognition the worm rests on the ground for five seconds, \texttt{longest-thread} will accommodate this five-second rest period by looping the one-second rest chain five times.
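
To make the retrieval pipeline concrete, here is a small usage sketch (the joint angles and experience-indices below are invented for illustration, and \texttt{phi-space} is assumed to be an experience vector already gathered during free play):

\begin{verbatim}
(def phi-scan (gen-phi-scan phi-space))

;; one proprioceptive frame: a [heading pitch roll] triple per joint
(phi-scan [[0.0 0.0 0.02] [0.0 0.0 0.01] [0.0 0.0 0.0] [0.0 0.0 0.03]])
;; => #{1 2 3 6 7 8 9}   ; indices of similar past experiences

;; index sets ordered from most recent to least recent
(longest-thread [#{4} #{1 2 3 6 7 8 9} #{1 2 3 6 7 8 9} #{1 2 3 6 7 8 9}])
;; => [1 2 3 4]          ; the longest consecutive chain, oldest first
\end{verbatim}
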
\texttt{longest-thread} takes time proportional to the average number of entries in a proprioceptive bin, because for each element in the starting bin it performs a series of set lookups in the preceding bins. If the total history is limited, then this takes time proportional to only a constant multiple of the number of entries in the starting bin. This analysis also applies even if the action requires multiple longest chains -- it's still the average number of entries in a proprioceptive bin times the desired chain length. Because \texttt{longest-thread} is so efficient and simple, I can interpret worm-actions in real time.

\begin{listing}
\begin{verbatim}
(defn longest-thread
  "Find the longest thread from phi-index-sets. The index sets should
   be ordered from most recent to least recent."
  [phi-index-sets]
  (loop [result '()
         [thread-bases & remaining :as phi-index-sets] phi-index-sets]
    (if (empty? phi-index-sets)
      (vec result)
      (let [threads
            (for [thread-base thread-bases]
              (loop [thread (list thread-base)
                     remaining remaining]
                (let [next-index (dec (first thread))]
                  (cond (empty? remaining) thread
                        (contains? (first remaining) next-index)
                        (recur
                         (cons next-index thread) (rest remaining))
                        :else thread))))
            longest-thread
            (reduce (fn [thread-a thread-b]
                      (if (> (count thread-a) (count thread-b))
                        thread-a thread-b))
                    '(nil)
                    threads)]
        (recur (concat longest-thread result)
               (drop (count longest-thread) phi-index-sets))))))
\end{verbatim}
\caption{\label{longest-thread}Program to calculate empathy by tracing through \(\Phi\)-space and finding the longest (i.e., most coherent) interpretation of the data.}
\end{listing}

There is one final piece, which is to replace missing sensory data with a best-guess estimate. While I could fill in missing data by using a gradient over the closest known sensory data points, averages can be misleading. It is certainly possible to create an impossible sensory state by averaging two possible sensory states. For example, consider moving your hand in an arc over your head. If for some reason you only have the initial and final positions of this movement in your \(\Phi\)-space, averaging them together will produce the proprioceptive sensation of having your hand \emph{inside} your head, which is physically impossible to ever experience (barring motor adaptation illusions). Therefore I simply replicate the most recent sensory experience to fill in the gaps.

\begin{listing}
\begin{verbatim}
(defn infer-nils
  "Replace nils with the next available non-nil element in the
   sequence, or barring that, 0."
  [s]
  (loop [i (dec (count s))
         v (transient s)]
    (if (zero? i) (persistent! v)
        (if-let [cur (v i)]
          (if (get v (dec i) 0)
            (recur (dec i) v)
            (recur (dec i) (assoc! v (dec i) cur)))
          (recur i (assoc! v i 0))))))
\end{verbatim}
\caption{\label{infer-nils}Fill in blanks in sensory experience by replicating the most recent experience.}
\end{listing}
\subsection{\texttt{EMPATH} recognizes actions efficiently}
\label{sec-3-5}

To use \texttt{EMPATH} with the worm, I first need to gather a set of experiences from the worm that includes the actions I want to recognize. The \texttt{generate-phi-space} program (listing \ref{generate-phi-space}) runs the worm through a series of exercises and gathers those experiences into a vector. The \texttt{do-all-the-things} program is a routine expressed in a simple muscle contraction script language for automated worm control. It causes the worm to rest, curl, and wiggle over about 700 frames (approx. 11 seconds).

\begin{listing}
\begin{verbatim}
(def do-all-the-things
  (concat
   curl-script
   [[300 :d-ex 40]
    [320 :d-ex 0]]
   (shift-script 280 (take 16 wiggle-script))))

(defn generate-phi-space []
  (let [experiences (atom [])]
    (run-world
     (apply-map
      worm-world
      (merge
       (worm-world-defaults)
       {:end-frame 700
        :motor-control
        (motor-control-program worm-muscle-labels do-all-the-things)
        :experiences experiences})))
    @experiences))
\end{verbatim}
\caption{\label{generate-phi-space}Program to gather the worm's experiences into a vector for further processing. The \texttt{motor-control-program} line uses a motor control script that causes the worm to execute a series of ``exercises'' that include all the action predicates.}
\end{listing}

\begin{listing}
\begin{verbatim}
(defn init []
  (def phi-space (generate-phi-space))
  (def phi-scan (gen-phi-scan phi-space)))

(defn empathy-demonstration []
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))]
          (println-repl (vector:last-n exp-thread 22))
          (cond
           (grand-circle? empathy) (.setText text "Grand Circle")
           (curled? empathy)       (.setText text "Curled")
           (wiggling? empathy)     (.setText text "Wiggling")
           (resting? empathy)      (.setText text "Resting")
           :else                   (.setText text "Unknown")))))))

(defn empathy-experiment [record]
  (.start (worm-world :experience-watch (debug-experience-phi)
                      :record record :worm worm*)))
\end{verbatim}
\caption{\label{empathy-debug}Use \texttt{longest-thread} and a \(\Phi\)-space generated from a short exercise routine to interpret actions during free play.}
\end{listing}

These programs create a test for the empathy system. First, the worm's \(\Phi\)-space is generated from a simple motor script. Then the worm is re-created in an environment almost identical to the testing environment for the action predicates, with one major difference: the only sensory information available to the system is proprioception. From just the proprioception data and \(\Phi\)-space, \texttt{longest-thread} synthesizes a complete record of the last 300 sensory experiences of the worm.
These synthesized experiences are fed directly into the action predicates \texttt{grand-circle?}, \texttt{curled?}, \texttt{wiggling?}, and \texttt{resting?}, and their outputs are printed to the screen at each frame.

The result of running \texttt{empathy-experiment} is that the system is generally able to interpret worm actions using the action predicates on simulated sensory data just as well as with actual data. Figure \ref{empathy-debug-image} was generated using \texttt{empathy-experiment}:

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{./images/empathy-1.png}
\caption{\label{empathy-debug-image}From only proprioceptive data, \texttt{EMPATH} was able to infer the complete sensory experience and classify four poses. (The last panel shows a composite image of \emph{wiggling}, a dynamic pose.)}
\end{figure}

One way to measure the performance of \texttt{EMPATH} is to compare how reliably the imagined sense experience triggers the same action predicates as the real sensory experience.

\begin{listing}
\begin{verbatim}
(def worm-action-label
  (juxt grand-circle? curled? wiggling?))

(defn compare-empathy-with-baseline [matches]
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))
              experience-matches-empathy
              (= (worm-action-label experiences)
                 (worm-action-label empathy))]
          (println-repl experience-matches-empathy)
          (swap! matches #(conj % experience-matches-empathy)))))))

(defn accuracy [v]
  (float (/ (count (filter true? v)) (count v))))

(defn test-empathy-accuracy []
  (let [res (atom [])]
    (run-world
     (worm-world :experience-watch
                 (compare-empathy-with-baseline res)
                 :worm worm*))
    (accuracy @res)))
\end{verbatim}
\caption{\label{test-empathy-accuracy}Determine how closely empathy approximates actual sensory data.}
\end{listing}

Running \texttt{test-empathy-accuracy} using the very short exercise program \texttt{do-all-the-things} defined in listing \ref{generate-phi-space}, and then doing a similar pattern of activity using manual control of the worm, yields an accuracy of around 73\%. This is based on very limited worm experience, and almost all errors are due to the worm's \(\Phi\)-space being too incomplete to properly interpret common poses. By manually training the worm for longer using \texttt{init-interactive} defined in listing \ref{manual-phi-space}, the accuracy dramatically improves:

\begin{listing}
\begin{verbatim}
(defn init-interactive []
  (def phi-space
    (let [experiences (atom [])]
      (run-world
       (apply-map
        worm-world
        (merge
         (worm-world-defaults)
         {:experiences experiences})))
      @experiences))
  (def phi-scan (gen-phi-scan phi-space)))
\end{verbatim}
\caption{\label{manual-phi-space}Program to generate \(\Phi\)-space using manual training.}
\end{listing}

\texttt{init-interactive} allows me to take direct control of the worm's muscles and run it through each characteristic movement I care about. After about one minute of manual training, I was able to achieve 95\% accuracy on manual testing of the worm using \texttt{test-empathy-accuracy}.
The majority of disagreements are near the transition boundaries from one type of action to another. During these transitions the exact label for the action is often unclear, and disagreement between empathy and experience is practically irrelevant. Thus, the system's effective identification accuracy is even higher than 95\%. When I watch this system myself, I generally see no errors in action identification compared to my own judgment of what the worm is doing.

\subsection{Digression: Learning touch sensor layout through free play}
\label{sec-3-6}

In the previous sections I showed how to compute actions in terms of body-centered predicates, but some of those predicates relied on the average touch activation of pre-defined regions of the worm's skin. What if, instead of receiving touch pre-grouped into the six faces of each worm segment, the true partitioning of the worm's skin was unknown? This is more similar to how a nerve fiber bundle might be arranged inside an animal. While two fibers that are close in a nerve bundle \emph{might} correspond to two touch sensors that are close together on the skin, the process of taking a complicated surface and forcing it into essentially a 2D circle requires that some regions of skin that are close together in the animal end up far apart in the nerve bundle.

In this section I show how to automatically learn the skin-partitioning of a worm segment by free exploration. As the worm rolls around on the floor, large sections of its surface get activated. If the worm has stopped moving, then whatever region of skin is touching the floor is probably an important region, and should be recorded. The code I provide relies on the worm segment having flat faces, but still demonstrates a primitive kind of multi-sensory bootstrapping that I find appealing.

\begin{listing}
\begin{verbatim}
(def full-contact [(float 0.0) (float 0.1)])

(defn pure-touch?
  "This is worm-specific code to determine if a large region of touch
   sensors is either all on or all off."
  [[coords touch :as touch-data]]
  (= (set (map first touch)) (set full-contact)))
\end{verbatim}
\caption{\label{pure-touch}Program to detect whether the worm is in a resting state with one face touching the floor.}
\end{listing}

After collecting these important regions, there will be many nearly similar touch regions. While for some purposes the subtle differences between these regions will be important, for my purposes I collapse them into mostly non-overlapping sets using \texttt{remove-similar} in listing \ref{remove-similar}.

\begin{listing}
\begin{verbatim}
(defn remove-similar
  [coll]
  (loop [result () coll (sort-by (comp - count) coll)]
    (if (empty? coll) result
        (let [[x & xs] coll
              c (count x)]
          (if (some
               (fn [other-set]
                 (let [oc (count other-set)]
                   (< (- (count (union other-set x)) c) (* oc 0.1))))
               xs)
            (recur result xs)
            (recur (cons x result) xs))))))
\end{verbatim}
\caption{\label{remove-similar}Program to take a list of sets of points and ``collapse them'' so that the remaining sets in the list are significantly different from each other. Prefer smaller sets to larger ones.}
\end{listing}
Actually running this simulation is easy given \texttt{CORTEX}'s facilities.

\begin{listing}
\begin{verbatim}
(defn learn-touch-regions []
  (let [experiences (atom [])
        world (apply-map
               worm-world
               (assoc (worm-segment-defaults)
                      :experiences experiences))]
    (run-world world)
    (->>
     @experiences
     (drop 175)
     ;; access the single segment's touch data
     (map (comp first :touch))
     ;; only deal with "pure" touch data to determine surfaces
     (filter pure-touch?)
     ;; associate coordinates with touch values
     (map (partial apply zipmap))
     ;; select those regions where contact is being made
     (map (partial group-by second))
     (map #(get % full-contact))
     (map (partial map first))
     ;; remove redundant/subset regions
     (map set)
     remove-similar)))

(defn learn-and-view-touch-regions []
  (map view-touch-region
       (learn-touch-regions)))
\end{verbatim}
\caption{\label{learn-touch}Collect experiences while the worm moves around. Filter the touch sensations by stable ones, collapse similar ones together, and report the regions learned.}
\end{listing}

The only thing remaining to define is the particular motion the worm must take. I accomplish this with a simple motor control program.

\begin{listing}
\begin{verbatim}
(defn touch-kinesthetics []
  [[170 :lift-1 40]
   [190 :lift-1 19]
   [206 :lift-1 0]

   [400 :lift-2 40]
   [410 :lift-2 0]

   [570 :lift-2 40]
   [590 :lift-2 21]
   [606 :lift-2 0]

   [800 :lift-1 30]
   [809 :lift-1 0]

   [900 :roll-2 40]
   [905 :roll-2 20]
   [910 :roll-2 0]

   [1000 :roll-2 40]
   [1005 :roll-2 20]
   [1010 :roll-2 0]

   [1100 :roll-2 40]
   [1105 :roll-2 20]
   [1110 :roll-2 0]])
\end{verbatim}
\caption{\label{worm-roll}Motor control program for making the worm roll on the ground. This could also be replaced with random motion.}
\end{listing}

\begin{figure}[htb]
\centering
\includegraphics[width=12cm]{./images/worm-roll.png}
\caption{\label{worm-roll-fig}The small worm rolls around on the floor, driven by the motor control program in listing \ref{worm-roll}.}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=12cm]{./images/touch-learn.png}
\caption{\label{worm-touch-map}After completing its adventures, the worm now knows how its touch sensors are arranged along its skin. Each of these six rectangles is a touch sensory pattern that was deemed important by \texttt{learn-touch-regions}. Each white square in the rectangles above is a cluster of ``related'' touch nodes as determined by the system. The worm has correctly discovered that it has six faces, and has partitioned its sensory map into these six faces.}
\end{figure}

While simple, \texttt{learn-touch-regions} exploits regularities in both the worm's physiology and the worm's environment to correctly deduce that the worm has six sides. Note that \texttt{learn-touch-regions} would work just as well even if the worm's touch sense data were completely scrambled. The cross shape is just for convenience.
This3243 example justifies the use of pre-defined touch regions in \texttt{EMPATH}.3245 \subsection{Recognizing an object using embodied representation}3246 \label{sec-3-7}3248 At the beginning of the thesis, I suggested that we might recognize3249 the chair in Figure \ref{hidden-chair} by imagining ourselves in3250 the position of the man and realizing that he must be sitting on3251 something in order to maintain that position. Here, I present a3252 brief elaboration on how to this might be done.3254 First, I need the feeling of leaning or resting \emph{on} some other3255 object that is not the floor. This feeling is easy to describe3256 using an embodied representation.3258 \begin{listing}3259 \begin{verbatim}3260 (defn draped?3261 "Is the worm:3262 -- not flat (the floor is not a 'chair')3263 -- supported (not using its muscles to hold its position)3264 -- stable (not changing its position)3265 -- touching something (must register contact)"3266 [experiences]3267 (let [b2-hash (bin 2)3268 touch (:touch (peek experiences))3269 total-contact3270 (reduce3271 +3272 (map #(contact all-touch-coordinates %)3273 (rest touch)))]3274 (println total-contact)3275 (and (not (resting? experiences))3276 (every?3277 zero?3278 (-> experiences3279 (vector:last-n 25)3280 (#(map :muscle %))3281 (flatten)))3282 (-> experiences3283 (vector:last-n 20)3284 (#(map (comp b2-hash flatten :proprioception) %))3285 (set)3286 (count) (= 1))3287 (< 0.03 total-contact))))3288 \end{verbatim}3289 \caption{\label{draped}Program describing the sense of leaning or resting on something. This involves a relaxed posture, the feeling of touching something, and a period of stability where the worm does not move.}3290 \end{listing}3292 \begin{figure}[htb]3293 \centering3294 \includegraphics[width=13cm]{./images/draped.png}3295 \caption{\label{draped-video}The \texttt{draped?} predicate detects the presence of the cube whenever the worm interacts with it. The details of the cube are irrelevant; only the way it influences the worm's body matters. The ``unknown'' label on the fifth frame is due to the fact that the worm is not stationary. \texttt{draped?} will only declare that the worm is draped if it has been still for a while.}3296 \end{figure}3298 Though this is a simple example, using the \texttt{draped?} predicate to3299 detect a cube has interesting advantages. The \texttt{draped?} predicate3300 describes the cube not in terms of properties that the cube has,3301 but instead in terms of how the worm interacts with it physically.3302 This means that the cube can still be detected even if it is not3303 visible, as long as its influence on the worm's body is visible.3305 This system will also see the virtual cube created by a3306 ``mimeworm", which uses its muscles in a very controlled way to3307 mimic the appearance of leaning on a cube. The system will3308 anticipate that there is an actual invisible cube that provides3309 support!3311 \begin{figure}[htb]3312 \centering3313 \includegraphics[width=6cm]{./images/pablo-the-mime.png}3314 \caption{\label{mime}Can you see the thing that this person is leaning on? What properties does it have, other than how it makes the man's elbow and shoulder feel? I wonder if people who can actually maintain this pose easily still see the support?}3315 \end{figure}3317 This makes me wonder about the psychology of actual mimes. 
Suppose3318 for a moment that people have something analogous to \(\Phi\)-space and3319 that one of the ways that they find objects in a scene is by their3320 relation to other people's bodies. Suppose that a person watches a3321 person miming an invisible wall. For a person with no experience3322 with miming, their \(\Phi\)-space will only have entries that describe3323 the scene with the sensation of their hands touching a wall. This3324 sensation of touch will create a strong impression of a wall, even3325 though the wall would have to be invisible. A person with3326 experience in miming however, will have entries in their \(\Phi\)-space3327 that describe the wall-miming position without a sense of touch. It3328 will not seem to such as person that an invisible wall is present,3329 but merely that the mime is holding out their hands in a special3330 way. Thus, the theory that humans use something like \(\Phi\)-space3331 weakly predicts that learning how to mime should break the power of3332 miming illusions. Most optical illusions still work no matter how3333 much you know about them, so this proposal would be quite3334 interesting to test, as it predicts a non-standard result!3337 \clearpage3339 \section{Contributions}3340 \label{sec-4}3342 The big idea behind this thesis is a new way to represent and3343 recognize physical actions, which I call \emph{empathic representation}.3344 Actions are represented as predicates which have access to the3345 totality of a creature's sensory abilities. To recognize the3346 physical actions of another creature similar to yourself, you3347 imagine what they would feel by examining the position of their body3348 and relating it to your own previous experience.3350 Empathic representation of physical actions is robust and general.3351 Because the representation is body-centered, it avoids baking in a3352 particular viewpoint like you might get from learning from example3353 videos. Because empathic representation relies on all of a3354 creature's senses, it can describe exactly what an action \emph{feels3355 like} without getting caught up in irrelevant details such as visual3356 appearance. I think it is important that a correct description of3357 jumping (for example) should not include irrelevant details such as3358 the color of a person's clothes or skin; empathic representation can3359 get right to the heart of what jumping is by describing it in terms3360 of touch, muscle contractions, and a brief feeling of3361 weightlessness. Empathic representation is very low-level in that it3362 describes actions using concrete sensory data with little3363 abstraction, but it has the generality of much more abstract3364 representations!3366 Another important contribution of this thesis is the development of3367 the \texttt{CORTEX} system, a complete environment for creating simulated3368 creatures. You have seen how to implement five senses: touch,3369 proprioception, hearing, vision, and muscle tension. You have seen3370 how to create new creatures using Blender, a 3D modeling tool.3372 As a minor digression, you also saw how I used \texttt{CORTEX} to enable a3373 tiny worm to discover the topology of its skin simply by rolling on3374 the ground. 
You also saw how to detect objects using only embodied predicates.

In conclusion, for this thesis I:

\begin{itemize}
\item Developed the idea of embodied representation, which describes actions that a creature can do in terms of first-person sensory data.

\item Developed a method of empathic action recognition which uses previous embodied experience and embodied representation of actions to greatly constrain the possible interpretations of an action.

\item Created \texttt{EMPATH}, a program which uses empathic action recognition to recognize physical actions in a simple model involving segmented worm-like creatures.

\item Created \texttt{CORTEX}, a comprehensive platform for embodied AI experiments. It is the base on which \texttt{EMPATH} is built.
\end{itemize}

\clearpage
\appendix

\section{Appendix: \texttt{CORTEX} User Guide}
\label{sec-5}

Those who write a thesis should endeavor to make their code not only accessible, but actually usable, as a way to pay back the community that made the thesis possible in the first place. This thesis would not be possible without Free Software such as jMonkeyEngine3, Blender, Clojure, \texttt{emacs}, \texttt{ffmpeg}, and many other tools. That is why I have included this user guide, in the hope that someone else might find \texttt{CORTEX} useful.

\subsection{Obtaining \texttt{CORTEX}}
\label{sec-5-1}

You can get \texttt{CORTEX} from its Mercurial repository at \url{http://hg.bortreb.com/cortex}. You may also download \texttt{CORTEX} releases at \url{http://aurellem.org/cortex/releases/}. As a condition of making this thesis, I have also provided Professor Winston with the \texttt{CORTEX} source, and he knows how to run the demos and get started. You may also email me at \texttt{cortex@aurellem.org} and I may help where I can.

\subsection{Running \texttt{CORTEX}}
\label{sec-5-2}

\texttt{CORTEX} comes with README and INSTALL files that will guide you through installation and running the test suite. In particular, you should look at \texttt{cortex.test}, which contains test suites that run through all senses and multiple creatures.

\subsection{Creating creatures}
\label{sec-5-3}

Creatures are created using \emph{Blender}, a free 3D modeling program. You will need Blender version 2.6 when using the \texttt{CORTEX} included in this thesis. You create a \texttt{CORTEX} creature in a similar manner to modeling anything in Blender, except that you also create several trees of empty nodes which define the creature's senses.

\subsubsection{Mass}
\label{sec-5-3-1}

To give an object mass in \texttt{CORTEX}, add a ``mass'' metadata label to the object with the mass in jMonkeyEngine units. Note that setting the mass to 0 causes the object to be immovable.

\subsubsection{Joints}
\label{sec-5-3-2}

Joints are created by creating an empty node named \texttt{joints} and then creating any number of empty child nodes to represent your creature's joints. The joint will automatically connect the closest two physical objects.
It will help to set the empty node's display mode to ``Arrows'' so that you can clearly see the direction of the axes.

Joint nodes should have the following metadata under the ``joint'' label:

\begin{verbatim}
;; ONE of the following, under the label "joint":
{:type :point}

;; OR

{:type :hinge
 :limit [<limit-low> <limit-high>]
 :axis (Vector3f. <x> <y> <z>)}
;; (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

;; OR

{:type :cone
 :limit-xz <lim-xz>
 :limit-xy <lim-xy>
 :twist <lim-twist>}   ; (use XZY rotation mode in Blender!)
\end{verbatim}

\subsubsection{Eyes}
\label{sec-5-3-3}

Eyes are created by creating an empty node named \texttt{eyes} and then creating any number of empty child nodes to represent your creature's eyes.

Eye nodes should have the following metadata under the ``eye'' label:

\begin{verbatim}
{:red   <red-retina-definition>
 :blue  <blue-retina-definition>
 :green <green-retina-definition>
 :all   <all-retina-definition>
 (<0xrrggbb> <custom-retina-image>)...
}
\end{verbatim}

Any of the color channels may be omitted. You may also include your own color selectors, and in fact :red is equivalent to 0xFF0000 and so forth. The eye will be placed at the same position as the empty node and will bind to the nearest physical object. The eye will point outward from the X-axis of the node, and ``up'' will be in the direction of the X-axis of the node. It will help to set the empty node's display mode to ``Arrows'' so that you can clearly see the direction of the axes.

Each retina file should contain white pixels wherever you want to be sensitive to your chosen color. If you want the entire field of view, specify :all of 0xFFFFFF and a retinal map that is entirely white.

Here is a sample retinal map:

\begin{figure}[H]
\centering
\includegraphics[width=7cm]{./images/retina-small.png}
\caption{\label{retina}An example retinal profile image. White pixels are photo-sensitive elements. The distribution of white pixels is denser in the middle and falls off at the edges; it is inspired by the human retina.}
\end{figure}

\subsubsection{Hearing}
\label{sec-5-3-4}

Ears are created by creating an empty node named \texttt{ears} and then creating any number of empty child nodes to represent your creature's ears.

Ear nodes do not require any metadata.

The ear will bind to and follow the closest physical node.

\subsubsection{Touch}
\label{sec-5-3-5}

Touch is handled similarly to mass. To make a particular object touch-sensitive, add metadata of the following form under the object's ``touch'' metadata field:

\begin{verbatim}
<touch-UV-map-file-name>
\end{verbatim}

You may also include an optional ``scale'' metadata number to specify the length of the touch feelers. The default is \(0.1\), and this is generally sufficient.

The touch UV should contain white pixels for each touch sensor.

Here is an example touch-UV map that approximates a human finger, and its corresponding model.

\begin{figure}[htb]
\centering
\includegraphics[width=9cm]{./images/finger-UV.png}
\caption{\label{guide-fingertip-UV}This is the tactile-sensor-profile for the upper segment of a fingertip. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse).}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=9cm]{./images/finger-1.png}
\caption{\label{guide-fingertip}The fingertip UV-image from above applied to a simple model of a fingertip.}
\end{figure}

\subsubsection{Proprioception}
\label{sec-5-3-6}

Proprioception is tied to each joint node -- nothing special must be done in a Blender model to enable proprioception other than creating joint nodes.

\subsubsection{Muscles}
\label{sec-5-3-7}

Muscles are created by creating an empty node named \texttt{muscles} and then creating any number of empty child nodes to represent your creature's muscles.

Muscle nodes should have the following metadata under the ``muscle'' label:

\begin{verbatim}
<muscle-profile-file-name>
\end{verbatim}

Muscles should also have a ``strength'' metadata entry describing the muscle's total strength at full activation.

Muscle profiles are simple images that contain the relative amount of muscle power in each simulated alpha motor neuron. The width of the image is the total size of the motor pool, and the redness of each neuron is the relative power of that motor neuron.

While the profile image can have any dimensions, only the first line of pixels is used to define the muscle. Here is a sample muscle profile image that defines a human-like muscle.

\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{./images/basic-muscle.png}
\caption{\label{muscle-recruit}A muscle profile image that describes the strengths of each motor neuron in a muscle. White is weakest and dark red is strongest. This particular pattern has weaker motor neurons at the beginning, just like human muscle.}
\end{figure}

Muscles twist the nearest physical object about the muscle node's Z-axis. I recommend using the ``Single Arrow'' display mode for muscles and using the right-hand rule to determine which way the muscle will twist. To make a segment that can twist in multiple directions, create multiple, differently aligned muscles.

\subsection{\texttt{CORTEX} API}
\label{sec-5-4}

These are some of the functions exposed by \texttt{CORTEX} for creating worlds and simulating creatures. They are in addition to jMonkeyEngine3's extensive library, which is documented elsewhere. A short usage sketch that ties several of these functions together appears at the end of this guide.

\subsubsection{Simulation}
\label{sec-5-4-1}
\begin{description}
\item[{\texttt{(world root-node key-map setup-fn update-fn)}}] create a simulation.
\begin{description}
\item[{\emph{root-node} }] a \texttt{com.jme3.scene.Node} object which contains all of the objects that should be in the simulation.

\item[{\emph{key-map} }] a map from strings describing keys to functions that should be executed whenever that key is pressed. The functions should take a \texttt{SimpleApplication} object and a boolean value. The \texttt{SimpleApplication} is the current simulation that is running, and the boolean is true if the key is being pressed, and false if it is being released.
As an example,
\begin{verbatim}
{"key-j" (fn [game value] (if value (println "key j pressed")))}
\end{verbatim}
is a valid key-map which will cause the simulation to print a message whenever the 'j' key on the keyboard is pressed.

\item[{\emph{setup-fn} }] a function that takes a \texttt{SimpleApplication} object. It is called once when initializing the simulation. Use it to create things like lights, change the gravity, initialize debug nodes, etc.

\item[{\emph{update-fn} }] this function takes a \texttt{SimpleApplication} object and a float and is called every frame of the simulation. The float tells how many seconds it has been since the last frame was rendered, according to whatever clock jMonkeyEngine is currently using. The default is to use \texttt{IsoTimer}, which will result in this value always being the same.
\end{description}

\item[{\texttt{(position-camera world position rotation)}}] set the position of the simulation's main camera.

\item[{\texttt{(enable-debug world)}}] turn on debug wireframes for each simulated object.

\item[{\texttt{(set-gravity world gravity)}}] set the gravity of a running simulation.

\item[{\texttt{(box length width height \& \{options\})}}] create a box in the simulation. Options is a hash map specifying texture, mass, etc. Possible options are \texttt{:name}, \texttt{:color}, \texttt{:mass}, \texttt{:friction}, \texttt{:texture}, \texttt{:material}, \texttt{:position}, \texttt{:rotation}, \texttt{:shape}, and \texttt{:physical?}.

\item[{\texttt{(sphere radius \& \{options\})}}] create a sphere in the simulation. Options are the same as in \texttt{box}.

\item[{\texttt{(load-blender-model file-name)}}] create a node structure representing the model described in a Blender file.

\item[{\texttt{(light-up-everything world)}}] distribute a standard complement of lights throughout the simulation. Should be adequate for most purposes.

\item[{\texttt{(node-seq node)}}] return a recursive list of the node's children.

\item[{\texttt{(nodify name children)}}] construct a node given a node-name and desired children.

\item[{\texttt{(add-element world element)}}] add an object to a running world simulation.

\item[{\texttt{(set-accuracy world accuracy)}}] change the accuracy of the world's physics simulator.

\item[{\texttt{(asset-manager)}}] get an \emph{AssetManager}, a jMonkeyEngine construct that is useful for loading textures and is required for smooth interaction with jMonkeyEngine library functions.

\item[{\texttt{(load-bullet)} }] unpack native libraries and initialize the Bullet physics subsystem. This function is required before other world-building functions are called.
\end{description}

\subsubsection{Creature Manipulation / Import}
\label{sec-5-4-2}

\begin{description}
\item[{\texttt{(body! creature)}}] give the creature a physical body.

\item[{\texttt{(vision! creature)}}] give the creature a sense of vision. Returns a list of functions which will each, when called during a simulation, return the vision data for the channel of one of the eyes. The functions are ordered depending on the alphabetical order of the names of the eye nodes in the Blender file.
The data returned by the functions is a vector containing the eye's \emph{topology}, a vector of coordinates, and the eye's \emph{data}, a vector of RGB values filtered by the eye's sensitivity.

\item[{\texttt{(hearing! creature)}}] give the creature a sense of hearing. Returns a list of functions, one for each ear, that when called will return a frame's worth of hearing data for that ear. The functions are ordered depending on the alphabetical order of the names of the ear nodes in the Blender file. The data returned by the functions is an array of PCM (pulse-code modulated) WAV data.

\item[{\texttt{(touch! creature)}}] give the creature a sense of touch. Returns a single function that must be called with the \emph{root node} of the world, and which will return a vector of \emph{touch-data}, one entry for each touch-sensitive component. Each entry contains a \emph{topology} that specifies the distribution of touch sensors, and the \emph{data}, which is a vector of \texttt{[activation, length]} pairs for each touch hair.

\item[{\texttt{(proprioception! creature)}}] give the creature the sense of proprioception. Returns a list of functions, one for each joint, that when called during a running simulation will report the \texttt{[heading, pitch, roll]} of the joint.

\item[{\texttt{(movement! creature)}}] give the creature the power of movement. Creates a list of functions, one for each muscle, that when called with an integer, will set the recruitment of that muscle to that integer, and will report the current power being exerted by the muscle. Order of muscles is determined by the alphabetical sort order of the names of the muscle nodes.
\end{description}

\subsubsection{Visualization/Debug}
\label{sec-5-4-3}

\begin{description}
\item[{\texttt{(view-vision)}}] create a function that when called with a list of visual data returned from the functions made by \texttt{vision!}, will display that visual data on the screen.

\item[{\texttt{(view-hearing)}}] same as \texttt{view-vision} but for hearing.

\item[{\texttt{(view-touch)}}] same as \texttt{view-vision} but for touch.

\item[{\texttt{(view-proprioception)}}] same as \texttt{view-vision} but for proprioception.

\item[{\texttt{(view-movement)}}] same as \texttt{view-vision} but for muscles.

\item[{\texttt{(view anything)}}] \texttt{view} is a polymorphic function that allows you to inspect almost anything you could reasonably expect to be able to ``see'' in \texttt{CORTEX}.

\item[{\texttt{(text anything)}}] \texttt{text} is a polymorphic function that allows you to convert practically anything into a text string.

\item[{\texttt{(println-repl anything)}}] print messages to Clojure's REPL instead of the simulation's terminal window.

\item[{\texttt{(mega-import-jme3)}}] for experimenting at the REPL. This function will import all jMonkeyEngine3 classes for immediate use.

\item[{\texttt{(display-dilated-time world timer)}}] show the time as it is flowing in the simulation on a HUD display.
\end{description}
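
To tie these functions together, here is a minimal, hypothetical usage sketch; the shapes, colors, positions, and key binding below are invented for illustration, and \texttt{cortex.test} remains the authoritative source of complete, working examples:

\begin{verbatim}
(mega-import-jme3)  ; bring jMonkeyEngine classes (Vector3f, ColorRGBA, ...)
                    ; into the REPL
(load-bullet)       ; initialize the Bullet physics subsystem first

(def floor (box 10 0.1 10 :mass 0 :color ColorRGBA/Gray))    ; immovable
(def ball  (sphere 0.5 :mass 1 :position (Vector3f. 0 4 0))) ; falls onto floor

(def my-world
  (world (nodify "root" [floor ball])
         {"key-space" (fn [game pressed?]
                        (when pressed? (println "space pressed")))}
         (fn [world]                 ; setup-fn: runs once at startup
           (light-up-everything world))
         (fn [world tpf] nil)))      ; update-fn: no-op every frame

(run-world my-world)                 ; start the simulation, as in the
                                     ; listings earlier in this thesis
\end{verbatim}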