#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+LaTeX_CLASS_OPTIONS: [nofloat]

* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: one of the important ways in
which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference in how
the girl's /muscles/ are working in the two images of figure
\ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chair shows us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge onto
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard-coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with sensory data
     from prior experiences. For example, the alignment itself
     easily maps to proprioceptive data. Any sounds or obvious skin
     contact in the video can, to a lesser extent, trigger previous
     experience. Segments of previous experiences are stitched
     together to form a coherent and complete sensory portrait of
     the scene.
- Recognition :: With the scene described in terms of first-person
     sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.

For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and having learned that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close-up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the
   mouth. The imagined sensory experience can come from a
   simulation of the event, but can also be pattern-matched from
   previous, similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation,
   is not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate, so you (and the entity in the image) must be
   sitting.

6. There must be a chair-like object since you are sitting.

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

My program, =EMPATH=, uses this empathic problem-solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these poses by
#+caption: inferring the complete sensory experience from
#+caption: proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions.
If you teach the system what ``running'' is, and you have a good
enough aligner, the system will from then on be able to recognize
running from any point of view, even strange points of view like
above or underneath the runner. This is in contrast to action
recognition schemes that try to identify actions using a
non-embodied approach. If these systems learn about running as
viewed from the side, they will not automatically be able to
recognize running from any other viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers
a massive boost in descriptive capability. Consider how difficult
it would be to compose a set of HOG filters to describe the action
of a simple worm-creature ``curling'' so that its head touches its
tail, and then behold the simplicity of describing this action in a
language designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing

** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this one. =CORTEX= was necessary to meet a
need among AI researchers at CSAIL and beyond: people often invent
neat ideas that are best expressed in the language of creatures and
senses, but in order to explore those ideas they must first build a
platform in which they can create simulated creatures with rich
senses! There are many ideas that would be simple to execute (such
as =EMPATH=), but attached to them is the multi-month effort of
building a good creature simulator. Often, that initial investment
of time proves to be too much, and the project must make do with a
lesser environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:

- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special Blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing Blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load.

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in clojure, a dialect of =LISP= that runs on the
Java Virtual Machine (JVM). The API for creating and simulating
creatures and senses is entirely expressed in clojure, though many
senses are implemented at the layer of jMonkeyEngine or below. For
example, for the sense of hearing I use a layer of clojure code on
top of a layer of Java JNI bindings that drive a layer of =C++=
code which implements a modified version of =OpenAL= to support
multiple listeners. =CORTEX= is the only simulation environment
that I know of that can support multiple entities that can each
hear the world from their own perspective. Other senses also
require a small layer of Java code. =CORTEX= also uses =bullet=, a
physics simulator written in =C=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: blender-worm
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

Here are some things I anticipate that =CORTEX= might be used for:

- exploring new ideas about sensory integration
- distributed communication among swarm creatures
- self-learning using free exploration
- evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
- imagination using subworlds

During one test with =CORTEX=, I created 3,000 creatures, each with
its own independent senses, and ran them all at only 1/80 of real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch sensors (more
sensitive at the fingertips), as well as eyes and ears, and it ran
at around 1/4 of real time.

#+BEGIN_LaTeX
\begin{sidewaysfigure}
\includegraphics[width=9.5in]{images/full-hand.png}
\caption{
I modeled my own right hand in Blender and rigged it with all the
senses that {\tt CORTEX} supports. My simulated hand has a
biologically inspired distribution of touch sensors. The senses are
displayed on the right, and the simulation is displayed on the
left.
Notice that my hand is curling its fingers, that it can see
its own finger from the eye in its palm, and that it can feel its
own thumb touching its palm.}
\end{sidewaysfigure}
#+END_LaTeX

** Contributions

- I built =CORTEX=, a comprehensive platform for embodied AI
  experiments. =CORTEX= supports many features lacking in other
  systems, such as proper simulation of hearing. It is easy to
  create new =CORTEX= creatures using Blender, a free 3D modeling
  program.

- I built =EMPATH=, which uses =CORTEX= to identify the actions of
  a worm-like creature using a computational model of empathy.

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

Here I develop a computational model of empathy, using =CORTEX= as
a base. Empathy in this context is the ability to observe another
creature and infer what sorts of sensations that creature is
feeling. My empathy algorithm involves multiple phases. First is
free-play, where the creature moves around and gains sensory
experience. From this experience I construct a representation of
the creature's sensory state space, which I call \Phi-space. Using
\Phi-space, I construct an efficient function which takes the
limited data that comes from observing another creature and
enriches it into a full complement of imagined sensory data. I can
then use the imagined sensory data to recognize what the observed
creature is doing and feeling, using straightforward embodied
action predicates. This is all demonstrated using a simple
worm-like creature, and by recognizing worm-actions based on
limited data.

#+caption: Here is the worm with which we will be working.
#+caption: It is composed of 5 segments. Each segment has a
#+caption: pair of extensor and flexor muscles. Each of the
#+caption: worm's four joints is a hinge joint which allows
#+caption: about 30 degrees of rotation to either side. Each segment
#+caption: of the worm is touch-capable and has a uniform
#+caption: distribution of touch sensors on each of its faces.
#+caption: Each joint has a proprioceptive sense to detect
#+caption: relative positions. The worm segments are all the
#+caption: same except for the first one, which has a much
#+caption: higher weight than the others to allow for easy
#+caption: manual motor control.
#+name: basic-worm-view
#+ATTR_LaTeX: :width 10cm
[[./images/basic-worm-view.png]]

#+caption: Program for reading a worm from a Blender file and
#+caption: outfitting it with the senses of proprioception,
#+caption: touch, and the ability to move, as specified in the
#+caption: Blender file.
#+name: get-worm
#+begin_listing clojure
#+begin_src clojure
(defn worm []
  (let [model (load-blender-model "Models/worm/worm.blend")]
    {:body (doto model (body!))
     :touch (touch! model)
     :proprioception (proprioception! model)
     :muscles (movement! model)}))
#+end_src
#+end_listing

** Embodiment factors action recognition into manageable parts

Using empathy, I divide the problem of action recognition into a
recognition process expressed in the language of a full complement
of senses, and an imaginative process that generates full sensory
data from partial sensory data. Splitting the action recognition
problem in this manner greatly reduces the total amount of work to
recognize actions: the imaginative process is mostly just matching
previous experience, and the recognition process gets to use all
the senses to directly describe any action.

** Action recognition is easy with a full gamut of senses

An embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines
of LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 10 lines to describe the seemingly
complicated action of wiggling.

The following action predicates each take a stream of sensory
experience, observe however much of it they desire, and decide
whether the worm is doing the action they describe. =curled?=
relies on proprioception, =resting?= relies on touch, =wiggling?=
relies on a Fourier analysis of muscle contraction, and
=grand-circle?= relies on touch and reuses =curled?= as a guard.

#+caption: Program for detecting whether the worm is curled. This is the
#+caption: simplest action predicate, because it only uses the last frame
#+caption: of sensory experience, and only uses proprioceptive data. Even
#+caption: this simple predicate, however, is automatically frame
#+caption: independent and ignores vermopomorphic differences such as
#+caption: worm textures and colors.
#+name: curled
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn curled?
  "Is the worm curled up?"
  [experiences]
  (every?
   (fn [[_ _ bend]]
     (> (Math/sin bend) 0.64))
   (:proprioception (peek experiences))))
#+end_src
#+end_listing

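The touch-based predicates below carve out patches of a segment's
touch-sensor map with =rect-region=, whose definition lives in the
=CORTEX= utilities and is not reproduced in this chapter. The
following is only a minimal sketch of such a helper, under the
assumption that touch sensors are addressed by integer [x y]
coordinates and that both corners are inclusive; the actual
definition may differ.

#+begin_src clojure
;; Hypothetical sketch of =rect-region= -- not the actual =CORTEX=
;; definition. It enumerates the integer [x y] coordinates inside an
;; axis-aligned rectangle, inclusive of both corners.
(defn rect-region
  "Return the set of [x y] coordinates bounded by the corners
  [x0 y0] and [x1 y1] (inclusive)."
  [[x0 y0] [x1 y1]]
  (set (for [x (range x0 (inc x1))
             y (range y0 (inc y1))]
         [x y])))
#+end_src

Any collection of coordinates would work equally well here, since
=contact= below only uses the region to select keys out of a
coordinate-to-activation map.
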
#+caption: Program for summarizing the touch information in a patch
#+caption: of skin.
#+name: touch-summary
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn contact
  "Determine how much contact a particular worm segment has with
   other objects. Returns a value between 0 and 1, where 1 is full
   contact and 0 is no contact."
  [touch-region [coords contact :as touch]]
  (-> (zipmap coords contact)
      (select-keys touch-region)
      (vals)
      (#(map first %))
      (average)
      (* 10)
      (- 1)
      (Math/abs)))
#+end_src
#+end_listing

#+caption: Program for detecting whether the worm is at rest. This program
#+caption: uses a summary of the tactile information from the underbelly
#+caption: of the worm, and is only true if every segment is touching the
#+caption: floor. Note that this function contains no references to
#+caption: proprioception at all.
#+name: resting
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def worm-segment-bottom (rect-region [8 15] [14 22]))

(defn resting?
  "Is the worm resting on the ground?"
  [experiences]
  (every?
   (fn [touch-data]
     (< 0.9 (contact worm-segment-bottom touch-data)))
   (:touch (peek experiences))))
#+end_src
#+end_listing

#+caption: Program for detecting whether the worm is curled up into a
#+caption: full circle. Here the embodied approach begins to shine, as
#+caption: I am able to use both a previous action predicate (=curled?=)
#+caption: and the direct tactile experience of the head and tail.
#+name: grand-circle
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def worm-segment-bottom-tip (rect-region [15 15] [22 22]))

(def worm-segment-top-tip (rect-region [0 15] [7 22]))

(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing

#+caption: Program for detecting whether the worm has been wiggling for
#+caption: the last few frames. It uses a Fourier analysis of the muscle
#+caption: contractions of the worm's tail to determine wiggling. This is
#+caption: significant because there is no particular frame that clearly
#+caption: indicates that the worm is wiggling --- only when multiple frames
#+caption: are analyzed together is the wiggling revealed. Defining
#+caption: wiggling this way also gives the worm an opportunity to learn
#+caption: and recognize ``frustrated wiggling'', where the worm tries to
#+caption: wiggle but can't. Frustrated wiggling is very visually different
#+caption: from actual wiggling, but this definition gives it to us for free.
#+name: wiggling
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn fft [nums]
  (map
   #(.getReal %)
   (.transform
    (FastFourierTransformer. DftNormalization/STANDARD)
    (double-array nums) TransformType/FORWARD)))

(def indexed (partial map-indexed vector))

(defn max-indexed [s]
  (first (sort-by (comp - second) (indexed s))))

(defn wiggling?
  "Is the worm wiggling?"
  [experiences]
  (let [analysis-interval 0x40]
    (when (> (count experiences) analysis-interval)
      (let [a-flex 3
            a-ex 2
            muscle-activity
            (map :muscle (vector:last-n experiences analysis-interval))
            base-activity
            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
        (= 2
           (first
            (max-indexed
             (map #(Math/abs %)
                  (take 20 (fft base-activity))))))))))
#+end_src
#+end_listing

With these action predicates, I can now recognize the actions of
the worm while it is moving under my control and I have access to
all the worm's senses.

#+caption: Use the action predicates defined earlier to report on
#+caption: what the worm is doing while in simulation.
#+name: report-worm-activity
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn debug-experience
  [experiences text]
  (cond
   (grand-circle? experiences) (.setText text "Grand Circle")
   (curled? experiences) (.setText text "Curled")
   (wiggling? experiences) (.setText text "Wiggling")
   (resting? experiences) (.setText text "Resting")))
#+end_src
#+end_listing

#+caption: Using =debug-experience=, the body-centered predicates
#+caption: work together to classify the behaviour of the worm.
#+caption: The predicates are operating with access to the worm's
#+caption: full sensory data.
#+name: worm-identify-init
#+ATTR_LaTeX: :width 10cm
[[./images/worm-identify-init.png]]

These action predicates satisfy the recognition requirement of an
empathic recognition system. There is power in the simplicity of
the action predicates. They describe their actions without getting
confused by the visual details of the worm. Each one is frame
independent, but more than that, they are each independent of
irrelevant visual details of the worm and the environment. They
will work regardless of whether the worm is a different color or
heavily textured, or if the environment has strange lighting.

The trick now is to make the action predicates work even when the
sensory data on which they depend is absent. If I can do that, then
I will have gained much.

** \Phi-space describes the worm's experiences

As a first step towards building empathy, I need to gather all of
the worm's experiences during free play. I use a simple vector to
store all the experiences.
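
To make the data structures concrete, here is a hypothetical sketch
of what a single entry in the experience vector might look like.
The field layout shown is inferred from how the action predicates
above consume the data; it is an assumption for illustration, not a
canonical =CORTEX= specification, and only one joint and two tiny
two-sensor segments are shown.

#+begin_src clojure
;; Hypothetical example of one frame of worm experience. The layout
;; is inferred from the predicates above and is an assumption, not
;; the canonical =CORTEX= format. A real frame has four joints, five
;; segments, and hundreds of touch sensors per segment.
(def example-experience-frame
  {;; one triple per joint; the third element is the bend angle (radians)
   :proprioception [[0.0 0.0 0.3]]
   ;; one [coords activations] pair per segment: coords are sensor
   ;; [x y] positions, activations are parallel [proximity range]
   ;; pairs, with 0.0 meaning the sensor is touching something
   :touch [[[[8 15] [9 15]] [[0.0 0.1] [0.0 0.1]]]
           [[[8 15] [9 15]] [[0.1 0.1] [0.1 0.1]]]]
   ;; activation strength of each muscle on this frame
   :muscle [0 0 15 0]})
#+end_src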

Each element of the experience vector exists in the vast space of
all possible worm-experiences. Most of this vast space is actually
unreachable due to physical constraints of the worm's body. For
example, the worm's segments are connected by hinge joints that put
a practical limit on the worm's range of motions without limiting
its degrees of freedom. Some groupings of senses are impossible;
the worm cannot be bent into a circle so that its ends are touching
and at the same time not also experience the sensation of touching
itself.

As the worm moves around during free play and its experience vector
grows larger, the vector begins to define a subspace which is all
the sensations the worm can practically experience during normal
operation. I call this subspace \Phi-space, short for
physical-space. The experience vector defines a path through
\Phi-space. This path has interesting properties that all derive
from physical embodiment. The proprioceptive components are
completely smooth, because in order for the worm to move from one
position to another, it must pass through the intermediate
positions. The path invariably forms loops as actions are repeated.
Finally and most importantly, proprioception actually gives very
strong inference about the other senses. For example, when the worm
is flat, you can infer that it is touching the ground and that its
muscles are not active, because if the muscles were active, the
worm would be moving and would not be perfectly flat. In order to
stay flat, the worm has to be touching the ground, or it would
again be moving out of the flat position due to gravity. If the
worm is positioned in such a way that it interacts with itself,
then it is very likely to be feeling the same tactile feelings as
the last time it was in that position, because it has the same body
as then. If you observe multiple frames of proprioceptive data,
then you can become increasingly confident about the exact
activations of the worm's muscles, because it generally takes a
unique combination of muscle contractions to transform the worm's
body along a specific path through \Phi-space.

There is a simple way of taking \Phi-space and the total ordering
provided by an experience vector and reliably inferring the rest of
the senses.

** Empathy is the process of tracing through \Phi-space

Here is the core of a basic empathy algorithm, starting with an
experience vector:

First, group the experiences into tiered proprioceptive bins. I use
three tiers of bins whose resolutions differ by powers of 10; the
smallest bin has an approximate size of 0.001 radians in all
proprioceptive dimensions.

Then, given a sequence of proprioceptive input, generate a set of
matching experience records for each input, using the tiered
proprioceptive bins.

Finally, to infer sensory data, select the longest consecutive
chain of experiences. Consecutive means that the experiences appear
next to each other in the experience vector.

This algorithm has three advantages:

1. It's simple.

2. It's very fast -- retrieving possible interpretations takes
   constant time. Tracing through chains of interpretations takes
   time proportional to the average number of experiences in a
   proprioceptive bin. Redundant experiences in \Phi-space can be
   merged to save computation.

3. It protects from wrong interpretations of transient ambiguous
   proprioceptive data. For example, if the worm is flat for just
   an instant, this flatness will not be interpreted as implying
   that the worm has its muscles relaxed, since the flatness is
   part of a longer chain which includes a distinct pattern of
   muscle activation. Markov chains or other memoryless statistical
   models that operate on individual frames may very well make this
   mistake.
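
To make the tiered binning concrete, here is a hypothetical REPL
interaction showing the keys produced for a single joint bent by
0.3 radians, from coarsest to finest. It uses the =bin= function
defined in listing \ref{bin} below; the angle is first mapped
through sine and cosine, so each tier is really a resolution in
sin/cos space.

#+begin_src clojure
;; Hypothetical REPL session illustrating the three tiers of
;; proprioceptive bins used by =gen-phi-scan=.
((bin 1) [[0.0 0.0 0.3]])
;; => [0 1 0 1 0 1]        ; sin/cos rounded to the nearest integer
((bin 2) [[0.0 0.0 0.3]])
;; => [0 10 0 10 3 10]     ; a resolution of 0.1
((bin 3) [[0.0 0.0 0.3]])
;; => [0 100 0 100 30 96]  ; a resolution of 0.01
#+end_src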

#+caption: Program to convert an experience vector into a
#+caption: proprioceptively binned lookup function.
#+name: bin
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn bin [digits]
  (fn [angles]
    (->> angles
         (flatten)
         (map (juxt #(Math/sin %) #(Math/cos %)))
         (flatten)
         (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))

(defn gen-phi-scan
  "Nearest-neighbors with binning. Only returns a result if
   the proprioceptive data is within 10% of a previously recorded
   result in all dimensions."
  [phi-space]
  (let [bin-keys (map bin [3 2 1])
        bin-maps
        (map (fn [bin-key]
               (group-by
                (comp bin-key :proprioception phi-space)
                (range (count phi-space)))) bin-keys)
        lookups (map (fn [bin-key bin-map]
                       (fn [proprio] (bin-map (bin-key proprio))))
                     bin-keys bin-maps)]
    (fn lookup [proprio-data]
      (set (some #(% proprio-data) lookups)))))
#+end_src
#+end_listing

#+caption: =longest-thread= finds the longest path of consecutive
#+caption: experiences to explain proprioceptive worm data.
#+name: phi-space-history-scan
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]

=longest-thread= infers sensory data by stitching together pieces
from previous experience. It prefers longer chains of previous
experience to shorter ones. For example, during training the worm
might rest on the ground for one second before it performs its
exercises. If during recognition the worm rests on the ground for
five seconds, =longest-thread= will accommodate this five-second
rest period by looping the one-second rest chain five times.

=longest-thread= takes time proportional to the average number of
entries in a proprioceptive bin, because for each element in the
starting bin it performs a series of set lookups in the preceding
bins. If the total history is limited, then this is only a constant
multiple times the number of entries in the starting bin. This
analysis also applies even if the action requires multiple longest
chains -- it's still the average number of entries in a
proprioceptive bin times the desired chain length. Because
=longest-thread= is so efficient and simple, I can interpret
worm-actions in real time.

#+caption: Program to calculate empathy by tracing through \Phi-space
#+caption: and finding the longest (i.e., most coherent) interpretation
#+caption: of the data.
#+name: longest-thread
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn longest-thread
  "Find the longest thread from phi-index-sets. The index sets should
   be ordered from most recent to least recent."
  [phi-index-sets]
  (loop [result '()
         [thread-bases & remaining :as phi-index-sets] phi-index-sets]
    (if (empty? phi-index-sets)
      (vec result)
      (let [threads
            (for [thread-base thread-bases]
              (loop [thread (list thread-base)
                     remaining remaining]
                (let [next-index (dec (first thread))]
                  (cond (empty? remaining) thread
                        (contains? (first remaining) next-index)
                        (recur
                         (cons next-index thread) (rest remaining))
                        :else thread))))
            longest-thread
            (reduce (fn [thread-a thread-b]
                      (if (> (count thread-a) (count thread-b))
                        thread-a thread-b))
                    '(nil)
                    threads)]
        (recur (concat longest-thread result)
               (drop (count longest-thread) phi-index-sets))))))
#+end_src
#+end_listing

There is one final piece, which is to replace missing sensory data
with a best-guess estimate. While I could fill in missing data by
using a gradient over the closest known sensory data points,
averages can be misleading. It is certainly possible to create an
impossible sensory state by averaging two possible sensory states.
Therefore, I simply replicate the most recent sensory experience to
fill in the gaps.

#+caption: Fill in blanks in sensory experience by replicating the most
#+caption: recent experience.
#+name: infer-nils
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn infer-nils
  "Replace nils with the next available non-nil element in the
   sequence, or barring that, 0."
  [s]
  (loop [i (dec (count s))
         v (transient s)]
    (if (zero? i) (persistent! v)
        (if-let [cur (v i)]
          (if (get v (dec i) 0)
            (recur (dec i) v)
            (recur (dec i) (assoc! v (dec i) cur)))
          (recur i (assoc! v i 0))))))
#+end_src
#+end_listing
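
As a sanity check, here is a hypothetical REPL interaction with
=infer-nils=: each nil is replaced by the next available non-nil
element in the vector, and a trailing nil, with nothing after it to
copy, falls back to 0.

#+begin_src clojure
;; Hypothetical REPL session illustrating =infer-nils=.
(infer-nils [1 nil nil 2])  ;; => [1 2 2 2]
(infer-nils [nil nil 1])    ;; => [1 1 1]
(infer-nils [1 nil])        ;; => [1 0]
#+end_src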

** Efficient action recognition with =EMPATH=

To use =EMPATH= with the worm, I first need to gather a set of
experiences from the worm that includes the actions I want to
recognize. The =generate-phi-space= program (listing
\ref{generate-phi-space}) runs the worm through a series of
exercises and gathers those experiences into a vector. The
=do-all-the-things= program is a routine expressed in a simple
muscle contraction script language for automated worm control. It
causes the worm to rest, curl, and wiggle over about 700 frames
(approx. 11 seconds).

#+caption: Program to gather the worm's experiences into a vector for
#+caption: further processing. The =motor-control-program= line uses
#+caption: a motor control script that causes the worm to execute a series
#+caption: of ``exercises'' that include all the action predicates.
#+name: generate-phi-space
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def do-all-the-things
  (concat
   curl-script
   [[300 :d-ex 40]
    [320 :d-ex 0]]
   (shift-script 280 (take 16 wiggle-script))))

(defn generate-phi-space []
  (let [experiences (atom [])]
    (run-world
     (apply-map
      worm-world
      (merge
       (worm-world-defaults)
       {:end-frame 700
        :motor-control
        (motor-control-program worm-muscle-labels do-all-the-things)
        :experiences experiences})))
    @experiences))
#+end_src
#+end_listing

#+caption: Use =longest-thread= and a \Phi-space generated from a short
#+caption: exercise routine to interpret actions during free play.
#+name: empathy-debug
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn init []
  (def phi-space (generate-phi-space))
  (def phi-scan (gen-phi-scan phi-space)))

(defn empathy-demonstration []
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))]
          (println-repl (vector:last-n exp-thread 22))
          (cond
           (grand-circle? empathy) (.setText text "Grand Circle")
           (curled? empathy) (.setText text "Curled")
           (wiggling? empathy) (.setText text "Wiggling")
           (resting? empathy) (.setText text "Resting")
           :else (.setText text "Unknown")))))))

(defn empathy-experiment [record]
  (.start (worm-world :experience-watch (debug-experience-phi)
                      :record record :worm worm*)))
#+end_src
#+end_listing

The result of running =empathy-experiment= is that the system is
generally able to interpret worm actions using the action-predicates
on simulated sensory data just as well as with actual data. Figure
\ref{empathy-debug-image} was generated using =empathy-experiment=:

#+caption: From only proprioceptive data, =EMPATH= was able to infer
#+caption: the complete sensory experience and classify four poses.
#+caption: (The last panel shows a composite image of \emph{wiggling},
#+caption: a dynamic pose.)
#+name: empathy-debug-image
#+ATTR_LaTeX: :width 10cm :placement [H]
[[./images/empathy-1.png]]

One way to measure the performance of =EMPATH= is to compare the
suitability of the imagined sense experience to trigger the same
action predicates as the real sensory experience.

#+caption: Determine how closely empathy approximates actual
#+caption: sensory data.
#+name: test-empathy-accuracy
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def worm-action-label
  (juxt grand-circle? curled? wiggling?))

(defn compare-empathy-with-baseline [matches]
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))
              experience-matches-empathy
              (= (worm-action-label experiences)
                 (worm-action-label empathy))]
          (println-repl experience-matches-empathy)
          (swap! matches #(conj % experience-matches-empathy)))))))

(defn accuracy [v]
  (float (/ (count (filter true? v)) (count v))))

(defn test-empathy-accuracy []
  (let [res (atom [])]
    (run-world
     (worm-world :experience-watch
                 (compare-empathy-with-baseline res)
                 :worm worm*))
    (accuracy @res)))
#+end_src
#+end_listing

Running =test-empathy-accuracy= using the very short exercise
program defined in listing \ref{generate-phi-space}, and then doing
a similar pattern of activity manually, yields an accuracy of around
73%. This is based on very limited worm experience. By training the
worm for longer, the accuracy dramatically improves.

#+caption: Program to generate \Phi-space using manual training.
#+name: manual-phi-space
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn init-interactive []
  (def phi-space
    (let [experiences (atom [])]
      (run-world
       (apply-map
        worm-world
        (merge
         (worm-world-defaults)
         {:experiences experiences})))
      @experiences))
  (def phi-scan (gen-phi-scan phi-space)))
#+end_src
#+end_listing

After about 1 minute of manual training, I was able to achieve 95%
accuracy on manual testing of the worm using =init-interactive= and
=test-empathy-accuracy=. The majority of errors occur near the
boundaries where one type of action transitions into another.
During these transitions the exact label for the action is more
open to interpretation, and disagreement between empathy and
experience is more excusable.

** Digression: bootstrapping touch using free exploration

In the previous section I showed how to compute actions in terms of
body-centered predicates which relied on the average touch
activation of pre-defined regions of the worm's skin. What if,
instead of receiving touch pre-grouped into the six faces of each
worm segment, the true topology of the worm's skin were unknown?
This is more similar to how a nerve fiber bundle might be arranged.
While two fibers that are close in a nerve bundle /might/
correspond to two touch sensors that are close together on the
skin, the process of taking a complicated surface and forcing it
into essentially a circle requires some cuts and rearrangements.

In this section I show how to automatically learn the skin topology
of a worm segment by free exploration. As the worm rolls around on
the floor, large sections of its surface get activated. If the worm
has stopped moving, then whatever region of skin is touching the
floor is probably an important region, and should be recorded.

#+caption: Program to detect whether the worm is in a resting state
#+caption: with one face touching the floor.
#+name: pure-touch
#+begin_listing clojure
#+begin_src clojure
(def full-contact [(float 0.0) (float 0.1)])

(defn pure-touch?
  "This is worm-specific code to determine if a large region of touch
   sensors is either all on or all off."
  [[coords touch :as touch-data]]
  (= (set (map first touch)) (set full-contact)))
#+end_src
#+end_listing

After collecting these important regions, there will be many nearly
similar touch regions. While for some purposes the subtle
differences between these regions will be important, for my
purposes I collapse them into mostly non-overlapping sets using
=remove-similar= in listing \ref{remove-similar}.

#+caption: Program to take a list of sets of points and ``collapse them''
#+caption: so that the remaining sets in the list are significantly
#+caption: different from each other. Prefer smaller sets to larger ones.
#+name: remove-similar
#+begin_listing clojure
#+begin_src clojure
(defn remove-similar
  [coll]
  (loop [result () coll (sort-by (comp - count) coll)]
    (if (empty? coll) result
        (let [[x & xs] coll
              c (count x)]
          (if (some
               (fn [other-set]
                 (let [oc (count other-set)]
                   (< (- (count (union other-set x)) c) (* oc 0.1))))
               xs)
            (recur result xs)
            (recur (cons x result) xs))))))
#+end_src
#+end_listing

Actually running this simulation is easy given =CORTEX='s facilities.

#+caption: Collect experiences while the worm moves around. Filter the touch
#+caption: sensations by stable ones, collapse similar ones together,
#+caption: and report the regions learned.
#+name: learn-touch
#+begin_listing clojure
#+begin_src clojure
(defn learn-touch-regions []
  (let [experiences (atom [])
        world (apply-map
               worm-world
               (assoc (worm-segment-defaults)
                 :experiences experiences))]
    (run-world world)
    (->>
     @experiences
     (drop 175)
     ;; access the single segment's touch data
     (map (comp first :touch))
     ;; only deal with "pure" touch data to determine surfaces
     (filter pure-touch?)
     ;; associate coordinates with touch values
     (map (partial apply zipmap))
     ;; select those regions where contact is being made
     (map (partial group-by second))
     (map #(get % full-contact))
     (map (partial map first))
     ;; remove redundant/subset regions
     (map set)
     remove-similar)))

(defn learn-and-view-touch-regions []
  (map view-touch-region
       (learn-touch-regions)))
#+end_src
#+end_listing

The only thing remaining to define is the particular motion the
worm must take. I accomplish this with a simple motor control
program.
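
Each entry in the script below is read here as a [frame
muscle-label strength] triple: at the given frame, the named muscle
is set to the given activation strength. This reading, and the
assumption that an activation is held until a later entry changes
it, are inferred from the scripts above rather than from any
documented contract of =motor-control-program=.

#+begin_src clojure
;; Hypothetical miniature script under the assumed [frame label
;; strength] reading: start lifting at frame 170, relax at frame 206.
(def tiny-roll-script
  [[170 :lift-1 40]
   [206 :lift-1 0]])
#+end_src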

#+caption: Motor control program for making the worm roll on the ground.
#+caption: This could also be replaced with random motion.
#+name: worm-roll
#+begin_listing clojure
#+begin_src clojure
(defn touch-kinesthetics []
  [[170 :lift-1 40]
   [190 :lift-1 19]
   [206 :lift-1 0]

   [400 :lift-2 40]
   [410 :lift-2 0]

   [570 :lift-2 40]
   [590 :lift-2 21]
   [606 :lift-2 0]

   [800 :lift-1 30]
   [809 :lift-1 0]

   [900 :roll-2 40]
   [905 :roll-2 20]
   [910 :roll-2 0]

   [1000 :roll-2 40]
   [1005 :roll-2 20]
   [1010 :roll-2 0]

   [1100 :roll-2 40]
   [1105 :roll-2 20]
   [1110 :roll-2 0]
   ])
#+end_src
#+end_listing

#+caption: The small worm rolls around on the floor, driven
#+caption: by the motor control program in listing \ref{worm-roll}.
#+name: worm-roll-view
#+ATTR_LaTeX: :width 12cm
[[./images/worm-roll.png]]

#+caption: After completing its adventures, the worm now knows
#+caption: how its touch sensors are arranged along its skin. These
#+caption: are the regions that were deemed important by
#+caption: =learn-touch-regions=. Note that the worm has discovered
#+caption: that it has six sides.
#+name: worm-touch-map
#+ATTR_LaTeX: :width 12cm
[[./images/touch-learn.png]]

While simple, =learn-touch-regions= exploits regularities in both
the worm's physiology and the worm's environment to correctly
deduce that the worm has six sides. Note that =learn-touch-regions=
would work just as well even if the worm's touch sense data were
completely scrambled. The cross shape is just for convenience. This
example justifies the use of pre-defined touch regions in =EMPATH=.

* Contributions

In this thesis you have seen the =CORTEX= system, a complete
environment for creating simulated creatures. You have seen how to
implement five senses including touch, proprioception, hearing,
vision, and muscle tension. You have seen how to create new
creatures using Blender, a 3D modeling tool. I hope that =CORTEX=
will be useful in further research projects. To this end I have
included the full source to =CORTEX= along with a large suite of
tests and examples. I have also created a user guide for =CORTEX=
which is included in an appendix to this thesis.

You have also seen how I used =CORTEX= as a platform to attack the
/action recognition/ problem, which is the problem of recognizing
actions in video. You saw a simple system called =EMPATH= which
identifies actions by first describing actions in a body-centered,
rich sense language, then inferring a full range of sensory
experience from limited data using previous experience gained from
free play.

As a minor digression, you also saw how I used =CORTEX= to enable a
tiny worm to discover the topology of its skin simply by rolling on
the ground.

In conclusion, the main contributions of this thesis are:

- =CORTEX=, a system for creating simulated creatures with rich
  senses.
- =EMPATH=, a program for recognizing actions by imagining sensory
  experience.

# An anatomical joke:
# - Training
# - Skeletal imitation
# - Sensory fleshing-out
# - Classification