#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment

* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: one of the important ways in
which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to capture at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in the two images of figure
\ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chair example shows us that we are strongly triggered by
cues relating to the position of human bodies, and that we can
determine the overall physical configuration of a human body even
if much of that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard-coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with sensory data
     from prior experiences. For example, the alignment itself
     easily maps to proprioceptive data. Any sounds or obvious skin
     contact in the video can, to a lesser extent, trigger previous
     experiences. Segments of previous experiences are stitched
     together to form a coherent and complete sensory portrait of
     the scene.
- Recognition :: With the scene described in terms of first-person
     sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.
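
To make the shape of this pipeline concrete, here is a minimal
structural sketch of how the four parts might compose. Every name
in it (=play!=, =align-to-video=, =infer-senses=,
=identify-action=) is a hypothetical placeholder invented for this
illustration, not part of =EMPATH='s actual API; the sketch shows
structure, not a working implementation.

#+begin_src clojure
;; Hypothetical placeholders (declared so the sketch compiles):
;;   play!            -- free/guided play: returns a vector of sense maps
;;   align-to-video   -- posture imitation: returns a proprioceptive trace
;;   infer-senses     -- empathy: enriches the trace from prior experience
;;   identify-action  -- recognition: runs body-centered action predicates
(declare play! align-to-video infer-senses identify-action)

(defn interpret-video
  "Label the action in `video` by imagining it from the inside."
  [creature video]
  (let [experiences (play! creature)                     ; 1. free/guided play
        alignment   (align-to-video creature video)      ; 2. posture imitation
        imagined    (infer-senses alignment experiences) ; 3. empathy
        action      (identify-action imagined)]          ; 4. recognition
    action))
#+end_src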
For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and having learned that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close-up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the
   mouth. The imagined sensory experience can come from a
   simulation of the event, but can also be pattern-matched from
   previous, similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note
   that the sense of vision, while critical in creating the
   simulation, is not critical for identifying the action from the
   simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular
   proprioceptive feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate, so you (and the entity in the image) must be
   sitting.

6. There must be a chair-like object since you are sitting.

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.
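
To give a flavor of what such a sensory predicate could look like,
here is an illustrative sketch of a =sitting?= test written in the
same body-centered style used later for the worm. The data format
is an assumption made up for this example (a map with
=:proprioception= joint bend angles in radians and =:touch= contact
strengths between 0 and 1); it is not necessarily the
representation =EMPATH= actually uses.

#+begin_src clojure
;; Illustrative sketch only -- the sense representation here is
;; hypothetical, not EMPATH's actual data format.
(defn sitting?
  "Bent knees and firm contact under the pelvis: the feeling of sitting."
  [experience]
  (let [bend  (:proprioception experience) ; joint name -> bend angle (radians)
        touch (:touch experience)]         ; body region -> contact strength [0,1]
    (and (> (:knee-bend bend 0.0) 1.2)     ; knees bent well past straight
         (> (:seat touch 0.0) 0.5))))      ; something pressing on the bottom

;; Example: this imagined experience satisfies the predicate.
(sitting? {:proprioception {:knee-bend 1.5}
           :touch          {:seat 0.9}})
;; => true
#+end_src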
My program, =EMPATH=, uses this empathic problem solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these poses by
#+caption: inferring the complete sensory experience from
#+caption: proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions. If you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach.
If these systems learn about running as viewed from the side, they
will not automatically be able to recognize running from any other
viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers
a massive boost in descriptive capability. Consider how difficult
it would be to compose a set of HOG filters to describe the action
of a simple worm-creature ``curling'' so that its head touches its
tail, and then behold the simplicity of describing this action in a
language designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing
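
=grand-circle?= leans on a lower-level predicate, =curled?=, plus a
=contact= helper that measures touch over a region of the worm's
skin. As a rough illustration of how short such building blocks can
be, here is a sketch of a =curled?= test. The proprioception format
assumed here (a sequence of =[heading pitch bend]= angle triples,
one per joint) is an assumption for this example and may differ
from the representation developed later in the thesis.

#+begin_src clojure
;; Sketch only: assumes (:proprioception experience) is a sequence of
;; [heading pitch bend] triples, one per joint, with angles in radians.
(defn curled?
  "Is every joint of the worm bent well past straight?"
  [experiences]
  (every? (fn [[_ _ bend]] (> (Math/sin bend) 0.64))
          (:proprioception (peek experiences))))
#+end_src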

** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this one. =CORTEX= was necessary to meet a
need among AI researchers at CSAIL and beyond: people often invent
neat ideas that are best expressed in the language of creatures and
senses, but in order to explore those ideas they must first build a
platform in which they can create simulated creatures with rich
senses! There are many ideas that would be simple to execute (such
as =EMPATH=), but attached to them is the multi-month effort to
make a good creature simulator. Often, that initial investment of
time proves to be too much, and the project must make do with a
lesser environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:

- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special Blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing Blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load (a minimal sketch of this idea
  appears at the end of this section).

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in Clojure, a dialect of =LISP= that runs on the
Java Virtual Machine (JVM). The API for creating and simulating
creatures is entirely expressed in Clojure. Hearing is implemented
as a layer of Clojure code on top of a layer of Java code on top of
a layer of =C++= code which implements a modified version of
=OpenAL= to support multiple listeners. =CORTEX= is the only
simulation environment that I know of that can support multiple
entities that can each hear the world from their own perspective.
Other senses also require a small layer of Java code. =CORTEX= also
uses =bullet=, a physics simulator written in =C++=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: blender-worm
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

During one test with =CORTEX=, I created 3,000 entities each with
their own independent senses and ran them all at only 1/80 real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch (more sensitive at
the fingertips), as well as eyes and ears, and it ran at around 1/4
real time.

#+caption: A model of my own hand, modeled in Blender and equipped
#+caption: with a realistic distribution of touch sensors as well as
#+caption: eyes and ears.
#+name: full-hand
#+ATTR_LaTeX: :width 15cm
[[./images/full-hand.png]]
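
The 1/80 and 1/4 real-time figures above are consistent with the
time dilation described earlier: the simulation always advances by
the same simulated timestep per tick, no matter how long each tick
takes to compute. The sketch below illustrates only that general
idea in a few lines of Clojure; it is not =CORTEX='s actual
implementation, and the names in it are invented for this example.

#+begin_src clojure
;; A minimal sketch of the fixed-timestep idea (not CORTEX's actual
;; code).  The world always advances by `dt` simulated seconds per
;; tick, so the creature perceives smooth time even if a tick takes
;; 80x longer than `dt` to compute -- the simulation then simply runs
;; at 1/80 real time.
(def dt (/ 1.0 60.0)) ; simulated seconds per tick

(defn run-simulation
  "Advance `world` for `n-ticks` using `step`, a function of world and dt."
  [step world n-ticks]
  (reduce (fn [w _] (step w dt)) world (range n-ticks)))
#+end_src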

** Contributions

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

** Embodiment factors action recognition into manageable parts

** Action recognition is easy with a full gamut of senses

** Digression: bootstrapping touch using free exploration

** \Phi-space describes the worm's experiences

** Empathy is the process of tracing through \Phi-space

** Efficient action recognition with =EMPATH=

* Contributions
  - Built =CORTEX=, a comprehensive platform for embodied AI
    experiments. It has many new features lacking in other systems,
    such as sound, and makes it easy to model and create new
    creatures.
  - Created a novel concept for action recognition that uses
    artificial imagination.

In the second half of the thesis I develop a computational model of
empathy, using =CORTEX= as a base. Empathy in this context is the
ability to observe another creature and infer what sorts of
sensations that creature is feeling. My empathy algorithm involves
multiple phases. First is free play, where the creature moves
around and gains sensory experience. From this experience I
construct a representation of the creature's sensory state space,
which I call \Phi-space. Using \Phi-space, I construct an efficient
function for enriching the limited data that comes from observing
another creature with a full complement of imagined sensory data
based on previous experience. I can then use the imagined sensory
data to recognize what the observed creature is doing and feeling,
using straightforward embodied action predicates. This is all
demonstrated using a simple worm-like creature, and recognizing
worm actions based on limited data.

Embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines
of LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.

* COMMENT names for cortex
  - bioland

# An anatomical joke:
# - Training
# - Skeletal imitation
# - Sensory fleshing-out
# - Classification