#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment

* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will have also
seen one way to efficiently implement empathy for embodied creatures.
Finally, you will become familiar with =CORTEX=, a system for
designing and simulating creatures with rich senses, which you may
choose to use in your own research.

This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous corporeal
experience, we greatly constrain the possibilities of what would
otherwise be an unwieldy exponential search. This extra constraint can
be the difference between easily understanding what is happening in a
video and being completely lost in a sea of incomprehensible color and
movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening in
a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably label
such a video as ``drinking''. And rightly so -- it is a very hard
problem! What features can you describe in terms of low level
functions of pixels that can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{invisible-chair} and
figure \ref{hidden-chair}?

#+caption: When you look at this, do you think ``chair''? I certainly do.
#+name: invisible-chair
#+ATTR_LaTeX: :width 10cm
[[./images/invisible-chair.png]]

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any computer program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between how
the girl's /muscles/ are working in figure \ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 10cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going on
in our minds as we easily solve these recognition problems.
The hidden chairs show us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we have
common sense knowledge about the kinetics of our own bodies. We know
well how our muscles would have to work to maintain us in most
positions, and we can easily project this self-knowledge onto imagined
positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition problems
above in a form amenable to computation. It is split into four parts:

- Free/Guided Play (Training) :: The creature moves around and
     experiences the world through its unique perspective. Many
     otherwise complicated actions are easily described in the
     language of a full suite of body-centered, rich senses. For
     example, drinking is the feeling of water sliding down your
     throat and cooling your insides. It's often accompanied by
     bringing your hand close to your face, or bringing your face
     close to water. Sitting down is the feeling of bending your
     knees, activating your quadriceps, then feeling a surface with
     your bottom and relaxing your legs. These body-centered action
     descriptions can be either learned or hard coded.
- Alignment (Posture imitation) :: When trying to interpret a video or
     image, the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as when
     humans try to align themselves with things like ponies, dogs, or
     other humans with a different body type.
- Empathy (Sensory extrapolation) :: The alignment triggers
     associations with sensory data from prior experiences. For
     example, the alignment itself easily maps to proprioceptive data.
     Any sounds or obvious skin contact in the video can, to a lesser
     extent, trigger previous experience. Segments of previous
     experiences are stitched together to form a coherent and complete
     sensory portrait of the scene.
- Recognition (Classification) :: With the scene described in terms of
     first person sensory events, the creature can now run its
     action-identification programs on this synthesized sensory data,
     just as it would if it were actually experiencing the scene
     first-hand. If previous experience has been accurately retrieved,
     and if it is analogous enough to the scene, then the creature
     will correctly identify the action in the scene.

For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and imagine
putting their face up against a stream of water and sticking out their
tongue. In that imagined world, they can feel the cool water hitting
their tongue, and feel the water entering their body, and are able to
recognize that /feeling/ as drinking. So, the label of the action is
not really in the pixels of the image, but is found clearly in a
simulation inspired by those pixels.
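Taken together, the four phases form a pipeline from pixels to an
action label. The sketch below is only meant to show the shape of that
pipeline: the names =align-model=, =infer-senses=, and
=recognize-action= are hypothetical placeholders, not actual =EMPATH=
functions, and each one hides a great deal of work.

#+begin_src clojure
;; A minimal sketch of the empathic pipeline. All three helper
;; functions are hypothetical stand-ins, not part of EMPATH itself.
(defn empathize
  "Guess the action in a video by simulating it with one's own body.
  phi-space is a library of prior embodied experience gathered
  during free play."
  [self-model phi-space video]
  ;; Alignment: fit the body model to whatever the video shows.
  (let [alignment (align-model self-model video)
        ;; Empathy: flesh the alignment out with imagined touch,
        ;; muscle, and other sensory data drawn from prior experience.
        imagined  (infer-senses phi-space alignment)]
    ;; Recognition: run ordinary body-centered action predicates on
    ;; the imagined first-person experience.
    (recognize-action imagined)))
#+end_src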
An imaginative system, having been trained on drinking and
non-drinking examples and having learned that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy'' model
   of its own body in place of the cat. Possibly also create a
   simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the mouth. The
   imagined sensory experience can come from a simulation of the
   event, but can also be pattern-matched from previous, similar
   embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation, is
   not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?= sensory
   predicate, so you (and the entity in the image) must be sitting.

6. There must be a chair-like object since you are sitting.

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.
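For a humanoid model, a =sitting?= predicate might look something like
the following sketch. It is purely illustrative: the helper predicates
=knees-bent?=, =bottom-contact?=, and =legs-relaxed?= are invented
here, and the :proprioception and :muscle keys are likewise
assumptions. The point is only that the definition reads in terms of
felt, body-centered experience rather than pixels.

#+begin_src clojure
;; Hypothetical sketch: the helper predicates and the sensory keys
;; other than :touch are assumptions made for illustration only.
(defn sitting?
  "Does the imagined sensory experience feel like sitting?"
  [experiences]
  (let [now (peek experiences)]
    (and (knees-bent?     (:proprioception now))
         (bottom-contact? (:touch now))
         (legs-relaxed?   (:muscle now)))))
#+end_src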
My program, =EMPATH=, uses this empathic problem solving technique to
interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these poses by
#+caption: inferring the complete sensory experience from
#+caption: proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it factors
the action recognition problem into two easier problems. To use
empathy, you need an /aligner/, which takes the video and a model of
your body, and aligns the model with the video. Then, you need a
/recognizer/, which uses the aligned model to interpret the action.
The power in this method lies in the fact that you describe all
actions from a body-centered viewpoint. You are less tied to the
particulars of any visual representation of the actions. If you teach
the system what ``running'' is, and you have a good enough aligner,
the system will from then on be able to recognize running from any
point of view, even strange points of view like above or underneath
the runner. This is in contrast to action recognition schemes that try
to identify actions using a non-embodied approach such as
TODO:REFERENCE. If these systems learn about running as viewed from
the side, they will not automatically be able to recognize running
from any other viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG filters to describe the action of a
simple worm-creature ``curling'' so that its head touches its tail,
and then behold the simplicity of describing this action in a language
designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing
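Such predicates compose naturally. As a purely hypothetical
illustration (only =curled?= and =grand-circle?= come from the real
code base; =action-predicates=, =label-timeline=, and the assumption
that experiences are stored in a vector are invented here), one could
scan a recorded experience vector and label each moment with whichever
actions hold at that point:

#+begin_src clojure
;; Hypothetical illustration, not part of the EMPATH code base.
(def action-predicates
  {:curled       curled?
   :grand-circle grand-circle?})

(defn label-timeline
  "For each time-step, return the set of action keywords whose
  predicates hold on the experiences seen up to that point."
  [experiences]
  (vec
   (for [t (range 1 (inc (count experiences)))]
     (set (for [[action-name predicate] action-predicates
                :when (predicate (subvec experiences 0 t))]
            action-name)))))
#+end_src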
** =CORTEX= is a toolkit for building sensate creatures

Hand integration demo

** Contributions

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

** Embodiment factors action recognition into manageable parts

** Action recognition is easy with a full gamut of senses

** Digression: bootstrapping touch using free exploration

** \Phi-space describes the worm's experiences

** Empathy is the process of tracing through \Phi-space

** Efficient action recognition with =EMPATH=

* Contributions
  - Built =CORTEX=, a comprehensive platform for embodied AI
    experiments. It has many new features lacking in other systems,
    such as sound, and makes it easy to model and create new
    creatures.
  - Created a novel concept for action recognition using artificial
    imagination.

In the second half of the thesis I develop a computational model of
empathy, using =CORTEX= as a base. Empathy in this context is the
ability to observe another creature and infer what sorts of sensations
that creature is feeling. My empathy algorithm involves multiple
phases. First is free play, where the creature moves around and gains
sensory experience. From this experience I construct a representation
of the creature's sensory state space, which I call \Phi-space. Using
\Phi-space, I construct an efficient function for enriching the
limited data that comes from observing another creature with a full
complement of imagined sensory data based on previous experience. I
can then use the imagined sensory data to recognize what the observed
creature is doing and feeling, using straightforward embodied action
predicates. This is all demonstrated using a simple worm-like
creature, and recognizing worm actions based on limited data.

Embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines of
LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.

* COMMENT names for cortex
  - bioland

# An anatomical joke:
# - Training
# - Skeletal imitation
# - Sensory fleshing-out
# - Classification