#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment

* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied creatures.
Finally, you will become familiar with =CORTEX=, a system for
designing and simulating creatures with rich senses, which you may
choose to use in your own research.

This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what is
happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably label
such a video as "drinking". And rightly so -- it is a very hard
problem! What features can you describe in terms of low-level
functions of pixels that even begin to capture, at a high level, what
is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{invisible-chair} and
figure \ref{hidden-chair}?

#+caption: When you look at this, do you think ``chair''? I certainly do.
#+name: invisible-chair
#+ATTR_LaTeX: :width 10cm
[[./images/invisible-chair.png]]

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any computer program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in figure \ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 10cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chairs show us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies. We
know well how our muscles would have to work to maintain us in most
positions, and we can easily project this self-knowledge onto
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition problems
above in a form amenable to computation. It is split into four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise complicated
     actions are easily described in the language of a full suite of
     body-centered, rich senses. For example, drinking is the feeling
     of water sliding down your throat and cooling your insides. It's
     often accompanied by bringing your hand close to your face, or
     bringing your face close to water. Sitting down is the feeling
     of bending your knees, activating your quadriceps, then feeling
     a surface with your bottom and relaxing your legs. These
     body-centered action descriptions can be either learned or hard
     coded.
- Alignment :: When trying to interpret a video or image, the
     creature takes a model of itself and aligns it with whatever it
     sees. This can be a rather loose alignment that can cross
     species, as when humans try to align themselves with things like
     ponies, dogs, or other humans with a different body type.
- Empathy :: The alignment triggers memories of previous experience.
     For example, the alignment itself easily maps to proprioceptive
     data. Any sounds or obvious skin contact in the video can, to a
     lesser extent, trigger previous experience. The creature's
     previous experience is chained together in short bursts to
     coherently describe the new scene.
- Recognition :: With the scene now described in terms of past
     experience, the creature can run its action-identification
     programs on this synthesized sensory data, just as it would if
     it were actually experiencing the scene first-hand. If previous
     experience has been accurately retrieved, and if it is analogous
     enough to the scene, then the creature will correctly identify
     the action in the scene.

For example, I think humans are able to label the cat video as
"drinking" because they imagine /themselves/ as the cat, and imagine
putting their face up against a stream of water and sticking out
their tongue. In that imagined world, they can feel the cool water
hitting their tongue, and feel the water entering their body, and are
able to recognize that /feeling/ as drinking. So, the label of the
action is not really in the pixels of the image, but is found clearly
in a simulation inspired by those pixels.
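
The four parts above suggest a simple top-level structure for such a
system. The following is a minimal Clojure sketch of that pipeline;
the helpers =align-model=, =infer-sensations=, and =identify-action=
are hypothetical placeholders that merely name the responsibilities
of each phase, not the actual =EMPATH= API.

#+begin_src clojure
;; A sketch of the four-phase pipeline described above. The declared
;; helpers are hypothetical placeholders: `align-model' performs
;; Alignment, `infer-sensations' performs Empathy, and
;; `identify-action' performs Recognition. Free/Guided Play happens
;; beforehand and produces `play-experiences'.
(declare align-model infer-sensations identify-action)

(defn empathic-recognition
  "Interpret VIDEO in terms of the sensory experience gathered
   during free/guided play."
  [play-experiences video]
  (->> video
       (align-model play-experiences)       ; Alignment
       (infer-sensations play-experiences)  ; Empathy
       identify-action))                    ; Recognition
#+end_src
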

An imaginative system, having been trained on drinking and
non-drinking examples and having learned that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a "fuzzy" model of
   its own body in place of the cat. Possibly also create a
   simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close-up view of the stream from the cat's perspective, and, most
   importantly, the imagined feeling of water entering the mouth. The
   imagined sensory experience can come from a simulation of the
   event, but it can also be pattern-matched from previous, similar
   embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation, is
   not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface and your
   leg muscles relaxed.

5. This sensory information is consistent with the =sitting?= sensory
   predicate, so you (and the entity in the image) must be sitting.

6. There must be a chair-like object since you are sitting.

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

My program, =EMPATH=, uses this empathic problem solving technique to
interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 10cm
[[./images/wall-push.png]]

#+caption: This sensory predicate detects when the worm is resting on
#+caption: the ground.
#+name: resting-intro
#+begin_listing clojure
#+begin_src clojure
(defn resting?
  "Is the worm resting on the ground?"
  [experiences]
  (every?
   (fn [touch-data]
     (< 0.9 (contact worm-segment-bottom touch-data)))
   (:touch (peek experiences))))
#+end_src
#+end_listing

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this
#+caption: functionality using low-level pixel features such as HOG
#+caption: filters!
#+name: grand-circle-intro
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing

#+caption: Even complicated actions such as ``wiggling'' are fairly
#+caption: simple to describe with a rich enough language.
#+name: wiggling-intro
#+begin_listing clojure
#+begin_src clojure
(defn wiggling?
  "Is the worm wiggling?"
  [experiences]
  (let [analysis-interval 0x40]
    (when (> (count experiences) analysis-interval)
      (let [a-flex 3
            a-ex   2
            muscle-activity
            (map :muscle (vector:last-n experiences analysis-interval))
            base-activity
            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
        (= 2
           (first
            (max-indexed
             (map #(Math/abs %)
                  (take 20 (fft base-activity))))))))))
#+end_src
#+end_listing

#+caption: The actions of a worm in a video can be recognized from
#+caption: proprioceptive data and sensory predicates by filling in
#+caption: the missing sensory detail with previous experience.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 10cm
[[./images/wall-push.png]]

One powerful advantage of empathic problem solving is that it factors
the action recognition problem into two easier problems. To use
empathy, you need an /aligner/, which takes the video and a model of
your body, and aligns the model with the video. Then, you need a
/recognizer/, which uses the aligned model to interpret the action.
The power in this method lies in the fact that you describe all
actions from a body-centered, rich viewpoint. This way, if you teach
the system what ``running'' is, and you have a good enough aligner,
the system will from then on be able to recognize running from any
point of view, even strange points of view like above or underneath
the runner. This is in contrast to action recognition schemes that
try to identify actions using a non-embodied approach such as
TODO:REFERENCE. If these systems learn about running as viewed from
the side, they will not automatically be able to recognize running
from any other viewpoint.
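
In other words, once the aligner and the empathy step have produced
imagined experiences for the creature in a video, the body-centered
predicates above apply unchanged, no matter where the camera was. A
minimal sketch of this idea follows; =imagined-experiences-from= is a
hypothetical stand-in for the aligner and empathy steps, not a real
=EMPATH= function.

#+begin_src clojure
;; Sketch: the recognizer is just the body-centered predicates from
;; the listings above, applied to imagined experiences. The viewpoint
;; of the original video never appears at this stage.
(declare imagined-experiences-from)  ; hypothetical aligner + empathy

(defn describe-worm-video
  "Label the worm's action in VIDEO using body-centered predicates."
  [video]
  (let [experiences (imagined-experiences-from video)]
    (cond
      (grand-circle? experiences) :grand-circle
      (wiggling? experiences)     :wiggling
      (resting? experiences)      :resting
      :else                       :unknown)))
#+end_src
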

Another powerful advantage is that using the language of multiple
body-centered, rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG filters to describe the action of a
simple worm-creature "curling" so that its head touches its tail, and
then behold the simplicity of describing this action in a language
designed for the task (listing \ref{grand-circle-intro}).

** =CORTEX= is a toolkit for building sensate creatures

Hand integration demo

** Contributions

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

** Embodiment factors action recognition into manageable parts

** Action recognition is easy with a full gamut of senses

** Digression: bootstrapping touch using free exploration

** \Phi-space describes the worm's experiences

** Empathy is the process of tracing through \Phi-space

** Efficient action recognition with =EMPATH=

* Contributions

- Built =CORTEX=, a comprehensive platform for embodied AI
  experiments. It has many new features lacking in other systems,
  such as sound, and makes it easy to model and create new creatures.
- Created a novel concept for action recognition using artificial
  imagination.

In the second half of the thesis I develop a computational model of
empathy, using =CORTEX= as a base. Empathy in this context is the
ability to observe another creature and infer what sorts of
sensations that creature is feeling. My empathy algorithm involves
multiple phases. First is free-play, where the creature moves around
and gains sensory experience. From this experience I construct a
representation of the creature's sensory state space, which I call
\Phi-space. Using \Phi-space, I construct an efficient function for
enriching the limited data that comes from observing another creature
with a full complement of imagined sensory data based on previous
experience. I can then use the imagined sensory data to recognize
what the observed creature is doing and feeling, using
straightforward embodied action predicates. This is all demonstrated
using a simple worm-like creature, and by recognizing worm-actions
based on limited data.

Embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines of
LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.
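
As a rough illustration, here is a minimal sketch of what such a
``curling'' predicate might look like, written in the same style as
the earlier listings and using the worm's proprioceptive sense.
Treating the third component of each proprioceptive triple as the
joint's bend, and the 0.64 threshold, are assumptions made for
illustration; the predicate actually defined later in the thesis may
differ.

#+begin_src clojure
;; Sketch of a body-centered description of ``curling'': the worm
;; counts as curled when every joint reports a sufficiently large
;; bend. The [_ _ bend] layout and the 0.64 threshold are
;; illustrative assumptions, not necessarily the thesis's exact code.
(defn curled?
  "Is the worm curled up?"
  [experiences]
  (every?
   (fn [[_ _ bend]]
     (> (Math/sin bend) 0.64))
   (:proprioception (peek experiences))))
#+end_src
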

* COMMENT names for cortex
- bioland