#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both
#+OPTIONS: toc:nil, num:nil

* Artificial Imagination

Imagine watching a video of someone skateboarding. When you watch
the video, you can imagine yourself skateboarding, and your
knowledge of the human body and its dynamics guides your
interpretation of the scene. For example, even if the skateboarder
is partially occluded, you can infer the positions of his arms and
body from your own knowledge of how your body would be positioned if
you were skateboarding. If the skateboarder suffers an accident, you
wince in sympathy, imagining the pain your own body would experience
if it were in the same situation. This empathy with other people
guides our understanding of whatever they are doing because it is a
powerful constraint on what is probable and possible. In order to
make use of this powerful empathy constraint, I need a system that
can generate and make sense of sensory data from the many different
senses that humans possess. The two key properties of such a system
are /embodiment/ and /imagination/.

** What is imagination?

One kind of imagination is /sympathetic/ imagination: you imagine
yourself in the position of something or someone you are
observing. This type of imagination comes into play when you follow
along visually as you watch someone perform actions, or when you
sympathetically grimace when someone hurts themselves. This type of
imagination uses the constraints you have learned about your own
body to tightly constrain the possibilities in whatever you are
seeing. It uses all of your senses, including your senses of touch,
proprioception, etc. Humans are flexible when it comes to "putting
themselves in another's shoes," and can sympathetically understand
not only other humans, but entities ranging from animals to cartoon
characters to [[http://www.youtube.com/watch?v=0jz4HcwTQmU][single dots]] on a screen!

# and can infer intention from the actions of not only other humans,
# but also animals, cartoon characters, and even abstract moving dots
# on a screen!

Another kind of imagination is /predictive/ imagination: you
construct scenes in your mind that are not entirely related to
whatever you are observing, but instead are predictions of the
future or simply flights of fancy. You use this type of imagination
to plan out multi-step actions, or to play out dangerous situations
in your mind so as to avoid messing them up in reality.

Of course, sympathetic and predictive imagination blend into each
other and are not completely separate concepts. One dimension along
which you can distinguish types of imagination is dependence on raw
sense data. Sympathetic imagination is highly constrained by your
senses, while predictive imagination can be more or less dependent
on your senses depending on how far ahead you imagine.
Daydreaming is an extreme form of predictive imagination that wanders
through different possibilities without concern for whether they are
related to whatever is happening in reality.

For this thesis, I will mostly focus on sympathetic imagination and
the constraint it provides for understanding sensory data.

** What problems can imagination solve?

Consider a video of a cat drinking some water.

#+caption: A cat drinking some water. Identifying this action is beyond the state of the art for computers.
#+ATTR_LaTeX: width=5cm
[[../images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as "drinking". I think humans are able to label
such a video as "drinking" because they imagine /themselves/ as the
cat, and imagine putting their face up against a stream of water
and sticking out their tongue. In that imagined world, they can
feel the cool water hitting their tongue, and feel the water
entering their body, and are able to recognize that /feeling/ as
drinking. So, the label of the action is not really in the pixels
of the image, but is found clearly in a simulation inspired by
those pixels. An imaginative system, having been trained on
drinking and non-drinking examples and having learned that the most
important component of drinking is the feeling of water sliding
down one's throat, would analyze a video of a cat drinking in the
following manner:

- Create a physical model of the video by putting a "fuzzy" model
  of its own body in place of the cat. Also, create a simulation of
  the stream of water.

- Play out this simulated scene and generate imagined sensory
  experience. This will include relevant muscle contractions, a
  close-up view of the stream from the cat's perspective, and most
  importantly, the imagined feeling of water entering the mouth.

- The action is now easily identified as drinking by the sense of
  taste alone. The other senses (such as the tongue moving in and
  out) help to give plausibility to the simulated action. Note that
  the sense of vision, while critical in creating the simulation,
  is not critical for identifying the action from the simulation.

More generally, I expect imaginative systems to be particularly
good at identifying embodied actions in videos.

* Cortex

The previous example involves liquids, the sense of taste, and
imagining oneself as a cat. For this thesis I constrain myself to
simpler, more easily digitizable senses and situations.

My system, =CORTEX=, performs imagination in two different simplified
worlds: /worm world/ and /stick-figure world/. In each of these
worlds, entities capable of imagination recognize actions by
simulating the experience from their own perspective, and then
recognizing the action from a database of examples.

In order to serve as a framework for experiments in imagination,
=CORTEX= requires simulated bodies, worlds, and senses like vision,
hearing, touch, proprioception, etc.

** A Video Game Engine takes care of some of the groundwork

When it comes to simulation environments, the engines used to
create the worlds in video games offer top-notch physics and
graphics support. These engines also have limited support for
creating cameras and rendering 3D sound, which can be repurposed
for vision and hearing respectively. Physics collision detection
can be expanded to create a sense of touch.
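
As a rough sketch of the kind of repurposing involved, the contact
points a physics engine reports each frame could be binned into
per-region touch activations. The Clojure below (Clojure is the
language =CORTEX= is written in, introduced just below) is
illustrative only; the contact format and the region names are
assumptions made for this example, not the engine's actual API.

#+begin_src clojure
;; Sketch only: the contact representation (one :region keyword per
;; contact) and the sensor-region names are assumptions for
;; illustration, not the actual physics-engine API.
(defn touch-activation
  "Map each sensor region of a body segment to the number of physics
   contacts it received during this frame. `regions` is an ordered
   vector of region names."
  [contacts regions]
  (let [hits (frequencies (map :region contacts))]
    (into {} (for [r regions] [r (get hits r 0)]))))

;; Example: two contacts reported on the belly region, none on the back.
(touch-activation [{:region :belly} {:region :belly}] [:belly :back])
;; => {:belly 2, :back 0}
#+end_src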
jMonkeyEngine3 is one such engine for creating video games in
Java. It uses OpenGL to render to the screen and uses scene graphs
to avoid drawing things that do not appear on the screen. It has an
active community and several games in the pipeline. The engine was
not built to serve any particular game but is instead meant to be
used for any 3D game. I chose jMonkeyEngine3 because it had the
most features of all the open projects I looked at, and because I
could then write my code in Clojure, a dialect of LISP that runs on
the JVM.

** =CORTEX= Extends jMonkeyEngine3 to implement rich senses

Using the game-making primitives provided by jMonkeyEngine3, I have
constructed every major human sense except for smell and
taste. =CORTEX= also provides an interface for creating creatures
in Blender, a 3D modeling environment, and then "rigging" the
creatures with senses using 3D annotations in Blender. A creature
can have any number of senses, and there can be any number of
creatures in a simulation.

The senses available in =CORTEX= are:

- [[../../cortex/html/vision.html][Vision]]
- [[../../cortex/html/hearing.html][Hearing]]
- [[../../cortex/html/touch.html][Touch]]
- [[../../cortex/html/proprioception.html][Proprioception]]
- [[../../cortex/html/movement.html][Muscle Tension]]

* A roadmap for =CORTEX= experiments

** Worm World

Worms in =CORTEX= are segmented creatures which vary in length and
number of segments, and have the senses of vision, proprioception,
touch, and muscle tension.

#+attr_html: width=755
#+caption: This is the tactile-sensor-profile for the upper segment of a worm. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse).
[[../images/finger-UV.png]]

#+begin_html
<div class="figure">
<center>
<video controls="controls" width="550">
<source src="../video/worm-touch.ogg" type="video/ogg"
preload="none" />
</video>
<br> <a href="http://youtu.be/RHx2wqzNVcU"> YouTube </a>
</center>
<p>The worm responds to touch.</p>
</div>
#+end_html

#+begin_html
<div class="figure">
<center>
<video controls="controls" width="550">
<source src="../video/test-proprioception.ogg" type="video/ogg"
preload="none" />
</video>
<br> <a href="http://youtu.be/JjdDmyM8b0w"> YouTube </a>
</center>
<p>Proprioception in a worm. The proprioceptive readout is
in the upper left corner of the screen.</p>
</div>
#+end_html

A worm is trained in various actions such as sinusoidal movement,
curling, flailing, and spinning by directly playing out motor
contractions while the worm "feels" the experience. These actions
are recorded not only as vectors of muscle tension, touch, and
proprioceptive data, but also in higher-level forms such as
frequencies of the various contractions and a symbolic name for the
action.

Then, the worm watches a video of another worm performing one of
the actions, and must judge which action was performed. Normally
this would be an extremely difficult problem, but the worm is able
to greatly diminish the search space through sympathetic
imagination. First, it creates an imagined copy of its body which
it observes from a third-person point of view. Then, for each frame
of the video, it maneuvers its simulated body to be in registration
with the worm depicted in the video. The physical constraints
imposed by the physics simulation greatly decrease the number of
poses that have to be tried, making the search feasible. As the
imaginary worm moves, it generates imaginary muscle tension and
proprioceptive sensations. The worm determines the action not by
vision, but by matching the imagined proprioceptive data with
previous examples.
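
To make the matching step concrete, here is a minimal sketch of how
imagined proprioceptive data might be compared against recorded
examples. The flat joint-angle vectors, the tiny example database,
and the nearest-example rule are simplifying assumptions made for
this illustration; they are not the actual =CORTEX= representation.

#+begin_src clojure
;; Sketch only: flat joint-angle vectors and nearest-example matching
;; are simplifying assumptions, not the actual CORTEX representation.
(defn distance
  "Euclidean distance between two equal-length proprioceptive vectors."
  [a b]
  (Math/sqrt (reduce + (map (fn [x y] (let [d (- x y)] (* d d))) a b))))

(defn identify-action
  "Return the symbolic label of the recorded example whose
   proprioceptive vector is closest to the imagined one."
  [examples imagined]
  (:label (apply min-key #(distance (:proprioception %) imagined) examples)))

(def recorded-examples
  [{:label :curling  :proprioception [0.9 0.8 0.7]}
   {:label :flailing :proprioception [0.1 0.9 0.2]}])

(identify-action recorded-examples [0.85 0.75 0.7])
;; => :curling
#+end_src

A real recognizer would compare sequences of such vectors over time
rather than a single frame, but the body-centric matching idea is the
same.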
By using non-visual sensory data such as touch, the worms can also
answer body-related questions such as "did your head touch your
tail?" and "did worm A touch worm B?"

The proprioceptive information used for action identification is
body-centric, so only the registration step is dependent on point
of view, not the identification step. Registration is not specific
to any particular action. Thus, action identification can be
divided into a point-of-view-dependent, generic registration step,
and an action-specific step that is body-centered and invariant to
point of view.

** Stick Figure World

This environment is similar to Worm World, except the creatures are
more complicated and the actions and questions more varied. It is
an experiment to see how far imagination can go in interpreting
actions.