#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment


* Empathy and Embodiment as a problem solving strategy

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures.

The core vision of this thesis is that one of the important ways in
which we understand others is by imagining ourselves in their
position and empathically feeling experiences based on our own past
experiences and imagination.

By understanding events in terms of our own previous corporeal
experience, we greatly constrain the possibilities of what would
otherwise be an unwieldy exponential search. This extra constraint
can be the difference between easily understanding what is happening
in a video and being completely lost in a sea of incomprehensible
color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as "drinking". And rightly so -- it is a very
hard problem! What features can you describe in terms of low level
functions of pixels that can even begin to describe what is
happening here?

Or suppose that you are building a program that recognizes
chairs. How could you ``see'' the chair in the following picture?

#+caption: When you look at this, do you think ``chair''? I certainly do.
#+ATTR_LaTeX: :width 10cm
[[./images/invisible-chair.png]]

#+caption: The chair in this image is quite obvious to humans, but I doubt any computer program can find it.
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

I think humans are able to label such a video as "drinking" because
they imagine /themselves/ as the cat, and imagine putting their face
up against a stream of water and sticking out their tongue. In that
imagined world, they can feel the cool water hitting their tongue,
and feel the water entering their body, and are able to recognize
that /feeling/ as drinking. So, the label of the action is not
really in the pixels of the image, but is found clearly in a
simulation inspired by those pixels. An imaginative system, having
been trained on drinking and non-drinking examples and having
learned that the most important component of drinking is the feeling
of water sliding down one's throat, would analyze a video of a cat
drinking in the following manner:

- Create a physical model of the video by putting a "fuzzy" model
  of its own body in place of the cat. Also, create a simulation of
  the stream of water.

- Play out this simulated scene and generate imagined sensory
  experience. This will include relevant muscle contractions, a
  close up view of the stream from the cat's perspective, and most
  importantly, the imagined feeling of water entering the mouth.

- The action is now easily identified as drinking by the sense of
  taste alone. The other senses (such as the tongue moving in and
  out) help to give plausibility to the simulated action. Note that
  the sense of vision, while critical in creating the simulation,
  is not critical for identifying the action from the simulation.
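
To make the shape of this process concrete, here is a minimal
Clojure sketch (Clojure being the language of =CORTEX=). Everything
in it is a stand-in: the sensory data is hand-written example data,
not the output of any real simulation.

#+begin_src clojure
;; A self-contained sketch of the three steps above.  The sensory
;; data is hand-written; in a real system it would come from a
;; physics simulation seeded by the video.

(defn imagined-senses
  "Stand-in for steps one and two: fit a fuzzy model of our own body
   to the video, play the scene forward, and return the imagined
   sensory experience.  Here it just returns example data."
  [_frames]
  {:taste  :water
   :tongue :moving-in-and-out
   :vision :close-up-of-stream})

(defn drinking?
  "Step three: identify the action from the simulated senses alone.
   Taste is decisive; the other senses only add plausibility."
  [senses]
  (= :water (:taste senses)))

(defn label-action [frames]
  (if (drinking? (imagined-senses frames))
    :drinking
    :unknown))
#+end_src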

cat drinking, mimes, leaning, common sense

** =EMPATH= neatly solves recognition problems

factorization, right language, etc.

a new possibility for the question ``what is a chair?'' -- it's the
feeling of your butt on something and your knees bent, with your
back muscles and legs relaxed.
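
As a hedged sketch, such a definition could be written as an
embodied predicate over imagined sensations; the sensation keys and
the knee-angle thresholds below are invented for illustration.

#+begin_src clojure
;; Hypothetical embodied definition of ``chair''.  The sensation
;; map, its keys, and the angle thresholds are all invented.
(defn sitting-in-chair?
  [senses]
  (and (:pressure-on-seat senses)       ; butt on something
       (< 70 (:knee-angle senses) 110)  ; knees bent near 90 degrees
       (:back-relaxed? senses)          ; back muscles relaxed
       (:legs-relaxed? senses)))        ; legs relaxed

;; (sitting-in-chair? {:pressure-on-seat true :knee-angle 92
;;                     :back-relaxed? true :legs-relaxed? true})
;; => true
#+end_src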

** =CORTEX= is a toolkit for building sensate creatures

Hand integration demo

** Contributions

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the force that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

** Embodiment factors action recognition into manageable parts

** Action recognition is easy with a full gamut of senses

** Digression: bootstrapping touch using free exploration

** \Phi-space describes the worm's experiences

** Empathy is the process of tracing through \Phi-space

** Efficient action recognition via empathy

* Contributions
- Built =CORTEX=, a comprehensive platform for embodied AI
  experiments. =CORTEX= has many new features lacking in other
  systems, such as sound, and makes it easy to model and create new
  creatures.
- Created a novel concept for action recognition by using artificial
  imagination.

In the second half of the thesis I develop a computational model of
empathy, using =CORTEX= as a base. Empathy in this context is the
ability to observe another creature and infer what sorts of
sensations that creature is feeling. My empathy algorithm involves
multiple phases. First is free-play, where the creature moves around
and gains sensory experience. From this experience I construct a
representation of the creature's sensory state space, which I call
\phi-space. Using \phi-space, I construct an efficient function for
enriching the limited data that comes from observing another
creature with a full complement of imagined sensory data based on
previous experience. I can then use the imagined sensory data to
recognize what the observed creature is doing and feeling, using
straightforward embodied action predicates. This is all demonstrated
using a simple worm-like creature, and recognizing worm-actions
based on limited data.
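
A rough Clojure sketch of how these phases could fit together. It
assumes, purely for illustration, that \phi-space is a plain vector
of complete sensory snapshots and that enrichment is a
nearest-neighbor lookup; the actual representation and lookup are
developed later in the thesis.

#+begin_src clojure
;; Illustrative-only sketch of the empathy phases.

(defn build-phi-space
  "Phase 1: record every sensory snapshot seen during free play."
  [free-play-experiences]
  (vec free-play-experiences))

(defn enrich
  "Phase 2: given a partial observation of another creature (here,
   proprioception only), return the remembered snapshot whose
   proprioceptive component is nearest, i.e. a full complement of
   imagined sensory data."
  [phi-space observed distance]
  (apply min-key #(distance observed (:proprioception %)) phi-space))

(defn infer-action
  "Phase 3: run embodied action predicates on the enriched, imagined
   sensory data rather than on the raw observation."
  [action-predicates enriched]
  (some (fn [[action pred?]] (when (pred? enriched) action))
        action-predicates))
#+end_src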

Embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines of
LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.
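
The actual predicates appear later in the thesis; to give a flavor
of the style, here is a hypothetical ``curling'' predicate, assuming
(for illustration only) that each experience snapshot maps body
segments to normalized flexor-muscle tension.

#+begin_src clojure
;; Hypothetical flavor of an embodied ``curling'' predicate; NOT the
;; thesis's actual definition.
(defn curled?
  [experience]
  (every? #(> (get-in experience [:flexor-tension %]) 0.5)
          (:segments experience)))

;; (curled? {:segments [:head :mid :tail]
;;           :flexor-tension {:head 0.9 :mid 0.8 :tail 0.7}})
;; => true
#+end_src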

* COMMENT names for cortex
- bioland