#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment

* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures.

The core vision of this thesis is that one of the important ways in
which we understand others is by imagining ourselves in their
position and empathically feeling experiences based on our own past
experiences and imagination.

By understanding events in terms of our own previous corporeal
experience, we greatly constrain the possibilities of what would
otherwise be an unwieldy exponential search. This extra constraint
can be the difference between easily understanding what is happening
in a video and being completely lost in a sea of incomprehensible
color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as "drinking". And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to describe what is
happening here?

Or suppose that you are building a program that recognizes
chairs. How could you ``see'' the chair in the following picture?

#+caption: When you look at this, do you think ``chair''? I certainly do.
#+ATTR_LaTeX: :width 10cm
[[./images/invisible-chair.png]]

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any computer program can find it.
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

I think humans are able to label such a video as "drinking" because
they imagine /themselves/ as the cat, and imagine putting their face
up against a stream of water and sticking out their tongue. In that
imagined world, they can feel the cool water hitting their tongue,
and feel the water entering their body, and are able to recognize
that /feeling/ as drinking. So, the label of the action is not
really in the pixels of the image, but is found clearly in a
simulation inspired by those pixels. An imaginative system, having
been trained on drinking and non-drinking examples and learning that
the most important component of drinking is the feeling of water
sliding down one's throat, would analyze a video of a cat drinking
in the following manner (a code sketch follows the list):

- Create a physical model of the video by putting a "fuzzy" model
  of its own body in place of the cat. Also, create a simulation of
  the stream of water.

- Play out this simulated scene and generate imagined sensory
  experience. This will include relevant muscle contractions, a
  close-up view of the stream from the cat's perspective, and most
  importantly, the imagined feeling of water entering the mouth.

- The action is now easily identified as drinking by the sense of
  taste alone. The other senses (such as the tongue moving in and
  out) help to give plausibility to the simulated action. Note that
  the sense of vision, while critical in creating the simulation,
  is not critical for identifying the action from the simulation.

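As a purely illustrative sketch of this pipeline, the following
Clojure fragment composes the three steps. None of these functions
are part of =CORTEX=; each hard sub-problem is a placeholder passed
in as a function.

#+begin_src clojure
;; Illustrative sketch only -- not CORTEX API.  Each of the three
;; steps above arrives as a placeholder function, so the pipeline is
;; simply their composition.
(defn imagine-action
  "Label a video by simulating it with a model of one's own body."
  [video fit-body-model play-out classify-by-feeling]
  (-> video
      fit-body-model         ; step 1: fuzzy self-model plus simulated water
      play-out               ; step 2: imagined sensory experience
      classify-by-feeling))  ; step 3: recognize the imagined feeling, e.g. :drinking
#+end_src
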
cat drinking, mimes, leaning, common sense

** =EMPATH= neatly solves recognition problems

factorization, right language, etc.

A new possibility for the question ``what is a chair?'' -- it's the
feeling of your butt on something and your knees bent, with your
back muscles and legs relaxed.

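As a hedged illustration of that idea, the following Clojure
fragment defines ``chair'' by how sitting feels rather than by how
chairs look. The =sitting?= function and the sensory-frame layout
are hypothetical, not part of =EMPATH=.

#+begin_src clojure
;; Hypothetical sketch, not part of EMPATH.  Assume a sensory frame
;; is a map from named feelings to booleans, e.g.
;; {:pressure-on-seat true, :knees-bent true, :back-relaxed true, ...}
(defn sitting?
  "A ``chair'' is whatever produces the feeling of sitting."
  [frame]
  (every? frame
          [:pressure-on-seat :knees-bent :back-relaxed :legs-relaxed]))
#+end_src
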
** =CORTEX= is a toolkit for building sensate creatures

Hand integration demo

** Contributions

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the force that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

** Embodiment factors action recognition into manageable parts

** Action recognition is easy with a full gamut of senses

** Digression: bootstrapping touch using free exploration

** \Phi-space describes the worm's experiences

** Empathy is the process of tracing through \Phi-space

** Efficient action recognition via empathy

* Contributions

- Built =CORTEX=, a comprehensive platform for embodied AI
  experiments. It has many new features lacking in other systems,
  such as sound, and makes it easy to model and create new
  creatures.
- Created a novel concept for action recognition by using artificial
  imagination.

In the second half of the thesis I develop a computational model of
empathy, using =CORTEX= as a base. Empathy in this context is the
ability to observe another creature and infer what sorts of
sensations that creature is feeling. My empathy algorithm involves
multiple phases. First is free-play, where the creature moves around
and gains sensory experience. From this experience I construct a
representation of the creature's sensory state space, which I call
\Phi-space. Using \Phi-space, I construct an efficient function for
enriching the limited data that comes from observing another
creature with a full complement of imagined sensory data based on
previous experience. I can then use the imagined sensory data to
recognize what the observed creature is doing and feeling, using
straightforward embodied action predicates. This is all demonstrated
using a simple worm-like creature, and recognizing worm actions
based on limited data.

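The following Clojure sketch shows the shape of that pipeline under
simplifying assumptions; the names and data layout are mine, not the
actual =CORTEX= or =EMPATH= API. Assume \Phi-space is a sequence of
remembered full-sense frames, each observation carries only
proprioception (a flat sequence of joint angles), and each action
predicate takes a sequence of frames.

#+begin_src clojure
;; Hedged sketch of the empathy pipeline described above.  The names
;; and data layout are assumptions, not the actual CORTEX/EMPATH API.

(defn- proprio-distance
  "Sum of squared differences between two flat seqs of joint angles."
  [a b]
  (reduce + (map (fn [x y] (let [d (- x y)] (* d d))) a b)))

(defn nearest-experience
  "Enrich one partial observation by recalling the remembered
  full-sense frame whose proprioception best matches it."
  [phi-space observed]
  (apply min-key
         #(proprio-distance (:proprioception %) (:proprioception observed))
         phi-space))

(defn empathize
  "Return the names of the action predicates that hold on the
  imagined (recalled) experience of an observed creature."
  [phi-space observations action-predicates]
  (let [imagined (map #(nearest-experience phi-space %) observations)]
    (->> action-predicates
         (filter (fn [[_ applies?]] (applies? imagined)))
         (map first))))
#+end_src

For example, =(empathize phi-space observed {:curled? curled?})=
would return =(:curled?)= whenever the recalled experience satisfies
that predicate.
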
Embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines of
LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.

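The exact definitions are not reproduced here; as a rough sketch of
the style, assuming proprioception for a frame is reported as a
sequence of joint bend angles in radians, ``curling'' might look
something like this:

#+begin_src clojure
;; Sketch of the style only -- not the thesis's actual definition.
;; Assumes (:proprioception frame) is a seq of joint bend angles in
;; radians.
(defn curled?
  "True when every joint in the most recent frame is strongly bent."
  [experience]
  (every? #(> (Math/abs %) 2.0)
          (:proprioception (last experience))))
#+end_src
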
* COMMENT names for cortex
- bioland