#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment

* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a
system for designing and simulating creatures with rich senses,
which you may choose to use in your own research.

This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider for example the problem of determining what is happening in
a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as "drinking". And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{invisible-chair} and
figure \ref{hidden-chair}?

#+caption: When you look at this, do you think ``chair''? I certainly do.
#+name: invisible-chair
#+ATTR_LaTeX: :width 10cm
[[./images/invisible-chair.png]]

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any computer program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference in how
the girl's /muscles/ are working between the two images in figure
\ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 10cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chairs show us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts (a sketch of how the phases fit together follows the
list):

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example, drinking
     is the feeling of water sliding down your throat, and cooling
     your insides. It's often accompanied by bringing your hand
     close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions can
     be either learned or hard coded.
- Alignment :: When trying to interpret a video or image, the
     creature takes a model of itself and aligns it with whatever it
     sees. This can be a rather loose alignment that can cross
     species, as when humans try to align themselves with things
     like ponies, dogs, or other humans with a different body type.
- Empathy :: The alignment triggers memories of previous experience.
     For example, the alignment itself easily maps to proprioceptive
     data. Any sounds or obvious skin contact in the video can, to a
     lesser extent, trigger previous experience. The creature's
     previous experience is chained together in short bursts to
     coherently describe the new scene.
- Recognition :: With the scene now described in terms of past
     experience, the creature can run its action-identification
     programs on this synthesized sensory data, just as it would if
     it were actually experiencing the scene first-hand. If previous
     experience has been accurately retrieved, and if it is
     analogous enough to the scene, then the creature will correctly
     identify the action in the scene.

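To make the data flow between these four phases concrete, here is a
minimal sketch in Clojure. It is illustrative only: the helper
functions (=align=, =empathize=, and the action predicates) are
assumptions declared so the sketch compiles, not the actual =EMPATH=
interface developed later in the thesis.

#+begin_src clojure
;; An illustrative sketch of how the four phases might compose. The
;; helpers are assumed to exist and are declared only so this sketch
;; compiles; they are not the actual =EMPATH= API.
(declare align empathize drinking? sitting?)

(defn interpret
  "Guess which known actions appear in `video`, given a creature's
  body model and the experiences it gathered during free play."
  [body-model experiences video]
  (let [alignment (align body-model video)          ; Alignment
        imagined  (empathize experiences alignment) ; Empathy
        ;; Recognition: run ordinary embodied predicates on the
        ;; imagined sensory data, as if it were first-hand experience.
        actions   {:drinking drinking? :sitting sitting?}]
    (map first (filter (fn [[_ action?]] (action? imagined)) actions))))
#+end_src
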
For example, I think humans are able to label the cat video as
"drinking" because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and learning that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a "fuzzy" model
   of its own body in place of the cat. Possibly also create a
   simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the mouth.
   The imagined sensory experience can come from a simulation of
   the event, but can also be pattern-matched from previous,
   similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone (a sketch of such a taste-based check follows this
   list). The other senses (such as the tongue moving in and out)
   help to give plausibility to the simulated action. Note that the
   sense of vision, while critical in creating the simulation, is
   not critical for identifying the action from the simulation.

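As an illustration of step 3, a taste-based check might look
something like the following sketch. The snapshot keys =:taste= and
=:mouth-temp= and the thresholds are invented for this example; they
are not part of =EMPATH=.

#+begin_src clojure
;; Hypothetical taste-based check for step 3. `experiences` is
;; assumed to be a sequence of imagined sensory snapshots (maps).
(defn drinking?
  [experiences]
  (let [wet-moments (filter (fn [{:keys [taste mouth-temp]}]
                              (and (= taste :water)
                                   mouth-temp
                                   (< mouth-temp 0.5)))
                            experiences)]
    ;; Drinking is sustained -- a single splash doesn't count.
    (<= 3 (count wet-moments))))

;; Example use on imagined sensory data:
;; (drinking? (repeat 5 {:taste :water :mouth-temp 0.2}))  ; => true
#+end_src
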
For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate (sketched below), so you (and the entity in
   the image) must be sitting.

6. There must be a chair-like object since you are sitting.

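Below is a hedged sketch of what such a =sitting?= predicate could
look like. The sensory keys (=:knee-angle=, =:quadriceps-tension=,
=:seat-pressure=) and the thresholds are assumptions chosen for
illustration; the worm predicates used later in the thesis follow
the same pattern, but with the worm's own senses.

#+begin_src clojure
;; A hypothetical =sitting?= predicate. It inspects the most recent
;; imagined sensory snapshot; keys and thresholds are assumptions.
(defn sitting?
  [experiences]
  (let [{:keys [knee-angle quadriceps-tension seat-pressure]}
        (peek experiences)]            ; most recent imagined snapshot
    (and (< knee-angle 2.0)            ; knees clearly bent (radians)
         (< quadriceps-tension 0.2)    ; leg muscles mostly relaxed
         (< 0.5 seat-pressure))))      ; firm contact under the bottom

;; Example: data imagined from aligning a body model to the image.
;; (sitting? [{:knee-angle 1.6 :quadriceps-tension 0.1
;;             :seat-pressure 0.9}])   ; => true
#+end_src
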
Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

My program, =EMPATH=, uses this empathic problem solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: The actions of a worm in a video can be recognized by
#+caption: proprioceptive data and sensory predicates by filling
#+caption: in the missing sensory detail with previous experience.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered, rich viewpoint. This way, if you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach
such as TODO:REFERENCE. If these systems learn about running as
viewed from the side, they will not automatically be able to
recognize running from any other viewpoint.

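The factorization can be stated directly as code. Both helper
functions below are assumptions, declared only so the sketch
compiles; the point is the shape of the decomposition rather than
any particular implementation.

#+begin_src clojure
;; Recognition as an aligner composed with a recognizer.
(declare align-body-model recognize-from-alignment)

(defn recognize-action
  "Interpret `video` by aligning `body-model` to it, then running
  body-centered recognition on the aligned model."
  [body-model video]
  (recognize-from-alignment (align-body-model body-model video)))
#+end_src
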
Another powerful advantage is that using the language of multiple
body-centered, rich senses to describe body-centered actions offers
a massive boost in descriptive capability. Consider how difficult
it would be to compose a set of HOG filters to describe the action
of a simple worm-creature "curling" so that its head touches its
tail, and then behold the simplicity of describing this action in a
language designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing

** =CORTEX= is a toolkit for building sensate creatures

Hand integration demo

** Contributions

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

** Embodiment factors action recognition into manageable parts

** Action recognition is easy with a full gamut of senses

** Digression: bootstrapping touch using free exploration

** \Phi-space describes the worm's experiences

** Empathy is the process of tracing through \Phi-space

** Efficient action recognition with =EMPATH=

* Contributions

- Built =CORTEX=, a comprehensive platform for embodied AI
  experiments. It has many new features lacking in other systems,
  such as sound, and makes it easy to model and create new
  creatures.
- Created a novel concept for action recognition using artificial
  imagination.

In the second half of the thesis I develop a computational model of
empathy, using =CORTEX= as a base. Empathy in this context is the
ability to observe another creature and infer what sorts of
sensations that creature is feeling. My empathy algorithm involves
multiple phases. First is free play, where the creature moves around
and gains sensory experience. From this experience I construct a
representation of the creature's sensory state space, which I call
\Phi-space. Using \Phi-space, I construct an efficient function for
enriching the limited data that comes from observing another
creature with a full complement of imagined sensory data based on
previous experience. I can then use the imagined sensory data to
recognize what the observed creature is doing and feeling, using
straightforward embodied action predicates. This is all demonstrated
using a simple worm-like creature, and recognizing worm-actions
based on limited data.
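
A minimal sketch of this enrichment step is given below, under
strong simplifying assumptions: that \Phi-space is a flat collection
of remembered sensory snapshots, that only proprioception (a vector
of joint angles) is observable in the other creature, and that
squared Euclidean distance is an adequate similarity measure. None
of these representational choices are the ones =EMPATH= actually
uses; they only illustrate the idea.

#+begin_src clojure
;; Sketch of enriching observed proprioception with full imagined
;; senses by nearest-neighbor lookup in a remembered \Phi-space.
(defn- proprio-distance
  [remembered observed]
  (reduce + (map (fn [a b] (let [d (- a b)] (* d d)))
                 remembered observed)))

(defn nearest-experience
  "Return the remembered snapshot whose proprioceptive signature is
  closest to the observed one."
  [phi-space observed-proprioception]
  (apply min-key
         #(proprio-distance (:proprioception %) observed-proprioception)
         phi-space))

(defn imagine-senses
  "Enrich a sequence of bare proprioceptive observations with the
  full sensory data (touch, taste, muscle tension, ...) of the
  closest remembered moments."
  [phi-space observations]
  (map (partial nearest-experience phi-space) observations))
#+end_src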

Embodied representation using multiple senses such as touch,
proprioception, and muscle tension turns out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines
of LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.
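
To give a feel for how short such descriptions are, here is a
hypothetical stand-in for the =curled?= predicate that listing
\ref{grand-circle-intro} relies on. The data layout (a vector of
joint angles, measured in radians as bend away from straight, under
=:proprioception=) and the flexion threshold are assumptions made
for this sketch only.

#+begin_src clojure
;; Hypothetical =curled?=: the worm counts as curled when every joint
;; in the most recent proprioceptive reading is noticeably flexed.
(defn curled?
  [experiences]
  (let [joint-angles (:proprioception (peek experiences))]
    (and (seq joint-angles)
         (every? (fn [angle] (< 0.5 angle)) joint-angles))))
#+end_src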


* COMMENT names for cortex
- bioland