#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment


* Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider for example the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in figure \ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chair shows us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts (sketched in code after this list):

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat, and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with
     sensory data from prior experiences. For example, the
     alignment itself easily maps to proprioceptive data. Any
     sounds or obvious skin contact in the video can to a lesser
     extent trigger previous experience. Segments of previous
     experiences are stitched together to form a coherent and
     complete sensory portrait of the scene.
- Recognition :: With the scene described in terms of first
     person sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.

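As a minimal sketch of how these four stages might fit together (the
function names =align-body-model= and =infer-senses= are hypothetical
placeholders for illustration, not actual =EMPATH= code):

#+begin_src clojure
;; Hypothetical sketch of the four-stage pipeline. `align-body-model'
;; and `infer-senses' are placeholder names.
(defn empathic-recognition
  "Return the first action predicate satisfied by the sensory
   experience imagined for the given video."
  [video previous-experiences action-predicates]
  (let [alignment   (align-body-model video)            ; posture imitation
        full-senses (infer-senses alignment             ; empathy
                                  previous-experiences)]
    ;; recognition: run body-centered action predicates on the
    ;; synthesized first-person sensory data
    (some #(when (% full-senses) %) action-predicates)))
#+end_src
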
For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and learning that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the mouth.
   The imagined sensory experience can come from a simulation of
   the event, but can also be pattern-matched from previous,
   similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation,
   is not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate, so you (and the entity in the image) must be
   sitting.

6. There must be a chair-like object since you are sitting.
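
In code, these chair steps might look something like this (a hedged
sketch; every function here except =sitting?= is a hypothetical
placeholder):

#+begin_src clojure
;; Hypothetical sketch of chair recognition via empathy.
;; `align-body-model', `proprioception-of', and `lookup-experience'
;; are placeholder names; `sitting?' is the sensory predicate
;; mentioned above.
(defn chair-present?
  [image previous-experiences]
  (let [alignment (align-body-model image)              ; step 1
        proprio   (proprioception-of alignment)         ; step 2
        imagined  (lookup-experience previous-experiences
                                     proprio)]          ; steps 3-4
    ;; steps 5-6: if the imagined experience feels like sitting,
    ;; there must be a chair-like object
    (sitting? imagined)))
#+end_src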

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

My program, =EMPATH=, uses this empathic problem solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these poses by
#+caption: inferring the complete sensory experience from
#+caption: proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions. If you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach.
If these systems learn about running as viewed from the side, they
will not automatically be able to recognize running from any other
viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG (Histogram of Oriented Gradients)
filters to describe the action of a simple worm-creature ``curling''
so that its head touches its tail, and then behold the simplicity of
describing this action in a language designed for the task (listing
\ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing


** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this one. =CORTEX= was necessary to meet a
need among AI researchers at CSAIL and beyond: people often invent
neat ideas that are best expressed in the language of creatures and
senses, but in order to explore those ideas they must first build a
platform in which they can create simulated creatures with rich
senses! There are many ideas that would be simple to execute (such
as =EMPATH=), but attached to them is the multi-month effort to make
a good creature simulator. Often, that initial investment of time
proves to be too much, and the project must make do with a lesser
environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:


- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special Blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing Blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load.

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in Clojure, a dialect of =LISP= that runs on the
Java Virtual Machine (JVM). The API for creating and simulating
creatures and senses is entirely expressed in Clojure, though many
senses are implemented at the layer of jMonkeyEngine or below. For
example, for the sense of hearing I use a layer of Clojure code on
top of a layer of Java JNI bindings that drive a layer of =C++=
code which implements a modified version of =OpenAL= to support
multiple listeners. =CORTEX= is the only simulation environment
that I know of that can support multiple entities that can each
hear the world from their own perspective. Other senses also
require a small layer of Java code. =CORTEX= also uses =bullet=, a
physics simulator written in =C=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: blender-worm
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

Here are some things I anticipate that =CORTEX= might be used for:

- exploring new ideas about sensory integration
- distributed communication among swarm creatures
- self-learning using free exploration
- evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
- imagination using subworlds

During one test with =CORTEX=, I created 3,000 entities each with
their own independent senses and ran them all at only 1/80 real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch (more sensitive at
the fingertips), as well as eyes and ears, and it ran at around 1/4
real time.

#+BEGIN_LaTeX
\begin{sidewaysfigure}
\includegraphics[width=9.5in]{images/full-hand.png}
\caption{A model of my own hand, created in Blender,
a free 3D-modeling program. Senses and joints are described
using special nodes in Blender. The senses are displayed on
the right, and the simulation is displayed on the left. Notice
that the hand is curling its fingers, that it can see its own
finger from the eye in its palm, and that it can feel its own
thumb touching its palm.}
\end{sidewaysfigure}
#+END_LaTeX

** Contributions

I built =CORTEX=, a comprehensive platform for embodied AI
experiments. =CORTEX= supports many new features lacking in other
systems, such as sound. It is easy to create new creatures using
Blender, a free 3D modeling program.

I built =EMPATH=, which uses =CORTEX= to identify the actions of a
worm-like creature using a computational model of empathy.

* Building =CORTEX=

** To explore embodiment, we need a world, body, and senses

** Because of Time, simulation is preferable to reality

** Video game engines are a great starting point

** Bodies are composed of segments connected by joints

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* Empathy in a simulated worm

Here I develop a computational model of empathy, using =CORTEX= as
a base. Empathy in this context is the ability to observe another
creature and infer what sorts of sensations that creature is
feeling. My empathy algorithm involves multiple phases. First is
free play, where the creature moves around and gains sensory
experience. From this experience I construct a representation of
the creature's sensory state space, which I call \Phi-space. Using
\Phi-space, I construct an efficient function which takes the
limited data that comes from observing another creature and
enriches it into a full complement of imagined sensory data. I can
then use the imagined sensory data to recognize what the observed
creature is doing and feeling, using straightforward embodied
action predicates. This is all demonstrated using a simple
worm-like creature, and recognizing worm-actions based on limited
data.

#+caption: Here is the worm with which we will be working.
#+caption: It is composed of 5 segments. Each segment has a
#+caption: pair of extensor and flexor muscles. Each of the
#+caption: worm's four joints is a hinge joint which allows
#+caption: 30 degrees of rotation to either side. Each segment
#+caption: of the worm is touch-capable and has a uniform
#+caption: distribution of touch sensors on each of its faces.
#+caption: Each joint has a proprioceptive sense to detect
#+caption: relative positions. The worm segments are all the
#+caption: same except for the first one, which has a much
#+caption: higher weight than the others to allow for easy
#+caption: manual motor control.
#+name: basic-worm-view
#+ATTR_LaTeX: :width 10cm
[[./images/basic-worm-view.png]]

#+caption: Program for reading a worm from a blender file and
#+caption: outfitting it with the senses of proprioception,
#+caption: touch, and the ability to move, as specified in the
#+caption: blender file.
#+name: get-worm
#+begin_listing clojure
#+begin_src clojure
(defn worm []
  (let [model (load-blender-model "Models/worm/worm.blend")]
    {:body (doto model (body!))
     :touch (touch! model)
     :proprioception (proprioception! model)
     :muscles (movement! model)}))
#+end_src
#+end_listing

** Embodiment factors action recognition into manageable parts

Using empathy, I divide the problem of action recognition into a
recognition process expressed in the language of a full complement
of senses, and an imaginative process that generates full sensory
data from partial sensory data. Splitting the action recognition
problem in this manner greatly reduces the total amount of work to
recognize actions: the imaginative process is mostly just matching
previous experience, and the recognition process gets to use all
the senses to directly describe any action.

** Action recognition is easy with a full gamut of senses

Embodied representations using multiple senses such as touch,
proprioception, and muscle tension turn out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines
of LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 8 lines to describe the seemingly
complicated action of wiggling.

The following action predicates each take a stream of sensory
experience, observe however much of it they desire, and decide
whether the worm is doing the action they describe. =curled?=
relies on proprioception, =resting?= relies on touch, =wiggling?=
relies on a Fourier analysis of muscle contraction, and
=grand-circle?= relies on touch and reuses =curled?= as a guard.

#+caption: Program for detecting whether the worm is curled. This is the
#+caption: simplest action predicate, because it only uses the last frame
#+caption: of sensory experience, and only uses proprioceptive data. Even
#+caption: this simple predicate, however, is automatically frame
#+caption: independent and ignores vermopomorphic differences such as
#+caption: worm textures and colors.
#+name: curled
#+begin_listing clojure
#+begin_src clojure
(defn curled?
  "Is the worm curled up?"
  [experiences]
  (every?
   (fn [[_ _ bend]]
     (> (Math/sin bend) 0.64))
   (:proprioception (peek experiences))))
#+end_src
#+end_listing
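
For instance, with a single synthetic frame of proprioceptive data
(the joint triples below are made-up illustrative values):

#+begin_src clojure
;; Each proprioceptive triple is [pitch yaw bend]; (Math/sin bend)
;; exceeds 0.64 for every joint here, so the worm counts as curled.
(curled? [{:proprioception [[0 0 1.0] [0 0 0.9] [0 0 0.8] [0 0 0.95]]}])
;; => true
#+end_src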

#+caption: Program for summarizing the touch information in a patch
#+caption: of skin.
#+name: touch-summary
#+begin_listing clojure
#+begin_src clojure
(defn contact
  "Determine how much contact a particular worm segment has with
   other objects. Returns a value between 0 and 1, where 1 is full
   contact and 0 is no contact."
  [touch-region [coords contact :as touch]]
  (-> (zipmap coords contact)
      (select-keys touch-region)
      (vals)
      (#(map first %))
      (average)
      (* 10)
      (- 1)
      (Math/abs)))
#+end_src
#+end_listing


#+caption: Program for detecting whether the worm is at rest. This program
#+caption: uses a summary of the tactile information from the underbelly
#+caption: of the worm, and is only true if every segment is touching the
#+caption: floor. Note that this function contains no references to
#+caption: proprioception at all.
#+name: resting
#+begin_listing clojure
#+begin_src clojure
(def worm-segment-bottom (rect-region [8 15] [14 22]))

(defn resting?
  "Is the worm resting on the ground?"
  [experiences]
  (every?
   (fn [touch-data]
     (< 0.9 (contact worm-segment-bottom touch-data)))
   (:touch (peek experiences))))
#+end_src
#+end_listing

#+caption: Program for detecting whether the worm is curled up into a
#+caption: full circle. Here the embodied approach begins to shine, as
#+caption: I am able to both use a previous action predicate (=curled?=)
#+caption: as well as the direct tactile experience of the head and tail.
#+name: grand-circle
#+begin_listing clojure
#+begin_src clojure
(def worm-segment-bottom-tip (rect-region [15 15] [22 22]))

(def worm-segment-top-tip (rect-region [0 15] [7 22]))

(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
              (< 0.55 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing


#+caption: Program for detecting whether the worm has been wiggling for
#+caption: the last few frames. It uses a Fourier analysis of the muscle
#+caption: contractions of the worm's tail to determine wiggling. This is
#+caption: significant because there is no particular frame that clearly
#+caption: indicates that the worm is wiggling --- only when multiple frames
#+caption: are analyzed together is the wiggling revealed. Defining
#+caption: wiggling this way also gives the worm an opportunity to learn
#+caption: and recognize ``frustrated wiggling'', where the worm tries to
#+caption: wiggle but can't. Frustrated wiggling is very visually different
#+caption: from actual wiggling, but this definition gives it to us for free.
#+name: wiggling
#+begin_listing clojure
#+begin_src clojure
;; Requires Apache Commons Math on the classpath for the FFT classes.
(import '[org.apache.commons.math3.transform
          FastFourierTransformer DftNormalization TransformType])

(defn fft [nums]
  (map
   #(.getReal %)
   (.transform
    (FastFourierTransformer. DftNormalization/STANDARD)
    (double-array nums) TransformType/FORWARD)))

(def indexed (partial map-indexed vector))

(defn max-indexed [s]
  (first (sort-by (comp - second) (indexed s))))

(defn wiggling?
  "Is the worm wiggling?"
  [experiences]
  (let [analysis-interval 0x40]
    (when (> (count experiences) analysis-interval)
      (let [a-flex 3
            a-ex 2
            muscle-activity
            (map :muscle (vector:last-n experiences analysis-interval))
            base-activity
            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
        (= 2
           (first
            (max-indexed
             (map #(Math/abs %)
                  (take 20 (fft base-activity))))))))))
#+end_src
#+end_listing
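
As a quick sanity check of the =fft= helper (an illustrative example,
not from the original text): a cosine wave completing two full
periods over an 8-sample window should have its energy concentrated
at frequency index 2.

#+begin_src clojure
;; The dominant frequency bin of a period-4 cosine over 8 samples:
(first (max-indexed (map #(Math/abs %) (fft [1 0 -1 0 1 0 -1 0]))))
;; => 2
#+end_src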

With these action predicates, I can now recognize the actions of
the worm while it is moving under my control and I have access to
all the worm's senses.

#+caption: Use the action predicates defined earlier to report on
#+caption: what the worm is doing while in simulation.
#+name: report-worm-activity
#+begin_listing clojure
#+begin_src clojure
(defn debug-experience
  [experiences text]
  (cond
   (grand-circle? experiences) (.setText text "Grand Circle")
   (curled? experiences)       (.setText text "Curled")
   (wiggling? experiences)     (.setText text "Wiggling")
   (resting? experiences)      (.setText text "Resting")))
#+end_src
#+end_listing

#+caption: Using =debug-experience=, the body-centered predicates
#+caption: work together to classify the behaviour of the worm
#+caption: while under manual motor control.
#+name: worm-identify-init
#+ATTR_LaTeX: :width 10cm
[[./images/worm-identify-init.png]]

These action predicates satisfy the recognition requirement of an
empathic recognition system. There is a lot of power in the
simplicity of the action predicates. They describe their actions
without getting confused in visual details of the worm. Each one is
frame independent, but more than that, they are each independent of
irrelevant visual details of the worm and the environment. They
will work regardless of whether the worm is a different color or
heavily textured, or if the environment has strange lighting.

The trick now is to make the action predicates work even when the
sensory data on which they depend is absent. If I can do that, then
I will have gained much.

** \Phi-space describes the worm's experiences

As a first step towards building empathy, I need to gather all of
the worm's experiences during free play. I use a simple vector to
store all the experiences.

#+caption: Program to gather the worm's experiences into a vector for
#+caption: further processing. The =motor-control-program= line uses
#+caption: a motor control script that causes the worm to execute a series
#+caption: of ``exercises'' that include all the action predicates.
#+name: generate-phi-space
#+begin_listing clojure
#+begin_src clojure
(defn generate-phi-space []
  (let [experiences (atom [])]
    (run-world
     (apply-map
      worm-world
      (merge
       (worm-world-defaults)
       {:end-frame 700
        :motor-control
        (motor-control-program worm-muscle-labels do-all-the-things)
        :experiences experiences})))
    @experiences))
#+end_src
#+end_listing
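
Once generated, the experience vector is an ordinary Clojure vector
with one experience map per simulation frame (the =:end-frame 700=
setting above yields roughly 700 frames). For example:

#+begin_src clojure
;; Illustrative use: build the experience vector once and inspect it.
(def phi-space (generate-phi-space))
(count phi-space)       ;; => ~700 experience frames
(keys (peek phi-space)) ;; the senses recorded each frame,
                        ;; e.g. :touch :proprioception :muscle
#+end_src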

Each element of the experience vector exists in the vast space of
all possible worm experiences. Most of this vast space is actually
unreachable due to physical constraints of the worm's body. For
example, the worm's segments are connected by hinge joints that put
a practical limit on the worm's degrees of freedom. Also, the worm
cannot be bent into a circle so that its ends are touching and at
the same time not also experience the sensation of touching itself.

As the worm moves around during free play and the vector grows
larger, the vector begins to define a subspace which is all the
practical experiences the worm can experience during normal
operation, which I call \Phi-space, short for physical-space. The
vector defines a path through \Phi-space. This path has interesting
properties that all derive from embodiment. The proprioceptive
components are completely smooth, because in order for the worm to
move from one position to another, it must pass through the
intermediate positions. The path invariably forms loops as actions
are repeated. Finally and most importantly, proprioception actually
gives very strong inference about the other senses. For example,
when the worm is flat, you can infer that it is touching the ground
and that its muscles are not active, because if the muscles were
active, the worm would be moving and would not be perfectly flat.
In order to stay flat, the worm has to be touching the ground, or
it would again be moving out of the flat position due to gravity.
If the worm is positioned in such a way that it interacts with
itself, then it is very likely to be feeling the same tactile
feelings as the last time it was in that position, because it has
the same body as then. If you observe multiple frames of
proprioceptive data, then you can become increasingly confident
about the exact activations of the worm's muscles, because it
generally takes a unique combination of muscle contractions to
transform the worm's body along a specific path through \Phi-space.

There is a simple way of taking \Phi-space and the total ordering
provided by an experience vector and reliably inferring the rest of
the senses.

** Empathy is the process of tracing through \Phi-space

The first piece is a fast way to ask, given a proprioceptive
observation, where in \Phi-space the worm has felt something like
this before. The =bin= function converts joint angles into coarse
spatial bins, and =gen-phi-scan= tries the finest bins first,
falling back to progressively coarser ones, as an approximate
nearest-neighbor lookup over \Phi-space.

#+caption: Program to convert proprioceptive data into a fast lookup
#+caption: function over \Phi-space, using spatial binning of joint
#+caption: angles as an approximate nearest-neighbor search.
#+name: gen-phi-scan
#+begin_listing clojure
#+begin_src clojure
(defn bin [digits]
  (fn [angles]
    (->> angles
         (flatten)
         (map (juxt #(Math/sin %) #(Math/cos %)))
         (flatten)
         (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))

(defn gen-phi-scan
  "Nearest-neighbors with spatial binning. Only returns a result if
   the proprioceptive data is within 10% of a previously recorded
   result in all dimensions."
  [phi-space]
  (let [bin-keys (map bin [3 2 1])
        bin-maps
        (map (fn [bin-key]
               (group-by
                (comp bin-key :proprioception phi-space)
                (range (count phi-space)))) bin-keys)
        lookups (map (fn [bin-key bin-map]
                       (fn [proprio] (bin-map (bin-key proprio))))
                     bin-keys bin-maps)]
    (fn lookup [proprio-data]
      (set (some #(% proprio-data) lookups)))))
#+end_src
#+end_listing
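
To see what the binning does, consider a single joint triple (an
illustrative value, not from the text): with 2 digits, each sine and
cosine component is scaled by 10 and rounded.

#+begin_src clojure
;; [pitch yaw bend] = [0 0 1.0] flattens to sine/cosine pairs
;; (0, 1, 0, 1, 0.841..., 0.540...), scaled by 10 and rounded:
((bin 2) [[0 0 1.0]])
;; => [0 10 0 10 8 5]
#+end_src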

The next piece stitches these per-frame candidate matches into
contiguous chains of experience. =longest-thread= greedily finds
the longest run of \Phi-space indices that are consecutive in time
and consistent with each successive observation, so that segments
of previous experience can be replayed as coherent wholes.

#+caption: Program to stitch together the longest matching segments
#+caption: of previous experience, given candidate \Phi-space index
#+caption: sets ordered from most recent to least recent.
#+name: longest-thread
#+begin_listing clojure
#+begin_src clojure
(defn longest-thread
  "Find the longest thread from phi-index-sets. The index sets should
   be ordered from most recent to least recent."
  [phi-index-sets]
  (loop [result '()
         [thread-bases & remaining :as phi-index-sets] phi-index-sets]
    (if (empty? phi-index-sets)
      (vec result)
      (let [threads
            (for [thread-base thread-bases]
              (loop [thread (list thread-base)
                     remaining remaining]
                (let [next-index (dec (first thread))]
                  (cond (empty? remaining) thread
                        (contains? (first remaining) next-index)
                        (recur
                         (cons next-index thread) (rest remaining))
                        :else thread))))
            longest-thread
            (reduce (fn [thread-a thread-b]
                      (if (> (count thread-a) (count thread-b))
                        thread-a thread-b))
                    '(nil)
                    threads)]
        (recur (concat longest-thread result)
               (drop (count longest-thread) phi-index-sets))))))
#+end_src
#+end_listing
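
These two pieces compose naturally. A minimal sketch of the tracing
step (the name =trace-phi-space= is mine, for illustration):

#+begin_src clojure
;; Convert each observed proprioceptive frame into a set of candidate
;; Phi-space indices, then stitch the candidates into the longest
;; consistent threads of remembered experience.
(defn trace-phi-space
  [phi-space proprio-frames]   ; frames ordered most recent first
  (let [scan (gen-phi-scan phi-space)]
    (longest-thread (map scan proprio-frames))))
#+end_src
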
There is one final piece, which is to replace missing sensory data
with a best-guess estimate. While I could fill in missing data by
using a gradient over the closest known sensory data points,
averages can be misleading: it is certainly possible to create an
impossible sensory state by averaging two possible sensory states.
Therefore, I simply replicate the most recent sensory experience to
fill in the gaps.

#+caption: Fill in blanks in sensory experience by replicating the most
#+caption: recent experience.
#+name: infer-nils
#+begin_listing clojure
#+begin_src clojure
(defn infer-nils
  "Replace nils with the next available non-nil element in the
   sequence, or barring that, 0."
  [s]
  (loop [i (dec (count s))
         v (transient s)]
    (if (zero? i) (persistent! v)
        (if-let [cur (v i)]
          (if (get v (dec i) 0)
            (recur (dec i) v)
            (recur (dec i) (assoc! v (dec i) cur)))
          (recur i (assoc! v i 0))))))
#+end_src
#+end_listing
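
For example, each nil is replaced by the next non-nil element to its
right:

#+begin_src clojure
(infer-nils [1 nil nil 2]) ;; => [1 2 2 2]
#+end_src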



** Efficient action recognition with =EMPATH=

** Digression: bootstrapping touch using free exploration

* Contributions



# An anatomical joke:
# - Training
# - Skeletal imitation
# - Sensory fleshing-out
# - Classification
|