#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+LaTeX_CLASS_OPTIONS: [nofloat]

* COMMENT templates
#+caption:
#+caption:
#+caption:
#+caption:
#+name: name
#+begin_listing clojure
#+begin_src clojure
#+end_src
#+end_listing

#+caption:
#+caption:
#+caption:
#+name: name
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]

* COMMENT Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in figure \ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chair shows us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat, and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with
     sensory data from prior experiences. For example, the
     alignment itself easily maps to proprioceptive data. Any
     sounds or obvious skin contact in the video can to a lesser
     extent trigger previous experience. Segments of previous
     experiences are stitched together to form a coherent and
     complete sensory portrait of the scene.
- Recognition :: With the scene described in terms of first
     person sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.

For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and having learned that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the
   mouth. The imagined sensory experience can come from a
   simulation of the event, but can also be pattern-matched from
   previous, similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation,
   is not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate (sketched below), so you (and the entity in
   the image) must be sitting.

6. There must be a chair-like object since you are sitting.

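To make step 5 concrete, here is a minimal sketch of what such a
=sitting?= predicate might look like, written in the style of the
worm action predicates developed later in this thesis. All three
helper predicates (=bent-knees?=, =bottom-contact?=, and
=relaxed-legs?=) are hypothetical stand-ins, not part of =CORTEX=.

#+begin_src clojure
(defn sitting?
  "Hypothetical sketch: is the imagined sensory experience the
   feeling of sitting? Assumes knees bent, weight resting on the
   posterior, and leg muscles relaxed."
  [experiences]
  (let [latest (peek experiences)]
    (and (bent-knees?     (:proprioception latest))
         (bottom-contact? (:touch latest))
         (relaxed-legs?   (:muscle latest)))))
#+end_src
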
Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

My program, =EMPATH=, uses this empathic problem solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these
#+caption: poses by inferring the complete sensory experience
#+caption: from proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions. If you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach.
If these systems learn about running as viewed from the side, they
will not automatically be able to recognize running from any other
viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG filters to describe the action of
a simple worm-creature ``curling'' so that its head touches its
tail, and then behold the simplicity of describing this action in a
language designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing

** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this thesis. =CORTEX= addresses a need
among AI researchers at CSAIL and beyond: people often invent neat
ideas that are best expressed in the language of creatures and
senses, but in order to explore those ideas they must first build a
platform in which they can create simulated creatures with rich
senses! There are many ideas that would be simple to execute (such
as =EMPATH=), but attached to them is the multi-month effort of
making a good creature simulator. Often, that initial investment of
time proves to be too much, and the project must make do with a
lesser environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:

- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special Blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing Blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load.

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in clojure, a dialect of =LISP= that runs on the
Java Virtual Machine (JVM). The API for creating and simulating
creatures and senses is entirely expressed in clojure, though many
senses are implemented at the layer of jMonkeyEngine or below. For
example, for the sense of hearing I use a layer of clojure code on
top of a layer of Java JNI bindings that drive a layer of =C++=
code which implements a modified version of =OpenAL= to support
multiple listeners. =CORTEX= is the only simulation environment
that I know of that can support multiple entities that can each
hear the world from their own perspective. Other senses also
require a small layer of Java code. =CORTEX= also uses =bullet=, a
physics simulator written in =C=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: blender-worm
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

Here are some things I anticipate that =CORTEX= might be used for:

- exploring new ideas about sensory integration
- distributed communication among swarm creatures
- self-learning using free exploration
- evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
- imagination using subworlds

During one test with =CORTEX=, I created 3,000 creatures, each with
its own independent senses, and ran them all at only 1/80th of real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch sensors (more
sensitive at the fingertips), as well as eyes and ears, and it ran
at around 1/4 real time.

#+BEGIN_LaTeX
\begin{sidewaysfigure}
\includegraphics[width=9.5in]{images/full-hand.png}
\caption{
I modeled my own right hand in Blender and rigged it with all the
senses that {\tt CORTEX} supports. My simulated hand has a
biologically inspired distribution of touch sensors. The senses are
displayed on the right, and the simulation is displayed on the
left. Notice that my hand is curling its fingers, that it can see
its own finger from the eye in its palm, and that it can feel its
own thumb touching its palm.}
\end{sidewaysfigure}
#+END_LaTeX

** Contributions

- I built =CORTEX=, a comprehensive platform for embodied AI
  experiments. =CORTEX= supports many features lacking in other
  systems, such as proper simulation of hearing. It is easy to
  create new =CORTEX= creatures using Blender, a free 3D modeling
  program.

- I built =EMPATH=, which uses =CORTEX= to identify the actions of
  a worm-like creature using a computational model of empathy.

* Building =CORTEX=

I intend for =CORTEX= to be used as a general purpose library for
building creatures and outfitting them with senses, so that it will
be useful for other researchers who want to test out ideas of their
own. To this end, wherever I have had to make architectural choices
about =CORTEX=, I have chosen to give as much freedom to the user as
possible, so that =CORTEX= may be used for things I have not
foreseen.

** COMMENT Simulation or Reality?

The most important architectural decision of all is the choice to
use a computer-simulated environment in the first place! The world
is a vast and rich place, and for now simulations are a very poor
reflection of its complexity. It may be that there is a significant
qualitative difference between dealing with senses in the real
world and dealing with pale facsimiles of them in a
simulation. What are the advantages and disadvantages of a
simulation vs. reality?

*** Simulation

The advantages of virtual reality are that when everything is a
simulation, experiments in that simulation are absolutely
reproducible. It's also easier to change the character and world
to explore new situations and different sensory combinations.

If the world is to be simulated on a computer, then not only do
you have to worry about whether the character's senses are rich
enough to learn from the world, but whether the world itself is
rendered with enough detail and realism to give enough working
material to the character's senses. To name just a few
difficulties facing modern physics simulators: destructibility of
the environment, simulation of water/other fluids, large areas,
nonrigid bodies, lots of objects, smoke. I don't know of any
computer simulation that would allow a character to take a rock
and grind it into fine dust, then use that dust to make a clay
sculpture, at least not without spending years calculating the
interactions of every single small grain of dust. Maybe a
simulated world with today's limitations doesn't provide enough
richness for real intelligence to evolve.

*** Reality

The other approach for playing with senses is to hook your
software up to real cameras, microphones, robots, etc., and let it
loose in the real world. This has the advantage of eliminating
concerns about simulating the world at the expense of increasing
the complexity of implementing the senses. Instead of just
grabbing the current rendered frame for processing, you have to
use an actual camera with real lenses and interact with photons to
get an image. It is much harder to change the character, which is
now partly a physical robot of some sort, since doing so involves
changing things around in the real world instead of modifying
lines of code. While the real world is very rich and definitely
provides enough stimulation for intelligence to develop (as
evidenced by our own existence), it is also uncontrollable in the
sense that a particular situation cannot be recreated perfectly or
saved for later use. It is harder to conduct science because it is
harder to repeat an experiment. The worst thing about using the
real world instead of a simulation is the matter of time. Instead
of simulated time you get the constant and unstoppable flow of
real time. This severely limits the sorts of software you can use
to program the AI, because all sense inputs must be handled in real
time. Complicated ideas may have to be implemented in hardware or
may simply be impossible given the current speed of our
processors. Contrast this with a simulation, in which the flow of
time in the simulated world can be slowed down to accommodate the
limitations of the character's programming. In terms of cost,
doing everything in software is far cheaper than building custom
real-time hardware. All you need is a laptop and some patience.

** COMMENT Because of Time, simulation is preferable to reality

I envision =CORTEX= being used to support rapid prototyping and
iteration of ideas. Even if I could put together a well constructed
kit for creating robots, it would still not be enough because of
the scourge of real-time processing. Anyone who wants to test their
ideas in the real world must always worry about getting their
algorithms to run fast enough to process information in real time.
The need for real time processing only increases if multiple senses
are involved. In the extreme case, even simple algorithms will have
to be accelerated by ASIC chips or FPGAs, turning what would
otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the simulation
in proportion to the amount of processing each frame requires. From
the perspective of the creatures inside the simulation, time always
appears to flow at a constant rate, regardless of how complicated
the environment becomes or how many creatures are in the
simulation. The cost is that =CORTEX= can sometimes run slower than
real time. This can also be an advantage, however ---
simulations of very simple creatures in =CORTEX= generally run at
40x real time on my machine!

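To make the idea concrete, here is a minimal sketch of time
dilation (not =CORTEX='s actual implementation): the physics world
advances by a fixed simulated interval each frame, no matter how
much wall-clock time the sensory processing takes. Here
=process-senses!= is a hypothetical stand-in for that processing.

#+begin_src clojure
(def simulated-timestep
  "Each frame advances the world by a constant 1/60 second of
   simulated time, regardless of wall-clock time."
  (float (/ 1 60)))

(defn step-world!
  "Hypothetical game-loop step: run each creature's (possibly slow)
   sensory processing, then advance physics by a fixed simulated
   interval, so creatures perceive a smooth flow of time."
  [physics-space creatures]
  (doseq [creature creatures]
    (process-senses! creature))
  (.update physics-space simulated-timestep))
#+end_src
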
** COMMENT Video game engines are a great starting point

I did not need to write my own physics simulation code or shaders
to build =CORTEX=. Doing so would lead to a system that is
impossible for anyone but myself to use anyway. Instead, I use a
video game engine as a base and modify it to accommodate the
additional needs of =CORTEX=. Video game engines are an ideal
starting point to build =CORTEX=, because they are not far from
being creature building systems themselves.

First off, general purpose video game engines come with a physics
engine and lighting / sound system. The physics system provides
tools that can be co-opted to serve as touch, proprioception, and
muscles. Since some games support split screen views, a good video
game engine will allow you to efficiently create multiple cameras
in the simulated world that can be used as eyes. Video game systems
offer integrated asset management for things like textures and
creature models, providing an avenue for defining creatures.
Finally, because video game engines support a large number of
users, if I don't stray too far from the base system, other
researchers can turn to this community for help when doing their
research.

** COMMENT =CORTEX= is based on jMonkeyEngine3

While preparing to build =CORTEX= I studied several video game
engines to see which would best serve as a base. The top contenders
were:

- [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by id
     Software in 1997. All the source code was released by id
     Software into the Public Domain several years ago, and as a
     result it has been ported to many different languages. This
     engine was famous for its advanced use of realistic shading
     and had decent and fast physics simulation. The main advantage
     of the Quake II engine is its simplicity, but I ultimately
     rejected it because the engine is too tied to the concept of a
     first-person shooter game. One of the problems I had was that
     there does not seem to be any easy way to attach multiple
     cameras to a single character. There are also several physics
     clipping issues that are corrected in a way that only applies
     to the main character and does not apply to arbitrary objects.

- [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
     and Quake I engines and is used by Valve in the Half-Life
     series of games. The physics simulation in the Source Engine
     is quite accurate and probably the best out of all the engines
     I investigated. There is also an extensive community actively
     working with the engine. However, applications that use the
     Source Engine must be written in C++, the code is not open, it
     only runs on Windows, and the tools that come with the SDK to
     handle models and textures are complicated and awkward to use.

- [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
     games in Java. It uses OpenGL to render to the screen and uses
     scene graphs to avoid drawing things that do not appear on the
     screen. It has an active community and several games in the
     pipeline. The engine was not built to serve any particular
     game but is instead meant to be used for any 3D game.

I chose jMonkeyEngine3 because it had the most features out of all
the free projects I looked at, and because I could then write my
code in clojure, an implementation of =LISP= that runs on the JVM.

** COMMENT Bodies are composed of segments connected by joints

For the simple worm-like creatures I will use later on in this
thesis, I could define a simple API in =CORTEX= that would allow
one to create boxes, spheres, etc., and leave that API as the sole
way to create creatures. However, for =CORTEX= to truly be useful
for other projects, it needs to have a way to construct complicated
creatures. If possible, it would be nice to leverage work that has
already been done by the community of 3D modelers, or at least
enable people who are talented at modeling but not programming to
design =CORTEX= creatures.

Therefore, I use Blender, a free 3D modeling program, as the main
way to create creatures in =CORTEX=. However, the creatures modeled
in Blender must also be simple to simulate in jMonkeyEngine3's game
engine, and must also be easy to rig with =CORTEX='s senses.

While trying to find a good compromise for body-design, one option
I ultimately rejected is to use Blender's [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Rigging/Armatures][armature]] system. The idea
would have been to define a mesh which describes the creature's
entire body. To this you add a skeleton which deforms this mesh
(called rigging). This technique is used extensively to model
humans and create realistic animations. It is not a good technique
for physical simulation, because deformable surfaces are hard to
model. Humans work like a squishy bag with some hard bones to give
it shape. The bones are easy to simulate physically, but they
interact with the world through the skin, which is contiguous, but
does not have a constant shape. In order to simulate skin you need
some way to continuously update the physical model of the skin
along with the movement of the bones. Given that bullet is
optimized for rigid, solid objects, this leads to unmanageable
computation and incorrect simulation.

Instead of using the human-like ``deformable bag of bones''
approach, I decided to base my body plans on multiple solid objects
that are connected by joints, inspired by the robot =EVE= from the
movie WALL-E.

#+caption: =EVE= from the movie WALL-E. This body plan turns
#+caption: out to be much better suited to my purposes than a more
#+caption: human-like one.
#+ATTR_LaTeX: :width 10cm
[[./images/Eve.jpg]]

=EVE='s body is composed of several rigid components that are held
together by invisible joint constraints. This is what I mean by
``eve-like''. The main reason that I use eve-style bodies is for
efficiency, and so that there will be correspondence between the
AI's vision and the physical presence of its body. Each individual
section is simulated by a separate rigid body that corresponds
exactly with its visual representation and does not change.
Sections are connected by invisible joints that are well supported
in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
can efficiently simulate hundreds of rigid bodies connected by
joints. Sections do not have to stay as one piece forever; they can
be dynamically replaced with multiple sections to simulate
splitting in two. This could be used to simulate retractable claws
or =EVE='s hands, which are able to coalesce into one object in the
movie.

*** Solidifying/Connecting the body

Importing bodies from Blender into =CORTEX= involves encoding
metadata into the blender file that specifies the mass of each
component and the joints by which those components are connected. I
do this in Blender in two ways. First is by using the ``metadata''
field of each solid object to specify the mass. Second is by using
Blender ``empty nodes'' to specify the position and type of each
joint. Empty nodes have no mass, physical presence, or appearance,
but they can hold metadata and have names. I use a tree structure
of empty nodes to specify joints. There is a parent node named
``joints'', and a series of empty child nodes of the ``joints''
node that each represent a single joint.

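For example (an illustrative sketch, not a required format), a body
segment might carry a ``mass'' metadata field holding the string
=1.5=, while an empty node under ``joints'' might carry a ``joint''
field holding a string like the following, which =CORTEX= reads and
evaluates as clojure data:

#+begin_src clojure
;; Hypothetical contents of a joint node's "joint" metadata field;
;; the format matches the examples in the docstring of connect below.
{:type :hinge
 :limit [0 (/ Math/PI 2)]
 :axis (Vector3f. 0 1 0)}
#+end_src
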
#+caption: View of the hand model in Blender showing the main ``joints''
#+caption: node (highlighted in yellow) and its children, which each
#+caption: represent a joint in the hand. Each joint node has metadata
#+caption: specifying what sort of joint it is.
#+name: blender-hand
#+ATTR_LaTeX: :width 10cm
[[./images/hand-screenshot1.png]]

=CORTEX= creates a creature in two steps: first, it traverses the
nodes in the blender file and creates physical representations for
any of them that have mass defined.

#+caption: Program for iterating through the nodes in a blender file
#+caption: and generating physical jMonkeyEngine3 objects with mass
#+caption: and a matching physics shape.
#+name: physical
#+begin_listing clojure
#+begin_src clojure
(defn physical!
  "Iterate through the nodes in creature and make them real physical
   objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
#+end_src
#+end_listing

The next step to making a proper body is to connect those pieces
together with joints. jMonkeyEngine has a large array of joints
available via =bullet=, such as Point2Point, Cone, Hinge, and a
generic Six Degree of Freedom joint, with or without spring
restitution. =CORTEX='s procedure for binding the creature together
with joints is as follows:

- Find the children of the ``joints'' node.
- Determine the two spatials the joint is meant to connect.
- Create the joint based on the meta-data of the empty node.

The higher order function =sense-nodes= from =cortex.sense=
simplifies finding the joints based on their parent ``joints''
node.

#+caption: Retrieving the child empty nodes from a single named
#+caption: empty node is a common pattern in =CORTEX=. Further
#+caption: instances of this technique for the senses will be
#+caption: omitted.
#+name: get-empty-nodes
#+begin_listing clojure
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
   children are considered markers for an instance of that sense. This
   function generates functions to find those children, given the name
   of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
#+end_src
#+end_listing

To find a joint's targets, =CORTEX= creates a small cube,
centered around the empty-node, and grows the cube exponentially
until it intersects two /physical/ objects. The objects are ordered
according to the joint's rotation, with the first one being the
object that has more negative coordinates in the joint's reference
frame. Since the objects must be physical, the empty-node itself
escapes detection. Because the objects must be physical,
=joint-targets= must be called /after/ =physical!= is called.

#+caption: Program to find the targets of a joint node by the
#+caption: exponential growth of a search cube.
#+name: joint-targets
#+begin_listing clojure
#+begin_src clojure
(defn joint-targets
  "Return the two objects closest to the joint object, ordered
   from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
#+end_src
#+end_listing

Once =CORTEX= finds all joints and targets, it creates them using a
simple dispatch on the metadata of the joint node.

#+caption: Program to dispatch on blender metadata and create joints
#+caption: suitable for physical simulation.
#+name: joint-dispatch
#+begin_listing clojure
#+begin_src clojure
(defmulti joint-dispatch
  "Translate blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
#+end_src
#+end_listing

All that is left is to combine the above pieces into something that
can operate on the collection of nodes that a blender file
represents.

#+caption: Program to completely create a joint given information
#+caption: from a blender file.
#+name: connect
#+begin_listing clojure
#+begin_src clojure
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
   'joint. The type of joint is determined by the metadata on 'joint.

   Here are some examples:
   {:type :point}
   {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
   (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

   {:type :cone :limit-xz 0
                :limit-xy 0
                :twist 0} (use XZY rotation mode in blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects, which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
#+end_src
#+end_listing

In general, whenever =CORTEX= exposes a sense (or in this case
physicality), it provides a function of the type =sense!=, which
takes in a collection of nodes and augments it to support that
sense. The function returns any controls necessary to use that
sense. In this case =body!= creates a physical body and returns no
control functions.

#+caption: Program to give joints to a creature.
#+name: joints
#+begin_listing clojure
#+begin_src clojure
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
   joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
   particulars of the joints and the masses of each body part are
   determined in blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
#+end_src
#+end_listing

All of the code you have just seen amounts to only 130 lines, yet
because it builds on top of Blender and jMonkeyEngine3, those few
lines pack quite a punch!

The hand from figure \ref{blender-hand}, which was modeled after my
own right hand, can now be given joints and simulated as a
creature.

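For example (a hypothetical usage sketch; the model path is
illustrative), making the hand physical is a single call:

#+begin_src clojure
;; Load the hand model and endow it with a physical body.
;; body! returns no control functions; the path is hypothetical.
(def hand (load-blender-model "Models/test-creature/hand.blend"))
(body! hand)
#+end_src
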
#+caption: With the ability to create physical creatures from blender,
#+caption: =CORTEX= gets one step closer to a full creature simulation
#+caption: environment.
#+name: physical-hand
#+ATTR_LaTeX: :width 15cm
[[./images/physical-hand.png]]

** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

** Touch uses hundreds of hair-like elements

** Proprioception is the sense that makes everything ``real''

** Muscles are both effectors and sensors

** =CORTEX= brings complex creatures to life!

** =CORTEX= enables many possibilities for further research

* COMMENT Empathy in a simulated worm

Here I develop a computational model of empathy, using =CORTEX= as a
base. Empathy in this context is the ability to observe another
creature and infer what sorts of sensations that creature is
feeling. My empathy algorithm involves multiple phases. First is
free-play, where the creature moves around and gains sensory
experience. From this experience I construct a representation of the
creature's sensory state space, which I call \Phi-space. Using
\Phi-space, I construct an efficient function which takes the
limited data that comes from observing another creature and enriches
it into a full complement of imagined sensory data. I can then use
the imagined sensory data to recognize what the observed creature is
doing and feeling, using straightforward embodied action predicates.
This is all demonstrated using a simple worm-like creature, and by
recognizing worm-actions based on limited data.

#+caption: Here is the worm with which we will be working.
#+caption: It is composed of 5 segments. Each segment has a
#+caption: pair of extensor and flexor muscles. Each of the
#+caption: worm's four joints is a hinge joint which allows
#+caption: about 30 degrees of rotation to either side. Each segment
#+caption: of the worm is touch-capable and has a uniform
#+caption: distribution of touch sensors on each of its faces.
#+caption: Each joint has a proprioceptive sense to detect
#+caption: relative positions. The worm segments are all the
#+caption: same except for the first one, which has a much
#+caption: higher weight than the others to allow for easy
#+caption: manual motor control.
#+name: basic-worm-view
#+ATTR_LaTeX: :width 10cm
[[./images/basic-worm-view.png]]

#+caption: Program for reading a worm from a blender file and
#+caption: outfitting it with the senses of proprioception,
#+caption: touch, and the ability to move, as specified in the
#+caption: blender file.
#+name: get-worm
#+begin_listing clojure
#+begin_src clojure
(defn worm []
  (let [model (load-blender-model "Models/worm/worm.blend")]
    {:body (doto model (body!))
     :touch (touch! model)
     :proprioception (proprioception! model)
     :muscles (movement! model)}))
#+end_src
#+end_listing

** Embodiment factors action recognition into manageable parts

Using empathy, I divide the problem of action recognition into a
recognition process expressed in the language of a full complement
of senses, and an imaginative process that generates full sensory
data from partial sensory data. Splitting the action recognition
problem in this manner greatly reduces the total amount of work to
recognize actions: the imaginative process is mostly just matching
previous experience, and the recognition process gets to use all
the senses to directly describe any action.

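As a minimal sketch (with hypothetical names: =infer-experience=
stands in for the \Phi-space lookup developed later), the factoring
amounts to a simple composition of the two processes:

#+begin_src clojure
(defn empathic-recognition
  "Hypothetical composition of imagination and recognition: enrich
   partial observations into full imagined experience, then apply an
   ordinary embodied action predicate to the result."
  [observed-proprioception action-predicate]
  (action-predicate (infer-experience observed-proprioception)))
#+end_src
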
** Action recognition is easy with a full gamut of senses

Embodied representations using multiple senses such as touch,
proprioception, and muscle tension turn out to be exceedingly
efficient at describing body-centered actions. This is the ``right
language for the job''. For example, it takes only around 5 lines
of LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 10 lines to describe the seemingly
complicated action of wiggling.

The following action predicates each take a stream of sensory
experience, observe however much of it they desire, and decide
whether the worm is doing the action they describe. =curled?=
relies on proprioception, =resting?= relies on touch, =wiggling?=
relies on a Fourier analysis of muscle contraction, and
=grand-circle?= relies on touch and reuses =curled?= as a guard.

#+caption: Program for detecting whether the worm is curled. This is the
#+caption: simplest action predicate, because it only uses the last frame
#+caption: of sensory experience, and only uses proprioceptive data. Even
#+caption: this simple predicate, however, is automatically frame
#+caption: independent and ignores vermopomorphic differences such as
#+caption: worm textures and colors.
#+name: curled
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn curled?
  "Is the worm curled up?"
  [experiences]
  (every?
   (fn [[_ _ bend]]
     (> (Math/sin bend) 0.64))
   (:proprioception (peek experiences))))
#+end_src
#+end_listing

#+caption: Program for summarizing the touch information in a patch
#+caption: of skin.
#+name: touch-summary
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn contact
  "Determine how much contact a particular worm segment has with
   other objects. Returns a value between 0 and 1, where 1 is full
   contact and 0 is no contact."
  [touch-region [coords contact :as touch]]
  (-> (zipmap coords contact)
      (select-keys touch-region)
      (vals)
      (#(map first %))
      (average)
      (* 10)
      (- 1)
      (Math/abs)))
#+end_src
#+end_listing

#+caption: Program for detecting whether the worm is at rest. This program
#+caption: uses a summary of the tactile information from the underbelly
#+caption: of the worm, and is only true if every segment is touching the
#+caption: floor. Note that this function contains no references to
#+caption: proprioception at all.
#+name: resting
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def worm-segment-bottom (rect-region [8 15] [14 22]))

(defn resting?
  "Is the worm resting on the ground?"
  [experiences]
  (every?
   (fn [touch-data]
     (< 0.9 (contact worm-segment-bottom touch-data)))
   (:touch (peek experiences))))
#+end_src
#+end_listing

rlm@449
|
990 #+caption: Program for detecting whether the worm is curled up into a
|
rlm@449
|
991 #+caption: full circle. Here the embodied approach begins to shine, as
|
rlm@449
|
992 #+caption: I am able to both use a previous action predicate (=curled?=)
|
rlm@449
|
993 #+caption: as well as the direct tactile experience of the head and tail.
|
rlm@449
|
994 #+name: grand-circle
|
rlm@452
|
995 #+attr_latex: [htpb]
|
rlm@452
|
996 #+begin_listing clojure
|
rlm@449
|
997 #+begin_src clojure
|
rlm@449
|
998 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
|
rlm@449
|
999
|
rlm@449
|
1000 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
|
rlm@449
|
1001
|
rlm@449
|
1002 (defn grand-circle?
|
rlm@449
|
1003 "Does the worm form a majestic circle (one end touching the other)?"
|
rlm@449
|
1004 [experiences]
|
rlm@449
|
1005 (and (curled? experiences)
|
rlm@449
|
1006 (let [worm-touch (:touch (peek experiences))
|
rlm@449
|
1007 tail-touch (worm-touch 0)
|
rlm@449
|
1008 head-touch (worm-touch 4)]
|
rlm@449
|
1009 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
|
rlm@449
|
1010 (< 0.55 (contact worm-segment-top-tip head-touch))))))
|
rlm@449
|
1011 #+end_src
|
rlm@449
|
1012 #+end_listing

#+caption: Program for detecting whether the worm has been wiggling for
#+caption: the last few frames. It uses a Fourier analysis of the muscle
#+caption: contractions of the worm's tail to determine wiggling. This is
#+caption: significant because there is no particular frame that clearly
#+caption: indicates that the worm is wiggling --- only when multiple frames
#+caption: are analyzed together is the wiggling revealed. Defining
#+caption: wiggling this way also gives the worm an opportunity to learn
#+caption: and recognize ``frustrated wiggling'', where the worm tries to
#+caption: wiggle but can't. Frustrated wiggling is very visually different
#+caption: from actual wiggling, but this definition gives it to us for free.
#+name: wiggling
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
;; the FFT classes come from Apache Commons Math
(import '[org.apache.commons.math3.transform
          FastFourierTransformer DftNormalization TransformType])

(defn fft [nums]
  (map
   #(.getReal %)
   (.transform
    (FastFourierTransformer. DftNormalization/STANDARD)
    (double-array nums) TransformType/FORWARD)))

(def indexed (partial map-indexed vector))

(defn max-indexed [s]
  (first (sort-by (comp - second) (indexed s))))

(defn wiggling?
  "Is the worm wiggling?"
  [experiences]
  (let [analysis-interval 0x40]
    (when (> (count experiences) analysis-interval)
      (let [a-flex 3
            a-ex 2
            muscle-activity
            (map :muscle (vector:last-n experiences analysis-interval))
            base-activity
            (map #(- (% a-flex) (% a-ex)) muscle-activity)]
        (= 2
           (first
            (max-indexed
             (map #(Math/abs %)
                  (take 20 (fft base-activity))))))))))
#+end_src
#+end_listing
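
As a sanity check on the Fourier criterion, one can fabricate an
experience vector whose flexor-minus-extensor signal completes
exactly two cycles over the 64 frame analysis window; =wiggling?=
should accept it. This sketch uses made-up muscle values and assumes
the =vector:last-n= helper from the worm utilities:

#+begin_src clojure
;; 65 fabricated frames; muscle 3 (the flexor) oscillates at two
;; cycles per 64 frames while muscle 2 (the extensor) stays at zero,
;; so frequency bin 2 dominates the spectrum.
(def fake-wiggle
  (mapv (fn [i]
          {:muscle [0 0 0 (* 30 (Math/cos (/ (* 2 Math/PI 2 i) 64)))]})
        (range 65)))

(wiggling? fake-wiggle)
;; => true
#+end_src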

With these action predicates, I can now recognize the actions of
the worm while it is moving under my control and I have access to
all the worm's senses.

#+caption: Use the action predicates defined earlier to report on
#+caption: what the worm is doing while in simulation.
#+name: report-worm-activity
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn debug-experience
  [experiences text]
  (cond
   (grand-circle? experiences) (.setText text "Grand Circle")
   (curled? experiences)       (.setText text "Curled")
   (wiggling? experiences)     (.setText text "Wiggling")
   (resting? experiences)      (.setText text "Resting")))
#+end_src
#+end_listing

#+caption: Using =debug-experience=, the body-centered predicates
#+caption: work together to classify the behaviour of the worm. The
#+caption: predicates are operating with access to the worm's full
#+caption: sensory data.
#+name: basic-worm-view
#+ATTR_LaTeX: :width 10cm
[[./images/worm-identify-init.png]]

These action predicates satisfy the recognition requirement of an
empathic recognition system. There is power in the simplicity of
the action predicates. They describe their actions without getting
confused by visual details of the worm. Each one is frame
independent, but more than that, they are each independent of
irrelevant visual details of the worm and the environment. They
will work regardless of whether the worm is a different color or
heavily textured, or if the environment has strange lighting.

The trick now is to make the action predicates work even when the
sensory data on which they depend is absent. If I can do that, then
I will have gained much.

** \Phi-space describes the worm's experiences

As a first step towards building empathy, I need to gather all of
the worm's experiences during free play. I use a simple vector to
store all the experiences.

Each element of the experience vector exists in the vast space of
all possible worm-experiences. Most of this vast space is actually
unreachable due to physical constraints of the worm's body. For
example, the worm's segments are connected by hinge joints that put
a practical limit on the worm's range of motions without limiting
its degrees of freedom. Some groupings of senses are impossible;
the worm cannot be bent into a circle with its ends touching
without also experiencing the sensation of touching itself.

As the worm moves around during free play and its experience vector
grows larger, the vector begins to define a subspace which is all
the sensations the worm can practically experience during normal
operation. I call this subspace \Phi-space, short for
physical-space. The experience vector defines a path through
\Phi-space. This path has interesting properties that all derive
from physical embodiment. The proprioceptive components are
completely smooth, because in order for the worm to move from one
position to another, it must pass through the intermediate
positions. The path invariably forms loops as actions are repeated.
Finally and most importantly, proprioception actually gives very
strong inference about the other senses. For example, when the worm
is flat, you can infer that it is touching the ground and that its
muscles are not active, because if the muscles were active, the
worm would be moving and would not be perfectly flat. In order to
stay flat, the worm has to be touching the ground, or it would
again be moving out of the flat position due to gravity. If the
worm is positioned in such a way that it interacts with itself,
then it is very likely to be feeling the same tactile feelings as
the last time it was in that position, because it has the same body
as then. If you observe multiple frames of proprioceptive data,
then you can become increasingly confident about the exact
activations of the worm's muscles, because it generally takes a
unique combination of muscle contractions to transform the worm's
body along a specific path through \Phi-space.

There is a simple way of taking \Phi-space and the total ordering
provided by an experience vector and reliably inferring the rest of
the senses.

** Empathy is the process of tracing through \Phi-space

Here is the core of a basic empathy algorithm, starting with an
experience vector:

First, group the experiences into tiered proprioceptive bins. I use
powers of 10 and three tiers of bins; the smallest bin has an
approximate size of 0.001 radians in all proprioceptive dimensions.

Then, given a sequence of proprioceptive input, generate a set of
matching experience records for each input, using the tiered
proprioceptive bins.

Finally, to infer sensory data, select the longest consecutive
chain of experiences. Consecutive means that the experiences appear
next to each other in the experience vector.

This algorithm has three advantages:

1. It's simple.

2. It's very fast -- retrieving possible interpretations takes
   constant time. Tracing through chains of interpretations takes
   time proportional to the average number of experiences in a
   proprioceptive bin. Redundant experiences in \Phi-space can be
   merged to save computation.

3. It protects from wrong interpretations of transient ambiguous
   proprioceptive data. For example, if the worm is flat for just
   an instant, this flatness will not be interpreted as implying
   that the worm has its muscles relaxed, since the flatness is
   part of a longer chain which includes a distinct pattern of
   muscle activation. Markov chains or other memoryless statistical
   models that operate on individual frames may very well make this
   mistake.

#+caption: Program to convert an experience vector into a
#+caption: proprioceptively binned lookup function.
#+name: bin
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn bin [digits]
  (fn [angles]
    (->> angles
         (flatten)
         (map (juxt #(Math/sin %) #(Math/cos %)))
         (flatten)
         (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))

(defn gen-phi-scan
  "Nearest-neighbors with binning. Only returns a result if
   the proprioceptive data is within 10% of a previously recorded
   result in all dimensions."
  [phi-space]
  (let [bin-keys (map bin [3 2 1])
        bin-maps
        (map (fn [bin-key]
               (group-by
                (comp bin-key :proprioception phi-space)
                (range (count phi-space)))) bin-keys)
        lookups (map (fn [bin-key bin-map]
                       (fn [proprio] (bin-map (bin-key proprio))))
                     bin-keys bin-maps)]
    (fn lookup [proprio-data]
      (set (some #(% proprio-data) lookups)))))
#+end_src
#+end_listing
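
To see what the binning keys look like, here is a hypothetical REPL
interaction with made-up joint angles. Each angle is expanded into
its sine and cosine and then rounded at a tier-dependent scale, so
coarser tiers collapse more angles into the same bin:

#+begin_src clojure
((bin 3) [0.5 1.0])  ;; => [48 88 84 54]  (sin/cos scaled by 100)
((bin 1) [0.5 1.0])  ;; => [0 1 1 1]      (sin/cos scaled by 1)
#+end_src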

#+caption: =longest-thread= finds the longest path of consecutive
#+caption: experiences to explain proprioceptive worm data.
#+name: phi-space-history-scan
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]

=longest-thread= infers sensory data by stitching together pieces
from previous experience. It prefers longer chains of previous
experience to shorter ones. For example, during training the worm
might rest on the ground for one second before it performs its
exercises. If during recognition the worm rests on the ground for
five seconds, =longest-thread= will accommodate this five second
rest period by looping the one second rest chain five times.

=longest-thread= takes time proportional to the average number of
entries in a proprioceptive bin, because for each element in the
starting bin it performs a series of set lookups in the preceding
bins. If the total history is limited, then this is only a constant
multiple times the number of entries in the starting bin. This
analysis also applies even if the action requires multiple longest
chains -- it's still the average number of entries in a
proprioceptive bin times the desired chain length. Because
=longest-thread= is so efficient and simple, I can interpret
worm actions in real time.

#+caption: Program to calculate empathy by tracing through \Phi-space
#+caption: and finding the longest (i.e. most coherent) interpretation
#+caption: of the data.
#+name: longest-thread
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn longest-thread
  "Find the longest thread from phi-index-sets. The index sets should
   be ordered from most recent to least recent."
  [phi-index-sets]
  (loop [result '()
         [thread-bases & remaining :as phi-index-sets] phi-index-sets]
    (if (empty? phi-index-sets)
      (vec result)
      (let [threads
            (for [thread-base thread-bases]
              (loop [thread (list thread-base)
                     remaining remaining]
                (let [next-index (dec (first thread))]
                  (cond (empty? remaining) thread
                        (contains? (first remaining) next-index)
                        (recur
                         (cons next-index thread) (rest remaining))
                        :else thread))))
            longest-thread
            (reduce (fn [thread-a thread-b]
                      (if (> (count thread-a) (count thread-b))
                        thread-a thread-b))
                    '(nil)
                    threads)]
        (recur (concat longest-thread result)
               (drop (count longest-thread) phi-index-sets))))))
#+end_src
#+end_listing
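
A small fabricated example makes the stitching behavior visible.
Suppose the three most recent proprioceptive readings each matched a
single \Phi-space index, while the oldest reading was ambiguous
between index 7, which extends the chain, and index 3, which does
not:

#+begin_src clojure
;; index sets are ordered most recent to least recent (made-up data)
(longest-thread [#{10} #{9} #{8} #{3 7}])
;; => [7 8 9 10]   one coherent four-frame chain, oldest frame first

(longest-thread [#{10} #{4} #{3}])
;; => [3 4 10]     a break in the chain starts a new thread
#+end_src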

There is one final piece, which is to replace missing sensory data
with a best-guess estimate. While I could fill in missing data by
using a gradient over the closest known sensory data points,
averages can be misleading. It is certainly possible to create an
impossible sensory state by averaging two possible sensory states.
Therefore, I simply replicate the most recent sensory experience to
fill in the gaps.

#+caption: Fill in blanks in sensory experience by replicating the most
#+caption: recent experience.
#+name: infer-nils
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn infer-nils
  "Replace nils with the next available non-nil element in the
   sequence, or barring that, 0."
  [s]
  (loop [i (dec (count s))
         v (transient s)]
    (if (zero? i) (persistent! v)
        (if-let [cur (v i)]
          (if (get v (dec i) 0)
            (recur (dec i) v)                       ; neighbor known
            (recur (dec i) (assoc! v (dec i) cur))) ; copy backwards
          ;; a nil at the scan position has no later non-nil; use 0
          (recur i (assoc! v i 0))))))
#+end_src
#+end_listing
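
A quick hypothetical check of =infer-nils= at the REPL. The input
must be a vector, since the function uses transients, and a trailing
=nil= has no later element to copy, so it becomes 0:

#+begin_src clojure
(infer-nils [nil 1 nil 2 nil])
;; => [1 1 2 2 0]
#+end_src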

** Efficient action recognition with =EMPATH=

To use =EMPATH= with the worm, I first need to gather a set of
experiences from the worm that includes the actions I want to
recognize. The =generate-phi-space= program (listing
\ref{generate-phi-space}) runs the worm through a series of
exercises and gathers those experiences into a vector. The
=do-all-the-things= program is a routine expressed in a simple
muscle contraction script language for automated worm control. It
causes the worm to rest, curl, and wiggle over about 700 frames
(approx. 11 seconds).

#+caption: Program to gather the worm's experiences into a vector for
#+caption: further processing. The =motor-control-program= line uses
#+caption: a motor control script that causes the worm to execute a series
#+caption: of ``exercises'' that include all the action predicates.
#+name: generate-phi-space
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def do-all-the-things
  (concat
   curl-script
   [[300 :d-ex 40]
    [320 :d-ex 0]]
   (shift-script 280 (take 16 wiggle-script))))

(defn generate-phi-space []
  (let [experiences (atom [])]
    (run-world
     (apply-map
      worm-world
      (merge
       (worm-world-defaults)
       {:end-frame 700
        :motor-control
        (motor-control-program worm-muscle-labels do-all-the-things)
        :experiences experiences})))
    @experiences))
#+end_src
#+end_listing

#+caption: Use longest thread and a phi-space generated from a short
#+caption: exercise routine to interpret actions during free play.
#+name: empathy-debug
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn init []
  (def phi-space (generate-phi-space))
  (def phi-scan (gen-phi-scan phi-space)))

(defn empathy-demonstration []
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))]
          (println-repl (vector:last-n exp-thread 22))
          (cond
           (grand-circle? empathy) (.setText text "Grand Circle")
           (curled? empathy)       (.setText text "Curled")
           (wiggling? empathy)     (.setText text "Wiggling")
           (resting? empathy)      (.setText text "Resting")
           :else (.setText text "Unknown")))))))

(defn empathy-experiment [record]
  (.start (worm-world :experience-watch (debug-experience-phi)
                      :record record :worm worm*)))
#+end_src
#+end_listing

The result of running =empathy-experiment= is that the system is
generally able to interpret worm actions using the action-predicates
on simulated sensory data just as well as with actual data. Figure
\ref{empathy-debug-image} was generated using =empathy-experiment=:

#+caption: From only proprioceptive data, =EMPATH= was able to infer
#+caption: the complete sensory experience and classify four poses.
#+caption: (The last panel shows a composite image of \emph{wiggling},
#+caption: a dynamic pose.)
#+name: empathy-debug-image
#+ATTR_LaTeX: :width 10cm :placement [H]
[[./images/empathy-1.png]]

One way to measure the performance of =EMPATH= is to compare the
suitability of the imagined sense experience to trigger the same
action predicates as the real sensory experience.

#+caption: Determine how closely empathy approximates actual
#+caption: sensory data.
#+name: test-empathy-accuracy
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def worm-action-label
  (juxt grand-circle? curled? wiggling?))

(defn compare-empathy-with-baseline [matches]
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))
              experience-matches-empathy
              (= (worm-action-label experiences)
                 (worm-action-label empathy))]
          (println-repl experience-matches-empathy)
          (swap! matches #(conj % experience-matches-empathy)))))))

(defn accuracy [v]
  (float (/ (count (filter true? v)) (count v))))

(defn test-empathy-accuracy []
  (let [res (atom [])]
    (run-world
     (worm-world :experience-watch
                 (compare-empathy-with-baseline res)
                 :worm worm*))
    (accuracy @res)))
#+end_src
#+end_listing
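
A typical session is sketched below; the exact accuracy varies from
run to run, but with the short exercise script it comes out near the
73% figure discussed next:

#+begin_src clojure
(init)                   ;; build phi-space and phi-scan
(test-empathy-accuracy)  ;; => approximately 0.73
#+end_src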

Running =test-empathy-accuracy= using the very short exercise
program defined in listing \ref{generate-phi-space}, and then doing
a similar pattern of activity manually yields an accuracy of around
73%. This is based on very limited worm experience. By training the
worm for longer, the accuracy dramatically improves.

#+caption: Program to generate \Phi-space using manual training.
#+name: manual-phi-space
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn init-interactive []
  (def phi-space
    (let [experiences (atom [])]
      (run-world
       (apply-map
        worm-world
        (merge
         (worm-world-defaults)
         {:experiences experiences})))
      @experiences))
  (def phi-scan (gen-phi-scan phi-space)))
#+end_src
#+end_listing

After about 1 minute of manual training, I was able to achieve 95%
accuracy on manual testing of the worm using =init-interactive= and
=test-empathy-accuracy=. The majority of errors are near the
boundaries of transitioning from one type of action to another.
During these transitions the exact label for the action is more open
to interpretation, and disagreement between empathy and experience
is more excusable.

** Digression: bootstrapping touch using free exploration

In the previous section I showed how to compute actions in terms of
body-centered predicates which relied on the average touch
activation of pre-defined regions of the worm's skin. What if,
instead of receiving touch pre-grouped into the six faces of each
worm segment, the true topology of the worm's skin was unknown?
This is more similar to how a nerve fiber bundle might be arranged.
While two fibers that are close in a nerve bundle /might/ correspond
to two touch sensors that are close together on the skin, the
process of taking a complicated surface and forcing it into
essentially a circle requires some cuts and rearrangements.

In this section I show how to automatically learn the skin-topology of
a worm segment by free exploration. As the worm rolls around on the
floor, large sections of its surface get activated. If the worm has
stopped moving, then whatever region of skin is touching the floor is
probably an important region, and should be recorded.

#+caption: Program to detect whether the worm is in a resting state
#+caption: with one face touching the floor.
#+name: pure-touch
#+begin_listing clojure
#+begin_src clojure
(def full-contact [(float 0.0) (float 0.1)])

(defn pure-touch?
  "This is worm specific code to determine if a large region of touch
   sensors is either all on or all off."
  [[coords touch :as touch-data]]
  (= (set (map first touch)) (set full-contact)))
#+end_src
#+end_listing
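
As a hypothetical illustration with fabricated feeler data, a
reading in which every feeler reports exactly 0.0 (touching) or
exactly 0.1 (feeling nothing) counts as pure. The =float= casts
matter, since Clojure does not treat the boxed =Float= 0.1 as equal
to the =Double= 0.1:

#+begin_src clojure
(pure-touch? [[[0 0] [1 0] [2 0]]  ;; feeler coordinates
              [[(float 0.0) 64] [(float 0.1) 64] [(float 0.0) 64]]])
;; => true
#+end_src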

After collecting these important regions, there will be many nearly
similar touch regions. While for some purposes the subtle
differences between these regions will be important, for my
purposes I collapse them into mostly non-overlapping sets using
=remove-similar= in listing \ref{remove-similar}.

#+caption: Program to take a list of sets of points and ``collapse them''
#+caption: so that the remaining sets in the list are significantly
#+caption: different from each other. Prefer smaller sets to larger ones.
#+name: remove-similar
#+begin_listing clojure
#+begin_src clojure
(defn remove-similar
  [coll]
  (loop [result () coll (sort-by (comp - count) coll)]
    (if (empty? coll) result
        (let [[x & xs] coll
              c (count x)]
          (if (some
               (fn [other-set]
                 (let [oc (count other-set)]
                   ;; union comes from clojure.set
                   (< (- (count (union other-set x)) c) (* oc 0.1))))
               xs)
            (recur result xs)
            (recur (cons x result) xs))))))
#+end_src
#+end_listing
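
A fabricated example of the collapsing behavior: a large set that
adds almost nothing beyond a smaller one is dropped, while unrelated
sets survive untouched (set printing order may vary):

#+begin_src clojure
;; #{1 2 3 4} is contained in #{1 2 3 4 5}, so their union grows
;; #{1 2 3 4 5} by zero points and the larger set is dropped.
(remove-similar [#{1 2 3 4 5} #{1 2 3 4} #{9 10}])
;; => (#{9 10} #{1 2 3 4})
#+end_src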

Actually running this simulation is easy given =CORTEX='s facilities.

#+caption: Collect experiences while the worm moves around. Filter the touch
#+caption: sensations by stable ones, collapse similar ones together,
#+caption: and report the regions learned.
#+name: learn-touch
#+begin_listing clojure
#+begin_src clojure
(defn learn-touch-regions []
  (let [experiences (atom [])
        world (apply-map
               worm-world
               (assoc (worm-segment-defaults)
                 :experiences experiences))]
    (run-world world)
    (->>
     @experiences
     (drop 175)
     ;; access the single segment's touch data
     (map (comp first :touch))
     ;; only deal with "pure" touch data to determine surfaces
     (filter pure-touch?)
     ;; associate coordinates with touch values
     (map (partial apply zipmap))
     ;; select those regions where contact is being made
     (map (partial group-by second))
     (map #(get % full-contact))
     (map (partial map first))
     ;; remove redundant/subset regions
     (map set)
     remove-similar)))

(defn learn-and-view-touch-regions []
  (map view-touch-region
       (learn-touch-regions)))
#+end_src
#+end_listing

The only thing remaining to define is the particular motion the worm
must take. I accomplish this with a simple motor control program.

#+caption: Motor control program for making the worm roll on the ground.
#+caption: This could also be replaced with random motion.
#+name: worm-roll
#+begin_listing clojure
#+begin_src clojure
(defn touch-kinesthetics []
  ;; each entry is a [frame muscle target-strength] triple in the
  ;; same script language as =do-all-the-things=
  [[170 :lift-1 40]
   [190 :lift-1 19]
   [206 :lift-1 0]

   [400 :lift-2 40]
   [410 :lift-2 0]

   [570 :lift-2 40]
   [590 :lift-2 21]
   [606 :lift-2 0]

   [800 :lift-1 30]
   [809 :lift-1 0]

   [900 :roll-2 40]
   [905 :roll-2 20]
   [910 :roll-2 0]

   [1000 :roll-2 40]
   [1005 :roll-2 20]
   [1010 :roll-2 0]

   [1100 :roll-2 40]
   [1105 :roll-2 20]
   [1110 :roll-2 0]])
#+end_src
#+end_listing

#+caption: The small worm rolls around on the floor, driven
#+caption: by the motor control program in listing \ref{worm-roll}.
#+name: worm-roll-image
#+ATTR_LaTeX: :width 12cm
[[./images/worm-roll.png]]

#+caption: After completing its adventures, the worm now knows
#+caption: how its touch sensors are arranged along its skin. These
#+caption: are the regions that were deemed important by
#+caption: =learn-touch-regions=. Note that the worm has discovered
#+caption: that it has six sides.
#+name: worm-touch-map
#+ATTR_LaTeX: :width 12cm
[[./images/touch-learn.png]]

While simple, =learn-touch-regions= exploits regularities in both
the worm's physiology and the worm's environment to correctly
deduce that the worm has six sides. Note that =learn-touch-regions=
would work just as well even if the worm's touch sense data were
completely scrambled. The cross shape is just for convenience. This
example justifies the use of pre-defined touch regions in =EMPATH=.

* COMMENT Contributions

In this thesis you have seen the =CORTEX= system, a complete
environment for creating simulated creatures. You have seen how to
implement five senses including touch, proprioception, hearing,
vision, and muscle tension. You have seen how to create new
creatures using Blender, a 3D modeling tool. I hope that =CORTEX=
will be useful in further research projects. To this end I have
included the full source to =CORTEX= along with a large suite of
tests and examples. I have also created a user guide for =CORTEX=
which is included in an appendix to this thesis.

You have also seen how I used =CORTEX= as a platform to attack the
/action recognition/ problem, which is the problem of recognizing
actions in video. You saw a simple system called =EMPATH= which
identifies actions by first describing actions in a body-centered,
rich sense language, then inferring a full range of sensory
experience from limited data using previous experience gained from
free play.

As a minor digression, you also saw how I used =CORTEX= to enable a
tiny worm to discover the topology of its skin simply by rolling on
the ground.

In conclusion, the main contributions of this thesis are:

- =CORTEX=, a system for creating simulated creatures with rich
  senses.
- =EMPATH=, a program for recognizing actions by imagining sensory
  experience.

# An anatomical joke:
# - Training
# - Skeletal imitation
# - Sensory fleshing-out
# - Classification