#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+LaTeX_CLASS_OPTIONS: [nofloat]

* COMMENT templates
#+caption:
#+caption:
#+caption:
#+caption:
#+name: name
#+begin_listing clojure
#+end_listing

#+caption:
#+caption:
#+caption:
#+name: name
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]

#+caption:
#+caption:
#+caption:
#+caption:
#+name: name
#+begin_listing clojure
#+end_listing

#+caption:
#+caption:
#+caption:
#+name: name
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]


* COMMENT Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will also have
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features can you describe in terms of low-level
functions of pixels that can even begin to capture at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in figure \ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chair shows us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard-coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with
     sensory data from prior experiences. For example, the
     alignment itself easily maps to proprioceptive data. Any
     sounds or obvious skin contact in the video can, to a lesser
     extent, trigger previous experience. Segments of previous
     experiences are stitched together to form a coherent and
     complete sensory portrait of the scene.
- Recognition :: With the scene described in terms of first-person
     sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.

For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and learning that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close-up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the
   mouth. The imagined sensory experience can come from a
   simulation of the event, but can also be pattern-matched from
   previous, similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation,
   is not critical for identifying the action from the simulation
   (see the sketch below).

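To make step 3 concrete, here is a minimal, hypothetical sketch of
such a taste-based check. It assumes an experience format like the
one used for the worm later in this thesis (a vector of per-frame
sensory maps), and it invents a =:taste= channel ordered from tongue
to throat; neither that channel nor the particular threshold is part
of =CORTEX= as implemented.

#+caption: A hypothetical, taste-based ``drinking'' predicate. The
#+caption: =:taste= sense channel and its layout are invented here
#+caption: purely for illustration.
#+name: drinking-sketch
#+begin_listing clojure
#+begin_src clojure
(defn drinking?
  "Hypothetical sketch: true if the recent experiences include the
   sustained feel of water at the back of the throat. Assumes each
   experience map has a :taste vector ordered from tongue to throat
   -- an invented sense channel, not one CORTEX provides."
  [experiences]
  (let [recent       (take-last 20 experiences)
        throat-water (map #(nth (:taste %) 2 0.0) recent)]
    ;; water must be felt at the back of the throat for most of the
    ;; recent window, not just in a single frame
    (< 15 (count (filter #(< 0.5 %) throat-water)))))
#+end_src
#+end_listing
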
For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate (sketched below), so you (and the entity in the
   image) must be sitting.

6. There must be a chair-like object since you are sitting.

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

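The =sitting?= predicate from step 5 is only named, not defined, in
this chapter. Here is a minimal, hypothetical sketch of what it might
look like for a humanoid model. The =:proprioception= and =:touch=
map layouts, the joint and region names, and the thresholds are all
invented for illustration; they are not the representation =CORTEX=
actually uses.

#+caption: A hypothetical =sitting?= predicate for a humanoid model.
#+caption: Joint names, touch regions, and thresholds are invented
#+caption: here for illustration only.
#+name: sitting-sketch
#+begin_listing clojure
#+begin_src clojure
(defn sitting?
  "Hypothetical sketch: true if the most recent experience looks like
   sitting. Assumes :proprioception maps joint names to bend angles
   (radians) and :touch maps body regions to contact strength; these
   names are invented for this example."
  [experiences]
  (let [{:keys [proprioception touch]} (peek experiences)
        knee (get proprioception :knee 0.0)
        hip  (get proprioception :hip 0.0)
        seat (get touch :posterior 0.0)]
    (and (< 1.0 knee)    ; knees bent
         (< 1.0 hip)     ; hips flexed
         (< 0.5 seat)))) ; pressure on the seat region
#+end_src
#+end_listing
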
My program, =EMPATH=, uses this empathic problem-solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these
#+caption: poses by inferring the complete sensory experience
#+caption: from proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions. If you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach.
If these systems learn about running as viewed from the side, they
will not automatically be able to recognize running from any other
viewpoint.

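This factoring can be stated directly as code. The sketch below is
not taken from =EMPATH= itself; it only shows the shape of the
decomposition, with =align= and =interpret= standing in for whatever
concrete aligner and recognizer a system provides.

#+caption: A sketch of the aligner/recognizer factoring. The =align=
#+caption: and =interpret= arguments are placeholders for concrete
#+caption: implementations.
#+name: factoring-sketch
#+begin_listing clojure
#+begin_src clojure
(defn empathic-recognizer
  "Compose an aligner and a recognizer, as described above. 'align
   maps a video and a body model to a sequence of imagined sensory
   experiences; 'interpret maps those experiences to an action label."
  [align interpret]
  (fn [video body-model]
    (interpret (align video body-model))))
#+end_src
#+end_listing

Predicates such as =grand-circle?= below are exactly the kind of
function that fills the =interpret= slot.
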
Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG (histogram of oriented gradients)
filters to describe the action of a simple worm-creature ``curling''
so that its head touches its tail, and then behold the simplicity of
describing this action in a language designed for the task (listing
\ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing


** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this thesis. =CORTEX= was necessary to meet
a need among AI researchers at CSAIL and beyond, which is that
people often will invent neat ideas that are best expressed in the
language of creatures and senses, but in order to explore those
ideas they must first build a platform in which they can create
simulated creatures with rich senses! There are many ideas that
would be simple to execute (such as =EMPATH=), but attached to them
is the multi-month effort to make a good creature simulator. Often,
that initial investment of time proves to be too much, and the
project must make do with a lesser environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:

- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load.

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in clojure, a dialect of =LISP= that runs on the
java virtual machine (JVM). The API for creating and simulating
creatures and senses is entirely expressed in clojure, though many
senses are implemented at the layer of jMonkeyEngine or below. For
example, for the sense of hearing I use a layer of clojure code on
top of a layer of java JNI bindings that drive a layer of =C++=
code which implements a modified version of =OpenAL= to support
multiple listeners. =CORTEX= is the only simulation environment
that I know of that can support multiple entities that can each
hear the world from their own perspective. Other senses also
require a small layer of Java code. =CORTEX= also uses =bullet=, a
physics simulator written in =C++=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

Here are some things I anticipate that =CORTEX= might be used for:

- exploring new ideas about sensory integration
- distributed communication among swarm creatures
- self-learning using free exploration
- evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
- imagination using subworlds

During one test with =CORTEX=, I created 3,000 creatures, each with
its own independent senses, and ran them all at only 1/80 real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch (more sensitive at
the fingertips), as well as eyes and ears, and it ran at around 1/4
real time.

#+BEGIN_LaTeX
\begin{sidewaysfigure}
\includegraphics[width=9.5in]{images/full-hand.png}
\caption{
I modeled my own right hand in Blender and rigged it with all the
senses that {\tt CORTEX} supports. My simulated hand has a
biologically inspired distribution of touch sensors. The senses are
displayed on the right, and the simulation is displayed on the
left. Notice that my hand is curling its fingers, that it can see
its own finger from the eye in its palm, and that it can feel its
own thumb touching its palm.}
\end{sidewaysfigure}
#+END_LaTeX

** Contributions

- I built =CORTEX=, a comprehensive platform for embodied AI
  experiments. =CORTEX= supports many features lacking in other
  systems, such as proper simulation of hearing. It is easy to create
  new =CORTEX= creatures using Blender, a free 3D modeling program.

- I built =EMPATH=, which uses =CORTEX= to identify the actions of
  a worm-like creature using a computational model of empathy.

* Building =CORTEX=

I intend for =CORTEX= to be used as a general-purpose library for
building creatures and outfitting them with senses, so that it will
be useful for other researchers who want to test out ideas of their
own. To this end, wherever I have had to make architectural choices
about =CORTEX=, I have chosen to give as much freedom to the user as
possible, so that =CORTEX= may be used for things I have not
foreseen.

** COMMENT Simulation or Reality?

The most important architectural decision of all is the choice to
use a computer-simulated environment in the first place! The world
is a vast and rich place, and for now simulations are a very poor
reflection of its complexity. It may be that there is a significant
qualitative difference between dealing with senses in the real
world and dealing with pale facsimiles of them in a simulation.
What are the advantages and disadvantages of a simulation vs.
reality?

*** Simulation

The advantages of virtual reality are that when everything is a
simulation, experiments in that simulation are absolutely
reproducible. It's also easier to change the character and world
to explore new situations and different sensory combinations.

If the world is to be simulated on a computer, then not only do
you have to worry about whether the character's senses are rich
enough to learn from the world, but whether the world itself is
rendered with enough detail and realism to give enough working
material to the character's senses. To name just a few
difficulties facing modern physics simulators: destructibility of
the environment, simulation of water/other fluids, large areas,
nonrigid bodies, lots of objects, smoke. I don't know of any
computer simulation that would allow a character to take a rock
and grind it into fine dust, then use that dust to make a clay
sculpture, at least not without spending years calculating the
interactions of every single small grain of dust. Maybe a
simulated world with today's limitations doesn't provide enough
richness for real intelligence to evolve.

*** Reality

The other approach for playing with senses is to hook your
software up to real cameras, microphones, robots, etc., and let it
loose in the real world. This has the advantage of eliminating
concerns about simulating the world at the expense of increasing
the complexity of implementing the senses. Instead of just
grabbing the current rendered frame for processing, you have to
use an actual camera with real lenses and interact with photons to
get an image. It is much harder to change the character, which is
now partly a physical robot of some sort, since doing so involves
changing things around in the real world instead of modifying
lines of code. While the real world is very rich and definitely
provides enough stimulation for intelligence to develop, as
evidenced by our own existence, it is also uncontrollable in the
sense that a particular situation cannot be recreated perfectly or
saved for later use. It is harder to conduct science because it is
harder to repeat an experiment. The worst thing about using the
real world instead of a simulation is the matter of time. Instead
of simulated time you get the constant and unstoppable flow of
real time. This severely limits the sorts of software you can use
to program the AI, because all sense inputs must be handled in real
time. Complicated ideas may have to be implemented in hardware or
may simply be impossible given the current speed of our
processors. Contrast this with a simulation, in which the flow of
time in the simulated world can be slowed down to accommodate the
limitations of the character's programming. In terms of cost,
doing everything in software is far cheaper than building custom
real-time hardware. All you need is a laptop and some patience.

** COMMENT Because of Time, simulation is preferable to reality

I envision =CORTEX= being used to support rapid prototyping and
iteration of ideas. Even if I could put together a well-constructed
kit for creating robots, it would still not be enough because of
the scourge of real-time processing. Anyone who wants to test their
ideas in the real world must always worry about getting their
algorithms to run fast enough to process information in real time.
The need for real-time processing only increases if multiple senses
are involved. In the extreme case, even simple algorithms will have
to be accelerated by ASIC chips or FPGAs, turning what would
otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the simulation
in proportion to the amount of processing required for each frame.
From the perspective of the creatures inside the simulation, time
always appears to flow at a constant rate, regardless of how
complicated the environment becomes or how many creatures are in
the simulation. The cost is that =CORTEX= can sometimes run slower
than real time. This can also be an advantage, however ---
simulations of very simple creatures in =CORTEX= generally run at
40x real time on my machine!

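The idea behind time dilation can be stated in a few lines. The
sketch below is not =CORTEX='s actual implementation or API; it only
illustrates the decoupling: the physics always advances by a fixed
simulated timestep per frame, no matter how long sense processing
takes in wall-clock time. =step-world= and =process-senses= are
stand-ins for the engine update and the creature's sensory/AI code.

#+caption: A minimal sketch of time dilation. The functions named
#+caption: here are placeholders, not part of the =CORTEX= API.
#+name: time-dilation-sketch
#+begin_listing clojure
#+begin_src clojure
(defn run-dilated
  "Advance 'world by a fixed simulated timestep each frame, however
   long sense processing takes in real time. 'step-world and
   'process-senses are placeholders for the engine update and the
   creature's sensory/AI code."
  [step-world process-senses world n-frames]
  (let [dt (/ 1.0 60.0)]                   ; simulated seconds per frame
    (reduce (fn [w _]
              (let [w' (step-world w dt)]  ; physics sees exactly dt
                (process-senses w')        ; may take arbitrarily long
                w'))
            world
            (range n-frames))))
#+end_src
#+end_listing
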
** COMMENT What is a sense?

If =CORTEX= is to support a wide variety of senses, it would help
to have a better understanding of what a ``sense'' actually is!
While vision, touch, and hearing all seem like they are quite
different things, I was surprised to learn during the course of
this thesis that they (and all physical senses) can be expressed as
exactly the same mathematical object due to a dimensional argument!

Human beings are three-dimensional objects, and the nerves that
transmit data from our various sense organs to our brain are
essentially one-dimensional. This leaves up to two dimensions in
which our sensory information may flow. For example, imagine your
skin: it is a two-dimensional surface around a three-dimensional
object (your body). It has discrete touch sensors embedded at
various points, and the density of these sensors corresponds to the
sensitivity of that region of skin. Each touch sensor connects to a
nerve, all of which eventually are bundled together as they travel
up the spinal cord to the brain. Intersect the spinal nerves with a
guillotining plane and you will see all of the sensory data of the
skin revealed in a roughly circular two-dimensional image which is
the cross section of the spinal cord. Points on this image that are
close together in this circle represent touch sensors that are
/probably/ close together on the skin, although there is of course
some cutting and rearrangement that has to be done to transfer the
complicated surface of the skin onto a two-dimensional image.

Most human senses consist of many discrete sensors of various
properties distributed along a surface at various densities. For
skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
disks, and Ruffini's endings, which detect pressure and vibration
of various intensities. For ears, it is the stereocilia distributed
along the basilar membrane inside the cochlea; each one is
sensitive to a slightly different frequency of sound. For eyes, it
is rods and cones distributed along the surface of the retina. In
each case, we can describe the sense with a surface and a
distribution of sensors along that surface.

The neat idea is that every human sense can be effectively
described in terms of a surface containing embedded sensors. If the
sense had any more dimensions, then there wouldn't be enough room
in the spinal cord to transmit the information!

Therefore, =CORTEX= must support the ability to create objects and
then be able to ``paint'' points along their surfaces to describe
each sense.

Fortunately, this idea is already a well-known computer graphics
technique called /UV-mapping/. The three-dimensional surface of a
model is cut and smooshed until it fits on a two-dimensional
image. You paint whatever you want on that image, and when the
three-dimensional shape is rendered in a game the smooshing and
cutting is reversed and the image appears on the three-dimensional
object.

To make a sense, interpret the UV-image as describing the
distribution of that sense's sensors. To get different types of
sensors, you can either use a different color for each type of
sensor, or use multiple UV-maps, each labeled with that sensor
type. I generally use a white pixel to mean the presence of a
sensor and a black pixel to mean the absence of a sensor, and use
one UV-map for each sensor-type within a given sense.

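Reading sensor locations out of such a UV-map is straightforward.
The sketch below is not the code =CORTEX= actually uses for touch;
it only illustrates the convention just described, using standard
Java image I/O from clojure. The file path is hypothetical.

#+caption: Illustrative sketch of extracting sensor coordinates from
#+caption: a black-and-white UV-map. Not the actual =CORTEX= touch
#+caption: implementation.
#+name: uv-sensor-sketch
#+begin_listing clojure
#+begin_src clojure
(import '(javax.imageio ImageIO)
        '(java.io File))

(defn white-pixel-coordinates
  "Return the [x y] coordinates of every nearly-white pixel in the
   UV-map image at 'path -- i.e. every point where a sensor should be
   placed, under the white-pixel convention described above."
  [path]
  (let [image (ImageIO/read (File. path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          :let [rgb (.getRGB image x y)
                r (bit-and (bit-shift-right rgb 16) 0xFF)
                g (bit-and (bit-shift-right rgb 8) 0xFF)
                b (bit-and rgb 0xFF)]
          :when (< 200 (min r g b))]     ; nearly white => sensor here
      [x y])))

;; e.g. (white-pixel-coordinates "images/finger-UV.png")
;; => ([12 40] [13 40] ...)
#+end_src
#+end_listing
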
#+CAPTION: The UV-map for an elongated icosphere. The white
#+caption: dots each represent a touch sensor. They are dense
#+caption: in the regions that describe the tip of the finger,
#+caption: and less dense along the dorsal side of the finger
#+caption: opposite the tip.
#+name: finger-UV
#+ATTR_latex: :width 10cm
[[./images/finger-UV.png]]

#+caption: Ventral side of the UV-mapped finger. Notice the
#+caption: density of touch sensors at the tip.
#+name: finger-side-view
#+ATTR_LaTeX: :width 10cm
[[./images/finger-1.png]]

** COMMENT Video game engines are a great starting point

I did not need to write my own physics simulation code or shader to
build =CORTEX=. Doing so would lead to a system that is impossible
for anyone but myself to use anyway. Instead, I use a video game
engine as a base and modify it to accommodate the additional needs
of =CORTEX=. Video game engines are an ideal starting point to
build =CORTEX=, because they are not far from being creature-building
systems themselves.

First off, general-purpose video game engines come with a physics
engine and lighting/sound system. The physics system provides
tools that can be co-opted to serve as touch, proprioception, and
muscles. Since some games support split-screen views, a good video
game engine will allow you to efficiently create multiple cameras
in the simulated world that can be used as eyes. Video game systems
offer integrated asset management for things like textures and
creature models, providing an avenue for defining creatures. They
also understand UV-mapping, since this technique is used to apply a
texture to a model. Finally, because video game engines support a
large number of users, as long as =CORTEX= doesn't stray too far
from the base system, other researchers can turn to this community
for help when doing their research.

** COMMENT =CORTEX= is based on jMonkeyEngine3

While preparing to build =CORTEX=, I studied several video game
engines to see which would best serve as a base. The top contenders
were:

- [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by id
     Software in 1997. All the source code was released by id
     Software into the public domain several years ago, and as a
     result it has been ported to many different languages. This
     engine was famous for its advanced use of realistic shading
     and had decent and fast physics simulation. The main advantage
     of the Quake II engine is its simplicity, but I ultimately
     rejected it because the engine is too tied to the concept of a
     first-person shooter game. One of the problems I had was that
     there does not seem to be any easy way to attach multiple
     cameras to a single character. There are also several physics
     clipping issues that are corrected in a way that only applies
     to the main character and do not apply to arbitrary objects.

- [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
     and Quake I engines and is used by Valve in the Half-Life
     series of games. The physics simulation in the Source Engine
     is quite accurate and probably the best out of all the engines
     I investigated. There is also an extensive community actively
     working with the engine. However, applications that use the
     Source Engine must be written in C++, the code is not open, it
     only runs on Windows, and the tools that come with the SDK to
     handle models and textures are complicated and awkward to use.

- [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
     games in Java. It uses OpenGL to render to the screen and uses
     scene graphs to avoid drawing things that do not appear on the
     screen. It has an active community and several games in the
     pipeline. The engine was not built to serve any particular
     game but is instead meant to be used for any 3D game.

I chose jMonkeyEngine3 because it had the most features of all
the free projects I looked at, and because I could then write my
code in clojure, an implementation of =LISP= that runs on the JVM.

** COMMENT =CORTEX= uses Blender to create creature models

For the simple worm-like creatures I will use later on in this
thesis, I could define a simple API in =CORTEX= that would allow
one to create boxes, spheres, etc., and leave that API as the sole
way to create creatures. However, for =CORTEX= to truly be useful
for other projects, it needs a way to construct complicated
creatures. If possible, it would be nice to leverage work that has
already been done by the community of 3D modelers, or at least
enable people who are talented at modeling but not programming to
design =CORTEX= creatures.

Therefore, I use Blender, a free 3D modeling program, as the main
way to create creatures in =CORTEX=. However, the creatures modeled
in Blender must also be simple to simulate in jMonkeyEngine3's game
engine, and must also be easy to rig with =CORTEX='s senses. I
accomplish this with extensive use of Blender's ``empty nodes.''

Empty nodes have no mass, physical presence, or appearance, but
they can hold metadata and have names. I use a tree structure of
empty nodes to specify senses in the following manner:

- Create a single top-level empty node whose name is the name of
  the sense.
- Add empty nodes which each contain meta-data relevant to the
  sense, including a UV-map describing the number/distribution of
  sensors if applicable.
- Make each empty-node the child of the top-level node.

#+caption: An example of annotating a creature model with empty
#+caption: nodes to describe the layout of senses. There are
#+caption: multiple empty nodes which each describe the position
#+caption: of muscles, ears, eyes, or joints.
#+name: sense-nodes
#+ATTR_LaTeX: :width 10cm
[[./images/empty-sense-nodes.png]]

** COMMENT Bodies are composed of segments connected by joints

Blender is a general-purpose animation tool, which has been used in
the past to create high-quality movies such as Sintel
\cite{sintel}. Though Blender can model and render even complicated
things like water, it is crucial to keep models that are meant to
be simulated as creatures simple. =Bullet=, which =CORTEX= uses
through jMonkeyEngine3, is a rigid-body physics system. This offers
a compromise between the expressiveness of a game level and the
speed at which it can be simulated, and it means that creatures
should be naturally expressed as rigid components held together by
joint constraints.

But humans are more like a squishy bag wrapped around some hard
bones, which define the overall shape. When we move, our skin
bends and stretches to accommodate the new positions of our bones.

One way to make bodies composed of rigid pieces connected by joints
/seem/ more human-like is to use an /armature/ (or /rigging/)
system, which defines an overall ``body mesh'' and defines how the
mesh deforms as a function of the position of each ``bone,'' which
is a standard rigid body. This technique is used extensively to
model humans and create realistic animations. It is not a good
technique for physical simulation, however, because it creates a lie
-- the skin is not a physical part of the simulation and does not
interact with any objects in the world or itself. Objects will pass
right through the skin until they come in contact with the
underlying bone, which is a physical object. Without simulating
the skin, the sense of touch has little meaning, and the creature's
own vision will lie to it about the true extent of its body.
Simulating the skin as a physical object requires some way to
continuously update the physical model of the skin along with the
movement of the bones, which is unacceptably slow compared to
rigid-body simulation.

Therefore, instead of using the human-like ``deformable bag of
bones'' approach, I decided to base my body plans on multiple solid
objects that are connected by joints, inspired by the robot =EVE=
from the movie WALL-E.

#+caption: =EVE= from the movie WALL-E. This body plan turns
#+caption: out to be much better suited to my purposes than a more
#+caption: human-like one.
#+ATTR_LaTeX: :width 10cm
[[./images/Eve.jpg]]

=EVE='s body is composed of several rigid components that are held
together by invisible joint constraints. This is what I mean by
``eve-like''. The main reason that I use eve-style bodies is for
efficiency, and so that there will be correspondence between the
AI's senses and the physical presence of its body. Each individual
section is simulated by a separate rigid body that corresponds
exactly with its visual representation and does not change.
Sections are connected by invisible joints that are well supported
in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
can efficiently simulate hundreds of rigid bodies connected by
joints. Just because sections are rigid does not mean they have to
stay as one piece forever; they can be dynamically replaced with
multiple sections to simulate splitting in two. This could be used
to simulate retractable claws or =EVE='s hands, which are able to
coalesce into one object in the movie.

*** Solidifying/Connecting a body

=CORTEX= creates a creature in two steps: first, it traverses the
nodes in the blender file and creates physical representations for
any of them that have mass defined in their blender meta-data.

#+caption: Program for iterating through the nodes in a blender file
#+caption: and generating physical jMonkeyEngine3 objects with mass
#+caption: and a matching physics shape.
#+name: physical
#+begin_listing clojure
#+begin_src clojure
(defn physical!
  "Iterate through the nodes in creature and make them real physical
   objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
#+end_src
#+end_listing

The next step to making a proper body is to connect those pieces
together with joints. jMonkeyEngine has a large array of joints
available via =bullet=, such as Point2Point, Cone, Hinge, and a
generic Six Degree of Freedom joint, with or without spring
restitution.

Joints are treated a lot like proper senses, in that there is a
top-level empty node named ``joints'' whose children each
represent a joint.

#+caption: View of the hand model in Blender showing the main ``joints''
#+caption: node (highlighted in yellow) and its children, which each
#+caption: represent a joint in the hand. Each joint node has metadata
#+caption: specifying what sort of joint it is.
#+name: blender-hand
#+ATTR_LaTeX: :width 10cm
[[./images/hand-screenshot1.png]]


=CORTEX='s procedure for binding the creature together with joints
is as follows:

- Find the children of the ``joints'' node.
- Determine the two spatials the joint is meant to connect.
- Create the joint based on the meta-data of the empty node.

The higher-order function =sense-nodes= from =cortex.sense=
simplifies finding the joints based on their parent ``joints''
node.

#+caption: Retrieving the children empty nodes from a single
#+caption: named empty node is a common pattern in =CORTEX=;
#+caption: further instances of this technique for the senses
#+caption: will be omitted.
#+name: get-empty-nodes
#+begin_listing clojure
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
   children are considered markers for an instance of that sense. This
   function generates functions to find those children, given the name
   of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
#+end_src
#+end_listing

To find a joint's targets, =CORTEX= creates a small cube, centered
around the empty-node, and grows the cube exponentially until it
intersects two physical objects. The objects are ordered according
to the joint's rotation, with the first one being the object that
has more negative coordinates in the joint's reference frame.
Since the objects must be physical, the empty-node itself escapes
detection. Because the objects must be physical, =joint-targets=
must be called /after/ =physical!= is called.

#+caption: Program to find the targets of a joint node by
#+caption: exponential growth of a search cube.
#+name: joint-targets
#+begin_listing clojure
#+begin_src clojure
(defn joint-targets
  "Return the two closest objects to the joint object, ordered
   from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
#+end_src
#+end_listing

Once =CORTEX= finds all joints and targets, it creates them using
a dispatch on the metadata of each joint node.

#+caption: Program to dispatch on blender metadata and create joints
#+caption: suitable for physical simulation.
#+name: joint-dispatch
#+begin_listing clojure
#+begin_src clojure
(defmulti joint-dispatch
  "Translate blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist    (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
#+end_src
#+end_listing

All that is left for joints is to combine the above pieces into
something that can operate on the collection of nodes that a
blender file represents.

#+caption: Program to completely create a joint given information
#+caption: from a blender file.
#+name: connect
#+begin_listing clojure
#+begin_src clojure
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
   'joint. The type of joint is determined by the metadata on 'joint.

   Here are some examples:
   {:type :point}
   {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
   (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

   {:type :cone :limit-xz 0
                :limit-xy 0
                :twist 0}    (use XZY rotation mode in blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects, which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
#+end_src
#+end_listing

In general, whenever =CORTEX= exposes a sense (or in this case
physicality), it provides a function of the type =sense!=, which
takes in a collection of nodes and augments it to support that
sense. The function returns any controls necessary to use that
sense. In this case, =body!= creates a physical body and returns no
control functions.

#+caption: Program to give joints to a creature.
#+name: joints-and-body
#+begin_listing clojure
#+begin_src clojure
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
   joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
   particulars of the joints and the masses of each body part are
   determined in blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
#+end_src
#+end_listing

All of the code you have just seen amounts to only 130 lines, yet
because it builds on top of Blender and jMonkeyEngine3, those few
lines pack quite a punch!

The hand from figure \ref{blender-hand}, which was modeled after
my own right hand, can now be given joints and simulated as a
creature.

#+caption: With the ability to create physical creatures from blender,
#+caption: =CORTEX= gets one step closer to becoming a full creature
#+caption: simulation environment.
#+name: physical-hand
#+ATTR_LaTeX: :width 15cm
[[./images/physical-hand.png]]

rlm@436
|
957 ** Eyes reuse standard video game components
|
rlm@436
|
958
|
rlm@470
|
959 Vision is one of the most important senses for humans, so I need to
|
rlm@470
|
960 build a simulated sense of vision for my AI. I will do this with
|
rlm@470
|
961 simulated eyes. Each eye can be independently moved and should see
|
rlm@470
|
962 its own version of the world depending on where it is.
|
rlm@470
|
963
|
rlm@470
|
964 Making these simulated eyes a reality is simple because
|
rlm@470
|
965 jMonkeyEngine already contains extensive support for multiple views
|
rlm@470
|
966 of the same 3D simulated world. jMonkeyEngine has this support
|
rlm@470
|
967 because it is necessary to create games with
|
rlm@470
|
968 split-screen views. Multiple views are also used to create
|
rlm@470
|
969 efficient pseudo-reflections by rendering the scene from a certain
|
rlm@470
|
970 perspective and then projecting it back onto a surface in the 3D
|
rlm@470
|
971 world.
|
rlm@470
|
972
|
rlm@470
|
973 #+caption: jMonkeyEngine supports multiple views to enable
|
rlm@470
|
974 #+caption: split-screen games, like GoldenEye, which was one of
|
rlm@470
|
975 #+caption: the first games to use split-screen views.
|
rlm@470
|
976 #+name: goldeneye
|
rlm@470
|
977 #+ATTR_LaTeX: :width 10cm
|
rlm@470
|
978 [[./images/goldeneye-4-player.png]]
|
rlm@470
|
979
|
rlm@470
|
980 *** A Brief Description of jMonkeyEngine's Rendering Pipeline
|
rlm@470
|
981
|
rlm@470
|
982 jMonkeyEngine allows you to create a =ViewPort=, which represents a
|
rlm@470
|
983 view of the simulated world. You can create as many of these as you
|
rlm@470
|
984 want. Every frame, the =RenderManager= iterates through each
|
rlm@470
|
985 =ViewPort=, rendering the scene on the GPU. For each =ViewPort= there
|
rlm@470
|
986 is a =FrameBuffer= which represents the rendered image in the GPU.
|
rlm@470
|
987
|
rlm@470
|
988 #+caption: =ViewPorts= are cameras in the world. During each frame,
|
rlm@470
|
989 #+caption: the =RenderManager= records a snapshot of what each view
|
rlm@470
|
990 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
|
rlm@470
|
991 #+name: rendermanager
|
rlm@470
|
992 #+ATTR_LaTeX: :width 10cm
|
rlm@470
|
993 [[./images/diagram_rendermanager2.png]]
|
rlm@470
|
994
|
rlm@470
|
995 Each =ViewPort= can have any number of attached =SceneProcessor=
|
rlm@470
|
996 objects, which are called every time a new frame is rendered. A
|
rlm@470
|
997 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
|
rlm@470
|
998 whatever it wants to the data. Often this consists of invoking GPU
|
rlm@470
|
999 specific operations on the rendered image. The =SceneProcessor= can
|
rlm@470
|
1000 also copy the GPU image data to RAM and process it with the CPU.
|
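To make this concrete, here is a minimal sketch of the raw
jMonkeyEngine wiring: create a camera, ask the =RenderManager= for a
new =ViewPort=, and attach a =SceneProcessor= to it. The =app= and
=processor= arguments are assumptions for the sketch (a running
=SimpleApplication= and any =SceneProcessor=, such as the
=vision-pipeline= defined below); =CORTEX= wraps this pattern in its
own helper functions.

#+begin_src clojure
(import '(com.jme3.renderer Camera))

;; Sketch only -- not the exact wiring CORTEX uses internally.
(defn attach-view-sketch [app processor]
  (let [cam  (Camera. 640 480)
        view (.createMainView (.getRenderManager app) "simulated-eye" cam)]
    (.setClearFlags view true true true)   ; clear color, depth, and stencil
    (.attachScene view (.getRootNode app)) ; this view renders the main scene graph
    (.addProcessor view processor)         ; the processor sees every rendered frame
    view))
#+end_src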
rlm@470
|
1001
|
rlm@470
|
1002 *** Appropriating Views for Vision
|
rlm@470
|
1003
|
rlm@470
|
1004 Each eye in the simulated creature needs its own =ViewPort= so
|
rlm@470
|
1005 that it can see the world from its own perspective. To this
|
rlm@470
|
1006 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
|
rlm@470
|
1007 an arbitrary continuation function for further processing. That
|
rlm@470
|
1008 continuation function may perform both CPU and GPU operations on
|
rlm@470
|
1009 the data. To make this easy for the continuation function, the
|
rlm@470
|
1010 =SceneProcessor= maintains appropriately sized buffers in RAM to
|
rlm@470
|
1011 hold the data. It does not do any copying from the GPU to the CPU
|
rlm@470
|
1012 itself because it is a slow operation.
|
rlm@470
|
1013
|
rlm@470
|
1014 #+caption: Function to make the rendered scene in jMonkeyEngine
|
rlm@470
|
1015 #+caption: available for further processing.
|
rlm@470
|
1016 #+name: pipeline-1
|
rlm@470
|
1017 #+begin_listing clojure
|
rlm@470
|
1018 #+begin_src clojure
|
rlm@470
|
1019 (defn vision-pipeline
|
rlm@470
|
1020 "Create a SceneProcessor object which wraps a vision processing
|
rlm@470
|
1021 continuation function. The continuation is a function that takes
|
rlm@470
|
1022 [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
|
rlm@470
|
1023 each of which has already been appropriately sized."
|
rlm@470
|
1024 [continuation]
|
rlm@470
|
1025 (let [byte-buffer (atom nil)
|
rlm@470
|
1026 renderer (atom nil)
|
rlm@470
|
1027 image (atom nil)]
|
rlm@470
|
1028 (proxy [SceneProcessor] []
|
rlm@470
|
1029 (initialize
|
rlm@470
|
1030 [renderManager viewPort]
|
rlm@470
|
1031 (let [cam (.getCamera viewPort)
|
rlm@470
|
1032 width (.getWidth cam)
|
rlm@470
|
1033 height (.getHeight cam)]
|
rlm@470
|
1034 (reset! renderer (.getRenderer renderManager))
|
rlm@470
|
1035 (reset! byte-buffer
|
rlm@470
|
1036 (BufferUtils/createByteBuffer
|
rlm@470
|
1037 (* width height 4)))
|
rlm@470
|
1038 (reset! image (BufferedImage.
|
rlm@470
|
1039 width height
|
rlm@470
|
1040 BufferedImage/TYPE_4BYTE_ABGR))))
|
rlm@470
|
1041 (isInitialized [] (not (nil? @byte-buffer)))
|
rlm@470
|
1042 (reshape [_ _ _])
|
rlm@470
|
1043 (preFrame [_])
|
rlm@470
|
1044 (postQueue [_])
|
rlm@470
|
1045 (postFrame
|
rlm@470
|
1046 [#^FrameBuffer fb]
|
rlm@470
|
1047 (.clear @byte-buffer)
|
rlm@470
|
1048 (continuation @renderer fb @byte-buffer @image))
|
rlm@470
|
1049 (cleanup []))))
|
rlm@470
|
1050 #+end_src
|
rlm@470
|
1051 #+end_listing
|
rlm@470
|
1052
|
rlm@470
|
1053 The continuation function given to =vision-pipeline= above will be
|
rlm@470
|
1054 given a =Renderer= and three containers for image data. The
|
rlm@470
|
1055 =FrameBuffer= references the GPU image data, but the pixel data
|
rlm@470
|
1056 cannot be used directly on the CPU. The =ByteBuffer= and
|
rlm@470
|
1057 =BufferedImage= are initially "empty" but are sized to hold the
|
rlm@470
|
1058 data in the =FrameBuffer=. I call transferring the GPU image data
|
rlm@470
|
1059 to the CPU structures "mixing" the image data.
|
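As a sketch of how a continuation might use these arguments, the
example below "mixes" the GPU data and reports the frame size.
=BufferedImage!= is the =CORTEX= helper used later in =vision-kernel=
to perform the actual transfer; the printing is purely illustrative.

#+begin_src clojure
;; Illustrative continuation for vision-pipeline (a sketch, not CORTEX code).
(def example-processor
  (vision-pipeline
   (fn [renderer fb byte-buffer image]
     (let [frame (BufferedImage! renderer fb byte-buffer image)]
       (println "rendered frame:" (.getWidth frame) "x" (.getHeight frame))))))
#+end_src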
rlm@470
|
1060
|
rlm@470
|
1061 *** Optical sensor arrays are described with images and referenced with metadata
|
rlm@470
|
1062
|
rlm@470
|
1063 The vision pipeline described above handles the flow of rendered
|
rlm@470
|
1064 images. Now, =CORTEX= needs simulated eyes to serve as the source
|
rlm@470
|
1065 of these images.
|
rlm@470
|
1066
|
rlm@470
|
1067 An eye is described in blender in the same way as a joint: it is a
|
rlm@470
|
1068 zero-dimensional empty object with no geometry whose local
|
rlm@470
|
1069 coordinate system determines the orientation of the resulting eye.
|
rlm@470
|
1070 All eyes are children of a parent node named "eyes" just as all
|
rlm@470
|
1071 joints have a parent named "joints". An eye binds to the nearest
|
rlm@470
|
1072 physical object with =bind-sense=.
|
rlm@470
|
1073
|
rlm@470
|
1074 #+caption: Here, the camera is created based on metadata on the
|
rlm@470
|
1075 #+caption: eye-node and attached to the nearest physical object
|
rlm@470
|
1076 #+caption: with =bind-sense=
|
rlm@470
|
1077 #+name: add-eye
|
rlm@470
|
1078 #+begin_listing clojure
#+begin_src clojure
|
rlm@470
|
1079 (defn add-eye!
|
rlm@470
|
1080 "Create a Camera centered on the current position of 'eye which
|
rlm@470
|
1081 follows the closest physical node in 'creature. The camera will
|
rlm@470
|
1082 point in the X direction and use the Z vector as up as determined
|
rlm@470
|
1083 by the rotation of these vectors in blender coordinate space. Use
|
rlm@470
|
1084 XZY rotation for the node in blender."
|
rlm@470
|
1085 [#^Node creature #^Spatial eye]
|
rlm@470
|
1086 (let [target (closest-node creature eye)
|
rlm@470
|
1087 [cam-width cam-height]
|
rlm@470
|
1088 ;;[640 480] ;; graphics card on laptop doesn't support
|
rlm@470
|
1089 ;; arbitrary dimensions.
|
rlm@470
|
1090 (eye-dimensions eye)
|
rlm@470
|
1091 cam (Camera. cam-width cam-height)
|
rlm@470
|
1092 rot (.getWorldRotation eye)]
|
rlm@470
|
1093 (.setLocation cam (.getWorldTranslation eye))
|
rlm@470
|
1094 (.lookAtDirection
|
rlm@470
|
1095 cam ; this part is not a mistake and
|
rlm@470
|
1096 (.mult rot Vector3f/UNIT_X) ; is consistent with using Z in
|
rlm@470
|
1097 (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
|
rlm@470
|
1098 (.setFrustumPerspective
|
rlm@470
|
1099 cam (float 45)
|
rlm@470
|
1100 (float (/ (.getWidth cam) (.getHeight cam)))
|
rlm@470
|
1101 (float 1)
|
rlm@470
|
1102 (float 1000))
|
rlm@470
|
1103 (bind-sense target cam) cam))
|
rlm@470
|
#+end_src
1104 #+end_listing
|
rlm@470
|
1105
|
rlm@470
|
1106 *** Simulated Retina
|
rlm@470
|
1107
|
rlm@470
|
1108 An eye is a surface (the retina) which contains many discrete
|
rlm@470
|
1109 sensors to detect light. These sensors can have different
|
rlm@470
|
1110 light-sensing properties. In humans, each discrete sensor is
|
rlm@470
|
1111 sensitive to red, blue, green, or gray. These different types of
|
rlm@470
|
1112 sensors can have different spatial distributions along the retina.
|
rlm@470
|
1113 In humans, there is a fovea in the center of the retina which has
|
rlm@470
|
1114 a very high density of color sensors, and a blind spot which has
|
rlm@470
|
1115 no sensors at all. Sensor density decreases in proportion to
|
rlm@470
|
1116 distance from the fovea.
|
rlm@470
|
1117
|
rlm@470
|
1118 I want to be able to model any retinal configuration, so my
|
rlm@470
|
1119 eye-nodes in blender contain metadata pointing to images that
|
rlm@470
|
1120 describe the precise position of the individual sensors using
|
rlm@470
|
1121 white pixels. The metadata also describes the precise sensitivity
|
rlm@470
|
1122 to light of the sensors described in the image. An eye can
|
rlm@470
|
1123 contain any number of these images. For example, the metadata for
|
rlm@470
|
1124 an eye might look like this:
|
rlm@470
|
1125
|
rlm@470
|
1126 #+begin_src clojure
|
rlm@470
|
1127 {0xFF0000 "Models/test-creature/retina-small.png"}
|
rlm@470
|
1128 #+end_src
|
rlm@470
|
1129
|
rlm@470
|
1130 #+caption: An example retinal profile image. White pixels are
|
rlm@470
|
1131 #+caption: photo-sensitive elements. The distribution of white
|
rlm@470
|
1132 #+caption: pixels is denser in the middle and falls off at the
|
rlm@470
|
1133 #+caption: edges and is inspired by the human retina.
|
rlm@470
|
1134 #+name: retina
|
rlm@470
|
1135 #+ATTR_LaTeX: :width 10cm
|
rlm@470
|
1136 [[./images/retina-small.png]]
|
rlm@470
|
1137
|
rlm@470
|
1138 Together, the number 0xFF0000 and the image above describe
|
rlm@470
|
1139 the placement of red-sensitive sensory elements.
|
rlm@470
|
1140
|
rlm@470
|
1141 Metadata to very crudely approximate a human eye might be
|
rlm@470
|
1142 something like this:
|
rlm@470
|
1143
|
rlm@470
|
1144 #+begin_src clojure
|
rlm@470
|
1145 (let [retinal-profile "Models/test-creature/retina-small.png"]
|
rlm@470
|
1146 {0xFF0000 retinal-profile
|
rlm@470
|
1147 0x00FF00 retinal-profile
|
rlm@470
|
1148 0x0000FF retinal-profile
|
rlm@470
|
1149 0xFFFFFF retinal-profile})
|
rlm@470
|
1150 #+end_src
|
rlm@470
|
1151
|
rlm@470
|
1152 The numbers that serve as keys in the map determine a sensor's
|
rlm@470
|
1153 relative sensitivity to the channels red, green, and blue. These
|
rlm@470
|
1154 sensitivity values are packed into an integer in the order
|
rlm@470
|
1155 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
|
rlm@470
|
1156 image are added together with these sensitivities as linear
|
rlm@470
|
1157 weights. Therefore, 0xFF0000 means sensitive to red only while
|
rlm@470
|
1158 0xFFFFFF means sensitive to all colors equally (gray).
|
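The weighting itself is simple. The sketch below is my own
illustration of the idea (the function =CORTEX= actually uses is
=pixel-sense=, whose exact normalization is assumed here): unpack the
8-bit fields of the sensitivity key and of the pixel, then form the
normalized weighted sum.

#+begin_src clojure
;; Sketch of the sensitivity weighting described above (illustrative only).
(defn pixel-sense-sketch [sensitivity pixel]
  (let [field   (fn [color shift] (bit-and 0xFF (bit-shift-right color shift)))
        weights [(field sensitivity 16) (field sensitivity 8) (field sensitivity 0)]
        rgb     [(field pixel 16) (field pixel 8) (field pixel 0)]]
    (/ (reduce + (map * weights rgb))
       (* 255.0 (max 1 (reduce + weights))))))

;; (pixel-sense-sketch 0xFF0000 0x7F0000) ;=> ~0.498  red sensor, half-bright red
;; (pixel-sense-sketch 0xFFFFFF 0x7F7F7F) ;=> ~0.498  gray sensor, mid gray
#+end_src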
rlm@470
|
1159
|
rlm@470
|
1160 #+caption: This is the core of vision in =CORTEX=. A given eye node
|
rlm@470
|
1161 #+caption: is converted into a function that returns visual
|
rlm@470
|
1162 #+caption: information from the simulation.
|
rlm@471
|
1163 #+name: vision-kernel
|
rlm@470
|
1164 #+begin_listing clojure
#+begin_src clojure
|
rlm@470
|
1165 (defn vision-kernel
|
rlm@470
|
1166 "Returns a list of functions, each of which will return a color
|
rlm@470
|
1167 channel's worth of visual information when called inside a running
|
rlm@470
|
1168 simulation."
|
rlm@470
|
1169 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
|
rlm@470
|
1170 (let [retinal-map (retina-sensor-profile eye)
|
rlm@470
|
1171 camera (add-eye! creature eye)
|
rlm@470
|
1172 vision-image
|
rlm@470
|
1173 (atom
|
rlm@470
|
1174 (BufferedImage. (.getWidth camera)
|
rlm@470
|
1175 (.getHeight camera)
|
rlm@470
|
1176 BufferedImage/TYPE_BYTE_BINARY))
|
rlm@470
|
1177 register-eye!
|
rlm@470
|
1178 (runonce
|
rlm@470
|
1179 (fn [world]
|
rlm@470
|
1180 (add-camera!
|
rlm@470
|
1181 world camera
|
rlm@470
|
1182 (let [counter (atom 0)]
|
rlm@470
|
1183 (fn [r fb bb bi]
|
rlm@470
|
1184 (if (zero? (rem (swap! counter inc) (inc skip)))
|
rlm@470
|
1185 (reset! vision-image
|
rlm@470
|
1186 (BufferedImage! r fb bb bi))))))))]
|
rlm@470
|
1187 (vec
|
rlm@470
|
1188 (map
|
rlm@470
|
1189 (fn [[key image]]
|
rlm@470
|
1190 (let [whites (white-coordinates image)
|
rlm@470
|
1191 topology (vec (collapse whites))
|
rlm@470
|
1192 sensitivity (sensitivity-presets key key)]
|
rlm@470
|
1193 (attached-viewport.
|
rlm@470
|
1194 (fn [world]
|
rlm@470
|
1195 (register-eye! world)
|
rlm@470
|
1196 (vector
|
rlm@470
|
1197 topology
|
rlm@470
|
1198 (vec
|
rlm@470
|
1199 (for [[x y] whites]
|
rlm@470
|
1200 (pixel-sense
|
rlm@470
|
1201 sensitivity
|
rlm@470
|
1202 (.getRGB @vision-image x y))))))
|
rlm@470
|
1203 register-eye!)))
|
rlm@470
|
1204 retinal-map))))
|
rlm@470
|
#+end_src
1205 #+end_listing
|
rlm@470
|
1206
|
rlm@470
|
1207 Note that since each of the functions generated by =vision-kernel=
|
rlm@470
|
1208 shares the same =register-eye!= function, the eye will be
|
rlm@470
|
1209 registered only once, the first time any of the functions from the
|
rlm@470
|
1210 list returned by =vision-kernel= is called. Each of the functions
|
rlm@470
|
1211 returned by =vision-kernel= also allows access to the =ViewPort=
|
rlm@470
|
1212 through which it receives images.
|
rlm@470
|
1213
|
rlm@470
|
1214 All the hard work has been done; all that remains is to apply
|
rlm@470
|
1215 =vision-kernel= to each eye in the creature and gather the results
|
rlm@470
|
1216 into one list of functions.
|
rlm@470
|
1217
|
rlm@470
|
1218
|
rlm@470
|
1219 #+caption: With =vision!=, =CORTEX= is already a fine simulation
|
rlm@470
|
1220 #+caption: environment for experimenting with different types of
|
rlm@470
|
1221 #+caption: eyes.
|
rlm@470
|
1222 #+name: vision!
|
rlm@470
|
1223 #+begin_listing clojure
#+begin_src clojure
|
rlm@470
|
1224 (defn vision!
|
rlm@470
|
1225 "Returns a list of functions, each of which returns visual sensory
|
rlm@470
|
1226 data when called inside a running simulation."
|
rlm@470
|
1227 [#^Node creature & {skip :skip :or {skip 0}}]
|
rlm@470
|
1228 (reduce
|
rlm@470
|
1229 concat
|
rlm@470
|
1230 (for [eye (eyes creature)]
|
rlm@470
|
1231 (vision-kernel creature eye))))
|
rlm@470
|
#+end_src
1232 #+end_listing
|
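A sketch of how the resulting functions might be polled inside a
running simulation (=world= stands for the simulation handle that
each vision function expects; the printing is illustrative):

#+begin_src clojure
;; Usage sketch: each function returned by vision! takes the running world
;; and yields [topology sensor-values] for one color channel of one eye.
(defn poll-vision-sketch [creature world]
  (doseq [eye-fn (vision! creature)]
    (let [[topology values] (eye-fn world)]
      (println (count values) "sensor activations on a"
               (count topology) "element retina"))))
#+end_src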
rlm@470
|
1233
|
rlm@471
|
1234 #+caption: Simulated vision with a test creature and the
|
rlm@471
|
1235 #+caption: human-like eye approximation. Notice how each channel
|
rlm@471
|
1236 #+caption: of the eye responds differently to the differently
|
rlm@471
|
1237 #+caption: colored balls.
|
rlm@471
|
1238 #+name: worm-vision-test
|
rlm@471
|
1239 #+ATTR_LaTeX: :width 13cm
|
rlm@471
|
1240 [[./images/worm-vision.png]]
|
rlm@470
|
1241
|
rlm@471
|
1242 The vision code is not much more complicated than the body code,
|
rlm@471
|
1243 and enables multiple further paths for simulated vision. For
|
rlm@471
|
1244 example, it is quite easy to create bifocal vision -- you just
|
rlm@471
|
1245 make two eyes next to each other in blender! It is also possible
|
rlm@471
|
1246 to encode vision transforms in the retinal files. For example, the
|
rlm@471
|
1247 human-like retina file in figure \ref{retina} approximates a
|
rlm@471
|
1248 log-polar transform.
|
rlm@470
|
1249
|
rlm@471
|
1250 This vision code has already been absorbed by the jMonkeyEngine
|
rlm@471
|
1251 community and is now (in modified form) part of a system for
|
rlm@471
|
1252 capturing in-game video to a file.
|
rlm@470
|
1253
|
rlm@436
|
1254 ** Hearing is hard; =CORTEX= does it right
|
rlm@436
|
1255
|
rlm@436
|
1256 ** Touch uses hundreds of hair-like elements
|
rlm@436
|
1257
|
rlm@440
|
1258 ** Proprioception is the sense that makes everything ``real''
|
rlm@436
|
1259
|
rlm@436
|
1260 ** Muscles are both effectors and sensors
|
rlm@436
|
1261
|
rlm@436
|
1262 ** =CORTEX= brings complex creatures to life!
|
rlm@436
|
1263
|
rlm@436
|
1264 ** =CORTEX= enables many possibilities for further research
|
rlm@435
|
1265
|
rlm@465
|
1266 * COMMENT Empathy in a simulated worm
|
rlm@435
|
1267
|
rlm@449
|
1268 Here I develop a computational model of empathy, using =CORTEX= as a
|
rlm@449
|
1269 base. Empathy in this context is the ability to observe another
|
rlm@449
|
1270 creature and infer what sorts of sensations that creature is
|
rlm@449
|
1271 feeling. My empathy algorithm involves multiple phases. First is
|
rlm@449
|
1272 free-play, where the creature moves around and gains sensory
|
rlm@449
|
1273 experience. From this experience I construct a representation of the
|
rlm@449
|
1274 creature's sensory state space, which I call \Phi-space. Using
|
rlm@449
|
1275 \Phi-space, I construct an efficient function which takes the
|
rlm@449
|
1276 limited data that comes from observing another creature and enriches
|
rlm@449
|
1277 it to a full complement of imagined sensory data. I can then use the
|
rlm@449
|
1278 imagined sensory data to recognize what the observed creature is
|
rlm@449
|
1279 doing and feeling, using straightforward embodied action predicates.
|
rlm@449
|
1280 This is all demonstrated using a simple worm-like creature, and
|
rlm@449
|
1281 recognizing worm-actions based on limited data.
|
rlm@449
|
1282
|
rlm@449
|
1283 #+caption: Here is the worm with which we will be working.
|
rlm@449
|
1284 #+caption: It is composed of 5 segments. Each segment has a
|
rlm@449
|
1285 #+caption: pair of extensor and flexor muscles. Each of the
|
rlm@449
|
1286 #+caption: worm's four joints is a hinge joint which allows
|
rlm@451
|
1287 #+caption: about 30 degrees of rotation to either side. Each segment
|
rlm@449
|
1288 #+caption: of the worm is touch-capable and has a uniform
|
rlm@449
|
1289 #+caption: distribution of touch sensors on each of its faces.
|
rlm@449
|
1290 #+caption: Each joint has a proprioceptive sense to detect
|
rlm@449
|
1291 #+caption: relative positions. The worm segments are all the
|
rlm@449
|
1292 #+caption: same except for the first one, which has a much
|
rlm@449
|
1293 #+caption: higher weight than the others to allow for easy
|
rlm@449
|
1294 #+caption: manual motor control.
|
rlm@449
|
1295 #+name: basic-worm-view
|
rlm@449
|
1296 #+ATTR_LaTeX: :width 10cm
|
rlm@449
|
1297 [[./images/basic-worm-view.png]]
|
rlm@449
|
1298
|
rlm@449
|
1299 #+caption: Program for reading a worm from a blender file and
|
rlm@449
|
1300 #+caption: outfitting it with the senses of proprioception,
|
rlm@449
|
1301 #+caption: touch, and the ability to move, as specified in the
|
rlm@449
|
1302 #+caption: blender file.
|
rlm@449
|
1303 #+name: get-worm
|
rlm@449
|
1304 #+begin_listing clojure
|
rlm@449
|
1305 #+begin_src clojure
|
rlm@449
|
1306 (defn worm []
|
rlm@449
|
1307 (let [model (load-blender-model "Models/worm/worm.blend")]
|
rlm@449
|
1308 {:body (doto model (body!))
|
rlm@449
|
1309 :touch (touch! model)
|
rlm@449
|
1310 :proprioception (proprioception! model)
|
rlm@449
|
1311 :muscles (movement! model)}))
|
rlm@449
|
1312 #+end_src
|
rlm@449
|
1313 #+end_listing
|
rlm@452
|
1314
|
rlm@436
|
1315 ** Embodiment factors action recognition into manageable parts
|
rlm@435
|
1316
|
rlm@449
|
1317 Using empathy, I divide the problem of action recognition into a
|
rlm@449
|
1318 recognition process expressed in the language of a full complement
|
rlm@449
|
1319 of senses, and an imaginative process that generates full sensory
|
rlm@449
|
1320 data from partial sensory data. Splitting the action recognition
|
rlm@449
|
1321 problem in this manner greatly reduces the total amount of work to
|
rlm@449
|
1322 recognize actions: The imaginative process is mostly just matching
|
rlm@449
|
1323 previous experience, and the recognition process gets to use all
|
rlm@449
|
1324 the senses to directly describe any action.
|
rlm@449
|
1325
|
rlm@436
|
1326 ** Action recognition is easy with a full gamut of senses
|
rlm@435
|
1327
|
rlm@449
|
1328 Embodied representations using multiple senses such as touch,
|
rlm@449
|
1329 proprioception, and muscle tension turn out to be exceedingly
|
rlm@449
|
1330 efficient at describing body-centered actions. It is the ``right
|
rlm@449
|
1331 language for the job''. For example, it takes only around 5 lines
|
rlm@449
|
1332 of LISP code to describe the action of ``curling'' using embodied
|
rlm@451
|
1333 primitives. It takes about 10 lines to describe the seemingly
|
rlm@449
|
1334 complicated action of wiggling.
|
rlm@449
|
1335
|
rlm@449
|
1336 The following action predicates each take a stream of sensory
|
rlm@449
|
1337 experience, observe however much of it they desire, and decide
|
rlm@449
|
1338 whether the worm is doing the action they describe. =curled?=
|
rlm@449
|
1339 relies on proprioception, =resting?= relies on touch, =wiggling?=
|
rlm@449
|
1340 relies on a Fourier analysis of muscle contraction, and
|
rlm@449
|
1341 =grand-circle?= relies on touch and reuses =curled?= as a guard.
|
rlm@449
|
1342
|
rlm@449
|
1343 #+caption: Program for detecting whether the worm is curled. This is the
|
rlm@449
|
1344 #+caption: simplest action predicate, because it only uses the last frame
|
rlm@449
|
1345 #+caption: of sensory experience, and only uses proprioceptive data. Even
|
rlm@449
|
1346 #+caption: this simple predicate, however, is automatically frame
|
rlm@449
|
1347 #+caption: independent and ignores vermopomorphic differences such as
|
rlm@449
|
1348 #+caption: worm textures and colors.
|
rlm@449
|
1349 #+name: curled
|
rlm@452
|
1350 #+attr_latex: [htpb]
|
rlm@452
|
1351 #+begin_listing clojure
|
rlm@449
|
1352 #+begin_src clojure
|
rlm@449
|
1353 (defn curled?
|
rlm@449
|
1354 "Is the worm curled up?"
|
rlm@449
|
1355 [experiences]
|
rlm@449
|
1356 (every?
|
rlm@449
|
1357 (fn [[_ _ bend]]
|
rlm@449
|
1358 (> (Math/sin bend) 0.64))
|
rlm@449
|
1359 (:proprioception (peek experiences))))
|
rlm@449
|
1360 #+end_src
|
rlm@449
|
1361 #+end_listing
|
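For instance, a quick REPL check with a single hand-built experience
(not recorded data) behaves as expected:

#+begin_src clojure
;; Hypothetical check: one fake experience whose four joints all have a
;; bend of 1.0 radian counts as curled, since (Math/sin 1.0) ~= 0.84 > 0.64.
(curled? [{:proprioception [[0 0 1.0] [0 0 1.0] [0 0 1.0] [0 0 1.0]]}])
;; => true
#+end_src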
rlm@449
|
1362
|
rlm@449
|
1363 #+caption: Program for summarizing the touch information in a patch
|
rlm@449
|
1364 #+caption: of skin.
|
rlm@449
|
1365 #+name: touch-summary
|
rlm@452
|
1366 #+attr_latex: [htpb]
|
rlm@452
|
1368 #+begin_listing clojure
|
rlm@449
|
1369 #+begin_src clojure
|
rlm@449
|
1370 (defn contact
|
rlm@449
|
1371 "Determine how much contact a particular worm segment has with
|
rlm@449
|
1372 other objects. Returns a value between 0 and 1, where 1 is full
|
rlm@449
|
1373 contact and 0 is no contact."
|
rlm@449
|
1374 [touch-region [coords contact :as touch]]
|
rlm@449
|
1375 (-> (zipmap coords contact)
|
rlm@449
|
1376 (select-keys touch-region)
|
rlm@449
|
1377 (vals)
|
rlm@449
|
1378 (#(map first %))
|
rlm@449
|
1379 (average)
|
rlm@449
|
1380 (* 10)
|
rlm@449
|
1381 (- 1)
|
rlm@449
|
1382 (Math/abs)))
|
rlm@449
|
1383 #+end_src
|
rlm@449
|
1384 #+end_listing
|
rlm@449
|
1385
|
rlm@449
|
1386
|
rlm@449
|
1387 #+caption: Program for detecting whether the worm is at rest. This program
|
rlm@449
|
1388 #+caption: uses a summary of the tactile information from the underbelly
|
rlm@449
|
1389 #+caption: of the worm, and is only true if every segment is touching the
|
rlm@449
|
1390 #+caption: floor. Note that this function contains no references to
|
rlm@449
|
1391 #+caption: proprioception at all.
|
rlm@449
|
1392 #+name: resting
|
rlm@452
|
1393 #+attr_latex: [htpb]
|
rlm@452
|
1394 #+begin_listing clojure
|
rlm@449
|
1395 #+begin_src clojure
|
rlm@449
|
1396 (def worm-segment-bottom (rect-region [8 15] [14 22]))
|
rlm@449
|
1397
|
rlm@449
|
1398 (defn resting?
|
rlm@449
|
1399 "Is the worm resting on the ground?"
|
rlm@449
|
1400 [experiences]
|
rlm@449
|
1401 (every?
|
rlm@449
|
1402 (fn [touch-data]
|
rlm@449
|
1403 (< 0.9 (contact worm-segment-bottom touch-data)))
|
rlm@449
|
1404 (:touch (peek experiences))))
|
rlm@449
|
1405 #+end_src
|
rlm@449
|
1406 #+end_listing
|
rlm@449
|
1407
|
rlm@449
|
1408 #+caption: Program for detecting whether the worm is curled up into a
|
rlm@449
|
1409 #+caption: full circle. Here the embodied approach begins to shine, as
|
rlm@449
|
1410 #+caption: I am able to both use a previous action predicate (=curled?=)
|
rlm@449
|
1411 #+caption: as well as the direct tactile experience of the head and tail.
|
rlm@449
|
1412 #+name: grand-circle
|
rlm@452
|
1413 #+attr_latex: [htpb]
|
rlm@452
|
1414 #+begin_listing clojure
|
rlm@449
|
1415 #+begin_src clojure
|
rlm@449
|
1416 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
|
rlm@449
|
1417
|
rlm@449
|
1418 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
|
rlm@449
|
1419
|
rlm@449
|
1420 (defn grand-circle?
|
rlm@449
|
1421 "Does the worm form a majestic circle (one end touching the other)?"
|
rlm@449
|
1422 [experiences]
|
rlm@449
|
1423 (and (curled? experiences)
|
rlm@449
|
1424 (let [worm-touch (:touch (peek experiences))
|
rlm@449
|
1425 tail-touch (worm-touch 0)
|
rlm@449
|
1426 head-touch (worm-touch 4)]
|
rlm@449
|
1427 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
|
rlm@449
|
1428 (< 0.55 (contact worm-segment-top-tip head-touch))))))
|
rlm@449
|
1429 #+end_src
|
rlm@449
|
1430 #+end_listing
|
rlm@449
|
1431
|
rlm@449
|
1432
|
rlm@449
|
1433 #+caption: Program for detecting whether the worm has been wiggling for
|
rlm@449
|
1434 #+caption: the last few frames. It uses a Fourier analysis of the muscle
|
rlm@449
|
1435 #+caption: contractions of the worm's tail to determine wiggling. This is
|
rlm@449
|
1436 #+caption: significant because there is no particular frame that clearly
|
rlm@449
|
1437 #+caption: indicates that the worm is wiggling --- only when multiple frames
|
rlm@449
|
1438 #+caption: are analyzed together is the wiggling revealed. Defining
|
rlm@449
|
1439 #+caption: wiggling this way also gives the worm an opportunity to learn
|
rlm@449
|
1440 #+caption: and recognize ``frustrated wiggling'', where the worm tries to
|
rlm@449
|
1441 #+caption: wiggle but can't. Frustrated wiggling is very visually different
|
rlm@449
|
1442 #+caption: from actual wiggling, but this definition gives it to us for free.
|
rlm@449
|
1443 #+name: wiggling
|
rlm@452
|
1444 #+attr_latex: [htpb]
|
rlm@452
|
1445 #+begin_listing clojure
|
rlm@449
|
1446 #+begin_src clojure
|
rlm@449
|
1447 (defn fft [nums]
|
rlm@449
|
1448 (map
|
rlm@449
|
1449 #(.getReal %)
|
rlm@449
|
1450 (.transform
|
rlm@449
|
1451 (FastFourierTransformer. DftNormalization/STANDARD)
|
rlm@449
|
1452 (double-array nums) TransformType/FORWARD)))
|
rlm@449
|
1453
|
rlm@449
|
1454 (def indexed (partial map-indexed vector))
|
rlm@449
|
1455
|
rlm@449
|
1456 (defn max-indexed [s]
|
rlm@449
|
1457 (first (sort-by (comp - second) (indexed s))))
|
rlm@449
|
1458
|
rlm@449
|
1459 (defn wiggling?
|
rlm@449
|
1460 "Is the worm wiggling?"
|
rlm@449
|
1461 [experiences]
|
rlm@449
|
1462 (let [analysis-interval 0x40]
|
rlm@449
|
1463 (when (> (count experiences) analysis-interval)
|
rlm@449
|
1464 (let [a-flex 3
|
rlm@449
|
1465 a-ex 2
|
rlm@449
|
1466 muscle-activity
|
rlm@449
|
1467 (map :muscle (vector:last-n experiences analysis-interval))
|
rlm@449
|
1468 base-activity
|
rlm@449
|
1469 (map #(- (% a-flex) (% a-ex)) muscle-activity)]
|
rlm@449
|
1470 (= 2
|
rlm@449
|
1471 (first
|
rlm@449
|
1472 (max-indexed
|
rlm@449
|
1473 (map #(Math/abs %)
|
rlm@449
|
1474 (take 20 (fft base-activity))))))))))
|
rlm@449
|
1475 #+end_src
|
rlm@449
|
1476 #+end_listing
|
rlm@449
|
1477
|
rlm@449
|
1478 With these action predicates, I can now recognize the actions of
|
rlm@449
|
1479 the worm while it is moving under my control and I have access to
|
rlm@449
|
1480 all the worm's senses.
|
rlm@449
|
1481
|
rlm@449
|
1482 #+caption: Use the action predicates defined earlier to report on
|
rlm@449
|
1483 #+caption: what the worm is doing while in simulation.
|
rlm@449
|
1484 #+name: report-worm-activity
|
rlm@452
|
1485 #+attr_latex: [htpb]
|
rlm@452
|
1486 #+begin_listing clojure
|
rlm@449
|
1487 #+begin_src clojure
|
rlm@449
|
1488 (defn debug-experience
|
rlm@449
|
1489 [experiences text]
|
rlm@449
|
1490 (cond
|
rlm@449
|
1491 (grand-circle? experiences) (.setText text "Grand Circle")
|
rlm@449
|
1492 (curled? experiences) (.setText text "Curled")
|
rlm@449
|
1493 (wiggling? experiences) (.setText text "Wiggling")
|
rlm@449
|
1494 (resting? experiences) (.setText text "Resting")))
|
rlm@449
|
1495 #+end_src
|
rlm@449
|
1496 #+end_listing
|
rlm@449
|
1497
|
rlm@449
|
1498 #+caption: Using =debug-experience=, the body-centered predicates
|
rlm@449
|
1499 #+caption: work together to classify the behaviour of the worm.
|
rlm@451
|
1500 #+caption: The predicates are operating with access to the worm's
|
rlm@451
|
1501 #+caption: full sensory data.
|
rlm@449
|
1502 #+name: worm-identify-init
|
rlm@449
|
1503 #+ATTR_LaTeX: :width 10cm
|
rlm@449
|
1504 [[./images/worm-identify-init.png]]
|
rlm@449
|
1505
|
rlm@449
|
1506 These action predicates satisfy the recognition requirement of an
|
rlm@451
|
1507 empathic recognition system. There is power in the simplicity of
|
rlm@451
|
1508 the action predicates. They describe their actions without getting
|
rlm@451
|
1509 confused by visual details of the worm. Each one is frame
|
rlm@451
|
1510 independent, but more than that, they are each independent of
|
rlm@449
|
1511 irrelevant visual details of the worm and the environment. They
|
rlm@449
|
1512 will work regardless of whether the worm is a different color or
|
rlm@451
|
1513 heavily textured, or if the environment has strange lighting.
|
rlm@449
|
1514
|
rlm@449
|
1515 The trick now is to make the action predicates work even when the
|
rlm@449
|
1516 sensory data on which they depend is absent. If I can do that, then
|
rlm@449
|
1517 I will have gained much.
|
rlm@435
|
1518
|
rlm@436
|
1519 ** \Phi-space describes the worm's experiences
|
rlm@449
|
1520
|
rlm@449
|
1521 As a first step towards building empathy, I need to gather all of
|
rlm@449
|
1522 the worm's experiences during free play. I use a simple vector to
|
rlm@449
|
1523 store all the experiences.
|
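For reference, this is roughly the shape of a single entry, inferred
from how the action predicates above consume it. The field values are
illustrative placeholders, not recorded data.

#+begin_src clojure
;; Sketch of one element of the experience vector (placeholder values).
(def example-experience
  {:proprioception [[0.0 0.0 0.8] [0.0 0.0 0.7] [0.0 0.0 0.8] [0.0 0.0 0.9]]
   ;; one triple per joint; curled? reads the third (bend) component
   :touch  []           ; per-segment [coordinates touch-values] pairs, as used by contact
   :muscle [0 0 30 0]}) ; per-muscle activation levels, as used by wiggling?
#+end_src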
rlm@449
|
1524
|
rlm@449
|
1525 Each element of the experience vector exists in the vast space of
|
rlm@449
|
1526 all possible worm-experiences. Most of this vast space is actually
|
rlm@449
|
1527 unreachable due to physical constraints of the worm's body. For
|
rlm@449
|
1528 example, the worm's segments are connected by hinge joints that put
|
rlm@451
|
1529 a practical limit on the worm's range of motions without limiting
|
rlm@451
|
1530 its degrees of freedom. Some groupings of senses are impossible;
|
rlm@451
|
1531 the worm cannot be bent into a circle so that its ends are
|
rlm@451
|
1532 touching and at the same time not also experience the sensation of
|
rlm@451
|
1533 touching itself.
|
rlm@449
|
1534
|
rlm@451
|
1535 As the worm moves around during free play and its experience vector
|
rlm@451
|
1536 grows larger, the vector begins to define a subspace which is all
|
rlm@451
|
1537 the sensations the worm can practically experience during normal
|
rlm@451
|
1538 operation. I call this subspace \Phi-space, short for
|
rlm@451
|
1539 physical-space. The experience vector defines a path through
|
rlm@451
|
1540 \Phi-space. This path has interesting properties that all derive
|
rlm@451
|
1541 from physical embodiment. The proprioceptive components are
|
rlm@451
|
1542 completely smooth, because in order for the worm to move from one
|
rlm@451
|
1543 position to another, it must pass through the intermediate
|
rlm@451
|
1544 positions. The path invariably forms loops as actions are repeated.
|
rlm@451
|
1545 Finally and most importantly, proprioception actually gives very
|
rlm@451
|
1546 strong inference about the other senses. For example, when the worm
|
rlm@451
|
1547 is flat, you can infer that it is touching the ground and that its
|
rlm@451
|
1548 muscles are not active, because if the muscles were active, the
|
rlm@451
|
1549 worm would be moving and would not be perfectly flat. In order to
|
rlm@451
|
1550 stay flat, the worm has to be touching the ground, or it would
|
rlm@451
|
1551 again be moving out of the flat position due to gravity. If the
|
rlm@451
|
1552 worm is positioned in such a way that it interacts with itself,
|
rlm@451
|
1553 then it is very likely to be feeling the same tactile feelings as
|
rlm@451
|
1554 the last time it was in that position, because it has the same body
|
rlm@451
|
1555 as then. If you observe multiple frames of proprioceptive data,
|
rlm@451
|
1556 then you can become increasingly confident about the exact
|
rlm@451
|
1557 activations of the worm's muscles, because it generally takes a
|
rlm@451
|
1558 unique combination of muscle contractions to transform the worm's
|
rlm@451
|
1559 body along a specific path through \Phi-space.
|
rlm@449
|
1560
|
rlm@449
|
1561 There is a simple way of taking \Phi-space and the total ordering
|
rlm@449
|
1562 provided by an experience vector and reliably inferring the rest of
|
rlm@449
|
1563 the senses.
|
rlm@435
|
1564
|
rlm@436
|
1565 ** Empathy is the process of tracing through \Phi-space
|
rlm@449
|
1566
|
rlm@450
|
1567 Here is the core of a basic empathy algorithm, starting with an
|
rlm@451
|
1568 experience vector:
|
rlm@451
|
1569
|
rlm@451
|
1570 First, group the experiences into tiered proprioceptive bins. I use
|
rlm@451
|
1571 three tiers of bins sized by powers of 10; the smallest bin has an approximate
|
rlm@451
|
1572 size of 0.001 radians in all proprioceptive dimensions.
|
rlm@450
|
1573
|
rlm@450
|
1574 Then, given a sequence of proprioceptive input, generate a set of
|
rlm@451
|
1575 matching experience records for each input, using the tiered
|
rlm@451
|
1576 proprioceptive bins.
|
rlm@449
|
1577
|
rlm@450
|
1578 Finally, to infer sensory data, select the longest consecutive chain
|
rlm@451
|
1579 of experiences. Consecutive experience means that the experiences
|
rlm@451
|
1580 appear next to each other in the experience vector.
|
rlm@449
|
1581
|
rlm@450
|
1582 This algorithm has three advantages:
|
rlm@450
|
1583
|
rlm@450
|
1584 1. It's simple
|
rlm@450
|
1585
|
rlm@451
|
1586 2. It's very fast -- retrieving possible interpretations takes
|
rlm@451
|
1587 constant time. Tracing through chains of interpretations takes
|
rlm@451
|
1588 time proportional to the average number of experiences in a
|
rlm@451
|
1589 proprioceptive bin. Redundant experiences in \Phi-space can be
|
rlm@451
|
1590 merged to save computation.
|
rlm@450
|
1591
|
rlm@450
|
1592 3. It protects from wrong interpretations of transient ambiguous
|
rlm@451
|
1593 proprioceptive data. For example, if the worm is flat for just
|
rlm@450
|
1594 an instant, this flatness will not be interpreted as implying
|
rlm@450
|
1595 that the worm has its muscles relaxed, since the flatness is
|
rlm@450
|
1596 part of a longer chain which includes a distinct pattern of
|
rlm@451
|
1597 muscle activation. Markov chains or other memoryless statistical
|
rlm@451
|
1598 models that operate on individual frames may very well make this
|
rlm@451
|
1599 mistake.
|
rlm@450
|
1600
|
rlm@450
|
1601 #+caption: Program to convert an experience vector into a
|
rlm@450
|
1602 #+caption: proprioceptively binned lookup function.
|
rlm@450
|
1603 #+name: bin
|
rlm@452
|
1604 #+attr_latex: [htpb]
|
rlm@452
|
1605 #+begin_listing clojure
|
rlm@450
|
1606 #+begin_src clojure
|
rlm@449
|
1607 (defn bin [digits]
|
rlm@449
|
1608 (fn [angles]
|
rlm@449
|
1609 (->> angles
|
rlm@449
|
1610 (flatten)
|
rlm@449
|
1611 (map (juxt #(Math/sin %) #(Math/cos %)))
|
rlm@449
|
1612 (flatten)
|
rlm@449
|
1613 (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
|
rlm@449
|
1614
|
rlm@449
|
1615 (defn gen-phi-scan
|
rlm@450
|
1616 "Nearest-neighbors with binning. Only returns a result if
|
rlm@450
|
1617 the proprioceptive data is within 10% of a previously recorded
|
rlm@450
|
1618 result in all dimensions."
|
rlm@450
|
1619 [phi-space]
|
rlm@449
|
1620 (let [bin-keys (map bin [3 2 1])
|
rlm@449
|
1621 bin-maps
|
rlm@449
|
1622 (map (fn [bin-key]
|
rlm@449
|
1623 (group-by
|
rlm@449
|
1624 (comp bin-key :proprioception phi-space)
|
rlm@449
|
1625 (range (count phi-space)))) bin-keys)
|
rlm@449
|
1626 lookups (map (fn [bin-key bin-map]
|
rlm@450
|
1627 (fn [proprio] (bin-map (bin-key proprio))))
|
rlm@450
|
1628 bin-keys bin-maps)]
|
rlm@449
|
1629 (fn lookup [proprio-data]
|
rlm@449
|
1630 (set (some #(% proprio-data) lookups)))))
|
rlm@450
|
1631 #+end_src
|
rlm@450
|
1632 #+end_listing
|
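To make the binning concrete, here is what =bin= computes for a
hypothetical pair of joint angles: the sine and cosine of each angle,
rounded to the resolution of the bin (output worked out by hand from
the definition above, worth verifying at the REPL):

#+begin_src clojure
((bin 3) [0.5 0.25])
;; => [48 88 25 97]  ; [sin 0.5, cos 0.5, sin 0.25, cos 0.25], scaled by 100 and rounded
#+end_src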
rlm@449
|
1633
|
rlm@451
|
1634 #+caption: =longest-thread= finds the longest path of consecutive
|
rlm@451
|
1635 #+caption: experiences to explain proprioceptive worm data.
|
rlm@451
|
1636 #+name: phi-space-history-scan
|
rlm@451
|
1637 #+ATTR_LaTeX: :width 10cm
|
rlm@451
|
1638 [[./images/aurellem-gray.png]]
|
rlm@451
|
1639
|
rlm@451
|
1640 =longest-thread= infers sensory data by stitching together pieces
|
rlm@451
|
1641 from previous experience. It prefers longer chains of previous
|
rlm@451
|
1642 experience to shorter ones. For example, during training the worm
|
rlm@451
|
1643 might rest on the ground for one second before it performs its
|
rlm@451
|
1644 exercises. If during recognition the worm rests on the ground for
|
rlm@451
|
1645 five seconds, =longest-thread= will accommodate this five second
|
rlm@451
|
1646 rest period by looping the one second rest chain five times.
|
rlm@451
|
1647
|
rlm@451
|
1648 =longest-thread= takes time proportional to the average number of
|
rlm@451
|
1649 entries in a proprioceptive bin, because for each element in the
|
rlm@451
|
1650 starting bin it performs a series of set lookups in the preceding
|
rlm@451
|
1651 bins. If the total history is limited, then this is only a constant
|
rlm@451
|
1652 multiple times the number of entries in the starting bin. This
|
rlm@451
|
1653 analysis also applies even if the action requires multiple longest
|
rlm@451
|
1654 chains -- it's still the average number of entries in a
|
rlm@451
|
1655 proprioceptive bin times the desired chain length. Because
|
rlm@451
|
1656 =longest-thread= is so efficient and simple, I can interpret
|
rlm@451
|
1657 worm-actions in real time.
|
rlm@449
|
1658
|
rlm@450
|
1659 #+caption: Program to calculate empathy by tracing though \Phi-space
|
rlm@450
|
1660 #+caption: and finding the longest (i.e. most coherent) interpretation
|
rlm@450
|
1661 #+caption: of the data.
|
rlm@450
|
1662 #+name: longest-thread
|
rlm@452
|
1663 #+attr_latex: [htpb]
|
rlm@452
|
1664 #+begin_listing clojure
|
rlm@450
|
1665 #+begin_src clojure
|
rlm@449
|
1666 (defn longest-thread
|
rlm@449
|
1667 "Find the longest thread from phi-index-sets. The index sets should
|
rlm@449
|
1668 be ordered from most recent to least recent."
|
rlm@449
|
1669 [phi-index-sets]
|
rlm@449
|
1670 (loop [result '()
|
rlm@449
|
1671 [thread-bases & remaining :as phi-index-sets] phi-index-sets]
|
rlm@449
|
1672 (if (empty? phi-index-sets)
|
rlm@449
|
1673 (vec result)
|
rlm@449
|
1674 (let [threads
|
rlm@449
|
1675 (for [thread-base thread-bases]
|
rlm@449
|
1676 (loop [thread (list thread-base)
|
rlm@449
|
1677 remaining remaining]
|
rlm@449
|
1678 (let [next-index (dec (first thread))]
|
rlm@449
|
1679 (cond (empty? remaining) thread
|
rlm@449
|
1680 (contains? (first remaining) next-index)
|
rlm@449
|
1681 (recur
|
rlm@449
|
1682 (cons next-index thread) (rest remaining))
|
rlm@449
|
1683 :else thread))))
|
rlm@449
|
1684 longest-thread
|
rlm@449
|
1685 (reduce (fn [thread-a thread-b]
|
rlm@449
|
1686 (if (> (count thread-a) (count thread-b))
|
rlm@449
|
1687 thread-a thread-b))
|
rlm@449
|
1688 '(nil)
|
rlm@449
|
1689 threads)]
|
rlm@449
|
1690 (recur (concat longest-thread result)
|
rlm@449
|
1691 (drop (count longest-thread) phi-index-sets))))))
|
rlm@450
|
1692 #+end_src
|
rlm@450
|
1693 #+end_listing
|
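A small hypothetical example of the preference for consecutive
chains, with the index sets ordered from most recent to least recent
as the docstring requires:

#+begin_src clojure
(longest-thread [#{5 9} #{4} #{3 7}])
;; => [3 4 5]  ; indices 3,4,5 chain across the three bins, so they are
;;             ; preferred over the isolated index 9.
#+end_src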
rlm@450
|
1694
|
rlm@451
|
1695 There is one final piece, which is to replace missing sensory data
|
rlm@451
|
1696 with a best-guess estimate. While I could fill in missing data by
|
rlm@451
|
1697 using a gradient over the closest known sensory data points,
|
rlm@451
|
1698 averages can be misleading. It is certainly possible to create an
|
rlm@451
|
1699 impossible sensory state by averaging two possible sensory states.
|
rlm@451
|
1700 Therefore, I simply replicate the most recent sensory experience to
|
rlm@451
|
1701 fill in the gaps.
|
rlm@449
|
1702
|
rlm@449
|
1703 #+caption: Fill in blanks in sensory experience by replicating the most
|
rlm@449
|
1704 #+caption: recent experience.
|
rlm@449
|
1705 #+name: infer-nils
|
rlm@452
|
1706 #+attr_latex: [htpb]
|
rlm@452
|
1707 #+begin_listing clojure
|
rlm@449
|
1708 #+begin_src clojure
|
rlm@449
|
1709 (defn infer-nils
|
rlm@449
|
1710 "Replace nils with the next available non-nil element in the
|
rlm@449
|
1711 sequence, or barring that, 0."
|
rlm@449
|
1712 [s]
|
rlm@449
|
1713 (loop [i (dec (count s))
|
rlm@449
|
1714 v (transient s)]
|
rlm@449
|
1715 (if (zero? i) (persistent! v)
|
rlm@449
|
1716 (if-let [cur (v i)]
|
rlm@449
|
1717 (if (get v (dec i) 0)
|
rlm@449
|
1718 (recur (dec i) v)
|
rlm@449
|
1719 (recur (dec i) (assoc! v (dec i) cur)))
|
rlm@449
|
1720 (recur i (assoc! v i 0))))))
|
rlm@449
|
1721 #+end_src
|
rlm@449
|
1722 #+end_listing
|
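For example, a REPL check of the gap-filling behaviour:

#+begin_src clojure
(infer-nils [1 nil nil 2 nil])
;; => [1 2 2 2 0]  ; interior nils take the next non-nil value; the trailing
;;                 ; nil has no successor and becomes 0.
#+end_src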
rlm@435
|
1723
|
rlm@441
|
1724 ** Efficient action recognition with =EMPATH=
|
rlm@451
|
1725
|
rlm@451
|
1726 To use =EMPATH= with the worm, I first need to gather a set of
|
rlm@451
|
1727 experiences from the worm that includes the actions I want to
|
rlm@452
|
1728 recognize. The =generate-phi-space= program (listing
|
rlm@451
|
1729 \ref{generate-phi-space}) runs the worm through a series of
|
rlm@451
|
1730 exercises and gathers those experiences into a vector. The
|
rlm@451
|
1731 =do-all-the-things= program is a routine expressed in a simple
|
rlm@452
|
1732 muscle contraction script language for automated worm control. It
|
rlm@452
|
1733 causes the worm to rest, curl, and wiggle over about 700 frames
|
rlm@452
|
1734 (approx. 11 seconds).
|
rlm@425
|
1735
|
rlm@451
|
1736 #+caption: Program to gather the worm's experiences into a vector for
|
rlm@451
|
1737 #+caption: further processing. The =motor-control-program= line uses
|
rlm@451
|
1738 #+caption: a motor control script that causes the worm to execute a series
|
rlm@451
|
1739 #+caption: of ``exercises'' that include all the action predicates.
|
rlm@451
|
1740 #+name: generate-phi-space
|
rlm@452
|
1741 #+attr_latex: [htpb]
|
rlm@452
|
1742 #+begin_listing clojure
|
rlm@451
|
1743 #+begin_src clojure
|
rlm@451
|
1744 (def do-all-the-things
|
rlm@451
|
1745 (concat
|
rlm@451
|
1746 curl-script
|
rlm@451
|
1747 [[300 :d-ex 40]
|
rlm@451
|
1748 [320 :d-ex 0]]
|
rlm@451
|
1749 (shift-script 280 (take 16 wiggle-script))))
|
rlm@451
|
1750
|
rlm@451
|
1751 (defn generate-phi-space []
|
rlm@451
|
1752 (let [experiences (atom [])]
|
rlm@451
|
1753 (run-world
|
rlm@451
|
1754 (apply-map
|
rlm@451
|
1755 worm-world
|
rlm@451
|
1756 (merge
|
rlm@451
|
1757 (worm-world-defaults)
|
rlm@451
|
1758 {:end-frame 700
|
rlm@451
|
1759 :motor-control
|
rlm@451
|
1760 (motor-control-program worm-muscle-labels do-all-the-things)
|
rlm@451
|
1761 :experiences experiences})))
|
rlm@451
|
1762 @experiences))
|
rlm@451
|
1763 #+end_src
|
rlm@451
|
1764 #+end_listing
|
rlm@451
|
1765
|
rlm@451
|
1766 #+caption: Use longest thread and a phi-space generated from a short
|
rlm@451
|
1767 #+caption: exercise routine to interpret actions during free play.
|
rlm@451
|
1768 #+name: empathy-debug
|
rlm@452
|
1769 #+attr_latex: [htpb]
|
rlm@452
|
1770 #+begin_listing clojure
|
rlm@451
|
1771 #+begin_src clojure
|
rlm@451
|
1772 (defn init []
|
rlm@451
|
1773 (def phi-space (generate-phi-space))
|
rlm@451
|
1774 (def phi-scan (gen-phi-scan phi-space)))
|
rlm@451
|
1775
|
rlm@451
|
1776 (defn empathy-demonstration []
|
rlm@451
|
1777 (let [proprio (atom ())]
|
rlm@451
|
1778 (fn
|
rlm@451
|
1779 [experiences text]
|
rlm@451
|
1780 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
|
rlm@451
|
1781 (swap! proprio (partial cons phi-indices))
|
rlm@451
|
1782 (let [exp-thread (longest-thread (take 300 @proprio))
|
rlm@451
|
1783 empathy (mapv phi-space (infer-nils exp-thread))]
|
rlm@451
|
1784 (println-repl (vector:last-n exp-thread 22))
|
rlm@451
|
1785 (cond
|
rlm@451
|
1786 (grand-circle? empathy) (.setText text "Grand Circle")
|
rlm@451
|
1787 (curled? empathy) (.setText text "Curled")
|
rlm@451
|
1788 (wiggling? empathy) (.setText text "Wiggling")
|
rlm@451
|
1789 (resting? empathy) (.setText text "Resting")
|
rlm@451
|
1790 :else (.setText text "Unknown")))))))
|
rlm@451
|
1791
|
rlm@451
|
1792 (defn empathy-experiment [record]
|
rlm@451
|
1793 (.start (worm-world :experience-watch (debug-experience-phi)
|
rlm@451
|
1794 :record record :worm worm*)))
|
rlm@451
|
1795 #+end_src
|
rlm@451
|
1796 #+end_listing
|
rlm@451
|
1797
|
rlm@451
|
1798 The result of running =empathy-experiment= is that the system is
|
rlm@451
|
1799 generally able to interpret worm actions using the action-predicates
|
rlm@451
|
1800 on simulated sensory data just as well as with actual data. Figure
|
rlm@451
|
1801 \ref{empathy-debug-image} was generated using =empathy-experiment=:
|
rlm@451
|
1802
|
rlm@451
|
1803 #+caption: From only proprioceptive data, =EMPATH= was able to infer
|
rlm@451
|
1804 #+caption: the complete sensory experience and classify four poses
|
rlm@451
|
1805 #+caption: (The last panel shows a composite image of \emph{wiggling},
|
rlm@451
|
1806 #+caption: a dynamic pose.)
|
rlm@451
|
1807 #+name: empathy-debug-image
|
rlm@451
|
1808 #+ATTR_LaTeX: :width 10cm :placement [H]
|
rlm@451
|
1809 [[./images/empathy-1.png]]
|
rlm@451
|
1810
|
rlm@451
|
1811 One way to measure the performance of =EMPATH= is to compare the
|
rlm@451
|
1812 suitability of the imagined sense experience to trigger the same
|
rlm@451
|
1813 action predicates as the real sensory experience.
|
rlm@451
|
1814
|
rlm@451
|
1815 #+caption: Determine how closely empathy approximates actual
|
rlm@451
|
1816 #+caption: sensory data.
|
rlm@451
|
1817 #+name: test-empathy-accuracy
|
rlm@452
|
1818 #+attr_latex: [htpb]
|
rlm@452
|
1819 #+begin_listing clojure
|
rlm@451
|
1820 #+begin_src clojure
|
rlm@451
|
1821 (def worm-action-label
|
rlm@451
|
1822 (juxt grand-circle? curled? wiggling?))
|
rlm@451
|
1823
|
rlm@451
|
1824 (defn compare-empathy-with-baseline [matches]
|
rlm@451
|
1825 (let [proprio (atom ())]
|
rlm@451
|
1826 (fn
|
rlm@451
|
1827 [experiences text]
|
rlm@451
|
1828 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
|
rlm@451
|
1829 (swap! proprio (partial cons phi-indices))
|
rlm@451
|
1830 (let [exp-thread (longest-thread (take 300 @proprio))
|
rlm@451
|
1831 empathy (mapv phi-space (infer-nils exp-thread))
|
rlm@451
|
1832 experience-matches-empathy
|
rlm@451
|
1833 (= (worm-action-label experiences)
|
rlm@451
|
1834 (worm-action-label empathy))]
|
rlm@451
|
1835 (println-repl experience-matches-empathy)
|
rlm@451
|
1836 (swap! matches #(conj % experience-matches-empathy)))))))
|
rlm@451
|
1837
|
rlm@451
|
1838 (defn accuracy [v]
|
rlm@451
|
1839 (float (/ (count (filter true? v)) (count v))))
|
rlm@451
|
1840
|
rlm@451
|
1841 (defn test-empathy-accuracy []
|
rlm@451
|
1842 (let [res (atom [])]
|
rlm@451
|
1843 (run-world
|
rlm@451
|
1844 (worm-world :experience-watch
|
rlm@451
|
1845 (compare-empathy-with-baseline res)
|
rlm@451
|
1846 :worm worm*))
|
rlm@451
|
1847 (accuracy @res)))
|
rlm@451
|
1848 #+end_src
|
rlm@451
|
1849 #+end_listing
|
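As a sanity check of the =accuracy= helper:

#+begin_src clojure
(accuracy [true true false true])
;; => 0.75
#+end_src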
rlm@451
|
1850
|
rlm@451
|
1851 Running =test-empathy-accuracy= using the very short exercise
|
rlm@451
|
1852 program defined in listing \ref{generate-phi-space}, and then doing
|
rlm@451
|
1853 a similar pattern of activity manually yields an accuracy of around
|
rlm@451
|
1854 73%. This is based on very limited worm experience. By training the
|
rlm@451
|
1855 worm for longer, the accuracy dramatically improves.
|
rlm@451
|
1856
|
rlm@451
|
1857 #+caption: Program to generate \Phi-space using manual training.
|
rlm@451
|
1858 #+name: manual-phi-space
|
rlm@452
|
1859 #+attr_latex: [htpb]
|
rlm@451
|
1860 #+begin_listing clojure
|
rlm@451
|
1861 #+begin_src clojure
|
rlm@451
|
1862 (defn init-interactive []
|
rlm@451
|
1863 (def phi-space
|
rlm@451
|
1864 (let [experiences (atom [])]
|
rlm@451
|
1865 (run-world
|
rlm@451
|
1866 (apply-map
|
rlm@451
|
1867 worm-world
|
rlm@451
|
1868 (merge
|
rlm@451
|
1869 (worm-world-defaults)
|
rlm@451
|
1870 {:experiences experiences})))
|
rlm@451
|
1871 @experiences))
|
rlm@451
|
1872 (def phi-scan (gen-phi-scan phi-space)))
|
rlm@451
|
1873 #+end_src
|
rlm@451
|
1874 #+end_listing
|
rlm@451
|
1875
|
rlm@451
|
1876 After about 1 minute of manual training, I was able to achieve 95%
|
rlm@451
|
1877 accuracy on manual testing of the worm using =init-interactive= and
|
rlm@452
|
1878 =test-empathy-accuracy=. The majority of errors are near the
|
rlm@452
|
1879 boundaries of transitioning from one type of action to another.
|
rlm@452
|
1880 During these transitions the exact label for the action is more open
|
rlm@452
|
1881 to interpretation, and disagreement between empathy and experience
|
rlm@452
|
1882 is more excusable.
|
rlm@450
|
1883
|
rlm@449
|
1884 ** Digression: bootstrapping touch using free exploration
|
rlm@449
|
1885
|
rlm@452
|
1886 In the previous section I showed how to compute actions in terms of
|
rlm@452
|
1887 body-centered predicates which relied on the average touch activation of
|
rlm@452
|
1888 pre-defined regions of the worm's skin. What if, instead of receiving
|
rlm@452
|
1889 touch pre-grouped into the six faces of each worm segment, the true
|
rlm@452
|
1890 topology of the worm's skin was unknown? This is more similar to how
|
rlm@452
|
1891 a nerve fiber bundle might be arranged. While two fibers that are
|
rlm@452
|
1892 close in a nerve bundle /might/ correspond to two touch sensors that
|
rlm@452
|
1893 are close together on the skin, the process of taking a complicated
|
rlm@452
|
1894 surface and forcing it into essentially a circle requires some cuts
|
rlm@452
|
1895 and rearrangements.
|
rlm@452
|
1896
|
rlm@452
|
1897 In this section I show how to automatically learn the skin-topology of
|
rlm@452
|
1898 a worm segment by free exploration. As the worm rolls around on the
|
rlm@452
|
1899 floor, large sections of its surface get activated. If the worm has
|
rlm@452
|
1900 stopped moving, then whatever region of skin is touching the
|
rlm@452
|
1901 floor is probably an important region, and should be recorded.
|
rlm@452
|
1902
|
rlm@452
|
1903 #+caption: Program to detect whether the worm is in a resting state
|
rlm@452
|
1904 #+caption: with one face touching the floor.
|
rlm@452
|
1905 #+name: pure-touch
|
rlm@452
|
1906 #+begin_listing clojure
|
rlm@452
|
1907 #+begin_src clojure
|
rlm@452
|
1908 (def full-contact [(float 0.0) (float 0.1)])
|
rlm@452
|
1909
|
rlm@452
|
1910 (defn pure-touch?
|
rlm@452
|
1911 "This is worm specific code to determine if a large region of touch
|
rlm@452
|
1912 sensors is either all on or all off."
|
rlm@452
|
1913 [[coords touch :as touch-data]]
|
rlm@452
|
1914 (= (set (map first touch)) (set full-contact)))
|
rlm@452
|
1915 #+end_src
|
rlm@452
|
1916 #+end_listing
|
rlm@452
|
1917
|
rlm@452
|
1918 After collecting these important regions, there will be many nearly
|
rlm@452
|
1919 similar touch regions. While for some purposes the subtle
|
rlm@452
|
1920 differences between these regions will be important, for my
|
rlm@452
|
1921 purposes I collapse them into mostly non-overlapping sets using
|
rlm@452
|
1922 =remove-similar= in listing \ref{remove-similar}.
|
rlm@452
|
1923
|
rlm@452
|
1924 #+caption: Program to take a list of sets of points and ``collapse'' them
|
rlm@452
|
1925 #+caption: so that the remaining sets in the list are significantly
|
rlm@452
|
1926 #+caption: different from each other. Prefer smaller sets to larger ones.
|
rlm@452
|
1927 #+name: remove-similar
|
rlm@452
|
1928 #+begin_listing clojure
|
rlm@452
|
1929 #+begin_src clojure
|
rlm@452
|
1930 (defn remove-similar
|
rlm@452
|
1931 [coll]
|
rlm@452
|
1932 (loop [result () coll (sort-by (comp - count) coll)]
|
rlm@452
|
1933 (if (empty? coll) result
|
rlm@452
|
1934 (let [[x & xs] coll
|
rlm@452
|
1935 c (count x)]
|
rlm@452
|
1936 (if (some
|
rlm@452
|
1937 (fn [other-set]
|
rlm@452
|
1938 (let [oc (count other-set)]
|
rlm@452
|
1939 (< (- (count (union other-set x)) c) (* oc 0.1))))
|
rlm@452
|
1940 xs)
|
rlm@452
|
1941 (recur result xs)
|
rlm@452
|
1942 (recur (cons x result) xs))))))
|
rlm@452
|
1943 #+end_src
|
rlm@452
|
1944 #+end_listing
|
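A hypothetical example of the collapsing behaviour: the nine-element
set nearly covers the ten-element set, so the larger one is dropped
(in keeping with the preference for smaller sets), while the
unrelated pair survives.

#+begin_src clojure
(remove-similar [#{1 2 3 4 5 6 7 8 9 10} #{1 2 3 4 5 6 7 8 9} #{20 21}])
;; => (#{20 21} #{1 2 3 4 5 6 7 8 9})
#+end_src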
rlm@452
|
1945
|
rlm@452
|
1946 Actually running this simulation is easy given =CORTEX='s facilities.
|
rlm@452
|
1947
|
rlm@452
|
1948 #+caption: Collect experiences while the worm moves around. Filter the touch
|
rlm@452
|
1949 #+caption: sensations by stable ones, collapse similar ones together,
|
rlm@452
|
1950 #+caption: and report the regions learned.
|
rlm@452
|
1951 #+name: learn-touch
|
rlm@452
|
1952 #+begin_listing clojure
|
rlm@452
|
1953 #+begin_src clojure
|
rlm@452
|
1954 (defn learn-touch-regions []
|
rlm@452
|
1955 (let [experiences (atom [])
|
rlm@452
|
1956 world (apply-map
|
rlm@452
|
1957 worm-world
|
rlm@452
|
1958 (assoc (worm-segment-defaults)
|
rlm@452
|
1959 :experiences experiences))]
|
rlm@452
|
1960 (run-world world)
|
rlm@452
|
1961 (->>
|
rlm@452
|
1962 @experiences
|
rlm@452
|
1963 (drop 175)
|
rlm@452
|
1964 ;; access the single segment's touch data
|
rlm@452
|
1965 (map (comp first :touch))
|
rlm@452
|
1966 ;; only deal with "pure" touch data to determine surfaces
|
rlm@452
|
1967 (filter pure-touch?)
|
rlm@452
|
1968 ;; associate coordinates with touch values
|
rlm@452
|
1969 (map (partial apply zipmap))
|
rlm@452
|
1970 ;; select those regions where contact is being made
|
rlm@452
|
1971 (map (partial group-by second))
|
rlm@452
|
1972 (map #(get % full-contact))
|
rlm@452
|
1973 (map (partial map first))
|
rlm@452
|
1974 ;; remove redundant/subset regions
|
rlm@452
|
1975 (map set)
|
rlm@452
|
1976 remove-similar)))
|
rlm@452
|
1977
|
rlm@452
|
1978 (defn learn-and-view-touch-regions []
|
rlm@452
|
1979 (map view-touch-region
|
rlm@452
|
1980 (learn-touch-regions)))
|
rlm@452
|
1981 #+end_src
|
rlm@452
|
1982 #+end_listing
|
rlm@452
|
1983
|
rlm@452
|
1984 The only thing remaining to define is the particular motion the worm
|
rlm@452
|
1985 must take. I accomplish this with a simple motor control program.
|
rlm@452
|
1986
|
rlm@452
|
1987 #+caption: Motor control program for making the worm roll on the ground.
|
rlm@452
|
1988 #+caption: This could also be replaced with random motion.
|
rlm@452
|
1989 #+name: worm-roll
|
rlm@452
|
1990 #+begin_listing clojure
|
rlm@452
|
1991 #+begin_src clojure
|
rlm@452
|
1992 (defn touch-kinesthetics []
|
rlm@452
|
1993 [[170 :lift-1 40]
|
rlm@452
|
1994 [190 :lift-1 19]
|
rlm@452
|
1995 [206 :lift-1 0]
|
rlm@452
|
1996
|
rlm@452
|
1997 [400 :lift-2 40]
|
rlm@452
|
1998 [410 :lift-2 0]
|
rlm@452
|
1999
|
rlm@452
|
2000 [570 :lift-2 40]
|
rlm@452
|
2001 [590 :lift-2 21]
|
rlm@452
|
2002 [606 :lift-2 0]
|
rlm@452
|
2003
|
rlm@452
|
2004 [800 :lift-1 30]
|
rlm@452
|
2005 [809 :lift-1 0]
|
rlm@452
|
2006
|
rlm@452
|
2007 [900 :roll-2 40]
|
rlm@452
|
2008 [905 :roll-2 20]
|
rlm@452
|
2009 [910 :roll-2 0]
|
rlm@452
|
2010
|
rlm@452
|
2011 [1000 :roll-2 40]
|
rlm@452
|
2012 [1005 :roll-2 20]
|
rlm@452
|
2013 [1010 :roll-2 0]
|
rlm@452
|
2014
|
rlm@452
|
2015 [1100 :roll-2 40]
|
rlm@452
|
2016 [1105 :roll-2 20]
|
rlm@452
|
2017 [1110 :roll-2 0]
|
rlm@452
|
2018 ])
|
rlm@452
|
2019 #+end_src
|
rlm@452
|
2020 #+end_listing
|
rlm@452
|
2021
|
rlm@452
|
2022
|
rlm@452
|
2023 #+caption: The small worm rolls around on the floor, driven
|
rlm@452
|
2024 #+caption: by the motor control program in listing \ref{worm-roll}.
|
rlm@452
|
2025 #+name: worm-roll-image
|
rlm@452
|
2026 #+ATTR_LaTeX: :width 12cm
|
rlm@452
|
2027 [[./images/worm-roll.png]]
|
rlm@452
|
2028
|
rlm@452
|
2029
|
rlm@452
|
2030 #+caption: After completing its adventures, the worm now knows
|
rlm@452
|
2031 #+caption: how its touch sensors are arranged along its skin. These
|
rlm@452
|
2032 #+caption: are the regions that were deemed important by
|
rlm@452
|
2033 #+caption: =learn-touch-regions=. Note that the worm has discovered
|
rlm@452
|
2034 #+caption: that it has six sides.
|
rlm@452
|
2035 #+name: worm-touch-map
|
rlm@452
|
2036 #+ATTR_LaTeX: :width 12cm
|
rlm@452
|
2037 [[./images/touch-learn.png]]
|
rlm@452
|
2038
|
rlm@452
|
2039 While simple, =learn-touch-regions= exploits regularities in both
|
rlm@452
|
2040 the worm's physiology and the worm's environment to correctly
|
rlm@452
|
2041 deduce that the worm has six sides. Note that =learn-touch-regions=
|
rlm@452
|
2042 would work just as well even if the worm's touch sense data were
|
rlm@452
|
2043 completely scrambled. The cross shape is just for convenience. This
|
rlm@452
|
2044 example justifies the use of pre-defined touch regions in =EMPATH=.
|
rlm@452
|
2045
|
rlm@465
|
2046 * COMMENT Contributions
|
rlm@454
|
2047
|
rlm@461
|
2048 In this thesis you have seen the =CORTEX= system, a complete
|
rlm@461
|
2049 environment for creating simulated creatures. You have seen how to
|
rlm@461
|
2050 implement five senses: touch, proprioception, hearing,
|
rlm@461
|
2051 vision, and muscle tension. You have seen how to create new creatures
|
rlm@461
|
2052 using blender, a 3D modeling tool. I hope that =CORTEX= will be
|
rlm@461
|
2053 useful in further research projects. To this end I have included the
|
rlm@461
|
2054 full source to =CORTEX= along with a large suite of tests and
|
rlm@461
|
2055 examples. I have also created a user guide for =CORTEX= which is
|
rlm@461
|
2056 included in an appendix to this thesis.
|
rlm@447
|
2057
|
rlm@461
|
2058 You have also seen how I used =CORTEX= as a platform to attack the
|
rlm@461
|
2059 /action recognition/ problem, which is the problem of recognizing
|
rlm@461
|
2060 actions in video. You saw a simple system called =EMPATH= which
|
rlm@461
|
2061 identifies actions by first describing actions in a body-centered,
|
rlm@461
|
2062 rich sense language, then inferring a full range of sensory
|
rlm@461
|
2063 experience from limited data using previous experience gained from
|
rlm@461
|
2064 free play.
|
rlm@447
|
2065
|
rlm@461
|
2066 As a minor digression, you also saw how I used =CORTEX= to enable a
|
rlm@461
|
2067 tiny worm to discover the topology of its skin simply by rolling on
|
rlm@461
|
2068 the ground.
|
rlm@461
|
2069
|
rlm@461
|
2070 In conclusion, the main contributions of this thesis are:
|
rlm@461
|
2071
|
rlm@461
|
2072 - =CORTEX=, a system for creating simulated creatures with rich
|
rlm@461
|
2073 senses.
|
rlm@461
|
2074 - =EMPATH=, a program for recognizing actions by imagining sensory
|
rlm@461
|
2075 experience.
|
rlm@447
|
2076
|
rlm@447
|
2077 # An anatomical joke:
|
rlm@447
|
2078 # - Training
|
rlm@447
|
2079 # - Skeletal imitation
|
rlm@447
|
2080 # - Sensory fleshing-out
|
rlm@447
|
2081 # - Classification
|