#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+LaTeX_CLASS_OPTIONS: [nofloat]

* COMMENT templates
#+caption:
#+caption:
#+caption:
#+caption:
#+name: name
#+begin_listing clojure
#+end_listing

#+caption:
#+caption:
#+caption:
#+name: name
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]

#+caption:
#+caption:
#+caption:
#+caption:
#+name: name
#+begin_listing clojure
#+end_listing

#+caption:
#+caption:
#+caption:
#+name: name
#+ATTR_LaTeX: :width 10cm
[[./images/aurellem-gray.png]]


* COMMENT Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will have also
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: That one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider for example the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features can you describe in terms of low level
functions of pixels that can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in figure \ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chairs show us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat, and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with
     sensory data from prior experiences. For example, the
     alignment itself easily maps to proprioceptive data. Any
     sounds or obvious skin contact in the video can to a lesser
     extent trigger previous experience. Segments of previous
     experiences are stitched together to form a coherent and
     complete sensory portrait of the scene.
- Recognition :: With the scene described in terms of first
     person sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.

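Taken together, these four stages form a single pipeline from video
to recognized action. The following is only a minimal sketch of how
they might compose -- the helper names (=align-model=,
=infer-experience=) are hypothetical placeholders, not functions from
the actual =EMPATH= implementation described later in this thesis.

#+begin_src clojure
;; Hedged sketch of the empathy pipeline.  The helper functions named
;; here do not exist under these names in the real code base; they only
;; illustrate how the four stages above fit together.
(defn interpret-scene
  "Sketch: return the action predicates that hold for a video, by
  aligning a body model to the video (posture imitation), inferring a
  full sensory experience from prior play (empathy), and running
  body-centered action predicates on the result (recognition)."
  [video body-model prior-experience action-predicates]
  (let [alignment  (align-model body-model video)
        experience (infer-experience alignment prior-experience)]
    (filter (fn [action?] (action? experience))
            action-predicates)))
#+end_src
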
For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and learning that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the
   mouth. The imagined sensory experience can come from a
   simulation of the event, but can also be pattern-matched from
   previous, similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation,
   is not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate (sketched just after this list), so you (and
   the entity in the image) must be sitting.

6. There must be a chair-like object since you are sitting.

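The =sitting?= predicate mentioned in step 5 could be written in the
same style as the worm predicates shown later in this chapter. The
following is only an illustrative sketch: the helper predicates are
hypothetical, since =EMPATH= as implemented works with a worm rather
than a human model.

#+begin_src clojure
;; Illustrative sketch only: these helpers (knees-bent?,
;; bottom-supported?, legs-relaxed?) do not exist in the real code base.
(defn sitting?
  "Does the retrieved sensory experience feel like sitting?"
  [experiences]
  (and (knees-bent?       experiences)
       (bottom-supported? experiences)
       (legs-relaxed?     experiences)))
#+end_src
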
Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.

My program, =EMPATH=, uses this empathic problem solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these
#+caption: poses by inferring the complete sensory experience
#+caption: from proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions. If you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach.
If these systems learn about running as viewed from the side, they
will not automatically be able to recognize running from any other
viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG filters to describe the action of
a simple worm-creature ``curling'' so that its head touches its
tail, and then behold the simplicity of describing this action in a
language designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing


** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this thesis. =CORTEX= addresses a need among
AI researchers at CSAIL and beyond: people often invent neat ideas
that are best expressed in the language of creatures and senses, but
in order to explore those ideas they must first build a platform in
which they can create simulated creatures with rich senses! There
are many ideas that would be simple to execute (such as =EMPATH=),
but attached to them is the multi-month effort to make a good
creature simulator. Often, that initial investment of time proves to
be too much, and the project must make do with a lesser environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:

- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load.

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in clojure, a dialect of =LISP= that runs on the
Java Virtual Machine (JVM). The API for creating and simulating
creatures and senses is entirely expressed in clojure, though many
senses are implemented at the layer of jMonkeyEngine or below. For
example, for the sense of hearing I use a layer of clojure code on
top of a layer of java JNI bindings that drive a layer of =C++=
code which implements a modified version of =OpenAL= to support
multiple listeners. =CORTEX= is the only simulation environment
that I know of that can support multiple entities that can each
hear the world from their own perspective. Other senses also
require a small layer of Java code. =CORTEX= also uses =bullet=, a
physics simulator written in =C++=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: worm-blender
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

Here are some things I anticipate that =CORTEX= might be used for:

- exploring new ideas about sensory integration
- distributed communication among swarm creatures
- self-learning using free exploration
- evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
- imagination using subworlds

During one test with =CORTEX=, I created 3,000 creatures, each with
their own independent senses, and ran them all at only 1/80 real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch (more sensitive at
the fingertips), as well as eyes and ears, and it ran at around 1/4
real time.

#+BEGIN_LaTeX
\begin{sidewaysfigure}
\includegraphics[width=9.5in]{images/full-hand.png}
\caption{
I modeled my own right hand in Blender and rigged it with all the
senses that {\tt CORTEX} supports. My simulated hand has a
biologically inspired distribution of touch sensors. The senses are
displayed on the right, and the simulation is displayed on the
left. Notice that my hand is curling its fingers, that it can see
its own finger from the eye in its palm, and that it can feel its
own thumb touching its palm.}
\end{sidewaysfigure}
#+END_LaTeX

** Contributions

- I built =CORTEX=, a comprehensive platform for embodied AI
  experiments. =CORTEX= supports many features lacking in other
  systems, such as proper simulation of hearing. It is easy to
  create new =CORTEX= creatures using Blender, a free 3D modeling
  program.

- I built =EMPATH=, which uses =CORTEX= to identify the actions of
  a worm-like creature using a computational model of empathy.

* Building =CORTEX=

I intend for =CORTEX= to be used as a general purpose library for
building creatures and outfitting them with senses, so that it will
be useful for other researchers who want to test out ideas of their
own. To this end, wherever I have had to make architectural choices
about =CORTEX=, I have chosen to give as much freedom to the user as
possible, so that =CORTEX= may be used for things I have not
foreseen.

** COMMENT Simulation or Reality?

The most important architectural decision of all is the choice to
use a computer-simulated environment in the first place! The world
is a vast and rich place, and for now simulations are a very poor
reflection of its complexity. It may be that there is a significant
qualitative difference between dealing with senses in the real
world and dealing with pale facsimiles of them in a simulation.
What are the advantages and disadvantages of a simulation vs.
reality?

*** Simulation

The advantages of virtual reality are that when everything is a
simulation, experiments in that simulation are absolutely
reproducible. It's also easier to change the character and world
to explore new situations and different sensory combinations.

If the world is to be simulated on a computer, then not only do
you have to worry about whether the character's senses are rich
enough to learn from the world, but whether the world itself is
rendered with enough detail and realism to give enough working
material to the character's senses. To name just a few
difficulties facing modern physics simulators: destructibility of
the environment, simulation of water/other fluids, large areas,
nonrigid bodies, lots of objects, smoke. I don't know of any
computer simulation that would allow a character to take a rock
and grind it into fine dust, then use that dust to make a clay
sculpture, at least not without spending years calculating the
interactions of every single small grain of dust. Maybe a
simulated world with today's limitations doesn't provide enough
richness for real intelligence to evolve.

*** Reality

The other approach for playing with senses is to hook your
software up to real cameras, microphones, robots, etc., and let it
loose in the real world. This has the advantage of eliminating
concerns about simulating the world at the expense of increasing
the complexity of implementing the senses. Instead of just
grabbing the current rendered frame for processing, you have to
use an actual camera with real lenses and interact with photons to
get an image. It is much harder to change the character, which is
now partly a physical robot of some sort, since doing so involves
changing things around in the real world instead of modifying
lines of code. While the real world is very rich and definitely
provides enough stimulation for intelligence to develop as
evidenced by our own existence, it is also uncontrollable in the
sense that a particular situation cannot be recreated perfectly or
saved for later use. It is harder to conduct science because it is
harder to repeat an experiment. The worst thing about using the
real world instead of a simulation is the matter of time. Instead
of simulated time you get the constant and unstoppable flow of
real time. This severely limits the sorts of software you can use
to program the AI because all sense inputs must be handled in real
time. Complicated ideas may have to be implemented in hardware or
may simply be impossible given the current speed of our
processors. Contrast this with a simulation, in which the flow of
time in the simulated world can be slowed down to accommodate the
limitations of the character's programming. In terms of cost,
doing everything in software is far cheaper than building custom
real-time hardware. All you need is a laptop and some patience.

** COMMENT Because of Time, simulation is preferable to reality

I envision =CORTEX= being used to support rapid prototyping and
iteration of ideas. Even if I could put together a well constructed
kit for creating robots, it would still not be enough because of
the scourge of real-time processing. Anyone who wants to test their
ideas in the real world must always worry about getting their
algorithms to run fast enough to process information in real time.
The need for real time processing only increases if multiple senses
are involved. In the extreme case, even simple algorithms will have
to be accelerated by ASIC chips or FPGAs, turning what would
otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the
simulation in proportion to the amount of processing in each frame.
From the perspective of the creatures inside the simulation, time
always appears to flow at a constant rate, regardless of how
complicated the environment becomes or how many creatures are in
the simulation. The cost is that =CORTEX= can sometimes run slower
than real time. This can also be an advantage, however ---
simulations of very simple creatures in =CORTEX= generally run at
40x real time on my machine!

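The essential idea behind time-dilation can be captured in a few
lines. The sketch below is illustrative only and is not the actual
=CORTEX= scheduler; it assumes a hypothetical per-frame
=step-senses!= callback and uses the standard jMonkeyEngine
physics-space =update= method.

#+begin_src clojure
;; Sketch of time-dilation: however long the sensory processing takes
;; in real time, the simulated world advances by the same fixed
;; interval, so creatures perceive a perfectly smooth flow of time.
(def simulated-seconds-per-frame (/ 1 60))

(defn dilated-step!
  "Process all senses, then advance the physics by a fixed timestep.
  (Sketch; step-senses! is a hypothetical per-frame callback.)"
  [physics-space step-senses!]
  (step-senses!)
  (.update physics-space (float simulated-seconds-per-frame)))
#+end_src
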
** COMMENT What is a sense?

If =CORTEX= is to support a wide variety of senses, it would help
to have a better understanding of what a ``sense'' actually is!
While vision, touch, and hearing all seem like they are quite
different things, I was surprised to learn during the course of
this thesis that they (and all physical senses) can be expressed as
exactly the same mathematical object due to a dimensional argument!

Human beings are three-dimensional objects, and the nerves that
transmit data from our various sense organs to our brain are
essentially one-dimensional. This leaves up to two dimensions in
which our sensory information may flow. For example, imagine your
skin: it is a two-dimensional surface around a three-dimensional
object (your body). It has discrete touch sensors embedded at
various points, and the density of these sensors corresponds to the
sensitivity of that region of skin. Each touch sensor connects to a
nerve, all of which eventually are bundled together as they travel
up the spinal cord to the brain. Intersect the spinal nerves with a
guillotining plane and you will see all of the sensory data of the
skin revealed in a roughly circular two-dimensional image which is
the cross section of the spinal cord. Points on this image that are
close together in this circle represent touch sensors that are
/probably/ close together on the skin, although there is of course
some cutting and rearrangement that has to be done to transfer the
complicated surface of the skin onto a two dimensional image.

Most human senses consist of many discrete sensors of various
properties distributed along a surface at various densities. For
skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
disks, and Ruffini's endings, which detect pressure and vibration
of various intensities. For ears, it is the stereocilia distributed
along the basilar membrane inside the cochlea; each one is
sensitive to a slightly different frequency of sound. For eyes, it
is rods and cones distributed along the surface of the retina. In
each case, we can describe the sense with a surface and a
distribution of sensors along that surface.

The neat idea is that every human sense can be effectively
described in terms of a surface containing embedded sensors. If the
sense had any more dimensions, then there wouldn't be enough room
in the spinal cord to transmit the information!

Therefore, =CORTEX= must support the ability to create objects and
then be able to ``paint'' points along their surfaces to describe
each sense.

Fortunately this idea is already a well known computer graphics
technique called /UV-mapping/. The three-dimensional surface
of a model is cut and smooshed until it fits on a two-dimensional
image. You paint whatever you want on that image, and when the
three-dimensional shape is rendered in a game the smooshing and
cutting is reversed and the image appears on the three-dimensional
object.

To make a sense, interpret the UV-image as describing the
distribution of that sense's sensors. To get different types of
sensors, you can either use a different color for each type of
sensor, or use multiple UV-maps, each labeled with that sensor
type. I generally use a white pixel to mean the presence of a
sensor and a black pixel to mean the absence of a sensor, and use
one UV-map for each sensor-type within a given sense.

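A minimal sketch of reading such a UV-map appears below; it is not
the actual =CORTEX= touch implementation, just an illustration of
the white-pixel convention using plain Java image I/O.

#+begin_src clojure
;; Sketch: interpret every white pixel of a UV-map image as the UV
;; coordinate of one sensor.  Illustrative only.
(import '(javax.imageio ImageIO)
        '(java.io File))

(defn white-coordinates
  "Return the [x y] coordinates of every white pixel in an image file."
  [image-path]
  (let [image (ImageIO/read (File. image-path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
      [x y])))
#+end_src
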
#+caption: The UV-map for an elongated icosphere. The white
#+caption: dots each represent a touch sensor. They are dense
#+caption: in the regions that describe the tip of the finger,
#+caption: and less dense along the dorsal side of the finger
#+caption: opposite the tip.
#+name: finger-UV
#+ATTR_latex: :width 10cm
[[./images/finger-UV.png]]

#+caption: Ventral side of the UV-mapped finger. Notice the
#+caption: density of touch sensors at the tip.
#+name: finger-side-view
#+ATTR_LaTeX: :width 10cm
[[./images/finger-1.png]]

** COMMENT Video game engines are a great starting point

I did not need to write my own physics simulation code or shader to
build =CORTEX=. Doing so would lead to a system that is impossible
for anyone but myself to use anyway. Instead, I use a video game
engine as a base and modify it to accommodate the additional needs
of =CORTEX=. Video game engines are an ideal starting point to
build =CORTEX=, because they are not far from being creature
building systems themselves.

First off, general purpose video game engines come with a physics
engine and lighting / sound system. The physics system provides
tools that can be co-opted to serve as touch, proprioception, and
muscles. Since some games support split screen views, a good video
game engine will allow you to efficiently create multiple cameras
in the simulated world that can be used as eyes. Video game systems
offer integrated asset management for things like textures and
creature models, providing an avenue for defining creatures. They
also understand UV-mapping, since this technique is used to apply a
texture to a model. Finally, because video game engines support a
large number of users, as long as =CORTEX= doesn't stray too far
from the base system, other researchers can turn to this community
for help when doing their research.

** COMMENT =CORTEX= is based on jMonkeyEngine3

While preparing to build =CORTEX= I studied several video game
engines to see which would best serve as a base. The top contenders
were:

- [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by ID
     software in 1997. All the source code was released by ID
     software into the Public Domain several years ago, and as a
     result it has been ported to many different languages. This
     engine was famous for its advanced use of realistic shading
     and had decent and fast physics simulation. The main advantage
     of the Quake II engine is its simplicity, but I ultimately
     rejected it because the engine is too tied to the concept of a
     first-person shooter game. One of the problems I had was that
     there does not seem to be any easy way to attach multiple
     cameras to a single character. There are also several physics
     clipping issues that are corrected in a way that only applies
     to the main character and does not apply to arbitrary objects.

- [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
     and Quake I engines and is used by Valve in the Half-Life
     series of games. The physics simulation in the Source Engine
     is quite accurate and probably the best out of all the engines
     I investigated. There is also an extensive community actively
     working with the engine. However, applications that use the
     Source Engine must be written in C++, the code is not open, it
     only runs on Windows, and the tools that come with the SDK to
     handle models and textures are complicated and awkward to use.

- [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
     games in Java. It uses OpenGL to render to the screen and uses
     scene graphs to avoid drawing things that do not appear on the
     screen. It has an active community and several games in the
     pipeline. The engine was not built to serve any particular
     game but is instead meant to be used for any 3D game.

I chose jMonkeyEngine3 because it had the most features
out of all the free projects I looked at, and because I could then
write my code in clojure, an implementation of =LISP= that runs on
the JVM.

** COMMENT =CORTEX= uses Blender to create creature models

For the simple worm-like creatures I will use later on in this
thesis, I could define a simple API in =CORTEX= that would allow
one to create boxes, spheres, etc., and leave that API as the sole
way to create creatures. However, for =CORTEX= to truly be useful
for other projects, it needs a way to construct complicated
creatures. If possible, it would be nice to leverage work that has
already been done by the community of 3D modelers, or at least
enable people who are talented at modeling but not programming to
design =CORTEX= creatures.

Therefore, I use Blender, a free 3D modeling program, as the main
way to create creatures in =CORTEX=. However, the creatures modeled
in Blender must also be simple to simulate in jMonkeyEngine3's game
engine, and must also be easy to rig with =CORTEX='s senses. I
accomplish this with extensive use of Blender's ``empty nodes.''

Empty nodes have no mass, physical presence, or appearance, but
they can hold metadata and have names. I use a tree structure of
empty nodes to specify senses in the following manner:

- Create a single top-level empty node whose name is the name of
  the sense.
- Add empty nodes which each contain meta-data relevant to the
  sense, including a UV-map describing the number/distribution of
  sensors if applicable.
- Make each empty-node the child of the top-level node.

#+caption: An example of annotating a creature model with empty
#+caption: nodes to describe the layout of senses. There are
#+caption: multiple empty nodes which each describe the position
#+caption: of muscles, ears, eyes, or joints.
#+name: sense-nodes
#+ATTR_LaTeX: :width 10cm
[[./images/empty-sense-nodes.png]]

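To make the meta-data concrete, here is a hedged sketch of how the
information stored on one of these empty nodes might be read back.
The =meta-data= accessor is the same one used in the listings later
in this chapter; the particular key and example value shown here are
only illustrative, not a fixed schema.

#+begin_src clojure
;; Illustrative sketch: an empty node can carry a string of clojure
;; data under a named key, which is read back with =meta-data=.
(defn node-constraints
  "Read and parse the clojure map stored on an empty node (sketch)."
  [#^Node empty-node field]
  (when-let [data (meta-data empty-node field)]
    (read-string data)))

;; (node-constraints some-joint-node "joint")
;; ;; => {:type :hinge, :limit [0 (/ Math/PI 2)], :axis (Vector3f. 0 1 0)}
#+end_src
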
** COMMENT Bodies are composed of segments connected by joints

Blender is a general purpose animation tool, which has been used in
the past to create high quality movies such as Sintel
\cite{sintel}. Though Blender can model and render even complicated
things like water, it is crucial to keep models that are meant to
be simulated as creatures simple. =Bullet=, which =CORTEX= uses
through jMonkeyEngine3, is a rigid-body physics system. This offers
a compromise between the expressiveness of a game level and the
speed at which it can be simulated, and it means that creatures
should be naturally expressed as rigid components held together by
joint constraints.

But humans are more like a squishy bag wrapped around some
hard bones which define the overall shape. When we move, our skin
bends and stretches to accommodate the new positions of our bones.

One way to make bodies composed of rigid pieces connected by joints
/seem/ more human-like is to use an /armature/ (or /rigging/)
system, which defines an overall ``body mesh'' and defines how the
mesh deforms as a function of the position of each ``bone'' which
is a standard rigid body. This technique is used extensively to
model humans and create realistic animations. It is not a good
technique for physical simulation, however, because it creates a lie
-- the skin is not a physical part of the simulation and does not
interact with any objects in the world or itself. Objects will pass
right through the skin until they come in contact with the
underlying bone, which is a physical object. Without simulating
the skin, the sense of touch has little meaning, and the creature's
own vision will lie to it about the true extent of its body.
Simulating the skin as a physical object requires some way to
continuously update the physical model of the skin along with the
movement of the bones, which is unacceptably slow compared to rigid
body simulation.

Therefore, instead of using the human-like ``deformable bag of
bones'' approach, I decided to base my body plans on multiple solid
objects that are connected by joints, inspired by the robot =EVE=
from the movie WALL-E.

#+caption: =EVE= from the movie WALL-E. This body plan turns
#+caption: out to be much better suited to my purposes than a more
#+caption: human-like one.
#+ATTR_LaTeX: :width 10cm
[[./images/Eve.jpg]]

=EVE='s body is composed of several rigid components that are held
together by invisible joint constraints. This is what I mean by
``eve-like''. The main reason that I use eve-style bodies is for
efficiency, and so that there will be correspondence between the
AI's senses and the physical presence of its body. Each individual
section is simulated by a separate rigid body that corresponds
exactly with its visual representation and does not change.
Sections are connected by invisible joints that are well supported
in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
can efficiently simulate hundreds of rigid bodies connected by
joints. Just because sections are rigid does not mean they have to
stay as one piece forever; they can be dynamically replaced with
multiple sections to simulate splitting in two. This could be used
to simulate retractable claws or =EVE='s hands, which are able to
coalesce into one object in the movie.

*** Solidifying/Connecting a body

=CORTEX= creates a creature in two steps: first, it traverses the
nodes in the blender file and creates physical representations for
any of them that have mass defined in their blender meta-data.

#+caption: Program for iterating through the nodes in a blender file
#+caption: and generating physical jMonkeyEngine3 objects with mass
#+caption: and a matching physics shape.
#+name: physical
#+begin_listing clojure
#+begin_src clojure
(defn physical!
  "Iterate through the nodes in creature and make them real physical
  objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
#+end_src
#+end_listing

The next step to making a proper body is to connect those pieces
together with joints. jMonkeyEngine has a large array of joints
available via =bullet=, such as Point2Point, Cone, Hinge, and a
generic Six Degree of Freedom joint, with or without spring
restitution.

Joints are treated a lot like proper senses, in that there is a
top-level empty node named ``joints'' whose children each
represent a joint.

#+caption: View of the hand model in Blender showing the main ``joints''
#+caption: node (highlighted in yellow) and its children which each
#+caption: represent a joint in the hand. Each joint node has metadata
#+caption: specifying what sort of joint it is.
#+name: blender-hand
#+ATTR_LaTeX: :width 10cm
[[./images/hand-screenshot1.png]]


=CORTEX='s procedure for binding the creature together with joints
is as follows:

- Find the children of the ``joints'' node.
- Determine the two spatials the joint is meant to connect.
- Create the joint based on the meta-data of the empty node.

The higher order function =sense-nodes= from =cortex.sense=
simplifies finding the joints based on their parent ``joints''
node.

#+caption: Retrieving the children empty nodes from a single
#+caption: named empty node is a common pattern in =CORTEX=;
#+caption: further instances of this technique for the senses
#+caption: will be omitted.
#+name: get-empty-nodes
#+begin_listing clojure
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
  children are considered markers for an instance of that sense. This
  function generates functions to find those children, given the name
  of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
#+end_src
#+end_listing

To find a joint's targets, =CORTEX= creates a small cube, centered
around the empty-node, and grows the cube exponentially until it
intersects two physical objects. The objects are ordered according
to the joint's rotation, with the first one being the object that
has more negative coordinates in the joint's reference frame.
Since the objects must be physical, the empty-node itself escapes
detection. Because the objects must be physical, =joint-targets=
must be called /after/ =physical!= is called.

#+caption: Program to find the targets of a joint node by
#+caption: exponential growth of a search cube.
#+name: joint-targets
#+begin_listing clojure
#+begin_src clojure
(defn joint-targets
  "Return the two closest objects to the joint object, ordered
  from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
#+end_src
#+end_listing

Once =CORTEX= finds all joints and targets, it creates them using
a dispatch on the metadata of each joint node.

#+caption: Program to dispatch on blender metadata and create joints
#+caption: suitable for physical simulation.
#+name: joint-dispatch
#+begin_listing clojure
#+begin_src clojure
(defmulti joint-dispatch
  "Translate blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
#+end_src
#+end_listing

All that is left for joints is to combine the above pieces into
something that can operate on the collection of nodes that a
blender file represents.

#+caption: Program to completely create a joint given information
#+caption: from a blender file.
#+name: connect
#+begin_listing clojure
#+begin_src clojure
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
  'joint. The type of joint is determined by the metadata on 'joint.

  Here are some examples:
  {:type :point}
  {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
  (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

  {:type :cone :limit-xz 0
               :limit-xy 0
               :twist 0}  (use XZY rotation mode in blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
#+end_src
#+end_listing

In general, whenever =CORTEX= exposes a sense (or in this case
physicality), it provides a function of the type =sense!=, which
takes in a collection of nodes and augments it to support that
sense. The function returns any controls necessary to use that
sense. In this case =body!= creates a physical body and returns no
control functions.

#+caption: Program to give joints to a creature.
#+name: body
#+begin_listing clojure
#+begin_src clojure
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
  joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
  particulars of the joints and the masses of each body part are
  determined in blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
#+end_src
#+end_listing
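
A typical use of these functions looks like the sketch below;
=load-blender-model= stands for whatever loader brings the Blender
file into the scene graph, and the file path is only an example.

#+begin_src clojure
;; Sketch of typical usage (illustrative file path and loader name):
;; load a creature model, then give it a physical body and joints.
(def hand (load-blender-model "Models/test-creature/hand.blend"))
(body! hand)
#+end_src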
|
rlm@466
|
941
|
rlm@469
|
942 All of the code you have just seen amounts to only 130 lines, yet
|
rlm@469
|
943 because it builds on top of Blender and jMonkeyEngine3, those few
|
rlm@469
|
944 lines pack quite a punch!
|
rlm@466
|
945
|
rlm@469
|
946 The hand from figure \ref{blender-hand}, which was modeled after
|
rlm@469
|
947 my own right hand, can now be given joints and simulated as a
|
rlm@469
|
948 creature.
|
rlm@466
|
949
|
rlm@469
|
950 #+caption: With the ability to create physical creatures from blender,
|
rlm@469
|
951 #+caption: =CORTEX= gets one step closer to becomming a full creature
|
rlm@469
|
952 #+caption: simulation environment.
|
rlm@469
|
953 #+name: name
|
rlm@469
|
954 #+ATTR_LaTeX: :width 15cm
|
rlm@469
|
955 [[./images/physical-hand.png]]
|
rlm@468
|
956
|
rlm@472
|
957 ** COMMENT Eyes reuse standard video game components
|
rlm@436
|
958
|
rlm@470
|
959 Vision is one of the most important senses for humans, so I need to
|
rlm@470
|
960 build a simulated sense of vision for my AI. I will do this with
|
rlm@470
|
961 simulated eyes. Each eye can be independently moved and should see
|
rlm@470
|
962 its own version of the world depending on where it is.
|
rlm@470
|
963
|
rlm@470
|
964 Making these simulated eyes a reality is simple because
|
rlm@470
|
965 jMonkeyEngine already contains extensive support for multiple views
|
rlm@470
|
of the same 3D simulated world. jMonkeyEngine has this support
because it is necessary to create games with split-screen views.
Multiple views are also used to create
|
rlm@470
|
969 efficient pseudo-reflections by rendering the scene from a certain
|
rlm@470
|
970 perspective and then projecting it back onto a surface in the 3D
|
rlm@470
|
971 world.
|
rlm@470
|
972
|
rlm@470
|
973 #+caption: jMonkeyEngine supports multiple views to enable
|
rlm@470
|
974 #+caption: split-screen games, like GoldenEye, which was one of
|
rlm@470
|
975 #+caption: the first games to use split-screen views.
|
rlm@470
|
976 #+name: name
|
rlm@470
|
977 #+ATTR_LaTeX: :width 10cm
|
rlm@470
|
978 [[./images/goldeneye-4-player.png]]
|
rlm@470
|
979
|
rlm@470
|
980 *** A Brief Description of jMonkeyEngine's Rendering Pipeline
|
rlm@470
|
981
|
rlm@470
|
982 jMonkeyEngine allows you to create a =ViewPort=, which represents a
|
rlm@470
|
983 view of the simulated world. You can create as many of these as you
|
rlm@470
|
984 want. Every frame, the =RenderManager= iterates through each
|
rlm@470
|
985 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
|
rlm@470
|
986 is a =FrameBuffer= which represents the rendered image in the GPU.
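
For orientation, here is a minimal interop sketch of that pipeline.
The names =render-manager=, =cam=, and =root-node= stand for whatever
application objects happen to be in scope; the method calls
themselves are standard jMonkeyEngine3 API.

#+begin_src clojure
;; Sketch: create an additional ViewPort that renders root-node
;; through cam. A SceneProcessor attached to this ViewPort will see
;; its FrameBuffer every frame.
(let [view-port (.createMainView render-manager "extra-view" cam)]
  (.setClearFlags view-port true true true) ; clear color/depth/stencil
  (.attachScene view-port root-node)
  view-port)
#+end_src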
|
rlm@470
|
987
|
rlm@470
|
988 #+caption: =ViewPorts= are cameras in the world. During each frame,
|
rlm@470
|
989 #+caption: the =RenderManager= records a snapshot of what each view
|
rlm@470
|
990 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
|
rlm@470
|
991 #+name: name
|
rlm@470
|
992 #+ATTR_LaTeX: :width 10cm
|
rlm@470
|
993 [[../images/diagram_rendermanager2.png]]
|
rlm@470
|
994
|
rlm@470
|
995 Each =ViewPort= can have any number of attached =SceneProcessor=
|
rlm@470
|
996 objects, which are called every time a new frame is rendered. A
|
rlm@470
|
997 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
|
rlm@470
|
998 whatever it wants to the data. Often this consists of invoking GPU
|
rlm@470
|
999 specific operations on the rendered image. The =SceneProcessor= can
|
rlm@470
|
1000 also copy the GPU image data to RAM and process it with the CPU.
|
rlm@470
|
1001
|
rlm@470
|
1002 *** Appropriating Views for Vision
|
rlm@470
|
1003
|
rlm@470
|
1004 Each eye in the simulated creature needs its own =ViewPort= so
|
rlm@470
|
1005 that it can see the world from its own perspective. To this
|
rlm@470
|
1006 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
|
rlm@470
|
1007 any arbitrary continuation function for further processing. That
|
rlm@470
|
1008 continuation function may perform both CPU and GPU operations on
|
rlm@470
|
1009 the data. To make this easy for the continuation function, the
|
rlm@470
|
1010 =SceneProcessor= maintains appropriately sized buffers in RAM to
|
rlm@470
|
1011 hold the data. It does not do any copying from the GPU to the CPU
|
rlm@470
|
1012 itself because it is a slow operation.
|
rlm@470
|
1013
|
rlm@470
|
#+caption: Function to make the rendered scene in jMonkeyEngine
|
rlm@470
|
1015 #+caption: available for further processing.
|
rlm@470
|
1016 #+name: pipeline-1
|
rlm@470
|
1017 #+begin_listing clojure
|
rlm@470
|
1018 #+begin_src clojure
|
rlm@470
|
1019 (defn vision-pipeline
|
rlm@470
|
1020 "Create a SceneProcessor object which wraps a vision processing
|
rlm@470
|
1021 continuation function. The continuation is a function that takes
|
rlm@470
|
1022 [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
|
rlm@470
|
1023 each of which has already been appropriately sized."
|
rlm@470
|
1024 [continuation]
|
rlm@470
|
1025 (let [byte-buffer (atom nil)
|
rlm@470
|
1026 renderer (atom nil)
|
rlm@470
|
1027 image (atom nil)]
|
rlm@470
|
1028 (proxy [SceneProcessor] []
|
rlm@470
|
1029 (initialize
|
rlm@470
|
1030 [renderManager viewPort]
|
rlm@470
|
1031 (let [cam (.getCamera viewPort)
|
rlm@470
|
1032 width (.getWidth cam)
|
rlm@470
|
1033 height (.getHeight cam)]
|
rlm@470
|
1034 (reset! renderer (.getRenderer renderManager))
|
rlm@470
|
1035 (reset! byte-buffer
|
rlm@470
|
1036 (BufferUtils/createByteBuffer
|
rlm@470
|
1037 (* width height 4)))
|
rlm@470
|
1038 (reset! image (BufferedImage.
|
rlm@470
|
1039 width height
|
rlm@470
|
1040 BufferedImage/TYPE_4BYTE_ABGR))))
|
rlm@470
|
1041 (isInitialized [] (not (nil? @byte-buffer)))
|
rlm@470
|
1042 (reshape [_ _ _])
|
rlm@470
|
1043 (preFrame [_])
|
rlm@470
|
1044 (postQueue [_])
|
rlm@470
|
1045 (postFrame
|
rlm@470
|
1046 [#^FrameBuffer fb]
|
rlm@470
|
1047 (.clear @byte-buffer)
|
rlm@470
|
1048 (continuation @renderer fb @byte-buffer @image))
|
rlm@470
|
1049 (cleanup []))))
|
rlm@470
|
1050 #+end_src
|
rlm@470
|
1051 #+end_listing
|
rlm@470
|
1052
|
rlm@470
|
1053 The continuation function given to =vision-pipeline= above will be
|
rlm@470
|
1054 given a =Renderer= and three containers for image data. The
|
rlm@470
|
1055 =FrameBuffer= references the GPU image data, but the pixel data
|
rlm@470
|
1056 can not be used directly on the CPU. The =ByteBuffer= and
|
rlm@470
|
1057 =BufferedImage= are initially "empty" but are sized to hold the
|
rlm@470
|
1058 data in the =FrameBuffer=. I call transferring the GPU image data
|
rlm@470
|
1059 to the CPU structures "mixing" the image data.
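
As a sketch of what such a continuation might do (the helper name
=mix-image= is mine, and presumably corresponds to the
=BufferedImage!= helper used later; the two jMonkeyEngine calls do
the actual GPU-to-CPU copy):

#+begin_src clojure
;; Sketch: "mix" the rendered frame into the pre-sized CPU buffers.
(defn mix-image
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (.readFrameBuffer r fb bb)                          ; GPU -> ByteBuffer
  (com.jme3.util.Screenshots/convertScreenShot bb bi) ; bytes -> image
  bi)
#+end_src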
|
rlm@470
|
1060
|
rlm@470
|
1061 *** Optical sensor arrays are described with images and referenced with metadata
|
rlm@470
|
1062
|
rlm@470
|
1063 The vision pipeline described above handles the flow of rendered
|
rlm@470
|
1064 images. Now, =CORTEX= needs simulated eyes to serve as the source
|
rlm@470
|
1065 of these images.
|
rlm@470
|
1066
|
rlm@470
|
An eye is described in blender in the same way as a joint: it is a
zero dimensional empty object with no geometry whose local
coordinate system determines the orientation of the resulting eye.
|
rlm@470
|
1070 All eyes are children of a parent node named "eyes" just as all
|
rlm@470
|
1071 joints have a parent named "joints". An eye binds to the nearest
|
rlm@470
|
1072 physical object with =bind-sense=.
|
rlm@470
|
1073
|
rlm@470
|
1074 #+caption: Here, the camera is created based on metadata on the
|
rlm@470
|
1075 #+caption: eye-node and attached to the nearest physical object
|
rlm@470
|
1076 #+caption: with =bind-sense=
|
rlm@470
|
1077 #+name: add-eye
|
rlm@470
|
1078 #+begin_listing clojure
|
rlm@470
|
1079 (defn add-eye!
|
rlm@470
|
1080 "Create a Camera centered on the current position of 'eye which
|
rlm@470
|
1081 follows the closest physical node in 'creature. The camera will
|
rlm@470
|
1082 point in the X direction and use the Z vector as up as determined
|
rlm@470
|
1083 by the rotation of these vectors in blender coordinate space. Use
|
rlm@470
|
1084 XZY rotation for the node in blender."
|
rlm@470
|
1085 [#^Node creature #^Spatial eye]
|
rlm@470
|
1086 (let [target (closest-node creature eye)
|
rlm@470
|
1087 [cam-width cam-height]
|
rlm@470
|
1088 ;;[640 480] ;; graphics card on laptop doesn't support
|
rlm@470
|
;; arbitrary dimensions.
|
rlm@470
|
1090 (eye-dimensions eye)
|
rlm@470
|
1091 cam (Camera. cam-width cam-height)
|
rlm@470
|
1092 rot (.getWorldRotation eye)]
|
rlm@470
|
1093 (.setLocation cam (.getWorldTranslation eye))
|
rlm@470
|
1094 (.lookAtDirection
|
rlm@470
|
1095 cam ; this part is not a mistake and
|
rlm@470
|
1096 (.mult rot Vector3f/UNIT_X) ; is consistent with using Z in
|
rlm@470
|
1097 (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
|
rlm@470
|
1098 (.setFrustumPerspective
|
rlm@470
|
1099 cam (float 45)
|
rlm@470
|
1100 (float (/ (.getWidth cam) (.getHeight cam)))
|
rlm@470
|
1101 (float 1)
|
rlm@470
|
1102 (float 1000))
|
rlm@470
|
1103 (bind-sense target cam) cam))
|
rlm@470
|
1104 #+end_listing
|
rlm@470
|
1105
|
rlm@470
|
1106 *** Simulated Retina
|
rlm@470
|
1107
|
rlm@470
|
1108 An eye is a surface (the retina) which contains many discrete
|
rlm@470
|
1109 sensors to detect light. These sensors can have different
|
rlm@470
|
1110 light-sensing properties. In humans, each discrete sensor is
|
rlm@470
|
1111 sensitive to red, blue, green, or gray. These different types of
|
rlm@470
|
1112 sensors can have different spatial distributions along the retina.
|
rlm@470
|
1113 In humans, there is a fovea in the center of the retina which has
|
rlm@470
|
1114 a very high density of color sensors, and a blind spot which has
|
rlm@470
|
1115 no sensors at all. Sensor density decreases in proportion to
|
rlm@470
|
1116 distance from the fovea.
|
rlm@470
|
1117
|
rlm@470
|
1118 I want to be able to model any retinal configuration, so my
|
rlm@470
|
1119 eye-nodes in blender contain metadata pointing to images that
|
rlm@470
|
1120 describe the precise position of the individual sensors using
|
rlm@470
|
1121 white pixels. The meta-data also describes the precise sensitivity
|
rlm@470
|
1122 to light that the sensors described in the image have. An eye can
|
rlm@470
|
1123 contain any number of these images. For example, the metadata for
|
rlm@470
|
1124 an eye might look like this:
|
rlm@470
|
1125
|
rlm@470
|
1126 #+begin_src clojure
|
rlm@470
|
1127 {0xFF0000 "Models/test-creature/retina-small.png"}
|
rlm@470
|
1128 #+end_src
|
rlm@470
|
1129
|
rlm@470
|
1130 #+caption: An example retinal profile image. White pixels are
|
rlm@470
|
1131 #+caption: photo-sensitive elements. The distribution of white
|
rlm@470
|
1132 #+caption: pixels is denser in the middle and falls off at the
|
rlm@470
|
1133 #+caption: edges and is inspired by the human retina.
|
rlm@470
|
1134 #+name: retina
|
rlm@470
|
1135 #+ATTR_LaTeX: :width 10cm
|
rlm@470
|
1136 [[./images/retina-small.png]]
|
rlm@470
|
1137
|
rlm@470
|
Together, the number 0xFF0000 and the image above describe
|
rlm@470
|
1139 the placement of red-sensitive sensory elements.
|
rlm@470
|
1140
|
rlm@470
|
1141 Meta-data to very crudely approximate a human eye might be
|
rlm@470
|
1142 something like this:
|
rlm@470
|
1143
|
rlm@470
|
1144 #+begin_src clojure
|
rlm@470
|
1145 (let [retinal-profile "Models/test-creature/retina-small.png"]
|
rlm@470
|
1146 {0xFF0000 retinal-profile
|
rlm@470
|
1147 0x00FF00 retinal-profile
|
rlm@470
|
1148 0x0000FF retinal-profile
|
rlm@470
|
1149 0xFFFFFF retinal-profile})
|
rlm@470
|
1150 #+end_src
|
rlm@470
|
1151
|
rlm@470
|
1152 The numbers that serve as keys in the map determine a sensor's
|
rlm@470
|
1153 relative sensitivity to the channels red, green, and blue. These
|
rlm@470
|
1154 sensitivity values are packed into an integer in the order
|
rlm@470
|
1155 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
|
rlm@470
|
1156 image are added together with these sensitivities as linear
|
rlm@470
|
1157 weights. Therefore, 0xFF0000 means sensitive to red only while
|
rlm@470
|
1158 0xFFFFFF means sensitive to all colors equally (gray).
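
A small sketch of this weighting (the real =pixel-sense= used later
may differ in details such as normalization; this only illustrates
the packing described above):

#+begin_src clojure
;; Sketch: weight a pixel's R,G,B bytes by the 8-bit fields of the
;; sensitivity integer and collapse them into one activation in [0,1].
(defn pixel-sense-sketch [sensitivity pixel]
  (let [field    (fn [x shift] (bit-and 0xFF (bit-shift-right x shift)))
        weights  (map #(/ (field sensitivity %) 255.0) [16 8 0])
        channels (map #(/ (field pixel %)       255.0) [16 8 0])]
    (/ (reduce + (map * weights channels))
       (max 1e-6 (reduce + weights)))))

;; (pixel-sense-sketch 0xFF0000 0x00FF00) => 0.0  (red sensor, green pixel)
;; (pixel-sense-sketch 0xFFFFFF 0x808080) => ~0.5 (gray sensor, mid-gray pixel)
#+end_src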
|
rlm@470
|
1159
|
rlm@470
|
1160 #+caption: This is the core of vision in =CORTEX=. A given eye node
|
rlm@470
|
1161 #+caption: is converted into a function that returns visual
|
rlm@470
|
1162 #+caption: information from the simulation.
|
rlm@471
|
1163 #+name: vision-kernel
|
rlm@470
|
1164 #+begin_listing clojure
|
rlm@470
|
1165 (defn vision-kernel
|
rlm@470
|
1166 "Returns a list of functions, each of which will return a color
|
rlm@470
|
1167 channel's worth of visual information when called inside a running
|
rlm@470
|
1168 simulation."
|
rlm@470
|
1169 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
|
rlm@470
|
1170 (let [retinal-map (retina-sensor-profile eye)
|
rlm@470
|
1171 camera (add-eye! creature eye)
|
rlm@470
|
1172 vision-image
|
rlm@470
|
1173 (atom
|
rlm@470
|
1174 (BufferedImage. (.getWidth camera)
|
rlm@470
|
1175 (.getHeight camera)
|
rlm@470
|
1176 BufferedImage/TYPE_BYTE_BINARY))
|
rlm@470
|
1177 register-eye!
|
rlm@470
|
1178 (runonce
|
rlm@470
|
1179 (fn [world]
|
rlm@470
|
1180 (add-camera!
|
rlm@470
|
1181 world camera
|
rlm@470
|
1182 (let [counter (atom 0)]
|
rlm@470
|
1183 (fn [r fb bb bi]
|
rlm@470
|
1184 (if (zero? (rem (swap! counter inc) (inc skip)))
|
rlm@470
|
1185 (reset! vision-image
|
rlm@470
|
1186 (BufferedImage! r fb bb bi))))))))]
|
rlm@470
|
1187 (vec
|
rlm@470
|
1188 (map
|
rlm@470
|
1189 (fn [[key image]]
|
rlm@470
|
1190 (let [whites (white-coordinates image)
|
rlm@470
|
1191 topology (vec (collapse whites))
|
rlm@470
|
1192 sensitivity (sensitivity-presets key key)]
|
rlm@470
|
1193 (attached-viewport.
|
rlm@470
|
1194 (fn [world]
|
rlm@470
|
1195 (register-eye! world)
|
rlm@470
|
1196 (vector
|
rlm@470
|
1197 topology
|
rlm@470
|
1198 (vec
|
rlm@470
|
1199 (for [[x y] whites]
|
rlm@470
|
1200 (pixel-sense
|
rlm@470
|
1201 sensitivity
|
rlm@470
|
1202 (.getRGB @vision-image x y))))))
|
rlm@470
|
1203 register-eye!)))
|
rlm@470
|
1204 retinal-map))))
|
rlm@470
|
1205 #+end_listing
|
rlm@470
|
1206
|
rlm@470
|
1207 Note that since each of the functions generated by =vision-kernel=
|
rlm@470
|
1208 shares the same =register-eye!= function, the eye will be
|
rlm@470
|
1209 registered only once the first time any of the functions from the
|
rlm@470
|
1210 list returned by =vision-kernel= is called. Each of the functions
|
rlm@470
|
returned by =vision-kernel= also allows access to the =ViewPort=
|
rlm@470
|
1212 through which it receives images.
|
rlm@470
|
1213
|
rlm@470
|
1214 All the hard work has been done; all that remains is to apply
|
rlm@470
|
1215 =vision-kernel= to each eye in the creature and gather the results
|
rlm@470
|
1216 into one list of functions.
|
rlm@470
|
1217
|
rlm@470
|
1218
|
rlm@470
|
1219 #+caption: With =vision!=, =CORTEX= is already a fine simulation
|
rlm@470
|
1220 #+caption: environment for experimenting with different types of
|
rlm@470
|
1221 #+caption: eyes.
|
rlm@470
|
1222 #+name: vision!
|
rlm@470
|
1223 #+begin_listing clojure
|
rlm@470
|
1224 (defn vision!
|
rlm@470
|
1225 "Returns a list of functions, each of which returns visual sensory
|
rlm@470
|
1226 data when called inside a running simulation."
|
rlm@470
|
1227 [#^Node creature & {skip :skip :or {skip 0}}]
|
rlm@470
|
1228 (reduce
|
rlm@470
|
1229 concat
|
rlm@470
|
1230 (for [eye (eyes creature)]
|
rlm@470
|
1231 (vision-kernel creature eye))))
|
rlm@470
|
1232 #+end_listing
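
In use, the functions returned by =vision!= are simply called with
the running world each frame. A minimal sketch (assuming =creature=
and =world= are in scope, as they are inside a =CORTEX= simulation
loop):

#+begin_src clojure
;; Sketch: poll every eye channel once. Each call yields
;; [topology sensor-values] for one color channel of one eye.
(def eye-fns (vision! creature))

(defn sense-vision [world]
  (mapv (fn [eye-fn] (eye-fn world)) eye-fns))
#+end_src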
|
rlm@470
|
1233
|
rlm@471
|
1234 #+caption: Simulated vision with a test creature and the
|
rlm@471
|
1235 #+caption: human-like eye approximation. Notice how each channel
|
rlm@471
|
1236 #+caption: of the eye responds differently to the differently
|
rlm@471
|
1237 #+caption: colored balls.
|
rlm@471
|
1238 #+name: worm-vision-test.
|
rlm@471
|
1239 #+ATTR_LaTeX: :width 13cm
|
rlm@471
|
1240 [[./images/worm-vision.png]]
|
rlm@470
|
1241
|
rlm@471
|
1242 The vision code is not much more complicated than the body code,
|
rlm@471
|
1243 and enables multiple further paths for simulated vision. For
|
rlm@471
|
1244 example, it is quite easy to create bifocal vision -- you just
|
rlm@471
|
1245 make two eyes next to each other in blender! It is also possible
|
rlm@471
|
1246 to encode vision transforms in the retinal files. For example, the
|
rlm@471
|
1247 human like retina file in figure \ref{retina} approximates a
|
rlm@471
|
1248 log-polar transform.
|
rlm@470
|
1249
|
rlm@471
|
1250 This vision code has already been absorbed by the jMonkeyEngine
|
rlm@471
|
1251 community and is now (in modified form) part of a system for
|
rlm@471
|
1252 capturing in-game video to a file.
|
rlm@470
|
1253
|
rlm@436
|
1254 ** Hearing is hard; =CORTEX= does it right
|
rlm@436
|
1255
|
rlm@472
|
1256 At the end of this section I will have simulated ears that work the
|
rlm@472
|
1257 same way as the simulated eyes in the last section. I will be able to
|
rlm@472
|
1258 place any number of ear-nodes in a blender file, and they will bind to
|
rlm@472
|
1259 the closest physical object and follow it as it moves around. Each ear
|
rlm@472
|
1260 will provide access to the sound data it picks up between every frame.
|
rlm@472
|
1261
|
rlm@472
|
1262 Hearing is one of the more difficult senses to simulate, because there
|
rlm@472
|
1263 is less support for obtaining the actual sound data that is processed
|
rlm@472
|
1264 by jMonkeyEngine3. There is no "split-screen" support for rendering
|
rlm@472
|
1265 sound from different points of view, and there is no way to directly
|
rlm@472
|
1266 access the rendered sound data.
|
rlm@472
|
1267
|
rlm@472
|
1268 =CORTEX='s hearing is unique because it does not have any
|
rlm@472
|
1269 limitations compared to other simulation environments. As far as I
|
rlm@472
|
know, there is no other system that supports multiple listeners,
|
rlm@472
|
1271 and the sound demo at the end of this section is the first time
|
rlm@472
|
1272 it's been done in a video game environment.
|
rlm@472
|
1273
|
rlm@472
|
1274 *** Brief Description of jMonkeyEngine's Sound System
|
rlm@472
|
1275
|
rlm@472
|
1276 jMonkeyEngine's sound system works as follows:
|
rlm@472
|
1277
|
rlm@472
|
1278 - jMonkeyEngine uses the =AppSettings= for the particular
|
rlm@472
|
1279 application to determine what sort of =AudioRenderer= should be
|
rlm@472
|
1280 used.
|
rlm@472
|
1281 - Although some support is provided for multiple AudioRendering
|
rlm@472
|
1282 backends, jMonkeyEngine at the time of this writing will either
|
rlm@472
|
1283 pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
|
rlm@472
|
1284 - jMonkeyEngine tries to figure out what sort of system you're
|
rlm@472
|
1285 running and extracts the appropriate native libraries.
|
rlm@472
|
1286 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
|
rlm@472
|
1287 Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]]
|
rlm@472
|
1288 - =OpenAL= renders the 3D sound and feeds the rendered sound
|
rlm@472
|
1289 directly to any of various sound output devices with which it
|
rlm@472
|
1290 knows how to communicate.
|
rlm@472
|
1291
|
rlm@472
|
1292 A consequence of this is that there's no way to access the actual
|
rlm@472
|
1293 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
|
rlm@472
|
1294 one /listener/ (it renders sound data from only one perspective),
|
rlm@472
|
1295 which normally isn't a problem for games, but becomes a problem
|
rlm@472
|
1296 when trying to make multiple AI creatures that can each hear the
|
rlm@472
|
1297 world from a different perspective.
|
rlm@472
|
1298
|
rlm@472
|
1299 To make many AI creatures in jMonkeyEngine that can each hear the
|
rlm@472
|
1300 world from their own perspective, or to make a single creature with
|
rlm@472
|
1301 many ears, it is necessary to go all the way back to =OpenAL= and
|
rlm@472
|
1302 implement support for simulated hearing there.
|
rlm@472
|
1303
|
rlm@472
|
*** Extending =OpenAL=
|
rlm@472
|
1305
|
rlm@472
|
1306 Extending =OpenAL= to support multiple listeners requires 500
|
rlm@472
|
1307 lines of =C= code and is too hairy to mention here. Instead, I
|
rlm@472
|
1308 will show a small amount of extension code and go over the high
|
rlm@472
|
level strategy. Full source is of course available with the
|
rlm@472
|
1310 =CORTEX= distribution if you're interested.
|
rlm@472
|
1311
|
rlm@472
|
1312 =OpenAL= goes to great lengths to support many different systems,
|
rlm@472
|
1313 all with different sound capabilities and interfaces. It
|
rlm@472
|
1314 accomplishes this difficult task by providing code for many
|
rlm@472
|
1315 different sound backends in pseudo-objects called /Devices/.
|
rlm@472
|
1316 There's a device for the Linux Open Sound System and the Advanced
|
rlm@472
|
1317 Linux Sound Architecture, there's one for Direct Sound on Windows,
|
rlm@472
|
1318 and there's even one for Solaris. =OpenAL= solves the problem of
|
rlm@472
|
1319 platform independence by providing all these Devices.
|
rlm@472
|
1320
|
rlm@472
|
1321 Wrapper libraries such as LWJGL are free to examine the system on
|
rlm@472
|
1322 which they are running and then select an appropriate device for
|
rlm@472
|
1323 that system.
|
rlm@472
|
1324
|
rlm@472
|
1325 There are also a few "special" devices that don't interface with
|
rlm@472
|
1326 any particular system. These include the Null Device, which
|
rlm@472
|
1327 doesn't do anything, and the Wave Device, which writes whatever
|
rlm@472
|
1328 sound it receives to a file, if everything has been set up
|
rlm@472
|
1329 correctly when configuring =OpenAL=.
|
rlm@472
|
1330
|
rlm@472
|
Actual mixing (Doppler shift and distance/environment-based
|
rlm@472
|
1332 attenuation) of the sound data happens in the Devices, and they
|
rlm@472
|
1333 are the only point in the sound rendering process where this data
|
rlm@472
|
1334 is available.
|
rlm@472
|
1335
|
rlm@472
|
1336 Therefore, in order to support multiple listeners, and get the
|
rlm@472
|
1337 sound data in a form that the AIs can use, it is necessary to
|
rlm@472
|
1338 create a new Device which supports this feature.
|
rlm@472
|
1339
|
rlm@472
|
1340 Adding a device to OpenAL is rather tricky -- there are five
|
rlm@472
|
1341 separate files in the =OpenAL= source tree that must be modified
|
rlm@472
|
1342 to do so. I named my device the "Multiple Audio Send" Device, or
|
rlm@472
|
1343 =Send= Device for short, since it sends audio data back to the
|
rlm@472
|
1344 calling application like an Aux-Send cable on a mixing board.
|
rlm@472
|
1345
|
rlm@472
|
1346 The main idea behind the Send device is to take advantage of the
|
rlm@472
|
1347 fact that LWJGL only manages one /context/ when using OpenAL. A
|
rlm@472
|
1348 /context/ is like a container that holds samples and keeps track
|
rlm@472
|
1349 of where the listener is. In order to support multiple listeners,
|
rlm@472
|
1350 the Send device identifies the LWJGL context as the master
|
rlm@472
|
1351 context, and creates any number of slave contexts to represent
|
rlm@472
|
1352 additional listeners. Every time the device renders sound, it
|
rlm@472
|
1353 synchronizes every source from the master LWJGL context to the
|
rlm@472
|
1354 slave contexts. Then, it renders each context separately, using a
|
rlm@472
|
1355 different listener for each one. The rendered sound is made
|
rlm@472
|
1356 available via JNI to jMonkeyEngine.
|
rlm@472
|
1357
|
rlm@472
|
1358 Switching between contexts is not the normal operation of a
|
rlm@472
|
1359 Device, and one of the problems with doing so is that a Device
|
rlm@472
|
1360 normally keeps around a few pieces of state such as the
|
rlm@472
|
=ClickRemoval= array, which will become corrupted if the
|
rlm@472
|
1362 contexts are not rendered in parallel. The solution is to create a
|
rlm@472
|
1363 copy of this normally global device state for each context, and
|
rlm@472
|
1364 copy it back and forth into and out of the actual device state
|
rlm@472
|
1365 whenever a context is rendered.
|
rlm@472
|
1366
|
rlm@472
|
1367 The core of the =Send= device is the =syncSources= function, which
|
rlm@472
|
1368 does the job of copying all relevant data from one context to
|
rlm@472
|
1369 another.
|
rlm@472
|
1370
|
rlm@472
|
1371 #+caption: Program for extending =OpenAL= to support multiple
|
rlm@472
|
1372 #+caption: listeners via context copying/switching.
|
rlm@472
|
1373 #+name: sync-openal-sources
|
rlm@472
|
1374 #+begin_listing C
|
rlm@472
|
1375 void syncSources(ALsource *masterSource, ALsource *slaveSource,
|
rlm@472
|
1376 ALCcontext *masterCtx, ALCcontext *slaveCtx){
|
rlm@472
|
1377 ALuint master = masterSource->source;
|
rlm@472
|
1378 ALuint slave = slaveSource->source;
|
rlm@472
|
1379 ALCcontext *current = alcGetCurrentContext();
|
rlm@472
|
1380
|
rlm@472
|
1381 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
|
rlm@472
|
1382 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
|
rlm@472
|
1383 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
|
rlm@472
|
1384 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
|
rlm@472
|
1385 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
|
rlm@472
|
1386 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
|
rlm@472
|
1387 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
|
rlm@472
|
1388 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
|
rlm@472
|
1389 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
|
rlm@472
|
1390 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
|
rlm@472
|
1391 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
|
rlm@472
|
1392 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
|
rlm@472
|
1393 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
|
rlm@472
|
1394
|
rlm@472
|
1395 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
|
rlm@472
|
1396 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
|
rlm@472
|
1397 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
|
rlm@472
|
1398
|
rlm@472
|
1399 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
|
rlm@472
|
1400 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
|
rlm@472
|
1401
|
rlm@472
|
1402 alcMakeContextCurrent(masterCtx);
|
rlm@472
|
1403 ALint source_type;
|
rlm@472
|
1404 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
|
rlm@472
|
1405
|
rlm@472
|
1406 // Only static sources are currently synchronized!
|
rlm@472
|
1407 if (AL_STATIC == source_type){
|
rlm@472
|
1408 ALint master_buffer;
|
rlm@472
|
1409 ALint slave_buffer;
|
rlm@472
|
1410 alGetSourcei(master, AL_BUFFER, &master_buffer);
|
rlm@472
|
1411 alcMakeContextCurrent(slaveCtx);
|
rlm@472
|
1412 alGetSourcei(slave, AL_BUFFER, &slave_buffer);
|
rlm@472
|
1413 if (master_buffer != slave_buffer){
|
rlm@472
|
1414 alSourcei(slave, AL_BUFFER, master_buffer);
|
rlm@472
|
1415 }
|
rlm@472
|
1416 }
|
rlm@472
|
1417
|
rlm@472
|
1418 // Synchronize the state of the two sources.
|
rlm@472
|
1419 alcMakeContextCurrent(masterCtx);
|
rlm@472
|
1420 ALint masterState;
|
rlm@472
|
1421 ALint slaveState;
|
rlm@472
|
1422
|
rlm@472
|
1423 alGetSourcei(master, AL_SOURCE_STATE, &masterState);
|
rlm@472
|
1424 alcMakeContextCurrent(slaveCtx);
|
rlm@472
|
1425 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
|
rlm@472
|
1426
|
rlm@472
|
1427 if (masterState != slaveState){
|
rlm@472
|
1428 switch (masterState){
|
rlm@472
|
1429 case AL_INITIAL : alSourceRewind(slave); break;
|
rlm@472
|
1430 case AL_PLAYING : alSourcePlay(slave); break;
|
rlm@472
|
1431 case AL_PAUSED : alSourcePause(slave); break;
|
rlm@472
|
1432 case AL_STOPPED : alSourceStop(slave); break;
|
rlm@472
|
1433 }
|
rlm@472
|
1434 }
|
rlm@472
|
1435 // Restore whatever context was previously active.
|
rlm@472
|
1436 alcMakeContextCurrent(current);
|
rlm@472
|
1437 }
|
rlm@472
|
1438 #+end_listing
|
rlm@472
|
1439
|
rlm@472
|
1440 With this special context-switching device, and some ugly JNI
|
rlm@472
|
1441 bindings that are not worth mentioning, =CORTEX= gains the ability
|
rlm@472
|
1442 to access multiple sound streams from =OpenAL=.
|
rlm@472
|
1443
|
rlm@472
|
1444 #+caption: Program to create an ear from a blender empty node. The ear
|
rlm@472
|
1445 #+caption: follows around the nearest physical object and passes
|
rlm@472
|
1446 #+caption: all sensory data to a continuation function.
|
rlm@472
|
1447 #+name: add-ear
|
rlm@472
|
1448 #+begin_listing clojure
|
rlm@472
|
1449 (defn add-ear!
|
rlm@472
|
1450 "Create a Listener centered on the current position of 'ear
|
rlm@472
|
1451 which follows the closest physical node in 'creature and
|
rlm@472
|
1452 sends sound data to 'continuation."
|
rlm@472
|
1453 [#^Application world #^Node creature #^Spatial ear continuation]
|
rlm@472
|
1454 (let [target (closest-node creature ear)
|
rlm@472
|
1455 lis (Listener.)
|
rlm@472
|
1456 audio-renderer (.getAudioRenderer world)
|
rlm@472
|
1457 sp (hearing-pipeline continuation)]
|
rlm@472
|
1458 (.setLocation lis (.getWorldTranslation ear))
|
rlm@472
|
1459 (.setRotation lis (.getWorldRotation ear))
|
rlm@472
|
1460 (bind-sense target lis)
|
rlm@472
|
1461 (update-listener-velocity! target lis)
|
rlm@472
|
1462 (.addListener audio-renderer lis)
|
rlm@472
|
1463 (.registerSoundProcessor audio-renderer lis sp)))
|
rlm@472
|
1464 #+end_listing
|
rlm@472
|
1465
|
rlm@472
|
1466
|
rlm@472
|
1467 The =Send= device, unlike most of the other devices in =OpenAL=,
|
rlm@472
|
1468 does not render sound unless asked. This enables the system to
|
rlm@472
|
1469 slow down or speed up depending on the needs of the AIs who are
|
rlm@472
|
1470 using it to listen. If the device tried to render samples in
|
rlm@472
|
1471 real-time, a complicated AI whose mind takes 100 seconds of
|
rlm@472
|
1472 computer time to simulate 1 second of AI-time would miss almost
|
rlm@472
|
1473 all of the sound in its environment!
|
rlm@472
|
1474
|
rlm@472
|
1475 #+caption: Program to enable arbitrary hearing in =CORTEX=
|
rlm@472
|
1476 #+name: hearing
|
rlm@472
|
1477 #+begin_listing clojure
|
rlm@472
|
1478 (defn hearing-kernel
|
rlm@472
|
1479 "Returns a function which returns auditory sensory data when called
|
rlm@472
|
1480 inside a running simulation."
|
rlm@472
|
1481 [#^Node creature #^Spatial ear]
|
rlm@472
|
1482 (let [hearing-data (atom [])
|
rlm@472
|
1483 register-listener!
|
rlm@472
|
1484 (runonce
|
rlm@472
|
1485 (fn [#^Application world]
|
rlm@472
|
1486 (add-ear!
|
rlm@472
|
1487 world creature ear
|
rlm@472
|
1488 (comp #(reset! hearing-data %)
|
rlm@472
|
1489 byteBuffer->pulse-vector))))]
|
rlm@472
|
1490 (fn [#^Application world]
|
rlm@472
|
1491 (register-listener! world)
|
rlm@472
|
1492 (let [data @hearing-data
|
rlm@472
|
1493 topology
|
rlm@472
|
1494 (vec (map #(vector % 0) (range 0 (count data))))]
|
rlm@472
|
1495 [topology data]))))
|
rlm@472
|
1496
|
rlm@472
|
1497 (defn hearing!
|
rlm@472
|
1498 "Endow the creature in a particular world with the sense of
|
rlm@472
|
1499 hearing. Will return a sequence of functions, one for each ear,
|
rlm@472
|
1500 which when called will return the auditory data from that ear."
|
rlm@472
|
1501 [#^Node creature]
|
rlm@472
|
1502 (for [ear (ears creature)]
|
rlm@472
|
1503 (hearing-kernel creature ear)))
|
rlm@472
|
1504 #+end_listing
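
Hearing is used the same way as vision: each returned function is
called with the running world and yields =[topology data]= for one
ear. A minimal sketch (assuming =creature= and =world= are in scope):

#+begin_src clojure
;; Sketch: poll every ear once per frame inside a running simulation.
(def ear-fns (hearing! creature))

(defn sense-hearing [world]
  (mapv (fn [ear-fn] (ear-fn world)) ear-fns))
#+end_src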
|
rlm@472
|
1505
|
rlm@472
|
1506 Armed with these functions, =CORTEX= is able to test possibly the
|
rlm@472
|
1507 first ever instance of multiple listeners in a video game engine
|
rlm@472
|
1508 based simulation!
|
rlm@472
|
1509
|
rlm@472
|
1510 #+caption: Here a simple creature responds to sound by changing
|
rlm@472
|
1511 #+caption: its color from gray to green when the total volume
|
rlm@472
|
1512 #+caption: goes over a threshold.
|
rlm@472
|
1513 #+name: sound-test
|
rlm@472
|
1514 #+begin_listing java
|
rlm@472
|
1515 /**
|
rlm@472
|
1516 * Respond to sound! This is the brain of an AI entity that
|
rlm@472
|
1517 * hears its surroundings and reacts to them.
|
rlm@472
|
1518 */
|
rlm@472
|
1519 public void process(ByteBuffer audioSamples,
|
rlm@472
|
1520 int numSamples, AudioFormat format) {
|
rlm@472
|
1521 audioSamples.clear();
|
rlm@472
|
1522 byte[] data = new byte[numSamples];
|
rlm@472
|
1523 float[] out = new float[numSamples];
|
rlm@472
|
1524 audioSamples.get(data);
|
rlm@472
|
1525 FloatSampleTools.
|
rlm@472
|
1526 byte2floatInterleaved
|
rlm@472
|
1527 (data, 0, out, 0, numSamples/format.getFrameSize(), format);
|
rlm@472
|
1528
|
rlm@472
|
1529 float max = Float.NEGATIVE_INFINITY;
|
rlm@472
|
1530 for (float f : out){if (f > max) max = f;}
|
rlm@472
|
1531 audioSamples.clear();
|
rlm@472
|
1532
|
rlm@472
|
1533 if (max > 0.1){
|
rlm@472
|
1534 entity.getMaterial().setColor("Color", ColorRGBA.Green);
|
rlm@472
|
1535 }
|
rlm@472
|
1536 else {
|
rlm@472
|
1537 entity.getMaterial().setColor("Color", ColorRGBA.Gray);
|
rlm@472
|
  }
}
|
rlm@472
|
1539 #+end_listing
|
rlm@472
|
1540
|
rlm@472
|
#+caption: First ever simulation of multiple listeners in =CORTEX=.
|
rlm@472
|
1542 #+caption: Each cube is a creature which processes sound data with
|
rlm@472
|
1543 #+caption: the =process= function from listing \ref{sound-test}.
|
rlm@472
|
#+caption: The ball is constantly emitting a pure tone of
|
rlm@472
|
1545 #+caption: constant volume. As it approaches the cubes, they each
|
rlm@472
|
1546 #+caption: change color in response to the sound.
|
rlm@472
|
1547 #+name: sound-cubes.
|
rlm@472
|
1548 #+ATTR_LaTeX: :width 10cm
|
rlm@472
|
1549 [[./images/aurellem-gray.png]]
|
rlm@472
|
1550
|
rlm@472
|
1551 This system of hearing has also been co-opted by the
|
rlm@472
|
1552 jMonkeyEngine3 community and is used to record audio for demo
|
rlm@472
|
1553 videos.
|
rlm@472
|
1554
|
rlm@436
|
1555 ** Touch uses hundreds of hair-like elements
|
rlm@436
|
1556
|
rlm@440
|
1557 ** Proprioception is the sense that makes everything ``real''
|
rlm@436
|
1558
|
rlm@436
|
1559 ** Muscles are both effectors and sensors
|
rlm@436
|
1560
|
rlm@436
|
1561 ** =CORTEX= brings complex creatures to life!
|
rlm@436
|
1562
|
rlm@436
|
** =CORTEX= enables many possibilities for further research
|
rlm@435
|
1564
|
rlm@465
|
1565 * COMMENT Empathy in a simulated worm
|
rlm@435
|
1566
|
rlm@449
|
1567 Here I develop a computational model of empathy, using =CORTEX= as a
|
rlm@449
|
1568 base. Empathy in this context is the ability to observe another
|
rlm@449
|
1569 creature and infer what sorts of sensations that creature is
|
rlm@449
|
1570 feeling. My empathy algorithm involves multiple phases. First is
|
rlm@449
|
1571 free-play, where the creature moves around and gains sensory
|
rlm@449
|
1572 experience. From this experience I construct a representation of the
|
rlm@449
|
1573 creature's sensory state space, which I call \Phi-space. Using
|
rlm@449
|
1574 \Phi-space, I construct an efficient function which takes the
|
rlm@449
|
1575 limited data that comes from observing another creature and enriches
|
rlm@449
|
it to a full complement of imagined sensory data. I can then use the
|
rlm@449
|
1577 imagined sensory data to recognize what the observed creature is
|
rlm@449
|
1578 doing and feeling, using straightforward embodied action predicates.
|
rlm@449
|
This is all demonstrated using a simple worm-like creature, and
|
rlm@449
|
1580 recognizing worm-actions based on limited data.
|
rlm@449
|
1581
|
rlm@449
|
1582 #+caption: Here is the worm with which we will be working.
|
rlm@449
|
1583 #+caption: It is composed of 5 segments. Each segment has a
|
rlm@449
|
1584 #+caption: pair of extensor and flexor muscles. Each of the
|
rlm@449
|
1585 #+caption: worm's four joints is a hinge joint which allows
|
rlm@451
|
1586 #+caption: about 30 degrees of rotation to either side. Each segment
|
rlm@449
|
1587 #+caption: of the worm is touch-capable and has a uniform
|
rlm@449
|
1588 #+caption: distribution of touch sensors on each of its faces.
|
rlm@449
|
1589 #+caption: Each joint has a proprioceptive sense to detect
|
rlm@449
|
1590 #+caption: relative positions. The worm segments are all the
|
rlm@449
|
1591 #+caption: same except for the first one, which has a much
|
rlm@449
|
1592 #+caption: higher weight than the others to allow for easy
|
rlm@449
|
1593 #+caption: manual motor control.
|
rlm@449
|
1594 #+name: basic-worm-view
|
rlm@449
|
1595 #+ATTR_LaTeX: :width 10cm
|
rlm@449
|
1596 [[./images/basic-worm-view.png]]
|
rlm@449
|
1597
|
rlm@449
|
1598 #+caption: Program for reading a worm from a blender file and
|
rlm@449
|
1599 #+caption: outfitting it with the senses of proprioception,
|
rlm@449
|
1600 #+caption: touch, and the ability to move, as specified in the
|
rlm@449
|
1601 #+caption: blender file.
|
rlm@449
|
1602 #+name: get-worm
|
rlm@449
|
1603 #+begin_listing clojure
|
rlm@449
|
1604 #+begin_src clojure
|
rlm@449
|
1605 (defn worm []
|
rlm@449
|
1606 (let [model (load-blender-model "Models/worm/worm.blend")]
|
rlm@449
|
1607 {:body (doto model (body!))
|
rlm@449
|
1608 :touch (touch! model)
|
rlm@449
|
1609 :proprioception (proprioception! model)
|
rlm@449
|
1610 :muscles (movement! model)}))
|
rlm@449
|
1611 #+end_src
|
rlm@449
|
1612 #+end_listing
|
rlm@452
|
1613
|
rlm@436
|
** Embodiment factors action recognition into manageable parts
|
rlm@435
|
1615
|
rlm@449
|
1616 Using empathy, I divide the problem of action recognition into a
|
rlm@449
|
recognition process expressed in the language of a full complement
of senses, and an imaginative process that generates full sensory
data from partial sensory data. Splitting the action recognition
problem in this manner greatly reduces the total amount of work to
recognize actions: The imaginative process is mostly just matching
|
rlm@449
|
1622 previous experience, and the recognition process gets to use all
|
rlm@449
|
1623 the senses to directly describe any action.
|
rlm@449
|
1624
|
rlm@436
|
1625 ** Action recognition is easy with a full gamut of senses
|
rlm@435
|
1626
|
rlm@449
|
1627 Embodied representations using multiple senses such as touch,
|
rlm@449
|
proprioception, and muscle tension turns out to be exceedingly
|
rlm@449
|
1629 efficient at describing body-centered actions. It is the ``right
|
rlm@449
|
1630 language for the job''. For example, it takes only around 5 lines
|
rlm@449
|
1631 of LISP code to describe the action of ``curling'' using embodied
|
rlm@451
|
1632 primitives. It takes about 10 lines to describe the seemingly
|
rlm@449
|
1633 complicated action of wiggling.
|
rlm@449
|
1634
|
rlm@449
|
1635 The following action predicates each take a stream of sensory
|
rlm@449
|
1636 experience, observe however much of it they desire, and decide
|
rlm@449
|
1637 whether the worm is doing the action they describe. =curled?=
|
rlm@449
|
1638 relies on proprioception, =resting?= relies on touch, =wiggling?=
|
rlm@449
|
relies on a Fourier analysis of muscle contraction, and
=grand-circle?= relies on touch and reuses =curled?= as a guard.
|
rlm@449
|
1641
|
rlm@449
|
1642 #+caption: Program for detecting whether the worm is curled. This is the
|
rlm@449
|
1643 #+caption: simplest action predicate, because it only uses the last frame
|
rlm@449
|
1644 #+caption: of sensory experience, and only uses proprioceptive data. Even
|
rlm@449
|
1645 #+caption: this simple predicate, however, is automatically frame
|
rlm@449
|
1646 #+caption: independent and ignores vermopomorphic differences such as
|
rlm@449
|
1647 #+caption: worm textures and colors.
|
rlm@449
|
1648 #+name: curled
|
rlm@452
|
1649 #+attr_latex: [htpb]
|
rlm@452
|
1650 #+begin_listing clojure
|
rlm@449
|
1651 #+begin_src clojure
|
rlm@449
|
1652 (defn curled?
|
rlm@449
|
1653 "Is the worm curled up?"
|
rlm@449
|
1654 [experiences]
|
rlm@449
|
1655 (every?
|
rlm@449
|
1656 (fn [[_ _ bend]]
|
rlm@449
|
1657 (> (Math/sin bend) 0.64))
|
rlm@449
|
1658 (:proprioception (peek experiences))))
|
rlm@449
|
1659 #+end_src
|
rlm@449
|
1660 #+end_listing
|
rlm@449
|
1661
|
rlm@449
|
1662 #+caption: Program for summarizing the touch information in a patch
|
rlm@449
|
1663 #+caption: of skin.
|
rlm@449
|
1664 #+name: touch-summary
|
rlm@452
|
#+attr_latex: [htpb]
#+begin_listing clojure
|
rlm@449
|
1668 #+begin_src clojure
|
rlm@449
|
1669 (defn contact
|
rlm@449
|
1670 "Determine how much contact a particular worm segment has with
|
rlm@449
|
1671 other objects. Returns a value between 0 and 1, where 1 is full
|
rlm@449
|
1672 contact and 0 is no contact."
|
rlm@449
|
1673 [touch-region [coords contact :as touch]]
|
rlm@449
|
1674 (-> (zipmap coords contact)
|
rlm@449
|
1675 (select-keys touch-region)
|
rlm@449
|
1676 (vals)
|
rlm@449
|
1677 (#(map first %))
|
rlm@449
|
1678 (average)
|
rlm@449
|
1679 (* 10)
|
rlm@449
|
1680 (- 1)
|
rlm@449
|
1681 (Math/abs)))
|
rlm@449
|
1682 #+end_src
|
rlm@449
|
1683 #+end_listing
|
rlm@449
|
1684
|
rlm@449
|
1685
|
rlm@449
|
1686 #+caption: Program for detecting whether the worm is at rest. This program
|
rlm@449
|
1687 #+caption: uses a summary of the tactile information from the underbelly
|
rlm@449
|
1688 #+caption: of the worm, and is only true if every segment is touching the
|
rlm@449
|
1689 #+caption: floor. Note that this function contains no references to
|
rlm@449
|
#+caption: proprioception at all.
|
rlm@449
|
1691 #+name: resting
|
rlm@452
|
1692 #+attr_latex: [htpb]
|
rlm@452
|
1693 #+begin_listing clojure
|
rlm@449
|
1694 #+begin_src clojure
|
rlm@449
|
1695 (def worm-segment-bottom (rect-region [8 15] [14 22]))
|
rlm@449
|
1696
|
rlm@449
|
1697 (defn resting?
|
rlm@449
|
1698 "Is the worm resting on the ground?"
|
rlm@449
|
1699 [experiences]
|
rlm@449
|
1700 (every?
|
rlm@449
|
1701 (fn [touch-data]
|
rlm@449
|
1702 (< 0.9 (contact worm-segment-bottom touch-data)))
|
rlm@449
|
1703 (:touch (peek experiences))))
|
rlm@449
|
1704 #+end_src
|
rlm@449
|
1705 #+end_listing
|
rlm@449
|
1706
|
rlm@449
|
1707 #+caption: Program for detecting whether the worm is curled up into a
|
rlm@449
|
1708 #+caption: full circle. Here the embodied approach begins to shine, as
|
rlm@449
|
1709 #+caption: I am able to both use a previous action predicate (=curled?=)
|
rlm@449
|
1710 #+caption: as well as the direct tactile experience of the head and tail.
|
rlm@449
|
1711 #+name: grand-circle
|
rlm@452
|
1712 #+attr_latex: [htpb]
|
rlm@452
|
1713 #+begin_listing clojure
|
rlm@449
|
1714 #+begin_src clojure
|
rlm@449
|
1715 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
|
rlm@449
|
1716
|
rlm@449
|
1717 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
|
rlm@449
|
1718
|
rlm@449
|
1719 (defn grand-circle?
|
rlm@449
|
1720 "Does the worm form a majestic circle (one end touching the other)?"
|
rlm@449
|
1721 [experiences]
|
rlm@449
|
1722 (and (curled? experiences)
|
rlm@449
|
1723 (let [worm-touch (:touch (peek experiences))
|
rlm@449
|
1724 tail-touch (worm-touch 0)
|
rlm@449
|
1725 head-touch (worm-touch 4)]
|
rlm@449
|
1726 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
|
rlm@449
|
1727 (< 0.55 (contact worm-segment-top-tip head-touch))))))
|
rlm@449
|
1728 #+end_src
|
rlm@449
|
1729 #+end_listing
|
rlm@449
|
1730
|
rlm@449
|
1731
|
rlm@449
|
1732 #+caption: Program for detecting whether the worm has been wiggling for
|
rlm@449
|
#+caption: the last few frames. It uses a Fourier analysis of the muscle
#+caption: contractions of the worm's tail to determine wiggling. This is
#+caption: significant because there is no particular frame that clearly
|
rlm@449
|
1736 #+caption: indicates that the worm is wiggling --- only when multiple frames
|
rlm@449
|
1737 #+caption: are analyzed together is the wiggling revealed. Defining
|
rlm@449
|
1738 #+caption: wiggling this way also gives the worm an opportunity to learn
|
rlm@449
|
1739 #+caption: and recognize ``frustrated wiggling'', where the worm tries to
|
rlm@449
|
1740 #+caption: wiggle but can't. Frustrated wiggling is very visually different
|
rlm@449
|
1741 #+caption: from actual wiggling, but this definition gives it to us for free.
|
rlm@449
|
1742 #+name: wiggling
|
rlm@452
|
1743 #+attr_latex: [htpb]
|
rlm@452
|
1744 #+begin_listing clojure
|
rlm@449
|
1745 #+begin_src clojure
|
rlm@449
|
1746 (defn fft [nums]
|
rlm@449
|
1747 (map
|
rlm@449
|
1748 #(.getReal %)
|
rlm@449
|
1749 (.transform
|
rlm@449
|
1750 (FastFourierTransformer. DftNormalization/STANDARD)
|
rlm@449
|
1751 (double-array nums) TransformType/FORWARD)))
|
rlm@449
|
1752
|
rlm@449
|
1753 (def indexed (partial map-indexed vector))
|
rlm@449
|
1754
|
rlm@449
|
1755 (defn max-indexed [s]
|
rlm@449
|
1756 (first (sort-by (comp - second) (indexed s))))
|
rlm@449
|
1757
|
rlm@449
|
1758 (defn wiggling?
|
rlm@449
|
1759 "Is the worm wiggling?"
|
rlm@449
|
1760 [experiences]
|
rlm@449
|
1761 (let [analysis-interval 0x40]
|
rlm@449
|
1762 (when (> (count experiences) analysis-interval)
|
rlm@449
|
1763 (let [a-flex 3
|
rlm@449
|
1764 a-ex 2
|
rlm@449
|
1765 muscle-activity
|
rlm@449
|
1766 (map :muscle (vector:last-n experiences analysis-interval))
|
rlm@449
|
1767 base-activity
|
rlm@449
|
1768 (map #(- (% a-flex) (% a-ex)) muscle-activity)]
|
rlm@449
|
1769 (= 2
|
rlm@449
|
1770 (first
|
rlm@449
|
1771 (max-indexed
|
rlm@449
|
1772 (map #(Math/abs %)
|
rlm@449
|
1773 (take 20 (fft base-activity))))))))))
|
rlm@449
|
1774 #+end_src
|
rlm@449
|
1775 #+end_listing
|
rlm@449
|
1776
|
rlm@449
|
1777 With these action predicates, I can now recognize the actions of
|
rlm@449
|
1778 the worm while it is moving under my control and I have access to
|
rlm@449
|
1779 all the worm's senses.
|
rlm@449
|
1780
|
rlm@449
|
1781 #+caption: Use the action predicates defined earlier to report on
|
rlm@449
|
1782 #+caption: what the worm is doing while in simulation.
|
rlm@449
|
1783 #+name: report-worm-activity
|
rlm@452
|
1784 #+attr_latex: [htpb]
|
rlm@452
|
1785 #+begin_listing clojure
|
rlm@449
|
1786 #+begin_src clojure
|
rlm@449
|
1787 (defn debug-experience
|
rlm@449
|
1788 [experiences text]
|
rlm@449
|
1789 (cond
|
rlm@449
|
1790 (grand-circle? experiences) (.setText text "Grand Circle")
|
rlm@449
|
1791 (curled? experiences) (.setText text "Curled")
|
rlm@449
|
1792 (wiggling? experiences) (.setText text "Wiggling")
|
rlm@449
|
1793 (resting? experiences) (.setText text "Resting")))
|
rlm@449
|
1794 #+end_src
|
rlm@449
|
1795 #+end_listing
|
rlm@449
|
1796
|
rlm@449
|
1797 #+caption: Using =debug-experience=, the body-centered predicates
|
rlm@449
|
1798 #+caption: work together to classify the behaviour of the worm.
|
rlm@451
|
#+caption: The predicates are operating with access to the worm's
|
rlm@451
|
1800 #+caption: full sensory data.
|
rlm@449
|
1801 #+name: basic-worm-view
|
rlm@449
|
1802 #+ATTR_LaTeX: :width 10cm
|
rlm@449
|
1803 [[./images/worm-identify-init.png]]
|
rlm@449
|
1804
|
rlm@449
|
1805 These action predicates satisfy the recognition requirement of an
|
rlm@451
|
1806 empathic recognition system. There is power in the simplicity of
|
rlm@451
|
1807 the action predicates. They describe their actions without getting
|
rlm@451
|
1808 confused in visual details of the worm. Each one is frame
|
rlm@451
|
independent, but more than that, they are each independent of
|
rlm@449
|
1810 irrelevant visual details of the worm and the environment. They
|
rlm@449
|
1811 will work regardless of whether the worm is a different color or
|
rlm@451
|
heavily textured, or if the environment has strange lighting.
|
rlm@449
|
1813
|
rlm@449
|
1814 The trick now is to make the action predicates work even when the
|
rlm@449
|
1815 sensory data on which they depend is absent. If I can do that, then
|
rlm@449
|
I will have gained much.
|
rlm@435
|
1817
|
rlm@436
|
1818 ** \Phi-space describes the worm's experiences
|
rlm@449
|
1819
|
rlm@449
|
1820 As a first step towards building empathy, I need to gather all of
|
rlm@449
|
1821 the worm's experiences during free play. I use a simple vector to
|
rlm@449
|
1822 store all the experiences.
|
rlm@449
|
1823
|
rlm@449
|
1824 Each element of the experience vector exists in the vast space of
|
rlm@449
|
1825 all possible worm-experiences. Most of this vast space is actually
|
rlm@449
|
1826 unreachable due to physical constraints of the worm's body. For
|
rlm@449
|
1827 example, the worm's segments are connected by hinge joints that put
|
rlm@451
|
1828 a practical limit on the worm's range of motions without limiting
|
rlm@451
|
1829 its degrees of freedom. Some groupings of senses are impossible;
|
rlm@451
|
1830 the worm can not be bent into a circle so that its ends are
|
rlm@451
|
1831 touching and at the same time not also experience the sensation of
|
rlm@451
|
1832 touching itself.
|
rlm@449
|
1833
|
rlm@451
|
1834 As the worm moves around during free play and its experience vector
|
rlm@451
|
1835 grows larger, the vector begins to define a subspace which is all
|
rlm@451
|
the sensations the worm can practically experience during normal
|
rlm@451
|
1837 operation. I call this subspace \Phi-space, short for
|
rlm@451
|
1838 physical-space. The experience vector defines a path through
|
rlm@451
|
1839 \Phi-space. This path has interesting properties that all derive
|
rlm@451
|
1840 from physical embodiment. The proprioceptive components are
|
rlm@451
|
1841 completely smooth, because in order for the worm to move from one
|
rlm@451
|
1842 position to another, it must pass through the intermediate
|
rlm@451
|
1843 positions. The path invariably forms loops as actions are repeated.
|
rlm@451
|
1844 Finally and most importantly, proprioception actually gives very
|
rlm@451
|
1845 strong inference about the other senses. For example, when the worm
|
rlm@451
|
1846 is flat, you can infer that it is touching the ground and that its
|
rlm@451
|
1847 muscles are not active, because if the muscles were active, the
|
rlm@451
|
1848 worm would be moving and would not be perfectly flat. In order to
|
rlm@451
|
1849 stay flat, the worm has to be touching the ground, or it would
|
rlm@451
|
1850 again be moving out of the flat position due to gravity. If the
|
rlm@451
|
1851 worm is positioned in such a way that it interacts with itself,
|
rlm@451
|
1852 then it is very likely to be feeling the same tactile feelings as
|
rlm@451
|
1853 the last time it was in that position, because it has the same body
|
rlm@451
|
1854 as then. If you observe multiple frames of proprioceptive data,
|
rlm@451
|
1855 then you can become increasingly confident about the exact
|
rlm@451
|
1856 activations of the worm's muscles, because it generally takes a
|
rlm@451
|
1857 unique combination of muscle contractions to transform the worm's
|
rlm@451
|
1858 body along a specific path through \Phi-space.
|
rlm@449
|
1859
|
rlm@449
|
1860 There is a simple way of taking \Phi-space and the total ordering
|
rlm@449
|
provided by an experience vector and reliably inferring the rest of
|
rlm@449
|
1862 the senses.
|
rlm@435
|
1863
|
rlm@436
|
** Empathy is the process of tracing through \Phi-space
|
rlm@449
|
1865
|
rlm@450
|
1866 Here is the core of a basic empathy algorithm, starting with an
|
rlm@451
|
1867 experience vector:
|
rlm@451
|
1868
|
rlm@451
|
1869 First, group the experiences into tiered proprioceptive bins. I use
|
rlm@451
|
three tiers of bins, sized by powers of 10; the smallest bin has an
approximate size of 0.001 radians in all proprioceptive dimensions.
|
rlm@450
|
1872
|
rlm@450
|
1873 Then, given a sequence of proprioceptive input, generate a set of
|
rlm@451
|
1874 matching experience records for each input, using the tiered
|
rlm@451
|
1875 proprioceptive bins.
|
rlm@449
|
1876
|
rlm@450
|
Finally, to infer sensory data, select the longest consecutive chain
of experiences. Consecutive experience means that the experiences
|
rlm@451
|
1879 appear next to each other in the experience vector.
|
rlm@449
|
1880
|
rlm@450
|
1881 This algorithm has three advantages:
|
rlm@450
|
1882
|
rlm@450
|
1883 1. It's simple
|
rlm@450
|
1884
|
rlm@451
|
2. It's very fast -- retrieving possible interpretations takes
|
rlm@451
|
1886 constant time. Tracing through chains of interpretations takes
|
rlm@451
|
1887 time proportional to the average number of experiences in a
|
rlm@451
|
1888 proprioceptive bin. Redundant experiences in \Phi-space can be
|
rlm@451
|
1889 merged to save computation.
|
rlm@450
|
1890
|
rlm@450
|
3. It protects from wrong interpretations of transient ambiguous
|
rlm@451
|
1892 proprioceptive data. For example, if the worm is flat for just
|
rlm@450
|
an instant, this flatness will not be interpreted as implying
that the worm has its muscles relaxed, since the flatness is
|
rlm@450
|
1895 part of a longer chain which includes a distinct pattern of
|
rlm@451
|
1896 muscle activation. Markov chains or other memoryless statistical
|
rlm@451
|
1897 models that operate on individual frames may very well make this
|
rlm@451
|
1898 mistake.
|
rlm@450
|
1899
|
rlm@450
|
1900 #+caption: Program to convert an experience vector into a
|
rlm@450
|
1901 #+caption: proprioceptively binned lookup function.
|
rlm@450
|
1902 #+name: bin
|
rlm@452
|
1903 #+attr_latex: [htpb]
|
rlm@452
|
1904 #+begin_listing clojure
|
rlm@450
|
1905 #+begin_src clojure
|
rlm@449
|
1906 (defn bin [digits]
|
rlm@449
|
1907 (fn [angles]
|
rlm@449
|
1908 (->> angles
|
rlm@449
|
1909 (flatten)
|
rlm@449
|
1910 (map (juxt #(Math/sin %) #(Math/cos %)))
|
rlm@449
|
1911 (flatten)
|
rlm@449
|
1912 (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
|
rlm@449
|
1913
|
rlm@449
|
1914 (defn gen-phi-scan
|
rlm@450
|
1915 "Nearest-neighbors with binning. Only returns a result if
|
rlm@450
|
the proprioceptive data is within 10% of a previously recorded
|
rlm@450
|
1917 result in all dimensions."
|
rlm@450
|
1918 [phi-space]
|
rlm@449
|
1919 (let [bin-keys (map bin [3 2 1])
|
rlm@449
|
1920 bin-maps
|
rlm@449
|
1921 (map (fn [bin-key]
|
rlm@449
|
1922 (group-by
|
rlm@449
|
1923 (comp bin-key :proprioception phi-space)
|
rlm@449
|
1924 (range (count phi-space)))) bin-keys)
|
rlm@449
|
1925 lookups (map (fn [bin-key bin-map]
|
rlm@450
|
1926 (fn [proprio] (bin-map (bin-key proprio))))
|
rlm@450
|
1927 bin-keys bin-maps)]
|
rlm@449
|
1928 (fn lookup [proprio-data]
|
rlm@449
|
1929 (set (some #(% proprio-data) lookups)))))
|
rlm@450
|
1930 #+end_src
|
rlm@450
|
1931 #+end_listing
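
For concreteness, here is how the binning keys behave on a single
joint angle (each angle becomes a rounded sine/cosine pair; the finer
the key, the more digits survive the rounding):

#+begin_src clojure
;; Sketch: the coarsest and a finer bin key for a joint bent 0.5 radians.
((bin 1) [0.5])  ; => [0 1]    (sin 0.479.., cos 0.877.. rounded at 10^0)
((bin 3) [0.5])  ; => [48 88]  (the same values rounded at 10^2)
#+end_src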
|
rlm@449
|
1932
|
rlm@451
|
1933 #+caption: =longest-thread= finds the longest path of consecutive
|
rlm@451
|
1934 #+caption: experiences to explain proprioceptive worm data.
|
rlm@451
|
1935 #+name: phi-space-history-scan
|
rlm@451
|
1936 #+ATTR_LaTeX: :width 10cm
|
rlm@451
|
1937 [[./images/aurellem-gray.png]]
|
rlm@451
|
1938
|
rlm@451
|
1939 =longest-thread= infers sensory data by stitching together pieces
|
rlm@451
|
1940 from previous experience. It prefers longer chains of previous
|
rlm@451
|
1941 experience to shorter ones. For example, during training the worm
|
rlm@451
|
1942 might rest on the ground for one second before it performs its
|
rlm@451
|
exercises. If during recognition the worm rests on the ground for
five seconds, =longest-thread= will accommodate this five second
|
rlm@451
|
1945 rest period by looping the one second rest chain five times.
|
rlm@451
|
1946
|
rlm@451
|
=longest-thread= takes time proportional to the average number of
entries in a proprioceptive bin, because for each element in the
starting bin it performs a series of set lookups in the preceding
|
rlm@451
|
1950 bins. If the total history is limited, then this is only a constant
|
rlm@451
|
1951 multiple times the number of entries in the starting bin. This
|
rlm@451
|
1952 analysis also applies even if the action requires multiple longest
|
rlm@451
|
1953 chains -- it's still the average number of entries in a
|
rlm@451
|
1954 proprioceptive bin times the desired chain length. Because
|
rlm@451
|
1955 =longest-thread= is so efficient and simple, I can interpret
|
rlm@451
|
1956 worm-actions in real time.
|
rlm@449
|
1957
|
rlm@450
|
1958 #+caption: Program to calculate empathy by tracing though \Phi-space
|
rlm@450
|
1959 #+caption: and finding the longest (ie. most coherent) interpretation
|
rlm@450
|
1960 #+caption: of the data.
|
rlm@450
|
1961 #+name: longest-thread
|
rlm@452
|
1962 #+attr_latex: [htpb]
|
rlm@452
|
1963 #+begin_listing clojure
|
rlm@450
|
1964 #+begin_src clojure
|
rlm@449
|
1965 (defn longest-thread
|
rlm@449
|
1966 "Find the longest thread from phi-index-sets. The index sets should
|
rlm@449
|
1967 be ordered from most recent to least recent."
|
rlm@449
|
1968 [phi-index-sets]
|
rlm@449
|
1969 (loop [result '()
|
rlm@449
|
1970 [thread-bases & remaining :as phi-index-sets] phi-index-sets]
|
rlm@449
|
1971 (if (empty? phi-index-sets)
|
rlm@449
|
1972 (vec result)
|
rlm@449
|
1973 (let [threads
|
rlm@449
|
1974 (for [thread-base thread-bases]
|
rlm@449
|
1975 (loop [thread (list thread-base)
|
rlm@449
|
1976 remaining remaining]
|
rlm@449
|
1977 (let [next-index (dec (first thread))]
|
rlm@449
|
1978 (cond (empty? remaining) thread
|
rlm@449
|
1979 (contains? (first remaining) next-index)
|
rlm@449
|
1980 (recur
|
rlm@449
|
1981 (cons next-index thread) (rest remaining))
|
rlm@449
|
1982 :else thread))))
|
rlm@449
|
1983 longest-thread
|
rlm@449
|
1984 (reduce (fn [thread-a thread-b]
|
rlm@449
|
1985 (if (> (count thread-a) (count thread-b))
|
rlm@449
|
1986 thread-a thread-b))
|
rlm@449
|
1987 '(nil)
|
rlm@449
|
1988 threads)]
|
rlm@449
|
1989 (recur (concat longest-thread result)
|
rlm@449
|
1990 (drop (count longest-thread) phi-index-sets))))))
|
rlm@450
|
1991 #+end_src
|
rlm@450
|
1992 #+end_listing
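To make the stitching concrete, here is a toy sketch (invented indices,
not worm data) of =longest-thread= at the REPL. The index sets are
ordered most recent first, and the result comes back in chronological
order.

#+begin_src clojure
;; The most recent frame matched experience index 7, the frame before
;; it matched 6, and the oldest frame matched either 2 or 5.
(longest-thread [#{7} #{6} #{2 5}])
;; => [5 6 7]  ; one chain of consecutive experiences explains every frame

;; When no consecutive chain spans a frame boundary, each segment
;; contributes its own (possibly length-one) chain:
(longest-thread [#{10} #{4}])
;; => [4 10]
#+end_src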
|
rlm@450
|
1993
|
rlm@451
|
1994 There is one final piece, which is to replace missing sensory data
|
rlm@451
|
1995 with a best-guess estimate. While I could fill in missing data by
|
rlm@451
|
1996 using a gradient over the closest known sensory data points,
|
rlm@451
|
1997 averages can be misleading. It is certainly possible to create an
|
rlm@451
|
1998 impossible sensory state by averaging two possible sensory states.
|
rlm@451
|
1999 Therefore, I simply replicate the most recent sensory experience to
|
rlm@451
|
2000 fill in the gaps.
|
rlm@449
|
2001
|
rlm@449
|
2002 #+caption: Fill in blanks in sensory experience by replicating the most
|
rlm@449
|
2003 #+caption: recent experience.
|
rlm@449
|
2004 #+name: infer-nils
|
rlm@452
|
2005 #+attr_latex: [htpb]
|
rlm@452
|
2006 #+begin_listing clojure
|
rlm@449
|
2007 #+begin_src clojure
|
rlm@449
|
2008 (defn infer-nils
|
rlm@449
|
2009 "Replace nils with the next available non-nil element in the
|
rlm@449
|
2010 sequence, or barring that, 0."
|
rlm@449
|
2011 [s]
|
rlm@449
|
2012 (loop [i (dec (count s))
|
rlm@449
|
2013 v (transient s)]
|
rlm@449
|
2014 (if (zero? i) (persistent! v)
|
rlm@449
|
2015 (if-let [cur (v i)]
|
rlm@449
|
2016 (if (get v (dec i) 0)
|
rlm@449
|
2017 (recur (dec i) v)
|
rlm@449
|
2018 (recur (dec i) (assoc! v (dec i) cur)))
|
rlm@449
|
2019 (recur i (assoc! v i 0))))))
|
rlm@449
|
2020 #+end_src
|
rlm@449
|
2021 #+end_listing
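A toy sketch of the gap-filling behavior: interior =nil= entries take
the value of the next non-nil entry that follows them, and trailing
=nil= entries fall back to 0.

#+begin_src clojure
;; Invented indices into phi-space, with unexplained frames as nil.
(infer-nils [nil 1 nil nil 4 5])   ;=> [1 1 4 4 4 5]
(infer-nils [7 nil])               ;=> [7 0]
#+end_src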
|
rlm@435
|
2022
|
rlm@441
|
2023 ** Efficient action recognition with =EMPATH=
|
rlm@451
|
2024
|
rlm@451
|
2025 To use =EMPATH= with the worm, I first need to gather a set of
|
rlm@451
|
2026 experiences from the worm that includes the actions I want to
|
rlm@452
|
2027 recognize. The =generate-phi-space= program (listing
|
rlm@451
|
2028 \ref{generate-phi-space}) runs the worm through a series of
|
rlm@451
|
2029 exercises and gathers those experiences into a vector. The
|
rlm@451
|
2030 =do-all-the-things= program is a routine expressed in a simple
|
rlm@452
|
2031 muscle contraction script language for automated worm control. It
|
rlm@452
|
2032 causes the worm to rest, curl, and wiggle over about 700 frames
|
rlm@452
|
2033 (approx. 11 seconds).
|
rlm@425
|
2034
|
rlm@451
|
2035 #+caption: Program to gather the worm's experiences into a vector for
|
rlm@451
|
2036 #+caption: further processing. The =motor-control-program= line uses
|
rlm@451
|
2037 #+caption: a motor control script that causes the worm to execute a series
|
rlm@451
|
2038 #+caption: of ``exercises'' that include all the action predicates.
|
rlm@451
|
2039 #+name: generate-phi-space
|
rlm@452
|
2040 #+attr_latex: [htpb]
|
rlm@452
|
2041 #+begin_listing clojure
|
rlm@451
|
2042 #+begin_src clojure
|
rlm@451
|
2043 (def do-all-the-things
|
rlm@451
|
2044 (concat
|
rlm@451
|
2045 curl-script
|
rlm@451
|
2046 [[300 :d-ex 40]
|
rlm@451
|
2047 [320 :d-ex 0]]
|
rlm@451
|
2048 (shift-script 280 (take 16 wiggle-script))))
|
rlm@451
|
2049
|
rlm@451
|
2050 (defn generate-phi-space []
|
rlm@451
|
2051 (let [experiences (atom [])]
|
rlm@451
|
2052 (run-world
|
rlm@451
|
2053 (apply-map
|
rlm@451
|
2054 worm-world
|
rlm@451
|
2055 (merge
|
rlm@451
|
2056 (worm-world-defaults)
|
rlm@451
|
2057 {:end-frame 700
|
rlm@451
|
2058 :motor-control
|
rlm@451
|
2059 (motor-control-program worm-muscle-labels do-all-the-things)
|
rlm@451
|
2060 :experiences experiences})))
|
rlm@451
|
2061 @experiences))
|
rlm@451
|
2062 #+end_src
|
rlm@451
|
2063 #+end_listing
|
rlm@451
|
2064
|
rlm@451
|
2065 #+caption: Use longest thread and a phi-space generated from a short
|
rlm@451
|
2066 #+caption: exercise routine to interpret actions during free play.
|
rlm@451
|
2067 #+name: empathy-debug
|
rlm@452
|
2068 #+attr_latex: [htpb]
|
rlm@452
|
2069 #+begin_listing clojure
|
rlm@451
|
2070 #+begin_src clojure
|
rlm@451
|
2071 (defn init []
|
rlm@451
|
2072 (def phi-space (generate-phi-space))
|
rlm@451
|
2073 (def phi-scan (gen-phi-scan phi-space)))
|
rlm@451
|
2074
|
rlm@451
|
2075 (defn empathy-demonstration []
|
rlm@451
|
2076 (let [proprio (atom ())]
|
rlm@451
|
2077 (fn
|
rlm@451
|
2078 [experiences text]
|
rlm@451
|
2079 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
|
rlm@451
|
2080 (swap! proprio (partial cons phi-indices))
|
rlm@451
|
2081 (let [exp-thread (longest-thread (take 300 @proprio))
|
rlm@451
|
2082 empathy (mapv phi-space (infer-nils exp-thread))]
|
rlm@451
|
2083 (println-repl (vector:last-n exp-thread 22))
|
rlm@451
|
2084 (cond
|
rlm@451
|
2085 (grand-circle? empathy) (.setText text "Grand Circle")
|
rlm@451
|
2086 (curled? empathy) (.setText text "Curled")
|
rlm@451
|
2087 (wiggling? empathy) (.setText text "Wiggling")
|
rlm@451
|
2088 (resting? empathy) (.setText text "Resting")
|
rlm@451
|
2089 :else (.setText text "Unknown")))))))
|
rlm@451
|
2090
|
rlm@451
|
2091 (defn empathy-experiment [record]
|
rlm@451
|
2092 (.start (worm-world :experience-watch (debug-experience-phi)
|
rlm@451
|
2093 :record record :worm worm*)))
|
rlm@451
|
2094 #+end_src
|
rlm@451
|
2095 #+end_listing
|
rlm@451
|
2096
|
rlm@451
|
2097 The result of running =empathy-experiment= is that the system is
|
rlm@451
|
2098 generally able to interpret worm actions using the action-predicates
|
rlm@451
|
2099 on simulated sensory data just as well as with actual data. Figure
|
rlm@451
|
2100 \ref{empathy-debug-image} was generated using =empathy-experiment=:
|
rlm@451
|
2101
|
rlm@451
|
2102 #+caption: From only proprioceptive data, =EMPATH= was able to infer
|
rlm@451
|
2103 #+caption: the complete sensory experience and classify four poses.
|
rlm@451
|
2104 #+caption: (The last panel shows a composite image of \emph{wiggling},
|
rlm@451
|
2105 #+caption: a dynamic pose.)
|
rlm@451
|
2106 #+name: empathy-debug-image
|
rlm@451
|
2107 #+ATTR_LaTeX: :width 10cm :placement [H]
|
rlm@451
|
2108 [[./images/empathy-1.png]]
|
rlm@451
|
2109
|
rlm@451
|
2110 One way to measure the performance of =EMPATH= is to compare the
|
rlm@451
|
2111 suitability of the imagined sense experience to trigger the same
|
rlm@451
|
2112 action predicates as the real sensory experience.
|
rlm@451
|
2113
|
rlm@451
|
2114 #+caption: Determine how closely empathy approximates actual
|
rlm@451
|
2115 #+caption: sensory data.
|
rlm@451
|
2116 #+name: test-empathy-accuracy
|
rlm@452
|
2117 #+attr_latex: [htpb]
|
rlm@452
|
2118 #+begin_listing clojure
|
rlm@451
|
2119 #+begin_src clojure
|
rlm@451
|
2120 (def worm-action-label
|
rlm@451
|
2121 (juxt grand-circle? curled? wiggling?))
|
rlm@451
|
2122
|
rlm@451
|
2123 (defn compare-empathy-with-baseline [matches]
|
rlm@451
|
2124 (let [proprio (atom ())]
|
rlm@451
|
2125 (fn
|
rlm@451
|
2126 [experiences text]
|
rlm@451
|
2127 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
|
rlm@451
|
2128 (swap! proprio (partial cons phi-indices))
|
rlm@451
|
2129 (let [exp-thread (longest-thread (take 300 @proprio))
|
rlm@451
|
2130 empathy (mapv phi-space (infer-nils exp-thread))
|
rlm@451
|
2131 experience-matches-empathy
|
rlm@451
|
2132 (= (worm-action-label experiences)
|
rlm@451
|
2133 (worm-action-label empathy))]
|
rlm@451
|
2134 (println-repl experience-matches-empathy)
|
rlm@451
|
2135 (swap! matches #(conj % experience-matches-empathy)))))))
|
rlm@451
|
2136
|
rlm@451
|
2137 (defn accuracy [v]
|
rlm@451
|
2138 (float (/ (count (filter true? v)) (count v))))
|
rlm@451
|
2139
|
rlm@451
|
2140 (defn test-empathy-accuracy []
|
rlm@451
|
2141 (let [res (atom [])]
|
rlm@451
|
2142 (run-world
|
rlm@451
|
2143 (worm-world :experience-watch
|
rlm@451
|
2144 (compare-empathy-with-baseline res)
|
rlm@451
|
2145 :worm worm*))
|
rlm@451
|
2146 (accuracy @res)))
|
rlm@451
|
2147 #+end_src
|
rlm@451
|
2148 #+end_listing
|
rlm@451
|
2149
|
rlm@451
|
2150 Running =test-empathy-accuracy= using the very short exercise
|
rlm@451
|
2151 program defined in listing \ref{generate-phi-space}, and then doing
|
rlm@451
|
2152 a similar pattern of activity manually yields an accuracy of around
|
rlm@451
|
2153 73%. This is based on very limited worm experience. By training the
|
rlm@451
|
2154 worm for longer, the accuracy dramatically improves.
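As a rough sketch of the procedure at the REPL (the exact session will
differ, since the worm must be driven manually while the comparison
runs):

#+begin_src clojure
;; Build phi-space from the scripted exercises, then measure agreement
;; between real and imagined sensory experience during manual control.
(init)
(test-empathy-accuracy)   ;=> roughly 0.73 with this short training set
#+end_src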
|
rlm@451
|
2155
|
rlm@451
|
2156 #+caption: Program to generate \Phi-space using manual training.
|
rlm@451
|
2157 #+name: manual-phi-space
|
rlm@452
|
2158 #+attr_latex: [htpb]
|
rlm@451
|
2159 #+begin_listing clojure
|
rlm@451
|
2160 #+begin_src clojure
|
rlm@451
|
2161 (defn init-interactive []
|
rlm@451
|
2162 (def phi-space
|
rlm@451
|
2163 (let [experiences (atom [])]
|
rlm@451
|
2164 (run-world
|
rlm@451
|
2165 (apply-map
|
rlm@451
|
2166 worm-world
|
rlm@451
|
2167 (merge
|
rlm@451
|
2168 (worm-world-defaults)
|
rlm@451
|
2169 {:experiences experiences})))
|
rlm@451
|
2170 @experiences))
|
rlm@451
|
2171 (def phi-scan (gen-phi-scan phi-space)))
|
rlm@451
|
2172 #+end_src
|
rlm@451
|
2173 #+end_listing
|
rlm@451
|
2174
|
rlm@451
|
2175 After about 1 minute of manual training, I was able to achieve 95%
|
rlm@451
|
2176 accuracy on manual testing of the worm using =init-interactive= and
|
rlm@452
|
2177 =test-empathy-accuracy=. The majority of errors are near the
|
rlm@452
|
2178 boundaries where one type of action transitions into another.
|
rlm@452
|
2179 During these transitions the exact label for the action is more open
|
rlm@452
|
2180 to interpretation, and disagreement between empathy and experience
|
rlm@452
|
2181 is more excusable.
|
rlm@450
|
2182
|
rlm@449
|
2183 ** Digression: bootstrapping touch using free exploration
|
rlm@449
|
2184
|
rlm@452
|
2185 In the previous section I showed how to compute actions in terms of
|
rlm@452
|
2186 body-centered predicates which relied on the average touch activation of
|
rlm@452
|
2187 pre-defined regions of the worm's skin. What if, instead of receiving
|
rlm@452
|
2188 touch pre-grouped into the six faces of each worm segment, the true
|
rlm@452
|
2189 topology of the worm's skin were unknown? This is more similar to how
|
rlm@452
|
2190 a nerve fiber bundle might be arranged. While two fibers that are
|
rlm@452
|
2191 close in a nerve bundle /might/ correspond to two touch sensors that
|
rlm@452
|
2192 are close together on the skin, the process of taking a complicated
|
rlm@452
|
2193 surface and forcing it into essentially a circle requires some cuts
|
rlm@452
|
2194 and rearrangements.
|
rlm@452
|
2195
|
rlm@452
|
2196 In this section I show how to automatically learn the skin-topology of
|
rlm@452
|
2197 a worm segment by free exploration. As the worm rolls around on the
|
rlm@452
|
2198 floor, large sections of its surface get activated. If the worm has
|
rlm@452
|
2199 stopped moving, then whatever region of skin is touching the
|
rlm@452
|
2200 floor is probably an important region, and should be recorded.
|
rlm@452
|
2201
|
rlm@452
|
2202 #+caption: Program to detect whether the worm is in a resting state
|
rlm@452
|
2203 #+caption: with one face touching the floor.
|
rlm@452
|
2204 #+name: pure-touch
|
rlm@452
|
2205 #+begin_listing clojure
|
rlm@452
|
2206 #+begin_src clojure
|
rlm@452
|
2207 (def full-contact [(float 0.0) (float 0.1)])
|
rlm@452
|
2208
|
rlm@452
|
2209 (defn pure-touch?
|
rlm@452
|
2210 "This is worm specific code to determine if a large region of touch
|
rlm@452
|
2211 sensors is either all on or all off."
|
rlm@452
|
2212 [[coords touch :as touch-data]]
|
rlm@452
|
2213 (= (set (map first touch)) (set full-contact)))
|
rlm@452
|
2214 #+end_src
|
rlm@452
|
2215 #+end_listing
|
rlm@452
|
2216
|
rlm@452
|
2217 After collecting these important regions, there will be many nearly
|
rlm@452
|
2218 similar touch regions. While for some purposes the subtle
|
rlm@452
|
2219 differences between these regions will be important, for my
|
rlm@452
|
2220 purposes I collapse them into mostly non-overlapping sets using
|
rlm@452
|
2221 =remove-similar= in listing \ref{remove-similar}.
|
rlm@452
|
2222
|
rlm@452
|
2223 #+caption: Program to take a list of sets of points and ``collapse them''
|
rlm@452
|
2224 #+caption: so that the remaining sets in the list are significantly
|
rlm@452
|
2225 #+caption: different from each other. Prefer smaller sets to larger ones.
|
rlm@452
|
2226 #+name: remove-similar
|
rlm@452
|
2227 #+begin_listing clojure
|
rlm@452
|
2228 #+begin_src clojure
|
rlm@452
|
2229 (defn remove-similar
|
rlm@452
|
2230 [coll]
|
rlm@452
|
2231 (loop [result () coll (sort-by (comp - count) coll)]
|
rlm@452
|
2232 (if (empty? coll) result
|
rlm@452
|
2233 (let [[x & xs] coll
|
rlm@452
|
2234 c (count x)]
|
rlm@452
|
2235 (if (some
|
rlm@452
|
2236 (fn [other-set]
|
rlm@452
|
2237 (let [oc (count other-set)]
|
rlm@452
|
2238 (< (- (count (union other-set x)) c) (* oc 0.1))))
|
rlm@452
|
2239 xs)
|
rlm@452
|
2240 (recur result xs)
|
rlm@452
|
2241 (recur (cons x result) xs))))))
|
rlm@452
|
2242 #+end_src
|
rlm@452
|
2243 #+end_listing
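A toy sketch of the collapsing behavior (assuming =union= here is
=clojure.set/union=):

#+begin_src clojure
;; The ten-element set is dropped because it almost completely overlaps
;; the smaller nine-element set; the unrelated three-element set stays.
(remove-similar [#{1 2 3 4 5 6 7 8 9 10}
                 #{1 2 3 4 5 6 7 8 9}
                 #{20 21 22}])
;; => a list containing #{1 2 3 4 5 6 7 8 9} and #{20 21 22}
#+end_src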
|
rlm@452
|
2244
|
rlm@452
|
2245 Actually running this simulation is easy given =CORTEX='s facilities.
|
rlm@452
|
2246
|
rlm@452
|
2247 #+caption: Collect experiences while the worm moves around. Filter the touch
|
rlm@452
|
2248 #+caption: sensations down to stable ones, collapse similar ones together,
|
rlm@452
|
2249 #+caption: and report the regions learned.
|
rlm@452
|
2250 #+name: learn-touch
|
rlm@452
|
2251 #+begin_listing clojure
|
rlm@452
|
2252 #+begin_src clojure
|
rlm@452
|
2253 (defn learn-touch-regions []
|
rlm@452
|
2254 (let [experiences (atom [])
|
rlm@452
|
2255 world (apply-map
|
rlm@452
|
2256 worm-world
|
rlm@452
|
2257 (assoc (worm-segment-defaults)
|
rlm@452
|
2258 :experiences experiences))]
|
rlm@452
|
2259 (run-world world)
|
rlm@452
|
2260 (->>
|
rlm@452
|
2261 @experiences
|
rlm@452
|
2262 (drop 175)
|
rlm@452
|
2263 ;; access the single segment's touch data
|
rlm@452
|
2264 (map (comp first :touch))
|
rlm@452
|
2265 ;; only deal with "pure" touch data to determine surfaces
|
rlm@452
|
2266 (filter pure-touch?)
|
rlm@452
|
2267 ;; associate coordinates with touch values
|
rlm@452
|
2268 (map (partial apply zipmap))
|
rlm@452
|
2269 ;; select those regions where contact is being made
|
rlm@452
|
2270 (map (partial group-by second))
|
rlm@452
|
2271 (map #(get % full-contact))
|
rlm@452
|
2272 (map (partial map first))
|
rlm@452
|
2273 ;; remove redundant/subset regions
|
rlm@452
|
2274 (map set)
|
rlm@452
|
2275 remove-similar)))
|
rlm@452
|
2276
|
rlm@452
|
2277 (defn learn-and-view-touch-regions []
|
rlm@452
|
2278 (map view-touch-region
|
rlm@452
|
2279 (learn-touch-regions)))
|
rlm@452
|
2280 #+end_src
|
rlm@452
|
2281 #+end_listing
|
rlm@452
|
2282
|
rlm@452
|
2283 The only thing remaining to define is the particular motion the worm
|
rlm@452
|
2284 must take. I accomplish this with a simple motor control program.
|
rlm@452
|
2285
|
rlm@452
|
2286 #+caption: Motor control program for making the worm roll on the ground.
|
rlm@452
|
2287 #+caption: This could also be replaced with random motion.
|
rlm@452
|
2288 #+name: worm-roll
|
rlm@452
|
2289 #+begin_listing clojure
|
rlm@452
|
2290 #+begin_src clojure
|
rlm@452
|
2291 (defn touch-kinesthetics []
|
rlm@452
|
2292 [[170 :lift-1 40]
|
rlm@452
|
2293 [190 :lift-1 19]
|
rlm@452
|
2294 [206 :lift-1 0]
|
rlm@452
|
2295
|
rlm@452
|
2296 [400 :lift-2 40]
|
rlm@452
|
2297 [410 :lift-2 0]
|
rlm@452
|
2298
|
rlm@452
|
2299 [570 :lift-2 40]
|
rlm@452
|
2300 [590 :lift-2 21]
|
rlm@452
|
2301 [606 :lift-2 0]
|
rlm@452
|
2302
|
rlm@452
|
2303 [800 :lift-1 30]
|
rlm@452
|
2304 [809 :lift-1 0]
|
rlm@452
|
2305
|
rlm@452
|
2306 [900 :roll-2 40]
|
rlm@452
|
2307 [905 :roll-2 20]
|
rlm@452
|
2308 [910 :roll-2 0]
|
rlm@452
|
2309
|
rlm@452
|
2310 [1000 :roll-2 40]
|
rlm@452
|
2311 [1005 :roll-2 20]
|
rlm@452
|
2312 [1010 :roll-2 0]
|
rlm@452
|
2313
|
rlm@452
|
2314 [1100 :roll-2 40]
|
rlm@452
|
2315 [1105 :roll-2 20]
|
rlm@452
|
2316 [1110 :roll-2 0]
|
rlm@452
|
2317 ])
|
rlm@452
|
2318 #+end_src
|
rlm@452
|
2319 #+end_listing
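On my reading of the motor control script format used above (an
assumption, since the interpreter is defined elsewhere), each entry is
a =[frame muscle-label strength]= triple:

#+begin_src clojure
;; Assumed reading of two entries from the script above:
[170 :lift-1 40]   ; at frame 170, contract :lift-1 with strength 40
[206 :lift-1 0]    ; at frame 206, relax it completely
#+end_src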
|
rlm@452
|
2320
|
rlm@452
|
2321
|
rlm@452
|
2322 #+caption: The small worm rolls around on the floor, driven
|
rlm@452
|
2323 #+caption: by the motor control program in listing \ref{worm-roll}.
|
rlm@452
|
2324 #+name: worm-roll-image
|
rlm@452
|
2325 #+ATTR_LaTeX: :width 12cm
|
rlm@452
|
2326 [[./images/worm-roll.png]]
|
rlm@452
|
2327
|
rlm@452
|
2328
|
rlm@452
|
2329 #+caption: After completing its adventures, the worm now knows
|
rlm@452
|
2330 #+caption: how its touch sensors are arranged along its skin. These
|
rlm@452
|
2331 #+caption: are the regions that were deemed important by
|
rlm@452
|
2332 #+caption: =learn-touch-regions=. Note that the worm has discovered
|
rlm@452
|
2333 #+caption: that it has six sides.
|
rlm@452
|
2334 #+name: worm-touch-map
|
rlm@452
|
2335 #+ATTR_LaTeX: :width 12cm
|
rlm@452
|
2336 [[./images/touch-learn.png]]
|
rlm@452
|
2337
|
rlm@452
|
2338 While simple, =learn-touch-regions= exploits regularities in both
|
rlm@452
|
2339 the worm's physiology and the worm's environment to correctly
|
rlm@452
|
2340 deduce that the worm has six sides. Note that =learn-touch-regions=
|
rlm@452
|
2341 would work just as well even if the worm's touch sense data were
|
rlm@452
|
2342 completely scrambled. The cross shape is just for convenience. This
|
rlm@452
|
2343 example justifies the use of pre-defined touch regions in =EMPATH=.
|
rlm@452
|
2344
|
rlm@465
|
2345 * COMMENT Contributions
|
rlm@454
|
2346
|
rlm@461
|
2347 In this thesis you have seen the =CORTEX= system, a complete
|
rlm@461
|
2348 environment for creating simulated creatures. You have seen how to
|
rlm@461
|
2349 implement five senses including touch, proprioception, hearing,
|
rlm@461
|
2350 vision, and muscle tension. You have seen how to create new creatures
|
rlm@461
|
2351 using Blender, a 3D modeling tool. I hope that =CORTEX= will be
|
rlm@461
|
2352 useful in further research projects. To this end I have included the
|
rlm@461
|
2353 full source to =CORTEX= along with a large suite of tests and
|
rlm@461
|
2354 examples. I have also created a user guide for =CORTEX= which is
|
rlm@461
|
2355 included in an appendix to this thesis.
|
rlm@447
|
2356
|
rlm@461
|
2357 You have also seen how I used =CORTEX= as a platform to attack the
|
rlm@461
|
2358 /action recognition/ problem, which is the problem of recognizing
|
rlm@461
|
2359 actions in video. You saw a simple system called =EMPATH= which
|
rlm@461
|
2360 identifies actions by first describing them in a body-centered,
|
rlm@461
|
2361 rich sense language, then inferring a full range of sensory
|
rlm@461
|
2362 experience from limited data using previous experience gained from
|
rlm@461
|
2363 free play.
|
rlm@447
|
2364
|
rlm@461
|
2365 As a minor digression, you also saw how I used =CORTEX= to enable a
|
rlm@461
|
2366 tiny worm to discover the topology of its skin simply by rolling on
|
rlm@461
|
2367 the ground.
|
rlm@461
|
2368
|
rlm@461
|
2369 In conclusion, the main contributions of this thesis are:
|
rlm@461
|
2370
|
rlm@461
|
2371 - =CORTEX=, a system for creating simulated creatures with rich
|
rlm@461
|
2372 senses.
|
rlm@461
|
2373 - =EMPATH=, a program for recognizing actions by imagining sensory
|
rlm@461
|
2374 experience.
|
rlm@447
|
2375
|
rlm@447
|
2376 # An anatomical joke:
|
rlm@447
|
2377 # - Training
|
rlm@447
|
2378 # - Skeletal imitation
|
rlm@447
|
2379 # - Sensory fleshing-out
|
rlm@447
|
2380 # - Classification
|