1 #+title: =CORTEX= | |
2 #+author: Robert McIntyre | |
3 #+email: rlm@mit.edu | |
4 #+description: Using embodied AI to facilitate Artificial Imagination. | |
5 #+keywords: AI, clojure, embodiment | |
6 #+LaTeX_CLASS_OPTIONS: [nofloat] | |
7 | |
8 * COMMENT templates | |
9 #+caption: | |
10 #+caption: | |
11 #+caption: | |
12 #+caption: | |
13 #+name: name | |
14 #+begin_listing clojure | |
15 #+BEGIN_SRC clojure | |
16 #+END_SRC | |
17 #+end_listing | |
18 | |
19 #+caption: | |
20 #+caption: | |
21 #+caption: | |
22 #+name: name | |
23 #+ATTR_LaTeX: :width 10cm | |
24 [[./images/aurellem-gray.png]] | |
25 | |
26 #+caption: | |
27 #+caption: | |
28 #+caption: | |
29 #+caption: | |
30 #+name: name | |
31 #+begin_listing clojure | |
32 #+BEGIN_SRC clojure | |
33 #+END_SRC | |
34 #+end_listing | |
35 | |
36 #+caption: | |
37 #+caption: | |
38 #+caption: | |
39 #+name: name | |
40 #+ATTR_LaTeX: :width 10cm | |
41 [[./images/aurellem-gray.png]] | |
42 | |
43 | |
44 * Empathy and Embodiment as problem solving strategies | |
45 | |
46 By the end of this thesis, you will have seen a novel approach to | |
47 interpreting video using embodiment and empathy. You will have also | |
48 seen one way to efficiently implement empathy for embodied | |
49 creatures. Finally, you will become familiar with =CORTEX=, a system | |
50 for designing and simulating creatures with rich senses, which you | |
51 may choose to use in your own research. | |
52 | |
53 This is the core vision of my thesis: That one of the important ways | |
54 in which we understand others is by imagining ourselves in their | |
position and empathically feeling experiences relative to our own
56 bodies. By understanding events in terms of our own previous | |
57 corporeal experience, we greatly constrain the possibilities of what | |
58 would otherwise be an unwieldy exponential search. This extra | |
59 constraint can be the difference between easily understanding what | |
60 is happening in a video and being completely lost in a sea of | |
61 incomprehensible color and movement. | |
62 | |
63 ** Recognizing actions in video is extremely difficult | |
64 | |
65 Consider for example the problem of determining what is happening | |
66 in a video of which this is one frame: | |
67 | |
68 #+caption: A cat drinking some water. Identifying this action is | |
69 #+caption: beyond the state of the art for computers. | |
70 #+ATTR_LaTeX: :width 7cm | |
71 [[./images/cat-drinking.jpg]] | |
72 | |
73 It is currently impossible for any computer program to reliably | |
74 label such a video as ``drinking''. And rightly so -- it is a very | |
75 hard problem! What features can you describe in terms of low level | |
76 functions of pixels that can even begin to describe at a high level | |
77 what is happening here? | |
78 | |
79 Or suppose that you are building a program that recognizes chairs. | |
80 How could you ``see'' the chair in figure \ref{hidden-chair}? | |
81 | |
82 #+caption: The chair in this image is quite obvious to humans, but I | |
83 #+caption: doubt that any modern computer vision program can find it. | |
84 #+name: hidden-chair | |
85 #+ATTR_LaTeX: :width 10cm | |
86 [[./images/fat-person-sitting-at-desk.jpg]] | |
87 | |
88 Finally, how is it that you can easily tell the difference between | |
how the girl's /muscles/ are working in figure \ref{girl}?
90 | |
91 #+caption: The mysterious ``common sense'' appears here as you are able | |
92 #+caption: to discern the difference in how the girl's arm muscles | |
93 #+caption: are activated between the two images. | |
94 #+name: girl | |
95 #+ATTR_LaTeX: :width 7cm | |
96 [[./images/wall-push.png]] | |
97 | |
98 Each of these examples tells us something about what might be going | |
99 on in our minds as we easily solve these recognition problems. | |
100 | |
The hidden chair shows us that we are strongly triggered by cues
102 relating to the position of human bodies, and that we can determine | |
103 the overall physical configuration of a human body even if much of | |
104 that body is occluded. | |
105 | |
106 The picture of the girl pushing against the wall tells us that we | |
107 have common sense knowledge about the kinetics of our own bodies. | |
108 We know well how our muscles would have to work to maintain us in | |
109 most positions, and we can easily project this self-knowledge to | |
110 imagined positions triggered by images of the human body. | |
111 | |
112 ** =EMPATH= neatly solves recognition problems | |
113 | |
114 I propose a system that can express the types of recognition | |
115 problems above in a form amenable to computation. It is split into | |
116 four parts: | |
117 | |
118 - Free/Guided Play :: The creature moves around and experiences the | |
119 world through its unique perspective. Many otherwise | |
120 complicated actions are easily described in the language of a | |
121 full suite of body-centered, rich senses. For example, | |
122 drinking is the feeling of water sliding down your throat, and | |
123 cooling your insides. It's often accompanied by bringing your | |
124 hand close to your face, or bringing your face close to water. | |
125 Sitting down is the feeling of bending your knees, activating | |
126 your quadriceps, then feeling a surface with your bottom and | |
127 relaxing your legs. These body-centered action descriptions | |
128 can be either learned or hard coded. | |
129 - Posture Imitation :: When trying to interpret a video or image, | |
130 the creature takes a model of itself and aligns it with | |
131 whatever it sees. This alignment can even cross species, as | |
132 when humans try to align themselves with things like ponies, | |
133 dogs, or other humans with a different body type. | |
134 - Empathy :: The alignment triggers associations with | |
135 sensory data from prior experiences. For example, the | |
136 alignment itself easily maps to proprioceptive data. Any | |
137 sounds or obvious skin contact in the video can to a lesser | |
138 extent trigger previous experience. Segments of previous | |
139 experiences are stitched together to form a coherent and | |
140 complete sensory portrait of the scene. | |
141 - Recognition :: With the scene described in terms of first | |
142 person sensory events, the creature can now run its | |
143 action-identification programs on this synthesized sensory | |
144 data, just as it would if it were actually experiencing the | |
145 scene first-hand. If previous experience has been accurately | |
146 retrieved, and if it is analogous enough to the scene, then | |
147 the creature will correctly identify the action in the scene. | |
148 | |
149 For example, I think humans are able to label the cat video as | |
150 ``drinking'' because they imagine /themselves/ as the cat, and | |
151 imagine putting their face up against a stream of water and | |
152 sticking out their tongue. In that imagined world, they can feel | |
153 the cool water hitting their tongue, and feel the water entering | |
154 their body, and are able to recognize that /feeling/ as drinking. | |
155 So, the label of the action is not really in the pixels of the | |
156 image, but is found clearly in a simulation inspired by those | |
157 pixels. An imaginative system, having been trained on drinking and | |
158 non-drinking examples and learning that the most important | |
159 component of drinking is the feeling of water sliding down one's | |
160 throat, would analyze a video of a cat drinking in the following | |
161 manner: | |
162 | |
163 1. Create a physical model of the video by putting a ``fuzzy'' | |
164 model of its own body in place of the cat. Possibly also create | |
165 a simulation of the stream of water. | |
166 | |
167 2. Play out this simulated scene and generate imagined sensory | |
168 experience. This will include relevant muscle contractions, a | |
169 close up view of the stream from the cat's perspective, and most | |
170 importantly, the imagined feeling of water entering the | |
171 mouth. The imagined sensory experience can come from a | |
172 simulation of the event, but can also be pattern-matched from | |
173 previous, similar embodied experience. | |
174 | |
175 3. The action is now easily identified as drinking by the sense of | |
176 taste alone. The other senses (such as the tongue moving in and | |
177 out) help to give plausibility to the simulated action. Note that | |
178 the sense of vision, while critical in creating the simulation, | |
179 is not critical for identifying the action from the simulation. | |
180 | |
181 For the chair examples, the process is even easier: | |
182 | |
183 1. Align a model of your body to the person in the image. | |
184 | |
185 2. Generate proprioceptive sensory data from this alignment. | |
186 | |
3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.
190 | |
191 4. Retrieve the feeling of your bottom resting on a surface, your | |
192 knees bent, and your leg muscles relaxed. | |
193 | |
194 5. This sensory information is consistent with the =sitting?= | |
195 sensory predicate, so you (and the entity in the image) must be | |
196 sitting. | |
197 | |
198 6. There must be a chair-like object since you are sitting. | |
199 | |
200 Empathy offers yet another alternative to the age-old AI | |
201 representation question: ``What is a chair?'' --- A chair is the | |
202 feeling of sitting. | |
203 | |
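As a sketch of what such a sensory predicate might look like for a
human model, consider the following. None of the helper functions
(=bottom-touch=, =knees-bent?=, =quadriceps-relaxed?=) exist in
=CORTEX=; they merely name the sensory components listed above, in
the same style as the worm predicates shown later.

#+begin_src clojure
(defn sitting?
  "Hypothetical sketch: the bottom feels a surface, the knees are
   bent, and the leg muscles are relaxed."
  [experiences]
  (let [now (peek experiences)]
    (and (< 0.2 (bottom-touch now))        ; pressure on the bottom
         (knees-bent? now)                 ; proprioceptive joint angles
         (quadriceps-relaxed? now))))      ; muscle tension
#+end_src
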
My program, =EMPATH=, uses this empathic problem solving technique
205 to interpret the actions of a simple, worm-like creature. | |
206 | |
207 #+caption: The worm performs many actions during free play such as | |
208 #+caption: curling, wiggling, and resting. | |
209 #+name: worm-intro | |
210 #+ATTR_LaTeX: :width 15cm | |
211 [[./images/worm-intro-white.png]] | |
212 | |
213 #+caption: =EMPATH= recognized and classified each of these | |
214 #+caption: poses by inferring the complete sensory experience | |
215 #+caption: from proprioceptive data. | |
216 #+name: worm-recognition-intro | |
217 #+ATTR_LaTeX: :width 15cm | |
218 [[./images/worm-poses.png]] | |
219 | |
220 One powerful advantage of empathic problem solving is that it | |
221 factors the action recognition problem into two easier problems. To | |
222 use empathy, you need an /aligner/, which takes the video and a | |
223 model of your body, and aligns the model with the video. Then, you | |
224 need a /recognizer/, which uses the aligned model to interpret the | |
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
227 the particulars of any visual representation of the actions. If you | |
228 teach the system what ``running'' is, and you have a good enough | |
229 aligner, the system will from then on be able to recognize running | |
230 from any point of view, even strange points of view like above or | |
231 underneath the runner. This is in contrast to action recognition | |
232 schemes that try to identify actions using a non-embodied approach. | |
233 If these systems learn about running as viewed from the side, they | |
234 will not automatically be able to recognize running from any other | |
235 viewpoint. | |
236 | |
237 Another powerful advantage is that using the language of multiple | |
body-centered rich senses to describe body-centered actions offers a
239 massive boost in descriptive capability. Consider how difficult it | |
240 would be to compose a set of HOG filters to describe the action of | |
241 a simple worm-creature ``curling'' so that its head touches its | |
tail, and then behold the simplicity of describing this action in a
243 language designed for the task (listing \ref{grand-circle-intro}): | |
244 | |
#+caption: Body-centered actions are best expressed in a body-centered
246 #+caption: language. This code detects when the worm has curled into a | |
247 #+caption: full circle. Imagine how you would replicate this functionality | |
248 #+caption: using low-level pixel features such as HOG filters! | |
249 #+name: grand-circle-intro | |
250 #+begin_listing clojure | |
251 #+begin_src clojure | |
252 (defn grand-circle? | |
253 "Does the worm form a majestic circle (one end touching the other)?" | |
254 [experiences] | |
255 (and (curled? experiences) | |
256 (let [worm-touch (:touch (peek experiences)) | |
257 tail-touch (worm-touch 0) | |
258 head-touch (worm-touch 4)] | |
259 (and (< 0.2 (contact worm-segment-bottom-tip tail-touch)) | |
260 (< 0.2 (contact worm-segment-top-tip head-touch)))))) | |
261 #+end_src | |
262 #+end_listing | |
263 | |
264 ** =CORTEX= is a toolkit for building sensate creatures | |
265 | |
266 I built =CORTEX= to be a general AI research platform for doing | |
267 experiments involving multiple rich senses and a wide variety and | |
268 number of creatures. I intend it to be useful as a library for many | |
269 more projects than just this thesis. =CORTEX= was necessary to meet | |
270 a need among AI researchers at CSAIL and beyond, which is that | |
people often invent neat ideas that are best expressed in the
272 language of creatures and senses, but in order to explore those | |
273 ideas they must first build a platform in which they can create | |
274 simulated creatures with rich senses! There are many ideas that | |
275 would be simple to execute (such as =EMPATH=), but attached to them | |
276 is the multi-month effort to make a good creature simulator. Often, | |
277 that initial investment of time proves to be too much, and the | |
278 project must make do with a lesser environment. | |
279 | |
280 =CORTEX= is well suited as an environment for embodied AI research | |
281 for three reasons: | |
282 | |
283 - You can create new creatures using Blender, a popular 3D modeling | |
284 program. Each sense can be specified using special blender nodes | |
with biologically inspired parameters. You need not write any
286 code to create a creature, and can use a wide library of | |
287 pre-existing blender models as a base for your own creatures. | |
288 | |
289 - =CORTEX= implements a wide variety of senses, including touch, | |
290 proprioception, vision, hearing, and muscle tension. Complicated | |
senses like touch and vision involve multiple sensory elements
292 embedded in a 2D surface. You have complete control over the | |
293 distribution of these sensor elements through the use of simple | |
294 png image files. In particular, =CORTEX= implements more | |
295 comprehensive hearing than any other creature simulation system | |
296 available. | |
297 | |
298 - =CORTEX= supports any number of creatures and any number of | |
senses. Time in =CORTEX= dilates so that the simulated creatures
always perceive a perfectly smooth flow of time, regardless of
301 the actual computational load. | |
302 | |
303 =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game | |
304 engine designed to create cross-platform 3D desktop games. =CORTEX= | |
305 is mainly written in clojure, a dialect of =LISP= that runs on the | |
306 java virtual machine (JVM). The API for creating and simulating | |
307 creatures and senses is entirely expressed in clojure, though many | |
308 senses are implemented at the layer of jMonkeyEngine or below. For | |
309 example, for the sense of hearing I use a layer of clojure code on | |
310 top of a layer of java JNI bindings that drive a layer of =C++= | |
311 code which implements a modified version of =OpenAL= to support | |
312 multiple listeners. =CORTEX= is the only simulation environment | |
313 that I know of that can support multiple entities that can each | |
314 hear the world from their own perspective. Other senses also | |
315 require a small layer of Java code. =CORTEX= also uses =bullet=, a | |
316 physics simulator written in =C=. | |
317 | |
318 #+caption: Here is the worm from figure \ref{worm-intro} modeled | |
319 #+caption: in Blender, a free 3D-modeling program. Senses and | |
320 #+caption: joints are described using special nodes in Blender. | |
#+name: blender-worm
322 #+ATTR_LaTeX: :width 12cm | |
323 [[./images/blender-worm.png]] | |
324 | |
Here are some things I anticipate that =CORTEX= might be used for:
326 | |
327 - exploring new ideas about sensory integration | |
328 - distributed communication among swarm creatures | |
329 - self-learning using free exploration, | |
330 - evolutionary algorithms involving creature construction | |
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
333 - imagination using subworlds | |
334 | |
335 During one test with =CORTEX=, I created 3,000 creatures each with | |
336 their own independent senses and ran them all at only 1/80 real | |
337 time. In another test, I created a detailed model of my own hand, | |
338 equipped with a realistic distribution of touch (more sensitive at | |
339 the fingertips), as well as eyes and ears, and it ran at around 1/4 | |
340 real time. | |
341 | |
342 #+BEGIN_LaTeX | |
343 \begin{sidewaysfigure} | |
344 \includegraphics[width=9.5in]{images/full-hand.png} | |
345 \caption{ | |
346 I modeled my own right hand in Blender and rigged it with all the | |
347 senses that {\tt CORTEX} supports. My simulated hand has a | |
348 biologically inspired distribution of touch sensors. The senses are | |
349 displayed on the right, and the simulation is displayed on the | |
350 left. Notice that my hand is curling its fingers, that it can see | |
351 its own finger from the eye in its palm, and that it can feel its | |
352 own thumb touching its palm.} | |
353 \end{sidewaysfigure} | |
354 #+END_LaTeX | |
355 | |
356 ** Contributions | |
357 | |
358 - I built =CORTEX=, a comprehensive platform for embodied AI | |
359 experiments. =CORTEX= supports many features lacking in other | |
systems, such as proper simulation of hearing. It is easy to create
361 new =CORTEX= creatures using Blender, a free 3D modeling program. | |
362 | |
363 - I built =EMPATH=, which uses =CORTEX= to identify the actions of | |
364 a worm-like creature using a computational model of empathy. | |
365 | |
366 * Building =CORTEX= | |
367 | |
368 I intend for =CORTEX= to be used as a general-purpose library for | |
369 building creatures and outfitting them with senses, so that it will | |
370 be useful for other researchers who want to test out ideas of their | |
own. To this end, wherever I have had to make architectural choices
about =CORTEX=, I have chosen to give as much freedom to the user as
possible, so that =CORTEX= may be used for things I have not
foreseen.
375 | |
376 ** Simulation or Reality? | |
377 | |
The most important architectural decision of all is the choice to
use a computer-simulated environment in the first place! The world
is a vast and rich place, and for now simulations are a very poor
reflection of its complexity. It may be that there is a significant
qualitative difference between dealing with senses in the real
world and dealing with pale facsimiles of them in a simulation.
384 What are the advantages and disadvantages of a simulation vs. | |
385 reality? | |
386 | |
387 *** Simulation | |
388 | |
389 The advantages of virtual reality are that when everything is a | |
390 simulation, experiments in that simulation are absolutely | |
391 reproducible. It's also easier to change the character and world | |
392 to explore new situations and different sensory combinations. | |
393 | |
394 If the world is to be simulated on a computer, then not only do | |
395 you have to worry about whether the character's senses are rich | |
enough to learn from the world, but also whether the world itself is
397 rendered with enough detail and realism to give enough working | |
398 material to the character's senses. To name just a few | |
399 difficulties facing modern physics simulators: destructibility of | |
400 the environment, simulation of water/other fluids, large areas, | |
401 nonrigid bodies, lots of objects, smoke. I don't know of any | |
402 computer simulation that would allow a character to take a rock | |
403 and grind it into fine dust, then use that dust to make a clay | |
404 sculpture, at least not without spending years calculating the | |
405 interactions of every single small grain of dust. Maybe a | |
406 simulated world with today's limitations doesn't provide enough | |
407 richness for real intelligence to evolve. | |
408 | |
409 *** Reality | |
410 | |
411 The other approach for playing with senses is to hook your | |
412 software up to real cameras, microphones, robots, etc., and let it | |
413 loose in the real world. This has the advantage of eliminating | |
414 concerns about simulating the world at the expense of increasing | |
415 the complexity of implementing the senses. Instead of just | |
416 grabbing the current rendered frame for processing, you have to | |
417 use an actual camera with real lenses and interact with photons to | |
418 get an image. It is much harder to change the character, which is | |
419 now partly a physical robot of some sort, since doing so involves | |
420 changing things around in the real world instead of modifying | |
421 lines of code. While the real world is very rich and definitely | |
422 provides enough stimulation for intelligence to develop as | |
423 evidenced by our own existence, it is also uncontrollable in the | |
424 sense that a particular situation cannot be recreated perfectly or | |
425 saved for later use. It is harder to conduct science because it is | |
426 harder to repeat an experiment. The worst thing about using the | |
427 real world instead of a simulation is the matter of time. Instead | |
428 of simulated time you get the constant and unstoppable flow of | |
429 real time. This severely limits the sorts of software you can use | |
430 to program the AI because all sense inputs must be handled in real | |
431 time. Complicated ideas may have to be implemented in hardware or | |
432 may simply be impossible given the current speed of our | |
433 processors. Contrast this with a simulation, in which the flow of | |
434 time in the simulated world can be slowed down to accommodate the | |
435 limitations of the character's programming. In terms of cost, | |
436 doing everything in software is far cheaper than building custom | |
437 real-time hardware. All you need is a laptop and some patience. | |
438 | |
** Because of Time, simulation is preferable to reality
440 | |
441 I envision =CORTEX= being used to support rapid prototyping and | |
442 iteration of ideas. Even if I could put together a well constructed | |
443 kit for creating robots, it would still not be enough because of | |
444 the scourge of real-time processing. Anyone who wants to test their | |
445 ideas in the real world must always worry about getting their | |
446 algorithms to run fast enough to process information in real time. | |
447 The need for real time processing only increases if multiple senses | |
448 are involved. In the extreme case, even simple algorithms will have | |
449 to be accelerated by ASIC chips or FPGAs, turning what would | |
otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the
simulation in proportion to the amount of processing each frame.
From the perspective of the creatures inside the simulation, time
always appears to flow at a constant rate, regardless of how
complicated the environment becomes or how many creatures are in
457 the simulation. The cost is that =CORTEX= can sometimes run slower | |
458 than real time. This can also be an advantage, however --- | |
459 simulations of very simple creatures in =CORTEX= generally run at | |
460 40x on my machine! | |
461 | |
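To make the bookkeeping concrete, here is a toy illustration (not
=CORTEX= code) of what time dilation buys you: each frame advances
simulated time by a fixed tick, no matter how much wall-clock time
the frame's processing consumed.

#+begin_src clojure
(defn dilated-times
  "Run n-frames frames at a fixed simulation tick of 1/60 s, where
   each frame's processing costs frame-cost-ms of real time.
   Returns [simulated-seconds real-seconds]."
  [n-frames frame-cost-ms]
  [(/ n-frames 60.0)                         ; simulated time: fixed tick
   (* n-frames (/ frame-cost-ms 1000.0))])   ; real time: depends on load

;; (dilated-times 600 100) => [10.0 60.0]
;; The creature experiences ten smooth seconds while a full minute of
;; real time elapses.
#+end_src
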
462 ** What is a sense? | |
463 | |
464 If =CORTEX= is to support a wide variety of senses, it would help | |
465 to have a better understanding of what a ``sense'' actually is! | |
466 While vision, touch, and hearing all seem like they are quite | |
different things, I was surprised to learn during the course of
468 this thesis that they (and all physical senses) can be expressed as | |
469 exactly the same mathematical object due to a dimensional argument! | |
470 | |
471 Human beings are three-dimensional objects, and the nerves that | |
472 transmit data from our various sense organs to our brain are | |
473 essentially one-dimensional. This leaves up to two dimensions in | |
474 which our sensory information may flow. For example, imagine your | |
475 skin: it is a two-dimensional surface around a three-dimensional | |
476 object (your body). It has discrete touch sensors embedded at | |
477 various points, and the density of these sensors corresponds to the | |
478 sensitivity of that region of skin. Each touch sensor connects to a | |
479 nerve, all of which eventually are bundled together as they travel | |
480 up the spinal cord to the brain. Intersect the spinal nerves with a | |
481 guillotining plane and you will see all of the sensory data of the | |
482 skin revealed in a roughly circular two-dimensional image which is | |
483 the cross section of the spinal cord. Points on this image that are | |
484 close together in this circle represent touch sensors that are | |
485 /probably/ close together on the skin, although there is of course | |
486 some cutting and rearrangement that has to be done to transfer the | |
487 complicated surface of the skin onto a two dimensional image. | |
488 | |
489 Most human senses consist of many discrete sensors of various | |
490 properties distributed along a surface at various densities. For | |
491 skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's | |
492 disks, and Ruffini's endings, which detect pressure and vibration | |
493 of various intensities. For ears, it is the stereocilia distributed | |
494 along the basilar membrane inside the cochlea; each one is | |
495 sensitive to a slightly different frequency of sound. For eyes, it | |
496 is rods and cones distributed along the surface of the retina. In | |
497 each case, we can describe the sense with a surface and a | |
498 distribution of sensors along that surface. | |
499 | |
500 The neat idea is that every human sense can be effectively | |
501 described in terms of a surface containing embedded sensors. If the | |
502 sense had any more dimensions, then there wouldn't be enough room | |
in the spinal cord to transmit the information!
504 | |
505 Therefore, =CORTEX= must support the ability to create objects and | |
506 then be able to ``paint'' points along their surfaces to describe | |
507 each sense. | |
508 | |
509 Fortunately this idea is already a well known computer graphics | |
technique called /UV-mapping/. The three-dimensional surface
511 of a model is cut and smooshed until it fits on a two-dimensional | |
512 image. You paint whatever you want on that image, and when the | |
513 three-dimensional shape is rendered in a game the smooshing and | |
514 cutting is reversed and the image appears on the three-dimensional | |
515 object. | |
516 | |
517 To make a sense, interpret the UV-image as describing the | |
distribution of that sense's sensors. To get different types of
519 sensors, you can either use a different color for each type of | |
520 sensor, or use multiple UV-maps, each labeled with that sensor | |
521 type. I generally use a white pixel to mean the presence of a | |
522 sensor and a black pixel to mean the absence of a sensor, and use | |
523 one UV-map for each sensor-type within a given sense. | |
524 | |
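As a minimal sketch of this convention (an assumed helper, not
=CORTEX='s actual sensor-reading code), collecting the sensor
locations of one type is just a matter of finding the white pixels in
the UV-image:

#+begin_src clojure
(import '(javax.imageio ImageIO)
        '(java.io File))

(defn white-pixel-coordinates
  "Return the [x y] UV coordinates of every white pixel in the image
   at path. Each coordinate marks one sensor of the given type."
  [path]
  (let [image (ImageIO/read (File. path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          ;; mask off the alpha byte of the packed ARGB integer
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
      [x y])))
#+end_src
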
#+caption: The UV-map for an elongated icosphere. The white
526 #+caption: dots each represent a touch sensor. They are dense | |
527 #+caption: in the regions that describe the tip of the finger, | |
528 #+caption: and less dense along the dorsal side of the finger | |
529 #+caption: opposite the tip. | |
530 #+name: finger-UV | |
531 #+ATTR_latex: :width 10cm | |
532 [[./images/finger-UV.png]] | |
533 | |
534 #+caption: Ventral side of the UV-mapped finger. Notice the | |
535 #+caption: density of touch sensors at the tip. | |
536 #+name: finger-side-view | |
537 #+ATTR_LaTeX: :width 10cm | |
538 [[./images/finger-1.png]] | |
539 | |
540 ** Video game engines provide ready-made physics and shading | |
541 | |
542 I did not need to write my own physics simulation code or shader to | |
543 build =CORTEX=. Doing so would lead to a system that is impossible | |
544 for anyone but myself to use anyway. Instead, I use a video game | |
engine as a base and modify it to accommodate the additional needs
546 of =CORTEX=. Video game engines are an ideal starting point to | |
547 build =CORTEX=, because they are not far from being creature | |
548 building systems themselves. | |
549 | |
550 First off, general purpose video game engines come with a physics | |
551 engine and lighting / sound system. The physics system provides | |
552 tools that can be co-opted to serve as touch, proprioception, and | |
553 muscles. Since some games support split screen views, a good video | |
554 game engine will allow you to efficiently create multiple cameras | |
555 in the simulated world that can be used as eyes. Video game systems | |
556 offer integrated asset management for things like textures and | |
creature models, providing an avenue for defining creatures. They
558 also understand UV-mapping, since this technique is used to apply a | |
559 texture to a model. Finally, because video game engines support a | |
560 large number of users, as long as =CORTEX= doesn't stray too far | |
561 from the base system, other researchers can turn to this community | |
562 for help when doing their research. | |
563 | |
564 ** =CORTEX= is based on jMonkeyEngine3 | |
565 | |
566 While preparing to build =CORTEX= I studied several video game | |
567 engines to see which would best serve as a base. The top contenders | |
568 were: | |
569 | |
570 - [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by ID | |
571 software in 1997. All the source code was released by ID | |
572 software into the Public Domain several years ago, and as a | |
573 result it has been ported to many different languages. This | |
574 engine was famous for its advanced use of realistic shading | |
575 and had decent and fast physics simulation. The main advantage | |
576 of the Quake II engine is its simplicity, but I ultimately | |
577 rejected it because the engine is too tied to the concept of a | |
578 first-person shooter game. One of the problems I had was that | |
579 there does not seem to be any easy way to attach multiple | |
580 cameras to a single character. There are also several physics | |
581 clipping issues that are corrected in a way that only applies | |
582 to the main character and do not apply to arbitrary objects. | |
583 | |
584 - [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II | |
585 and Quake I engines and is used by Valve in the Half-Life | |
586 series of games. The physics simulation in the Source Engine | |
587 is quite accurate and probably the best out of all the engines | |
588 I investigated. There is also an extensive community actively | |
589 working with the engine. However, applications that use the | |
590 Source Engine must be written in C++, the code is not open, it | |
591 only runs on Windows, and the tools that come with the SDK to | |
592 handle models and textures are complicated and awkward to use. | |
593 | |
594 - [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating | |
595 games in Java. It uses OpenGL to render to the screen and uses | |
scene graphs to avoid drawing things that do not appear on the
597 screen. It has an active community and several games in the | |
598 pipeline. The engine was not built to serve any particular | |
599 game but is instead meant to be used for any 3D game. | |
600 | |
I chose jMonkeyEngine3 because it had the most features
602 out of all the free projects I looked at, and because I could then | |
603 write my code in clojure, an implementation of =LISP= that runs on | |
604 the JVM. | |
605 | |
606 ** =CORTEX= uses Blender to create creature models | |
607 | |
608 For the simple worm-like creatures I will use later on in this | |
609 thesis, I could define a simple API in =CORTEX= that would allow | |
610 one to create boxes, spheres, etc., and leave that API as the sole | |
611 way to create creatures. However, for =CORTEX= to truly be useful | |
612 for other projects, it needs a way to construct complicated | |
613 creatures. If possible, it would be nice to leverage work that has | |
614 already been done by the community of 3D modelers, or at least | |
enable people who are talented at modeling but not programming to
616 design =CORTEX= creatures. | |
617 | |
618 Therefore, I use Blender, a free 3D modeling program, as the main | |
619 way to create creatures in =CORTEX=. However, the creatures modeled | |
620 in Blender must also be simple to simulate in jMonkeyEngine3's game | |
621 engine, and must also be easy to rig with =CORTEX='s senses. I | |
622 accomplish this with extensive use of Blender's ``empty nodes.'' | |
623 | |
624 Empty nodes have no mass, physical presence, or appearance, but | |
625 they can hold metadata and have names. I use a tree structure of | |
626 empty nodes to specify senses in the following manner: | |
627 | |
628 - Create a single top-level empty node whose name is the name of | |
629 the sense. | |
630 - Add empty nodes which each contain meta-data relevant to the | |
631 sense, including a UV-map describing the number/distribution of | |
632 sensors if applicable. | |
633 - Make each empty-node the child of the top-level node. | |
634 | |
#+caption: An example of annotating a creature model with empty
636 #+caption: nodes to describe the layout of senses. There are | |
637 #+caption: multiple empty nodes which each describe the position | |
638 #+caption: of muscles, ears, eyes, or joints. | |
639 #+name: sense-nodes | |
640 #+ATTR_LaTeX: :width 10cm | |
641 [[./images/empty-sense-nodes.png]] | |
642 | |
643 ** Bodies are composed of segments connected by joints | |
644 | |
645 Blender is a general purpose animation tool, which has been used in | |
646 the past to create high quality movies such as Sintel | |
647 \cite{blender}. Though Blender can model and render even complicated | |
things like water, it is crucial to keep models that are meant to
be simulated as creatures simple. =Bullet=, which =CORTEX= uses
through jMonkeyEngine3, is a rigid-body physics system. This offers
651 a compromise between the expressiveness of a game level and the | |
652 speed at which it can be simulated, and it means that creatures | |
653 should be naturally expressed as rigid components held together by | |
654 joint constraints. | |
655 | |
But humans are more like a squishy bag wrapped around some
hard bones which define the overall shape. When we move, our skin
bends and stretches to accommodate the new positions of our bones.
659 | |
660 One way to make bodies composed of rigid pieces connected by joints | |
/seem/ more human-like is to use an /armature/ (or /rigging/)
system, which defines an overall ``body mesh'' and defines how the
663 mesh deforms as a function of the position of each ``bone'' which | |
664 is a standard rigid body. This technique is used extensively to | |
model humans and create realistic animations. It is not a good
technique for physical simulation, however, because it creates a lie
-- the skin is not a physical part of the simulation and does not
interact with any objects in the world or itself. Objects will pass
right through the skin until they come in contact with the
underlying bone, which is a physical object. Without simulating
671 the skin, the sense of touch has little meaning, and the creature's | |
672 own vision will lie to it about the true extent of its body. | |
673 Simulating the skin as a physical object requires some way to | |
674 continuously update the physical model of the skin along with the | |
675 movement of the bones, which is unacceptably slow compared to rigid | |
676 body simulation. | |
677 | |
678 Therefore, instead of using the human-like ``deformable bag of | |
679 bones'' approach, I decided to base my body plans on multiple solid | |
680 objects that are connected by joints, inspired by the robot =EVE= | |
681 from the movie WALL-E. | |
682 | |
683 #+caption: =EVE= from the movie WALL-E. This body plan turns | |
684 #+caption: out to be much better suited to my purposes than a more | |
685 #+caption: human-like one. | |
686 #+ATTR_LaTeX: :width 10cm | |
687 [[./images/Eve.jpg]] | |
688 | |
689 =EVE='s body is composed of several rigid components that are held | |
690 together by invisible joint constraints. This is what I mean by | |
691 ``eve-like''. The main reason that I use eve-style bodies is for | |
692 efficiency, and so that there will be correspondence between the | |
AI's senses and the physical presence of its body. Each individual
694 section is simulated by a separate rigid body that corresponds | |
695 exactly with its visual representation and does not change. | |
696 Sections are connected by invisible joints that are well supported | |
697 in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3, | |
698 can efficiently simulate hundreds of rigid bodies connected by | |
699 joints. Just because sections are rigid does not mean they have to | |
700 stay as one piece forever; they can be dynamically replaced with | |
701 multiple sections to simulate splitting in two. This could be used | |
702 to simulate retractable claws or =EVE='s hands, which are able to | |
703 coalesce into one object in the movie. | |
704 | |
705 *** Solidifying/Connecting a body | |
706 | |
707 =CORTEX= creates a creature in two steps: first, it traverses the | |
708 nodes in the blender file and creates physical representations for | |
709 any of them that have mass defined in their blender meta-data. | |
710 | |
711 #+caption: Program for iterating through the nodes in a blender file | |
712 #+caption: and generating physical jMonkeyEngine3 objects with mass | |
713 #+caption: and a matching physics shape. | |
714 #+name: name | |
715 #+begin_listing clojure | |
716 #+begin_src clojure | |
717 (defn physical! | |
718 "Iterate through the nodes in creature and make them real physical | |
719 objects in the simulation." | |
720 [#^Node creature] | |
721 (dorun | |
722 (map | |
723 (fn [geom] | |
724 (let [physics-control | |
725 (RigidBodyControl. | |
726 (HullCollisionShape. | |
727 (.getMesh geom)) | |
728 (if-let [mass (meta-data geom "mass")] | |
729 (float mass) (float 1)))] | |
730 (.addControl geom physics-control))) | |
731 (filter #(isa? (class %) Geometry ) | |
732 (node-seq creature))))) | |
733 #+end_src | |
734 #+end_listing | |
735 | |
736 The next step to making a proper body is to connect those pieces | |
737 together with joints. jMonkeyEngine has a large array of joints | |
738 available via =bullet=, such as Point2Point, Cone, Hinge, and a | |
739 generic Six Degree of Freedom joint, with or without spring | |
740 restitution. | |
741 | |
742 Joints are treated a lot like proper senses, in that there is a | |
743 top-level empty node named ``joints'' whose children each | |
744 represent a joint. | |
745 | |
746 #+caption: View of the hand model in Blender showing the main ``joints'' | |
747 #+caption: node (highlighted in yellow) and its children which each | |
748 #+caption: represent a joint in the hand. Each joint node has metadata | |
749 #+caption: specifying what sort of joint it is. | |
750 #+name: blender-hand | |
751 #+ATTR_LaTeX: :width 10cm | |
752 [[./images/hand-screenshot1.png]] | |
753 | |
754 | |
755 =CORTEX='s procedure for binding the creature together with joints | |
756 is as follows: | |
757 | |
758 - Find the children of the ``joints'' node. | |
759 - Determine the two spatials the joint is meant to connect. | |
760 - Create the joint based on the meta-data of the empty node. | |
761 | |
762 The higher order function =sense-nodes= from =cortex.sense= | |
763 simplifies finding the joints based on their parent ``joints'' | |
764 node. | |
765 | |
#+caption: Retrieving the child empty nodes from a single
#+caption: named empty node is a common pattern in =CORTEX=;
#+caption: further instances of this technique for the senses
#+caption: will be omitted.
770 #+name: get-empty-nodes | |
771 #+begin_listing clojure | |
772 #+begin_src clojure | |
773 (defn sense-nodes | |
774 "For some senses there is a special empty blender node whose | |
775 children are considered markers for an instance of that sense. This | |
776 function generates functions to find those children, given the name | |
777 of the special parent node." | |
778 [parent-name] | |
779 (fn [#^Node creature] | |
780 (if-let [sense-node (.getChild creature parent-name)] | |
781 (seq (.getChildren sense-node)) []))) | |
782 | |
783 (def | |
784 ^{:doc "Return the children of the creature's \"joints\" node." | |
785 :arglists '([creature])} | |
786 joints | |
787 (sense-nodes "joints")) | |
788 #+end_src | |
789 #+end_listing | |
790 | |
791 To find a joint's targets, =CORTEX= creates a small cube, centered | |
792 around the empty-node, and grows the cube exponentially until it | |
793 intersects two physical objects. The objects are ordered according | |
794 to the joint's rotation, with the first one being the object that | |
has more negative coordinates in the joint's reference frame.
Since the objects must be physical, the empty-node itself escapes
detection; for the same reason, =joint-targets= must be called
/after/ =physical!=.
799 | |
800 #+caption: Program to find the targets of a joint node by | |
#+caption: exponential growth of a search cube.
802 #+name: joint-targets | |
803 #+begin_listing clojure | |
804 #+begin_src clojure | |
805 (defn joint-targets | |
806 "Return the two closest two objects to the joint object, ordered | |
807 from bottom to top according to the joint's rotation." | |
808 [#^Node parts #^Node joint] | |
809 (loop [radius (float 0.01)] | |
810 (let [results (CollisionResults.)] | |
811 (.collideWith | |
812 parts | |
813 (BoundingBox. (.getWorldTranslation joint) | |
814 radius radius radius) results) | |
815 (let [targets | |
816 (distinct | |
817 (map #(.getGeometry %) results))] | |
818 (if (>= (count targets) 2) | |
819 (sort-by | |
820 #(let [joint-ref-frame-position | |
821 (jme-to-blender | |
822 (.mult | |
823 (.inverse (.getWorldRotation joint)) | |
824 (.subtract (.getWorldTranslation %) | |
825 (.getWorldTranslation joint))))] | |
826 (.dot (Vector3f. 1 1 1) joint-ref-frame-position)) | |
827 (take 2 targets)) | |
828 (recur (float (* radius 2)))))))) | |
829 #+end_src | |
830 #+end_listing | |
831 | |
832 Once =CORTEX= finds all joints and targets, it creates them using | |
833 a dispatch on the metadata of each joint node. | |
834 | |
835 #+caption: Program to dispatch on blender metadata and create joints | |
#+caption: suitable for physical simulation.
837 #+name: joint-dispatch | |
838 #+begin_listing clojure | |
839 #+begin_src clojure | |
840 (defmulti joint-dispatch | |
841 "Translate blender pseudo-joints into real JME joints." | |
842 (fn [constraints & _] | |
843 (:type constraints))) | |
844 | |
845 (defmethod joint-dispatch :point | |
846 [constraints control-a control-b pivot-a pivot-b rotation] | |
847 (doto (SixDofJoint. control-a control-b pivot-a pivot-b false) | |
848 (.setLinearLowerLimit Vector3f/ZERO) | |
849 (.setLinearUpperLimit Vector3f/ZERO))) | |
850 | |
851 (defmethod joint-dispatch :hinge | |
852 [constraints control-a control-b pivot-a pivot-b rotation] | |
853 (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X) | |
854 [limit-1 limit-2] (:limit constraints) | |
855 hinge-axis (.mult rotation (blender-to-jme axis))] | |
856 (doto (HingeJoint. control-a control-b pivot-a pivot-b | |
857 hinge-axis hinge-axis) | |
858 (.setLimit limit-1 limit-2)))) | |
859 | |
860 (defmethod joint-dispatch :cone | |
861 [constraints control-a control-b pivot-a pivot-b rotation] | |
862 (let [limit-xz (:limit-xz constraints) | |
863 limit-xy (:limit-xy constraints) | |
864 twist (:twist constraints)] | |
865 (doto (ConeJoint. control-a control-b pivot-a pivot-b | |
866 rotation rotation) | |
867 (.setLimit (float limit-xz) (float limit-xy) | |
868 (float twist))))) | |
869 #+end_src | |
870 #+end_listing | |
871 | |
All that is left for joints is to combine the above pieces into
something that can operate on the collection of nodes that a
874 blender file represents. | |
875 | |
876 #+caption: Program to completely create a joint given information | |
877 #+caption: from a blender file. | |
878 #+name: connect | |
879 #+begin_listing clojure | |
880 #+begin_src clojure | |
881 (defn connect | |
882 "Create a joint between 'obj-a and 'obj-b at the location of | |
883 'joint. The type of joint is determined by the metadata on 'joint. | |
884 | |
885 Here are some examples: | |
886 {:type :point} | |
887 {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)} | |
888 (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints) | |
889 | |
   {:type :cone :limit-xz 0
                :limit-xy 0
                :twist 0}   (use XZY rotation mode in blender!)"
893 [#^Node obj-a #^Node obj-b #^Node joint] | |
894 (let [control-a (.getControl obj-a RigidBodyControl) | |
895 control-b (.getControl obj-b RigidBodyControl) | |
896 joint-center (.getWorldTranslation joint) | |
897 joint-rotation (.toRotationMatrix (.getWorldRotation joint)) | |
898 pivot-a (world-to-local obj-a joint-center) | |
899 pivot-b (world-to-local obj-b joint-center)] | |
900 (if-let | |
901 [constraints (map-vals eval (read-string (meta-data joint "joint")))] | |
902 ;; A side-effect of creating a joint registers | |
903 ;; it with both physics objects which in turn | |
904 ;; will register the joint with the physics system | |
905 ;; when the simulation is started. | |
906 (joint-dispatch constraints | |
907 control-a control-b | |
908 pivot-a pivot-b | |
909 joint-rotation)))) | |
910 #+end_src | |
911 #+end_listing | |
912 | |
913 In general, whenever =CORTEX= exposes a sense (or in this case | |
914 physicality), it provides a function of the type =sense!=, which | |
915 takes in a collection of nodes and augments it to support that | |
sense. The function returns any controls necessary to use that
sense. In this case =body!= creates a physical body and returns no
918 control functions. | |
919 | |
920 #+caption: Program to give joints to a creature. | |
921 #+name: name | |
922 #+begin_listing clojure | |
923 #+begin_src clojure | |
924 (defn joints! | |
925 "Connect the solid parts of the creature with physical joints. The | |
926 joints are taken from the \"joints\" node in the creature." | |
927 [#^Node creature] | |
928 (dorun | |
929 (map | |
930 (fn [joint] | |
931 (let [[obj-a obj-b] (joint-targets creature joint)] | |
932 (connect obj-a obj-b joint))) | |
933 (joints creature)))) | |
934 (defn body! | |
935 "Endow the creature with a physical body connected with joints. The | |
936 particulars of the joints and the masses of each body part are | |
937 determined in blender." | |
938 [#^Node creature] | |
939 (physical! creature) | |
940 (joints! creature)) | |
941 #+end_src | |
942 #+end_listing | |
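
As a hypothetical usage sketch (the model path and the
=load-blender-model= helper are assumptions, not necessarily the
exact names used elsewhere in =CORTEX=), giving a creature a body is
a single call:

#+begin_src clojure
;; load a creature from its blender file and make it physical
(def worm (load-blender-model "Models/worm/worm.blend")) ; hypothetical path
(body! worm)  ; solidify the segments and connect them with joints
#+end_src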
943 | |
944 All of the code you have just seen amounts to only 130 lines, yet | |
945 because it builds on top of Blender and jMonkeyEngine3, those few | |
946 lines pack quite a punch! | |
947 | |
948 The hand from figure \ref{blender-hand}, which was modeled after | |
949 my own right hand, can now be given joints and simulated as a | |
950 creature. | |
951 | |
952 #+caption: With the ability to create physical creatures from blender, | |
#+caption: =CORTEX= gets one step closer to becoming a full creature
954 #+caption: simulation environment. | |
955 #+name: name | |
956 #+ATTR_LaTeX: :width 15cm | |
957 [[./images/physical-hand.png]] | |
958 | |
959 ** Eyes reuse standard video game components | |
960 | |
961 Vision is one of the most important senses for humans, so I need to | |
962 build a simulated sense of vision for my AI. I will do this with | |
963 simulated eyes. Each eye can be independently moved and should see | |
964 its own version of the world depending on where it is. | |
965 | |
966 Making these simulated eyes a reality is simple because | |
967 jMonkeyEngine already contains extensive support for multiple views | |
968 of the same 3D simulated world. The reason jMonkeyEngine has this | |
969 support is because the support is necessary to create games with | |
970 split-screen views. Multiple views are also used to create | |
971 efficient pseudo-reflections by rendering the scene from a certain | |
972 perspective and then projecting it back onto a surface in the 3D | |
973 world. | |
974 | |
975 #+caption: jMonkeyEngine supports multiple views to enable | |
976 #+caption: split-screen games, like GoldenEye, which was one of | |
977 #+caption: the first games to use split-screen views. | |
978 #+name: name | |
979 #+ATTR_LaTeX: :width 10cm | |
980 [[./images/goldeneye-4-player.png]] | |
981 | |
982 *** A Brief Description of jMonkeyEngine's Rendering Pipeline | |
983 | |
984 jMonkeyEngine allows you to create a =ViewPort=, which represents a | |
985 view of the simulated world. You can create as many of these as you | |
986 want. Every frame, the =RenderManager= iterates through each | |
987 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there | |
988 is a =FrameBuffer= which represents the rendered image in the GPU. | |
989 | |
990 #+caption: =ViewPorts= are cameras in the world. During each frame, | |
991 #+caption: the =RenderManager= records a snapshot of what each view | |
992 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects. | |
993 #+name: rendermanagers | |
994 #+ATTR_LaTeX: :width 10cm | |
995 [[./images/diagram_rendermanager2.png]] | |
996 | |
997 Each =ViewPort= can have any number of attached =SceneProcessor= | |
998 objects, which are called every time a new frame is rendered. A | |
999 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do | |
1000 whatever it wants to the data. Often this consists of invoking GPU | |
1001 specific operations on the rendered image. The =SceneProcessor= can | |
1002 also copy the GPU image data to RAM and process it with the CPU. | |
1003 | |
1004 *** Appropriating Views for Vision | |
1005 | |
1006 Each eye in the simulated creature needs its own =ViewPort= so | |
1007 that it can see the world from its own perspective. To this | |
1008 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to | |
1009 any arbitrary continuation function for further processing. That | |
1010 continuation function may perform both CPU and GPU operations on | |
1011 the data. To make this easy for the continuation function, the | |
1012 =SceneProcessor= maintains appropriately sized buffers in RAM to | |
1013 hold the data. It does not do any copying from the GPU to the CPU | |
1014 itself because it is a slow operation. | |
1015 | |
#+caption: Function to make the rendered scene in jMonkeyEngine
1017 #+caption: available for further processing. | |
1018 #+name: pipeline-1 | |
1019 #+begin_listing clojure | |
1020 #+begin_src clojure | |
1021 (defn vision-pipeline | |
1022 "Create a SceneProcessor object which wraps a vision processing | |
1023 continuation function. The continuation is a function that takes | |
1024 [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi], | |
1025 each of which has already been appropriately sized." | |
1026 [continuation] | |
1027 (let [byte-buffer (atom nil) | |
1028 renderer (atom nil) | |
1029 image (atom nil)] | |
1030 (proxy [SceneProcessor] [] | |
1031 (initialize | |
1032 [renderManager viewPort] | |
1033 (let [cam (.getCamera viewPort) | |
1034 width (.getWidth cam) | |
1035 height (.getHeight cam)] | |
1036 (reset! renderer (.getRenderer renderManager)) | |
1037 (reset! byte-buffer | |
1038 (BufferUtils/createByteBuffer | |
1039 (* width height 4))) | |
1040 (reset! image (BufferedImage. | |
1041 width height | |
1042 BufferedImage/TYPE_4BYTE_ABGR)))) | |
1043 (isInitialized [] (not (nil? @byte-buffer))) | |
1044 (reshape [_ _ _]) | |
1045 (preFrame [_]) | |
1046 (postQueue [_]) | |
1047 (postFrame | |
1048 [#^FrameBuffer fb] | |
1049 (.clear @byte-buffer) | |
1050 (continuation @renderer fb @byte-buffer @image)) | |
1051 (cleanup [])))) | |
1052 #+end_src | |
1053 #+end_listing | |
1054 | |
1055 The continuation function given to =vision-pipeline= above will be | |
1056 given a =Renderer= and three containers for image data. The | |
1057 =FrameBuffer= references the GPU image data, but the pixel data | |
1058 can not be used directly on the CPU. The =ByteBuffer= and | |
1059 =BufferedImage= are initially "empty" but are sized to hold the | |
1060 data in the =FrameBuffer=. I call transferring the GPU image data | |
1061 to the CPU structures "mixing" the image data. | |
1062 | |
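As a hedged example of such a continuation (this is an illustration,
not a function from =CORTEX=), here is one that mixes the frame into
CPU memory and inspects it. It assumes jMonkeyEngine's
=Renderer.readFrameBuffer= copies the =FrameBuffer= contents into the
supplied =ByteBuffer=.

#+begin_src clojure
(defn example-continuation
  "Copy the GPU frame into the ByteBuffer (\"mixing\") and report the
   first pixel byte. A stand-in for real vision processing."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (.readFrameBuffer r fb bb)                        ; GPU -> CPU copy
  (println "first byte of this frame:" (.get bb 0)))
#+end_src
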
1063 *** Optical sensor arrays are described with images and referenced with metadata | |
1064 | |
1065 The vision pipeline described above handles the flow of rendered | |
1066 images. Now, =CORTEX= needs simulated eyes to serve as the source | |
1067 of these images. | |
1068 | |
Eyes are described in blender in the same way as joints. They
are zero-dimensional empty objects with no geometry whose local
coordinate system determines the orientation of the resulting eye.
1072 All eyes are children of a parent node named "eyes" just as all | |
1073 joints have a parent named "joints". An eye binds to the nearest | |
1074 physical object with =bind-sense=. | |
1075 | |
1076 #+caption: Here, the camera is created based on metadata on the | |
1077 #+caption: eye-node and attached to the nearest physical object | |
1078 #+caption: with =bind-sense= | |
1079 #+name: add-eye | |
#+begin_listing clojure
#+begin_src clojure
1081 (defn add-eye! | |
1082 "Create a Camera centered on the current position of 'eye which | |
1083 follows the closest physical node in 'creature. The camera will | |
1084 point in the X direction and use the Z vector as up as determined | |
1085 by the rotation of these vectors in blender coordinate space. Use | |
1086 XZY rotation for the node in blender." | |
1087 [#^Node creature #^Spatial eye] | |
1088 (let [target (closest-node creature eye) | |
1089 [cam-width cam-height] | |
1090 ;;[640 480] ;; graphics card on laptop doesn't support | |
                    ;; arbitrary dimensions.
1092 (eye-dimensions eye) | |
1093 cam (Camera. cam-width cam-height) | |
1094 rot (.getWorldRotation eye)] | |
1095 (.setLocation cam (.getWorldTranslation eye)) | |
1096 (.lookAtDirection | |
1097 cam ; this part is not a mistake and | |
1098 (.mult rot Vector3f/UNIT_X) ; is consistent with using Z in | |
1099 (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector. | |
1100 (.setFrustumPerspective | |
1101 cam (float 45) | |
1102 (float (/ (.getWidth cam) (.getHeight cam))) | |
1103 (float 1) | |
1104 (float 1000)) | |
1105 (bind-sense target cam) cam)) | |
#+end_src
#+end_listing
1107 | |
1108 *** Simulated Retina | |
1109 | |
1110 An eye is a surface (the retina) which contains many discrete | |
1111 sensors to detect light. These sensors can have different | |
1112 light-sensing properties. In humans, each discrete sensor is | |
1113 sensitive to red, blue, green, or gray. These different types of | |
1114 sensors can have different spatial distributions along the retina. | |
1115 In humans, there is a fovea in the center of the retina which has | |
1116 a very high density of color sensors, and a blind spot which has | |
1117 no sensors at all. Sensor density decreases in proportion to | |
1118 distance from the fovea. | |
1119 | |
1120 I want to be able to model any retinal configuration, so my | |
1121 eye-nodes in blender contain metadata pointing to images that | |
1122 describe the precise position of the individual sensors using | |
1123 white pixels. The metadata also describes the precise light | |
1124 sensitivity of the sensors described in each image. An eye can | |
1125 contain any number of these images. For example, the metadata for | |
1126 an eye might look like this: | |
1127 | |
1128 #+begin_src clojure | |
1129 {0xFF0000 "Models/test-creature/retina-small.png"} | |
1130 #+end_src | |
1131 | |
1132 #+caption: An example retinal profile image. White pixels are | |
1133 #+caption: photo-sensitive elements. The distribution of white | |
1134 #+caption: pixels is denser in the middle and falls off at the | |
1135 #+caption: edges; it is inspired by the human retina. | |
1136 #+name: retina | |
1137 #+ATTR_LaTeX: :width 7cm | |
1138 [[./images/retina-small.png]] | |
1139 | |
1140 Together, the number 0xFF0000 and the image above describe | |
1141 the placement of red-sensitive sensory elements. | |
1142 | |
1143 Meta-data to very crudely approximate a human eye might be | |
1144 something like this: | |
1145 | |
1146 #+begin_src clojure | |
1147 (let [retinal-profile "Models/test-creature/retina-small.png"] | |
1148 {0xFF0000 retinal-profile | |
1149 0x00FF00 retinal-profile | |
1150 0x0000FF retinal-profile | |
1151 0xFFFFFF retinal-profile}) | |
1152 #+end_src | |
1153 | |
1154 The numbers that serve as keys in the map determine a sensor's | |
1155 relative sensitivity to the channels red, green, and blue. These | |
1156 sensitivity values are packed into an integer in the order | |
1157 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the | |
1158 image are added together with these sensitivities as linear | |
1159 weights. Therefore, 0xFF0000 means sensitive to red only while | |
1160 0xFFFFFF means sensitive to all colors equally (gray). | |
1161 | |
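The function =pixel-sense=, which appears in the listing below,
implements this weighted sum. Here is a sketch of the idea; the
actual definition may normalize slightly differently.

#+begin_src clojure
;; Sketch of the weighted sum described above (the real =pixel-sense=
;; may differ in detail).
(defn pixel-sense-sketch [sensitivity pixel]
  (let [channel (fn [color shift] (bit-and 0xFF (bit-shift-right color shift)))
        weights [(channel sensitivity 16) (channel sensitivity 8) (channel sensitivity 0)]
        values  [(channel pixel 16) (channel pixel 8) (channel pixel 0)]]
    (float (/ (reduce + (map * weights values))
              (* 255 (reduce + weights))))))

;; (pixel-sense-sketch 0xFF0000 0xFF0000) => 1.0   ; red sensor, red pixel
;; (pixel-sense-sketch 0xFFFFFF 0x808080) => ~0.5  ; gray sensor, mid-gray pixel
#+end_src
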
1162 #+caption: This is the core of vision in =CORTEX=. A given eye node | |
1163 #+caption: is converted into a function that returns visual | |
1164 #+caption: information from the simulation. | |
1165 #+name: vision-kernel | |
1166 #+begin_listing clojure | |
1167 #+BEGIN_SRC clojure | |
1168 (defn vision-kernel | |
1169 "Returns a list of functions, each of which will return a color | |
1170 channel's worth of visual information when called inside a running | |
1171 simulation." | |
1172 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}] | |
1173 (let [retinal-map (retina-sensor-profile eye) | |
1174 camera (add-eye! creature eye) | |
1175 vision-image | |
1176 (atom | |
1177 (BufferedImage. (.getWidth camera) | |
1178 (.getHeight camera) | |
1179 BufferedImage/TYPE_BYTE_BINARY)) | |
1180 register-eye! | |
1181 (runonce | |
1182 (fn [world] | |
1183 (add-camera! | |
1184 world camera | |
1185 (let [counter (atom 0)] | |
1186 (fn [r fb bb bi] | |
1187 (if (zero? (rem (swap! counter inc) (inc skip))) | |
1188 (reset! vision-image | |
1189 (BufferedImage! r fb bb bi))))))))] | |
1190 (vec | |
1191 (map | |
1192 (fn [[key image]] | |
1193 (let [whites (white-coordinates image) | |
1194 topology (vec (collapse whites)) | |
1195 sensitivity (sensitivity-presets key key)] | |
1196 (attached-viewport. | |
1197 (fn [world] | |
1198 (register-eye! world) | |
1199 (vector | |
1200 topology | |
1201 (vec | |
1202 (for [[x y] whites] | |
1203 (pixel-sense | |
1204 sensitivity | |
1205 (.getRGB @vision-image x y)))))) | |
1206 register-eye!))) | |
1207 retinal-map)))) | |
1208 #+END_SRC | |
1209 #+end_listing | |
1210 | |
1211 Note that since each of the functions generated by =vision-kernel= | |
1212 shares the same =register-eye!= function, the eye will be | |
1213 registered only once, the first time any of the functions from the | |
1214 list returned by =vision-kernel= is called. Each of the functions | |
1215 returned by =vision-kernel= also allows access to the =ViewPort= | |
1216 through which it receives images. | |
1217 | |
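The =runonce= decorator used in the listing above is defined
elsewhere in =CORTEX=; a minimal sketch of the behavior it provides
is:

#+begin_src clojure
;; Sketch only: wrap a side-effecting function so that just its first
;; invocation does any work (subsequent calls are ignored).
(defn runonce-sketch [f]
  (let [done? (atom false)]
    (fn [& args]
      (when (compare-and-set! done? false true)
        (apply f args)))))
#+end_src
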
1218 All the hard work has been done; all that remains is to apply | |
1219 =vision-kernel= to each eye in the creature and gather the results | |
1220 into one list of functions. | |
1221 | |
1222 | |
1223 #+caption: With =vision!=, =CORTEX= is already a fine simulation | |
1224 #+caption: environment for experimenting with different types of | |
1225 #+caption: eyes. | |
1226 #+name: vision! | |
1227 #+begin_listing clojure | |
1228 #+BEGIN_SRC clojure | |
1229 (defn vision! | |
1230 "Returns a list of functions, each of which returns visual sensory | |
1231 data when called inside a running simulation." | |
1232 [#^Node creature & {skip :skip :or {skip 0}}] | |
1233 (reduce | |
1234 concat | |
1235 (for [eye (eyes creature)] | |
1236 (vision-kernel creature eye)))) | |
1237 #+END_SRC | |
1238 #+end_listing | |
1239 | |
1240 #+caption: Simulated vision with a test creature and the | |
1241 #+caption: human-like eye approximation. Notice how each channel | |
1242 #+caption: of the eye responds differently to the differently | |
1243 #+caption: colored balls. | |
1244 #+name: worm-vision-test | |
1245 #+ATTR_LaTeX: :width 13cm | |
1246 [[./images/worm-vision.png]] | |
1247 | |
1248 The vision code is not much more complicated than the body code, | |
1249 and enables multiple further paths for simulated vision. For | |
1250 example, it is quite easy to create bifocal vision -- you just | |
1251 make two eyes next to each other in blender! It is also possible | |
1252 to encode vision transforms in the retinal files. For example, the | |
1253 human-like retina file in figure \ref{retina} approximates a | |
1254 log-polar transform. | |
1255 | |
1256 This vision code has already been absorbed by the jMonkeyEngine | |
1257 community and is now (in modified form) part of a system for | |
1258 capturing in-game video to a file. | |
1259 | |
1260 ** Hearing is hard; =CORTEX= does it right | |
1261 | |
1262 At the end of this section I will have simulated ears that work the | |
1263 same way as the simulated eyes in the last section. I will be able to | |
1264 place any number of ear-nodes in a blender file, and they will bind to | |
1265 the closest physical object and follow it as it moves around. Each ear | |
1266 will provide access to the sound data it picks up between every frame. | |
1267 | |
1268 Hearing is one of the more difficult senses to simulate, because there | |
1269 is less support for obtaining the actual sound data that is processed | |
1270 by jMonkeyEngine3. There is no "split-screen" support for rendering | |
1271 sound from different points of view, and there is no way to directly | |
1272 access the rendered sound data. | |
1273 | |
1274 =CORTEX='s hearing is unique because it does not have any | |
1275 limitations compared to other simulation environments. As far as I | |
1276 know, there is no other system that supports multiple listeners, | |
1277 and the sound demo at the end of this section is the first time | |
1278 it's been done in a video game environment. | |
1279 | |
1280 *** Brief Description of jMonkeyEngine's Sound System | |
1281 | |
1282 jMonkeyEngine's sound system works as follows: | |
1283 | |
1284 - jMonkeyEngine uses the =AppSettings= for the particular | |
1285 application to determine what sort of =AudioRenderer= should be | |
1286 used. | |
1287 - Although some support is provided for multiple AudioRendering | |
1288 backends, jMonkeyEngine at the time of this writing will either | |
1289 pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=. | |
1290 - jMonkeyEngine tries to figure out what sort of system you're | |
1291 running and extracts the appropriate native libraries. | |
1292 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game | |
1293 Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]]. | |
1294 - =OpenAL= renders the 3D sound and feeds the rendered sound | |
1295 directly to any of various sound output devices with which it | |
1296 knows how to communicate. | |
1297 | |
1298 A consequence of this is that there's no way to access the actual | |
1299 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports | |
1300 one /listener/ (it renders sound data from only one perspective), | |
1301 which normally isn't a problem for games, but becomes a problem | |
1302 when trying to make multiple AI creatures that can each hear the | |
1303 world from a different perspective. | |
1304 | |
1305 To make many AI creatures in jMonkeyEngine that can each hear the | |
1306 world from their own perspective, or to make a single creature with | |
1307 many ears, it is necessary to go all the way back to =OpenAL= and | |
1308 implement support for simulated hearing there. | |
1309 | |
1310 *** Extending =OpenAL= | |
1311 | |
1312 Extending =OpenAL= to support multiple listeners requires 500 | |
1313 lines of =C= code and is too hairy to mention here. Instead, I | |
1314 will show a small amount of extension code and go over the | |
1315 high-level strategy. Full source is of course available with the | |
1316 =CORTEX= distribution if you're interested. | |
1317 | |
1318 =OpenAL= goes to great lengths to support many different systems, | |
1319 all with different sound capabilities and interfaces. It | |
1320 accomplishes this difficult task by providing code for many | |
1321 different sound backends in pseudo-objects called /Devices/. | |
1322 There's a device for the Linux Open Sound System and the Advanced | |
1323 Linux Sound Architecture, there's one for Direct Sound on Windows, | |
1324 and there's even one for Solaris. =OpenAL= solves the problem of | |
1325 platform independence by providing all these Devices. | |
1326 | |
1327 Wrapper libraries such as LWJGL are free to examine the system on | |
1328 which they are running and then select an appropriate device for | |
1329 that system. | |
1330 | |
1331 There are also a few "special" devices that don't interface with | |
1332 any particular system. These include the Null Device, which | |
1333 doesn't do anything, and the Wave Device, which writes whatever | |
1334 sound it receives to a file, if everything has been set up | |
1335 correctly when configuring =OpenAL=. | |
1336 | |
1337 Actual mixing (Doppler shift and distance/environment-based | |
1338 attenuation) of the sound data happens in the Devices, and they | |
1339 are the only point in the sound rendering process where this data | |
1340 is available. | |
1341 | |
1342 Therefore, in order to support multiple listeners, and get the | |
1343 sound data in a form that the AIs can use, it is necessary to | |
1344 create a new Device which supports this feature. | |
1345 | |
1346 Adding a device to OpenAL is rather tricky -- there are five | |
1347 separate files in the =OpenAL= source tree that must be modified | |
1348 to do so. I named my device the "Multiple Audio Send" Device, or | |
1349 =Send= Device for short, since it sends audio data back to the | |
1350 calling application like an Aux-Send cable on a mixing board. | |
1351 | |
1352 The main idea behind the Send device is to take advantage of the | |
1353 fact that LWJGL only manages one /context/ when using OpenAL. A | |
1354 /context/ is like a container that holds samples and keeps track | |
1355 of where the listener is. In order to support multiple listeners, | |
1356 the Send device identifies the LWJGL context as the master | |
1357 context, and creates any number of slave contexts to represent | |
1358 additional listeners. Every time the device renders sound, it | |
1359 synchronizes every source from the master LWJGL context to the | |
1360 slave contexts. Then, it renders each context separately, using a | |
1361 different listener for each one. The rendered sound is made | |
1362 available via JNI to jMonkeyEngine. | |
1363 | |
1364 Switching between contexts is not the normal operation of a | |
1365 Device, and one of the problems with doing so is that a Device | |
1366 normally keeps around a few pieces of state such as the | |
1367 =ClickRemoval= array, which will become corrupted if the | |
1368 contexts are not rendered in parallel. The solution is to create a | |
1369 copy of this normally global device state for each context, and | |
1370 copy it back and forth into and out of the actual device state | |
1371 whenever a context is rendered. | |
1372 | |
1373 The core of the =Send= device is the =syncSources= function, which | |
1374 does the job of copying all relevant data from one context to | |
1375 another. | |
1376 | |
1377 #+caption: Program for extending =OpenAL= to support multiple | |
1378 #+caption: listeners via context copying/switching. | |
1379 #+name: sync-openal-sources | |
1380 #+begin_listing c | |
1381 #+BEGIN_SRC c | |
1382 void syncSources(ALsource *masterSource, ALsource *slaveSource, | |
1383 ALCcontext *masterCtx, ALCcontext *slaveCtx){ | |
1384 ALuint master = masterSource->source; | |
1385 ALuint slave = slaveSource->source; | |
1386 ALCcontext *current = alcGetCurrentContext(); | |
1387 | |
1388 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH); | |
1389 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN); | |
1390 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE); | |
1391 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR); | |
1392 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE); | |
1393 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN); | |
1394 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN); | |
1395 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN); | |
1396 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE); | |
1397 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE); | |
1398 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET); | |
1399 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET); | |
1400 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET); | |
1401 | |
1402 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION); | |
1403 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY); | |
1404 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION); | |
1405 | |
1406 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE); | |
1407 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING); | |
1408 | |
1409 alcMakeContextCurrent(masterCtx); | |
1410 ALint source_type; | |
1411 alGetSourcei(master, AL_SOURCE_TYPE, &source_type); | |
1412 | |
1413 // Only static sources are currently synchronized! | |
1414 if (AL_STATIC == source_type){ | |
1415 ALint master_buffer; | |
1416 ALint slave_buffer; | |
1417 alGetSourcei(master, AL_BUFFER, &master_buffer); | |
1418 alcMakeContextCurrent(slaveCtx); | |
1419 alGetSourcei(slave, AL_BUFFER, &slave_buffer); | |
1420 if (master_buffer != slave_buffer){ | |
1421 alSourcei(slave, AL_BUFFER, master_buffer); | |
1422 } | |
1423 } | |
1424 | |
1425 // Synchronize the state of the two sources. | |
1426 alcMakeContextCurrent(masterCtx); | |
1427 ALint masterState; | |
1428 ALint slaveState; | |
1429 | |
1430 alGetSourcei(master, AL_SOURCE_STATE, &masterState); | |
1431 alcMakeContextCurrent(slaveCtx); | |
1432 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState); | |
1433 | |
1434 if (masterState != slaveState){ | |
1435 switch (masterState){ | |
1436 case AL_INITIAL : alSourceRewind(slave); break; | |
1437 case AL_PLAYING : alSourcePlay(slave); break; | |
1438 case AL_PAUSED : alSourcePause(slave); break; | |
1439 case AL_STOPPED : alSourceStop(slave); break; | |
1440 } | |
1441 } | |
1442 // Restore whatever context was previously active. | |
1443 alcMakeContextCurrent(current); | |
1444 } | |
1445 #+END_SRC | |
1446 #+end_listing | |
1447 | |
1448 With this special context-switching device, and some ugly JNI | |
1449 bindings that are not worth mentioning, =CORTEX= gains the ability | |
1450 to access multiple sound streams from =OpenAL=. | |
1451 | |
1452 #+caption: Program to create an ear from a blender empty node. The ear | |
1453 #+caption: follows around the nearest physical object and passes | |
1454 #+caption: all sensory data to a continuation function. | |
1455 #+name: add-ear | |
1456 #+begin_listing clojure | |
1457 #+BEGIN_SRC clojure | |
1458 (defn add-ear! | |
1459 "Create a Listener centered on the current position of 'ear | |
1460 which follows the closest physical node in 'creature and | |
1461 sends sound data to 'continuation." | |
1462 [#^Application world #^Node creature #^Spatial ear continuation] | |
1463 (let [target (closest-node creature ear) | |
1464 lis (Listener.) | |
1465 audio-renderer (.getAudioRenderer world) | |
1466 sp (hearing-pipeline continuation)] | |
1467 (.setLocation lis (.getWorldTranslation ear)) | |
1468 (.setRotation lis (.getWorldRotation ear)) | |
1469 (bind-sense target lis) | |
1470 (update-listener-velocity! target lis) | |
1471 (.addListener audio-renderer lis) | |
1472 (.registerSoundProcessor audio-renderer lis sp))) | |
1473 #+END_SRC | |
1474 #+end_listing | |
1475 | |
1476 The =Send= device, unlike most of the other devices in =OpenAL=, | |
1477 does not render sound unless asked. This enables the system to | |
1478 slow down or speed up depending on the needs of the AIs who are | |
1479 using it to listen. If the device tried to render samples in | |
1480 real-time, a complicated AI whose mind takes 100 seconds of | |
1481 computer time to simulate 1 second of AI-time would miss almost | |
1482 all of the sound in its environment! | |
1483 | |
1484 #+caption: Program to enable arbitrary hearing in =CORTEX= | |
1485 #+name: hearing | |
1486 #+begin_listing clojure | |
1487 #+BEGIN_SRC clojure | |
1488 (defn hearing-kernel | |
1489 "Returns a function which returns auditory sensory data when called | |
1490 inside a running simulation." | |
1491 [#^Node creature #^Spatial ear] | |
1492 (let [hearing-data (atom []) | |
1493 register-listener! | |
1494 (runonce | |
1495 (fn [#^Application world] | |
1496 (add-ear! | |
1497 world creature ear | |
1498 (comp #(reset! hearing-data %) | |
1499 byteBuffer->pulse-vector))))] | |
1500 (fn [#^Application world] | |
1501 (register-listener! world) | |
1502 (let [data @hearing-data | |
1503 topology | |
1504 (vec (map #(vector % 0) (range 0 (count data))))] | |
1505 [topology data])))) | |
1506 | |
1507 (defn hearing! | |
1508 "Endow the creature in a particular world with the sense of | |
1509 hearing. Will return a sequence of functions, one for each ear, | |
1510 which when called will return the auditory data from that ear." | |
1511 [#^Node creature] | |
1512 (for [ear (ears creature)] | |
1513 (hearing-kernel creature ear))) | |
1514 #+END_SRC | |
1515 #+end_listing | |
1516 | |
1517 Armed with these functions, =CORTEX= is able to test possibly the | |
1518 first ever instance of multiple listeners in a simulation based on | |
1519 a video game engine! | |
1520 | |
1521 #+caption: Here a simple creature responds to sound by changing | |
1522 #+caption: its color from gray to green when the total volume | |
1523 #+caption: goes over a threshold. | |
1524 #+name: sound-test | |
1525 #+begin_listing java | |
1526 #+BEGIN_SRC java | |
1527 /** | |
1528 * Respond to sound! This is the brain of an AI entity that | |
1529 * hears its surroundings and reacts to them. | |
1530 */ | |
1531 public void process(ByteBuffer audioSamples, | |
1532 int numSamples, AudioFormat format) { | |
1533 audioSamples.clear(); | |
1534 byte[] data = new byte[numSamples]; | |
1535 float[] out = new float[numSamples]; | |
1536 audioSamples.get(data); | |
1537 FloatSampleTools. | |
1538 byte2floatInterleaved | |
1539 (data, 0, out, 0, numSamples/format.getFrameSize(), format); | |
1540 | |
1541 float max = Float.NEGATIVE_INFINITY; | |
1542 for (float f : out){if (f > max) max = f;} | |
1543 audioSamples.clear(); | |
1544 | |
1545 if (max > 0.1){ | |
1546 entity.getMaterial().setColor("Color", ColorRGBA.Green); | |
1547 } | |
1548 else { | |
1549 entity.getMaterial().setColor("Color", ColorRGBA.Gray); | |
1550 } | |
}
1551 #+END_SRC | |
1552 #+end_listing | |
1553 | |
1554 #+caption: First ever simulation of multiple listeners in =CORTEX=. | |
1555 #+caption: Each cube is a creature which processes sound data with | |
1556 #+caption: the =process= function from listing \ref{sound-test}. | |
1557 #+caption: The ball is constantly emitting a pure tone of | |
1558 #+caption: constant volume. As it approaches the cubes, they each | |
1559 #+caption: change color in response to the sound. | |
1560 #+name: sound-cubes | |
1561 #+ATTR_LaTeX: :width 10cm | |
1562 [[./images/java-hearing-test.png]] | |
1563 | |
1564 This system of hearing has also been co-opted by the | |
1565 jMonkeyEngine3 community and is used to record audio for demo | |
1566 videos. | |
1567 | |
1568 ** Touch uses hundreds of hair-like elements | |
1569 | |
1570 Touch is critical to navigation and spatial reasoning, and as such I | |
1571 need a simulated version of it to give to my AI creatures. | |
1572 | |
1573 Human skin has a wide array of touch sensors, each of which | |
1574 specializes in detecting different vibrational modes and pressures. | |
1575 These sensors can integrate a vast expanse of skin (i.e. your | |
1576 entire palm), or a tiny patch of skin at the tip of your finger. | |
1577 The hairs of the skin help detect objects before they even come | |
1578 into contact with the skin proper. | |
1579 | |
1580 However, touch in my simulated world cannot exactly correspond to | |
1581 human touch because my creatures are made out of completely rigid | |
1582 segments that don't deform like human skin. | |
1583 | |
1584 Instead of measuring deformation or vibration, I surround each | |
1585 rigid part with a plenitude of hair-like objects (/feelers/) which | |
1586 do not interact with the physical world. Physical objects can pass | |
1587 through them with no effect. The feelers are able to tell when | |
1588 other objects pass through them, and they constantly report how | |
1589 much of their extent is covered. So even though the creature's body | |
1590 parts do not deform, the feelers create a margin around those body | |
1591 parts which achieves a sense of touch which is a hybrid between a | |
1592 human's sense of deformation and sense from hairs. | |
1593 | |
1594 Implementing touch in jMonkeyEngine follows a different technical | |
1595 route than vision and hearing. Those two senses piggybacked off | |
1596 jMonkeyEngine's 3D audio and video rendering subsystems. To | |
1597 simulate touch, I use jMonkeyEngine's physics system to execute | |
1598 many small collision detections, one for each feeler. The placement | |
1599 of the feelers is determined by a UV-mapped image which shows where | |
1600 each feeler should be on the 3D surface of the body. | |
1601 | |
1602 *** Defining Touch Meta-Data in Blender | |
1603 | |
1604 Each geometry can have a single UV map which describes the | |
1605 position of the feelers which will constitute its sense of touch. | |
1606 The path to this image is stored under the ``touch'' key. The image | |
1607 itself is black and white, with black meaning a feeler length of 0 | |
1608 (no feeler is present) and white meaning a feeler length of =scale=, | |
1609 which is a float stored under the ``scale'' key. | |
1610 | |
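Following the same convention as the eye metadata, the touch
metadata for a fingertip geometry might look like this (the path and
scale value here are illustrative, not taken from an actual model
file):

#+begin_src clojure
{"touch" "Models/test-creature/finger-UV.png"
 "scale" 0.01}
#+end_src
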
1611 #+caption: Touch does not use empty nodes to store metadata, | |
1612 #+caption: because the metadata of each solid part of a | |
1613 #+caption: creature's body is sufficient. | |
1614 #+name: touch-meta-data | |
1615 #+begin_listing clojure | |
1616 #+BEGIN_SRC clojure | |
1617 (defn tactile-sensor-profile | |
1618 "Return the touch-sensor distribution image in BufferedImage format, | |
1619 or nil if it does not exist." | |
1620 [#^Geometry obj] | |
1621 (if-let [image-path (meta-data obj "touch")] | |
1622 (load-image image-path))) | |
1623 | |
1624 (defn tactile-scale | |
1625 "Return the length of each feeler. Default scale is 0.01 | |
1626 jMonkeyEngine units." | |
1627 [#^Geometry obj] | |
1628 (if-let [scale (meta-data obj "scale")] | |
1629 scale 0.1)) | |
1630 #+END_SRC | |
1631 #+end_listing | |
1632 | |
1633 Here is an example of a UV-map which specifies the position of | |
1634 touch sensors along the surface of the upper segment of a fingertip. | |
1635 | |
1636 #+caption: This is the tactile-sensor-profile for the upper segment | |
1637 #+caption: of a fingertip. It defines regions of high touch sensitivity | |
1638 #+caption: (where there are many white pixels) and regions of low | |
1639 #+caption: sensitivity (where white pixels are sparse). | |
1640 #+name: fingertip-UV | |
1641 #+ATTR_LaTeX: :width 13cm | |
1642 [[./images/finger-UV.png]] | |
1643 | |
1644 *** Implementation Summary | |
1645 | |
1646 To simulate touch there are three conceptual steps. For each solid | |
1647 object in the creature, you first have to get the UV image and scale | |
1648 parameter which define the position and length of the feelers. | |
1649 Then, you use the triangles which comprise the mesh and the UV | |
1650 data stored in the mesh to determine the world-space position and | |
1651 orientation of each feeler. Finally, once every frame, update these | |
1652 positions and orientations to match the current position and | |
1653 orientation of the object, and use physics collision detection to | |
1654 gather tactile data. | |
1655 | |
1656 Extracting the meta-data has already been described. The third | |
1657 step, physics collision detection, is handled in =touch-kernel=. | |
1658 Translating the positions and orientations of the feelers from the | |
1659 UV-map to world-space is itself a three-step process. | |
1660 | |
1661 - Find the triangles which make up the mesh in pixel-space and in | |
1662 world-space. (=triangles=, =pixel-triangles=). | |
1663 | |
1664 - Find the coordinates of each feeler in world-space. These are | |
1665 the origins of the feelers. (=feeler-origins=). | |
1666 | |
1667 - Calculate the normals of the triangles in world space, and add | |
1668 them to each of the origins of the feelers. These are the | |
1669 normalized coordinates of the tips of the feelers. | |
1670 (=feeler-tips=). | |
1671 | |
1672 *** Triangle Math | |
1673 | |
1674 The rigid objects which make up a creature have an underlying | |
1675 =Geometry=, which is a =Mesh= plus a =Material= and other | |
1676 important data involved with displaying the object. | |
1677 | |
1678 A =Mesh= is composed of =Triangles=, and each =Triangle= has three | |
1679 vertices which have coordinates in world space and UV space. | |
1680 | |
1681 Here, =triangles= gets all the world-space triangles which | |
1682 comprise a mesh, while =pixel-triangles= gets those same triangles | |
1683 expressed in pixel coordinates (which are UV coordinates scaled to | |
1684 fit the height and width of the UV image). | |
1685 | |
1686 #+caption: Programs to extract triangles from a geometry and get | |
1687 #+caption: their vertices in both world and UV-coordinates. | |
1688 #+name: get-triangles | |
1689 #+begin_listing clojure | |
1690 #+BEGIN_SRC clojure | |
1691 (defn triangle | |
1692 "Get the triangle specified by triangle-index from the mesh." | |
1693 [#^Geometry geo triangle-index] | |
1694 (triangle-seq | |
1695 (let [scratch (Triangle.)] | |
1696 (.getTriangle (.getMesh geo) triangle-index scratch) scratch))) | |
1697 | |
1698 (defn triangles | |
1699 "Return a sequence of all the Triangles which comprise a given | |
1700 Geometry." | |
1701 [#^Geometry geo] | |
1702 (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo))))) | |
1703 | |
1704 (defn triangle-vertex-indices | |
1705 "Get the triangle vertex indices of a given triangle from a given | |
1706 mesh." | |
1707 [#^Mesh mesh triangle-index] | |
1708 (let [indices (int-array 3)] | |
1709 (.getTriangle mesh triangle-index indices) | |
1710 (vec indices))) | |
1711 | |
1712 (defn vertex-UV-coord | |
1713 "Get the UV-coordinates of the vertex named by vertex-index" | |
1714 [#^Mesh mesh vertex-index] | |
1715 (let [UV-buffer | |
1716 (.getData | |
1717 (.getBuffer | |
1718 mesh | |
1719 VertexBuffer$Type/TexCoord))] | |
1720 [(.get UV-buffer (* vertex-index 2)) | |
1721 (.get UV-buffer (+ 1 (* vertex-index 2)))])) | |
1722 | |
1723 (defn pixel-triangle [#^Geometry geo image index] | |
1724 (let [mesh (.getMesh geo) | |
1725 width (.getWidth image) | |
1726 height (.getHeight image)] | |
1727 (vec (map (fn [[u v]] (vector (* width u) (* height v))) | |
1728 (map (partial vertex-UV-coord mesh) | |
1729 (triangle-vertex-indices mesh index)))))) | |
1730 | |
1731 (defn pixel-triangles | |
1732 "The pixel-space triangles of the Geometry, in the same order as | |
1733 (triangles geo)" | |
1734 [#^Geometry geo image] | |
1735 (let [height (.getHeight image) | |
1736 width (.getWidth image)] | |
1737 (map (partial pixel-triangle geo image) | |
1738 (range (.getTriangleCount (.getMesh geo)))))) | |
1739 #+END_SRC | |
1740 #+end_listing | |
1741 | |
1742 *** The Affine Transform from one Triangle to Another | |
1743 | |
1744 =pixel-triangles= gives us the mesh triangles expressed in pixel | |
1745 coordinates and =triangles= gives us the mesh triangles expressed | |
1746 in world coordinates. The tactile-sensor-profile gives the | |
1747 position of each feeler in pixel-space. In order to convert | |
1748 pixel-space coordinates into world-space coordinates we need | |
1749 something that takes coordinates on the surface of one triangle | |
1750 and gives the corresponding coordinates on the surface of another | |
1751 triangle. | |
1752 | |
1753 Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which means any triangle can be transformed | |
1754 into any other by a combination of translation, scaling, and | |
1755 rotation. The affine transformation from one triangle to another | |
1756 is readily computable if the triangle is expressed in terms of a | |
1757 $4 \times 4$ matrix. | |
1758 | |
1759 #+BEGIN_LaTeX | |
1760 $$ | |
1761 \begin{bmatrix} | |
1762 x_1 & x_2 & x_3 & n_x \\ | |
1763 y_1 & y_2 & y_3 & n_y \\ | |
1764 z_1 & z_2 & z_3 & n_z \\ | |
1765 1 & 1 & 1 & 1 | |
1766 \end{bmatrix} | |
1767 $$ | |
1768 #+END_LaTeX | |
1769 | |
1770 Here, the first three columns of the matrix are the vertices of | |
1771 the triangle. The last column is the right-handed unit normal of | |
1772 the triangle. | |
1773 | |
1774 With two triangles $T_{1}$ and $T_{2}$ each expressed as a | |
1775 matrix like above, the affine transform from $T_{1}$ to $T_{2}$ | |
1776 is $T_{2}T_{1}^{-1}$. | |
1777 | |
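A feeler coordinate $p_{pixel}$, treated as the homogeneous point
$(u, v, 0, 1)^{T}$, is then carried onto the body's surface by
applying this transform directly; this is how =feeler-world-coords=
will use it later.

#+BEGIN_LaTeX
$$
p_{world} = T_{world} \; T_{pixel}^{-1} \; p_{pixel}
$$
#+END_LaTeX
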
1778 The clojure code below recapitulates the formulas above, using | |
1779 jMonkeyEngine's =Matrix4f= objects, which can describe any affine | |
1780 transformation. | |
1781 | |
1782 #+caption: Program to interpret triangles as affine transforms. | |
1783 #+name: triangle-affine | |
1784 #+begin_listing clojure | |
1785 #+BEGIN_SRC clojure | |
1786 (defn triangle->matrix4f | |
1787 "Converts the triangle into a 4x4 matrix: The first three columns | |
1788 contain the vertices of the triangle; the last contains the unit | |
1789 normal of the triangle. The bottom row is filled with 1s." | |
1790 [#^Triangle t] | |
1791 (let [mat (Matrix4f.) | |
1792 [vert-1 vert-2 vert-3] | |
1793 (mapv #(.get t %) (range 3)) | |
1794 unit-normal (do (.calculateNormal t)(.getNormal t)) | |
1795 vertices [vert-1 vert-2 vert-3 unit-normal]] | |
1796 (dorun | |
1797 (for [row (range 4) col (range 3)] | |
1798 (do | |
1799 (.set mat col row (.get (vertices row) col)) | |
1800 (.set mat 3 row 1)))) mat)) | |
1801 | |
1802 (defn triangles->affine-transform | |
1803 "Returns the affine transformation that converts each vertex in the | |
1804 first triangle into the corresponding vertex in the second | |
1805 triangle." | |
1806 [#^Triangle tri-1 #^Triangle tri-2] | |
1807 (.mult | |
1808 (triangle->matrix4f tri-2) | |
1809 (.invert (triangle->matrix4f tri-1)))) | |
1810 #+END_SRC | |
1811 #+end_listing | |
1812 | |
1813 *** Triangle Boundaries | |
1814 | |
1815 For efficiency's sake I will divide the tactile-profile image into | |
1816 small rectangles which circumscribe each pixel-triangle, then extract | |
1817 the points which lie inside the triangle and map them to 3D-space | |
1818 using =triangles->affine-transform= above. To do this I need a | |
1819 function, =convex-bounds=, which finds the smallest box that | |
1820 circumscribes a 2D triangle. | |
1821 | |
1822 =inside-triangle?= determines whether a point is inside a triangle | |
1823 in 2D pixel-space. | |
1824 | |
1825 #+caption: Program to efficiently determine point inclusion | |
1826 #+caption: in a triangle. | |
1827 #+name: in-triangle | |
1828 #+begin_listing clojure | |
1829 #+BEGIN_SRC clojure | |
1830 (defn convex-bounds | |
1831 "Returns the smallest square containing the given vertices, as a | |
1832 vector of integers [left top width height]." | |
1833 [verts] | |
1834 (let [xs (map first verts) | |
1835 ys (map second verts) | |
1836 x0 (Math/floor (apply min xs)) | |
1837 y0 (Math/floor (apply min ys)) | |
1838 x1 (Math/ceil (apply max xs)) | |
1839 y1 (Math/ceil (apply max ys))] | |
1840 [x0 y0 (- x1 x0) (- y1 y0)])) | |
1841 | |
1842 (defn same-side? | |
1843 "Given the points p1 and p2 and the reference point ref, is point p | |
1844 on the same side of the line that goes through p1 and p2 as ref is?" | |
1845 [p1 p2 ref p] | |
1846 (<= | |
1847 0 | |
1848 (.dot | |
1849 (.cross (.subtract p2 p1) (.subtract p p1)) | |
1850 (.cross (.subtract p2 p1) (.subtract ref p1))))) | |
1851 | |
1852 (defn inside-triangle? | |
1853 "Is the point inside the triangle?" | |
1854 {:author "Dylan Holmes"} | |
1855 [#^Triangle tri #^Vector3f p] | |
1856 (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]] | |
1857 (and | |
1858 (same-side? vert-1 vert-2 vert-3 p) | |
1859 (same-side? vert-2 vert-3 vert-1 p) | |
1860 (same-side? vert-3 vert-1 vert-2 p)))) | |
1861 #+END_SRC | |
1862 #+end_listing | |
1863 | |
1864 *** Feeler Coordinates | |
1865 | |
1866 The triangle-related functions above make short work of | |
1867 calculating the positions and orientations of each feeler in | |
1868 world-space. | |
1869 | |
1870 #+caption: Program to get the coordinates of ``feelers'' in | |
1871 #+caption: both world and UV-coordinates. | |
1872 #+name: feeler-coordinates | |
1873 #+begin_listing clojure | |
1874 #+BEGIN_SRC clojure | |
1875 (defn feeler-pixel-coords | |
1876 "Returns the coordinates of the feelers in pixel space in lists, one | |
1877 list for each triangle, ordered in the same way as (triangles) and | |
1878 (pixel-triangles)." | |
1879 [#^Geometry geo image] | |
1880 (map | |
1881 (fn [pixel-triangle] | |
1882 (filter | |
1883 (fn [coord] | |
1884 (inside-triangle? (->triangle pixel-triangle) | |
1885 (->vector3f coord))) | |
1886 (white-coordinates image (convex-bounds pixel-triangle)))) | |
1887 (pixel-triangles geo image))) | |
1888 | |
1889 (defn feeler-world-coords | |
1890 "Returns the coordinates of the feelers in world space in lists, one | |
1891 list for each triangle, ordered in the same way as (triangles) and | |
1892 (pixel-triangles)." | |
1893 [#^Geometry geo image] | |
1894 (let [transforms | |
1895 (map #(triangles->affine-transform | |
1896 (->triangle %1) (->triangle %2)) | |
1897 (pixel-triangles geo image) | |
1898 (triangles geo))] | |
1899 (map (fn [transform coords] | |
1900 (map #(.mult transform (->vector3f %)) coords)) | |
1901 transforms (feeler-pixel-coords geo image)))) | |
1902 #+END_SRC | |
1903 #+end_listing | |
1904 | |
1905 #+caption: Program to get the position of the base and tip of | |
1906 #+caption: each ``feeler'' | |
1907 #+name: feeler-tips | |
1908 #+begin_listing clojure | |
1909 #+BEGIN_SRC clojure | |
1910 (defn feeler-origins | |
1911 "The world space coordinates of the root of each feeler." | |
1912 [#^Geometry geo image] | |
1913 (reduce concat (feeler-world-coords geo image))) | |
1914 | |
1915 (defn feeler-tips | |
1916 "The world space coordinates of the tip of each feeler." | |
1917 [#^Geometry geo image] | |
1918 (let [world-coords (feeler-world-coords geo image) | |
1919 normals | |
1920 (map | |
1921 (fn [triangle] | |
1922 (.calculateNormal triangle) | |
1923 (.clone (.getNormal triangle))) | |
1924 (map ->triangle (triangles geo)))] | |
1925 | |
1926 (mapcat (fn [origins normal] | |
1927 (map #(.add % normal) origins)) | |
1928 world-coords normals))) | |
1929 | |
1930 (defn touch-topology | |
1931 [#^Geometry geo image] | |
1932 (collapse (reduce concat (feeler-pixel-coords geo image)))) | |
1933 #+END_SRC | |
1934 #+end_listing | |
1935 | |
1936 *** Simulated Touch | |
1937 | |
1938 Now that the functions to construct feelers are complete, | |
1939 =touch-kernel= generates functions to be called from within a | |
1940 simulation that perform the necessary physics collisions to | |
1941 collect tactile data, and =touch!= recursively applies it to every | |
1942 node in the creature. | |
1943 | |
1944 #+caption: Efficient program to transform a ray from | |
1945 #+caption: one position to another. | |
1946 #+name: set-ray | |
1947 #+begin_listing clojure | |
1948 #+BEGIN_SRC clojure | |
1949 (defn set-ray [#^Ray ray #^Matrix4f transform | |
1950 #^Vector3f origin #^Vector3f tip] | |
1951 ;; Doing everything locally reduces garbage collection by enough to | |
1952 ;; be worth it. | |
1953 (.mult transform origin (.getOrigin ray)) | |
1954 (.mult transform tip (.getDirection ray)) | |
1955 (.subtractLocal (.getDirection ray) (.getOrigin ray)) | |
1956 (.normalizeLocal (.getDirection ray))) | |
1957 #+END_SRC | |
1958 #+end_listing | |
1959 | |
1960 #+caption: This is the core of touch in =CORTEX=: each feeler | |
1961 #+caption: follows the object it is bound to, reporting any | |
1962 #+caption: collisions that may happen. | |
1963 #+name: touch-kernel | |
1964 #+begin_listing clojure | |
1965 #+BEGIN_SRC clojure | |
1966 (defn touch-kernel | |
1967 "Constructs a function which will return tactile sensory data from | |
1968 'geo when called from inside a running simulation" | |
1969 [#^Geometry geo] | |
1970 (if-let | |
1971 [profile (tactile-sensor-profile geo)] | |
1972 (let [ray-reference-origins (feeler-origins geo profile) | |
1973 ray-reference-tips (feeler-tips geo profile) | |
1974 ray-length (tactile-scale geo) | |
1975 current-rays (map (fn [_] (Ray.)) ray-reference-origins) | |
1976 topology (touch-topology geo profile) | |
1977 correction (float (* ray-length -0.2))] | |
1978 ;; slight tolerance for very close collisions. | |
1979 (dorun | |
1980 (map (fn [origin tip] | |
1981 (.addLocal origin (.mult (.subtract tip origin) | |
1982 correction))) | |
1983 ray-reference-origins ray-reference-tips)) | |
1984 (dorun (map #(.setLimit % ray-length) current-rays)) | |
1985 (fn [node] | |
1986 (let [transform (.getWorldMatrix geo)] | |
1987 (dorun | |
1988 (map (fn [ray ref-origin ref-tip] | |
1989 (set-ray ray transform ref-origin ref-tip)) | |
1990 current-rays ray-reference-origins | |
1991 ray-reference-tips)) | |
1992 (vector | |
1993 topology | |
1994 (vec | |
1995 (for [ray current-rays] | |
1996 (do | |
1997 (let [results (CollisionResults.)] | |
1998 (.collideWith node ray results) | |
1999 (let [touch-objects | |
2000 (filter #(not (= geo (.getGeometry %))) | |
2001 results) | |
2002 limit (.getLimit ray)] | |
2003 [(if (empty? touch-objects) | |
2004 limit | |
2005 (let [response | |
2006 (apply min (map #(.getDistance %) | |
2007 touch-objects))] | |
2008 (FastMath/clamp | |
2009 (float | |
2010 (if (> response limit) (float 0.0) | |
2011 (+ response correction))) | |
2012 (float 0.0) | |
2013 limit))) | |
2014 limit]))))))))))) | |
2015 #+END_SRC | |
2016 #+end_listing | |
2017 | |
2018 Armed with the =touch!= function, =CORTEX= becomes capable of | |
2019 giving creatures a sense of touch. A simple test is to create a | |
2020 cube that is outfitted with a uniform distribution of touch | |
2021 sensors. It can feel the ground and any balls that it touches. | |
2022 | |
2023 #+caption: =CORTEX= interface for creating touch in a simulated | |
2024 #+caption: creature. | |
2025 #+name: touch | |
2026 #+begin_listing clojure | |
2027 #+BEGIN_SRC clojure | |
2028 (defn touch! | |
2029 "Endow the creature with the sense of touch. Returns a sequence of | |
2030 functions, one for each body part with a tactile-sensor-profile, | |
2031 each of which when called returns sensory data for that body part." | |
2032 [#^Node creature] | |
2033 (filter | |
2034 (comp not nil?) | |
2035 (map touch-kernel | |
2036 (filter #(isa? (class %) Geometry) | |
2037 (node-seq creature))))) | |
2038 #+END_SRC | |
2039 #+end_listing | |
2040 | |
2041 The tactile-sensor-profile image for the touch cube is a simple | |
2042 cross with a uniform distribution of touch sensors: | |
2043 | |
2044 #+caption: The touch profile for the touch-cube. Each pure white | |
2045 #+caption: pixel defines a touch sensitive feeler. | |
2046 #+name: touch-cube-uv-map | |
2047 #+ATTR_LaTeX: :width 7cm | |
2048 [[./images/touch-profile.png]] | |
2049 | |
2050 #+caption: The touch cube reacts to cannonballs. The black, red, | |
2051 #+caption: and white cross on the right is a visual display of | |
2052 #+caption: the creature's touch. White means that it is feeling | |
2053 #+caption: something strongly, black is not feeling anything, | |
2054 #+caption: and gray is in-between. The cube can feel both the | |
2055 #+caption: floor and the ball. Notice that when the ball causes | |
2056 #+caption: the cube to tip, the bottom face can still feel | |
2057 #+caption: part of the ground. | |
2058 #+name: touch-cube-test | |
2059 #+ATTR_LaTeX: :width 15cm | |
2060 [[./images/touch-cube.png]] | |
2061 | |
2062 ** Proprioception is the sense that makes everything ``real'' | |
2063 | |
2064 Close your eyes, and touch your nose with your right index finger. | |
2065 How did you do it? You could not see your hand, and neither your | |
2066 hand nor your nose could use the sense of touch to guide the path | |
2067 of your hand. There are no sound cues, and Taste and Smell | |
2068 certainly don't provide any help. You know where your hand is | |
2069 without your other senses because of Proprioception. | |
2070 | |
2071 Humans can sometimes lose this sense through viral infections or | |
2072 damage to the spinal cord or brain, and when they do, they lose | |
2073 the ability to control their own bodies without looking directly at | |
2074 the parts they want to move. In [[http://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat][The Man Who Mistook His Wife for a | |
2075 Hat]], a woman named Christina loses this sense and has to learn how | |
2076 to move by carefully watching her arms and legs. She describes | |
2077 proprioception as the "eyes of the body, the way the body sees | |
2078 itself". | |
2079 | |
2080 Proprioception in humans is mediated by [[http://en.wikipedia.org/wiki/Articular_capsule][joint capsules]], [[http://en.wikipedia.org/wiki/Muscle_spindle][muscle | |
2081 spindles]], and the [[http://en.wikipedia.org/wiki/Golgi_tendon_organ][Golgi tendon organs]]. These measure the relative | |
2082 positions of each body part by monitoring muscle strain and length. | |
2083 | |
2084 It's clear that this is a vital sense for fluid, graceful movement. | |
2085 It's also particularly easy to implement in jMonkeyEngine. | |
2086 | |
2087 My simulated proprioception calculates the relative angles of each | |
2088 joint from the rest position defined in the blender file. This | |
2089 simulates the muscle-spindles and joint capsules. I will deal with | |
2090 Golgi tendon organs, which calculate muscle strain, in the next | |
2091 section. | |
2092 | |
2093 *** Helper functions | |
2094 | |
2095 =absolute-angle= calculates the angle between two vectors, | |
2096 relative to a third axis vector. This angle is the number of | |
2097 radians you have to move counterclockwise around the axis vector | |
2098 to get from the first to the second vector. It is not commutative | |
2099 like a normal dot-product angle is. | |
2100 | |
2101 The purpose of these functions is to build a system of angle | |
2102 measurement that is biologically plausible. | |
2103 | |
2104 #+caption: Program to measure angles around an axis vector | |
2105 #+name: helpers | |
2106 #+begin_listing clojure | |
2107 #+BEGIN_SRC clojure | |
2108 (defn right-handed? | |
2109 "true iff the three vectors form a right handed coordinate | |
2110 system. The three vectors do not have to be normalized or | |
2111 orthogonal." | |
2112 [vec1 vec2 vec3] | |
2113 (pos? (.dot (.cross vec1 vec2) vec3))) | |
2114 | |
2115 (defn absolute-angle | |
2116 "The angle between 'vec1 and 'vec2 around 'axis. In the range | |
2117 [0 (* 2 Math/PI)]." | |
2118 [vec1 vec2 axis] | |
2119 (let [angle (.angleBetween vec1 vec2)] | |
2120 (if (right-handed? vec1 vec2 axis) | |
2121 angle (- (* 2 Math/PI) angle)))) | |
2122 #+END_SRC | |
2123 #+end_listing | |
2124 | |
2125 *** Proprioception Kernel | |
2126 | |
2127 Given a joint, =proprioception-kernel= produces a function that | |
2128 calculates the Euler angles between the objects the joint | |
2129 connects. The only tricky part here is making the angles relative | |
2130 to the joint's initial ``straightness''. | |
2131 | |
2132 #+caption: Program to return biologically reasonable proprioceptive | |
2133 #+caption: data for each joint. | |
2134 #+name: proprioception | |
2135 #+begin_listing clojure | |
2136 #+BEGIN_SRC clojure | |
2137 (defn proprioception-kernel | |
2138 "Returns a function which returns proprioceptive sensory data when | |
2139 called inside a running simulation." | |
2140 [#^Node parts #^Node joint] | |
2141 (let [[obj-a obj-b] (joint-targets parts joint) | |
2142 joint-rot (.getWorldRotation joint) | |
2143 x0 (.mult joint-rot Vector3f/UNIT_X) | |
2144 y0 (.mult joint-rot Vector3f/UNIT_Y) | |
2145 z0 (.mult joint-rot Vector3f/UNIT_Z)] | |
2146 (fn [] | |
2147 (let [rot-a (.clone (.getWorldRotation obj-a)) | |
2148 rot-b (.clone (.getWorldRotation obj-b)) | |
2149 x (.mult rot-a x0) | |
2150 y (.mult rot-a y0) | |
2151 z (.mult rot-a z0) | |
2152 | |
2153 X (.mult rot-b x0) | |
2154 Y (.mult rot-b y0) | |
2155 Z (.mult rot-b z0) | |
2156 heading (Math/atan2 (.dot X z) (.dot X x)) | |
2157 pitch (Math/atan2 (.dot X y) (.dot X x)) | |
2158 | |
2159 ;; rotate x-vector back to origin | |
2160 reverse | |
2161 (doto (Quaternion.) | |
2162 (.fromAngleAxis | |
2163 (.angleBetween X x) | |
2164 (let [cross (.normalize (.cross X x))] | |
2165 (if (= 0 (.length cross)) y cross)))) | |
2166 roll (absolute-angle (.mult reverse Y) y x)] | |
2167 [heading pitch roll])))) | |
2168 | |
2169 (defn proprioception! | |
2170 "Endow the creature with the sense of proprioception. Returns a | |
2171 sequence of functions, one for each child of the \"joints\" node in | |
2172 the creature, which each report proprioceptive information about | |
2173 that joint." | |
2174 [#^Node creature] | |
2175 ;; extract the body's joints | |
2176 (let [senses (map (partial proprioception-kernel creature) | |
2177 (joints creature))] | |
2178 (fn [] | |
2179 (map #(%) senses)))) | |
2180 #+END_SRC | |
2181 #+end_listing | |
2182 | |
2183 =proprioception!= maps =proprioception-kernel= across all the | |
2184 joints of the creature. It uses the same list of joints that | |
2185 =joints= uses. Proprioception is the easiest sense to implement in | |
2186 =CORTEX=, and it will play a crucial role when efficiently | |
2187 implementing empathy. | |
2188 | |
2189 #+caption: In the upper right corner, the three proprioceptive | |
2190 #+caption: angle measurements are displayed. Red is yaw, Green is | |
2191 #+caption: pitch, and White is roll. | |
2192 #+name: proprio | |
2193 #+ATTR_LaTeX: :width 11cm | |
2194 [[./images/proprio.png]] | |
2195 | |
2196 ** Muscles are both effectors and sensors | |
2197 | |
2198 Surprisingly enough, terrestrial creatures only move by using | |
2199 torque applied about their joints. There's not a single straight | |
2200 line of force in the human body at all! (A straight line of force | |
2201 would correspond to some sort of jet or rocket propulsion.) | |
2202 | |
2203 In humans, muscles are composed of muscle fibers which can contract | |
2204 to exert force. The muscle fibers which compose a muscle are | |
2205 partitioned into discrete groups which are each controlled by a | |
2206 single alpha motor neuron. A single alpha motor neuron might | |
2207 control as few as three or as many as one thousand muscle | |
2208 fibers. When the alpha motor neuron is engaged by the spinal cord, | |
2209 it activates all of the muscle fibers to which it is attached. The | |
2210 spinal cord generally engages the alpha motor neurons which control | |
2211 few muscle fibers before the motor neurons which control many | |
2212 muscle fibers. This recruitment strategy allows for precise | |
2213 movements at low strength. The collection of all motor neurons that | |
2214 control a muscle is called the motor pool. The brain essentially | |
2215 says "activate 30% of the motor pool" and the spinal cord recruits | |
2216 motor neurons until 30% are activated. Since the distribution of | |
2217 power among motor neurons is unequal and recruitment goes from | |
2218 weakest to strongest, the first 30% of the motor pool might be 5% | |
2219 of the strength of the muscle. | |
2220 | |
2221 My simulated muscles follow a similar design: Each muscle is | |
2222 defined by a 1-D array of numbers (the "motor pool"). Each entry in | |
2223 the array represents a motor neuron which controls a number of | |
2224 muscle fibers equal to the value of the entry. Each muscle has a | |
2225 scalar strength factor which determines the total force the muscle | |
2226 can exert when all motor neurons are activated. The effector | |
2227 function for a muscle takes a number to index into the motor pool, | |
2228 and then "activates" all the motor neurons whose index is lower or | |
2229 equal to the number. Each motor-neuron will apply force in | |
2230 proportion to its value in the array. Lower values cause less | |
2231 force. The lower values can be put at the "beginning" of the 1-D | |
2232 array to simulate the layout of actual human muscles, which are | |
2233 capable of more precise movements when exerting less force. Or, the | |
2234 motor pool can simulate more exotic recruitment strategies which do | |
2235 not correspond to human muscles. | |
2236 | |
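A small worked example with a hypothetical four-neuron motor pool
shows how this recruitment scheme allows fine control at low force:

#+begin_src clojure
(def pool [1 2 4 8])        ; weakest motor neurons first (hypothetical values)
(reductions + pool)         ; => (1 3 7 15)
;; "Activate up to index 2" recruits the three weakest motor neurons,
;; yielding 7/15 (about 47%) of the muscle's full strength even though
;; three quarters of its motor neurons are firing.
#+end_src
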
2237 This 1D array is defined in an image file for ease of | |
2238 creation/visualization. Here is an example muscle profile image. | |
2239 | |
2240 #+caption: A muscle profile image that describes the strengths | |
2241 #+caption: of each motor neuron in a muscle. White is weakest | |
2242 #+caption: and dark red is strongest. This particular pattern | |
2243 #+caption: has weaker motor neurons at the beginning, just | |
2244 #+caption: like human muscle. | |
2245 #+name: muscle-recruit | |
2246 #+ATTR_LaTeX: :width 7cm | |
2247 [[./images/basic-muscle.png]] | |
2248 | |
2249 *** Muscle meta-data | |
2250 | |
2251 #+caption: Program to deal with loading muscle data from a blender | |
2252 #+caption: file's metadata. | |
2253 #+name: motor-pool | |
2254 #+begin_listing clojure | |
2255 #+BEGIN_SRC clojure | |
2256 (defn muscle-profile-image | |
2257 "Get the muscle-profile image from the node's blender meta-data." | |
2258 [#^Node muscle] | |
2259 (if-let [image (meta-data muscle "muscle")] | |
2260 (load-image image))) | |
2261 | |
2262 (defn muscle-strength | |
2263 "Return the strength of this muscle, or 1 if it is not defined." | |
2264 [#^Node muscle] | |
2265 (if-let [strength (meta-data muscle "strength")] | |
2266 strength 1)) | |
2267 | |
2268 (defn motor-pool | |
2269 "Return a vector where each entry is the strength of the \"motor | |
2270 neuron\" at that part in the muscle." | |
2271 [#^Node muscle] | |
2272 (let [profile (muscle-profile-image muscle)] | |
2273 (vec | |
2274 (let [width (.getWidth profile)] | |
2275 (for [x (range width)] | |
2276 (- 255 | |
2277 (bit-and | |
2278 0x0000FF | |
2279 (.getRGB profile x 0)))))))) | |
2280 #+END_SRC | |
2281 #+end_listing | |
2282 | |
2283 Of note here is =motor-pool= which interprets the muscle-profile | |
2284 image in a way that allows me to use gradients between white and | |
2285 red, instead of shades of gray as I've been using for all the | |
2286 other senses. This is purely an aesthetic touch. | |
2287 | |
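A few worked pixel values (illustrative) show how the formula in
=motor-pool= turns a white-to-red gradient into motor neuron
strengths:

#+begin_src clojure
;; strength = 255 minus the blue channel of the profile pixel.
(map #(- 255 (bit-and 0x0000FF %)) [0xFFFFFF 0xFF8080 0xFF0000])
;; => (0 127 255)  ; white is weakest, pure red is strongest
#+end_src
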
2288 *** Creating muscles | |
2289 | |
2290 #+caption: This is the core movement function in =CORTEX=, which | |
2291 #+caption: implements muscles that report on their activation. | |
2292 #+name: muscle-kernel | |
2293 #+begin_listing clojure | |
2294 #+BEGIN_SRC clojure | |
2295 (defn movement-kernel | |
2296 "Returns a function which when called with a integer value inside a | |
2297 running simulation will cause movement in the creature according | |
2298 to the muscle's position and strength profile. Each function | |
2299 returns the amount of force applied / max force." | |
2300 [#^Node creature #^Node muscle] | |
2301 (let [target (closest-node creature muscle) | |
2302 axis | |
2303 (.mult (.getWorldRotation muscle) Vector3f/UNIT_Y) | |
2304 strength (muscle-strength muscle) | |
2305 | |
2306 pool (motor-pool muscle) | |
2307 pool-integral (reductions + pool) | |
2308 forces | |
2309 (vec (map #(float (* strength (/ % (last pool-integral)))) | |
2310 pool-integral)) | |
2311 control (.getControl target RigidBodyControl)] | |
2312 ;;(println-repl (.getName target) axis) | |
2313 (fn [n] | |
2314 (let [pool-index (max 0 (min n (dec (count pool)))) | |
2315 force (forces pool-index)] | |
2316 (.applyTorque control (.mult axis force)) | |
2317 (float (/ force strength)))))) | |
2318 | |
2319 (defn movement! | |
2320 "Endow the creature with the power of movement. Returns a sequence | |
2321 of functions, each of which accept an integer value and will | |
2322 activate their corresponding muscle." | |
2323 [#^Node creature] | |
2324 (for [muscle (muscles creature)] | |
2325 (movement-kernel creature muscle))) | |
2326 #+END_SRC | |
2327 #+end_listing | |
2328 | |
2329 | |
2330 =movement-kernel= creates a function that applies torque to the | |
2331 physical object nearest the muscle node. The muscle exerts a | |
2332 rotational force dependent on its orientation to the object in the blender | |
2333 file. The function returned by =movement-kernel= is also a sense | |
2334 function: it returns the percent of the total muscle strength that | |
2335 is currently being employed. This is analogous to muscle tension | |
2336 in humans and completes the sense of proprioception begun in the | |
2337 last section. | |
2338 | |
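To make the interplay between sensors and effectors concrete, here
is a minimal sketch of wiring every sense onto a creature and
polling it once per frame. The use of the world's root node as the
collision target for touch and the particular muscle command are
illustrative assumptions, not code from the actual integration test.

#+begin_src clojure
;; Sketch only. Assumes 'creature is a blender-derived creature node
;; and 'world is the running (SimpleApplication) simulation.
(defn sense-everything [#^Node creature]
  (let [touch-fns   (touch! creature)
        vision-fns  (vision! creature)
        hearing-fns (hearing! creature)
        read-joints (proprioception! creature)
        muscle-fns  (movement! creature)]
    (fn [world]
      {:touch   (map #(% (.getRootNode world)) touch-fns)
       :vision  (map #(% world) vision-fns)
       :hearing (map #(% world) hearing-fns)
       :joints  (read-joints)
       ;; muscles are both effectors and sensors: contract the first
       ;; muscle a little and record the resulting relative tension.
       :tension ((first muscle-fns) 20)})))
#+end_src
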
2339 ** =CORTEX= brings complex creatures to life! | |
2340 | |
2341 The ultimate test of =CORTEX= is to create a creature with the full | |
2342 gamut of senses and put it through its paces. | |
2343 | |
2344 With all senses enabled, my right hand model looks like an | |
2345 intricate marionette hand with several strings for each finger: | |
2346 | |
2347 #+caption: View of the hand model with all sense nodes. You can see | |
2348 #+caption: the joint, muscle, ear, and eye nodes here. | |
2349 #+name: hand-nodes-1 | |
2350 #+ATTR_LaTeX: :width 11cm | |
2351 [[./images/hand-with-all-senses2.png]] | |
2352 | |
2353 #+caption: An alternate view of the hand. | |
2354 #+name: hand-nodes-2 | |
2355 #+ATTR_LaTeX: :width 15cm | |
2356 [[./images/hand-with-all-senses3.png]] | |
2357 | |
2358 With the hand fully rigged with senses, I can run it through a test | |
2359 that exercises everything at once. | |
2360 | |
2361 #+caption: A full test of the hand with all senses. Note especially | |
2362 #+caption: the interactions the hand has with itself: it feels | |
2363 #+caption: its own palm and fingers, and when it curls its fingers, | |
2364 #+caption: it sees them with its eye (which is located in the center | |
2365 #+caption: of the palm). The red block appears with a pure tone sound. | |
2366 #+caption: The hand then uses its muscles to launch the cube! | |
2367 #+name: integration | |
2368 #+ATTR_LaTeX: :width 16cm | |
2369 [[./images/integration.png]] | |
2370 | |
2371 ** =CORTEX= enables many possibilities for further research | |
2372 | |
2373 Often, the hardest part of building a system involving | |
2374 creatures is dealing with physics and graphics. =CORTEX= removes | |
2375 much of this initial difficulty and leaves researchers free to | |
2376 directly pursue their ideas. I hope that even undergrads with a | |
2377 passing curiosity about simulated touch or creature evolution will | |
2378 be able to use =CORTEX= for experimentation. =CORTEX= is a completely | |
2379 simulated world, and far from being a disadvantage, its simulated | |
2380 nature enables you to create senses and creatures that would be | |
2381 impossible to make in the real world. | |
2382 | |
2383 While not by any means a complete list, here are some paths | |
2384 =CORTEX= is well suited to help you explore: | |
2385 | |
2386 - Empathy :: my empathy program leaves many areas for | |
2387 improvement, among which are using vision to infer | |
2388 proprioception and looking up sensory experience with imagined | |
2389 vision, touch, and sound. | |
2390 - Evolution :: Karl Sims created a rich environment for | |
2391 simulating the evolution of creatures on a Connection | |
2392 Machine. Today, this can be redone and expanded with =CORTEX= | |
2393 on an ordinary computer. | |
2394 - Exotic senses :: Cortex enables many fascinating senses that are | |
2395 not possible to build in the real world. For example, | |
2396 telekinesis is an interesting avenue to explore. You can also | |
2397 make a ``semantic'' sense which looks up metadata tags on | |
2398 objects in the environment; the metadata tags might contain | |
2399 other sensory information. | |
2400 - Imagination via subworlds :: this would involve a creature with | |
2401 an effector which creates an entire new sub-simulation where | |
2402 the creature has direct control over placement/creation of | |
2403 objects via simulated telekinesis. The creature observes this | |
2404 sub-world through its normal senses and uses its observations | |
2405 to make predictions about its top level world. | |
2406 - Simulated prescience :: step the simulation forward a few ticks, | |
2407 gather sensory data, then supply this data to the creature as | |
2408 one of its actual senses. The cost of prescience is slowing | |
2409 the simulation down by a factor proportional to however far | |
2410 you want the entities to see into the future. What happens | |
2411 when two evolved creatures that can each see into the future | |
2412 fight each other? | |
2413 - Swarm creatures :: Program a group of creatures that cooperate | |
2414 with each other. Because the creatures would be simulated, you | |
2415 could investigate computationally complex rules of behavior | |
2416 which still, from the group's point of view, would happen in | |
2417 ``real time''. Interactions could be as simple as cellular | |
2418 organisms communicating via flashing lights, or as complex as | |
2419 humanoids completing social tasks, etc. | |
2420 - =HACKER= for writing muscle-control programs :: Presented with | |
2421 a low-level muscle control / sense API, generate higher-level | |
2422 programs for accomplishing various stated goals. Example goals | |
2423 might be "extend all your fingers" or "move your hand into the | |
2424 area with blue light" or "decrease the angle of this joint". | |
2425 It would be like Sussman's HACKER, except it would operate | |
2426 with much more data in a more realistic world. Start off with | |
2427 "calisthenics" to develop subroutines over the motor control | |
2428 API. This would be the "spinal cord" of a more intelligent | |
2429 creature. The low-level programming code might be a Turing | |
2430 machine that could develop programs to iterate over a "tape" | |
2431 where each entry in the tape could control recruitment of the | |
2432 fibers in a muscle. | |
2433 - Sense fusion :: There is much work to be done on sense | |
2434 integration -- building up a coherent picture of the world and | |
2435 the things in it. With =CORTEX= as a base, you can explore | |
2436 concepts like self-organizing maps or cross modal clustering | |
2437 in ways that have never before been tried. | |
2438 - Inverse kinematics :: experiments in sense guided motor control | |
2439 are easy given =CORTEX='s support -- you can get right to the | |
2440 hard control problems without worrying about physics or | |
2441 senses. | |
2442 | |
2443 * Empathy in a simulated worm | |
2444 | |
2445 Here I develop a computational model of empathy, using =CORTEX= as a | |
2446 base. Empathy in this context is the ability to observe another | |
2447 creature and infer what sorts of sensations that creature is | |
2448 feeling. My empathy algorithm involves multiple phases. First is | |
2449 free-play, where the creature moves around and gains sensory | |
2450 experience. From this experience I construct a representation of the | |
2451 creature's sensory state space, which I call \Phi-space. Using | |
2452 \Phi-space, I construct an efficient function which takes the | |
2453 limited data that comes from observing another creature and enriches | |
2454 it with a full complement of imagined sensory data. I can then use the | |
2455 imagined sensory data to recognize what the observed creature is | |
2456 doing and feeling, using straightforward embodied action predicates. | |
2457 This is all demonstrated using a simple worm-like creature, and | |
2458 recognizing worm-actions based on limited data. | |
2459 | |
2460 #+caption: Here is the worm with which we will be working. | |
2461 #+caption: It is composed of 5 segments. Each segment has a | |
2462 #+caption: pair of extensor and flexor muscles. Each of the | |
2463 #+caption: worm's four joints is a hinge joint which allows | |
2464 #+caption: about 30 degrees of rotation to either side. Each segment | |
2465 #+caption: of the worm is touch-capable and has a uniform | |
2466 #+caption: distribution of touch sensors on each of its faces. | |
2467 #+caption: Each joint has a proprioceptive sense to detect | |
2468 #+caption: relative positions. The worm segments are all the | |
2469 #+caption: same except for the first one, which has a much | |
2470 #+caption: higher weight than the others to allow for easy | |
2471 #+caption: manual motor control. | |
2472 #+name: basic-worm-view | |
2473 #+ATTR_LaTeX: :width 10cm | |
2474 [[./images/basic-worm-view.png]] | |
2475 | |
2476 #+caption: Program for reading a worm from a blender file and | |
2477 #+caption: outfitting it with the senses of proprioception, | |
2478 #+caption: touch, and the ability to move, as specified in the | |
2479 #+caption: blender file. | |
2480 #+name: get-worm | |
2481 #+begin_listing clojure | |
2482 #+begin_src clojure | |
2483 (defn worm [] | |
2484 (let [model (load-blender-model "Models/worm/worm.blend")] | |
2485 {:body (doto model (body!)) | |
2486 :touch (touch! model) | |
2487 :proprioception (proprioception! model) | |
2488 :muscles (movement! model)})) | |
2489 #+end_src | |
2490 #+end_listing | |
2491 | |
2492 ** Embodiment factors action recognition into manageable parts | |
2493 | |
2494 Using empathy, I divide the problem of action recognition into a | |
2495 recognition process expressed in the language of a full complement | |
2496 of senses, and an imaginative process that generates full sensory | |
2497 data from partial sensory data. Splitting the action recognition | |
2498 problem in this manner greatly reduces the total amount of work to | |
2499 recognize actions: the imaginative process is mostly just matching | |
2500 previous experience, and the recognition process gets to use all | |
2501 the senses to directly describe any action. | |
2502 | |
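This factoring is not the =EMPATH= implementation itself, but a
sketch of its shape in Clojure. The =imagine= argument stands in for
the imaginative process developed later in this chapter, and the
action predicates are the ones defined in the next section:

#+begin_src clojure
;; Sketch only: recognition is a handful of body-centered predicates,
;; while =imagine= (supplied as an argument) enriches partial
;; observations into full sensory experience.
(defn recognize-action [imagine partial-observation]
  (let [experiences (imagine partial-observation)]
    (cond (grand-circle? experiences) :grand-circle
          (curled?       experiences) :curled
          (wiggling?     experiences) :wiggling
          (resting?      experiences) :resting
          :else                       :unknown)))
#+end_src
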
2503 ** Action recognition is easy with a full gamut of senses | |
2504 | |
2505 Embodied representations using multiple senses such as touch, | |
2506 proprioception, and muscle tension turn out to be exceedingly | |
2507 efficient at describing body-centered actions. It is the ``right | |
2508 language for the job''. For example, it takes only around 5 lines | |
2509 of LISP code to describe the action of ``curling'' using embodied | |
2510 primitives. It takes about 10 lines to describe the seemingly | |
2511 complicated action of wiggling. | |
2512 | |
2513 The following action predicates each take a stream of sensory | |
2514 experience, observe however much of it they desire, and decide | |
2515 whether the worm is doing the action they describe. =curled?= | |
2516 relies on proprioception, =resting?= relies on touch, =wiggling?= | |
2517 relies on a Fourier analysis of muscle contraction, and | |
2518 =grand-circle?= relies on touch and reuses =curled?= as a guard. | |
2519 | |
2520 #+caption: Program for detecting whether the worm is curled. This is the | |
2521 #+caption: simplest action predicate, because it only uses the last frame | |
2522 #+caption: of sensory experience, and only uses proprioceptive data. Even | |
2523 #+caption: this simple predicate, however, is automatically frame | |
2524 #+caption: independent and ignores vermopomorphic differences such as | |
2525 #+caption: worm textures and colors. | |
2526 #+name: curled | |
2527 #+begin_listing clojure | |
2528 #+begin_src clojure | |
2529 (defn curled? | |
2530 "Is the worm curled up?" | |
2531 [experiences] | |
2532 (every? | |
2533 (fn [[_ _ bend]] | |
2534 (> (Math/sin bend) 0.64)) | |
2535 (:proprioception (peek experiences)))) | |
2536 #+end_src | |
2537 #+end_listing | |
2538 | |
2539 #+caption: Program for summarizing the touch information in a patch | |
2540 #+caption: of skin. | |
2541 #+name: touch-summary | |
2542 #+begin_listing clojure | |
2543 #+begin_src clojure | |
2544 (defn contact | |
2545 "Determine how much contact a particular worm segment has with | |
2546 other objects. Returns a value between 0 and 1, where 1 is full | |
2547 contact and 0 is no contact." | |
2548 [touch-region [coords contact :as touch]] | |
2549 (-> (zipmap coords contact) | |
2550 (select-keys touch-region) | |
2551 (vals) | |
2552 (#(map first %)) | |
2553 (average) | |
2554 (* 10) | |
2555 (- 1) | |
2556 (Math/abs))) | |
2557 #+end_src | |
2558 #+end_listing | |
2559 | |
2560 | |
2561 #+caption: Program for detecting whether the worm is at rest. This program | |
2562 #+caption: uses a summary of the tactile information from the underbelly | |
2563 #+caption: of the worm, and is only true if every segment is touching the | |
2564 #+caption: floor. Note that this function contains no references to | |
2565 #+caption: proprioception at all. | |
2566 #+name: resting | |
2567 #+begin_listing clojure | |
2568 #+begin_src clojure | |
2569 (def worm-segment-bottom (rect-region [8 15] [14 22])) | |
2570 | |
2571 (defn resting? | |
2572 "Is the worm resting on the ground?" | |
2573 [experiences] | |
2574 (every? | |
2575 (fn [touch-data] | |
2576 (< 0.9 (contact worm-segment-bottom touch-data))) | |
2577 (:touch (peek experiences)))) | |
2578 #+end_src | |
2579 #+end_listing | |
2580 | |
2581 #+caption: Program for detecting whether the worm is curled up into a | |
2582 #+caption: full circle. Here the embodied approach begins to shine, as | |
2583 #+caption: I am able to both use a previous action predicate (=curled?=) | |
2584 #+caption: as well as the direct tactile experience of the head and tail. | |
2585 #+name: grand-circle | |
2586 #+begin_listing clojure | |
2587 #+begin_src clojure | |
2588 (def worm-segment-bottom-tip (rect-region [15 15] [22 22])) | |
2589 | |
2590 (def worm-segment-top-tip (rect-region [0 15] [7 22])) | |
2591 | |
2592 (defn grand-circle? | |
2593 "Does the worm form a majestic circle (one end touching the other)?" | |
2594 [experiences] | |
2595 (and (curled? experiences) | |
2596 (let [worm-touch (:touch (peek experiences)) | |
2597 tail-touch (worm-touch 0) | |
2598 head-touch (worm-touch 4)] | |
2599 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch)) | |
2600 (< 0.55 (contact worm-segment-top-tip head-touch)))))) | |
2601 #+end_src | |
2602 #+end_listing | |
2603 | |
2604 | |
2605 #+caption: Program for detecting whether the worm has been wiggling for | |
2606 #+caption: the last few frames. It uses a Fourier analysis of the muscle | |
2607 #+caption: contractions of the worm's tail to determine wiggling. This is | |
2608 #+caption: significant because there is no particular frame that clearly | |
2609 #+caption: indicates that the worm is wiggling --- only when multiple frames | |
2610 #+caption: are analyzed together is the wiggling revealed. Defining | |
2611 #+caption: wiggling this way also gives the worm an opportunity to learn | |
2612 #+caption: and recognize ``frustrated wiggling'', where the worm tries to | |
2613 #+caption: wiggle but can't. Frustrated wiggling is very visually different | |
2614 #+caption: from actual wiggling, but this definition gives it to us for free. | |
2615 #+name: wiggling | |
2616 #+begin_listing clojure | |
2617 #+begin_src clojure | |
2618 (defn fft [nums] | |
2619 (map | |
2620 #(.getReal %) | |
2621 (.transform | |
2622 (FastFourierTransformer. DftNormalization/STANDARD) | |
2623 (double-array nums) TransformType/FORWARD))) | |
2624 | |
2625 (def indexed (partial map-indexed vector)) | |
2626 | |
2627 (defn max-indexed [s] | |
2628 (first (sort-by (comp - second) (indexed s)))) | |
2629 | |
2630 (defn wiggling? | |
2631 "Is the worm wiggling?" | |
2632 [experiences] | |
2633 (let [analysis-interval 0x40] | |
2634 (when (> (count experiences) analysis-interval) | |
2635 (let [a-flex 3 | |
2636 a-ex 2 | |
2637 muscle-activity | |
2638 (map :muscle (vector:last-n experiences analysis-interval)) | |
2639 base-activity | |
2640 (map #(- (% a-flex) (% a-ex)) muscle-activity)] | |
2641 (= 2 | |
2642 (first | |
2643 (max-indexed | |
2644 (map #(Math/abs %) | |
2645 (take 20 (fft base-activity)))))))))) | |
2646 #+end_src | |
2647 #+end_listing | |
2648 | |
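The =fft= helper in the listing above leans on the Apache Commons
Math library for the Fourier transform. A sketch of the import it
assumes (the exact library version is an assumption on my part):

#+begin_src clojure
;; Classes assumed by the =fft= helper above.
(import '(org.apache.commons.math3.transform
          FastFourierTransformer DftNormalization TransformType))
#+end_src
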
2649 With these action predicates, I can now recognize the actions of | |
2650 the worm while it is moving under my control and I have access to | |
2651 all the worm's senses. | |
2652 | |
2653 #+caption: Use the action predicates defined earlier to report on | |
2654 #+caption: what the worm is doing while in simulation. | |
2655 #+name: report-worm-activity | |
2656 #+begin_listing clojure | |
2657 #+begin_src clojure | |
2658 (defn debug-experience | |
2659 [experiences text] | |
2660 (cond | |
2661 (grand-circle? experiences) (.setText text "Grand Circle") | |
2662 (curled? experiences) (.setText text "Curled") | |
2663 (wiggling? experiences) (.setText text "Wiggling") | |
2664 (resting? experiences) (.setText text "Resting"))) | |
2665 #+end_src | |
2666 #+end_listing | |
2667 | |
2668 #+caption: Using =debug-experience=, the body-centered predicates | |
2669 #+caption: work together to classify the behaviour of the worm. | |
2670 #+caption: The predicates are operating with access to the worm's | |
2671 #+caption: full sensory data. | |
2672 #+name: worm-identify-init | |
2673 #+ATTR_LaTeX: :width 10cm | |
2674 [[./images/worm-identify-init.png]] | |
2675 | |
2676 These action predicates satisfy the recognition requirement of an | |
2677 empathic recognition system. There is power in the simplicity of | |
2678 the action predicates. They describe their actions without getting | |
2679 confused by visual details of the worm. Each one is frame | |
2680 independent, but more than that, they are each independent of | |
2681 irrelevant visual details of the worm and the environment. They | |
2682 will work regardless of whether the worm is a different color or | |
2683 heavily textured, or if the environment has strange lighting. | |
2684 | |
2685 The trick now is to make the action predicates work even when the | |
2686 sensory data on which they depend is absent. If I can do that, then | |
2687 I will have gained much. | |
2688 | |
2689 ** \Phi-space describes the worm's experiences | |
2690 | |
2691 As a first step towards building empathy, I need to gather all of | |
2692 the worm's experiences during free play. I use a simple vector to | |
2693 store all the experiences. | |
2694 | |
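Each entry in this vector maps sense names to that frame's sensor
values. The exact shape is fixed by the =worm= definition above, but
as the action predicates of the previous section consume it, an entry
looks roughly like the following sketch (the value names are
placeholders, not real code from =CORTEX=):

#+begin_src clojure
;; Illustrative shape of one experience entry (placeholder symbols).
'{:proprioception [[heading pitch roll]]  ; one triple per joint
  :touch          [[coords activations]]  ; one [coords values] pair per segment
  :muscle         [force]}                ; one reading per muscle
#+end_src
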
2695 Each element of the experience vector exists in the vast space of | |
2696 all possible worm-experiences. Most of this vast space is actually | |
2697 unreachable due to physical constraints of the worm's body. For | |
2698 example, the worm's segments are connected by hinge joints that put | |
2699 a practical limit on the worm's range of motions without limiting | |
2700 its degrees of freedom. Some groupings of senses are impossible; | |
2701 the worm can not be bent into a circle so that its ends are | |
2702 touching and at the same time not also experience the sensation of | |
2703 touching itself. | |
2704 | |
2705 As the worm moves around during free play and its experience vector | |
2706 grows larger, the vector begins to define a subspace which is all | |
2707 the sensations the worm can practically experience during normal | |
2708 operation. I call this subspace \Phi-space, short for | |
2709 physical-space. The experience vector defines a path through | |
2710 \Phi-space. This path has interesting properties that all derive | |
2711 from physical embodiment. The proprioceptive components are | |
2712 completely smooth, because in order for the worm to move from one | |
2713 position to another, it must pass through the intermediate | |
2714 positions. The path invariably forms loops as actions are repeated. | |
2715 Finally and most importantly, proprioception actually gives very | |
2716 strong inference about the other senses. For example, when the worm | |
2717 is flat, you can infer that it is touching the ground and that its | |
2718 muscles are not active, because if the muscles were active, the | |
2719 worm would be moving and would not be perfectly flat. In order to | |
2720 stay flat, the worm has to be touching the ground, or it would | |
2721 again be moving out of the flat position due to gravity. If the | |
2722 worm is positioned in such a way that it interacts with itself, | |
2723 then it is very likely to be feeling the same tactile feelings as | |
2724 the last time it was in that position, because it has the same body | |
2725 as then. If you observe multiple frames of proprioceptive data, | |
2726 then you can become increasingly confident about the exact | |
2727 activations of the worm's muscles, because it generally takes a | |
2728 unique combination of muscle contractions to transform the worm's | |
2729 body along a specific path through \Phi-space. | |
2730 | |
2731 There is a simple way of taking \Phi-space and the total ordering | |
2732 provided by an experience vector and reliably inferring the rest of | |
2733 the senses. | |
2734 | |
2735 ** Empathy is the process of tracing through \Phi-space | |
2736 | |
2737 Here is the core of a basic empathy algorithm, starting with an | |
2738 experience vector: | |
2739 | |
2740 First, group the experiences into tiered proprioceptive bins. I use | |
2741 three tiers of bins, with sizes in powers of 10; the smallest bin has | |
2742 an approximate size of 0.001 radians in all proprioceptive dimensions. | |
2743 | |
2744 Then, given a sequence of proprioceptive input, generate a set of | |
2745 matching experience records for each input, using the tiered | |
2746 proprioceptive bins. | |
2747 | |
2748 Finally, to infer sensory data, select the longest consecutive chain | |
2749 of experiences. Consecutive means that the experiences | |
2750 appear next to each other in the experience vector. | |
2751 | |
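Put together, the inference step is a short pipeline built from the
functions defined below. The name =empathize= is mine and this is
only a sketch, but it mirrors the =empathy-demonstration= program
later in this section:

#+begin_src clojure
;; Sketch of the inference pipeline. proprio-frames must be ordered
;; from most recent to least recent, as in =empathy-demonstration=.
(defn empathize [phi-space phi-scan proprio-frames]
  (->> proprio-frames
       (map phi-scan)     ; candidate experience indices per frame
       (longest-thread)   ; longest consecutive chain of experiences
       (infer-nils)       ; fill unmatched frames with best guesses
       (mapv phi-space))) ; recover full, imagined sensory experience
#+end_src
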
2752 This algorithm has three advantages: | |
2753 | |
2754 1. It's simple | |
2755 | |
2756 2. It's very fast -- retrieving possible interpretations takes | |
2757 constant time. Tracing through chains of interpretations takes | |
2758 time proportional to the average number of experiences in a | |
2759 proprioceptive bin. Redundant experiences in \Phi-space can be | |
2760 merged to save computation. | |
2761 | |
2762 3. It protects from wrong interpretations of transient ambiguous | |
2763 proprioceptive data. For example, if the worm is flat for just | |
2764 an instant, this flatness will not be interpreted as implying | |
2765 that the worm has its muscles relaxed, since the flatness is | |
2766 part of a longer chain which includes a distinct pattern of | |
2767 muscle activation. Markov chains or other memoryless statistical | |
2768 models that operate on individual frames may very well make this | |
2769 mistake. | |
2770 | |
2771 #+caption: Program to convert an experience vector into a | |
2772 #+caption: proprioceptively binned lookup function. | |
2773 #+name: bin | |
2774 #+begin_listing clojure | |
2775 #+begin_src clojure | |
2776 (defn bin [digits] | |
2777 (fn [angles] | |
2778 (->> angles | |
2779 (flatten) | |
2780 (map (juxt #(Math/sin %) #(Math/cos %))) | |
2781 (flatten) | |
2782 (mapv #(Math/round (* % (Math/pow 10 (dec digits)))))))) | |
2783 | |
2784 (defn gen-phi-scan | |
2785 "Nearest-neighbors with binning. Only returns a result if | |
2786 the proprioceptive data is within 10% of a previously recorded | |
2787 result in all dimensions." | |
2788 [phi-space] | |
2789 (let [bin-keys (map bin [3 2 1]) | |
2790 bin-maps | |
2791 (map (fn [bin-key] | |
2792 (group-by | |
2793 (comp bin-key :proprioception phi-space) | |
2794 (range (count phi-space)))) bin-keys) | |
2795 lookups (map (fn [bin-key bin-map] | |
2796 (fn [proprio] (bin-map (bin-key proprio)))) | |
2797 bin-keys bin-maps)] | |
2798 (fn lookup [proprio-data] | |
2799 (set (some #(% proprio-data) lookups))))) | |
2800 #+end_src | |
2801 #+end_listing | |
2802 | |
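To make the binning concrete, here is a hypothetical REPL interaction
(the joint angle of 0.5 radians is only illustrative). At the
coarsest resolution nearly all angles collide, while at the finest
resolution only very similar angles share a key:

#+begin_src clojure
;; Illustrative only: binning a single joint angle of 0.5 radians.
((bin 1) [0.5])    ; => [0 1]    (sin/cos rounded to whole numbers)
((bin 2) [0.5])    ; => [5 9]
((bin 3) [0.5])    ; => [48 88]
((bin 3) [0.501])  ; => [48 88]  (still lands in the same fine bin)
#+end_src
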
2803 #+caption: =longest-thread= finds the longest path of consecutive | |
2804 #+caption: experiences to explain proprioceptive worm data. | |
2805 #+name: phi-space-history-scan | |
2806 #+ATTR_LaTeX: :width 10cm | |
2807 [[./images/aurellem-gray.png]] | |
2808 | |
2809 =longest-thread= infers sensory data by stitching together pieces | |
2810 from previous experience. It prefers longer chains of previous | |
2811 experience to shorter ones. For example, during training the worm | |
2812 might rest on the ground for one second before it performs its | |
2813 exercises. If during recognition the worm rests on the ground for | |
2814 five seconds, =longest-thread= will accommodate this five-second | |
2815 rest period by looping the one second rest chain five times. | |
2816 | |
2817 =longest-thread= takes time proportional to the average number of | |
2818 entries in a proprioceptive bin, because for each element in the | |
2819 starting bin it performs a series of set lookups in the preceding | |
2820 bins. If the total history is limited, then this is only a constant | |
2821 multiple times the number of entries in the starting bin. This | |
2822 analysis also applies even if the action requires multiple longest | |
2823 chains -- it's still the average number of entries in a | |
2824 proprioceptive bin times the desired chain length. Because | |
2825 =longest-thread= is so efficient and simple, I can interpret | |
2826 worm-actions in real time. | |
2827 | |
2828 #+caption: Program to calculate empathy by tracing through \Phi-space | |
2829 #+caption: and finding the longest (i.e., most coherent) interpretation | |
2830 #+caption: of the data. | |
2831 #+name: longest-thread | |
2832 #+begin_listing clojure | |
2833 #+begin_src clojure | |
2834 (defn longest-thread | |
2835 "Find the longest thread from phi-index-sets. The index sets should | |
2836 be ordered from most recent to least recent." | |
2837 [phi-index-sets] | |
2838 (loop [result '() | |
2839 [thread-bases & remaining :as phi-index-sets] phi-index-sets] | |
2840 (if (empty? phi-index-sets) | |
2841 (vec result) | |
2842 (let [threads | |
2843 (for [thread-base thread-bases] | |
2844 (loop [thread (list thread-base) | |
2845 remaining remaining] | |
2846 (let [next-index (dec (first thread))] | |
2847 (cond (empty? remaining) thread | |
2848 (contains? (first remaining) next-index) | |
2849 (recur | |
2850 (cons next-index thread) (rest remaining)) | |
2851 :else thread)))) | |
2852 longest-thread | |
2853 (reduce (fn [thread-a thread-b] | |
2854 (if (> (count thread-a) (count thread-b)) | |
2855 thread-a thread-b)) | |
2856 '(nil) | |
2857 threads)] | |
2858 (recur (concat longest-thread result) | |
2859 (drop (count longest-thread) phi-index-sets)))))) | |
2860 #+end_src | |
2861 #+end_listing | |
2862 | |
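A tiny, hypothetical example of its behavior (the index sets are made
up; input is ordered most recent first, output is chronological):

#+begin_src clojure
;; Hypothetical proprioceptive bin matches, most recent frame first.
(longest-thread [#{7} #{6} #{5 9}])  ; => [5 6 7]
(longest-thread [#{}  #{6} #{5 9}])  ; => [5 6 nil] -- no match for the
                                     ;    newest frame; =infer-nils=
                                     ;    will fill the gap below.
#+end_src
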
2863 There is one final piece, which is to replace missing sensory data | |
2864 with a best-guess estimate. While I could fill in missing data by | |
2865 using a gradient over the closest known sensory data points, | |
2866 averages can be misleading. It is certainly possible to create an | |
2867 impossible sensory state by averaging two possible sensory states. | |
2868 Therefore, I simply replicate the most recent sensory experience to | |
2869 fill in the gaps. | |
2870 | |
2871 #+caption: Fill in blanks in sensory experience by replicating the most | |
2872 #+caption: recent experience. | |
2873 #+name: infer-nils | |
2874 #+begin_listing clojure | |
2875 #+begin_src clojure | |
2876 (defn infer-nils | |
2877 "Replace nils with the next available non-nil element in the | |
2878 sequence, or barring that, 0." | |
2879 [s] | |
2880 (loop [i (dec (count s)) | |
2881 v (transient s)] | |
2882 (if (zero? i) (persistent! v) | |
2883 (if-let [cur (v i)] | |
2884 (if (get v (dec i) 0) | |
2885 (recur (dec i) v) | |
2886 (recur (dec i) (assoc! v (dec i) cur))) | |
2887 (recur i (assoc! v i 0)))))) | |
2888 #+end_src | |
2889 #+end_listing | |
2890 | |
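A quick sketch of =infer-nils= on hypothetical index vectors:

#+begin_src clojure
;; nils mark frames for which no matching experience was found.
(infer-nils [nil 2 nil nil 3])  ; => [2 2 3 3 3]
(infer-nils [1 nil])            ; => [1 0]  (no later value, so 0)
#+end_src
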
2891 ** Efficient action recognition with =EMPATH= | |
2892 | |
2893 To use =EMPATH= with the worm, I first need to gather a set of | |
2894 experiences from the worm that includes the actions I want to | |
2895 recognize. The =generate-phi-space= program (listing | |
2896 \ref{generate-phi-space}) runs the worm through a series of | |
2897 exercises and gathers those experiences into a vector. The | |
2898 =do-all-the-things= program is a routine expressed in a simple | |
2899 muscle contraction script language for automated worm control. It | |
2900 causes the worm to rest, curl, and wiggle over about 700 frames | |
2901 (approx. 11 seconds). | |
2902 | |
2903 #+caption: Program to gather the worm's experiences into a vector for | |
2904 #+caption: further processing. The =motor-control-program= line uses | |
2905 #+caption: a motor control script that causes the worm to execute a series | |
2906 #+caption: of ``exercises'' that include all the action predicates. | |
2907 #+name: generate-phi-space | |
2908 #+begin_listing clojure | |
2909 #+begin_src clojure | |
2910 (def do-all-the-things | |
2911 (concat | |
2912 curl-script | |
2913 [[300 :d-ex 40] | |
2914 [320 :d-ex 0]] | |
2915 (shift-script 280 (take 16 wiggle-script)))) | |
2916 | |
2917 (defn generate-phi-space [] | |
2918 (let [experiences (atom [])] | |
2919 (run-world | |
2920 (apply-map | |
2921 worm-world | |
2922 (merge | |
2923 (worm-world-defaults) | |
2924 {:end-frame 700 | |
2925 :motor-control | |
2926 (motor-control-program worm-muscle-labels do-all-the-things) | |
2927 :experiences experiences}))) | |
2928 @experiences)) | |
2929 #+end_src | |
2930 #+end_listing | |
2931 | |
2932 #+caption: Use =longest-thread= and a \Phi-space generated from a short | |
2933 #+caption: exercise routine to interpret actions during free play. | |
2934 #+name: empathy-debug | |
2935 #+begin_listing clojure | |
2936 #+begin_src clojure | |
2937 (defn init [] | |
2938 (def phi-space (generate-phi-space)) | |
2939 (def phi-scan (gen-phi-scan phi-space))) | |
2940 | |
2941 (defn empathy-demonstration [] | |
2942 (let [proprio (atom ())] | |
2943 (fn | |
2944 [experiences text] | |
2945 (let [phi-indices (phi-scan (:proprioception (peek experiences)))] | |
2946 (swap! proprio (partial cons phi-indices)) | |
2947 (let [exp-thread (longest-thread (take 300 @proprio)) | |
2948 empathy (mapv phi-space (infer-nils exp-thread))] | |
2949 (println-repl (vector:last-n exp-thread 22)) | |
2950 (cond | |
2951 (grand-circle? empathy) (.setText text "Grand Circle") | |
2952 (curled? empathy) (.setText text "Curled") | |
2953 (wiggling? empathy) (.setText text "Wiggling") | |
2954 (resting? empathy) (.setText text "Resting") | |
2955 :else (.setText text "Unknown"))))))) | |
2956 | |
2957 (defn empathy-experiment [record] | |
2958 (.start (worm-world :experience-watch (debug-experience-phi) | |
2959 :record record :worm worm*))) | |
2960 #+end_src | |
2961 #+end_listing | |
2962 | |
2963 The result of running =empathy-experiment= is that the system is | |
2964 generally able to interpret worm actions using the action-predicates | |
2965 on simulated sensory data just as well as with actual data. Figure | |
2966 \ref{empathy-debug-image} was generated using =empathy-experiment=: | |
2967 | |
2968 #+caption: From only proprioceptive data, =EMPATH= was able to infer | |
2969 #+caption: the complete sensory experience and classify four poses | |
2970 #+caption: (The last panel shows a composite image of \emph{wiggling}, | |
2971 #+caption: a dynamic pose.) | |
2972 #+name: empathy-debug-image | |
2973 #+ATTR_LaTeX: :width 10cm :placement [H] | |
2974 [[./images/empathy-1.png]] | |
2975 | |
2976 One way to measure the performance of =EMPATH= is to compare the | |
2977 suitability of the imagined sense experience to trigger the same | |
2978 action predicates as the real sensory experience. | |
2979 | |
2980 #+caption: Determine how closely empathy approximates actual | |
2981 #+caption: sensory data. | |
2982 #+name: test-empathy-accuracy | |
2983 #+begin_listing clojure | |
2984 #+begin_src clojure | |
2985 (def worm-action-label | |
2986 (juxt grand-circle? curled? wiggling?)) | |
2987 | |
2988 (defn compare-empathy-with-baseline [matches] | |
2989 (let [proprio (atom ())] | |
2990 (fn | |
2991 [experiences text] | |
2992 (let [phi-indices (phi-scan (:proprioception (peek experiences)))] | |
2993 (swap! proprio (partial cons phi-indices)) | |
2994 (let [exp-thread (longest-thread (take 300 @proprio)) | |
2995 empathy (mapv phi-space (infer-nils exp-thread)) | |
2996 experience-matches-empathy | |
2997 (= (worm-action-label experiences) | |
2998 (worm-action-label empathy))] | |
2999 (println-repl experience-matches-empathy) | |
3000 (swap! matches #(conj % experience-matches-empathy))))))) | |
3001 | |
3002 (defn accuracy [v] | |
3003 (float (/ (count (filter true? v)) (count v)))) | |
3004 | |
3005 (defn test-empathy-accuracy [] | |
3006 (let [res (atom [])] | |
3007 (run-world | |
3008 (worm-world :experience-watch | |
3009 (compare-empathy-with-baseline res) | |
3010 :worm worm*)) | |
3011 (accuracy @res))) | |
3012 #+end_src | |
3013 #+end_listing | |
3014 | |
3015 Running =test-empathy-accuracy= using the very short exercise | |
3016 program defined in listing \ref{generate-phi-space}, and then doing | |
3017 a similar pattern of activity manually yields an accuracy of around | |
3018 73%. This is based on very limited worm experience. By training the | |
3019 worm for longer, the accuracy dramatically improves. | |
3020 | |
3021 #+caption: Program to generate \Phi-space using manual training. | |
3022 #+name: manual-phi-space | |
3023 #+begin_listing clojure | |
3024 #+begin_src clojure | |
3025 (defn init-interactive [] | |
3026 (def phi-space | |
3027 (let [experiences (atom [])] | |
3028 (run-world | |
3029 (apply-map | |
3030 worm-world | |
3031 (merge | |
3032 (worm-world-defaults) | |
3033 {:experiences experiences}))) | |
3034 @experiences)) | |
3035 (def phi-scan (gen-phi-scan phi-space))) | |
3036 #+end_src | |
3037 #+end_listing | |
3038 | |
3039 After about 1 minute of manual training, I was able to achieve 95% | |
3040 accuracy on manual testing of the worm using =init-interactive= and | |
3041 =test-empathy-accuracy=. The majority of errors are near the | |
3042 boundaries of transitioning from one type of action to another. | |
3043 During these transitions the exact label for the action is more open | |
3044 to interpretation, and disagreement between empathy and experience | |
3045 is more excusable. | |
3046 | |
3047 ** Digression: bootstrapping touch using free exploration | |
3048 | |
3049 In the previous section I showed how to compute actions in terms of | |
3050 body-centered predicates which relied on average touch activation of | |
3051 pre-defined regions of the worm's skin. What if, instead of receiving | |
3052 touch pre-grouped into the six faces of each worm segment, the true | |
3053 topology of the worm's skin were unknown? This is more similar to how | |
3054 a nerve fiber bundle might be arranged. While two fibers that are | |
3055 close in a nerve bundle /might/ correspond to two touch sensors that | |
3056 are close together on the skin, the process of taking a complicated | |
3057 surface and forcing it into essentially a circle requires some cuts | |
3058 and rearrangements. | |
3059 | |
3060 In this section I show how to automatically learn the skin-topology of | |
3061 a worm segment by free exploration. As the worm rolls around on the | |
3062 floor, large sections of its surface get activated. If the worm has | |
3063 stopped moving, then whatever region of skin is touching the | |
3064 floor is probably an important region, and should be recorded. | |
3065 | |
3066 #+caption: Program to detect whether the worm is in a resting state | |
3067 #+caption: with one face touching the floor. | |
3068 #+name: pure-touch | |
3069 #+begin_listing clojure | |
3070 #+begin_src clojure | |
3071 (def full-contact [(float 0.0) (float 0.1)]) | |
3072 | |
3073 (defn pure-touch? | |
3074 "This is worm specific code to determine if a large region of touch | |
3075 sensors is either all on or all off." | |
3076 [[coords touch :as touch-data]] | |
3077 (= (set (map first touch)) (set full-contact))) | |
3078 #+end_src | |
3079 #+end_listing | |
3080 | |
3081 After collecting these important regions, there will be many nearly | |
3082 similar touch regions. While for some purposes the subtle | |
3083 differences between these regions will be important, for my | |
3084 purposes I collapse them into mostly non-overlapping sets using | |
3085 =remove-similar= in listing \ref{remove-similar}. | |
3086 | |
3087 #+caption: Program to take a list of sets of points and ``collapse'' them | |
3088 #+caption: so that the remaining sets in the list are significantly | |
3089 #+caption: different from each other. Prefer smaller sets to larger ones. | |
3090 #+name: remove-similar | |
3091 #+begin_listing clojure | |
3092 #+begin_src clojure | |
3093 (defn remove-similar | |
3094 [coll] | |
3095 (loop [result () coll (sort-by (comp - count) coll)] | |
3096 (if (empty? coll) result | |
3097 (let [[x & xs] coll | |
3098 c (count x)] | |
3099 (if (some | |
3100 (fn [other-set] | |
3101 (let [oc (count other-set)] | |
3102 (< (- (count (union other-set x)) c) (* oc 0.1)))) | |
3103 xs) | |
3104 (recur result xs) | |
3105 (recur (cons x result) xs)))))) | |
3106 #+end_src | |
3107 #+end_listing | |
3108 | |
3109 Actually running this simulation is easy given =CORTEX='s facilities. | |
3110 | |
3111 #+caption: Collect experiences while the worm moves around. Filter the touch | |
3112 #+caption: sensations by stable ones, collapse similar ones together, | |
3113 #+caption: and report the regions learned. | |
3114 #+name: learn-touch | |
3115 #+begin_listing clojure | |
3116 #+begin_src clojure | |
3117 (defn learn-touch-regions [] | |
3118 (let [experiences (atom []) | |
3119 world (apply-map | |
3120 worm-world | |
3121 (assoc (worm-segment-defaults) | |
3122 :experiences experiences))] | |
3123 (run-world world) | |
3124 (->> | |
3125 @experiences | |
3126 (drop 175) | |
3127 ;; access the single segment's touch data | |
3128 (map (comp first :touch)) | |
3129 ;; only deal with "pure" touch data to determine surfaces | |
3130 (filter pure-touch?) | |
3131 ;; associate coordinates with touch values | |
3132 (map (partial apply zipmap)) | |
3133 ;; select those regions where contact is being made | |
3134 (map (partial group-by second)) | |
3135 (map #(get % full-contact)) | |
3136 (map (partial map first)) | |
3137 ;; remove redundant/subset regions | |
3138 (map set) | |
3139 remove-similar))) | |
3140 | |
3141 (defn learn-and-view-touch-regions [] | |
3142 (map view-touch-region | |
3143 (learn-touch-regions))) | |
3144 #+end_src | |
3145 #+end_listing | |
3146 | |
3147 The only thing remaining to define is the particular motion the worm | |
3148 must take. I accomplish this with a simple motor control program. | |
3149 | |
3150 #+caption: Motor control program for making the worm roll on the ground. | |
3151 #+caption: This could also be replaced with random motion. | |
3152 #+name: worm-roll | |
3153 #+begin_listing clojure | |
3154 #+begin_src clojure | |
3155 (defn touch-kinesthetics [] | |
3156 [[170 :lift-1 40] | |
3157 [190 :lift-1 19] | |
3158 [206 :lift-1 0] | |
3159 | |
3160 [400 :lift-2 40] | |
3161 [410 :lift-2 0] | |
3162 | |
3163 [570 :lift-2 40] | |
3164 [590 :lift-2 21] | |
3165 [606 :lift-2 0] | |
3166 | |
3167 [800 :lift-1 30] | |
3168 [809 :lift-1 0] | |
3169 | |
3170 [900 :roll-2 40] | |
3171 [905 :roll-2 20] | |
3172 [910 :roll-2 0] | |
3173 | |
3174 [1000 :roll-2 40] | |
3175 [1005 :roll-2 20] | |
3176 [1010 :roll-2 0] | |
3177 | |
3178 [1100 :roll-2 40] | |
3179 [1105 :roll-2 20] | |
3180 [1110 :roll-2 0] | |
3181 ]) | |
3182 #+end_src | |
3183 #+end_listing | |
3184 | |
3185 | |
3186 #+caption: The small worm rolls around on the floor, driven | |
3187 #+caption: by the motor control program in listing \ref{worm-roll}. | |
3188 #+name: worm-roll-view | |
3189 #+ATTR_LaTeX: :width 12cm | |
3190 [[./images/worm-roll.png]] | |
3191 | |
3192 | |
3193 #+caption: After completing its adventures, the worm now knows | |
3194 #+caption: how its touch sensors are arranged along its skin. These | |
3195 #+caption: are the regions that were deemed important by | |
3196 #+caption: =learn-touch-regions=. Note that the worm has discovered | |
3197 #+caption: that it has six sides. | |
3198 #+name: worm-touch-map | |
3199 #+ATTR_LaTeX: :width 12cm | |
3200 [[./images/touch-learn.png]] | |
3201 | |
3202 While simple, =learn-touch-regions= exploits regularities in both | |
3203 the worm's physiology and the worm's environment to correctly | |
3204 deduce that the worm has six sides. Note that =learn-touch-regions= | |
3205 would work just as well even if the worm's touch sense data were | |
3206 completely scrambled. The cross shape is just for convenience. This | |
3207 example justifies the use of pre-defined touch regions in =EMPATH=. | |
3208 | |
3209 * Contributions | |
3210 | |
3211 In this thesis you have seen the =CORTEX= system, a complete | |
3212 environment for creating simulated creatures. You have seen how to | |
3213 implement five senses including touch, proprioception, hearing, | |
3214 vision, and muscle tension. You have seen how to create new creatures | |
3215 using blender, a 3D modeling tool. I hope that =CORTEX= will be | |
3216 useful in further research projects. To this end I have included the | |
3217 full source to =CORTEX= along with a large suite of tests and | |
3218 examples. I have also created a user guide for =CORTEX= which is | |
3219 included in an appendix to this thesis. | |
3220 | |
3221 You have also seen how I used =CORTEX= as a platform to attack the | |
3222 /action recognition/ problem, which is the problem of recognizing | |
3223 actions in video. You saw a simple system called =EMPATH= which | |
3224 identifies actions by first describing actions in a body-centered, | |
3225 rich sense language, then inferring a full range of sensory | |
3226 experience from limited data using previous experience gained from | |
3227 free play. | |
3228 | |
3229 As a minor digression, you also saw how I used =CORTEX= to enable a | |
3230 tiny worm to discover the topology of its skin simply by rolling on | |
3231 the ground. | |
3232 | |
3233 In conclusion, the main contributions of this thesis are: | |
3234 | |
3235 - =CORTEX=, a system for creating simulated creatures with rich | |
3236 senses. | |
3237 - =EMPATH=, a program for recognizing actions by imagining sensory | |
3238 experience. | |
3239 | |
3240 # An anatomical joke: | |
3241 # - Training | |
3242 # - Skeletal imitation | |
3243 # - Sensory fleshing-out | |
3244 # - Classification | |
3245 #+BEGIN_LaTeX | |
3246 \appendix | |
3247 #+END_LaTeX | |
3248 * Appendix: =CORTEX= User Guide | |
3249 | |
3250 Those who write a thesis should endeavor to make their code not only | |
3251 accessible, but actually usable, as a way to pay back the community | |
3252 that made the thesis possible in the first place. This thesis would | |
3253 not be possible without Free Software such as jMonkeyEngine3, | |
3254 Blender, clojure, emacs, ffmpeg, and many other tools. That is why I | |
3255 have included this user guide, in the hope that someone else might | |
3256 find =CORTEX= useful. | |
3257 | |
3258 ** Obtaining =CORTEX= | |
3259 | |
3260 You can get =CORTEX= from its Mercurial repository at | |
3261 http://hg.bortreb.com/cortex. You may also download =CORTEX= | |
3262 releases at http://aurellem.org/cortex/releases/. As a condition of | |
3263 making this thesis, I have also provided Professor Winston the | |
3264 =CORTEX= source, and he knows how to run the demos and get started. | |
3265 You may also email me at =cortex@aurellem.org= and I may help where | |
3266 I can. | |
3267 | |
3268 ** Running =CORTEX= | |
3269 | |
3270 =CORTEX= comes with README and INSTALL files that will guide you | |
3271 through installation and running the test suite. In particular you | |
3272 should look at =cortex.test=, which contains test suites that | |
3273 run through all senses and multiple creatures. | |
3274 | |
3275 ** Creating creatures | |
3276 | |
3277 Creatures are created using /Blender/, a free 3D modeling program. | |
3278 You will need Blender version 2.6 when using the =CORTEX= included | |
3279 in this thesis. You create a =CORTEX= creature in a similar manner | |
3280 to modeling anything in Blender, except that you also create | |
3281 several trees of empty nodes which define the creature's senses. | |
3282 | |
3283 *** Mass | |
3284 | |
3285 To give an object mass in =CORTEX=, add a ``mass'' metadata label | |
3286 to the object with the mass in jMonkeyEngine units. Note that | |
3287 setting the mass to 0 causes the object to be immovable. | |
3288 | |
3289 *** Joints | |
3290 | |
3291 Joints are created by creating an empty node named =joints= and | |
3292 then creating any number of empty child nodes to represent your | |
3293 creature's joints. The joint will automatically connect the | |
3294 closest two physical objects. It will help to set the empty node's | |
3295 display mode to ``Arrows'' so that you can clearly see the | |
3296 direction of the axes. | |
3297 | |
3298 Joint nodes should have the following metadata under the ``joint'' | |
3299 label: | |
3300 | |
3301 #+BEGIN_SRC clojure | |
3302 ;; ONE OF the following, under the label "joint": | |
3303 {:type :point} | |
3304 | |
3305 ;; OR | |
3306 | |
3307 {:type :hinge | |
3308 :limit [<limit-low> <limit-high>] | |
3309 :axis (Vector3f. <x> <y> <z>)} | |
3310 ;;(:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints) | |
3311 | |
3312 ;; OR | |
3313 | |
3314 {:type :cone | |
3315 :limit-xz <lim-xz> | |
3316 :limit-xy <lim-xy> | |
3317 :twist <lim-twist>} ;(use XZY rotation mode in blender!) | |
3318 #+END_SRC | |
3319 | |
3320 *** Eyes | |
3321 | |
3322 Eyes are created by creating an empty node named =eyes= and then | |
3323 creating any number of empty child nodes to represent your | |
3324 creature's eyes. | |
3325 | |
3326 Eye nodes should have the following metadata under the ``eye'' | |
3327 label: | |
3328 | |
3329 #+BEGIN_SRC clojure | |
3330 {:red <red-retina-definition> | |
3331 :blue <blue-retina-definition> | |
3332 :green <green-retina-definition> | |
3333 :all <all-retina-definition> | |
3334 (<0xrrggbb> <custom-retina-image>)... | |
3335 } | |
3336 #+END_SRC | |
3337 | |
3338 Any of the color channels may be omitted. You may also include | |
3339 your own color selectors, and in fact :red is equivalent to | |
3340 0xFF0000 and so forth. The eye will be placed at the same position | |
3341 as the empty node and will bind to the nearest physical object. | |
3342 The eye will point outward from the X-axis of the node, and ``up'' | |
3343 will be in the direction of the X-axis of the node. It will help | |
3344 to set the empty node's display mode to ``Arrows'' so that you can | |
3345 clearly see the direction of the axes. | |
3346 | |
3347 Each retina file should contain white pixels wherever you want to be | |
3348 sensitive to your chosen color. If you want the entire field of | |
3349 view, specify :all of 0xFFFFFF and a retinal map that is entirely | |
3350 white. | |
3351 | |
3352 Here is a sample retinal map: | |
3353 | |
3354 #+caption: An example retinal profile image. White pixels are | |
3355 #+caption: photo-sensitive elements. The distribution of white | |
3356 #+caption: pixels is denser in the middle and falls off at the | |
3357 #+caption: edges and is inspired by the human retina. | |
3358 #+name: retina | |
3359 #+ATTR_LaTeX: :width 7cm :placement [H] | |
3360 [[./images/retina-small.png]] | |
3361 | |
3362 *** Hearing | |
3363 | |
3364 Ears are created by creating an empty node named =ears= and then | |
3365 creating any number of empty child nodes to represent your | |
3366 creature's ears. | |
3367 | |
3368 Ear nodes do not require any metadata. | |
3369 | |
3370 The ear will bind to and follow the closest physical node. | |
3371 | |
3372 *** Touch | |
3373 | |
3374 Touch is handled similarly to mass. To make a particular object | |
3375 touch sensitive, add metadata of the following form under the | |
3376 object's ``touch'' metadata field: | |
3377 | |
3378 #+BEGIN_EXAMPLE | |
3379 <touch-UV-map-file-name> | |
3380 #+END_EXAMPLE | |
3381 | |
3382 You may also include an optional ``scale'' metadata number to | |
3383 specify the length of the touch feelers. The default is $0.1$, | |
3384 and this is generally sufficient. | |
3385 | |
3386 The touch UV should contain white pixels for each touch sensor. | |
3387 | |
3388 Here is an example touch-uv map that approximates a human finger, | |
3389 and its corresponding model. | |
3390 | |
3391 #+caption: This is the tactile-sensor-profile for the upper segment | |
3392 #+caption: of a fingertip. It defines regions of high touch sensitivity | |
3393 #+caption: (where there are many white pixels) and regions of low | |
3394 #+caption: sensitivity (where white pixels are sparse). | |
3395 #+name: guide-fingertip-UV | |
3396 #+ATTR_LaTeX: :width 9cm :placement [H] | |
3397 [[./images/finger-UV.png]] | |
3398 | |
3399 #+caption: The fingertip UV-image from above applied to a simple | |
3400 #+caption: model of a fingertip. | |
3401 #+name: guide-fingertip | |
3402 #+ATTR_LaTeX: :width 9cm :placement [H] | |
3403 [[./images/finger-2.png]] | |
3404 | |
3405 *** Proprioception | |
3406 | |
3407 Proprioception is tied to each joint node -- nothing special must | |
3408 be done in a blender model to enable proprioception other than | |
3409 creating joint nodes. | |
3410 | |
3411 *** Muscles | |
3412 | |
3413 Muscles are created by creating an empty node named =muscles= and | |
3414 then creating any number of empty child nodes to represent your | |
3415 creature's muscles. | |
3416 | |
3417 | |
3418 Muscle nodes should have the following metadata under the | |
3419 ``muscle'' label: | |
3420 | |
3421 #+BEGIN_EXAMPLE | |
3422 <muscle-profile-file-name> | |
3423 #+END_EXAMPLE | |
3424 | |
3425 Muscles should also have a ``strength'' metadata entry describing | |
3426 the muscle's total strength at full activation. | |
3427 | |
3428 Muscle profiles are simple images that contain the relative amount | |
3429 of muscle power in each simulated alpha motor neuron. The width of | |
3430 the image is the total size of the motor pool, and the redness of | |
3431 each pixel is the relative power of the corresponding motor neuron. | |
3432 | |
3433 While the profile image can have any dimensions, only the first | |
3434 line of pixels is used to define the muscle. Here is a sample | |
3435 muscle profile image that defines a human-like muscle. | |
3436 | |
3437 #+caption: A muscle profile image that describes the strengths | |
3438 #+caption: of each motor neuron in a muscle. White is weakest | |
3439 #+caption: and dark red is strongest. This particular pattern | |
3440 #+caption: has weaker motor neurons at the beginning, just | |
3441 #+caption: like human muscle. | |
3442 #+name: muscle-recruit | |
3443 #+ATTR_LaTeX: :width 7cm :placement [H] | |
3444 [[./images/basic-muscle.png]] | |
3445 | |
3446 Muscles twist the nearest physical object about the muscle node's | |
3447 Z-axis. I recommend using the ``Single Arrow'' display mode for | |
3448 muscles and using the right hand rule to determine which way the | |
3449 muscle will twist. To make a segment that can twist in multiple | |
3450 directions, create multiple, differently aligned muscles. | |
3451 | |
3452 ** =CORTEX= API | |
3453 | |
3454 These are some of the functions exposed by =CORTEX= for creating | |
3455 worlds and simulating creatures. These are in addition to | |
3456 jMonkeyEngine3's extensive library, which is documented elsewhere. | |
3457 | |
3458 *** Simulation | |
3459 - =(world root-node key-map setup-fn update-fn)= :: create | |
3460 a simulation. | |
3461 - /root-node/ :: a =com.jme3.scene.Node= object which | |
3462 contains all of the objects that should be in the | |
3463 simulation. | |
3464 | |
3465 - /key-map/ :: a map from strings describing keys to | |
3466 functions that should be executed whenever that key is | |
3467 pressed. The functions should take a SimpleApplication | |
3468 object and a boolean value. The SimpleApplication is the | |
3469 current simulation that is running, and the boolean is true | |
3470 if the key is being pressed, and false if it is being | |
3471 released. As an example, | |
3472 #+BEGIN_SRC clojure | |
3473 {"key-j" (fn [game value] (if value (println "key j pressed")))} | |
3474 #+END_SRC | |
3475 is a valid key-map which will cause the simulation to print | |
3476 a message whenever the 'j' key on the keyboard is pressed. | |
3477 | |
3478 - /setup-fn/ :: a function that takes a =SimpleApplication= | |
3479 object. It is called once when initializing the simulation. | |
3480 Use it to create things like lights, change the gravity, | |
3481 initialize debug nodes, etc. | |
3482 | |
3483 - /update-fn/ :: this function takes a =SimpleApplication= | |
3484 object and a float and is called every frame of the | |
3485 simulation. The float tells how many seconds it has been | |
3486 since the last frame was rendered, according to whatever | |
3487 clock jme is currently using. The default is to use IsoTimer | |
3488 which will result in this value always being the same. | |
3489 | |
3490 - =(position-camera world position rotation)= :: set the position | |
3491 of the simulation's main camera. | |
3492 | |
3493 - =(enable-debug world)= :: turn on debug wireframes for each | |
3494 simulated object. | |
3495 | |
3496 - =(set-gravity world gravity)= :: set the gravity of a running | |
3497 simulation. | |
3498 | |
3499 - =(box length width height & {options})= :: create a box in the | |
3500 simulation. Options is a hash map specifying texture, mass, | |
3501 etc. Possible options are =:name=, =:color=, =:mass=, | |
3502 =:friction=, =:texture=, =:material=, =:position=, | |
3503 =:rotation=, =:shape=, and =:physical?=. | |
3504 | |
3505 - =(sphere radius & {options})= :: create a sphere in the simulation. | |
3506 Options are the same as in =box=. | |
3507 | |
3508 - =(load-blender-model file-name)= :: create a node structure | |
3509 representing that described in a blender file. | |
3510 | |
3511 - =(light-up-everything world)= :: distribute a standard complement | |
3512 of lights throughout the simulation. Should be adequate for most | |
3513 purposes. | |
3514 | |
3515 - =(node-seq node)= :: return a recursive list of the node's | |
3516 children. | |
3517 | |
3518 - =(nodify name children)= :: construct a node given a node-name and | |
3519 desired children. | |
3520 | |
3521 - =(add-element world element)= :: add an object to a running world | |
3522 simulation. | |
3523 | |
3524 - =(set-accuracy world accuracy)= :: change the accuracy of the | |
3525 world's physics simulator. | |
3526 | |
3527 - =(asset-manager)= :: get an /AssetManager/, a jMonkeyEngine | |
3528 construct that is useful for loading textures and is required | |
3529 for smooth interaction with jMonkeyEngine library functions. | |
3530 | |
3531 - =(load-bullet)= :: unpack native libraries and initialize | |
3532 Bullet. This function is required before other world building | |
3533 functions are called (see the sketch following this list). | |
3534 | |
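The following is a minimal sketch of how these pieces fit together,
written against the descriptions above rather than taken from the
=CORTEX= source; treat the exact option values as assumptions:

#+BEGIN_SRC clojure
;; Minimal sketch: one box in an empty world, with debug wireframes
;; and a single key binding. (mega-import-jme3) or explicit imports
;; are assumed to provide Vector3f.
(load-bullet)
(.start
 (world
  (nodify "root" [(box 1 1 1 :mass 1)])              ; root-node
  {"key-j" (fn [game value]                          ; key-map
             (if value (println "key j pressed")))}
  (fn [app]                                          ; setup-fn
    (enable-debug app)
    (set-gravity app (Vector3f. 0 -9.81 0)))
  (fn [app tpf] nil)))                               ; update-fn
#+END_SRC
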
3535 *** Creature Manipulation / Import | |
3536 | |
3537 - =(body! creature)= :: give the creature a physical body. | |
3538 | |
3539 - =(vision! creature)= :: give the creature a sense of vision. | |
3540 Returns a list of functions which will each, when called | |
3541 during a simulation, return the vision data for the channel of | |
3542 one of the eyes. The functions are ordered depending on the | |
3543 alphabetical order of the names of the eye nodes in the | |
3544 blender file. The data returned by the functions is a vector | |
3545 containing the eye's /topology/, a vector of coordinates, and | |
3546 the eye's /data/, a vector of RGB values filtered by the eye's | |
3547 sensitivity. | |
3548 | |
3549 - =(hearing! creature)= :: give the creature a sense of hearing. | |
3550 Returns a list of functions, one for each ear, that when | |
3551 called will return a frame's worth of hearing data for that | |
3552 ear. The functions are ordered depending on the alphabetical | |
3553 order of the names of the ear nodes in the blender file. The | |
3554 data returned by the functions is an array of PCM-encoded wav | |
3555 data. | |
3556 | |
3557 - =(touch! creature)= :: give the creature a sense of touch. Returns | |
3558 a single function that must be called with the /root node/ of | |
3559 the world, and which will return a vector of /touch-data/, | |
3560 one entry for each touch-sensitive component, each entry of | |
3561 which contains a /topology/ that specifies the distribution of | |
3562 touch sensors, and the /data/, which is a vector of | |
3563 =[activation, length]= pairs for each touch hair. | |
3564 | |
3565 - =(proprioception! creature)= :: give the creature the sense of | |
3566 proprioception. Returns a list of functions, one for each | |
3567 joint, that when called during a running simulation will | |
3568 report the =[heading, pitch, roll]= of the joint. | |
3569 | |
3570 - =(movement! creature)= :: give the creature the power of movement. | |
3571 Creates a list of functions, one for each muscle, that when | |
3572 called with an integer, will set the recruitment of that | |
3573 muscle to that integer, and will report the current power | |
3574 being exerted by the muscle. Order of muscles is determined by | |
3575 the alphabetical sort order of the names of the muscle nodes. A short usage sketch for these functions follows this list. | |
3576 | |
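A short usage sketch for these functions (the model path is the worm
used earlier in this thesis; the bindings are illustrative):

#+BEGIN_SRC clojure
;; Load a blender model and outfit it with senses.
(def creature (load-blender-model "Models/worm/worm.blend"))
(body! creature)                             ; give it a physical body
(def touch-fn   (touch! creature))           ; one fn; call with root node
(def prop-fns   (proprioception! creature))  ; one fn per joint
(def muscle-fns (movement! creature))        ; one fn per muscle
#+END_SRC
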
3577 *** Visualization/Debug | |
3578 | |
3579 - =(view-vision)= :: create a function that when called with a list | |
3580 of visual data returned from the functions made by =vision!=, | |
3581 will display that visual data on the screen. | |
3582 | |
3583 - =(view-hearing)= :: same as =view-vision= but for hearing. | |
3584 | |
3585 - =(view-touch)= :: same as =view-vision= but for touch. | |
3586 | |
3587 - =(view-proprioception)= :: same as =view-vision= but for | |
3588 proprioception. | |
3589 | |
3590 - =(view-movement)= :: same as =view-vision= but for | |
3591 muscle activation. | |
3592 | |
3593 - =(view anything)= :: =view= is a polymorphic function that allows | |
3594 you to inspect almost anything you could reasonably expect to | |
3595 be able to ``see'' in =CORTEX=. | |
3596 | |
3597 - =(text anything)= :: =text= is a polymorphic function that allows | |
3598 you to convert practically anything into a text string. | |
3599 | |
3600 - =(println-repl anything)= :: print messages to clojure's repl | |
3601 instead of the simulation's terminal window. | |
3602 | |
3603 - =(mega-import-jme3)= :: for experimenting at the REPL. This | |
3604 function will import all jMonkeyEngine3 classes for immediate | |
3605 use. | |
3606 | |
3607 - =(display-dilated-time world timer)= :: Shows the time as it is | |
3608 flowing in the simulation on a HUD display. | |
3609 | |
3610 | |
3611 |