annotate thesis/cortex.org @ 474:57c7d5aec8d5

mix in touch; need to clean it up.
author Robert McIntyre <rlm@mit.edu>
date Fri, 28 Mar 2014 21:05:12 -0400
parents 486ce07f5545
children 3ec428e096e5
rev   line source
rlm@425 1 #+title: =CORTEX=
rlm@425 2 #+author: Robert McIntyre
rlm@425 3 #+email: rlm@mit.edu
rlm@425 4 #+description: Using embodied AI to facilitate Artificial Imagination.
rlm@425 5 #+keywords: AI, clojure, embodiment
rlm@451 6 #+LaTeX_CLASS_OPTIONS: [nofloat]
rlm@422 7
rlm@465 8 * COMMENT templates
rlm@470 9 #+caption:
rlm@470 10 #+caption:
rlm@470 11 #+caption:
rlm@470 12 #+caption:
rlm@470 13 #+name: name
rlm@470 14 #+begin_listing clojure
rlm@470 15 #+end_listing
rlm@465 16
rlm@470 17 #+caption:
rlm@470 18 #+caption:
rlm@470 19 #+caption:
rlm@470 20 #+name: name
rlm@470 21 #+ATTR_LaTeX: :width 10cm
rlm@470 22 [[./images/aurellem-gray.png]]
rlm@470 23
rlm@470 24 #+caption:
rlm@470 25 #+caption:
rlm@470 26 #+caption:
rlm@470 27 #+caption:
rlm@470 28 #+name: name
rlm@470 29 #+begin_listing clojure
rlm@470 30 #+end_listing
rlm@470 31
rlm@470 32 #+caption:
rlm@470 33 #+caption:
rlm@470 34 #+caption:
rlm@470 35 #+name: name
rlm@470 36 #+ATTR_LaTeX: :width 10cm
rlm@470 37 [[./images/aurellem-gray.png]]
rlm@470 38
rlm@465 39
rlm@465 40 * COMMENT Empathy and Embodiment as problem solving strategies
rlm@437 41
rlm@437 42 By the end of this thesis, you will have seen a novel approach to
rlm@437 43 interpreting video using embodiment and empathy. You will have also
rlm@437 44 seen one way to efficiently implement empathy for embodied
rlm@447 45 creatures. Finally, you will become familiar with =CORTEX=, a system
rlm@447 46 for designing and simulating creatures with rich senses, which you
rlm@447 47 may choose to use in your own research.
rlm@437 48
rlm@441 49 This is the core vision of my thesis: That one of the important ways
rlm@441 50 in which we understand others is by imagining ourselves in their
rlm@441 51 position and empathically feeling experiences relative to our own
rlm@441 52 bodies. By understanding events in terms of our own previous
rlm@441 53 corporeal experience, we greatly constrain the possibilities of what
rlm@441 54 would otherwise be an unwieldy exponential search. This extra
rlm@441 55 constraint can be the difference between easily understanding what
rlm@441 56 is happening in a video and being completely lost in a sea of
rlm@441 57 incomprehensible color and movement.
rlm@435 58
rlm@436 59 ** Recognizing actions in video is extremely difficult
rlm@437 60
rlm@447 61 Consider for example the problem of determining what is happening
rlm@447 62 in a video of which this is one frame:
rlm@437 63
rlm@441 64 #+caption: A cat drinking some water. Identifying this action is
rlm@441 65 #+caption: beyond the state of the art for computers.
rlm@441 66 #+ATTR_LaTeX: :width 7cm
rlm@441 67 [[./images/cat-drinking.jpg]]
rlm@441 68
rlm@441 69 It is currently impossible for any computer program to reliably
rlm@447 70 label such a video as ``drinking''. And rightly so -- it is a very
rlm@441 71 hard problem! What features can you describe in terms of low level
rlm@441 72 functions of pixels that can even begin to describe at a high level
rlm@441 73 what is happening here?
rlm@437 74
rlm@447 75 Or suppose that you are building a program that recognizes chairs.
rlm@448 76 How could you ``see'' the chair in figure \ref{hidden-chair}?
rlm@441 77
rlm@441 78 #+caption: The chair in this image is quite obvious to humans, but I
rlm@448 79 #+caption: doubt that any modern computer vision program can find it.
rlm@441 80 #+name: hidden-chair
rlm@441 81 #+ATTR_LaTeX: :width 10cm
rlm@441 82 [[./images/fat-person-sitting-at-desk.jpg]]
rlm@441 83
rlm@441 84 Finally, how is it that you can easily tell the difference between
rlm@441 85 how the girl's /muscles/ are working in figure \ref{girl}?
rlm@441 86
rlm@441 87 #+caption: The mysterious ``common sense'' appears here as you are able
rlm@441 88 #+caption: to discern the difference in how the girl's arm muscles
rlm@441 89 #+caption: are activated between the two images.
rlm@441 90 #+name: girl
rlm@448 91 #+ATTR_LaTeX: :width 7cm
rlm@441 92 [[./images/wall-push.png]]
rlm@437 93
rlm@441 94 Each of these examples tells us something about what might be going
rlm@441 95 on in our minds as we easily solve these recognition problems.
rlm@441 96
rlm@441 97 The hidden chairs show us that we are strongly triggered by cues
rlm@447 98 relating to the position of human bodies, and that we can determine
rlm@447 99 the overall physical configuration of a human body even if much of
rlm@447 100 that body is occluded.
rlm@437 101
rlm@441 102 The picture of the girl pushing against the wall tells us that we
rlm@441 103 have common sense knowledge about the kinetics of our own bodies.
rlm@441 104 We know well how our muscles would have to work to maintain us in
rlm@441 105 most positions, and we can easily project this self-knowledge to
rlm@441 106 imagined positions triggered by images of the human body.
rlm@441 107
rlm@441 108 ** =EMPATH= neatly solves recognition problems
rlm@441 109
rlm@441 110 I propose a system that can express the types of recognition
rlm@441 111 problems above in a form amenable to computation. It is split into
rlm@441 112 four parts:
rlm@441 113
rlm@448 114 - Free/Guided Play :: The creature moves around and experiences the
rlm@448 115 world through its unique perspective. Many otherwise
rlm@448 116 complicated actions are easily described in the language of a
rlm@448 117 full suite of body-centered, rich senses. For example,
rlm@448 118 drinking is the feeling of water sliding down your throat, and
rlm@448 119 cooling your insides. It's often accompanied by bringing your
rlm@448 120 hand close to your face, or bringing your face close to water.
rlm@448 121 Sitting down is the feeling of bending your knees, activating
rlm@448 122 your quadriceps, then feeling a surface with your bottom and
rlm@448 123 relaxing your legs. These body-centered action descriptions
rlm@448 124 can be either learned or hard coded.
rlm@448 125 - Posture Imitation :: When trying to interpret a video or image,
rlm@448 126 the creature takes a model of itself and aligns it with
rlm@448 127 whatever it sees. This alignment can even cross species, as
rlm@448 128 when humans try to align themselves with things like ponies,
rlm@448 129 dogs, or other humans with a different body type.
rlm@448 130 - Empathy :: The alignment triggers associations with
rlm@448 131 sensory data from prior experiences. For example, the
rlm@448 132 alignment itself easily maps to proprioceptive data. Any
rlm@448 133 sounds or obvious skin contact in the video can to a lesser
rlm@448 134 extent trigger previous experience. Segments of previous
rlm@448 135 experiences are stitched together to form a coherent and
rlm@448 136 complete sensory portrait of the scene.
rlm@448 137 - Recognition :: With the scene described in terms of first
rlm@448 138 person sensory events, the creature can now run its
rlm@447 139 action-identification programs on this synthesized sensory
rlm@447 140 data, just as it would if it were actually experiencing the
rlm@447 141 scene first-hand. If previous experience has been accurately
rlm@447 142 retrieved, and if it is analogous enough to the scene, then
rlm@447 143 the creature will correctly identify the action in the scene.
rlm@447 144
rlm@441 145 For example, I think humans are able to label the cat video as
rlm@447 146 ``drinking'' because they imagine /themselves/ as the cat, and
rlm@441 147 imagine putting their face up against a stream of water and
rlm@441 148 sticking out their tongue. In that imagined world, they can feel
rlm@441 149 the cool water hitting their tongue, and feel the water entering
rlm@447 150 their body, and are able to recognize that /feeling/ as drinking.
rlm@447 151 So, the label of the action is not really in the pixels of the
rlm@447 152 image, but is found clearly in a simulation inspired by those
rlm@447 153 pixels. An imaginative system, having been trained on drinking and
rlm@447 154 non-drinking examples and learning that the most important
rlm@447 155 component of drinking is the feeling of water sliding down one's
rlm@447 156 throat, would analyze a video of a cat drinking in the following
rlm@447 157 manner:
rlm@441 158
rlm@447 159 1. Create a physical model of the video by putting a ``fuzzy''
rlm@447 160 model of its own body in place of the cat. Possibly also create
rlm@447 161 a simulation of the stream of water.
rlm@441 162
rlm@441 163 2. Play out this simulated scene and generate imagined sensory
rlm@441 164 experience. This will include relevant muscle contractions, a
rlm@441 165 close up view of the stream from the cat's perspective, and most
rlm@441 166 importantly, the imagined feeling of water entering the
rlm@443 167 mouth. The imagined sensory experience can come from a
rlm@441 168 simulation of the event, but can also be pattern-matched from
rlm@441 169 previous, similar embodied experience.
rlm@441 170
rlm@441 171 3. The action is now easily identified as drinking by the sense of
rlm@441 172 taste alone. The other senses (such as the tongue moving in and
rlm@441 173 out) help to give plausibility to the simulated action. Note that
rlm@441 174 the sense of vision, while critical in creating the simulation,
rlm@441 175 is not critical for identifying the action from the simulation.
rlm@441 176
rlm@441 177 For the chair examples, the process is even easier:
rlm@441 178
rlm@441 179 1. Align a model of your body to the person in the image.
rlm@441 180
rlm@441 181 2. Generate proprioceptive sensory data from this alignment.
rlm@437 182
rlm@441 183 3. Use the imagined proprioceptive data as a key to lookup related
rlm@441 184 sensory experience associated with that particular proprioceptive
rlm@441 185 feeling.
rlm@437 186
rlm@443 187 4. Retrieve the feeling of your bottom resting on a surface, your
rlm@443 188 knees bent, and your leg muscles relaxed.
rlm@437 189
rlm@441 190 5. This sensory information is consistent with the =sitting?=
rlm@441 191 sensory predicate, so you (and the entity in the image) must be
rlm@441 192 sitting.
rlm@440 193
rlm@441 194 6. There must be a chair-like object since you are sitting.
rlm@440 195
rlm@441 196 Empathy offers yet another alternative to the age-old AI
rlm@441 197 representation question: ``What is a chair?'' --- A chair is the
rlm@441 198 feeling of sitting.
rlm@441 199
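To make the =sitting?= predicate from step 5 concrete, here is a
minimal sketch of what such a body-centered predicate over imagined
sensory data might look like. It is illustrative only: the experience
map, its keys, and the thresholds are assumptions for this example,
not =EMPATH='s actual representation.

#+caption: A hedged, hypothetical sketch of a body-centered
#+caption: =sitting?= predicate. The data format is assumed for
#+caption: illustration and is not =EMPATH='s actual representation.
#+name: sitting-sketch
#+begin_listing clojure
#+begin_src clojure
(defn sitting?
  "True if the imagined experience includes bent knees, relaxed leg
   muscles, and pressure felt on the bottom. (Illustrative sketch;
   keys and thresholds are assumptions.)"
  [experience]
  (let [{:keys [joint-angles muscle-activation touch]} experience]
    (and (> (:knee joint-angles) (/ Math/PI 4))    ;; knees bent
         (< (:quadriceps muscle-activation) 0.2)   ;; legs relaxed
         (> (:bottom touch) 0.5))))                ;; surface felt
#+end_src
#+end_listing
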
rlm@441 200 My program, =EMPATH=, uses this empathic problem solving technique
rlm@441 201 to interpret the actions of a simple, worm-like creature.
rlm@437 202
rlm@441 203 #+caption: The worm performs many actions during free play such as
rlm@441 204 #+caption: curling, wiggling, and resting.
rlm@441 205 #+name: worm-intro
rlm@446 206 #+ATTR_LaTeX: :width 15cm
rlm@445 207 [[./images/worm-intro-white.png]]
rlm@437 208
rlm@462 209 #+caption: =EMPATH= recognized and classified each of these
rlm@462 210 #+caption: poses by inferring the complete sensory experience
rlm@462 211 #+caption: from proprioceptive data.
rlm@441 212 #+name: worm-recognition-intro
rlm@446 213 #+ATTR_LaTeX: :width 15cm
rlm@445 214 [[./images/worm-poses.png]]
rlm@441 215
rlm@441 216 One powerful advantage of empathic problem solving is that it
rlm@441 217 factors the action recognition problem into two easier problems. To
rlm@441 218 use empathy, you need an /aligner/, which takes the video and a
rlm@441 219 model of your body, and aligns the model with the video. Then, you
rlm@441 220 need a /recognizer/, which uses the aligned model to interpret the
rlm@441 221 action. The power in this method lies in the fact that you describe
rlm@448 222 all actions from a body-centered viewpoint. You are less tied to
rlm@447 223 the particulars of any visual representation of the actions. If you
rlm@441 224 teach the system what ``running'' is, and you have a good enough
rlm@441 225 aligner, the system will from then on be able to recognize running
rlm@441 226 from any point of view, even strange points of view like above or
rlm@441 227 underneath the runner. This is in contrast to action recognition
rlm@448 228 schemes that try to identify actions using a non-embodied approach.
rlm@448 229 If these systems learn about running as viewed from the side, they
rlm@448 230 will not automatically be able to recognize running from any other
rlm@448 231 viewpoint.
rlm@441 232
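As a purely illustrative sketch (not =EMPATH='s actual code), the
aligner/recognizer factoring can be expressed as a higher-order
function that composes an aligner, an experience-inferrer, and a set
of named body-centered action predicates; every argument name here is
a hypothetical placeholder.

#+caption: Hypothetical sketch of composing an aligner with
#+caption: body-centered action predicates to form a recognizer.
#+name: aligner-recognizer-sketch
#+begin_listing clojure
#+begin_src clojure
(defn empathic-recognizer
  "Return a function from a body model and a video to the names of
   all recognized actions. 'align and 'infer-experience are assumed
   to exist; they stand for the aligner and the empathy step."
  [align infer-experience action-predicates]
  (fn [body-model video]
    (let [experience (infer-experience (align body-model video))]
      (for [[action-name action?] action-predicates
            :when (action? experience)]
        action-name))))
#+end_src
#+end_listing
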
rlm@441 233 Another powerful advantage is that using the language of multiple
rlm@441 234 body-centered rich senses to describe body-centered actions offers a
rlm@441 235 massive boost in descriptive capability. Consider how difficult it
rlm@441 236 would be to compose a set of HOG filters to describe the action of
rlm@447 237 a simple worm-creature ``curling'' so that its head touches its
rlm@447 238 tail, and then behold the simplicity of describing this action in a
rlm@441 239 language designed for the task (listing \ref{grand-circle-intro}):
rlm@441 240
rlm@446 241 #+caption: Body-centered actions are best expressed in a body-centered
rlm@446 242 #+caption: language. This code detects when the worm has curled into a
rlm@446 243 #+caption: full circle. Imagine how you would replicate this functionality
rlm@446 244 #+caption: using low-level pixel features such as HOG filters!
rlm@446 245 #+name: grand-circle-intro
rlm@452 246 #+attr_latex: [htpb]
rlm@452 247 #+begin_listing clojure
rlm@446 248 #+begin_src clojure
rlm@446 249 (defn grand-circle?
rlm@446 250 "Does the worm form a majestic circle (one end touching the other)?"
rlm@446 251 [experiences]
rlm@446 252 (and (curled? experiences)
rlm@446 253 (let [worm-touch (:touch (peek experiences))
rlm@446 254 tail-touch (worm-touch 0)
rlm@446 255 head-touch (worm-touch 4)]
rlm@462 256 (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
rlm@462 257 (< 0.2 (contact worm-segment-top-tip head-touch))))))
rlm@446 258 #+end_src
rlm@446 259 #+end_listing
rlm@446 260
rlm@435 261
rlm@449 262 ** =CORTEX= is a toolkit for building sensate creatures
rlm@435 263
rlm@448 264 I built =CORTEX= to be a general AI research platform for doing
rlm@448 265 experiments involving multiple rich senses and a wide variety and
rlm@448 266 number of creatures. I intend it to be useful as a library for many
rlm@462 267 more projects than just this thesis. =CORTEX= addresses a need among
rlm@462 268 AI researchers at CSAIL and beyond: people often invent neat ideas
rlm@462 269 that are best expressed in the
rlm@448 270 language of creatures and senses, but in order to explore those
rlm@448 271 ideas they must first build a platform in which they can create
rlm@448 272 simulated creatures with rich senses! There are many ideas that
rlm@448 273 would be simple to execute (such as =EMPATH=), but attached to them
rlm@448 274 is the multi-month effort to make a good creature simulator. Often,
rlm@448 275 that initial investment of time proves to be too much, and the
rlm@448 276 project must make do with a lesser environment.
rlm@435 277
rlm@448 278 =CORTEX= is well suited as an environment for embodied AI research
rlm@448 279 for three reasons:
rlm@448 280
rlm@448 281 - You can create new creatures using Blender, a popular 3D modeling
rlm@448 282 program. Each sense can be specified using special blender nodes
rlm@448 283 with biologically inspired parameters. You need not write any
rlm@448 284 code to create a creature, and can use a wide library of
rlm@448 285 pre-existing blender models as a base for your own creatures.
rlm@448 286
rlm@448 287 - =CORTEX= implements a wide variety of senses, including touch,
rlm@448 288 proprioception, vision, hearing, and muscle tension. Complicated
rlm@448 289 senses like touch and vision involve multiple sensory elements
rlm@448 290 embedded in a 2D surface. You have complete control over the
rlm@448 291 distribution of these sensor elements through the use of simple
rlm@448 292 png image files. In particular, =CORTEX= implements more
rlm@448 293 comprehensive hearing than any other creature simulation system
rlm@448 294 available.
rlm@448 295
rlm@448 296 - =CORTEX= supports any number of creatures and any number of
rlm@448 297 senses. Time in =CORTEX= dilates so that the simulated creatures
rlm@448 298 always perceive a perfectly smooth flow of time, regardless of
rlm@448 299 the actual computational load.
rlm@448 300
rlm@448 301 =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
rlm@448 302 engine designed to create cross-platform 3D desktop games. =CORTEX=
rlm@448 303 is mainly written in clojure, a dialect of =LISP= that runs on the
rlm@448 304 java virtual machine (JVM). The API for creating and simulating
rlm@449 305 creatures and senses is entirely expressed in clojure, though many
rlm@449 306 senses are implemented at the layer of jMonkeyEngine or below. For
rlm@449 307 example, for the sense of hearing I use a layer of clojure code on
rlm@449 308 top of a layer of java JNI bindings that drive a layer of =C++=
rlm@449 309 code which implements a modified version of =OpenAL= to support
rlm@449 310 multiple listeners. =CORTEX= is the only simulation environment
rlm@449 311 that I know of that can support multiple entities that can each
rlm@449 312 hear the world from their own perspective. Other senses also
rlm@449 313 require a small layer of Java code. =CORTEX= also uses =bullet=, a
rlm@449 314 physics simulator written in =C=.
rlm@448 315
rlm@448 316 #+caption: Here is the worm from above modeled in Blender, a free
rlm@448 317 #+caption: 3D-modeling program. Senses and joints are described
rlm@448 318 #+caption: using special nodes in Blender.
rlm@448 319 #+name: worm-blender-model
rlm@448 320 #+ATTR_LaTeX: :width 12cm
rlm@448 321 [[./images/blender-worm.png]]
rlm@448 322
rlm@449 323 Here are some things I anticipate that =CORTEX= might be used for:
rlm@449 324
rlm@449 325 - exploring new ideas about sensory integration
rlm@449 326 - distributed communication among swarm creatures
rlm@449 327 - self-learning using free exploration,
rlm@449 328 - evolutionary algorithms involving creature construction
rlm@449 329 - exploration of exotic senses and effectors that are not possible
rlm@449 330 in the real world (such as telekinesis or a semantic sense)
rlm@449 331 - imagination using subworlds
rlm@449 332
rlm@451 333 During one test with =CORTEX=, I created 3,000 creatures each with
rlm@448 334 their own independent senses and ran them all at only 1/80 real
rlm@448 335 time. In another test, I created a detailed model of my own hand,
rlm@448 336 equipped with a realistic distribution of touch (more sensitive at
rlm@448 337 the fingertips), as well as eyes and ears, and it ran at around 1/4
rlm@451 338 real time.
rlm@448 339
rlm@451 340 #+BEGIN_LaTeX
rlm@449 341 \begin{sidewaysfigure}
rlm@449 342 \includegraphics[width=9.5in]{images/full-hand.png}
rlm@451 343 \caption{
rlm@451 344 I modeled my own right hand in Blender and rigged it with all the
rlm@451 345 senses that {\tt CORTEX} supports. My simulated hand has a
rlm@451 346 biologically inspired distribution of touch sensors. The senses are
rlm@451 347 displayed on the right, and the simulation is displayed on the
rlm@451 348 left. Notice that my hand is curling its fingers, that it can see
rlm@451 349 its own finger from the eye in its palm, and that it can feel its
rlm@451 350 own thumb touching its palm.}
rlm@449 351 \end{sidewaysfigure}
rlm@451 352 #+END_LaTeX
rlm@448 353
rlm@437 354 ** Contributions
rlm@435 355
rlm@451 356 - I built =CORTEX=, a comprehensive platform for embodied AI
rlm@451 357 experiments. =CORTEX= supports many features lacking in other
rlm@451 358 systems, such as proper simulation of hearing. It is easy to create
rlm@451 359 new =CORTEX= creatures using Blender, a free 3D modeling program.
rlm@449 360
rlm@451 361 - I built =EMPATH=, which uses =CORTEX= to identify the actions of
rlm@451 362 a worm-like creature using a computational model of empathy.
rlm@449 363
rlm@436 364 * Building =CORTEX=
rlm@435 365
rlm@462 366 I intend for =CORTEX= to be used as a general purpose library for
rlm@462 367 building creatures and outfitting them with senses, so that it will
rlm@462 368 be useful for other researchers who want to test out ideas of their
rlm@462 369 own. To this end, wherever I have had to make architectural choices
rlm@462 370 about =CORTEX=, I have chosen to give as much freedom to the user as
rlm@462 371 possible, so that =CORTEX= may be used for things I have not
rlm@462 372 foreseen.
rlm@462 373
rlm@465 374 ** COMMENT Simulation or Reality?
rlm@462 375
rlm@462 376 The most important architectural decision of all is the choice to
rlm@462 377 use a computer-simulated environment in the first place! The world
rlm@462 378 is a vast and rich place, and for now simulations are a very poor
rlm@462 379 reflection of its complexity. It may be that there is a significant
rlm@462 380 qualitative difference between dealing with senses in the real
rlm@468 381 world and dealing with pale facsimiles of them in a simulation.
rlm@468 382 What are the advantages and disadvantages of a simulation vs.
rlm@468 383 reality?
rlm@462 384
rlm@462 385 *** Simulation
rlm@462 386
rlm@462 387 The advantages of virtual reality are that when everything is a
rlm@462 388 simulation, experiments in that simulation are absolutely
rlm@462 389 reproducible. It's also easier to change the character and world
rlm@462 390 to explore new situations and different sensory combinations.
rlm@462 391
rlm@462 392 If the world is to be simulated on a computer, then not only do
rlm@462 393 you have to worry about whether the character's senses are rich
rlm@462 394 enough to learn from the world, but whether the world itself is
rlm@462 395 rendered with enough detail and realism to give enough working
rlm@462 396 material to the character's senses. To name just a few
rlm@462 397 difficulties facing modern physics simulators: destructibility of
rlm@462 398 the environment, simulation of water/other fluids, large areas,
rlm@462 399 nonrigid bodies, lots of objects, smoke. I don't know of any
rlm@462 400 computer simulation that would allow a character to take a rock
rlm@462 401 and grind it into fine dust, then use that dust to make a clay
rlm@462 402 sculpture, at least not without spending years calculating the
rlm@462 403 interactions of every single small grain of dust. Maybe a
rlm@462 404 simulated world with today's limitations doesn't provide enough
rlm@462 405 richness for real intelligence to evolve.
rlm@462 406
rlm@462 407 *** Reality
rlm@462 408
rlm@462 409 The other approach for playing with senses is to hook your
rlm@462 410 software up to real cameras, microphones, robots, etc., and let it
rlm@462 411 loose in the real world. This has the advantage of eliminating
rlm@462 412 concerns about simulating the world at the expense of increasing
rlm@462 413 the complexity of implementing the senses. Instead of just
rlm@462 414 grabbing the current rendered frame for processing, you have to
rlm@462 415 use an actual camera with real lenses and interact with photons to
rlm@462 416 get an image. It is much harder to change the character, which is
rlm@462 417 now partly a physical robot of some sort, since doing so involves
rlm@462 418 changing things around in the real world instead of modifying
rlm@462 419 lines of code. While the real world is very rich and definitely
rlm@462 420 provides enough stimulation for intelligence to develop as
rlm@462 421 evidenced by our own existence, it is also uncontrollable in the
rlm@462 422 sense that a particular situation cannot be recreated perfectly or
rlm@462 423 saved for later use. It is harder to conduct science because it is
rlm@462 424 harder to repeat an experiment. The worst thing about using the
rlm@462 425 real world instead of a simulation is the matter of time. Instead
rlm@462 426 of simulated time you get the constant and unstoppable flow of
rlm@462 427 real time. This severely limits the sorts of software you can use
rlm@462 428 to program the AI because all sense inputs must be handled in real
rlm@462 429 time. Complicated ideas may have to be implemented in hardware or
rlm@462 430 may simply be impossible given the current speed of our
rlm@462 431 processors. Contrast this with a simulation, in which the flow of
rlm@462 432 time in the simulated world can be slowed down to accommodate the
rlm@462 433 limitations of the character's programming. In terms of cost,
rlm@462 434 doing everything in software is far cheaper than building custom
rlm@462 435 real-time hardware. All you need is a laptop and some patience.
rlm@435 436
rlm@465 437 ** COMMENT Because of Time, simulation is preferable to reality
rlm@435 438
rlm@462 439 I envision =CORTEX= being used to support rapid prototyping and
rlm@462 440 iteration of ideas. Even if I could put together a well constructed
rlm@462 441 kit for creating robots, it would still not be enough because of
rlm@462 442 the scourge of real-time processing. Anyone who wants to test their
rlm@462 443 ideas in the real world must always worry about getting their
rlm@465 444 algorithms to run fast enough to process information in real time.
rlm@465 445 The need for real time processing only increases if multiple senses
rlm@465 446 are involved. In the extreme case, even simple algorithms will have
rlm@465 447 to be accelerated by ASIC chips or FPGAs, turning what would
rlm@465 448 otherwise be a few lines of code and a 10x speed penalty into a
rlm@465 449 multi-month ordeal. For this reason, =CORTEX= supports
rlm@462 450 /time-dilation/, which scales back the framerate of the
rlm@465 451 simulation in proportion to the amount of processing required for each frame.
rlm@465 452 From the perspective of the creatures inside the simulation, time
rlm@465 453 always appears to flow at a constant rate, regardless of how
rlm@462 454 complicated the environment becomes or how many creatures are in
rlm@462 455 the simulation. The cost is that =CORTEX= can sometimes run slower
rlm@462 456 than real time. This can also be an advantage, however ---
rlm@462 457 simulations of very simple creatures in =CORTEX= generally run at
rlm@462 458 40x on my machine!
rlm@462 459
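The following is a minimal sketch of the time-dilation idea, not
=CORTEX='s literal implementation: the simulated clock advances by a
fixed amount per rendered frame, so the creature perceives a constant
frame rate no matter how long the sensory processing for that frame
took in wall-clock time.

#+caption: Sketch of time dilation: simulated time advances by a
#+caption: constant amount per frame, independent of wall-clock time.
#+caption: (Illustrative only.)
#+name: time-dilation-sketch
#+begin_listing clojure
#+begin_src clojure
(defn simulation-clock
  "Return a zero-argument function that, when called once per
   rendered frame, advances and returns the creature's simulated
   time. Wall-clock time never enters the calculation."
  [seconds-per-frame]
  (let [t (atom 0.0)]
    (fn [] (swap! t + seconds-per-frame))))

;; usage sketch:
;; (def tick! (simulation-clock (/ 1.0 60)))
;; (tick!) ;; => 1/60 of simulated time, however long the frame took
#+end_src
#+end_listing
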
rlm@469 460 ** COMMENT What is a sense?
rlm@468 461
rlm@468 462 If =CORTEX= is to support a wide variety of senses, it would help
rlm@468 463 to have a better understanding of what a ``sense'' actually is!
rlm@468 464 While vision, touch, and hearing all seem like they are quite
rlm@468 465 different things, I was surprised to learn during the course of
rlm@468 466 this thesis that they (and all physical senses) can be expressed as
rlm@468 467 exactly the same mathematical object due to a dimensional argument!
rlm@468 468
rlm@468 469 Human beings are three-dimensional objects, and the nerves that
rlm@468 470 transmit data from our various sense organs to our brain are
rlm@468 471 essentially one-dimensional. This leaves up to two dimensions in
rlm@468 472 which our sensory information may flow. For example, imagine your
rlm@468 473 skin: it is a two-dimensional surface around a three-dimensional
rlm@468 474 object (your body). It has discrete touch sensors embedded at
rlm@468 475 various points, and the density of these sensors corresponds to the
rlm@468 476 sensitivity of that region of skin. Each touch sensor connects to a
rlm@468 477 nerve, all of which eventually are bundled together as they travel
rlm@468 478 up the spinal cord to the brain. Intersect the spinal nerves with a
rlm@468 479 guillotining plane and you will see all of the sensory data of the
rlm@468 480 skin revealed in a roughly circular two-dimensional image which is
rlm@468 481 the cross section of the spinal cord. Points on this image that are
rlm@468 482 close together in this circle represent touch sensors that are
rlm@468 483 /probably/ close together on the skin, although there is of course
rlm@468 484 some cutting and rearrangement that has to be done to transfer the
rlm@468 485 complicated surface of the skin onto a two dimensional image.
rlm@468 486
rlm@468 487 Most human senses consist of many discrete sensors of various
rlm@468 488 properties distributed along a surface at various densities. For
rlm@468 489 skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
rlm@468 490 disks, and Ruffini's endings, which detect pressure and vibration
rlm@468 491 of various intensities. For ears, it is the stereocilia distributed
rlm@468 492 along the basilar membrane inside the cochlea; each one is
rlm@468 493 sensitive to a slightly different frequency of sound. For eyes, it
rlm@468 494 is rods and cones distributed along the surface of the retina. In
rlm@468 495 each case, we can describe the sense with a surface and a
rlm@468 496 distribution of sensors along that surface.
rlm@468 497
rlm@468 498 The neat idea is that every human sense can be effectively
rlm@468 499 described in terms of a surface containing embedded sensors. If the
rlm@468 500 sense had any more dimensions, then there wouldn't be enough room
rlm@468 501 in the spinal cord to transmit the information!
rlm@468 502
rlm@468 503 Therefore, =CORTEX= must support the ability to create objects and
rlm@468 504 then be able to ``paint'' points along their surfaces to describe
rlm@468 505 each sense.
rlm@468 506
rlm@468 507 Fortunately this idea is already a well known computer graphics
rlm@468 508 technique called /UV-mapping/. The three-dimensional surface
rlm@468 509 of a model is cut and smooshed until it fits on a two-dimensional
rlm@468 510 image. You paint whatever you want on that image, and when the
rlm@468 511 three-dimensional shape is rendered in a game the smooshing and
rlm@468 512 cutting is reversed and the image appears on the three-dimensional
rlm@468 513 object.
rlm@468 514
rlm@468 515 To make a sense, interpret the UV-image as describing the
rlm@468 516 distribution of that sense's sensors. To get different types of
rlm@468 517 sensors, you can either use a different color for each type of
rlm@468 518 sensor, or use multiple UV-maps, each labeled with that sensor
rlm@468 519 type. I generally use a white pixel to mean the presence of a
rlm@468 520 sensor and a black pixel to mean the absence of a sensor, and use
rlm@468 521 one UV-map for each sensor-type within a given sense.
rlm@468 522
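As a small sketch of how such a UV-map can be read (illustrative
only, not =CORTEX='s actual reader), every white pixel of the image
can be interpreted as the UV coordinate of one sensor; the file path
below is hypothetical.

#+caption: Sketch of extracting sensor coordinates from a UV-map
#+caption: image: each white pixel becomes one sensor location.
#+caption: (Illustrative only.)
#+name: uv-sensors-sketch
#+begin_listing clojure
#+begin_src clojure
(import '(javax.imageio ImageIO)
        '(java.io File))

(defn white-pixels
  "Return the [x y] coordinates of every white pixel in the image,
   interpreted here as the UV positions of individual sensors."
  [image-path]
  (let [img (ImageIO/read (File. image-path))]
    (for [x (range (.getWidth img))
          y (range (.getHeight img))
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB img x y)))]
      [x y])))

;; (white-pixels "images/finger-UV.png") ;; => seq of sensor [x y]s
#+end_src
#+end_listing
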
rlm@468 523 #+CAPTION: The UV-map for an elongated icosphere. The white
rlm@468 524 #+caption: dots each represent a touch sensor. They are dense
rlm@468 525 #+caption: in the regions that describe the tip of the finger,
rlm@468 526 #+caption: and less dense along the dorsal side of the finger
rlm@468 527 #+caption: opposite the tip.
rlm@468 528 #+name: finger-UV
rlm@468 529 #+ATTR_latex: :width 10cm
rlm@468 530 [[./images/finger-UV.png]]
rlm@468 531
rlm@468 532 #+caption: Ventral side of the UV-mapped finger. Notice the
rlm@468 533 #+caption: density of touch sensors at the tip.
rlm@468 534 #+name: finger-side-view
rlm@468 535 #+ATTR_LaTeX: :width 10cm
rlm@468 536 [[./images/finger-1.png]]
rlm@468 537
rlm@465 538 ** COMMENT Video game engines are a great starting point
rlm@462 539
rlm@462 540 I did not need to write my own physics simulation code or shader to
rlm@462 541 build =CORTEX=. Doing so would lead to a system that is impossible
rlm@462 542 for anyone but myself to use anyway. Instead, I use a video game
rlm@462 543 engine as a base and modify it to accommodate the additional needs
rlm@462 544 of =CORTEX=. Video game engines are an ideal starting point to
rlm@462 545 build =CORTEX=, because they are not far from being creature
rlm@463 546 building systems themselves.
rlm@462 547
rlm@462 548 First off, general purpose video game engines come with a physics
rlm@462 549 engine and lighting / sound system. The physics system provides
rlm@462 550 tools that can be co-opted to serve as touch, proprioception, and
rlm@462 551 muscles. Since some games support split screen views, a good video
rlm@462 552 game engine will allow you to efficiently create multiple cameras
rlm@463 553 in the simulated world that can be used as eyes. Video game systems
rlm@463 554 offer integrated asset management for things like textures and
rlm@468 555 creature models, providing an avenue for defining creatures. They
rlm@468 556 also understand UV-mapping, since this technique is used to apply a
rlm@468 557 texture to a model. Finally, because video game engines support a
rlm@468 558 large number of users, as long as =CORTEX= doesn't stray too far
rlm@468 559 from the base system, other researchers can turn to this community
rlm@468 560 for help when doing their research.
rlm@463 561
rlm@465 562 ** COMMENT =CORTEX= is based on jMonkeyEngine3
rlm@463 563
rlm@463 564 While preparing to build =CORTEX= I studied several video game
rlm@463 565 engines to see which would best serve as a base. The top contenders
rlm@463 566 were:
rlm@463 567
rlm@463 568 - [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by ID
rlm@463 569 software in 1997. All the source code was released by ID
rlm@463 570 software into the Public Domain several years ago, and as a
rlm@463 571 result it has been ported to many different languages. This
rlm@463 572 engine was famous for its advanced use of realistic shading
rlm@463 573 and had decent and fast physics simulation. The main advantage
rlm@463 574 of the Quake II engine is its simplicity, but I ultimately
rlm@463 575 rejected it because the engine is too tied to the concept of a
rlm@463 576 first-person shooter game. One of the problems I had was that
rlm@463 577 there does not seem to be any easy way to attach multiple
rlm@463 578 cameras to a single character. There are also several physics
rlm@463 579 clipping issues that are corrected in a way that only applies
rlm@463 580 to the main character and do not apply to arbitrary objects.
rlm@463 581
rlm@463 582 - [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
rlm@463 583 and Quake I engines and is used by Valve in the Half-Life
rlm@463 584 series of games. The physics simulation in the Source Engine
rlm@463 585 is quite accurate and probably the best out of all the engines
rlm@463 586 I investigated. There is also an extensive community actively
rlm@463 587 working with the engine. However, applications that use the
rlm@463 588 Source Engine must be written in C++, the code is not open, it
rlm@463 589 only runs on Windows, and the tools that come with the SDK to
rlm@463 590 handle models and textures are complicated and awkward to use.
rlm@463 591
rlm@463 592 - [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
rlm@463 593 games in Java. It uses OpenGL to render to the screen and uses
rlm@463 594 scene graphs to avoid drawing things that do not appear on the
rlm@463 595 screen. It has an active community and several games in the
rlm@463 596 pipeline. The engine was not built to serve any particular
rlm@463 597 game but is instead meant to be used for any 3D game.
rlm@463 598
rlm@463 599 I chose jMonkeyEngine3 because it had the most features
rlm@464 600 out of all the free projects I looked at, and because I could then
rlm@463 601 write my code in clojure, an implementation of =LISP= that runs on
rlm@463 602 the JVM.
rlm@435 603
rlm@469 604 ** COMMENT =CORTEX= uses Blender to create creature models
rlm@435 605
rlm@464 606 For the simple worm-like creatures I will use later on in this
rlm@464 607 thesis, I could define a simple API in =CORTEX= that would allow
rlm@464 608 one to create boxes, spheres, etc., and leave that API as the sole
rlm@464 609 way to create creatures. However, for =CORTEX= to truly be useful
rlm@468 610 for other projects, it needs a way to construct complicated
rlm@464 611 creatures. If possible, it would be nice to leverage work that has
rlm@464 612 already been done by the community of 3D modelers, or at least
rlm@464 613 enable people who are talented at modeling but not programming to
rlm@468 614 design =CORTEX= creatures.
rlm@464 615
rlm@464 616 Therefore, I use Blender, a free 3D modeling program, as the main
rlm@464 617 way to create creatures in =CORTEX=. However, the creatures modeled
rlm@464 618 in Blender must also be simple to simulate in jMonkeyEngine3's game
rlm@468 619 engine, and must also be easy to rig with =CORTEX='s senses. I
rlm@468 620 accomplish this with extensive use of Blender's ``empty nodes.''
rlm@464 621
rlm@468 622 Empty nodes have no mass, physical presence, or appearance, but
rlm@468 623 they can hold metadata and have names. I use a tree structure of
rlm@468 624 empty nodes to specify senses in the following manner:
rlm@468 625
rlm@468 626 - Create a single top-level empty node whose name is the name of
rlm@468 627 the sense.
rlm@468 628 - Add empty nodes which each contain meta-data relevant to the
rlm@468 629 sense, including a UV-map describing the number/distribution of
rlm@468 630 sensors if applicable.
rlm@468 631 - Make each empty-node the child of the top-level node.
rlm@468 632
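As a small sketch of how this convention can be consumed
(illustrative only, not =CORTEX='s exact code), the marker nodes for
a sense are found by looking up the top-level empty node by name and
listing its children:

#+caption: Sketch of walking the empty-node convention for a sense.
#+caption: (Illustrative only.)
#+name: sense-node-walk-sketch
#+begin_listing clojure
#+begin_src clojure
(defn print-sense-markers
  "Print the name of every marker node found under the top-level
   empty node with the given sense name (e.g. \"eyes\" or \"joints\")."
  [#^com.jme3.scene.Node creature sense-name]
  (when-let [sense-node (.getChild creature sense-name)]
    (doseq [marker (.getChildren sense-node)]
      (println (.getName marker)))))
#+end_src
#+end_listing
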
rlm@468 633 #+caption: An example of annotating a creature model with empty
rlm@468 634 #+caption: nodes to describe the layout of senses. There are
rlm@468 635 #+caption: multiple empty nodes which each describe the position
rlm@468 636 #+caption: of muscles, ears, eyes, or joints.
rlm@468 637 #+name: sense-nodes
rlm@468 638 #+ATTR_LaTeX: :width 10cm
rlm@468 639 [[./images/empty-sense-nodes.png]]
rlm@468 640
rlm@469 641 ** COMMENT Bodies are composed of segments connected by joints
rlm@468 642
rlm@468 643 Blender is a general purpose animation tool, which has been used in
rlm@468 644 the past to create high quality movies such as Sintel
rlm@468 645 \cite{sintel}. Though Blender can model and render even complicated
rlm@468 646 things like water, it is crucial to keep models that are meant to
rlm@468 647 be simulated as creatures simple. =Bullet=, which =CORTEX= uses
rlm@468 648 through jMonkeyEngine3, is a rigid-body physics system. This offers
rlm@468 649 a compromise between the expressiveness of a game level and the
rlm@468 650 speed at which it can be simulated, and it means that creatures
rlm@468 651 should be naturally expressed as rigid components held together by
rlm@468 652 joint constraints.
rlm@468 653
rlm@468 654 But humans are more like a squishy bag wrapped around some hard
rlm@468 655 bones, which define the overall shape. When we move, our skin
rlm@468 656 bends and stretches to accommodate the new positions of our bones.
rlm@468 657
rlm@468 658 One way to make bodies composed of rigid pieces connected by joints
rlm@468 659 /seem/ more human-like is to use an /armature/ (or /rigging/)
rlm@468 660 system, which defines an overall ``body mesh'' and defines how the
rlm@468 661 mesh deforms as a function of the position of each ``bone'' which
rlm@468 662 is a standard rigid body. This technique is used extensively to
rlm@468 663 model humans and create realistic animations. It is not a good
rlm@468 664 technique for physical simulation, however, because it creates a lie
rlm@468 665 -- the skin is not a physical part of the simulation and does not
rlm@468 666 interact with any objects in the world or itself. Objects will pass
rlm@468 667 right through the skin until they come in contact with the
rlm@468 668 underlying bone, which is a physical object. Without simulating
rlm@468 669 the skin, the sense of touch has little meaning, and the creature's
rlm@468 670 own vision will lie to it about the true extent of its body.
rlm@468 671 Simulating the skin as a physical object requires some way to
rlm@468 672 continuously update the physical model of the skin along with the
rlm@468 673 movement of the bones, which is unacceptably slow compared to rigid
rlm@468 674 body simulation.
rlm@468 675
rlm@468 676 Therefore, instead of using the human-like ``deformable bag of
rlm@468 677 bones'' approach, I decided to base my body plans on multiple solid
rlm@468 678 objects that are connected by joints, inspired by the robot =EVE=
rlm@468 679 from the movie WALL-E.
rlm@464 680
rlm@464 681 #+caption: =EVE= from the movie WALL-E. This body plan turns
rlm@464 682 #+caption: out to be much better suited to my purposes than a more
rlm@464 683 #+caption: human-like one.
rlm@465 684 #+ATTR_LaTeX: :width 10cm
rlm@464 685 [[./images/Eve.jpg]]
rlm@464 686
rlm@464 687 =EVE='s body is composed of several rigid components that are held
rlm@464 688 together by invisible joint constraints. This is what I mean by
rlm@464 689 ``eve-like''. The main reason that I use eve-style bodies is for
rlm@464 690 efficiency, and so that there will be correspondence between the
rlm@468 691 AI's senses and the physical presence of its body. Each individual
rlm@464 692 section is simulated by a separate rigid body that corresponds
rlm@464 693 exactly with its visual representation and does not change.
rlm@464 694 Sections are connected by invisible joints that are well supported
rlm@464 695 in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
rlm@464 696 can efficiently simulate hundreds of rigid bodies connected by
rlm@468 697 joints. Just because sections are rigid does not mean they have to
rlm@468 698 stay as one piece forever; they can be dynamically replaced with
rlm@468 699 multiple sections to simulate splitting in two. This could be used
rlm@468 700 to simulate retractable claws or =EVE='s hands, which are able to
rlm@468 701 coalesce into one object in the movie.
rlm@465 702
rlm@469 703 *** Solidifying/Connecting a body
rlm@465 704
rlm@469 705 =CORTEX= creates a creature in two steps: first, it traverses the
rlm@469 706 nodes in the blender file and creates physical representations for
rlm@469 707 any of them that have mass defined in their blender meta-data.
rlm@466 708
rlm@466 709 #+caption: Program for iterating through the nodes in a blender file
rlm@466 710 #+caption: and generating physical jMonkeyEngine3 objects with mass
rlm@466 711 #+caption: and a matching physics shape.
rlm@466 712 #+name: name
rlm@466 713 #+begin_listing clojure
rlm@466 714 #+begin_src clojure
rlm@466 715 (defn physical!
rlm@466 716 "Iterate through the nodes in creature and make them real physical
rlm@466 717 objects in the simulation."
rlm@466 718 [#^Node creature]
rlm@466 719 (dorun
rlm@466 720 (map
rlm@466 721 (fn [geom]
rlm@466 722 (let [physics-control
rlm@466 723 (RigidBodyControl.
rlm@466 724 (HullCollisionShape.
rlm@466 725 (.getMesh geom))
rlm@466 726 (if-let [mass (meta-data geom "mass")]
rlm@466 727 (float mass) (float 1)))]
rlm@466 728 (.addControl geom physics-control)))
rlm@466 729 (filter #(isa? (class %) Geometry )
rlm@466 730 (node-seq creature)))))
rlm@466 731 #+end_src
rlm@466 732 #+end_listing
rlm@465 733
rlm@469 734 The next step in making a proper body is to connect those pieces
rlm@469 735 together with joints. jMonkeyEngine has a large array of joints
rlm@469 736 available via =bullet=, such as Point2Point, Cone, Hinge, and a
rlm@469 737 generic Six Degree of Freedom joint, with or without spring
rlm@469 738 restitution.
rlm@465 739
rlm@469 740 Joints are treated a lot like proper senses, in that there is a
rlm@469 741 top-level empty node named ``joints'' whose children each
rlm@469 742 represent a joint.
rlm@466 743
rlm@469 744 #+caption: View of the hand model in Blender showing the main ``joints''
rlm@469 745 #+caption: node (highlighted in yellow) and its children which each
rlm@469 746 #+caption: represent a joint in the hand. Each joint node has metadata
rlm@469 747 #+caption: specifying what sort of joint it is.
rlm@469 748 #+name: blender-hand
rlm@469 749 #+ATTR_LaTeX: :width 10cm
rlm@469 750 [[./images/hand-screenshot1.png]]
rlm@469 751
rlm@469 752
rlm@469 753 =CORTEX='s procedure for binding the creature together with joints
rlm@469 754 is as follows:
rlm@469 755
rlm@469 756 - Find the children of the ``joints'' node.
rlm@469 757 - Determine the two spatials the joint is meant to connect.
rlm@469 758 - Create the joint based on the meta-data of the empty node.
rlm@469 759
rlm@469 760 The higher order function =sense-nodes= from =cortex.sense=
rlm@469 761 simplifies finding the joints based on their parent ``joints''
rlm@469 762 node.
rlm@466 763
rlm@466 764 #+caption: Retrieving the child empty nodes from a single
rlm@466 765 #+caption: named empty node is a common pattern in =CORTEX=;
rlm@466 766 #+caption: further instances of this technique for the senses
rlm@466 767 #+caption: will be omitted.
rlm@466 768 #+name: get-empty-nodes
rlm@466 769 #+begin_listing clojure
rlm@466 770 #+begin_src clojure
rlm@466 771 (defn sense-nodes
rlm@466 772 "For some senses there is a special empty blender node whose
rlm@466 773 children are considered markers for an instance of that sense. This
rlm@466 774 function generates functions to find those children, given the name
rlm@466 775 of the special parent node."
rlm@466 776 [parent-name]
rlm@466 777 (fn [#^Node creature]
rlm@466 778 (if-let [sense-node (.getChild creature parent-name)]
rlm@466 779 (seq (.getChildren sense-node)) [])))
rlm@466 780
rlm@466 781 (def
rlm@466 782 ^{:doc "Return the children of the creature's \"joints\" node."
rlm@466 783 :arglists '([creature])}
rlm@466 784 joints
rlm@466 785 (sense-nodes "joints"))
rlm@466 786 #+end_src
rlm@466 787 #+end_listing
rlm@466 788
rlm@469 789 To find a joint's targets, =CORTEX= creates a small cube, centered
rlm@469 790 around the empty-node, and grows the cube exponentially until it
rlm@469 791 intersects two physical objects. The objects are ordered according
rlm@469 792 to the joint's rotation, with the first one being the object that
rlm@469 793 has more negative coordinates in the joint's reference frame.
rlm@469 794 Since the objects must be physical, the empty-node itself escapes
rlm@469 795 detection, and for the same reason =joint-targets=
rlm@469 796 must be called /after/ =physical!= is called.
rlm@464 797
rlm@469 798 #+caption: Program to find the targets of a joint node by
rlm@469 799 #+caption: exponential growth of a search cube.
rlm@469 800 #+name: joint-targets
rlm@469 801 #+begin_listing clojure
rlm@469 802 #+begin_src clojure
rlm@466 803 (defn joint-targets
rlm@466 804 "Return the two closest two objects to the joint object, ordered
rlm@466 805 from bottom to top according to the joint's rotation."
rlm@466 806 [#^Node parts #^Node joint]
rlm@466 807 (loop [radius (float 0.01)]
rlm@466 808 (let [results (CollisionResults.)]
rlm@466 809 (.collideWith
rlm@466 810 parts
rlm@466 811 (BoundingBox. (.getWorldTranslation joint)
rlm@466 812 radius radius radius) results)
rlm@466 813 (let [targets
rlm@466 814 (distinct
rlm@466 815 (map #(.getGeometry %) results))]
rlm@466 816 (if (>= (count targets) 2)
rlm@466 817 (sort-by
rlm@466 818 #(let [joint-ref-frame-position
rlm@466 819 (jme-to-blender
rlm@466 820 (.mult
rlm@466 821 (.inverse (.getWorldRotation joint))
rlm@466 822 (.subtract (.getWorldTranslation %)
rlm@466 823 (.getWorldTranslation joint))))]
rlm@466 824 (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
rlm@466 825 (take 2 targets))
rlm@466 826 (recur (float (* radius 2))))))))
rlm@469 827 #+end_src
rlm@469 828 #+end_listing
rlm@464 829
rlm@469 830 Once =CORTEX= finds all joints and targets, it creates them using
rlm@469 831 a dispatch on the metadata of each joint node.
rlm@466 832
rlm@469 833 #+caption: Program to dispatch on blender metadata and create joints
rlm@469 834 #+caption: suitable for physical simulation.
rlm@469 835 #+name: joint-dispatch
rlm@469 836 #+begin_listing clojure
rlm@469 837 #+begin_src clojure
rlm@466 838 (defmulti joint-dispatch
rlm@466 839 "Translate blender pseudo-joints into real JME joints."
rlm@466 840 (fn [constraints & _]
rlm@466 841 (:type constraints)))
rlm@466 842
rlm@466 843 (defmethod joint-dispatch :point
rlm@466 844 [constraints control-a control-b pivot-a pivot-b rotation]
rlm@466 845 (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
rlm@466 846 (.setLinearLowerLimit Vector3f/ZERO)
rlm@466 847 (.setLinearUpperLimit Vector3f/ZERO)))
rlm@466 848
rlm@466 849 (defmethod joint-dispatch :hinge
rlm@466 850 [constraints control-a control-b pivot-a pivot-b rotation]
rlm@466 851 (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
rlm@466 852 [limit-1 limit-2] (:limit constraints)
rlm@466 853 hinge-axis (.mult rotation (blender-to-jme axis))]
rlm@466 854 (doto (HingeJoint. control-a control-b pivot-a pivot-b
rlm@466 855 hinge-axis hinge-axis)
rlm@466 856 (.setLimit limit-1 limit-2))))
rlm@466 857
rlm@466 858 (defmethod joint-dispatch :cone
rlm@466 859 [constraints control-a control-b pivot-a pivot-b rotation]
rlm@466 860 (let [limit-xz (:limit-xz constraints)
rlm@466 861 limit-xy (:limit-xy constraints)
rlm@466 862 twist (:twist constraints)]
rlm@466 863 (doto (ConeJoint. control-a control-b pivot-a pivot-b
rlm@466 864 rotation rotation)
rlm@466 865 (.setLimit (float limit-xz) (float limit-xy)
rlm@466 866 (float twist)))))
rlm@469 867 #+end_src
rlm@469 868 #+end_listing
rlm@466 869
rlm@469 870 All that is left is to combine the above pieces into
rlm@469 871 something that can operate on the collection of nodes that a
rlm@469 872 blender file represents.
rlm@466 873
rlm@469 874 #+caption: Program to completely create a joint given information
rlm@469 875 #+caption: from a blender file.
rlm@469 876 #+name: connect
rlm@469 877 #+begin_listing clojure
rlm@466 878 #+begin_src clojure
rlm@466 879 (defn connect
rlm@466 880 "Create a joint between 'obj-a and 'obj-b at the location of
rlm@466 881 'joint. The type of joint is determined by the metadata on 'joint.
rlm@466 882
rlm@466 883 Here are some examples:
rlm@466 884 {:type :point}
rlm@466 885 {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
rlm@466 886 (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
rlm@466 887
rlm@466 888 {:type :cone :limit-xz 0
rlm@466 889 :limit-xy 0
rlm@466 890 :twist 0} (use XZY rotation mode in blender!)"
rlm@466 891 [#^Node obj-a #^Node obj-b #^Node joint]
rlm@466 892 (let [control-a (.getControl obj-a RigidBodyControl)
rlm@466 893 control-b (.getControl obj-b RigidBodyControl)
rlm@466 894 joint-center (.getWorldTranslation joint)
rlm@466 895 joint-rotation (.toRotationMatrix (.getWorldRotation joint))
rlm@466 896 pivot-a (world-to-local obj-a joint-center)
rlm@466 897 pivot-b (world-to-local obj-b joint-center)]
rlm@466 898 (if-let
rlm@466 899 [constraints (map-vals eval (read-string (meta-data joint "joint")))]
rlm@466 900 ;; A side-effect of creating a joint registers
rlm@466 901 ;; it with both physics objects which in turn
rlm@466 902 ;; will register the joint with the physics system
rlm@466 903 ;; when the simulation is started.
rlm@466 904 (joint-dispatch constraints
rlm@466 905 control-a control-b
rlm@466 906 pivot-a pivot-b
rlm@466 907 joint-rotation))))
rlm@469 908 #+end_src
rlm@469 909 #+end_listing
rlm@466 910
rlm@469 911 In general, whenever =CORTEX= exposes a sense (or in this case
rlm@469 912 physicality), it provides a function of the type =sense!=, which
rlm@469 913 takes in a collection of nodes and augments it to support that
rlm@469 914 sense. The function returns any controls necessary to use that
rlm@469 915 sense. In this case =body!= creates a physical body and returns no
rlm@469 916 control functions.
rlm@466 917
rlm@469 918 #+caption: Program to attach joints to a creature and give it a physical body.
rlm@469 919 #+name: name
rlm@469 920 #+begin_listing clojure
rlm@469 921 #+begin_src clojure
rlm@466 922 (defn joints!
rlm@466 923 "Connect the solid parts of the creature with physical joints. The
rlm@466 924 joints are taken from the \"joints\" node in the creature."
rlm@466 925 [#^Node creature]
rlm@466 926 (dorun
rlm@466 927 (map
rlm@466 928 (fn [joint]
rlm@466 929 (let [[obj-a obj-b] (joint-targets creature joint)]
rlm@466 930 (connect obj-a obj-b joint)))
rlm@466 931 (joints creature))))
rlm@466 932 (defn body!
rlm@466 933 "Endow the creature with a physical body connected with joints. The
rlm@466 934 particulars of the joints and the masses of each body part are
rlm@466 935 determined in blender."
rlm@466 936 [#^Node creature]
rlm@466 937 (physical! creature)
rlm@466 938 (joints! creature))
rlm@469 939 #+end_src
rlm@469 940 #+end_listing
rlm@466 941
rlm@469 942 All of the code you have just seen amounts to only 130 lines, yet
rlm@469 943 because it builds on top of Blender and jMonkeyEngine3, those few
rlm@469 944 lines pack quite a punch!
rlm@466 945
rlm@469 946 The hand from figure \ref{blender-hand}, which was modeled after
rlm@469 947 my own right hand, can now be given joints and simulated as a
rlm@469 948 creature.
rlm@466 949
rlm@469 950 #+caption: With the ability to create physical creatures from blender,
rlm@469 951 #+caption: =CORTEX= gets one step closer to becoming a full creature
rlm@469 952 #+caption: simulation environment.
rlm@469 953 #+name: name
rlm@469 954 #+ATTR_LaTeX: :width 15cm
rlm@469 955 [[./images/physical-hand.png]]
rlm@468 956
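As a usage sketch, turning such a model into a physical creature is a
single call to =body!=. The loader name and file path below are
hypothetical stand-ins; the only functions taken from the text above
are =physical!=, =joints!=, and =body!=.

#+caption: Hypothetical usage sketch: load a creature model and give
#+caption: it a physical body.
#+name: body-usage-sketch
#+begin_listing clojure
#+begin_src clojure
;; 'load-blender-model is assumed here as whatever loader returns the
;; .blend file as a jMonkeyEngine Node; the path is hypothetical.
(def hand (load-blender-model "Models/test-creatures/hand.blend"))

;; solidify the segments and connect them with joints
(body! hand)
#+end_src
#+end_listing
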
rlm@472 957 ** COMMENT Eyes reuse standard video game components
rlm@436 958
rlm@470 959 Vision is one of the most important senses for humans, so I need to
rlm@470 960 build a simulated sense of vision for my AI. I will do this with
rlm@470 961 simulated eyes. Each eye can be independently moved and should see
rlm@470 962 its own version of the world depending on where it is.
rlm@470 963
rlm@470 964 Making these simulated eyes a reality is simple because
rlm@470 965 jMonkeyEngine already contains extensive support for multiple views
rlm@470 966 of the same 3D simulated world. The reason jMonkeyEngine has this
rlm@470 967 support is that it is necessary for creating games with
rlm@470 968 split-screen views. Multiple views are also used to create
rlm@470 969 efficient pseudo-reflections by rendering the scene from a certain
rlm@470 970 perspective and then projecting it back onto a surface in the 3D
rlm@470 971 world.
rlm@470 972
rlm@470 973 #+caption: jMonkeyEngine supports multiple views to enable
rlm@470 974 #+caption: split-screen games, like GoldenEye, which was one of
rlm@470 975 #+caption: the first games to use split-screen views.
rlm@470 976 #+name: name
rlm@470 977 #+ATTR_LaTeX: :width 10cm
rlm@470 978 [[./images/goldeneye-4-player.png]]
rlm@470 979
rlm@470 980 *** A Brief Description of jMonkeyEngine's Rendering Pipeline
rlm@470 981
rlm@470 982 jMonkeyEngine allows you to create a =ViewPort=, which represents a
rlm@470 983 view of the simulated world. You can create as many of these as you
rlm@470 984 want. Every frame, the =RenderManager= iterates through each
rlm@470 985 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
rlm@470 986 is a =FrameBuffer= which represents the rendered image in the GPU.
rlm@470 987
rlm@470 988 #+caption: =ViewPorts= are cameras in the world. During each frame,
rlm@470 989 #+caption: the =RenderManager= records a snapshot of what each view
rlm@470 990 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
rlm@470 991 #+name: name
rlm@470 992 #+ATTR_LaTeX: :width 10cm
rlm@470 993 [[./images/diagram_rendermanager2.png]]
rlm@470 994
rlm@470 995 Each =ViewPort= can have any number of attached =SceneProcessor=
rlm@470 996 objects, which are called every time a new frame is rendered. A
rlm@470 997 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
rlm@470 998 whatever it wants to the data. Often this consists of invoking GPU
rlm@470 999 specific operations on the rendered image. The =SceneProcessor= can
rlm@470 1000 also copy the GPU image data to RAM and process it with the CPU.
rlm@470 1001
rlm@470 1002 *** Appropriating Views for Vision
rlm@470 1003
rlm@470 1004 Each eye in the simulated creature needs its own =ViewPort= so
rlm@470 1005 that it can see the world from its own perspective. To this
rlm@470 1006 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
rlm@470 1007 any arbitrary continuation function for further processing. That
rlm@470 1008 continuation function may perform both CPU and GPU operations on
rlm@470 1009 the data. To make this easy for the continuation function, the
rlm@470 1010 =SceneProcessor= maintains appropriately sized buffers in RAM to
rlm@470 1011 hold the data. It does not do any copying from the GPU to the CPU
rlm@470 1012 itself because it is a slow operation.
rlm@470 1013
rlm@470 1014 #+caption: Function to make the rendered scene in jMonkeyEngine
rlm@470 1015 #+caption: available for further processing.
rlm@470 1016 #+name: pipeline-1
rlm@470 1017 #+begin_listing clojure
rlm@470 1018 #+begin_src clojure
rlm@470 1019 (defn vision-pipeline
rlm@470 1020 "Create a SceneProcessor object which wraps a vision processing
rlm@470 1021 continuation function. The continuation is a function that takes
rlm@470 1022 [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
rlm@470 1023 each of which has already been appropriately sized."
rlm@470 1024 [continuation]
rlm@470 1025 (let [byte-buffer (atom nil)
rlm@470 1026 renderer (atom nil)
rlm@470 1027 image (atom nil)]
rlm@470 1028 (proxy [SceneProcessor] []
rlm@470 1029 (initialize
rlm@470 1030 [renderManager viewPort]
rlm@470 1031 (let [cam (.getCamera viewPort)
rlm@470 1032 width (.getWidth cam)
rlm@470 1033 height (.getHeight cam)]
rlm@470 1034 (reset! renderer (.getRenderer renderManager))
rlm@470 1035 (reset! byte-buffer
rlm@470 1036 (BufferUtils/createByteBuffer
rlm@470 1037 (* width height 4)))
rlm@470 1038 (reset! image (BufferedImage.
rlm@470 1039 width height
rlm@470 1040 BufferedImage/TYPE_4BYTE_ABGR))))
rlm@470 1041 (isInitialized [] (not (nil? @byte-buffer)))
rlm@470 1042 (reshape [_ _ _])
rlm@470 1043 (preFrame [_])
rlm@470 1044 (postQueue [_])
rlm@470 1045 (postFrame
rlm@470 1046 [#^FrameBuffer fb]
rlm@470 1047 (.clear @byte-buffer)
rlm@470 1048 (continuation @renderer fb @byte-buffer @image))
rlm@470 1049 (cleanup []))))
rlm@470 1050 #+end_src
rlm@470 1051 #+end_listing
rlm@470 1052
rlm@470 1053 The continuation function given to =vision-pipeline= above will be
rlm@470 1054 given a =Renderer= and three containers for image data. The
rlm@470 1055 =FrameBuffer= references the GPU image data, but the pixel data
rlm@470 1056 can not be used directly on the CPU. The =ByteBuffer= and
rlm@470 1057 =BufferedImage= are initially "empty" but are sized to hold the
rlm@470 1058 data in the =FrameBuffer=. I call transferring the GPU image data
rlm@470 1059 to the CPU structures "mixing" the image data.
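
Below is a sketch of what such a mixing step might look like,
assuming jMonkeyEngine's =Renderer= and =Screenshots= utilities are
in scope; it is not necessarily the exact function =CORTEX= uses.

#+begin_src clojure
(import com.jme3.util.Screenshots)

(defn mix-image!
  "Copy the FrameBuffer's pixels into the ByteBuffer, then decode the
   ByteBuffer into the BufferedImage. Returns the BufferedImage."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (.clear bb)                            ; rewind the CPU buffer
  (.readFrameBuffer r fb bb)             ; GPU -> RAM
  (Screenshots/convertScreenShot bb bi)  ; raw bytes -> BufferedImage
  bi)
#+end_src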
rlm@470 1060
rlm@470 1061 *** Optical sensor arrays are described with images and referenced with metadata
rlm@470 1062
rlm@470 1063 The vision pipeline described above handles the flow of rendered
rlm@470 1064 images. Now, =CORTEX= needs simulated eyes to serve as the source
rlm@470 1065 of these images.
rlm@470 1066
rlm@470 1067 Eyes are described in blender in the same way as joints. They
rlm@470 1068 are zero-dimensional empty objects with no geometry whose local
rlm@470 1069 coordinate system determines the orientation of the resulting eye.
rlm@470 1070 All eyes are children of a parent node named "eyes" just as all
rlm@470 1071 joints have a parent named "joints". An eye binds to the nearest
rlm@470 1072 physical object with =bind-sense=.
rlm@470 1073
rlm@470 1074 #+caption: Here, the camera is created based on metadata on the
rlm@470 1075 #+caption: eye-node and attached to the nearest physical object
rlm@470 1076 #+caption: with =bind-sense=
rlm@470 1077 #+name: add-eye
rlm@470 1078 #+begin_listing clojure
rlm@470 1079 (defn add-eye!
rlm@470 1080 "Create a Camera centered on the current position of 'eye which
rlm@470 1081 follows the closest physical node in 'creature. The camera will
rlm@470 1082 point in the X direction and use the Z vector as up as determined
rlm@470 1083 by the rotation of these vectors in blender coordinate space. Use
rlm@470 1084 XZY rotation for the node in blender."
rlm@470 1085 [#^Node creature #^Spatial eye]
rlm@470 1086 (let [target (closest-node creature eye)
rlm@470 1087 [cam-width cam-height]
rlm@470 1088 ;;[640 480] ;; graphics card on laptop doesn't support
rlm@470 1089 ;; arbitrary dimensions.
rlm@470 1090 (eye-dimensions eye)
rlm@470 1091 cam (Camera. cam-width cam-height)
rlm@470 1092 rot (.getWorldRotation eye)]
rlm@470 1093 (.setLocation cam (.getWorldTranslation eye))
rlm@470 1094 (.lookAtDirection
rlm@470 1095 cam ; this part is not a mistake and
rlm@470 1096 (.mult rot Vector3f/UNIT_X) ; is consistent with using Z in
rlm@470 1097 (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
rlm@470 1098 (.setFrustumPerspective
rlm@470 1099 cam (float 45)
rlm@470 1100 (float (/ (.getWidth cam) (.getHeight cam)))
rlm@470 1101 (float 1)
rlm@470 1102 (float 1000))
rlm@470 1103 (bind-sense target cam) cam))
rlm@470 1104 #+end_listing
rlm@470 1105
rlm@470 1106 *** Simulated Retina
rlm@470 1107
rlm@470 1108 An eye is a surface (the retina) which contains many discrete
rlm@470 1109 sensors to detect light. These sensors can have different
rlm@470 1110 light-sensing properties. In humans, each discrete sensor is
rlm@470 1111 sensitive to red, blue, green, or gray. These different types of
rlm@470 1112 sensors can have different spatial distributions along the retina.
rlm@470 1113 In humans, there is a fovea in the center of the retina which has
rlm@470 1114 a very high density of color sensors, and a blind spot which has
rlm@470 1115 no sensors at all. Sensor density decreases in proportion to
rlm@470 1116 distance from the fovea.
rlm@470 1117
rlm@470 1118 I want to be able to model any retinal configuration, so my
rlm@470 1119 eye-nodes in blender contain metadata pointing to images that
rlm@470 1120 describe the precise position of the individual sensors using
rlm@470 1121 white pixels. The meta-data also describes the precise sensitivity
rlm@470 1122 to light that the sensors described in the image have. An eye can
rlm@470 1123 contain any number of these images. For example, the metadata for
rlm@470 1124 an eye might look like this:
rlm@470 1125
rlm@470 1126 #+begin_src clojure
rlm@470 1127 {0xFF0000 "Models/test-creature/retina-small.png"}
rlm@470 1128 #+end_src
rlm@470 1129
rlm@470 1130 #+caption: An example retinal profile image. White pixels are
rlm@470 1131 #+caption: photo-sensitive elements. The distribution of white
rlm@470 1132 #+caption: pixels is denser in the middle and falls off at the
rlm@470 1133 #+caption: edges and is inspired by the human retina.
rlm@470 1134 #+name: retina
rlm@470 1135 #+ATTR_LaTeX: :width 10cm
rlm@470 1136 [[./images/retina-small.png]]
rlm@470 1137
rlm@470 1138 Together, the number 0xFF0000 and the image above describe
rlm@470 1139 the placement of red-sensitive sensory elements.
rlm@470 1140
rlm@470 1141 Meta-data to very crudely approximate a human eye might be
rlm@470 1142 something like this:
rlm@470 1143
rlm@470 1144 #+begin_src clojure
rlm@470 1145 (let [retinal-profile "Models/test-creature/retina-small.png"]
rlm@470 1146 {0xFF0000 retinal-profile
rlm@470 1147 0x00FF00 retinal-profile
rlm@470 1148 0x0000FF retinal-profile
rlm@470 1149 0xFFFFFF retinal-profile})
rlm@470 1150 #+end_src
rlm@470 1151
rlm@470 1152 The numbers that serve as keys in the map determine a sensor's
rlm@470 1153 relative sensitivity to the channels red, green, and blue. These
rlm@470 1154 sensitivity values are packed into an integer in the order
rlm@470 1155 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
rlm@470 1156 image are added together with these sensitivities as linear
rlm@470 1157 weights. Therefore, 0xFF0000 means sensitive to red only while
rlm@470 1158 0xFFFFFF means sensitive to all colors equally (gray).
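
One plausible way to implement this weighting is sketched below; the
actual =pixel-sense= function in =CORTEX= may normalize differently.

#+begin_src clojure
(defn pixel-sense-sketch
  "Weight the R, G, and B components of pixel-rgb by the corresponding
   8-bit fields of sensitivity; returns a value between 0.0 and 1.0."
  [sensitivity pixel-rgb]
  (let [field (fn [x shift] (bit-and 0xFF (bit-shift-right x shift)))
        weights [(field sensitivity 16) (field sensitivity 8) (field sensitivity 0)]
        values  [(field pixel-rgb   16) (field pixel-rgb   8) (field pixel-rgb   0)]]
    (/ (reduce + (map * weights values))
       (* 255.0 (max 1 (reduce + weights))))))

;; (pixel-sense-sketch 0xFF0000 0xFF0000) => 1.0   (red-only sensor, red pixel)
;; (pixel-sense-sketch 0xFFFFFF 0x808080) => ~0.5  (gray sensor, mid-gray pixel)
#+end_src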
rlm@470 1159
rlm@470 1160 #+caption: This is the core of vision in =CORTEX=. A given eye node
rlm@470 1161 #+caption: is converted into a function that returns visual
rlm@470 1162 #+caption: information from the simulation.
rlm@471 1163 #+name: vision-kernel
rlm@470 1164 #+begin_listing clojure
rlm@470 1165 (defn vision-kernel
rlm@470 1166 "Returns a list of functions, each of which will return a color
rlm@470 1167 channel's worth of visual information when called inside a running
rlm@470 1168 simulation."
rlm@470 1169 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
rlm@470 1170 (let [retinal-map (retina-sensor-profile eye)
rlm@470 1171 camera (add-eye! creature eye)
rlm@470 1172 vision-image
rlm@470 1173 (atom
rlm@470 1174 (BufferedImage. (.getWidth camera)
rlm@470 1175 (.getHeight camera)
rlm@470 1176 BufferedImage/TYPE_BYTE_BINARY))
rlm@470 1177 register-eye!
rlm@470 1178 (runonce
rlm@470 1179 (fn [world]
rlm@470 1180 (add-camera!
rlm@470 1181 world camera
rlm@470 1182 (let [counter (atom 0)]
rlm@470 1183 (fn [r fb bb bi]
rlm@470 1184 (if (zero? (rem (swap! counter inc) (inc skip)))
rlm@470 1185 (reset! vision-image
rlm@470 1186 (BufferedImage! r fb bb bi))))))))]
rlm@470 1187 (vec
rlm@470 1188 (map
rlm@470 1189 (fn [[key image]]
rlm@470 1190 (let [whites (white-coordinates image)
rlm@470 1191 topology (vec (collapse whites))
rlm@470 1192 sensitivity (sensitivity-presets key key)]
rlm@470 1193 (attached-viewport.
rlm@470 1194 (fn [world]
rlm@470 1195 (register-eye! world)
rlm@470 1196 (vector
rlm@470 1197 topology
rlm@470 1198 (vec
rlm@470 1199 (for [[x y] whites]
rlm@470 1200 (pixel-sense
rlm@470 1201 sensitivity
rlm@470 1202 (.getRGB @vision-image x y))))))
rlm@470 1203 register-eye!)))
rlm@470 1204 retinal-map))))
rlm@470 1205 #+end_listing
rlm@470 1206
rlm@470 1207 Note that since each of the functions generated by =vision-kernel=
rlm@470 1208 shares the same =register-eye!= function, the eye will be
rlm@470 1209 registered only once the first time any of the functions from the
rlm@470 1210 list returned by =vision-kernel= is called. Each of the functions
rlm@470 1211 returned by =vision-kernel= also allows access to the =ViewPort=
rlm@470 1212 through which it receives images.
rlm@470 1213
rlm@470 1214 All the hard work has been done; all that remains is to apply
rlm@470 1215 =vision-kernel= to each eye in the creature and gather the results
rlm@470 1216 into one list of functions.
rlm@470 1217
rlm@470 1218
rlm@470 1219 #+caption: With =vision!=, =CORTEX= is already a fine simulation
rlm@470 1220 #+caption: environment for experimenting with different types of
rlm@470 1221 #+caption: eyes.
rlm@470 1222 #+name: vision!
rlm@470 1223 #+begin_listing clojure
rlm@470 1224 (defn vision!
rlm@470 1225 "Returns a list of functions, each of which returns visual sensory
rlm@470 1226 data when called inside a running simulation."
rlm@470 1227 [#^Node creature & {skip :skip :or {skip 0}}]
rlm@470 1228 (reduce
rlm@470 1229 concat
rlm@470 1230 (for [eye (eyes creature)]
rlm@470 1231 (vision-kernel creature eye))))
rlm@470 1232 #+end_listing
rlm@470 1233
rlm@471 1234 #+caption: Simulated vision with a test creature and the
rlm@471 1235 #+caption: human-like eye approximation. Notice how each channel
rlm@471 1236 #+caption: of the eye responds differently to the differently
rlm@471 1237 #+caption: colored balls.
rlm@471 1238 #+name: worm-vision-test.
rlm@471 1239 #+ATTR_LaTeX: :width 13cm
rlm@471 1240 [[./images/worm-vision.png]]
rlm@470 1241
rlm@471 1242 The vision code is not much more complicated than the body code,
rlm@471 1243 and enables multiple further paths for simulated vision. For
rlm@471 1244 example, it is quite easy to create bifocal vision -- you just
rlm@471 1245 make two eyes next to each other in blender! It is also possible
rlm@471 1246 to encode vision transforms in the retinal files. For example, the
rlm@471 1247 human-like retina file in figure \ref{retina} approximates a
rlm@471 1248 log-polar transform.
rlm@470 1249
rlm@471 1250 This vision code has already been absorbed by the jMonkeyEngine
rlm@471 1251 community and is now (in modified form) part of a system for
rlm@471 1252 capturing in-game video to a file.
rlm@470 1253
rlm@473 1254 ** COMMENT Hearing is hard; =CORTEX= does it right
rlm@473 1255
rlm@472 1256 At the end of this section I will have simulated ears that work the
rlm@472 1257 same way as the simulated eyes in the last section. I will be able to
rlm@472 1258 place any number of ear-nodes in a blender file, and they will bind to
rlm@472 1259 the closest physical object and follow it as it moves around. Each ear
rlm@472 1260 will provide access to the sound data it picks up between every frame.
rlm@472 1261
rlm@472 1262 Hearing is one of the more difficult senses to simulate, because there
rlm@472 1263 is less support for obtaining the actual sound data that is processed
rlm@472 1264 by jMonkeyEngine3. There is no "split-screen" support for rendering
rlm@472 1265 sound from different points of view, and there is no way to directly
rlm@472 1266 access the rendered sound data.
rlm@472 1267
rlm@472 1268 =CORTEX='s hearing is unique because it does not share the
rlm@472 1269 limitations of other simulation environments. As far as I
rlm@472 1270 know, there is no other system that supports multiple listeners,
rlm@472 1271 and the sound demo at the end of this section is the first time
rlm@472 1272 this has been done in a video game environment.
rlm@472 1273
rlm@472 1274 *** Brief Description of jMonkeyEngine's Sound System
rlm@472 1275
rlm@472 1276 jMonkeyEngine's sound system works as follows:
rlm@472 1277
rlm@472 1278 - jMonkeyEngine uses the =AppSettings= for the particular
rlm@472 1279 application to determine what sort of =AudioRenderer= should be
rlm@472 1280 used (see the sketch after this list).
rlm@472 1281 - Although some support is provided for multiple AudioRendering
rlm@472 1282 backends, jMonkeyEngine at the time of this writing will either
rlm@472 1283 pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
rlm@472 1284 - jMonkeyEngine tries to figure out what sort of system you're
rlm@472 1285 running and extracts the appropriate native libraries.
rlm@472 1286 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
rlm@472 1287 Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]]
rlm@472 1288 - =OpenAL= renders the 3D sound and feeds the rendered sound
rlm@472 1289 directly to any of various sound output devices with which it
rlm@472 1290 knows how to communicate.
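
For example, requesting the =LwjglAudioRenderer= is a one-line
=AppSettings= configuration. This is only a sketch; =CORTEX= may set
this up elsewhere.

#+begin_src clojure
(import com.jme3.system.AppSettings)

;; Ask jMonkeyEngine for the LWJGL/OpenAL audio backend. These
;; settings would then be handed to the Application via .setSettings.
(doto (AppSettings. true)
  (.setAudioRenderer AppSettings/LWJGL_OPENAL))
#+end_src

Whichever backend is selected, the final rendering of the 3D sound
still happens inside =OpenAL=.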
rlm@472 1291
rlm@472 1292 A consequence of this is that there's no way to access the actual
rlm@472 1293 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
rlm@472 1294 one /listener/ (it renders sound data from only one perspective),
rlm@472 1295 which normally isn't a problem for games, but becomes a problem
rlm@472 1296 when trying to make multiple AI creatures that can each hear the
rlm@472 1297 world from a different perspective.
rlm@472 1298
rlm@472 1299 To make many AI creatures in jMonkeyEngine that can each hear the
rlm@472 1300 world from their own perspective, or to make a single creature with
rlm@472 1301 many ears, it is necessary to go all the way back to =OpenAL= and
rlm@472 1302 implement support for simulated hearing there.
rlm@472 1303
rlm@472 1304 *** Extending =OpenAL=
rlm@472 1305
rlm@472 1306 Extending =OpenAL= to support multiple listeners requires 500
rlm@472 1307 lines of =C= code and is too hairy to mention here. Instead, I
rlm@472 1308 will show a small amount of extension code and go over the high
rlm@472 1309 level strategy. Full source is of course available with the
rlm@472 1310 =CORTEX= distribution if you're interested.
rlm@472 1311
rlm@472 1312 =OpenAL= goes to great lengths to support many different systems,
rlm@472 1313 all with different sound capabilities and interfaces. It
rlm@472 1314 accomplishes this difficult task by providing code for many
rlm@472 1315 different sound backends in pseudo-objects called /Devices/.
rlm@472 1316 There's a device for the Linux Open Sound System and the Advanced
rlm@472 1317 Linux Sound Architecture, there's one for Direct Sound on Windows,
rlm@472 1318 and there's even one for Solaris. =OpenAL= solves the problem of
rlm@472 1319 platform independence by providing all these Devices.
rlm@472 1320
rlm@472 1321 Wrapper libraries such as LWJGL are free to examine the system on
rlm@472 1322 which they are running and then select an appropriate device for
rlm@472 1323 that system.
rlm@472 1324
rlm@472 1325 There are also a few "special" devices that don't interface with
rlm@472 1326 any particular system. These include the Null Device, which
rlm@472 1327 doesn't do anything, and the Wave Device, which writes whatever
rlm@472 1328 sound it receives to a file, if everything has been set up
rlm@472 1329 correctly when configuring =OpenAL=.
rlm@472 1330
rlm@472 1331 Actual mixing (Doppler shift and distance/environment-based
rlm@472 1332 attenuation) of the sound data happens in the Devices, and they
rlm@472 1333 are the only point in the sound rendering process where this data
rlm@472 1334 is available.
rlm@472 1335
rlm@472 1336 Therefore, in order to support multiple listeners, and get the
rlm@472 1337 sound data in a form that the AIs can use, it is necessary to
rlm@472 1338 create a new Device which supports this feature.
rlm@472 1339
rlm@472 1340 Adding a device to OpenAL is rather tricky -- there are five
rlm@472 1341 separate files in the =OpenAL= source tree that must be modified
rlm@472 1342 to do so. I named my device the "Multiple Audio Send" Device, or
rlm@472 1343 =Send= Device for short, since it sends audio data back to the
rlm@472 1344 calling application like an Aux-Send cable on a mixing board.
rlm@472 1345
rlm@472 1346 The main idea behind the Send device is to take advantage of the
rlm@472 1347 fact that LWJGL only manages one /context/ when using OpenAL. A
rlm@472 1348 /context/ is like a container that holds samples and keeps track
rlm@472 1349 of where the listener is. In order to support multiple listeners,
rlm@472 1350 the Send device identifies the LWJGL context as the master
rlm@472 1351 context, and creates any number of slave contexts to represent
rlm@472 1352 additional listeners. Every time the device renders sound, it
rlm@472 1353 synchronizes every source from the master LWJGL context to the
rlm@472 1354 slave contexts. Then, it renders each context separately, using a
rlm@472 1355 different listener for each one. The rendered sound is made
rlm@472 1356 available via JNI to jMonkeyEngine.
rlm@472 1357
rlm@472 1358 Switching between contexts is not the normal operation of a
rlm@472 1359 Device, and one of the problems with doing so is that a Device
rlm@472 1360 normally keeps around a few pieces of state such as the
rlm@472 1361 =ClickRemoval= array, which will become corrupted if the
rlm@472 1362 contexts are not rendered in parallel. The solution is to create a
rlm@472 1363 copy of this normally global device state for each context, and
rlm@472 1364 copy it back and forth into and out of the actual device state
rlm@472 1365 whenever a context is rendered.
rlm@472 1366
rlm@472 1367 The core of the =Send= device is the =syncSources= function, which
rlm@472 1368 does the job of copying all relevant data from one context to
rlm@472 1369 another.
rlm@472 1370
rlm@472 1371 #+caption: Program for extending =OpenAL= to support multiple
rlm@472 1372 #+caption: listeners via context copying/switching.
rlm@472 1373 #+name: sync-openal-sources
rlm@472 1374 #+begin_listing C
rlm@472 1375 void syncSources(ALsource *masterSource, ALsource *slaveSource,
rlm@472 1376 ALCcontext *masterCtx, ALCcontext *slaveCtx){
rlm@472 1377 ALuint master = masterSource->source;
rlm@472 1378 ALuint slave = slaveSource->source;
rlm@472 1379 ALCcontext *current = alcGetCurrentContext();
rlm@472 1380
rlm@472 1381 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
rlm@472 1382 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
rlm@472 1383 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
rlm@472 1384 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
rlm@472 1385 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
rlm@472 1386 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
rlm@472 1387 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
rlm@472 1388 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
rlm@472 1389 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
rlm@472 1390 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
rlm@472 1391 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
rlm@472 1392 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
rlm@472 1393 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
rlm@472 1394
rlm@472 1395 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
rlm@472 1396 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
rlm@472 1397 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
rlm@472 1398
rlm@472 1399 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
rlm@472 1400 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
rlm@472 1401
rlm@472 1402 alcMakeContextCurrent(masterCtx);
rlm@472 1403 ALint source_type;
rlm@472 1404 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
rlm@472 1405
rlm@472 1406 // Only static sources are currently synchronized!
rlm@472 1407 if (AL_STATIC == source_type){
rlm@472 1408 ALint master_buffer;
rlm@472 1409 ALint slave_buffer;
rlm@472 1410 alGetSourcei(master, AL_BUFFER, &master_buffer);
rlm@472 1411 alcMakeContextCurrent(slaveCtx);
rlm@472 1412 alGetSourcei(slave, AL_BUFFER, &slave_buffer);
rlm@472 1413 if (master_buffer != slave_buffer){
rlm@472 1414 alSourcei(slave, AL_BUFFER, master_buffer);
rlm@472 1415 }
rlm@472 1416 }
rlm@472 1417
rlm@472 1418 // Synchronize the state of the two sources.
rlm@472 1419 alcMakeContextCurrent(masterCtx);
rlm@472 1420 ALint masterState;
rlm@472 1421 ALint slaveState;
rlm@472 1422
rlm@472 1423 alGetSourcei(master, AL_SOURCE_STATE, &masterState);
rlm@472 1424 alcMakeContextCurrent(slaveCtx);
rlm@472 1425 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
rlm@472 1426
rlm@472 1427 if (masterState != slaveState){
rlm@472 1428 switch (masterState){
rlm@472 1429 case AL_INITIAL : alSourceRewind(slave); break;
rlm@472 1430 case AL_PLAYING : alSourcePlay(slave); break;
rlm@472 1431 case AL_PAUSED : alSourcePause(slave); break;
rlm@472 1432 case AL_STOPPED : alSourceStop(slave); break;
rlm@472 1433 }
rlm@472 1434 }
rlm@472 1435 // Restore whatever context was previously active.
rlm@472 1436 alcMakeContextCurrent(current);
rlm@472 1437 }
rlm@472 1438 #+end_listing
rlm@472 1439
rlm@472 1440 With this special context-switching device, and some ugly JNI
rlm@472 1441 bindings that are not worth mentioning, =CORTEX= gains the ability
rlm@472 1442 to access multiple sound streams from =OpenAL=.
rlm@472 1443
rlm@472 1444 #+caption: Program to create an ear from a blender empty node. The ear
rlm@472 1445 #+caption: follows around the nearest physical object and passes
rlm@472 1446 #+caption: all sensory data to a continuation function.
rlm@472 1447 #+name: add-ear
rlm@472 1448 #+begin_listing clojure
rlm@472 1449 (defn add-ear!
rlm@472 1450 "Create a Listener centered on the current position of 'ear
rlm@472 1451 which follows the closest physical node in 'creature and
rlm@472 1452 sends sound data to 'continuation."
rlm@472 1453 [#^Application world #^Node creature #^Spatial ear continuation]
rlm@472 1454 (let [target (closest-node creature ear)
rlm@472 1455 lis (Listener.)
rlm@472 1456 audio-renderer (.getAudioRenderer world)
rlm@472 1457 sp (hearing-pipeline continuation)]
rlm@472 1458 (.setLocation lis (.getWorldTranslation ear))
rlm@472 1459 (.setRotation lis (.getWorldRotation ear))
rlm@472 1460 (bind-sense target lis)
rlm@472 1461 (update-listener-velocity! target lis)
rlm@472 1462 (.addListener audio-renderer lis)
rlm@472 1463 (.registerSoundProcessor audio-renderer lis sp)))
rlm@472 1464 #+end_listing
rlm@472 1465
rlm@472 1466
rlm@472 1467 The =Send= device, unlike most of the other devices in =OpenAL=,
rlm@472 1468 does not render sound unless asked. This enables the system to
rlm@472 1469 slow down or speed up depending on the needs of the AIs who are
rlm@472 1470 using it to listen. If the device tried to render samples in
rlm@472 1471 real-time, a complicated AI whose mind takes 100 seconds of
rlm@472 1472 computer time to simulate 1 second of AI-time would miss almost
rlm@472 1473 all of the sound in its environment!
rlm@472 1474
rlm@472 1475 #+caption: Program to enable arbitrary hearing in =CORTEX=
rlm@472 1476 #+name: hearing
rlm@472 1477 #+begin_listing clojure
rlm@472 1478 (defn hearing-kernel
rlm@472 1479 "Returns a function which returns auditory sensory data when called
rlm@472 1480 inside a running simulation."
rlm@472 1481 [#^Node creature #^Spatial ear]
rlm@472 1482 (let [hearing-data (atom [])
rlm@472 1483 register-listener!
rlm@472 1484 (runonce
rlm@472 1485 (fn [#^Application world]
rlm@472 1486 (add-ear!
rlm@472 1487 world creature ear
rlm@472 1488 (comp #(reset! hearing-data %)
rlm@472 1489 byteBuffer->pulse-vector))))]
rlm@472 1490 (fn [#^Application world]
rlm@472 1491 (register-listener! world)
rlm@472 1492 (let [data @hearing-data
rlm@472 1493 topology
rlm@472 1494 (vec (map #(vector % 0) (range 0 (count data))))]
rlm@472 1495 [topology data]))))
rlm@472 1496
rlm@472 1497 (defn hearing!
rlm@472 1498 "Endow the creature in a particular world with the sense of
rlm@472 1499 hearing. Will return a sequence of functions, one for each ear,
rlm@472 1500 which when called will return the auditory data from that ear."
rlm@472 1501 [#^Node creature]
rlm@472 1502 (for [ear (ears creature)]
rlm@472 1503 (hearing-kernel creature ear)))
rlm@472 1504 #+end_listing
rlm@472 1505
rlm@472 1506 Armed with these functions, =CORTEX= is able to test possibly the
rlm@472 1507 first ever instance of multiple listeners in a video game engine
rlm@472 1508 based simulation!
rlm@472 1509
rlm@472 1510 #+caption: Here a simple creature responds to sound by changing
rlm@472 1511 #+caption: its color from gray to green when the total volume
rlm@472 1512 #+caption: goes over a threshold.
rlm@472 1513 #+name: sound-test
rlm@472 1514 #+begin_listing java
rlm@472 1515 /**
rlm@472 1516 * Respond to sound! This is the brain of an AI entity that
rlm@472 1517 * hears its surroundings and reacts to them.
rlm@472 1518 */
rlm@472 1519 public void process(ByteBuffer audioSamples,
rlm@472 1520 int numSamples, AudioFormat format) {
rlm@472 1521 audioSamples.clear();
rlm@472 1522 byte[] data = new byte[numSamples];
rlm@472 1523 float[] out = new float[numSamples];
rlm@472 1524 audioSamples.get(data);
rlm@472 1525 FloatSampleTools.
rlm@472 1526 byte2floatInterleaved
rlm@472 1527 (data, 0, out, 0, numSamples/format.getFrameSize(), format);
rlm@472 1528
rlm@472 1529 float max = Float.NEGATIVE_INFINITY;
rlm@472 1530 for (float f : out){if (f > max) max = f;}
rlm@472 1531 audioSamples.clear();
rlm@472 1532
rlm@472 1533 if (max > 0.1){
rlm@472 1534 entity.getMaterial().setColor("Color", ColorRGBA.Green);
rlm@472 1535 }
rlm@472 1536 else {
rlm@472 1537 entity.getMaterial().setColor("Color", ColorRGBA.Gray);
rlm@472 1538 }
 }
rlm@472 1539 #+end_listing
rlm@472 1540
rlm@472 1541 #+caption: First ever simulation of multiple listeners in =CORTEX=.
rlm@472 1542 #+caption: Each cube is a creature which processes sound data with
rlm@472 1543 #+caption: the =process= function from listing \ref{sound-test}.
rlm@472 1544 #+caption: The ball is constantly emitting a pure tone of
rlm@472 1545 #+caption: constant volume. As it approaches the cubes, they each
rlm@472 1546 #+caption: change color in response to the sound.
rlm@472 1547 #+name: sound-cubes.
rlm@472 1548 #+ATTR_LaTeX: :width 10cm
rlm@472 1549 [[./images/aurellem-gray.png]]
rlm@472 1550
rlm@472 1551 This system of hearing has also been co-opted by the
rlm@472 1552 jMonkeyEngine3 community and is used to record audio for demo
rlm@472 1553 videos.
rlm@472 1554
rlm@436 1555 ** Touch uses hundreds of hair-like elements
rlm@436 1556
rlm@474 1557 Touch is critical to navigation and spatial reasoning, and as such I
rlm@474 1558 need a simulated version of it to give to my AI creatures.
rlm@474 1559
rlm@474 1560 Human skin has a wide array of touch sensors, each of which
rlm@474 1561 specialize in detecting different vibrational modes and pressures.
rlm@474 1562 These sensors can integrate a vast expanse of skin (i.e. your
rlm@474 1563 entire palm), or a tiny patch of skin at the tip of your finger.
rlm@474 1564 The hairs of the skin help detect objects before they even come
rlm@474 1565 into contact with the skin proper.
rlm@474 1566
rlm@474 1567 However, touch in my simulated world can not exactly correspond to
rlm@474 1568 human touch because my creatures are made out of completely rigid
rlm@474 1569 segments that don't deform like human skin.
rlm@474 1570
rlm@474 1571 Instead of measuring deformation or vibration, I surround each
rlm@474 1572 rigid part with a plenitude of hair-like objects (/feelers/) which
rlm@474 1573 do not interact with the physical world. Physical objects can pass
rlm@474 1574 through them with no effect. The feelers are able to tell when
rlm@474 1575 other objects pass through them, and they constantly report how
rlm@474 1576 much of their extent is covered. So even though the creature's body
rlm@474 1577 parts do not deform, the feelers create a margin around those body
rlm@474 1578 parts which achieves a sense of touch which is a hybrid between a
rlm@474 1579 human's sense of deformation and sense from hairs.
rlm@474 1580
rlm@474 1581 Implementing touch in jMonkeyEngine follows a different technical
rlm@474 1582 route than vision and hearing. Those two senses piggybacked off
rlm@474 1583 jMonkeyEngine's 3D audio and video rendering subsystems. To
rlm@474 1584 simulate touch, I use jMonkeyEngine's physics system to execute
rlm@474 1585 many small collision detections, one for each feeler. The placement
rlm@474 1586 of the feelers is determined by a UV-mapped image which shows where
rlm@474 1587 each feeler should be on the 3D surface of the body.
rlm@474 1588
rlm@474 1589 *** Defining Touch Meta-Data in Blender
rlm@474 1590
rlm@474 1591 Each geometry can have a single UV map which describes the
rlm@474 1592 positions of the feelers which will constitute its sense of touch.
rlm@474 1593 The path to this image is stored under the ``touch'' key. The image itself
rlm@474 1594 is black and white, with black meaning a feeler length of 0 (no
rlm@474 1595 feeler is present) and white meaning a feeler length of =scale=,
rlm@474 1596 which is a float stored under the key ``scale''.
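
For example, the metadata stored on a segment's geometry might
conceptually look like the following map (the image path here is
hypothetical):

#+begin_src clojure
{"touch" "Models/test-creature/touch-profile.png" ; UV touch-sensor image
 "scale" 0.05}                                    ; feeler length in jME units
#+end_src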
rlm@474 1597
rlm@474 1598 #+name: meta-data
rlm@474 1599 #+begin_src clojure
rlm@474 1600 (defn tactile-sensor-profile
rlm@474 1601 "Return the touch-sensor distribution image in BufferedImage format,
rlm@474 1602 or nil if it does not exist."
rlm@474 1603 [#^Geometry obj]
rlm@474 1604 (if-let [image-path (meta-data obj "touch")]
rlm@474 1605 (load-image image-path)))
rlm@474 1606
rlm@474 1607 (defn tactile-scale
rlm@474 1608   "Return the length of each feeler. Default scale is 0.1
rlm@474 1609 jMonkeyEngine units."
rlm@474 1610 [#^Geometry obj]
rlm@474 1611 (if-let [scale (meta-data obj "scale")]
rlm@474 1612 scale 0.1))
rlm@474 1613 #+end_src
rlm@474 1614
rlm@474 1615 Here is an example of a UV-map which specifies the position of touch
rlm@474 1616 sensors along the surface of the upper segment of the worm.
rlm@474 1617
rlm@474 1618 #+ATTR_LaTeX: :width 10cm
rlm@474 1619 #+caption: This is the tactile-sensor-profile for the upper segment of the worm. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse).
rlm@474 1620 [[../images/finger-UV.png]]
rlm@474 1621
rlm@474 1622 *** Implementation Summary
rlm@474 1623
rlm@474 1624 To simulate touch there are three conceptual steps. For each solid
rlm@474 1625 object in the creature, you first have to get the UV image and scale
rlm@474 1626 parameter which define the positions and lengths of the feelers.
rlm@474 1627 Then, you use the triangles which comprise the mesh and the UV
rlm@474 1628 data stored in the mesh to determine the world-space position and
rlm@474 1629 orientation of each feeler. Finally, once every frame, you update these
rlm@474 1630 positions and orientations to match the current position and
rlm@474 1631 orientation of the object, and use physics collision detection to
rlm@474 1632 gather tactile data.
rlm@474 1633
rlm@474 1634 Extracting the meta-data has already been described. The third
rlm@474 1635 step, physics collision detection, is handled in =touch-kernel=.
rlm@474 1636 Translating the positions and orientations of the feelers from the
rlm@474 1637 UV-map to world-space is itself a three-step process.
rlm@474 1638
rlm@474 1639 - Find the triangles which make up the mesh in pixel-space and in
rlm@474 1640 world-space: =triangles=, =pixel-triangles=.
rlm@474 1641
rlm@474 1642 - Find the coordinates of each feeler in world-space. These are the
rlm@474 1643 origins of the feelers: =feeler-origins=.
rlm@474 1644
rlm@474 1645 - Calculate the normals of the triangles in world space, and add
rlm@474 1646 them to each of the origins of the feelers. These are the
rlm@474 1647 coordinates of the tips of the feelers: =feeler-tips=.
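
Putting these pieces together, the data flow looks roughly like the
sketch below, which uses the functions defined in the next few
sections. Here =geo= is assumed to be a =Geometry= with a ``touch''
UV map.

#+begin_src clojure
(defn feeler-segments-sketch
  "Pair each feeler's world-space origin with its world-space tip."
  [geo]
  (let [profile (tactile-sensor-profile geo)]
    (map vector
         (feeler-origins geo profile)    ; world-space roots of the feelers
         (feeler-tips geo profile))))    ; roots offset by the triangle normals
#+end_src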
rlm@474 1648
rlm@474 1649 *** Triangle Math
rlm@474 1650
rlm@474 1651 The rigid objects which make up a creature have an underlying
rlm@474 1652 =Geometry=, which is a =Mesh= plus a =Material= and other important
rlm@474 1653 data involved with displaying the object.
rlm@474 1654
rlm@474 1655 A =Mesh= is composed of =Triangles=, and each =Triangle= has three
rlm@474 1656 vertices which have coordinates in world space and UV space.
rlm@474 1657
rlm@474 1658 Here, =triangles= gets all the world-space triangles which comprise a
rlm@474 1659 mesh, while =pixel-triangles= gets those same triangles expressed in
rlm@474 1660 pixel coordinates (which are UV coordinates scaled to fit the height
rlm@474 1661 and width of the UV image).
rlm@474 1662
rlm@474 1663 #+name: triangles-2
rlm@474 1664 #+begin_src clojure
rlm@474 1665 (in-ns 'cortex.touch)
rlm@474 1666 (defn triangle
rlm@474 1667 "Get the triangle specified by triangle-index from the mesh."
rlm@474 1668 [#^Geometry geo triangle-index]
rlm@474 1669 (triangle-seq
rlm@474 1670 (let [scratch (Triangle.)]
rlm@474 1671 (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))
rlm@474 1672
rlm@474 1673 (defn triangles
rlm@474 1674 "Return a sequence of all the Triangles which comprise a given
rlm@474 1675 Geometry."
rlm@474 1676 [#^Geometry geo]
rlm@474 1677 (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))
rlm@474 1678
rlm@474 1679 (defn triangle-vertex-indices
rlm@474 1680 "Get the triangle vertex indices of a given triangle from a given
rlm@474 1681 mesh."
rlm@474 1682 [#^Mesh mesh triangle-index]
rlm@474 1683 (let [indices (int-array 3)]
rlm@474 1684 (.getTriangle mesh triangle-index indices)
rlm@474 1685 (vec indices)))
rlm@474 1686
rlm@474 1687 (defn vertex-UV-coord
rlm@474 1688 "Get the UV-coordinates of the vertex named by vertex-index"
rlm@474 1689 [#^Mesh mesh vertex-index]
rlm@474 1690 (let [UV-buffer
rlm@474 1691 (.getData
rlm@474 1692 (.getBuffer
rlm@474 1693 mesh
rlm@474 1694 VertexBuffer$Type/TexCoord))]
rlm@474 1695 [(.get UV-buffer (* vertex-index 2))
rlm@474 1696 (.get UV-buffer (+ 1 (* vertex-index 2)))]))
rlm@474 1697
rlm@474 1698 (defn pixel-triangle [#^Geometry geo image index]
rlm@474 1699 (let [mesh (.getMesh geo)
rlm@474 1700 width (.getWidth image)
rlm@474 1701 height (.getHeight image)]
rlm@474 1702 (vec (map (fn [[u v]] (vector (* width u) (* height v)))
rlm@474 1703 (map (partial vertex-UV-coord mesh)
rlm@474 1704 (triangle-vertex-indices mesh index))))))
rlm@474 1705
rlm@474 1706 (defn pixel-triangles
rlm@474 1707 "The pixel-space triangles of the Geometry, in the same order as
rlm@474 1708 (triangles geo)"
rlm@474 1709 [#^Geometry geo image]
rlm@474 1710 (let [height (.getHeight image)
rlm@474 1711 width (.getWidth image)]
rlm@474 1712 (map (partial pixel-triangle geo image)
rlm@474 1713 (range (.getTriangleCount (.getMesh geo))))))
rlm@474 1714 #+end_src
rlm@474 1715
rlm@474 1716 *** The Affine Transform from one Triangle to Another
rlm@474 1717
rlm@474 1718 =pixel-triangles= gives us the mesh triangles expressed in pixel
rlm@474 1719 coordinates and =triangles= gives us the mesh triangles expressed in
rlm@474 1720 world coordinates. The tactile-sensor-profile gives the position of
rlm@474 1721 each feeler in pixel-space. In order to convert pixel-space
rlm@474 1722 coordinates into world-space coordinates we need something that takes
rlm@474 1723 coordinates on the surface of one triangle and gives the corresponding
rlm@474 1724 coordinates on the surface of another triangle.
rlm@474 1725
rlm@474 1726 Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which means any triangle can be transformed into
rlm@474 1727 any other by a combination of translation, scaling, and
rlm@474 1728 rotation. The affine transformation from one triangle to another
rlm@474 1729 is readily computable if the triangle is expressed in terms of a $4 \times 4$
rlm@474 1730 matrix.
rlm@474 1731
rlm@474 1732 \begin{equation*} \begin{bmatrix}
rlm@474 1733 x_1 & x_2 & x_3 & n_x \\
rlm@474 1734 y_1 & y_2 & y_3 & n_y \\
rlm@474 1735 z_1 & z_2 & z_3 & n_z \\
rlm@474 1736 1 & 1 & 1 & 1
rlm@474 1737 \end{bmatrix} \end{equation*}
rlm@474 1738
rlm@474 1739 Here, the first three columns of the matrix are the vertices of the
rlm@474 1740 triangle. The last column is the right-handed unit normal of the
rlm@474 1741 triangle.
rlm@474 1742
rlm@474 1743 With two triangles $T_{1}$ and $T_{2}$ each expressed as a matrix like
rlm@474 1744 above, the affine transform from $T_{1}$ to $T_{2}$ is
rlm@474 1745
rlm@474 1746 $T_{2}T_{1}^{-1}$
rlm@474 1747
rlm@474 1748 The clojure code below recapitulates the formulas above, using
rlm@474 1749 jMonkeyEngine's =Matrix4f= objects, which can describe any affine
rlm@474 1750 transformation.
rlm@474 1751
rlm@474 1752 #+name: triangles-3
rlm@474 1753 #+begin_src clojure
rlm@474 1754 (in-ns 'cortex.touch)
rlm@474 1755
rlm@474 1756 (defn triangle->matrix4f
rlm@474 1757 "Converts the triangle into a 4x4 matrix: The first three columns
rlm@474 1758 contain the vertices of the triangle; the last contains the unit
rlm@474 1759 normal of the triangle. The bottom row is filled with 1s."
rlm@474 1760 [#^Triangle t]
rlm@474 1761 (let [mat (Matrix4f.)
rlm@474 1762 [vert-1 vert-2 vert-3]
rlm@474 1763 (mapv #(.get t %) (range 3))
rlm@474 1764 unit-normal (do (.calculateNormal t)(.getNormal t))
rlm@474 1765 vertices [vert-1 vert-2 vert-3 unit-normal]]
rlm@474 1766 (dorun
rlm@474 1767 (for [row (range 4) col (range 3)]
rlm@474 1768 (do
rlm@474 1769 (.set mat col row (.get (vertices row) col))
rlm@474 1770 (.set mat 3 row 1)))) mat))
rlm@474 1771
rlm@474 1772 (defn triangles->affine-transform
rlm@474 1773 "Returns the affine transformation that converts each vertex in the
rlm@474 1774 first triangle into the corresponding vertex in the second
rlm@474 1775 triangle."
rlm@474 1776 [#^Triangle tri-1 #^Triangle tri-2]
rlm@474 1777 (.mult
rlm@474 1778 (triangle->matrix4f tri-2)
rlm@474 1779 (.invert (triangle->matrix4f tri-1))))
rlm@474 1780 #+end_src
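
As a quick sanity check (a sketch, assuming jMonkeyEngine's math
classes are imported), the computed transform should map each vertex
of the first triangle onto the corresponding vertex of the second:

#+begin_src clojure
(let [tri-1 (Triangle. (Vector3f. 0 0 0) (Vector3f. 1 0 0) (Vector3f. 0 1 0))
      tri-2 (Triangle. (Vector3f. 0 0 0) (Vector3f. 2 0 0) (Vector3f. 0 2 0))
      xform (triangles->affine-transform tri-1 tri-2)]
  ;; apply the transform to tri-1's second vertex ...
  (.mult xform (Vector3f. 1 0 0)))
;; => roughly (2.0, 0.0, 0.0), which is tri-2's second vertex
#+end_src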
rlm@474 1781
rlm@474 1782 *** Triangle Boundaries
rlm@474 1783
rlm@474 1784 For efficiency's sake I will divide the tactile-profile image into
rlm@474 1785 small squares which circumscribe each pixel-triangle, then extract
rlm@474 1786 the points which lie inside the triangle and map them to 3D-space
rlm@474 1787 using =triangles->affine-transform= above. To do this I need a
rlm@474 1788 function, =convex-bounds=, which finds the smallest box that
rlm@474 1789 contains a 2D triangle.
rlm@474 1790
rlm@474 1791 =inside-triangle?= determines whether a point is inside a triangle
rlm@474 1792 in 2D pixel-space.
rlm@474 1793
rlm@474 1794 #+name: triangles-4
rlm@474 1795 #+begin_src clojure
rlm@474 1796 (defn convex-bounds
rlm@474 1797   "Returns the smallest axis-aligned rectangle containing the given
rlm@474 1798    vertices, as a vector of integers [left top width height]."
rlm@474 1799 [verts]
rlm@474 1800 (let [xs (map first verts)
rlm@474 1801 ys (map second verts)
rlm@474 1802 x0 (Math/floor (apply min xs))
rlm@474 1803 y0 (Math/floor (apply min ys))
rlm@474 1804 x1 (Math/ceil (apply max xs))
rlm@474 1805 y1 (Math/ceil (apply max ys))]
rlm@474 1806 [x0 y0 (- x1 x0) (- y1 y0)]))
rlm@474 1807
rlm@474 1808 (defn same-side?
rlm@474 1809 "Given the points p1 and p2 and the reference point ref, is point p
rlm@474 1810 on the same side of the line that goes through p1 and p2 as ref is?"
rlm@474 1811 [p1 p2 ref p]
rlm@474 1812 (<=
rlm@474 1813 0
rlm@474 1814 (.dot
rlm@474 1815 (.cross (.subtract p2 p1) (.subtract p p1))
rlm@474 1816 (.cross (.subtract p2 p1) (.subtract ref p1)))))
rlm@474 1817
rlm@474 1818 (defn inside-triangle?
rlm@474 1819 "Is the point inside the triangle?"
rlm@474 1820 {:author "Dylan Holmes"}
rlm@474 1821 [#^Triangle tri #^Vector3f p]
rlm@474 1822 (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]
rlm@474 1823 (and
rlm@474 1824 (same-side? vert-1 vert-2 vert-3 p)
rlm@474 1825 (same-side? vert-2 vert-3 vert-1 p)
rlm@474 1826 (same-side? vert-3 vert-1 vert-2 p))))
rlm@474 1827 #+end_src
rlm@474 1828
rlm@474 1829 *** Feeler Coordinates
rlm@474 1830
rlm@474 1831 The triangle-related functions above make short work of calculating
rlm@474 1832 the positions and orientations of each feeler in world-space.
rlm@474 1833
rlm@474 1834 #+name: sensors
rlm@474 1835 #+begin_src clojure
rlm@474 1836 (in-ns 'cortex.touch)
rlm@474 1837
rlm@474 1838 (defn feeler-pixel-coords
rlm@474 1839 "Returns the coordinates of the feelers in pixel space in lists, one
rlm@474 1840 list for each triangle, ordered in the same way as (triangles) and
rlm@474 1841 (pixel-triangles)."
rlm@474 1842 [#^Geometry geo image]
rlm@474 1843 (map
rlm@474 1844 (fn [pixel-triangle]
rlm@474 1845 (filter
rlm@474 1846 (fn [coord]
rlm@474 1847 (inside-triangle? (->triangle pixel-triangle)
rlm@474 1848 (->vector3f coord)))
rlm@474 1849 (white-coordinates image (convex-bounds pixel-triangle))))
rlm@474 1850 (pixel-triangles geo image)))
rlm@474 1851
rlm@474 1852 (defn feeler-world-coords
rlm@474 1853 "Returns the coordinates of the feelers in world space in lists, one
rlm@474 1854 list for each triangle, ordered in the same way as (triangles) and
rlm@474 1855 (pixel-triangles)."
rlm@474 1856 [#^Geometry geo image]
rlm@474 1857 (let [transforms
rlm@474 1858 (map #(triangles->affine-transform
rlm@474 1859 (->triangle %1) (->triangle %2))
rlm@474 1860 (pixel-triangles geo image)
rlm@474 1861 (triangles geo))]
rlm@474 1862 (map (fn [transform coords]
rlm@474 1863 (map #(.mult transform (->vector3f %)) coords))
rlm@474 1864 transforms (feeler-pixel-coords geo image))))
rlm@474 1865
rlm@474 1866 (defn feeler-origins
rlm@474 1867 "The world space coordinates of the root of each feeler."
rlm@474 1868 [#^Geometry geo image]
rlm@474 1869 (reduce concat (feeler-world-coords geo image)))
rlm@474 1870
rlm@474 1871 (defn feeler-tips
rlm@474 1872 "The world space coordinates of the tip of each feeler."
rlm@474 1873 [#^Geometry geo image]
rlm@474 1874 (let [world-coords (feeler-world-coords geo image)
rlm@474 1875 normals
rlm@474 1876 (map
rlm@474 1877 (fn [triangle]
rlm@474 1878 (.calculateNormal triangle)
rlm@474 1879 (.clone (.getNormal triangle)))
rlm@474 1880 (map ->triangle (triangles geo)))]
rlm@474 1881
rlm@474 1882 (mapcat (fn [origins normal]
rlm@474 1883 (map #(.add % normal) origins))
rlm@474 1884 world-coords normals)))
rlm@474 1885
rlm@474 1886 (defn touch-topology
rlm@474 1887   "Return the 2D topology of the feelers: their pixel coordinates, collapsed into a contiguous region."
rlm@474 1888 [#^Geometry geo image]
rlm@474 1889 (collapse (reduce concat (feeler-pixel-coords geo image))))
rlm@474 1890 #+end_src
rlm@474 1891 *** Simulated Touch
rlm@474 1892
rlm@474 1893 =touch-kernel= generates functions to be called from within a
rlm@474 1894 simulation that perform the necessary physics collisions to collect
rlm@474 1895 tactile data, and =touch!= recursively applies it to every node in
rlm@474 1896 the creature.
rlm@474 1897
rlm@474 1898 #+name: kernel
rlm@474 1899 #+begin_src clojure
rlm@474 1900 (in-ns 'cortex.touch)
rlm@474 1901
rlm@474 1902 (defn set-ray [#^Ray ray #^Matrix4f transform
rlm@474 1903 #^Vector3f origin #^Vector3f tip]
rlm@474 1904 ;; Doing everything locally reduces garbage collection by enough to
rlm@474 1905 ;; be worth it.
rlm@474 1906 (.mult transform origin (.getOrigin ray))
rlm@474 1907 (.mult transform tip (.getDirection ray))
rlm@474 1908 (.subtractLocal (.getDirection ray) (.getOrigin ray))
rlm@474 1909 (.normalizeLocal (.getDirection ray)))
rlm@474 1910
rlm@474 1911 (import com.jme3.math.FastMath)
rlm@474 1912
rlm@474 1913 (defn touch-kernel
rlm@474 1914 "Constructs a function which will return tactile sensory data from
rlm@474 1915 'geo when called from inside a running simulation"
rlm@474 1916 [#^Geometry geo]
rlm@474 1917 (if-let
rlm@474 1918 [profile (tactile-sensor-profile geo)]
rlm@474 1919 (let [ray-reference-origins (feeler-origins geo profile)
rlm@474 1920 ray-reference-tips (feeler-tips geo profile)
rlm@474 1921 ray-length (tactile-scale geo)
rlm@474 1922 current-rays (map (fn [_] (Ray.)) ray-reference-origins)
rlm@474 1923 topology (touch-topology geo profile)
rlm@474 1924 correction (float (* ray-length -0.2))]
rlm@474 1925
rlm@474 1926 ;; slight tolerance for very close collisions.
rlm@474 1927 (dorun
rlm@474 1928 (map (fn [origin tip]
rlm@474 1929 (.addLocal origin (.mult (.subtract tip origin)
rlm@474 1930 correction)))
rlm@474 1931 ray-reference-origins ray-reference-tips))
rlm@474 1932 (dorun (map #(.setLimit % ray-length) current-rays))
rlm@474 1933 (fn [node]
rlm@474 1934 (let [transform (.getWorldMatrix geo)]
rlm@474 1935 (dorun
rlm@474 1936 (map (fn [ray ref-origin ref-tip]
rlm@474 1937 (set-ray ray transform ref-origin ref-tip))
rlm@474 1938 current-rays ray-reference-origins
rlm@474 1939 ray-reference-tips))
rlm@474 1940 (vector
rlm@474 1941 topology
rlm@474 1942 (vec
rlm@474 1943 (for [ray current-rays]
rlm@474 1944 (do
rlm@474 1945 (let [results (CollisionResults.)]
rlm@474 1946 (.collideWith node ray results)
rlm@474 1947 (let [touch-objects
rlm@474 1948 (filter #(not (= geo (.getGeometry %)))
rlm@474 1949 results)
rlm@474 1950 limit (.getLimit ray)]
rlm@474 1951 [(if (empty? touch-objects)
rlm@474 1952 limit
rlm@474 1953 (let [response
rlm@474 1954 (apply min (map #(.getDistance %)
rlm@474 1955 touch-objects))]
rlm@474 1956 (FastMath/clamp
rlm@474 1957 (float
rlm@474 1958 (if (> response limit) (float 0.0)
rlm@474 1959 (+ response correction)))
rlm@474 1960 (float 0.0)
rlm@474 1961 limit)))
rlm@474 1962 limit])))))))))))
rlm@474 1963
rlm@474 1964 (defn touch!
rlm@474 1965 "Endow the creature with the sense of touch. Returns a sequence of
rlm@474 1966 functions, one for each body part with a tactile-sensor-profile,
rlm@474 1967 each of which when called returns sensory data for that body part."
rlm@474 1968 [#^Node creature]
rlm@474 1969 (filter
rlm@474 1970 (comp not nil?)
rlm@474 1971 (map touch-kernel
rlm@474 1972 (filter #(isa? (class %) Geometry)
rlm@474 1973 (node-seq creature)))))
rlm@474 1974 #+end_src
rlm@474 1975
rlm@474 1976
rlm@474 1977 Armed with the =touch!= function, =CORTEX= becomes capable of giving
rlm@474 1978 creatures a sense of touch. A simple test is to create a cube that is
rlm@474 1979 outfitted with a uniform distribution of touch sensors. It can feel
rlm@474 1980 the ground and any balls that it touches.
rlm@474 1981
rlm@474 1982 # insert touch cube image; UV map
rlm@474 1983 # insert video
rlm@474 1984
rlm@440 1985 ** Proprioception is the sense that makes everything ``real''
rlm@436 1986
rlm@436 1987 ** Muscles are both effectors and sensors
rlm@436 1988
rlm@436 1989 ** =CORTEX= brings complex creatures to life!
rlm@436 1990
rlm@436 1991 ** =CORTEX= enables many possibilities for further research
rlm@474 1992
rlm@465 1993 * COMMENT Empathy in a simulated worm
rlm@435 1994
rlm@449 1995 Here I develop a computational model of empathy, using =CORTEX= as a
rlm@449 1996 base. Empathy in this context is the ability to observe another
rlm@449 1997 creature and infer what sorts of sensations that creature is
rlm@449 1998 feeling. My empathy algorithm involves multiple phases. First is
rlm@449 1999 free-play, where the creature moves around and gains sensory
rlm@449 2000 experience. From this experience I construct a representation of the
rlm@449 2001 creature's sensory state space, which I call \Phi-space. Using
rlm@449 2002 \Phi-space, I construct an efficient function which takes the
rlm@449 2003 limited data that comes from observing another creature and enriches
rlm@449 2004 it to a full complement of imagined sensory data. I can then use the
rlm@449 2005 imagined sensory data to recognize what the observed creature is
rlm@449 2006 doing and feeling, using straightforward embodied action predicates.
rlm@449 2007 This is all demonstrated using a simple worm-like creature, and
rlm@449 2008 recognizing worm-actions based on limited data.
rlm@449 2009
rlm@449 2010 #+caption: Here is the worm with which we will be working.
rlm@449 2011 #+caption: It is composed of 5 segments. Each segment has a
rlm@449 2012 #+caption: pair of extensor and flexor muscles. Each of the
rlm@449 2013 #+caption: worm's four joints is a hinge joint which allows
rlm@451 2014 #+caption: about 30 degrees of rotation to either side. Each segment
rlm@449 2015 #+caption: of the worm is touch-capable and has a uniform
rlm@449 2016 #+caption: distribution of touch sensors on each of its faces.
rlm@449 2017 #+caption: Each joint has a proprioceptive sense to detect
rlm@449 2018 #+caption: relative positions. The worm segments are all the
rlm@449 2019 #+caption: same except for the first one, which has a much
rlm@449 2020 #+caption: higher weight than the others to allow for easy
rlm@449 2021 #+caption: manual motor control.
rlm@449 2022 #+name: basic-worm-view
rlm@449 2023 #+ATTR_LaTeX: :width 10cm
rlm@449 2024 [[./images/basic-worm-view.png]]
rlm@449 2025
rlm@449 2026 #+caption: Program for reading a worm from a blender file and
rlm@449 2027 #+caption: outfitting it with the senses of proprioception,
rlm@449 2028 #+caption: touch, and the ability to move, as specified in the
rlm@449 2029 #+caption: blender file.
rlm@449 2030 #+name: get-worm
rlm@449 2031 #+begin_listing clojure
rlm@449 2032 #+begin_src clojure
rlm@449 2033 (defn worm []
rlm@449 2034 (let [model (load-blender-model "Models/worm/worm.blend")]
rlm@449 2035 {:body (doto model (body!))
rlm@449 2036 :touch (touch! model)
rlm@449 2037 :proprioception (proprioception! model)
rlm@449 2038 :muscles (movement! model)}))
rlm@449 2039 #+end_src
rlm@449 2040 #+end_listing
rlm@452 2041
rlm@436 2042 ** Embodiment factors action recognition into manageable parts
rlm@435 2043
rlm@449 2044 Using empathy, I divide the problem of action recognition into a
rlm@449 2045 recognition process expressed in the language of a full complement
rlm@449 2046 of senses, and an imaginative process that generates full sensory
rlm@449 2047 data from partial sensory data. Splitting the action recognition
rlm@449 2048 problem in this manner greatly reduces the total amount of work to
rlm@449 2049 recognize actions: The imaginative process is mostly just matching
rlm@449 2050 previous experience, and the recognition process gets to use all
rlm@449 2051 the senses to directly describe any action.
rlm@449 2052
rlm@436 2053 ** Action recognition is easy with a full gamut of senses
rlm@435 2054
rlm@449 2055 Embodied representations using multiple senses such as touch,
rlm@449 2056 proprioception, and muscle tension turn out to be exceedingly
rlm@449 2057 efficient at describing body-centered actions. It is the ``right
rlm@449 2058 language for the job''. For example, it takes only around 5 lines
rlm@449 2059 of LISP code to describe the action of ``curling'' using embodied
rlm@451 2060 primitives. It takes about 10 lines to describe the seemingly
rlm@449 2061 complicated action of wiggling.
rlm@449 2062
rlm@449 2063 The following action predicates each take a stream of sensory
rlm@449 2064 experience, observe however much of it they desire, and decide
rlm@449 2065 whether the worm is doing the action they describe. =curled?=
rlm@449 2066 relies on proprioception, =resting?= relies on touch, =wiggling?=
rlm@449 2067 relies on a fourier analysis of muscle contraction, and
rlm@449 2068 =grand-circle?= relies on touch and reuses =curled?= as a guard.
rlm@449 2069
rlm@449 2070 #+caption: Program for detecting whether the worm is curled. This is the
rlm@449 2071 #+caption: simplest action predicate, because it only uses the last frame
rlm@449 2072 #+caption: of sensory experience, and only uses proprioceptive data. Even
rlm@449 2073 #+caption: this simple predicate, however, is automatically frame
rlm@449 2074 #+caption: independent and ignores vermopomorphic differences such as
rlm@449 2075 #+caption: worm textures and colors.
rlm@449 2076 #+name: curled
rlm@452 2077 #+attr_latex: [htpb]
rlm@452 2078 #+begin_listing clojure
rlm@449 2079 #+begin_src clojure
rlm@449 2080 (defn curled?
rlm@449 2081 "Is the worm curled up?"
rlm@449 2082 [experiences]
rlm@449 2083 (every?
rlm@449 2084 (fn [[_ _ bend]]
rlm@449 2085 (> (Math/sin bend) 0.64))
rlm@449 2086 (:proprioception (peek experiences))))
rlm@449 2087 #+end_src
rlm@449 2088 #+end_listing
rlm@449 2089
rlm@449 2090 #+caption: Program for summarizing the touch information in a patch
rlm@449 2091 #+caption: of skin.
rlm@449 2092 #+name: touch-summary
rlm@452 2093 #+attr_latex: [htpb]
rlm@452 2095 #+begin_listing clojure
rlm@449 2096 #+begin_src clojure
rlm@449 2097 (defn contact
rlm@449 2098 "Determine how much contact a particular worm segment has with
rlm@449 2099 other objects. Returns a value between 0 and 1, where 1 is full
rlm@449 2100 contact and 0 is no contact."
rlm@449 2101 [touch-region [coords contact :as touch]]
rlm@449 2102 (-> (zipmap coords contact)
rlm@449 2103 (select-keys touch-region)
rlm@449 2104 (vals)
rlm@449 2105 (#(map first %))
rlm@449 2106 (average)
rlm@449 2107 (* 10)
rlm@449 2108 (- 1)
rlm@449 2109 (Math/abs)))
rlm@449 2110 #+end_src
rlm@449 2111 #+end_listing
rlm@449 2112
rlm@449 2113
rlm@449 2114 #+caption: Program for detecting whether the worm is at rest. This program
rlm@449 2115 #+caption: uses a summary of the tactile information from the underbelly
rlm@449 2116 #+caption: of the worm, and is only true if every segment is touching the
rlm@449 2117 #+caption: floor. Note that this function contains no references to
rlm@449 2118 #+caption: proprioception at all.
rlm@449 2119 #+name: resting
rlm@452 2120 #+attr_latex: [htpb]
rlm@452 2121 #+begin_listing clojure
rlm@449 2122 #+begin_src clojure
rlm@449 2123 (def worm-segment-bottom (rect-region [8 15] [14 22]))
rlm@449 2124
rlm@449 2125 (defn resting?
rlm@449 2126 "Is the worm resting on the ground?"
rlm@449 2127 [experiences]
rlm@449 2128 (every?
rlm@449 2129 (fn [touch-data]
rlm@449 2130 (< 0.9 (contact worm-segment-bottom touch-data)))
rlm@449 2131 (:touch (peek experiences))))
rlm@449 2132 #+end_src
rlm@449 2133 #+end_listing
rlm@449 2134
rlm@449 2135 #+caption: Program for detecting whether the worm is curled up into a
rlm@449 2136 #+caption: full circle. Here the embodied approach begins to shine, as
rlm@449 2137 #+caption: I am able to both use a previous action predicate (=curled?=)
rlm@449 2138 #+caption: as well as the direct tactile experience of the head and tail.
rlm@449 2139 #+name: grand-circle
rlm@452 2140 #+attr_latex: [htpb]
rlm@452 2141 #+begin_listing clojure
rlm@449 2142 #+begin_src clojure
rlm@449 2143 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
rlm@449 2144
rlm@449 2145 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
rlm@449 2146
rlm@449 2147 (defn grand-circle?
rlm@449 2148 "Does the worm form a majestic circle (one end touching the other)?"
rlm@449 2149 [experiences]
rlm@449 2150 (and (curled? experiences)
rlm@449 2151 (let [worm-touch (:touch (peek experiences))
rlm@449 2152 tail-touch (worm-touch 0)
rlm@449 2153 head-touch (worm-touch 4)]
rlm@449 2154 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
rlm@449 2155 (< 0.55 (contact worm-segment-top-tip head-touch))))))
rlm@449 2156 #+end_src
rlm@449 2157 #+end_listing
rlm@449 2158
rlm@449 2159
rlm@449 2160 #+caption: Program for detecting whether the worm has been wiggling for
rlm@449 2161 #+caption: the last few frames. It uses a fourier analysis of the muscle
rlm@449 2162 #+caption: contractions of the worm's tail to determine wiggling. This is
rlm@449 2163 #+caption: significant because there is no particular frame that clearly
rlm@449 2164 #+caption: indicates that the worm is wiggling --- only when multiple frames
rlm@449 2165 #+caption: are analyzed together is the wiggling revealed. Defining
rlm@449 2166 #+caption: wiggling this way also gives the worm an opportunity to learn
rlm@449 2167 #+caption: and recognize ``frustrated wiggling'', where the worm tries to
rlm@449 2168 #+caption: wiggle but can't. Frustrated wiggling is very visually different
rlm@449 2169 #+caption: from actual wiggling, but this definition gives it to us for free.
rlm@449 2170 #+name: wiggling
rlm@452 2171 #+attr_latex: [htpb]
rlm@452 2172 #+begin_listing clojure
rlm@449 2173 #+begin_src clojure
rlm@449 2174 (defn fft [nums]
rlm@449 2175 (map
rlm@449 2176 #(.getReal %)
rlm@449 2177 (.transform
rlm@449 2178 (FastFourierTransformer. DftNormalization/STANDARD)
rlm@449 2179 (double-array nums) TransformType/FORWARD)))
rlm@449 2180
rlm@449 2181 (def indexed (partial map-indexed vector))
rlm@449 2182
rlm@449 2183 (defn max-indexed [s]
rlm@449 2184 (first (sort-by (comp - second) (indexed s))))
rlm@449 2185
rlm@449 2186 (defn wiggling?
rlm@449 2187 "Is the worm wiggling?"
rlm@449 2188 [experiences]
rlm@449 2189 (let [analysis-interval 0x40]
rlm@449 2190 (when (> (count experiences) analysis-interval)
rlm@449 2191 (let [a-flex 3
rlm@449 2192 a-ex 2
rlm@449 2193 muscle-activity
rlm@449 2194 (map :muscle (vector:last-n experiences analysis-interval))
rlm@449 2195 base-activity
rlm@449 2196 (map #(- (% a-flex) (% a-ex)) muscle-activity)]
rlm@449 2197 (= 2
rlm@449 2198 (first
rlm@449 2199 (max-indexed
rlm@449 2200 (map #(Math/abs %)
rlm@449 2201 (take 20 (fft base-activity))))))))))
rlm@449 2202 #+end_src
rlm@449 2203 #+end_listing
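
For intuition, =max-indexed= pairs each element with its index and
returns the [index value] pair of the largest element. The final
check in =wiggling?= therefore asks whether the dominant component
among the first 20 Fourier coefficients sits in bin 2. A small
illustrative call (the numbers are arbitrary):

#+begin_src clojure
(max-indexed [0.2 0.1 0.9 0.3])
;; => [2 0.9]   ; index 2 holds the largest value
#+end_src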
rlm@449 2204
rlm@449 2205 With these action predicates, I can now recognize the actions of
rlm@449 2206 the worm while it is moving under my control, since I have access
rlm@449 2207 to all of the worm's senses.
rlm@449 2208
rlm@449 2209 #+caption: Use the action predicates defined earlier to report on
rlm@449 2210 #+caption: what the worm is doing while in simulation.
rlm@449 2211 #+name: report-worm-activity
rlm@452 2212 #+attr_latex: [htpb]
rlm@452 2213 #+begin_listing clojure
rlm@449 2214 #+begin_src clojure
rlm@449 2215 (defn debug-experience
rlm@449 2216 [experiences text]
rlm@449 2217 (cond
rlm@449 2218 (grand-circle? experiences) (.setText text "Grand Circle")
rlm@449 2219 (curled? experiences) (.setText text "Curled")
rlm@449 2220 (wiggling? experiences) (.setText text "Wiggling")
rlm@449 2221 (resting? experiences) (.setText text "Resting")))
rlm@449 2222 #+end_src
rlm@449 2223 #+end_listing
rlm@449 2224
rlm@449 2225 #+caption: Using =debug-experience=, the body-centered predicates
rlm@449 2226 #+caption: work together to classify the behavior of the worm.
rlm@451 2227 #+caption: The predicates are operating with access to the worm's
rlm@451 2228 #+caption: full sensory data.
rlm@449 2229 #+name: basic-worm-view
rlm@449 2230 #+ATTR_LaTeX: :width 10cm
rlm@449 2231 [[./images/worm-identify-init.png]]
rlm@449 2232
rlm@449 2233 These action predicates satisfy the recognition requirement of an
rlm@451 2234 empathic recognition system. There is power in the simplicity of
rlm@451 2235 the action predicates. They describe actions without getting
rlm@451 2236 bogged down in the visual details of the worm. Each one is frame
rlm@451 2237 independent, but more than that, they are each independent of
rlm@449 2238 irrelevant visual details of the worm and the environment. They
rlm@449 2239 will work regardless of whether the worm is a different color or
rlm@451 2240 heavily textured, or if the environment has strange lighting.
rlm@449 2241
rlm@449 2242 The trick now is to make the action predicates work even when the
rlm@449 2243 sensory data on which they depend is absent. If I can do that, then
rlm@449 2244 I will have gained much.
rlm@435 2245
rlm@436 2246 ** \Phi-space describes the worm's experiences
rlm@449 2247
rlm@449 2248 As a first step towards building empathy, I need to gather all of
rlm@449 2249 the worm's experiences during free play. I use a simple vector to
rlm@449 2250 store all the experiences.
rlm@449 2251
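A minimal sketch of how such a vector might be populated, assuming
each frame of experience is a map from sense keywords to data (the
keys =:proprioception=, =:touch=, and =:muscle= are the ones the
later listings rely on; =record-experience!= itself is
hypothetical):

#+begin_src clojure
(def experiences (atom []))

(defn record-experience!
  "Hypothetical helper: append one frame of sensory data to the
   experience vector."
  [proprio touch muscle]
  (swap! experiences conj
         {:proprioception proprio
          :touch          touch
          :muscle         muscle}))
#+end_src
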
rlm@449 2252 Each element of the experience vector exists in the vast space of
rlm@449 2253 all possible worm-experiences. Most of this vast space is actually
rlm@449 2254 unreachable due to physical constraints of the worm's body. For
rlm@449 2255 example, the worm's segments are connected by hinge joints that put
rlm@451 2256 a practical limit on the worm's range of motions without limiting
rlm@451 2257 its degrees of freedom. Some groupings of senses are impossible;
rlm@451 2258 the worm cannot be bent into a circle so that its ends are
rlm@451 2259 touching and at the same time not also experience the sensation of
rlm@451 2260 touching itself.
rlm@449 2261
rlm@451 2262 As the worm moves around during free play and its experience vector
rlm@451 2263 grows larger, the vector begins to define a subspace which is all
rlm@451 2264 the sensations the worm can practically experience during normal
rlm@451 2265 operation. I call this subspace \Phi-space, short for
rlm@451 2266 physical-space. The experience vector defines a path through
rlm@451 2267 \Phi-space. This path has interesting properties that all derive
rlm@451 2268 from physical embodiment. The proprioceptive components are
rlm@451 2269 completely smooth, because in order for the worm to move from one
rlm@451 2270 position to another, it must pass through the intermediate
rlm@451 2271 positions. The path invariably forms loops as actions are repeated.
rlm@451 2272 Finally and most importantly, proprioception actually gives very
rlm@451 2273 strong inference about the other senses. For example, when the worm
rlm@451 2274 is flat, you can infer that it is touching the ground and that its
rlm@451 2275 muscles are not active, because if the muscles were active, the
rlm@451 2276 worm would be moving and would not be perfectly flat. In order to
rlm@451 2277 stay flat, the worm has to be touching the ground, or it would
rlm@451 2278 again be moving out of the flat position due to gravity. If the
rlm@451 2279 worm is positioned in such a way that it interacts with itself,
rlm@451 2280 then it is very likely to be experiencing the same tactile sensations
rlm@451 2281 as the last time it was in that position, because it has the same body
rlm@451 2282 as then. If you observe multiple frames of proprioceptive data,
rlm@451 2283 then you can become increasingly confident about the exact
rlm@451 2284 activations of the worm's muscles, because it generally takes a
rlm@451 2285 unique combination of muscle contractions to transform the worm's
rlm@451 2286 body along a specific path through \Phi-space.
rlm@449 2287
rlm@449 2288 There is a simple way of taking \Phi-space and the total ordering
rlm@449 2289 provided by an experience vector and reliably inferring the rest of
rlm@449 2290 the senses.
rlm@435 2291
rlm@436 2292 ** Empathy is the process of tracing through \Phi-space
rlm@449 2293
rlm@450 2294 Here is the core of a basic empathy algorithm, starting with an
rlm@451 2295 experience vector:
rlm@451 2296
rlm@451 2297 First, group the experiences into tiered proprioceptive bins. I use
rlm@451 2298 three tiers of bins, each a power of 10 coarser than the last; the
rlm@451 2299 smallest bin has an approximate size of 0.001 radians in each proprioceptive dimension.
rlm@450 2300
rlm@450 2301 Then, given a sequence of proprioceptive input, generate a set of
rlm@451 2302 matching experience records for each input, using the tiered
rlm@451 2303 proprioceptive bins.
rlm@449 2304
rlm@450 2305 Finally, to infer sensory data, select the longest consecutive chain
rlm@451 2306 of experiences. Consecutive means that the experiences appear next
rlm@451 2307 to each other in the experience vector.
rlm@449 2308
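Putting the three steps together, a minimal sketch of the whole
inference pipeline might look like the following, assuming the
helpers =gen-phi-scan=, =longest-thread=, and =infer-nils= from the
listings below, and a =phi-space= experience vector gathered during
free play. This is essentially what =empathy-demonstration= in
listing \ref{empathy-debug} does on-line, one frame at a time.

#+begin_src clojure
(defn infer-senses
  "Sketch: given proprioceptive frames ordered from most recent to
   least recent, return the matching chain of full experience
   records drawn from phi-space."
  [phi-space proprio-frames]
  (let [scan    (gen-phi-scan phi-space)   ; step 1: tiered bins
        matches (map scan proprio-frames)  ; step 2: candidate sets
        thread  (longest-thread matches)]  ; step 3: longest chain
    (mapv phi-space (infer-nils thread))))
#+end_src
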
rlm@450 2309 This algorithm has three advantages:
rlm@450 2310
rlm@450 2311 1. It's simple
rlm@450 2312
rlm@451 2313 2. It's very fast -- retrieving possible interpretations takes
rlm@451 2314 constant time. Tracing through chains of interpretations takes
rlm@451 2315 time proportional to the average number of experiences in a
rlm@451 2316 proprioceptive bin. Redundant experiences in \Phi-space can be
rlm@451 2317 merged to save computation.
rlm@450 2318
rlm@450 2319 3. It protects from wrong interpretations of transient ambiguous
rlm@451 2320 proprioceptive data. For example, if the worm is flat for just
rlm@450 2321 an instant, this flatness will not be interpreted as implying
rlm@450 2322 that the worm has its muscles relaxed, since the flatness is
rlm@450 2323 part of a longer chain which includes a distinct pattern of
rlm@451 2324 muscle activation. Markov chains or other memoryless statistical
rlm@451 2325 models that operate on individual frames may very well make this
rlm@451 2326 mistake.
rlm@450 2327
rlm@450 2328 #+caption: Program to convert an experience vector into a
rlm@450 2329 #+caption: proprioceptively binned lookup function.
rlm@450 2330 #+name: bin
rlm@452 2331 #+attr_latex: [htpb]
rlm@452 2332 #+begin_listing clojure
rlm@450 2333 #+begin_src clojure
rlm@449 2334 (defn bin [digits]
rlm@449 2335 (fn [angles]
rlm@449 2336 (->> angles
rlm@449 2337 (flatten)
rlm@449 2338 (map (juxt #(Math/sin %) #(Math/cos %)))
rlm@449 2339 (flatten)
rlm@449 2340 (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
rlm@449 2341
rlm@449 2342 (defn gen-phi-scan
rlm@450 2343 "Nearest-neighbors with binning. Only returns a result if
rlm@450 2344 the proprioceptive data is within 10% of a previously recorded
rlm@450 2345 result in all dimensions."
rlm@450 2346 [phi-space]
rlm@449 2347 (let [bin-keys (map bin [3 2 1])
rlm@449 2348 bin-maps
rlm@449 2349 (map (fn [bin-key]
rlm@449 2350 (group-by
rlm@449 2351 (comp bin-key :proprioception phi-space)
rlm@449 2352 (range (count phi-space)))) bin-keys)
rlm@449 2353 lookups (map (fn [bin-key bin-map]
rlm@450 2354 (fn [proprio] (bin-map (bin-key proprio))))
rlm@450 2355 bin-keys bin-maps)]
rlm@449 2356 (fn lookup [proprio-data]
rlm@449 2357 (set (some #(% proprio-data) lookups)))))
rlm@450 2358 #+end_src
rlm@450 2359 #+end_listing
rlm@449 2360
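To make the binning concrete, here is what =bin= produces for a
single joint angle of 0.5 radians; the numbers follow directly from
rounding the scaled sine and cosine of the angle.

#+begin_src clojure
((bin 3) [0.5])
;; => [48 88]   ; (Math/round (* 100 (Math/sin 0.5))), etc.

((bin 2) [0.5])
;; => [5 9]     ; the same angle in a coarser bin
#+end_src
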
rlm@451 2361 #+caption: =longest-thread= finds the longest path of consecutive
rlm@451 2362 #+caption: experiences to explain proprioceptive worm data.
rlm@451 2363 #+name: phi-space-history-scan
rlm@451 2364 #+ATTR_LaTeX: :width 10cm
rlm@451 2365 [[./images/aurellem-gray.png]]
rlm@451 2366
rlm@451 2367 =longest-thread= infers sensory data by stitching together pieces
rlm@451 2368 from previous experience. It prefers longer chains of previous
rlm@451 2369 experience to shorter ones. For example, during training the worm
rlm@451 2370 might rest on the ground for one second before it performs its
rlm@451 2371 exercises. If during recognition the worm rests on the ground for
rlm@451 2372 five seconds, =longest-thread= will accommodate this five-second
rlm@451 2373 rest period by looping the one-second rest chain five times.
rlm@451 2374
rlm@451 2375 =longest-thread= takes time proportional to the average number of
rlm@451 2376 entries in a proprioceptive bin, because for each element in the
rlm@451 2377 starting bin it performs a series of set lookups in the preceding
rlm@451 2378 bins. If the total history is limited, then this is only a constant
rlm@451 2379 multiple times the number of entries in the starting bin. This
rlm@451 2380 analysis also applies even if the action requires multiple longest
rlm@451 2381 chains -- it's still the average number of entries in a
rlm@451 2382 proprioceptive bin times the desired chain length. Because
rlm@451 2383 =longest-thread= is so efficient and simple, I can interpret
rlm@451 2384 worm-actions in real time.
rlm@449 2385
rlm@450 2386 #+caption: Program to calculate empathy by tracing through \Phi-space
rlm@450 2387 #+caption: and finding the longest (i.e. most coherent) interpretation
rlm@450 2388 #+caption: of the data.
rlm@450 2389 #+name: longest-thread
rlm@452 2390 #+attr_latex: [htpb]
rlm@452 2391 #+begin_listing clojure
rlm@450 2392 #+begin_src clojure
rlm@449 2393 (defn longest-thread
rlm@449 2394 "Find the longest thread from phi-index-sets. The index sets should
rlm@449 2395 be ordered from most recent to least recent."
rlm@449 2396 [phi-index-sets]
rlm@449 2397 (loop [result '()
rlm@449 2398 [thread-bases & remaining :as phi-index-sets] phi-index-sets]
rlm@449 2399 (if (empty? phi-index-sets)
rlm@449 2400 (vec result)
rlm@449 2401 (let [threads
rlm@449 2402 (for [thread-base thread-bases]
rlm@449 2403 (loop [thread (list thread-base)
rlm@449 2404 remaining remaining]
rlm@449 2405 (let [next-index (dec (first thread))]
rlm@449 2406 (cond (empty? remaining) thread
rlm@449 2407 (contains? (first remaining) next-index)
rlm@449 2408 (recur
rlm@449 2409 (cons next-index thread) (rest remaining))
rlm@449 2410 :else thread))))
rlm@449 2411 longest-thread
rlm@449 2412 (reduce (fn [thread-a thread-b]
rlm@449 2413 (if (> (count thread-a) (count thread-b))
rlm@449 2414 thread-a thread-b))
rlm@449 2415 '(nil)
rlm@449 2416 threads)]
rlm@449 2417 (recur (concat longest-thread result)
rlm@449 2418 (drop (count longest-thread) phi-index-sets))))))
rlm@450 2419 #+end_src
rlm@450 2420 #+end_listing
rlm@450 2421
rlm@451 2422 There is one final piece, which is to replace missing sensory data
rlm@451 2423 with a best-guess estimate. While I could fill in missing data by
rlm@451 2424 using a gradient over the closest known sensory data points,
rlm@451 2425 averages can be misleading. It is certainly possible to create an
rlm@451 2426 impossible sensory state by averaging two possible sensory states.
rlm@451 2427 Therefore, I simply replicate the most recent sensory experience to
rlm@451 2428 fill in the gaps.
rlm@449 2429
rlm@449 2430 #+caption: Fill in blanks in sensory experience by replicating the most
rlm@449 2431 #+caption: recent experience.
rlm@449 2432 #+name: infer-nils
rlm@452 2433 #+attr_latex: [htpb]
rlm@452 2434 #+begin_listing clojure
rlm@449 2435 #+begin_src clojure
rlm@449 2436 (defn infer-nils
rlm@449 2437 "Replace nils with the next available non-nil element in the
rlm@449 2438 sequence, or barring that, 0."
rlm@449 2439 [s]
rlm@449 2440 (loop [i (dec (count s))
rlm@449 2441 v (transient s)]
rlm@449 2442 (if (zero? i) (persistent! v)
rlm@449 2443 (if-let [cur (v i)]
rlm@449 2444 (if (get v (dec i) 0)
rlm@449 2445 (recur (dec i) v)
rlm@449 2446 (recur (dec i) (assoc! v (dec i) cur)))
rlm@449 2447 (recur i (assoc! v i 0))))))
rlm@449 2448 #+end_src
rlm@449 2449 #+end_listing
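
For intuition, here is how =infer-nils= behaves on a short example
vector (the values are arbitrary):

#+begin_src clojure
(infer-nils [nil 1 nil 2 nil])
;; => [1 1 2 2 0]
;; Each nil takes the value of the next non-nil element to its
;; right; the trailing nil has no such element and becomes 0.
#+end_src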
rlm@435 2450
rlm@441 2451 ** Efficient action recognition with =EMPATH=
rlm@451 2452
rlm@451 2453 To use =EMPATH= with the worm, I first need to gather a set of
rlm@451 2454 experiences from the worm that includes the actions I want to
rlm@452 2455 recognize. The =generate-phi-space= program (listing
rlm@451 2456 \ref{generate-phi-space}) runs the worm through a series of
rlm@451 2457 exercises and gathers those experiences into a vector. The
rlm@451 2458 =do-all-the-things= program is a routine expressed in a simple
rlm@452 2459 muscle contraction script language for automated worm control. It
rlm@452 2460 causes the worm to rest, curl, and wiggle over about 700 frames
rlm@452 2461 (approx. 11 seconds).
rlm@425 2462
rlm@451 2463 #+caption: Program to gather the worm's experiences into a vector for
rlm@451 2464 #+caption: further processing. The =motor-control-program= line uses
rlm@451 2465 #+caption: a motor control script that causes the worm to execute a series
rlm@451 2466 #+caption: of ``exercises'' that include all the action predicates.
rlm@451 2467 #+name: generate-phi-space
rlm@452 2468 #+attr_latex: [htpb]
rlm@452 2469 #+begin_listing clojure
rlm@451 2470 #+begin_src clojure
rlm@451 2471 (def do-all-the-things
rlm@451 2472 (concat
rlm@451 2473 curl-script
rlm@451 2474 [[300 :d-ex 40]
rlm@451 2475 [320 :d-ex 0]]
rlm@451 2476 (shift-script 280 (take 16 wiggle-script))))
rlm@451 2477
rlm@451 2478 (defn generate-phi-space []
rlm@451 2479 (let [experiences (atom [])]
rlm@451 2480 (run-world
rlm@451 2481 (apply-map
rlm@451 2482 worm-world
rlm@451 2483 (merge
rlm@451 2484 (worm-world-defaults)
rlm@451 2485 {:end-frame 700
rlm@451 2486 :motor-control
rlm@451 2487 (motor-control-program worm-muscle-labels do-all-the-things)
rlm@451 2488 :experiences experiences})))
rlm@451 2489 @experiences))
rlm@451 2490 #+end_src
rlm@451 2491 #+end_listing
rlm@451 2492
rlm@451 2493 #+caption: Use =longest-thread= and a \Phi-space generated from a short
rlm@451 2494 #+caption: exercise routine to interpret actions during free play.
rlm@451 2495 #+name: empathy-debug
rlm@452 2496 #+attr_latex: [htpb]
rlm@452 2497 #+begin_listing clojure
rlm@451 2498 #+begin_src clojure
rlm@451 2499 (defn init []
rlm@451 2500 (def phi-space (generate-phi-space))
rlm@451 2501 (def phi-scan (gen-phi-scan phi-space)))
rlm@451 2502
rlm@451 2503 (defn empathy-demonstration []
rlm@451 2504 (let [proprio (atom ())]
rlm@451 2505 (fn
rlm@451 2506 [experiences text]
rlm@451 2507 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
rlm@451 2508 (swap! proprio (partial cons phi-indices))
rlm@451 2509 (let [exp-thread (longest-thread (take 300 @proprio))
rlm@451 2510 empathy (mapv phi-space (infer-nils exp-thread))]
rlm@451 2511 (println-repl (vector:last-n exp-thread 22))
rlm@451 2512 (cond
rlm@451 2513 (grand-circle? empathy) (.setText text "Grand Circle")
rlm@451 2514 (curled? empathy) (.setText text "Curled")
rlm@451 2515 (wiggling? empathy) (.setText text "Wiggling")
rlm@451 2516 (resting? empathy) (.setText text "Resting")
rlm@451 2517 :else (.setText text "Unknown")))))))
rlm@451 2518
rlm@451 2519 (defn empathy-experiment [record]
rlm@451 2520 (.start (worm-world :experience-watch (debug-experience-phi)
rlm@451 2521 :record record :worm worm*)))
rlm@451 2522 #+end_src
rlm@451 2523 #+end_listing
rlm@451 2524
rlm@451 2525 The result of running =empathy-experiment= is that the system is
rlm@451 2526 generally able to interpret worm actions using the action predicates
rlm@451 2527 on simulated sensory data just as well as with actual data. Figure
rlm@451 2528 \ref{empathy-debug-image} was generated using =empathy-experiment=:
rlm@451 2529
rlm@451 2530 #+caption: From only proprioceptive data, =EMPATH= was able to infer
rlm@451 2531 #+caption: the complete sensory experience and classify four poses.
rlm@451 2532 #+caption: (The last panel shows a composite image of \emph{wiggling},
rlm@451 2533 #+caption: a dynamic pose.)
rlm@451 2534 #+name: empathy-debug-image
rlm@451 2535 #+ATTR_LaTeX: :width 10cm :placement [H]
rlm@451 2536 [[./images/empathy-1.png]]
rlm@451 2537
rlm@451 2538 One way to measure the performance of =EMPATH= is to compare the
rlm@451 2539 suitability of the imagined sense experience to trigger the same
rlm@451 2540 action predicates as the real sensory experience.
rlm@451 2541
rlm@451 2542 #+caption: Determine how closely empathy approximates actual
rlm@451 2543 #+caption: sensory data.
rlm@451 2544 #+name: test-empathy-accuracy
rlm@452 2545 #+attr_latex: [htpb]
rlm@452 2546 #+begin_listing clojure
rlm@451 2547 #+begin_src clojure
rlm@451 2548 (def worm-action-label
rlm@451 2549 (juxt grand-circle? curled? wiggling?))
rlm@451 2550
rlm@451 2551 (defn compare-empathy-with-baseline [matches]
rlm@451 2552 (let [proprio (atom ())]
rlm@451 2553 (fn
rlm@451 2554 [experiences text]
rlm@451 2555 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
rlm@451 2556 (swap! proprio (partial cons phi-indices))
rlm@451 2557 (let [exp-thread (longest-thread (take 300 @proprio))
rlm@451 2558 empathy (mapv phi-space (infer-nils exp-thread))
rlm@451 2559 experience-matches-empathy
rlm@451 2560 (= (worm-action-label experiences)
rlm@451 2561 (worm-action-label empathy))]
rlm@451 2562 (println-repl experience-matches-empathy)
rlm@451 2563 (swap! matches #(conj % experience-matches-empathy)))))))
rlm@451 2564
rlm@451 2565 (defn accuracy [v]
rlm@451 2566 (float (/ (count (filter true? v)) (count v))))
rlm@451 2567
rlm@451 2568 (defn test-empathy-accuracy []
rlm@451 2569 (let [res (atom [])]
rlm@451 2570 (run-world
rlm@451 2571 (worm-world :experience-watch
rlm@451 2572 (compare-empathy-with-baseline res)
rlm@451 2573 :worm worm*))
rlm@451 2574 (accuracy @res)))
rlm@451 2575 #+end_src
rlm@451 2576 #+end_listing
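
Here =accuracy= is simply the fraction of frames on which the
empathy-derived action labels agree with the labels computed from
the real senses, for example:

#+begin_src clojure
(accuracy [true true true false])
;; => 0.75
#+end_src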
rlm@451 2577
rlm@451 2578 Running =test-empathy-accuracy= using the very short exercise
rlm@451 2579 program defined in listing \ref{generate-phi-space}, and then doing
rlm@451 2580 a similar pattern of activity manually, yields an accuracy of around
rlm@451 2581 73%. This is based on very limited worm experience. By training the
rlm@451 2582 worm for longer, the accuracy dramatically improves.
rlm@451 2583
rlm@451 2584 #+caption: Program to generate \Phi-space using manual training.
rlm@451 2585 #+name: manual-phi-space
rlm@452 2586 #+attr_latex: [htpb]
rlm@451 2587 #+begin_listing clojure
rlm@451 2588 #+begin_src clojure
rlm@451 2589 (defn init-interactive []
rlm@451 2590 (def phi-space
rlm@451 2591 (let [experiences (atom [])]
rlm@451 2592 (run-world
rlm@451 2593 (apply-map
rlm@451 2594 worm-world
rlm@451 2595 (merge
rlm@451 2596 (worm-world-defaults)
rlm@451 2597 {:experiences experiences})))
rlm@451 2598 @experiences))
rlm@451 2599 (def phi-scan (gen-phi-scan phi-space)))
rlm@451 2600 #+end_src
rlm@451 2601 #+end_listing
rlm@451 2602
rlm@451 2603 After about 1 minute of manual training, I was able to achieve 95%
rlm@451 2604 accuracy on manual testing of the worm using =init-interactive= and
rlm@452 2605 =test-empathy-accuracy=. The majority of errors are near the
rlm@452 2606 boundaries where one type of action transitions into another.
rlm@452 2607 During these transitions the exact label for the action is more open
rlm@452 2608 to interpretation, and disagreement between empathy and experience
rlm@452 2609 is more excusable.
rlm@450 2610
rlm@449 2611 ** Digression: bootstrapping touch using free exploration
rlm@449 2612
rlm@452 2613 In the previous section I showed how to compute actions in terms of
rlm@452 2614 body-centered predicates which relied on the average touch activation
rlm@452 2615 of pre-defined regions of the worm's skin. What if, instead of receiving
rlm@452 2616 touch pre-grouped into the six faces of each worm segment, the true
rlm@452 2617 topology of the worm's skin were unknown? This is more similar to how
rlm@452 2618 a nerve fiber bundle might be arranged. While two fibers that are
rlm@452 2619 close in a nerve bundle /might/ correspond to two touch sensors that
rlm@452 2620 are close together on the skin, the process of taking a complicated
rlm@452 2621 surface and forcing it into essentially a circle requires some cuts
rlm@452 2622 and rearrangements.
rlm@452 2623
rlm@452 2624 In this section I show how to automatically learn the skin-topology of
rlm@452 2625 a worm segment by free exploration. As the worm rolls around on the
rlm@452 2626 floor, large sections of its surface get activated. If the worm has
rlm@452 2627 stopped moving, then whatever region of skin that is touching the
rlm@452 2628 floor is probably an important region, and should be recorded.
rlm@452 2629
rlm@452 2630 #+caption: Program to detect whether the worm is in a resting state
rlm@452 2631 #+caption: with one face touching the floor.
rlm@452 2632 #+name: pure-touch
rlm@452 2633 #+begin_listing clojure
rlm@452 2634 #+begin_src clojure
rlm@452 2635 (def full-contact [(float 0.0) (float 0.1)])
rlm@452 2636
rlm@452 2637 (defn pure-touch?
rlm@452 2638 "This is worm specific code to determine if a large region of touch
rlm@452 2639 sensors is either all on or all off."
rlm@452 2640 [[coords touch :as touch-data]]
rlm@452 2641 (= (set (map first touch)) (set full-contact)))
rlm@452 2642 #+end_src
rlm@452 2643 #+end_listing
rlm@452 2644
rlm@452 2645 After collecting these important regions, there will be many nearly
rlm@452 2646 similar touch regions. While for some purposes the subtle
rlm@452 2647 differences between these regions will be important, for my
rlm@452 2648 purposes I collapse them into mostly non-overlapping sets using
rlm@452 2649 =remove-similar= in listing \ref{remove-similar}.
rlm@452 2650
rlm@452 2651 #+caption: Program to take a list of sets of points and ``collapse them''
rlm@452 2652 #+caption: so that the remaining sets in the list are significantly
rlm@452 2653 #+caption: different from each other. Prefer smaller sets to larger ones.
rlm@452 2654 #+name: remove-similar
rlm@452 2655 #+begin_listing clojure
rlm@452 2656 #+begin_src clojure
rlm@452 2657 (defn remove-similar
rlm@452 2658 [coll]
rlm@452 2659 (loop [result () coll (sort-by (comp - count) coll)]
rlm@452 2660 (if (empty? coll) result
rlm@452 2661 (let [[x & xs] coll
rlm@452 2662 c (count x)]
rlm@452 2663 (if (some
rlm@452 2664 (fn [other-set]
rlm@452 2665 (let [oc (count other-set)]
rlm@452 2666 (< (- (count (union other-set x)) c) (* oc 0.1))))
rlm@452 2667 xs)
rlm@452 2668 (recur result xs)
rlm@452 2669 (recur (cons x result) xs))))))
rlm@452 2670 #+end_src
rlm@452 2671 #+end_listing
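
To see the collapsing behavior, here is an illustrative call with
made-up sets: the large set that is nearly identical to a slightly
smaller one is discarded, while the unrelated set survives.

#+begin_src clojure
(remove-similar [#{1 2 3 4 5 6 7 8 9 10 11}
                 #{1 2 3 4 5 6 7 8 9 10}
                 #{20 21 22}])
;; => (#{20 21 22} #{1 2 3 4 5 6 7 8 9 10})
;; (element order within the printed sets may vary)
#+end_src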
rlm@452 2672
rlm@452 2673 Actually running this simulation is easy given =CORTEX='s facilities.
rlm@452 2674
rlm@452 2675 #+caption: Collect experiences while the worm moves around. Filter the touch
rlm@452 2676 #+caption: sensations by stable ones, collapse similar ones together,
rlm@452 2677 #+caption: and report the regions learned.
rlm@452 2678 #+name: learn-touch
rlm@452 2679 #+begin_listing clojure
rlm@452 2680 #+begin_src clojure
rlm@452 2681 (defn learn-touch-regions []
rlm@452 2682 (let [experiences (atom [])
rlm@452 2683 world (apply-map
rlm@452 2684 worm-world
rlm@452 2685 (assoc (worm-segment-defaults)
rlm@452 2686 :experiences experiences))]
rlm@452 2687 (run-world world)
rlm@452 2688 (->>
rlm@452 2689 @experiences
rlm@452 2690 (drop 175)
rlm@452 2691 ;; access the single segment's touch data
rlm@452 2692 (map (comp first :touch))
rlm@452 2693 ;; only deal with "pure" touch data to determine surfaces
rlm@452 2694 (filter pure-touch?)
rlm@452 2695 ;; associate coordinates with touch values
rlm@452 2696 (map (partial apply zipmap))
rlm@452 2697 ;; select those regions where contact is being made
rlm@452 2698 (map (partial group-by second))
rlm@452 2699 (map #(get % full-contact))
rlm@452 2700 (map (partial map first))
rlm@452 2701 ;; remove redundant/subset regions
rlm@452 2702 (map set)
rlm@452 2703 remove-similar)))
rlm@452 2704
rlm@452 2705 (defn learn-and-view-touch-regions []
rlm@452 2706 (map view-touch-region
rlm@452 2707 (learn-touch-regions)))
rlm@452 2708 #+end_src
rlm@452 2709 #+end_listing
rlm@452 2710
rlm@452 2711 The only thing remaining to define is the particular motion the worm
rlm@452 2712 must take. I accomplish this with a simple motor control program.
rlm@452 2713
rlm@452 2714 #+caption: Motor control program for making the worm roll on the ground.
rlm@452 2715 #+caption: This could also be replaced with random motion.
rlm@452 2716 #+name: worm-roll
rlm@452 2717 #+begin_listing clojure
rlm@452 2718 #+begin_src clojure
rlm@452 2719 (defn touch-kinesthetics []
rlm@452 2720 [[170 :lift-1 40]
rlm@452 2721 [190 :lift-1 19]
rlm@452 2722 [206 :lift-1 0]
rlm@452 2723
rlm@452 2724 [400 :lift-2 40]
rlm@452 2725 [410 :lift-2 0]
rlm@452 2726
rlm@452 2727 [570 :lift-2 40]
rlm@452 2728 [590 :lift-2 21]
rlm@452 2729 [606 :lift-2 0]
rlm@452 2730
rlm@452 2731 [800 :lift-1 30]
rlm@452 2732 [809 :lift-1 0]
rlm@452 2733
rlm@452 2734 [900 :roll-2 40]
rlm@452 2735 [905 :roll-2 20]
rlm@452 2736 [910 :roll-2 0]
rlm@452 2737
rlm@452 2738 [1000 :roll-2 40]
rlm@452 2739 [1005 :roll-2 20]
rlm@452 2740 [1010 :roll-2 0]
rlm@452 2741
rlm@452 2742 [1100 :roll-2 40]
rlm@452 2743 [1105 :roll-2 20]
rlm@452 2744 [1110 :roll-2 0]
rlm@452 2745 ])
rlm@452 2746 #+end_src
rlm@452 2747 #+end_listing
rlm@452 2748
rlm@452 2749
rlm@452 2750 #+caption: The small worm rolls around on the floor, driven
rlm@452 2751 #+caption: by the motor control program in listing \ref{worm-roll}.
rlm@452 2752 #+name: worm-roll-figure
rlm@452 2753 #+ATTR_LaTeX: :width 12cm
rlm@452 2754 [[./images/worm-roll.png]]
rlm@452 2755
rlm@452 2756
rlm@452 2757 #+caption: After completing its adventures, the worm now knows
rlm@452 2758 #+caption: how its touch sensors are arranged along its skin. These
rlm@452 2759 #+caption: are the regions that were deemed important by
rlm@452 2760 #+caption: =learn-touch-regions=. Note that the worm has discovered
rlm@452 2761 #+caption: that it has six sides.
rlm@452 2762 #+name: worm-touch-map
rlm@452 2763 #+ATTR_LaTeX: :width 12cm
rlm@452 2764 [[./images/touch-learn.png]]
rlm@452 2765
rlm@452 2766 While simple, =learn-touch-regions= exploits regularities in both
rlm@452 2767 the worm's physiology and the worm's environment to correctly
rlm@452 2768 deduce that the worm has six sides. Note that =learn-touch-regions=
rlm@452 2769 would work just as well even if the worm's touch sense data were
rlm@452 2770 completely scrambled. The cross shape is just for convenience. This
rlm@452 2771 example justifies the use of pre-defined touch regions in =EMPATH=.
rlm@452 2772
rlm@465 2773 * COMMENT Contributions
rlm@454 2774
rlm@461 2775 In this thesis you have seen the =CORTEX= system, a complete
rlm@461 2776 environment for creating simulated creatures. You have seen how to
rlm@461 2777 implement five senses including touch, proprioception, hearing,
rlm@461 2778 vision, and muscle tension. You have seen how to create new creatures
rlm@461 2779 using Blender, a 3D modeling tool. I hope that =CORTEX= will be
rlm@461 2780 useful in further research projects. To this end I have included the
rlm@461 2781 full source to =CORTEX= along with a large suite of tests and
rlm@461 2782 examples. I have also created a user guide for =CORTEX= which is
rlm@461 2783 included in an appendix to this thesis.
rlm@447 2784
rlm@461 2785 You have also seen how I used =CORTEX= as a platform to attack the
rlm@461 2786 /action recognition/ problem, which is the problem of recognizing
rlm@461 2787 actions in video. You saw a simple system called =EMPATH= which
rlm@461 2788 identifies actions by first describing them in a body-centered,
rlm@461 2789 rich sense language, then inferring a full range of sensory
rlm@461 2790 experience from limited data using previous experience gained from
rlm@461 2791 free play.
rlm@447 2792
rlm@461 2793 As a minor digression, you also saw how I used =CORTEX= to enable a
rlm@461 2794 tiny worm to discover the topology of its skin simply by rolling on
rlm@461 2795 the ground.
rlm@461 2796
rlm@461 2797 In conclusion, the main contributions of this thesis are:
rlm@461 2798
rlm@461 2799 - =CORTEX=, a system for creating simulated creatures with rich
rlm@461 2800 senses.
rlm@461 2801 - =EMPATH=, a program for recognizing actions by imagining sensory
rlm@461 2802 experience.
rlm@447 2803
rlm@447 2804 # An anatomical joke:
rlm@447 2805 # - Training
rlm@447 2806 # - Skeletal imitation
rlm@447 2807 # - Sensory fleshing-out
rlm@447 2808 # - Classification