rlm@425 1 #+title: =CORTEX=
rlm@425 2 #+author: Robert McIntyre
rlm@425 3 #+email: rlm@mit.edu
rlm@425 4 #+description: Using embodied AI to facilitate Artificial Imagination.
rlm@425 5 #+keywords: AI, clojure, embodiment
rlm@451 6 #+LaTeX_CLASS_OPTIONS: [nofloat]
rlm@422 7
rlm@465 8 * COMMENT templates
rlm@470 9 #+caption:
rlm@470 10 #+caption:
rlm@470 11 #+caption:
rlm@470 12 #+caption:
rlm@470 13 #+name: name
rlm@470 14 #+begin_listing clojure
rlm@479 15 #+BEGIN_SRC clojure
rlm@479 16 #+END_SRC
rlm@470 17 #+end_listing
rlm@465 18
rlm@470 19 #+caption:
rlm@470 20 #+caption:
rlm@470 21 #+caption:
rlm@470 22 #+name: name
rlm@470 23 #+ATTR_LaTeX: :width 10cm
rlm@470 24 [[./images/aurellem-gray.png]]
rlm@470 25
rlm@470 26 #+caption:
rlm@470 27 #+caption:
rlm@470 28 #+caption:
rlm@470 29 #+caption:
rlm@470 30 #+name: name
rlm@470 31 #+begin_listing clojure
rlm@475 32 #+BEGIN_SRC clojure
rlm@475 33 #+END_SRC
rlm@470 34 #+end_listing
rlm@470 35
rlm@470 36 #+caption:
rlm@470 37 #+caption:
rlm@470 38 #+caption:
rlm@470 39 #+name: name
rlm@470 40 #+ATTR_LaTeX: :width 10cm
rlm@470 41 [[./images/aurellem-gray.png]]
rlm@470 42
rlm@465 43
rlm@507 44 * Empathy and Embodiment as problem solving strategies
rlm@437 45
rlm@437 46 By the end of this thesis, you will have seen a novel approach to
rlm@437 47 interpreting video using embodiment and empathy. You will have also
rlm@437 48 seen one way to efficiently implement empathy for embodied
rlm@447 49 creatures. Finally, you will become familiar with =CORTEX=, a system
rlm@447 50 for designing and simulating creatures with rich senses, which you
rlm@447 51 may choose to use in your own research.
rlm@437 52
rlm@441 53 This is the core vision of my thesis: That one of the important ways
rlm@441 54 in which we understand others is by imagining ourselves in their
rlm@441 55 position and empathically feeling experiences relative to our own
rlm@441 56 bodies. By understanding events in terms of our own previous
rlm@441 57 corporeal experience, we greatly constrain the possibilities of what
rlm@441 58 would otherwise be an unwieldy exponential search. This extra
rlm@441 59 constraint can be the difference between easily understanding what
rlm@441 60 is happening in a video and being completely lost in a sea of
rlm@441 61 incomprehensible color and movement.
rlm@505 62
rlm@436 63 ** Recognizing actions in video is extremely difficult
rlm@437 64
rlm@447 65 Consider for example the problem of determining what is happening
rlm@447 66 in a video of which this is one frame:
rlm@437 67
rlm@441 68 #+caption: A cat drinking some water. Identifying this action is
rlm@441 69 #+caption: beyond the state of the art for computers.
rlm@441 70 #+ATTR_LaTeX: :width 7cm
rlm@441 71 [[./images/cat-drinking.jpg]]
rlm@441 72
rlm@441 73 It is currently impossible for any computer program to reliably
rlm@447 74 label such a video as ``drinking''. And rightly so -- it is a very
rlm@441 75 hard problem! What features, expressed in terms of low-level
rlm@441 76 functions of pixels, can even begin to capture at a high level
rlm@441 77 what is happening here?
rlm@437 78
rlm@447 79 Or suppose that you are building a program that recognizes chairs.
rlm@448 80 How could you ``see'' the chair in figure \ref{hidden-chair}?
rlm@441 81
rlm@441 82 #+caption: The chair in this image is quite obvious to humans, but I
rlm@448 83 #+caption: doubt that any modern computer vision program can find it.
rlm@441 84 #+name: hidden-chair
rlm@441 85 #+ATTR_LaTeX: :width 10cm
rlm@441 86 [[./images/fat-person-sitting-at-desk.jpg]]
rlm@441 87
rlm@441 88 Finally, how is it that you can easily tell the difference in
rlm@441 89 how the girl's /muscles/ are working between the two images in figure \ref{girl}?
rlm@441 90
rlm@441 91 #+caption: The mysterious ``common sense'' appears here as you are able
rlm@441 92 #+caption: to discern the difference in how the girl's arm muscles
rlm@441 93 #+caption: are activated between the two images.
rlm@441 94 #+name: girl
rlm@448 95 #+ATTR_LaTeX: :width 7cm
rlm@441 96 [[./images/wall-push.png]]
rlm@437 97
rlm@441 98 Each of these examples tells us something about what might be going
rlm@441 99 on in our minds as we easily solve these recognition problems.
rlm@441 100
rlm@441 101 The hidden chair example shows us that we are strongly triggered by cues
rlm@447 102 relating to the position of human bodies, and that we can determine
rlm@447 103 the overall physical configuration of a human body even if much of
rlm@447 104 that body is occluded.
rlm@437 105
rlm@441 106 The picture of the girl pushing against the wall tells us that we
rlm@441 107 have common sense knowledge about the kinetics of our own bodies.
rlm@441 108 We know well how our muscles would have to work to maintain us in
rlm@441 109 most positions, and we can easily project this self-knowledge to
rlm@441 110 imagined positions triggered by images of the human body.
rlm@441 111
rlm@441 112 ** =EMPATH= neatly solves recognition problems
rlm@441 113
rlm@441 114 I propose a system that can express the types of recognition
rlm@441 115 problems above in a form amenable to computation. It is split into
rlm@441 116 four parts:
rlm@441 117
rlm@448 118 - Free/Guided Play :: The creature moves around and experiences the
rlm@448 119 world through its unique perspective. Many otherwise
rlm@448 120 complicated actions are easily described in the language of a
rlm@448 121 full suite of body-centered, rich senses. For example,
rlm@448 122 drinking is the feeling of water sliding down your throat, and
rlm@448 123 cooling your insides. It's often accompanied by bringing your
rlm@448 124 hand close to your face, or bringing your face close to water.
rlm@448 125 Sitting down is the feeling of bending your knees, activating
rlm@448 126 your quadriceps, then feeling a surface with your bottom and
rlm@448 127 relaxing your legs. These body-centered action descriptions
rlm@448 128 can be either learned or hard coded.
rlm@448 129 - Posture Imitation :: When trying to interpret a video or image,
rlm@448 130 the creature takes a model of itself and aligns it with
rlm@448 131 whatever it sees. This alignment can even cross species, as
rlm@448 132 when humans try to align themselves with things like ponies,
rlm@448 133 dogs, or other humans with a different body type.
rlm@448 134 - Empathy :: The alignment triggers associations with
rlm@448 135 sensory data from prior experiences. For example, the
rlm@448 136 alignment itself easily maps to proprioceptive data. Any
rlm@448 137 sounds or obvious skin contact in the video can to a lesser
rlm@448 138 extent trigger previous experience. Segments of previous
rlm@448 139 experiences are stitched together to form a coherent and
rlm@448 140 complete sensory portrait of the scene.
rlm@448 141 - Recognition :: With the scene described in terms of first
rlm@448 142 person sensory events, the creature can now run its
rlm@447 143 action-identification programs on this synthesized sensory
rlm@447 144 data, just as it would if it were actually experiencing the
rlm@447 145 scene first-hand. If previous experience has been accurately
rlm@447 146 retrieved, and if it is analogous enough to the scene, then
rlm@447 147 the creature will correctly identify the action in the scene.
rlm@447 148
rlm@441 149 For example, I think humans are able to label the cat video as
rlm@447 150 ``drinking'' because they imagine /themselves/ as the cat, and
rlm@441 151 imagine putting their face up against a stream of water and
rlm@441 152 sticking out their tongue. In that imagined world, they can feel
rlm@441 153 the cool water hitting their tongue, and feel the water entering
rlm@447 154 their body, and are able to recognize that /feeling/ as drinking.
rlm@447 155 So, the label of the action is not really in the pixels of the
rlm@447 156 image, but is found clearly in a simulation inspired by those
rlm@447 157 pixels. An imaginative system, having been trained on drinking and
rlm@447 158 non-drinking examples and learning that the most important
rlm@447 159 component of drinking is the feeling of water sliding down one's
rlm@447 160 throat, would analyze a video of a cat drinking in the following
rlm@447 161 manner:
rlm@441 162
rlm@447 163 1. Create a physical model of the video by putting a ``fuzzy''
rlm@447 164 model of its own body in place of the cat. Possibly also create
rlm@447 165 a simulation of the stream of water.
rlm@441 166
rlm@441 167 2. Play out this simulated scene and generate imagined sensory
rlm@441 168 experience. This will include relevant muscle contractions, a
rlm@441 169 close up view of the stream from the cat's perspective, and most
rlm@441 170 importantly, the imagined feeling of water entering the
rlm@443 171 mouth. The imagined sensory experience can come from a
rlm@441 172 simulation of the event, but can also be pattern-matched from
rlm@441 173 previous, similar embodied experience.
rlm@441 174
rlm@441 175 3. The action is now easily identified as drinking by the sense of
rlm@441 176 taste alone. The other senses (such as the tongue moving in and
rlm@441 177 out) help to give plausibility to the simulated action. Note that
rlm@441 178 the sense of vision, while critical in creating the simulation,
rlm@441 179 is not critical for identifying the action from the simulation.
rlm@441 180
rlm@441 181 For the chair examples, the process is even easier:
rlm@441 182
rlm@441 183 1. Align a model of your body to the person in the image.
rlm@441 184
rlm@441 185 2. Generate proprioceptive sensory data from this alignment.
rlm@437 186
rlm@441 187 3. Use the imagined proprioceptive data as a key to look up related
rlm@441 188 sensory experience associated with that particular proprioceptive
rlm@441 189 feeling.
rlm@437 190
rlm@443 191 4. Retrieve the feeling of your bottom resting on a surface, your
rlm@443 192 knees bent, and your leg muscles relaxed.
rlm@437 193
rlm@441 194 5. This sensory information is consistent with the =sitting?=
rlm@441 195 sensory predicate, so you (and the entity in the image) must be
rlm@441 196 sitting.
rlm@440 197
rlm@441 198 6. There must be a chair-like object since you are sitting.
rlm@440 199
rlm@441 200 Empathy offers yet another alternative to the age-old AI
rlm@441 201 representation question: ``What is a chair?'' --- A chair is the
rlm@441 202 feeling of sitting.
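
As a purely illustrative sketch of what such a sensory predicate
could look like, suppose imagined experience is a vector of maps,
each holding one frame of proprioceptive data. The names
=experiences=, =bent?=, and the angle threshold below are
hypothetical rather than part of =EMPATH= itself; the real
predicates in this thesis, such as =grand-circle?= shown later,
work the same way over richer sensory data.

#+begin_src clojure
(defn bent?
  "Hypothetical helper: is a joint bent past 45 degrees?"
  [angle] (< (/ Math/PI 4) angle))

(defn sitting?
  "Toy predicate: do the hip and knee angles in the most recent
   frame of (imagined) experience match the feeling of sitting?"
  [experiences]
  (let [{:keys [hip knee]} (:proprioception (peek experiences))]
    (and (bent? hip) (bent? knee))))

;; an imagined experience derived from aligning a model to the image:
(sitting? [{:proprioception {:hip 1.4 :knee 1.5}}]) ; => true
#+end_src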
rlm@441 203
rlm@441 204 My program, =EMPATH=, uses this empathic problem solving technique
rlm@441 205 to interpret the actions of a simple, worm-like creature.
rlm@437 206
rlm@441 207 #+caption: The worm performs many actions during free play such as
rlm@441 208 #+caption: curling, wiggling, and resting.
rlm@441 209 #+name: worm-intro
rlm@446 210 #+ATTR_LaTeX: :width 15cm
rlm@445 211 [[./images/worm-intro-white.png]]
rlm@437 212
rlm@462 213 #+caption: =EMPATH= recognized and classified each of these
rlm@462 214 #+caption: poses by inferring the complete sensory experience
rlm@462 215 #+caption: from proprioceptive data.
rlm@441 216 #+name: worm-recognition-intro
rlm@446 217 #+ATTR_LaTeX: :width 15cm
rlm@445 218 [[./images/worm-poses.png]]
rlm@441 219
rlm@441 220 One powerful advantage of empathic problem solving is that it
rlm@441 221 factors the action recognition problem into two easier problems. To
rlm@441 222 use empathy, you need an /aligner/, which takes the video and a
rlm@441 223 model of your body, and aligns the model with the video. Then, you
rlm@441 224 need a /recognizer/, which uses the aligned model to interpret the
rlm@441 225 action. The power in this method lies in the fact that you describe
rlm@448 226 all actions from a body-centered viewpoint. You are less tied to
rlm@447 227 the particulars of any visual representation of the actions. If you
rlm@441 228 teach the system what ``running'' is, and you have a good enough
rlm@441 229 aligner, the system will from then on be able to recognize running
rlm@441 230 from any point of view, even strange points of view like above or
rlm@441 231 underneath the runner. This is in contrast to action recognition
rlm@448 232 schemes that try to identify actions using a non-embodied approach.
rlm@448 233 If these systems learn about running as viewed from the side, they
rlm@448 234 will not automatically be able to recognize running from any other
rlm@448 235 viewpoint.
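
To make this factorization concrete, here is a minimal sketch of
how the two pieces compose. The names =align=, =imagine-senses=,
and =action?= are hypothetical placeholders rather than =EMPATH='s
actual API: the aligner produces a posed body model, empathy turns
that pose into imagined first-person sensory data, and the
recognizer is just a sensory predicate.

#+begin_src clojure
(defn empathic-recognizer
  "Compose an aligner, an empathy step, and an action predicate
   into a viewpoint-independent action recognizer. Illustrative
   sketch only."
  [align imagine-senses action?]
  (fn [body-model video]
    (-> (align body-model video) ; posture imitation
        (imagine-senses)         ; infer first-person sensory data
        (action?))))             ; e.g. a predicate like sitting?
#+end_src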
rlm@441 236
rlm@441 237 Another powerful advantage is that using the language of multiple
rlm@441 238 body-centered rich senses to describe body-centered actions offers a
rlm@441 239 massive boost in descriptive capability. Consider how difficult it
rlm@441 240 would be to compose a set of HOG filters to describe the action of
rlm@447 241 a simple worm-creature ``curling'' so that its head touches its
rlm@447 242 tail, and then behold the simplicity of describing this action in a
rlm@441 243 language designed for the task (listing \ref{grand-circle-intro}):
rlm@441 244
rlm@446 245 #+caption: Body-centered actions are best expressed in a body-centered
rlm@446 246 #+caption: language. This code detects when the worm has curled into a
rlm@446 247 #+caption: full circle. Imagine how you would replicate this functionality
rlm@446 248 #+caption: using low-level pixel features such as HOG filters!
rlm@446 249 #+name: grand-circle-intro
rlm@509 250 #+begin_listing clojure
rlm@446 251 #+begin_src clojure
rlm@446 252 (defn grand-circle?
rlm@446 253 "Does the worm form a majestic circle (one end touching the other)?"
rlm@446 254 [experiences]
rlm@446 255 (and (curled? experiences)
rlm@446 256 (let [worm-touch (:touch (peek experiences))
rlm@446 257 tail-touch (worm-touch 0)
rlm@446 258 head-touch (worm-touch 4)]
rlm@462 259 (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
rlm@462 260 (< 0.2 (contact worm-segment-top-tip head-touch))))))
rlm@446 261 #+end_src
rlm@446 262 #+end_listing
rlm@446 263
rlm@449 264 ** =CORTEX= is a toolkit for building sensate creatures
rlm@435 265
rlm@448 266 I built =CORTEX= to be a general AI research platform for doing
rlm@448 267 experiments involving multiple rich senses and a wide variety and
rlm@448 268 number of creatures. I intend it to be useful as a library for many
rlm@462 269 more projects than just this thesis. =CORTEX= was built to meet a
rlm@462 270 need among AI researchers at CSAIL and beyond: people often
rlm@462 271 invent neat ideas that are best expressed in the
rlm@448 272 language of creatures and senses, but in order to explore those
rlm@448 273 ideas they must first build a platform in which they can create
rlm@448 274 simulated creatures with rich senses! There are many ideas that
rlm@448 275 would be simple to execute (such as =EMPATH=), but attached to them
rlm@448 276 is the multi-month effort to make a good creature simulator. Often,
rlm@448 277 that initial investment of time proves to be too much, and the
rlm@448 278 project must make do with a lesser environment.
rlm@435 279
rlm@448 280 =CORTEX= is well suited as an environment for embodied AI research
rlm@448 281 for three reasons:
rlm@448 282
rlm@448 283 - You can create new creatures using Blender, a popular 3D modeling
rlm@448 284 program. Each sense can be specified using special blender nodes
rlm@448 285 with biologically inspired parameters. You need not write any
rlm@448 286 code to create a creature, and can use a wide library of
rlm@448 287 pre-existing blender models as a base for your own creatures.
rlm@448 288
rlm@448 289 - =CORTEX= implements a wide variety of senses, including touch,
rlm@448 290 proprioception, vision, hearing, and muscle tension. Complicated
rlm@448 291 senses like touch and vision involve multiple sensory elements
rlm@448 292 embedded in a 2D surface. You have complete control over the
rlm@448 293 distribution of these sensor elements through the use of simple
rlm@448 294 png image files. In particular, =CORTEX= implements more
rlm@448 295 comprehensive hearing than any other creature simulation system
rlm@448 296 available.
rlm@448 297
rlm@448 298 - =CORTEX= supports any number of creatures and any number of
rlm@448 299 senses. Time in =CORTEX= dilates so that the simulated creatures
rlm@448 300 always perceive a perfectly smooth flow of time, regardless of
rlm@448 301 the actual computational load.
rlm@448 302
rlm@448 303 =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
rlm@448 304 engine designed to create cross-platform 3D desktop games. =CORTEX=
rlm@448 305 is mainly written in clojure, a dialect of =LISP= that runs on the
rlm@448 306 java virtual machine (JVM). The API for creating and simulating
rlm@449 307 creatures and senses is entirely expressed in clojure, though many
rlm@449 308 senses are implemented at the layer of jMonkeyEngine or below. For
rlm@449 309 example, for the sense of hearing I use a layer of clojure code on
rlm@449 310 top of a layer of java JNI bindings that drive a layer of =C++=
rlm@449 311 code which implements a modified version of =OpenAL= to support
rlm@449 312 multiple listeners. =CORTEX= is the only simulation environment
rlm@449 313 that I know of that can support multiple entities that can each
rlm@449 314 hear the world from their own perspective. Other senses also
rlm@449 315 require a small layer of Java code. =CORTEX= also uses =bullet=, a
rlm@449 316 physics simulator written in =C=.
rlm@448 317
rlm@448 318 #+caption: Here is the worm from above modeled in Blender, a free
rlm@448 319 #+caption: 3D-modeling program. Senses and joints are described
rlm@448 320 #+caption: using special nodes in Blender.
rlm@448 321 #+name: blender-worm
rlm@448 322 #+ATTR_LaTeX: :width 12cm
rlm@448 323 [[./images/blender-worm.png]]
rlm@448 324
rlm@449 325 Here are some things I anticipate that =CORTEX= might be used for:
rlm@449 326
rlm@449 327 - exploring new ideas about sensory integration
rlm@449 328 - distributed communication among swarm creatures
rlm@449 329 - self-learning using free exploration
rlm@449 330 - evolutionary algorithms involving creature construction
rlm@449 331 - exploration of exotic senses and effectors that are not possible
rlm@449 332 in the real world (such as telekinesis or a semantic sense)
rlm@449 333 - imagination using subworlds
rlm@449 334
rlm@451 335 During one test with =CORTEX=, I created 3,000 creatures each with
rlm@448 336 their own independent senses and ran them all at only 1/80 real
rlm@448 337 time. In another test, I created a detailed model of my own hand,
rlm@448 338 equipped with a realistic distribution of touch (more sensitive at
rlm@448 339 the fingertips), as well as eyes and ears, and it ran at around 1/4
rlm@451 340 real time.
rlm@448 341
rlm@451 342 #+BEGIN_LaTeX
rlm@449 343 \begin{sidewaysfigure}
rlm@449 344 \includegraphics[width=9.5in]{images/full-hand.png}
rlm@451 345 \caption{
rlm@451 346 I modeled my own right hand in Blender and rigged it with all the
rlm@451 347 senses that {\tt CORTEX} supports. My simulated hand has a
rlm@451 348 biologically inspired distribution of touch sensors. The senses are
rlm@451 349 displayed on the right, and the simulation is displayed on the
rlm@451 350 left. Notice that my hand is curling its fingers, that it can see
rlm@451 351 its own finger from the eye in its palm, and that it can feel its
rlm@451 352 own thumb touching its palm.}
rlm@449 353 \end{sidewaysfigure}
rlm@451 354 #+END_LaTeX
rlm@448 355
rlm@437 356 ** Contributions
rlm@435 357
rlm@451 358 - I built =CORTEX=, a comprehensive platform for embodied AI
rlm@451 359 experiments. =CORTEX= supports many features lacking in other
rlm@451 360 systems, such as proper simulation of hearing. It is easy to create
rlm@451 361 new =CORTEX= creatures using Blender, a free 3D modeling program.
rlm@449 362
rlm@451 363 - I built =EMPATH=, which uses =CORTEX= to identify the actions of
rlm@451 364 a worm-like creature using a computational model of empathy.
rlm@449 365
rlm@504 366 * Building =CORTEX=
rlm@435 367
rlm@485 368 I intend for =CORTEX= to be used as a general-purpose library for
rlm@462 369 building creatures and outfitting them with senses, so that it will
rlm@462 370 be useful for other researchers who want to test out ideas of their
rlm@462 371 own. To this end, wherever I have had to make architectural choices
rlm@462 372 about =CORTEX=, I have chosen to give as much freedom to the user as
rlm@462 373 possible, so that =CORTEX= may be used for things I have not
rlm@462 374 foreseen.
rlm@462 375
rlm@507 376 ** Simulation or Reality?
rlm@462 377
rlm@462 378 The most important architectural decision of all is the choice to
rlm@462 379 use a computer-simulated environment in the first place! The world
rlm@462 380 is a vast and rich place, and for now simulations are a very poor
rlm@462 381 reflection of its complexity. It may be that there is a significant
rlm@462 382 qualitative difference between dealing with senses in the real
rlm@468 383 world and dealing with pale facsimiles of them in a simulation.
rlm@468 384 What are the advantages and disadvantages of a simulation vs.
rlm@468 385 reality?
rlm@462 386
rlm@462 387 *** Simulation
rlm@462 388
rlm@462 389 The advantages of virtual reality are that when everything is a
rlm@462 390 simulation, experiments in that simulation are absolutely
rlm@462 391 reproducible. It's also easier to change the character and world
rlm@462 392 to explore new situations and different sensory combinations.
rlm@462 393
rlm@462 394 If the world is to be simulated on a computer, then not only do
rlm@462 395 you have to worry about whether the character's senses are rich
rlm@462 396 enough to learn from the world, but whether the world itself is
rlm@462 397 rendered with enough detail and realism to give enough working
rlm@462 398 material to the character's senses. To name just a few
rlm@462 399 difficulties facing modern physics simulators: destructibility of
rlm@462 400 the environment, simulation of water/other fluids, large areas,
rlm@462 401 nonrigid bodies, lots of objects, smoke. I don't know of any
rlm@462 402 computer simulation that would allow a character to take a rock
rlm@462 403 and grind it into fine dust, then use that dust to make a clay
rlm@462 404 sculpture, at least not without spending years calculating the
rlm@462 405 interactions of every single small grain of dust. Maybe a
rlm@462 406 simulated world with today's limitations doesn't provide enough
rlm@462 407 richness for real intelligence to evolve.
rlm@462 408
rlm@462 409 *** Reality
rlm@462 410
rlm@462 411 The other approach for playing with senses is to hook your
rlm@462 412 software up to real cameras, microphones, robots, etc., and let it
rlm@462 413 loose in the real world. This has the advantage of eliminating
rlm@462 414 concerns about simulating the world at the expense of increasing
rlm@462 415 the complexity of implementing the senses. Instead of just
rlm@462 416 grabbing the current rendered frame for processing, you have to
rlm@462 417 use an actual camera with real lenses and interact with photons to
rlm@462 418 get an image. It is much harder to change the character, which is
rlm@462 419 now partly a physical robot of some sort, since doing so involves
rlm@462 420 changing things around in the real world instead of modifying
rlm@462 421 lines of code. While the real world is very rich and definitely
rlm@462 422 provides enough stimulation for intelligence to develop as
rlm@462 423 evidenced by our own existence, it is also uncontrollable in the
rlm@462 424 sense that a particular situation cannot be recreated perfectly or
rlm@462 425 saved for later use. It is harder to conduct science because it is
rlm@462 426 harder to repeat an experiment. The worst thing about using the
rlm@462 427 real world instead of a simulation is the matter of time. Instead
rlm@462 428 of simulated time you get the constant and unstoppable flow of
rlm@462 429 real time. This severely limits the sorts of software you can use
rlm@462 430 to program the AI because all sense inputs must be handled in real
rlm@462 431 time. Complicated ideas may have to be implemented in hardware or
rlm@462 432 may simply be impossible given the current speed of our
rlm@462 433 processors. Contrast this with a simulation, in which the flow of
rlm@462 434 time in the simulated world can be slowed down to accommodate the
rlm@462 435 limitations of the character's programming. In terms of cost,
rlm@462 436 doing everything in software is far cheaper than building custom
rlm@462 437 real-time hardware. All you need is a laptop and some patience.
rlm@435 438
rlm@507 439 ** Because of Time, simulation is preferable to reality
rlm@435 440
rlm@462 441 I envision =CORTEX= being used to support rapid prototyping and
rlm@462 442 iteration of ideas. Even if I could put together a well constructed
rlm@462 443 kit for creating robots, it would still not be enough because of
rlm@462 444 the scourge of real-time processing. Anyone who wants to test their
rlm@462 445 ideas in the real world must always worry about getting their
rlm@465 446 algorithms to run fast enough to process information in real time.
rlm@465 447 The need for real time processing only increases if multiple senses
rlm@465 448 are involved. In the extreme case, even simple algorithms will have
rlm@465 449 to be accelerated by ASIC chips or FPGAs, turning what would
rlm@465 450 otherwise be a few lines of code and a 10x speed penalty into a
rlm@465 451 multi-month ordeal. For this reason, =CORTEX= supports
rlm@462 452 /time-dilation/, which scales back the framerate of the
rlm@465 453 simulation in proportion to the amount of processing each frame requires.
rlm@465 454 From the perspective of the creatures inside the simulation, time
rlm@465 455 always appears to flow at a constant rate, regardless of how
rlm@462 456 complicated the environment becomes or how many creatures are in
rlm@462 457 the simulation. The cost is that =CORTEX= can sometimes run slower
rlm@462 458 than real time. This can also be an advantage, however ---
rlm@462 459 simulations of very simple creatures in =CORTEX= generally run at
rlm@462 460 40x on my machine!
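
The following sketch illustrates the idea; it is not =CORTEX='s
actual implementation (which hooks into jMonkeyEngine's update
loop), and =step-physics= and =sense-fns= are hypothetical
placeholders. The point is that the physics step always receives
the same fixed timestep, no matter how long the sensory processing
for a frame takes in wall-clock time.

#+begin_src clojure
(defn run-dilated
  "Advance a simulation n-frames, always stepping physics by a
   fixed simulated timestep. Slow sense-fns stretch wall-clock
   time, but the creatures still experience a steady 60 frames per
   simulated second. Illustrative sketch only."
  [initial-state step-physics sense-fns n-frames]
  (let [dt (/ 1.0 60.0)] ; simulated seconds per frame
    (reduce (fn [state _]
              (let [state* (step-physics state dt)]
                (doseq [sense! sense-fns] (sense! state*))
                state*))
            initial-state
            (range n-frames))))
#+end_src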
rlm@462 461
rlm@507 462 ** What is a sense?
rlm@468 463
rlm@468 464 If =CORTEX= is to support a wide variety of senses, it would help
rlm@468 465 to have a better understanding of what a ``sense'' actually is!
rlm@468 466 While vision, touch, and hearing all seem like they are quite
rlm@468 467 different things, I was surprised to learn during the course of
rlm@468 468 this thesis that they (and all physical senses) can be expressed as
rlm@468 469 exactly the same mathematical object due to a dimensional argument!
rlm@468 470
rlm@468 471 Human beings are three-dimensional objects, and the nerves that
rlm@468 472 transmit data from our various sense organs to our brain are
rlm@468 473 essentially one-dimensional. This leaves up to two dimensions in
rlm@468 474 which our sensory information may flow. For example, imagine your
rlm@468 475 skin: it is a two-dimensional surface around a three-dimensional
rlm@468 476 object (your body). It has discrete touch sensors embedded at
rlm@468 477 various points, and the density of these sensors corresponds to the
rlm@468 478 sensitivity of that region of skin. Each touch sensor connects to a
rlm@468 479 nerve, all of which eventually are bundled together as they travel
rlm@468 480 up the spinal cord to the brain. Intersect the spinal nerves with a
rlm@468 481 guillotining plane and you will see all of the sensory data of the
rlm@468 482 skin revealed in a roughly circular two-dimensional image which is
rlm@468 483 the cross section of the spinal cord. Points on this image that are
rlm@468 484 close together in this circle represent touch sensors that are
rlm@468 485 /probably/ close together on the skin, although there is of course
rlm@468 486 some cutting and rearrangement that has to be done to transfer the
rlm@468 487 complicated surface of the skin onto a two dimensional image.
rlm@468 488
rlm@468 489 Most human senses consist of many discrete sensors of various
rlm@468 490 properties distributed along a surface at various densities. For
rlm@468 491 skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
rlm@468 492 disks, and Ruffini's endings, which detect pressure and vibration
rlm@468 493 of various intensities. For ears, it is the stereocilia distributed
rlm@468 494 along the basilar membrane inside the cochlea; each one is
rlm@468 495 sensitive to a slightly different frequency of sound. For eyes, it
rlm@468 496 is rods and cones distributed along the surface of the retina. In
rlm@468 497 each case, we can describe the sense with a surface and a
rlm@468 498 distribution of sensors along that surface.
rlm@468 499
rlm@468 500 The neat idea is that every human sense can be effectively
rlm@468 501 described in terms of a surface containing embedded sensors. If the
rlm@468 502 sense had any more dimensions, then there wouldn't be enough room
rlm@468 503 in the spinal cord to transmit the information!
rlm@468 504
rlm@468 505 Therefore, =CORTEX= must support the ability to create objects and
rlm@468 506 then be able to ``paint'' points along their surfaces to describe
rlm@468 507 each sense.
rlm@468 508
rlm@468 509 Fortunately this idea is already a well-known computer graphics
rlm@468 510 technique called /UV-mapping/. The three-dimensional surface
rlm@468 511 of a model is cut and smooshed until it fits on a two-dimensional
rlm@468 512 image. You paint whatever you want on that image, and when the
rlm@468 513 three-dimensional shape is rendered in a game the smooshing and
rlm@468 514 cutting is reversed and the image appears on the three-dimensional
rlm@468 515 object.
rlm@468 516
rlm@468 517 To make a sense, interpret the UV-image as describing the
rlm@468 518 distribution of that sense's sensors. To get different types of
rlm@468 519 sensors, you can either use a different color for each type of
rlm@468 520 sensor, or use multiple UV-maps, each labeled with that sensor
rlm@468 521 type. I generally use a white pixel to mean the presence of a
rlm@468 522 sensor and a black pixel to mean the absence of a sensor, and use
rlm@468 523 one UV-map for each sensor-type within a given sense.
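
As a simplified illustration of this convention (the real
implementation in =CORTEX='s sense code also has to relate image
pixels back to positions on the model's surface), the following
sketch collects the image coordinates of every white pixel in a
sensor-distribution image:

#+begin_src clojure
(import '(javax.imageio ImageIO) '(java.io File))

(defn sensor-coordinates
  "Return the [x y] image coordinates of every pure-white pixel in
   the UV-map at image-path, treating each one as a sensor."
  [image-path]
  (let [image (ImageIO/read (File. image-path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
      [x y])))
#+end_src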
rlm@468 524
rlm@468 525 #+CAPTION: The UV-map for an elongated icosphere. The white
rlm@468 526 #+caption: dots each represent a touch sensor. They are dense
rlm@468 527 #+caption: in the regions that describe the tip of the finger,
rlm@468 528 #+caption: and less dense along the dorsal side of the finger
rlm@468 529 #+caption: opposite the tip.
rlm@468 530 #+name: finger-UV
rlm@468 531 #+ATTR_latex: :width 10cm
rlm@468 532 [[./images/finger-UV.png]]
rlm@468 533
rlm@468 534 #+caption: Ventral side of the UV-mapped finger. Notice the
rlm@468 535 #+caption: density of touch sensors at the tip.
rlm@468 536 #+name: finger-side-view
rlm@468 537 #+ATTR_LaTeX: :width 10cm
rlm@468 538 [[./images/finger-1.png]]
rlm@468 539
rlm@507 540 ** Video game engines provide ready-made physics and shading
rlm@462 541
rlm@462 542 I did not need to write my own physics simulation code or shader to
rlm@462 543 build =CORTEX=. Doing so would lead to a system that is impossible
rlm@462 544 for anyone but myself to use anyway. Instead, I use a video game
rlm@462 545 engine as a base and modify it to accommodate the additional needs
rlm@462 546 of =CORTEX=. Video game engines are an ideal starting point to
rlm@462 547 build =CORTEX=, because they are not far from being creature
rlm@463 548 building systems themselves.
rlm@462 549
rlm@462 550 First off, general purpose video game engines come with a physics
rlm@462 551 engine and lighting / sound system. The physics system provides
rlm@462 552 tools that can be co-opted to serve as touch, proprioception, and
rlm@462 553 muscles. Since some games support split screen views, a good video
rlm@462 554 game engine will allow you to efficiently create multiple cameras
rlm@463 555 in the simulated world that can be used as eyes. Video game systems
rlm@463 556 offer integrated asset management for things like textures and
rlm@468 557 creature models, providing an avenue for defining creatures. They
rlm@468 558 also understand UV-mapping, since this technique is used to apply a
rlm@468 559 texture to a model. Finally, because video game engines support a
rlm@468 560 large number of users, as long as =CORTEX= doesn't stray too far
rlm@468 561 from the base system, other researchers can turn to this community
rlm@468 562 for help when doing their research.
rlm@463 563
rlm@507 564 ** =CORTEX= is based on jMonkeyEngine3
rlm@463 565
rlm@463 566 While preparing to build =CORTEX= I studied several video game
rlm@463 567 engines to see which would best serve as a base. The top contenders
rlm@463 568 were:
rlm@463 569
rlm@463 570 - [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by ID
rlm@463 571 software in 1997. All the source code was released by ID
rlm@463 572 software into the Public Domain several years ago, and as a
rlm@463 573 result it has been ported to many different languages. This
rlm@463 574 engine was famous for its advanced use of realistic shading
rlm@463 575 and had decent and fast physics simulation. The main advantage
rlm@463 576 of the Quake II engine is its simplicity, but I ultimately
rlm@463 577 rejected it because the engine is too tied to the concept of a
rlm@463 578 first-person shooter game. One of the problems I had was that
rlm@463 579 there does not seem to be any easy way to attach multiple
rlm@463 580 cameras to a single character. There are also several physics
rlm@463 581 clipping issues that are corrected in a way that only applies
rlm@463 582 to the main character and do not apply to arbitrary objects.
rlm@463 583
rlm@463 584 - [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
rlm@463 585 and Quake I engines and is used by Valve in the Half-Life
rlm@463 586 series of games. The physics simulation in the Source Engine
rlm@463 587 is quite accurate and probably the best out of all the engines
rlm@463 588 I investigated. There is also an extensive community actively
rlm@463 589 working with the engine. However, applications that use the
rlm@463 590 Source Engine must be written in C++, the code is not open, it
rlm@463 591 only runs on Windows, and the tools that come with the SDK to
rlm@463 592 handle models and textures are complicated and awkward to use.
rlm@463 593
rlm@463 594 - [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
rlm@463 595 games in Java. It uses OpenGL to render to the screen and uses
rlm@463 596 scene graphs to avoid drawing things that do not appear on the
rlm@463 597 screen. It has an active community and several games in the
rlm@463 598 pipeline. The engine was not built to serve any particular
rlm@463 599 game but is instead meant to be used for any 3D game.
rlm@463 600
rlm@463 601 I chose jMonkeyEngine3 because it had the most features
rlm@464 602 out of all the free projects I looked at, and because I could then
rlm@463 603 write my code in clojure, an implementation of =LISP= that runs on
rlm@463 604 the JVM.
rlm@435 605
rlm@507 606 ** =CORTEX= uses Blender to create creature models
rlm@435 607
rlm@464 608 For the simple worm-like creatures I will use later on in this
rlm@464 609 thesis, I could define a simple API in =CORTEX= that would allow
rlm@464 610 one to create boxes, spheres, etc., and leave that API as the sole
rlm@464 611 way to create creatures. However, for =CORTEX= to truly be useful
rlm@468 612 for other projects, it needs a way to construct complicated
rlm@464 613 creatures. If possible, it would be nice to leverage work that has
rlm@464 614 already been done by the community of 3D modelers, or at least
rlm@464 615 enable people who are talented at modeling but not programming to
rlm@468 616 design =CORTEX= creatures.
rlm@464 617
rlm@464 618 Therefore, I use Blender, a free 3D modeling program, as the main
rlm@464 619 way to create creatures in =CORTEX=. However, the creatures modeled
rlm@464 620 in Blender must also be simple to simulate in jMonkeyEngine3's game
rlm@468 621 engine, and must also be easy to rig with =CORTEX='s senses. I
rlm@468 622 accomplish this with extensive use of Blender's ``empty nodes.''
rlm@464 623
rlm@468 624 Empty nodes have no mass, physical presence, or appearance, but
rlm@468 625 they can hold metadata and have names. I use a tree structure of
rlm@468 626 empty nodes to specify senses in the following manner:
rlm@468 627
rlm@468 628 - Create a single top-level empty node whose name is the name of
rlm@468 629 the sense.
rlm@468 630 - Add empty nodes which each contain meta-data relevant to the
rlm@468 631 sense, including a UV-map describing the number/distribution of
rlm@468 632 sensors if applicable.
rlm@468 633 - Make each empty-node the child of the top-level node.
rlm@468 634
rlm@468 635 #+caption: An example of annotating a creature model with empty
rlm@468 636 #+caption: nodes to describe the layout of senses. There are
rlm@468 637 #+caption: multiple empty nodes which each describe the position
rlm@468 638 #+caption: of muscles, ears, eyes, or joints.
rlm@468 639 #+name: sense-nodes
rlm@468 640 #+ATTR_LaTeX: :width 10cm
rlm@468 641 [[./images/empty-sense-nodes.png]]
rlm@468 642
rlm@508 643 ** Bodies are composed of segments connected by joints
rlm@468 644
rlm@468 645 Blender is a general purpose animation tool, which has been used in
rlm@468 646 the past to create high quality movies such as Sintel
rlm@508 647 \cite{blender}. Though Blender can model and render even complicated
rlm@468 648 things like water, it is crucial to keep models that are meant to
rlm@468 649 be simulated as creatures simple. =Bullet=, which =CORTEX= uses
rlm@468 650 through jMonkeyEngine3, is a rigid-body physics system. This offers
rlm@468 651 a compromise between the expressiveness of a game level and the
rlm@468 652 speed at which it can be simulated, and it means that creatures
rlm@468 653 should be naturally expressed as rigid components held together by
rlm@468 654 joint constraints.
rlm@468 655
rlm@468 656 But humans are more like a squishy bag wrapped around some
rlm@468 657 hard bones which define the overall shape. When we move, our skin
rlm@468 658 bends and stretches to accommodate the new positions of our bones.
rlm@468 659
rlm@468 660 One way to make bodies composed of rigid pieces connected by joints
rlm@468 661 /seem/ more human-like is to use an /armature/ (or /rigging/)
rlm@468 662 system, which defines an overall ``body mesh'' and specifies how the
rlm@468 663 mesh deforms as a function of the position of each ``bone'' which
rlm@468 664 is a standard rigid body. This technique is used extensively to
rlm@468 665 model humans and create realistic animations. It is not a good
rlm@468 666 technique for physical simulation, however, because it creates a lie
rlm@468 667 -- the skin is not a physical part of the simulation and does not
rlm@468 668 interact with any objects in the world or itself. Objects will pass
rlm@468 669 right through the skin until they come in contact with the
rlm@468 670 underlying bone, which is a physical object. Without simulating
rlm@468 671 the skin, the sense of touch has little meaning, and the creature's
rlm@468 672 own vision will lie to it about the true extent of its body.
rlm@468 673 Simulating the skin as a physical object requires some way to
rlm@468 674 continuously update the physical model of the skin along with the
rlm@468 675 movement of the bones, which is unacceptably slow compared to rigid
rlm@468 676 body simulation.
rlm@468 677
rlm@468 678 Therefore, instead of using the human-like ``deformable bag of
rlm@468 679 bones'' approach, I decided to base my body plans on multiple solid
rlm@468 680 objects that are connected by joints, inspired by the robot =EVE=
rlm@468 681 from the movie WALL-E.
rlm@464 682
rlm@464 683 #+caption: =EVE= from the movie WALL-E. This body plan turns
rlm@464 684 #+caption: out to be much better suited to my purposes than a more
rlm@464 685 #+caption: human-like one.
rlm@465 686 #+ATTR_LaTeX: :width 10cm
rlm@464 687 [[./images/Eve.jpg]]
rlm@464 688
rlm@464 689 =EVE='s body is composed of several rigid components that are held
rlm@464 690 together by invisible joint constraints. This is what I mean by
rlm@464 691 ``eve-like''. The main reason that I use eve-style bodies is for
rlm@464 692 efficiency, and so that there will be correspondence between the
rlm@468 693 AI's senses and the physical presence of its body. Each individual
rlm@464 694 section is simulated by a separate rigid body that corresponds
rlm@464 695 exactly with its visual representation and does not change.
rlm@464 696 Sections are connected by invisible joints that are well supported
rlm@464 697 in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
rlm@464 698 can efficiently simulate hundreds of rigid bodies connected by
rlm@468 699 joints. Just because sections are rigid does not mean they have to
rlm@468 700 stay as one piece forever; they can be dynamically replaced with
rlm@468 701 multiple sections to simulate splitting in two. This could be used
rlm@468 702 to simulate retractable claws or =EVE='s hands, which are able to
rlm@468 703 coalesce into one object in the movie.
rlm@465 704
rlm@469 705 *** Solidifying/Connecting a body
rlm@465 706
rlm@469 707 =CORTEX= creates a creature in two steps: first, it traverses the
rlm@469 708 nodes in the blender file and creates physical representations for
rlm@469 709 any of them that have mass defined in their blender meta-data.
rlm@466 710
rlm@466 711 #+caption: Program for iterating through the nodes in a blender file
rlm@466 712 #+caption: and generating physical jMonkeyEngine3 objects with mass
rlm@466 713 #+caption: and a matching physics shape.
rlm@466 714 #+name: name
rlm@466 715 #+begin_listing clojure
rlm@466 716 #+begin_src clojure
rlm@466 717 (defn physical!
rlm@466 718 "Iterate through the nodes in creature and make them real physical
rlm@466 719 objects in the simulation."
rlm@466 720 [#^Node creature]
rlm@466 721 (dorun
rlm@466 722 (map
rlm@466 723 (fn [geom]
rlm@466 724 (let [physics-control
rlm@466 725 (RigidBodyControl.
rlm@466 726 (HullCollisionShape.
rlm@466 727 (.getMesh geom))
rlm@466 728 (if-let [mass (meta-data geom "mass")]
rlm@466 729 (float mass) (float 1)))]
rlm@466 730 (.addControl geom physics-control)))
rlm@466 731 (filter #(isa? (class %) Geometry )
rlm@466 732 (node-seq creature)))))
rlm@466 733 #+end_src
rlm@466 734 #+end_listing
rlm@465 735
rlm@469 736 The next step to making a proper body is to connect those pieces
rlm@469 737 together with joints. jMonkeyEngine has a large array of joints
rlm@469 738 available via =bullet=, such as Point2Point, Cone, Hinge, and a
rlm@469 739 generic Six Degree of Freedom joint, with or without spring
rlm@469 740 restitution.
rlm@465 741
rlm@469 742 Joints are treated a lot like proper senses, in that there is a
rlm@469 743 top-level empty node named ``joints'' whose children each
rlm@469 744 represent a joint.
rlm@466 745
rlm@469 746 #+caption: View of the hand model in Blender showing the main ``joints''
rlm@469 747 #+caption: node (highlighted in yellow) and its children which each
rlm@469 748 #+caption: represent a joint in the hand. Each joint node has metadata
rlm@469 749 #+caption: specifying what sort of joint it is.
rlm@469 750 #+name: blender-hand
rlm@469 751 #+ATTR_LaTeX: :width 10cm
rlm@469 752 [[./images/hand-screenshot1.png]]
rlm@469 753
rlm@469 754
rlm@469 755 =CORTEX='s procedure for binding the creature together with joints
rlm@469 756 is as follows:
rlm@469 757
rlm@469 758 - Find the children of the ``joints'' node.
rlm@469 759 - Determine the two spatials the joint is meant to connect.
rlm@469 760 - Create the joint based on the meta-data of the empty node.
rlm@469 761
rlm@469 762 The higher order function =sense-nodes= from =cortex.sense=
rlm@469 763 simplifies finding the joints based on their parent ``joints''
rlm@469 764 node.
rlm@466 765
rlm@466 766 #+caption: Retrieving the child empty nodes of a single
rlm@466 767 #+caption: named empty node is a common pattern in =CORTEX=;
rlm@466 768 #+caption: further instances of this technique for the senses
rlm@466 769 #+caption: will be omitted.
rlm@466 770 #+name: get-empty-nodes
rlm@466 771 #+begin_listing clojure
rlm@466 772 #+begin_src clojure
rlm@466 773 (defn sense-nodes
rlm@466 774 "For some senses there is a special empty blender node whose
rlm@466 775 children are considered markers for an instance of that sense. This
rlm@466 776 function generates functions to find those children, given the name
rlm@466 777 of the special parent node."
rlm@466 778 [parent-name]
rlm@466 779 (fn [#^Node creature]
rlm@466 780 (if-let [sense-node (.getChild creature parent-name)]
rlm@466 781 (seq (.getChildren sense-node)) [])))
rlm@466 782
rlm@466 783 (def
rlm@466 784 ^{:doc "Return the children of the creature's \"joints\" node."
rlm@466 785 :arglists '([creature])}
rlm@466 786 joints
rlm@466 787 (sense-nodes "joints"))
rlm@466 788 #+end_src
rlm@466 789 #+end_listing
rlm@466 790
rlm@469 791 To find a joint's targets, =CORTEX= creates a small cube, centered
rlm@469 792 around the empty-node, and grows the cube exponentially until it
rlm@469 793 intersects two physical objects. The objects are ordered according
rlm@469 794 to the joint's rotation, with the first one being the object that
rlm@469 795 has more negative coordinates in the joint's reference frame.
rlm@469 796 Since the objects must be physical, the empty-node itself escapes
rlm@469 797 detection, and for the same reason =joint-targets=
rlm@469 798 must be called /after/ =physical!= is called.
rlm@464 799
rlm@469 800 #+caption: Program to find the targets of a joint node by
rlm@469 801 #+caption: exponential growth of a search cube.
rlm@469 802 #+name: joint-targets
rlm@469 803 #+begin_listing clojure
rlm@469 804 #+begin_src clojure
rlm@466 805 (defn joint-targets
rlm@466 806 "Return the two closest two objects to the joint object, ordered
rlm@466 807 from bottom to top according to the joint's rotation."
rlm@466 808 [#^Node parts #^Node joint]
rlm@466 809 (loop [radius (float 0.01)]
rlm@466 810 (let [results (CollisionResults.)]
rlm@466 811 (.collideWith
rlm@466 812 parts
rlm@466 813 (BoundingBox. (.getWorldTranslation joint)
rlm@466 814 radius radius radius) results)
rlm@466 815 (let [targets
rlm@466 816 (distinct
rlm@466 817 (map #(.getGeometry %) results))]
rlm@466 818 (if (>= (count targets) 2)
rlm@466 819 (sort-by
rlm@466 820 #(let [joint-ref-frame-position
rlm@466 821 (jme-to-blender
rlm@466 822 (.mult
rlm@466 823 (.inverse (.getWorldRotation joint))
rlm@466 824 (.subtract (.getWorldTranslation %)
rlm@466 825 (.getWorldTranslation joint))))]
rlm@466 826 (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
rlm@466 827 (take 2 targets))
rlm@466 828 (recur (float (* radius 2))))))))
rlm@469 829 #+end_src
rlm@469 830 #+end_listing
rlm@464 831
rlm@469 832 Once =CORTEX= finds all joints and targets, it creates them using
rlm@469 833 a dispatch on the metadata of each joint node.
rlm@466 834
rlm@469 835 #+caption: Program to dispatch on blender metadata and create joints
rlm@469 836 #+caption: suitable for physical simulation.
rlm@469 837 #+name: joint-dispatch
rlm@469 838 #+begin_listing clojure
rlm@469 839 #+begin_src clojure
rlm@466 840 (defmulti joint-dispatch
rlm@466 841 "Translate blender pseudo-joints into real JME joints."
rlm@466 842 (fn [constraints & _]
rlm@466 843 (:type constraints)))
rlm@466 844
rlm@466 845 (defmethod joint-dispatch :point
rlm@466 846 [constraints control-a control-b pivot-a pivot-b rotation]
rlm@466 847 (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
rlm@466 848 (.setLinearLowerLimit Vector3f/ZERO)
rlm@466 849 (.setLinearUpperLimit Vector3f/ZERO)))
rlm@466 850
rlm@466 851 (defmethod joint-dispatch :hinge
rlm@466 852 [constraints control-a control-b pivot-a pivot-b rotation]
rlm@466 853 (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
rlm@466 854 [limit-1 limit-2] (:limit constraints)
rlm@466 855 hinge-axis (.mult rotation (blender-to-jme axis))]
rlm@466 856 (doto (HingeJoint. control-a control-b pivot-a pivot-b
rlm@466 857 hinge-axis hinge-axis)
rlm@466 858 (.setLimit limit-1 limit-2))))
rlm@466 859
rlm@466 860 (defmethod joint-dispatch :cone
rlm@466 861 [constraints control-a control-b pivot-a pivot-b rotation]
rlm@466 862 (let [limit-xz (:limit-xz constraints)
rlm@466 863 limit-xy (:limit-xy constraints)
rlm@466 864 twist (:twist constraints)]
rlm@466 865 (doto (ConeJoint. control-a control-b pivot-a pivot-b
rlm@466 866 rotation rotation)
rlm@466 867 (.setLimit (float limit-xz) (float limit-xy)
rlm@466 868 (float twist)))))
rlm@469 869 #+end_src
rlm@469 870 #+end_listing
rlm@466 871
rlm@469 872 All that is left for joints is to combine the above pieces into
rlm@469 873 something that can operate on the collection of nodes that a
rlm@469 874 blender file represents.
rlm@466 875
rlm@469 876 #+caption: Program to completely create a joint given information
rlm@469 877 #+caption: from a blender file.
rlm@469 878 #+name: connect
rlm@469 879 #+begin_listing clojure
rlm@466 880 #+begin_src clojure
rlm@466 881 (defn connect
rlm@466 882 "Create a joint between 'obj-a and 'obj-b at the location of
rlm@466 883 'joint. The type of joint is determined by the metadata on 'joint.
rlm@466 884
rlm@466 885 Here are some examples:
rlm@466 886 {:type :point}
rlm@466 887 {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
rlm@466 888 (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
rlm@466 889
rlm@466 890 {:type :cone :limit-xz 0]
rlm@466 891 :limit-xy 0]
rlm@466 892 :twist 0]} (use XZY rotation mode in blender!)"
rlm@466 893 [#^Node obj-a #^Node obj-b #^Node joint]
rlm@466 894 (let [control-a (.getControl obj-a RigidBodyControl)
rlm@466 895 control-b (.getControl obj-b RigidBodyControl)
rlm@466 896 joint-center (.getWorldTranslation joint)
rlm@466 897 joint-rotation (.toRotationMatrix (.getWorldRotation joint))
rlm@466 898 pivot-a (world-to-local obj-a joint-center)
rlm@466 899 pivot-b (world-to-local obj-b joint-center)]
rlm@466 900 (if-let
rlm@466 901 [constraints (map-vals eval (read-string (meta-data joint "joint")))]
rlm@466 902 ;; A side-effect of creating a joint registers
rlm@466 903 ;; it with both physics objects which in turn
rlm@466 904 ;; will register the joint with the physics system
rlm@466 905 ;; when the simulation is started.
rlm@466 906 (joint-dispatch constraints
rlm@466 907 control-a control-b
rlm@466 908 pivot-a pivot-b
rlm@466 909 joint-rotation))))
rlm@469 910 #+end_src
rlm@469 911 #+end_listing
rlm@466 912
rlm@469 913 In general, whenever =CORTEX= exposes a sense (or in this case
rlm@469 914 physicality), it provides a function of the type =sense!=, which
rlm@469 915 takes in a collection of nodes and augments it to support that
rlm@469 916 sense. The function returns any controls necessary to use that
rlm@469 917 sense. In this case =body!= creates a physical body and returns no
rlm@469 918 control functions.
rlm@466 919
rlm@469 920 #+caption: Program to give joints to a creature.
rlm@469 921 #+name: name
rlm@469 922 #+begin_listing clojure
rlm@469 923 #+begin_src clojure
rlm@466 924 (defn joints!
rlm@466 925 "Connect the solid parts of the creature with physical joints. The
rlm@466 926 joints are taken from the \"joints\" node in the creature."
rlm@466 927 [#^Node creature]
rlm@466 928 (dorun
rlm@466 929 (map
rlm@466 930 (fn [joint]
rlm@466 931 (let [[obj-a obj-b] (joint-targets creature joint)]
rlm@466 932 (connect obj-a obj-b joint)))
rlm@466 933 (joints creature))))
rlm@466 934 (defn body!
rlm@466 935 "Endow the creature with a physical body connected with joints. The
rlm@466 936 particulars of the joints and the masses of each body part are
rlm@466 937 determined in blender."
rlm@466 938 [#^Node creature]
rlm@466 939 (physical! creature)
rlm@466 940 (joints! creature))
rlm@469 941 #+end_src
rlm@469 942 #+end_listing
rlm@466 943
rlm@469 944 All of the code you have just seen amounts to only 130 lines, yet
rlm@469 945 because it builds on top of Blender and jMonkeyEngine3, those few
rlm@469 946 lines pack quite a punch!
rlm@466 947
rlm@469 948 The hand from figure \ref{blender-hand}, which was modeled after
rlm@469 949 my own right hand, can now be given joints and simulated as a
rlm@469 950 creature.
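
For example, assuming a helper that loads a blender file as a
jMonkeyEngine =Node= (the name =load-blender-model= and the model
path below are placeholders for illustration), rigging the hand is
a two-line affair:

#+begin_src clojure
;; hypothetical loader name and path -- for illustration only
(def hand (load-blender-model "Models/hand/hand.blend"))

;; solidify each segment and connect the segments with joints
(body! hand)
#+end_src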
rlm@466 951
rlm@469 952 #+caption: With the ability to create physical creatures from blender,
rlm@469 953 #+caption: =CORTEX= gets one step closer to becoming a full creature
rlm@469 954 #+caption: simulation environment.
rlm@469 955 #+name: name
rlm@469 956 #+ATTR_LaTeX: :width 15cm
rlm@469 957 [[./images/physical-hand.png]]
rlm@468 958
rlm@508 959 ** Eyes reuse standard video game components
rlm@436 960
rlm@470 961 Vision is one of the most important senses for humans, so I need to
rlm@470 962 build a simulated sense of vision for my AI. I will do this with
rlm@470 963 simulated eyes. Each eye can be independently moved and should see
rlm@470 964 its own version of the world depending on where it is.
rlm@470 965
rlm@470 966 Making these simulated eyes a reality is simple because
rlm@470 967 jMonkeyEngine already contains extensive support for multiple views
rlm@470 968 of the same 3D simulated world. The reason jMonkeyEngine has this
rlm@470 969 support is because the support is necessary to create games with
rlm@470 970 split-screen views. Multiple views are also used to create
rlm@470 971 efficient pseudo-reflections by rendering the scene from a certain
rlm@470 972 perspective and then projecting it back onto a surface in the 3D
rlm@470 973 world.
rlm@470 974
rlm@470 975 #+caption: jMonkeyEngine supports multiple views to enable
rlm@470 976 #+caption: split-screen games, like GoldenEye, which was one of
rlm@470 977 #+caption: the first games to use split-screen views.
rlm@470 978 #+name: name
rlm@470 979 #+ATTR_LaTeX: :width 10cm
rlm@470 980 [[./images/goldeneye-4-player.png]]
rlm@470 981
rlm@470 982 *** A Brief Description of jMonkeyEngine's Rendering Pipeline
rlm@470 983
rlm@470 984 jMonkeyEngine allows you to create a =ViewPort=, which represents a
rlm@470 985 view of the simulated world. You can create as many of these as you
rlm@470 986 want. Every frame, the =RenderManager= iterates through each
rlm@470 987 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
rlm@470 988 is a =FrameBuffer= which represents the rendered image in the GPU.
rlm@470 989
rlm@470 990 #+caption: =ViewPorts= are cameras in the world. During each frame,
rlm@470 991 #+caption: the =RenderManager= records a snapshot of what each view
rlm@470 992 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
rlm@508 993 #+name: rendermanagers
rlm@470 994 #+ATTR_LaTeX: :width 10cm
rlm@508 995 [[./images/diagram_rendermanager2.png]]
rlm@470 996
rlm@470 997 Each =ViewPort= can have any number of attached =SceneProcessor=
rlm@470 998 objects, which are called every time a new frame is rendered. A
rlm@470 999 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
rlm@470 1000 whatever it wants to the data. Often this consists of invoking GPU
rlm@470 1001 specific operations on the rendered image. The =SceneProcessor= can
rlm@470 1002 also copy the GPU image data to RAM and process it with the CPU.
rlm@470 1003
rlm@470 1004 *** Appropriating Views for Vision
rlm@470 1005
rlm@470 1006 Each eye in the simulated creature needs its own =ViewPort= so
rlm@470 1007 that it can see the world from its own perspective. To this
rlm@470 1008 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
rlm@470 1009 any arbitrary continuation function for further processing. That
rlm@470 1010 continuation function may perform both CPU and GPU operations on
rlm@470 1011 the data. To make this easy for the continuation function, the
rlm@470 1012 =SceneProcessor= maintains appropriately sized buffers in RAM to
rlm@470 1013 hold the data. It does not do any copying from the GPU to the CPU
rlm@470 1014 itself because it is a slow operation.
rlm@470 1015
rlm@470 1016 #+caption: Function to make the rendered scene in jMonkeyEngine
rlm@470 1017 #+caption: available for further processing.
rlm@470 1018 #+name: pipeline-1
rlm@470 1019 #+begin_listing clojure
rlm@470 1020 #+begin_src clojure
rlm@470 1021 (defn vision-pipeline
rlm@470 1022 "Create a SceneProcessor object which wraps a vision processing
rlm@470 1023 continuation function. The continuation is a function that takes
rlm@470 1024 [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
rlm@470 1025 each of which has already been appropriately sized."
rlm@470 1026 [continuation]
rlm@470 1027 (let [byte-buffer (atom nil)
rlm@470 1028 renderer (atom nil)
rlm@470 1029 image (atom nil)]
rlm@470 1030 (proxy [SceneProcessor] []
rlm@470 1031 (initialize
rlm@470 1032 [renderManager viewPort]
rlm@470 1033 (let [cam (.getCamera viewPort)
rlm@470 1034 width (.getWidth cam)
rlm@470 1035 height (.getHeight cam)]
rlm@470 1036 (reset! renderer (.getRenderer renderManager))
rlm@470 1037 (reset! byte-buffer
rlm@470 1038 (BufferUtils/createByteBuffer
rlm@470 1039 (* width height 4)))
rlm@470 1040 (reset! image (BufferedImage.
rlm@470 1041 width height
rlm@470 1042 BufferedImage/TYPE_4BYTE_ABGR))))
rlm@470 1043 (isInitialized [] (not (nil? @byte-buffer)))
rlm@470 1044 (reshape [_ _ _])
rlm@470 1045 (preFrame [_])
rlm@470 1046 (postQueue [_])
rlm@470 1047 (postFrame
rlm@470 1048 [#^FrameBuffer fb]
rlm@470 1049 (.clear @byte-buffer)
rlm@470 1050 (continuation @renderer fb @byte-buffer @image))
rlm@470 1051 (cleanup []))))
rlm@470 1052 #+end_src
rlm@470 1053 #+end_listing
rlm@470 1054
rlm@470 1055 The continuation function given to =vision-pipeline= above will be
rlm@470 1056 given a =Renderer= and three containers for image data. The
rlm@470 1057 =FrameBuffer= references the GPU image data, but the pixel data
rlm@470 1058 can not be used directly on the CPU. The =ByteBuffer= and
rlm@470 1059 =BufferedImage= are initially "empty" but are sized to hold the
rlm@470 1060 data in the =FrameBuffer=. I call transferring the GPU image data
rlm@470 1061 to the CPU structures "mixing" the image data.
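
As an illustration, a ``mixing'' continuation might look something
like the sketch below, which assumes jMonkeyEngine's
=Renderer.readFrameBuffer= and =Screenshots/convertScreenShot=
utilities. The =BufferedImage!= helper used later presumably works
along these lines, though its details may differ.

#+begin_src clojure
;; Assumes the jMonkeyEngine classes Renderer, FrameBuffer, and
;; com.jme3.util.Screenshots are available in the namespace.
(defn mix-image-sketch
  "Copy the GPU image in fb into byte-buffer, then decode those raw
   bytes into image so ordinary Java2D code can read the pixels."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer byte-buffer #^BufferedImage image]
  (.readFrameBuffer r fb byte-buffer)                ;; GPU -> RAM
  (Screenshots/convertScreenShot byte-buffer image)  ;; raw bytes -> BufferedImage
  image)
#+end_src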
rlm@470 1062
rlm@470 1063 *** Optical sensor arrays are described with images and referenced with metadata
rlm@470 1064
rlm@470 1065 The vision pipeline described above handles the flow of rendered
rlm@470 1066 images. Now, =CORTEX= needs simulated eyes to serve as the source
rlm@470 1067 of these images.
rlm@470 1068
rlm@470 1069 An eye is described in blender in the same way as a joint: it is
rlm@470 1070 a zero-dimensional empty object with no geometry whose local
rlm@470 1071 coordinate system determines the orientation of the resulting eye.
rlm@470 1072 All eyes are children of a parent node named "eyes" just as all
rlm@470 1073 joints have a parent named "joints". An eye binds to the nearest
rlm@470 1074 physical object with =bind-sense=.
rlm@470 1075
rlm@470 1076 #+caption: Here, the camera is created based on metadata on the
rlm@470 1077 #+caption: eye-node and attached to the nearest physical object
rlm@470 1078 #+caption: with =bind-sense=
rlm@470 1079 #+name: add-eye
rlm@470 1080 #+begin_listing clojure
#+BEGIN_SRC clojure
rlm@470 1081 (defn add-eye!
rlm@470 1082 "Create a Camera centered on the current position of 'eye which
rlm@470 1083 follows the closest physical node in 'creature. The camera will
rlm@470 1084 point in the X direction and use the Z vector as up as determined
rlm@470 1085 by the rotation of these vectors in blender coordinate space. Use
rlm@470 1086 XZY rotation for the node in blender."
rlm@470 1087 [#^Node creature #^Spatial eye]
rlm@470 1088 (let [target (closest-node creature eye)
rlm@470 1089 [cam-width cam-height]
rlm@470 1090 ;;[640 480] ;; graphics card on laptop doesn't support
rlm@470 1091 ;; arbitrary dimensions.
rlm@470 1092 (eye-dimensions eye)
rlm@470 1093 cam (Camera. cam-width cam-height)
rlm@470 1094 rot (.getWorldRotation eye)]
rlm@470 1095 (.setLocation cam (.getWorldTranslation eye))
rlm@470 1096 (.lookAtDirection
rlm@470 1097 cam ; this part is not a mistake and
rlm@470 1098 (.mult rot Vector3f/UNIT_X) ; is consistent with using Z in
rlm@470 1099 (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
rlm@470 1100 (.setFrustumPerspective
rlm@470 1101 cam (float 45)
rlm@470 1102 (float (/ (.getWidth cam) (.getHeight cam)))
rlm@470 1103 (float 1)
rlm@470 1104 (float 1000))
rlm@470 1105 (bind-sense target cam) cam))
#+END_SRC
rlm@470 1106 #+end_listing
rlm@470 1107
rlm@470 1108 *** Simulated Retina
rlm@470 1109
rlm@470 1110 An eye is a surface (the retina) which contains many discrete
rlm@470 1111 sensors to detect light. These sensors can have different
rlm@470 1112 light-sensing properties. In humans, each discrete sensor is
rlm@470 1113 sensitive to red, blue, green, or gray. These different types of
rlm@470 1114 sensors can have different spatial distributions along the retina.
rlm@470 1115 In humans, there is a fovea in the center of the retina which has
rlm@470 1116 a very high density of color sensors, and a blind spot which has
rlm@470 1117 no sensors at all. Sensor density decreases in proportion to
rlm@470 1118 distance from the fovea.
rlm@470 1119
rlm@470 1120 I want to be able to model any retinal configuration, so my
rlm@470 1121 eye-nodes in blender contain metadata pointing to images that
rlm@470 1122 describe the precise position of the individual sensors using
rlm@470 1123 white pixels. The meta-data also describes the precise light
rlm@470 1124 sensitivity of the sensors marked in the image. An eye can
rlm@470 1125 contain any number of these images. For example, the metadata for
rlm@470 1126 an eye might look like this:
rlm@470 1127
rlm@470 1128 #+begin_src clojure
rlm@470 1129 {0xFF0000 "Models/test-creature/retina-small.png"}
rlm@470 1130 #+end_src
rlm@470 1131
rlm@470 1132 #+caption: An example retinal profile image. White pixels are
rlm@470 1133 #+caption: photo-sensitive elements. The distribution of white
rlm@470 1134 #+caption: pixels is denser in the middle and falls off at the
rlm@470 1135 #+caption: edges and is inspired by the human retina.
rlm@470 1136 #+name: retina
rlm@470 1137 #+ATTR_LaTeX: :width 10cm
rlm@470 1138 [[./images/retina-small.png]]
rlm@470 1139
rlm@470 1140 Together, the number 0xFF0000 and the image above describe
rlm@470 1141 the placement of red-sensitive sensory elements.
rlm@470 1142
rlm@470 1143 Meta-data to very crudely approximate a human eye might be
rlm@470 1144 something like this:
rlm@470 1145
rlm@470 1146 #+begin_src clojure
rlm@470 1147 (let [retinal-profile "Models/test-creature/retina-small.png"]
rlm@470 1148 {0xFF0000 retinal-profile
rlm@470 1149 0x00FF00 retinal-profile
rlm@470 1150 0x0000FF retinal-profile
rlm@470 1151 0xFFFFFF retinal-profile})
rlm@470 1152 #+end_src
rlm@470 1153
rlm@470 1154 The numbers that serve as keys in the map determine a sensor's
rlm@470 1155 relative sensitivity to the channels red, green, and blue. These
rlm@470 1156 sensitivity values are packed into an integer in the order
rlm@470 1157 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
rlm@470 1158 image are added together with these sensitivities as linear
rlm@470 1159 weights. Therefore, 0xFF0000 means sensitive to red only while
rlm@470 1160 0xFFFFFF means sensitive to all colors equally (gray).
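
As an illustration, a function along the lines of the =pixel-sense=
used below might compute this weighted response as follows (a sketch
only; the actual =CORTEX= implementation may normalize differently):

#+begin_src clojure
(defn pixel-sense-sketch
  "Weight the red, green, and blue components of pixel by the
   corresponding 8-bit fields of sensitivity, returning an activation
   value normalized to the range [0, 1]."
  [sensitivity pixel]
  (let [s-r (bit-and 0xFF (bit-shift-right sensitivity 16))
        s-g (bit-and 0xFF (bit-shift-right sensitivity 8))
        s-b (bit-and 0xFF sensitivity)
        p-r (bit-and 0xFF (bit-shift-right pixel 16))
        p-g (bit-and 0xFF (bit-shift-right pixel 8))
        p-b (bit-and 0xFF pixel)
        ;; maximum possible weighted sum, used for normalization
        total (* 255.0 (+ s-r s-g s-b))]
    (if (zero? total)
      0.0
      (/ (+ (* s-r p-r) (* s-g p-g) (* s-b p-b)) total))))
#+end_src

With this scheme a pure red pixel seen through a 0xFF0000 sensor
yields full activation, while the same pixel seen through a 0xFFFFFF
(gray) sensor yields one third activation.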
rlm@470 1161
rlm@470 1162 #+caption: This is the core of vision in =CORTEX=. A given eye node
rlm@470 1163 #+caption: is converted into a function that returns visual
rlm@470 1164 #+caption: information from the simulation.
rlm@471 1165 #+name: vision-kernel
rlm@470 1166 #+begin_listing clojure
rlm@508 1167 #+BEGIN_SRC clojure
rlm@470 1168 (defn vision-kernel
rlm@470 1169 "Returns a list of functions, each of which will return a color
rlm@470 1170 channel's worth of visual information when called inside a running
rlm@470 1171 simulation."
rlm@470 1172 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
rlm@470 1173 (let [retinal-map (retina-sensor-profile eye)
rlm@470 1174 camera (add-eye! creature eye)
rlm@470 1175 vision-image
rlm@470 1176 (atom
rlm@470 1177 (BufferedImage. (.getWidth camera)
rlm@470 1178 (.getHeight camera)
rlm@470 1179 BufferedImage/TYPE_BYTE_BINARY))
rlm@470 1180 register-eye!
rlm@470 1181 (runonce
rlm@470 1182 (fn [world]
rlm@470 1183 (add-camera!
rlm@470 1184 world camera
rlm@470 1185 (let [counter (atom 0)]
rlm@470 1186 (fn [r fb bb bi]
rlm@470 1187 (if (zero? (rem (swap! counter inc) (inc skip)))
rlm@470 1188 (reset! vision-image
rlm@470 1189 (BufferedImage! r fb bb bi))))))))]
rlm@470 1190 (vec
rlm@470 1191 (map
rlm@470 1192 (fn [[key image]]
rlm@470 1193 (let [whites (white-coordinates image)
rlm@470 1194 topology (vec (collapse whites))
rlm@470 1195 sensitivity (sensitivity-presets key key)]
rlm@470 1196 (attached-viewport.
rlm@470 1197 (fn [world]
rlm@470 1198 (register-eye! world)
rlm@470 1199 (vector
rlm@470 1200 topology
rlm@470 1201 (vec
rlm@470 1202 (for [[x y] whites]
rlm@470 1203 (pixel-sense
rlm@470 1204 sensitivity
rlm@470 1205 (.getRGB @vision-image x y))))))
rlm@470 1206 register-eye!)))
rlm@470 1207 retinal-map))))
rlm@508 1208 #+END_SRC
rlm@470 1209 #+end_listing
rlm@470 1210
rlm@470 1211 Note that since each of the functions generated by =vision-kernel=
rlm@470 1212 shares the same =register-eye!= function, the eye will be
rlm@470 1213 registered only once, the first time any of the functions from the
rlm@470 1214 list returned by =vision-kernel= is called. Each of the functions
rlm@470 1215 returned by =vision-kernel= also allows access to the =Viewport=
rlm@470 1216 through which it receives images.
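
The =runonce= idiom that makes this work is simple; a minimal sketch
(the actual =CORTEX= version may differ, for example in how it
handles concurrent calls) looks like this:

#+begin_src clojure
(defn runonce-sketch
  "Wrap function so that its body runs only on the first call; later
   calls simply return the cached result of that first call."
  [function]
  (let [done?  (atom false)
        result (atom nil)]
    (fn [& args]
      (when-not @done?
        (reset! result (apply function args))
        (reset! done? true))
      @result)))
#+end_src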
rlm@470 1217
rlm@470 1218 All the hard work has been done; all that remains is to apply
rlm@470 1219 =vision-kernel= to each eye in the creature and gather the results
rlm@470 1220 into one list of functions.
rlm@470 1221
rlm@470 1222
rlm@470 1223 #+caption: With =vision!=, =CORTEX= is already a fine simulation
rlm@470 1224 #+caption: environment for experimenting with different types of
rlm@470 1225 #+caption: eyes.
rlm@470 1226 #+name: vision!
rlm@470 1227 #+begin_listing clojure
rlm@508 1228 #+BEGIN_SRC clojure
rlm@470 1229 (defn vision!
rlm@470 1230 "Returns a list of functions, each of which returns visual sensory
rlm@470 1231 data when called inside a running simulation."
rlm@470 1232 [#^Node creature & {skip :skip :or {skip 0}}]
rlm@470 1233 (reduce
rlm@470 1234 concat
rlm@470 1235 (for [eye (eyes creature)]
rlm@470 1236 (vision-kernel creature eye))))
rlm@508 1237 #+END_SRC
rlm@470 1238 #+end_listing
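
As a usage sketch (hypothetical code, not part of =CORTEX= itself),
the functions returned by =vision!= can be polled from inside a
running simulation like this:

#+begin_src clojure
(defn summarize-vision
  "One-off illustration: poll every visual channel of every eye once
   and report how many sensors each channel has. Calling a channel
   function also registers its eye with the world on first use."
  [world #^Node creature]
  (doseq [channel (vision! creature)]
    (let [[topology data] (channel world)]
      (println (count topology) "sensors; first activation:" (first data)))))
#+end_src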
rlm@470 1239
rlm@471 1240 #+caption: Simulated vision with a test creature and the
rlm@471 1241 #+caption: human-like eye approximation. Notice how each channel
rlm@471 1242 #+caption: of the eye responds differently to the differently
rlm@471 1243 #+caption: colored balls.
rlm@471 1244 #+name: worm-vision-test
rlm@471 1245 #+ATTR_LaTeX: :width 13cm
rlm@471 1246 [[./images/worm-vision.png]]
rlm@470 1247
rlm@471 1248 The vision code is not much more complicated than the body code,
rlm@471 1249 and enables multiple further paths for simulated vision. For
rlm@471 1250 example, it is quite easy to create bifocal vision -- you just
rlm@471 1251 make two eyes next to each other in blender! It is also possible
rlm@471 1252 to encode vision transforms in the retinal files. For example, the
rlm@471 1253 human-like retina file in figure \ref{retina} approximates a
rlm@471 1254 log-polar transform.
rlm@470 1255
rlm@471 1256 This vision code has already been absorbed by the jMonkeyEngine
rlm@471 1257 community and is now (in modified form) part of a system for
rlm@471 1258 capturing in-game video to a file.
rlm@470 1259
rlm@508 1260 ** Hearing is hard; =CORTEX= does it right
rlm@473 1261
rlm@472 1262 At the end of this section I will have simulated ears that work the
rlm@472 1263 same way as the simulated eyes in the last section. I will be able to
rlm@472 1264 place any number of ear-nodes in a blender file, and they will bind to
rlm@472 1265 the closest physical object and follow it as it moves around. Each ear
rlm@472 1266 will provide access to the sound data it picks up between every frame.
rlm@472 1267
rlm@472 1268 Hearing is one of the more difficult senses to simulate, because there
rlm@472 1269 is less support for obtaining the actual sound data that is processed
rlm@472 1270 by jMonkeyEngine3. There is no "split-screen" support for rendering
rlm@472 1271 sound from different points of view, and there is no way to directly
rlm@472 1272 access the rendered sound data.
rlm@472 1273
rlm@472 1274 =CORTEX='s hearing is unique in that it places no limit on the
rlm@472 1275 number of simultaneous listeners in a simulation. As far as I
rlm@472 1276 know, there is no other system that supports multiple listeners,
rlm@472 1277 and the sound demo at the end of this section is the first time
rlm@472 1278 it's been done in a video game environment.
rlm@472 1279
rlm@472 1280 *** Brief Description of jMonkeyEngine's Sound System
rlm@472 1281
rlm@472 1282 jMonkeyEngine's sound system works as follows:
rlm@472 1283
rlm@472 1284 - jMonkeyEngine uses the =AppSettings= for the particular
rlm@472 1285 application to determine what sort of =AudioRenderer= should be
rlm@472 1286 used (see the settings sketch after this list).
rlm@472 1287 - Although some support is provided for multiple =AudioRenderer=
rlm@472 1288 backends, jMonkeyEngine at the time of this writing will either
rlm@472 1289 pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
rlm@472 1290 - jMonkeyEngine tries to figure out what sort of system you're
rlm@472 1291 running and extracts the appropriate native libraries.
rlm@472 1292 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (Lightweight Java Game
rlm@472 1293 Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
rlm@472 1294 - =OpenAL= renders the 3D sound and feeds the rendered sound
rlm@472 1295 directly to any of various sound output devices with which it
rlm@472 1296 knows how to communicate.
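
For reference, requesting the =LwjglAudioRenderer= is just a matter
of setting the audio renderer in the application's =AppSettings=. A
minimal sketch (where =app= stands for the simulation's
=Application= instance) might be:

#+begin_src clojure
;; app is assumed to be the jMonkeyEngine Application being configured.
(let [settings (AppSettings. true)]
  (.setAudioRenderer settings AppSettings/LWJGL_OPENAL)
  (.setSettings app settings))
#+end_src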
rlm@472 1297
rlm@472 1298 A consequence of this is that there's no way to access the actual
rlm@472 1299 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
rlm@472 1300 one /listener/ (it renders sound data from only one perspective),
rlm@472 1301 which normally isn't a problem for games, but becomes a problem
rlm@472 1302 when trying to make multiple AI creatures that can each hear the
rlm@472 1303 world from a different perspective.
rlm@472 1304
rlm@472 1305 To make many AI creatures in jMonkeyEngine that can each hear the
rlm@472 1306 world from their own perspective, or to make a single creature with
rlm@472 1307 many ears, it is necessary to go all the way back to =OpenAL= and
rlm@472 1308 implement support for simulated hearing there.
rlm@472 1309
rlm@472 1310 *** Extending =OpenAL=
rlm@472 1311
rlm@472 1312 Extending =OpenAL= to support multiple listeners requires 500
rlm@472 1313 lines of =C= code and is too hairy to mention here. Instead, I
rlm@472 1314 will show a small amount of extension code and go over the high
rlm@472 1315 level strategy. Full source is of course available with the
rlm@472 1316 =CORTEX= distribution if you're interested.
rlm@472 1317
rlm@472 1318 =OpenAL= goes to great lengths to support many different systems,
rlm@472 1319 all with different sound capabilities and interfaces. It
rlm@472 1320 accomplishes this difficult task by providing code for many
rlm@472 1321 different sound backends in pseudo-objects called /Devices/.
rlm@472 1322 There's a device for the Linux Open Sound System and the Advanced
rlm@472 1323 Linux Sound Architecture, there's one for Direct Sound on Windows,
rlm@472 1324 and there's even one for Solaris. =OpenAL= solves the problem of
rlm@472 1325 platform independence by providing all these Devices.
rlm@472 1326
rlm@472 1327 Wrapper libraries such as LWJGL are free to examine the system on
rlm@472 1328 which they are running and then select an appropriate device for
rlm@472 1329 that system.
rlm@472 1330
rlm@472 1331 There are also a few "special" devices that don't interface with
rlm@472 1332 any particular system. These include the Null Device, which
rlm@472 1333 doesn't do anything, and the Wave Device, which writes whatever
rlm@472 1334 sound it receives to a file, if everything has been set up
rlm@472 1335 correctly when configuring =OpenAL=.
rlm@472 1336
rlm@472 1337 Actual mixing (Doppler shift and distance- and environment-based
rlm@472 1338 attenuation) of the sound data happens in the Devices, and they
rlm@472 1339 are the only point in the sound rendering process where this data
rlm@472 1340 is available.
rlm@472 1341
rlm@472 1342 Therefore, in order to support multiple listeners, and get the
rlm@472 1343 sound data in a form that the AIs can use, it is necessary to
rlm@472 1344 create a new Device which supports this feature.
rlm@472 1345
rlm@472 1346 Adding a device to OpenAL is rather tricky -- there are five
rlm@472 1347 separate files in the =OpenAL= source tree that must be modified
rlm@472 1348 to do so. I named my device the "Multiple Audio Send" Device, or
rlm@472 1349 =Send= Device for short, since it sends audio data back to the
rlm@472 1350 calling application like an Aux-Send cable on a mixing board.
rlm@472 1351
rlm@472 1352 The main idea behind the Send device is to take advantage of the
rlm@472 1353 fact that LWJGL only manages one /context/ when using OpenAL. A
rlm@472 1354 /context/ is like a container that holds samples and keeps track
rlm@472 1355 of where the listener is. In order to support multiple listeners,
rlm@472 1356 the Send device identifies the LWJGL context as the master
rlm@472 1357 context, and creates any number of slave contexts to represent
rlm@472 1358 additional listeners. Every time the device renders sound, it
rlm@472 1359 synchronizes every source from the master LWJGL context to the
rlm@472 1360 slave contexts. Then, it renders each context separately, using a
rlm@472 1361 different listener for each one. The rendered sound is made
rlm@472 1362 available via JNI to jMonkeyEngine.
rlm@472 1363
rlm@472 1364 Switching between contexts is not the normal operation of a
rlm@472 1365 Device, and one of the problems with doing so is that a Device
rlm@472 1366 normally keeps around a few pieces of state such as the
rlm@472 1367 =ClickRemoval= array, which will become corrupted if the
rlm@472 1368 contexts are not rendered in parallel. The solution is to create a
rlm@472 1369 copy of this normally global device state for each context, and
rlm@472 1370 copy it back and forth into and out of the actual device state
rlm@472 1371 whenever a context is rendered.
rlm@472 1372
rlm@472 1373 The core of the =Send= device is the =syncSources= function, which
rlm@472 1374 does the job of copying all relevant data from one context to
rlm@472 1375 another.
rlm@472 1376
rlm@472 1377 #+caption: Program for extending =OpenAL= to support multiple
rlm@472 1378 #+caption: listeners via context copying/switching.
rlm@472 1379 #+name: sync-openal-sources
rlm@509 1380 #+begin_listing c
rlm@509 1381 #+BEGIN_SRC c
rlm@472 1382 void syncSources(ALsource *masterSource, ALsource *slaveSource,
rlm@472 1383 ALCcontext *masterCtx, ALCcontext *slaveCtx){
rlm@472 1384 ALuint master = masterSource->source;
rlm@472 1385 ALuint slave = slaveSource->source;
rlm@472 1386 ALCcontext *current = alcGetCurrentContext();
rlm@472 1387
rlm@472 1388 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
rlm@472 1389 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
rlm@472 1390 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
rlm@472 1391 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
rlm@472 1392 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
rlm@472 1393 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
rlm@472 1394 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
rlm@472 1395 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
rlm@472 1396 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
rlm@472 1397 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
rlm@472 1398 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
rlm@472 1399 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
rlm@472 1400 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
rlm@472 1401
rlm@472 1402 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
rlm@472 1403 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
rlm@472 1404 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
rlm@472 1405
rlm@472 1406 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
rlm@472 1407 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
rlm@472 1408
rlm@472 1409 alcMakeContextCurrent(masterCtx);
rlm@472 1410 ALint source_type;
rlm@472 1411 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
rlm@472 1412
rlm@472 1413 // Only static sources are currently synchronized!
rlm@472 1414 if (AL_STATIC == source_type){
rlm@472 1415 ALint master_buffer;
rlm@472 1416 ALint slave_buffer;
rlm@472 1417 alGetSourcei(master, AL_BUFFER, &master_buffer);
rlm@472 1418 alcMakeContextCurrent(slaveCtx);
rlm@472 1419 alGetSourcei(slave, AL_BUFFER, &slave_buffer);
rlm@472 1420 if (master_buffer != slave_buffer){
rlm@472 1421 alSourcei(slave, AL_BUFFER, master_buffer);
rlm@472 1422 }
rlm@472 1423 }
rlm@472 1424
rlm@472 1425 // Synchronize the state of the two sources.
rlm@472 1426 alcMakeContextCurrent(masterCtx);
rlm@472 1427 ALint masterState;
rlm@472 1428 ALint slaveState;
rlm@472 1429
rlm@472 1430 alGetSourcei(master, AL_SOURCE_STATE, &masterState);
rlm@472 1431 alcMakeContextCurrent(slaveCtx);
rlm@472 1432 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
rlm@472 1433
rlm@472 1434 if (masterState != slaveState){
rlm@472 1435 switch (masterState){
rlm@472 1436 case AL_INITIAL : alSourceRewind(slave); break;
rlm@472 1437 case AL_PLAYING : alSourcePlay(slave); break;
rlm@472 1438 case AL_PAUSED : alSourcePause(slave); break;
rlm@472 1439 case AL_STOPPED : alSourceStop(slave); break;
rlm@472 1440 }
rlm@472 1441 }
rlm@472 1442 // Restore whatever context was previously active.
rlm@472 1443 alcMakeContextCurrent(current);
rlm@472 1444 }
rlm@508 1445 #+END_SRC
rlm@472 1446 #+end_listing
rlm@472 1447
rlm@472 1448 With this special context-switching device, and some ugly JNI
rlm@472 1449 bindings that are not worth mentioning, =CORTEX= gains the ability
rlm@472 1450 to access multiple sound streams from =OpenAL=.
rlm@472 1451
rlm@472 1452 #+caption: Program to create an ear from a blender empty node. The ear
rlm@472 1453 #+caption: follows around the nearest physical object and passes
rlm@472 1454 #+caption: all sensory data to a continuation function.
rlm@472 1455 #+name: add-ear
rlm@472 1456 #+begin_listing clojure
rlm@508 1457 #+BEGIN_SRC clojure
rlm@472 1458 (defn add-ear!
rlm@472 1459 "Create a Listener centered on the current position of 'ear
rlm@472 1460 which follows the closest physical node in 'creature and
rlm@472 1461 sends sound data to 'continuation."
rlm@472 1462 [#^Application world #^Node creature #^Spatial ear continuation]
rlm@472 1463 (let [target (closest-node creature ear)
rlm@472 1464 lis (Listener.)
rlm@472 1465 audio-renderer (.getAudioRenderer world)
rlm@472 1466 sp (hearing-pipeline continuation)]
rlm@472 1467 (.setLocation lis (.getWorldTranslation ear))
rlm@472 1468 (.setRotation lis (.getWorldRotation ear))
rlm@472 1469 (bind-sense target lis)
rlm@472 1470 (update-listener-velocity! target lis)
rlm@472 1471 (.addListener audio-renderer lis)
rlm@472 1472 (.registerSoundProcessor audio-renderer lis sp)))
rlm@508 1473 #+END_SRC
rlm@472 1474 #+end_listing
rlm@472 1475
rlm@472 1476 The =Send= device, unlike most of the other devices in =OpenAL=,
rlm@472 1477 does not render sound unless asked. This enables the system to
rlm@472 1478 slow down or speed up depending on the needs of the AIs who are
rlm@472 1479 using it to listen. If the device tried to render samples in
rlm@472 1480 real-time, a complicated AI whose mind takes 100 seconds of
rlm@472 1481 computer time to simulate 1 second of AI-time would miss almost
rlm@472 1482 all of the sound in its environment!
rlm@472 1483
rlm@472 1484 #+caption: Program to enable arbitrary hearing in =CORTEX=
rlm@472 1485 #+name: hearing
rlm@472 1486 #+begin_listing clojure
rlm@508 1487 #+BEGIN_SRC clojure
rlm@472 1488 (defn hearing-kernel
rlm@472 1489 "Returns a function which returns auditory sensory data when called
rlm@472 1490 inside a running simulation."
rlm@472 1491 [#^Node creature #^Spatial ear]
rlm@472 1492 (let [hearing-data (atom [])
rlm@472 1493 register-listener!
rlm@472 1494 (runonce
rlm@472 1495 (fn [#^Application world]
rlm@472 1496 (add-ear!
rlm@472 1497 world creature ear
rlm@472 1498 (comp #(reset! hearing-data %)
rlm@472 1499 byteBuffer->pulse-vector))))]
rlm@472 1500 (fn [#^Application world]
rlm@472 1501 (register-listener! world)
rlm@472 1502 (let [data @hearing-data
rlm@472 1503 topology
rlm@472 1504 (vec (map #(vector % 0) (range 0 (count data))))]
rlm@472 1505 [topology data]))))
rlm@472 1506
rlm@472 1507 (defn hearing!
rlm@472 1508 "Endow the creature in a particular world with the sense of
rlm@472 1509 hearing. Will return a sequence of functions, one for each ear,
rlm@472 1510 which when called will return the auditory data from that ear."
rlm@472 1511 [#^Node creature]
rlm@472 1512 (for [ear (ears creature)]
rlm@472 1513 (hearing-kernel creature ear)))
rlm@508 1514 #+END_SRC
rlm@472 1515 #+end_listing
rlm@472 1516
rlm@472 1517 Armed with these functions, =CORTEX= is able to test possibly the
rlm@472 1518 first ever instance of multiple listeners in a video game engine
rlm@472 1519 based simulation!
rlm@472 1520
rlm@472 1521 #+caption: Here a simple creature responds to sound by changing
rlm@472 1522 #+caption: its color from gray to green when the total volume
rlm@472 1523 #+caption: goes over a threshold.
rlm@472 1524 #+name: sound-test
rlm@472 1525 #+begin_listing java
rlm@508 1526 #+BEGIN_SRC java
rlm@472 1527 /**
rlm@472 1528 * Respond to sound! This is the brain of an AI entity that
rlm@472 1529 * hears its surroundings and reacts to them.
rlm@472 1530 */
rlm@472 1531 public void process(ByteBuffer audioSamples,
rlm@472 1532 int numSamples, AudioFormat format) {
rlm@472 1533 audioSamples.clear();
rlm@472 1534 byte[] data = new byte[numSamples];
rlm@472 1535 float[] out = new float[numSamples];
rlm@472 1536 audioSamples.get(data);
rlm@472 1537 FloatSampleTools.
rlm@472 1538 byte2floatInterleaved
rlm@472 1539 (data, 0, out, 0, numSamples/format.getFrameSize(), format);
rlm@472 1540
rlm@472 1541 float max = Float.NEGATIVE_INFINITY;
rlm@472 1542 for (float f : out){if (f > max) max = f;}
rlm@472 1543 audioSamples.clear();
rlm@472 1544
rlm@472 1545 if (max > 0.1){
rlm@472 1546 entity.getMaterial().setColor("Color", ColorRGBA.Green);
rlm@472 1547 }
rlm@472 1548 else {
rlm@472 1549 entity.getMaterial().setColor("Color", ColorRGBA.Gray);
rlm@472 1550 }
}
rlm@508 1551 #+END_SRC
rlm@472 1552 #+end_listing
rlm@472 1553
rlm@472 1554 #+caption: First ever simulation of multiple listeners in =CORTEX=.
rlm@472 1555 #+caption: Each cube is a creature which processes sound data with
rlm@472 1556 #+caption: the =process= function from listing \ref{sound-test}.
rlm@472 1557 #+caption: The ball is constantly emitting a pure tone of
rlm@472 1558 #+caption: constant volume. As it approaches the cubes, they each
rlm@472 1559 #+caption: change color in response to the sound.
rlm@472 1560 #+name: sound-cubes
rlm@472 1561 #+ATTR_LaTeX: :width 10cm
rlm@509 1562 [[./images/java-hearing-test.png]]
rlm@472 1563
rlm@472 1564 This system of hearing has also been co-opted by the
rlm@472 1565 jMonkeyEngine3 community and is used to record audio for demo
rlm@472 1566 videos.
rlm@472 1567
rlm@505 1568 ** Touch uses hundreds of hair-like elements
rlm@436 1569
rlm@474 1570 Touch is critical to navigation and spatial reasoning and as such I
rlm@474 1571 need a simulated version of it to give to my AI creatures.
rlm@474 1572
rlm@474 1573 Human skin has a wide array of touch sensors, each of which
rlm@474 1574 specialize in detecting different vibrational modes and pressures.
rlm@474 1575 These sensors can integrate a vast expanse of skin (i.e. your
rlm@474 1576 entire palm), or a tiny patch of skin at the tip of your finger.
rlm@474 1577 The hairs of the skin help detect objects before they even come
rlm@474 1578 into contact with the skin proper.
rlm@474 1579
rlm@474 1580 However, touch in my simulated world can not exactly correspond to
rlm@474 1581 human touch because my creatures are made out of completely rigid
rlm@474 1582 segments that don't deform like human skin.
rlm@474 1583
rlm@474 1584 Instead of measuring deformation or vibration, I surround each
rlm@474 1585 rigid part with a plenitude of hair-like objects (/feelers/) which
rlm@474 1586 do not interact with the physical world. Physical objects can pass
rlm@474 1587 through them with no effect. The feelers are able to tell when
rlm@474 1588 other objects pass through them, and they constantly report how
rlm@474 1589 much of their extent is covered. So even though the creature's body
rlm@474 1590 parts do not deform, the feelers create a margin around those body
rlm@474 1591 parts which achieves a sense of touch which is a hybrid between a
rlm@474 1592 human's sense of deformation and sense from hairs.
rlm@474 1593
rlm@474 1594 Implementing touch in jMonkeyEngine follows a different technical
rlm@474 1595 route than vision and hearing. Those two senses piggybacked off
rlm@474 1596 jMonkeyEngine's 3D audio and video rendering subsystems. To
rlm@474 1597 simulate touch, I use jMonkeyEngine's physics system to execute
rlm@474 1598 many small collision detections, one for each feeler. The placement
rlm@474 1599 of the feelers is determined by a UV-mapped image which shows where
rlm@474 1600 each feeler should be on the 3D surface of the body.
rlm@474 1601
rlm@477 1602 *** Defining Touch Meta-Data in Blender
rlm@474 1603
rlm@474 1604 Each geometry can have a single UV map which describes the
rlm@474 1605 position of the feelers which will constitute its sense of touch.
rlm@474 1606 This image path is stored under the ``touch'' key. The image itself
rlm@474 1607 is black and white, with black meaning a feeler length of 0 (no
rlm@474 1608 feeler is present) and white meaning a feeler length of =scale=,
rlm@474 1609 which is a float stored under the key "scale".
rlm@474 1610
rlm@475 1611 #+caption: Touch does not use empty nodes to store metadata,
rlm@475 1612 #+caption: because the metadata of each solid part of a
rlm@475 1613 #+caption: creature's body is sufficient.
rlm@475 1614 #+name: touch-meta-data
rlm@475 1615 #+begin_listing clojure
rlm@477 1616 #+BEGIN_SRC clojure
rlm@474 1617 (defn tactile-sensor-profile
rlm@474 1618 "Return the touch-sensor distribution image in BufferedImage format,
rlm@474 1619 or nil if it does not exist."
rlm@474 1620 [#^Geometry obj]
rlm@474 1621 (if-let [image-path (meta-data obj "touch")]
rlm@474 1622 (load-image image-path)))
rlm@474 1623
rlm@474 1624 (defn tactile-scale
rlm@474 1625 "Return the length of each feeler. Default scale is 0.1
rlm@474 1626 jMonkeyEngine units."
rlm@474 1627 [#^Geometry obj]
rlm@474 1628 (if-let [scale (meta-data obj "scale")]
rlm@474 1629 scale 0.1))
rlm@477 1630 #+END_SRC
rlm@475 1631 #+end_listing
rlm@474 1632
rlm@475 1633 Here is an example of a UV-map which specifies the position of
rlm@475 1634 touch sensors along the surface of the upper segment of a fingertip.
rlm@474 1635
rlm@475 1636 #+caption: This is the tactile-sensor-profile for the upper segment
rlm@475 1637 #+caption: of a fingertip. It defines regions of high touch sensitivity
rlm@475 1638 #+caption: (where there are many white pixels) and regions of low
rlm@475 1639 #+caption: sensitivity (where white pixels are sparse).
rlm@486 1640 #+name: fingertip-UV
rlm@477 1641 #+ATTR_LaTeX: :width 13cm
rlm@477 1642 [[./images/finger-UV.png]]
rlm@474 1643
rlm@477 1644 *** Implementation Summary
rlm@474 1645
rlm@474 1646 To simulate touch there are three conceptual steps. For each solid
rlm@474 1647 object in the creature, you first have to get the UV image and scale
rlm@474 1648 parameter which define the position and length of the feelers.
rlm@474 1649 Then, you use the triangles which comprise the mesh and the UV
rlm@474 1650 data stored in the mesh to determine the world-space position and
rlm@474 1651 orientation of each feeler. Then once every frame, update these
rlm@474 1652 positions and orientations to match the current position and
rlm@474 1653 orientation of the object, and use physics collision detection to
rlm@474 1654 gather tactile data.
rlm@474 1655
rlm@474 1656 Extracting the meta-data has already been described. The third
rlm@474 1657 step, physics collision detection, is handled in =touch-kernel=.
rlm@474 1658 Translating the positions and orientations of the feelers from the
rlm@474 1659 UV-map to world-space is itself a three-step process.
rlm@474 1660
rlm@475 1661 - Find the triangles which make up the mesh in pixel-space and in
rlm@505 1662 world-space. \\(=triangles=, =pixel-triangles=).
rlm@474 1663
rlm@475 1664 - Find the coordinates of each feeler in world-space. These are
rlm@475 1665 the origins of the feelers. (=feeler-origins=).
rlm@474 1666
rlm@475 1667 - Calculate the normals of the triangles in world space, and add
rlm@475 1668 them to each of the origins of the feelers. These are the
rlm@475 1669 normalized coordinates of the tips of the feelers.
rlm@475 1670 (=feeler-tips=).
rlm@474 1671
rlm@477 1672 *** Triangle Math
rlm@474 1673
rlm@475 1674 The rigid objects which make up a creature have an underlying
rlm@475 1675 =Geometry=, which is a =Mesh= plus a =Material= and other
rlm@475 1676 important data involved with displaying the object.
rlm@475 1677
rlm@475 1678 A =Mesh= is composed of =Triangles=, and each =Triangle= has three
rlm@475 1679 vertices which have coordinates in world space and UV space.
rlm@475 1680
rlm@475 1681 Here, =triangles= gets all the world-space triangles which
rlm@475 1682 comprise a mesh, while =pixel-triangles= gets those same triangles
rlm@475 1683 expressed in pixel coordinates (which are UV coordinates scaled to
rlm@475 1684 fit the height and width of the UV image).
rlm@474 1685
rlm@475 1686 #+caption: Programs to extract triangles from a geometry and get
rlm@475 1687 #+caption: their vertices in both world and UV-coordinates.
rlm@475 1688 #+name: get-triangles
rlm@475 1689 #+begin_listing clojure
rlm@477 1690 #+BEGIN_SRC clojure
rlm@474 1691 (defn triangle
rlm@474 1692 "Get the triangle specified by triangle-index from the mesh."
rlm@474 1693 [#^Geometry geo triangle-index]
rlm@474 1694 (triangle-seq
rlm@474 1695 (let [scratch (Triangle.)]
rlm@474 1696 (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))
rlm@474 1697
rlm@474 1698 (defn triangles
rlm@474 1699 "Return a sequence of all the Triangles which comprise a given
rlm@474 1700 Geometry."
rlm@474 1701 [#^Geometry geo]
rlm@474 1702 (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))
rlm@474 1703
rlm@474 1704 (defn triangle-vertex-indices
rlm@474 1705 "Get the triangle vertex indices of a given triangle from a given
rlm@474 1706 mesh."
rlm@474 1707 [#^Mesh mesh triangle-index]
rlm@474 1708 (let [indices (int-array 3)]
rlm@474 1709 (.getTriangle mesh triangle-index indices)
rlm@474 1710 (vec indices)))
rlm@474 1711
rlm@475 1712 (defn vertex-UV-coord
rlm@474 1713 "Get the UV-coordinates of the vertex named by vertex-index"
rlm@474 1714 [#^Mesh mesh vertex-index]
rlm@474 1715 (let [UV-buffer
rlm@474 1716 (.getData
rlm@474 1717 (.getBuffer
rlm@474 1718 mesh
rlm@474 1719 VertexBuffer$Type/TexCoord))]
rlm@474 1720 [(.get UV-buffer (* vertex-index 2))
rlm@474 1721 (.get UV-buffer (+ 1 (* vertex-index 2)))]))
rlm@474 1722
rlm@474 1723 (defn pixel-triangle [#^Geometry geo image index]
rlm@474 1724 (let [mesh (.getMesh geo)
rlm@474 1725 width (.getWidth image)
rlm@474 1726 height (.getHeight image)]
rlm@474 1727 (vec (map (fn [[u v]] (vector (* width u) (* height v)))
rlm@474 1728 (map (partial vertex-UV-coord mesh)
rlm@474 1729 (triangle-vertex-indices mesh index))))))
rlm@474 1730
rlm@474 1731 (defn pixel-triangles
rlm@474 1732 "The pixel-space triangles of the Geometry, in the same order as
rlm@474 1733 (triangles geo)"
rlm@474 1734 [#^Geometry geo image]
rlm@474 1735 (let [height (.getHeight image)
rlm@474 1736 width (.getWidth image)]
rlm@474 1737 (map (partial pixel-triangle geo image)
rlm@474 1738 (range (.getTriangleCount (.getMesh geo))))))
rlm@477 1739 #+END_SRC
rlm@475 1740 #+end_listing
rlm@475 1741
rlm@474 1742 *** The Affine Transform from one Triangle to Another
rlm@474 1743
rlm@475 1744 =pixel-triangles= gives us the mesh triangles expressed in pixel
rlm@475 1745 coordinates and =triangles= gives us the mesh triangles expressed
rlm@475 1746 in world coordinates. The tactile-sensor-profile gives the
rlm@475 1747 position of each feeler in pixel-space. In order to convert
rlm@475 1748 pixel-space coordinates into world-space coordinates we need
rlm@475 1749 something that takes coordinates on the surface of one triangle
rlm@475 1750 and gives the corresponding coordinates on the surface of another
rlm@475 1751 triangle.
rlm@475 1752
rlm@475 1753 Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which means any triangle can be transformed
rlm@475 1754 into any other by a combination of translation, rotation,
rlm@475 1755 scaling, and shearing. The affine transformation from one triangle
rlm@475 1756 to another is readily computable if the triangle is expressed in
rlm@475 1757 terms of a $4 \times 4$ matrix.
rlm@476 1758
rlm@476 1759 #+BEGIN_LaTeX
rlm@476 1760 $$
rlm@475 1761 \begin{bmatrix}
rlm@475 1762 x_1 & x_2 & x_3 & n_x \\
rlm@475 1763 y_1 & y_2 & y_3 & n_y \\
rlm@475 1764 z_1 & z_2 & z_3 & n_z \\
rlm@475 1765 1 & 1 & 1 & 1
rlm@475 1766 \end{bmatrix}
rlm@476 1767 $$
rlm@476 1768 #+END_LaTeX
rlm@475 1769
rlm@475 1770 Here, the first three columns of the matrix are the vertices of
rlm@475 1771 the triangle. The last column is the right-handed unit normal of
rlm@475 1772 the triangle.
rlm@475 1773
rlm@476 1774 With two triangles $T_{1}$ and $T_{2}$ each expressed as a
rlm@476 1775 matrix like above, the affine transform from $T_{1}$ to $T_{2}$
rlm@476 1776 is $T_{2}T_{1}^{-1}$.
rlm@475 1777
rlm@475 1778 The clojure code below recapitulates the formulas above, using
rlm@475 1779 jMonkeyEngine's =Matrix4f= objects, which can describe any affine
rlm@475 1780 transformation.
rlm@474 1781
rlm@475 1782 #+caption: Program to interpret triangles as affine transforms.
rlm@475 1783 #+name: triangle-affine
rlm@475 1784 #+begin_listing clojure
rlm@475 1785 #+BEGIN_SRC clojure
rlm@474 1786 (defn triangle->matrix4f
rlm@474 1787 "Converts the triangle into a 4x4 matrix: The first three columns
rlm@474 1788 contain the vertices of the triangle; the last contains the unit
rlm@474 1789 normal of the triangle. The bottom row is filled with 1s."
rlm@474 1790 [#^Triangle t]
rlm@474 1791 (let [mat (Matrix4f.)
rlm@474 1792 [vert-1 vert-2 vert-3]
rlm@474 1793 (mapv #(.get t %) (range 3))
rlm@474 1794 unit-normal (do (.calculateNormal t)(.getNormal t))
rlm@474 1795 vertices [vert-1 vert-2 vert-3 unit-normal]]
rlm@474 1796 (dorun
rlm@474 1797 (for [row (range 4) col (range 3)]
rlm@474 1798 (do
rlm@474 1799 (.set mat col row (.get (vertices row) col))
rlm@474 1800 (.set mat 3 row 1)))) mat))
rlm@474 1801
rlm@474 1802 (defn triangles->affine-transform
rlm@474 1803 "Returns the affine transformation that converts each vertex in the
rlm@474 1804 first triangle into the corresponding vertex in the second
rlm@474 1805 triangle."
rlm@474 1806 [#^Triangle tri-1 #^Triangle tri-2]
rlm@474 1807 (.mult
rlm@474 1808 (triangle->matrix4f tri-2)
rlm@474 1809 (.invert (triangle->matrix4f tri-1))))
rlm@475 1810 #+END_SRC
rlm@475 1811 #+end_listing
rlm@474 1812
rlm@477 1813 *** Triangle Boundaries
rlm@474 1814
rlm@474 1815 For efficiency's sake I will divide the tactile-profile image into
rlm@474 1816 small squares, each circumscribing a pixel-triangle, then extract the
rlm@474 1817 points which lie inside the triangle and map them to 3D-space using
rlm@474 1818 =triangles->affine-transform= above. To do this I need a function,
rlm@474 1819 =convex-bounds=, which finds the smallest box circumscribing a 2D
rlm@474 1820 triangle.
rlm@474 1821
rlm@474 1822 =inside-triangle?= determines whether a point is inside a triangle
rlm@474 1823 in 2D pixel-space.
rlm@474 1824
rlm@475 1825 #+caption: Program to efficiently determine point inclusion
rlm@475 1826 #+caption: in a triangle.
rlm@475 1827 #+name: in-triangle
rlm@475 1828 #+begin_listing clojure
rlm@475 1829 #+BEGIN_SRC clojure
rlm@474 1830 (defn convex-bounds
rlm@474 1831 "Returns the smallest square containing the given vertices, as a
rlm@474 1832 vector of integers [left top width height]."
rlm@474 1833 [verts]
rlm@474 1834 (let [xs (map first verts)
rlm@474 1835 ys (map second verts)
rlm@474 1836 x0 (Math/floor (apply min xs))
rlm@474 1837 y0 (Math/floor (apply min ys))
rlm@474 1838 x1 (Math/ceil (apply max xs))
rlm@474 1839 y1 (Math/ceil (apply max ys))]
rlm@474 1840 [x0 y0 (- x1 x0) (- y1 y0)]))
rlm@474 1841
rlm@474 1842 (defn same-side?
rlm@474 1843 "Given the points p1 and p2 and the reference point ref, is point p
rlm@474 1844 on the same side of the line that goes through p1 and p2 as ref is?"
rlm@474 1845 [p1 p2 ref p]
rlm@474 1846 (<=
rlm@474 1847 0
rlm@474 1848 (.dot
rlm@474 1849 (.cross (.subtract p2 p1) (.subtract p p1))
rlm@474 1850 (.cross (.subtract p2 p1) (.subtract ref p1)))))
rlm@474 1851
rlm@474 1852 (defn inside-triangle?
rlm@474 1853 "Is the point inside the triangle?"
rlm@474 1854 {:author "Dylan Holmes"}
rlm@474 1855 [#^Triangle tri #^Vector3f p]
rlm@474 1856 (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]
rlm@474 1857 (and
rlm@474 1858 (same-side? vert-1 vert-2 vert-3 p)
rlm@474 1859 (same-side? vert-2 vert-3 vert-1 p)
rlm@474 1860 (same-side? vert-3 vert-1 vert-2 p))))
rlm@475 1861 #+END_SRC
rlm@475 1862 #+end_listing
rlm@474 1863
rlm@477 1864 *** Feeler Coordinates
rlm@474 1865
rlm@475 1866 The triangle-related functions above make short work of
rlm@475 1867 calculating the positions and orientations of each feeler in
rlm@475 1868 world-space.
rlm@474 1869
rlm@475 1870 #+caption: Program to get the coordinates of ``feelers'' in
rlm@475 1871 #+caption: both world and UV-coordinates.
rlm@475 1872 #+name: feeler-coordinates
rlm@475 1873 #+begin_listing clojure
rlm@475 1874 #+BEGIN_SRC clojure
rlm@474 1875 (defn feeler-pixel-coords
rlm@474 1876 "Returns the coordinates of the feelers in pixel space in lists, one
rlm@474 1877 list for each triangle, ordered in the same way as (triangles) and
rlm@474 1878 (pixel-triangles)."
rlm@474 1879 [#^Geometry geo image]
rlm@474 1880 (map
rlm@474 1881 (fn [pixel-triangle]
rlm@474 1882 (filter
rlm@474 1883 (fn [coord]
rlm@474 1884 (inside-triangle? (->triangle pixel-triangle)
rlm@474 1885 (->vector3f coord)))
rlm@474 1886 (white-coordinates image (convex-bounds pixel-triangle))))
rlm@474 1887 (pixel-triangles geo image)))
rlm@474 1888
rlm@474 1889 (defn feeler-world-coords
rlm@474 1890 "Returns the coordinates of the feelers in world space in lists, one
rlm@474 1891 list for each triangle, ordered in the same way as (triangles) and
rlm@474 1892 (pixel-triangles)."
rlm@474 1893 [#^Geometry geo image]
rlm@474 1894 (let [transforms
rlm@474 1895 (map #(triangles->affine-transform
rlm@474 1896 (->triangle %1) (->triangle %2))
rlm@474 1897 (pixel-triangles geo image)
rlm@474 1898 (triangles geo))]
rlm@474 1899 (map (fn [transform coords]
rlm@474 1900 (map #(.mult transform (->vector3f %)) coords))
rlm@474 1901 transforms (feeler-pixel-coords geo image))))
rlm@475 1902 #+END_SRC
rlm@475 1903 #+end_listing
rlm@474 1904
rlm@475 1905 #+caption: Program to get the position of the base and tip of
rlm@475 1906 #+caption: each ``feeler''
rlm@475 1907 #+name: feeler-tips
rlm@475 1908 #+begin_listing clojure
rlm@475 1909 #+BEGIN_SRC clojure
rlm@474 1910 (defn feeler-origins
rlm@474 1911 "The world space coordinates of the root of each feeler."
rlm@474 1912 [#^Geometry geo image]
rlm@474 1913 (reduce concat (feeler-world-coords geo image)))
rlm@474 1914
rlm@474 1915 (defn feeler-tips
rlm@474 1916 "The world space coordinates of the tip of each feeler."
rlm@474 1917 [#^Geometry geo image]
rlm@474 1918 (let [world-coords (feeler-world-coords geo image)
rlm@474 1919 normals
rlm@474 1920 (map
rlm@474 1921 (fn [triangle]
rlm@474 1922 (.calculateNormal triangle)
rlm@474 1923 (.clone (.getNormal triangle)))
rlm@474 1924 (map ->triangle (triangles geo)))]
rlm@474 1925
rlm@474 1926 (mapcat (fn [origins normal]
rlm@474 1927 (map #(.add % normal) origins))
rlm@474 1928 world-coords normals)))
rlm@474 1929
rlm@474 1930 (defn touch-topology
rlm@474 1931 [#^Geometry geo image]
rlm@474 1932 (collapse (reduce concat (feeler-pixel-coords geo image))))
rlm@475 1933 #+END_SRC
rlm@475 1934 #+end_listing
rlm@474 1935
rlm@477 1936 *** Simulated Touch
rlm@474 1937
rlm@475 1938 Now that the functions to construct feelers are complete,
rlm@475 1939 =touch-kernel= generates functions to be called from within a
rlm@475 1940 simulation that perform the necessary physics collisions to
rlm@475 1941 collect tactile data, and =touch!= applies =touch-kernel= to every
rlm@475 1942 geometry node in the creature.
rlm@474 1943
rlm@475 1944 #+caption: Efficient program to transform a ray from
rlm@475 1945 #+caption: one position to another.
rlm@475 1946 #+name: set-ray
rlm@475 1947 #+begin_listing clojure
rlm@475 1948 #+BEGIN_SRC clojure
rlm@474 1949 (defn set-ray [#^Ray ray #^Matrix4f transform
rlm@474 1950 #^Vector3f origin #^Vector3f tip]
rlm@474 1951 ;; Doing everything locally reduces garbage collection by enough to
rlm@474 1952 ;; be worth it.
rlm@474 1953 (.mult transform origin (.getOrigin ray))
rlm@474 1954 (.mult transform tip (.getDirection ray))
rlm@474 1955 (.subtractLocal (.getDirection ray) (.getOrigin ray))
rlm@474 1956 (.normalizeLocal (.getDirection ray)))
rlm@475 1957 #+END_SRC
rlm@475 1958 #+end_listing
rlm@474 1959
rlm@475 1960 #+caption: This is the core of touch in =CORTEX=. Each feeler
rlm@475 1961 #+caption: follows the object it is bound to, reporting any
rlm@475 1962 #+caption: collisions that may happen.
rlm@475 1963 #+name: touch-kernel
rlm@475 1964 #+begin_listing clojure
rlm@475 1965 #+BEGIN_SRC clojure
rlm@474 1966 (defn touch-kernel
rlm@474 1967 "Constructs a function which will return tactile sensory data from
rlm@474 1968 'geo when called from inside a running simulation"
rlm@474 1969 [#^Geometry geo]
rlm@474 1970 (if-let
rlm@474 1971 [profile (tactile-sensor-profile geo)]
rlm@474 1972 (let [ray-reference-origins (feeler-origins geo profile)
rlm@474 1973 ray-reference-tips (feeler-tips geo profile)
rlm@474 1974 ray-length (tactile-scale geo)
rlm@474 1975 current-rays (map (fn [_] (Ray.)) ray-reference-origins)
rlm@474 1976 topology (touch-topology geo profile)
rlm@474 1977 correction (float (* ray-length -0.2))]
rlm@474 1978 ;; slight tolerance for very close collisions.
rlm@474 1979 (dorun
rlm@474 1980 (map (fn [origin tip]
rlm@474 1981 (.addLocal origin (.mult (.subtract tip origin)
rlm@474 1982 correction)))
rlm@474 1983 ray-reference-origins ray-reference-tips))
rlm@474 1984 (dorun (map #(.setLimit % ray-length) current-rays))
rlm@474 1985 (fn [node]
rlm@474 1986 (let [transform (.getWorldMatrix geo)]
rlm@474 1987 (dorun
rlm@474 1988 (map (fn [ray ref-origin ref-tip]
rlm@474 1989 (set-ray ray transform ref-origin ref-tip))
rlm@474 1990 current-rays ray-reference-origins
rlm@474 1991 ray-reference-tips))
rlm@474 1992 (vector
rlm@474 1993 topology
rlm@474 1994 (vec
rlm@474 1995 (for [ray current-rays]
rlm@474 1996 (do
rlm@474 1997 (let [results (CollisionResults.)]
rlm@474 1998 (.collideWith node ray results)
rlm@474 1999 (let [touch-objects
rlm@474 2000 (filter #(not (= geo (.getGeometry %)))
rlm@474 2001 results)
rlm@474 2002 limit (.getLimit ray)]
rlm@474 2003 [(if (empty? touch-objects)
rlm@474 2004 limit
rlm@474 2005 (let [response
rlm@474 2006 (apply min (map #(.getDistance %)
rlm@474 2007 touch-objects))]
rlm@474 2008 (FastMath/clamp
rlm@474 2009 (float
rlm@474 2010 (if (> response limit) (float 0.0)
rlm@474 2011 (+ response correction)))
rlm@474 2012 (float 0.0)
rlm@474 2013 limit)))
rlm@474 2014 limit])))))))))))
rlm@475 2015 #+END_SRC
rlm@475 2016 #+end_listing
rlm@474 2017
rlm@475 2018 Armed with the =touch!= function, =CORTEX= becomes capable of
rlm@475 2019 giving creatures a sense of touch. A simple test is to create a
rlm@475 2020 cube that is outfitted with a uniform distribution of touch
rlm@475 2021 sensors. It can feel the ground and any balls that it touches.
rlm@475 2022
rlm@475 2023 #+caption: =CORTEX= interface for creating touch in a simulated
rlm@475 2024 #+caption: creature.
rlm@475 2025 #+name: touch
rlm@475 2026 #+begin_listing clojure
rlm@475 2027 #+BEGIN_SRC clojure
rlm@474 2028 (defn touch!
rlm@474 2029 "Endow the creature with the sense of touch. Returns a sequence of
rlm@474 2030 functions, one for each body part with a tactile-sensor-profile,
rlm@474 2031 each of which when called returns sensory data for that body part."
rlm@474 2032 [#^Node creature]
rlm@474 2033 (filter
rlm@474 2034 (comp not nil?)
rlm@474 2035 (map touch-kernel
rlm@474 2036 (filter #(isa? (class %) Geometry)
rlm@474 2037 (node-seq creature)))))
rlm@475 2038 #+END_SRC
rlm@475 2039 #+end_listing
rlm@475 2040
rlm@475 2041 The tactile-sensor-profile image for the touch cube is a simple
rlm@475 2042 cross with a uniform distribution of touch sensors:
rlm@474 2043
rlm@475 2044 #+caption: The touch profile for the touch-cube. Each pure white
rlm@475 2045 #+caption: pixel defines a touch sensitive feeler.
rlm@475 2046 #+name: touch-cube-uv-map
rlm@495 2047 #+ATTR_LaTeX: :width 7cm
rlm@475 2048 [[./images/touch-profile.png]]
rlm@474 2049
rlm@475 2050 #+caption: The touch cube reacts to cannonballs. The black, red,
rlm@475 2051 #+caption: and white cross on the right is a visual display of
rlm@475 2052 #+caption: the creature's touch. White means that it is feeling
rlm@475 2053 #+caption: something strongly, black is not feeling anything,
rlm@475 2054 #+caption: and gray is in-between. The cube can feel both the
rlm@475 2055 #+caption: floor and the ball. Notice that when the ball causes
rlm@475 2056 #+caption: the cube to tip, the bottom face can still feel
rlm@475 2057 #+caption: part of the ground.
rlm@475 2058 #+name: touch-cube-test
rlm@475 2059 #+ATTR_LaTeX: :width 15cm
rlm@475 2060 [[./images/touch-cube.png]]
rlm@474 2061
rlm@507 2062 ** Proprioception is the sense that makes everything ``real''
rlm@436 2063
rlm@479 2064 Close your eyes, and touch your nose with your right index finger.
rlm@479 2065 How did you do it? You could not see your hand, and neither your
rlm@479 2066 hand nor your nose could use the sense of touch to guide the path
rlm@479 2067 of your hand. There are no sound cues, and taste and smell
rlm@479 2068 certainly don't provide any help. You know where your hand is
rlm@479 2069 without your other senses because of proprioception.
rlm@479 2070
rlm@479 2071 Humans can sometimes lose this sense through viral infections or
rlm@479 2072 damage to the spinal cord or brain, and when they do, they lose
rlm@479 2073 the ability to control their own bodies without looking directly at
rlm@479 2074 the parts they want to move. In [[http://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat][The Man Who Mistook His Wife for a
rlm@479 2075 Hat]], a woman named Christina loses this sense and has to learn how
rlm@479 2076 to move by carefully watching her arms and legs. She describes
rlm@479 2077 proprioception as the "eyes of the body, the way the body sees
rlm@479 2078 itself".
rlm@479 2079
rlm@479 2080 Proprioception in humans is mediated by [[http://en.wikipedia.org/wiki/Articular_capsule][joint capsules]], [[http://en.wikipedia.org/wiki/Muscle_spindle][muscle
rlm@479 2081 spindles]], and the [[http://en.wikipedia.org/wiki/Golgi_tendon_organ][Golgi tendon organs]]. These measure the relative
rlm@479 2082 positions of each body part by monitoring muscle strain and length.
rlm@479 2083
rlm@479 2084 It's clear that this is a vital sense for fluid, graceful movement.
rlm@479 2085 It's also particularly easy to implement in jMonkeyEngine.
rlm@479 2086
rlm@479 2087 My simulated proprioception calculates the relative angles of each
rlm@479 2088 joint from the rest position defined in the blender file. This
rlm@479 2089 simulates the muscle-spindles and joint capsules. I will deal with
rlm@479 2090 Golgi tendon organs, which calculate muscle strain, in the next
rlm@479 2091 section.
rlm@479 2092
rlm@479 2093 *** Helper functions
rlm@479 2094
rlm@479 2095 =absolute-angle= calculates the angle between two vectors,
rlm@479 2096 relative to a third axis vector. This angle is the number of
rlm@479 2097 radians you have to move counterclockwise around the axis vector
rlm@479 2098 to get from the first to the second vector. It is not commutative
rlm@479 2099 like a normal dot-product angle is.
rlm@479 2100
rlm@479 2101 The purpose of these functions is to build a system of angle
rlm@479 2102 measurement that is biologically plausible.
rlm@479 2103
rlm@479 2104 #+caption: Program to measure angles along a vector
rlm@479 2105 #+name: helpers
rlm@479 2106 #+begin_listing clojure
rlm@479 2107 #+BEGIN_SRC clojure
rlm@479 2108 (defn right-handed?
rlm@479 2109 "true iff the three vectors form a right handed coordinate
rlm@479 2110 system. The three vectors do not have to be normalized or
rlm@479 2111 orthogonal."
rlm@479 2112 [vec1 vec2 vec3]
rlm@479 2113 (pos? (.dot (.cross vec1 vec2) vec3)))
rlm@479 2114
rlm@479 2115 (defn absolute-angle
rlm@479 2116 "The angle between 'vec1 and 'vec2 around 'axis. In the range
rlm@479 2117 [0 (* 2 Math/PI)]."
rlm@479 2118 [vec1 vec2 axis]
rlm@479 2119 (let [angle (.angleBetween vec1 vec2)]
rlm@479 2120 (if (right-handed? vec1 vec2 axis)
rlm@479 2121 angle (- (* 2 Math/PI) angle))))
rlm@479 2122 #+END_SRC
rlm@479 2123 #+end_listing
rlm@479 2124
rlm@479 2125 *** Proprioception Kernel
rlm@479 2126
rlm@479 2127 Given a joint, =proprioception-kernel= produces a function that
rlm@479 2128 calculates the Euler angles between the objects the joint
rlm@479 2129 connects. The only tricky part here is making the angles relative
rlm@479 2130 to the joint's initial ``straightness''.
rlm@479 2131
rlm@479 2132 #+caption: Program to return biologically reasonable proprioceptive
rlm@479 2133 #+caption: data for each joint.
rlm@479 2134 #+name: proprioception
rlm@479 2135 #+begin_listing clojure
rlm@479 2136 #+BEGIN_SRC clojure
rlm@479 2137 (defn proprioception-kernel
rlm@479 2138 "Returns a function which returns proprioceptive sensory data when
rlm@479 2139 called inside a running simulation."
rlm@479 2140 [#^Node parts #^Node joint]
rlm@479 2141 (let [[obj-a obj-b] (joint-targets parts joint)
rlm@479 2142 joint-rot (.getWorldRotation joint)
rlm@479 2143 x0 (.mult joint-rot Vector3f/UNIT_X)
rlm@479 2144 y0 (.mult joint-rot Vector3f/UNIT_Y)
rlm@479 2145 z0 (.mult joint-rot Vector3f/UNIT_Z)]
rlm@479 2146 (fn []
rlm@479 2147 (let [rot-a (.clone (.getWorldRotation obj-a))
rlm@479 2148 rot-b (.clone (.getWorldRotation obj-b))
rlm@479 2149 x (.mult rot-a x0)
rlm@479 2150 y (.mult rot-a y0)
rlm@479 2151 z (.mult rot-a z0)
rlm@479 2152
rlm@479 2153 X (.mult rot-b x0)
rlm@479 2154 Y (.mult rot-b y0)
rlm@479 2155 Z (.mult rot-b z0)
rlm@479 2156 heading (Math/atan2 (.dot X z) (.dot X x))
rlm@479 2157 pitch (Math/atan2 (.dot X y) (.dot X x))
rlm@479 2158
rlm@479 2159 ;; rotate x-vector back to origin
rlm@479 2160 reverse
rlm@479 2161 (doto (Quaternion.)
rlm@479 2162 (.fromAngleAxis
rlm@479 2163 (.angleBetween X x)
rlm@479 2164 (let [cross (.normalize (.cross X x))]
rlm@479 2165 (if (= 0 (.length cross)) y cross))))
rlm@479 2166 roll (absolute-angle (.mult reverse Y) y x)]
rlm@479 2167 [heading pitch roll]))))
rlm@479 2168
rlm@479 2169 (defn proprioception!
rlm@479 2170 "Endow the creature with the sense of proprioception. Returns a
rlm@479 2171 sequence of functions, one for each child of the \"joints\" node in
rlm@479 2172 the creature, which each report proprioceptive information about
rlm@479 2173 that joint."
rlm@479 2174 [#^Node creature]
rlm@479 2175 ;; extract the body's joints
rlm@479 2176 (let [senses (map (partial proprioception-kernel creature)
rlm@479 2177 (joints creature))]
rlm@479 2178 (fn []
rlm@479 2179 (map #(%) senses))))
rlm@479 2180 #+END_SRC
rlm@479 2181 #+end_listing
rlm@479 2182
rlm@479 2183 =proprioception!= maps =proprioception-kernel= across all the
rlm@479 2184 joints of the creature. It uses the same list of joints that
rlm@479 2185 =joints= uses. Proprioception is the easiest sense to implement in
rlm@479 2186 =CORTEX=, and it will play a crucial role when efficiently
rlm@479 2187 implementing empathy.
rlm@479 2188
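   To make the returned data concrete, here is a minimal sketch (not
   part of the =CORTEX= source) of how the functions produced by
   =proprioception!= might be polled; the =creature= var is assumed
   to be a creature node loaded and outfitted as in the earlier
   sections.

   #+begin_src clojure
   ;; Hedged usage sketch: 'creature' is a hypothetical Node loaded
   ;; from a blender file and given a body as described above.
   (def read-proprioception (proprioception! creature))

   (defn proprio-degrees
     "One frame of proprioceptive data, converted from radians to
      degrees for easier human inspection."
     []
     (map (fn [[heading pitch roll]]
            (mapv #(Math/toDegrees %) [heading pitch roll]))
          (read-proprioception)))
   #+end_src
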
rlm@479 2189 #+caption: In the upper right corner, the three proprioceptive
rlm@479 2190 #+caption: angle measurements are displayed. Red is yaw, Green is
rlm@479 2191 #+caption: pitch, and White is roll.
rlm@479 2192 #+name: proprio
rlm@479 2193 #+ATTR_LaTeX: :width 11cm
rlm@479 2194 [[./images/proprio.png]]
rlm@479 2195
rlm@507 2196 ** Muscles are both effectors and sensors
rlm@481 2197
rlm@481 2198 Surprisingly enough, terrestrial creatures only move by using
rlm@481 2199 torque applied about their joints. There's not a single straight
rlm@481 2200 line of force in the human body at all! (A straight line of force
rlm@481 2201 would correspond to some sort of jet or rocket propulsion.)
rlm@481 2202
rlm@481 2203 In humans, muscles are composed of muscle fibers which can contract
rlm@481 2204 to exert force. The muscle fibers which compose a muscle are
rlm@481 2205 partitioned into discrete groups which are each controlled by a
rlm@481 2206 single alpha motor neuron. A single alpha motor neuron might
rlm@481 2207    control as few as three or as many as one thousand muscle
rlm@481 2208 fibers. When the alpha motor neuron is engaged by the spinal cord,
rlm@481 2209 it activates all of the muscle fibers to which it is attached. The
rlm@481 2210 spinal cord generally engages the alpha motor neurons which control
rlm@481 2211 few muscle fibers before the motor neurons which control many
rlm@481 2212 muscle fibers. This recruitment strategy allows for precise
rlm@481 2213 movements at low strength. The collection of all motor neurons that
rlm@481 2214 control a muscle is called the motor pool. The brain essentially
rlm@481 2215 says "activate 30% of the motor pool" and the spinal cord recruits
rlm@481 2216 motor neurons until 30% are activated. Since the distribution of
rlm@481 2217 power among motor neurons is unequal and recruitment goes from
rlm@481 2218 weakest to strongest, the first 30% of the motor pool might be 5%
rlm@481 2219 of the strength of the muscle.
rlm@481 2220
rlm@481 2221 My simulated muscles follow a similar design: Each muscle is
rlm@481 2222 defined by a 1-D array of numbers (the "motor pool"). Each entry in
rlm@481 2223 the array represents a motor neuron which controls a number of
rlm@481 2224 muscle fibers equal to the value of the entry. Each muscle has a
rlm@481 2225 scalar strength factor which determines the total force the muscle
rlm@481 2226 can exert when all motor neurons are activated. The effector
rlm@481 2227 function for a muscle takes a number to index into the motor pool,
rlm@481 2228 and then "activates" all the motor neurons whose index is lower or
rlm@481 2229 equal to the number. Each motor-neuron will apply force in
rlm@481 2230 proportion to its value in the array. Lower values cause less
rlm@481 2231 force. The lower values can be put at the "beginning" of the 1-D
rlm@481 2232 array to simulate the layout of actual human muscles, which are
rlm@481 2233 capable of more precise movements when exerting less force. Or, the
rlm@481 2234 motor pool can simulate more exotic recruitment strategies which do
rlm@481 2235 not correspond to human muscles.
rlm@481 2236
rlm@481 2237 This 1D array is defined in an image file for ease of
rlm@481 2238 creation/visualization. Here is an example muscle profile image.
rlm@481 2239
rlm@481 2240 #+caption: A muscle profile image that describes the strengths
rlm@481 2241 #+caption: of each motor neuron in a muscle. White is weakest
rlm@481 2242 #+caption: and dark red is strongest. This particular pattern
rlm@481 2243 #+caption: has weaker motor neurons at the beginning, just
rlm@481 2244 #+caption: like human muscle.
rlm@481 2245 #+name: muscle-recruit
rlm@481 2246 #+ATTR_LaTeX: :width 7cm
rlm@481 2247 [[./images/basic-muscle.png]]
rlm@481 2248
rlm@481 2249 *** Muscle meta-data
rlm@481 2250
rlm@481 2251 #+caption: Program to deal with loading muscle data from a blender
rlm@481 2252 #+caption: file's metadata.
rlm@481 2253 #+name: motor-pool
rlm@481 2254 #+begin_listing clojure
rlm@481 2255 #+BEGIN_SRC clojure
rlm@481 2256 (defn muscle-profile-image
rlm@481 2257 "Get the muscle-profile image from the node's blender meta-data."
rlm@481 2258 [#^Node muscle]
rlm@481 2259 (if-let [image (meta-data muscle "muscle")]
rlm@481 2260 (load-image image)))
rlm@481 2261
rlm@481 2262 (defn muscle-strength
rlm@481 2263 "Return the strength of this muscle, or 1 if it is not defined."
rlm@481 2264 [#^Node muscle]
rlm@481 2265 (if-let [strength (meta-data muscle "strength")]
rlm@481 2266 strength 1))
rlm@481 2267
rlm@481 2268 (defn motor-pool
rlm@481 2269 "Return a vector where each entry is the strength of the \"motor
rlm@481 2270 neuron\" at that part in the muscle."
rlm@481 2271 [#^Node muscle]
rlm@481 2272 (let [profile (muscle-profile-image muscle)]
rlm@481 2273 (vec
rlm@481 2274 (let [width (.getWidth profile)]
rlm@481 2275 (for [x (range width)]
rlm@481 2276 (- 255
rlm@481 2277 (bit-and
rlm@481 2278 0x0000FF
rlm@481 2279 (.getRGB profile x 0))))))))
rlm@481 2280 #+END_SRC
rlm@481 2281 #+end_listing
rlm@481 2282
rlm@481 2283 Of note here is =motor-pool= which interprets the muscle-profile
rlm@481 2284 image in a way that allows me to use gradients between white and
rlm@481 2285 red, instead of shades of gray as I've been using for all the
rlm@481 2286 other senses. This is purely an aesthetic touch.
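
    To see why a white-to-red gradient works, consider what the
    per-pixel computation in =motor-pool= yields for two hypothetical
    pixel values: the strength is 255 minus the blue channel, so a
    white pixel contributes nothing and a saturated red pixel
    contributes the maximum.

    #+begin_src clojure
    ;; Worked example of the per-pixel expression used by =motor-pool=:
    (- 255 (bit-and 0x0000FF 0xFFFFFF)) ;; white pixel    => 0   (weakest)
    (- 255 (bit-and 0x0000FF 0xFF0000)) ;; pure red pixel => 255 (strongest)
    #+end_src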
rlm@481 2287
rlm@481 2288 *** Creating muscles
rlm@481 2289
rlm@481 2290    #+caption: This is the core movement function in =CORTEX=, which
rlm@481 2291 #+caption: implements muscles that report on their activation.
rlm@481 2292 #+name: muscle-kernel
rlm@481 2293 #+begin_listing clojure
rlm@481 2294 #+BEGIN_SRC clojure
rlm@481 2295 (defn movement-kernel
rlm@481 2296   "Returns a function which when called with an integer value inside a
rlm@481 2297 running simulation will cause movement in the creature according
rlm@481 2298 to the muscle's position and strength profile. Each function
rlm@481 2299 returns the amount of force applied / max force."
rlm@481 2300 [#^Node creature #^Node muscle]
rlm@481 2301 (let [target (closest-node creature muscle)
rlm@481 2302 axis
rlm@481 2303 (.mult (.getWorldRotation muscle) Vector3f/UNIT_Y)
rlm@481 2304 strength (muscle-strength muscle)
rlm@481 2305
rlm@481 2306 pool (motor-pool muscle)
rlm@481 2307 pool-integral (reductions + pool)
rlm@481 2308 forces
rlm@481 2309 (vec (map #(float (* strength (/ % (last pool-integral))))
rlm@481 2310 pool-integral))
rlm@481 2311 control (.getControl target RigidBodyControl)]
rlm@481 2312 ;;(println-repl (.getName target) axis)
rlm@481 2313 (fn [n]
rlm@481 2314 (let [pool-index (max 0 (min n (dec (count pool))))
rlm@481 2315 force (forces pool-index)]
rlm@481 2316 (.applyTorque control (.mult axis force))
rlm@481 2317 (float (/ force strength))))))
rlm@481 2318
rlm@481 2319 (defn movement!
rlm@481 2320 "Endow the creature with the power of movement. Returns a sequence
rlm@481 2321 of functions, each of which accept an integer value and will
rlm@481 2322 activate their corresponding muscle."
rlm@481 2323 [#^Node creature]
rlm@481 2324 (for [muscle (muscles creature)]
rlm@481 2325 (movement-kernel creature muscle)))
rlm@481 2326 #+END_SRC
rlm@481 2327 #+end_listing
rlm@481 2328
rlm@481 2329
rlm@481 2330    =movement-kernel= creates a function that moves the physical
rlm@481 2331    object nearest to the muscle node. The muscle exerts a rotational
rlm@481 2332    force dependent on its orientation to the object in the blender
rlm@481 2333 file. The function returned by =movement-kernel= is also a sense
rlm@481 2334 function: it returns the percent of the total muscle strength that
rlm@481 2335 is currently being employed. This is analogous to muscle tension
rlm@481 2336 in humans and completes the sense of proprioception begun in the
rlm@481 2337 last section.
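
    As a concrete illustration of the recruitment arithmetic, consider
    a toy motor pool of four motor neurons with strengths [1 2 4 8]
    and a muscle strength of 10. The pool is made up for this example,
    but the numbers below follow exactly the computation performed by
    =movement-kernel=: activating up to the second motor neuron
    recruits 1 + 2 = 3 of the 15 total fiber units, or 20% of the
    muscle's maximum force.

    #+begin_src clojure
    ;; Hypothetical motor pool; not read from a real muscle profile image.
    (let [pool          [1 2 4 8]
          strength      10
          pool-integral (reductions + pool) ;; => (1 3 7 15)
          forces        (vec (map #(float (* strength (/ % (last pool-integral))))
                                  pool-integral))]
      (forces 1))
    ;; => 2.0  (3/15 of the fibers, i.e. 20% of the strength-10 muscle)
    #+end_src

    The functions returned by =movement!= are used the same way inside
    a running simulation: each physics tick, calling a muscle's
    function with a recruitment index applies the corresponding torque
    and returns the fraction of maximum force currently in use.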
rlm@488 2338
rlm@507 2339 ** =CORTEX= brings complex creatures to life!
rlm@483 2340
rlm@483 2341 The ultimate test of =CORTEX= is to create a creature with the full
rlm@483 2342    gamut of senses and put it through its paces.
rlm@483 2343
rlm@483 2344 With all senses enabled, my right hand model looks like an
rlm@483 2345 intricate marionette hand with several strings for each finger:
rlm@483 2346
rlm@483 2347 #+caption: View of the hand model with all sense nodes. You can see
rlm@483 2348    #+caption: the joint, muscle, ear, and eye nodes here.
rlm@483 2349 #+name: hand-nodes-1
rlm@483 2350 #+ATTR_LaTeX: :width 11cm
rlm@483 2351 [[./images/hand-with-all-senses2.png]]
rlm@483 2352
rlm@483 2353 #+caption: An alternate view of the hand.
rlm@483 2354 #+name: hand-nodes-2
rlm@484 2355 #+ATTR_LaTeX: :width 15cm
rlm@484 2356 [[./images/hand-with-all-senses3.png]]
rlm@484 2357
rlm@484 2358    With the hand fully rigged with senses, I can run it through a
rlm@484 2359    test that exercises everything at once.
rlm@484 2360
rlm@484 2361    #+caption: A full test of the hand with all senses. Note especially
rlm@495 2362 #+caption: the interactions the hand has with itself: it feels
rlm@484 2363 #+caption: its own palm and fingers, and when it curls its fingers,
rlm@484 2364 #+caption: it sees them with its eye (which is located in the center
rlm@484 2365    #+caption: of the palm). The red block appears with a pure tone sound.
rlm@484 2366 #+caption: The hand then uses its muscles to launch the cube!
rlm@484 2367 #+name: integration
rlm@484 2368 #+ATTR_LaTeX: :width 16cm
rlm@484 2369 [[./images/integration.png]]
rlm@436 2370
rlm@508 2371 ** =CORTEX= enables many possibilities for further research
rlm@485 2372
rlm@485 2373    Often, the hardest part of building a system involving
rlm@485 2374 creatures is dealing with physics and graphics. =CORTEX= removes
rlm@485 2375 much of this initial difficulty and leaves researchers free to
rlm@485 2376 directly pursue their ideas. I hope that even undergrads with a
rlm@485 2377 passing curiosity about simulated touch or creature evolution will
rlm@485 2378    be able to use =CORTEX= for experimentation. =CORTEX= is a completely
rlm@485 2379 simulated world, and far from being a disadvantage, its simulated
rlm@485 2380 nature enables you to create senses and creatures that would be
rlm@485 2381 impossible to make in the real world.
rlm@485 2382
rlm@485 2383 While not by any means a complete list, here are some paths
rlm@485 2384 =CORTEX= is well suited to help you explore:
rlm@485 2385
rlm@485 2386 - Empathy :: my empathy program leaves many areas for
rlm@485 2387 improvement, among which are using vision to infer
rlm@485 2388 proprioception and looking up sensory experience with imagined
rlm@485 2389 vision, touch, and sound.
rlm@485 2390 - Evolution :: Karl Sims created a rich environment for
rlm@485 2391 simulating the evolution of creatures on a connection
rlm@485 2392 machine. Today, this can be redone and expanded with =CORTEX=
rlm@485 2393 on an ordinary computer.
rlm@485 2394    - Exotic senses :: =CORTEX= enables many fascinating senses that are
rlm@485 2395 not possible to build in the real world. For example,
rlm@485 2396 telekinesis is an interesting avenue to explore. You can also
rlm@485 2397 make a ``semantic'' sense which looks up metadata tags on
rlm@485 2398      objects in the environment; the metadata tags might contain
rlm@485 2399 other sensory information.
rlm@485 2400 - Imagination via subworlds :: this would involve a creature with
rlm@485 2401 an effector which creates an entire new sub-simulation where
rlm@485 2402 the creature has direct control over placement/creation of
rlm@485 2403 objects via simulated telekinesis. The creature observes this
rlm@485 2404      sub-world through its normal senses and uses its observations
rlm@485 2405 to make predictions about its top level world.
rlm@485 2406 - Simulated prescience :: step the simulation forward a few ticks,
rlm@485 2407 gather sensory data, then supply this data for the creature as
rlm@485 2408 one of its actual senses. The cost of prescience is slowing
rlm@485 2409 the simulation down by a factor proportional to however far
rlm@485 2410 you want the entities to see into the future. What happens
rlm@485 2411 when two evolved creatures that can each see into the future
rlm@485 2412 fight each other?
rlm@485 2413 - Swarm creatures :: Program a group of creatures that cooperate
rlm@485 2414 with each other. Because the creatures would be simulated, you
rlm@485 2415 could investigate computationally complex rules of behavior
rlm@485 2416 which still, from the group's point of view, would happen in
rlm@485 2417 ``real time''. Interactions could be as simple as cellular
rlm@485 2418 organisms communicating via flashing lights, or as complex as
rlm@485 2419 humanoids completing social tasks, etc.
rlm@485 2420 - =HACKER= for writing muscle-control programs :: Presented with
rlm@485 2421 low-level muscle control/ sense API, generate higher level
rlm@485 2422 programs for accomplishing various stated goals. Example goals
rlm@485 2423 might be "extend all your fingers" or "move your hand into the
rlm@485 2424 area with blue light" or "decrease the angle of this joint".
rlm@485 2425 It would be like Sussman's HACKER, except it would operate
rlm@485 2426 with much more data in a more realistic world. Start off with
rlm@485 2427 "calisthenics" to develop subroutines over the motor control
rlm@485 2428      API. This would be the "spinal cord" of a more intelligent
rlm@485 2429      creature. The low level programming code might be a Turing
rlm@485 2430 machine that could develop programs to iterate over a "tape"
rlm@485 2431 where each entry in the tape could control recruitment of the
rlm@485 2432 fibers in a muscle.
rlm@485 2433 - Sense fusion :: There is much work to be done on sense
rlm@485 2434 integration -- building up a coherent picture of the world and
rlm@485 2435      the things in it. With =CORTEX= as a base, you can explore
rlm@485 2436 concepts like self-organizing maps or cross modal clustering
rlm@485 2437 in ways that have never before been tried.
rlm@485 2438 - Inverse kinematics :: experiments in sense guided motor control
rlm@485 2439 are easy given =CORTEX='s support -- you can get right to the
rlm@485 2440 hard control problems without worrying about physics or
rlm@485 2441 senses.
rlm@485 2442
rlm@509 2443 * Empathy in a simulated worm
rlm@435 2444
rlm@449 2445 Here I develop a computational model of empathy, using =CORTEX= as a
rlm@449 2446 base. Empathy in this context is the ability to observe another
rlm@449 2447 creature and infer what sorts of sensations that creature is
rlm@449 2448 feeling. My empathy algorithm involves multiple phases. First is
rlm@449 2449 free-play, where the creature moves around and gains sensory
rlm@449 2450 experience. From this experience I construct a representation of the
rlm@449 2451 creature's sensory state space, which I call \Phi-space. Using
rlm@449 2452 \Phi-space, I construct an efficient function which takes the
rlm@449 2453 limited data that comes from observing another creature and enriches
rlm@449 2454   it with a full complement of imagined sensory data. I can then use the
rlm@449 2455 imagined sensory data to recognize what the observed creature is
rlm@449 2456 doing and feeling, using straightforward embodied action predicates.
rlm@449 2457   This is all demonstrated using a simple worm-like creature, and
rlm@449 2458 recognizing worm-actions based on limited data.
rlm@449 2459
rlm@449 2460 #+caption: Here is the worm with which we will be working.
rlm@449 2461 #+caption: It is composed of 5 segments. Each segment has a
rlm@449 2462 #+caption: pair of extensor and flexor muscles. Each of the
rlm@449 2463 #+caption: worm's four joints is a hinge joint which allows
rlm@451 2464 #+caption: about 30 degrees of rotation to either side. Each segment
rlm@449 2465 #+caption: of the worm is touch-capable and has a uniform
rlm@449 2466 #+caption: distribution of touch sensors on each of its faces.
rlm@449 2467 #+caption: Each joint has a proprioceptive sense to detect
rlm@449 2468 #+caption: relative positions. The worm segments are all the
rlm@449 2469 #+caption: same except for the first one, which has a much
rlm@449 2470 #+caption: higher weight than the others to allow for easy
rlm@449 2471 #+caption: manual motor control.
rlm@449 2472 #+name: basic-worm-view
rlm@449 2473 #+ATTR_LaTeX: :width 10cm
rlm@449 2474 [[./images/basic-worm-view.png]]
rlm@449 2475
rlm@449 2476 #+caption: Program for reading a worm from a blender file and
rlm@449 2477 #+caption: outfitting it with the senses of proprioception,
rlm@449 2478 #+caption: touch, and the ability to move, as specified in the
rlm@449 2479 #+caption: blender file.
rlm@449 2480 #+name: get-worm
rlm@449 2481 #+begin_listing clojure
rlm@449 2482 #+begin_src clojure
rlm@449 2483 (defn worm []
rlm@449 2484 (let [model (load-blender-model "Models/worm/worm.blend")]
rlm@449 2485 {:body (doto model (body!))
rlm@449 2486 :touch (touch! model)
rlm@449 2487 :proprioception (proprioception! model)
rlm@449 2488 :muscles (movement! model)}))
rlm@449 2489 #+end_src
rlm@449 2490 #+end_listing
rlm@452 2491
rlm@436 2492 ** Embodiment factors action recognition into manageable parts
rlm@435 2493
rlm@449 2494 Using empathy, I divide the problem of action recognition into a
rlm@449 2495    recognition process expressed in the language of a full complement
rlm@449 2496    of senses, and an imaginative process that generates full sensory
rlm@449 2497    data from partial sensory data. Splitting the action recognition
rlm@449 2498    problem in this manner greatly reduces the total amount of work to
rlm@449 2499    recognize actions: The imaginative process is mostly just matching
rlm@449 2500 previous experience, and the recognition process gets to use all
rlm@449 2501 the senses to directly describe any action.
rlm@449 2502
rlm@436 2503 ** Action recognition is easy with a full gamut of senses
rlm@435 2504
rlm@449 2505 Embodied representations using multiple senses such as touch,
rlm@449 2506    proprioception, and muscle tension turns out to be exceedingly
rlm@449 2507 efficient at describing body-centered actions. It is the ``right
rlm@449 2508 language for the job''. For example, it takes only around 5 lines
rlm@449 2509 of LISP code to describe the action of ``curling'' using embodied
rlm@451 2510 primitives. It takes about 10 lines to describe the seemingly
rlm@449 2511 complicated action of wiggling.
rlm@449 2512
rlm@449 2513 The following action predicates each take a stream of sensory
rlm@449 2514 experience, observe however much of it they desire, and decide
rlm@449 2515 whether the worm is doing the action they describe. =curled?=
rlm@449 2516 relies on proprioception, =resting?= relies on touch, =wiggling?=
rlm@449 2517    relies on a Fourier analysis of muscle contraction, and
rlm@449 2518    =grand-circle?= relies on touch and reuses =curled?= as a guard.
rlm@449 2519
rlm@449 2520 #+caption: Program for detecting whether the worm is curled. This is the
rlm@449 2521 #+caption: simplest action predicate, because it only uses the last frame
rlm@449 2522 #+caption: of sensory experience, and only uses proprioceptive data. Even
rlm@449 2523 #+caption: this simple predicate, however, is automatically frame
rlm@449 2524 #+caption: independent and ignores vermopomorphic differences such as
rlm@449 2525 #+caption: worm textures and colors.
rlm@449 2526 #+name: curled
rlm@509 2527 #+begin_listing clojure
rlm@449 2528 #+begin_src clojure
rlm@449 2529 (defn curled?
rlm@449 2530 "Is the worm curled up?"
rlm@449 2531 [experiences]
rlm@449 2532 (every?
rlm@449 2533 (fn [[_ _ bend]]
rlm@449 2534 (> (Math/sin bend) 0.64))
rlm@449 2535 (:proprioception (peek experiences))))
rlm@449 2536 #+end_src
rlm@449 2537 #+end_listing
rlm@449 2538
rlm@449 2539 #+caption: Program for summarizing the touch information in a patch
rlm@449 2540 #+caption: of skin.
rlm@449 2541 #+name: touch-summary
rlm@509 2542 #+begin_listing clojure
rlm@449 2543 #+begin_src clojure
rlm@449 2544 (defn contact
rlm@449 2545 "Determine how much contact a particular worm segment has with
rlm@449 2546 other objects. Returns a value between 0 and 1, where 1 is full
rlm@449 2547 contact and 0 is no contact."
rlm@449 2548 [touch-region [coords contact :as touch]]
rlm@449 2549 (-> (zipmap coords contact)
rlm@449 2550 (select-keys touch-region)
rlm@449 2551 (vals)
rlm@449 2552 (#(map first %))
rlm@449 2553 (average)
rlm@449 2554 (* 10)
rlm@449 2555 (- 1)
rlm@449 2556 (Math/abs)))
rlm@449 2557 #+end_src
rlm@449 2558 #+end_listing
rlm@449 2559
rlm@449 2560
rlm@449 2561 #+caption: Program for detecting whether the worm is at rest. This program
rlm@449 2562 #+caption: uses a summary of the tactile information from the underbelly
rlm@449 2563 #+caption: of the worm, and is only true if every segment is touching the
rlm@449 2564 #+caption: floor. Note that this function contains no references to
rlm@449 2565    #+caption: proprioception at all.
rlm@449 2566 #+name: resting
rlm@452 2567 #+begin_listing clojure
rlm@449 2568 #+begin_src clojure
rlm@449 2569 (def worm-segment-bottom (rect-region [8 15] [14 22]))
rlm@449 2570
rlm@449 2571 (defn resting?
rlm@449 2572 "Is the worm resting on the ground?"
rlm@449 2573 [experiences]
rlm@449 2574 (every?
rlm@449 2575 (fn [touch-data]
rlm@449 2576 (< 0.9 (contact worm-segment-bottom touch-data)))
rlm@449 2577 (:touch (peek experiences))))
rlm@449 2578 #+end_src
rlm@449 2579 #+end_listing
rlm@449 2580
rlm@449 2581 #+caption: Program for detecting whether the worm is curled up into a
rlm@449 2582 #+caption: full circle. Here the embodied approach begins to shine, as
rlm@449 2583 #+caption: I am able to both use a previous action predicate (=curled?=)
rlm@449 2584 #+caption: as well as the direct tactile experience of the head and tail.
rlm@449 2585 #+name: grand-circle
rlm@452 2586 #+begin_listing clojure
rlm@449 2587 #+begin_src clojure
rlm@449 2588 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
rlm@449 2589
rlm@449 2590 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
rlm@449 2591
rlm@449 2592 (defn grand-circle?
rlm@449 2593 "Does the worm form a majestic circle (one end touching the other)?"
rlm@449 2594 [experiences]
rlm@449 2595 (and (curled? experiences)
rlm@449 2596 (let [worm-touch (:touch (peek experiences))
rlm@449 2597 tail-touch (worm-touch 0)
rlm@449 2598 head-touch (worm-touch 4)]
rlm@449 2599 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
rlm@449 2600 (< 0.55 (contact worm-segment-top-tip head-touch))))))
rlm@449 2601 #+end_src
rlm@449 2602 #+end_listing
rlm@449 2603
rlm@449 2604
rlm@449 2605 #+caption: Program for detecting whether the worm has been wiggling for
rlm@449 2606    #+caption: the last few frames. It uses a Fourier analysis of the muscle
rlm@449 2607 #+caption: contractions of the worm's tail to determine wiggling. This is
rlm@449 2608    #+caption: significant because there is no particular frame that clearly
rlm@449 2609 #+caption: indicates that the worm is wiggling --- only when multiple frames
rlm@449 2610 #+caption: are analyzed together is the wiggling revealed. Defining
rlm@449 2611 #+caption: wiggling this way also gives the worm an opportunity to learn
rlm@449 2612 #+caption: and recognize ``frustrated wiggling'', where the worm tries to
rlm@449 2613 #+caption: wiggle but can't. Frustrated wiggling is very visually different
rlm@449 2614 #+caption: from actual wiggling, but this definition gives it to us for free.
rlm@449 2615 #+name: wiggling
rlm@452 2616 #+begin_listing clojure
rlm@449 2617 #+begin_src clojure
rlm@449 2618 (defn fft [nums]
rlm@449 2619 (map
rlm@449 2620 #(.getReal %)
rlm@449 2621 (.transform
rlm@449 2622 (FastFourierTransformer. DftNormalization/STANDARD)
rlm@449 2623 (double-array nums) TransformType/FORWARD)))
rlm@449 2624
rlm@449 2625 (def indexed (partial map-indexed vector))
rlm@449 2626
rlm@449 2627 (defn max-indexed [s]
rlm@449 2628 (first (sort-by (comp - second) (indexed s))))
rlm@449 2629
rlm@449 2630 (defn wiggling?
rlm@449 2631 "Is the worm wiggling?"
rlm@449 2632 [experiences]
rlm@449 2633 (let [analysis-interval 0x40]
rlm@449 2634 (when (> (count experiences) analysis-interval)
rlm@449 2635 (let [a-flex 3
rlm@449 2636 a-ex 2
rlm@449 2637 muscle-activity
rlm@449 2638 (map :muscle (vector:last-n experiences analysis-interval))
rlm@449 2639 base-activity
rlm@449 2640 (map #(- (% a-flex) (% a-ex)) muscle-activity)]
rlm@449 2641 (= 2
rlm@449 2642 (first
rlm@449 2643 (max-indexed
rlm@449 2644 (map #(Math/abs %)
rlm@449 2645 (take 20 (fft base-activity))))))))))
rlm@449 2646 #+end_src
rlm@449 2647 #+end_listing
rlm@449 2648
rlm@449 2649 With these action predicates, I can now recognize the actions of
rlm@449 2650 the worm while it is moving under my control and I have access to
rlm@449 2651 all the worm's senses.
rlm@449 2652
rlm@449 2653 #+caption: Use the action predicates defined earlier to report on
rlm@449 2654 #+caption: what the worm is doing while in simulation.
rlm@449 2655 #+name: report-worm-activity
rlm@452 2656 #+begin_listing clojure
rlm@449 2657 #+begin_src clojure
rlm@449 2658 (defn debug-experience
rlm@449 2659 [experiences text]
rlm@449 2660 (cond
rlm@449 2661 (grand-circle? experiences) (.setText text "Grand Circle")
rlm@449 2662 (curled? experiences) (.setText text "Curled")
rlm@449 2663 (wiggling? experiences) (.setText text "Wiggling")
rlm@449 2664 (resting? experiences) (.setText text "Resting")))
rlm@449 2665 #+end_src
rlm@449 2666 #+end_listing
rlm@449 2667
rlm@449 2668 #+caption: Using =debug-experience=, the body-centered predicates
rlm@449 2669    #+caption: work together to classify the behavior of the worm.
rlm@451 2670    #+caption: The predicates are operating with access to the worm's
rlm@451 2671 #+caption: full sensory data.
rlm@449 2672    #+name: worm-identify-init
rlm@449 2673 #+ATTR_LaTeX: :width 10cm
rlm@449 2674 [[./images/worm-identify-init.png]]
rlm@449 2675
rlm@449 2676 These action predicates satisfy the recognition requirement of an
rlm@451 2677 empathic recognition system. There is power in the simplicity of
rlm@451 2678 the action predicates. They describe their actions without getting
rlm@451 2679 confused in visual details of the worm. Each one is frame
rlm@451 2680    independent, but more than that, they are each independent of
rlm@449 2681 irrelevant visual details of the worm and the environment. They
rlm@449 2682 will work regardless of whether the worm is a different color or
rlm@451 2683    heavily textured, or if the environment has strange lighting.
rlm@449 2684
rlm@449 2685 The trick now is to make the action predicates work even when the
rlm@449 2686 sensory data on which they depend is absent. If I can do that, then
rlm@449 2687    I will have gained much.
rlm@435 2688
rlm@436 2689 ** \Phi-space describes the worm's experiences
rlm@449 2690
rlm@449 2691 As a first step towards building empathy, I need to gather all of
rlm@449 2692 the worm's experiences during free play. I use a simple vector to
rlm@449 2693 store all the experiences.
rlm@449 2694
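   To be concrete about what is stored, here is a sketch of the shape
   of a single experience entry. The key names are the ones consumed
   by the action predicates of the previous section; the literal
   values are invented purely for illustration.

   #+begin_src clojure
   ;; Hypothetical entry in the experience vector. Real entries are
   ;; produced each frame by the touch, proprioception, and muscle
   ;; sense functions described earlier.
   (def example-experience
     {;; one [heading pitch roll] triple per joint
      :proprioception [[0.10 0.00 0.02] [0.00 0.05 0.01]
                       [0.00 0.00 0.00] [0.01 0.00 0.03]]
      ;; per-segment [coords activations] touch data (empty placeholder)
      :touch []
      ;; one activation fraction per muscle (placeholder values)
      :muscle [0.0 0.2 0.0 0.0]})
   #+end_src
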
rlm@449 2695 Each element of the experience vector exists in the vast space of
rlm@449 2696 all possible worm-experiences. Most of this vast space is actually
rlm@449 2697 unreachable due to physical constraints of the worm's body. For
rlm@449 2698 example, the worm's segments are connected by hinge joints that put
rlm@451 2699 a practical limit on the worm's range of motions without limiting
rlm@451 2700 its degrees of freedom. Some groupings of senses are impossible;
rlm@451 2701 the worm can not be bent into a circle so that its ends are
rlm@451 2702 touching and at the same time not also experience the sensation of
rlm@451 2703 touching itself.
rlm@449 2704
rlm@451 2705 As the worm moves around during free play and its experience vector
rlm@451 2706 grows larger, the vector begins to define a subspace which is all
rlm@451 2707    the sensations the worm can practically experience during normal
rlm@451 2708 operation. I call this subspace \Phi-space, short for
rlm@451 2709 physical-space. The experience vector defines a path through
rlm@451 2710 \Phi-space. This path has interesting properties that all derive
rlm@451 2711 from physical embodiment. The proprioceptive components are
rlm@451 2712 completely smooth, because in order for the worm to move from one
rlm@451 2713 position to another, it must pass through the intermediate
rlm@451 2714 positions. The path invariably forms loops as actions are repeated.
rlm@451 2715 Finally and most importantly, proprioception actually gives very
rlm@451 2716 strong inference about the other senses. For example, when the worm
rlm@451 2717 is flat, you can infer that it is touching the ground and that its
rlm@451 2718 muscles are not active, because if the muscles were active, the
rlm@451 2719 worm would be moving and would not be perfectly flat. In order to
rlm@451 2720 stay flat, the worm has to be touching the ground, or it would
rlm@451 2721 again be moving out of the flat position due to gravity. If the
rlm@451 2722 worm is positioned in such a way that it interacts with itself,
rlm@451 2723 then it is very likely to be feeling the same tactile feelings as
rlm@451 2724 the last time it was in that position, because it has the same body
rlm@451 2725 as then. If you observe multiple frames of proprioceptive data,
rlm@451 2726 then you can become increasingly confident about the exact
rlm@451 2727 activations of the worm's muscles, because it generally takes a
rlm@451 2728 unique combination of muscle contractions to transform the worm's
rlm@451 2729 body along a specific path through \Phi-space.
rlm@449 2730
rlm@449 2731 There is a simple way of taking \Phi-space and the total ordering
rlm@449 2732    provided by an experience vector and reliably inferring the rest of
rlm@449 2733 the senses.
rlm@435 2734
rlm@436 2735 ** Empathy is the process of tracing through \Phi-space
rlm@449 2736
rlm@450 2737 Here is the core of a basic empathy algorithm, starting with an
rlm@451 2738 experience vector:
rlm@451 2739
rlm@451 2740 First, group the experiences into tiered proprioceptive bins. I use
rlm@451 2741    three tiers of bins whose sizes differ by powers of 10; the smallest
rlm@451 2742    bin spans roughly 0.001 radians in each proprioceptive dimension.
rlm@450 2743
rlm@450 2744 Then, given a sequence of proprioceptive input, generate a set of
rlm@451 2745 matching experience records for each input, using the tiered
rlm@451 2746 proprioceptive bins.
rlm@449 2747
rlm@450 2748    Finally, to infer sensory data, select the longest consecutive chain
rlm@451 2749    of experiences. Consecutive experiences are those that appear next
rlm@451 2750    to each other in the experience vector.
rlm@449 2751
rlm@450 2752 This algorithm has three advantages:
rlm@450 2753
rlm@450 2754 1. It's simple
rlm@450 2755
rlm@451 2756    2. It's very fast -- retrieving possible interpretations takes
rlm@451 2757 constant time. Tracing through chains of interpretations takes
rlm@451 2758 time proportional to the average number of experiences in a
rlm@451 2759 proprioceptive bin. Redundant experiences in \Phi-space can be
rlm@451 2760 merged to save computation.
rlm@450 2761
rlm@450 2762    3. It protects from wrong interpretations of transient ambiguous
rlm@451 2763 proprioceptive data. For example, if the worm is flat for just
rlm@450 2764       an instant, this flatness will not be interpreted as implying
rlm@450 2765       that the worm has its muscles relaxed, since the flatness is
rlm@450 2766 part of a longer chain which includes a distinct pattern of
rlm@451 2767 muscle activation. Markov chains or other memoryless statistical
rlm@451 2768 models that operate on individual frames may very well make this
rlm@451 2769 mistake.
rlm@450 2770
rlm@450 2771 #+caption: Program to convert an experience vector into a
rlm@450 2772 #+caption: proprioceptively binned lookup function.
rlm@450 2773 #+name: bin
rlm@452 2774 #+begin_listing clojure
rlm@450 2775 #+begin_src clojure
rlm@449 2776 (defn bin [digits]
rlm@449 2777 (fn [angles]
rlm@449 2778 (->> angles
rlm@449 2779 (flatten)
rlm@449 2780 (map (juxt #(Math/sin %) #(Math/cos %)))
rlm@449 2781 (flatten)
rlm@449 2782 (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
rlm@449 2783
rlm@449 2784 (defn gen-phi-scan
rlm@450 2785 "Nearest-neighbors with binning. Only returns a result if
rlm@450 2786   the proprioceptive data is within 10% of a previously recorded
rlm@450 2787 result in all dimensions."
rlm@450 2788 [phi-space]
rlm@449 2789 (let [bin-keys (map bin [3 2 1])
rlm@449 2790 bin-maps
rlm@449 2791 (map (fn [bin-key]
rlm@449 2792 (group-by
rlm@449 2793 (comp bin-key :proprioception phi-space)
rlm@449 2794 (range (count phi-space)))) bin-keys)
rlm@449 2795 lookups (map (fn [bin-key bin-map]
rlm@450 2796 (fn [proprio] (bin-map (bin-key proprio))))
rlm@450 2797 bin-keys bin-maps)]
rlm@449 2798 (fn lookup [proprio-data]
rlm@449 2799 (set (some #(% proprio-data) lookups)))))
rlm@450 2800 #+end_src
rlm@450 2801 #+end_listing
rlm@449 2802
rlm@451 2803 #+caption: =longest-thread= finds the longest path of consecutive
rlm@451 2804 #+caption: experiences to explain proprioceptive worm data.
rlm@451 2805 #+name: phi-space-history-scan
rlm@451 2806 #+ATTR_LaTeX: :width 10cm
rlm@451 2807 [[./images/aurellem-gray.png]]
rlm@451 2808
rlm@451 2809 =longest-thread= infers sensory data by stitching together pieces
rlm@451 2810 from previous experience. It prefers longer chains of previous
rlm@451 2811 experience to shorter ones. For example, during training the worm
rlm@451 2812 might rest on the ground for one second before it performs its
rlm@451 2813    exercises. If during recognition the worm rests on the ground for
rlm@451 2814    five seconds, =longest-thread= will accommodate this five second
rlm@451 2815 rest period by looping the one second rest chain five times.
rlm@451 2816
rlm@451 2817    =longest-thread= takes time proportional to the average number of
rlm@451 2818 entries in a proprioceptive bin, because for each element in the
rlm@451 2819    starting bin it performs a series of set lookups in the preceding
rlm@451 2820 bins. If the total history is limited, then this is only a constant
rlm@451 2821 multiple times the number of entries in the starting bin. This
rlm@451 2822 analysis also applies even if the action requires multiple longest
rlm@451 2823 chains -- it's still the average number of entries in a
rlm@451 2824 proprioceptive bin times the desired chain length. Because
rlm@451 2825 =longest-thread= is so efficient and simple, I can interpret
rlm@451 2826 worm-actions in real time.
rlm@449 2827
rlm@450 2828    #+caption: Program to calculate empathy by tracing through \Phi-space
rlm@450 2829    #+caption: and finding the longest (i.e. most coherent) interpretation
rlm@450 2830 #+caption: of the data.
rlm@450 2831 #+name: longest-thread
rlm@452 2832 #+begin_listing clojure
rlm@450 2833 #+begin_src clojure
rlm@449 2834 (defn longest-thread
rlm@449 2835 "Find the longest thread from phi-index-sets. The index sets should
rlm@449 2836 be ordered from most recent to least recent."
rlm@449 2837 [phi-index-sets]
rlm@449 2838 (loop [result '()
rlm@449 2839 [thread-bases & remaining :as phi-index-sets] phi-index-sets]
rlm@449 2840 (if (empty? phi-index-sets)
rlm@449 2841 (vec result)
rlm@449 2842 (let [threads
rlm@449 2843 (for [thread-base thread-bases]
rlm@449 2844 (loop [thread (list thread-base)
rlm@449 2845 remaining remaining]
rlm@449 2846 (let [next-index (dec (first thread))]
rlm@449 2847 (cond (empty? remaining) thread
rlm@449 2848 (contains? (first remaining) next-index)
rlm@449 2849 (recur
rlm@449 2850 (cons next-index thread) (rest remaining))
rlm@449 2851 :else thread))))
rlm@449 2852 longest-thread
rlm@449 2853 (reduce (fn [thread-a thread-b]
rlm@449 2854 (if (> (count thread-a) (count thread-b))
rlm@449 2855 thread-a thread-b))
rlm@449 2856 '(nil)
rlm@449 2857 threads)]
rlm@449 2858 (recur (concat longest-thread result)
rlm@449 2859 (drop (count longest-thread) phi-index-sets))))))
rlm@450 2860 #+end_src
rlm@450 2861 #+end_listing
rlm@450 2862
rlm@451 2863 There is one final piece, which is to replace missing sensory data
rlm@451 2864 with a best-guess estimate. While I could fill in missing data by
rlm@451 2865 using a gradient over the closest known sensory data points,
rlm@451 2866 averages can be misleading. It is certainly possible to create an
rlm@451 2867 impossible sensory state by averaging two possible sensory states.
rlm@451 2868 Therefore, I simply replicate the most recent sensory experience to
rlm@451 2869 fill in the gaps.
rlm@449 2870
rlm@449 2871 #+caption: Fill in blanks in sensory experience by replicating the most
rlm@449 2872 #+caption: recent experience.
rlm@449 2873 #+name: infer-nils
rlm@452 2874 #+begin_listing clojure
rlm@449 2875 #+begin_src clojure
rlm@449 2876 (defn infer-nils
rlm@449 2877 "Replace nils with the next available non-nil element in the
rlm@449 2878 sequence, or barring that, 0."
rlm@449 2879 [s]
rlm@449 2880 (loop [i (dec (count s))
rlm@449 2881 v (transient s)]
rlm@449 2882 (if (zero? i) (persistent! v)
rlm@449 2883 (if-let [cur (v i)]
rlm@449 2884 (if (get v (dec i) 0)
rlm@449 2885 (recur (dec i) v)
rlm@449 2886 (recur (dec i) (assoc! v (dec i) cur)))
rlm@449 2887 (recur i (assoc! v i 0))))))
rlm@449 2888 #+end_src
rlm@449 2889 #+end_listing
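
   A small, hypothetical example of its behavior: nils are filled from
   the next non-nil entry to their right, and a trailing nil, having
   no successor, becomes 0.

   #+begin_src clojure
   (infer-nils [nil 1 nil 2 nil])
   ;; => [1 1 2 2 0]
   #+end_src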
rlm@435 2890
rlm@441 2891 ** Efficient action recognition with =EMPATH=
rlm@451 2892
rlm@451 2893 To use =EMPATH= with the worm, I first need to gather a set of
rlm@451 2894 experiences from the worm that includes the actions I want to
rlm@452 2895 recognize. The =generate-phi-space= program (listing
rlm@451 2896    \ref{generate-phi-space}) runs the worm through a series of
rlm@451 2897    exercises and gathers those experiences into a vector. The
rlm@451 2898 =do-all-the-things= program is a routine expressed in a simple
rlm@452 2899 muscle contraction script language for automated worm control. It
rlm@452 2900 causes the worm to rest, curl, and wiggle over about 700 frames
rlm@452 2901 (approx. 11 seconds).
rlm@425 2902
rlm@451 2903 #+caption: Program to gather the worm's experiences into a vector for
rlm@451 2904 #+caption: further processing. The =motor-control-program= line uses
rlm@451 2905 #+caption: a motor control script that causes the worm to execute a series
rlm@451 2906    #+caption: of ``exercises'' that include all the action predicates.
rlm@451 2907 #+name: generate-phi-space
rlm@452 2908 #+begin_listing clojure
rlm@451 2909 #+begin_src clojure
rlm@451 2910 (def do-all-the-things
rlm@451 2911 (concat
rlm@451 2912 curl-script
rlm@451 2913 [[300 :d-ex 40]
rlm@451 2914 [320 :d-ex 0]]
rlm@451 2915 (shift-script 280 (take 16 wiggle-script))))
rlm@451 2916
rlm@451 2917 (defn generate-phi-space []
rlm@451 2918 (let [experiences (atom [])]
rlm@451 2919 (run-world
rlm@451 2920 (apply-map
rlm@451 2921 worm-world
rlm@451 2922 (merge
rlm@451 2923 (worm-world-defaults)
rlm@451 2924 {:end-frame 700
rlm@451 2925 :motor-control
rlm@451 2926 (motor-control-program worm-muscle-labels do-all-the-things)
rlm@451 2927 :experiences experiences})))
rlm@451 2928 @experiences))
rlm@451 2929 #+end_src
rlm@451 2930 #+end_listing
rlm@451 2931
rlm@451 2932 #+caption: Use longest thread and a phi-space generated from a short
rlm@451 2933 #+caption: exercise routine to interpret actions during free play.
rlm@451 2934 #+name: empathy-debug
rlm@452 2935 #+begin_listing clojure
rlm@451 2936 #+begin_src clojure
rlm@451 2937 (defn init []
rlm@451 2938 (def phi-space (generate-phi-space))
rlm@451 2939 (def phi-scan (gen-phi-scan phi-space)))
rlm@451 2940
rlm@451 2941 (defn empathy-demonstration []
rlm@451 2942 (let [proprio (atom ())]
rlm@451 2943 (fn
rlm@451 2944 [experiences text]
rlm@451 2945 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
rlm@451 2946 (swap! proprio (partial cons phi-indices))
rlm@451 2947 (let [exp-thread (longest-thread (take 300 @proprio))
rlm@451 2948 empathy (mapv phi-space (infer-nils exp-thread))]
rlm@451 2949 (println-repl (vector:last-n exp-thread 22))
rlm@451 2950 (cond
rlm@451 2951 (grand-circle? empathy) (.setText text "Grand Circle")
rlm@451 2952 (curled? empathy) (.setText text "Curled")
rlm@451 2953 (wiggling? empathy) (.setText text "Wiggling")
rlm@451 2954 (resting? empathy) (.setText text "Resting")
rlm@451 2955 :else (.setText text "Unknown")))))))
rlm@451 2956
rlm@451 2957 (defn empathy-experiment [record]
rlm@451 2958 (.start (worm-world :experience-watch (debug-experience-phi)
rlm@451 2959 :record record :worm worm*)))
rlm@451 2960 #+end_src
rlm@451 2961 #+end_listing
rlm@451 2962
rlm@451 2963 The result of running =empathy-experiment= is that the system is
rlm@451 2964 generally able to interpret worm actions using the action-predicates
rlm@451 2965 on simulated sensory data just as well as with actual data. Figure
rlm@451 2966 \ref{empathy-debug-image} was generated using =empathy-experiment=:
rlm@451 2967
rlm@451 2968 #+caption: From only proprioceptive data, =EMPATH= was able to infer
rlm@451 2969 #+caption: the complete sensory experience and classify four poses
rlm@451 2970    #+caption: (The last panel shows a composite image of \emph{wiggling},
rlm@451 2971 #+caption: a dynamic pose.)
rlm@451 2972 #+name: empathy-debug-image
rlm@451 2973 #+ATTR_LaTeX: :width 10cm :placement [H]
rlm@451 2974 [[./images/empathy-1.png]]
rlm@451 2975
rlm@451 2976 One way to measure the performance of =EMPATH= is to compare the
rlm@451 2977    suitability of the imagined sense experience to trigger the same
rlm@451 2978 action predicates as the real sensory experience.
rlm@451 2979
rlm@451 2980 #+caption: Determine how closely empathy approximates actual
rlm@451 2981 #+caption: sensory data.
rlm@451 2982 #+name: test-empathy-accuracy
rlm@452 2983 #+begin_listing clojure
rlm@451 2984 #+begin_src clojure
rlm@451 2985 (def worm-action-label
rlm@451 2986 (juxt grand-circle? curled? wiggling?))
rlm@451 2987
rlm@451 2988 (defn compare-empathy-with-baseline [matches]
rlm@451 2989 (let [proprio (atom ())]
rlm@451 2990 (fn
rlm@451 2991 [experiences text]
rlm@451 2992 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
rlm@451 2993 (swap! proprio (partial cons phi-indices))
rlm@451 2994 (let [exp-thread (longest-thread (take 300 @proprio))
rlm@451 2995 empathy (mapv phi-space (infer-nils exp-thread))
rlm@451 2996 experience-matches-empathy
rlm@451 2997 (= (worm-action-label experiences)
rlm@451 2998 (worm-action-label empathy))]
rlm@451 2999 (println-repl experience-matches-empathy)
rlm@451 3000 (swap! matches #(conj % experience-matches-empathy)))))))
rlm@451 3001
rlm@451 3002 (defn accuracy [v]
rlm@451 3003 (float (/ (count (filter true? v)) (count v))))
rlm@451 3004
rlm@451 3005 (defn test-empathy-accuracy []
rlm@451 3006 (let [res (atom [])]
rlm@451 3007 (run-world
rlm@451 3008 (worm-world :experience-watch
rlm@451 3009 (compare-empathy-with-baseline res)
rlm@451 3010 :worm worm*))
rlm@451 3011 (accuracy @res)))
rlm@451 3012 #+end_src
rlm@451 3013 #+end_listing
rlm@451 3014
rlm@451 3015 Running =test-empathy-accuracy= using the very short exercise
rlm@451 3016 program defined in listing \ref{generate-phi-space}, and then doing
rlm@451 3017    a similar pattern of activity manually yields an accuracy of around
rlm@451 3018 73%. This is based on very limited worm experience. By training the
rlm@451 3019 worm for longer, the accuracy dramatically improves.
rlm@451 3020
rlm@451 3021 #+caption: Program to generate \Phi-space using manual training.
rlm@451 3022 #+name: manual-phi-space
rlm@451 3023 #+begin_listing clojure
rlm@451 3024 #+begin_src clojure
rlm@451 3025 (defn init-interactive []
rlm@451 3026 (def phi-space
rlm@451 3027 (let [experiences (atom [])]
rlm@451 3028 (run-world
rlm@451 3029 (apply-map
rlm@451 3030 worm-world
rlm@451 3031 (merge
rlm@451 3032 (worm-world-defaults)
rlm@451 3033 {:experiences experiences})))
rlm@451 3034 @experiences))
rlm@451 3035 (def phi-scan (gen-phi-scan phi-space)))
rlm@451 3036 #+end_src
rlm@451 3037 #+end_listing
rlm@451 3038
rlm@451 3039 After about 1 minute of manual training, I was able to achieve 95%
rlm@451 3040 accuracy on manual testing of the worm using =init-interactive= and
rlm@452 3041 =test-empathy-accuracy=. The majority of errors are near the
rlm@452 3042 boundaries of transitioning from one type of action to another.
rlm@452 3043 During these transitions the exact label for the action is more open
rlm@452 3044   to interpretation, and disagreement between empathy and experience
rlm@452 3045 is more excusable.
rlm@450 3046
rlm@449 3047 ** Digression: bootstrapping touch using free exploration
rlm@449 3048
rlm@452 3049 In the previous section I showed how to compute actions in terms of
rlm@452 3050   body-centered predicates which relied on the average touch activation of
rlm@452 3051   pre-defined regions of the worm's skin. What if, instead of receiving
rlm@452 3052   touch pre-grouped into the six faces of each worm segment, the true
rlm@452 3053   topology of the worm's skin was unknown? This is more similar to how
rlm@452 3054 a nerve fiber bundle might be arranged. While two fibers that are
rlm@452 3055 close in a nerve bundle /might/ correspond to two touch sensors that
rlm@452 3056 are close together on the skin, the process of taking a complicated
rlm@452 3057 surface and forcing it into essentially a circle requires some cuts
rlm@452 3058   and rearrangements.
rlm@452 3059
rlm@452 3060 In this section I show how to automatically learn the skin-topology of
rlm@452 3061 a worm segment by free exploration. As the worm rolls around on the
rlm@452 3062 floor, large sections of its surface get activated. If the worm has
rlm@452 3063   stopped moving, then whatever region of skin is touching the
rlm@452 3064 floor is probably an important region, and should be recorded.
rlm@452 3065
rlm@452 3066 #+caption: Program to detect whether the worm is in a resting state
rlm@452 3067 #+caption: with one face touching the floor.
rlm@452 3068 #+name: pure-touch
rlm@452 3069 #+begin_listing clojure
rlm@452 3070 #+begin_src clojure
rlm@452 3071 (def full-contact [(float 0.0) (float 0.1)])
rlm@452 3072
rlm@452 3073 (defn pure-touch?
rlm@452 3074 "This is worm specific code to determine if a large region of touch
rlm@452 3075 sensors is either all on or all off."
rlm@452 3076 [[coords touch :as touch-data]]
rlm@452 3077 (= (set (map first touch)) (set full-contact)))
rlm@452 3078 #+end_src
rlm@452 3079 #+end_listing
rlm@452 3080
rlm@452 3081   After collecting these important regions, there will be many nearly
rlm@452 3082   similar touch regions. While for some purposes the subtle
rlm@452 3083   differences between these regions will be important, for my
rlm@452 3084   purposes I collapse them into mostly non-overlapping sets using
rlm@452 3085   =remove-similar= in listing \ref{remove-similar}.
rlm@452 3086
rlm@452 3087   #+caption: Program to take a list of sets of points and ``collapse'' them
rlm@452 3088   #+caption: so that the remaining sets in the list are significantly
rlm@452 3089 #+caption: different from each other. Prefer smaller sets to larger ones.
rlm@452 3090   #+name: remove-similar
rlm@452 3091 #+begin_listing clojure
rlm@452 3092 #+begin_src clojure
rlm@452 3093 (defn remove-similar
rlm@452 3094 [coll]
rlm@452 3095 (loop [result () coll (sort-by (comp - count) coll)]
rlm@452 3096 (if (empty? coll) result
rlm@452 3097 (let [[x & xs] coll
rlm@452 3098 c (count x)]
rlm@452 3099 (if (some
rlm@452 3100 (fn [other-set]
rlm@452 3101 (let [oc (count other-set)]
rlm@452 3102 (< (- (count (union other-set x)) c) (* oc 0.1))))
rlm@452 3103 xs)
rlm@452 3104 (recur result xs)
rlm@452 3105 (recur (cons x result) xs))))))
rlm@452 3106 #+end_src
rlm@452 3107 #+end_listing
rlm@452 3108
rlm@452 3109 Actually running this simulation is easy given =CORTEX='s facilities.
rlm@452 3110
rlm@452 3111 #+caption: Collect experiences while the worm moves around. Filter the touch
rlm@452 3112   #+caption: sensations by stable ones, collapse similar ones together,
rlm@452 3113 #+caption: and report the regions learned.
rlm@452 3114 #+name: learn-touch
rlm@452 3115 #+begin_listing clojure
rlm@452 3116 #+begin_src clojure
rlm@452 3117 (defn learn-touch-regions []
rlm@452 3118 (let [experiences (atom [])
rlm@452 3119 world (apply-map
rlm@452 3120 worm-world
rlm@452 3121 (assoc (worm-segment-defaults)
rlm@452 3122 :experiences experiences))]
rlm@452 3123 (run-world world)
rlm@452 3124 (->>
rlm@452 3125 @experiences
rlm@452 3126 (drop 175)
rlm@452 3127 ;; access the single segment's touch data
rlm@452 3128 (map (comp first :touch))
rlm@452 3129 ;; only deal with "pure" touch data to determine surfaces
rlm@452 3130 (filter pure-touch?)
rlm@452 3131 ;; associate coordinates with touch values
rlm@452 3132 (map (partial apply zipmap))
rlm@452 3133 ;; select those regions where contact is being made
rlm@452 3134 (map (partial group-by second))
rlm@452 3135 (map #(get % full-contact))
rlm@452 3136 (map (partial map first))
rlm@452 3137 ;; remove redundant/subset regions
rlm@452 3138 (map set)
rlm@452 3139 remove-similar)))
rlm@452 3140
rlm@452 3141 (defn learn-and-view-touch-regions []
rlm@452 3142 (map view-touch-region
rlm@452 3143 (learn-touch-regions)))
rlm@452 3144 #+end_src
rlm@452 3145 #+end_listing
rlm@452 3146
rlm@452 3147   The only thing remaining to define is the particular motion the worm
rlm@452 3148 must take. I accomplish this with a simple motor control program.
rlm@452 3149
rlm@452 3150 #+caption: Motor control program for making the worm roll on the ground.
rlm@452 3151 #+caption: This could also be replaced with random motion.
rlm@452 3152 #+name: worm-roll
rlm@452 3153 #+begin_listing clojure
rlm@452 3154 #+begin_src clojure
rlm@452 3155 (defn touch-kinesthetics []
rlm@452 3156 [[170 :lift-1 40]
rlm@452 3157 [190 :lift-1 19]
rlm@452 3158 [206 :lift-1 0]
rlm@452 3159
rlm@452 3160 [400 :lift-2 40]
rlm@452 3161 [410 :lift-2 0]
rlm@452 3162
rlm@452 3163 [570 :lift-2 40]
rlm@452 3164 [590 :lift-2 21]
rlm@452 3165 [606 :lift-2 0]
rlm@452 3166
rlm@452 3167 [800 :lift-1 30]
rlm@452 3168 [809 :lift-1 0]
rlm@452 3169
rlm@452 3170 [900 :roll-2 40]
rlm@452 3171 [905 :roll-2 20]
rlm@452 3172 [910 :roll-2 0]
rlm@452 3173
rlm@452 3174 [1000 :roll-2 40]
rlm@452 3175 [1005 :roll-2 20]
rlm@452 3176 [1010 :roll-2 0]
rlm@452 3177
rlm@452 3178 [1100 :roll-2 40]
rlm@452 3179 [1105 :roll-2 20]
rlm@452 3180 [1110 :roll-2 0]
rlm@452 3181 ])
rlm@452 3182 #+end_src
rlm@452 3183 #+end_listing
rlm@452 3184
rlm@452 3185
rlm@452 3186 #+caption: The small worm rolls around on the floor, driven
rlm@452 3187 #+caption: by the motor control program in listing \ref{worm-roll}.
rlm@452 3188   #+name: worm-roll-figure
rlm@452 3189 #+ATTR_LaTeX: :width 12cm
rlm@452 3190 [[./images/worm-roll.png]]
rlm@452 3191
rlm@452 3192
rlm@452 3193 #+caption: After completing its adventures, the worm now knows
rlm@452 3194 #+caption: how its touch sensors are arranged along its skin. These
rlm@452 3195 #+caption: are the regions that were deemed important by
rlm@452 3196 #+caption: =learn-touch-regions=. Note that the worm has discovered
rlm@452 3197 #+caption: that it has six sides.
rlm@452 3198 #+name: worm-touch-map
rlm@452 3199 #+ATTR_LaTeX: :width 12cm
rlm@452 3200 [[./images/touch-learn.png]]
rlm@452 3201
rlm@452 3202 While simple, =learn-touch-regions= exploits regularities in both
rlm@452 3203 the worm's physiology and the worm's environment to correctly
rlm@452 3204 deduce that the worm has six sides. Note that =learn-touch-regions=
rlm@452 3205 would work just as well even if the worm's touch sense data were
rlm@452 3206   completely scrambled. The cross shape is just for convenience. This
rlm@452 3207 example justifies the use of pre-defined touch regions in =EMPATH=.
rlm@452 3208
rlm@509 3209 * Contributions
rlm@454 3210
rlm@461 3211 In this thesis you have seen the =CORTEX= system, a complete
rlm@461 3212 environment for creating simulated creatures. You have seen how to
rlm@461 3213 implement five senses including touch, proprioception, hearing,
rlm@461 3214   vision, and muscle tension. You have seen how to create new creatures
rlm@461 3215   using Blender, a 3D modeling tool. I hope that =CORTEX= will be
rlm@461 3216 useful in further research projects. To this end I have included the
rlm@461 3217 full source to =CORTEX= along with a large suite of tests and
rlm@461 3218 examples. I have also created a user guide for =CORTEX= which is
rlm@461 3219   included in an appendix to this thesis.
rlm@447 3220
rlm@461 3221   You have also seen how I used =CORTEX= as a platform to attack the
rlm@461 3222 /action recognition/ problem, which is the problem of recognizing
rlm@461 3223 actions in video. You saw a simple system called =EMPATH= which
rlm@461 3224   identifies actions by first describing actions in a body-centered,
rlm@461 3225   rich sense language, then inferring a full range of sensory
rlm@461 3226 experience from limited data using previous experience gained from
rlm@461 3227 free play.
rlm@447 3228
rlm@461 3229 As a minor digression, you also saw how I used =CORTEX= to enable a
rlm@461 3230 tiny worm to discover the topology of its skin simply by rolling on
rlm@461 3231 the ground.
rlm@461 3232
rlm@461 3233 In conclusion, the main contributions of this thesis are:
rlm@461 3234
rlm@461 3235 - =CORTEX=, a system for creating simulated creatures with rich
rlm@461 3236 senses.
rlm@461 3237 - =EMPATH=, a program for recognizing actions by imagining sensory
rlm@461 3238 experience.
rlm@447 3239
rlm@447 3240 # An anatomical joke:
rlm@447 3241 # - Training
rlm@447 3242 # - Skeletal imitation
rlm@447 3243 # - Sensory fleshing-out
rlm@447 3244 # - Classification
rlm@488 3245 #+BEGIN_LaTeX
rlm@488 3246 \appendix
rlm@488 3247 #+END_LaTeX
rlm@509 3248 * Appendix: =CORTEX= User Guide
rlm@488 3249
rlm@488 3250 Those who write a thesis should endeavor to make their code not only
rlm@488 3251 accessible, but actually usable, as a way to pay back the community
rlm@488 3252 that made the thesis possible in the first place. This thesis would
rlm@488 3253 not be possible without Free Software such as jMonkeyEngine3,
rlm@488 3254 Blender, clojure, emacs, ffmpeg, and many other tools. That is why I
rlm@488 3255 have included this user guide, in the hope that someone else might
rlm@488 3256 find =CORTEX= useful.
rlm@488 3257
rlm@488 3258 ** Obtaining =CORTEX=
rlm@488 3259
rlm@488 3260    You can get =CORTEX= from its Mercurial repository at
rlm@488 3261 http://hg.bortreb.com/cortex. You may also download =CORTEX=
rlm@488 3262 releases at http://aurellem.org/cortex/releases/. As a condition of
rlm@488 3263 making this thesis, I have also provided Professor Winston the
rlm@488 3264 =CORTEX= source, and he knows how to run the demos and get started.
rlm@488 3265 You may also email me at =cortex@aurellem.org= and I may help where
rlm@488 3266 I can.
rlm@488 3267
rlm@488 3268 ** Running =CORTEX=
rlm@488 3269
rlm@488 3270 =CORTEX= comes with README and INSTALL files that will guide you
rlm@488 3271 through installation and running the test suite. In particular you
rlm@488 3272    should look at =cortex.test=, which contains test suites that
rlm@488 3273 run through all senses and multiple creatures.
rlm@488 3274
rlm@488 3275 ** Creating creatures
rlm@488 3276
rlm@488 3277 Creatures are created using /Blender/, a free 3D modeling program.
rlm@488 3278 You will need Blender version 2.6 when using the =CORTEX= included
rlm@488 3279    in this thesis. You create a =CORTEX= creature in a similar manner
rlm@488 3280 to modeling anything in Blender, except that you also create
rlm@488 3281 several trees of empty nodes which define the creature's senses.
rlm@488 3282
rlm@488 3283 *** Mass
rlm@488 3284
rlm@488 3285 To give an object mass in =CORTEX=, add a ``mass'' metadata label
rlm@488 3286 to the object with the mass in jMonkeyEngine units. Note that
rlm@488 3287 setting the mass to 0 causes the object to be immovable.
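
    For example, a hypothetical segment weighing one jMonkeyEngine
    mass unit would carry the following value under its ``mass''
    label (assuming, as with the other labels in this guide, that the
    value is read directly as a number):

    #+BEGIN_EXAMPLE
    1.0
    #+END_EXAMPLE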
rlm@488 3288
rlm@488 3289 *** Joints
rlm@488 3290
rlm@488 3291 Joints are created by creating an empty node named =joints= and
rlm@488 3292 then creating any number of empty child nodes to represent your
rlm@488 3293 creature's joints. The joint will automatically connect the
rlm@488 3294 closest two physical objects. It will help to set the empty node's
rlm@488 3295 display mode to ``Arrows'' so that you can clearly see the
rlm@488 3296 direction of the axes.
rlm@488 3297
rlm@488 3298 Joint nodes should have the following metadata under the ``joint''
rlm@488 3299 label:
rlm@488 3300
rlm@488 3301 #+BEGIN_SRC clojure
rlm@488 3302 ;; ONE OF the following, under the label "joint":
rlm@488 3303 {:type :point}
rlm@488 3304
rlm@488 3305 ;; OR
rlm@488 3306
rlm@488 3307 {:type :hinge
rlm@488 3308 :limit [<limit-low> <limit-high>]
rlm@488 3309 :axis (Vector3f. <x> <y> <z>)}
rlm@488 3310 ;;(:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
rlm@488 3311
rlm@488 3312 ;; OR
rlm@488 3313
rlm@488 3314 {:type :cone
rlm@488 3315 :limit-xz <lim-xz>
rlm@488 3316 :limit-xy <lim-xy>
rlm@488 3317 :twist <lim-twist>} ;(use XZY rotation mode in blender!)
rlm@488 3318 #+END_SRC
rlm@488 3319
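As a concrete illustration (the numbers are arbitrary), an elbow-like
hinge that swings through roughly ninety degrees about the node's
Y-axis could be declared as:

#+BEGIN_SRC clojure
;; under the label "joint" -- limits assumed to be in radians,
;; values chosen only for illustration:
{:type :hinge
 :limit [0 1.57]
 :axis (Vector3f. 0 1 0)}
#+END_SRC
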
rlm@488 3320 *** Eyes
rlm@488 3321
rlm@488 3322 Eyes are created by creating an empty node named =eyes= and then
rlm@488 3323 creating any number of empty child nodes to represent your
rlm@488 3324 creature's eyes.
rlm@488 3325
rlm@488 3326 Eye nodes should have the following metadata under the ``eye''
rlm@488 3327 label:
rlm@488 3328
rlm@488 3329 #+BEGIN_SRC clojure
rlm@488 3330 {:red <red-retina-definition>
rlm@488 3331 :blue <blue-retina-definition>
rlm@488 3332 :green <green-retina-definition>
rlm@488 3333 :all <all-retina-definition>
rlm@488 3334 (<0xrrggbb> <custom-retina-image>)...
rlm@488 3335 }
rlm@488 3336 #+END_SRC
rlm@488 3337
rlm@488 3338 Any of the color channels may be omitted. You may also include
rlm@488 3339 your own color selectors, and in fact :red is equivalent to
rlm@488 3340 0xFF0000 and so forth. The eye will be placed at the same position
rlm@488 3341 as the empty node and will bind to the nearest physical object.
rlm@488 3342 The eye will point outward from the X-axis of the node, and ``up''
rlm@488 3343 will be in the direction of the Z-axis of the node. It will help
rlm@488 3344 to set the empty node's display mode to ``Arrows'' so that you can
rlm@488 3345 clearly see the direction of the axes.
rlm@488 3346
rlm@488 3347 Each retina file should contain white pixels wherever you want to be
rlm@488 3348 sensitive to your chosen color. If you want the entire field of
rlm@488 3349 view, specify :all of 0xFFFFFF and a retinal map that is entirely
rlm@488 3350 white.
rlm@488 3351
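For instance, an eye that is sensitive to every color through a single
retinal map might carry the following ``eye'' metadata (the file name
is hypothetical):

#+BEGIN_SRC clojure
{:all "retina.png"}
#+END_SRC
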
rlm@488 3352 Here is a sample retinal map:
rlm@488 3353
rlm@488 3354 #+caption: An example retinal profile image. White pixels are
rlm@488 3355 #+caption: photo-sensitive elements. The distribution of white
rlm@488 3356 #+caption: pixels is denser in the middle and falls off at the
rlm@488 3357 #+caption: edges and is inspired by the human retina.
rlm@488 3358 #+name: retina
rlm@488 3359 #+ATTR_LaTeX: :width 7cm :placement [H]
rlm@488 3360 [[./images/retina-small.png]]
rlm@488 3361
rlm@488 3362 *** Hearing
rlm@488 3363
rlm@488 3364 Ears are created by creating an empty node named =ears= and then
rlm@488 3365 creating any number of empty child nodes to represent your
rlm@488 3366 creature's ears.
rlm@488 3367
rlm@488 3368 Ear nodes do not require any metadata.
rlm@488 3369
rlm@488 3370 The ear will bind to and follow the closest physical node.
rlm@488 3371
rlm@488 3372 *** Touch
rlm@488 3373
rlm@488 3374 Touch is handled similarly to mass. To make a particular object
rlm@488 3375 touch sensitive, add metadata of the following form under the
rlm@488 3376 object's ``touch'' metadata field:
rlm@488 3377
rlm@488 3378 #+BEGIN_EXAMPLE
rlm@488 3379 <touch-UV-map-file-name>
rlm@488 3380 #+END_EXAMPLE
rlm@488 3381
rlm@488 3382 You may also include an optional ``scale'' metadata number to
rlm@488 3383 specify the length of the touch feelers. The default is $0.1$,
rlm@488 3384 and this is generally sufficient.
rlm@488 3385
rlm@488 3386 The touch UV should contain white pixels for each touch sensor.
rlm@488 3387
rlm@488 3388 Here is an example touch-uv map that approximates a human finger,
rlm@488 3389 and its corresponding model.
rlm@488 3390
rlm@488 3391 #+caption: This is the tactile-sensor-profile for the upper segment
rlm@488 3392 #+caption: of a fingertip. It defines regions of high touch sensitivity
rlm@488 3393 #+caption: (where there are many white pixels) and regions of low
rlm@488 3394 #+caption: sensitivity (where white pixels are sparse).
rlm@488 3395 #+name: guide-fingertip-UV
rlm@488 3396 #+ATTR_LaTeX: :width 9cm :placement [H]
rlm@488 3397 [[./images/finger-UV.png]]
rlm@488 3398
rlm@488 3399 #+caption: The fingertip UV-image from above applied to a simple
rlm@488 3400 #+caption: model of a fingertip.
rlm@488 3401 #+name: guide-fingertip
rlm@488 3402 #+ATTR_LaTeX: :width 9cm :placement [H]
rlm@488 3403 [[./images/finger-2.png]]
rlm@488 3404
rlm@488 3405 *** Proprioception
rlm@488 3406
rlm@488 3407 Proprioception is tied to each joint node -- nothing special must
rlm@488 3408 be done in a blender model to enable proprioception other than
rlm@488 3409 creating joint nodes.
rlm@488 3410
rlm@488 3411 *** Muscles
rlm@488 3412
rlm@488 3413 Muscles are created by creating an empty node named =muscles= and
rlm@488 3414 then creating any number of empty child nodes to represent your
rlm@488 3415 creature's muscles.
rlm@488 3416
rlm@488 3417
rlm@488 3418 Muscle nodes should have the following metadata under the
rlm@488 3419 ``muscle'' label:
rlm@488 3420
rlm@488 3421 #+BEGIN_EXAMPLE
rlm@488 3422 <muscle-profile-file-name>
rlm@488 3423 #+END_EXAMPLE
rlm@488 3424
rlm@488 3425 Muscles should also have a ``strength'' metadata entry describing
rlm@488 3426 the muscle's total strength at full activation.
rlm@488 3427
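For example (the file name and strength value are hypothetical), a
muscle might carry the following two metadata entries:

#+BEGIN_EXAMPLE
muscle:   bicep-profile.png
strength: 150
#+END_EXAMPLE
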
rlm@488 3428 Muscle profiles are simple images that contain the relative amount
rlm@488 3429 of muscle power in each simulated alpha motor neuron. The width of
rlm@488 3430 the image is the total size of the motor pool, and the redness of
rlm@488 3431 each pixel is the relative power of the corresponding motor neuron.
rlm@488 3432
rlm@488 3433 While the profile image can have any dimensions, only the first
rlm@488 3434 line of pixels is used to define the muscle. Here is a sample
rlm@488 3435 muscle profile image that defines a human-like muscle.
rlm@488 3436
rlm@488 3437 #+caption: A muscle profile image that describes the strengths
rlm@488 3438 #+caption: of each motor neuron in a muscle. White is weakest
rlm@488 3439 #+caption: and dark red is strongest. This particular pattern
rlm@488 3440 #+caption: has weaker motor neurons at the beginning, just
rlm@488 3441 #+caption: like human muscle.
rlm@488 3442 #+name: muscle-recruit
rlm@488 3443 #+ATTR_LaTeX: :width 7cm :placement [H]
rlm@488 3444 [[./images/basic-muscle.png]]
rlm@488 3445
rlm@488 3446 Muscles twist the nearest physical object about the muscle node's
rlm@488 3447 Z-axis. I recommend using the ``Single Arrow'' display mode for
rlm@488 3448 muscles and using the right hand rule to determine which way the
rlm@488 3449 muscle will twist. To make a segment that can twist in multiple
rlm@488 3450 directions, create multiple, differently aligned muscles.
rlm@488 3451
rlm@488 3452 ** =CORTEX= API
rlm@488 3453
rlm@488 3454 These are some of the functions exposed by =CORTEX= for creating
rlm@488 3455 worlds and simulating creatures. These are in addition to
rlm@488 3456 jMonkeyEngine3's extensive library, which is documented elsewhere.
rlm@488 3457
rlm@488 3458 *** Simulation
rlm@488 3459 - =(world root-node key-map setup-fn update-fn)= :: create
rlm@488 3460 a simulation (a short sketch using it appears after this list).
rlm@488 3461 - /root-node/ :: a =com.jme3.scene.Node= object which
rlm@488 3462 contains all of the objects that should be in the
rlm@488 3463 simulation.
rlm@488 3464
rlm@488 3465 - /key-map/ :: a map from strings describing keys to
rlm@488 3466 functions that should be executed whenever that key is
rlm@488 3467 pressed. The functions should take a =SimpleApplication=
rlm@488 3468 object and a boolean value. The =SimpleApplication= is the
rlm@488 3469 current simulation that is running, and the boolean is true
rlm@488 3470 if the key is being pressed, and false if it is being
rlm@488 3471 released. As an example,
rlm@488 3472 #+BEGIN_SRC clojure
rlm@488 3473 {"key-j" (fn [game value] (if value (println "key j pressed")))}
rlm@488 3474 #+END_SRC
rlm@488 3475 is a valid key-map which will cause the simulation to print
rlm@488 3476 a message whenever the 'j' key on the keyboard is pressed.
rlm@488 3477
rlm@488 3478 - /setup-fn/ :: a function that takes a =SimpleApplication=
rlm@488 3479 object. It is called once when initializing the simulation.
rlm@488 3480 Use it to create things like lights, change the gravity,
rlm@488 3481 initialize debug nodes, etc.
rlm@488 3482
rlm@488 3483 - /update-fn/ :: this function takes a =SimpleApplication=
rlm@488 3484 object and a float and is called every frame of the
rlm@488 3485 simulation. The float tells how many seconds it has been
rlm@488 3486 since the last frame was rendered, according to whatever
rlm@488 3487 clock jMonkeyEngine is currently using. The default is to use
rlm@488 3488 =IsoTimer=, which will result in this value always being the same.
rlm@488 3489
rlm@488 3490 - =(position-camera world position rotation)= :: set the position
rlm@488 3491 and rotation of the simulation's main camera.
rlm@488 3492
rlm@488 3493 - =(enable-debug world)= :: turn on debug wireframes for each
rlm@488 3494 simulated object.
rlm@488 3495
rlm@488 3496 - =(set-gravity world gravity)= :: set the gravity of a running
rlm@488 3497 simulation.
rlm@488 3498
rlm@488 3499 - =(box length width height & {options})= :: create a box in the
rlm@488 3500 simulation. Options is a hash map specifying texture, mass,
rlm@488 3501 etc. Possible options are =:name=, =:color=, =:mass=,
rlm@488 3502 =:friction=, =:texture=, =:material=, =:position=,
rlm@488 3503 =:rotation=, =:shape=, and =:physical?=.
rlm@488 3504
rlm@488 3505 - =(sphere radius & {options})= :: create a sphere in the simulation.
rlm@488 3506 Options are the same as in =box=.
rlm@488 3507
rlm@488 3508 - =(load-blender-model file-name)= :: create a node structure
rlm@488 3509 representing the scene described in a blender file.
rlm@488 3510
rlm@488 3511 - =(light-up-everything world)= :: distribute a standard complement
rlm@488 3512 of lights throughout the simulation. Should be adequate for most
rlm@488 3513 purposes.
rlm@488 3514
rlm@488 3515 - =(node-seq node)= :: return a recursive list of the node's
rlm@488 3516 children.
rlm@488 3517
rlm@488 3518 - =(nodify name children)= :: construct a node given a node-name and
rlm@488 3519 desired children.
rlm@488 3520
rlm@488 3521 - =(add-element world element)= :: add an object to a running world
rlm@488 3522 simulation.
rlm@488 3523
rlm@488 3524 - =(set-accuracy world accuracy)= :: change the accuracy of the
rlm@488 3525 world's physics simulator.
rlm@488 3526
rlm@488 3527 - =(asset-manager)= :: get an /AssetManager/, a jMonkeyEngine
rlm@488 3528 construct that is useful for loading textures and is required
rlm@488 3529 for smooth interaction with jMonkeyEngine library functions.
rlm@488 3530
rlm@488 3531 - =(load-bullet)= :: unpack native libraries and initialize
rlm@488 3532 Bullet physics. This function is required before other world building
rlm@488 3533 functions are called.
rlm@488 3534
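Putting a few of these functions together, here is a minimal,
illustrative world: a ball dropping onto an immovable floor. The
namespace names and the commented-out =.start= call are assumptions
about the surrounding setup, not part of the documented API.

#+BEGIN_SRC clojure
(ns cortex.test.user-guide-example   ; hypothetical namespace
  (:use (cortex world util)))        ; assumed namespace layout

(mega-import-jme3) ; bring jMonkeyEngine classes (Vector3f, ...) into scope
(load-bullet)      ; unpack native libraries before building worlds

(def floor (box 10 0.5 10 :mass 0 :name "floor")) ; mass 0 => immovable
(def ball  (sphere 0.5 :mass 1 :position (Vector3f. 0 5 0)))

(def demo
  (world
   (nodify "root" [floor ball])                      ; root-node
   {"key-j" (fn [game value]
              (if value (println "key j pressed")))} ; key-map
   (fn [world]                                       ; setup-fn
     (enable-debug world)
     (light-up-everything world)
     (set-gravity world (Vector3f. 0 -9.81 0)))
   (fn [world tpf])))                                ; update-fn (no-op)

;; (.start demo) ; may be needed to begin the simulation, depending on setup
#+END_SRC
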
rlm@488 3535 *** Creature Manipulation / Import
rlm@488 3536
rlm@488 3537 - =(body! creature)= :: give the creature a physical body (see the sketch following this list).
rlm@488 3538
rlm@488 3539 - =(vision! creature)= :: give the creature a sense of vision.
rlm@488 3540 Returns a list of functions which will each, when called
rlm@488 3541 during a simulation, return the vision data for the channel of
rlm@488 3542 one of the eyes. The functions are ordered depending on the
rlm@488 3543 alphabetical order of the names of the eye nodes in the
rlm@488 3544 blender file. The data returned by the functions is a vector
rlm@488 3545 containing the eye's /topology/, a vector of coordinates, and
rlm@488 3546 the eye's /data/, a vector of RGB values filtered by the eye's
rlm@488 3547 sensitivity.
rlm@488 3548
rlm@488 3549 - =(hearing! creature)= :: give the creature a sense of hearing.
rlm@488 3550 Returns a list of functions, one for each ear, that when
rlm@488 3551 called will return a frame's worth of hearing data for that
rlm@488 3552 ear. The functions are ordered depending on the alphabetical
rlm@488 3553 order of the names of the ear nodes in the blender file. The
rlm@488 3554 data returned by the functions is an array of PCM encoded wav
rlm@488 3555 data.
rlm@488 3556
rlm@488 3557 - =(touch! creature)= :: give the creature a sense of touch. Returns
rlm@488 3558 a single function that must be called with the /root node/ of
rlm@488 3559 the world, and which will return a vector of /touch-data/,
rlm@488 3560 one entry for each touch-sensitive component, each entry of
rlm@488 3561 which contains a /topology/ that specifies the distribution of
rlm@488 3562 touch sensors, and the /data/, which is a vector of
rlm@488 3563 =[activation, length]= pairs for each touch hair.
rlm@488 3564
rlm@488 3565 - =(proprioception! creature)= :: give the creature the sense of
rlm@488 3566 proprioception. Returns a list of functions, one for each
rlm@488 3567 joint, that when called during a running simulation will
rlm@488 3568 report the =[heading, pitch, roll]= of the joint.
rlm@488 3569
rlm@488 3570 - =(movement! creature)= :: give the creature the power of movement.
rlm@488 3571 Creates a list of functions, one for each muscle, that when
rlm@488 3572 called with an integer, will set the recruitment of that
rlm@488 3573 muscle to that integer, and will report the current power
rlm@488 3574 being exerted by the muscle. Order of muscles is determined by
rlm@488 3575 the alphabetical sort order of the names of the muscle nodes.
rlm@488 3576
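Here is an illustrative sketch (not taken from the thesis code) that
loads a creature, gives it senses, and polls them every frame. The
blender file path is hypothetical, and only the calling conventions
stated above are used.

#+BEGIN_SRC clojure
(def creature (load-blender-model "Models/test/creature.blend")) ; hypothetical path

(body! creature)                              ; solidify the creature

(def touch-fn           (touch! creature))
(def proprioception-fns (proprioception! creature))
(def muscle-fns         (movement! creature))

(world
 (nodify "root" [creature])
 {}                                           ; no key bindings
 (fn [world] (light-up-everything world))     ; setup-fn
 (fn [world tpf]                              ; update-fn
   (let [;; touch: call with the world's root node
         touch-data (touch-fn (.getRootNode world))
         ;; proprioception: one [heading pitch roll] triple per joint
         joint-data (mapv #(%) proprioception-fns)
         ;; muscles: set a small constant recruitment, read back power
         power-data (mapv #(% 20) muscle-fns)]
     ;; a real program would feed this sense data to something useful
     )))
#+END_SRC
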
rlm@488 3577 *** Visualization/Debug
rlm@488 3578
rlm@488 3579 - =(view-vision)= :: create a function that when called with a list
rlm@488 3580 of visual data returned from the functions made by =vision!=,
rlm@488 3581 will display that visual data on the screen.
rlm@488 3582
rlm@488 3583 - =(view-hearing)= :: same as =view-vision= but for hearing.
rlm@488 3584
rlm@488 3585 - =(view-touch)= :: same as =view-vision= but for touch (see the sketch at the end of this section).
rlm@488 3586
rlm@488 3587 - =(view-proprioception)= :: same as =view-vision= but for
rlm@488 3588 proprioception.
rlm@488 3589
rlm@488 3590 - =(view-movement)= :: same as =view-vision= but for
rlm@488 3591 movement data.
rlm@488 3592
rlm@488 3593 - =(view anything)= :: =view= is a polymorphic function that allows
rlm@488 3594 you to inspect almost anything you could reasonably expect to
rlm@488 3595 be able to ``see'' in =CORTEX=.
rlm@488 3596
rlm@488 3597 - =(text anything)= :: =text= is a polymorphic function that allows
rlm@488 3598 you to convert practically anything into a text string.
rlm@488 3599
rlm@488 3600 - =(println-repl anything)= :: print messages to clojure's repl
rlm@488 3601 instead of the simulation's terminal window.
rlm@488 3602
rlm@488 3603 - =(mega-import-jme3)= :: for experimenting at the REPL. This
rlm@488 3604 function will import all jMonkeyEngine3 classes for immediate
rlm@488 3605 use.
rlm@488 3606
rlm@488 3607 - =(display-dialated-time world timer)= :: show the time as it
rlm@488 3608 flows in the simulation on a HUD display.
rlm@488 3609
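As an illustration of the debug views, the following sketch (assuming
the =creature= and =touch-fn= definitions from the creature sketch
above) displays each frame's touch data:

#+BEGIN_SRC clojure
(def show-touch (view-touch)) ; a function that displays touch data

(defn debug-update-fn
  "An update-fn that renders the creature's current touch data."
  [world tpf]
  (show-touch (touch-fn (.getRootNode world))))
#+END_SRC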
rlm@488 3610
rlm@488 3611