#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+LaTeX_CLASS_OPTIONS: [nofloat]
8 * COMMENT templates
9 #+caption:
10 #+caption:
11 #+caption:
12 #+caption:
13 #+name: name
14 #+begin_listing clojure
15 #+BEGIN_SRC clojure
16 #+END_SRC
17 #+end_listing
19 #+caption:
20 #+caption:
21 #+caption:
22 #+name: name
23 #+ATTR_LaTeX: :width 10cm
24 [[./images/aurellem-gray.png]]
26 #+caption:
27 #+caption:
28 #+caption:
29 #+caption:
30 #+name: name
31 #+begin_listing clojure
32 #+BEGIN_SRC clojure
33 #+END_SRC
34 #+end_listing
36 #+caption:
37 #+caption:
38 #+caption:
39 #+name: name
40 #+ATTR_LaTeX: :width 10cm
41 [[./images/aurellem-gray.png]]
44 * Empathy \& Embodiment: problem solving strategies
By the end of this thesis, you will have a novel approach to
representing and recognizing physical actions using embodiment and
empathy. You will also see one way to efficiently implement physical
empathy for embodied creatures. Finally, you will become familiar
with =CORTEX=, a system for designing and simulating creatures with
rich senses, which I have designed as a library that you can use in
your own research. Note that I /do not/ process video directly --- I
start with knowledge of the positions of a creature's body parts and
work from there.
This is the core vision of my thesis: that one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.
66 ** The problem: recognizing actions is hard!
68 Examine the following image. What is happening? As you, and indeed
69 very young children, can easily determine, this is an image of
70 drinking.
72 #+caption: A cat drinking some water. Identifying this action is
73 #+caption: beyond the capabilities of existing computer vision systems.
74 #+ATTR_LaTeX: :width 7cm
75 [[./images/cat-drinking.jpg]]
77 Nevertheless, it is beyond the state of the art for a computer
78 vision program to describe what's happening in this image. Part of
79 the problem is that many computer vision systems focus on
80 pixel-level details or comparisons to example images (such as
81 \cite{volume-action-recognition}), but the 3D world is so variable
82 that it is hard to describe the world in terms of possible images.
In fact, the contents of a scene may have much less to do with
pixel probabilities than with recognizing various affordances:
things you can move, objects you can grasp, spaces that can be
filled. For example, what processes might enable you to see the
chair in figure \ref{hidden-chair}?
90 #+caption: The chair in this image is quite obvious to humans, but
91 #+caption: it can't be found by any modern computer vision program.
92 #+name: hidden-chair
93 #+ATTR_LaTeX: :width 10cm
94 [[./images/fat-person-sitting-at-desk.jpg]]
Finally, how is it that you can easily tell the difference in how
the girl's /muscles/ are working between the two images in figure
\ref{girl}?
99 #+caption: The mysterious ``common sense'' appears here as you are able
100 #+caption: to discern the difference in how the girl's arm muscles
101 #+caption: are activated between the two images.
102 #+name: girl
103 #+ATTR_LaTeX: :width 7cm
104 [[./images/wall-push.png]]
106 Each of these examples tells us something about what might be going
107 on in our minds as we easily solve these recognition problems:
109 - The hidden chair shows us that we are strongly triggered by cues
110 relating to the position of human bodies, and that we can
111 determine the overall physical configuration of a human body even
112 if much of that body is occluded.
114 - The picture of the girl pushing against the wall tells us that we
115 have common sense knowledge about the kinetics of our own bodies.
116 We know well how our muscles would have to work to maintain us in
117 most positions, and we can easily project this self-knowledge to
118 imagined positions triggered by images of the human body.
120 - The cat tells us that imagination of some kind plays an important
121 role in understanding actions. The question is: Can we be more
122 precise about what sort of imagination is required to understand
123 these actions?
125 ** A step forward: the sensorimotor-centered approach
127 In this thesis, I explore the idea that our knowledge of our own
128 bodies, combined with our own rich senses, enables us to recognize
129 the actions of others.
131 For example, I think humans are able to label the cat video as
132 ``drinking'' because they imagine /themselves/ as the cat, and
133 imagine putting their face up against a stream of water and
134 sticking out their tongue. In that imagined world, they can feel
135 the cool water hitting their tongue, and feel the water entering
136 their body, and are able to recognize that /feeling/ as drinking.
137 So, the label of the action is not really in the pixels of the
138 image, but is found clearly in a simulation / recollection inspired
139 by those pixels. An imaginative system, having been trained on
140 drinking and non-drinking examples and learning that the most
141 important component of drinking is the feeling of water sliding
142 down one's throat, would analyze a video of a cat drinking in the
143 following manner:
145 1. Create a physical model of the video by putting a ``fuzzy''
146 model of its own body in place of the cat. Possibly also create
147 a simulation of the stream of water.
149 2. ``Play out'' this simulated scene and generate imagined sensory
150 experience. This will include relevant muscle contractions, a
151 close up view of the stream from the cat's perspective, and most
152 importantly, the imagined feeling of water entering the mouth.
153 The imagined sensory experience can come from a simulation of
154 the event, but can also be pattern-matched from previous,
155 similar embodied experience.
157 3. The action is now easily identified as drinking by the sense of
158 taste alone. The other senses (such as the tongue moving in and
159 out) help to give plausibility to the simulated action. Note that
160 the sense of vision, while critical in creating the simulation,
161 is not critical for identifying the action from the simulation.
163 For the chair examples, the process is even easier:
165 1. Align a model of your body to the person in the image.
167 2. Generate proprioceptive sensory data from this alignment.
169 3. Use the imagined proprioceptive data as a key to lookup related
170 sensory experience associated with that particular proprioceptive
171 feeling.
173 4. Retrieve the feeling of your bottom resting on a surface, your
174 knees bent, and your leg muscles relaxed.
176 5. This sensory information is consistent with your =sitting?=
177 sensory predicate, so you (and the entity in the image) must be
178 sitting.
180 6. There must be a chair-like object since you are sitting.
182 Empathy offers yet another alternative to the age-old AI
183 representation question: ``What is a chair?'' --- A chair is the
184 feeling of sitting!
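
To make this concrete, here is a hedged sketch of what such a
=sitting?= predicate might look like in the body-centered language
developed later in this thesis. The helper predicates (=bent?=,
=relaxed?=, =pressure?=) and the exact shape of the experience map
are hypothetical --- they stand in for whatever proprioceptive,
muscle, and touch encodings a humanoid model would actually use, in
the same spirit as the worm predicates shown below.

#+begin_src clojure
;; Hypothetical sketch -- the helpers bent?, relaxed?, and pressure?
;; and the keys of the experience map are assumptions, not part of
;; the worm implementation described later in this thesis.
(defn sitting?
  "Sitting is the feeling of bent knees, relaxed leg muscles, and
  pressure on the bottom."
  [experiences]
  (let [now (peek experiences)]
    (and (bent?     (:knees      (:proprioception now)))
         (relaxed?  (:quadriceps (:muscle now)))
         (pressure? (:bottom     (:touch now))))))
#+end_src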
186 One powerful advantage of empathic problem solving is that it
187 factors the action recognition problem into two easier problems. To
188 use empathy, you need an /aligner/, which takes the video and a
189 model of your body, and aligns the model with the video. Then, you
190 need a /recognizer/, which uses the aligned model to interpret the
191 action. The power in this method lies in the fact that you describe
192 all actions from a body-centered viewpoint. You are less tied to
193 the particulars of any visual representation of the actions. If you
194 teach the system what ``running'' is, and you have a good enough
195 aligner, the system will from then on be able to recognize running
196 from any point of view -- even strange points of view like above or
197 underneath the runner. This is in contrast to action recognition
198 schemes that try to identify actions using a non-embodied approach.
199 If these systems learn about running as viewed from the side, they
200 will not automatically be able to recognize running from any other
201 viewpoint.
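
The factoring itself can be stated in a few lines. The sketch below
is only an illustration of the decomposition, assuming an =aligner=
and a =recognizer= are supplied from elsewhere; neither is
implemented here.

#+begin_src clojure
;; Illustrative sketch of the aligner/recognizer factoring.
;; `aligner` and `recognizer` are assumed to exist; they are the two
;; sub-problems described in the text.
(defn recognize-action
  "Fit the body model to the video, then interpret the aligned model."
  [aligner recognizer body-model video]
  (recognizer (aligner body-model video)))
#+end_src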
Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers
a massive boost in descriptive capability. Consider how difficult
it would be to compose a set of HOG (Histogram of Oriented
Gradients) filters to describe the action of a simple worm-creature
``curling'' so that its head touches its tail, and then behold the
simplicity of describing this action in a language designed for the
task (listing \ref{grand-circle-intro}):
212 #+caption: Body-centered actions are best expressed in a body-centered
213 #+caption: language. This code detects when the worm has curled into a
214 #+caption: full circle. Imagine how you would replicate this functionality
215 #+caption: using low-level pixel features such as HOG filters!
216 #+name: grand-circle-intro
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip head-touch))))))
#+end_src
#+end_listing
231 ** =EMPATH= recognizes actions using empathy
233 Exploring these ideas further demands a concrete implementation, so
234 first, I built a system for constructing virtual creatures with
235 physiologically plausible sensorimotor systems and detailed
236 environments. The result is =CORTEX=, which is described in section
237 \ref{sec-2}.
Next, I wrote routines which enabled a simple worm-like creature to
infer the actions of a second worm-like creature, using only its
own prior sensorimotor experiences and knowledge of the second
worm's joint positions. This program, =EMPATH=, is described in
section \ref{sec-3}. Its main components are:
245 - Embodied Action Definitions :: Many otherwise complicated actions
246 are easily described in the language of a full suite of
247 body-centered, rich senses and experiences. For example,
248 drinking is the feeling of water sliding down your throat, and
249 cooling your insides. It's often accompanied by bringing your
250 hand close to your face, or bringing your face close to water.
251 Sitting down is the feeling of bending your knees, activating
252 your quadriceps, then feeling a surface with your bottom and
253 relaxing your legs. These body-centered action descriptions
254 can be either learned or hard coded.
256 - Guided Play :: The creature moves around and experiences the
257 world through its unique perspective. As the creature moves,
258 it gathers experiences that satisfy the embodied action
259 definitions.
261 - Posture imitation :: When trying to interpret a video or image,
262 the creature takes a model of itself and aligns it with
263 whatever it sees. This alignment might even cross species, as
264 when humans try to align themselves with things like ponies,
265 dogs, or other humans with a different body type.
267 - Empathy :: The alignment triggers associations with
268 sensory data from prior experiences. For example, the
269 alignment itself easily maps to proprioceptive data. Any
270 sounds or obvious skin contact in the video can to a lesser
271 extent trigger previous experience keyed to hearing or touch.
272 Segments of previous experiences gained from play are stitched
273 together to form a coherent and complete sensory portrait of
274 the scene.
276 - Recognition :: With the scene described in terms of remembered
277 first person sensory events, the creature can now run its
278 action-definition programs (such as the one in listing
279 \ref{grand-circle-intro}) on this synthesized sensory data,
280 just as it would if it were actually experiencing the scene
281 first-hand. If previous experience has been accurately
282 retrieved, and if it is analogous enough to the scene, then
283 the creature will correctly identify the action in the scene.
My program, =EMPATH=, uses this empathic problem-solving technique
to interpret the actions of a simple, worm-like creature.
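
The final recognition step can be pictured as running every known
action predicate over the synthesized sensory experience. The sketch
below is a simplified illustration of that step, assuming a map from
action names to predicates such as =grand-circle?=; it is not the
actual =EMPATH= code presented later.

#+begin_src clojure
;; Simplified sketch of the recognition step. `action-predicates`
;; maps names to predicates like grand-circle?; the synthesized
;; experience is assumed to come from the alignment/empathy steps.
(defn recognize
  "Return the set of action names whose predicates hold for the
  synthesized sensory experience."
  [action-predicates synthesized-experience]
  (set (for [[name action?] action-predicates
             :when (action? synthesized-experience)]
         name)))
#+end_src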
288 #+caption: The worm performs many actions during free play such as
289 #+caption: curling, wiggling, and resting.
290 #+name: worm-intro
291 #+ATTR_LaTeX: :width 15cm
292 [[./images/worm-intro-white.png]]
294 #+caption: =EMPATH= recognized and classified each of these
295 #+caption: poses by inferring the complete sensory experience
296 #+caption: from proprioceptive data.
297 #+name: worm-recognition-intro
298 #+ATTR_LaTeX: :width 15cm
299 [[./images/worm-poses.png]]
301 *** Main Results
- After one-shot supervised training, =EMPATH= was able to
recognize a wide variety of static poses and dynamic actions ---
ranging from curling in a circle to wiggling with a particular
frequency --- with 95\% accuracy.
- These results were completely independent of viewing angle
because the underlying body-centered language is fundamentally
viewpoint-independent; once an action is learned, it can be
recognized equally well from any viewing angle.
313 - =EMPATH= is surprisingly short; the sensorimotor-centered
314 language provided by =CORTEX= resulted in extremely economical
315 recognition routines --- about 500 lines in all --- suggesting
316 that such representations are very powerful, and often
317 indispensable for the types of recognition tasks considered here.
319 - Although for expediency's sake, I relied on direct knowledge of
320 joint positions in this proof of concept, it would be
321 straightforward to extend =EMPATH= so that it (more
322 realistically) infers joint positions from its visual data.
324 ** =EMPATH= is built on =CORTEX=, a creature builder.
I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this thesis. =CORTEX= addresses a need among
AI researchers at CSAIL and beyond: people often invent wonderful
ideas that are best expressed in the language of creatures and
senses, but in order to explore those ideas they must first build a
platform in which they can create simulated creatures with rich
senses! There are many ideas that would be simple to execute (such
as =EMPATH= or Larson's self-organizing maps
(\cite{larson-symbols})), but attached to them is the multi-month
effort to build a good creature simulator. Often, that initial
investment of time proves to be too much, and the project must make
do with a lesser environment or be abandoned entirely.
342 =CORTEX= is well suited as an environment for embodied AI research
343 for three reasons:
345 - You can design new creatures using Blender (\cite{blender}), a
346 popular 3D modeling program. Each sense can be specified using
347 special blender nodes with biologically inspired parameters. You
348 need not write any code to create a creature, and can use a wide
349 library of pre-existing blender models as a base for your own
350 creatures.
352 - =CORTEX= implements a wide variety of senses: touch,
353 proprioception, vision, hearing, and muscle tension. Complicated
354 senses like touch and vision involve multiple sensory elements
355 embedded in a 2D surface. You have complete control over the
356 distribution of these sensor elements through the use of simple
357 png image files. =CORTEX= implements more comprehensive hearing
358 than any other creature simulation system available.
360 - =CORTEX= supports any number of creatures and any number of
361 senses. Time in =CORTEX= dilates so that the simulated creatures
362 always perceive a perfectly smooth flow of time, regardless of
363 the actual computational load.
365 =CORTEX= is built on top of =jMonkeyEngine3=
366 (\cite{jmonkeyengine}), which is a video game engine designed to
367 create cross-platform 3D desktop games. =CORTEX= is mainly written
368 in clojure, a dialect of =LISP= that runs on the java virtual
369 machine (JVM). The API for creating and simulating creatures and
370 senses is entirely expressed in clojure, though many senses are
371 implemented at the layer of jMonkeyEngine or below. For example,
372 for the sense of hearing I use a layer of clojure code on top of a
373 layer of java JNI bindings that drive a layer of =C++= code which
374 implements a modified version of =OpenAL= to support multiple
375 listeners. =CORTEX= is the only simulation environment that I know
376 of that can support multiple entities that can each hear the world
377 from their own perspective. Other senses also require a small layer
378 of Java code. =CORTEX= also uses =bullet=, a physics simulator
379 written in =C=.
381 #+caption: Here is the worm from figure \ref{worm-intro} modeled
382 #+caption: in Blender, a free 3D-modeling program. Senses and
383 #+caption: joints are described using special nodes in Blender.
384 #+name: worm-recognition-intro-2
385 #+ATTR_LaTeX: :width 12cm
386 [[./images/blender-worm.png]]
388 Here are some things I anticipate that =CORTEX= might be used for:
390 - exploring new ideas about sensory integration
391 - distributed communication among swarm creatures
392 - self-learning using free exploration,
393 - evolutionary algorithms involving creature construction
394 - exploration of exotic senses and effectors that are not possible
395 in the real world (such as telekinesis or a semantic sense)
396 - imagination using subworlds
398 During one test with =CORTEX=, I created 3,000 creatures each with
399 their own independent senses and ran them all at only 1/80 real
400 time. In another test, I created a detailed model of my own hand,
401 equipped with a realistic distribution of touch (more sensitive at
402 the fingertips), as well as eyes and ears, and it ran at around 1/4
403 real time.
405 #+BEGIN_LaTeX
406 \begin{sidewaysfigure}
407 \includegraphics[width=9.5in]{images/full-hand.png}
408 \caption{
409 I modeled my own right hand in Blender and rigged it with all the
410 senses that {\tt CORTEX} supports. My simulated hand has a
411 biologically inspired distribution of touch sensors. The senses are
412 displayed on the right, and the simulation is displayed on the
413 left. Notice that my hand is curling its fingers, that it can see
414 its own finger from the eye in its palm, and that it can feel its
415 own thumb touching its palm.}
416 \end{sidewaysfigure}
417 #+END_LaTeX
419 * Designing =CORTEX=
421 In this section, I outline the design decisions that went into
422 making =CORTEX=, along with some details about its implementation.
423 (A practical guide to getting started with =CORTEX=, which skips
424 over the history and implementation details presented here, is
425 provided in an appendix at the end of this thesis.)
427 Throughout this project, I intended for =CORTEX= to be flexible and
428 extensible enough to be useful for other researchers who want to
429 test ideas of their own. To this end, wherever I have had to make
430 architectural choices about =CORTEX=, I have chosen to give as much
431 freedom to the user as possible, so that =CORTEX= may be used for
432 things I have not foreseen.
434 ** Building in simulation versus reality
435 The most important architectural decision of all is the choice to
436 use a computer-simulated environment in the first place! The world
437 is a vast and rich place, and for now simulations are a very poor
438 reflection of its complexity. It may be that there is a significant
439 qualitative difference between dealing with senses in the real
440 world and dealing with pale facsimiles of them in a simulation
441 (\cite{brooks-representation}). What are the advantages and
442 disadvantages of a simulation vs. reality?
444 *** Simulation
446 The advantages of virtual reality are that when everything is a
447 simulation, experiments in that simulation are absolutely
448 reproducible. It's also easier to change the creature and
449 environment to explore new situations and different sensory
450 combinations.
452 If the world is to be simulated on a computer, then not only do
453 you have to worry about whether the creature's senses are rich
454 enough to learn from the world, but whether the world itself is
455 rendered with enough detail and realism to give enough working
456 material to the creature's senses. To name just a few
457 difficulties facing modern physics simulators: destructibility of
458 the environment, simulation of water/other fluids, large areas,
459 nonrigid bodies, lots of objects, smoke. I don't know of any
460 computer simulation that would allow a creature to take a rock
461 and grind it into fine dust, then use that dust to make a clay
462 sculpture, at least not without spending years calculating the
463 interactions of every single small grain of dust. Maybe a
464 simulated world with today's limitations doesn't provide enough
465 richness for real intelligence to evolve.
467 *** Reality
469 The other approach for playing with senses is to hook your
470 software up to real cameras, microphones, robots, etc., and let it
471 loose in the real world. This has the advantage of eliminating
472 concerns about simulating the world at the expense of increasing
473 the complexity of implementing the senses. Instead of just
474 grabbing the current rendered frame for processing, you have to
475 use an actual camera with real lenses and interact with photons to
476 get an image. It is much harder to change the creature, which is
477 now partly a physical robot of some sort, since doing so involves
478 changing things around in the real world instead of modifying
479 lines of code. While the real world is very rich and definitely
480 provides enough stimulation for intelligence to develop (as
481 evidenced by our own existence), it is also uncontrollable in the
482 sense that a particular situation cannot be recreated perfectly or
483 saved for later use. It is harder to conduct Science because it is
484 harder to repeat an experiment. The worst thing about using the
485 real world instead of a simulation is the matter of time. Instead
486 of simulated time you get the constant and unstoppable flow of
487 real time. This severely limits the sorts of software you can use
488 to program an AI, because all sense inputs must be handled in real
489 time. Complicated ideas may have to be implemented in hardware or
490 may simply be impossible given the current speed of our
491 processors. Contrast this with a simulation, in which the flow of
492 time in the simulated world can be slowed down to accommodate the
493 limitations of the creature's programming. In terms of cost, doing
494 everything in software is far cheaper than building custom
495 real-time hardware. All you need is a laptop and some patience.
497 ** Simulated time enables rapid prototyping \& simple programs
499 I envision =CORTEX= being used to support rapid prototyping and
500 iteration of ideas. Even if I could put together a well constructed
501 kit for creating robots, it would still not be enough because of
502 the scourge of real-time processing. Anyone who wants to test their
503 ideas in the real world must always worry about getting their
504 algorithms to run fast enough to process information in real time.
505 The need for real time processing only increases if multiple senses
506 are involved. In the extreme case, even simple algorithms will have
507 to be accelerated by ASIC chips or FPGAs, turning what would
508 otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the simulation
in proportion to the amount of processing required for each frame.
From the perspective of the creatures inside the simulation, time
always appears to flow at a constant rate, regardless of how
complicated the environment becomes or how many creatures are in
the simulation. The cost is that =CORTEX= can sometimes run slower
than real time. Time dilation works both ways, however ---
simulations of very simple creatures in =CORTEX= generally run at
40x real-time on my machine!
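
The idea behind time dilation is simple enough to state in a few
lines. The sketch below is a minimal illustration, not =CORTEX='s
actual timer: every simulated frame advances the physics by a fixed
timestep, however long the sensory and AI processing for that frame
takes in wall-clock time.

#+begin_src clojure
;; Minimal sketch of time dilation (not CORTEX's actual timer).
;; The creature always experiences 1/60 s of simulated time per
;; frame, no matter how long the frame takes to compute.
(def simulated-timestep (/ 1.0 60.0))

(defn step-frame!
  "Advance the physics by one fixed timestep, then run the (possibly
  slow) sense and AI processing for that frame."
  [step-physics! process-senses!]
  (step-physics! simulated-timestep)
  (process-senses!))
#+end_src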
520 ** All sense organs are two-dimensional surfaces
522 If =CORTEX= is to support a wide variety of senses, it would help
523 to have a better understanding of what a sense actually is! While
524 vision, touch, and hearing all seem like they are quite different
525 things, I was surprised to learn during the course of this thesis
526 that they (and all physical senses) can be expressed as exactly the
527 same mathematical object!
529 Human beings are three-dimensional objects, and the nerves that
530 transmit data from our various sense organs to our brain are
531 essentially one-dimensional. This leaves up to two dimensions in
532 which our sensory information may flow. For example, imagine your
533 skin: it is a two-dimensional surface around a three-dimensional
534 object (your body). It has discrete touch sensors embedded at
535 various points, and the density of these sensors corresponds to the
536 sensitivity of that region of skin. Each touch sensor connects to a
537 nerve, all of which eventually are bundled together as they travel
538 up the spinal cord to the brain. Intersect the spinal nerves with a
539 guillotining plane and you will see all of the sensory data of the
540 skin revealed in a roughly circular two-dimensional image which is
541 the cross section of the spinal cord. Points on this image that are
542 close together in this circle represent touch sensors that are
543 /probably/ close together on the skin, although there is of course
544 some cutting and rearrangement that has to be done to transfer the
545 complicated surface of the skin onto a two dimensional image.
547 Most human senses consist of many discrete sensors of various
548 properties distributed along a surface at various densities. For
549 skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
550 disks, and Ruffini's endings (\cite{textbook901}), which detect
551 pressure and vibration of various intensities. For ears, it is the
552 stereocilia distributed along the basilar membrane inside the
553 cochlea; each one is sensitive to a slightly different frequency of
554 sound. For eyes, it is rods and cones distributed along the surface
555 of the retina. In each case, we can describe the sense with a
556 surface and a distribution of sensors along that surface.
558 In fact, almost every human sense can be effectively described in
559 terms of a surface containing embedded sensors. If the sense had
560 any more dimensions, then there wouldn't be enough room in the
561 spinal cord to transmit the information!
563 Therefore, =CORTEX= must support the ability to create objects and
564 then be able to ``paint'' points along their surfaces to describe
565 each sense.
Fortunately this idea is already a well-known computer graphics
technique called /UV-mapping/. In UV-mapping, the three-dimensional
surface of a model is cut and smooshed until it fits on a
two-dimensional image. You paint whatever you want on that image,
and when the three-dimensional shape is rendered in a game the
smooshing and cutting is reversed and the image appears on the
three-dimensional object.
To make a sense, interpret the UV-image as describing the
distribution of that sense's sensors. To get different types of
sensors, you can either use a different color for each type of
sensor, or use multiple UV-maps, each labeled with that sensor
type. I generally use a white pixel to mean the presence of a
sensor and a black pixel to mean the absence of a sensor, and use
one UV-map for each sensor-type within a given sense.
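
As a rough illustration of how such a sensor map can be read, the
sketch below collects the coordinates of every white pixel in a
UV-image using the standard Java =ImageIO= library. The actual
=CORTEX= helper (=white-coordinates=, used later) may differ in
detail; this is only meant to show that the representation is
nothing more than an ordinary image.

#+begin_src clojure
;; Sketch: read a UV-image and return the [x y] coordinates of every
;; white pixel, i.e. every sensor. Uses only javax.imageio; the real
;; CORTEX helper may be implemented differently.
(import '(javax.imageio ImageIO) '(java.io File))

(defn sensor-coordinates
  "Return the [x y] positions of all white pixels in the image file."
  [image-path]
  (let [image (ImageIO/read (File. image-path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          :when (= 0x00FFFFFF
                   (bit-and 0x00FFFFFF (.getRGB image x y)))]
      [x y])))
#+end_src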
#+CAPTION: The UV-map for an elongated icosphere. The white
#+caption: dots each represent a touch sensor. They are dense
#+caption: in the regions that describe the tip of the finger,
#+caption: and less dense along the dorsal side of the finger
#+caption: opposite the tip.
588 #+name: finger-UV
589 #+ATTR_latex: :width 10cm
590 [[./images/finger-UV.png]]
592 #+caption: Ventral side of the UV-mapped finger. Notice the
593 #+caption: density of touch sensors at the tip.
594 #+name: finger-side-view
595 #+ATTR_LaTeX: :width 10cm
596 [[./images/finger-1.png]]
598 ** Video game engines provide ready-made physics and shading
600 I did not need to write my own physics simulation code or shader to
601 build =CORTEX=. Doing so would lead to a system that is impossible
602 for anyone but myself to use anyway. Instead, I use a video game
603 engine as a base and modify it to accommodate the additional needs
604 of =CORTEX=. Video game engines are an ideal starting point to
605 build =CORTEX=, because they are not far from being creature
606 building systems themselves.
608 First off, general purpose video game engines come with a physics
609 engine and lighting / sound system. The physics system provides
610 tools that can be co-opted to serve as touch, proprioception, and
611 muscles. Since some games support split screen views, a good video
612 game engine will allow you to efficiently create multiple cameras
613 in the simulated world that can be used as eyes. Video game systems
614 offer integrated asset management for things like textures and
615 creature models, providing an avenue for defining creatures. They
616 also understand UV-mapping, since this technique is used to apply a
617 texture to a model. Finally, because video game engines support a
618 large number of developers, as long as =CORTEX= doesn't stray too
619 far from the base system, other researchers can turn to this
620 community for help when doing their research.
622 ** =CORTEX= is based on jMonkeyEngine3
624 While preparing to build =CORTEX= I studied several video game
625 engines to see which would best serve as a base. The top contenders
626 were:
- [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by id Software
in 1997. All the source code was released by id Software into
the public domain several years ago, and as a result it has
631 been ported to many different languages. This engine was
632 famous for its advanced use of realistic shading and it had
633 decent and fast physics simulation. The main advantage of the
634 Quake II engine is its simplicity, but I ultimately rejected
635 it because the engine is too tied to the concept of a
636 first-person shooter game. One of the problems I had was that
637 there does not seem to be any easy way to attach multiple
638 cameras to a single character. There are also several physics
639 clipping issues that are corrected in a way that only applies
640 to the main character and do not apply to arbitrary objects.
642 - [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
643 and Quake I engines and is used by Valve in the Half-Life
644 series of games. The physics simulation in the Source Engine
645 is quite accurate and probably the best out of all the engines
646 I investigated. There is also an extensive community actively
647 working with the engine. However, applications that use the
648 Source Engine must be written in C++, the code is not open, it
649 only runs on Windows, and the tools that come with the SDK to
650 handle models and textures are complicated and awkward to use.
- [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
games in Java. It uses OpenGL to render to the screen and uses
scene graphs to avoid drawing things that do not appear on the
screen. It has an active community and several games in the
pipeline. The engine was not built to serve any particular
game but is instead meant to be used for any 3D game.
659 I chose jMonkeyEngine3 because it had the most features out of all
660 the free projects I looked at, and because I could then write my
661 code in clojure, an implementation of =LISP= that runs on the JVM.
663 ** =CORTEX= uses Blender to create creature models
665 For the simple worm-like creatures I will use later on in this
666 thesis, I could define a simple API in =CORTEX= that would allow
667 one to create boxes, spheres, etc., and leave that API as the sole
668 way to create creatures. However, for =CORTEX= to truly be useful
669 for other projects, it needs a way to construct complicated
670 creatures. If possible, it would be nice to leverage work that has
671 already been done by the community of 3D modelers, or at least
672 enable people who are talented at modeling but not programming to
673 design =CORTEX= creatures.
675 Therefore I use Blender, a free 3D modeling program, as the main
676 way to create creatures in =CORTEX=. However, the creatures modeled
677 in Blender must also be simple to simulate in jMonkeyEngine3's game
678 engine, and must also be easy to rig with =CORTEX='s senses. I
679 accomplish this with extensive use of Blender's ``empty nodes.''
681 Empty nodes have no mass, physical presence, or appearance, but
682 they can hold metadata and have names. I use a tree structure of
683 empty nodes to specify senses in the following manner:
685 - Create a single top-level empty node whose name is the name of
686 the sense.
687 - Add empty nodes which each contain meta-data relevant to the
688 sense, including a UV-map describing the number/distribution of
689 sensors if applicable.
690 - Make each empty-node the child of the top-level node.
692 #+caption: An example of annotating a creature model with empty
693 #+caption: nodes to describe the layout of senses. There are
694 #+caption: multiple empty nodes which each describe the position
695 #+caption: of muscles, ears, eyes, or joints.
696 #+name: sense-nodes
697 #+ATTR_LaTeX: :width 10cm
698 [[./images/empty-sense-nodes.png]]
700 ** Bodies are composed of segments connected by joints
702 Blender is a general purpose animation tool, which has been used in
703 the past to create high quality movies such as Sintel
704 (\cite{blender}). Though Blender can model and render even
705 complicated things like water, it is crucial to keep models that
are meant to be simulated as creatures simple. =Bullet=, which
=CORTEX= uses through jMonkeyEngine3, is a rigid-body physics
system. This offers a compromise between the expressiveness of a
709 game level and the speed at which it can be simulated, and it means
710 that creatures should be naturally expressed as rigid components
711 held together by joint constraints.
713 But humans are more like a squishy bag wrapped around some hard
714 bones which define the overall shape. When we move, our skin bends
715 and stretches to accommodate the new positions of our bones.
717 One way to make bodies composed of rigid pieces connected by joints
/seem/ more human-like is to use an /armature/ (or /rigging/)
system, which defines an overall ``body mesh'' and specifies how the
mesh deforms as a function of the position of each ``bone,'' which
721 is a standard rigid body. This technique is used extensively to
722 model humans and create realistic animations. It is not a good
723 technique for physical simulation because it is a lie -- the skin
724 is not a physical part of the simulation and does not interact with
any objects in the world or itself. Objects will pass right through
the skin until they come in contact with the underlying bone, which
727 is a physical object. Without simulating the skin, the sense of
728 touch has little meaning, and the creature's own vision will lie to
729 it about the true extent of its body. Simulating the skin as a
730 physical object requires some way to continuously update the
731 physical model of the skin along with the movement of the bones,
732 which is unacceptably slow compared to rigid body simulation.
734 Therefore, instead of using the human-like ``bony meatbag''
735 approach, I decided to base my body plans on multiple solid objects
736 that are connected by joints, inspired by the robot =EVE= from the
737 movie WALL-E.
739 #+caption: =EVE= from the movie WALL-E. This body plan turns
740 #+caption: out to be much better suited to my purposes than a more
741 #+caption: human-like one.
742 #+ATTR_LaTeX: :width 10cm
743 [[./images/Eve.jpg]]
745 =EVE='s body is composed of several rigid components that are held
746 together by invisible joint constraints. This is what I mean by
747 /eve-like/. The main reason that I use eve-like bodies is for
748 simulation efficiency, and so that there will be correspondence
749 between the AI's senses and the physical presence of its body. Each
750 individual section is simulated by a separate rigid body that
751 corresponds exactly with its visual representation and does not
752 change. Sections are connected by invisible joints that are well
753 supported in jMonkeyEngine3. Bullet, the physics backend for
754 jMonkeyEngine3, can efficiently simulate hundreds of rigid bodies
755 connected by joints. Just because sections are rigid does not mean
756 they have to stay as one piece forever; they can be dynamically
757 replaced with multiple sections to simulate splitting in two. This
758 could be used to simulate retractable claws or =EVE='s hands, which
759 are able to coalesce into one object in the movie.
761 *** Solidifying/Connecting a body
763 =CORTEX= creates a creature in two steps: first, it traverses the
764 nodes in the blender file and creates physical representations for
765 any of them that have mass defined in their blender meta-data.
767 #+caption: Program for iterating through the nodes in a blender file
768 #+caption: and generating physical jMonkeyEngine3 objects with mass
769 #+caption: and a matching physics shape.
770 #+name: physical
#+begin_listing clojure
#+begin_src clojure
(defn physical!
  "Iterate through the nodes in creature and make them real physical
  objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
#+end_src
#+end_listing
792 The next step to making a proper body is to connect those pieces
793 together with joints. jMonkeyEngine has a large array of joints
794 available via =bullet=, such as Point2Point, Cone, Hinge, and a
795 generic Six Degree of Freedom joint, with or without spring
796 restitution.
798 Joints are treated a lot like proper senses, in that there is a
799 top-level empty node named ``joints'' whose children each
800 represent a joint.
802 #+caption: View of the hand model in Blender showing the main ``joints''
803 #+caption: node (highlighted in yellow) and its children which each
804 #+caption: represent a joint in the hand. Each joint node has metadata
805 #+caption: specifying what sort of joint it is.
806 #+name: blender-hand
807 #+ATTR_LaTeX: :width 10cm
808 [[./images/hand-screenshot1.png]]
811 =CORTEX='s procedure for binding the creature together with joints
812 is as follows:
814 - Find the children of the ``joints'' node.
815 - Determine the two spatials the joint is meant to connect.
816 - Create the joint based on the meta-data of the empty node.
818 The higher order function =sense-nodes= from =cortex.sense=
819 simplifies finding the joints based on their parent ``joints''
820 node.
#+caption: Retrieving the children empty nodes from a single
#+caption: named empty node is a common pattern in =CORTEX=;
#+caption: further instances of this technique for the senses
#+caption: will be omitted.
826 #+name: get-empty-nodes
#+begin_listing clojure
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
  children are considered markers for an instance of that sense. This
  function generates functions to find those children, given the name
  of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
#+end_src
#+end_listing
To find a joint's targets, =CORTEX= creates a small cube, centered
around the empty-node, and grows the cube exponentially until it
intersects two physical objects. The objects are ordered according
to the joint's rotation, with the first one being the object that
has more negative coordinates in the joint's reference frame.
Because the targets must be physical objects, the empty-node itself
escapes detection, and =joint-targets= must be called /after/
=physical!= is called.
#+caption: Program to find the targets of a joint node by
#+caption: exponential growth of a search cube.
858 #+name: joint-targets
#+begin_listing clojure
#+begin_src clojure
(defn joint-targets
  "Return the two closest objects to the joint object, ordered
  from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
#+end_src
#+end_listing
888 Once =CORTEX= finds all joints and targets, it creates them using
889 a dispatch on the metadata of each joint node.
891 #+caption: Program to dispatch on blender metadata and create joints
892 #+caption: suitable for physical simulation.
893 #+name: joint-dispatch
#+begin_listing clojure
#+begin_src clojure
(defmulti joint-dispatch
  "Translate blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist    (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
#+end_src
#+end_listing
928 All that is left for joints is to combine the above pieces into
929 something that can operate on the collection of nodes that a
930 blender file represents.
932 #+caption: Program to completely create a joint given information
933 #+caption: from a blender file.
934 #+name: connect
#+begin_listing clojure
#+begin_src clojure
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
  'joint. The type of joint is determined by the metadata on 'joint.

  Here are some examples:
  {:type :point}
  {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
  (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

  {:type :cone :limit-xz 0
               :limit-xy 0
               :twist 0}   (use XZY rotation mode in blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
#+end_src
#+end_listing
969 In general, whenever =CORTEX= exposes a sense (or in this case
970 physicality), it provides a function of the type =sense!=, which
971 takes in a collection of nodes and augments it to support that
972 sense. The function returns any controls necessary to use that
973 sense. In this case =body!= creates a physical body and returns no
974 control functions.
976 #+caption: Program to give joints to a creature.
977 #+name: joints
#+begin_listing clojure
#+begin_src clojure
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
  joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
  particulars of the joints and the masses of each body part are
  determined in blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
#+end_src
#+end_listing
1000 All of the code you have just seen amounts to only 130 lines, yet
1001 because it builds on top of Blender and jMonkeyEngine3, those few
1002 lines pack quite a punch!
1004 The hand from figure \ref{blender-hand}, which was modeled after
1005 my own right hand, can now be given joints and simulated as a
1006 creature.
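
A sketch of the intended usage, assuming a loader such as
=load-blender-model= that reads a =.blend= file into a jMonkeyEngine
=Node= (the loader name and model path are illustrative):

#+begin_src clojure
;; Illustrative usage; `load-blender-model` and the model path are
;; assumptions, not necessarily the exact names used in CORTEX.
(def hand (load-blender-model "Models/test-creature/hand.blend"))

;; Give the hand mass, collision shapes, and joints in one call.
(body! hand)
#+end_src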
1008 #+caption: With the ability to create physical creatures from blender,
1009 #+caption: =CORTEX= gets one step closer to becoming a full creature
1010 #+caption: simulation environment.
1011 #+name: physical-hand
1012 #+ATTR_LaTeX: :width 15cm
1013 [[./images/physical-hand.png]]
1015 ** Sight reuses standard video game components...
1017 Vision is one of the most important senses for humans, so I need to
1018 build a simulated sense of vision for my AI. I will do this with
1019 simulated eyes. Each eye can be independently moved and should see
1020 its own version of the world depending on where it is.
1022 Making these simulated eyes a reality is simple because
1023 jMonkeyEngine already contains extensive support for multiple views
1024 of the same 3D simulated world. The reason jMonkeyEngine has this
1025 support is because the support is necessary to create games with
1026 split-screen views. Multiple views are also used to create
1027 efficient pseudo-reflections by rendering the scene from a certain
1028 perspective and then projecting it back onto a surface in the 3D
1029 world.
1031 #+caption: jMonkeyEngine supports multiple views to enable
1032 #+caption: split-screen games, like GoldenEye, which was one of
1033 #+caption: the first games to use split-screen views.
1034 #+name: goldeneye
1035 #+ATTR_LaTeX: :width 10cm
1036 [[./images/goldeneye-4-player.png]]
1038 *** A Brief Description of jMonkeyEngine's Rendering Pipeline
1040 jMonkeyEngine allows you to create a =ViewPort=, which represents a
1041 view of the simulated world. You can create as many of these as you
1042 want. Every frame, the =RenderManager= iterates through each
1043 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
1044 is a =FrameBuffer= which represents the rendered image in the GPU.
1046 #+caption: =ViewPorts= are cameras in the world. During each frame,
1047 #+caption: the =RenderManager= records a snapshot of what each view
1048 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
1049 #+name: rendermanagers
1050 #+ATTR_LaTeX: :width 10cm
1051 [[./images/diagram_rendermanager2.png]]
1053 Each =ViewPort= can have any number of attached =SceneProcessor=
1054 objects, which are called every time a new frame is rendered. A
1055 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
1056 whatever it wants to the data. Often this consists of invoking GPU
1057 specific operations on the rendered image. The =SceneProcessor= can
1058 also copy the GPU image data to RAM and process it with the CPU.
1060 *** Appropriating Views for Vision
1062 Each eye in the simulated creature needs its own =ViewPort= so
1063 that it can see the world from its own perspective. To this
1064 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
1065 any arbitrary continuation function for further processing. That
1066 continuation function may perform both CPU and GPU operations on
1067 the data. To make this easy for the continuation function, the
1068 =SceneProcessor= maintains appropriately sized buffers in RAM to
1069 hold the data. It does not do any copying from the GPU to the CPU
1070 itself because it is a slow operation.
1072 #+caption: Function to make the rendered scene in jMonkeyEngine
1073 #+caption: available for further processing.
1074 #+name: pipeline-1
#+begin_listing clojure
#+begin_src clojure
(defn vision-pipeline
  "Create a SceneProcessor object which wraps a vision processing
  continuation function. The continuation is a function that takes
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  each of which has already been appropriately sized."
  [continuation]
  (let [byte-buffer (atom nil)
        renderer (atom nil)
        image (atom nil)]
    (proxy [SceneProcessor] []
      (initialize
        [renderManager viewPort]
        (let [cam (.getCamera viewPort)
              width (.getWidth cam)
              height (.getHeight cam)]
          (reset! renderer (.getRenderer renderManager))
          (reset! byte-buffer
                  (BufferUtils/createByteBuffer
                   (* width height 4)))
          (reset! image (BufferedImage.
                         width height
                         BufferedImage/TYPE_4BYTE_ABGR))))
      (isInitialized [] (not (nil? @byte-buffer)))
      (reshape [_ _ _])
      (preFrame [_])
      (postQueue [_])
      (postFrame
        [#^FrameBuffer fb]
        (.clear @byte-buffer)
        (continuation @renderer fb @byte-buffer @image))
      (cleanup []))))
#+end_src
#+end_listing
1111 The continuation function given to =vision-pipeline= above will be
1112 given a =Renderer= and three containers for image data. The
1113 =FrameBuffer= references the GPU image data, but the pixel data
can not be used directly on the CPU. The =ByteBuffer= and
=BufferedImage= are initially ``empty'' but are sized to hold the
data in the =FrameBuffer=. I call transferring the GPU image data
to the CPU structures ``mixing'' the image data.
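
For concreteness, here is a sketch of a continuation that performs
this ``mixing,'' assuming jMonkeyEngine's =Renderer.readFrameBuffer=
method and =Screenshots/convertScreenShot= utility; =CORTEX='s own
helper (=BufferedImage!=, used later) may differ in detail.

#+begin_src clojure
;; Sketch of a `mixing' continuation, assuming the jMonkeyEngine
;; classes com.jme3.renderer.Renderer and com.jme3.util.Screenshots.
(defn mix-image
  "Copy the GPU framebuffer into the ByteBuffer, then convert that
  ByteBuffer into the BufferedImage for CPU-side processing."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (.readFrameBuffer r fb bb)            ; GPU -> ByteBuffer
  (Screenshots/convertScreenShot bb bi) ; ByteBuffer -> BufferedImage
  bi)
#+end_src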
1119 *** Optical sensor arrays are described with images and referenced with metadata
1121 The vision pipeline described above handles the flow of rendered
1122 images. Now, =CORTEX= needs simulated eyes to serve as the source
1123 of these images.
Eyes are described in blender in the same way as joints. They
are zero-dimensional empty objects with no geometry whose local
coordinate system determines the orientation of the resulting eye.
All eyes are children of a parent node named "eyes" just as all
joints have a parent named "joints". An eye binds to the nearest
physical object with =bind-sense=.
1132 #+caption: Here, the camera is created based on metadata on the
1133 #+caption: eye-node and attached to the nearest physical object
1134 #+caption: with =bind-sense=
1135 #+name: add-eye
#+begin_listing clojure
#+begin_src clojure
(defn add-eye!
  "Create a Camera centered on the current position of 'eye which
  follows the closest physical node in 'creature. The camera will
  point in the X direction and use the Z vector as up as determined
  by the rotation of these vectors in blender coordinate space. Use
  XZY rotation for the node in blender."
  [#^Node creature #^Spatial eye]
  (let [target (closest-node creature eye)
        [cam-width cam-height]
        ;;[640 480]  ;; graphics card on laptop doesn't support
                     ;; arbitrary dimensions.
        (eye-dimensions eye)
        cam (Camera. cam-width cam-height)
        rot (.getWorldRotation eye)]
    (.setLocation cam (.getWorldTranslation eye))
    (.lookAtDirection
     cam                          ; this part is not a mistake and
     (.mult rot Vector3f/UNIT_X)  ; is consistent with using Z in
     (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
    (.setFrustumPerspective
     cam (float 45)
     (float (/ (.getWidth cam) (.getHeight cam)))
     (float 1)
     (float 1000))
    (bind-sense target cam) cam))
#+end_src
#+end_listing
1166 *** Simulated Retina
1168 An eye is a surface (the retina) which contains many discrete
1169 sensors to detect light. These sensors can have different
1170 light-sensing properties. In humans, each discrete sensor is
1171 sensitive to red, blue, green, or gray. These different types of
1172 sensors can have different spatial distributions along the retina.
1173 In humans, there is a fovea in the center of the retina which has
1174 a very high density of color sensors, and a blind spot which has
1175 no sensors at all. Sensor density decreases in proportion to
1176 distance from the fovea.
1178 I want to be able to model any retinal configuration, so my
1179 eye-nodes in blender contain metadata pointing to images that
1180 describe the precise position of the individual sensors using
1181 white pixels. The meta-data also describes the precise sensitivity
1182 to light that the sensors described in the image have. An eye can
1183 contain any number of these images. For example, the metadata for
1184 an eye might look like this:
#+begin_src clojure
{0xFF0000 "Models/test-creature/retina-small.png"}
#+end_src
1190 #+caption: An example retinal profile image. White pixels are
1191 #+caption: photo-sensitive elements. The distribution of white
1192 #+caption: pixels is denser in the middle and falls off at the
1193 #+caption: edges and is inspired by the human retina.
1194 #+name: retina
1195 #+ATTR_LaTeX: :width 7cm
1196 [[./images/retina-small.png]]
1198 Together, the number 0xFF0000 and the image above describe the
1199 placement of red-sensitive sensory elements.
1201 Meta-data to very crudely approximate a human eye might be
1202 something like this:
#+begin_src clojure
(let [retinal-profile "Models/test-creature/retina-small.png"]
  {0xFF0000 retinal-profile
   0x00FF00 retinal-profile
   0x0000FF retinal-profile
   0xFFFFFF retinal-profile})
#+end_src
1212 The numbers that serve as keys in the map determine a sensor's
1213 relative sensitivity to the channels red, green, and blue. These
1214 sensitivity values are packed into an integer in the order
1215 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
1216 image are added together with these sensitivities as linear
1217 weights. Therefore, 0xFF0000 means sensitive to red only while
1218 0xFFFFFF means sensitive to all colors equally (gray).
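The weighting itself is done by =pixel-sense=, which appears in the
listing below but whose definition is not reproduced in this section.
A minimal sketch of the idea, assuming the =|_|R|G|B|= packing just
described (the name =pixel-sense-sketch= and its exact arithmetic are
illustrative, not the actual =CORTEX= implementation):
#+begin_src clojure
;; Illustrative sketch only: weight a pixel's RGB channels by the 8-bit
;; sensitivity fields and collapse them to a single intensity in [0,1].
(defn pixel-sense-sketch
  [sensitivity pixel]
  (let [s-r (bit-and 0xFF (bit-shift-right sensitivity 16))
        s-g (bit-and 0xFF (bit-shift-right sensitivity 8))
        s-b (bit-and 0xFF sensitivity)
        p-r (bit-and 0xFF (bit-shift-right pixel 16))
        p-g (bit-and 0xFF (bit-shift-right pixel 8))
        p-b (bit-and 0xFF pixel)
        total (* 255 (+ s-r s-g s-b))]
    (if (zero? total)
      0.0
      (float (/ (+ (* s-r p-r) (* s-g p-g) (* s-b p-b)) total)))))

;; (pixel-sense-sketch 0xFF0000 0x7F2040) => ~0.5  ; red sensor, half-red pixel
;; (pixel-sense-sketch 0xFFFFFF 0x808080) => ~0.5  ; gray sensor, mid-gray pixel
#+end_src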
1220 #+caption: This is the core of vision in =CORTEX=. A given eye node
1221 #+caption: is converted into a function that returns visual
1222 #+caption: information from the simulation.
1223 #+name: vision-kernel
1224 #+begin_listing clojure
1225 #+BEGIN_SRC clojure
1226 (defn vision-kernel
1227 "Returns a list of functions, each of which will return a color
1228 channel's worth of visual information when called inside a running
1229 simulation."
1230 [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
1231 (let [retinal-map (retina-sensor-profile eye)
1232 camera (add-eye! creature eye)
1233 vision-image
1234 (atom
1235 (BufferedImage. (.getWidth camera)
1236 (.getHeight camera)
1237 BufferedImage/TYPE_BYTE_BINARY))
1238 register-eye!
1239 (runonce
1240 (fn [world]
1241 (add-camera!
1242 world camera
1243 (let [counter (atom 0)]
1244 (fn [r fb bb bi]
1245 (if (zero? (rem (swap! counter inc) (inc skip)))
1246 (reset! vision-image
1247 (BufferedImage! r fb bb bi))))))))]
1248 (vec
1249 (map
1250 (fn [[key image]]
1251 (let [whites (white-coordinates image)
1252 topology (vec (collapse whites))
1253 sensitivity (sensitivity-presets key key)]
1254 (attached-viewport.
1255 (fn [world]
1256 (register-eye! world)
1257 (vector
1258 topology
1259 (vec
1260 (for [[x y] whites]
1261 (pixel-sense
1262 sensitivity
1263 (.getRGB @vision-image x y))))))
1264 register-eye!)))
1265 retinal-map))))
1266 #+END_SRC
1267 #+end_listing
1269 Note that since each of the functions generated by =vision-kernel=
1270 shares the same =register-eye!= function, the eye will be
1271 registered only once, the first time any of the functions from the
1272 list returned by =vision-kernel= is called. Each of the functions
1273 returned by =vision-kernel= also allows access to the =Viewport=
1274 through which it receives images.
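The once-only behavior comes from the =runonce= wrapper used in
listing \ref{vision-kernel}. Its definition is not shown here; a
minimal sketch of the idiom (illustrative only; the real version may
also cache and return the first result):
#+begin_src clojure
;; Illustrative sketch: wrap a function so that its body executes only
;; on the first call; later calls do nothing.
(defn runonce-sketch [f]
  (let [ran? (atom false)]
    (fn [& args]
      (when (compare-and-set! ran? false true)
        (apply f args)))))
#+end_src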
1276 All the hard work has been done; all that remains is to apply
1277 =vision-kernel= to each eye in the creature and gather the results
1278 into one list of functions.
1281 #+caption: With =vision!=, =CORTEX= is already a fine simulation
1282 #+caption: environment for experimenting with different types of
1283 #+caption: eyes.
1284 #+name: vision!
1285 #+begin_listing clojure
1286 #+BEGIN_SRC clojure
1287 (defn vision!
1288 "Returns a list of functions, each of which returns visual sensory
1289 data when called inside a running simulation."
1290 [#^Node creature & {skip :skip :or {skip 0}}]
1291 (reduce
1292 concat
1293 (for [eye (eyes creature)]
1294 (vision-kernel creature eye))))
1295 #+END_SRC
1296 #+end_listing
1298 #+caption: Simulated vision with a test creature and the
1299 #+caption: human-like eye approximation. Notice how each channel
1300 #+caption: of the eye responds differently to the differently
1301 #+caption: colored balls.
1302 #+name: worm-vision-test.
1303 #+ATTR_LaTeX: :width 13cm
1304 [[./images/worm-vision.png]]
1306 The vision code is not much more complicated than the body code,
1307 and enables multiple further paths for simulated vision. For
1308 example, it is quite easy to create bifocal vision -- you just
1309 make two eyes next to each other in blender! It is also possible
1310 to encode vision transforms in the retinal files. For example, the
1311 human-like retina file in figure \ref{retina} approximates a
1312 log-polar transform.
1314 This vision code has already been absorbed by the jMonkeyEngine
1315 community and is now (in modified form) part of a system for
1316 capturing in-game video to a file.
1318 ** ...but hearing must be built from scratch
1320 At the end of this section I will have simulated ears that work the
1321 same way as the simulated eyes in the last section. I will be able to
1322 place any number of ear-nodes in a blender file, and they will bind to
1323 the closest physical object and follow it as it moves around. Each ear
1324 will provide access to the sound data it picks up between every frame.
1326 Hearing is one of the more difficult senses to simulate, because there
1327 is less support for obtaining the actual sound data that is processed
1328 by jMonkeyEngine3. There is no "split-screen" support for rendering
1329 sound from different points of view, and there is no way to directly
1330 access the rendered sound data.
1332 =CORTEX='s hearing is unique in that it shares none of these
1333 limitations. As far as I know, there is no other simulation
1334 environment that supports multiple listeners, and the sound demo
1335 at the end of this section is the first time this has been done in
1336 a video game environment.
1338 *** Brief Description of jMonkeyEngine's Sound System
1340 jMonkeyEngine's sound system works as follows:
1342 - jMonkeyEngine uses the =AppSettings= for the particular
1343 application to determine what sort of =AudioRenderer= should be
1344 used.
1345 - Although some support is provided for multiple AudioRendering
1346 backends, jMonkeyEngine at the time of this writing will either
1347 pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
1348 - jMonkeyEngine tries to figure out what sort of system you're
1349 running and extracts the appropriate native libraries.
1350 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
1351 Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]]
1352 - =OpenAL= renders the 3D sound and feeds the rendered sound
1353 directly to any of various sound output devices with which it
1354 knows how to communicate.
1356 A consequence of this is that there's no way to access the actual
1357 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
1358 one /listener/ (it renders sound data from only one perspective),
1359 which normally isn't a problem for games, but becomes a problem
1360 when trying to make multiple AI creatures that can each hear the
1361 world from a different perspective.
1363 To make many AI creatures in jMonkeyEngine that can each hear the
1364 world from their own perspective, or to make a single creature with
1365 many ears, it is necessary to go all the way back to =OpenAL= and
1366 implement support for simulated hearing there.
1368 *** Extending =OpenAL=
1370 Extending =OpenAL= to support multiple listeners requires 500
1371 lines of =C= code and is too hairy to mention here. Instead, I
1372 will show a small amount of extension code and go over the high
1373 level strategy. Full source is of course available with the
1374 =CORTEX= distribution if you're interested.
1376 =OpenAL= goes to great lengths to support many different systems,
1377 all with different sound capabilities and interfaces. It
1378 accomplishes this difficult task by providing code for many
1379 different sound backends in pseudo-objects called /Devices/.
1380 There's a device for the Linux Open Sound System and the Advanced
1381 Linux Sound Architecture, there's one for Direct Sound on Windows,
1382 and there's even one for Solaris. =OpenAL= solves the problem of
1383 platform independence by providing all these Devices.
1385 Wrapper libraries such as LWJGL are free to examine the system on
1386 which they are running and then select an appropriate device for
1387 that system.
1389 There are also a few "special" devices that don't interface with
1390 any particular system. These include the Null Device, which
1391 doesn't do anything, and the Wave Device, which writes whatever
1392 sound it receives to a file, if everything has been set up
1393 correctly when configuring =OpenAL=.
1395 Actual mixing (Doppler shift and distance- and environment-based
1396 attenuation) of the sound data happens in the Devices, and they
1397 are the only point in the sound rendering process where this data
1398 is available.
1400 Therefore, in order to support multiple listeners, and get the
1401 sound data in a form that the AIs can use, it is necessary to
1402 create a new Device which supports this feature.
1404 Adding a device to OpenAL is rather tricky -- there are five
1405 separate files in the =OpenAL= source tree that must be modified
1406 to do so. I named my device the "Multiple Audio Send" Device, or
1407 =Send= Device for short, since it sends audio data back to the
1408 calling application like an Aux-Send cable on a mixing board.
1410 The main idea behind the Send device is to take advantage of the
1411 fact that LWJGL only manages one /context/ when using OpenAL. A
1412 /context/ is like a container that holds samples and keeps track
1413 of where the listener is. In order to support multiple listeners,
1414 the Send device identifies the LWJGL context as the master
1415 context, and creates any number of slave contexts to represent
1416 additional listeners. Every time the device renders sound, it
1417 synchronizes every source from the master LWJGL context to the
1418 slave contexts. Then, it renders each context separately, using a
1419 different listener for each one. The rendered sound is made
1420 available via JNI to jMonkeyEngine.
1422 Switching between contexts is not the normal operation of a
1423 Device, and one of the problems with doing so is that a Device
1424 normally keeps around a few pieces of state such as the
1425 =ClickRemoval= array above which will become corrupted if the
1426 contexts are not rendered in parallel. The solution is to create a
1427 copy of this normally global device state for each context, and
1428 copy it back and forth into and out of the actual device state
1429 whenever a context is rendered.
1431 The core of the =Send= device is the =syncSources= function, which
1432 does the job of copying all relevant data from one context to
1433 another.
1435 #+caption: Program for extending =OpenAL= to support multiple
1436 #+caption: listeners via context copying/switching.
1437 #+name: sync-openal-sources
1438 #+begin_listing c
1439 #+BEGIN_SRC c
1440 void syncSources(ALsource *masterSource, ALsource *slaveSource,
1441 ALCcontext *masterCtx, ALCcontext *slaveCtx){
1442 ALuint master = masterSource->source;
1443 ALuint slave = slaveSource->source;
1444 ALCcontext *current = alcGetCurrentContext();
1446 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
1447 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
1448 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
1449 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
1450 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
1451 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
1452 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
1453 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
1454 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
1455 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
1456 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
1457 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
1458 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
1460 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
1461 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
1462 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
1464 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
1465 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
1467 alcMakeContextCurrent(masterCtx);
1468 ALint source_type;
1469 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
1471 // Only static sources are currently synchronized!
1472 if (AL_STATIC == source_type){
1473 ALint master_buffer;
1474 ALint slave_buffer;
1475 alGetSourcei(master, AL_BUFFER, &master_buffer);
1476 alcMakeContextCurrent(slaveCtx);
1477 alGetSourcei(slave, AL_BUFFER, &slave_buffer);
1478 if (master_buffer != slave_buffer){
1479 alSourcei(slave, AL_BUFFER, master_buffer);
1480 }
1481 }
1483 // Synchronize the state of the two sources.
1484 alcMakeContextCurrent(masterCtx);
1485 ALint masterState;
1486 ALint slaveState;
1488 alGetSourcei(master, AL_SOURCE_STATE, &masterState);
1489 alcMakeContextCurrent(slaveCtx);
1490 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
1492 if (masterState != slaveState){
1493 switch (masterState){
1494 case AL_INITIAL : alSourceRewind(slave); break;
1495 case AL_PLAYING : alSourcePlay(slave); break;
1496 case AL_PAUSED : alSourcePause(slave); break;
1497 case AL_STOPPED : alSourceStop(slave); break;
1498 }
1499 }
1500 // Restore whatever context was previously active.
1501 alcMakeContextCurrent(current);
1502 }
1503 #+END_SRC
1504 #+end_listing
1506 With this special context-switching device, and some ugly JNI
1507 bindings that are not worth mentioning, =CORTEX= gains the ability
1508 to access multiple sound streams from =OpenAL=.
1510 #+caption: Program to create an ear from a blender empty node. The ear
1511 #+caption: follows around the nearest physical object and passes
1512 #+caption: all sensory data to a continuation function.
1513 #+name: add-ear
1514 #+begin_listing clojure
1515 #+BEGIN_SRC clojure
1516 (defn add-ear!
1517 "Create a Listener centered on the current position of 'ear
1518 which follows the closest physical node in 'creature and
1519 sends sound data to 'continuation."
1520 [#^Application world #^Node creature #^Spatial ear continuation]
1521 (let [target (closest-node creature ear)
1522 lis (Listener.)
1523 audio-renderer (.getAudioRenderer world)
1524 sp (hearing-pipeline continuation)]
1525 (.setLocation lis (.getWorldTranslation ear))
1526 (.setRotation lis (.getWorldRotation ear))
1527 (bind-sense target lis)
1528 (update-listener-velocity! target lis)
1529 (.addListener audio-renderer lis)
1530 (.registerSoundProcessor audio-renderer lis sp)))
1531 #+END_SRC
1532 #+end_listing
1534 The =Send= device, unlike most of the other devices in =OpenAL=,
1535 does not render sound unless asked. This enables the system to
1536 slow down or speed up depending on the needs of the AIs who are
1537 using it to listen. If the device tried to render samples in
1538 real-time, a complicated AI whose mind takes 100 seconds of
1539 computer time to simulate 1 second of AI-time would miss almost
1540 all of the sound in its environment!
1542 #+caption: Program to enable arbitrary hearing in =CORTEX=
1543 #+name: hearing
1544 #+begin_listing clojure
1545 #+BEGIN_SRC clojure
1546 (defn hearing-kernel
1547 "Returns a function which returns auditory sensory data when called
1548 inside a running simulation."
1549 [#^Node creature #^Spatial ear]
1550 (let [hearing-data (atom [])
1551 register-listener!
1552 (runonce
1553 (fn [#^Application world]
1554 (add-ear!
1555 world creature ear
1556 (comp #(reset! hearing-data %)
1557 byteBuffer->pulse-vector))))]
1558 (fn [#^Application world]
1559 (register-listener! world)
1560 (let [data @hearing-data
1561 topology
1562 (vec (map #(vector % 0) (range 0 (count data))))]
1563 [topology data]))))
1565 (defn hearing!
1566 "Endow the creature in a particular world with the sense of
1567 hearing. Will return a sequence of functions, one for each ear,
1568 which when called will return the auditory data from that ear."
1569 [#^Node creature]
1570 (for [ear (ears creature)]
1571 (hearing-kernel creature ear)))
1572 #+END_SRC
1573 #+end_listing
1575 Armed with these functions, =CORTEX= is able to test possibly the
1576 first ever instance of multiple listeners in a video game engine
1577 based simulation!
1579 #+caption: Here a simple creature responds to sound by changing
1580 #+caption: its color from gray to green when the total volume
1581 #+caption: goes over a threshold.
1582 #+name: sound-test
1583 #+begin_listing java
1584 #+BEGIN_SRC java
1585 /**
1586 * Respond to sound! This is the brain of an AI entity that
1587 * hears its surroundings and reacts to them.
1588 */
1589 public void process(ByteBuffer audioSamples,
1590 int numSamples, AudioFormat format) {
1591 audioSamples.clear();
1592 byte[] data = new byte[numSamples];
1593 float[] out = new float[numSamples];
1594 audioSamples.get(data);
1595 FloatSampleTools.
1596 byte2floatInterleaved
1597 (data, 0, out, 0, numSamples/format.getFrameSize(), format);
1599 float max = Float.NEGATIVE_INFINITY;
1600 for (float f : out){if (f > max) max = f;}
1601 audioSamples.clear();
1603 if (max > 0.1){
1604 entity.getMaterial().setColor("Color", ColorRGBA.Green);
1606 else {
1607 entity.getMaterial().setColor("Color", ColorRGBA.Gray);
1609 #+END_SRC
1610 #+end_listing
1612 #+caption: First ever simulation of multiple listeners in =CORTEX=.
1613 #+caption: Each cube is a creature which processes sound data with
1614 #+caption: the =process= function from listing \ref{sound-test}.
1615 #+caption: the ball is constantly emitting a pure tone of
1616 #+caption: constant volume. As it approaches the cubes, they each
1617 #+caption: change color in response to the sound.
1618 #+name: sound-cubes.
1619 #+ATTR_LaTeX: :width 10cm
1620 [[./images/java-hearing-test.png]]
1622 This system of hearing has also been co-opted by the
1623 jMonkeyEngine3 community and is used to record audio for demo
1624 videos.
1626 ** Hundreds of hair-like elements provide a sense of touch
1628 Touch is critical to navigation and spatial reasoning and as such I
1629 need a simulated version of it to give to my AI creatures.
1631 Human skin has a wide array of touch sensors, each of which
1632 specializes in detecting different vibrational modes and pressures.
1633 These sensors can integrate a vast expanse of skin (i.e. your
1634 entire palm), or a tiny patch of skin at the tip of your finger.
1635 The hairs of the skin help detect objects before they even come
1636 into contact with the skin proper.
1638 However, touch in my simulated world cannot exactly correspond to
1639 human touch because my creatures are made out of completely rigid
1640 segments that don't deform like human skin.
1642 Instead of measuring deformation or vibration, I surround each
1643 rigid part with a plenitude of hair-like objects (/feelers/) which
1644 do not interact with the physical world. Physical objects can pass
1645 through them with no effect. The feelers are able to tell when
1646 other objects pass through them, and they constantly report how
1647 much of their extent is covered. So even though the creature's body
1648 parts do not deform, the feelers create a margin around those body
1649 parts which achieves a sense of touch which is a hybrid between a
1650 human's sense of deformation and sense from hairs.
1652 Implementing touch in jMonkeyEngine follows a different technical
1653 route than vision and hearing. Those two senses piggybacked off
1654 jMonkeyEngine's 3D audio and video rendering subsystems. To
1655 simulate touch, I use jMonkeyEngine's physics system to execute
1656 many small collision detections, one for each feeler. The placement
1657 of the feelers is determined by a UV-mapped image which shows where
1658 each feeler should be on the 3D surface of the body.
1660 *** Defining Touch Meta-Data in Blender
1662 Each geometry can have a single UV map which describes the
1663 position of the feelers which will constitute its sense of touch.
1664 This image path is stored under the ``touch'' key. The image itself
1665 is black and white, with black meaning a feeler length of 0 (no
1666 feeler is present) and white meaning a feeler length of =scale=,
1667 which is a float stored under the key "scale".
1669 #+caption: Touch does not use empty nodes to store metadata,
1670 #+caption: because the metadata of each solid part of a
1671 #+caption: creature's body is sufficient.
1672 #+name: touch-meta-data
1673 #+begin_listing clojure
1674 #+BEGIN_SRC clojure
1675 (defn tactile-sensor-profile
1676 "Return the touch-sensor distribution image in BufferedImage format,
1677 or nil if it does not exist."
1678 [#^Geometry obj]
1679 (if-let [image-path (meta-data obj "touch")]
1680 (load-image image-path)))
1682 (defn tactile-scale
1683 "Return the length of each feeler. Default scale is 0.01
1684 jMonkeyEngine units."
1685 [#^Geometry obj]
1686 (if-let [scale (meta-data obj "scale")]
1687 scale 0.1))
1688 #+END_SRC
1689 #+end_listing
1691 Here is an example of a UV-map which specifies the position of
1692 touch sensors along the surface of the upper segment of a fingertip.
1694 #+caption: This is the tactile-sensor-profile for the upper segment
1695 #+caption: of a fingertip. It defines regions of high touch sensitivity
1696 #+caption: (where there are many white pixels) and regions of low
1697 #+caption: sensitivity (where white pixels are sparse).
1698 #+name: fingertip-UV
1699 #+ATTR_LaTeX: :width 13cm
1700 [[./images/finger-UV.png]]
1702 *** Implementation Summary
1704 To simulate touch there are three conceptual steps. For each solid
1705 object in the creature, you first have to get the UV image and scale
1706 parameter which define the position and length of the feelers.
1707 Then, you use the triangles which comprise the mesh and the UV
1708 data stored in the mesh to determine the world-space position and
1709 orientation of each feeler. Then once every frame, update these
1710 positions and orientations to match the current position and
1711 orientation of the object, and use physics collision detection to
1712 gather tactile data.
1714 Extracting the meta-data has already been described. The third
1715 step, physics collision detection, is handled in =touch-kernel=.
1716 Translating the positions and orientations of the feelers from the
1717 UV-map to world-space is itself a three-step process.
1719 - Find the triangles which make up the mesh in pixel-space and in
1720 world-space. (=triangles=, =pixel-triangles=).
1722 - Find the coordinates of each feeler in world-space. These are
1723 the origins of the feelers. (=feeler-origins=).
1725 - Calculate the normals of the triangles in world space, and add
1726 them to each of the origins of the feelers. These are the
1727 coordinates of the tips of the feelers.
1728 (=feeler-tips=).
1730 *** Triangle Math
1732 The rigid objects which make up a creature have an underlying
1733 =Geometry=, which is a =Mesh= plus a =Material= and other
1734 important data involved with displaying the object.
1736 A =Mesh= is composed of =Triangles=, and each =Triangle= has three
1737 vertices which have coordinates in world space and UV space.
1739 Here, =triangles= gets all the world-space triangles which
1740 comprise a mesh, while =pixel-triangles= gets those same triangles
1741 expressed in pixel coordinates (which are UV coordinates scaled to
1742 fit the height and width of the UV image).
1744 #+caption: Programs to extract triangles from a geometry and get
1745 #+caption: their vertices in both world and UV-coordinates.
1746 #+name: get-triangles
1747 #+begin_listing clojure
1748 #+BEGIN_SRC clojure
1749 (defn triangle
1750 "Get the triangle specified by triangle-index from the mesh."
1751 [#^Geometry geo triangle-index]
1752 (triangle-seq
1753 (let [scratch (Triangle.)]
1754 (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))
1756 (defn triangles
1757 "Return a sequence of all the Triangles which comprise a given
1758 Geometry."
1759 [#^Geometry geo]
1760 (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))
1762 (defn triangle-vertex-indices
1763 "Get the triangle vertex indices of a given triangle from a given
1764 mesh."
1765 [#^Mesh mesh triangle-index]
1766 (let [indices (int-array 3)]
1767 (.getTriangle mesh triangle-index indices)
1768 (vec indices)))
1770 (defn vertex-UV-coord
1771 "Get the UV-coordinates of the vertex named by vertex-index"
1772 [#^Mesh mesh vertex-index]
1773 (let [UV-buffer
1774 (.getData
1775 (.getBuffer
1776 mesh
1777 VertexBuffer$Type/TexCoord))]
1778 [(.get UV-buffer (* vertex-index 2))
1779 (.get UV-buffer (+ 1 (* vertex-index 2)))]))
1781 (defn pixel-triangle [#^Geometry geo image index]
1782 (let [mesh (.getMesh geo)
1783 width (.getWidth image)
1784 height (.getHeight image)]
1785 (vec (map (fn [[u v]] (vector (* width u) (* height v)))
1786 (map (partial vertex-UV-coord mesh)
1787 (triangle-vertex-indices mesh index))))))
1789 (defn pixel-triangles
1790 "The pixel-space triangles of the Geometry, in the same order as
1791 (triangles geo)"
1792 [#^Geometry geo image]
1793 (let [height (.getHeight image)
1794 width (.getWidth image)]
1795 (map (partial pixel-triangle geo image)
1796 (range (.getTriangleCount (.getMesh geo))))))
1797 #+END_SRC
1798 #+end_listing
1800 *** The Affine Transform from one Triangle to Another
1802 =pixel-triangles= gives us the mesh triangles expressed in pixel
1803 coordinates and =triangles= gives us the mesh triangles expressed
1804 in world coordinates. The tactile-sensor-profile gives the
1805 position of each feeler in pixel-space. In order to convert
1806 pixel-space coordinates into world-space coordinates we need
1807 something that takes coordinates on the surface of one triangle
1808 and gives the corresponding coordinates on the surface of another
1809 triangle.
1811 Any triangle can be mapped onto any other by an [[http://mathworld.wolfram.com/AffineTransformation.html][affine transformation]],
1812 i.e. by some combination of translation, rotation, scaling, and
1813 shearing. The affine transformation from one triangle to another
1814 is readily computable if each triangle is expressed in terms of a
1815 $4 \times 4$ matrix.
1817 #+BEGIN_LaTeX
1818 $$
1819 \begin{bmatrix}
1820 x_1 & x_2 & x_3 & n_x \\
1821 y_1 & y_2 & y_3 & n_y \\
1822 z_1 & z_2 & z_3 & n_z \\
1823 1 & 1 & 1 & 1
1824 \end{bmatrix}
1825 $$
1826 #+END_LaTeX
1828 Here, the first three columns of the matrix are the vertices of
1829 the triangle. The last column is the right-handed unit normal of
1830 the triangle.
1832 With two triangles $T_{1}$ and $T_{2}$ each expressed as a
1833 matrix like above, the affine transform from $T_{1}$ to $T_{2}$
1834 is $T_{2}T_{1}^{-1}$.
1836 The clojure code below recapitulates the formulas above, using
1837 jMonkeyEngine's =Matrix4f= objects, which can describe any affine
1838 transformation.
1840 #+caption: Program to interpret triangles as affine transforms.
1841 #+name: triangle-affine
1842 #+begin_listing clojure
1843 #+BEGIN_SRC clojure
1844 (defn triangle->matrix4f
1845 "Converts the triangle into a 4x4 matrix: The first three columns
1846 contain the vertices of the triangle; the last contains the unit
1847 normal of the triangle. The bottom row is filled with 1s."
1848 [#^Triangle t]
1849 (let [mat (Matrix4f.)
1850 [vert-1 vert-2 vert-3]
1851 (mapv #(.get t %) (range 3))
1852 unit-normal (do (.calculateNormal t)(.getNormal t))
1853 vertices [vert-1 vert-2 vert-3 unit-normal]]
1854 (dorun
1855 (for [row (range 4) col (range 3)]
1856 (do
1857 (.set mat col row (.get (vertices row) col))
1858 (.set mat 3 row 1)))) mat))
1860 (defn triangles->affine-transform
1861 "Returns the affine transformation that converts each vertex in the
1862 first triangle into the corresponding vertex in the second
1863 triangle."
1864 [#^Triangle tri-1 #^Triangle tri-2]
1865 (.mult
1866 (triangle->matrix4f tri-2)
1867 (.invert (triangle->matrix4f tri-1))))
1868 #+END_SRC
1869 #+end_listing
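As a quick sanity check of this construction (a hypothetical REPL
session with made-up triangles, run in the same namespace as the
listing above), the resulting matrix should carry each vertex of the
first triangle onto the corresponding vertex of the second:
#+begin_src clojure
(import '(com.jme3.math Triangle Vector3f))

(let [tri-1 (Triangle. (Vector3f. 0 0 0) (Vector3f. 1 0 0) (Vector3f. 0 1 0))
      tri-2 (Triangle. (Vector3f. 0 0 0) (Vector3f. 2 0 0) (Vector3f. 0 2 0))
      T (triangles->affine-transform tri-1 tri-2)]
  ;; the second vertex of tri-1 should land on the second vertex of tri-2
  (.mult T (Vector3f. 1 0 0)))
;; => a Vector3f of approximately (2.0, 0.0, 0.0)
#+end_src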
1871 *** Triangle Boundaries
1873 For efficiency's sake I will divide the tactile-profile image into
1874 small boxes which bound each pixel-triangle, then extract the
1875 points which lie inside the triangle and map them to 3D-space using
1876 =triangles->affine-transform= above. To do this I need a function,
1877 =convex-bounds=, which finds the smallest box which bounds a 2D
1878 triangle.
1880 =inside-triangle?= determines whether a point is inside a triangle
1881 in 2D pixel-space.
1883 #+caption: Program to efficiently determine point inclusion
1884 #+caption: in a triangle.
1885 #+name: in-triangle
1886 #+begin_listing clojure
1887 #+BEGIN_SRC clojure
1888 (defn convex-bounds
1889 "Returns the smallest square containing the given vertices, as a
1890 vector of integers [left top width height]."
1891 [verts]
1892 (let [xs (map first verts)
1893 ys (map second verts)
1894 x0 (Math/floor (apply min xs))
1895 y0 (Math/floor (apply min ys))
1896 x1 (Math/ceil (apply max xs))
1897 y1 (Math/ceil (apply max ys))]
1898 [x0 y0 (- x1 x0) (- y1 y0)]))
1900 (defn same-side?
1901 "Given the points p1 and p2 and the reference point ref, is point p
1902 on the same side of the line that goes through p1 and p2 as ref is?"
1903 [p1 p2 ref p]
1904 (<=
1905 0
1906 (.dot
1907 (.cross (.subtract p2 p1) (.subtract p p1))
1908 (.cross (.subtract p2 p1) (.subtract ref p1)))))
1910 (defn inside-triangle?
1911 "Is the point inside the triangle?"
1912 {:author "Dylan Holmes"}
1913 [#^Triangle tri #^Vector3f p]
1914 (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]
1915 (and
1916 (same-side? vert-1 vert-2 vert-3 p)
1917 (same-side? vert-2 vert-3 vert-1 p)
1918 (same-side? vert-3 vert-1 vert-2 p))))
1919 #+END_SRC
1920 #+end_listing
1922 *** Feeler Coordinates
1924 The triangle-related functions above make short work of
1925 calculating the positions and orientations of each feeler in
1926 world-space.
1928 #+caption: Program to get the coordinates of ``feelers'' in
1929 #+caption: both world and UV-coordinates.
1930 #+name: feeler-coordinates
1931 #+begin_listing clojure
1932 #+BEGIN_SRC clojure
1933 (defn feeler-pixel-coords
1934 "Returns the coordinates of the feelers in pixel space in lists, one
1935 list for each triangle, ordered in the same way as (triangles) and
1936 (pixel-triangles)."
1937 [#^Geometry geo image]
1938 (map
1939 (fn [pixel-triangle]
1940 (filter
1941 (fn [coord]
1942 (inside-triangle? (->triangle pixel-triangle)
1943 (->vector3f coord)))
1944 (white-coordinates image (convex-bounds pixel-triangle))))
1945 (pixel-triangles geo image)))
1947 (defn feeler-world-coords
1948 "Returns the coordinates of the feelers in world space in lists, one
1949 list for each triangle, ordered in the same way as (triangles) and
1950 (pixel-triangles)."
1951 [#^Geometry geo image]
1952 (let [transforms
1953 (map #(triangles->affine-transform
1954 (->triangle %1) (->triangle %2))
1955 (pixel-triangles geo image)
1956 (triangles geo))]
1957 (map (fn [transform coords]
1958 (map #(.mult transform (->vector3f %)) coords))
1959 transforms (feeler-pixel-coords geo image))))
1960 #+END_SRC
1961 #+end_listing
1963 #+caption: Program to get the position of the base and tip of
1964 #+caption: each ``feeler''
1965 #+name: feeler-tips
1966 #+begin_listing clojure
1967 #+BEGIN_SRC clojure
1968 (defn feeler-origins
1969 "The world space coordinates of the root of each feeler."
1970 [#^Geometry geo image]
1971 (reduce concat (feeler-world-coords geo image)))
1973 (defn feeler-tips
1974 "The world space coordinates of the tip of each feeler."
1975 [#^Geometry geo image]
1976 (let [world-coords (feeler-world-coords geo image)
1977 normals
1978 (map
1979 (fn [triangle]
1980 (.calculateNormal triangle)
1981 (.clone (.getNormal triangle)))
1982 (map ->triangle (triangles geo)))]
1984 (mapcat (fn [origins normal]
1985 (map #(.add % normal) origins))
1986 world-coords normals)))
1988 (defn touch-topology
1989 [#^Geometry geo image]
1990 (collapse (reduce concat (feeler-pixel-coords geo image))))
1991 #+END_SRC
1992 #+end_listing
1994 *** Simulated Touch
1996 Now that the functions to construct feelers are complete,
1997 =touch-kernel= generates functions to be called from within a
1998 simulation that perform the necessary physics collisions to
1999 collect tactile data, and =touch!= recursively applies it to every
2000 node in the creature.
2002 #+caption: Efficient program to transform a ray from
2003 #+caption: one position to another.
2004 #+name: set-ray
2005 #+begin_listing clojure
2006 #+BEGIN_SRC clojure
2007 (defn set-ray [#^Ray ray #^Matrix4f transform
2008 #^Vector3f origin #^Vector3f tip]
2009 ;; Doing everything locally reduces garbage collection by enough to
2010 ;; be worth it.
2011 (.mult transform origin (.getOrigin ray))
2012 (.mult transform tip (.getDirection ray))
2013 (.subtractLocal (.getDirection ray) (.getOrigin ray))
2014 (.normalizeLocal (.getDirection ray)))
2015 #+END_SRC
2016 #+end_listing
2018 #+caption: This is the core of touch in =CORTEX=: each feeler
2019 #+caption: follows the object it is bound to, reporting any
2020 #+caption: collisions that may happen.
2021 #+name: touch-kernel
2022 #+begin_listing clojure
2023 #+BEGIN_SRC clojure
2024 (defn touch-kernel
2025 "Constructs a function which will return tactile sensory data from
2026 'geo when called from inside a running simulation"
2027 [#^Geometry geo]
2028 (if-let
2029 [profile (tactile-sensor-profile geo)]
2030 (let [ray-reference-origins (feeler-origins geo profile)
2031 ray-reference-tips (feeler-tips geo profile)
2032 ray-length (tactile-scale geo)
2033 current-rays (map (fn [_] (Ray.)) ray-reference-origins)
2034 topology (touch-topology geo profile)
2035 correction (float (* ray-length -0.2))]
2036 ;; slight tolerance for very close collisions.
2037 (dorun
2038 (map (fn [origin tip]
2039 (.addLocal origin (.mult (.subtract tip origin)
2040 correction)))
2041 ray-reference-origins ray-reference-tips))
2042 (dorun (map #(.setLimit % ray-length) current-rays))
2043 (fn [node]
2044 (let [transform (.getWorldMatrix geo)]
2045 (dorun
2046 (map (fn [ray ref-origin ref-tip]
2047 (set-ray ray transform ref-origin ref-tip))
2048 current-rays ray-reference-origins
2049 ray-reference-tips))
2050 (vector
2051 topology
2052 (vec
2053 (for [ray current-rays]
2054 (do
2055 (let [results (CollisionResults.)]
2056 (.collideWith node ray results)
2057 (let [touch-objects
2058 (filter #(not (= geo (.getGeometry %)))
2059 results)
2060 limit (.getLimit ray)]
2061 [(if (empty? touch-objects)
2062 limit
2063 (let [response
2064 (apply min (map #(.getDistance %)
2065 touch-objects))]
2066 (FastMath/clamp
2067 (float
2068 (if (> response limit) (float 0.0)
2069 (+ response correction)))
2070 (float 0.0)
2071 limit)))
2072 limit])))))))))))
2073 #+END_SRC
2074 #+end_listing
2076 Armed with the =touch!= function, =CORTEX= becomes capable of
2077 giving creatures a sense of touch. A simple test is to create a
2078 cube that is outfitted with a uniform distribution of touch
2079 sensors. It can feel the ground and any balls that it touches.
2081 #+caption: =CORTEX= interface for creating touch in a simulated
2082 #+caption: creature.
2083 #+name: touch
2084 #+begin_listing clojure
2085 #+BEGIN_SRC clojure
2086 (defn touch!
2087 "Endow the creature with the sense of touch. Returns a sequence of
2088 functions, one for each body part with a tactile-sensor-profile,
2089 each of which when called returns sensory data for that body part."
2090 [#^Node creature]
2091 (filter
2092 (comp not nil?)
2093 (map touch-kernel
2094 (filter #(isa? (class %) Geometry)
2095 (node-seq creature)))))
2096 #+END_SRC
2097 #+end_listing
2099 The tactile-sensor-profile image for the touch cube is a simple
2100 cross with a uniform distribution of touch sensors:
2102 #+caption: The touch profile for the touch-cube. Each pure white
2103 #+caption: pixel defines a touch sensitive feeler.
2104 #+name: touch-cube-uv-map
2105 #+ATTR_LaTeX: :width 7cm
2106 [[./images/touch-profile.png]]
2108 #+caption: The touch cube reacts to cannonballs. The black, red,
2109 #+caption: and white cross on the right is a visual display of
2110 #+caption: the creature's touch. White means that it is feeling
2111 #+caption: something strongly, black is not feeling anything,
2112 #+caption: and gray is in-between. The cube can feel both the
2113 #+caption: floor and the ball. Notice that when the ball causes
2114 #+caption: the cube to tip, the bottom face can still feel
2115 #+caption: part of the ground.
2116 #+name: touch-cube-uv-map-2
2117 #+ATTR_LaTeX: :width 15cm
2118 [[./images/touch-cube.png]]
2120 ** Proprioception provides knowledge of your own body's position
2122 Close your eyes, and touch your nose with your right index finger.
2123 How did you do it? You could not see your hand, and neither your
2124 hand nor your nose could use the sense of touch to guide the path
2125 of your hand. There are no sound cues, and taste and smell
2126 certainly don't provide any help. You know where your hand is
2127 without your other senses because of proprioception.
2129 Humans can sometimes lose this sense through viral infections or
2130 damage to the spinal cord or brain, and when they do, they lose
2131 the ability to control their own bodies without looking directly at
2132 the parts they want to move. In [[http://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat][The Man Who Mistook His Wife for a
2133 Hat]] (\cite{man-wife-hat}), a woman named Christina loses this
2134 sense and has to learn how to move by carefully watching her arms
2135 and legs. She describes proprioception as the "eyes of the body,
2136 the way the body sees itself".
2138 Proprioception in humans is mediated by [[http://en.wikipedia.org/wiki/Articular_capsule][joint capsules]], [[http://en.wikipedia.org/wiki/Muscle_spindle][muscle
2139 spindles]], and the [[http://en.wikipedia.org/wiki/Golgi_tendon_organ][Golgi tendon organs]]. These measure the relative
2140 positions of each body part by monitoring muscle strain and length.
2142 It's clear that this is a vital sense for fluid, graceful movement.
2143 It's also particularly easy to implement in jMonkeyEngine.
2145 My simulated proprioception calculates the relative angles of each
2146 joint from the rest position defined in the blender file. This
2147 simulates the muscle-spindles and joint capsules. I will deal with
2148 Golgi tendon organs, which calculate muscle strain, in the next
2149 section.
2151 *** Helper functions
2153 =absolute-angle= calculates the angle between two vectors,
2154 relative to a third axis vector. This angle is the number of
2155 radians you have to move counterclockwise around the axis vector
2156 to get from the first to the second vector. Unlike the usual
2157 dot-product angle, it is not symmetric in its arguments.
2159 The purpose of these functions is to build a system of angle
2160 measurement that is biologically plausible.
2162 #+caption: Program to measure angles along a vector
2163 #+name: helpers
2164 #+begin_listing clojure
2165 #+BEGIN_SRC clojure
2166 (defn right-handed?
2167 "true iff the three vectors form a right handed coordinate
2168 system. The three vectors do not have to be normalized or
2169 orthogonal."
2170 [vec1 vec2 vec3]
2171 (pos? (.dot (.cross vec1 vec2) vec3)))
2173 (defn absolute-angle
2174 "The angle between 'vec1 and 'vec2 around 'axis. In the range
2175 [0 (* 2 Math/PI)]."
2176 [vec1 vec2 axis]
2177 (let [angle (.angleBetween vec1 vec2)]
2178 (if (right-handed? vec1 vec2 axis)
2179 angle (- (* 2 Math/PI) angle))))
2180 #+END_SRC
2181 #+end_listing
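For example, swapping the argument order changes the answer (a
hypothetical REPL check using jMonkeyEngine's =Vector3f= constants):
#+begin_src clojure
(import 'com.jme3.math.Vector3f)

;; X to Y, counterclockwise around Z, is a quarter turn...
(absolute-angle Vector3f/UNIT_X Vector3f/UNIT_Y Vector3f/UNIT_Z)
;; => ~1.571   (pi/2)

;; ...but Y to X around the same axis is three quarters of a turn.
(absolute-angle Vector3f/UNIT_Y Vector3f/UNIT_X Vector3f/UNIT_Z)
;; => ~4.712   (3*pi/2)
#+end_src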
2183 *** Proprioception Kernel
2185 Given a joint, =proprioception-kernel= produces a function that
2186 calculates the Euler angles between the objects the joint
2187 connects. The only tricky part here is making the angles relative
2188 to the joint's initial ``straightness''.
2190 #+caption: Program to return biologically reasonable proprioceptive
2191 #+caption: data for each joint.
2192 #+name: proprioception
2193 #+begin_listing clojure
2194 #+BEGIN_SRC clojure
2195 (defn proprioception-kernel
2196 "Returns a function which returns proprioceptive sensory data when
2197 called inside a running simulation."
2198 [#^Node parts #^Node joint]
2199 (let [[obj-a obj-b] (joint-targets parts joint)
2200 joint-rot (.getWorldRotation joint)
2201 x0 (.mult joint-rot Vector3f/UNIT_X)
2202 y0 (.mult joint-rot Vector3f/UNIT_Y)
2203 z0 (.mult joint-rot Vector3f/UNIT_Z)]
2204 (fn []
2205 (let [rot-a (.clone (.getWorldRotation obj-a))
2206 rot-b (.clone (.getWorldRotation obj-b))
2207 x (.mult rot-a x0)
2208 y (.mult rot-a y0)
2209 z (.mult rot-a z0)
2211 X (.mult rot-b x0)
2212 Y (.mult rot-b y0)
2213 Z (.mult rot-b z0)
2214 heading (Math/atan2 (.dot X z) (.dot X x))
2215 pitch (Math/atan2 (.dot X y) (.dot X x))
2217 ;; rotate x-vector back to origin
2218 reverse
2219 (doto (Quaternion.)
2220 (.fromAngleAxis
2221 (.angleBetween X x)
2222 (let [cross (.normalize (.cross X x))]
2223 (if (= 0 (.length cross)) y cross))))
2224 roll (absolute-angle (.mult reverse Y) y x)]
2225 [heading pitch roll]))))
2227 (defn proprioception!
2228 "Endow the creature with the sense of proprioception. Returns a
2229 sequence of functions, one for each child of the \"joints\" node in
2230 the creature, which each report proprioceptive information about
2231 that joint."
2232 [#^Node creature]
2233 ;; extract the body's joints
2234 (let [senses (map (partial proprioception-kernel creature)
2235 (joints creature))]
2236 (fn []
2237 (map #(%) senses))))
2238 #+END_SRC
2239 #+end_listing
2241 =proprioception!= maps =proprioception-kernel= across all the
2242 joints of the creature. It uses the same list of joints that
2243 =joints= uses. Proprioception is the easiest sense to implement in
2244 =CORTEX=, and it will play a crucial role when efficiently
2245 implementing empathy.
2247 #+caption: In the upper right corner, the three proprioceptive
2248 #+caption: angle measurements are displayed. Red is yaw, Green is
2249 #+caption: pitch, and White is roll.
2250 #+name: proprio
2251 #+ATTR_LaTeX: :width 11cm
2252 [[./images/proprio.png]]
2254 ** Muscles contain both sensors and effectors
2256 Surprisingly enough, terrestrial creatures only move by using
2257 torque applied about their joints. There's not a single straight
2258 line of force in the human body at all! (A straight line of force
2259 would correspond to some sort of jet or rocket propulsion.)
2261 In humans, muscles are composed of muscle fibers which can contract
2262 to exert force. The muscle fibers which compose a muscle are
2263 partitioned into discrete groups which are each controlled by a
2264 single alpha motor neuron. A single alpha motor neuron might
2265 control as few as three or as many as one thousand muscle
2266 fibers. When the alpha motor neuron is engaged by the spinal cord,
2267 it activates all of the muscle fibers to which it is attached. The
2268 spinal cord generally engages the alpha motor neurons which control
2269 few muscle fibers before the motor neurons which control many
2270 muscle fibers. This recruitment strategy allows for precise
2271 movements at low strength. The collection of all motor neurons that
2272 control a muscle is called the motor pool. The brain essentially
2273 says "activate 30% of the motor pool" and the spinal cord recruits
2274 motor neurons until 30% are activated. Since the distribution of
2275 power among motor neurons is unequal and recruitment goes from
2276 weakest to strongest, the first 30% of the motor pool might be 5%
2277 of the strength of the muscle.
2279 My simulated muscles follow a similar design: Each muscle is
2280 defined by a 1-D array of numbers (the "motor pool"). Each entry in
2281 the array represents a motor neuron which controls a number of
2282 muscle fibers equal to the value of the entry. Each muscle has a
2283 scalar strength factor which determines the total force the muscle
2284 can exert when all motor neurons are activated. The effector
2285 function for a muscle takes a number to index into the motor pool,
2286 and then "activates" all the motor neurons whose index is lower or
2287 equal to the number. Each motor-neuron will apply force in
2288 proportion to its value in the array. Lower values cause less
2289 force. The lower values can be put at the "beginning" of the 1-D
2290 array to simulate the layout of actual human muscles, which are
2291 capable of more precise movements when exerting less force. Or, the
2292 motor pool can simulate more exotic recruitment strategies which do
2293 not correspond to human muscles.
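As a concrete illustration of this recruitment scheme (the pool,
strength, and function name here are made up; the jMonkeyEngine
plumbing of the real =movement-kernel= below is omitted), activating
motor neuron =n= engages the cumulative strength of every motor
neuron up to and including =n=:
#+begin_src clojure
;; Illustrative sketch of recruitment: activate every motor neuron up
;; to and including index n, and return the fraction of 'strength exerted.
(defn recruited-force-sketch [pool strength n]
  (let [total   (reduce + pool)
        engaged (reduce + (take (inc n) pool))]
    (* strength (/ engaged total))))

;; With a pool of [1 2 4 8] and strength 10, activating the two weakest
;; motor neurons recruits only 3/15 of the muscle's strength:
;; (recruited-force-sketch [1 2 4 8] 10 1) => 2
#+end_src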
2295 This 1D array is defined in an image file for ease of
2296 creation/visualization. Here is an example muscle profile image.
2298 #+caption: A muscle profile image that describes the strengths
2299 #+caption: of each motor neuron in a muscle. White is weakest
2300 #+caption: and dark red is strongest. This particular pattern
2301 #+caption: has weaker motor neurons at the beginning, just
2302 #+caption: like human muscle.
2303 #+name: muscle-recruit
2304 #+ATTR_LaTeX: :width 7cm
2305 [[./images/basic-muscle.png]]
2307 *** Muscle meta-data
2309 #+caption: Program to deal with loading muscle data from a blender
2310 #+caption: file's metadata.
2311 #+name: motor-pool
2312 #+begin_listing clojure
2313 #+BEGIN_SRC clojure
2314 (defn muscle-profile-image
2315 "Get the muscle-profile image from the node's blender meta-data."
2316 [#^Node muscle]
2317 (if-let [image (meta-data muscle "muscle")]
2318 (load-image image)))
2320 (defn muscle-strength
2321 "Return the strength of this muscle, or 1 if it is not defined."
2322 [#^Node muscle]
2323 (if-let [strength (meta-data muscle "strength")]
2324 strength 1))
2326 (defn motor-pool
2327 "Return a vector where each entry is the strength of the \"motor
2328 neuron\" at that part in the muscle."
2329 [#^Node muscle]
2330 (let [profile (muscle-profile-image muscle)]
2331 (vec
2332 (let [width (.getWidth profile)]
2333 (for [x (range width)]
2334 (- 255
2335 (bit-and
2336 0x0000FF
2337 (.getRGB profile x 0))))))))
2338 #+END_SRC
2339 #+end_listing
2341 Of note here is =motor-pool= which interprets the muscle-profile
2342 image in a way that allows me to use gradients between white and
2343 red, instead of shades of gray as I've been using for all the
2344 other senses. This is purely an aesthetic touch.
2346 *** Creating muscles
2348 #+caption: This is the core movement function in =CORTEX=, which
2349 #+caption: implements muscles that report on their activation.
2350 #+name: muscle-kernel
2351 #+begin_listing clojure
2352 #+BEGIN_SRC clojure
2353 (defn movement-kernel
2354 "Returns a function which when called with a integer value inside a
2355 running simulation will cause movement in the creature according
2356 to the muscle's position and strength profile. Each function
2357 returns the amount of force applied / max force."
2358 [#^Node creature #^Node muscle]
2359 (let [target (closest-node creature muscle)
2360 axis
2361 (.mult (.getWorldRotation muscle) Vector3f/UNIT_Y)
2362 strength (muscle-strength muscle)
2364 pool (motor-pool muscle)
2365 pool-integral (reductions + pool)
2366 forces
2367 (vec (map #(float (* strength (/ % (last pool-integral))))
2368 pool-integral))
2369 control (.getControl target RigidBodyControl)]
2370 ;;(println-repl (.getName target) axis)
2371 (fn [n]
2372 (let [pool-index (max 0 (min n (dec (count pool))))
2373 force (forces pool-index)]
2374 (.applyTorque control (.mult axis force))
2375 (float (/ force strength))))))
2377 (defn movement!
2378 "Endow the creature with the power of movement. Returns a sequence
2379 of functions, each of which accept an integer value and will
2380 activate their corresponding muscle."
2381 [#^Node creature]
2382 (for [muscle (muscles creature)]
2383 (movement-kernel creature muscle)))
2384 #+END_SRC
2385 #+end_listing
2388 =movement-kernel= creates a function that controls the movement
2389 of the nearest physical node to the muscle node. The muscle exerts
2390 a rotational force dependent on its orientation to the object in
2391 the blender file. The function returned by =movement-kernel= is
2392 also a sense function: it returns the percent of the total muscle
2393 strength that is currently being employed. This is analogous to
2394 muscle tension in humans and completes the sense of proprioception
2395 begun in the last section.
2397 ** =CORTEX= brings complex creatures to life!
2399 The ultimate test of =CORTEX= is to create a creature with the full
2400 gamut of senses and put it through its paces.
2402 With all senses enabled, my right hand model looks like an
2403 intricate marionette hand with several strings for each finger:
2405 #+caption: View of the hand model with all sense nodes. You can see
2406 #+caption: the joint, muscle, ear, and eye nodes here.
2407 #+name: hand-nodes-1
2408 #+ATTR_LaTeX: :width 11cm
2409 [[./images/hand-with-all-senses2.png]]
2411 #+caption: An alternate view of the hand.
2412 #+name: hand-nodes-2
2413 #+ATTR_LaTeX: :width 15cm
2414 [[./images/hand-with-all-senses3.png]]
2416 With the hand fully rigged with senses, I can run it through a
2417 test that exercises everything at once.
2419 #+caption: A full test of the hand with all senses. Note especially
2420 #+caption: the interactions the hand has with itself: it feels
2421 #+caption: its own palm and fingers, and when it curls its fingers,
2422 #+caption: it sees them with its eye (which is located in the center
2423 #+caption: of the palm). The red block appears with a pure tone sound.
2424 #+caption: The hand then uses its muscles to launch the cube!
2425 #+name: integration
2426 #+ATTR_LaTeX: :width 16cm
2427 [[./images/integration.png]]
2429 ** =CORTEX= enables many possibilities for further research
2431 Oftentimes, the hardest part of building a system involving
2432 creatures is dealing with physics and graphics. =CORTEX= removes
2433 much of this initial difficulty and leaves researchers free to
2434 directly pursue their ideas. I hope that even undergrads with a
2435 passing curiosity about simulated touch or creature evolution will
2436 be able to use =CORTEX= for experimentation. =CORTEX= is a completely
2437 simulated world, and far from being a disadvantage, its simulated
2438 nature enables you to create senses and creatures that would be
2439 impossible to make in the real world.
2441 While not by any means a complete list, here are some paths
2442 =CORTEX= is well suited to help you explore:
2444 - Empathy :: my empathy program leaves many areas for
2445 improvement, among which are using vision to infer
2446 proprioception and looking up sensory experience with imagined
2447 vision, touch, and sound.
2448 - Evolution :: Karl Sims created a rich environment for simulating
2449 the evolution of creatures on a Connection Machine
2450 (\cite{sims-evolving-creatures}). Today, this can be redone
2451 and expanded with =CORTEX= on an ordinary computer.
2452 - Exotic senses :: =CORTEX= enables many fascinating senses that are
2453 not possible to build in the real world. For example,
2454 telekinesis is an interesting avenue to explore. You can also
2455 make a ``semantic'' sense which looks up metadata tags on
2456 objects in the environment; the metadata tags might contain
2457 other sensory information.
2458 - Imagination via subworlds :: this would involve a creature with
2459 an effector which creates an entirely new sub-simulation where
2460 the creature has direct control over placement/creation of
2461 objects via simulated telekinesis. The creature observes this
2462 sub-world through its normal senses and uses its observations
2463 to make predictions about its top level world.
2464 - Simulated prescience :: step the simulation forward a few ticks,
2465 gather sensory data, then supply this data to the creature as
2466 one of its actual senses. The cost of prescience is slowing
2467 the simulation down by a factor proportional to however far
2468 you want the entities to see into the future. What happens
2469 when two evolved creatures that can each see into the future
2470 fight each other?
2471 - Swarm creatures :: Program a group of creatures that cooperate
2472 with each other. Because the creatures would be simulated, you
2473 could investigate computationally complex rules of behavior
2474 which still, from the group's point of view, would happen in
2475 real time. Interactions could be as simple as cellular
2476 organisms communicating via flashing lights, or as complex as
2477 humanoids completing social tasks, etc.
2478 - =HACKER= for writing muscle-control programs :: Presented with a
2479 low-level muscle control / sense API, generate higher level
2480 programs for accomplishing various stated goals. Example goals
2481 might be "extend all your fingers" or "move your hand into the
2482 area with blue light" or "decrease the angle of this joint".
2483 It would be like Sussman's HACKER, except it would operate
2484 with much more data in a more realistic world. Start off with
2485 "calisthenics" to develop subroutines over the motor control
2486 API. The low-level programming code might be a Turing machine
2487 that could develop programs to iterate over a "tape" where
2488 each entry in the tape could control recruitment of the fibers
2489 in a muscle.
2490 - Sense fusion :: There is much work to be done on sense
2491 integration -- building up a coherent picture of the world and
2492 the things in it. With =CORTEX= as a base, you can explore
2493 concepts like self-organizing maps or cross modal clustering
2494 in ways that have never before been tried.
2495 - Inverse kinematics :: experiments in sense guided motor control
2496 are easy given =CORTEX='s support -- you can get right to the
2497 hard control problems without worrying about physics or
2498 senses.
2500 \newpage
2502 * =EMPATH=: action recognition in a simulated worm
2504 Here I develop a computational model of empathy, using =CORTEX= as a
2505 base. Empathy in this context is the ability to observe another
2506 creature and infer what sorts of sensations that creature is
2507 feeling. My empathy algorithm involves multiple phases. First is
2508 free-play, where the creature moves around and gains sensory
2509 experience. From this experience I construct a representation of the
2510 creature's sensory state space, which I call \Phi-space. Using
2511 \Phi-space, I construct an efficient function which takes the
2512 limited data that comes from observing another creature and enriches
2513 it with a full complement of imagined sensory data. I can then use
2514 the imagined sensory data to recognize what the observed creature is
2515 doing and feeling, using straightforward embodied action predicates.
2516 This is all demonstrated using a simple worm-like creature, and
2517 recognizing worm-actions based on limited data.
2519 #+caption: Here is the worm with which we will be working.
2520 #+caption: It is composed of 5 segments. Each segment has a
2521 #+caption: pair of extensor and flexor muscles. Each of the
2522 #+caption: worm's four joints is a hinge joint which allows
2523 #+caption: about 30 degrees of rotation to either side. Each segment
2524 #+caption: of the worm is touch-capable and has a uniform
2525 #+caption: distribution of touch sensors on each of its faces.
2526 #+caption: Each joint has a proprioceptive sense to detect
2527 #+caption: relative positions. The worm segments are all the
2528 #+caption: same except for the first one, which has a much
2529 #+caption: higher weight than the others to allow for easy
2530 #+caption: manual motor control.
2531 #+name: basic-worm-view
2532 #+ATTR_LaTeX: :width 10cm
2533 [[./images/basic-worm-view.png]]
2535 #+caption: Program for reading a worm from a blender file and
2536 #+caption: outfitting it with the senses of proprioception,
2537 #+caption: touch, and the ability to move, as specified in the
2538 #+caption: blender file.
2539 #+name: get-worm
2540 #+begin_listing clojure
2541 #+begin_src clojure
2542 (defn worm []
2543 (let [model (load-blender-model "Models/worm/worm.blend")]
2544 {:body (doto model (body!))
2545 :touch (touch! model)
2546 :proprioception (proprioception! model)
2547 :muscles (movement! model)}))
2548 #+end_src
2549 #+end_listing
2551 ** Embodiment factors action recognition into manageable parts
2553 Using empathy, I divide the problem of action recognition into a
2554 recognition process expressed in the language of a full complement
2555 of senses, and an imaginative process that generates full sensory
2556 data from partial sensory data. Splitting the action recognition
2557 problem in this manner greatly reduces the total amount of work to
2558 recognize actions: The imaginative process is mostly just matching
2559 previous experience, and the recognition process gets to use all
2560 the senses to directly describe any action.
2562 ** Action recognition is easy with a full gamut of senses
2564 Embodied representations using multiple senses such as touch,
2565 proprioception, and muscle tension turn out to be exceedingly
2566 efficient at describing body-centered actions. It is the right
2567 language for the job. For example, it takes only around 5 lines of
2568 LISP code to describe the action of curling using embodied
2569 primitives. It takes about 10 lines to describe the seemingly
2570 complicated action of wiggling.
2572 The following action predicates each take a stream of sensory
2573 experience, observe however much of it they desire, and decide
2574 whether the worm is doing the action they describe. =curled?=
2575 relies on proprioception, =resting?= relies on touch, =wiggling?=
2576 relies on a Fourier analysis of muscle contraction, and
2577 =grand-circle?= relies on touch and reuses =curled?= in its
2578 definition, showing how embodied predicates can be composed.
2581 #+caption: Program for detecting whether the worm is curled. This is the
2582 #+caption: simplest action predicate, because it only uses the last frame
2583 #+caption: of sensory experience, and only uses proprioceptive data. Even
2584 #+caption: this simple predicate, however, is automatically frame
2585 #+caption: independent and ignores vermopomorphic\protect\footnotemark
2586 #+caption: \space differences such as worm textures and colors.
2587 #+name: curled
2588 #+begin_listing clojure
2589 #+begin_src clojure
2590 (defn curled?
2591 "Is the worm curled up?"
2592 [experiences]
2593 (every?
2594 (fn [[_ _ bend]]
2595 (> (Math/sin bend) 0.64))
2596 (:proprioception (peek experiences))))
2597 #+end_src
2598 #+end_listing
2600 #+BEGIN_LaTeX
2601 \footnotetext{Like \emph{anthropomorphic} except for worms instead of humans.}
2602 #+END_LaTeX
2604 #+caption: Program for summarizing the touch information in a patch
2605 #+caption: of skin.
2606 #+name: touch-summary
2607 #+begin_listing clojure
2608 #+begin_src clojure
2609 (defn contact
2610 "Determine how much contact a particular worm segment has with
2611 other objects. Returns a value between 0 and 1, where 1 is full
2612 contact and 0 is no contact."
2613 [touch-region [coords contact :as touch]]
2614 (-> (zipmap coords contact)
2615 (select-keys touch-region)
2616 (vals)
2617 (#(map first %))
2618 (average)
2619 (* 10)
2620 (- 1)
2621 (Math/abs)))
2622 #+end_src
2623 #+end_listing
2626 #+caption: Program for detecting whether the worm is at rest. This program
2627 #+caption: uses a summary of the tactile information from the underbelly
2628 #+caption: of the worm, and is only true if every segment is touching the
2629 #+caption: floor. Note that this function contains no references to
2630 #+caption: proprioception at all.
2631 #+name: resting
2632 #+begin_listing clojure
2633 #+begin_src clojure
2634 (def worm-segment-bottom (rect-region [8 15] [14 22]))
2636 (defn resting?
2637 "Is the worm resting on the ground?"
2638 [experiences]
2639 (every?
2640 (fn [touch-data]
2641 (< 0.9 (contact worm-segment-bottom touch-data)))
2642 (:touch (peek experiences))))
2643 #+end_src
2644 #+end_listing
2646 #+caption: Program for detecting whether the worm is curled up into a
2647 #+caption: full circle. Here the embodied approach begins to shine, as
2648 #+caption: I am able to both use a previous action predicate (=curled?=)
2649 #+caption: as well as the direct tactile experience of the head and tail.
2650 #+name: grand-circle
2651 #+begin_listing clojure
2652 #+begin_src clojure
2653 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
2655 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
2657 (defn grand-circle?
2658 "Does the worm form a majestic circle (one end touching the other)?"
2659 [experiences]
2660 (and (curled? experiences)
2661 (let [worm-touch (:touch (peek experiences))
2662 tail-touch (worm-touch 0)
2663 head-touch (worm-touch 4)]
2664 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
2665 (< 0.55 (contact worm-segment-top-tip head-touch))))))
2666 #+end_src
2667 #+end_listing
2670 #+caption: Program for detecting whether the worm has been wiggling for
2671 #+caption: the last few frames. It uses a Fourier analysis of the muscle
2672 #+caption: contractions of the worm's tail to determine wiggling. This is
2673 #+caption: significant because there is no particular frame that clearly
2674 #+caption: indicates that the worm is wiggling --- only when multiple frames
2675 #+caption: are analyzed together is the wiggling revealed. Defining
2676 #+caption: wiggling this way also gives the worm an opportunity to learn
2677 #+caption: and recognize ``frustrated wiggling'', where the worm tries to
2678 #+caption: wiggle but can't. Frustrated wiggling is very visually different
2679 #+caption: from actual wiggling, but this definition gives it to us for free.
2680 #+name: wiggling
2681 #+begin_listing clojure
2682 #+begin_src clojure
2683 (defn fft [nums]
2684 (map
2685 #(.getReal %)
2686 (.transform
2687 (FastFourierTransformer. DftNormalization/STANDARD)
2688 (double-array nums) TransformType/FORWARD)))
2690 (def indexed (partial map-indexed vector))
2692 (defn max-indexed [s]
2693 (first (sort-by (comp - second) (indexed s))))
2695 (defn wiggling?
2696 "Is the worm wiggling?"
2697 [experiences]
2698 (let [analysis-interval 0x40]
2699 (when (> (count experiences) analysis-interval)
2700 (let [a-flex 3
2701 a-ex 2
2702 muscle-activity
2703 (map :muscle (vector:last-n experiences analysis-interval))
2704 base-activity
2705 (map #(- (% a-flex) (% a-ex)) muscle-activity)]
2706 (= 2
2707 (first
2708 (max-indexed
2709 (map #(Math/abs %)
2710 (take 20 (fft base-activity))))))))))
2711 #+end_src
2712 #+end_listing
2714 With these action predicates, I can now recognize the actions of
2715 the worm while it is moving under my control and I have access to
2716 all the worm's senses.
2718 #+caption: Use the action predicates defined earlier to report on
2719 #+caption: what the worm is doing while in simulation.
2720 #+name: report-worm-activity
2721 #+begin_listing clojure
2722 #+begin_src clojure
2723 (defn debug-experience
2724 [experiences text]
2725 (cond
2726 (grand-circle? experiences) (.setText text "Grand Circle")
2727 (curled? experiences) (.setText text "Curled")
2728 (wiggling? experiences) (.setText text "Wiggling")
2729 (resting? experiences) (.setText text "Resting")))
2730 #+end_src
2731 #+end_listing
2733 #+caption: Using =debug-experience=, the body-centered predicates
2734 #+caption: work together to classify the behavior of the worm.
2735 #+caption: The predicates are operating with access to the worm's
2736 #+caption: full sensory data.
2737 #+name: worm-identify-init
2738 #+ATTR_LaTeX: :width 10cm
2739 [[./images/worm-identify-init.png]]
2741 These action predicates satisfy the recognition requirement of an
2742 empathic recognition system. There is power in the simplicity of
2743 the action predicates. They describe their actions without getting
2744 confused by visual details of the worm. Each one is independent of
2745 position and rotation, but more than that, they are each
2746 independent of irrelevant visual details of the worm and the
2747 environment. They will work regardless of whether the worm is a
2748 different color or heavily textured, or if the environment has
2749 strange lighting.
2751 Consider how the human act of jumping might be described with
2752 body-centered action predicates: You might specify that jumping is
2753 mainly the feeling of your knees bending, your thigh muscles
2754 contracting, and your inner ear experiencing a certain sort of back
2755 and forth acceleration. This representation is a very concrete
2756 description of jumping, couched in terms of muscles and senses, but
2757 it also has the ability to describe almost all kinds of jumping, a
2758 generality that you might think could only be achieved by a very
2759 abstract description. The body-centered jumping predicate does not
2760 have terms that consider the color of a person's skin or whether
2761 they are male or female; instead, it gets right to the meat of what
2762 jumping actually /is/.
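
To make this concrete, here is a minimal sketch of what such a
predicate might look like, in the same style as the worm predicates.
The sensor keys (=:knee-bend=, =:thigh-tension=,
=:vertical-acceleration=) and the thresholds are purely hypothetical;
they are not senses provided by =CORTEX=.

#+begin_src clojure
;; Hypothetical body-centered ``jumping'' predicate. Assumes each
;; experience frame is a map with the illustrative keys below.
(defn jumping?
  "Is the (hypothetical) human-like creature jumping?"
  [experiences]
  (let [{:keys [knee-bend thigh-tension vertical-acceleration]}
        (peek experiences)]
    (and (< 0.5 knee-bend)
         (< 0.7 thigh-tension)
         (< 2.0 (Math/abs (double vertical-acceleration))))))
#+end_src
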
2764 Of course, the action predicates are not directly applicable to
2765 video data, which lacks the advanced sensory information which they
2766 require!
2768 The trick now is to make the action predicates work even when the
2769 sensory data on which they depend is absent. If I can do that, then
2770 I will have gained much.
2772 ** \Phi-space describes the worm's experiences
2774 As a first step towards building empathy, I need to gather all of
2775 the worm's experiences during free play. I use a simple vector to
2776 store all the experiences.
2778 Each element of the experience vector exists in the vast space of
2779 all possible worm-experiences. Most of this vast space is actually
2780 unreachable due to physical constraints of the worm's body. For
2781 example, the worm's segments are connected by hinge joints that put
2782 a practical limit on the worm's range of motions without limiting
2783 its degrees of freedom. Some groupings of senses are impossible;
2784 the worm cannot be bent into a circle so that its ends are
2785 touching and at the same time not also experience the sensation of
2786 touching itself.
2788 As the worm moves around during free play and its experience vector
2789 grows larger, the vector begins to define a subspace which is all
2790 the sensations the worm can practically experience during normal
2791 operation. I call this subspace \Phi-space, short for
2792 physical-space. The experience vector defines a path through
2793 \Phi-space. This path has interesting properties that all derive
2794 from physical embodiment. The proprioceptive components of the path
2795 vary smoothly, because in order for the worm to move from one
2796 position to another, it must pass through the intermediate
2797 positions. The path invariably forms loops as common actions are
2798 repeated. Finally and most importantly, proprioception alone
2799 actually gives very strong inference about the other senses. For
2800 example, when the worm is proprioceptively flat over several
2801 frames, you can infer that it is touching the ground and that its
2802 muscles are not active, because if the muscles were active, the
2803 worm would be moving and would not remain perfectly flat. In order
2804 to stay flat, the worm has to be touching the ground, or it would
2805 again be moving out of the flat position due to gravity. If the
2806 worm is positioned in such a way that it interacts with itself,
2807 then it is very likely to be feeling the same tactile feelings as
2808 the last time it was in that position, because it has the same body
2809 as then. As you observe multiple frames of proprioceptive data, you
2810 can become increasingly confident about the exact activations of
2811 the worm's muscles, because it generally takes a unique combination
2812 of muscle contractions to transform the worm's body along a
2813 specific path through \Phi-space.
2815 The worm's total life experience is a long looping path through
2816 \Phi-space. I will now introduce a simple way of taking that
2817 experience path and building a function that can infer complete
2818 sensory experience given only a stream of proprioceptive data. This
2819 /empathy/ function will provide a bridge to use the body centered
2820 action predicates on video-like streams of information.
2822 ** Empathy is the process of building paths in \Phi-space
2824 Here is the core of a basic empathy algorithm, starting with an
2825 experience vector:
2827 An /experience-index/ is an index into the grand experience vector
2828 that defines the worm's life. It is a time-stamp for each set of
2829 sensations the worm has experienced.
2831 First, group the experience-indices into bins according to the
2832 similarity of their proprioceptive data. I organize my bins into a
2833 3 level hierarchy. The smallest bins have an approximate size of
2834 0.001 radians in all proprioceptive dimensions. Each higher level
2835 is 10x bigger than the level below it.
2837 The bins serve as a hashing function for proprioceptive data. Given
2838 a single piece of proprioceptive experience, the bins allow us to
2839 rapidly find the experience-indices of all past experiences
2840 that had a very similar proprioceptive configuration.
2841 When looking up a proprioceptive experience, if the smallest bin
2842 does not match any previous experience, then successively larger
2843 bins are used until a match is found or we reach the largest bin.
2845 Given a sequence of proprioceptive input, I use the tiered
2846 proprioceptive bins to generate a set of similar past
2847 experiences for each input.
2849 Finally, to infer sensory data, I select the longest consecutive
2850 chain of experiences that threads through the sets of similar
2851 experiences, starting with the current moment as a root and going
2852 backwards. Consecutive experience means that the experiences appear
2853 next to each other in the experience vector.
2855 A stream of proprioceptive input might be:
2857 #+BEGIN_EXAMPLE
2858 [ flat, flat, flat, flat, flat, flat, flat, lift-head ]
2859 #+END_EXAMPLE
2861 The worm's previous experience of lying on the ground and lifting
2862 its head generates possible interpretations for each frame (the
2863 numbers are experience-indices):
2865 #+BEGIN_EXAMPLE
2866 [ flat, flat, flat, flat, flat, flat, flat, lift-head ]
2867 1 1 1 1 1 1 1 4
2868 2 2 2 2 2 2 2
2869 3 3 3 3 3 3 3
2870 7 7 7 7 7 7 7
2871 8 8 8 8 8 8 8
2872 9 9 9 9 9 9 9
2873 #+END_EXAMPLE
2875 These interpretations suggest a new path through phi space:
2877 #+BEGIN_EXAMPLE
2878 [ flat, flat, flat, flat, flat, flat, flat, lift-head ]
2879 6 7 8 9 1 2 3 4
2880 #+END_EXAMPLE
2882 The new path through \Phi-space is synthesized from two actual
2883 paths that the creature has experienced: the "1-2-3-4" chain and
2884 the "6-7-8-9" chain. The "1-2-3-4" chain is necessary because it
2885 ends with the worm lifting its head. It originated from a short
2886 training session where the worm rested on the floor for a brief
2887 while and then raised its head. The "6-7-8-9" chain is part of a
2888 longer chain of inactivity where the worm simply rested on the
2889 floor without moving. It is preferred over a "1-2-3" chain (which
2890 also describes inactivity) because it is longer. The main ideas
2891 again:
2893 - Imagined \Phi-space paths are synthesized by looping and mixing
2894 previous experiences.
2896 - Longer experience paths (fewer edits) are preferred.
2898 - The present is more important than the past --- more recent
2899 events take precedence in interpretation.
2901 This algorithm has three advantages:
2903 1. It's simple
2905 2. It's very fast -- retrieving possible interpretations takes
2906 constant time. Tracing through chains of interpretations takes
2907 time proportional to the average number of experiences in a
2908 proprioceptive bin. Redundant experiences in \Phi-space can be
2909 merged to save computation.
2911 3. It protects against wrong interpretations of transient, ambiguous
2912 proprioceptive data. For example, if the worm is flat for just
2913 an instant, this flatness will not be interpreted as implying
2914 that the worm has its muscles relaxed, since the flatness is
2915 part of a longer chain which includes a distinct pattern of
2916 muscle activation. Markov chains or other memoryless statistical
2917 models that operate on individual frames may very well make this
2918 mistake.
2920 #+caption: Program to convert an experience vector into a
2921 #+caption: proprioceptively binned lookup function.
2922 #+name: bin
2923 #+begin_listing clojure
2924 #+begin_src clojure
2925 (defn bin [digits]
2926 (fn [angles]
2927 (->> angles
2928 (flatten)
2929 (map (juxt #(Math/sin %) #(Math/cos %)))
2930 (flatten)
2931 (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
2933 (defn gen-phi-scan
2934 "Nearest-neighbors with binning. Only returns a result if
2935 the proprioceptive data is within 10% of a previously recorded
2936 result in all dimensions."
2937 [phi-space]
2938 (let [bin-keys (map bin [3 2 1])
2939 bin-maps
2940 (map (fn [bin-key]
2941 (group-by
2942 (comp bin-key :proprioception phi-space)
2943 (range (count phi-space)))) bin-keys)
2944 lookups (map (fn [bin-key bin-map]
2945 (fn [proprio] (bin-map (bin-key proprio))))
2946 bin-keys bin-maps)]
2947 (fn lookup [proprio-data]
2948 (set (some #(% proprio-data) lookups)))))
2949 #+end_src
2950 #+end_listing
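
To illustrate what the binning function produces, here is a
hypothetical REPL interaction using a single made-up joint angle; the
vectors shown are what =bin= should compute for that input.

#+begin_src clojure
;; One joint at [heading pitch roll] = [0.0 0.0 0.52] radians becomes
;; a vector of rounded sin/cos pairs; coarser bins round more
;; aggressively, so nearby angles collide into the same key.
((bin 3) [[0.0 0.0 0.52]])  ;; => [0 100 0 100 50 87]
((bin 1) [[0.0 0.0 0.52]])  ;; => [0 1 0 1 0 1]
#+end_src
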
2952 #+caption: =longest-thread= finds the longest path of consecutive
2953 #+caption: past experiences that explains new proprioceptive worm
2954 #+caption: data. Here, the film strip represents the
2955 #+caption: creature's previous experience. Short sequences of
2956 #+caption: memories are spliced together to match the
2957 #+caption: proprioceptive data. They carry the other senses
2958 #+caption: along with them.
2959 #+name: phi-space-history-scan
2960 #+ATTR_LaTeX: :width 10cm
2961 [[./images/film-of-imagination.png]]
2963 =longest-thread= infers sensory data by stitching together pieces
2964 from previous experience. It prefers longer chains of previous
2965 experience to shorter ones. For example, during training the worm
2966 might rest on the ground for one second before it performs its
2967 exercises. If during recognition the worm rests on the ground for
2968 five seconds, =longest-thread= will accommodate this five second
2969 rest period by looping the one second rest chain five times.
2971 =longest-thread= takes time proportional to the average number of
2972 entries in a proprioceptive bin, because for each element in the
2973 starting bin it performs a series of set lookups in the preceding
2974 bins. If the total history is limited, then this takes time
2975 proportional to only a constant multiple of the number of entries
2976 in the starting bin. This analysis also applies even if the action
2977 requires multiple longest chains -- it's still the average number
2978 of entries in a proprioceptive bin times the desired chain length.
2979 Because =longest-thread= is so efficient and simple, I can
2980 interpret worm-actions in real time.
2982 #+caption: Program to calculate empathy by tracing though \Phi-space
2983 #+caption: and finding the longest (i.e., most coherent) interpretation
2984 #+caption: of the data.
2985 #+name: longest-thread
2986 #+begin_listing clojure
2987 #+begin_src clojure
2988 (defn longest-thread
2989 "Find the longest thread from phi-index-sets. The index sets should
2990 be ordered from most recent to least recent."
2991 [phi-index-sets]
2992 (loop [result '()
2993 [thread-bases & remaining :as phi-index-sets] phi-index-sets]
2994 (if (empty? phi-index-sets)
2995 (vec result)
2996 (let [threads
2997 (for [thread-base thread-bases]
2998 (loop [thread (list thread-base)
2999 remaining remaining]
3000 (let [next-index (dec (first thread))]
3001 (cond (empty? remaining) thread
3002 (contains? (first remaining) next-index)
3003 (recur
3004 (cons next-index thread) (rest remaining))
3005 :else thread))))
3006 longest-thread
3007 (reduce (fn [thread-a thread-b]
3008 (if (> (count thread-a) (count thread-b))
3009 thread-a thread-b))
3010 '(nil)
3011 threads)]
3012 (recur (concat longest-thread result)
3013 (drop (count longest-thread) phi-index-sets))))))
3014 #+end_src
3015 #+end_listing
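
A small hypothetical call illustrates the behavior. Suppose the
binned lookup produced the index sets below, ordered from the most
recent frame to the least recent. Indices 10, 11, and 12 form the
longest consecutive chain ending at the present moment, and the
oldest frame is then explained by a separate one-element chain.

#+begin_src clojure
;; Hypothetical input: one index set per frame, most recent first.
(longest-thread [#{12} #{11 2} #{10 1} #{5}])
;; => [5 10 11 12]   ; the returned thread is ordered oldest first
#+end_src
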
3017 There is one final piece, which is to replace missing sensory data
3018 with a best-guess estimate. While I could fill in missing data by
3019 using a gradient over the closest known sensory data points,
3020 averages can be misleading. It is certainly possible to create an
3021 impossible sensory state by averaging two possible sensory states.
3022 For example, consider moving your hand in an arc over your head. If
3023 for some reason you only have the initial and final positions of
3024 this movement in your \Phi-space, averaging them together will
3025 produce the proprioceptive sensation of having your hand /inside/
3026 your head, which is physically impossible to ever experience
3027 (barring motor adaptation illusions). Therefore I simply replicate
3028 the most recent sensory experience to fill in the gaps.
3030 #+caption: Fill in blanks in sensory experience by replicating the most
3031 #+caption: recent experience.
3032 #+name: infer-nils
3033 #+begin_listing clojure
3034 #+begin_src clojure
3035 (defn infer-nils
3036 "Replace nils with the next available non-nil element in the
3037 sequence, or barring that, 0."
3038 [s]
3039 (loop [i (dec (count s))
3040 v (transient s)]
3041 (if (zero? i) (persistent! v)
3042 (if-let [cur (v i)]
3043 (if (get v (dec i) 0)
3044 (recur (dec i) v)
3045 (recur (dec i) (assoc! v (dec i) cur)))
3046 (recur i (assoc! v i 0))))))
3047 #+end_src
3048 #+end_listing
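
For example, =infer-nils= should behave like this hypothetical REPL
interaction:

#+begin_src clojure
;; nils are filled from the next non-nil entry; trailing nils become 0.
(infer-nils [1 nil 3 nil nil])  ;; => [1 3 3 0 0]
#+end_src
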
3050 ** =EMPATH= recognizes actions efficiently
3052 To use =EMPATH= with the worm, I first need to gather a set of
3053 experiences from the worm that includes the actions I want to
3054 recognize. The =generate-phi-space= program (listing
3055 \ref{generate-phi-space}) runs the worm through a series of
3056 exercises and gathers those experiences into a vector. The
3057 =do-all-the-things= program is a routine expressed in a simple
3058 muscle contraction script language for automated worm control. It
3059 causes the worm to rest, curl, and wiggle over about 700 frames
3060 (approx. 11 seconds).
3062 #+caption: Program to gather the worm's experiences into a vector for
3063 #+caption: further processing. The =motor-control-program= line uses
3064 #+caption: a motor control script that causes the worm to execute a series
3065 #+caption: of ``exercises'' that include all the action predicates.
3066 #+name: generate-phi-space
3067 #+begin_listing clojure
3068 #+begin_src clojure
3069 (def do-all-the-things
3070 (concat
3071 curl-script
3072 [[300 :d-ex 40]
3073 [320 :d-ex 0]]
3074 (shift-script 280 (take 16 wiggle-script))))
3076 (defn generate-phi-space []
3077 (let [experiences (atom [])]
3078 (run-world
3079 (apply-map
3080 worm-world
3081 (merge
3082 (worm-world-defaults)
3083 {:end-frame 700
3084 :motor-control
3085 (motor-control-program worm-muscle-labels do-all-the-things)
3086 :experiences experiences})))
3087 @experiences))
3088 #+end_src
3089 #+end_listing
3091 #+caption: Use =longest-thread= and a \Phi-space generated from a short
3092 #+caption: exercise routine to interpret actions during free play.
3093 #+name: empathy-debug
3094 #+begin_listing clojure
3095 #+begin_src clojure
3096 (defn init []
3097 (def phi-space (generate-phi-space))
3098 (def phi-scan (gen-phi-scan phi-space)))
3100 (defn empathy-demonstration []
3101 (let [proprio (atom ())]
3102 (fn
3103 [experiences text]
3104 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
3105 (swap! proprio (partial cons phi-indices))
3106 (let [exp-thread (longest-thread (take 300 @proprio))
3107 empathy (mapv phi-space (infer-nils exp-thread))]
3108 (println-repl (vector:last-n exp-thread 22))
3109 (cond
3110 (grand-circle? empathy) (.setText text "Grand Circle")
3111 (curled? empathy) (.setText text "Curled")
3112 (wiggling? empathy) (.setText text "Wiggling")
3113 (resting? empathy) (.setText text "Resting")
3114 :else (.setText text "Unknown")))))))
3116 (defn empathy-experiment [record]
3117 (.start (worm-world :experience-watch (debug-experience-phi)
3118 :record record :worm worm*)))
3119 #+end_src
3120 #+end_listing
3122 These programs create a test for the empathy system. First, the
3123 worm's \Phi-space is generated from a simple motor script. Then the
3124 worm is re-created in an environment almost identical to the
3125 testing environment for the action-predicates, with one major
3126 difference: the only sensory information available to the system
3127 is proprioception. From just the proprioception data and
3128 \Phi-space, =longest-thread= synthesizes a complete record of the last
3129 300 sensory experiences of the worm. These synthesized experiences
3130 are fed directly into the action predicates =grand-circle?=,
3131 =curled?=, =wiggling?=, and =resting?= from before and their output
3132 is printed to the screen at each frame.
3134 The result of running =empathy-experiment= is that the system is
3135 generally able to interpret worm actions using the action-predicates
3136 on simulated sensory data just as well as with actual data. Figure
3137 \ref{empathy-debug-image} was generated using =empathy-experiment=:
3139 #+caption: From only proprioceptive data, =EMPATH= was able to infer
3140 #+caption: the complete sensory experience and classify four poses
3141 #+caption: (The last panel shows a composite image of /wiggling/,
3142 #+caption: a dynamic pose.)
3143 #+name: empathy-debug-image
3144 #+ATTR_LaTeX: :width 10cm :placement [H]
3145 [[./images/empathy-1.png]]
3147 One way to measure the performance of =EMPATH= is to check whether
3148 the imagined sense experience triggers the same action predicates
3149 as the real sensory experience.
3151 #+caption: Determine how closely empathy approximates actual
3152 #+caption: sensory data.
3153 #+name: test-empathy-accuracy
3154 #+begin_listing clojure
3155 #+begin_src clojure
3156 (def worm-action-label
3157 (juxt grand-circle? curled? wiggling?))
3159 (defn compare-empathy-with-baseline [matches]
3160 (let [proprio (atom ())]
3161 (fn
3162 [experiences text]
3163 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
3164 (swap! proprio (partial cons phi-indices))
3165 (let [exp-thread (longest-thread (take 300 @proprio))
3166 empathy (mapv phi-space (infer-nils exp-thread))
3167 experience-matches-empathy
3168 (= (worm-action-label experiences)
3169 (worm-action-label empathy))]
3170 (println-repl experience-matches-empathy)
3171 (swap! matches #(conj % experience-matches-empathy)))))))
3173 (defn accuracy [v]
3174 (float (/ (count (filter true? v)) (count v))))
3176 (defn test-empathy-accuracy []
3177 (let [res (atom [])]
3178 (run-world
3179 (worm-world :experience-watch
3180 (compare-empathy-with-baseline res)
3181 :worm worm*))
3182 (accuracy @res)))
3183 #+end_src
3184 #+end_listing
3186 Running =test-empathy-accuracy= using the very short exercise
3187 program defined in listing \ref{generate-phi-space}, and then doing
3188 a similar pattern of activity manually yields an accuracy of around
3189 73%. This is based on very limited worm experience. By training the
3190 worm for longer, the accuracy dramatically improves.
3192 #+caption: Program to generate \Phi-space using manual training.
3193 #+name: manual-phi-space
3194 #+begin_listing clojure
3195 #+begin_src clojure
3196 (defn init-interactive []
3197 (def phi-space
3198 (let [experiences (atom [])]
3199 (run-world
3200 (apply-map
3201 worm-world
3202 (merge
3203 (worm-world-defaults)
3204 {:experiences experiences})))
3205 @experiences))
3206 (def phi-scan (gen-phi-scan phi-space)))
3207 #+end_src
3208 #+end_listing
3210 After about 1 minute of manual training, I was able to achieve 95%
3211 accuracy on manual testing of the worm using =init-interactive= and
3212 =test-empathy-accuracy=. The majority of errors are near the
3213 boundaries of transitioning from one type of action to another.
3214 During these transitions the exact label for the action is more open
3215 to interpretation, and disagreement between empathy and experience
3216 is essentially irrelevant at this point, giving a practical
3217 identification accuracy even higher than 95%. When I watch this
3218 system myself, I generally see no errors in action identification.
3220 ** Digression: Learning touch sensor layout through free play
3222 In the previous section I showed how to compute actions in terms of
3223 body-centered predicates, but some of those predicates relied on
3224 the average touch activation of pre-defined regions of the worm's
3225 skin. What if, instead of receiving touch pre-grouped into the six
3226 faces of each worm segment, the true topology of the worm's skin
3227 was unknown? This is more similar to how a nerve fiber bundle might
3228 be arranged inside an animal. While two fibers that are close in a
3229 nerve bundle /might/ correspond to two touch sensors that are close
3230 together on the skin, the process of taking a complicated surface
3231 and forcing it into essentially a circle requires that some regions
3232 of skin that are close together in the animal end up far apart in
3233 the nerve bundle.
3235 In this section I show how to automatically learn the skin-topology of
3236 a worm segment by free exploration. As the worm rolls around on the
3237 floor, large sections of its surface get activated. If the worm has
3238 stopped moving, then whatever region of skin is touching the
3239 floor is probably an important region, and should be recorded.
3241 #+caption: Program to detect whether the worm is in a resting state
3242 #+caption: with one face touching the floor.
3243 #+name: pure-touch
3244 #+begin_listing clojure
3245 #+begin_src clojure
3246 (def full-contact [(float 0.0) (float 0.1)])
3248 (defn pure-touch?
3249 "This is worm specific code to determine if a large region of touch
3250 sensors is either all on or all off."
3251 [[coords touch :as touch-data]]
3252 (= (set (map first touch)) (set full-contact)))
3253 #+end_src
3254 #+end_listing
3256 After collecting these important regions, there will be many nearly
3257 similar touch regions. While for some purposes the subtle
3258 differences between these regions will be important, for my
3259 purposes I collapse them into mostly non-overlapping sets using
3260 =remove-similar= in listing \ref{remove-similar}.
3262 #+caption: Program to take a list of sets of points and ``collapse them''
3263 #+caption: so that the remaining sets in the list are significantly
3264 #+caption: different from each other. Prefer smaller sets to larger ones.
3265 #+name: remove-similar
3266 #+begin_listing clojure
3267 #+begin_src clojure
3268 (defn remove-similar
3269 [coll]
3270 (loop [result () coll (sort-by (comp - count) coll)]
3271 (if (empty? coll) result
3272 (let [[x & xs] coll
3273 c (count x)]
3274 (if (some
3275 (fn [other-set]
3276 (let [oc (count other-set)]
3277 (< (- (count (union other-set x)) c) (* oc 0.1))))
3278 xs)
3279 (recur result xs)
3280 (recur (cons x result) xs))))))
3281 #+end_src
3282 #+end_listing
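
A hypothetical call shows the intended collapse: =#{1 2 3 4}= is
discarded because the smaller =#{1 2 3}= overlaps it almost
completely, while the unrelated =#{8 9}= survives (the printed order
of set elements may vary).

#+begin_src clojure
(remove-similar [#{1 2 3} #{1 2 3 4} #{8 9}])
;; => (#{8 9} #{1 2 3})
#+end_src
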
3284 Actually running this simulation is easy given =CORTEX='s facilities.
3286 #+caption: Collect experiences while the worm moves around. Filter the touch
3287 #+caption: sensations by stable ones, collapse similar ones together,
3288 #+caption: and report the regions learned.
3289 #+name: learn-touch
3290 #+begin_listing clojure
3291 #+begin_src clojure
3292 (defn learn-touch-regions []
3293 (let [experiences (atom [])
3294 world (apply-map
3295 worm-world
3296 (assoc (worm-segment-defaults)
3297 :experiences experiences))]
3298 (run-world world)
3299 (->>
3300 @experiences
3301 (drop 175)
3302 ;; access the single segment's touch data
3303 (map (comp first :touch))
3304 ;; only deal with "pure" touch data to determine surfaces
3305 (filter pure-touch?)
3306 ;; associate coordinates with touch values
3307 (map (partial apply zipmap))
3308 ;; select those regions where contact is being made
3309 (map (partial group-by second))
3310 (map #(get % full-contact))
3311 (map (partial map first))
3312 ;; remove redundant/subset regions
3313 (map set)
3314 remove-similar)))
3316 (defn learn-and-view-touch-regions []
3317 (map view-touch-region
3318 (learn-touch-regions)))
3319 #+end_src
3320 #+end_listing
3322 The only thing remaining to define is the particular motion the worm
3323 must take. I accomplish this with a simple motor control program.
3325 #+caption: Motor control program for making the worm roll on the ground.
3326 #+caption: This could also be replaced with random motion.
3327 #+name: worm-roll
3328 #+begin_listing clojure
3329 #+begin_src clojure
3330 (defn touch-kinesthetics []
3331 [[170 :lift-1 40]
3332 [190 :lift-1 19]
3333 [206 :lift-1 0]
3335 [400 :lift-2 40]
3336 [410 :lift-2 0]
3338 [570 :lift-2 40]
3339 [590 :lift-2 21]
3340 [606 :lift-2 0]
3342 [800 :lift-1 30]
3343 [809 :lift-1 0]
3345 [900 :roll-2 40]
3346 [905 :roll-2 20]
3347 [910 :roll-2 0]
3349 [1000 :roll-2 40]
3350 [1005 :roll-2 20]
3351 [1010 :roll-2 0]
3353 [1100 :roll-2 40]
3354 [1105 :roll-2 20]
3355 [1110 :roll-2 0]
3356 ])
3357 #+end_src
3358 #+end_listing
3361 #+caption: The small worm rolls around on the floor, driven
3362 #+caption: by the motor control program in listing \ref{worm-roll}.
3363 #+name: worm-roll-image
3364 #+ATTR_LaTeX: :width 12cm
3365 [[./images/worm-roll.png]]
3367 #+caption: After completing its adventures, the worm now knows
3368 #+caption: how its touch sensors are arranged along its skin. These
3369 #+caption: are the regions that were deemed important by
3370 #+caption: =learn-touch-regions=. Each white square in the rectangles
3371 #+caption: above is a cluster of ``related'' touch nodes as determined
3372 #+caption: by the system. Since each square in the ``cross'' corresponds
3373 #+caption: to a face, the worm has correctly discovered that it has
3374 #+caption: six faces.
3375 #+name: worm-touch-map
3376 #+ATTR_LaTeX: :width 12cm
3377 [[./images/touch-learn.png]]
3379 While simple, =learn-touch-regions= exploits regularities in both
3380 the worm's physiology and the worm's environment to correctly
3381 deduce that the worm has six sides. Note that =learn-touch-regions=
3382 would work just as well even if the worm's touch sense data were
3383 completely scrambled. The cross shape is just for convenience. This
3384 example justifies the use of pre-defined touch regions in =EMPATH=.
3386 * Contributions
3388 The big idea behind this thesis is a new way to represent and
3389 recognize physical actions -- empathic representation. Actions are
3390 represented as predicates which have available the totality of a
3391 creature's sensory abilities. To recognize the physical actions of
3392 another creature similar to yourself, you imagine what they would
3393 feel by examining the position of their body and relating it to your
3394 own previous experience.
3396 Empathic description of physical actions is very robust and general.
3397 Because the representation is body-centered, it avoids the fragility
3398 of learning from example videos. Because it relies on all of a
3399 creature's senses, it can describe exactly what an action /feels
3400 like/ without getting caught up in irrelevant details such as visual
3401 appearance. I think it is important that a correct description of
3402 jumping (for example) should not waste even a single bit on the
3403 color of a person's clothes or skin; empathic representation can
3404 avoid this waste by describing jumping in terms of touch, muscle
3405 contractions, and the brief feeling of weightlessness. Empathic
3406 representation is very low-level in that it describes actions using
3407 concrete sensory data with little abstraction, but it has the
3408 generality of much more abstract representations!
3410 Another important contribution of this thesis is the development of
3411 the =CORTEX= system, a complete environment for creating simulated
3412 creatures. You have seen how to implement five senses: touch,
3413 proprioception, hearing, vision, and muscle tension. You have seen
3414 how to create new creatures using Blender, a 3D modeling tool.
3416 I hope that =CORTEX= will be useful in further research projects. To
3417 this end I have included the full source to =CORTEX= along with a
3418 large suite of tests and examples. I have also created a user guide
3419 for =CORTEX= which is included in an appendix to this thesis.
3421 As a minor digression, you also saw how I used =CORTEX= to enable a
3422 tiny worm to discover the topology of its skin simply by rolling on
3423 the ground.
3425 In conclusion, the main contributions of this thesis are:
3427 - =CORTEX=, a comprehensive platform for embodied AI experiments.
3428 =CORTEX= supports many features lacking in other systems, such as
3429 proper simulation of hearing. It is easy to create new =CORTEX=
3430 creatures using Blender, a free 3D modeling program.
3432 - =EMPATH=, which uses =CORTEX= to identify the actions of a
3433 worm-like creature using a computational model of empathy. This
3434 empathic representation of actions is an important new kind of
3435 representation for physical actions.
3437 #+BEGIN_LaTeX
3438 \newpage
3439 \appendix
3440 #+END_LaTeX
3442 * Appendix: =CORTEX= User Guide
3444 Those who write a thesis should endeavor to make their code not only
3445 accessible, but actually usable, as a way to pay back the community
3446 that made the thesis possible in the first place. This thesis would
3447 not be possible without Free Software such as jMonkeyEngine3,
3448 Blender, clojure, emacs, ffmpeg, and many other tools. That is why I
3449 have included this user guide, in the hope that someone else might
3450 find =CORTEX= useful.
3452 ** Obtaining =CORTEX=
3454 You can get cortex from its mercurial repository at
3455 http://hg.bortreb.com/cortex. You may also download =CORTEX=
3456 releases at http://aurellem.org/cortex/releases/. As a condition of
3457 making this thesis, I have also provided Professor Winston with the
3458 =CORTEX= source, and he knows how to run the demos and get started.
3459 You may also email me at =cortex@aurellem.org= and I may help where
3460 I can.
3462 ** Running =CORTEX=
3464 =CORTEX= comes with README and INSTALL files that will guide you
3465 through installation and running the test suite. In particular you
3466 should look at =cortex.test=, which contains test suites that
3467 run through all senses and multiple creatures.
3469 ** Creating creatures
3471 Creatures are created using /Blender/, a free 3D modeling program.
3472 You will need Blender version 2.6 when using the =CORTEX= included
3473 in this thesis. You create a =CORTEX= creature in a similar manner
3474 to modeling anything in Blender, except that you also create
3475 several trees of empty nodes which define the creature's senses.
3477 *** Mass
3479 To give an object mass in =CORTEX=, add a ``mass'' metadata label
3480 to the object with the mass in jMonkeyEngine units. Note that
3481 setting the mass to 0 causes the object to be immovable.
3483 *** Joints
3485 Joints are created by creating an empty node named =joints= and
3486 then creating any number of empty child nodes to represent your
3487 creature's joints. The joint will automatically connect the
3488 closest two physical objects. It will help to set the empty node's
3489 display mode to ``Arrows'' so that you can clearly see the
3490 direction of the axes.
3492 Joint nodes should have the following metadata under the ``joint''
3493 label:
3495 #+BEGIN_SRC clojure
3496 ;; ONE of the following, under the label "joint":
3497 {:type :point}
3499 ;; OR
3501 {:type :hinge
3502 :limit [<limit-low> <limit-high>]
3503 :axis (Vector3f. <x> <y> <z>)}
3504 ;;(:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)
3506 ;; OR
3508 {:type :cone
3509 :limit-xz <lim-xz>
3510 :limit-xy <lim-xy>
3511 :twist <lim-twist>} ;(use XZY rotation mode in blender!)
3512 #+END_SRC
3514 *** Eyes
3516 Eyes are created by creating an empty node named =eyes= and then
3517 creating any number of empty child nodes to represent your
3518 creature's eyes.
3520 Eye nodes should have the following metadata under the ``eye''
3521 label:
3523 #+BEGIN_SRC clojure
3524 {:red <red-retina-definition>
3525 :blue <blue-retina-definition>
3526 :green <green-retina-definition>
3527 :all <all-retina-definition>
3528 (<0xrrggbb> <custom-retina-image>)...}
3530 #+END_SRC
3532 Any of the color channels may be omitted. You may also include
3533 your own color selectors, and in fact :red is equivalent to
3534 0xFF0000 and so forth. The eye will be placed at the same position
3535 as the empty node and will bind to the nearest physical object.
3536 The eye will point outward from the X-axis of the node, and ``up''
3537 will be in the direction of the X-axis of the node. It will help
3538 to set the empty node's display mode to ``Arrows'' so that you can
3539 clearly see the direction of the axes.
3541 Each retina file should contain white pixels wherever you want to be
3542 sensitive to your chosen color. If you want the entire field of
3543 view, specify :all of 0xFFFFFF and a retinal map that is entirely
3544 white.
3546 Here is a sample retinal map:
3548 #+caption: An example retinal profile image. White pixels are
3549 #+caption: photo-sensitive elements. The distribution of white
3550 #+caption: pixels is denser in the middle and falls off at the
3551 #+caption: edges and is inspired by the human retina.
3552 #+name: retina
3553 #+ATTR_LaTeX: :width 7cm :placement [H]
3554 [[./images/retina-small.png]]
3556 *** Hearing
3558 Ears are created by creating an empty node named =ears= and then
3559 creating any number of empty child nodes to represent your
3560 creature's ears.
3562 Ear nodes do not require any metadata.
3564 The ear will bind to and follow the closest physical node.
3566 *** Touch
3568 Touch is handled similarly to mass. To make a particular object
3569 touch sensitive, add metadata of the following form under the
3570 object's ``touch'' metadata field:
3572 #+BEGIN_EXAMPLE
3573 <touch-UV-map-file-name>
3574 #+END_EXAMPLE
3576 You may also include an optional ``scale'' metadata number to
3577 specify the length of the touch feelers. The default is $0.1$,
3578 and this is generally sufficient.
3580 The touch UV should contain white pixels for each touch sensor.
3582 Here is an example touch-uv map that approximates a human finger,
3583 and its corresponding model.
3585 #+caption: This is the tactile-sensor-profile for the upper segment
3586 #+caption: of a fingertip. It defines regions of high touch sensitivity
3587 #+caption: (where there are many white pixels) and regions of low
3588 #+caption: sensitivity (where white pixels are sparse).
3589 #+name: guide-fingertip-UV
3590 #+ATTR_LaTeX: :width 9cm :placement [H]
3591 [[./images/finger-UV.png]]
3593 #+caption: The fingertip UV-image form above applied to a simple
3594 #+caption: model of a fingertip.
3595 #+name: guide-fingertip
3596 #+ATTR_LaTeX: :width 9cm :placement [H]
3597 [[./images/finger-2.png]]
3599 *** Proprioception
3601 Proprioception is tied to each joint node -- nothing special must
3602 be done in a blender model to enable proprioception other than
3603 creating joint nodes.
3605 *** Muscles
3607 Muscles are created by creating an empty node named =muscles= and
3608 then creating any number of empty child nodes to represent your
3609 creature's muscles.
3612 Muscle nodes should have the following metadata under the
3613 ``muscle'' label:
3615 #+BEGIN_EXAMPLE
3616 <muscle-profile-file-name>
3617 #+END_EXAMPLE
3619 Muscles should also have a ``strength'' metadata entry describing
3620 the muscle's total strength at full activation.
3622 Muscle profiles are simple images that contain the relative amount
3623 of muscle power in each simulated alpha motor neuron. The width of
3624 the image is the total size of the motor pool, and the redness of
3625 each pixel is the relative power of that motor neuron.
3627 While the profile image can have any dimensions, only the first
3628 line of pixels is used to define the muscle. Here is a sample
3629 muscle profile image that defines a human-like muscle.
3631 #+caption: A muscle profile image that describes the strengths
3632 #+caption: of each motor neuron in a muscle. White is weakest
3633 #+caption: and dark red is strongest. This particular pattern
3634 #+caption: has weaker motor neurons at the beginning, just
3635 #+caption: like human muscle.
3636 #+name: muscle-recruit
3637 #+ATTR_LaTeX: :width 7cm :placement [H]
3638 [[./images/basic-muscle.png]]
3640 Muscles twist the nearest physical object about the muscle node's
3641 Z-axis. I recommend using the ``Single Arrow'' display mode for
3642 muscles and using the right hand rule to determine which way the
3643 muscle will twist. To make a segment that can twist in multiple
3644 directions, create multiple, differently aligned muscles.
3646 ** =CORTEX= API
3648 These are some of the functions exposed by =CORTEX= for creating
3649 worlds and simulating creatures. These are in addition to
3650 jMonkeyEngine3's extensive library, which is documented elsewhere.
3652 *** Simulation
3653 - =(world root-node key-map setup-fn update-fn)= :: create
3654 a simulation.
3655 - /root-node/ :: a =com.jme3.scene.Node= object which
3656 contains all of the objects that should be in the
3657 simulation.
3659 - /key-map/ :: a map from strings describing keys to
3660 functions that should be executed whenever that key is
3661 pressed. The functions should take a SimpleApplication
3662 object and a boolean value. The SimpleApplication is the
3663 current simulation that is running, and the boolean is true
3664 if the key is being pressed, and false if it is being
3665 released. As an example,
3666 #+BEGIN_SRC clojure
3667 {"key-j" (fn [game value] (if value (println "key j pressed")))}
3668 #+END_SRC
3669 is a valid key-map which will cause the simulation to print
3670 a message whenever the 'j' key on the keyboard is pressed.
3672 - /setup-fn/ :: a function that takes a =SimpleApplication=
3673 object. It is called once when initializing the simulation.
3674 Use it to create things like lights, change the gravity,
3675 initialize debug nodes, etc.
3677 - /update-fn/ :: this function takes a =SimpleApplication=
3678 object and a float and is called every frame of the
3679 simulation. The float tells how many seconds it has been
3680 since the last frame was rendered, according to whatever
3681 clock jme is currently using. The default is to use IsoTimer
3682 which will result in this value always being the same.
3684 - =(position-camera world position rotation)= :: set the position
3685 of the simulation's main camera.
3687 - =(enable-debug world)= :: turn on debug wireframes for each
3688 simulated object.
3690 - =(set-gravity world gravity)= :: set the gravity of a running
3691 simulation.
3693 - =(box length width height & {options})= :: create a box in the
3694 simulation. Options is a hash map specifying texture, mass,
3695 etc. Possible options are =:name=, =:color=, =:mass=,
3696 =:friction=, =:texture=, =:material=, =:position=,
3697 =:rotation=, =:shape=, and =:physical?=.
3699 - =(sphere radius & {options})= :: create a sphere in the simulation.
3700 Options are the same as in =box=.
3702 - =(load-blender-model file-name)= :: create a node structure
3703 representing the model described in a blender file.
3705 - =(light-up-everything world)= :: distribute a standard complement
3706 of lights throughout the simulation. Should be adequate for most
3707 purposes.
3709 - =(node-seq node)= :: return a recursive list of the node's
3710 children.
3712 - =(nodify name children)= :: construct a node given a node-name and
3713 desired children.
3715 - =(add-element world element)= :: add an object to a running world
3716 simulation.
3718 - =(set-accuracy world accuracy)= :: change the accuracy of the
3719 world's physics simulator.
3721 - =(asset-manager)= :: get an /AssetManager/, a jMonkeyEngine
3722 construct that is useful for loading textures and is required
3723 for smooth interaction with jMonkeyEngine library functions.
3725 - =(load-bullet)= :: unpack native libraries and initialize the
3726 bullet physics subsystem. This function is required before
3727 other world building functions are called.
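
As a rough sketch of how these functions fit together, here is a
minimal world assembled only from the calls documented above; the
option values are illustrative, and =mega-import-jme3= (described in
the debugging section below) supplies the jMonkeyEngine classes such
as =Vector3f=.

#+BEGIN_SRC clojure
;; Minimal world sketch using only the functions documented here.
(load-bullet)        ;; must be called before other world-building functions
(mega-import-jme3)   ;; brings in Vector3f and friends for REPL use

(def floor (box 10 0.5 10 :mass 0 :physical? true))
(def ball  (sphere 0.5 :mass 1.0 :position (Vector3f. 0 5 0)))

(def my-world
  (world (nodify "root" [floor ball])
         {"key-space" (fn [game value]
                        (if value (println "space pressed")))}
         (fn [world] (light-up-everything world))
         (fn [world tpf] nil)))
;; The world can then be run as in the thesis listings,
;; e.g. with (run-world my-world).
#+END_SRC
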
3729 *** Creature Manipulation / Import
3731 - =(body! creature)= :: give the creature a physical body.
3733 - =(vision! creature)= :: give the creature a sense of vision.
3734 Returns a list of functions which will each, when called
3735 during a simulation, return the vision data for the channel of
3736 one of the eyes. The functions are ordered depending on the
3737 alphabetical order of the names of the eye nodes in the
3738 blender file. The data returned by the functions is a vector
3739 containing the eye's /topology/, a vector of coordinates, and
3740 the eye's /data/, a vector of RGB values filtered by the eye's
3741 sensitivity.
3743 - =(hearing! creature)= :: give the creature a sense of hearing.
3744 Returns a list of functions, one for each ear, that when
3745 called will return a frame's worth of hearing data for that
3746 ear. The functions are ordered depending on the alphabetical
3747 order of the names of the ear nodes in the blender file. The
3748 data returned by the functions is an array of PCM (pulse code
3749 modulated) wav data.
3751 - =(touch! creature)= :: give the creature a sense of touch. Returns
3752 a single function that must be called with the /root node/ of
3753 the world, and which will return a vector of /touch-data/,
3754 one entry for each touch-sensitive component, each entry of
3755 which contains a /topology/ that specifies the distribution of
3756 touch sensors, and the /data/, which is a vector of
3757 =[activation, length]= pairs for each touch hair.
3759 - =(proprioception! creature)= :: give the creature the sense of
3760 proprioception. Returns a list of functions, one for each
3761 joint, that when called during a running simulation will
3762 report the =[heading, pitch, roll]= of the joint.
3764 - =(movement! creature)= :: give the creature the power of movement.
3765 Creates a list of functions, one for each muscle, that when
3766 called with an integer, will set the recruitment of that
3767 muscle to that integer, and will report the current power
3768 being exerted by the muscle. Order of muscles is determined by
3769 the alphabetical sort order of the names of the muscle nodes.
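
Putting these together, importing a creature and polling its senses
looks roughly like the following sketch; the blender file is the worm
model used throughout the thesis, and the commented forms show how
the returned functions would be called from inside a simulation's
/update-fn/.

#+BEGIN_SRC clojure
;; Sketch: import a creature and attach its senses.
(def creature (load-blender-model "Models/worm/worm.blend"))
(body! creature)
(def touch-fn     (touch! creature))
(def proprio-fns  (proprioception! creature))
(def muscle-fns   (movement! creature))

;; Inside an update-fn, with =root= the world's root node:
;;   (touch-fn root)           ;; touch readings for each sensitive part
;;   (map #(%) proprio-fns)    ;; ([heading pitch roll] ...) per joint
;;   ((first muscle-fns) 20)   ;; recruit the first muscle, read its power
#+END_SRC
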
3771 *** Visualization/Debug
3773 - =(view-vision)= :: create a function that when called with a list
3774 of visual data returned from the functions made by =vision!=,
3775 will display that visual data on the screen.
3777 - =(view-hearing)= :: same as =view-vision= but for hearing.
3779 - =(view-touch)= :: same as =view-vision= but for touch.
3781 - =(view-proprioception)= :: same as =view-vision= but for
3782 proprioception.
3784 - =(view-movement)= :: same as =view-vision= but for muscles.
3786 - =(view anything)= :: =view= is a polymorphic function that allows
3787 you to inspect almost anything you could reasonably expect to
3788 be able to ``see'' in =CORTEX=.
3790 - =(text anything)= :: =text= is a polymorphic function that allows
3791 you to convert practically anything into a text string.
3793 - =(println-repl anything)= :: print messages to clojure's repl
3794 instead of the simulation's terminal window.
3796 - =(mega-import-jme3)= :: for experimenting at the REPL. This
3797 function will import all jMonkeyEngine3 classes for immediate
3798 use.
3800 - =(display-dilated-time world timer)= :: Shows the time as it is
3801 flowing in the simulation on a HUD display.
3805 TODO -- add a paper about detecting biological motion from only a few dots.