1 #+title: =CORTEX=
2 #+author: Robert McIntyre
3 #+email: rlm@mit.edu
4 #+description: Using embodied AI to facilitate Artificial Imagination.
5 #+keywords: AI, clojure, embodiment
6 #+LaTeX_CLASS_OPTIONS: [nofloat]
8 * COMMENT templates
9 #+caption:
10 #+caption:
11 #+caption:
12 #+caption:
13 #+name: name
14 #+begin_listing clojure
15 #+end_listing
17 #+caption:
18 #+caption:
19 #+caption:
20 #+name: name
21 #+ATTR_LaTeX: :width 10cm
22 [[./images/aurellem-gray.png]]
24 #+caption:
25 #+caption:
26 #+caption:
27 #+caption:
28 #+name: name
29 #+begin_listing clojure
30 #+BEGIN_SRC clojure
31 #+END_SRC
32 #+end_listing
34 #+caption:
35 #+caption:
36 #+caption:
37 #+name: name
38 #+ATTR_LaTeX: :width 10cm
39 [[./images/aurellem-gray.png]]
42 * COMMENT Empathy and Embodiment as problem solving strategies
44 By the end of this thesis, you will have seen a novel approach to
45 interpreting video using embodiment and empathy. You will have also
46 seen one way to efficiently implement empathy for embodied
47 creatures. Finally, you will become familiar with =CORTEX=, a system
48 for designing and simulating creatures with rich senses, which you
49 may choose to use in your own research.
51 This is the core vision of my thesis: That one of the important ways
52 in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
54 bodies. By understanding events in terms of our own previous
55 corporeal experience, we greatly constrain the possibilities of what
56 would otherwise be an unwieldy exponential search. This extra
57 constraint can be the difference between easily understanding what
58 is happening in a video and being completely lost in a sea of
59 incomprehensible color and movement.
61 ** Recognizing actions in video is extremely difficult
63 Consider for example the problem of determining what is happening
64 in a video of which this is one frame:
66 #+caption: A cat drinking some water. Identifying this action is
67 #+caption: beyond the state of the art for computers.
68 #+ATTR_LaTeX: :width 7cm
69 [[./images/cat-drinking.jpg]]
71 It is currently impossible for any computer program to reliably
72 label such a video as ``drinking''. And rightly so -- it is a very
73 hard problem! What features can you describe in terms of low level
74 functions of pixels that can even begin to describe at a high level
75 what is happening here?
77 Or suppose that you are building a program that recognizes chairs.
78 How could you ``see'' the chair in figure \ref{hidden-chair}?
80 #+caption: The chair in this image is quite obvious to humans, but I
81 #+caption: doubt that any modern computer vision program can find it.
82 #+name: hidden-chair
83 #+ATTR_LaTeX: :width 10cm
84 [[./images/fat-person-sitting-at-desk.jpg]]
Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in the two images of figure \ref{girl}?
89 #+caption: The mysterious ``common sense'' appears here as you are able
90 #+caption: to discern the difference in how the girl's arm muscles
91 #+caption: are activated between the two images.
92 #+name: girl
93 #+ATTR_LaTeX: :width 7cm
94 [[./images/wall-push.png]]
96 Each of these examples tells us something about what might be going
97 on in our minds as we easily solve these recognition problems.
The hidden chair example shows us that we are strongly triggered by cues
100 relating to the position of human bodies, and that we can determine
101 the overall physical configuration of a human body even if much of
102 that body is occluded.
104 The picture of the girl pushing against the wall tells us that we
105 have common sense knowledge about the kinetics of our own bodies.
106 We know well how our muscles would have to work to maintain us in
107 most positions, and we can easily project this self-knowledge to
108 imagined positions triggered by images of the human body.
110 ** =EMPATH= neatly solves recognition problems
112 I propose a system that can express the types of recognition
113 problems above in a form amenable to computation. It is split into
114 four parts:
116 - Free/Guided Play :: The creature moves around and experiences the
117 world through its unique perspective. Many otherwise
118 complicated actions are easily described in the language of a
119 full suite of body-centered, rich senses. For example,
120 drinking is the feeling of water sliding down your throat, and
121 cooling your insides. It's often accompanied by bringing your
122 hand close to your face, or bringing your face close to water.
123 Sitting down is the feeling of bending your knees, activating
124 your quadriceps, then feeling a surface with your bottom and
125 relaxing your legs. These body-centered action descriptions
126 can be either learned or hard coded.
127 - Posture Imitation :: When trying to interpret a video or image,
128 the creature takes a model of itself and aligns it with
129 whatever it sees. This alignment can even cross species, as
130 when humans try to align themselves with things like ponies,
131 dogs, or other humans with a different body type.
132 - Empathy :: The alignment triggers associations with
133 sensory data from prior experiences. For example, the
134 alignment itself easily maps to proprioceptive data. Any
135 sounds or obvious skin contact in the video can to a lesser
136 extent trigger previous experience. Segments of previous
137 experiences are stitched together to form a coherent and
138 complete sensory portrait of the scene.
139 - Recognition :: With the scene described in terms of first
140 person sensory events, the creature can now run its
141 action-identification programs on this synthesized sensory
142 data, just as it would if it were actually experiencing the
143 scene first-hand. If previous experience has been accurately
144 retrieved, and if it is analogous enough to the scene, then
145 the creature will correctly identify the action in the scene.
147 For example, I think humans are able to label the cat video as
148 ``drinking'' because they imagine /themselves/ as the cat, and
149 imagine putting their face up against a stream of water and
150 sticking out their tongue. In that imagined world, they can feel
151 the cool water hitting their tongue, and feel the water entering
152 their body, and are able to recognize that /feeling/ as drinking.
153 So, the label of the action is not really in the pixels of the
154 image, but is found clearly in a simulation inspired by those
155 pixels. An imaginative system, having been trained on drinking and
156 non-drinking examples and learning that the most important
157 component of drinking is the feeling of water sliding down one's
158 throat, would analyze a video of a cat drinking in the following
159 manner:
161 1. Create a physical model of the video by putting a ``fuzzy''
162 model of its own body in place of the cat. Possibly also create
163 a simulation of the stream of water.
165 2. Play out this simulated scene and generate imagined sensory
166 experience. This will include relevant muscle contractions, a
167 close up view of the stream from the cat's perspective, and most
168 importantly, the imagined feeling of water entering the
169 mouth. The imagined sensory experience can come from a
170 simulation of the event, but can also be pattern-matched from
171 previous, similar embodied experience.
173 3. The action is now easily identified as drinking by the sense of
174 taste alone. The other senses (such as the tongue moving in and
175 out) help to give plausibility to the simulated action. Note that
176 the sense of vision, while critical in creating the simulation,
177 is not critical for identifying the action from the simulation.
179 For the chair examples, the process is even easier:
181 1. Align a model of your body to the person in the image.
183 2. Generate proprioceptive sensory data from this alignment.
185 3. Use the imagined proprioceptive data as a key to lookup related
sensory experience associated with that particular proprioceptive
187 feeling.
189 4. Retrieve the feeling of your bottom resting on a surface, your
190 knees bent, and your leg muscles relaxed.
192 5. This sensory information is consistent with the =sitting?=
193 sensory predicate, so you (and the entity in the image) must be
194 sitting.
196 6. There must be a chair-like object since you are sitting.
198 Empathy offers yet another alternative to the age-old AI
199 representation question: ``What is a chair?'' --- A chair is the
200 feeling of sitting.
My program, =EMPATH=, uses this empathic problem-solving technique
203 to interpret the actions of a simple, worm-like creature.
205 #+caption: The worm performs many actions during free play such as
206 #+caption: curling, wiggling, and resting.
207 #+name: worm-intro
208 #+ATTR_LaTeX: :width 15cm
209 [[./images/worm-intro-white.png]]
211 #+caption: =EMPATH= recognized and classified each of these
212 #+caption: poses by inferring the complete sensory experience
213 #+caption: from proprioceptive data.
214 #+name: worm-recognition-intro
215 #+ATTR_LaTeX: :width 15cm
216 [[./images/worm-poses.png]]
218 One powerful advantage of empathic problem solving is that it
219 factors the action recognition problem into two easier problems. To
220 use empathy, you need an /aligner/, which takes the video and a
221 model of your body, and aligns the model with the video. Then, you
222 need a /recognizer/, which uses the aligned model to interpret the
223 action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
225 the particulars of any visual representation of the actions. If you
226 teach the system what ``running'' is, and you have a good enough
227 aligner, the system will from then on be able to recognize running
228 from any point of view, even strange points of view like above or
229 underneath the runner. This is in contrast to action recognition
230 schemes that try to identify actions using a non-embodied approach.
231 If these systems learn about running as viewed from the side, they
232 will not automatically be able to recognize running from any other
233 viewpoint.
235 Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
237 massive boost in descriptive capability. Consider how difficult it
238 would be to compose a set of HOG filters to describe the action of
239 a simple worm-creature ``curling'' so that its head touches its
tail, and then behold the simplicity of describing this action in a
241 language designed for the task (listing \ref{grand-circle-intro}):
#+caption: Body-centered actions are best expressed in a body-centered
244 #+caption: language. This code detects when the worm has curled into a
245 #+caption: full circle. Imagine how you would replicate this functionality
246 #+caption: using low-level pixel features such as HOG filters!
247 #+name: grand-circle-intro
248 #+attr_latex: [htpb]
249 #+begin_listing clojure
250 #+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip head-touch))))))
260 #+end_src
261 #+end_listing
264 ** =CORTEX= is a toolkit for building sensate creatures
266 I built =CORTEX= to be a general AI research platform for doing
267 experiments involving multiple rich senses and a wide variety and
268 number of creatures. I intend it to be useful as a library for many
more projects than just this thesis. =CORTEX= was necessary to meet
a need among AI researchers at CSAIL and beyond: people often invent
neat ideas that are best expressed in the language of creatures and
senses, but in order to explore those ideas they must first build a
platform in which they can create simulated creatures with rich
senses! There are many ideas that
275 would be simple to execute (such as =EMPATH=), but attached to them
276 is the multi-month effort to make a good creature simulator. Often,
277 that initial investment of time proves to be too much, and the
278 project must make do with a lesser environment.
280 =CORTEX= is well suited as an environment for embodied AI research
281 for three reasons:
283 - You can create new creatures using Blender, a popular 3D modeling
284 program. Each sense can be specified using special blender nodes
with biologically inspired parameters. You need not write any
286 code to create a creature, and can use a wide library of
287 pre-existing blender models as a base for your own creatures.
289 - =CORTEX= implements a wide variety of senses, including touch,
290 proprioception, vision, hearing, and muscle tension. Complicated
senses like touch and vision involve multiple sensory elements
292 embedded in a 2D surface. You have complete control over the
293 distribution of these sensor elements through the use of simple
294 png image files. In particular, =CORTEX= implements more
295 comprehensive hearing than any other creature simulation system
296 available.
298 - =CORTEX= supports any number of creatures and any number of
senses. Time in =CORTEX= dilates so that the simulated creatures
always perceive a perfectly smooth flow of time, regardless of
301 the actual computational load.
303 =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
304 engine designed to create cross-platform 3D desktop games. =CORTEX=
305 is mainly written in clojure, a dialect of =LISP= that runs on the
306 java virtual machine (JVM). The API for creating and simulating
307 creatures and senses is entirely expressed in clojure, though many
308 senses are implemented at the layer of jMonkeyEngine or below. For
309 example, for the sense of hearing I use a layer of clojure code on
310 top of a layer of java JNI bindings that drive a layer of =C++=
311 code which implements a modified version of =OpenAL= to support
312 multiple listeners. =CORTEX= is the only simulation environment
313 that I know of that can support multiple entities that can each
314 hear the world from their own perspective. Other senses also
315 require a small layer of Java code. =CORTEX= also uses =bullet=, a
316 physics simulator written in =C=.
318 #+caption: Here is the worm from above modeled in Blender, a free
319 #+caption: 3D-modeling program. Senses and joints are described
320 #+caption: using special nodes in Blender.
321 #+name: worm-recognition-intro
322 #+ATTR_LaTeX: :width 12cm
323 [[./images/blender-worm.png]]
Here are some things I anticipate that =CORTEX= might be used for:
327 - exploring new ideas about sensory integration
328 - distributed communication among swarm creatures
329 - self-learning using free exploration,
330 - evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
333 - imagination using subworlds
335 During one test with =CORTEX=, I created 3,000 creatures each with
336 their own independent senses and ran them all at only 1/80 real
337 time. In another test, I created a detailed model of my own hand,
338 equipped with a realistic distribution of touch (more sensitive at
339 the fingertips), as well as eyes and ears, and it ran at around 1/4
340 real time.
342 #+BEGIN_LaTeX
343 \begin{sidewaysfigure}
344 \includegraphics[width=9.5in]{images/full-hand.png}
345 \caption{
346 I modeled my own right hand in Blender and rigged it with all the
347 senses that {\tt CORTEX} supports. My simulated hand has a
348 biologically inspired distribution of touch sensors. The senses are
349 displayed on the right, and the simulation is displayed on the
350 left. Notice that my hand is curling its fingers, that it can see
351 its own finger from the eye in its palm, and that it can feel its
352 own thumb touching its palm.}
353 \end{sidewaysfigure}
354 #+END_LaTeX
356 ** Contributions
358 - I built =CORTEX=, a comprehensive platform for embodied AI
359 experiments. =CORTEX= supports many features lacking in other
systems, such as proper simulation of hearing. It is easy to create
361 new =CORTEX= creatures using Blender, a free 3D modeling program.
363 - I built =EMPATH=, which uses =CORTEX= to identify the actions of
364 a worm-like creature using a computational model of empathy.
366 * Building =CORTEX=
368 I intend for =CORTEX= to be used as a general purpose library for
369 building creatures and outfitting them with senses, so that it will
370 be useful for other researchers who want to test out ideas of their
own. To this end, wherever I have had to make architectural choices
about =CORTEX=, I have chosen to give as much freedom to the user as
possible, so that =CORTEX= may be used for things I have not
foreseen.
376 ** COMMENT Simulation or Reality?
The most important architectural decision of all is the choice to
use a computer-simulated environment in the first place! The world
is a vast and rich place, and for now simulations are a very poor
reflection of its complexity. It may be that there is a significant
qualitative difference between dealing with senses in the real
world and dealing with pale facsimiles of them in a simulation.
384 What are the advantages and disadvantages of a simulation vs.
385 reality?
387 *** Simulation
389 The advantages of virtual reality are that when everything is a
390 simulation, experiments in that simulation are absolutely
391 reproducible. It's also easier to change the character and world
392 to explore new situations and different sensory combinations.
394 If the world is to be simulated on a computer, then not only do
395 you have to worry about whether the character's senses are rich
enough to learn from the world, but also whether the world itself is
397 rendered with enough detail and realism to give enough working
398 material to the character's senses. To name just a few
399 difficulties facing modern physics simulators: destructibility of
400 the environment, simulation of water/other fluids, large areas,
401 nonrigid bodies, lots of objects, smoke. I don't know of any
402 computer simulation that would allow a character to take a rock
403 and grind it into fine dust, then use that dust to make a clay
404 sculpture, at least not without spending years calculating the
405 interactions of every single small grain of dust. Maybe a
406 simulated world with today's limitations doesn't provide enough
407 richness for real intelligence to evolve.
409 *** Reality
411 The other approach for playing with senses is to hook your
412 software up to real cameras, microphones, robots, etc., and let it
413 loose in the real world. This has the advantage of eliminating
414 concerns about simulating the world at the expense of increasing
415 the complexity of implementing the senses. Instead of just
416 grabbing the current rendered frame for processing, you have to
417 use an actual camera with real lenses and interact with photons to
418 get an image. It is much harder to change the character, which is
419 now partly a physical robot of some sort, since doing so involves
420 changing things around in the real world instead of modifying
421 lines of code. While the real world is very rich and definitely
422 provides enough stimulation for intelligence to develop as
423 evidenced by our own existence, it is also uncontrollable in the
424 sense that a particular situation cannot be recreated perfectly or
425 saved for later use. It is harder to conduct science because it is
426 harder to repeat an experiment. The worst thing about using the
427 real world instead of a simulation is the matter of time. Instead
428 of simulated time you get the constant and unstoppable flow of
429 real time. This severely limits the sorts of software you can use
430 to program the AI because all sense inputs must be handled in real
431 time. Complicated ideas may have to be implemented in hardware or
432 may simply be impossible given the current speed of our
433 processors. Contrast this with a simulation, in which the flow of
434 time in the simulated world can be slowed down to accommodate the
435 limitations of the character's programming. In terms of cost,
436 doing everything in software is far cheaper than building custom
437 real-time hardware. All you need is a laptop and some patience.
** COMMENT Because of Time, simulation is preferable to reality
441 I envision =CORTEX= being used to support rapid prototyping and
442 iteration of ideas. Even if I could put together a well constructed
443 kit for creating robots, it would still not be enough because of
444 the scourge of real-time processing. Anyone who wants to test their
445 ideas in the real world must always worry about getting their
446 algorithms to run fast enough to process information in real time.
447 The need for real time processing only increases if multiple senses
448 are involved. In the extreme case, even simple algorithms will have
449 to be accelerated by ASIC chips or FPGAs, turning what would
otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the simulation
in proportion to the amount of processing required for each frame.
From the perspective of the creatures inside the simulation, time
always appears to flow at a constant rate, regardless of how
complicated the environment becomes or how many creatures are in
457 the simulation. The cost is that =CORTEX= can sometimes run slower
458 than real time. This can also be an advantage, however ---
459 simulations of very simple creatures in =CORTEX= generally run at
460 40x on my machine!
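
The sketch below illustrates the time-dilation idea in isolation. It
is a hypothetical, stripped-down loop, not =CORTEX='s actual
implementation; the point is only that the simulation advances by a
fixed simulated timestep each frame, no matter how much wall-clock
time the sensory and AI code consumes.

#+begin_src clojure
;; Hypothetical sketch of time dilation: simulated time advances by a
;; constant dt per frame, decoupled from wall-clock time.
(defn run-dilated
  "Run 'frames iterations. step-physics! advances the world by dt
   simulated seconds; think! (senses + AI) may take any amount of real
   time without disturbing the creature's perceived flow of time."
  [frames dt step-physics! think!]
  (loop [frame 0, simulated-time 0.0]
    (when (< frame frames)
      (step-physics! dt)       ; constant simulated timestep
      (think! simulated-time)  ; may run slower than real time
      (recur (inc frame) (+ simulated-time dt)))))
#+end_src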
462 ** COMMENT What is a sense?
464 If =CORTEX= is to support a wide variety of senses, it would help
465 to have a better understanding of what a ``sense'' actually is!
466 While vision, touch, and hearing all seem like they are quite
different things, I was surprised to learn during the course of
468 this thesis that they (and all physical senses) can be expressed as
469 exactly the same mathematical object due to a dimensional argument!
471 Human beings are three-dimensional objects, and the nerves that
472 transmit data from our various sense organs to our brain are
473 essentially one-dimensional. This leaves up to two dimensions in
474 which our sensory information may flow. For example, imagine your
475 skin: it is a two-dimensional surface around a three-dimensional
476 object (your body). It has discrete touch sensors embedded at
477 various points, and the density of these sensors corresponds to the
478 sensitivity of that region of skin. Each touch sensor connects to a
479 nerve, all of which eventually are bundled together as they travel
480 up the spinal cord to the brain. Intersect the spinal nerves with a
481 guillotining plane and you will see all of the sensory data of the
482 skin revealed in a roughly circular two-dimensional image which is
483 the cross section of the spinal cord. Points on this image that are
484 close together in this circle represent touch sensors that are
485 /probably/ close together on the skin, although there is of course
486 some cutting and rearrangement that has to be done to transfer the
487 complicated surface of the skin onto a two dimensional image.
489 Most human senses consist of many discrete sensors of various
490 properties distributed along a surface at various densities. For
491 skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
492 disks, and Ruffini's endings, which detect pressure and vibration
493 of various intensities. For ears, it is the stereocilia distributed
494 along the basilar membrane inside the cochlea; each one is
495 sensitive to a slightly different frequency of sound. For eyes, it
496 is rods and cones distributed along the surface of the retina. In
497 each case, we can describe the sense with a surface and a
498 distribution of sensors along that surface.
500 The neat idea is that every human sense can be effectively
501 described in terms of a surface containing embedded sensors. If the
502 sense had any more dimensions, then there wouldn't be enough room
in the spinal cord to transmit the information!
505 Therefore, =CORTEX= must support the ability to create objects and
506 then be able to ``paint'' points along their surfaces to describe
507 each sense.
Fortunately this idea is already a well-known computer graphics
technique called /UV-mapping/. The three-dimensional surface
511 of a model is cut and smooshed until it fits on a two-dimensional
512 image. You paint whatever you want on that image, and when the
513 three-dimensional shape is rendered in a game the smooshing and
514 cutting is reversed and the image appears on the three-dimensional
515 object.
To make a sense, interpret the UV-image as describing the
distribution of that sense's sensors. To get different types of
519 sensors, you can either use a different color for each type of
520 sensor, or use multiple UV-maps, each labeled with that sensor
521 type. I generally use a white pixel to mean the presence of a
522 sensor and a black pixel to mean the absence of a sensor, and use
523 one UV-map for each sensor-type within a given sense.
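
As a concrete illustration, a minimal sketch of reading such a
UV-map is given below. The function name is hypothetical and the
real =CORTEX= implementation may differ; the idea is simply that
every white pixel in the image becomes one sensor coordinate.

#+begin_src clojure
;; Hypothetical sketch: read a sensor-distribution UV-image and return
;; the [x y] coordinate of every pure-white pixel (one sensor each).
(import '(javax.imageio ImageIO)
        '(java.io File))

(defn white-coordinates-sketch
  [image-path]
  (let [image (ImageIO/read (File. image-path))]
    (for [x (range (.getWidth image))
          y (range (.getHeight image))
          ;; mask off the alpha channel before comparing to pure white
          :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
      [x y])))
#+end_src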
#+CAPTION: The UV-map for an elongated icosphere. The white
526 #+caption: dots each represent a touch sensor. They are dense
527 #+caption: in the regions that describe the tip of the finger,
528 #+caption: and less dense along the dorsal side of the finger
529 #+caption: opposite the tip.
530 #+name: finger-UV
531 #+ATTR_latex: :width 10cm
532 [[./images/finger-UV.png]]
534 #+caption: Ventral side of the UV-mapped finger. Notice the
535 #+caption: density of touch sensors at the tip.
536 #+name: finger-side-view
537 #+ATTR_LaTeX: :width 10cm
538 [[./images/finger-1.png]]
540 ** COMMENT Video game engines are a great starting point
542 I did not need to write my own physics simulation code or shader to
543 build =CORTEX=. Doing so would lead to a system that is impossible
544 for anyone but myself to use anyway. Instead, I use a video game
engine as a base and modify it to accommodate the additional needs
546 of =CORTEX=. Video game engines are an ideal starting point to
547 build =CORTEX=, because they are not far from being creature
548 building systems themselves.
550 First off, general purpose video game engines come with a physics
551 engine and lighting / sound system. The physics system provides
552 tools that can be co-opted to serve as touch, proprioception, and
553 muscles. Since some games support split screen views, a good video
554 game engine will allow you to efficiently create multiple cameras
555 in the simulated world that can be used as eyes. Video game systems
556 offer integrated asset management for things like textures and
creature models, providing an avenue for defining creatures. They
558 also understand UV-mapping, since this technique is used to apply a
559 texture to a model. Finally, because video game engines support a
560 large number of users, as long as =CORTEX= doesn't stray too far
561 from the base system, other researchers can turn to this community
562 for help when doing their research.
564 ** COMMENT =CORTEX= is based on jMonkeyEngine3
566 While preparing to build =CORTEX= I studied several video game
567 engines to see which would best serve as a base. The top contenders
568 were:
570 - [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by ID
571 software in 1997. All the source code was released by ID
572 software into the Public Domain several years ago, and as a
573 result it has been ported to many different languages. This
574 engine was famous for its advanced use of realistic shading
575 and had decent and fast physics simulation. The main advantage
576 of the Quake II engine is its simplicity, but I ultimately
577 rejected it because the engine is too tied to the concept of a
578 first-person shooter game. One of the problems I had was that
579 there does not seem to be any easy way to attach multiple
580 cameras to a single character. There are also several physics
581 clipping issues that are corrected in a way that only applies
582 to the main character and do not apply to arbitrary objects.
584 - [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
585 and Quake I engines and is used by Valve in the Half-Life
586 series of games. The physics simulation in the Source Engine
587 is quite accurate and probably the best out of all the engines
588 I investigated. There is also an extensive community actively
589 working with the engine. However, applications that use the
590 Source Engine must be written in C++, the code is not open, it
591 only runs on Windows, and the tools that come with the SDK to
592 handle models and textures are complicated and awkward to use.
594 - [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
595 games in Java. It uses OpenGL to render to the screen and uses
scene graphs to avoid drawing things that do not appear on the
597 screen. It has an active community and several games in the
598 pipeline. The engine was not built to serve any particular
599 game but is instead meant to be used for any 3D game.
I chose jMonkeyEngine3 because it had the most features
602 out of all the free projects I looked at, and because I could then
603 write my code in clojure, an implementation of =LISP= that runs on
604 the JVM.
606 ** COMMENT =CORTEX= uses Blender to create creature models
608 For the simple worm-like creatures I will use later on in this
609 thesis, I could define a simple API in =CORTEX= that would allow
610 one to create boxes, spheres, etc., and leave that API as the sole
611 way to create creatures. However, for =CORTEX= to truly be useful
612 for other projects, it needs a way to construct complicated
613 creatures. If possible, it would be nice to leverage work that has
614 already been done by the community of 3D modelers, or at least
enable people who are talented at modeling but not programming to
616 design =CORTEX= creatures.
618 Therefore, I use Blender, a free 3D modeling program, as the main
619 way to create creatures in =CORTEX=. However, the creatures modeled
620 in Blender must also be simple to simulate in jMonkeyEngine3's game
621 engine, and must also be easy to rig with =CORTEX='s senses. I
622 accomplish this with extensive use of Blender's ``empty nodes.''
624 Empty nodes have no mass, physical presence, or appearance, but
625 they can hold metadata and have names. I use a tree structure of
626 empty nodes to specify senses in the following manner:
628 - Create a single top-level empty node whose name is the name of
629 the sense.
630 - Add empty nodes which each contain meta-data relevant to the
631 sense, including a UV-map describing the number/distribution of
632 sensors if applicable.
633 - Make each empty-node the child of the top-level node.
#+caption: An example of annotating a creature model with empty
636 #+caption: nodes to describe the layout of senses. There are
637 #+caption: multiple empty nodes which each describe the position
638 #+caption: of muscles, ears, eyes, or joints.
639 #+name: sense-nodes
640 #+ATTR_LaTeX: :width 10cm
641 [[./images/empty-sense-nodes.png]]
643 ** COMMENT Bodies are composed of segments connected by joints
645 Blender is a general purpose animation tool, which has been used in
646 the past to create high quality movies such as Sintel
\cite{sintel}. Though Blender can model and render even complicated
things like water, it is crucial to keep models that are meant to
be simulated as creatures simple. =Bullet=, which =CORTEX= uses
through jMonkeyEngine3, is a rigid-body physics system. This offers
651 a compromise between the expressiveness of a game level and the
652 speed at which it can be simulated, and it means that creatures
653 should be naturally expressed as rigid components held together by
654 joint constraints.
But humans are more like a squishy bag wrapped around some hard
bones which define the overall shape. When we move, our skin bends
and stretches to accommodate the new positions of our bones.
660 One way to make bodies composed of rigid pieces connected by joints
/seem/ more human-like is to use an /armature/ (or /rigging/)
system, which defines an overall ``body mesh'' and defines how the
mesh deforms as a function of the position of each ``bone,'' which
is a standard rigid body. This technique is used extensively to
model humans and create realistic animations. It is not a good
technique for physical simulation, however, because it creates a lie
-- the skin is not a physical part of the simulation and does not
interact with any objects in the world or itself. Objects will pass
right through the skin until they come in contact with the
underlying bone, which is a physical object. Without simulating
671 the skin, the sense of touch has little meaning, and the creature's
672 own vision will lie to it about the true extent of its body.
673 Simulating the skin as a physical object requires some way to
674 continuously update the physical model of the skin along with the
675 movement of the bones, which is unacceptably slow compared to rigid
676 body simulation.
678 Therefore, instead of using the human-like ``deformable bag of
679 bones'' approach, I decided to base my body plans on multiple solid
680 objects that are connected by joints, inspired by the robot =EVE=
681 from the movie WALL-E.
683 #+caption: =EVE= from the movie WALL-E. This body plan turns
684 #+caption: out to be much better suited to my purposes than a more
685 #+caption: human-like one.
686 #+ATTR_LaTeX: :width 10cm
687 [[./images/Eve.jpg]]
689 =EVE='s body is composed of several rigid components that are held
690 together by invisible joint constraints. This is what I mean by
691 ``eve-like''. The main reason that I use eve-style bodies is for
692 efficiency, and so that there will be correspondence between the
AI's senses and the physical presence of its body. Each individual
694 section is simulated by a separate rigid body that corresponds
695 exactly with its visual representation and does not change.
696 Sections are connected by invisible joints that are well supported
697 in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
698 can efficiently simulate hundreds of rigid bodies connected by
699 joints. Just because sections are rigid does not mean they have to
700 stay as one piece forever; they can be dynamically replaced with
701 multiple sections to simulate splitting in two. This could be used
702 to simulate retractable claws or =EVE='s hands, which are able to
703 coalesce into one object in the movie.
705 *** Solidifying/Connecting a body
707 =CORTEX= creates a creature in two steps: first, it traverses the
708 nodes in the blender file and creates physical representations for
709 any of them that have mass defined in their blender meta-data.
711 #+caption: Program for iterating through the nodes in a blender file
712 #+caption: and generating physical jMonkeyEngine3 objects with mass
713 #+caption: and a matching physics shape.
714 #+name: name
715 #+begin_listing clojure
716 #+begin_src clojure
(defn physical!
  "Iterate through the nodes in creature and make them real physical
   objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
733 #+end_src
734 #+end_listing
736 The next step to making a proper body is to connect those pieces
737 together with joints. jMonkeyEngine has a large array of joints
738 available via =bullet=, such as Point2Point, Cone, Hinge, and a
739 generic Six Degree of Freedom joint, with or without spring
740 restitution.
742 Joints are treated a lot like proper senses, in that there is a
743 top-level empty node named ``joints'' whose children each
744 represent a joint.
746 #+caption: View of the hand model in Blender showing the main ``joints''
747 #+caption: node (highlighted in yellow) and its children which each
748 #+caption: represent a joint in the hand. Each joint node has metadata
749 #+caption: specifying what sort of joint it is.
750 #+name: blender-hand
751 #+ATTR_LaTeX: :width 10cm
752 [[./images/hand-screenshot1.png]]
755 =CORTEX='s procedure for binding the creature together with joints
756 is as follows:
758 - Find the children of the ``joints'' node.
759 - Determine the two spatials the joint is meant to connect.
760 - Create the joint based on the meta-data of the empty node.
762 The higher order function =sense-nodes= from =cortex.sense=
763 simplifies finding the joints based on their parent ``joints''
764 node.
#+caption: Retrieving the child empty nodes from a single named
#+caption: empty node is a common pattern in =CORTEX=; further
#+caption: instances of this technique for the senses will be
#+caption: omitted.
770 #+name: get-empty-nodes
771 #+begin_listing clojure
772 #+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
   children are considered markers for an instance of that sense. This
   function generates functions to find those children, given the name
   of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
788 #+end_src
789 #+end_listing
791 To find a joint's targets, =CORTEX= creates a small cube, centered
792 around the empty-node, and grows the cube exponentially until it
793 intersects two physical objects. The objects are ordered according
794 to the joint's rotation, with the first one being the object that
795 has more negative coordinates in the joint's reference frame.
Since the objects must be physical, the empty-node itself escapes
detection; for the same reason, =joint-targets= must be called
/after/ =physical!=.
#+caption: Program to find the targets of a joint node by
#+caption: exponential growth of a search cube.
802 #+name: joint-targets
803 #+begin_listing clojure
804 #+begin_src clojure
(defn joint-targets
  "Return the two closest objects to the joint object, ordered
  from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
829 #+end_src
830 #+end_listing
832 Once =CORTEX= finds all joints and targets, it creates them using
833 a dispatch on the metadata of each joint node.
835 #+caption: Program to dispatch on blender metadata and create joints
#+caption: suitable for physical simulation.
837 #+name: joint-dispatch
838 #+begin_listing clojure
839 #+begin_src clojure
(defmulti joint-dispatch
  "Translate blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist    (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
869 #+end_src
870 #+end_listing
All that is left is to combine the above pieces into something that
can operate on the collection of nodes that a blender file
represents.
876 #+caption: Program to completely create a joint given information
877 #+caption: from a blender file.
878 #+name: connect
879 #+begin_listing clojure
880 #+begin_src clojure
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
  'joint. The type of joint is determined by the metadata on 'joint.

   Here are some examples:
   {:type :point}
   {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
   (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

   {:type :cone :limit-xz 0
                :limit-xy 0
                :twist 0}    (use XZY rotation mode in blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
910 #+end_src
911 #+end_listing
913 In general, whenever =CORTEX= exposes a sense (or in this case
914 physicality), it provides a function of the type =sense!=, which
915 takes in a collection of nodes and augments it to support that
sense. The function returns any controls necessary to use that
sense. In this case =body!= creates a physical body and returns no
918 control functions.
920 #+caption: Program to give joints to a creature.
921 #+name: name
922 #+begin_listing clojure
923 #+begin_src clojure
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
   joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
   particulars of the joints and the masses of each body part are
   determined in blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
941 #+end_src
942 #+end_listing
944 All of the code you have just seen amounts to only 130 lines, yet
945 because it builds on top of Blender and jMonkeyEngine3, those few
946 lines pack quite a punch!
948 The hand from figure \ref{blender-hand}, which was modeled after
949 my own right hand, can now be given joints and simulated as a
950 creature.
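
As a purely hypothetical usage sketch (the loader function and model
path below are placeholders, not taken from this chapter), giving
such a model a body is a one-liner once the blender file is loaded:

#+begin_src clojure
;; Hypothetical usage: load the hand model and endow it with a
;; physical body. 'load-blender-model and the path are placeholders.
(def hand (load-blender-model "Models/test-creature/hand.blend"))
(body! hand)
#+end_src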
952 #+caption: With the ability to create physical creatures from blender,
#+caption: =CORTEX= gets one step closer to becoming a full creature
954 #+caption: simulation environment.
955 #+name: name
956 #+ATTR_LaTeX: :width 15cm
957 [[./images/physical-hand.png]]
959 ** COMMENT Eyes reuse standard video game components
961 Vision is one of the most important senses for humans, so I need to
962 build a simulated sense of vision for my AI. I will do this with
963 simulated eyes. Each eye can be independently moved and should see
964 its own version of the world depending on where it is.
966 Making these simulated eyes a reality is simple because
967 jMonkeyEngine already contains extensive support for multiple views
of the same 3D simulated world. jMonkeyEngine has this support
because it is necessary for creating games with split-screen views.
Multiple views are also used to create
971 efficient pseudo-reflections by rendering the scene from a certain
972 perspective and then projecting it back onto a surface in the 3D
973 world.
975 #+caption: jMonkeyEngine supports multiple views to enable
976 #+caption: split-screen games, like GoldenEye, which was one of
977 #+caption: the first games to use split-screen views.
978 #+name: name
979 #+ATTR_LaTeX: :width 10cm
980 [[./images/goldeneye-4-player.png]]
982 *** A Brief Description of jMonkeyEngine's Rendering Pipeline
984 jMonkeyEngine allows you to create a =ViewPort=, which represents a
985 view of the simulated world. You can create as many of these as you
986 want. Every frame, the =RenderManager= iterates through each
987 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
988 is a =FrameBuffer= which represents the rendered image in the GPU.
990 #+caption: =ViewPorts= are cameras in the world. During each frame,
991 #+caption: the =RenderManager= records a snapshot of what each view
992 #+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
993 #+name: name
994 #+ATTR_LaTeX: :width 10cm
995 [[../images/diagram_rendermanager2.png]]
997 Each =ViewPort= can have any number of attached =SceneProcessor=
998 objects, which are called every time a new frame is rendered. A
999 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
1000 whatever it wants to the data. Often this consists of invoking GPU
1001 specific operations on the rendered image. The =SceneProcessor= can
1002 also copy the GPU image data to RAM and process it with the CPU.
1004 *** Appropriating Views for Vision
1006 Each eye in the simulated creature needs its own =ViewPort= so
1007 that it can see the world from its own perspective. To this
1008 =ViewPort=, I add a =SceneProcessor= that feeds the visual data to
1009 any arbitrary continuation function for further processing. That
1010 continuation function may perform both CPU and GPU operations on
1011 the data. To make this easy for the continuation function, the
1012 =SceneProcessor= maintains appropriately sized buffers in RAM to
1013 hold the data. It does not do any copying from the GPU to the CPU
1014 itself because it is a slow operation.
#+caption: Function to make the rendered scene in jMonkeyEngine
1017 #+caption: available for further processing.
1018 #+name: pipeline-1
1019 #+begin_listing clojure
1020 #+begin_src clojure
(defn vision-pipeline
  "Create a SceneProcessor object which wraps a vision processing
  continuation function. The continuation is a function that takes
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  each of which has already been appropriately sized."
  [continuation]
  (let [byte-buffer (atom nil)
        renderer (atom nil)
        image (atom nil)]
    (proxy [SceneProcessor] []
      (initialize
        [renderManager viewPort]
        (let [cam (.getCamera viewPort)
              width (.getWidth cam)
              height (.getHeight cam)]
          (reset! renderer (.getRenderer renderManager))
          (reset! byte-buffer
                  (BufferUtils/createByteBuffer
                   (* width height 4)))
          (reset! image (BufferedImage.
                         width height
                         BufferedImage/TYPE_4BYTE_ABGR))))
      (isInitialized [] (not (nil? @byte-buffer)))
      (reshape [_ _ _])
      (preFrame [_])
      (postQueue [_])
      (postFrame
        [#^FrameBuffer fb]
        (.clear @byte-buffer)
        (continuation @renderer fb @byte-buffer @image))
      (cleanup []))))
1052 #+end_src
1053 #+end_listing
1055 The continuation function given to =vision-pipeline= above will be
1056 given a =Renderer= and three containers for image data. The
1057 =FrameBuffer= references the GPU image data, but the pixel data
1058 can not be used directly on the CPU. The =ByteBuffer= and
1059 =BufferedImage= are initially "empty" but are sized to hold the
1060 data in the =FrameBuffer=. I call transferring the GPU image data
1061 to the CPU structures "mixing" the image data.
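
A minimal sketch of what that mixing step might look like is given
below; it assumes jMonkeyEngine's =Screenshots= utility and is not
necessarily identical to the =BufferedImage!= function used later in
this thesis.

#+begin_src clojure
;; Sketch of "mixing": pull the GPU frame into the ByteBuffer, then
;; unpack the ByteBuffer into the BufferedImage for CPU processing.
(import '(com.jme3.renderer Renderer)
        '(com.jme3.texture FrameBuffer)
        '(com.jme3.util Screenshots)
        '(java.nio ByteBuffer)
        '(java.awt.image BufferedImage))

(defn mix-image-data
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (.readFrameBuffer r fb bb)             ; GPU -> ByteBuffer
  (Screenshots/convertScreenShot bb bi)  ; ByteBuffer -> BufferedImage
  bi)
#+end_src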
1063 *** Optical sensor arrays are described with images and referenced with metadata
1065 The vision pipeline described above handles the flow of rendered
1066 images. Now, =CORTEX= needs simulated eyes to serve as the source
1067 of these images.
Eyes are described in blender in the same way as joints: they are
zero-dimensional empty objects with no geometry whose local
coordinate system determines the orientation of the resulting eye.
1072 All eyes are children of a parent node named "eyes" just as all
1073 joints have a parent named "joints". An eye binds to the nearest
1074 physical object with =bind-sense=.
1076 #+caption: Here, the camera is created based on metadata on the
1077 #+caption: eye-node and attached to the nearest physical object
1078 #+caption: with =bind-sense=
1079 #+name: add-eye
#+begin_listing clojure
#+begin_src clojure
(defn add-eye!
  "Create a Camera centered on the current position of 'eye which
   follows the closest physical node in 'creature. The camera will
   point in the X direction and use the Z vector as up as determined
   by the rotation of these vectors in blender coordinate space. Use
   XZY rotation for the node in blender."
  [#^Node creature #^Spatial eye]
  (let [target (closest-node creature eye)
        [cam-width cam-height]
        ;;[640 480] ;; graphics card on laptop doesn't support
        ;;          ;; arbitrary dimensions.
        (eye-dimensions eye)
        cam (Camera. cam-width cam-height)
        rot (.getWorldRotation eye)]
    (.setLocation cam (.getWorldTranslation eye))
    (.lookAtDirection
     cam                          ; this part is not a mistake and
     (.mult rot Vector3f/UNIT_X)  ; is consistent with using Z in
     (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
    (.setFrustumPerspective
     cam (float 45)
     (float (/ (.getWidth cam) (.getHeight cam)))
     (float 1)
     (float 1000))
    (bind-sense target cam) cam))
#+end_src
#+end_listing
1108 *** Simulated Retina
1110 An eye is a surface (the retina) which contains many discrete
1111 sensors to detect light. These sensors can have different
1112 light-sensing properties. In humans, each discrete sensor is
1113 sensitive to red, blue, green, or gray. These different types of
1114 sensors can have different spatial distributions along the retina.
1115 In humans, there is a fovea in the center of the retina which has
1116 a very high density of color sensors, and a blind spot which has
1117 no sensors at all. Sensor density decreases in proportion to
1118 distance from the fovea.
1120 I want to be able to model any retinal configuration, so my
1121 eye-nodes in blender contain metadata pointing to images that
1122 describe the precise position of the individual sensors using
1123 white pixels. The meta-data also describes the precise sensitivity
1124 to light that the sensors described in the image have. An eye can
1125 contain any number of these images. For example, the metadata for
1126 an eye might look like this:
1128 #+begin_src clojure
1129 {0xFF0000 "Models/test-creature/retina-small.png"}
1130 #+end_src
1132 #+caption: An example retinal profile image. White pixels are
1133 #+caption: photo-sensitive elements. The distribution of white
1134 #+caption: pixels is denser in the middle and falls off at the
1135 #+caption: edges and is inspired by the human retina.
1136 #+name: retina
1137 #+ATTR_LaTeX: :width 10cm
1138 [[./images/retina-small.png]]
Together, the number 0xFF0000 and the image above describe
1141 the placement of red-sensitive sensory elements.
1143 Meta-data to very crudely approximate a human eye might be
1144 something like this:
1146 #+begin_src clojure
(let [retinal-profile "Models/test-creature/retina-small.png"]
  {0xFF0000 retinal-profile
   0x00FF00 retinal-profile
   0x0000FF retinal-profile
   0xFFFFFF retinal-profile})
1152 #+end_src
1154 The numbers that serve as keys in the map determine a sensor's
1155 relative sensitivity to the channels red, green, and blue. These
1156 sensitivity values are packed into an integer in the order
1157 =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the
1158 image are added together with these sensitivities as linear
1159 weights. Therefore, 0xFF0000 means sensitive to red only while
1160 0xFFFFFF means sensitive to all colors equally (gray).
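
A small sketch of how such a packed sensitivity value might be
combined with a pixel is shown below; the function name and
normalization are illustrative assumptions, not necessarily the
=pixel-sense= used later.

#+begin_src clojure
;; Illustrative sketch: weight a pixel's R, G, B channels by the 8-bit
;; weights packed into 'sensitivity (|_|R|G|B|), normalized to [0,1].
(defn pixel-sense-sketch
  [sensitivity rgb]
  (let [channel  (fn [x shift] (bit-and 0xFF (bit-shift-right x shift)))
        weights  (map #(channel sensitivity %) [16 8 0])
        values   (map #(channel rgb %) [16 8 0])
        response (reduce + (map * weights values))
        maximum  (reduce + (map #(* 255 %) weights))]
    (if (zero? maximum) 0.0 (double (/ response maximum)))))
#+end_src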
1162 #+caption: This is the core of vision in =CORTEX=. A given eye node
1163 #+caption: is converted into a function that returns visual
1164 #+caption: information from the simulation.
1165 #+name: vision-kernel
#+begin_listing clojure
#+begin_src clojure
(defn vision-kernel
  "Returns a list of functions, each of which will return a color
   channel's worth of visual information when called inside a running
   simulation."
  [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
  (let [retinal-map (retina-sensor-profile eye)
        camera (add-eye! creature eye)
        vision-image
        (atom
         (BufferedImage. (.getWidth camera)
                         (.getHeight camera)
                         BufferedImage/TYPE_BYTE_BINARY))
        register-eye!
        (runonce
         (fn [world]
           (add-camera!
            world camera
            (let [counter (atom 0)]
              (fn [r fb bb bi]
                (if (zero? (rem (swap! counter inc) (inc skip)))
                  (reset! vision-image
                          (BufferedImage! r fb bb bi))))))))]
    (vec
     (map
      (fn [[key image]]
        (let [whites (white-coordinates image)
              topology (vec (collapse whites))
              sensitivity (sensitivity-presets key key)]
          (attached-viewport.
           (fn [world]
             (register-eye! world)
             (vector
              topology
              (vec
               (for [[x y] whites]
                 (pixel-sense
                  sensitivity
                  (.getRGB @vision-image x y))))))
           register-eye!)))
      retinal-map))))
#+end_src
#+end_listing
1209 Note that since each of the functions generated by =vision-kernel=
1210 shares the same =register-eye!= function, the eye will be
1211 registered only once the first time any of the functions from the
1212 list returned by =vision-kernel= is called. Each of the functions
returned by =vision-kernel= also allows access to the =ViewPort=
1214 through which it receives images.
1216 All the hard work has been done; all that remains is to apply
1217 =vision-kernel= to each eye in the creature and gather the results
1218 into one list of functions.
1221 #+caption: With =vision!=, =CORTEX= is already a fine simulation
1222 #+caption: environment for experimenting with different types of
1223 #+caption: eyes.
1224 #+name: vision!
#+begin_listing clojure
#+begin_src clojure
(defn vision!
  "Returns a list of functions, each of which returns visual sensory
   data when called inside a running simulation."
  [#^Node creature & {skip :skip :or {skip 0}}]
  (reduce
   concat
   (for [eye (eyes creature)]
     (vision-kernel creature eye))))
#+end_src
#+end_listing
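
A hypothetical usage sketch, based only on the signatures above (the
creature name is a placeholder): collect the vision functions once,
then call each of them with the running world to obtain a
[topology sensor-values] pair per channel.

#+begin_src clojure
;; Hypothetical usage of vision!; 'worm stands in for any creature node.
(def vision-fns (vision! worm))

(defn sense-vision
  "Return one [topology sensor-values] pair for each color channel of
   each eye, given the running world."
  [world]
  (map (fn [channel] (channel world)) vision-fns))
#+end_src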
1236 #+caption: Simulated vision with a test creature and the
1237 #+caption: human-like eye approximation. Notice how each channel
1238 #+caption: of the eye responds differently to the differently
1239 #+caption: colored balls.
1240 #+name: worm-vision-test.
1241 #+ATTR_LaTeX: :width 13cm
1242 [[./images/worm-vision.png]]
1244 The vision code is not much more complicated than the body code,
1245 and enables multiple further paths for simulated vision. For
1246 example, it is quite easy to create bifocal vision -- you just
1247 make two eyes next to each other in blender! It is also possible
1248 to encode vision transforms in the retinal files. For example, the
human-like retina file in figure \ref{retina} approximates a
1250 log-polar transform.
1252 This vision code has already been absorbed by the jMonkeyEngine
1253 community and is now (in modified form) part of a system for
1254 capturing in-game video to a file.
1256 ** COMMENT Hearing is hard; =CORTEX= does it right
1258 At the end of this section I will have simulated ears that work the
1259 same way as the simulated eyes in the last section. I will be able to
1260 place any number of ear-nodes in a blender file, and they will bind to
1261 the closest physical object and follow it as it moves around. Each ear
1262 will provide access to the sound data it picks up between every frame.
1264 Hearing is one of the more difficult senses to simulate, because there
1265 is less support for obtaining the actual sound data that is processed
1266 by jMonkeyEngine3. There is no "split-screen" support for rendering
1267 sound from different points of view, and there is no way to directly
1268 access the rendered sound data.
=CORTEX='s hearing is unique because it does not suffer from the
limitations of other simulation environments. As far as I know,
there is no other system that supports multiple listeners, and the
sound demo at the end of this section is the first time this has
been done in a video game environment.
1276 *** Brief Description of jMonkeyEngine's Sound System
1278 jMonkeyEngine's sound system works as follows:
1280 - jMonkeyEngine uses the =AppSettings= for the particular
1281 application to determine what sort of =AudioRenderer= should be
1282 used.
- Although some support is provided for multiple =AudioRenderer=
backends, jMonkeyEngine at the time of this writing will either
pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
1286 - jMonkeyEngine tries to figure out what sort of system you're
1287 running and extracts the appropriate native libraries.
1288 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
1290 - =OpenAL= renders the 3D sound and feeds the rendered sound
1291 directly to any of various sound output devices with which it
1292 knows how to communicate.
1294 A consequence of this is that there's no way to access the actual
1295 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
1296 one /listener/ (it renders sound data from only one perspective),
1297 which normally isn't a problem for games, but becomes a problem
1298 when trying to make multiple AI creatures that can each hear the
1299 world from a different perspective.
1301 To make many AI creatures in jMonkeyEngine that can each hear the
1302 world from their own perspective, or to make a single creature with
1303 many ears, it is necessary to go all the way back to =OpenAL= and
1304 implement support for simulated hearing there.
*** Extending =OpenAL=
1308 Extending =OpenAL= to support multiple listeners requires 500
1309 lines of =C= code and is too hairy to mention here. Instead, I
will show a small amount of extension code and go over the
high-level strategy. The full source is of course available with the
1312 =CORTEX= distribution if you're interested.
1314 =OpenAL= goes to great lengths to support many different systems,
1315 all with different sound capabilities and interfaces. It
1316 accomplishes this difficult task by providing code for many
1317 different sound backends in pseudo-objects called /Devices/.
1318 There's a device for the Linux Open Sound System and the Advanced
1319 Linux Sound Architecture, there's one for Direct Sound on Windows,
1320 and there's even one for Solaris. =OpenAL= solves the problem of
1321 platform independence by providing all these Devices.
1323 Wrapper libraries such as LWJGL are free to examine the system on
1324 which they are running and then select an appropriate device for
1325 that system.
1327 There are also a few "special" devices that don't interface with
1328 any particular system. These include the Null Device, which
1329 doesn't do anything, and the Wave Device, which writes whatever
1330 sound it receives to a file, if everything has been set up
1331 correctly when configuring =OpenAL=.
Actual mixing of the sound data (Doppler shift, plus distance- and
environment-based attenuation) happens in the Devices, and they
1335 are the only point in the sound rendering process where this data
1336 is available.
1338 Therefore, in order to support multiple listeners, and get the
1339 sound data in a form that the AIs can use, it is necessary to
1340 create a new Device which supports this feature.
1342 Adding a device to OpenAL is rather tricky -- there are five
1343 separate files in the =OpenAL= source tree that must be modified
1344 to do so. I named my device the "Multiple Audio Send" Device, or
1345 =Send= Device for short, since it sends audio data back to the
1346 calling application like an Aux-Send cable on a mixing board.
1348 The main idea behind the Send device is to take advantage of the
1349 fact that LWJGL only manages one /context/ when using OpenAL. A
1350 /context/ is like a container that holds samples and keeps track
1351 of where the listener is. In order to support multiple listeners,
1352 the Send device identifies the LWJGL context as the master
1353 context, and creates any number of slave contexts to represent
1354 additional listeners. Every time the device renders sound, it
1355 synchronizes every source from the master LWJGL context to the
1356 slave contexts. Then, it renders each context separately, using a
1357 different listener for each one. The rendered sound is made
1358 available via JNI to jMonkeyEngine.
1360 Switching between contexts is not the normal operation of a
1361 Device, and one of the problems with doing so is that a Device
normally keeps around a few pieces of state, such as its
=ClickRemoval= array, which will become corrupted if the
1364 contexts are not rendered in parallel. The solution is to create a
1365 copy of this normally global device state for each context, and
1366 copy it back and forth into and out of the actual device state
1367 whenever a context is rendered.
1369 The core of the =Send= device is the =syncSources= function, which
1370 does the job of copying all relevant data from one context to
1371 another.
1373 #+caption: Program for extending =OpenAL= to support multiple
1374 #+caption: listeners via context copying/switching.
1375 #+name: sync-openal-sources
1376 #+begin_listing C
1377 void syncSources(ALsource *masterSource, ALsource *slaveSource,
1378 ALCcontext *masterCtx, ALCcontext *slaveCtx){
1379 ALuint master = masterSource->source;
1380 ALuint slave = slaveSource->source;
1381 ALCcontext *current = alcGetCurrentContext();
1383 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
1384 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
1385 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
1386 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
1387 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
1388 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
1389 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
1390 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
1391 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
1392 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
1393 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
1394 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
1395 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
1397 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
1398 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
1399 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
1401 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
1402 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
1404 alcMakeContextCurrent(masterCtx);
1405 ALint source_type;
1406 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
1408 // Only static sources are currently synchronized!
1409 if (AL_STATIC == source_type){
1410 ALint master_buffer;
1411 ALint slave_buffer;
1412 alGetSourcei(master, AL_BUFFER, &master_buffer);
1413 alcMakeContextCurrent(slaveCtx);
1414 alGetSourcei(slave, AL_BUFFER, &slave_buffer);
if (master_buffer != slave_buffer){
alSourcei(slave, AL_BUFFER, master_buffer);
}
}

// Synchronize the state of the two sources.
1421 alcMakeContextCurrent(masterCtx);
1422 ALint masterState;
1423 ALint slaveState;
1425 alGetSourcei(master, AL_SOURCE_STATE, &masterState);
1426 alcMakeContextCurrent(slaveCtx);
1427 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
1429 if (masterState != slaveState){
switch (masterState){
case AL_INITIAL : alSourceRewind(slave); break;
case AL_PLAYING : alSourcePlay(slave); break;
case AL_PAUSED : alSourcePause(slave); break;
case AL_STOPPED : alSourceStop(slave); break;
}
}
// Restore whatever context was previously active.
alcMakeContextCurrent(current);
}
1440 #+end_listing
1442 With this special context-switching device, and some ugly JNI
1443 bindings that are not worth mentioning, =CORTEX= gains the ability
1444 to access multiple sound streams from =OpenAL=.
1446 #+caption: Program to create an ear from a blender empty node. The ear
1447 #+caption: follows around the nearest physical object and passes
1448 #+caption: all sensory data to a continuation function.
1449 #+name: add-ear
1450 #+begin_listing clojure
1451 (defn add-ear!
1452 "Create a Listener centered on the current position of 'ear
1453 which follows the closest physical node in 'creature and
1454 sends sound data to 'continuation."
1455 [#^Application world #^Node creature #^Spatial ear continuation]
1456 (let [target (closest-node creature ear)
1457 lis (Listener.)
1458 audio-renderer (.getAudioRenderer world)
1459 sp (hearing-pipeline continuation)]
1460 (.setLocation lis (.getWorldTranslation ear))
1461 (.setRotation lis (.getWorldRotation ear))
1462 (bind-sense target lis)
1463 (update-listener-velocity! target lis)
1464 (.addListener audio-renderer lis)
1465 (.registerSoundProcessor audio-renderer lis sp)))
1466 #+end_listing
1469 The =Send= device, unlike most of the other devices in =OpenAL=,
1470 does not render sound unless asked. This enables the system to
1471 slow down or speed up depending on the needs of the AIs who are
1472 using it to listen. If the device tried to render samples in
1473 real-time, a complicated AI whose mind takes 100 seconds of
1474 computer time to simulate 1 second of AI-time would miss almost
1475 all of the sound in its environment!
1477 #+caption: Program to enable arbitrary hearing in =CORTEX=
1478 #+name: hearing
1479 #+begin_listing clojure
1480 (defn hearing-kernel
1481 "Returns a function which returns auditory sensory data when called
1482 inside a running simulation."
1483 [#^Node creature #^Spatial ear]
1484 (let [hearing-data (atom [])
1485 register-listener!
1486 (runonce
1487 (fn [#^Application world]
1488 (add-ear!
1489 world creature ear
1490 (comp #(reset! hearing-data %)
1491 byteBuffer->pulse-vector))))]
1492 (fn [#^Application world]
1493 (register-listener! world)
1494 (let [data @hearing-data
1495 topology
1496 (vec (map #(vector % 0) (range 0 (count data))))]
1497 [topology data]))))
1499 (defn hearing!
1500 "Endow the creature in a particular world with the sense of
1501 hearing. Will return a sequence of functions, one for each ear,
1502 which when called will return the auditory data from that ear."
1503 [#^Node creature]
1504 (for [ear (ears creature)]
1505 (hearing-kernel creature ear)))
1506 #+end_listing
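As a rough illustration of how these functions are meant to be used
(the model path and helper name below are hypothetical, not part of
the =CORTEX= test suite), a creature's ears can be attached and
polled like this:

#+begin_src clojure
(comment
  ;; Hypothetical usage sketch; the blend file path is illustrative.
  (let [creature (load-blender-model "Models/test-creature/creature.blend")
        ear-fns  (hearing! creature)]
    ;; Inside a running simulation, world is the Application.
    (defn loudest-sample [world]
      (let [[topology data] ((first ear-fns) world)]
        (if (seq data) (apply max data) 0.0)))))
#+end_src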
Armed with these functions, =CORTEX= is able to demonstrate what is
possibly the first ever instance of multiple listeners in a
simulation based on a video game engine!
1512 #+caption: Here a simple creature responds to sound by changing
1513 #+caption: its color from gray to green when the total volume
1514 #+caption: goes over a threshold.
1515 #+name: sound-test
1516 #+begin_listing java
1517 /**
1518 * Respond to sound! This is the brain of an AI entity that
1519 * hears its surroundings and reacts to them.
1520 */
1521 public void process(ByteBuffer audioSamples,
1522 int numSamples, AudioFormat format) {
1523 audioSamples.clear();
1524 byte[] data = new byte[numSamples];
1525 float[] out = new float[numSamples];
1526 audioSamples.get(data);
1527 FloatSampleTools.
1528 byte2floatInterleaved
1529 (data, 0, out, 0, numSamples/format.getFrameSize(), format);
1531 float max = Float.NEGATIVE_INFINITY;
1532 for (float f : out){if (f > max) max = f;}
1533 audioSamples.clear();
if (max > 0.1){
entity.getMaterial().setColor("Color", ColorRGBA.Green);
}
else {
entity.getMaterial().setColor("Color", ColorRGBA.Gray);
}
}
1541 #+end_listing
#+caption: First ever simulation of multiple listeners in =CORTEX=.
1544 #+caption: Each cube is a creature which processes sound data with
1545 #+caption: the =process= function from listing \ref{sound-test}.
#+caption: The ball is constantly emitting a pure tone of
1547 #+caption: constant volume. As it approaches the cubes, they each
1548 #+caption: change color in response to the sound.
#+name: sound-cubes
1550 #+ATTR_LaTeX: :width 10cm
1551 [[./images/aurellem-gray.png]]
1553 This system of hearing has also been co-opted by the
1554 jMonkeyEngine3 community and is used to record audio for demo
1555 videos.
1557 ** COMMENT Touch uses hundreds of hair-like elements
1559 Touch is critical to navigation and spatial reasoning and as such I
1560 need a simulated version of it to give to my AI creatures.
1562 Human skin has a wide array of touch sensors, each of which
specializes in detecting different vibrational modes and pressures.
1564 These sensors can integrate a vast expanse of skin (i.e. your
1565 entire palm), or a tiny patch of skin at the tip of your finger.
1566 The hairs of the skin help detect objects before they even come
1567 into contact with the skin proper.
However, touch in my simulated world cannot exactly correspond to
1570 human touch because my creatures are made out of completely rigid
1571 segments that don't deform like human skin.
1573 Instead of measuring deformation or vibration, I surround each
1574 rigid part with a plenitude of hair-like objects (/feelers/) which
1575 do not interact with the physical world. Physical objects can pass
1576 through them with no effect. The feelers are able to tell when
1577 other objects pass through them, and they constantly report how
1578 much of their extent is covered. So even though the creature's body
1579 parts do not deform, the feelers create a margin around those body
1580 parts which achieves a sense of touch which is a hybrid between a
1581 human's sense of deformation and sense from hairs.
1583 Implementing touch in jMonkeyEngine follows a different technical
1584 route than vision and hearing. Those two senses piggybacked off
1585 jMonkeyEngine's 3D audio and video rendering subsystems. To
1586 simulate touch, I use jMonkeyEngine's physics system to execute
1587 many small collision detections, one for each feeler. The placement
1588 of the feelers is determined by a UV-mapped image which shows where
1589 each feeler should be on the 3D surface of the body.
1591 *** Defining Touch Meta-Data in Blender
Each geometry can have a single UV map which describes the
position of the feelers that will constitute its sense of touch.
The path to this image is stored under the ``touch'' key. The image
itself is black and white, with black meaning a feeler length of 0
(no feeler is present) and white meaning a feeler length of
=scale=, a float stored under the ``scale'' key.
#+caption: Touch does not use empty nodes to store metadata,
1601 #+caption: because the metadata of each solid part of a
1602 #+caption: creature's body is sufficient.
1603 #+name: touch-meta-data
1604 #+begin_listing clojure
1605 #+BEGIN_SRC clojure
1606 (defn tactile-sensor-profile
1607 "Return the touch-sensor distribution image in BufferedImage format,
1608 or nil if it does not exist."
1609 [#^Geometry obj]
1610 (if-let [image-path (meta-data obj "touch")]
1611 (load-image image-path)))
1613 (defn tactile-scale
1614 "Return the length of each feeler. Default scale is 0.01
1615 jMonkeyEngine units."
1616 [#^Geometry obj]
1617 (if-let [scale (meta-data obj "scale")]
1618 scale 0.1))
1619 #+END_SRC
1620 #+end_listing
1622 Here is an example of a UV-map which specifies the position of
1623 touch sensors along the surface of the upper segment of a fingertip.
1625 #+caption: This is the tactile-sensor-profile for the upper segment
1626 #+caption: of a fingertip. It defines regions of high touch sensitivity
1627 #+caption: (where there are many white pixels) and regions of low
1628 #+caption: sensitivity (where white pixels are sparse).
#+name: fingertip-UV
1630 #+ATTR_LaTeX: :width 13cm
1631 [[./images/finger-UV.png]]
1633 *** Implementation Summary
To simulate touch there are three conceptual steps. For each solid
object in the creature, you first get the UV image and scale
parameter which define the position and length of the feelers.
Then, you use the triangles which comprise the mesh, together with
the UV data stored in the mesh, to determine the world-space
position and orientation of each feeler. Finally, once every frame,
you update these positions and orientations to match the current
position and orientation of the object, and use physics collision
detection to gather tactile data.
1645 Extracting the meta-data has already been described. The third
1646 step, physics collision detection, is handled in =touch-kernel=.
1647 Translating the positions and orientations of the feelers from the
1648 UV-map to world-space is itself a three-step process.
- Find the triangles which make up the mesh in pixel-space and in
world-space (=triangles=, =pixel-triangles=).
1653 - Find the coordinates of each feeler in world-space. These are
1654 the origins of the feelers. (=feeler-origins=).
- Calculate the unit normals of the triangles in world space, and
add them to the origins of the feelers. These are the coordinates
of the tips of the feelers (=feeler-tips=).
1661 *** Triangle Math
1663 The rigid objects which make up a creature have an underlying
1664 =Geometry=, which is a =Mesh= plus a =Material= and other
1665 important data involved with displaying the object.
1667 A =Mesh= is composed of =Triangles=, and each =Triangle= has three
1668 vertices which have coordinates in world space and UV space.
1670 Here, =triangles= gets all the world-space triangles which
1671 comprise a mesh, while =pixel-triangles= gets those same triangles
1672 expressed in pixel coordinates (which are UV coordinates scaled to
1673 fit the height and width of the UV image).
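For example, on a hypothetical 64 x 64 pixel profile image, a vertex
with UV coordinates (0.5, 0.25) lands at pixel coordinates (32, 16).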
1675 #+caption: Programs to extract triangles from a geometry and get
#+caption: their vertices in both world and UV-coordinates.
1677 #+name: get-triangles
1678 #+begin_listing clojure
1679 #+BEGIN_SRC clojure
1680 (defn triangle
1681 "Get the triangle specified by triangle-index from the mesh."
1682 [#^Geometry geo triangle-index]
1683 (triangle-seq
1684 (let [scratch (Triangle.)]
1685 (.getTriangle (.getMesh geo) triangle-index scratch) scratch)))
1687 (defn triangles
1688 "Return a sequence of all the Triangles which comprise a given
1689 Geometry."
1690 [#^Geometry geo]
1691 (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo)))))
1693 (defn triangle-vertex-indices
1694 "Get the triangle vertex indices of a given triangle from a given
1695 mesh."
1696 [#^Mesh mesh triangle-index]
1697 (let [indices (int-array 3)]
1698 (.getTriangle mesh triangle-index indices)
1699 (vec indices)))
1701 (defn vertex-UV-coord
1702 "Get the UV-coordinates of the vertex named by vertex-index"
1703 [#^Mesh mesh vertex-index]
1704 (let [UV-buffer
1705 (.getData
1706 (.getBuffer
1707 mesh
1708 VertexBuffer$Type/TexCoord))]
1709 [(.get UV-buffer (* vertex-index 2))
1710 (.get UV-buffer (+ 1 (* vertex-index 2)))]))
1712 (defn pixel-triangle [#^Geometry geo image index]
1713 (let [mesh (.getMesh geo)
1714 width (.getWidth image)
1715 height (.getHeight image)]
1716 (vec (map (fn [[u v]] (vector (* width u) (* height v)))
1717 (map (partial vertex-UV-coord mesh)
1718 (triangle-vertex-indices mesh index))))))
1720 (defn pixel-triangles
1721 "The pixel-space triangles of the Geometry, in the same order as
1722 (triangles geo)"
1723 [#^Geometry geo image]
1724 (let [height (.getHeight image)
1725 width (.getWidth image)]
1726 (map (partial pixel-triangle geo image)
1727 (range (.getTriangleCount (.getMesh geo))))))
1728 #+END_SRC
1729 #+end_listing
1731 *** The Affine Transform from one Triangle to Another
1733 =pixel-triangles= gives us the mesh triangles expressed in pixel
1734 coordinates and =triangles= gives us the mesh triangles expressed
1735 in world coordinates. The tactile-sensor-profile gives the
1736 position of each feeler in pixel-space. In order to convert
1737 pixel-space coordinates into world-space coordinates we need
1738 something that takes coordinates on the surface of one triangle
1739 and gives the corresponding coordinates on the surface of another
1740 triangle.
Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which means any triangle can be transformed
into any other by an affine map: a combination of translation,
rotation, scaling, and shearing. The affine transformation from one
triangle to another is readily computable if each triangle is
expressed as a $4 \times 4$ matrix.
1748 #+BEGIN_LaTeX
1749 $$
1750 \begin{bmatrix}
1751 x_1 & x_2 & x_3 & n_x \\
1752 y_1 & y_2 & y_3 & n_y \\
1753 z_1 & z_2 & z_3 & n_z \\
1754 1 & 1 & 1 & 1
1755 \end{bmatrix}
1756 $$
1757 #+END_LaTeX
1759 Here, the first three columns of the matrix are the vertices of
1760 the triangle. The last column is the right-handed unit normal of
1761 the triangle.
1763 With two triangles $T_{1}$ and $T_{2}$ each expressed as a
1764 matrix like above, the affine transform from $T_{1}$ to $T_{2}$
1765 is $T_{2}T_{1}^{-1}$.
The Clojure code below recapitulates the formulas above, using
1768 jMonkeyEngine's =Matrix4f= objects, which can describe any affine
1769 transformation.
#+caption: Program to interpret triangles as affine transforms.
1772 #+name: triangle-affine
1773 #+begin_listing clojure
1774 #+BEGIN_SRC clojure
1775 (defn triangle->matrix4f
1776 "Converts the triangle into a 4x4 matrix: The first three columns
1777 contain the vertices of the triangle; the last contains the unit
1778 normal of the triangle. The bottom row is filled with 1s."
1779 [#^Triangle t]
1780 (let [mat (Matrix4f.)
1781 [vert-1 vert-2 vert-3]
1782 (mapv #(.get t %) (range 3))
1783 unit-normal (do (.calculateNormal t)(.getNormal t))
1784 vertices [vert-1 vert-2 vert-3 unit-normal]]
1785 (dorun
1786 (for [row (range 4) col (range 3)]
1787 (do
1788 (.set mat col row (.get (vertices row) col))
1789 (.set mat 3 row 1)))) mat))
1791 (defn triangles->affine-transform
1792 "Returns the affine transformation that converts each vertex in the
1793 first triangle into the corresponding vertex in the second
1794 triangle."
1795 [#^Triangle tri-1 #^Triangle tri-2]
1796 (.mult
1797 (triangle->matrix4f tri-2)
1798 (.invert (triangle->matrix4f tri-1))))
1799 #+END_SRC
1800 #+end_listing
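As a quick sanity check (this snippet is illustrative and not part
of =CORTEX=), the transform built from one triangle to another
should map each vertex of the first triangle onto the corresponding
vertex of the second:

#+begin_src clojure
(comment
  ;; Illustrative only: map the unit right triangle onto a triangle
  ;; twice its size and confirm that a vertex lands where expected.
  (let [tri-1 (Triangle. (Vector3f. 0.0 0.0 0.0)
                         (Vector3f. 1.0 0.0 0.0)
                         (Vector3f. 0.0 1.0 0.0))
        tri-2 (Triangle. (Vector3f. 0.0 0.0 0.0)
                         (Vector3f. 2.0 0.0 0.0)
                         (Vector3f. 0.0 2.0 0.0))
        transform (triangles->affine-transform tri-1 tri-2)]
    ;; ==> approximately (2.0, 0.0, 0.0)
    (.mult transform (Vector3f. 1.0 0.0 0.0))))
#+end_src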
1802 *** Triangle Boundaries
For efficiency's sake I will divide the tactile-profile image into
small rectangles which bound each pixel-triangle, then extract the
points which lie inside the triangle and map them to 3D-space using
=triangles->affine-transform= above. To do this I need a function,
=convex-bounds=, which finds the smallest box containing a 2D
triangle.
1811 =inside-triangle?= determines whether a point is inside a triangle
1812 in 2D pixel-space.
#+caption: Program to efficiently determine point inclusion
1815 #+caption: in a triangle.
1816 #+name: in-triangle
1817 #+begin_listing clojure
1818 #+BEGIN_SRC clojure
1819 (defn convex-bounds
1820 "Returns the smallest square containing the given vertices, as a
1821 vector of integers [left top width height]."
1822 [verts]
1823 (let [xs (map first verts)
1824 ys (map second verts)
1825 x0 (Math/floor (apply min xs))
1826 y0 (Math/floor (apply min ys))
1827 x1 (Math/ceil (apply max xs))
1828 y1 (Math/ceil (apply max ys))]
1829 [x0 y0 (- x1 x0) (- y1 y0)]))
1831 (defn same-side?
1832 "Given the points p1 and p2 and the reference point ref, is point p
1833 on the same side of the line that goes through p1 and p2 as ref is?"
1834 [p1 p2 ref p]
(<= 0
1837 (.dot
1838 (.cross (.subtract p2 p1) (.subtract p p1))
1839 (.cross (.subtract p2 p1) (.subtract ref p1)))))
1841 (defn inside-triangle?
1842 "Is the point inside the triangle?"
1843 {:author "Dylan Holmes"}
1844 [#^Triangle tri #^Vector3f p]
1845 (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]]
1846 (and
1847 (same-side? vert-1 vert-2 vert-3 p)
1848 (same-side? vert-2 vert-3 vert-1 p)
1849 (same-side? vert-3 vert-1 vert-2 p))))
1850 #+END_SRC
1851 #+end_listing
1853 *** Feeler Coordinates
1855 The triangle-related functions above make short work of
1856 calculating the positions and orientations of each feeler in
1857 world-space.
#+caption: Program to get the coordinates of ``feelers'' in
1860 #+caption: both world and UV-coordinates.
1861 #+name: feeler-coordinates
1862 #+begin_listing clojure
1863 #+BEGIN_SRC clojure
1864 (defn feeler-pixel-coords
1865 "Returns the coordinates of the feelers in pixel space in lists, one
1866 list for each triangle, ordered in the same way as (triangles) and
1867 (pixel-triangles)."
1868 [#^Geometry geo image]
1869 (map
1870 (fn [pixel-triangle]
1871 (filter
1872 (fn [coord]
1873 (inside-triangle? (->triangle pixel-triangle)
1874 (->vector3f coord)))
1875 (white-coordinates image (convex-bounds pixel-triangle))))
1876 (pixel-triangles geo image)))
1878 (defn feeler-world-coords
1879 "Returns the coordinates of the feelers in world space in lists, one
1880 list for each triangle, ordered in the same way as (triangles) and
1881 (pixel-triangles)."
1882 [#^Geometry geo image]
1883 (let [transforms
1884 (map #(triangles->affine-transform
1885 (->triangle %1) (->triangle %2))
1886 (pixel-triangles geo image)
1887 (triangles geo))]
1888 (map (fn [transform coords]
1889 (map #(.mult transform (->vector3f %)) coords))
1890 transforms (feeler-pixel-coords geo image))))
1891 #+END_SRC
1892 #+end_listing
1894 #+caption: Program to get the position of the base and tip of
1895 #+caption: each ``feeler''
1896 #+name: feeler-tips
1897 #+begin_listing clojure
1898 #+BEGIN_SRC clojure
1899 (defn feeler-origins
1900 "The world space coordinates of the root of each feeler."
1901 [#^Geometry geo image]
1902 (reduce concat (feeler-world-coords geo image)))
1904 (defn feeler-tips
1905 "The world space coordinates of the tip of each feeler."
1906 [#^Geometry geo image]
1907 (let [world-coords (feeler-world-coords geo image)
1908 normals
1909 (map
1910 (fn [triangle]
1911 (.calculateNormal triangle)
1912 (.clone (.getNormal triangle)))
1913 (map ->triangle (triangles geo)))]
1915 (mapcat (fn [origins normal]
1916 (map #(.add % normal) origins))
1917 world-coords normals)))
1919 (defn touch-topology
1920 [#^Geometry geo image]
1921 (collapse (reduce concat (feeler-pixel-coords geo image))))
1922 #+END_SRC
1923 #+end_listing
1925 *** Simulated Touch
1927 Now that the functions to construct feelers are complete,
1928 =touch-kernel= generates functions to be called from within a
1929 simulation that perform the necessary physics collisions to
collect tactile data, and =touch!= applies =touch-kernel= to every
geometry in the creature.
1933 #+caption: Efficient program to transform a ray from
1934 #+caption: one position to another.
1935 #+name: set-ray
1936 #+begin_listing clojure
1937 #+BEGIN_SRC clojure
1938 (defn set-ray [#^Ray ray #^Matrix4f transform
1939 #^Vector3f origin #^Vector3f tip]
1940 ;; Doing everything locally reduces garbage collection by enough to
1941 ;; be worth it.
1942 (.mult transform origin (.getOrigin ray))
1943 (.mult transform tip (.getDirection ray))
1944 (.subtractLocal (.getDirection ray) (.getOrigin ray))
1945 (.normalizeLocal (.getDirection ray)))
1946 #+END_SRC
1947 #+end_listing
#+caption: This is the core of touch in =CORTEX=. Each feeler
1950 #+caption: follows the object it is bound to, reporting any
1951 #+caption: collisions that may happen.
1952 #+name: touch-kernel
1953 #+begin_listing clojure
1954 #+BEGIN_SRC clojure
1955 (defn touch-kernel
1956 "Constructs a function which will return tactile sensory data from
1957 'geo when called from inside a running simulation"
1958 [#^Geometry geo]
1959 (if-let
1960 [profile (tactile-sensor-profile geo)]
1961 (let [ray-reference-origins (feeler-origins geo profile)
1962 ray-reference-tips (feeler-tips geo profile)
1963 ray-length (tactile-scale geo)
1964 current-rays (map (fn [_] (Ray.)) ray-reference-origins)
1965 topology (touch-topology geo profile)
1966 correction (float (* ray-length -0.2))]
1967 ;; slight tolerance for very close collisions.
1968 (dorun
1969 (map (fn [origin tip]
1970 (.addLocal origin (.mult (.subtract tip origin)
1971 correction)))
1972 ray-reference-origins ray-reference-tips))
1973 (dorun (map #(.setLimit % ray-length) current-rays))
1974 (fn [node]
1975 (let [transform (.getWorldMatrix geo)]
1976 (dorun
1977 (map (fn [ray ref-origin ref-tip]
1978 (set-ray ray transform ref-origin ref-tip))
1979 current-rays ray-reference-origins
1980 ray-reference-tips))
1981 (vector
1982 topology
1983 (vec
1984 (for [ray current-rays]
1985 (do
1986 (let [results (CollisionResults.)]
1987 (.collideWith node ray results)
1988 (let [touch-objects
1989 (filter #(not (= geo (.getGeometry %)))
1990 results)
1991 limit (.getLimit ray)]
1992 [(if (empty? touch-objects)
1993 limit
1994 (let [response
1995 (apply min (map #(.getDistance %)
1996 touch-objects))]
1997 (FastMath/clamp
1998 (float
1999 (if (> response limit) (float 0.0)
2000 (+ response correction)))
2001 (float 0.0)
2002 limit)))
2003 limit])))))))))))
2004 #+END_SRC
2005 #+end_listing
2007 Armed with the =touch!= function, =CORTEX= becomes capable of
2008 giving creatures a sense of touch. A simple test is to create a
cube that is outfitted with a uniform distribution of touch
2010 sensors. It can feel the ground and any balls that it touches.
2012 #+caption: =CORTEX= interface for creating touch in a simulated
2013 #+caption: creature.
2014 #+name: touch
2015 #+begin_listing clojure
2016 #+BEGIN_SRC clojure
2017 (defn touch!
2018 "Endow the creature with the sense of touch. Returns a sequence of
2019 functions, one for each body part with a tactile-sensor-profile,
2020 each of which when called returns sensory data for that body part."
2021 [#^Node creature]
2022 (filter
2023 (comp not nil?)
2024 (map touch-kernel
2025 (filter #(isa? (class %) Geometry)
2026 (node-seq creature)))))
2027 #+END_SRC
2028 #+end_listing
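A rough sketch of such a test (the model path and helper name below
are hypothetical, not the actual test code) looks like this:

#+begin_src clojure
(comment
  ;; Hypothetical sketch; the blend file path is illustrative.
  (let [touch-cube (load-blender-model "Models/test-touch/touch-cube.blend")
        touch-fns  (touch! touch-cube)]
    ;; Inside the simulation loop, collide each body part's feelers
    ;; against the scene's root node to read the tactile data.
    (defn cube-touch-data [root-node]
      (map (fn [touch-fn] (touch-fn root-node)) touch-fns))))
#+end_src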
2030 The tactile-sensor-profile image for the touch cube is a simple
cross with a uniform distribution of touch sensors:
2033 #+caption: The touch profile for the touch-cube. Each pure white
2034 #+caption: pixel defines a touch sensitive feeler.
2035 #+name: touch-cube-uv-map
2036 #+ATTR_LaTeX: :width 10cm
2037 [[./images/touch-profile.png]]
#+caption: The touch cube reacts to cannonballs. The black, red,
2040 #+caption: and white cross on the right is a visual display of
2041 #+caption: the creature's touch. White means that it is feeling
2042 #+caption: something strongly, black is not feeling anything,
2043 #+caption: and gray is in-between. The cube can feel both the
2044 #+caption: floor and the ball. Notice that when the ball causes
#+caption: the cube to tip, the bottom face can still feel
2046 #+caption: part of the ground.
#+name: touch-cube-response
2048 #+ATTR_LaTeX: :width 15cm
2049 [[./images/touch-cube.png]]
2051 ** Proprioception is the sense that makes everything ``real''
2053 ** Muscles are both effectors and sensors
2055 ** =CORTEX= brings complex creatures to life!
** =CORTEX= enables many possibilities for further research
2059 * COMMENT Empathy in a simulated worm
2061 Here I develop a computational model of empathy, using =CORTEX= as a
2062 base. Empathy in this context is the ability to observe another
2063 creature and infer what sorts of sensations that creature is
2064 feeling. My empathy algorithm involves multiple phases. First is
2065 free-play, where the creature moves around and gains sensory
2066 experience. From this experience I construct a representation of the
2067 creature's sensory state space, which I call \Phi-space. Using
2068 \Phi-space, I construct an efficient function which takes the
2069 limited data that comes from observing another creature and enriches
it into a full complement of imagined sensory data. I can then use
the imagined sensory data to recognize what the observed creature
is doing and feeling, using straightforward embodied action
predicates. This is all demonstrated using a simple worm-like
creature, by recognizing worm-actions based on limited data.
2076 #+caption: Here is the worm with which we will be working.
2077 #+caption: It is composed of 5 segments. Each segment has a
2078 #+caption: pair of extensor and flexor muscles. Each of the
2079 #+caption: worm's four joints is a hinge joint which allows
2080 #+caption: about 30 degrees of rotation to either side. Each segment
2081 #+caption: of the worm is touch-capable and has a uniform
2082 #+caption: distribution of touch sensors on each of its faces.
2083 #+caption: Each joint has a proprioceptive sense to detect
2084 #+caption: relative positions. The worm segments are all the
2085 #+caption: same except for the first one, which has a much
2086 #+caption: higher weight than the others to allow for easy
2087 #+caption: manual motor control.
2088 #+name: basic-worm-view
2089 #+ATTR_LaTeX: :width 10cm
2090 [[./images/basic-worm-view.png]]
2092 #+caption: Program for reading a worm from a blender file and
2093 #+caption: outfitting it with the senses of proprioception,
2094 #+caption: touch, and the ability to move, as specified in the
2095 #+caption: blender file.
2096 #+name: get-worm
2097 #+begin_listing clojure
2098 #+begin_src clojure
2099 (defn worm []
2100 (let [model (load-blender-model "Models/worm/worm.blend")]
2101 {:body (doto model (body!))
2102 :touch (touch! model)
2103 :proprioception (proprioception! model)
2104 :muscles (movement! model)}))
2105 #+end_src
2106 #+end_listing
** Embodiment factors action recognition into manageable parts
2110 Using empathy, I divide the problem of action recognition into a
recognition process expressed in the language of a full complement
of senses, and an imaginative process that generates full sensory
data from partial sensory data. Splitting the action recognition
problem in this manner greatly reduces the total amount of work
needed to recognize actions: the imaginative process is mostly just matching
2116 previous experience, and the recognition process gets to use all
2117 the senses to directly describe any action.
2119 ** Action recognition is easy with a full gamut of senses
2121 Embodied representations using multiple senses such as touch,
proprioception, and muscle tension turn out to be exceedingly
efficient at describing body-centered actions. They are the ``right
2124 language for the job''. For example, it takes only around 5 lines
2125 of LISP code to describe the action of ``curling'' using embodied
2126 primitives. It takes about 10 lines to describe the seemingly
2127 complicated action of wiggling.
2129 The following action predicates each take a stream of sensory
2130 experience, observe however much of it they desire, and decide
2131 whether the worm is doing the action they describe. =curled?=
2132 relies on proprioception, =resting?= relies on touch, =wiggling?=
relies on a Fourier analysis of muscle contraction, and
=grand-circle?= relies on touch and reuses =curled?= as a guard.
2136 #+caption: Program for detecting whether the worm is curled. This is the
2137 #+caption: simplest action predicate, because it only uses the last frame
2138 #+caption: of sensory experience, and only uses proprioceptive data. Even
2139 #+caption: this simple predicate, however, is automatically frame
2140 #+caption: independent and ignores vermopomorphic differences such as
2141 #+caption: worm textures and colors.
2142 #+name: curled
2143 #+attr_latex: [htpb]
2144 #+begin_listing clojure
2145 #+begin_src clojure
2146 (defn curled?
2147 "Is the worm curled up?"
2148 [experiences]
2149 (every?
2150 (fn [[_ _ bend]]
2151 (> (Math/sin bend) 0.64))
2152 (:proprioception (peek experiences))))
2153 #+end_src
2154 #+end_listing
2156 #+caption: Program for summarizing the touch information in a patch
2157 #+caption: of skin.
2158 #+name: touch-summary
2159 #+attr_latex: [htpb]
2161 #+begin_listing clojure
2162 #+begin_src clojure
2163 (defn contact
2164 "Determine how much contact a particular worm segment has with
2165 other objects. Returns a value between 0 and 1, where 1 is full
2166 contact and 0 is no contact."
2167 [touch-region [coords contact :as touch]]
2168 (-> (zipmap coords contact)
2169 (select-keys touch-region)
2170 (vals)
2171 (#(map first %))
2172 (average)
2173 (* 10)
2174 (- 1)
2175 (Math/abs)))
2176 #+end_src
2177 #+end_listing
2180 #+caption: Program for detecting whether the worm is at rest. This program
2181 #+caption: uses a summary of the tactile information from the underbelly
2182 #+caption: of the worm, and is only true if every segment is touching the
2183 #+caption: floor. Note that this function contains no references to
#+caption: proprioception at all.
2185 #+name: resting
2186 #+attr_latex: [htpb]
2187 #+begin_listing clojure
2188 #+begin_src clojure
2189 (def worm-segment-bottom (rect-region [8 15] [14 22]))
2191 (defn resting?
2192 "Is the worm resting on the ground?"
2193 [experiences]
2194 (every?
2195 (fn [touch-data]
2196 (< 0.9 (contact worm-segment-bottom touch-data)))
2197 (:touch (peek experiences))))
2198 #+end_src
2199 #+end_listing
2201 #+caption: Program for detecting whether the worm is curled up into a
2202 #+caption: full circle. Here the embodied approach begins to shine, as
2203 #+caption: I am able to both use a previous action predicate (=curled?=)
2204 #+caption: as well as the direct tactile experience of the head and tail.
2205 #+name: grand-circle
2206 #+attr_latex: [htpb]
2207 #+begin_listing clojure
2208 #+begin_src clojure
2209 (def worm-segment-bottom-tip (rect-region [15 15] [22 22]))
2211 (def worm-segment-top-tip (rect-region [0 15] [7 22]))
2213 (defn grand-circle?
2214 "Does the worm form a majestic circle (one end touching the other)?"
2215 [experiences]
2216 (and (curled? experiences)
2217 (let [worm-touch (:touch (peek experiences))
2218 tail-touch (worm-touch 0)
2219 head-touch (worm-touch 4)]
2220 (and (< 0.55 (contact worm-segment-bottom-tip tail-touch))
2221 (< 0.55 (contact worm-segment-top-tip head-touch))))))
2222 #+end_src
2223 #+end_listing
2226 #+caption: Program for detecting whether the worm has been wiggling for
#+caption: the last few frames. It uses a Fourier analysis of the muscle
2228 #+caption: contractions of the worm's tail to determine wiggling. This is
#+caption: significant because there is no particular frame that clearly
2230 #+caption: indicates that the worm is wiggling --- only when multiple frames
2231 #+caption: are analyzed together is the wiggling revealed. Defining
2232 #+caption: wiggling this way also gives the worm an opportunity to learn
2233 #+caption: and recognize ``frustrated wiggling'', where the worm tries to
2234 #+caption: wiggle but can't. Frustrated wiggling is very visually different
2235 #+caption: from actual wiggling, but this definition gives it to us for free.
2236 #+name: wiggling
2237 #+attr_latex: [htpb]
2238 #+begin_listing clojure
2239 #+begin_src clojure
2240 (defn fft [nums]
2241 (map
2242 #(.getReal %)
2243 (.transform
2244 (FastFourierTransformer. DftNormalization/STANDARD)
2245 (double-array nums) TransformType/FORWARD)))
2247 (def indexed (partial map-indexed vector))
2249 (defn max-indexed [s]
2250 (first (sort-by (comp - second) (indexed s))))
2252 (defn wiggling?
2253 "Is the worm wiggling?"
2254 [experiences]
2255 (let [analysis-interval 0x40]
2256 (when (> (count experiences) analysis-interval)
2257 (let [a-flex 3
2258 a-ex 2
2259 muscle-activity
2260 (map :muscle (vector:last-n experiences analysis-interval))
2261 base-activity
2262 (map #(- (% a-flex) (% a-ex)) muscle-activity)]
2263 (= 2
2264 (first
2265 (max-indexed
2266 (map #(Math/abs %)
2267 (take 20 (fft base-activity))))))))))
2268 #+end_src
2269 #+end_listing
2271 With these action predicates, I can now recognize the actions of
2272 the worm while it is moving under my control and I have access to
2273 all the worm's senses.
2275 #+caption: Use the action predicates defined earlier to report on
2276 #+caption: what the worm is doing while in simulation.
2277 #+name: report-worm-activity
2278 #+attr_latex: [htpb]
2279 #+begin_listing clojure
2280 #+begin_src clojure
2281 (defn debug-experience
2282 [experiences text]
2283 (cond
2284 (grand-circle? experiences) (.setText text "Grand Circle")
2285 (curled? experiences) (.setText text "Curled")
2286 (wiggling? experiences) (.setText text "Wiggling")
2287 (resting? experiences) (.setText text "Resting")))
2288 #+end_src
2289 #+end_listing
2291 #+caption: Using =debug-experience=, the body-centered predicates
#+caption: work together to classify the behavior of the worm.
#+caption: The predicates are operating with access to the worm's
2294 #+caption: full sensory data.
#+name: worm-identify-init
2296 #+ATTR_LaTeX: :width 10cm
2297 [[./images/worm-identify-init.png]]
2299 These action predicates satisfy the recognition requirement of an
2300 empathic recognition system. There is power in the simplicity of
2301 the action predicates. They describe their actions without getting
2302 confused in visual details of the worm. Each one is frame
independent, but more than that, they are each independent of
irrelevant visual details of the worm and the environment. They
will work regardless of whether the worm is a different color or
heavily textured, or if the environment has strange lighting.
2308 The trick now is to make the action predicates work even when the
2309 sensory data on which they depend is absent. If I can do that, then
I will have gained much.
2312 ** \Phi-space describes the worm's experiences
2314 As a first step towards building empathy, I need to gather all of
2315 the worm's experiences during free play. I use a simple vector to
2316 store all the experiences.
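Each entry in this vector is, roughly, a map from sense to data for
a single frame. The keys shown below are the ones the action
predicates in this chapter actually consult; the exact shape of
each value is simplified here:

#+begin_src clojure
(comment
  ;; Rough sketch of one frame of worm experience, inferred from how
  ;; the predicates below consume it.
  {:proprioception [[heading pitch bend] ...]   ;; one triple per joint
   :touch          [[topology feeler-data] ...] ;; one entry per segment
   :muscle         [activation ...]})           ;; one reading per muscle
#+end_src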
2318 Each element of the experience vector exists in the vast space of
2319 all possible worm-experiences. Most of this vast space is actually
2320 unreachable due to physical constraints of the worm's body. For
2321 example, the worm's segments are connected by hinge joints that put
2322 a practical limit on the worm's range of motions without limiting
2323 its degrees of freedom. Some groupings of senses are impossible;
the worm cannot be bent into a circle so that its ends are
touching without also experiencing the sensation of touching
itself.
2328 As the worm moves around during free play and its experience vector
2329 grows larger, the vector begins to define a subspace which is all
the sensations the worm can practically experience during normal
2331 operation. I call this subspace \Phi-space, short for
2332 physical-space. The experience vector defines a path through
2333 \Phi-space. This path has interesting properties that all derive
2334 from physical embodiment. The proprioceptive components are
2335 completely smooth, because in order for the worm to move from one
2336 position to another, it must pass through the intermediate
2337 positions. The path invariably forms loops as actions are repeated.
2338 Finally and most importantly, proprioception actually gives very
2339 strong inference about the other senses. For example, when the worm
2340 is flat, you can infer that it is touching the ground and that its
2341 muscles are not active, because if the muscles were active, the
2342 worm would be moving and would not be perfectly flat. In order to
2343 stay flat, the worm has to be touching the ground, or it would
2344 again be moving out of the flat position due to gravity. If the
2345 worm is positioned in such a way that it interacts with itself,
2346 then it is very likely to be feeling the same tactile feelings as
2347 the last time it was in that position, because it has the same body
2348 as then. If you observe multiple frames of proprioceptive data,
2349 then you can become increasingly confident about the exact
2350 activations of the worm's muscles, because it generally takes a
2351 unique combination of muscle contractions to transform the worm's
2352 body along a specific path through \Phi-space.
2354 There is a simple way of taking \Phi-space and the total ordering
provided by an experience vector and reliably inferring the rest of
2356 the senses.
** Empathy is the process of tracing through \Phi-space
2360 Here is the core of a basic empathy algorithm, starting with an
2361 experience vector:
First, group the experiences into tiered proprioceptive bins. I use
three tiers of bins whose sizes are powers of 10; the smallest bin
has an approximate size of 0.001 radians in all proprioceptive
dimensions.
2367 Then, given a sequence of proprioceptive input, generate a set of
2368 matching experience records for each input, using the tiered
2369 proprioceptive bins.
Finally, to infer sensory data, select the longest consecutive
chain of experiences. Consecutive experiences are those that appear
next to each other in the experience vector.
2375 This algorithm has three advantages:
2377 1. It's simple
2379 3. It's very fast -- retrieving possible interpretations takes
2380 constant time. Tracing through chains of interpretations takes
2381 time proportional to the average number of experiences in a
2382 proprioceptive bin. Redundant experiences in \Phi-space can be
2383 merged to save computation.
3. It protects against wrong interpretations of transient ambiguous
2386 proprioceptive data. For example, if the worm is flat for just
an instant, this flatness will not be interpreted as implying
that the worm has its muscles relaxed, since the flatness is
2389 part of a longer chain which includes a distinct pattern of
2390 muscle activation. Markov chains or other memoryless statistical
2391 models that operate on individual frames may very well make this
2392 mistake.
2394 #+caption: Program to convert an experience vector into a
2395 #+caption: proprioceptively binned lookup function.
2396 #+name: bin
2397 #+attr_latex: [htpb]
2398 #+begin_listing clojure
2399 #+begin_src clojure
2400 (defn bin [digits]
2401 (fn [angles]
2402 (->> angles
2403 (flatten)
2404 (map (juxt #(Math/sin %) #(Math/cos %)))
2405 (flatten)
2406 (mapv #(Math/round (* % (Math/pow 10 (dec digits))))))))
2408 (defn gen-phi-scan
2409 "Nearest-neighbors with binning. Only returns a result if
the proprioceptive data is within 10% of a previously recorded
2411 result in all dimensions."
2412 [phi-space]
2413 (let [bin-keys (map bin [3 2 1])
2414 bin-maps
2415 (map (fn [bin-key]
2416 (group-by
2417 (comp bin-key :proprioception phi-space)
2418 (range (count phi-space)))) bin-keys)
2419 lookups (map (fn [bin-key bin-map]
2420 (fn [proprio] (bin-map (bin-key proprio))))
2421 bin-keys bin-maps)]
2422 (fn lookup [proprio-data]
2423 (set (some #(% proprio-data) lookups)))))
2424 #+end_src
2425 #+end_listing
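For concreteness, here is roughly what the finest and coarsest bin
keys produce for a flat joint and a joint bent to a right angle
(the results below are worked out by hand from the code above):

#+begin_src clojure
(comment
  ;; Each angle contributes its rounded sine and cosine, scaled by a
  ;; power of ten that depends on the tier.
  ((bin 3) [0 (/ Math/PI 2)]) ;; ==> [0 100 100 0]
  ((bin 1) [0 (/ Math/PI 2)])) ;; ==> [0 1 1 0]
#+end_src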
2427 #+caption: =longest-thread= finds the longest path of consecutive
2428 #+caption: experiences to explain proprioceptive worm data.
2429 #+name: phi-space-history-scan
2430 #+ATTR_LaTeX: :width 10cm
2431 [[./images/aurellem-gray.png]]
2433 =longest-thread= infers sensory data by stitching together pieces
2434 from previous experience. It prefers longer chains of previous
2435 experience to shorter ones. For example, during training the worm
2436 might rest on the ground for one second before it performs its
exercises. If during recognition the worm rests on the ground for
five seconds, =longest-thread= will accommodate this five-second
rest period by looping the one-second rest chain five times.
=longest-thread= takes time proportional to the average number of
entries in a proprioceptive bin, because for each element in the
starting bin it performs a series of set lookups in the preceding
2444 bins. If the total history is limited, then this is only a constant
2445 multiple times the number of entries in the starting bin. This
2446 analysis also applies even if the action requires multiple longest
2447 chains -- it's still the average number of entries in a
2448 proprioceptive bin times the desired chain length. Because
2449 =longest-thread= is so efficient and simple, I can interpret
2450 worm-actions in real time.
#+caption: Program to calculate empathy by tracing through \Phi-space
#+caption: and finding the longest (i.e. most coherent) interpretation
2454 #+caption: of the data.
2455 #+name: longest-thread
2456 #+attr_latex: [htpb]
2457 #+begin_listing clojure
2458 #+begin_src clojure
2459 (defn longest-thread
2460 "Find the longest thread from phi-index-sets. The index sets should
2461 be ordered from most recent to least recent."
2462 [phi-index-sets]
2463 (loop [result '()
2464 [thread-bases & remaining :as phi-index-sets] phi-index-sets]
2465 (if (empty? phi-index-sets)
2466 (vec result)
2467 (let [threads
2468 (for [thread-base thread-bases]
2469 (loop [thread (list thread-base)
2470 remaining remaining]
2471 (let [next-index (dec (first thread))]
2472 (cond (empty? remaining) thread
2473 (contains? (first remaining) next-index)
2474 (recur
2475 (cons next-index thread) (rest remaining))
2476 :else thread))))
2477 longest-thread
2478 (reduce (fn [thread-a thread-b]
2479 (if (> (count thread-a) (count thread-b))
2480 thread-a thread-b))
2481 '(nil)
2482 threads)]
2483 (recur (concat longest-thread result)
2484 (drop (count longest-thread) phi-index-sets))))))
2485 #+end_src
2486 #+end_listing
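For example, given the hypothetical index sets
=[#{10 20} #{9 19} #{8}]=, ordered from most recent to least recent,
=longest-thread= returns =[8 9 10]=: the chain of experiences 8, 9,
10 explains all three frames, while the competing chain 19, 20
explains only two.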
2488 There is one final piece, which is to replace missing sensory data
2489 with a best-guess estimate. While I could fill in missing data by
2490 using a gradient over the closest known sensory data points,
2491 averages can be misleading. It is certainly possible to create an
2492 impossible sensory state by averaging two possible sensory states.
2493 Therefore, I simply replicate the most recent sensory experience to
2494 fill in the gaps.
2496 #+caption: Fill in blanks in sensory experience by replicating the most
2497 #+caption: recent experience.
2498 #+name: infer-nils
2499 #+attr_latex: [htpb]
2500 #+begin_listing clojure
2501 #+begin_src clojure
2502 (defn infer-nils
2503 "Replace nils with the next available non-nil element in the
2504 sequence, or barring that, 0."
2505 [s]
2506 (loop [i (dec (count s))
2507 v (transient s)]
2508 (if (zero? i) (persistent! v)
2509 (if-let [cur (v i)]
2510 (if (get v (dec i) 0)
2511 (recur (dec i) v)
2512 (recur (dec i) (assoc! v (dec i) cur)))
2513 (recur i (assoc! v i 0))))))
2514 #+end_src
2515 #+end_listing
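For example, =(infer-nils [nil 1 nil 2 nil])= yields =[1 1 2 2 0]=:
each =nil= takes the value of the next non-nil element to its right,
and the trailing =nil=, which has no successor, becomes 0.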
2517 ** Efficient action recognition with =EMPATH=
2519 To use =EMPATH= with the worm, I first need to gather a set of
2520 experiences from the worm that includes the actions I want to
2521 recognize. The =generate-phi-space= program (listing
\ref{generate-phi-space}) runs the worm through a series of
exercises and gathers those experiences into a vector. The
2524 =do-all-the-things= program is a routine expressed in a simple
2525 muscle contraction script language for automated worm control. It
2526 causes the worm to rest, curl, and wiggle over about 700 frames
2527 (approx. 11 seconds).
2529 #+caption: Program to gather the worm's experiences into a vector for
2530 #+caption: further processing. The =motor-control-program= line uses
2531 #+caption: a motor control script that causes the worm to execute a series
#+caption: of ``exercises'' that include all the action predicates.
2533 #+name: generate-phi-space
2534 #+attr_latex: [htpb]
2535 #+begin_listing clojure
2536 #+begin_src clojure
2537 (def do-all-the-things
2538 (concat
2539 curl-script
2540 [[300 :d-ex 40]
2541 [320 :d-ex 0]]
2542 (shift-script 280 (take 16 wiggle-script))))
2544 (defn generate-phi-space []
2545 (let [experiences (atom [])]
2546 (run-world
2547 (apply-map
2548 worm-world
2549 (merge
2550 (worm-world-defaults)
2551 {:end-frame 700
2552 :motor-control
2553 (motor-control-program worm-muscle-labels do-all-the-things)
2554 :experiences experiences})))
2555 @experiences))
2556 #+end_src
2557 #+end_listing
2559 #+caption: Use longest thread and a phi-space generated from a short
2560 #+caption: exercise routine to interpret actions during free play.
2561 #+name: empathy-debug
2562 #+attr_latex: [htpb]
2563 #+begin_listing clojure
2564 #+begin_src clojure
2565 (defn init []
2566 (def phi-space (generate-phi-space))
2567 (def phi-scan (gen-phi-scan phi-space)))
2569 (defn empathy-demonstration []
2570 (let [proprio (atom ())]
2571 (fn
2572 [experiences text]
2573 (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
2574 (swap! proprio (partial cons phi-indices))
2575 (let [exp-thread (longest-thread (take 300 @proprio))
2576 empathy (mapv phi-space (infer-nils exp-thread))]
2577 (println-repl (vector:last-n exp-thread 22))
2578 (cond
2579 (grand-circle? empathy) (.setText text "Grand Circle")
2580 (curled? empathy) (.setText text "Curled")
2581 (wiggling? empathy) (.setText text "Wiggling")
2582 (resting? empathy) (.setText text "Resting")
2583 :else (.setText text "Unknown")))))))
2585 (defn empathy-experiment [record]
2586 (.start (worm-world :experience-watch (debug-experience-phi)
2587 :record record :worm worm*)))
2588 #+end_src
2589 #+end_listing
2591 The result of running =empathy-experiment= is that the system is
2592 generally able to interpret worm actions using the action-predicates
2593 on simulated sensory data just as well as with actual data. Figure
2594 \ref{empathy-debug-image} was generated using =empathy-experiment=:
2596 #+caption: From only proprioceptive data, =EMPATH= was able to infer
2597 #+caption: the complete sensory experience and classify four poses
2598 #+caption: (The last panel shows a composite image of \emph{wriggling},
2599 #+caption: a dynamic pose.)
2600 #+name: empathy-debug-image
2601 #+ATTR_LaTeX: :width 10cm :placement [H]
2602 [[./images/empathy-1.png]]
2604 One way to measure the performance of =EMPATH= is to compare the
suitability of the imagined sense experience to trigger the same
2606 action predicates as the real sensory experience.

#+caption: Determine how closely empathy approximates actual
#+caption: sensory data.
#+name: test-empathy-accuracy
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(def worm-action-label
  (juxt grand-circle? curled? wiggling?))

(defn compare-empathy-with-baseline [matches]
  (let [proprio (atom ())]
    (fn
      [experiences text]
      (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
        (swap! proprio (partial cons phi-indices))
        (let [exp-thread (longest-thread (take 300 @proprio))
              empathy (mapv phi-space (infer-nils exp-thread))
              experience-matches-empathy
              (= (worm-action-label experiences)
                 (worm-action-label empathy))]
          (println-repl experience-matches-empathy)
          (swap! matches #(conj % experience-matches-empathy)))))))

(defn accuracy [v]
  (float (/ (count (filter true? v)) (count v))))

(defn test-empathy-accuracy []
  (let [res (atom [])]
    (run-world
     (worm-world :experience-watch
                 (compare-empathy-with-baseline res)
                 :worm worm*))
    (accuracy @res)))
#+end_src
#+end_listing

Running =test-empathy-accuracy= using the very short exercise
program defined in listing \ref{generate-phi-space}, and then doing
a similar pattern of activity manually yields an accuracy of around
73%. This is based on very limited worm experience. By training the
worm for longer, the accuracy improves dramatically.
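
For concreteness, the whole check can be driven from the REPL as
follows. This is a hypothetical session; the exact number varies from
run to run and depends on how the worm is driven during the test.

#+begin_src clojure
;; Build the phi-space from the scripted exercise routine, then start
;; a world and drive the worm manually while the baseline comparison
;; records a match result for every frame.
(init)
(test-empathy-accuracy)  ;; => roughly 0.73 with this short training
#+end_src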

#+caption: Program to generate \Phi-space using manual training.
#+name: manual-phi-space
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn init-interactive []
  (def phi-space
    (let [experiences (atom [])]
      (run-world
       (apply-map
        worm-world
        (merge
         (worm-world-defaults)
         {:experiences experiences})))
      @experiences))
  (def phi-scan (gen-phi-scan phi-space)))
#+end_src
#+end_listing

After about 1 minute of manual training, I was able to achieve 95%
accuracy on manual testing of the worm using =init-interactive= and
=test-empathy-accuracy=. The majority of errors occur at the
boundaries between one type of action and another. During these
transitions the exact label for the action is more open to
interpretation, and disagreement between empathy and experience is
more excusable.
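
One way to make this boundary effect concrete is to score only the
frames that are far from any change in the reference labels. The
following sketch is my own and is not part of the thesis code; it
assumes that the labels returned by =worm-action-label= are recorded
for each frame alongside the boolean match results.

#+begin_src clojure
(defn steady-accuracy
  "Accuracy over the match results, ignoring any frame that lies
   within `margin` frames of a change in the reference labels.
   `labels` and `matches` are parallel sequences, one entry per frame."
  [labels matches margin]
  (let [changes (map not= labels (rest labels))
        steady? (fn [i]
                  (not-any? true?
                            (->> changes
                                 (drop (max 0 (- i margin)))
                                 (take (* 2 margin)))))
        kept    (keep-indexed (fn [i m] (when (steady? i) m)) matches)]
    (float (/ (count (filter true? kept))
              (max 1 (count kept))))))
#+end_src

If the claim above holds, =steady-accuracy= should come out noticeably
higher than the raw =accuracy= on the same run.
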
2677 ** Digression: bootstrapping touch using free exploration

In the previous section I showed how to compute actions in terms of
body-centered predicates, which relied on the average touch
activation of pre-defined regions of the worm's skin. What if,
instead of receiving touch pre-grouped into the six faces of each
worm segment, the true topology of the worm's skin was unknown? This
is more similar to how a nerve fiber bundle might be arranged: while
two fibers that are close together in a nerve bundle /might/
correspond to two touch sensors that are close together on the skin,
the process of taking a complicated surface and forcing it into what
is essentially a circle requires some cuts and rearrangements.

In this section I show how to automatically learn the skin topology of
a worm segment by free exploration. As the worm rolls around on the
floor, large sections of its surface get activated. If the worm has
stopped moving, then whatever region of skin is touching the floor is
probably an important region, and should be recorded.

#+caption: Program to detect whether the worm is in a resting state
#+caption: with one face touching the floor.
#+name: pure-touch
#+begin_listing clojure
#+begin_src clojure
(def full-contact [(float 0.0) (float 0.1)])

(defn pure-touch?
  "This is worm specific code to determine if a large region of touch
   sensors is either all on or all off."
  [[coords touch :as touch-data]]
  (= (set (map first touch)) (set full-contact)))
#+end_src
#+end_listing
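
=pure-touch?= looks only at the touch values in a single frame. The
paragraph above also asks that the worm have stopped moving; one way
to test that directly is to check that proprioception has been stable
for a few frames. The following is a sketch of my own, not part of
=learn-touch-regions=, and it assumes the nested list-of-numbers
format that proprioception data takes elsewhere in this thesis.

#+begin_src clojure
(defn still?
  "True when the last `window` proprioception readings are all within
   `tolerance` of the first of them, i.e. the worm has stopped moving."
  [experiences window tolerance]
  (let [recent (map (comp flatten :proprioception)
                    (take-last window experiences))]
    (every? (fn [[a b]] (< (Math/abs (double (- a b))) tolerance))
            (mapcat (partial map vector (first recent))
                    (rest recent)))))
#+end_src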

After collecting these important regions, there will be many nearly
similar touch regions. While for some purposes the subtle differences
between these regions will be important, for my purposes I collapse
them into mostly non-overlapping sets using =remove-similar= in
listing \ref{remove-similar}.

#+caption: Program to take a list of sets of points and ``collapse them''
#+caption: so that the remaining sets in the list are significantly
#+caption: different from each other. Prefer smaller sets to larger ones.
#+name: remove-similar
#+begin_listing clojure
#+begin_src clojure
(defn remove-similar
  "Drop any set in coll that nearly contains another, no-larger set
   (more than 90% of the smaller set's elements); the smaller set is
   the one kept. union is clojure.set/union."
  [coll]
  (loop [result () coll (sort-by (comp - count) coll)]
    (if (empty? coll) result
        (let [[x & xs] coll
              c (count x)]
          (if (some
               (fn [other-set]
                 (let [oc (count other-set)]
                   (< (- (count (union other-set x)) c) (* oc 0.1))))
               xs)
            (recur result xs)
            (recur (cons x result) xs))))))
#+end_src
#+end_listing
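
To see the preference for smaller sets in action, here is a toy
example (illustrative values only; the real inputs are sets of
touch-sensor coordinates):

#+begin_src clojure
(remove-similar [#{0 1 2 3 4 5 6 7 8 9}  ;; a large region
                 #{0 1 2 3 4 5 6 7 8}    ;; nearly the same region, smaller
                 #{20 21 22 23}])        ;; an unrelated region
;; => a list containing the unrelated region and the smaller of the two
;;    near-duplicates; the large region has been discarded.
#+end_src
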
2739 Actually running this simulation is easy given =CORTEX='s facilities.

#+caption: Collect experiences while the worm moves around. Filter the touch
#+caption: sensations for stable ones, collapse similar ones together,
#+caption: and report the regions learned.
#+name: learn-touch
#+begin_listing clojure
#+begin_src clojure
(defn learn-touch-regions []
  (let [experiences (atom [])
        world (apply-map
               worm-world
               (assoc (worm-segment-defaults)
                      :experiences experiences))]
    (run-world world)
    (->>
     @experiences
     (drop 175)
     ;; access the single segment's touch data
     (map (comp first :touch))
     ;; only deal with "pure" touch data to determine surfaces
     (filter pure-touch?)
     ;; associate coordinates with touch values
     (map (partial apply zipmap))
     ;; select those regions where contact is being made
     (map (partial group-by second))
     (map #(get % full-contact))
     (map (partial map first))
     ;; remove redundant/subset regions
     (map set)
     remove-similar)))

(defn learn-and-view-touch-regions []
  (map view-touch-region
       (learn-touch-regions)))
#+end_src
#+end_listing

The only thing remaining to define is the particular motion the worm
must take. I accomplish this with a simple motor control program.

#+caption: Motor control program for making the worm roll on the ground.
#+caption: This could also be replaced with random motion.
#+name: worm-roll
#+begin_listing clojure
#+begin_src clojure
(defn touch-kinesthetics []
  [[170 :lift-1 40]
   [190 :lift-1 19]
   [206 :lift-1 0]

   [400 :lift-2 40]
   [410 :lift-2 0]

   [570 :lift-2 40]
   [590 :lift-2 21]
   [606 :lift-2 0]

   [800 :lift-1 30]
   [809 :lift-1 0]

   [900 :roll-2 40]
   [905 :roll-2 20]
   [910 :roll-2 0]

   [1000 :roll-2 40]
   [1005 :roll-2 20]
   [1010 :roll-2 0]

   [1100 :roll-2 40]
   [1105 :roll-2 20]
   [1110 :roll-2 0]])
#+end_src
#+end_listing
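
As the caption of listing \ref{worm-roll} notes, a random script would
also work. A minimal sketch of that alternative (the 50-frame interval
and the 0 to 40 strength range are assumptions chosen to mimic the
hand-written script above):

#+begin_src clojure
(defn random-kinesthetics
  "Build a motor-control script in the same [frame muscle strength]
   format as touch-kinesthetics: every 50 frames, set one randomly
   chosen muscle to a random strength."
  [muscle-labels end-frame]
  (vec
   (for [frame (range 100 end-frame 50)]
     [frame (rand-nth muscle-labels) (rand-int 40)])))

;; e.g. (random-kinesthetics [:lift-1 :lift-2 :roll-2] 1200)
#+end_src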

#+caption: The small worm rolls around on the floor, driven
#+caption: by the motor control program in listing \ref{worm-roll}.
#+name: worm-roll-image
#+ATTR_LaTeX: :width 12cm
[[./images/worm-roll.png]]

2823 #+caption: After completing its adventures, the worm now knows
2824 #+caption: how its touch sensors are arranged along its skin. These
2825 #+caption: are the regions that were deemed important by
2826 #+caption: =learn-touch-regions=. Note that the worm has discovered
2827 #+caption: that it has six sides.
2828 #+name: worm-touch-map
2829 #+ATTR_LaTeX: :width 12cm
2830 [[./images/touch-learn.png]]

While simple, =learn-touch-regions= exploits regularities in both
the worm's physiology and the worm's environment to correctly
deduce that the worm has six sides. Note that =learn-touch-regions=
would work just as well even if the worm's touch sense data were
completely scrambled. The cross shape is just for convenience. This
example justifies the use of pre-defined touch regions in =EMPATH=.
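
The scrambling claim can be checked with a small harness of my own
(not part of =learn-touch-regions=): relabel every sensor coordinate
with one fixed random permutation before running the pipeline, and
confirm that the learned regions come out with the same sizes as in
figure \ref{worm-touch-map}.

#+begin_src clojure
(defn scramble-touch
  "Permute the coordinate labels of one frame of touch data, given in
   the [coords touch] form used by pure-touch?. `relabel` must be the
   same map for every frame, e.g. (zipmap coords (shuffle coords))."
  [relabel [coords touch]]
  [(map relabel coords) touch])
#+end_src

Mapping =(partial scramble-touch relabel)= over the touch frames
before the rest of the pipeline should leave the region sizes
unchanged, since nothing downstream uses the coordinate values
themselves.
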
2839 * COMMENT Contributions

In this thesis you have seen the =CORTEX= system, a complete
environment for creating simulated creatures. You have seen how to
implement five senses: touch, proprioception, hearing, vision, and
muscle tension. You have seen how to create new creatures using
blender, a 3D modeling tool. I hope that =CORTEX= will be useful in
further research projects. To this end I have included the full
source to =CORTEX= along with a large suite of tests and examples. I
have also created a user guide for =CORTEX= which is included in an
appendix to this thesis.

You have also seen how I used =CORTEX= as a platform to attack the
/action recognition/ problem, which is the problem of recognizing
actions in video. You saw a simple system called =EMPATH= which
identifies actions by first describing actions in a body-centered,
rich sense language, then inferring a full range of sensory
experience from limited data using previous experience gained from
free play.

2859 As a minor digression, you also saw how I used =CORTEX= to enable a
2860 tiny worm to discover the topology of its skin simply by rolling on
2861 the ground.
2863 In conclusion, the main contributions of this thesis are:
2865 - =CORTEX=, a system for creating simulated creatures with rich
2866 senses.
2867 - =EMPATH=, a program for recognizing actions by imagining sensory
2868 experience.
2870 # An anatomical joke:
2871 # - Training
2872 # - Skeletal imitation
2873 # - Sensory fleshing-out
2874 # - Classification