#+title: =CORTEX=
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Using embodied AI to facilitate Artificial Imagination.
#+keywords: AI, clojure, embodiment
#+LaTeX_CLASS_OPTIONS: [nofloat]

* COMMENT templates
  #+caption:
  #+caption:
  #+caption:
  #+caption:
  #+name: name
  #+begin_listing clojure
  #+end_listing

  #+caption:
  #+caption:
  #+caption:
  #+name: name
  #+ATTR_LaTeX: :width 10cm
  [[./images/aurellem-gray.png]]

  #+caption:
  #+caption:
  #+caption:
  #+caption:
  #+name: name
  #+begin_listing clojure
  #+end_listing

  #+caption:
  #+caption:
  #+caption:
  #+name: name
  #+ATTR_LaTeX: :width 10cm
  [[./images/aurellem-gray.png]]


* COMMENT Empathy and Embodiment as problem solving strategies

By the end of this thesis, you will have seen a novel approach to
interpreting video using embodiment and empathy. You will have also
seen one way to efficiently implement empathy for embodied
creatures. Finally, you will become familiar with =CORTEX=, a system
for designing and simulating creatures with rich senses, which you
may choose to use in your own research.

This is the core vision of my thesis: That one of the important ways
in which we understand others is by imagining ourselves in their
position and empathically feeling experiences relative to our own
bodies. By understanding events in terms of our own previous
corporeal experience, we greatly constrain the possibilities of what
would otherwise be an unwieldy exponential search. This extra
constraint can be the difference between easily understanding what
is happening in a video and being completely lost in a sea of
incomprehensible color and movement.

** Recognizing actions in video is extremely difficult

Consider, for example, the problem of determining what is happening
in a video of which this is one frame:

#+caption: A cat drinking some water. Identifying this action is
#+caption: beyond the state of the art for computers.
#+ATTR_LaTeX: :width 7cm
[[./images/cat-drinking.jpg]]

It is currently impossible for any computer program to reliably
label such a video as ``drinking''. And rightly so -- it is a very
hard problem! What features, expressed in terms of low-level
functions of pixels, can even begin to describe at a high level
what is happening here?

Or suppose that you are building a program that recognizes chairs.
How could you ``see'' the chair in figure \ref{hidden-chair}?

#+caption: The chair in this image is quite obvious to humans, but I
#+caption: doubt that any modern computer vision program can find it.
#+name: hidden-chair
#+ATTR_LaTeX: :width 10cm
[[./images/fat-person-sitting-at-desk.jpg]]

Finally, how is it that you can easily tell the difference between
how the girl's /muscles/ are working in the two pictures of figure
\ref{girl}?

#+caption: The mysterious ``common sense'' appears here as you are able
#+caption: to discern the difference in how the girl's arm muscles
#+caption: are activated between the two images.
#+name: girl
#+ATTR_LaTeX: :width 7cm
[[./images/wall-push.png]]

Each of these examples tells us something about what might be going
on in our minds as we easily solve these recognition problems.

The hidden chairs show us that we are strongly triggered by cues
relating to the position of human bodies, and that we can determine
the overall physical configuration of a human body even if much of
that body is occluded.

The picture of the girl pushing against the wall tells us that we
have common sense knowledge about the kinetics of our own bodies.
We know well how our muscles would have to work to maintain us in
most positions, and we can easily project this self-knowledge to
imagined positions triggered by images of the human body.

** =EMPATH= neatly solves recognition problems

I propose a system that can express the types of recognition
problems above in a form amenable to computation. It is split into
four parts:

- Free/Guided Play :: The creature moves around and experiences the
     world through its unique perspective. Many otherwise
     complicated actions are easily described in the language of a
     full suite of body-centered, rich senses. For example,
     drinking is the feeling of water sliding down your throat, and
     cooling your insides. It's often accompanied by bringing your
     hand close to your face, or bringing your face close to water.
     Sitting down is the feeling of bending your knees, activating
     your quadriceps, then feeling a surface with your bottom and
     relaxing your legs. These body-centered action descriptions
     can be either learned or hard coded.
- Posture Imitation :: When trying to interpret a video or image,
     the creature takes a model of itself and aligns it with
     whatever it sees. This alignment can even cross species, as
     when humans try to align themselves with things like ponies,
     dogs, or other humans with a different body type.
- Empathy :: The alignment triggers associations with
     sensory data from prior experiences. For example, the
     alignment itself easily maps to proprioceptive data. Any
     sounds or obvious skin contact in the video can to a lesser
     extent trigger previous experience. Segments of previous
     experiences are stitched together to form a coherent and
     complete sensory portrait of the scene.
- Recognition :: With the scene described in terms of first
     person sensory events, the creature can now run its
     action-identification programs on this synthesized sensory
     data, just as it would if it were actually experiencing the
     scene first-hand. If previous experience has been accurately
     retrieved, and if it is analogous enough to the scene, then
     the creature will correctly identify the action in the scene.
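
Expressed schematically in clojure, the four parts above might fit
together as follows. This is only an illustrative sketch:
=align-model=, =infer-senses=, and the map of action predicates are
hypothetical names introduced here for exposition, not part of
=EMPATH='s actual implementation.

#+begin_src clojure
;; Hypothetical helpers, named only for this sketch.
(declare align-model infer-senses)

(defn what-is-happening?
  "Illustrative sketch of the four-part empathy pipeline."
  [video body-model previous-experiences action-predicates]
  ;; posture imitation: fit a model of the body to the video.
  (let [alignment (align-model body-model video)
        ;; empathy: stitch prior experience into a first-person
        ;; sensory portrait of the scene.
        scene (infer-senses alignment previous-experiences)]
    ;; recognition: the first body-centered predicate satisfied by
    ;; the synthesized sensory data names the action.
    (some (fn [[action-name predicate?]]
            (when (predicate? scene) action-name))
          action-predicates)))
#+end_src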

For example, I think humans are able to label the cat video as
``drinking'' because they imagine /themselves/ as the cat, and
imagine putting their face up against a stream of water and
sticking out their tongue. In that imagined world, they can feel
the cool water hitting their tongue, and feel the water entering
their body, and are able to recognize that /feeling/ as drinking.
So, the label of the action is not really in the pixels of the
image, but is found clearly in a simulation inspired by those
pixels. An imaginative system, having been trained on drinking and
non-drinking examples and learning that the most important
component of drinking is the feeling of water sliding down one's
throat, would analyze a video of a cat drinking in the following
manner:

1. Create a physical model of the video by putting a ``fuzzy''
   model of its own body in place of the cat. Possibly also create
   a simulation of the stream of water.

2. Play out this simulated scene and generate imagined sensory
   experience. This will include relevant muscle contractions, a
   close-up view of the stream from the cat's perspective, and most
   importantly, the imagined feeling of water entering the
   mouth. The imagined sensory experience can come from a
   simulation of the event, but can also be pattern-matched from
   previous, similar embodied experience.

3. The action is now easily identified as drinking by the sense of
   taste alone. The other senses (such as the tongue moving in and
   out) help to give plausibility to the simulated action. Note that
   the sense of vision, while critical in creating the simulation,
   is not critical for identifying the action from the simulation.

For the chair examples, the process is even easier:

1. Align a model of your body to the person in the image.

2. Generate proprioceptive sensory data from this alignment.

3. Use the imagined proprioceptive data as a key to look up related
   sensory experience associated with that particular proprioceptive
   feeling.

4. Retrieve the feeling of your bottom resting on a surface, your
   knees bent, and your leg muscles relaxed.

5. This sensory information is consistent with the =sitting?=
   sensory predicate, so you (and the entity in the image) must be
   sitting.

6. There must be a chair-like object since you are sitting.

Empathy offers yet another alternative to the age-old AI
representation question: ``What is a chair?'' --- A chair is the
feeling of sitting.
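
To make the role of the =sitting?= predicate concrete, here is a
purely illustrative sketch of what such a body-centered predicate
could look like. The helpers =bent?=, =relaxed?=, and =pressure-on?=,
and the exact layout of the experience map, are assumptions made for
this sketch rather than part of =EMPATH='s actual code.

#+begin_src clojure
;; bent?, relaxed?, and pressure-on? are hypothetical helpers.
(declare bent? relaxed? pressure-on?)

(defn sitting?
  "Does the most recent experience feel like sitting down?
   (Illustrative sketch only.)"
  [experiences]
  (let [{proprioception :proprioception
         muscle         :muscle
         touch          :touch} (peek experiences)]
    (and (bent? (:knees proprioception))
         (relaxed? (:quadriceps muscle))
         (pressure-on? (:bottom touch)))))
#+end_src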

My program, =EMPATH=, uses this empathic problem-solving technique
to interpret the actions of a simple, worm-like creature.

#+caption: The worm performs many actions during free play such as
#+caption: curling, wiggling, and resting.
#+name: worm-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-intro-white.png]]

#+caption: =EMPATH= recognized and classified each of these
#+caption: poses by inferring the complete sensory experience
#+caption: from proprioceptive data.
#+name: worm-recognition-intro
#+ATTR_LaTeX: :width 15cm
[[./images/worm-poses.png]]

One powerful advantage of empathic problem solving is that it
factors the action recognition problem into two easier problems. To
use empathy, you need an /aligner/, which takes the video and a
model of your body, and aligns the model with the video. Then, you
need a /recognizer/, which uses the aligned model to interpret the
action. The power in this method lies in the fact that you describe
all actions from a body-centered viewpoint. You are less tied to
the particulars of any visual representation of the actions. If you
teach the system what ``running'' is, and you have a good enough
aligner, the system will from then on be able to recognize running
from any point of view, even strange points of view like above or
underneath the runner. This is in contrast to action recognition
schemes that try to identify actions using a non-embodied approach.
If these systems learn about running as viewed from the side, they
will not automatically be able to recognize running from any other
viewpoint.

Another powerful advantage is that using the language of multiple
body-centered rich senses to describe body-centered actions offers a
massive boost in descriptive capability. Consider how difficult it
would be to compose a set of HOG filters to describe the action of
a simple worm-creature ``curling'' so that its head touches its
tail, and then behold the simplicity of describing this action in a
language designed for the task (listing \ref{grand-circle-intro}):

#+caption: Body-centered actions are best expressed in a body-centered
#+caption: language. This code detects when the worm has curled into a
#+caption: full circle. Imagine how you would replicate this functionality
#+caption: using low-level pixel features such as HOG filters!
#+name: grand-circle-intro
#+attr_latex: [htpb]
#+begin_listing clojure
#+begin_src clojure
(defn grand-circle?
  "Does the worm form a majestic circle (one end touching the other)?"
  [experiences]
  (and (curled? experiences)
       (let [worm-touch (:touch (peek experiences))
             tail-touch (worm-touch 0)
             head-touch (worm-touch 4)]
         (and (< 0.2 (contact worm-segment-bottom-tip tail-touch))
              (< 0.2 (contact worm-segment-top-tip    head-touch))))))
#+end_src
#+end_listing

** =CORTEX= is a toolkit for building sensate creatures

I built =CORTEX= to be a general AI research platform for doing
experiments involving multiple rich senses and a wide variety and
number of creatures. I intend it to be useful as a library for many
more projects than just this thesis. =CORTEX= addresses a need among
AI researchers at CSAIL and beyond: people often invent neat ideas
that are best expressed in the language of creatures and senses, but
in order to explore those ideas they must first build a platform in
which they can create simulated creatures with rich senses!
There are many ideas that
would be simple to execute (such as =EMPATH=), but attached to them
is the multi-month effort to make a good creature simulator. Often,
that initial investment of time proves to be too much, and the
project must make do with a lesser environment.

=CORTEX= is well suited as an environment for embodied AI research
for three reasons:

- You can create new creatures using Blender, a popular 3D modeling
  program. Each sense can be specified using special blender nodes
  with biologically inspired parameters. You need not write any
  code to create a creature, and can use a wide library of
  pre-existing blender models as a base for your own creatures.

- =CORTEX= implements a wide variety of senses, including touch,
  proprioception, vision, hearing, and muscle tension. Complicated
  senses like touch and vision involve multiple sensory elements
  embedded in a 2D surface. You have complete control over the
  distribution of these sensor elements through the use of simple
  png image files. In particular, =CORTEX= implements more
  comprehensive hearing than any other creature simulation system
  available.

- =CORTEX= supports any number of creatures and any number of
  senses. Time in =CORTEX= dilates so that the simulated creatures
  always perceive a perfectly smooth flow of time, regardless of
  the actual computational load.

=CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
engine designed to create cross-platform 3D desktop games. =CORTEX=
is mainly written in clojure, a dialect of =LISP= that runs on the
Java Virtual Machine (JVM). The API for creating and simulating
creatures and senses is entirely expressed in clojure, though many
senses are implemented at the layer of jMonkeyEngine or below. For
example, for the sense of hearing I use a layer of clojure code on
top of a layer of Java JNI bindings that drive a layer of =C++=
code which implements a modified version of =OpenAL= to support
multiple listeners. =CORTEX= is the only simulation environment
that I know of that can support multiple entities that can each
hear the world from their own perspective. Other senses also
require a small layer of Java code. =CORTEX= also uses =bullet=, a
physics simulator written in =C++=.

#+caption: Here is the worm from above modeled in Blender, a free
#+caption: 3D-modeling program. Senses and joints are described
#+caption: using special nodes in Blender.
#+name: blender-worm
#+ATTR_LaTeX: :width 12cm
[[./images/blender-worm.png]]

Here are some things I anticipate that =CORTEX= might be used for:

- exploring new ideas about sensory integration
- distributed communication among swarm creatures
- self-learning using free exploration
- evolutionary algorithms involving creature construction
- exploration of exotic senses and effectors that are not possible
  in the real world (such as telekinesis or a semantic sense)
- imagination using subworlds

During one test with =CORTEX=, I created 3,000 creatures each with
their own independent senses and ran them all at only 1/80 real
time. In another test, I created a detailed model of my own hand,
equipped with a realistic distribution of touch (more sensitive at
the fingertips), as well as eyes and ears, and it ran at around 1/4
real time.

#+BEGIN_LaTeX
\begin{sidewaysfigure}
\includegraphics[width=9.5in]{images/full-hand.png}
\caption{
I modeled my own right hand in Blender and rigged it with all the
senses that {\tt CORTEX} supports. My simulated hand has a
biologically inspired distribution of touch sensors. The senses are
displayed on the right, and the simulation is displayed on the
left. Notice that my hand is curling its fingers, that it can see
its own finger from the eye in its palm, and that it can feel its
own thumb touching its palm.}
\end{sidewaysfigure}
#+END_LaTeX

** Contributions

- I built =CORTEX=, a comprehensive platform for embodied AI
  experiments. =CORTEX= supports many features lacking in other
  systems, such as proper simulation of hearing. It is easy to create
  new =CORTEX= creatures using Blender, a free 3D modeling program.

- I built =EMPATH=, which uses =CORTEX= to identify the actions of
  a worm-like creature using a computational model of empathy.

* Building =CORTEX=

I intend for =CORTEX= to be used as a general purpose library for
building creatures and outfitting them with senses, so that it will
be useful for other researchers who want to test out ideas of their
own. To this end, wherever I have had to make architectural choices
about =CORTEX=, I have chosen to give as much freedom to the user as
possible, so that =CORTEX= may be used for things I have not
foreseen.

** COMMENT Simulation or Reality?

The most important architectural decision of all is the choice to
use a computer-simulated environment in the first place! The world
is a vast and rich place, and for now simulations are a very poor
reflection of its complexity. It may be that there is a significant
qualitative difference between dealing with senses in the real
world and dealing with pale facsimiles of them in a simulation.
What are the advantages and disadvantages of a simulation vs.
reality?

*** Simulation

The advantages of virtual reality are that when everything is a
simulation, experiments in that simulation are absolutely
reproducible.
It's also easier to change the character and world
to explore new situations and different sensory combinations.

If the world is to be simulated on a computer, then not only do
you have to worry about whether the character's senses are rich
enough to learn from the world, but whether the world itself is
rendered with enough detail and realism to give enough working
material to the character's senses. To name just a few
difficulties facing modern physics simulators: destructibility of
the environment, simulation of water/other fluids, large areas,
nonrigid bodies, lots of objects, smoke. I don't know of any
computer simulation that would allow a character to take a rock
and grind it into fine dust, then use that dust to make a clay
sculpture, at least not without spending years calculating the
interactions of every single small grain of dust. Maybe a
simulated world with today's limitations doesn't provide enough
richness for real intelligence to evolve.

*** Reality

The other approach for playing with senses is to hook your
software up to real cameras, microphones, robots, etc., and let it
loose in the real world. This has the advantage of eliminating
concerns about simulating the world at the expense of increasing
the complexity of implementing the senses. Instead of just
grabbing the current rendered frame for processing, you have to
use an actual camera with real lenses and interact with photons to
get an image. It is much harder to change the character, which is
now partly a physical robot of some sort, since doing so involves
changing things around in the real world instead of modifying
lines of code. While the real world is very rich and definitely
provides enough stimulation for intelligence to develop as
evidenced by our own existence, it is also uncontrollable in the
sense that a particular situation cannot be recreated perfectly or
saved for later use. It is harder to conduct science because it is
harder to repeat an experiment. The worst thing about using the
real world instead of a simulation is the matter of time. Instead
of simulated time you get the constant and unstoppable flow of
real time. This severely limits the sorts of software you can use
to program the AI because all sense inputs must be handled in real
time. Complicated ideas may have to be implemented in hardware or
may simply be impossible given the current speed of our
processors. Contrast this with a simulation, in which the flow of
time in the simulated world can be slowed down to accommodate the
limitations of the character's programming. In terms of cost,
doing everything in software is far cheaper than building custom
real-time hardware. All you need is a laptop and some patience.

** COMMENT Because of Time, simulation is preferable to reality

I envision =CORTEX= being used to support rapid prototyping and
iteration of ideas. Even if I could put together a well-constructed
kit for creating robots, it would still not be enough because of
the scourge of real-time processing.
Anyone who wants to test their
ideas in the real world must always worry about getting their
algorithms to run fast enough to process information in real time.
The need for real time processing only increases if multiple senses
are involved. In the extreme case, even simple algorithms will have
to be accelerated by ASIC chips or FPGAs, turning what would
otherwise be a few lines of code and a 10x speed penalty into a
multi-month ordeal. For this reason, =CORTEX= supports
/time-dilation/, which scales back the framerate of the
simulation in proportion to the amount of processing required for
each frame. From the perspective of the creatures inside the
simulation, time always appears to flow at a constant rate,
regardless of how complicated the environment becomes or how many
creatures are in the simulation. The cost is that =CORTEX= can
sometimes run slower than real time. This can also be an advantage,
however --- simulations of very simple creatures in =CORTEX=
generally run at 40x on my machine!

** COMMENT What is a sense?

If =CORTEX= is to support a wide variety of senses, it would help
to have a better understanding of what a ``sense'' actually is!
While vision, touch, and hearing all seem like they are quite
different things, I was surprised to learn during the course of
this thesis that they (and all physical senses) can be expressed as
exactly the same mathematical object due to a dimensional argument!

Human beings are three-dimensional objects, and the nerves that
transmit data from our various sense organs to our brain are
essentially one-dimensional. This leaves up to two dimensions in
which our sensory information may flow. For example, imagine your
skin: it is a two-dimensional surface around a three-dimensional
object (your body). It has discrete touch sensors embedded at
various points, and the density of these sensors corresponds to the
sensitivity of that region of skin. Each touch sensor connects to a
nerve, all of which eventually are bundled together as they travel
up the spinal cord to the brain. Intersect the spinal nerves with a
guillotining plane and you will see all of the sensory data of the
skin revealed in a roughly circular two-dimensional image which is
the cross section of the spinal cord. Points on this image that are
close together in this circle represent touch sensors that are
/probably/ close together on the skin, although there is of course
some cutting and rearrangement that has to be done to transfer the
complicated surface of the skin onto a two dimensional image.

Most human senses consist of many discrete sensors of various
properties distributed along a surface at various densities. For
skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
disks, and Ruffini's endings, which detect pressure and vibration
of various intensities. For ears, it is the stereocilia distributed
along the basilar membrane inside the cochlea; each one is
sensitive to a slightly different frequency of sound. For eyes, it
is rods and cones distributed along the surface of the retina.
In
each case, we can describe the sense with a surface and a
distribution of sensors along that surface.

The neat idea is that every human sense can be effectively
described in terms of a surface containing embedded sensors. If the
sense had any more dimensions, then there wouldn't be enough room
in the spinal cord to transmit the information!

Therefore, =CORTEX= must support the ability to create objects and
then be able to ``paint'' points along their surfaces to describe
each sense.

Fortunately this idea is already a well-known computer graphics
technique called /UV-mapping/. The three-dimensional surface
of a model is cut and smooshed until it fits on a two-dimensional
image. You paint whatever you want on that image, and when the
three-dimensional shape is rendered in a game the smooshing and
cutting is reversed and the image appears on the three-dimensional
object.

To make a sense, interpret the UV-image as describing the
distribution of that sense's sensors. To get different types of
sensors, you can either use a different color for each type of
sensor, or use multiple UV-maps, each labeled with that sensor
type. I generally use a white pixel to mean the presence of a
sensor and a black pixel to mean the absence of a sensor, and use
one UV-map for each sensor-type within a given sense.

#+CAPTION: The UV-map for an elongated icosphere. The white
#+caption: dots each represent a touch sensor. They are dense
#+caption: in the regions that describe the tip of the finger,
#+caption: and less dense along the dorsal side of the finger
#+caption: opposite the tip.
#+name: finger-UV
#+ATTR_latex: :width 10cm
[[./images/finger-UV.png]]

#+caption: Ventral side of the UV-mapped finger. Notice the
#+caption: density of touch sensors at the tip.
#+name: finger-side-view
#+ATTR_LaTeX: :width 10cm
[[./images/finger-1.png]]
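
As a rough sketch of how such a sensor map can be read (the function
name here is hypothetical, and =CORTEX='s own helpers may differ in
detail), the UV coordinates of every white pixel give the positions
of the sensors:

#+begin_src clojure
(import 'java.awt.image.BufferedImage)

(defn sensor-coordinates
  "Return the [x y] UV coordinates of every white pixel in image;
   each white pixel marks the position of one sensor."
  [#^BufferedImage image]
  (for [x (range (.getWidth image))
        y (range (.getHeight image))
        :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
    [x y]))
#+end_src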

** COMMENT Video game engines are a great starting point

I did not need to write my own physics simulation code or shader to
build =CORTEX=. Doing so would lead to a system that is impossible
for anyone but myself to use anyway. Instead, I use a video game
engine as a base and modify it to accommodate the additional needs
of =CORTEX=. Video game engines are an ideal starting point to
build =CORTEX=, because they are not far from being creature
building systems themselves.

First off, general purpose video game engines come with a physics
engine and lighting / sound system. The physics system provides
tools that can be co-opted to serve as touch, proprioception, and
muscles. Since some games support split screen views, a good video
game engine will allow you to efficiently create multiple cameras
in the simulated world that can be used as eyes. Video game systems
offer integrated asset management for things like textures and
creature models, providing an avenue for defining creatures. They
also understand UV-mapping, since this technique is used to apply a
texture to a model. Finally, because video game engines support a
large number of users, as long as =CORTEX= doesn't stray too far
from the base system, other researchers can turn to this community
for help when doing their research.

** COMMENT =CORTEX= is based on jMonkeyEngine3

While preparing to build =CORTEX= I studied several video game
engines to see which would best serve as a base. The top contenders
were:

- [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]] :: The Quake II engine was designed by id
     Software in 1997. All the source code was released by id
     Software under the GPL several years ago, and as a
     result it has been ported to many different languages. This
     engine was famous for its advanced use of realistic shading
     and had decent and fast physics simulation. The main advantage
     of the Quake II engine is its simplicity, but I ultimately
     rejected it because the engine is too tied to the concept of a
     first-person shooter game. One of the problems I had was that
     there does not seem to be any easy way to attach multiple
     cameras to a single character. There are also several physics
     clipping issues that are corrected in a way that only applies
     to the main character and do not apply to arbitrary objects.

- [[http://source.valvesoftware.com/][Source Engine]] :: The Source Engine evolved from the Quake II
     and Quake I engines and is used by Valve in the Half-Life
     series of games. The physics simulation in the Source Engine
     is quite accurate and probably the best out of all the engines
     I investigated. There is also an extensive community actively
     working with the engine. However, applications that use the
     Source Engine must be written in C++, the code is not open, it
     only runs on Windows, and the tools that come with the SDK to
     handle models and textures are complicated and awkward to use.

- [[http://jmonkeyengine.com/][jMonkeyEngine3]] :: jMonkeyEngine3 is a new library for creating
     games in Java. It uses OpenGL to render to the screen and uses
     a scene graph to avoid drawing things that do not appear on the
     screen. It has an active community and several games in the
     pipeline. The engine was not built to serve any particular
     game but is instead meant to be used for any 3D game.

I chose jMonkeyEngine3 because it had the most features
out of all the free projects I looked at, and because I could then
write my code in clojure, an implementation of =LISP= that runs on
the JVM.

** COMMENT =CORTEX= uses Blender to create creature models

For the simple worm-like creatures I will use later on in this
thesis, I could define a simple API in =CORTEX= that would allow
one to create boxes, spheres, etc., and leave that API as the sole
way to create creatures. However, for =CORTEX= to truly be useful
for other projects, it needs a way to construct complicated
creatures.
If possible, it would be nice to leverage work that has
already been done by the community of 3D modelers, or at least
enable people who are talented at modeling but not programming to
design =CORTEX= creatures.

Therefore, I use Blender, a free 3D modeling program, as the main
way to create creatures in =CORTEX=. However, the creatures modeled
in Blender must also be simple to simulate in jMonkeyEngine3's game
engine, and must also be easy to rig with =CORTEX='s senses. I
accomplish this with extensive use of Blender's ``empty nodes.''

Empty nodes have no mass, physical presence, or appearance, but
they can hold metadata and have names. I use a tree structure of
empty nodes to specify senses in the following manner:

- Create a single top-level empty node whose name is the name of
  the sense.
- Add empty nodes which each contain meta-data relevant to the
  sense, including a UV-map describing the number/distribution of
  sensors if applicable.
- Make each empty-node the child of the top-level node.

#+caption: An example of annotating a creature model with empty
#+caption: nodes to describe the layout of senses. There are
#+caption: multiple empty nodes which each describe the position
#+caption: of muscles, ears, eyes, or joints.
#+name: sense-nodes
#+ATTR_LaTeX: :width 10cm
[[./images/empty-sense-nodes.png]]

** COMMENT Bodies are composed of segments connected by joints

Blender is a general purpose animation tool, which has been used in
the past to create high quality movies such as Sintel
\cite{sintel}. Though Blender can model and render even complicated
things like water, it is crucial to keep models that are meant to
be simulated as creatures simple. =Bullet=, which =CORTEX= uses
through jMonkeyEngine3, is a rigid-body physics system. This offers
a compromise between the expressiveness of a game level and the
speed at which it can be simulated, and it means that creatures
should be naturally expressed as rigid components held together by
joint constraints.

But humans are more like a squishy bag wrapped around some
hard bones which define the overall shape. When we move, our skin
bends and stretches to accommodate the new positions of our bones.

One way to make bodies composed of rigid pieces connected by joints
/seem/ more human-like is to use an /armature/ (or /rigging/)
system, which defines an overall ``body mesh'' and defines how the
mesh deforms as a function of the position of each ``bone'', which
is a standard rigid body. This technique is used extensively to
model humans and create realistic animations. It is not a good
technique for physical simulation, however, because it creates a lie
-- the skin is not a physical part of the simulation and does not
interact with any objects in the world or itself. Objects will pass
right through the skin until they come in contact with the
underlying bone, which is a physical object.
Without simulating
the skin, the sense of touch has little meaning, and the creature's
own vision will lie to it about the true extent of its body.
Simulating the skin as a physical object requires some way to
continuously update the physical model of the skin along with the
movement of the bones, which is unacceptably slow compared to rigid
body simulation.

Therefore, instead of using the human-like ``deformable bag of
bones'' approach, I decided to base my body plans on multiple solid
objects that are connected by joints, inspired by the robot =EVE=
from the movie WALL-E.

#+caption: =EVE= from the movie WALL-E. This body plan turns
#+caption: out to be much better suited to my purposes than a more
#+caption: human-like one.
#+ATTR_LaTeX: :width 10cm
[[./images/Eve.jpg]]

=EVE='s body is composed of several rigid components that are held
together by invisible joint constraints. This is what I mean by
``eve-like''. The main reason that I use eve-style bodies is for
efficiency, and so that there will be correspondence between the
AI's senses and the physical presence of its body. Each individual
section is simulated by a separate rigid body that corresponds
exactly with its visual representation and does not change.
Sections are connected by invisible joints that are well supported
in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
can efficiently simulate hundreds of rigid bodies connected by
joints. Just because sections are rigid does not mean they have to
stay as one piece forever; they can be dynamically replaced with
multiple sections to simulate splitting in two. This could be used
to simulate retractable claws or =EVE='s hands, which are able to
coalesce into one object in the movie.

*** Solidifying/Connecting a body

=CORTEX= creates a creature in two steps: first, it traverses the
nodes in the blender file and creates physical representations for
any of them that have mass defined in their blender meta-data.

#+caption: Program for iterating through the nodes in a blender file
#+caption: and generating physical jMonkeyEngine3 objects with mass
#+caption: and a matching physics shape.
#+name: physical
#+begin_listing clojure
#+begin_src clojure
(defn physical!
  "Iterate through the nodes in creature and make them real physical
  objects in the simulation."
  [#^Node creature]
  (dorun
   (map
    (fn [geom]
      (let [physics-control
            (RigidBodyControl.
             (HullCollisionShape.
              (.getMesh geom))
             (if-let [mass (meta-data geom "mass")]
               (float mass) (float 1)))]
        (.addControl geom physics-control)))
    (filter #(isa? (class %) Geometry)
            (node-seq creature)))))
#+end_src
#+end_listing

The next step to making a proper body is to connect those pieces
together with joints. jMonkeyEngine has a large array of joints
available via =bullet=, such as Point2Point, Cone, Hinge, and a
generic Six Degree of Freedom joint, with or without spring
restitution.

Joints are treated a lot like proper senses, in that there is a
top-level empty node named ``joints'' whose children each
represent a joint.

#+caption: View of the hand model in Blender showing the main ``joints''
#+caption: node (highlighted in yellow) and its children which each
#+caption: represent a joint in the hand. Each joint node has metadata
#+caption: specifying what sort of joint it is.
#+name: blender-hand
#+ATTR_LaTeX: :width 10cm
[[./images/hand-screenshot1.png]]


=CORTEX='s procedure for binding the creature together with joints
is as follows:

- Find the children of the ``joints'' node.
- Determine the two spatials the joint is meant to connect.
- Create the joint based on the meta-data of the empty node.

The higher order function =sense-nodes= from =cortex.sense=
simplifies finding the joints based on their parent ``joints''
node.

#+caption: Retrieving the children empty nodes from a single
#+caption: named empty node is a common pattern in =CORTEX=;
#+caption: further instances of this technique for the senses
#+caption: will be omitted.
#+name: get-empty-nodes
#+begin_listing clojure
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
  children are considered markers for an instance of that sense. This
  function generates functions to find those children, given the name
  of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node)) [])))

(def
  ^{:doc "Return the children of the creature's \"joints\" node."
    :arglists '([creature])}
  joints
  (sense-nodes "joints"))
#+end_src
#+end_listing

To find a joint's targets, =CORTEX= creates a small cube, centered
around the empty-node, and grows the cube exponentially until it
intersects two physical objects. The objects are ordered according
to the joint's rotation, with the first one being the object that
has more negative coordinates in the joint's reference frame.
Since the objects must be physical, the empty-node itself escapes
detection, and =joint-targets= must be called /after/ =physical!=
is called.

#+caption: Program to find the targets of a joint node by
#+caption: exponential growth of a search cube.
#+name: joint-targets
#+begin_listing clojure
#+begin_src clojure
(defn joint-targets
  "Return the two closest objects to the joint object, ordered
  from bottom to top according to the joint's rotation."
  [#^Node parts #^Node joint]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       parts
       (BoundingBox. (.getWorldTranslation joint)
                     radius radius radius) results)
      (let [targets
            (distinct
             (map #(.getGeometry %) results))]
        (if (>= (count targets) 2)
          (sort-by
           #(let [joint-ref-frame-position
                  (jme-to-blender
                   (.mult
                    (.inverse (.getWorldRotation joint))
                    (.subtract (.getWorldTranslation %)
                               (.getWorldTranslation joint))))]
              (.dot (Vector3f. 1 1 1) joint-ref-frame-position))
           (take 2 targets))
          (recur (float (* radius 2))))))))
#+end_src
#+end_listing

Once =CORTEX= finds all joints and targets, it creates them using
a dispatch on the metadata of each joint node.

#+caption: Program to dispatch on blender metadata and create joints
#+caption: suitable for physical simulation.
#+name: joint-dispatch
#+begin_listing clojure
#+begin_src clojure
(defmulti joint-dispatch
  "Translate blender pseudo-joints into real JME joints."
  (fn [constraints & _]
    (:type constraints)))

(defmethod joint-dispatch :point
  [constraints control-a control-b pivot-a pivot-b rotation]
  (doto (SixDofJoint. control-a control-b pivot-a pivot-b false)
    (.setLinearLowerLimit Vector3f/ZERO)
    (.setLinearUpperLimit Vector3f/ZERO)))

(defmethod joint-dispatch :hinge
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [axis (if-let [axis (:axis constraints)] axis Vector3f/UNIT_X)
        [limit-1 limit-2] (:limit constraints)
        hinge-axis (.mult rotation (blender-to-jme axis))]
    (doto (HingeJoint. control-a control-b pivot-a pivot-b
                       hinge-axis hinge-axis)
      (.setLimit limit-1 limit-2))))

(defmethod joint-dispatch :cone
  [constraints control-a control-b pivot-a pivot-b rotation]
  (let [limit-xz (:limit-xz constraints)
        limit-xy (:limit-xy constraints)
        twist    (:twist constraints)]
    (doto (ConeJoint. control-a control-b pivot-a pivot-b
                      rotation rotation)
      (.setLimit (float limit-xz) (float limit-xy)
                 (float twist)))))
#+end_src
#+end_listing

All that is left is to combine the above pieces into
something that can operate on the collection of nodes that a
blender file represents.

#+caption: Program to completely create a joint given information
#+caption: from a blender file.
#+name: connect
#+begin_listing clojure
#+begin_src clojure
(defn connect
  "Create a joint between 'obj-a and 'obj-b at the location of
  'joint. The type of joint is determined by the metadata on 'joint.

  Here are some examples:
  {:type :point}
  {:type :hinge :limit [0 (/ Math/PI 2)] :axis (Vector3f. 0 1 0)}
  (:axis defaults to (Vector3f. 1 0 0) if not provided for hinge joints)

  {:type :cone :limit-xz 0
               :limit-xy 0
               :twist 0}   (use XZY rotation mode in blender!)"
  [#^Node obj-a #^Node obj-b #^Node joint]
  (let [control-a (.getControl obj-a RigidBodyControl)
        control-b (.getControl obj-b RigidBodyControl)
        joint-center (.getWorldTranslation joint)
        joint-rotation (.toRotationMatrix (.getWorldRotation joint))
        pivot-a (world-to-local obj-a joint-center)
        pivot-b (world-to-local obj-b joint-center)]
    (if-let
        [constraints (map-vals eval (read-string (meta-data joint "joint")))]
      ;; A side-effect of creating a joint registers
      ;; it with both physics objects which in turn
      ;; will register the joint with the physics system
      ;; when the simulation is started.
      (joint-dispatch constraints
                      control-a control-b
                      pivot-a pivot-b
                      joint-rotation))))
#+end_src
#+end_listing

In general, whenever =CORTEX= exposes a sense (or in this case
physicality), it provides a function of the type =sense!=, which
takes in a collection of nodes and augments it to support that
sense. The function returns any controls necessary to use that
sense. In this case =body!= creates a physical body and returns no
control functions.

#+caption: Program to give joints to a creature.
#+name: joints
#+begin_listing clojure
#+begin_src clojure
(defn joints!
  "Connect the solid parts of the creature with physical joints. The
  joints are taken from the \"joints\" node in the creature."
  [#^Node creature]
  (dorun
   (map
    (fn [joint]
      (let [[obj-a obj-b] (joint-targets creature joint)]
        (connect obj-a obj-b joint)))
    (joints creature))))

(defn body!
  "Endow the creature with a physical body connected with joints. The
  particulars of the joints and the masses of each body part are
  determined in blender."
  [#^Node creature]
  (physical! creature)
  (joints! creature))
#+end_src
#+end_listing

All of the code you have just seen amounts to only 130 lines, yet
because it builds on top of Blender and jMonkeyEngine3, those few
lines pack quite a punch!

The hand from figure \ref{blender-hand}, which was modeled after
my own right hand, can now be given joints and simulated as a
creature.

#+caption: With the ability to create physical creatures from blender,
#+caption: =CORTEX= gets one step closer to becoming a full creature
#+caption: simulation environment.
#+name: physical-hand
#+ATTR_LaTeX: :width 15cm
[[./images/physical-hand.png]]

** COMMENT Eyes reuse standard video game components

Vision is one of the most important senses for humans, so I need to
build a simulated sense of vision for my AI. I will do this with
simulated eyes. Each eye can be independently moved and should see
its own version of the world depending on where it is.

Making these simulated eyes a reality is simple because
jMonkeyEngine already contains extensive support for multiple views
of the same 3D simulated world.
jMonkeyEngine has this
support because it is necessary for creating games with
split-screen views. Multiple views are also used to create
efficient pseudo-reflections by rendering the scene from a certain
perspective and then projecting it back onto a surface in the 3D
world.

#+caption: jMonkeyEngine supports multiple views to enable
#+caption: split-screen games, like GoldenEye, which was one of
#+caption: the first games to use split-screen views.
#+name: goldeneye
#+ATTR_LaTeX: :width 10cm
[[./images/goldeneye-4-player.png]]

*** A Brief Description of jMonkeyEngine's Rendering Pipeline

jMonkeyEngine allows you to create a =ViewPort=, which represents a
view of the simulated world. You can create as many of these as you
want. Every frame, the =RenderManager= iterates through each
=ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
is a =FrameBuffer= which represents the rendered image in the GPU.

#+caption: =ViewPorts= are cameras in the world. During each frame,
#+caption: the =RenderManager= records a snapshot of what each view
#+caption: is currently seeing; these snapshots are =FrameBuffer= objects.
#+name: rendermanager
#+ATTR_LaTeX: :width 10cm
[[../images/diagram_rendermanager2.png]]

Each =ViewPort= can have any number of attached =SceneProcessor=
objects, which are called every time a new frame is rendered. A
=SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
whatever it wants to the data. Often this consists of invoking GPU
specific operations on the rendered image. The =SceneProcessor= can
also copy the GPU image data to RAM and process it with the CPU.

*** Appropriating Views for Vision

Each eye in the simulated creature needs its own =ViewPort= so
that it can see the world from its own perspective. To this
=ViewPort=, I add a =SceneProcessor= that feeds the visual data to
any arbitrary continuation function for further processing. That
continuation function may perform both CPU and GPU operations on
the data. To make this easy for the continuation function, the
=SceneProcessor= maintains appropriately sized buffers in RAM to
hold the data. It does not do any copying from the GPU to the CPU
itself because it is a slow operation.

#+caption: Function to make the rendered scene in jMonkeyEngine
#+caption: available for further processing.
#+name: pipeline-1
#+begin_listing clojure
#+begin_src clojure
(defn vision-pipeline
  "Create a SceneProcessor object which wraps a vision processing
  continuation function. The continuation is a function that takes
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  each of which has already been appropriately sized."
  [continuation]
  (let [byte-buffer (atom nil)
        renderer (atom nil)
        image (atom nil)]
    (proxy [SceneProcessor] []
      (initialize
        [renderManager viewPort]
        (let [cam (.getCamera viewPort)
              width (.getWidth cam)
              height (.getHeight cam)]
          (reset! renderer (.getRenderer renderManager))
          (reset! byte-buffer
                  (BufferUtils/createByteBuffer
                   (* width height 4)))
          (reset! image (BufferedImage.
                         width height
                         BufferedImage/TYPE_4BYTE_ABGR))))
      (isInitialized [] (not (nil? @byte-buffer)))
      (reshape [_ _ _])
      (preFrame [_])
      (postQueue [_])
      (postFrame
        [#^FrameBuffer fb]
        (.clear @byte-buffer)
        (continuation @renderer fb @byte-buffer @image))
      (cleanup []))))
#+end_src
#+end_listing

The continuation function given to =vision-pipeline= above will be
given a =Renderer= and three containers for image data. The
=FrameBuffer= references the GPU image data, but the pixel data
cannot be used directly on the CPU. The =ByteBuffer= and
=BufferedImage= are initially "empty" but are sized to hold the
data in the =FrameBuffer=. I call transferring the GPU image data
to the CPU structures "mixing" the image data.

*** Optical sensor arrays are described with images and referenced with metadata

The vision pipeline described above handles the flow of rendered
images. Now, =CORTEX= needs simulated eyes to serve as the source
of these images.

An eye is described in blender in the same way as a joint: it is a
zero dimensional empty object with no geometry whose local
coordinate system determines the orientation of the resulting eye.
All eyes are children of a parent node named "eyes" just as all
joints have a parent named "joints". An eye binds to the nearest
physical object with =bind-sense=.

#+caption: Here, the camera is created based on metadata on the
#+caption: eye-node and attached to the nearest physical object
#+caption: with =bind-sense=.
#+name: add-eye
#+begin_listing clojure
#+begin_src clojure
(defn add-eye!
  "Create a Camera centered on the current position of 'eye which
  follows the closest physical node in 'creature. The camera will
  point in the X direction and use the Z vector as up as determined
  by the rotation of these vectors in blender coordinate space. Use
  XZY rotation for the node in blender."
  [#^Node creature #^Spatial eye]
  (let [target (closest-node creature eye)
        [cam-width cam-height]
        ;;[640 480] ;; graphics card on laptop doesn't support
                    ;; arbitrary dimensions.
        (eye-dimensions eye)
        cam (Camera. cam-width cam-height)
        rot (.getWorldRotation eye)]
    (.setLocation cam (.getWorldTranslation eye))
    (.lookAtDirection
     cam                           ; this part is not a mistake and
     (.mult rot Vector3f/UNIT_X)   ; is consistent with using Z in
     (.mult rot Vector3f/UNIT_Y))  ; blender as the UP vector.
    (.setFrustumPerspective
     cam (float 45)
     (float (/ (.getWidth cam) (.getHeight cam)))
     (float 1)
     (float 1000))
    (bind-sense target cam) cam))
#+end_src
#+end_listing

*** Simulated Retina

An eye is a surface (the retina) which contains many discrete
sensors to detect light. These sensors can have different
light-sensing properties. In humans, each discrete sensor is
sensitive to red, blue, green, or gray.
These different types of rlm@470: sensors can have different spatial distributions along the retina. rlm@470: In humans, there is a fovea in the center of the retina which has rlm@470: a very high density of color sensors, and a blind spot which has rlm@470: no sensors at all. Sensor density decreases in proportion to rlm@470: distance from the fovea. rlm@470: rlm@470: I want to be able to model any retinal configuration, so my rlm@470: eye-nodes in blender contain metadata pointing to images that rlm@470: describe the precise position of the individual sensors using rlm@470: white pixels. The meta-data also describes the precise sensitivity rlm@470: to light that the sensors described in the image have. An eye can rlm@470: contain any number of these images. For example, the metadata for rlm@470: an eye might look like this: rlm@470: rlm@470: #+begin_src clojure rlm@470: {0xFF0000 "Models/test-creature/retina-small.png"} rlm@470: #+end_src rlm@470: rlm@470: #+caption: An example retinal profile image. White pixels are rlm@470: #+caption: photo-sensitive elements. The distribution of white rlm@470: #+caption: pixels is denser in the middle and falls off at the rlm@470: #+caption: edges and is inspired by the human retina. rlm@470: #+name: retina rlm@470: #+ATTR_LaTeX: :width 10cm rlm@470: [[./images/retina-small.png]] rlm@470: rlm@470: Together, the number 0xFF0000 and the image image above describe rlm@470: the placement of red-sensitive sensory elements. rlm@470: rlm@470: Meta-data to very crudely approximate a human eye might be rlm@470: something like this: rlm@470: rlm@470: #+begin_src clojure rlm@470: (let [retinal-profile "Models/test-creature/retina-small.png"] rlm@470: {0xFF0000 retinal-profile rlm@470: 0x00FF00 retinal-profile rlm@470: 0x0000FF retinal-profile rlm@470: 0xFFFFFF retinal-profile}) rlm@470: #+end_src rlm@470: rlm@470: The numbers that serve as keys in the map determine a sensor's rlm@470: relative sensitivity to the channels red, green, and blue. These rlm@470: sensitivity values are packed into an integer in the order rlm@470: =|_|R|G|B|= in 8-bit fields. The RGB values of a pixel in the rlm@470: image are added together with these sensitivities as linear rlm@470: weights. Therefore, 0xFF0000 means sensitive to red only while rlm@470: 0xFFFFFF means sensitive to all colors equally (gray). rlm@470: rlm@470: #+caption: This is the core of vision in =CORTEX=. A given eye node rlm@470: #+caption: is converted into a function that returns visual rlm@470: #+caption: information from the simulation. rlm@471: #+name: vision-kernel rlm@470: #+begin_listing clojure rlm@470: (defn vision-kernel rlm@470: "Returns a list of functions, each of which will return a color rlm@470: channel's worth of visual information when called inside a running rlm@470: simulation." rlm@470: [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}] rlm@470: (let [retinal-map (retina-sensor-profile eye) rlm@470: camera (add-eye! creature eye) rlm@470: vision-image rlm@470: (atom rlm@470: (BufferedImage. (.getWidth camera) rlm@470: (.getHeight camera) rlm@470: BufferedImage/TYPE_BYTE_BINARY)) rlm@470: register-eye! rlm@470: (runonce rlm@470: (fn [world] rlm@470: (add-camera! rlm@470: world camera rlm@470: (let [counter (atom 0)] rlm@470: (fn [r fb bb bi] rlm@470: (if (zero? (rem (swap! counter inc) (inc skip))) rlm@470: (reset! vision-image rlm@470: (BufferedImage! 
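                     ;; BufferedImage! (defined elsewhere in the vision
                     ;; code) is assumed to pull the FrameBuffer's pixels
                     ;; through the ByteBuffer into the BufferedImage,
                     ;; performing the GPU-to-CPU "mixing" described above.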
r fb bb bi))))))))] rlm@470: (vec rlm@470: (map rlm@470: (fn [[key image]] rlm@470: (let [whites (white-coordinates image) rlm@470: topology (vec (collapse whites)) rlm@470: sensitivity (sensitivity-presets key key)] rlm@470: (attached-viewport. rlm@470: (fn [world] rlm@470: (register-eye! world) rlm@470: (vector rlm@470: topology rlm@470: (vec rlm@470: (for [[x y] whites] rlm@470: (pixel-sense rlm@470: sensitivity rlm@470: (.getRGB @vision-image x y)))))) rlm@470: register-eye!))) rlm@470: retinal-map)))) rlm@470: #+end_listing rlm@470: rlm@470: Note that since each of the functions generated by =vision-kernel= rlm@470: shares the same =register-eye!= function, the eye will be rlm@470: registered only once the first time any of the functions from the rlm@470: list returned by =vision-kernel= is called. Each of the functions rlm@470: returned by =vision-kernel= also allows access to the =Viewport= rlm@470: through which it receives images. rlm@470: rlm@470: All the hard work has been done; all that remains is to apply rlm@470: =vision-kernel= to each eye in the creature and gather the results rlm@470: into one list of functions. rlm@470: rlm@470: rlm@470: #+caption: With =vision!=, =CORTEX= is already a fine simulation rlm@470: #+caption: environment for experimenting with different types of rlm@470: #+caption: eyes. rlm@470: #+name: vision! rlm@470: #+begin_listing clojure rlm@470: (defn vision! rlm@470: "Returns a list of functions, each of which returns visual sensory rlm@470: data when called inside a running simulation." rlm@470: [#^Node creature & {skip :skip :or {skip 0}}] rlm@470: (reduce rlm@470: concat rlm@470: (for [eye (eyes creature)] rlm@470: (vision-kernel creature eye)))) rlm@470: #+end_listing rlm@470: rlm@471: #+caption: Simulated vision with a test creature and the rlm@471: #+caption: human-like eye approximation. Notice how each channel rlm@471: #+caption: of the eye responds differently to the differently rlm@471: #+caption: colored balls. rlm@471: #+name: worm-vision-test. rlm@471: #+ATTR_LaTeX: :width 13cm rlm@471: [[./images/worm-vision.png]] rlm@470: rlm@471: The vision code is not much more complicated than the body code, rlm@471: and enables multiple further paths for simulated vision. For rlm@471: example, it is quite easy to create bifocal vision -- you just rlm@471: make two eyes next to each other in blender! It is also possible rlm@471: to encode vision transforms in the retinal files. For example, the rlm@471: human like retina file in figure \ref{retina} approximates a rlm@471: log-polar transform. rlm@470: rlm@471: This vision code has already been absorbed by the jMonkeyEngine rlm@471: community and is now (in modified form) part of a system for rlm@471: capturing in-game video to a file. rlm@470: rlm@473: ** COMMENT Hearing is hard; =CORTEX= does it right rlm@473: rlm@472: At the end of this section I will have simulated ears that work the rlm@472: same way as the simulated eyes in the last section. I will be able to rlm@472: place any number of ear-nodes in a blender file, and they will bind to rlm@472: the closest physical object and follow it as it moves around. Each ear rlm@472: will provide access to the sound data it picks up between every frame. rlm@472: rlm@472: Hearing is one of the more difficult senses to simulate, because there rlm@472: is less support for obtaining the actual sound data that is processed rlm@472: by jMonkeyEngine3. 
There is no "split-screen" support for rendering
sound from different points of view, and there is no way to directly
access the rendered sound data.

=CORTEX='s hearing is unique because it supports any number of
independent listeners, a capability other simulation environments
lack. As far as I know, no other system supports multiple listeners,
and the sound demo at the end of this section is the first time this
has been done in a video game environment.

*** Brief Description of jMonkeyEngine's Sound System

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular
  application to determine what sort of =AudioRenderer= should be
  used.
- Although some support is provided for multiple audio-rendering
  backends, jMonkeyEngine at the time of this writing will either
  pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= renders the 3D sound and feeds the rendered sound
  directly to any of various sound output devices with which it
  knows how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/ (it renders sound data from only one perspective),
which normally isn't a problem for games, but becomes a problem
when trying to make multiple AI creatures that can each hear the
world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, or to make a single creature with
many ears, it is necessary to go all the way back to =OpenAL= and
implement support for simulated hearing there.

*** Extending =OpenAL=

Extending =OpenAL= to support multiple listeners requires 500
lines of =C= code and is too hairy to mention here. Instead, I
will show a small amount of extension code and go over the high
level strategy. Full source is of course available with the
=CORTEX= distribution if you're interested.

=OpenAL= goes to great lengths to support many different systems,
all with different sound capabilities and interfaces. It
accomplishes this difficult task by providing code for many
different sound backends in pseudo-objects called /Devices/.
There's a device for the Linux Open Sound System and the Advanced
Linux Sound Architecture, there's one for Direct Sound on Windows,
and there's even one for Solaris. =OpenAL= solves the problem of
platform independence by providing all these Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for
that system.

There are also a few "special" devices that don't interface with
any particular system.
These include the Null Device, which rlm@472: doesn't do anything, and the Wave Device, which writes whatever rlm@472: sound it receives to a file, if everything has been set up rlm@472: correctly when configuring =OpenAL=. rlm@472: rlm@472: Actual mixing (doppler shift and distance.environment-based rlm@472: attenuation) of the sound data happens in the Devices, and they rlm@472: are the only point in the sound rendering process where this data rlm@472: is available. rlm@472: rlm@472: Therefore, in order to support multiple listeners, and get the rlm@472: sound data in a form that the AIs can use, it is necessary to rlm@472: create a new Device which supports this feature. rlm@472: rlm@472: Adding a device to OpenAL is rather tricky -- there are five rlm@472: separate files in the =OpenAL= source tree that must be modified rlm@472: to do so. I named my device the "Multiple Audio Send" Device, or rlm@472: =Send= Device for short, since it sends audio data back to the rlm@472: calling application like an Aux-Send cable on a mixing board. rlm@472: rlm@472: The main idea behind the Send device is to take advantage of the rlm@472: fact that LWJGL only manages one /context/ when using OpenAL. A rlm@472: /context/ is like a container that holds samples and keeps track rlm@472: of where the listener is. In order to support multiple listeners, rlm@472: the Send device identifies the LWJGL context as the master rlm@472: context, and creates any number of slave contexts to represent rlm@472: additional listeners. Every time the device renders sound, it rlm@472: synchronizes every source from the master LWJGL context to the rlm@472: slave contexts. Then, it renders each context separately, using a rlm@472: different listener for each one. The rendered sound is made rlm@472: available via JNI to jMonkeyEngine. rlm@472: rlm@472: Switching between contexts is not the normal operation of a rlm@472: Device, and one of the problems with doing so is that a Device rlm@472: normally keeps around a few pieces of state such as the rlm@472: =ClickRemoval= array above which will become corrupted if the rlm@472: contexts are not rendered in parallel. The solution is to create a rlm@472: copy of this normally global device state for each context, and rlm@472: copy it back and forth into and out of the actual device state rlm@472: whenever a context is rendered. rlm@472: rlm@472: The core of the =Send= device is the =syncSources= function, which rlm@472: does the job of copying all relevant data from one context to rlm@472: another. rlm@472: rlm@472: #+caption: Program for extending =OpenAL= to support multiple rlm@472: #+caption: listeners via context copying/switching. 
rlm@472: #+name: sync-openal-sources rlm@472: #+begin_listing C rlm@472: void syncSources(ALsource *masterSource, ALsource *slaveSource, rlm@472: ALCcontext *masterCtx, ALCcontext *slaveCtx){ rlm@472: ALuint master = masterSource->source; rlm@472: ALuint slave = slaveSource->source; rlm@472: ALCcontext *current = alcGetCurrentContext(); rlm@472: rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET); rlm@472: syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET); rlm@472: rlm@472: syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION); rlm@472: syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY); rlm@472: syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION); rlm@472: rlm@472: syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE); rlm@472: syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING); rlm@472: rlm@472: alcMakeContextCurrent(masterCtx); rlm@472: ALint source_type; rlm@472: alGetSourcei(master, AL_SOURCE_TYPE, &source_type); rlm@472: rlm@472: // Only static sources are currently synchronized! rlm@472: if (AL_STATIC == source_type){ rlm@472: ALint master_buffer; rlm@472: ALint slave_buffer; rlm@472: alGetSourcei(master, AL_BUFFER, &master_buffer); rlm@472: alcMakeContextCurrent(slaveCtx); rlm@472: alGetSourcei(slave, AL_BUFFER, &slave_buffer); rlm@472: if (master_buffer != slave_buffer){ rlm@472: alSourcei(slave, AL_BUFFER, master_buffer); rlm@472: } rlm@472: } rlm@472: rlm@472: // Synchronize the state of the two sources. rlm@472: alcMakeContextCurrent(masterCtx); rlm@472: ALint masterState; rlm@472: ALint slaveState; rlm@472: rlm@472: alGetSourcei(master, AL_SOURCE_STATE, &masterState); rlm@472: alcMakeContextCurrent(slaveCtx); rlm@472: alGetSourcei(slave, AL_SOURCE_STATE, &slaveState); rlm@472: rlm@472: if (masterState != slaveState){ rlm@472: switch (masterState){ rlm@472: case AL_INITIAL : alSourceRewind(slave); break; rlm@472: case AL_PLAYING : alSourcePlay(slave); break; rlm@472: case AL_PAUSED : alSourcePause(slave); break; rlm@472: case AL_STOPPED : alSourceStop(slave); break; rlm@472: } rlm@472: } rlm@472: // Restore whatever context was previously active. rlm@472: alcMakeContextCurrent(current); rlm@472: } rlm@472: #+end_listing rlm@472: rlm@472: With this special context-switching device, and some ugly JNI rlm@472: bindings that are not worth mentioning, =CORTEX= gains the ability rlm@472: to access multiple sound streams from =OpenAL=. rlm@472: rlm@472: #+caption: Program to create an ear from a blender empty node. The ear rlm@472: #+caption: follows around the nearest physical object and passes rlm@472: #+caption: all sensory data to a continuation function. rlm@472: #+name: add-ear rlm@472: #+begin_listing clojure rlm@472: (defn add-ear! 
rlm@472: "Create a Listener centered on the current position of 'ear rlm@472: which follows the closest physical node in 'creature and rlm@472: sends sound data to 'continuation." rlm@472: [#^Application world #^Node creature #^Spatial ear continuation] rlm@472: (let [target (closest-node creature ear) rlm@472: lis (Listener.) rlm@472: audio-renderer (.getAudioRenderer world) rlm@472: sp (hearing-pipeline continuation)] rlm@472: (.setLocation lis (.getWorldTranslation ear)) rlm@472: (.setRotation lis (.getWorldRotation ear)) rlm@472: (bind-sense target lis) rlm@472: (update-listener-velocity! target lis) rlm@472: (.addListener audio-renderer lis) rlm@472: (.registerSoundProcessor audio-renderer lis sp))) rlm@472: #+end_listing rlm@472: rlm@472: rlm@472: The =Send= device, unlike most of the other devices in =OpenAL=, rlm@472: does not render sound unless asked. This enables the system to rlm@472: slow down or speed up depending on the needs of the AIs who are rlm@472: using it to listen. If the device tried to render samples in rlm@472: real-time, a complicated AI whose mind takes 100 seconds of rlm@472: computer time to simulate 1 second of AI-time would miss almost rlm@472: all of the sound in its environment! rlm@472: rlm@472: #+caption: Program to enable arbitrary hearing in =CORTEX= rlm@472: #+name: hearing rlm@472: #+begin_listing clojure rlm@472: (defn hearing-kernel rlm@472: "Returns a function which returns auditory sensory data when called rlm@472: inside a running simulation." rlm@472: [#^Node creature #^Spatial ear] rlm@472: (let [hearing-data (atom []) rlm@472: register-listener! rlm@472: (runonce rlm@472: (fn [#^Application world] rlm@472: (add-ear! rlm@472: world creature ear rlm@472: (comp #(reset! hearing-data %) rlm@472: byteBuffer->pulse-vector))))] rlm@472: (fn [#^Application world] rlm@472: (register-listener! world) rlm@472: (let [data @hearing-data rlm@472: topology rlm@472: (vec (map #(vector % 0) (range 0 (count data))))] rlm@472: [topology data])))) rlm@472: rlm@472: (defn hearing! rlm@472: "Endow the creature in a particular world with the sense of rlm@472: hearing. Will return a sequence of functions, one for each ear, rlm@472: which when called will return the auditory data from that ear." rlm@472: [#^Node creature] rlm@472: (for [ear (ears creature)] rlm@472: (hearing-kernel creature ear))) rlm@472: #+end_listing rlm@472: rlm@472: Armed with these functions, =CORTEX= is able to test possibly the rlm@472: first ever instance of multiple listeners in a video game engine rlm@472: based simulation! rlm@472: rlm@472: #+caption: Here a simple creature responds to sound by changing rlm@472: #+caption: its color from gray to green when the total volume rlm@472: #+caption: goes over a threshold. rlm@472: #+name: sound-test rlm@472: #+begin_listing java rlm@472: /** rlm@472: * Respond to sound! This is the brain of an AI entity that rlm@472: * hears its surroundings and reacts to them. rlm@472: */ rlm@472: public void process(ByteBuffer audioSamples, rlm@472: int numSamples, AudioFormat format) { rlm@472: audioSamples.clear(); rlm@472: byte[] data = new byte[numSamples]; rlm@472: float[] out = new float[numSamples]; rlm@472: audioSamples.get(data); rlm@472: FloatSampleTools. 
rlm@472: byte2floatInterleaved rlm@472: (data, 0, out, 0, numSamples/format.getFrameSize(), format); rlm@472: rlm@472: float max = Float.NEGATIVE_INFINITY; rlm@472: for (float f : out){if (f > max) max = f;} rlm@472: audioSamples.clear(); rlm@472: rlm@472: if (max > 0.1){ rlm@472: entity.getMaterial().setColor("Color", ColorRGBA.Green); rlm@472: } rlm@472: else { rlm@472: entity.getMaterial().setColor("Color", ColorRGBA.Gray); rlm@472: } rlm@472: #+end_listing rlm@472: rlm@472: #+caption: First ever simulation of multiple listerners in =CORTEX=. rlm@472: #+caption: Each cube is a creature which processes sound data with rlm@472: #+caption: the =process= function from listing \ref{sound-test}. rlm@472: #+caption: the ball is constantally emiting a pure tone of rlm@472: #+caption: constant volume. As it approaches the cubes, they each rlm@472: #+caption: change color in response to the sound. rlm@472: #+name: sound-cubes. rlm@472: #+ATTR_LaTeX: :width 10cm rlm@472: [[./images/aurellem-gray.png]] rlm@472: rlm@472: This system of hearing has also been co-opted by the rlm@472: jMonkeyEngine3 community and is used to record audio for demo rlm@472: videos. rlm@472: rlm@436: ** Touch uses hundreds of hair-like elements rlm@436: rlm@474: Touch is critical to navigation and spatial reasoning and as such I rlm@474: need a simulated version of it to give to my AI creatures. rlm@474: rlm@474: Human skin has a wide array of touch sensors, each of which rlm@474: specialize in detecting different vibrational modes and pressures. rlm@474: These sensors can integrate a vast expanse of skin (i.e. your rlm@474: entire palm), or a tiny patch of skin at the tip of your finger. rlm@474: The hairs of the skin help detect objects before they even come rlm@474: into contact with the skin proper. rlm@474: rlm@474: However, touch in my simulated world can not exactly correspond to rlm@474: human touch because my creatures are made out of completely rigid rlm@474: segments that don't deform like human skin. rlm@474: rlm@474: Instead of measuring deformation or vibration, I surround each rlm@474: rigid part with a plenitude of hair-like objects (/feelers/) which rlm@474: do not interact with the physical world. Physical objects can pass rlm@474: through them with no effect. The feelers are able to tell when rlm@474: other objects pass through them, and they constantly report how rlm@474: much of their extent is covered. So even though the creature's body rlm@474: parts do not deform, the feelers create a margin around those body rlm@474: parts which achieves a sense of touch which is a hybrid between a rlm@474: human's sense of deformation and sense from hairs. rlm@474: rlm@474: Implementing touch in jMonkeyEngine follows a different technical rlm@474: route than vision and hearing. Those two senses piggybacked off rlm@474: jMonkeyEngine's 3D audio and video rendering subsystems. To rlm@474: simulate touch, I use jMonkeyEngine's physics system to execute rlm@474: many small collision detections, one for each feeler. The placement rlm@474: of the feelers is determined by a UV-mapped image which shows where rlm@474: each feeler should be on the 3D surface of the body. rlm@474: rlm@474: *** Defining Touch Meta-Data in Blender rlm@474: rlm@474: Each geometry can have a single UV map which describes the rlm@474: position of the feelers which will constitute its sense of touch. rlm@474: This image path is stored under the ``touch'' key. 
The image itself rlm@474: is black and white, with black meaning a feeler length of 0 (no rlm@474: feeler is present) and white meaning a feeler length of =scale=, rlm@474: which is a float stored under the key "scale". rlm@474: rlm@474: #+name: meta-data rlm@474: #+begin_src clojure rlm@474: (defn tactile-sensor-profile rlm@474: "Return the touch-sensor distribution image in BufferedImage format, rlm@474: or nil if it does not exist." rlm@474: [#^Geometry obj] rlm@474: (if-let [image-path (meta-data obj "touch")] rlm@474: (load-image image-path))) rlm@474: rlm@474: (defn tactile-scale rlm@474: "Return the length of each feeler. Default scale is 0.01 rlm@474: jMonkeyEngine units." rlm@474: [#^Geometry obj] rlm@474: (if-let [scale (meta-data obj "scale")] rlm@474: scale 0.1)) rlm@474: #+end_src rlm@474: rlm@474: Here is an example of a UV-map which specifies the position of touch rlm@474: sensors along the surface of the upper segment of the worm. rlm@474: rlm@474: #+attr_html: width=755 rlm@474: #+caption: This is the tactile-sensor-profile for the upper segment of the worm. It defines regions of high touch sensitivity (where there are many white pixels) and regions of low sensitivity (where white pixels are sparse). rlm@474: [[../images/finger-UV.png]] rlm@474: rlm@474: *** Implementation Summary rlm@474: rlm@474: To simulate touch there are three conceptual steps. For each solid rlm@474: object in the creature, you first have to get UV image and scale rlm@474: parameter which define the position and length of the feelers. rlm@474: Then, you use the triangles which comprise the mesh and the UV rlm@474: data stored in the mesh to determine the world-space position and rlm@474: orientation of each feeler. Then once every frame, update these rlm@474: positions and orientations to match the current position and rlm@474: orientation of the object, and use physics collision detection to rlm@474: gather tactile data. rlm@474: rlm@474: Extracting the meta-data has already been described. The third rlm@474: step, physics collision detection, is handled in =touch-kernel=. rlm@474: Translating the positions and orientations of the feelers from the rlm@474: UV-map to world-space is itself a three-step process. rlm@474: rlm@474: - Find the triangles which make up the mesh in pixel-space and in rlm@474: world-space. =triangles= =pixel-triangles=. rlm@474: rlm@474: - Find the coordinates of each feeler in world-space. These are the rlm@474: origins of the feelers. =feeler-origins=. rlm@474: rlm@474: - Calculate the normals of the triangles in world space, and add rlm@474: them to each of the origins of the feelers. These are the rlm@474: normalized coordinates of the tips of the feelers. =feeler-tips=. rlm@474: rlm@474: *** Triangle Math rlm@474: rlm@474: The rigid objects which make up a creature have an underlying rlm@474: =Geometry=, which is a =Mesh= plus a =Material= and other important rlm@474: data involved with displaying the object. rlm@474: rlm@474: A =Mesh= is composed of =Triangles=, and each =Triangle= has three rlm@474: vertices which have coordinates in world space and UV space. rlm@474: rlm@474: Here, =triangles= gets all the world-space triangles which comprise a rlm@474: mesh, while =pixel-triangles= gets those same triangles expressed in rlm@474: pixel coordinates (which are UV coordinates scaled to fit the height rlm@474: and width of the UV image). 
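For example, a vertex with UV coordinates (0.25, 0.5) on a
(hypothetical) 64x64 tactile-profile image would land at pixel
coordinates (16, 32).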
rlm@474: rlm@474: #+name: triangles-2 rlm@474: #+begin_src clojure rlm@474: (in-ns 'cortex.touch) rlm@474: (defn triangle rlm@474: "Get the triangle specified by triangle-index from the mesh." rlm@474: [#^Geometry geo triangle-index] rlm@474: (triangle-seq rlm@474: (let [scratch (Triangle.)] rlm@474: (.getTriangle (.getMesh geo) triangle-index scratch) scratch))) rlm@474: rlm@474: (defn triangles rlm@474: "Return a sequence of all the Triangles which comprise a given rlm@474: Geometry." rlm@474: [#^Geometry geo] rlm@474: (map (partial triangle geo) (range (.getTriangleCount (.getMesh geo))))) rlm@474: rlm@474: (defn triangle-vertex-indices rlm@474: "Get the triangle vertex indices of a given triangle from a given rlm@474: mesh." rlm@474: [#^Mesh mesh triangle-index] rlm@474: (let [indices (int-array 3)] rlm@474: (.getTriangle mesh triangle-index indices) rlm@474: (vec indices))) rlm@474: rlm@474: (defn vertex-UV-coord rlm@474: "Get the UV-coordinates of the vertex named by vertex-index" rlm@474: [#^Mesh mesh vertex-index] rlm@474: (let [UV-buffer rlm@474: (.getData rlm@474: (.getBuffer rlm@474: mesh rlm@474: VertexBuffer$Type/TexCoord))] rlm@474: [(.get UV-buffer (* vertex-index 2)) rlm@474: (.get UV-buffer (+ 1 (* vertex-index 2)))])) rlm@474: rlm@474: (defn pixel-triangle [#^Geometry geo image index] rlm@474: (let [mesh (.getMesh geo) rlm@474: width (.getWidth image) rlm@474: height (.getHeight image)] rlm@474: (vec (map (fn [[u v]] (vector (* width u) (* height v))) rlm@474: (map (partial vertex-UV-coord mesh) rlm@474: (triangle-vertex-indices mesh index)))))) rlm@474: rlm@474: (defn pixel-triangles rlm@474: "The pixel-space triangles of the Geometry, in the same order as rlm@474: (triangles geo)" rlm@474: [#^Geometry geo image] rlm@474: (let [height (.getHeight image) rlm@474: width (.getWidth image)] rlm@474: (map (partial pixel-triangle geo image) rlm@474: (range (.getTriangleCount (.getMesh geo)))))) rlm@474: #+end_src rlm@474: rlm@474: *** The Affine Transform from one Triangle to Another rlm@474: rlm@474: =pixel-triangles= gives us the mesh triangles expressed in pixel rlm@474: coordinates and =triangles= gives us the mesh triangles expressed in rlm@474: world coordinates. The tactile-sensor-profile gives the position of rlm@474: each feeler in pixel-space. In order to convert pixel-space rlm@474: coordinates into world-space coordinates we need something that takes rlm@474: coordinates on the surface of one triangle and gives the corresponding rlm@474: coordinates on the surface of another triangle. rlm@474: rlm@474: Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html ][affine]], which means any triangle can be transformed into rlm@474: any other by a combination of translation, scaling, and rlm@474: rotation. The affine transformation from one triangle to another rlm@474: is readily computable if the triangle is expressed in terms of a $4x4$ rlm@474: matrix. rlm@474: rlm@474: \begin{bmatrix} rlm@474: x_1 & x_2 & x_3 & n_x \\ rlm@474: y_1 & y_2 & y_3 & n_y \\ rlm@474: z_1 & z_2 & z_3 & n_z \\ rlm@474: 1 & 1 & 1 & 1 rlm@474: \end{bmatrix} rlm@474: rlm@474: Here, the first three columns of the matrix are the vertices of the rlm@474: triangle. The last column is the right-handed unit normal of the rlm@474: triangle. 
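The normal column is what makes this representation work: three
vertex correspondences alone leave an affine map of 3D space
underdetermined (they say nothing about what happens off the plane of
the triangle), and a matrix built from the vertices alone would not
even be square. Appending the unit normal supplies a fourth
independent column, so the matrix above can be inverted and the
off-plane behavior of the transform is pinned down by the normal
correspondence.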
rlm@474: rlm@474: With two triangles $T_{1}$ and $T_{2}$ each expressed as a matrix like rlm@474: above, the affine transform from $T_{1}$ to $T_{2}$ is rlm@474: rlm@474: $T_{2}T_{1}^{-1}$ rlm@474: rlm@474: The clojure code below recapitulates the formulas above, using rlm@474: jMonkeyEngine's =Matrix4f= objects, which can describe any affine rlm@474: transformation. rlm@474: rlm@474: #+name: triangles-3 rlm@474: #+begin_src clojure rlm@474: (in-ns 'cortex.touch) rlm@474: rlm@474: (defn triangle->matrix4f rlm@474: "Converts the triangle into a 4x4 matrix: The first three columns rlm@474: contain the vertices of the triangle; the last contains the unit rlm@474: normal of the triangle. The bottom row is filled with 1s." rlm@474: [#^Triangle t] rlm@474: (let [mat (Matrix4f.) rlm@474: [vert-1 vert-2 vert-3] rlm@474: (mapv #(.get t %) (range 3)) rlm@474: unit-normal (do (.calculateNormal t)(.getNormal t)) rlm@474: vertices [vert-1 vert-2 vert-3 unit-normal]] rlm@474: (dorun rlm@474: (for [row (range 4) col (range 3)] rlm@474: (do rlm@474: (.set mat col row (.get (vertices row) col)) rlm@474: (.set mat 3 row 1)))) mat)) rlm@474: rlm@474: (defn triangles->affine-transform rlm@474: "Returns the affine transformation that converts each vertex in the rlm@474: first triangle into the corresponding vertex in the second rlm@474: triangle." rlm@474: [#^Triangle tri-1 #^Triangle tri-2] rlm@474: (.mult rlm@474: (triangle->matrix4f tri-2) rlm@474: (.invert (triangle->matrix4f tri-1)))) rlm@474: #+end_src rlm@474: rlm@474: *** Triangle Boundaries rlm@474: rlm@474: For efficiency's sake I will divide the tactile-profile image into rlm@474: small squares which inscribe each pixel-triangle, then extract the rlm@474: points which lie inside the triangle and map them to 3D-space using rlm@474: =triangle-transform= above. To do this I need a function, rlm@474: =convex-bounds= which finds the smallest box which inscribes a 2D rlm@474: triangle. rlm@474: rlm@474: =inside-triangle?= determines whether a point is inside a triangle rlm@474: in 2D pixel-space. rlm@474: rlm@474: #+name: triangles-4 rlm@474: #+begin_src clojure rlm@474: (defn convex-bounds rlm@474: "Returns the smallest square containing the given vertices, as a rlm@474: vector of integers [left top width height]." rlm@474: [verts] rlm@474: (let [xs (map first verts) rlm@474: ys (map second verts) rlm@474: x0 (Math/floor (apply min xs)) rlm@474: y0 (Math/floor (apply min ys)) rlm@474: x1 (Math/ceil (apply max xs)) rlm@474: y1 (Math/ceil (apply max ys))] rlm@474: [x0 y0 (- x1 x0) (- y1 y0)])) rlm@474: rlm@474: (defn same-side? rlm@474: "Given the points p1 and p2 and the reference point ref, is point p rlm@474: on the same side of the line that goes through p1 and p2 as ref is?" rlm@474: [p1 p2 ref p] rlm@474: (<= rlm@474: 0 rlm@474: (.dot rlm@474: (.cross (.subtract p2 p1) (.subtract p p1)) rlm@474: (.cross (.subtract p2 p1) (.subtract ref p1))))) rlm@474: rlm@474: (defn inside-triangle? rlm@474: "Is the point inside the triangle?" rlm@474: {:author "Dylan Holmes"} rlm@474: [#^Triangle tri #^Vector3f p] rlm@474: (let [[vert-1 vert-2 vert-3] [(.get1 tri) (.get2 tri) (.get3 tri)]] rlm@474: (and rlm@474: (same-side? vert-1 vert-2 vert-3 p) rlm@474: (same-side? vert-2 vert-3 vert-1 p) rlm@474: (same-side? vert-3 vert-1 vert-2 p)))) rlm@474: #+end_src rlm@474: rlm@474: *** Feeler Coordinates rlm@474: rlm@474: The triangle-related functions above make short work of calculating rlm@474: the positions and orientations of each feeler in world-space. 
rlm@474: rlm@474: #+name: sensors rlm@474: #+begin_src clojure rlm@474: (in-ns 'cortex.touch) rlm@474: rlm@474: (defn feeler-pixel-coords rlm@474: "Returns the coordinates of the feelers in pixel space in lists, one rlm@474: list for each triangle, ordered in the same way as (triangles) and rlm@474: (pixel-triangles)." rlm@474: [#^Geometry geo image] rlm@474: (map rlm@474: (fn [pixel-triangle] rlm@474: (filter rlm@474: (fn [coord] rlm@474: (inside-triangle? (->triangle pixel-triangle) rlm@474: (->vector3f coord))) rlm@474: (white-coordinates image (convex-bounds pixel-triangle)))) rlm@474: (pixel-triangles geo image))) rlm@474: rlm@474: (defn feeler-world-coords rlm@474: "Returns the coordinates of the feelers in world space in lists, one rlm@474: list for each triangle, ordered in the same way as (triangles) and rlm@474: (pixel-triangles)." rlm@474: [#^Geometry geo image] rlm@474: (let [transforms rlm@474: (map #(triangles->affine-transform rlm@474: (->triangle %1) (->triangle %2)) rlm@474: (pixel-triangles geo image) rlm@474: (triangles geo))] rlm@474: (map (fn [transform coords] rlm@474: (map #(.mult transform (->vector3f %)) coords)) rlm@474: transforms (feeler-pixel-coords geo image)))) rlm@474: rlm@474: (defn feeler-origins rlm@474: "The world space coordinates of the root of each feeler." rlm@474: [#^Geometry geo image] rlm@474: (reduce concat (feeler-world-coords geo image))) rlm@474: rlm@474: (defn feeler-tips rlm@474: "The world space coordinates of the tip of each feeler." rlm@474: [#^Geometry geo image] rlm@474: (let [world-coords (feeler-world-coords geo image) rlm@474: normals rlm@474: (map rlm@474: (fn [triangle] rlm@474: (.calculateNormal triangle) rlm@474: (.clone (.getNormal triangle))) rlm@474: (map ->triangle (triangles geo)))] rlm@474: rlm@474: (mapcat (fn [origins normal] rlm@474: (map #(.add % normal) origins)) rlm@474: world-coords normals))) rlm@474: rlm@474: (defn touch-topology rlm@474: "touch-topology? is not a function." rlm@474: [#^Geometry geo image] rlm@474: (collapse (reduce concat (feeler-pixel-coords geo image)))) rlm@474: #+end_src rlm@474: *** Simulated Touch rlm@474: rlm@474: =touch-kernel= generates functions to be called from within a rlm@474: simulation that perform the necessary physics collisions to collect rlm@474: tactile data, and =touch!= recursively applies it to every node in rlm@474: the creature. rlm@474: rlm@474: #+name: kernel rlm@474: #+begin_src clojure rlm@474: (in-ns 'cortex.touch) rlm@474: rlm@474: (defn set-ray [#^Ray ray #^Matrix4f transform rlm@474: #^Vector3f origin #^Vector3f tip] rlm@474: ;; Doing everything locally reduces garbage collection by enough to rlm@474: ;; be worth it. 
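  ;; 'origin and 'tip are the feeler's reference (rest-pose) endpoints.
  ;; Each frame they are pushed through the geometry's current world
  ;; matrix so the feeler follows its body part; the ray's direction is
  ;; then re-derived from the transformed endpoints and normalized.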
rlm@474: (.mult transform origin (.getOrigin ray)) rlm@474: (.mult transform tip (.getDirection ray)) rlm@474: (.subtractLocal (.getDirection ray) (.getOrigin ray)) rlm@474: (.normalizeLocal (.getDirection ray))) rlm@474: rlm@474: (import com.jme3.math.FastMath) rlm@474: rlm@474: (defn touch-kernel rlm@474: "Constructs a function which will return tactile sensory data from rlm@474: 'geo when called from inside a running simulation" rlm@474: [#^Geometry geo] rlm@474: (if-let rlm@474: [profile (tactile-sensor-profile geo)] rlm@474: (let [ray-reference-origins (feeler-origins geo profile) rlm@474: ray-reference-tips (feeler-tips geo profile) rlm@474: ray-length (tactile-scale geo) rlm@474: current-rays (map (fn [_] (Ray.)) ray-reference-origins) rlm@474: topology (touch-topology geo profile) rlm@474: correction (float (* ray-length -0.2))] rlm@474: rlm@474: ;; slight tolerance for very close collisions. rlm@474: (dorun rlm@474: (map (fn [origin tip] rlm@474: (.addLocal origin (.mult (.subtract tip origin) rlm@474: correction))) rlm@474: ray-reference-origins ray-reference-tips)) rlm@474: (dorun (map #(.setLimit % ray-length) current-rays)) rlm@474: (fn [node] rlm@474: (let [transform (.getWorldMatrix geo)] rlm@474: (dorun rlm@474: (map (fn [ray ref-origin ref-tip] rlm@474: (set-ray ray transform ref-origin ref-tip)) rlm@474: current-rays ray-reference-origins rlm@474: ray-reference-tips)) rlm@474: (vector rlm@474: topology rlm@474: (vec rlm@474: (for [ray current-rays] rlm@474: (do rlm@474: (let [results (CollisionResults.)] rlm@474: (.collideWith node ray results) rlm@474: (let [touch-objects rlm@474: (filter #(not (= geo (.getGeometry %))) rlm@474: results) rlm@474: limit (.getLimit ray)] rlm@474: [(if (empty? touch-objects) rlm@474: limit rlm@474: (let [response rlm@474: (apply min (map #(.getDistance %) rlm@474: touch-objects))] rlm@474: (FastMath/clamp rlm@474: (float rlm@474: (if (> response limit) (float 0.0) rlm@474: (+ response correction))) rlm@474: (float 0.0) rlm@474: limit))) rlm@474: limit]))))))))))) rlm@474: rlm@474: (defn touch! rlm@474: "Endow the creature with the sense of touch. Returns a sequence of rlm@474: functions, one for each body part with a tactile-sensor-profile, rlm@474: each of which when called returns sensory data for that body part." rlm@474: [#^Node creature] rlm@474: (filter rlm@474: (comp not nil?) rlm@474: (map touch-kernel rlm@474: (filter #(isa? (class %) Geometry) rlm@474: (node-seq creature))))) rlm@474: #+end_src rlm@474: rlm@474: rlm@474: Armed with the =touch!= function, =CORTEX= becomes capable of giving rlm@474: creatures a sense of touch. A simple test is to create a cube that is rlm@474: outfitted with a uniform distrubition of touch sensors. It can feel rlm@474: the ground and any balls that it touches. rlm@474: rlm@474: # insert touch cube image; UV map rlm@474: # insert video rlm@474: rlm@440: ** Proprioception is the sense that makes everything ``real'' rlm@436: rlm@436: ** Muscles are both effectors and sensors rlm@436: rlm@436: ** =CORTEX= brings complex creatures to life! rlm@436: rlm@436: ** =CORTEX= enables many possiblities for further research rlm@474: rlm@465: * COMMENT Empathy in a simulated worm rlm@435: rlm@449: Here I develop a computational model of empathy, using =CORTEX= as a rlm@449: base. Empathy in this context is the ability to observe another rlm@449: creature and infer what sorts of sensations that creature is rlm@449: feeling. My empathy algorithm involves multiple phases. 
First is
free-play, where the creature moves around and gains sensory
experience. From this experience I construct a representation of the
creature's sensory state space, which I call \Phi-space. Using
\Phi-space, I construct an efficient function which takes the
limited data that comes from observing another creature and enriches
it with a full complement of imagined sensory data. I can then use the
imagined sensory data to recognize what the observed creature is
doing and feeling, using straightforward embodied action predicates.
This is all demonstrated using a simple worm-like creature,
recognizing worm-actions from limited data.

#+caption: Here is the worm with which we will be working.
#+caption: It is composed of 5 segments. Each segment has a
#+caption: pair of extensor and flexor muscles. Each of the
#+caption: worm's four joints is a hinge joint which allows
#+caption: about 30 degrees of rotation to either side. Each segment
#+caption: of the worm is touch-capable and has a uniform
#+caption: distribution of touch sensors on each of its faces.
#+caption: Each joint has a proprioceptive sense to detect
#+caption: relative positions. The worm segments are all the
#+caption: same except for the first one, which has a much
#+caption: higher weight than the others to allow for easy
#+caption: manual motor control.
#+name: basic-worm-view
#+ATTR_LaTeX: :width 10cm
[[./images/basic-worm-view.png]]

#+caption: Program for reading a worm from a blender file and
#+caption: outfitting it with the senses of proprioception,
#+caption: touch, and the ability to move, as specified in the
#+caption: blender file.
#+name: get-worm
#+begin_listing clojure
#+begin_src clojure
(defn worm []
  (let [model (load-blender-model "Models/worm/worm.blend")]
    {:body (doto model (body!))
     :touch (touch! model)
     :proprioception (proprioception! model)
     :muscles (movement! model)}))
#+end_src
#+end_listing

** Embodiment factors action recognition into manageable parts

Using empathy, I divide the problem of action recognition into a
recognition process expressed in the language of a full complement
of senses, and an imaginative process that generates full sensory
data from partial sensory data. Splitting the action recognition
problem in this manner greatly reduces the total amount of work to
recognize actions: The imaginative process is mostly just matching
previous experience, and the recognition process gets to use all
the senses to directly describe any action.

** Action recognition is easy with a full gamut of senses

Embodied representations using multiple senses such as touch,
proprioception, and muscle tension turn out to be exceedingly
efficient at describing body-centered actions. It is the ``right
language for the job''. For example, it takes only around 5 lines
of LISP code to describe the action of ``curling'' using embodied
primitives. It takes about 10 lines to describe the seemingly
complicated action of wiggling.
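Before looking at the predicates themselves, it helps to know the
rough shape of the data they consume. The sketch below is inferred
from how the predicates access each sense; the names inside the map
are placeholders, not real vars.

#+begin_src clojure
;; Sketch of a single frame of worm experience (placeholder names).
;; The experience vector is a sequence of maps like this one.
{:proprioception [joint-0 joint-1 joint-2 joint-3]
 ;; one triple per joint; its third element is the bend angle
 :touch [seg-0 seg-1 seg-2 seg-3 seg-4]
 ;; one [coords sensor-values] pair per segment
 :muscle [muscle-0 muscle-1 muscle-2 muscle-3]}
 ;; one activation level per muscle
#+end_src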
rlm@449: rlm@449: The following action predicates each take a stream of sensory rlm@449: experience, observe however much of it they desire, and decide rlm@449: whether the worm is doing the action they describe. =curled?= rlm@449: relies on proprioception, =resting?= relies on touch, =wiggling?= rlm@449: relies on a fourier analysis of muscle contraction, and rlm@449: =grand-circle?= relies on touch and reuses =curled?= as a gaurd. rlm@449: rlm@449: #+caption: Program for detecting whether the worm is curled. This is the rlm@449: #+caption: simplest action predicate, because it only uses the last frame rlm@449: #+caption: of sensory experience, and only uses proprioceptive data. Even rlm@449: #+caption: this simple predicate, however, is automatically frame rlm@449: #+caption: independent and ignores vermopomorphic differences such as rlm@449: #+caption: worm textures and colors. rlm@449: #+name: curled rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@449: #+begin_src clojure rlm@449: (defn curled? rlm@449: "Is the worm curled up?" rlm@449: [experiences] rlm@449: (every? rlm@449: (fn [[_ _ bend]] rlm@449: (> (Math/sin bend) 0.64)) rlm@449: (:proprioception (peek experiences)))) rlm@449: #+end_src rlm@449: #+end_listing rlm@449: rlm@449: #+caption: Program for summarizing the touch information in a patch rlm@449: #+caption: of skin. rlm@449: #+name: touch-summary rlm@452: #+attr_latex: [htpb] rlm@452: rlm@452: #+begin_listing clojure rlm@449: #+begin_src clojure rlm@449: (defn contact rlm@449: "Determine how much contact a particular worm segment has with rlm@449: other objects. Returns a value between 0 and 1, where 1 is full rlm@449: contact and 0 is no contact." rlm@449: [touch-region [coords contact :as touch]] rlm@449: (-> (zipmap coords contact) rlm@449: (select-keys touch-region) rlm@449: (vals) rlm@449: (#(map first %)) rlm@449: (average) rlm@449: (* 10) rlm@449: (- 1) rlm@449: (Math/abs))) rlm@449: #+end_src rlm@449: #+end_listing rlm@449: rlm@449: rlm@449: #+caption: Program for detecting whether the worm is at rest. This program rlm@449: #+caption: uses a summary of the tactile information from the underbelly rlm@449: #+caption: of the worm, and is only true if every segment is touching the rlm@449: #+caption: floor. Note that this function contains no references to rlm@449: #+caption: proprioction at all. rlm@449: #+name: resting rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@449: #+begin_src clojure rlm@449: (def worm-segment-bottom (rect-region [8 15] [14 22])) rlm@449: rlm@449: (defn resting? rlm@449: "Is the worm resting on the ground?" rlm@449: [experiences] rlm@449: (every? rlm@449: (fn [touch-data] rlm@449: (< 0.9 (contact worm-segment-bottom touch-data))) rlm@449: (:touch (peek experiences)))) rlm@449: #+end_src rlm@449: #+end_listing rlm@449: rlm@449: #+caption: Program for detecting whether the worm is curled up into a rlm@449: #+caption: full circle. Here the embodied approach begins to shine, as rlm@449: #+caption: I am able to both use a previous action predicate (=curled?=) rlm@449: #+caption: as well as the direct tactile experience of the head and tail. rlm@449: #+name: grand-circle rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@449: #+begin_src clojure rlm@449: (def worm-segment-bottom-tip (rect-region [15 15] [22 22])) rlm@449: rlm@449: (def worm-segment-top-tip (rect-region [0 15] [7 22])) rlm@449: rlm@449: (defn grand-circle? 
rlm@449: "Does the worm form a majestic circle (one end touching the other)?" rlm@449: [experiences] rlm@449: (and (curled? experiences) rlm@449: (let [worm-touch (:touch (peek experiences)) rlm@449: tail-touch (worm-touch 0) rlm@449: head-touch (worm-touch 4)] rlm@449: (and (< 0.55 (contact worm-segment-bottom-tip tail-touch)) rlm@449: (< 0.55 (contact worm-segment-top-tip head-touch)))))) rlm@449: #+end_src rlm@449: #+end_listing rlm@449: rlm@449: rlm@449: #+caption: Program for detecting whether the worm has been wiggling for rlm@449: #+caption: the last few frames. It uses a fourier analysis of the muscle rlm@449: #+caption: contractions of the worm's tail to determine wiggling. This is rlm@449: #+caption: signigicant because there is no particular frame that clearly rlm@449: #+caption: indicates that the worm is wiggling --- only when multiple frames rlm@449: #+caption: are analyzed together is the wiggling revealed. Defining rlm@449: #+caption: wiggling this way also gives the worm an opportunity to learn rlm@449: #+caption: and recognize ``frustrated wiggling'', where the worm tries to rlm@449: #+caption: wiggle but can't. Frustrated wiggling is very visually different rlm@449: #+caption: from actual wiggling, but this definition gives it to us for free. rlm@449: #+name: wiggling rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@449: #+begin_src clojure rlm@449: (defn fft [nums] rlm@449: (map rlm@449: #(.getReal %) rlm@449: (.transform rlm@449: (FastFourierTransformer. DftNormalization/STANDARD) rlm@449: (double-array nums) TransformType/FORWARD))) rlm@449: rlm@449: (def indexed (partial map-indexed vector)) rlm@449: rlm@449: (defn max-indexed [s] rlm@449: (first (sort-by (comp - second) (indexed s)))) rlm@449: rlm@449: (defn wiggling? rlm@449: "Is the worm wiggling?" rlm@449: [experiences] rlm@449: (let [analysis-interval 0x40] rlm@449: (when (> (count experiences) analysis-interval) rlm@449: (let [a-flex 3 rlm@449: a-ex 2 rlm@449: muscle-activity rlm@449: (map :muscle (vector:last-n experiences analysis-interval)) rlm@449: base-activity rlm@449: (map #(- (% a-flex) (% a-ex)) muscle-activity)] rlm@449: (= 2 rlm@449: (first rlm@449: (max-indexed rlm@449: (map #(Math/abs %) rlm@449: (take 20 (fft base-activity)))))))))) rlm@449: #+end_src rlm@449: #+end_listing rlm@449: rlm@449: With these action predicates, I can now recognize the actions of rlm@449: the worm while it is moving under my control and I have access to rlm@449: all the worm's senses. rlm@449: rlm@449: #+caption: Use the action predicates defined earlier to report on rlm@449: #+caption: what the worm is doing while in simulation. rlm@449: #+name: report-worm-activity rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@449: #+begin_src clojure rlm@449: (defn debug-experience rlm@449: [experiences text] rlm@449: (cond rlm@449: (grand-circle? experiences) (.setText text "Grand Circle") rlm@449: (curled? experiences) (.setText text "Curled") rlm@449: (wiggling? experiences) (.setText text "Wiggling") rlm@449: (resting? experiences) (.setText text "Resting"))) rlm@449: #+end_src rlm@449: #+end_listing rlm@449: rlm@449: #+caption: Using =debug-experience=, the body-centered predicates rlm@449: #+caption: work together to classify the behaviour of the worm. rlm@451: #+caption: the predicates are operating with access to the worm's rlm@451: #+caption: full sensory data. 
rlm@449: #+name: basic-worm-view rlm@449: #+ATTR_LaTeX: :width 10cm rlm@449: [[./images/worm-identify-init.png]] rlm@449: rlm@449: These action predicates satisfy the recognition requirement of an rlm@451: empathic recognition system. There is power in the simplicity of rlm@451: the action predicates. They describe their actions without getting rlm@451: confused in visual details of the worm. Each one is frame rlm@451: independent, but more than that, they are each indepent of rlm@449: irrelevant visual details of the worm and the environment. They rlm@449: will work regardless of whether the worm is a different color or rlm@451: hevaily textured, or if the environment has strange lighting. rlm@449: rlm@449: The trick now is to make the action predicates work even when the rlm@449: sensory data on which they depend is absent. If I can do that, then rlm@449: I will have gained much, rlm@435: rlm@436: ** \Phi-space describes the worm's experiences rlm@449: rlm@449: As a first step towards building empathy, I need to gather all of rlm@449: the worm's experiences during free play. I use a simple vector to rlm@449: store all the experiences. rlm@449: rlm@449: Each element of the experience vector exists in the vast space of rlm@449: all possible worm-experiences. Most of this vast space is actually rlm@449: unreachable due to physical constraints of the worm's body. For rlm@449: example, the worm's segments are connected by hinge joints that put rlm@451: a practical limit on the worm's range of motions without limiting rlm@451: its degrees of freedom. Some groupings of senses are impossible; rlm@451: the worm can not be bent into a circle so that its ends are rlm@451: touching and at the same time not also experience the sensation of rlm@451: touching itself. rlm@449: rlm@451: As the worm moves around during free play and its experience vector rlm@451: grows larger, the vector begins to define a subspace which is all rlm@451: the sensations the worm can practicaly experience during normal rlm@451: operation. I call this subspace \Phi-space, short for rlm@451: physical-space. The experience vector defines a path through rlm@451: \Phi-space. This path has interesting properties that all derive rlm@451: from physical embodiment. The proprioceptive components are rlm@451: completely smooth, because in order for the worm to move from one rlm@451: position to another, it must pass through the intermediate rlm@451: positions. The path invariably forms loops as actions are repeated. rlm@451: Finally and most importantly, proprioception actually gives very rlm@451: strong inference about the other senses. For example, when the worm rlm@451: is flat, you can infer that it is touching the ground and that its rlm@451: muscles are not active, because if the muscles were active, the rlm@451: worm would be moving and would not be perfectly flat. In order to rlm@451: stay flat, the worm has to be touching the ground, or it would rlm@451: again be moving out of the flat position due to gravity. If the rlm@451: worm is positioned in such a way that it interacts with itself, rlm@451: then it is very likely to be feeling the same tactile feelings as rlm@451: the last time it was in that position, because it has the same body rlm@451: as then. 
If you observe multiple frames of proprioceptive data, rlm@451: then you can become increasingly confident about the exact rlm@451: activations of the worm's muscles, because it generally takes a rlm@451: unique combination of muscle contractions to transform the worm's rlm@451: body along a specific path through \Phi-space. rlm@449: rlm@449: There is a simple way of taking \Phi-space and the total ordering rlm@449: provided by an experience vector and reliably infering the rest of rlm@449: the senses. rlm@435: rlm@436: ** Empathy is the process of tracing though \Phi-space rlm@449: rlm@450: Here is the core of a basic empathy algorithm, starting with an rlm@451: experience vector: rlm@451: rlm@451: First, group the experiences into tiered proprioceptive bins. I use rlm@451: powers of 10 and 3 bins, and the smallest bin has an approximate rlm@451: size of 0.001 radians in all proprioceptive dimensions. rlm@450: rlm@450: Then, given a sequence of proprioceptive input, generate a set of rlm@451: matching experience records for each input, using the tiered rlm@451: proprioceptive bins. rlm@449: rlm@450: Finally, to infer sensory data, select the longest consective chain rlm@451: of experiences. Conecutive experience means that the experiences rlm@451: appear next to each other in the experience vector. rlm@449: rlm@450: This algorithm has three advantages: rlm@450: rlm@450: 1. It's simple rlm@450: rlm@451: 3. It's very fast -- retrieving possible interpretations takes rlm@451: constant time. Tracing through chains of interpretations takes rlm@451: time proportional to the average number of experiences in a rlm@451: proprioceptive bin. Redundant experiences in \Phi-space can be rlm@451: merged to save computation. rlm@450: rlm@450: 2. It protects from wrong interpretations of transient ambiguous rlm@451: proprioceptive data. For example, if the worm is flat for just rlm@450: an instant, this flattness will not be interpreted as implying rlm@450: that the worm has its muscles relaxed, since the flattness is rlm@450: part of a longer chain which includes a distinct pattern of rlm@451: muscle activation. Markov chains or other memoryless statistical rlm@451: models that operate on individual frames may very well make this rlm@451: mistake. rlm@450: rlm@450: #+caption: Program to convert an experience vector into a rlm@450: #+caption: proprioceptively binned lookup function. rlm@450: #+name: bin rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@450: #+begin_src clojure rlm@449: (defn bin [digits] rlm@449: (fn [angles] rlm@449: (->> angles rlm@449: (flatten) rlm@449: (map (juxt #(Math/sin %) #(Math/cos %))) rlm@449: (flatten) rlm@449: (mapv #(Math/round (* % (Math/pow 10 (dec digits)))))))) rlm@449: rlm@449: (defn gen-phi-scan rlm@450: "Nearest-neighbors with binning. Only returns a result if rlm@450: the propriceptive data is within 10% of a previously recorded rlm@450: result in all dimensions." 
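  ;; Strategy: build one index map per bin resolution, finest first
  ;; (3, then 2, then 1 significant digits of the sin/cos of each
  ;; joint angle).  `some` at the bottom picks the finest bin that
  ;; has any matches for the query.
  ;; For example, ((bin 3) [0.5]) => [48 88].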
rlm@450: [phi-space] rlm@449: (let [bin-keys (map bin [3 2 1]) rlm@449: bin-maps rlm@449: (map (fn [bin-key] rlm@449: (group-by rlm@449: (comp bin-key :proprioception phi-space) rlm@449: (range (count phi-space)))) bin-keys) rlm@449: lookups (map (fn [bin-key bin-map] rlm@450: (fn [proprio] (bin-map (bin-key proprio)))) rlm@450: bin-keys bin-maps)] rlm@449: (fn lookup [proprio-data] rlm@449: (set (some #(% proprio-data) lookups))))) rlm@450: #+end_src rlm@450: #+end_listing rlm@449: rlm@451: #+caption: =longest-thread= finds the longest path of consecutive rlm@451: #+caption: experiences to explain proprioceptive worm data. rlm@451: #+name: phi-space-history-scan rlm@451: #+ATTR_LaTeX: :width 10cm rlm@451: [[./images/aurellem-gray.png]] rlm@451: rlm@451: =longest-thread= infers sensory data by stitching together pieces rlm@451: from previous experience. It prefers longer chains of previous rlm@451: experience to shorter ones. For example, during training the worm rlm@451: might rest on the ground for one second before it performs its rlm@451: excercises. If during recognition the worm rests on the ground for rlm@451: five seconds, =longest-thread= will accomodate this five second rlm@451: rest period by looping the one second rest chain five times. rlm@451: rlm@451: =longest-thread= takes time proportinal to the average number of rlm@451: entries in a proprioceptive bin, because for each element in the rlm@451: starting bin it performes a series of set lookups in the preceeding rlm@451: bins. If the total history is limited, then this is only a constant rlm@451: multiple times the number of entries in the starting bin. This rlm@451: analysis also applies even if the action requires multiple longest rlm@451: chains -- it's still the average number of entries in a rlm@451: proprioceptive bin times the desired chain length. Because rlm@451: =longest-thread= is so efficient and simple, I can interpret rlm@451: worm-actions in real time. rlm@449: rlm@450: #+caption: Program to calculate empathy by tracing though \Phi-space rlm@450: #+caption: and finding the longest (ie. most coherent) interpretation rlm@450: #+caption: of the data. rlm@450: #+name: longest-thread rlm@452: #+attr_latex: [htpb] rlm@452: #+begin_listing clojure rlm@450: #+begin_src clojure rlm@449: (defn longest-thread rlm@449: "Find the longest thread from phi-index-sets. The index sets should rlm@449: be ordered from most recent to least recent." rlm@449: [phi-index-sets] rlm@449: (loop [result '() rlm@449: [thread-bases & remaining :as phi-index-sets] phi-index-sets] rlm@449: (if (empty? phi-index-sets) rlm@449: (vec result) rlm@449: (let [threads rlm@449: (for [thread-base thread-bases] rlm@449: (loop [thread (list thread-base) rlm@449: remaining remaining] rlm@449: (let [next-index (dec (first thread))] rlm@449: (cond (empty? remaining) thread rlm@449: (contains? (first remaining) next-index) rlm@449: (recur rlm@449: (cons next-index thread) (rest remaining)) rlm@449: :else thread)))) rlm@449: longest-thread rlm@449: (reduce (fn [thread-a thread-b] rlm@449: (if (> (count thread-a) (count thread-b)) rlm@449: thread-a thread-b)) rlm@449: '(nil) rlm@449: threads)] rlm@449: (recur (concat longest-thread result) rlm@449: (drop (count longest-thread) phi-index-sets)))))) rlm@450: #+end_src rlm@450: #+end_listing rlm@450: rlm@451: There is one final piece, which is to replace missing sensory data rlm@451: with a best-guess estimate. 
rlm@451: There is one final piece, which is to replace missing sensory data
rlm@451: with a best-guess estimate. While I could fill in missing data by
rlm@451: using a gradient over the closest known sensory data points,
rlm@451: averages can be misleading. It is certainly possible to create an
rlm@451: impossible sensory state by averaging two possible sensory states.
rlm@451: Therefore, I simply replicate the most recent sensory experience to
rlm@451: fill in the gaps.
rlm@449: 
rlm@449: #+caption: Fill in blanks in sensory experience by replicating the most
rlm@449: #+caption: recent experience.
rlm@449: #+name: infer-nils
rlm@452: #+attr_latex: [htpb]
rlm@452: #+begin_listing clojure
rlm@449: #+begin_src clojure
rlm@449: (defn infer-nils
rlm@449:   "Replace nils with the next available non-nil element in the
rlm@449:    sequence, or barring that, 0."
rlm@449:   [s]
rlm@449:   (loop [i (dec (count s))
rlm@449:          v (transient s)]
rlm@449:     (if (zero? i) (persistent! v)
rlm@449:         (if-let [cur (v i)]
rlm@449:           (if (get v (dec i) 0)
rlm@449:             (recur (dec i) v)
rlm@449:             (recur (dec i) (assoc! v (dec i) cur)))
rlm@449:           (recur i (assoc! v i 0))))))
rlm@449: #+end_src
rlm@449: #+end_listing
rlm@435: 
rlm@441: ** Efficient action recognition with =EMPATH=
rlm@451: 
rlm@451: To use =EMPATH= with the worm, I first need to gather a set of
rlm@451: experiences from the worm that includes the actions I want to
rlm@452: recognize. The =generate-phi-space= program (listing
rlm@451: \ref{generate-phi-space}) runs the worm through a series of
rlm@451: exercises and gathers those experiences into a vector. The
rlm@451: =do-all-the-things= program is a routine expressed in a simple
rlm@452: muscle contraction script language for automated worm control. It
rlm@452: causes the worm to rest, curl, and wiggle over about 700 frames
rlm@452: (approx. 11 seconds).
rlm@425: 
rlm@451: #+caption: Program to gather the worm's experiences into a vector for
rlm@451: #+caption: further processing. The =motor-control-program= line uses
rlm@451: #+caption: a motor control script that causes the worm to execute a series
rlm@451: #+caption: of ``exercises'' that include all the action predicates.
rlm@451: #+name: generate-phi-space
rlm@452: #+attr_latex: [htpb]
rlm@452: #+begin_listing clojure
rlm@451: #+begin_src clojure
rlm@451: (def do-all-the-things
rlm@451:   (concat
rlm@451:    curl-script
rlm@451:    [[300 :d-ex 40]
rlm@451:     [320 :d-ex 0]]
rlm@451:    (shift-script 280 (take 16 wiggle-script))))
rlm@451: 
rlm@451: (defn generate-phi-space []
rlm@451:   (let [experiences (atom [])]
rlm@451:     (run-world
rlm@451:      (apply-map
rlm@451:       worm-world
rlm@451:       (merge
rlm@451:        (worm-world-defaults)
rlm@451:        {:end-frame 700
rlm@451:         :motor-control
rlm@451:         (motor-control-program worm-muscle-labels do-all-the-things)
rlm@451:         :experiences experiences})))
rlm@451:     @experiences))
rlm@451: #+end_src
rlm@451: #+end_listing
rlm@451: 
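As far as I can tell from the calls above, each entry in these motor
control scripts is a triple of =[frame muscle-label strength]=. A
minimal, hypothetical script in the same style, which contracts the
=:d-ex= muscles and then relaxes them, would look like this:

#+begin_src clojure
(def tiny-script
  [[100 :d-ex 40]   ; at frame 100, set the :d-ex muscles to strength 40
   [200 :d-ex 0]])  ; at frame 200, relax them again
#+end_src
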
rlm@451: #+caption: Use longest thread and a phi-space generated from a short
rlm@451: #+caption: exercise routine to interpret actions during free play.
rlm@451: #+name: empathy-debug
rlm@452: #+attr_latex: [htpb]
rlm@452: #+begin_listing clojure
rlm@451: #+begin_src clojure
rlm@451: (defn init []
rlm@451:   (def phi-space (generate-phi-space))
rlm@451:   (def phi-scan (gen-phi-scan phi-space)))
rlm@451: 
rlm@451: (defn empathy-demonstration []
rlm@451:   (let [proprio (atom ())]
rlm@451:     (fn
rlm@451:       [experiences text]
rlm@451:       (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
rlm@451:         (swap! proprio (partial cons phi-indices))
rlm@451:         (let [exp-thread (longest-thread (take 300 @proprio))
rlm@451:               empathy (mapv phi-space (infer-nils exp-thread))]
rlm@451:           (println-repl (vector:last-n exp-thread 22))
rlm@451:           (cond
rlm@451:             (grand-circle? empathy) (.setText text "Grand Circle")
rlm@451:             (curled? empathy) (.setText text "Curled")
rlm@451:             (wiggling? empathy) (.setText text "Wiggling")
rlm@451:             (resting? empathy) (.setText text "Resting")
rlm@451:             :else (.setText text "Unknown")))))))
rlm@451: 
rlm@451: (defn empathy-experiment [record]
rlm@451:   (.start (worm-world :experience-watch (debug-experience-phi)
rlm@451:                       :record record :worm worm*)))
rlm@451: #+end_src
rlm@451: #+end_listing
rlm@451: 
rlm@451: The result of running =empathy-experiment= is that the system is
rlm@451: generally able to interpret worm actions using the action predicates
rlm@451: on simulated sensory data just as well as with actual data. Figure
rlm@451: \ref{empathy-debug-image} was generated using =empathy-experiment=:
rlm@451: 
rlm@451: #+caption: From only proprioceptive data, =EMPATH= was able to infer
rlm@451: #+caption: the complete sensory experience and classify four poses
rlm@451: #+caption: (the last panel shows a composite image of \emph{wiggling},
rlm@451: #+caption: a dynamic pose).
rlm@451: #+name: empathy-debug-image
rlm@451: #+ATTR_LaTeX: :width 10cm :placement [H]
rlm@451: [[./images/empathy-1.png]]
rlm@451: 
rlm@451: One way to measure the performance of =EMPATH= is to compare the
rlm@451: suitability of the imagined sense experience to trigger the same
rlm@451: action predicates as the real sensory experience.
rlm@451: 
rlm@451: #+caption: Determine how closely empathy approximates actual
rlm@451: #+caption: sensory data.
rlm@451: #+name: test-empathy-accuracy
rlm@452: #+attr_latex: [htpb]
rlm@452: #+begin_listing clojure
rlm@451: #+begin_src clojure
rlm@451: (def worm-action-label
rlm@451:   (juxt grand-circle? curled? wiggling?))
rlm@451: 
rlm@451: (defn compare-empathy-with-baseline [matches]
rlm@451:   (let [proprio (atom ())]
rlm@451:     (fn
rlm@451:       [experiences text]
rlm@451:       (let [phi-indices (phi-scan (:proprioception (peek experiences)))]
rlm@451:         (swap! proprio (partial cons phi-indices))
rlm@451:         (let [exp-thread (longest-thread (take 300 @proprio))
rlm@451:               empathy (mapv phi-space (infer-nils exp-thread))
rlm@451:               experience-matches-empathy
rlm@451:               (= (worm-action-label experiences)
rlm@451:                  (worm-action-label empathy))]
rlm@451:           (println-repl experience-matches-empathy)
rlm@451:           (swap! matches #(conj % experience-matches-empathy)))))))
rlm@451: 
rlm@451: (defn accuracy [v]
rlm@451:   (float (/ (count (filter true? v)) (count v))))
rlm@451: 
rlm@451: (defn test-empathy-accuracy []
rlm@451:   (let [res (atom [])]
rlm@451:     (run-world
rlm@451:      (worm-world :experience-watch
rlm@451:                  (compare-empathy-with-baseline res)
rlm@451:                  :worm worm*))
rlm@451:     (accuracy @res)))
rlm@451: #+end_src
rlm@451: #+end_listing
rlm@451: 
rlm@451: Running =test-empathy-accuracy= using the very short exercise
rlm@451: program defined in listing \ref{generate-phi-space}, and then doing
rlm@451: a similar pattern of activity manually, yields an accuracy of around
rlm@451: 73%. This is based on very limited worm experience. By training the
rlm@451: worm for longer, the accuracy dramatically improves.
rlm@451: 
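To be precise about what this accuracy measures: a frame counts as
correct when =worm-action-label= returns the same label vector for the
real experience and for the imagined one. A hypothetical matching
frame (the boolean values here are invented for illustration):

#+begin_src clojure
(worm-action-label experiences) ; => [false true false]  (curled)
(worm-action-label empathy)     ; => [false true false]  (labels agree; the frame counts as accurate)
#+end_src
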
rlm@451: #+caption: Program to generate \Phi-space using manual training.
rlm@451: #+name: manual-phi-space
rlm@452: #+attr_latex: [htpb]
rlm@451: #+begin_listing clojure
rlm@451: #+begin_src clojure
rlm@451: (defn init-interactive []
rlm@451:   (def phi-space
rlm@451:     (let [experiences (atom [])]
rlm@451:       (run-world
rlm@451:        (apply-map
rlm@451:         worm-world
rlm@451:         (merge
rlm@451:          (worm-world-defaults)
rlm@451:          {:experiences experiences})))
rlm@451:       @experiences))
rlm@451:   (def phi-scan (gen-phi-scan phi-space)))
rlm@451: #+end_src
rlm@451: #+end_listing
rlm@451: 
rlm@451: After about 1 minute of manual training, I was able to achieve 95%
rlm@451: accuracy on manual testing of the worm using =init-interactive= and
rlm@452: =test-empathy-accuracy=. The majority of errors are near the
rlm@452: boundaries of transitioning from one type of action to another.
rlm@452: During these transitions the exact label for the action is more open
rlm@452: to interpretation, and disagreement between empathy and experience
rlm@452: is more excusable.
rlm@450: 
rlm@449: ** Digression: bootstrapping touch using free exploration
rlm@449: 
rlm@452: In the previous section I showed how to compute actions in terms of
rlm@452: body-centered predicates, which relied on average touch activation of
rlm@452: pre-defined regions of the worm's skin. What if, instead of receiving
rlm@452: touch pre-grouped into the six faces of each worm segment, the true
rlm@452: topology of the worm's skin were unknown? This is more similar to how
rlm@452: a nerve fiber bundle might be arranged. While two fibers that are
rlm@452: close in a nerve bundle /might/ correspond to two touch sensors that
rlm@452: are close together on the skin, the process of taking a complicated
rlm@452: surface and forcing it into essentially a circle requires some cuts
rlm@452: and rearrangements.
rlm@452: 
rlm@452: In this section I show how to automatically learn the skin topology of
rlm@452: a worm segment by free exploration. As the worm rolls around on the
rlm@452: floor, large sections of its surface get activated. If the worm has
rlm@452: stopped moving, then whatever region of skin is touching the
rlm@452: floor is probably an important region, and should be recorded.
rlm@452: 
rlm@452: #+caption: Program to detect whether the worm is in a resting state
rlm@452: #+caption: with one face touching the floor.
rlm@452: #+name: pure-touch
rlm@452: #+begin_listing clojure
rlm@452: #+begin_src clojure
rlm@452: (def full-contact [(float 0.0) (float 0.1)])
rlm@452: 
rlm@452: (defn pure-touch?
rlm@452:   "This is worm-specific code to determine if a large region of touch
rlm@452:    sensors is either all on or all off."
rlm@452:   [[coords touch :as touch-data]]
rlm@452:   (= (set (map first touch)) (set full-contact)))
rlm@452: #+end_src
rlm@452: #+end_listing
rlm@452: 
rlm@452: After collecting these important regions, there will be many nearly
rlm@452: similar touch regions. While for some purposes the subtle
rlm@452: differences between these regions will be important, for my
rlm@452: purposes I collapse them into mostly non-overlapping sets using
rlm@452: =remove-similar= in listing \ref{remove-similar}.
rlm@452: 
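The intended behaviour, illustrated with a hypothetical call (the sets
here are invented): of two nearly identical sets only the smaller one
survives, while clearly distinct sets are kept.

#+begin_src clojure
(remove-similar [#{1 2 3} #{1 2 3 4} #{5 6}])
;; => (#{5 6} #{1 2 3})   ; printed set order may vary
#+end_src
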
rlm@452: #+caption: Program to take a list of sets of points and ``collapse them''
rlm@452: #+caption: so that the remaining sets in the list are significantly
rlm@452: #+caption: different from each other. Prefer smaller sets to larger ones.
rlm@452: #+name: remove-similar
rlm@452: #+begin_listing clojure
rlm@452: #+begin_src clojure
rlm@452: (defn remove-similar
rlm@452:   [coll]
rlm@452:   (loop [result () coll (sort-by (comp - count) coll)]
rlm@452:     (if (empty? coll) result
rlm@452:         (let [[x & xs] coll
rlm@452:               c (count x)]
rlm@452:           (if (some
rlm@452:                (fn [other-set]
rlm@452:                  (let [oc (count other-set)]
rlm@452:                    (< (- (count (union other-set x)) c) (* oc 0.1))))
rlm@452:                xs)
rlm@452:             (recur result xs)
rlm@452:             (recur (cons x result) xs))))))
rlm@452: #+end_src
rlm@452: #+end_listing
rlm@452: 
rlm@452: Actually running this simulation is easy given =CORTEX='s facilities.
rlm@452: 
rlm@452: #+caption: Collect experiences while the worm moves around. Filter the touch
rlm@452: #+caption: sensations by stable ones, collapse similar ones together,
rlm@452: #+caption: and report the regions learned.
rlm@452: #+name: learn-touch
rlm@452: #+begin_listing clojure
rlm@452: #+begin_src clojure
rlm@452: (defn learn-touch-regions []
rlm@452:   (let [experiences (atom [])
rlm@452:         world (apply-map
rlm@452:                worm-world
rlm@452:                (assoc (worm-segment-defaults)
rlm@452:                       :experiences experiences))]
rlm@452:     (run-world world)
rlm@452:     (->>
rlm@452:      @experiences
rlm@452:      (drop 175)
rlm@452:      ;; access the single segment's touch data
rlm@452:      (map (comp first :touch))
rlm@452:      ;; only deal with "pure" touch data to determine surfaces
rlm@452:      (filter pure-touch?)
rlm@452:      ;; associate coordinates with touch values
rlm@452:      (map (partial apply zipmap))
rlm@452:      ;; select those regions where contact is being made
rlm@452:      (map (partial group-by second))
rlm@452:      (map #(get % full-contact))
rlm@452:      (map (partial map first))
rlm@452:      ;; remove redundant/subset regions
rlm@452:      (map set)
rlm@452:      remove-similar)))
rlm@452: 
rlm@452: (defn learn-and-view-touch-regions []
rlm@452:   (map view-touch-region
rlm@452:        (learn-touch-regions)))
rlm@452: #+end_src
rlm@452: #+end_listing
rlm@452: 
rlm@452: The only thing remaining to define is the particular motion the worm
rlm@452: must take. I accomplish this with a simple motor control program.
rlm@452: 
rlm@452: #+caption: Motor control program for making the worm roll on the ground.
rlm@452: #+caption: This could also be replaced with random motion.
rlm@452: #+name: worm-roll
rlm@452: #+begin_listing clojure
rlm@452: #+begin_src clojure
rlm@452: (defn touch-kinesthetics []
rlm@452:   [[170 :lift-1 40]
rlm@452:    [190 :lift-1 19]
rlm@452:    [206 :lift-1 0]
rlm@452: 
rlm@452:    [400 :lift-2 40]
rlm@452:    [410 :lift-2 0]
rlm@452: 
rlm@452:    [570 :lift-2 40]
rlm@452:    [590 :lift-2 21]
rlm@452:    [606 :lift-2 0]
rlm@452: 
rlm@452:    [800 :lift-1 30]
rlm@452:    [809 :lift-1 0]
rlm@452: 
rlm@452:    [900 :roll-2 40]
rlm@452:    [905 :roll-2 20]
rlm@452:    [910 :roll-2 0]
rlm@452: 
rlm@452:    [1000 :roll-2 40]
rlm@452:    [1005 :roll-2 20]
rlm@452:    [1010 :roll-2 0]
rlm@452: 
rlm@452:    [1100 :roll-2 40]
rlm@452:    [1105 :roll-2 20]
rlm@452:    [1110 :roll-2 0]])
rlm@452: #+end_src
rlm@452: #+end_listing
rlm@452: 
rlm@452: #+caption: The small worm rolls around on the floor, driven
rlm@452: #+caption: by the motor control program in listing \ref{worm-roll}.
rlm@452: #+name: worm-roll-image
rlm@452: #+ATTR_LaTeX: :width 12cm
rlm@452: [[./images/worm-roll.png]]
rlm@452: 
rlm@452: #+caption: After completing its adventures, the worm now knows
rlm@452: #+caption: how its touch sensors are arranged along its skin. These
rlm@452: #+caption: are the regions that were deemed important by
rlm@452: #+caption: =learn-touch-regions=. Note that the worm has discovered
rlm@452: #+caption: that it has six sides.
rlm@452: #+name: worm-touch-map
rlm@452: #+ATTR_LaTeX: :width 12cm
rlm@452: [[./images/touch-learn.png]]
rlm@452: 
rlm@452: While simple, =learn-touch-regions= exploits regularities in both
rlm@452: the worm's physiology and the worm's environment to correctly
rlm@452: deduce that the worm has six sides. Note that =learn-touch-regions=
rlm@452: would work just as well even if the worm's touch sense data were
rlm@452: completely scrambled. The cross shape is just for convenience. This
rlm@452: example justifies the use of pre-defined touch regions in =EMPATH=.
rlm@452: 
rlm@465: * COMMENT Contributions
rlm@454: 
rlm@461: In this thesis you have seen the =CORTEX= system, a complete
rlm@461: environment for creating simulated creatures. You have seen how to
rlm@461: implement five senses, including touch, proprioception, hearing,
rlm@461: vision, and muscle tension. You have seen how to create new creatures
rlm@461: using blender, a 3D modeling tool. I hope that =CORTEX= will be
rlm@461: useful in further research projects. To this end I have included the
rlm@461: full source to =CORTEX= along with a large suite of tests and
rlm@461: examples. I have also created a user guide for =CORTEX= which is
rlm@461: included in an appendix to this thesis.
rlm@447: 
rlm@461: You have also seen how I used =CORTEX= as a platform to attack the
rlm@461: /action recognition/ problem, which is the problem of recognizing
rlm@461: actions in video. You saw a simple system called =EMPATH= which
rlm@461: identifies actions by first describing actions in a body-centered,
rlm@461: rich sense language, then inferring a full range of sensory
rlm@461: experience from limited data using previous experience gained from
rlm@461: free play.
rlm@447: 
rlm@461: As a minor digression, you also saw how I used =CORTEX= to enable a
rlm@461: tiny worm to discover the topology of its skin simply by rolling on
rlm@461: the ground.
rlm@461: 
rlm@461: In conclusion, the main contributions of this thesis are:
rlm@461: 
rlm@461: - =CORTEX=, a system for creating simulated creatures with rich
rlm@461:   senses.
rlm@461: - =EMPATH=, a program for recognizing actions by imagining sensory
rlm@461:   experience.
rlm@447: 
rlm@447: # An anatomical joke:
rlm@447: # - Training
rlm@447: # - Skeletal imitation
rlm@447: # - Sensory fleshing-out
rlm@447: # - Classification