changeset 220:c5f6d880558b

making hearing.org up-to-date
author Robert McIntyre <rlm@mit.edu>
date Sat, 11 Feb 2012 07:08:38 -0700
parents 5f14fd7b1288
children 7c374c6cfe17
files org/hearing.org
diffstat 1 files changed, 229 insertions(+), 121 deletions(-)
     1.1 --- a/org/hearing.org	Sat Feb 11 00:51:54 2012 -0700
     1.2 +++ b/org/hearing.org	Sat Feb 11 07:08:38 2012 -0700
     1.3 @@ -9,39 +9,46 @@
     1.4  
     1.5  * Hearing
     1.6  
     1.7 -I want to be able to place ears in a similar manner to how I place
     1.8 -the eyes.  I want to be able to place ears in a unique spatial
     1.9 -position, and receive as output at every tick the F.F.T. of whatever
    1.10 -signals are happening at that point.
    1.11 +At the end of this post I will have simulated ears that work the same
    1.12 +way as the simulated eyes in the last post.  I will be able to place
    1.13 +any number of ear-nodes in a blender file, and they will bind to the
    1.14 +closest physical object and follow it as it moves around. Each ear
     1.15 +will provide access to the sound data it picks up between frames.
    1.16  
    1.17  Hearing is one of the more difficult senses to simulate, because there
    1.18  is less support for obtaining the actual sound data that is processed
    1.19 -by jMonkeyEngine3.
    1.20 +by jMonkeyEngine3. There is no "split-screen" support for rendering
    1.21 +sound from different points of view, and there is no way to directly
    1.22 +access the rendered sound data.
    1.23 +
    1.24 +** Brief Description of jMonkeyEngine's Sound System
    1.25  
    1.26  jMonkeyEngine's sound system works as follows:
    1.27  
    1.28   - jMonkeyEngine uses the =AppSettings= for the particular application
     1.29     to determine what sort of =AudioRenderer= should be used (sketched below).
    1.30 - - although some support is provided for multiple AudioRendering
     1.31 + - Although some support is provided for multiple =AudioRenderer=
    1.32     backends, jMonkeyEngine at the time of this writing will either
    1.33 -   pick no AudioRenderer at all, or the =LwjglAudioRenderer=
    1.34 +   pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
    1.35   - jMonkeyEngine tries to figure out what sort of system you're
    1.36     running and extracts the appropriate native libraries.
    1.37 - - the =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
    1.38 + - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
     1.39    Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
    1.40 - - =OpenAL= calculates the 3D sound localization and feeds a stream of
    1.41 -   sound to any of various sound output devices with which it knows
    1.42 -   how to communicate.
    1.43 + - =OpenAL= renders the 3D sound and feeds the rendered sound directly
    1.44 +   to any of various sound output devices with which it knows how to
    1.45 +   communicate.
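
For example, here is a minimal sketch (in clojure, using the same
=AppSettings= API that the test code at the end of this post uses) of
how the audio renderer gets selected:

#+begin_src clojure
;; A minimal sketch: =AppSettings= decides which =AudioRenderer= the
;; application gets. "LWJGL" names the stock renderer; later in this
;; post a custom "Send" renderer is registered under its own name.
(import 'com.jme3.system.AppSettings)

(doto (AppSettings. true)
  (.setAudioRenderer "LWJGL"))
#+end_src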
    1.46  
    1.47  A consequence of this is that there's no way to access the actual
    1.48 -sound data produced by =OpenAL=.  Even worse, =OpenAL= only supports
    1.49 -one /listener/, which normally isn't a problem for games, but becomes
    1.50 -a problem when trying to make multiple AI creatures that can each hear
    1.51 -the world from a different perspective.
    1.52 +sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
    1.53 +one /listener/ (it renders sound data from only one perspective),
    1.54 +which normally isn't a problem for games, but becomes a problem when
    1.55 +trying to make multiple AI creatures that can each hear the world from
    1.56 +a different perspective.
    1.57  
    1.58  To make many AI creatures in jMonkeyEngine that can each hear the
    1.59 -world from their own perspective, it is necessary to go all the way
    1.60 -back to =OpenAL= and implement support for simulated hearing there.
    1.61 +world from their own perspective, or to make a single creature with
    1.62 +many ears, it is necessary to go all the way back to =OpenAL= and
    1.63 +implement support for simulated hearing there.
    1.64  
    1.65  * Extending =OpenAL=
    1.66  ** =OpenAL= Devices
    1.67 @@ -71,22 +78,25 @@
    1.68  
    1.69  Therefore, in order to support multiple listeners, and get the sound
    1.70  data in a form that the AIs can use, it is necessary to create a new
    1.71 -Device, which supports this features.
     1.72 +Device which supports these features.
    1.73  
    1.74  ** The Send Device
    1.75  Adding a device to OpenAL is rather tricky -- there are five separate
    1.76  files in the =OpenAL= source tree that must be modified to do so. I've
    1.77 -documented this process [[./add-new-device.org][here]] for anyone who is interested.
    1.78 +documented this process [[../../audio-send/html/add-new-device.html][here]] for anyone who is interested.
    1.79  
    1.80 -
    1.81 -Onward to that actual Device!
    1.82 -
    1.83 -again, my objectives are:
    1.84 +Again, my objectives are:
    1.85  
    1.86   - Support Multiple Listeners from jMonkeyEngine3
    1.87   - Get access to the rendered sound data for further processing from
    1.88     clojure.
    1.89  
     1.90 +I named it the "Multiple Audio Send" Device, or =Send= Device for
     1.91 +short, since it sends audio data back to the calling application like
    1.92 +an Aux-Send cable on a mixing board.
    1.93 +
    1.94 +Onward to the actual Device!
    1.95 +
    1.96  ** =send.c=
    1.97  
    1.98  ** Header
    1.99 @@ -172,7 +182,7 @@
   1.100  Switching between contexts is not the normal operation of a Device,
   1.101  and one of the problems with doing so is that a Device normally keeps
   1.102  around a few pieces of state such as the =ClickRemoval= array above
   1.103 -which will become corrupted if the contexts are not done in
   1.104 +which will become corrupted if the contexts are not rendered in
   1.105  parallel. The solution is to create a copy of this normally global
   1.106  device state for each context, and copy it back and forth into and out
   1.107  of the actual device state whenever a context is rendered.
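
Here is a toy model of that bookkeeping in clojure, for consistency
with the rest of this post (the real code is C; the =:ClickRemoval=
key just echoes the array mentioned above, and maps stand in for the
C structs):

#+begin_src clojure
;; Toy model: each context carries a private copy of the fields that
;; interleaved rendering would otherwise corrupt. Rendering a context
;; means merging its copy in, rendering, then saving the state back.
(defn render-context
  [device context render!]
  (let [device* (render! (merge device (:saved-state context)))]
    [device*
     (assoc context :saved-state (select-keys device* [:ClickRemoval]))]))
#+end_src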
   1.108 @@ -398,13 +408,13 @@
   1.109  }
   1.110  #+end_src
   1.111  
   1.112 -=OpenAL= normally renders all Contexts in parallel, outputting the
   1.113 +=OpenAL= normally renders all contexts in parallel, outputting the
   1.114  whole result to the buffer.  It does this by iterating over the
   1.115  Device->Contexts array and rendering each context to the buffer in
    1.116  turn.  By temporarily setting Device->NumContexts to 1 and adjusting
   1.117  the Device's context list to put the desired context-to-be-rendered
   1.118 -into position 0, we can get trick =OpenAL= into rendering each slave
   1.119 -context separate from all the others.
    1.120 +into position 0, we can trick =OpenAL= into rendering each context
    1.121 +separately from all the others.
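
A toy model of that trick, under the same assumptions as the sketch
above:

#+begin_src clojure
;; Pretend the device only ever renders the single context sitting in
;; position 0 of its context list.
(defn render-each-context-separately
  [device contexts render!]
  (doseq [ctx contexts]
    (render! (assoc device :NumContexts 1 :Contexts [ctx]))))
#+end_src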
   1.122  
   1.123  ** Main Device Loop
   1.124  #+name: main-loop
   1.125 @@ -419,7 +429,6 @@
   1.126    addContext(Device, masterContext);
   1.127  }
   1.128  
   1.129 -
   1.130  static void renderData(ALCdevice *Device, int samples){
   1.131    if(!Device->Connected){return;}
   1.132    send_data *data = (send_data*)Device->ExtraData;
   1.133 @@ -451,8 +460,8 @@
   1.134  #+end_src
   1.135  
   1.136  The main loop synchronizes the master LWJGL context with all the slave
   1.137 -contexts, then walks each context, rendering just that context to it's
   1.138 -audio-sample storage buffer.
   1.139 +contexts, then iterates through each context, rendering just that
    1.140 +context to its audio-sample storage buffer.
   1.141  
   1.142  ** JNI Methods
   1.143  
   1.144 @@ -461,9 +470,9 @@
   1.145  waiting patiently in internal buffers, one for each listener.  We need
   1.146  a way to transport this information to Java, and also a way to drive
   1.147  this device from Java.  The following JNI interface code is inspired
   1.148 -by the way LWJGL interfaces with =OpenAL=.
   1.149 +by the LWJGL JNI interface to =OpenAL=.
   1.150  
   1.151 -*** step
   1.152 +*** Stepping the Device
   1.153  #+name: jni-step
   1.154  #+begin_src C
   1.155  ////////////////////   JNI Methods
   1.156 @@ -490,7 +499,7 @@
   1.157  its environment.
   1.158  
   1.159  
   1.160 -*** getSamples
   1.161 +*** Device->Java Data Transport
   1.162  #+name: jni-get-samples
   1.163  #+begin_src C
   1.164  /*
   1.165 @@ -639,9 +648,9 @@
   1.166  }
   1.167  #+end_src
   1.168  
   1.169 -** Boring Device management stuff
   1.170 +** Boring Device Management Stuff / Memory Cleanup
   1.171  This code is more-or-less copied verbatim from the other =OpenAL=
   1.172 -backends. It's the basis for =OpenAL='s primitive object system.
   1.173 +Devices. It's the basis for =OpenAL='s primitive object system.
   1.174  #+name: device-init
   1.175  #+begin_src C
   1.176  ////////////////////   Device Initialization / Management
   1.177 @@ -732,62 +741,98 @@
   1.178  * The Java interface, =AudioSend=
   1.179  
   1.180  The Java interface to the Send Device follows naturally from the JNI
   1.181 -definitions. It is included here for completeness. The only thing here
   1.182 -of note is the =deviceID=. This is available from LWJGL, but to only
   1.183 -way to get it is reflection. Unfortunately, there is no other way to
   1.184 -control the Send device than to obtain a pointer to it.
   1.185 +definitions. The only thing here of note is the =deviceID=. This is
    1.186 +available from LWJGL, but the only way to get it is with reflection.
   1.187 +Unfortunately, there is no other way to control the Send device than
   1.188 +to obtain a pointer to it.
   1.189  
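Here is a hedged clojure sketch of that reflection trick; the private
field name "device" on LWJGL's =ALCdevice= is an assumption and may
differ between LWJGL versions:

#+begin_src clojure
;; Pry the native device pointer out of LWJGL with reflection.
(let [alc-device (org.lwjgl.openal.AL/getDevice)
      field      (doto (.getDeclaredField (class alc-device) "device")
                   (.setAccessible true))]
  (.getLong field alc-device))
#+end_src
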
   1.190 -#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code
   1.191 +#+include: "../../audio-send/java/src/com/aurellem/send/AudioSend.java" src java 
   1.192 +
   1.193 +* The Java Audio Renderer, =AudioSendRenderer=
   1.194 +
   1.195 +#+include: "../../jmeCapture/src/com/aurellem/capture/audio/AudioSendRenderer.java" src java
   1.196 +
   1.197 +The =AudioSendRenderer= is a modified version of the
    1.199 +=LwjglAudioRenderer= which implements the =MultiListener= interface to
    1.200 +allow the creation of, and access to, more than one =Listener= object.
   1.200 +
   1.201 +** MultiListener.java
   1.202 +
   1.203 +#+include: "../../jmeCapture/src/com/aurellem/capture/audio/MultiListener.java" src java
   1.204 +
   1.205 +** SoundProcessors are like SceneProcessors
   1.206 +
    1.207 +A =SoundProcessor= is analogous to a =SceneProcessor=. Every frame, the
    1.208 +=SoundProcessor= registered with a given =Listener= receives the
   1.209 +rendered sound data and can do whatever processing it wants with it.
   1.210 +
    1.211 +#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java  
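
As a trivial sketch, here is a =SoundProcessor= in clojure that just
reports how much sound it was handed each frame; the =process=
signature mirrors the one =(hearing-pipeline)= implements below.

#+begin_src clojure
;; A throwaway SoundProcessor: no cleanup, and processing that only
;; reports the number of bytes of rendered sound received this frame.
(proxy [com.aurellem.capture.audio.SoundProcessor] []
  (cleanup [])
  (process [audioSamples numSamples audioFormat]
    (println numSamples "bytes of sound this frame")))
#+end_src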
   1.212  
   1.213  * Finally, Ears in clojure! 
   1.214  
   1.215 -Now that the infrastructure is complete the clojure ear abstraction is
   1.216 -simple. Just as there were =SceneProcessors= for vision, there are
   1.217 -now =SoundProcessors= for hearing.
   1.218 +Now that the =C= and =Java= infrastructure is complete, the clojure
   1.219 +hearing abstraction is simple and closely parallels the [[./vision.org][vision]]
   1.220 +abstraction.
   1.221  
   1.222 -#+include "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java 
   1.223 +** Hearing Pipeline
   1.224  
   1.225 -
    1.226 +All sound rendering is done on the CPU, so =(hearing-pipeline)= is
    1.227 +much less complicated than =(vision-pipeline)=. The bytes available in
    1.228 +the ByteBuffer obtained from the =Send= Device have different meanings
    1.229 +dependent upon the particular hardware of your system.  That is why
   1.230 +the =AudioFormat= object is necessary to provide the meaning that the
   1.231 +raw bytes lack. =(byteBuffer->pulse-vector)= uses the excellent
   1.232 +conversion facilities from [[http://www.tritonus.org/ ][tritonus]] ([[http://tritonus.sourceforge.net/apidoc/org/tritonus/share/sampled/FloatSampleTools.html#byte2floatInterleaved%2528byte%5B%5D,%2520int,%2520float%5B%5D,%2520int,%2520int,%2520javax.sound.sampled.AudioFormat%2529][javadoc]]) to generate a clojure vector of
   1.233 +floats which represent the linear PCM encoded waveform of the
    1.234 +sound. With linear PCM (pulse code modulation), -1.0 represents maximum
   1.235 +rarefaction of the air while 1.0 represents maximum compression of the
   1.236 +air at a given instant.
   1.237  
   1.238  #+name: ears
   1.239  #+begin_src clojure
   1.240 -(ns cortex.hearing
   1.241 -  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   1.242 -  listeners at different positions in the same world. Automatically
   1.243 -  reads ear-nodes from specially prepared blender files and
   1.244 -  instantiates them in the world as actual ears."
   1.245 -  {:author "Robert McIntyre"}
   1.246 -  (:use (cortex world util sense))
   1.247 -  (:use clojure.contrib.def)
   1.248 -  (:import java.nio.ByteBuffer)
   1.249 -  (:import java.awt.image.BufferedImage)
   1.250 -  (:import org.tritonus.share.sampled.FloatSampleTools)
   1.251 -  (:import (com.aurellem.capture.audio
   1.252 -            SoundProcessor AudioSendRenderer))
   1.253 -  (:import javax.sound.sampled.AudioFormat)
   1.254 -  (:import (com.jme3.scene Spatial Node))
   1.255 -  (:import com.jme3.audio.Listener)
   1.256 -  (:import com.jme3.app.Application)
   1.257 -  (:import com.jme3.scene.control.AbstractControl))
   1.258 +(in-ns 'cortex.hearing)
   1.259  
   1.260 -(defn sound-processor
   1.261 -  "Deals with converting ByteBuffers into Vectors of floats so that
   1.262 -  the continuation functions can be defined in terms of immutable
   1.263 -  stuff."
   1.264 +(defn hearing-pipeline
   1.265 +  "Creates a SoundProcessor which wraps a sound processing
   1.266 +  continuation function. The continuation is a function that takes
    1.267 +  [#^ByteBuffer b #^Integer numSamples #^AudioFormat af], each of which
    1.268 +  has already been appropriately sized."
   1.269    [continuation]
   1.270    (proxy [SoundProcessor] []
   1.271      (cleanup [])
   1.272      (process
   1.273        [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
   1.274 -      (let [bytes  (byte-array numSamples)
   1.275 -            num-floats (/ numSamples  (.getFrameSize audioFormat))
   1.276 -            floats (float-array num-floats)]
   1.277 -        (.get audioSamples bytes 0 numSamples)
   1.278 -        (FloatSampleTools/byte2floatInterleaved
   1.279 -         bytes 0 floats 0 num-floats audioFormat)
   1.280 -        (continuation
   1.281 -         (vec floats))))))
   1.282 +      (continuation audioSamples numSamples audioFormat))))
   1.283  
   1.284 +(defn byteBuffer->pulse-vector
   1.285 +  "Extract the sound samples from the byteBuffer as a PCM encoded
   1.286 +   waveform with values ranging from -1.0 to 1.0 into a vector of
   1.287 +   floats." 
   1.288 +  [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
   1.289 +  (let [num-floats (/ numSamples  (.getFrameSize audioFormat))
   1.290 +        bytes  (byte-array numSamples)
   1.291 +        floats (float-array num-floats)]
   1.292 +    (.get audioSamples bytes 0 numSamples)
   1.293 +    (FloatSampleTools/byte2floatInterleaved
   1.294 +     bytes 0 floats 0 num-floats audioFormat)
   1.295 +    (vec floats)))
   1.296 +#+end_src
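
For example, four hypothetical bytes of 16-bit little-endian mono PCM
decode like this (the =AudioFormat= is constructed here just for
illustration):

#+begin_src clojure
;; Decode two 16-bit little-endian mono samples:
;; 0x0000 -> 0.0, 0x7FFF -> ~1.0 (maximum compression of the air).
(let [format (AudioFormat. 44100.0 16 1 true false) ; signed, little-endian
      buffer (ByteBuffer/wrap (byte-array [0 0 -1 127]))]
  (byteBuffer->pulse-vector buffer 4 format))
;; => [0.0 0.9999695]
#+end_src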
   1.297 +
   1.298 +** Physical Ears
   1.299 +
   1.300 +Together, these three functions define how ears found in a specially
   1.301 +prepared blender file will be translated to =Listener= objects in a
    1.302 +simulation. =(ears)= extracts all the children of the top level node
    1.303 +named "ears".  =(add-ear!)= and =(update-listener-velocity!)= use
    1.304 +=(bind-sense)= to bind a =Listener= object located at the initial
    1.305 +position of an "ear" node to the closest physical object in the
    1.306 +creature. That =Listener= will stay in the same orientation to the
    1.307 +object with which it is bound, just as the camera does in the [[http://aurellem.localhost/cortex/html/sense.html#sec-4-1][sense
    1.308 +binding demonstration]].  Since =OpenAL= simulates the Doppler effect
    1.309 +for moving listeners, =(update-listener-velocity!)= ensures that this
    1.310 +velocity information is always up-to-date.
   1.311 +
   1.312 +#+begin_src clojure
   1.313  (defvar 
   1.314    ^{:arglists '([creature])}
   1.315    ears
   1.316 @@ -818,15 +863,19 @@
   1.317    (let [target (closest-node creature ear)
   1.318          lis (Listener.)
   1.319          audio-renderer (.getAudioRenderer world)
   1.320 -        sp (sound-processor continuation)]
   1.321 +        sp (hearing-pipeline continuation)]
   1.322      (.setLocation lis (.getWorldTranslation ear))
   1.323      (.setRotation lis (.getWorldRotation ear))
   1.324      (bind-sense target lis)
   1.325      (update-listener-velocity! target lis)
   1.326      (.addListener audio-renderer lis)
   1.327      (.registerSoundProcessor audio-renderer lis sp)))
   1.328 +#+end_src
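
A hypothetical usage sketch (the helper name =attach-debug-ear!= is
made up for illustration): wire the first ear-node of a creature to a
continuation that just reports how much it heard.

#+begin_src clojure
;; Attach a printing "ear" to a creature once the world is running,
;; using (ears) and (add-ear!) from above.
(defn attach-debug-ear!
  [world creature]
  (add-ear! world creature (first (ears creature))
            (fn [audioSamples numSamples audioFormat]
              (println "heard" numSamples "bytes"))))
#+end_src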
   1.329  
   1.330 -(defn hearing-fn
   1.331 +** Ear Creation
   1.332 +
   1.333 +#+begin_src clojure
   1.334 +(defn hearing-kernel
    1.335    "Returns a function which returns auditory sensory data when called
   1.336     inside a running simulation."
   1.337    [#^Node creature #^Spatial ear]
   1.338 @@ -836,19 +885,14 @@
   1.339           (fn [#^Application world]
   1.340             (add-ear!
   1.341              world creature ear
   1.342 -            (fn [data]
   1.343 -              (reset! hearing-data (vec data))))))]
   1.344 +            (comp #(reset! hearing-data %)
   1.345 +                  byteBuffer->pulse-vector))))]
   1.346      (fn [#^Application world]
   1.347        (register-listener! world)
   1.348        (let [data @hearing-data
   1.349              topology              
   1.350 -            (vec (map #(vector % 0) (range 0 (count data))))
   1.351 -            scaled-data
   1.352 -            (vec
   1.353 -             (map
   1.354 -              #(rem (int (* 255 (/ (+ 1 %) 2)))  256)
   1.355 -              data))]
   1.356 -        [topology scaled-data]))))
   1.357 +            (vec (map #(vector % 0) (range 0 (count data))))]
   1.358 +        [topology data]))))
   1.359      
   1.360  (defn hearing!
   1.361    "Endow the creature in a particular world with the sense of
   1.362 @@ -856,58 +900,87 @@
   1.363     which when called will return the auditory data from that ear."
   1.364    [#^Node creature]
   1.365    (for [ear (ears creature)]
   1.366 -    (hearing-fn creature ear)))
   1.367 +    (hearing-kernel creature ear)))
   1.368 +#+end_src
   1.369  
    1.370 +Each function returned by =(hearing-kernel)= will register a new
   1.371 +=Listener= with the simulation the first time it is called.  Each time
   1.372 +it is called, the hearing-function will return a vector of linear PCM
   1.373 +encoded sound data that was heard since the last frame. The size of
   1.374 +this vector is of course determined by the overall framerate of the
   1.375 +game. With a constant framerate of 60 frames per second and a sampling
   1.376 +frequency of 44,100 samples per second, the vector will have exactly
   1.377 +735 elements.
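
A quick sanity check of that arithmetic:

#+begin_src clojure
;; samples heard per frame = (samples / second) / (frames / second)
(/ 44100 60) ; => 735
#+end_src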
   1.378 +
   1.379 +** Visualizing Hearing
   1.380 +
    1.381 +This is a simple visualization function which displays the waveform
    1.382 +reported by the simulated sense of hearing. It converts the values
    1.383 +reported in the vector returned by the hearing function from the range
    1.384 +[-1.0, 1.0] to the range [0, 255], converts them to integers, and
    1.385 +displays each one as a greyscale pixel.
   1.386 +
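For example, a sample of 0.5 becomes grey level 191 under this
mapping:

#+begin_src clojure
;; -1.0 -> 0, 0.0 -> 127, 1.0 -> 255 (with integer truncation)
(rem (int (* 255 (/ (+ 1 0.5) 2))) 256) ; => 191
#+end_src
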
   1.387 +#+begin_src clojure
   1.388  (defn view-hearing
   1.389    "Creates a function which accepts a list of auditory data and
   1.390     display each element of the list to the screen as an image."
   1.391    []
   1.392    (view-sense
   1.393     (fn [[coords sensor-data]]
   1.394 -     (let [height 50
   1.395 +     (let [pixel-data 
   1.396 +           (vec
   1.397 +            (map
   1.398 +             #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
   1.399 +             sensor-data))
   1.400 +           height 50
   1.401             image (BufferedImage. (count coords) height
   1.402                                   BufferedImage/TYPE_INT_RGB)]
   1.403         (dorun
   1.404          (for [x (range (count coords))]
   1.405            (dorun
   1.406             (for [y (range height)]
   1.407 -             (let [raw-sensor (sensor-data x)]
   1.408 +             (let [raw-sensor (pixel-data x)]
   1.409                 (.setRGB image x y (gray raw-sensor)))))))
   1.410         image))))
   1.411 -
   1.412  #+end_src
   1.413  
   1.414 -#+results: ears
   1.415 -: #'cortex.hearing/hearing!
   1.416 +* Testing Hearing
   1.417  
   1.418 -* Example
   1.419 +** Advanced Java Example
   1.420 +
   1.421 +I wrote a test case in Java that demonstrates the use of the Java
    1.422 +components of this hearing system. It is part of a larger Java library
    1.423 +to capture perfect audio from jMonkeyEngine. Some of the clojure
    1.424 +constructs above are partially reiterated in the Java source file. But
    1.425 +first, the video! As far as I know, this is the first instance of
    1.426 +multiple simulated listeners in a virtual environment using =OpenAL=.
   1.427 +
   1.428 +#+begin_html
   1.429 +<div class="figure">
   1.430 +<center>
   1.431 +<video controls="controls" width="500">
   1.432 +  <source src="../video/java-hearing-test.ogg" type="video/ogg"
   1.433 +	  preload="none" poster="../images/aurellem-1280x480.png" />
   1.434 +</video>
   1.435 +</center>
   1.436 +<p>The blue ball is emitting a constant sound. Each blue box is
   1.437 +  listening for sound, and will change color from blue to green if it
   1.438 +  detects sound which is louder than a certain threshold. As the blue
   1.439 +  sphere travels along the path, it excites each of the cubes in turn.</p>
   1.440 +</div>
   1.441 +
   1.442 +#+end_html
   1.443 +
    1.444 +#+include: "../../jmeCapture/src/com/aurellem/capture/examples/Advanced.java" src java  
   1.445 +
    1.446 +Here is a small clojure program to drive the Java program and make it
   1.447 +available as part of my test suite.
   1.448  
   1.449  #+name: test-hearing
   1.450 -#+begin_src clojure :results silent
   1.451 -(ns cortex.test.hearing
   1.452 -  (:use (cortex world util hearing))
   1.453 -  (:import (com.jme3.audio AudioNode Listener))
   1.454 -  (:import com.jme3.scene.Node
   1.455 -	   com.jme3.system.AppSettings))
   1.456 +#+begin_src clojure
   1.457 +(in-ns 'cortex.test.hearing)
   1.458  
   1.459 -(defn setup-fn [world]
   1.460 -  (let [listener (Listener.)]
   1.461 -    (add-ear world listener #(println-repl (nth % 0)))))
   1.462 -  
   1.463 -(defn play-sound [node world value]
   1.464 -  (if (not value)
   1.465 -    (do
   1.466 -      (.playSource (.getAudioRenderer world) node))))
   1.467 -
   1.468 -(defn test-basic-hearing []
   1.469 -   (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
   1.470 -     (world
   1.471 -      (Node.)
   1.472 -      {"key-space" (partial play-sound node1)}
   1.473 -      setup-fn
   1.474 -      no-op)))
   1.475 -
   1.476 -(defn test-advanced-hearing
   1.477 +(defn test-java-hearing
   1.478    "Testing hearing:
   1.479     You should see a blue sphere flying around several
   1.480     cubes.  As the sphere approaches each cube, it turns
   1.481 @@ -919,21 +992,56 @@
   1.482         (.setAudioRenderer "Send")))
   1.483      (.setShowSettings false)
   1.484      (.setPauseOnLostFocus false)))
   1.485 -
   1.486  #+end_src
   1.487  
   1.488 -This extremely basic program prints out the first sample it encounters
   1.489 -at every time stamp. You can see the rendered sound being printed at
   1.490 -the REPL.
   1.491 +** Adding Hearing to the Worm
   1.492  
   1.493 +
   1.494 +
   1.495 +* Headers
   1.496 +
   1.497 +#+name: hearing-header
   1.498 +#+begin_src clojure
   1.499 +(ns cortex.hearing
   1.500 +  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   1.501 +  listeners at different positions in the same world. Automatically
   1.502 +  reads ear-nodes from specially prepared blender files and
   1.503 +  instantiates them in the world as actual ears."
   1.504 +  {:author "Robert McIntyre"}
   1.505 +  (:use (cortex world util sense))
   1.506 +  (:use clojure.contrib.def)
   1.507 +  (:import java.nio.ByteBuffer)
   1.508 +  (:import java.awt.image.BufferedImage)
   1.509 +  (:import org.tritonus.share.sampled.FloatSampleTools)
   1.510 +  (:import (com.aurellem.capture.audio
   1.511 +            SoundProcessor AudioSendRenderer))
   1.512 +  (:import javax.sound.sampled.AudioFormat)
   1.513 +  (:import (com.jme3.scene Spatial Node))
   1.514 +  (:import com.jme3.audio.Listener)
   1.515 +  (:import com.jme3.app.Application)
   1.516 +  (:import com.jme3.scene.control.AbstractControl))
   1.517 +#+end_src
   1.518 +
   1.519 +#+begin_src clojure
   1.520 +(ns cortex.test.hearing
   1.521 +  (:use (cortex world util hearing))
   1.522 +  (:import (com.jme3.audio AudioNode Listener))
   1.523 +  (:import com.jme3.scene.Node
   1.524 +	   com.jme3.system.AppSettings))
   1.525 +#+end_src
   1.526 +
   1.527 +
   1.528 +* Next
   1.529   - As a bonus, this method of capturing audio for AI can also be used
   1.530     to capture perfect audio from a jMonkeyEngine application, for use
   1.531     in demos and the like.
   1.532  
   1.533  
   1.534 +
   1.535  * COMMENT Code Generation
   1.536  
   1.537  #+begin_src clojure :tangle ../src/cortex/hearing.clj
   1.538 +<<hearing-header>>
   1.539  <<ears>>
   1.540  #+end_src
   1.541