#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

At the end of this post I will have simulated ears that work the same
way as the simulated eyes in the last post. I will be able to place
any number of ear-nodes in a blender file, and they will bind to the
closest physical object and follow it as it moves around. Each ear
will provide access to the sound data it picks up between frames.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3. There is no "split-screen" support for rendering
sound from different points of view, and there is no way to directly
access the rendered sound data.

** Brief Description of jMonkeyEngine's Sound System

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular application
  to determine what sort of =AudioRenderer= should be used.
- Although some support is provided for multiple AudioRenderer
  backends, jMonkeyEngine at the time of this writing will either
  pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= renders the 3D sound and feeds the rendered sound directly
  to any of various sound output devices with which it knows how to
  communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/ (it renders sound data from only one perspective),
which normally isn't a problem for games, but becomes a problem when
trying to make multiple AI creatures that can each hear the world from
a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, or to make a single creature with
many ears, it is necessary to go all the way back to =OpenAL= and
implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris.
=OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[../../audio-send/html/add-new-device.html][here]] for anyone who is interested.

Again, my objectives are:

- Support Multiple Listeners from jMonkeyEngine3
- Get access to the rendered sound data for further processing from
  clojure.

I named it the "Multiple Audio Send" Device, or =Send= Device for
short, since it sends audio data back to the calling application like
an Aux-Send cable on a mixing board.

Onward to the actual Device!

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <string.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is.
In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.

To recap, the process is:
- Set the LWJGL context as "master" in the =init()= method.
- Create any number of additional contexts via =addContext()=
- At every call to =renderData()= sync the master context with the
  slave contexts with =syncContexts()=
- =syncContexts()= calls =syncSources()= to sync all the sources
  which are in the master context.
- =limitContext()= and =unLimitContext()= make it possible to render
  only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state such as the =ClickRemoval= array above
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)   \
  void NAME (ALuint sourceID1, ALuint sourceID2,          \
             ALCcontext *ctx1, ALCcontext *ctx2,          \
             ALenum param){                               \
    INIT_EXPR;                                            \
    ALCcontext *current = alcGetCurrentContext();         \
    alcMakeContextCurrent(ctx1);                          \
    GET_EXPR;                                             \
    alcMakeContextCurrent(ctx2);                          \
    SET_EXPR;                                             \
    alcMakeContextCurrent(current);                       \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                   \
  _MAKE_SYNC(NAME,                                        \
             TYPE value,                                  \
             GET(sourceID1, param, &value),               \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
  _MAKE_SYNC(NAME,                                              \
             TYPE value1; TYPE value2; TYPE value3;,            \
             GET(sourceID1, param, &value1, &value2, &value3),  \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);

#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.

** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master,
               AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand array if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts = realloc(data->contexts, newMaxContexts*sizeof(context_data));
    data->maxContexts = newMaxContexts;
  }
  // create context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is created, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.

** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to aluMixData.
 * This function uses the internals of the ALCdevice struct to temporarily
 * cause aluMixData to only render the chosen context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
Device->Contexts array and rendering each context to the buffer in
turn. By temporarily setting Device->NumContexts to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each context
separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}

static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx , data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then iterates through each context, rendering just that
context to its audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the LWJGL JNI interface to =OpenAL=.

*** Stepping the Device
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);UNUSED(device);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.
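
To make this on-demand model concrete, here is a minimal sketch of how
the Java side might drive the device once per game frame, using the
native methods =nstep= (above) and =ngetSamples= (defined in the next
section). The class name, the field layout, the =tpf=-based sample
arithmetic, and the assumption that the natives are reachable as
static methods of =AudioSend= are all illustrative; the actual
bookkeeping lives in the =AudioSendRenderer= presented later.

#+begin_src java
import java.nio.ByteBuffer;
import javax.sound.sampled.AudioFormat;

import com.aurellem.send.AudioSend;

/* Hypothetical per-frame driver for the Send device.  The two native
 * calls mirror the JNI signatures above: nstep(long, int) and
 * ngetSamples(long, ByteBuffer, int, int, int).  Everything else here
 * is an illustrative assumption, not part of the actual sources. */
public class SendDeviceDriver {

    private final long devicePointer;  // obtained from LWJGL via reflection
    private final AudioFormat format;  // as returned by ngetAudioFormat
    private final ByteBuffer scratch;  // reused direct buffer for one listener

    public SendDeviceDriver(long devicePointer, AudioFormat format,
                            int maxSamples) {
        this.devicePointer = devicePointer;
        this.format = format;
        this.scratch =
            ByteBuffer.allocateDirect(maxSamples * format.getFrameSize());
    }

    /** Render exactly the audio that elapsed during one game frame and
     *  copy listener number n's buffer out to Java. */
    public ByteBuffer step(float tpf, int n) {
        // e.g. 1/60 of a second at 44,100 Hz -> 735 samples this frame;
        // the device clamps the request to its internal UpdateSize.
        int samples = (int) (tpf * format.getSampleRate());
        AudioSend.nstep(devicePointer, samples);
        scratch.clear();
        AudioSend.ngetSamples(devicePointer, scratch, 0, samples, n);
        scratch.limit(samples * format.getFrameSize());
        return scratch;
    }
}
#+end_src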

*** Device->Java Data Transport
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  if ((ALuint)n > data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the current active context is
affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum > data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){

  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum > data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src

** Boring Device Management Stuff / Memory Cleanup
This code is more-or-less copied verbatim from the other =OpenAL=
Devices. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts.
  // LWJGL will take care of its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to   */
  NULL, /* handle capturing audio if we were into    */
  NULL, /* that sort of thing...                     */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. The only thing here of note is the =deviceID=. This is
available from LWJGL, but the only way to get it is with reflection.
Unfortunately, there is no other way to control the Send device than
to obtain a pointer to it.

#+include: "../../audio-send/java/src/com/aurellem/send/AudioSend.java" src java

* The Java Audio Renderer, =AudioSendRenderer=

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/AudioSendRenderer.java" src java

The =AudioSendRenderer= is a modified version of the
=LwjglAudioRenderer= which implements the =MultiListener= interface to
provide access and creation of more than one =Listener= object.

** MultiListener.java

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/MultiListener.java" src java

** SoundProcessors are like SceneProcessors

A =SoundProcessor= is analogous to a =SceneProcessor=. Every frame, the
=SoundProcessor= registered with a given =Listener= receives the
rendered sound data and can do whatever processing it wants with it.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java

* Finally, Ears in clojure!

Now that the =C= and =Java= infrastructure is complete, the clojure
hearing abstraction is simple and closely parallels the [[./vision.org][vision]]
abstraction.
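
Before moving on to the clojure layer, here is a minimal sketch of
what a =SoundProcessor= implementation might look like from the Java
side. It assumes the interface consists of the same =process= and
=cleanup= methods that the clojure proxy in the next section relies
on; the RMS computation and the 16-bit sample assumption are
illustrative choices, not part of the included source files.

#+begin_src java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.sound.sampled.AudioFormat;

import com.aurellem.capture.audio.SoundProcessor;

/* Illustrative SoundProcessor that prints the RMS loudness of each
 * chunk of rendered audio it receives.  It assumes 16-bit samples and
 * reads only the first channel of every frame; a robust version would
 * branch on the AudioFormat instead of assuming it. */
public class LoudnessProcessor implements SoundProcessor {

    public void process(ByteBuffer audioSamples, int numSamples,
                        AudioFormat format) {
        audioSamples.order(format.isBigEndian()
                           ? ByteOrder.BIG_ENDIAN
                           : ByteOrder.LITTLE_ENDIAN);
        // numSamples is a byte count, as in byteBuffer->pulse-vector below
        int frames = numSamples / format.getFrameSize();
        double sumOfSquares = 0.0;
        for (int i = 0; i < frames; i++) {
            short sample = audioSamples.getShort(i * format.getFrameSize());
            double normalized = sample / 32768.0;  // scale to [-1.0, 1.0]
            sumOfSquares += normalized * normalized;
        }
        double rms = Math.sqrt(sumOfSquares / Math.max(1, frames));
        System.out.println("RMS loudness this frame: " + rms);
    }

    public void cleanup() {}
}
#+end_src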

** Hearing Pipeline

All sound rendering is done in the CPU, so =hearing-pipeline= is
much less complicated than =vision-pipeline=. The bytes available in
the ByteBuffer obtained from the =send= Device have different meanings
dependent upon the particular hardware of your system. That is why
the =AudioFormat= object is necessary to provide the meaning that the
raw bytes lack. =byteBuffer->pulse-vector= uses the excellent
conversion facilities from [[http://www.tritonus.org/ ][tritonus]] ([[http://tritonus.sourceforge.net/apidoc/org/tritonus/share/sampled/FloatSampleTools.html#byte2floatInterleaved%2528byte%5B%5D,%2520int,%2520float%5B%5D,%2520int,%2520int,%2520javax.sound.sampled.AudioFormat%2529][javadoc]]) to generate a clojure vector of
floats which represent the linear PCM encoded waveform of the
sound. With linear PCM (pulse code modulation), -1.0 represents maximum
rarefaction of the air while 1.0 represents maximum compression of the
air at a given instant.

#+name: hearing-pipeline
#+begin_src clojure
(in-ns 'cortex.hearing)

(defn hearing-pipeline
  "Creates a SoundProcessor which wraps a sound processing
  continuation function. The continuation is a function that takes
  [#^ByteBuffer b #^Integer numSamples #^AudioFormat af], each of which
  has already been appropriately sized."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (continuation audioSamples numSamples audioFormat))))

(defn byteBuffer->pulse-vector
  "Extract the sound samples from the byteBuffer as a PCM encoded
  waveform with values ranging from -1.0 to 1.0 into a vector of
  floats."
  [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
  (let [num-floats (/ numSamples (.getFrameSize audioFormat))
        bytes (byte-array numSamples)
        floats (float-array num-floats)]
    (.get audioSamples bytes 0 numSamples)
    (FloatSampleTools/byte2floatInterleaved
     bytes 0 floats 0 num-floats audioFormat)
    (vec floats)))
#+end_src

** Physical Ears

Together, these three functions define how ears found in a specially
prepared blender file will be translated to =Listener= objects in a
simulation. =ears= extracts all the children of the top-level node
named "ears". =add-ear!= and =update-listener-velocity!= use
=bind-sense= to bind a =Listener= object located at the initial
position of an "ear" node to the closest physical object in the
creature. That =Listener= will stay in the same orientation to the
object with which it is bound, just as the camera does in the [[http://aurellem.localhost/cortex/html/sense.html#sec-4-1][sense binding demonstration]].
Since =OpenAL= simulates the doppler effect for moving listeners,
=update-listener-velocity!= ensures that this velocity information is
always up-to-date.

#+name: hearing-ears
#+begin_src clojure
(defvar
  ^{:arglists '([creature])}
  ears
  (sense-nodes "ears")
  "Return the children of the creature's \"ears\" node.")

(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn add-ear!
  "Create a Listener centered on the current position of 'ear
  which follows the closest physical node in 'creature and
  sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (hearing-pipeline continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))
#+end_src

** Ear Creation

#+name: hearing-kernel
#+begin_src clojure
(defn hearing-kernel
  "Returns a function which returns auditory sensory data when called
  inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (add-ear!
            world creature ear
            (comp #(reset! hearing-data %)
                  byteBuffer->pulse-vector))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))]
        [topology data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
  hearing. Will return a sequence of functions, one for each ear,
  which when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-kernel creature ear)))
#+end_src

Each function returned by =hearing-kernel= will register a new
=Listener= with the simulation the first time it is called. Each time
it is called, the hearing-function will return a vector of linear PCM
encoded sound data that was heard since the last frame. The size of
this vector is of course determined by the overall framerate of the
game. With a constant framerate of 60 frames per second and a sampling
frequency of 44,100 samples per second, the vector will have exactly
735 elements.

** Visualizing Hearing

This is a simple visualization function which displays the waveform
reported by the simulated sense of hearing.
It converts the values
reported in the vector returned by the hearing function from the range
[-1.0, 1.0] to the range [0, 255], converts to integer, and displays
the number as a greyscale pixel.

#+name: hearing-display
#+begin_src clojure
(in-ns 'cortex.hearing)

(defn view-hearing
  "Creates a function which accepts a list of auditory data and
  displays each element of the list to the screen as an image."
  []
  (view-sense
   (fn [[coords sensor-data]]
     (let [pixel-data
           (vec
            (map
             #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
             sensor-data))
           height 50
           image (BufferedImage. (max 1 (count coords)) height
                                 BufferedImage/TYPE_INT_RGB)]
       (dorun
        (for [x (range (count coords))]
          (dorun
           (for [y (range height)]
             (let [raw-sensor (pixel-data x)]
               (.setRGB image x y (gray raw-sensor)))))))
       image))))
#+end_src

* Testing Hearing
** Advanced Java Example

I wrote a test case in Java that demonstrates the use of the Java
components of this hearing system. It is part of a larger Java library
to capture perfect audio from jMonkeyEngine. Some of the clojure
constructs above are partially reiterated in the Java source file. But
first, the video! As far as I know this is the first instance of
multiple simulated listeners in a virtual environment using OpenAL.

#+begin_html

The blue sphere is emitting a constant sound. Each gray box is
listening for sound, and will change color from gray to green if it
detects sound which is louder than a certain threshold. As the blue
sphere travels along the path, it excites each of the cubes in turn.

#+end_html

#+include: "../../jmeCapture/src/com/aurellem/capture/examples/Advanced.java" src java

Here is a small clojure program to drive the java program and make it
available as part of my test suite.

#+name: test-hearing-1
#+begin_src clojure
(in-ns 'cortex.test.hearing)

(defn test-java-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes.  As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))
#+end_src

** Adding Hearing to the Worm

To the worm, I add a new node called "ears" with one child which
represents the worm's single ear.

#+attr_html: width=755
#+caption: The Worm with newly added nodes describing an ear.
[[../images/worm-with-ear.png]]

The node highlighted in yellow is the top-level "ears" node. Its
child, highlighted in orange, represents the single ear the creature
has. The ear will be localized right above the curved part of the
worm's lower hemispherical region opposite the eye.

The other empty nodes represent the worm's single joint and eye and are
described in [[./body.org][body]] and [[./vision.org][vision]].

#+name: test-hearing-2
#+begin_src clojure
(in-ns 'cortex.test.hearing)

(cortex.import/mega-import-jme3)
(import java.io.File)

(use 'cortex.body)

(defn test-worm-hearing []
  (let [the-worm (doto (worm) (body!))
        hearing (hearing! the-worm)
        hearing-display (view-hearing)

        tone (AudioNode. (asset-manager)
                         "Sounds/pure.wav" false)

        hymn (AudioNode. (asset-manager)
                         "Sounds/ear-and-eye.wav" false)]
    (world
     (nodify [the-worm (floor)])
     (merge standard-debug-controls
            {"key-return"
             (fn [_ value]
               (if value (.play tone)))
             "key-l"
             (fn [_ value]
               (if value (.play hymn)))})
     (fn [world]
       (light-up-everything world)
       (com.aurellem.capture.Capture/captureVideo
        world
        (File. "/home/r/proj/cortex/render/worm-audio/frames"))
       (com.aurellem.capture.Capture/captureAudio
        world
        (File. "/home/r/proj/cortex/render/worm-audio/audio.wav")))

     (fn [world tpf]
       (hearing-display
        (map #(% world) hearing)
        (File. "/home/r/proj/cortex/render/worm-audio/hearing-data"))))))
#+end_src

In this test, I load the worm with its newly formed ear and let it
hear sounds. The sound the worm is hearing is localized to the origin
of the world, and you can see that as the worm moves farther away from
the origin when it is hit by balls, it hears the sound less intensely.

The sound you hear in the video is from the worm's perspective.
Notice
how the pure tone becomes fainter and the visual display of the
auditory data becomes less pronounced as the worm falls farther away
from the source of the sound.

#+begin_html

The worm can now hear the sound pulses produced from the
hymn. Notice the strikingly different pattern that human speech
makes compared to the instruments. Once the worm is pushed off the
floor, the sound it hears is attenuated, and the display of the
sound it hears becomes fainter. This shows the 3D localization of
sound in this world.

#+end_html

*** Creating the Ear Video
#+name: magick-3
#+begin_src clojure
(ns cortex.video.magick3
  (:import java.io.File)
  (:use clojure.contrib.shell-out))

(defn images [path]
  (sort (rest (file-seq (File. path)))))

(def base "/home/r/proj/cortex/render/worm-audio/")

(defn pics [file]
  (images (str base file)))

(defn combine-images []
  (let [main-view (pics "frames")
        hearing (pics "hearing-data")
        background (repeat 9001 (File. (str base "background.png")))
        targets (map
                 #(File. (str base "out/" (format "%07d.png" %)))
                 (range 0 (count main-view)))]
    (dorun
     (pmap
      (comp
       (fn [[background main-view hearing target]]
         (println target)
         (sh "convert"
             background
             main-view "-geometry" "+66+21" "-composite"
             hearing "-geometry" "+21+526" "-composite"
             target))
       (fn [& args] (map #(.getCanonicalPath %) args)))
      background main-view hearing targets))))
#+end_src

#+begin_src sh
cd /home/r/proj/cortex/render/worm-audio
ffmpeg -r 60 -i out/%07d.png -i audio.wav \
       -b:a 128k -b:v 9001k \
       -c:a libvorbis -c:v libtheora worm-hearing.ogg
#+end_src

* Headers

#+name: hearing-header
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
  listeners at different positions in the same world. Automatically
  reads ear-nodes from specially prepared blender files and
  instantiates them in the world as simulated ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:use clojure.contrib.def)
  (:import java.nio.ByteBuffer)
  (:import java.awt.image.BufferedImage)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))
#+end_src

#+name: test-header
#+begin_src clojure
(ns cortex.test.hearing
  (:use (cortex world util hearing))
  (:use cortex.test.body)
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings))
#+end_src

* Source Listing
- [[../src/cortex/hearing.clj][cortex.hearing]]
- [[../src/cortex/test/hearing.clj][cortex.test.hearing]]
#+html:
- [[http://hg.bortreb.com ][source-repository]]

* Next
The worm can see and hear, but it can't feel the world or
itself. Next post, I'll give the worm a [[./touch.org][sense of touch]].

* COMMENT Code Generation

#+begin_src clojure :tangle ../src/cortex/hearing.clj
<<hearing-header>>
<<hearing-pipeline>>
<<hearing-ears>>
<<hearing-kernel>>
<<hearing-display>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/hearing.clj
<<test-header>>
<<test-hearing-1>>
<<test-hearing-2>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/video/magick3.clj
<<magick-3>>
#+end_src

#+begin_src C :tangle ../../audio-send/Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src