#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes. I want to be able to place ears in a unique spatial
position, and receive as output at every tick the F.F.T. of whatever
signals are happening at that point.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular application
  to determine what sort of =AudioRenderer= should be used.
- Although some support is provided for multiple AudioRenderer
  backends, jMonkeyEngine at the time of this writing will either
  pick no AudioRenderer at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= calculates the 3D sound localization and feeds a stream of
  sound to any of various sound output devices with which it knows
  how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.
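To make the Device concept concrete, here is a minimal sketch (not
part of the Send device) of how a client program or wrapper library
selects a device by name through the standard ALC API. The name used
here is the one the Send device will register later in this document;
passing =NULL= instead would request the system's default device.

#+begin_src C
/* Sketch: selecting an OpenAL device by name through the ALC API.
 * "Multiple Audio Send" is the name the Send device registers below. */
#include <stdio.h>
#include <AL/al.h>
#include <AL/alc.h>

int main(void){
  ALCdevice *device = alcOpenDevice("Multiple Audio Send");
  if (NULL == device){
    fprintf(stderr, "device not available\n");
    return 1;
  }
  ALCcontext *context = alcCreateContext(device, NULL);
  alcMakeContextCurrent(context);
  /* ... generate sources and play sounds here ... */
  alcMakeContextCurrent(NULL);
  alcDestroyContext(context);
  alcCloseDevice(device);
  return 0;
}
#+end_src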
There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.

Onward to that actual Device!

Again, my objectives are:

- Support multiple listeners from jMonkeyEngine3.
- Get access to the rendered sound data for further processing from
  clojure.

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.
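In terms of the functions declared in the header above, the intended
life-cycle looks roughly like the following sketch. This is an
illustration only -- in =send.c= these functions are static, and they
are actually driven from Java through the JNI methods defined later.

#+begin_src C
/* Illustration only: the life-cycle of the Send device in terms of
 * the functions declared in the header above. */
static void example_lifecycle(ALCdevice *device){
  // adopt whatever context is current (the LWJGL one) as master
  init(device);
  // each additional listener gets its own slave context
  addContext(device, alcCreateContext(device, NULL));
  addContext(device, alcCreateContext(device, NULL));
  int tick;
  for (tick = 0; tick < 60; tick++){
    // sync master -> slaves, then render every context separately
    renderData(device, 512);
  }
}
#+end_src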
To recap, the process is:
- Set the LWJGL context as "master" in the =init()= method.
- Create any number of additional contexts via =addContext()=.
- At every call to =renderData()=, sync the master context with the
  slave contexts via =syncContexts()=.
- =syncContexts()= calls =syncSources()= to sync all the sources
  which are in the master context.
- =limitContext()= and =unLimitContext()= make it possible to render
  only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and
out of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)   \
  void NAME (ALuint sourceID1, ALuint sourceID2,          \
             ALCcontext *ctx1, ALCcontext *ctx2,          \
             ALenum param){                               \
    INIT_EXPR;                                            \
    ALCcontext *current = alcGetCurrentContext();         \
    alcMakeContextCurrent(ctx1);                          \
    GET_EXPR;                                             \
    alcMakeContextCurrent(ctx2);                          \
    SET_EXPR;                                             \
    alcMakeContextCurrent(current);                       \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                   \
  _MAKE_SYNC(NAME,                                        \
             TYPE value,                                  \
             GET(sourceID1, param, &value),               \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
  _MAKE_SYNC(NAME,                                              \
             TYPE value1; TYPE value2; TYPE value3;,            \
             GET(sourceID1, param, &value1, &value2, &value3),  \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL=
functions.
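For example, =MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef,
alSourcef)= expands to (approximately) the following function, which
reads a parameter from one source in the master context and writes it
to the corresponding source in the slave context:

#+begin_src C
// Approximate expansion of
// MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef):
void syncSourcef (ALuint sourceID1, ALuint sourceID2,
                  ALCcontext *ctx1, ALCcontext *ctx2,
                  ALenum param){
  ALfloat value;
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(ctx1);
  alGetSourcef(sourceID1, param, &value);  // read from the master source
  alcMakeContextCurrent(ctx2);
  alSourcef(sourceID2, param, value);      // write to the slave source
  alcMakeContextCurrent(current);          // restore the previous context
}
#+end_src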
** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all
the possible state that a source can have and make sure that it is
the same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.
** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in context synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the pointer array if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts = realloc(data->contexts,
                             newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create a context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is created, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.
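To make that buffer size concrete, here is a sketch with assumed
values -- the real numbers come from the =ALCdevice= at runtime. A
device mixing 16-bit stereo in blocks of 1024 frames gives each slave
context a 4096-byte render buffer:

#+begin_src C
/* Sketch with assumed values: how big is each context's renderBuffer? */
#include <stdio.h>

int main(void){
  int bytesPerSample = 2;    // e.g. 16-bit samples: BytesFromDevFmt(DevFmtShort)
  int channels       = 2;    // Device->NumChan for stereo
  int updateSize     = 1024; // Device->UpdateSize, frames mixed per update
  // mirrors malloc(BytesFromDevFmt(...) * Device->NumChan * Device->UpdateSize)
  printf("renderBuffer size: %d bytes\n",
         bytesPerSample * channels * updateSize);  // prints 4096
  return 0;
}
#+end_src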
** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to
 * aluMixData. This function uses the internals of the ALCdevice
 * struct to temporarily cause aluMixData to render only the chosen
 * context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.
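Putting these pieces together, rendering one chosen context in
isolation is a five-step pattern. The following sketch (the function
name is mine, for illustration) distills what the main device loop in
the next section does for each context:

#+begin_src C
/* Sketch: rendering one context in isolation. This is the pattern
 * the main device loop (next section) applies to each context. */
static void renderOneContext(ALCdevice *Device, context_data *ctxData,
                             int samples){
  alcMakeContextCurrent(ctxData->ctx);
  limitContext(Device, ctxData->ctx);  // hide all the other contexts
  swapInContext(Device, ctxData);      // load this context's click state
  aluMixData(Device, ctxData->renderBuffer, samples);
  saveContext(Device, ctxData);        // stash the click state again
  unLimitContext(Device);              // restore the full context list
}
#+end_src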
** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}

static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((ALuint)samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, Device->UpdateSize);
    samples = (int)Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the
slave contexts, then walks each context, rendering just that context
to its audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.
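Because the caller decides when to step, the amount of sound rendered
per step depends only on /simulated/ time, never wall-clock time. A
sketch with assumed values (the real frequency would come from
=Device->Frequency=): at 44100 Hz, one 1/60-second simulation tick is
always worth 735 samples, no matter how long the AI takes to compute
that tick.

#+begin_src C
/* Sketch with assumed values: how many samples one simulation tick
 * is worth. */
#include <stdio.h>

int main(void){
  int frequency = 44100;     // samples per second of simulated time
  double tick   = 1.0/60.0;  // seconds of simulated time per step
  int samples   = (int)(frequency * tick);
  printf("samples per step: %d\n", samples);  // prints 735
  return 0;
}
#+end_src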
*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  // valid context indices are 0 .. numContexts-1
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context
is affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){

  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src
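Stripped of the JNI plumbing, what =setNthListener3f= does is a
simple make-current/set/restore dance. A sketch (the function name is
mine, for illustration) that positions listener number 1 at
(0, 2, 0) using =AL_POSITION=:

#+begin_src C
/* Sketch: nsetNthListener3f minus the JNI plumbing. Only the
 * listener belonging to the chosen slave context is touched. */
static void setListenerPosition(send_data *data, ALuint contextNum,
                                ALfloat x, ALfloat y, ALfloat z){
  if (contextNum >= data->numContexts){return;}
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(AL_POSITION, x, y, z);  // affects only this listener
  alcMakeContextCurrent(current);
}
// e.g. setListenerPosition(data, 1, 0.0f, 2.0f, 0.0f);
#+end_src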
*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  // "<init>" is the JNI name for a constructor
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src
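As an example of what that format pins down, here is how one frame of
the typical case -- 16-bit signed, little-endian, interleaved stereo
-- decodes. This is a sketch; the sample bytes are made up for
illustration.

#+begin_src C
/* Sketch: decoding one frame of raw bytes from getSamples, assuming
 * 16-bit signed little-endian interleaved stereo. */
#include <stdint.h>
#include <stdio.h>

int main(void){
  uint8_t raw[4] = {0x00, 0x40, 0x00, 0xC0};          // one stereo frame
  int16_t left  = (int16_t)(raw[0] | (raw[1] << 8));  //  16384
  int16_t right = (int16_t)(raw[2] | (raw[3] << 8));  // -16384
  // normalized to [-1.0, 1.0], as the clojure layer will do later:
  printf("left: %.2f right: %.2f\n", left/32768.0, right/32768.0);
  return 0;
}
#+end_src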
** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to */
  NULL, /* handle capturing audio if we were into  */
  NULL, /* that sort of thing...                   */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing of
note here is the =deviceID=. This is available from LWJGL, but the
only way to get it is via reflection. Unfortunately, there is no
other way to control the Send device than to obtain a pointer to it.

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=
that are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java
#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
  listeners at different positions in the same world. Passes vectors
  of floats in the range [-1.0 -- 1.0] in PCM format to any arbitrary
  function."
  {:author "Robert McIntyre"}
  (:use (cortex world util))
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import com.aurellem.capture.audio.SoundProcessor)
  (:import javax.sound.sampled.AudioFormat))

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
  the continuation functions can be defined in terms of immutable
  stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            floats (float-array numSamples)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0
         (/ numSamples (.getFrameSize audioFormat)) audioFormat)
        (continuation
         (vec floats))))))

(defn add-ear
  "Add an ear to the world. The continuation function will be called
  on the FFT of the sounds which the ear hears in the given
  timeframe. Sound is 3D."
  [world listener continuation]
  (let [renderer (.getAudioRenderer world)]
    (.addListener renderer listener)
    (.registerSoundProcessor renderer listener
                             (sound-processor continuation))
    listener))
#+end_src

* Example

#+name: test-hearing
#+begin_src clojure :results silent
(ns cortex.test.hearing
  (:use (cortex world util hearing))
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings))

(defn setup-fn [world]
  (let [listener (Listener.)]
    (add-ear world listener #(println-repl (nth % 0)))))

(defn play-sound [node world value]
  (if (not value)
    (do
      (.playSource (.getAudioRenderer world) node))))

(defn test-basic-hearing []
  (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
    (world
     (Node.)
     {"key-space" (partial play-sound node1)}
     setup-fn
     no-op)))

(defn test-advanced-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes. As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))
#+end_src

This extremely basic program prints out the first sample it
encounters at every time stamp. You can see the rendered sound being
printed at the REPL.

- As a bonus, this method of capturing audio for AI can also be used
  to capture perfect audio from a jMonkeyEngine application, for use
  in demos and the like.

* COMMENT Code Generation

#+begin_src clojure :tangle ../../cortex/src/cortex/hearing.clj
<<ears>>
#+end_src

#+begin_src clojure :tangle ../../cortex/src/cortex/test/hearing.clj
<<test-hearing>>
#+end_src

#+begin_src C :tangle ../Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src