# HG changeset patch
# User Robert McIntyre
# Date 1328345013 25200
# Node ID b8bc24918d63638f8a8f83eb58cd997f7cc787f7
# Parent  0e794e48a0cc0d74ccd60b64b959eec937ec1d91
moved ear.org into cortex

diff -r 0e794e48a0cc -r b8bc24918d63 org/ear.org
--- a/org/ear.org	Fri Feb 03 06:40:59 2012 -0700
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,952 +0,0 @@

#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes: each ear should occupy its own spatial position, and should
yield as output, at every tick, the F.F.T. of whatever signals are
arriving at that point.

Hearing is one of the more difficult senses to simulate, because there
is little support for obtaining the actual sound data that is
processed by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

 - jMonkeyEngine uses the =AppSettings= for the particular application
   to determine what sort of =AudioRenderer= should be used.
 - Although some support is provided for multiple audio-rendering
   backends, jMonkeyEngine at the time of this writing will either
   pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
 - jMonkeyEngine tries to figure out what sort of system you're
   running and extracts the appropriate native libraries.
 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
   Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
 - =OpenAL= calculates the 3D sound localization and feeds a stream of
   sound to any of various sound output devices with which it knows
   how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris.
=OpenAL= solves the problem of platform independence by providing all
these Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.
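To make the notion of a Device concrete, here is a minimal sketch of
ordinary =OpenAL= client code, using only the standard ALC API
(nothing here is specific to the Send device developed below).
Passing =NULL= to =alcOpenDevice= selects the default backend; passing
a device name selects a specific one.

#+begin_src C
#include <AL/alc.h>
#include <stdio.h>

/* Standard device/context setup, sketched. */
int main(void){
  ALCdevice *device = alcOpenDevice(NULL);   /* the default Device */
  if (!device){ fprintf(stderr, "no audio device\n"); return 1; }
  ALCcontext *context = alcCreateContext(device, NULL);
  alcMakeContextCurrent(context);  /* sources created now live here */
  /* ... generate sources, queue buffers, play sounds ... */
  alcMakeContextCurrent(NULL);
  alcDestroyContext(context);
  alcCloseDevice(device);
  return 0;
}
#+end_src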
Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners and to get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.

Onward to the actual Device!

Again, my objectives are:

 - Support multiple listeners from jMonkeyEngine3.
 - Get access to the rendered sound data for further processing from
   clojure.

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.
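The ALC API will happily create several contexts on one device; what
it lacks is a way to render them all and keep their sources in step.
The sketch below (standard =OpenAL= calls only; the helper name
=add_listener= is hypothetical) shows the raw material the Send device
builds on -- each context carries its own listener:

#+begin_src C
#include "AL/al.h"
#include "AL/alc.h"

/* Create a slave context on `device' whose listener sits at (x,y,z).
 * The Send device does essentially this in addContext(), and then
 * keeps the new context's sources synchronized with the master's. */
static ALCcontext *add_listener(ALCdevice *device,
                                float x, float y, float z){
  ALCcontext *previous = alcGetCurrentContext();
  ALCcontext *slave = alcCreateContext(device, NULL);
  alcMakeContextCurrent(slave);
  alListener3f(AL_POSITION, x, y, z); /* this context's listener */
  alcMakeContextCurrent(previous);    /* back to the master      */
  return slave;
}
#+end_src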
To recap, the process is:

 - Set the LWJGL context as "master" in the =init()= method.
 - Create any number of additional contexts via =addContext()=.
 - At every call to =renderData()=, sync the master context with the
   slave contexts via =syncContexts()=.
 - =syncContexts()= calls =syncSources()= to sync all the sources
   which are in the master context.
 - =limitContext()= and =unLimitContext()= make it possible to render
   only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)   \
  void NAME (ALuint sourceID1, ALuint sourceID2,          \
             ALCcontext *ctx1, ALCcontext *ctx2,          \
             ALenum param){                               \
    INIT_EXPR;                                            \
    ALCcontext *current = alcGetCurrentContext();         \
    alcMakeContextCurrent(ctx1);                          \
    GET_EXPR;                                             \
    alcMakeContextCurrent(ctx2);                          \
    SET_EXPR;                                             \
    alcMakeContextCurrent(current);                       \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                   \
  _MAKE_SYNC(NAME,                                        \
             TYPE value,                                  \
             GET(sourceID1, param, &value),               \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
  _MAKE_SYNC(NAME,                                              \
             TYPE value1; TYPE value2; TYPE value3;,            \
             GET(sourceID1, param, &value1, &value2, &value3),  \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros condense the otherwise repetitive synchronization
code involving these similar low-level =OpenAL= functions.
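For reference, expanding =MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef,
alSourcef)= by hand gives (approximately) the following function --
read a property in one context, write it in another, and restore
whichever context was current:

#+begin_src C
void syncSourcef(ALuint sourceID1, ALuint sourceID2,
                 ALCcontext *ctx1, ALCcontext *ctx2,
                 ALenum param){
  ALfloat value;
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(ctx1);
  alGetSourcef(sourceID1, param, &value);  /* read from the master */
  alcMakeContextCurrent(ctx2);
  alSourcef(sourceID2, param, value);      /* write to the slave   */
  alcMakeContextCurrent(current);
}
#+end_src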
** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all
the possible state that a source can have and make sure that it is
the same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the array of contexts if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts =
      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create a context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is created, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.
** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to
 * aluMixData. This function uses the internals of the ALCdevice
 * struct to temporarily cause aluMixData to only render the chosen
 * context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}


static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the
slave contexts, then walks through each context, rendering just that
context to its audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.
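The following sketch (hypothetical; it would live inside =send.c=
alongside =renderData()=) shows how a caller might keep audio time
locked to simulation time: each tick of =dt= seconds requests exactly
=dt * Frequency= samples, no matter how long the tick took to compute.

#+begin_src C
/* Hypothetical driver: render exactly the audio that one simulation
 * tick of dt seconds covers, however slowly the AI computes it. */
static void simulation_tick(ALCdevice *device, float dt){
  int samples = (int)(dt * device->Frequency);
  renderData(device, samples);
  /* Each context's renderBuffer now holds dt seconds of audio
     for its listener. */
}
#+end_src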
*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context
is affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){

  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default          : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src
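To make that format promise concrete: assuming the common
=DevFmtShort= case (16-bit signed samples, interleaved channels), a
consumer could decode the buffer returned by =getSamples= as sketched
below. The helper name =nth_sample= is hypothetical:

#+begin_src C
#include "AL/al.h"

/* Sketch: extract channel `chan' of frame `frame' from an interleaved
 * 16-bit buffer and normalize it to the range [-1.0, 1.0]. */
static float nth_sample(ALshort *buffer, int frame, int channels, int chan){
  return buffer[frame * channels + chan] / 32768.0f;
}
#+end_src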
** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  // The master context belongs to LWJGL, but its bookkeeping is ours.
  if (data->numContexts > 0){
    free(data->contexts[0]->renderBuffer);
    free(data->contexts[0]);
  }
  free(data->contexts);
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to  */
  NULL, /* handle capturing audio if we were into   */
  NULL, /* that sort of thing...                    */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src
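With the backend registered, a host program can select the Send
device by the name it advertises, exactly as with any built-in
backend. A minimal sketch:

#+begin_src C
#include "AL/alc.h"

ALCdevice *open_send_device(void){
  /* The name must match the sendDevice string registered above. */
  return alcOpenDevice("Multiple Audio Send");
}
#+end_src

This is, in effect, what the =(.setAudioRenderer "Send")= setting in
the final example of this document arranges, by way of LWJGL.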
* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing of
note here is the =deviceID=. This is available from LWJGL, but the
only way to get it is via reflection. Unfortunately, there is no way
to control the Send device other than by obtaining a pointer to it.

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=,
which are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java
#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   listeners at different positions in the same world. Passes vectors
   of floats in the range [-1.0 -- 1.0] in PCM format to any arbitrary
   function."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import com.aurellem.capture.audio.SoundProcessor)
  (:import javax.sound.sampled.AudioFormat))

(cortex.import/mega-import-jme3)

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
   the continuation functions can be defined in terms of immutable
   stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            num-floats (/ numSamples (.getFrameSize audioFormat))
            floats (float-array num-floats)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0 num-floats audioFormat)
        (continuation
         (vec floats))))))

;; Ears work the same way as vision.

;; (hearing creature) will return [init-functions
;; sensor-functions]. The init functions each take the world and
;; register a SoundProcessor that does Fourier transforms on the
;; incoming sound data, making it available to each sensor function.

(defn creature-ears
  "Return the children of the creature's \"ears\" node."
  [#^Node creature]
  (if-let [ear-node (.getChild creature "ears")]
    (seq (.getChildren ear-node))
    (do (println-repl "could not find ears node") [])))

(defn update-listener-velocity
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(import com.aurellem.capture.audio.AudioSendRenderer)

(defn attach-ear
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (sound-processor continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))

(defn enable-hearing
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])]
    [(fn [world]
       (attach-ear world creature ear
                   (fn [data]
                     (reset! hearing-data (vec data)))))
     [(fn []
        (let [data @hearing-data
              topology
              (vec (map #(vector % 0) (range 0 (count data))))
              scaled-data
              (vec
               (map
                #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
                data))]
          [topology scaled-data]))]]))

(defn hearing
  [#^Node creature]
  (reduce
   (fn [[init-a senses-a]
        [init-b senses-b]]
     [(conj init-a init-b)
      (into senses-a senses-b)])
   [[] []]
   (for [ear (creature-ears creature)]
     (enable-hearing creature ear))))
#+end_src

* Example

#+name: test-hearing
#+begin_src clojure :results silent
(ns cortex.test.hearing
  (:use (cortex world util hearing))
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings))

(defn setup-fn [world]
  (let [listener (Listener.)]
    (add-ear world listener #(println-repl (nth % 0)))))

(defn play-sound [node world value]
  (if (not value)
    (do
      (.playSource (.getAudioRenderer world) node))))

(defn test-basic-hearing []
  (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
    (world
     (Node.)
     {"key-space" (partial play-sound node1)}
     setup-fn
     no-op)))

(defn test-advanced-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes. As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))
#+end_src

This extremely basic program prints out the first sample it encounters
at every time stamp. You can see the rendered sound being printed at
the REPL.

 - As a bonus, this method of capturing audio for AI can also be used
   to capture perfect audio from a jMonkeyEngine application, for use
   in demos and the like.
* COMMENT Code Generation

#+begin_src clojure :tangle ../../cortex/src/cortex/hearing.clj
<<ears>>
#+end_src

#+begin_src clojure :tangle ../../cortex/src/cortex/test/hearing.clj
<<test-hearing>>
#+end_src

#+begin_src C :tangle ../Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src

diff -r 0e794e48a0cc -r b8bc24918d63 org/send.org
--- a/org/send.org	Fri Feb 03 06:40:59 2012 -0700
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,11 +0,0 @@

#+title: send.c
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: some code I wrote to extend OpenAL
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

#+include: "/home/r/proj/audio-send/Alc/backends/send.c" src C